CHF118.90
Download available immediately
The purpose of computer vision is to make computers capable of understanding environments from visual information. Computer vision has long been an interesting theme in the field of artificial intelligence. It involves a variety of intelligent information processing: both pattern processing, for the extraction of meaningful symbols from visual information, and symbol processing, for determining what those symbols represent. The term "3D computer vision" is used when visual information must be interpreted as three-dimensional scenes. 3D computer vision is more challenging because objects are seen only from limited directions and some objects are occluded by others. In 1980, the author wrote a book, "Computer Vision", in Japanese to introduce the interesting new approaches to visual information processing developed up to that time. Since then, computer vision has made remarkable progress: various rangefinders have become available, new methods have been developed to obtain 3D information, knowledge representation frameworks have been proposed, geometric models developed in CAD/CAM have been applied to computer vision, and so on. This progress in computer vision technology has made it possible to understand more complex 3D scenes, and there is an increasing demand for 3D computer vision. In factories, for example, automatic assembly and inspection can be realized with fewer constraints than in conventional systems that employ two-dimensional computer vision.
Contents
1 Introduction.- 1.1 Three-Dimensional Computer Vision.- 1.2 Related Fields.- 1.2.1 Image Processing.- 1.2.2 Pattern Classification and Pattern Recognition.- 1.2.3 Computer Graphics.- 1.3 Mainstream of 3D Computer Vision Research.- 1.3.1 Pioneering Work.- 1.3.2 First Generation Robot Vision.- 1.3.3 Interpretation of Line Drawings.- 1.3.4 Feature Extraction.- 1.3.5 Range Data Processing.- 1.3.6 Realizability of Line Drawings.- 1.3.7 Use of Knowledge About Scenes.- 1.3.8 Use of Physics of Imaging.- 1.3.9 Marr's Theory of Human Vision and Computer Vision.-
2 Image Input.- 2.1 Imaging Geometry.- 2.2 Image Input Devices.- 2.2.1 Image Dissector.- 2.2.2 Vidicon.- 2.2.3 Solid Devices.- 2.3 Color.- 2.3.1 Color Representation.- 2.4 Color Input.- 2.4.1 TV Signals.- 2.5 Range.- 2.5.1 Optical Time of Flight.- 2.5.2 Ultrasonic Ranging.- 2.5.3 Spot Projection.- 2.5.4 Light-Stripe Method.- 2.6 Moiré Topography.- 2.7 Preprocessing.- 2.7.1 Noise Reduction.- 2.7.2 Geometrical Correction.- 2.7.3 Gray-Level Correction.- 2.7.4 Correction of Defocusing.-
3 Image Feature Extraction.- 3.1 Edge Point Detection.- 3.1.1 Edge Types for a Polyhedral Image.- 3.1.2 One-Dimensional Edge Operators.- 3.1.3 Two-Dimensional Edge Operators.- 3.1.4 Pattern Matching Operations.- 3.1.5 Color Edge Operators.- 3.1.6 Determination of Edge Points.- 3.1.7 Zero-Crossing Method.- 3.1.8 Edge of a Curved Surface.- 3.2 Local Edge Linking.- 3.2.1 Roberts' Edge-Linking Method.- 3.2.2 Edge Linking by Relaxation.- 3.3 Edge Point Clustering in Parameter Space.- 3.3.1 Hough Transformation.- 3.3.2 Extension of Hough Transformation.- 3.4 Edge-Following Methods.- 3.4.1 Detection of Starting Point.- 3.4.2 Prediction of Next Edge Point.- 3.4.3 Detection of Edge Point on Basis of Prediction.- 3.4.4 Determination of Next Step.- 3.4.5 Obtaining Connected Edge Points.- 3.5 Region Methods.- 3.5.1 Region Merging.- 3.5.2 Region Splitting.- 3.5.2.1 Region Splitting by Mode Methods.- 3.5.2.2 Region Splitting Based on Discriminant Criterion.-
4 Image Feature Description.- 4.1 Representation of Lines.- 4.1.1 Spline Functions.- 4.1.2 Smoothing Splines.- 4.1.3 Parametric Splines.- 4.1.4 B-Splines.- 4.2 Segmentation of a Sequence of Points.- 4.2.1 Approximation by Straight Lines.- 4.2.2 Approximation by Curves.- 4.3 Fitting Line Equations.- 4.3.1 Using Errors Along a Single Axis.- 4.3.2 Using Errors of Line Equations With Two Variables.- 4.3.3 Using Distance From Each Point to Fitted Line.- 4.4 Conversion Between Lines and Regions.- 4.4.1 Boundary Detection.- 4.4.2 Boundary Following.- 4.4.3 Labeling Connected Regions.-
5 Interpretation of Line Drawings.- 5.1 Roberts' Matching Method.- 5.2 Decomposition of Line Drawings Into Objects.- 5.3 Labeling Line Drawings.- 5.3.1 Vertex Type.- 5.3.2 Interpretation by Labeling.- 5.3.3 Sequential Labeling Procedure.- 5.3.4 Labeling by Relaxation Method.- 5.3.5 Line Drawings with Shadows and Cracks.- 5.3.6 Interpretation of Curved Objects.- 5.3.7 Interpretation of Origami World.- 5.4 Further Problems in Line Drawing Interpretation.-
6 Realizability of Line Drawings.- 6.1 Line Drawings Without Interpretations.- 6.2 Use of Gradient Space.- 6.2.1 Gradient Space.- 6.2.2 Construction of Gradient Image.- 6.3 Use of Linear Equation Systems.- 6.3.1 Solving Linear Equation Systems.- 6.3.2 Position-Free Line Drawings.- 6.3.3 Realizability of Position-Constrained Line Drawings.-
7 Stereo Vision.- 7.1 Stereo Image Geometry.- 7.2 Area-Based Stereo.- 7.2.1 Feature Point Extraction.- 7.2.2 Similarity Measures.- 7.2.3 Finding Correspondence.- 7.2.4 Multistage Matching.- 7.2.5 Matching by Dynamic Programming.- 7.3 Feature-Based Stereo.- 7.3.1 Feature-Based Stereo for Simple Scenes.- 7.3.2 Marr-Poggio-Grimson Algorithm.-
8 Shape from Monocular Images.- 8.1 Shape from Shading.- 8.1.1 Reflectance Map.- 8.1.2 Photometric Stereo.- 8.1.3 Use of Surface Smoothness Constraint.- 8.1.4 Use of Shading and Line Drawing.- 8.2 Use of Polarized Light.- 8.3 Shape from Geometrical Constraint on Scene.- 8.3.1 Surface Orientation from Parallel Lines.- 8.3.2 Shape from Texture.- 8.3.2.1 Shape from Shape of Texture Elements.- 8.3.2.2 Shape from Parallel Lines in Texture.- 8.3.2.3 Shape from Parallel Lines Extracted from Texture.-
9 Range Data Processing.- 9.1 Range Data.- 9.2 Edge Point Detection Along a Stripe Image.- 9.2.1 One-Dimensional Jump Edge.- 9.2.2 One-Dimensional Discontinuous Edge.- 9.2.3 One-Dimensional Corner Edge.- 9.3 Two-Dimensional Edge Operators for Range Images.- 9.3.1 Two-Dimensional Jump Edge.- 9.3.2 Two-Dimensional Discontinuous Edge.- 9.3.3 Two-Dimensional Corner Edge.- 9.4 Scene Segmentation Based on Stripe Image Analysis.- 9.4.1 Segmentation of Stripe Image.- 9.4.2 Construction of Planes.- 9.5 Linking Three-Dimensional Edges.- 9.6 Three-Dimensional Region Growing.- 9.6.1 Outline of Region-Growing Method.- 9.6.2 Construction of Surface Elements.- 9.6.3 Merging Surface Elements.- 9.6.3.1 Kernel Finding.- 9.6.3.2 Region Merging.- 9.6.4 Classification of Elementary Regions.- 9.6.5 Merging Curved Elementary Regions.- 9.6.5.1 Kernel Finding.- 9.6.5.2 Region Merging.- 9.6.6 Making Descriptions.- 9.6.6.1 Fitting Quadratic Surfaces to Curved Regions.- 9.6.6.2 Edges of Regions.- 9.6.6.3 Properties of Regions and Relations Between Them.-
10 Three-Dimensional Description and Representation.- 10.1 Three-Dimensional Curves.- 10.1.1 Three-Dimensional Curve Segments.- 10.1.2 Three-Dimensional B-Splines.- 10.2 Surfaces.- 10.2.1 Coons Surface Patches.- 10.2.2 B-Spline Surfaces.- 10.3 Interpolation of Serial Sections with Surface Patches.- 10.3.1 Description of Problem.- 10.3.2 Determination of Initial Pair.- 10.3.3 Selection of Next Vertex.- 10.4 Generalized Cylinders.- 10.4.1 Properties of Generalized Cylinders.- 10.4.2 Describing Range Data by Generalized Cylinders.- 10.5 Geometric Models.- 10.6 Extended Gaussian Image.-
11 Knowledge Representation and Use.- 11.1 Types of Knowledge.- 11.1.1 Knowledge About Scenes.- 11.1.2 Control.- 11.1.3 Bottom-Up Control.- 11.1.4 Top-Down Control.- 11.1.5 Feedback Control.- 11.1.6 Heterarchical Control.- 11.2 Knowledge Representation.- 11.2.1 Procedural and Declarative Representations.- 11.2.2 Iconic Models.- 11.2.3 Graph Models.- 11.2.4 Demons.- 11.2.5 Production Systems.- 11.2.6 Blackboards.- 11.2.7 Frames.-
12 Image Analysis Using Knowledge About Scenes.- 12.1 Analysis of Intensity Images Using Knowledge About Polyhedral Objects.- 12.1.1 General Strategy.- 12.1.2 Contour Finding.- 12.1.3 Hypothesizing Lines.- 12.1.4 Example of Line-…