This Glossary contains key terminology, definitions, and abbreviations used for 3D scanning, imaging, capture, and digital vision. This Glossary conforms to the ASTM E2544 and ASTM F2792 terminology standards.

The terminology below collects common terms, definitions, descriptions, nomenclature, and acronyms associated with three-dimensional (3D) imaging systems, in an effort to standardize the terminology used for such systems.

The definitions of the terms presented below are obtained from standards documents developed by various standards development organizations. The intent is not to change these universally accepted definitions but to gather, in a single document, terms and their definitions that may be used in current or future standards for 3D imaging systems.

* Denotes source as ASTM E2544 Standard Definitions and Terminology for 3D Imaging Systems.

** Denotes source as ASTM F2792 Standard Terminology for Additive Manufacturing Technologies.

1D — One-dimensional space, in which the position of a point is specified by a single coordinate along a line; the single dimension is commonly called length.

2D — In two-dimensional space, the two dimensions are commonly called length and width, and both directions lie in the same plane. Also known as two-dimensional Euclidean space.

3D — Three-dimensional space is a geometric three-parameter model of the physical universe (without considering time) in which all known matter exists. These three dimensions can be labeled by a combination of the terms length, width, height, depth, and breadth. Any three directions can be chosen, provided that they do not all lie in the same plane. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross, and are usually labeled x, y, and z. Also known as three-dimensional Euclidean space.

3D capture — Same as 3D scanning.

3D imaging — Same as 3D scanning.

3D reconstruction — The process of capturing the shape and appearance of real objects. This process can be accomplished by either active or passive methods. Also known as 3D scanning.

3D scanning — Metrological method of determining the size and shape of an object using some degree of automation; often involves an optical device, such as a laser, and sensors that calculate x, y, and z coordinates using a technique called triangulation.

3D scanning system* — A non-contact measurement instrument used to produce a 3D representation (for example, a point cloud) of an object or a site.

Some examples of a 3D imaging system are laser scanners (also known as LADARs or LIDARs or laser radars), optical range cameras (also known as flash LIDARs or 3D range cameras), triangulation-based systems such as those using pattern projectors or lasers, and other systems based on interferometry.

In general, the information gathered by a 3D imaging system is a collection of n-tuples, where each n-tuple can include but is not limited to spherical or Cartesian coordinates, return signal strength, color, time stamp, identifier, polarization, and multiple range returns (a hypothetical example of such a record is sketched after this entry).

3D imaging systems are used to measure objects ranging from relatively small scale (for example, a coin, statue, manufactured part, or human body) to larger scale objects or sites (for example, terrain features, buildings, bridges, dams, towns, and archeological sites).
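As an illustration of the n-tuple description above, the following sketch shows one point record. The field names and values are assumptions chosen for illustration, not a standardized layout; real systems vary widely in which attributes they report.

```python
# One hypothetical point record (n-tuple) from a 3D imaging system.
point = {
    "x": 1.204, "y": -0.331, "z": 2.871,  # Cartesian coordinates, meters
    "intensity": 0.62,                    # return signal strength (normalized)
    "rgb": (128, 200, 90),                # color
    "timestamp": 1718035200.123,          # acquisition time, seconds
    "returns": [2.871, 2.904],            # multiple range returns, meters
}
```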

3D digitizing — Same as 3D scanning.

4D — 3D plus the addition of a time, density, or motion element/dimension.

Accuracy of measurement* — Closeness of the agreement between the result of a measurement and a true value of the measurand. Accuracy is a qualitative concept and is not synonymous with precision.

Additive Manufacturing File Format (AMF) — An open standard for describing objects for additive manufacturing processes such as 3D printing. The official ISO/ASTM 52915:2013 standard is an XML-based format designed to allow any CAD software to describe the shape and composition of any 3D object to be fabricated on any 3D printer. Unlike the STL format, AMF has native support for color, materials, lattices, and constellations.
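As a rough illustration of the XML structure, a minimal AMF file might look like the sketch below. This is based on the general ISO/ASTM 52915 schema; the element values are placeholders, and a real file would list every vertex and triangle of the mesh.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<amf unit="millimeter">
  <object id="1">
    <mesh>
      <vertices>
        <vertex><coordinates><x>0</x><y>0</y><z>0</z></coordinates></vertex>
        <vertex><coordinates><x>10</x><y>0</y><z>0</z></coordinates></vertex>
        <vertex><coordinates><x>0</x><y>10</y><z>0</z></coordinates></vertex>
      </vertices>
      <volume>
        <!-- Triangles reference vertices by zero-based index. -->
        <triangle><v1>0</v1><v2>1</v2><v3>2</v3></triangle>
      </volume>
    </mesh>
  </object>
</amf>
```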

Angular increment* — The angle between reported points in either the azimuth or elevation directions (or a combination of both) with respect to an instrument’s internal frame of reference. For a scanning instrument, the angular increment is also known as the angle step size.

Avalanche photodiode (APD)* — A highly sensitive semiconductor electronic device that exploits the photoelectric effect to convert light to electricity. APDs are photodetectors that provide a built-in first stage of gain through avalanche multiplication. From a functional standpoint, they can be regarded as the semiconductor analog to photomultipliers.

Azimuth — The angle of horizontal deviation, measured clockwise in degrees, of a bearing from a standard direction, as from north or south.

Beam diameter* — For a laser beam with a circular irradiance pattern, the beam diameter is the extent of the irradiance distribution in a cross section of the laser beam (in a plane orthogonal to its propagation path) at a distance z.

Beam divergence angles* — Measures of the asymptotic increase of the beam widths, dσx(z) and dσy(z), with increasing distance, z, from the beam waist locations.

Beam propagation ratios* — Ratios of the product of the divergence angle, θ, and the beam width, d, at the beam waist location, z0, for a given laser beam to the same product for a perfect Gaussian beam at the same wavelength.
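This ratio is commonly written as M². A sketch of the relationship, using symbols chosen here for illustration (consistent with ISO 11146 usage but not quoted from it):

$$M^2 = \frac{\pi\, d_{\sigma 0}\, \theta}{4\,\lambda}$$

where $d_{\sigma 0}$ is the beam width at the waist, $\theta$ the far-field divergence angle, and $\lambda$ the wavelength. A perfect Gaussian beam has $d_{\sigma 0}\,\theta = 4\lambda/\pi$, so $M^2 = 1$; real beams have $M^2 \ge 1$.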

Beam width* — The extent of the irradiance distribution in a cross section of a laser beam (in a direction orthogonal to its propagation path) at a distance z.

Bias* — Difference between the average or expected value of a distribution and the true value. In metrology, the difference between precision and accuracy is that measures of precision are not affected by bias, whereas accuracy measures degrade as bias increases.

Bias (of a measuring instrument)* — Systematic error of the indication of a measuring instrument. The bias of a measuring instrument is normally estimated by averaging the error of indication over an appropriate number of repeated measurements.

Calibration* — Set of operations that establish, under specified conditions, the relationship between values of quantities indicated by a measuring instrument or measuring system, or values represented by a material measure or a reference material, and the corresponding values realized by standards. A calibration may also determine other metrological properties, such as the effect of influence quantities. The result of a calibration may be recorded in a document, sometimes called a calibration certificate or a calibration report.

Capacitively coupled detector — A class of single-photon detectors (SPDs), which are the most sensitive instruments for light detection. In the near-infrared range, SPDs are based on III–V compound semiconductor avalanche photodiodes, such as InGaAs.

Charge-coupled device (CCD) — A device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated, for example, conversion into a digital value. This is achieved by “shifting” the signals between stages within the device one at a time. CCDs move charge between capacitive bins in the device, with the shift allowing for the transfer of charge between bins.

Coaxial — Sharing a common axis; said of two or more three-dimensional linear forms. The three-dimensional analog of concentric.

CAD** — Computer-Aided Design. The use of computers for the design of real and virtual objects.

CAE — Computer-Aided Engineering. CAE software offers capabilities for engineering simulation and analysis, such as determining a part’s strength or its capacity to transfer heat.

CAM** — Computer-Aided Manufacturing. Typically refers to systems that use surface data to drive CNC machines, such as digitally-driven mills and lathes for producing parts, molds, and dies.

Compensation* — The process of determining systematic errors in an instrument and then applying these values in an error model that seeks to eliminate or minimize measurement errors.

Computerized Tomography (CT) — Also called computed tomography; a method of capturing the internal and external structure of an object using ionizing radiation. A CT scan creates a series of two-dimensional gray-scale images (slices) that can be stacked to reconstruct a 3D volume.

Computer vision — Computer vision includes methods for acquiring, processing, analyzing, and understanding images and, in general, high-dimensional data from the real world in order to produce numerical or symbolic information. A theme in the development of this field has been to duplicate the abilities of human vision by electronically perceiving and understanding an image. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. Computer vision has also been described as the enterprise of automating and integrating a wide range of processes and representations for vision perception.

Computer vision is also concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems.

Conoscopic holography — Conoscopic holography measures distances by using the polarization properties of a converging light cone that reflects from an object. At the core of the technology is an anisotropic crystal: a ray that traverses it splits into two components that share the same path but have orthogonal polarizations. The crystal’s anisotropic structure forces each of the polarized light rays to propagate at a different velocity, thus creating a phase difference between them. This phase difference enables the formation of an interference pattern that varies with the distance from the object under measurement.

Control network* — A collection of identifiable points (visible or inferable), with stated coordinate uncertainties, in a single coordinate system. (Outside this standard, a control network can also refer to a network of nodes that collectively monitor, sense, and control or enable control of an environment for a particular purpose.)

Control point* — An identifiable point which is a member of a control network. (In geometric modeling, a control point is instead a member of the set of points that determines the shape of a spline curve or, more generally, a surface or higher-dimensional object.)

Coordinate system — A system that uses one or more numbers, or coordinates, to uniquely determine the position of a point or other geometric element on a manifold such as Euclidean space.

Conventional true value (of a quantity)* — A value attributed to a particular quantity and accepted, sometimes by convention, as having an uncertainty appropriate for a given purpose.

Error (of measurement)* — Result of a measurement minus a true value of the measurand. Since a true value cannot be determined, in practice, a conventional true value is used. When it is necessary to distinguish “error” from “relative error,” the former is sometimes called “absolute error of measurement.” This should not be confused with the “absolute value of error,” which is the modulus of error.

Facet** — Typically a three- or four-sided polygon that represents an element of a 3D polygonal mesh surface or model. Triangular facets are used in STL files.

Field of view (FOV)* — The angular extent within which objects are measurable by a device such as an optical instrument without user intervention. For a scanner that is based on a spherical coordinate system, the FOV can typically be given by two angles: horizontal (azimuth) angle and vertical (elevation) angle.

First return* — For a given emitted pulse, the first reflected signal detected by a time-of-flight (TOF) 3D imaging system for a given sampling position (that is, azimuth and elevation angle).

Flash LADAR* — 3D imaging system, composed of a source of light (commonly a laser, but for close proximity it can be a bank of LEDs) and a focal plane array (FPA) detector, that is designed so that the range (and in some cases intensity) for all the pixels in the frame is acquired nearly simultaneously in a single flash of illumination. Flash LADAR allows for high frame rates (for example, 30 frames per second or faster), which is critical for real-time applications such as collision avoidance and autonomous vehicle navigation.

Focal plane array (FPA) — A staring array, staring-plane array, focal-plane array (FPA), or focal-plane is an image sensing device consisting of an array (typically rectangular) of light-sensing pixels at the focal plane of a lens.

Hybrid imaging/scanning — A system consisting of a combination of contact and non-contact technologies.

Imaging — Imaging is the representation or reproduction of an object’s form; especially a visual representation (i.e., the formation of an image).

Image-based meshing — Image-based meshing is the automated process of creating computer models for computational fluid dynamics (CFD) and finite element analysis (FEA) from 3D image data (such as magnetic resonance imaging (MRI), computed tomography (CT) or microtomography). Although a wide range of mesh generation techniques are currently available, these were usually developed to generate models from computer-aided design (CAD), and therefore have difficulties meshing from 3D imaging data.

Indicating (measuring) instrument* — Measuring instrument that displays an indication. The display may be analog (continuous or discontinuous) or digital. Values of more than one quantity may be displayed simultaneously. A displaying measuring instrument may also provide a record. Examples include analog indicating voltmeter, digital frequency meter, and micrometer.

Instrument origin* — Point from which all instrument measurements are referenced, that is, origin of the instrument coordinate reference frame (0, 0, 0).

Interferometry — Interferometry is a family of techniques in which waves, usually electromagnetic, are superimposed in order to extract information about the waves.

Last return* — For a given emitted pulse, the last reflected signal detected by a time-of-flight (TOF) 3D imaging system for a given sampling position (that is, azimuth and elevation angle).

LADAR* — Laser detection and ranging system. LADAR systems use light to determine the distance to an object. Since the speed of light is well known, LADAR can use a short pulsed laser to illuminate a target and then time how long it takes the light to return. The advantage of LADAR over RADAR (Radio Detection And Ranging) is that LADAR can also image the target at the same time as it determines the distance, allowing a 3D view of an object.

LIDAR* — Light detection and ranging system. LIDAR is a remote sensing method that uses light in the form of a pulsed laser to measure ranges (variable distances). These light pulses, combined with other data recorded by the system, generate precise, three-dimensional information about the shape of objects and their surface characteristics.

Limiting conditions* — The manufacturer’s specified limits on the environmental, utility, and other conditions within which an instrument may be operated safely and without damage.

Machine vision — Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis, for applications such as quality inspection, process control, and robot guidance in industry.

Magnetic Resonance Imaging (MRI) — An alternative imaging technology to CT scanning that offers better soft-tissue contrast. MRI does not use ionizing radiation.

Maximum permissible error (MPE)* — Extreme values of an error permitted by specification, regulations, and so forth for a given measuring instrument.

Measurand* — A specific quantity subject to measurement.

Measurement rate* — Reported points per second.

Mesh — Same as polygon mesh.

Metrology — The science of measurement.

Multiple-return range resolution* — The smallest measurable difference in range between two surfaces that produce multiple returns, when measured with a 3D imaging system, as established by a standard or a formal test method.

Multiple returns* — The signals returned to a single detector element from simultaneously-illuminated multiple surfaces.

NURBS — Non-Uniform Rational B-Splines. NURBS surfaces are used to describe the shapes of 3D computer models with mathematical precision.

OEM — Original Equipment Manufacturer.

Phase shift — How far a periodic function is shifted horizontally from its usual position; the vertical shift is how far the function is shifted vertically from its usual position.
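For a sinusoid this can be written explicitly (a generic textbook form, not tied to any particular standard):

$$y = A \sin\bigl(B(x - C)\bigr) + D$$

where $C$ is the phase (horizontal) shift and $D$ is the vertical shift; $A$ is the amplitude and $B$ sets the period, $2\pi/B$.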

Photogrammetry — The use of photography to measure distances between, and features on, objects.

Pixel* — A single cell in a focal plane array sensor, or a discrete element in a 2D grid representation of data. The term pixel was originally coined to stand for “picture element” in the image analysis domain. In computer graphics, a pixel represents the smallest discrete element of an image displayed on a screen. Pixel shape is dependent on the technology used to display it. Pixel size has also been used to describe the resolution of a sensor, but pixel size can also be an arbitrary dimension specified by the user (for example, it is possible to “pixelate” a digital image so that the smallest pixel size becomes a multiple of the smallest pixel size in the original image). In 3D imaging applications, 2D projections of point clouds are often displayed on a computer screen (or printed). Hence, although the underlying data may be 3D, it is visualized in terms of pixels projected onto a 2D grid.

Point* — An abstract concept describing a location in space which is specified by its coordinates and other attributes. A point has no dimensions (volume, area, or length). A point can be derived. For example, a centroid of a group of points or a point representing the corner of an object derived by the intersection of three planes. Examples of attributes are color, time stamps, point identifier, return pulse intensity, and polarization.

Point cloud* — A collection of data points in 3D space (frequently in the hundreds of thousands), for example as obtained using a 3D imaging system. The distance between points is generally non-uniform and hence all three coordinates (Cartesian or spherical) for each point must be specifically encoded.

Polygon mesh — A polygon mesh is a collection of vertices, edges and faces that defines the shape of a polyhedral object in 3D computer graphics and solid modeling. The faces usually consist of triangles (triangle mesh), quadrilaterals, or other simple convex polygons, since this simplifies rendering, but may also be composed of more general concave polygons, or polygons with holes.
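A minimal sketch of the indexed form commonly used to store such a mesh (the variable names and geometry are illustrative):

```python
# An indexed triangle mesh: shared vertices plus faces that reference
# them by position. Here, a unit square split into two triangles.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]
faces = [
    (0, 1, 2),  # lower-right triangle
    (0, 2, 3),  # upper-left triangle
]
```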

Precision* — Closeness of agreement between independent test results obtained under stipulated conditions. In metrology, the variability of a measurement process around its average value. Precision is usually distinguished from accuracy, the variability of a measurement process around the true value. Precision, in turn, can be decomposed further into short-term variation or repeatability and long-term variation or reproducibility.

Random error* — Result of a measurement minus the mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions. Random error is equal to error minus systematic error. Because only a finite number of measurements can be made, it is possible to determine only an estimate of random error.

Range* — The distance, in units of length, between a point in space and an origin fixed to the 3D imaging system that is measuring that point. In general, the origin corresponds to the instrument origin.

Range finding — The determination of the range to a target by means of a range finder.

Range resolution* — The smallest change in range that causes a perceptible change in the corresponding range measurement indication. There are two ways to use this term, and to avoid ambiguity it is recommended that the terms single-return range resolution or multiple-return range resolution be used for quantitative specifications. The difference in range may be a result of a change between two separate distance measurements or the distance between two static objects separated from each other in range.

Rated conditions* — Manufacturer-specified limits on environmental, utility, and other conditions within which the manufacturer’s performance specifications are guaranteed at the time of installation of the instrument.

Registration* — The process of determining and applying to two or more datasets the transformations that locate each dataset in a common coordinate system so that the datasets are aligned relative to each other. A 3D imaging system generally collects measurements in its local coordinate system. When the same scene or object is measured from more than one position, it is necessary to transform the data so that the datasets from each position have a common coordinate system. Sometimes the registration process is performed on two or more datasets which do not have regions in common. For example, when several buildings are measured independently, each dataset may be registered to a global coordinate system instead of to each other.

In the context of this definition, a dataset may be a mathematical representation of surfaces or may consist of a set of coordinates, for example, a point cloud, a 3D image, control points, survey points, or reference points from a CAD model. Additionally, one of the datasets in a registration may be a global coordinate system.

The process of determining the transformation often involves the minimization of an error function, such as the sum of the squared distances between features (for example, points, lines, curves, surfaces) in two datasets. In most cases, the transformations determined from a registration process are rigid body transformations. This means that the distances between points within a dataset do not change after applying the transformations, that is, rotations and translations. In some cases, the transformations determined from a registration process are non-rigid body transformations. This means that the transformation includes a deformation of the dataset. One purpose of this type of registration is to attempt to compensate for movement of the measured object or deformation of its shape during the measurement. A minimal sketch of the rigid, least-squares case is given after this entry.

Registration between two point clouds is sometimes referred to as cloud-to-cloud registration; between two sets of control or survey points as target-to-target; between a point cloud and a surface as cloud-to-surface; and between two surfaces as surface-to-surface. The word alignment is sometimes used as a synonymous term for registration. However, in the context of this definition, an alignment is the result of the registration process.
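The sketch below illustrates the rigid, least-squares case described above, assuming two point sets with known point-to-point correspondences (the Kabsch method); full cloud-to-cloud registration, such as ICP, iterates between estimating correspondences and this closed-form step. The function name is illustrative.

```python
import numpy as np

def rigid_register(source: np.ndarray, target: np.ndarray):
    """Rotation R and translation t minimizing
    sum_i || R @ source[i] + t - target[i] ||^2 (Kabsch method).

    source, target: (N, 3) arrays of corresponding points.
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```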

Relative error* — Error of measurement divided by a true value of the measurand.

Repeatability (of results of measurements)* — Closeness of the agreement between the results of successive measurements of the same measurand carried out under the same conditions of measurement. These conditions are called repeatability conditions. Repeatability conditions include: the same measurement procedure; the same observer; the same measuring instrument used under the same conditions; the same location; and repetition over a short period of time.

Reproducibility (of results of measurements)* — Closeness of the agreement between the results of measurements of the same measurand carried out under changed conditions of measurement. A value statement of reproducibility requires specification of the conditions changed. The changed conditions may include: principle of measurement; method of measurement; observer; measuring instrument; reference standard; location; conditions of use; and time. Reproducibility may be expressed quantitatively in terms of the dispersion characteristics of the results.

Resolution — The detail an image holds. The term applies to raster digital images, film images, and other types of images. Higher resolution means more image detail. Image resolution can be measured in various ways; fundamentally, resolution quantifies how close lines can be to each other and still be visibly resolved. Resolution units can be tied to physical sizes (e.g., lines per mm, lines per inch), to the overall size of a picture (lines per picture height), or to angular subtense. Line pairs are often used instead of lines; a line pair comprises a dark line and an adjacent light line.

Reverse engineering** — A method of creating a digital representation from a physical object to define its shape, dimensions, and internal and external features.

Scanning — To direct a finely focused beam of light or electrons in a systematic pattern over (a surface) in order to reproduce or sense and subsequently transmit an image. To direct a radar or light beam in a systematic pattern across a sector of an object in search of a target.

Second order moments* — The second order moments of an irradiance distribution of a simple astigmatic laser beam at a given range, z, along the principal axes, x and y, as defined in ISO 11146-1.

Simple astigmatic beam* — A beam having non-circular power density distributions and whose principal axes retain constant orientation under free propagation. This definition is adapted from ISO 11146.

An alternative definition found in the literature: a beam having two principal axes orthogonal to the propagation direction that are defined by non-spherical curvature of the phase front (a surface of constant phase). In particular, the non-spherical curvature will be cylindrical in nature, giving rise to the beam waist for each principal axis occurring at different propagation planes. The distance between these planes is called the astigmatic difference or interval.

Single return* — The signal returned to a single detector element from an illuminated object perceived as a single surface by the 3D imaging system. Historically, the word “return” applies to active systems that illuminate the target. However, the above definition of single return also applies to passive systems that use ambient illumination. Signals can be electromagnetic or acoustic.

Single-return range resolution* — The range resolution where each range measurement is obtained from a single return and is determined by a standard or a formal test method. The single-return range resolution is dependent on several factors such as beam width, object reflectivity, distance to the object, angles of incidence and observation with the object, object material/texture, scan speed (for scanning systems), point density, and orientation of the surface with respect to the scan direction.

Solid model — 3D CAD representation defined using solid modeling techniques. Many solid-modeling software products use geometric primitives, such as cylinders and spheres, and features such as holes and slots, to construct 3D forms. Solid models are preferred over surface models for additive manufacturing because they define a closed, “watertight” volume—a requirement for most additive-manufacturing systems.

Spot size* — Although not a recommended term, the term spot size has been used to mean the radius or the diameter of the laser beam.

STEM/STEAM — An educational model/philosophy that emphasizes Science, Technology, Engineering, and Math (STEM), or Science, Technology, Engineering, Arts, and Math (STEAM).

STL** — A file format for 3D model data used by machines to build physical parts. STL is the de facto standard interface for additive-manufacturing systems. STL originated from the term stereolithography. The STL format uses triangular facets to approximate the shape of an object, listing the vertices, ordered by the right-hand rule, and unit normals of the triangles, and excludes CAD model attributes.
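For reference, a minimal ASCII STL fragment with a single facet might look like the sketch below; a real file lists one `facet` block per triangle.

```
solid example
  facet normal 0 0 1
    outer loop
      vertex 0 0 0
      vertex 1 0 0
      vertex 0 1 0
    endloop
  endfacet
endsolid example
```

The vertices are ordered counterclockwise when viewed from outside the surface (the right-hand rule), consistent with the +z normal here.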

Structured light — The process of projecting a known pattern of pixels (often grids or horizontal bars) onto a scene. The way these patterns deform when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene, as used in structured-light 3D scanners.

Surface model** — Mathematical or digital representation of an object as a set of planar or curved surfaces, or both, that may or may not represent a closed volume. Surface models may consist of Bézier, B-spline, or NURBS surfaces. A surface model may also consist of a mesh of polygons, such as triangles, although this approach only approximates the exact shape of the model.

Systematic error* — Errors associated with a flaw in equipment or in the design of an experiment. Unfortunately, systematic errors cannot be estimated by repeating the experiment with the same equipment. A mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions minus a true value of the measurand. Systematic error is equal to error minus random error. Like true value, systematic error and its causes cannot be completely known.

Texture mapping — A graphic design process in which a 2D surface, called a texture map, is “wrapped around” a 3D object. Thus, the 3D object acquires a surface texture similar to that of the 2D surface.
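In practice the wrapping is usually encoded by assigning each vertex of the 3D mesh a 2D coordinate (u, v) into the texture image; the renderer samples the image at those coordinates to color the surface. A minimal sketch (the values are illustrative):

```python
# One (u, v) texture coordinate per mesh vertex, each in [0, 1] x [0, 1].
# The renderer looks up the texture image at (u, v) to color the surface.
uv_coords = [
    (0.0, 0.0),  # vertex 0 -> lower-left corner of the texture
    (1.0, 0.0),  # vertex 1 -> lower-right
    (1.0, 1.0),  # vertex 2 -> upper-right
    (0.0, 1.0),  # vertex 3 -> upper-left
]
```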

Time-of-flight (ToF) — Describes a variety of methods that measure the time it takes for an object, particle, or wave (acoustic, electromagnetic, or other) to travel a distance through a medium. A time-of-flight camera is a range imaging camera system that resolves distance based on the known speed of light, measuring the time of flight of a light signal between the camera and the subject for each point of the image. The time-of-flight camera is a class of scannerless LIDAR, in which the entire scene is captured with each laser or light pulse, as opposed to point by point with a laser beam as in scanning LIDAR systems.
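The underlying arithmetic is simple: the pulse travels to the target and back, so the one-way range is half the round-trip path. A sketch (the function name is illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_time_s: float) -> float:
    """Range from a pulsed time-of-flight measurement: d = c * t / 2."""
    return C * round_trip_time_s / 2.0

# A ~66.7 ns round trip corresponds to about 10 m of range.
print(tof_range_m(66.7e-9))  # ~10.0
```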

Touch probe scanning — Also known as contact scanning or hard probing. Used for gathering discrete points on a solid surface. Can be labor intensive, but is often more accurate and capable than non-contact scanning for certain types of objects and geometries.

Triangulation — A method of inferring the location of a point on a surface by projecting light onto the surface and observing that light, if possible, from a different angle or orientation.
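In the simplest geometry (emitter and sensor with parallel optical axes, separated by a known baseline), similar triangles give the depth directly. A minimal sketch under those idealized assumptions:

```python
def triangulate_depth(focal_len_px: float, baseline_m: float,
                      disparity_px: float) -> float:
    """Depth z = f * b / d for an idealized triangulation geometry.

    f: focal length in pixels; b: baseline between light source and
    sensor in meters; d: observed disparity (image offset) in pixels.
    """
    return focal_len_px * baseline_m / disparity_px

# Example: f = 800 px, b = 0.1 m, d = 20 px -> z = 4.0 m.
print(triangulate_depth(800.0, 0.1, 20.0))
```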

Tribrach — An attachment plate used to attach an instrument, for example a theodolite, total station, or target to a tripod. A tribrach allows the instrument to be repeatedly placed in the same position with sub-millimeter precision, by loosening and re-tightening a locking handle or lever.

True value (of a quantity)* — A value consistent with the definition of a given particular quantity. This is a value that would be obtained by a perfect measurement.

Uncertainty of measurement* — A parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises, in general, many components. Some of these components may be evaluated from the statistical distribution of the results of series of measurements and can be characterized by experimental standard deviations. The other components, which can also be characterized by standard deviations, are evaluated from assumed probability distributions based on experience or other information.

Voxel* — A discrete volumetric element in a 3D grid representation of data.
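A minimal sketch of such a grid as used to bin scan data (the array shape and the marked cell are arbitrary choices for illustration):

```python
import numpy as np

# A 64x64x64 occupancy grid: each cell (voxel) records whether any
# measured point falls inside that cube of space.
grid = np.zeros((64, 64, 64), dtype=bool)
grid[10, 20, 30] = True  # mark one voxel as occupied
```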

X-ray — A form of high-energy electromagnetic radiation, typically produced when high-energy electrons decelerate or transition between inner energy levels in an atom.