The practice of aerodynamic and hydrodynamic research widely exploits physical modelling of flow in aerodynamic and hydrodynamic tunnels, which allows obtaining aerodynamic forces and moments for various flow velocities and aircraft manoeuvres. The similitude concept serves as the basis for establishing correct experimental conditions so that the results are adequate to real flight. The similitude concept is based on the correspondence of several dimensionless flow parameters (Reynolds number, Mach number, Prandtl number and some others). The equivalence of these dimensionless parameters justifies transferring results between real and simulated environments. Based on the similitude concept, an aerodynamic process can be studied in a hydrodynamic tunnel, which offers the advantage of studying the flow at low velocities.
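As an illustration (with typical textbook property values, not figures from this study): the Reynolds number is $Re = vL/\nu$, so for a model of the same size $L$, matching $Re$ between air ($\nu_{air} \approx 1.5\cdot10^{-5}\,\mathrm{m^2/s}$) and water ($\nu_{water} \approx 1.0\cdot10^{-6}\,\mathrm{m^2/s}$) requires

$$v_{water} = v_{air}\,\frac{\nu_{water}}{\nu_{air}} \approx \frac{v_{air}}{15},$$

i.e. the same Reynolds number is reached in water at a flow velocity roughly fifteen times lower than in air.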
Another part of aerodynamic research is obtaining reliable information about the characteristics of the flow and its behaviour in various flight conditions. Flow visualization plays a very important role in this field of research, as it provides valuable qualitative information about the distribution of pressure and velocities in the flow. Various techniques for flow visualization have been proposed, such as using filaments, small light particles, coloured gas or smoke. With the progress in non-contact optical measurement techniques, new possibilities arise for retrieving not only qualitative but also quantitative data about flow behaviour.
To exploit the advantages of non-contact optical study of the flow process at low velocities in a hydrodynamic tunnel (Figure 1), the effect of light refraction at the boundaries of the various optical media has to be taken into account. The paper presents the developed technique for accurate 3D registration and visualization of flow during experiments in a hydrodynamic tunnel.
The main contributions of the study are: (1) implementation of the developed techniques for accurate 3D measurements in a multimedia optical working space in a photogrammetric 3D measurement system; (2) experimental 3D registration and 3D visualization of the flow jets in a laboratory hydrodynamic tunnel; (3) experimental evaluation of the accuracy of multimedia 3D measurements for 3D visualization of the flow jets.
Visualization plays an important role in scientific research, providing data representation best suited for analysis. Vision-based methods for automatically generating accurate photorealistic 3D models of real objects of complicated shape [1, 2, 3, 4] or of processes of complex nature and behaviour [5, 6, 7, 8] provide a new quality of data representation and, as a result, increase research efficiency. In aerodynamics and hydrodynamics, flow visualization allows exhibiting the inner characteristics of the flow that are essential for understanding the process.
Figure 1: Flow visualization in hydrodynamic tunnel
Étienne-Jules Marey was the first to visualize and record air flow in the first wind tunnel [9]. This wind tunnel was designed by Gustave Eiffel and installed at the foot of the Eiffel tower [10]. Marey proposed to inject thin, parallel smoke jets into the flow so that they could be recorded by a photographic camera. This invention made it possible to analyze the direction of the flow and to retrieve information about the distribution of velocities in different parts of the flow [11].
Many different techniques for flow visualization have been developed since the first experiments of Étienne-Jules Marey. Among them are visualization with tufts glued by one end to a model surface; generating small air bubbles injected into the flow; techniques based on registering variations in flow density; and interferometry methods. A detailed review of modern flow visualization methods can be found in [10].
Adding small high-contrast particles to the flow is widely used for visualization and registration. Usually, a system of high-speed cameras working in synchronized mode registers the flow motion. The processing of these registrations is then performed according to either the Eulerian or the Lagrangian approach [12, 13, 14, 15]. Eulerian methods carry out voxel-based reconstruction of particles per time step, followed by 3D motion estimation with some form of dense matching between the precomputed voxel grids from different time steps [16, 12]. Lagrangian techniques reconstruct an explicit sparse set of particles, the individual particles being tracked over time. Physical constraints can only be incorporated in a post-processing step when interpolating the particle tracks to a dense motion field [13].
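As a heavily simplified illustration of the Lagrangian view, a single nearest-neighbour matching step between two reconstructed particle sets could be sketched as follows (practical trackers such as Shake-The-Box [13] use predictive, iterative schemes; the function and parameter names here are ours):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_particles(pts_t0, pts_t1, max_disp):
    """Greedy nearest-neighbour matching of reconstructed particle positions
    between two consecutive time steps (Lagrangian view).
    pts_t0, pts_t1: (N, 3) arrays of particle coordinates;
    max_disp: largest displacement accepted per time step."""
    tree = cKDTree(pts_t1)
    dist, idx = tree.query(pts_t0, k=1)   # nearest candidate at t1 for every particle at t0
    matched = dist < max_disp             # reject implausibly large jumps
    displacements = pts_t1[idx[matched]] - pts_t0[matched]
    return idx[matched], displacements    # displacement per step ~ velocity estimate
```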
Some recent methods for 3D reconstruction of object shape incorporate deep learning for multi-view or even single-view reconstruction [17, 18, 19, 20]. These methods look promising for further research in 3D flow analysis.
The impressive progress in vision-based 3D reconstruction methods allows accurate quantitative registration for further 3D visualization of the flow process. The basis for accurate 3D measurements by an optical system is an imaging model with correct system calibration parameters. Calibration techniques required for photogrammetric 3D measurements [21, 22] are mostly developed for the case of a single optical medium; for flow study in a hydrodynamic tunnel they have to consider light refraction at the boundaries of the optical media separating the flow from the optical measurement system.
Methods for calibration of optical systems for measurement in the multimedia optical case can be roughly classified as follows.
Some techniques aim at compensating the distortion effects caused by refraction. To exploit the vanishing of aberrations for light rays crossing the optical interface at a 90° angle, special optical elements (such as prisms filled with water) are used [23]. Such a technique is often applied in fluid flow analysis by the methods of stereoscopic particle image velocimetry (PIV) [24].
Another group of techniques accounts for the refraction in the imaging model [25, 26, 27], thus obtaining the necessary accuracy of 3D measurements. To apply a modern photogrammetric workflow based on structure-from-motion and multi-view stereo techniques [28] within existing software and workflows, refraction correction is applied at the photo level [29].
For some multimedia optical measurement applications a justifiable approach is to “absorb” refraction effects in the estimated calibration parameters of the camera [30]. The approach is reasonable when the main effect of refraction is radially symmetric relative to the principal point. The “absorbing” technique gives an appropriate description of the distortion model when the optical axis of the camera is close to perpendicular to the optical interface plane. Unfortunately, the method of “absorbing” refraction effects always leaves some systematic errors that are not accounted for in the imaging model. Refraction invalidates the assumption that the camera has a single center of projection [25, 31], which is the main assumption of such a model.
To obtain a reliable and transparent method for accounting for refraction effects, an accurate imaging model for the case of image acquisition through two optical media interfaces was developed [32]. It derives a set of equations that directly describe the ray path from a given object point to the image plane.
The proposed technique for accurate flow 3D visualization is developed for flow behaviour analysis in the hydrodynamic tunnel HDT400 of the Central Aero- and Hydrodynamic Institute (TsAGI). The HDT400 working space for placing a scaled model of an aircraft or its wing is 400 × 400 × 400 mm. The workspace allows monitoring the flow through glass walls. HDT400 has a vertical structure: driven by gravity, water enters the working part of the tunnel from a water tank installed on top. The range of flow velocities in HDT400 is 2…10 cm/s. The advantage of the HDT400 hydrodynamic tunnel is the possibility of studying an aerodynamic process at low velocities.
For preliminary study and testing of the technique, a laboratory setup was created (Figure 2) that reproduces the study conditions of the HDT400 hydrodynamic tunnel. It has a vertical design similar to HDT400, with a working part of 110 × 110 × 200 mm. A water tank is mounted above the working part and has a set of injectors for colouring flow jets during tests. The general view of the laboratory setup is shown in Figure 2(a), and the working part with a mounted stereolithography model of a wing is presented in Figure 2(b).
Figure 2: Laboratory setup for evaluating the 3D visualization technique: (a) general view of the laboratory setup; (b) working part with a mounted stereolithography model of a wing and coloured jets.
The photogrammetric 3D reconstruction system “Mosca” [33] was used for 3D flow registration and accurate measurements. In the laboratory setup, the photogrammetric system “Mosca” was used in two-camera mode. For operation in multimedia optical conditions, the “Mosca” photogrammetric software was extended to implement the developed algorithms that account for refraction effects.
Table 1: DMK 37BUX273 camera specification

  Parameter        Value
  Sensor type      CMOS Pregius
  Format           1/2.9”
  Dynamic range    10 bit
  Resolution       1,440 × 1,080
  Pixel size       3.45 µm × 3.45 µm
  Frame rate       up to 238 fps
  Shutter          1 µs to 30 s
  Lens             6 mm
The optical 3D measurement system consists of two DMK 37BUX273 cameras equipped with the Sony IMX273LLR CMOS sensor and an Epson EMP1705 structured-light projector, mounted on a rigid platform providing stable exterior orientation. The main technical characteristics of the cameras are given in Table 1.
For application in a multimedia optical environment we modify the standard imaging model in the form of collinearity equations to account for refraction at the optical media interfaces. The accurate imaging model [32] for this case considers refraction of the light ray from an object point A to the corresponding image point a (Figure 3) at two optical interfaces: “air-glass” and “glass-liquid”. The ray path for this case can be presented as three vectors r_1, r_2, r_3 for air, glass, and liquid correspondingly.
Figure 3 presents the coordinate systems considered in the study. The coordinate system OXYZ is related to the studied object, the image coordinate system Cxyz is related to the camera, and the glass coordinate system ΩX_gY_gZ_g is related to the glass wall of the working part.
Figure 3: Coordinate systems and the path of a light ray.
For each of the vectors r_1, r_2, r_3 the equations defining its position in the object coordinate system are derived using Snell's law.
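The explicit form of these equations is given in [32]; for reference, the standard vector form of Snell's law at a planar interface (an illustration of the underlying relation, not necessarily the notation of [32]) reads

$$\mathbf{d}_t = \frac{n_i}{n_t}\,\mathbf{d}_i + \left(\frac{n_i}{n_t}\cos\theta_i - \cos\theta_t\right)\mathbf{n},\qquad
\cos\theta_i = -\,\mathbf{n}\cdot\mathbf{d}_i,\qquad
\cos\theta_t = \sqrt{1-\left(\tfrac{n_i}{n_t}\right)^2\left(1-\cos^2\theta_i\right)},$$

where d_i and d_t are the unit directions of the incident and refracted rays, n is the interface unit normal pointing toward the incident medium, and n_i, n_t are the refractive indices of the incident and transmitting media.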
The coordinates of the origins of these vectors, C, A_1 and A_2, are defined using the parameters of camera exterior orientation and the conditions of intersection with the glass planes; the refraction indices of glass n_1 and water n_2 are taken as known or are determined during calibration [32]. The system of equations for the light ray path from an object point A to the corresponding image point a can be written in the form

$$F(\mathbf{x}_a,\, n_1,\, n_2,\, \mathbf{X}_\Omega,\, \mathbf{X}_A - \mathbf{X}_C) = 0, \qquad (4)$$
Equation (4) establishes the relations between the object point X_A, the center of projection X_C, and the image point x_a. It is thus an analog of the standard photogrammetric collinearity equations and can be used for photogrammetric system calibration and for determining 3D coordinates of object points.
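To illustrate how such a model can be evaluated numerically, the sketch below traces a ray from the camera centre through the air-glass and glass-water interfaces, assumed planar and parallel to the X_gY_g plane of the glass coordinate system; it is a simplified reading of the model, not the implementation of [32], and the function names are ours:

```python
import numpy as np

def refract(d, n, n_i, n_t):
    """Snell's law in vector form: refract unit direction d at an interface
    with unit normal n (pointing back toward the incident medium)."""
    d = d / np.linalg.norm(d)
    cos_i = -np.dot(n, d)
    sin2_t = (n_i / n_t) ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:                 # total internal reflection: no transmitted ray
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return (n_i / n_t) * d + (n_i / n_t * cos_i - cos_t) * n

def trace_through_wall(C, d, z_air_glass, z_glass_water, n_glass, n_water):
    """Trace the ray r1 -> r2 -> r3 from camera centre C with direction d
    (glass coordinate system, interfaces parallel to the XY plane).
    Returns the entry point A2 into the water and the direction r3 there."""
    normal = np.array([0.0, 0.0, -1.0])          # interface normal facing the camera (air side)
    t1 = (z_air_glass - C[2]) / d[2]             # air segment r1 up to the air-glass plane
    A1 = C + t1 * d
    r2 = refract(d, normal, 1.0, n_glass)        # direction inside the glass
    t2 = (z_glass_water - A1[2]) / r2[2]         # glass segment r2 up to the glass-water plane
    A2 = A1 + t2 * r2
    r3 = refract(r2, normal, n_glass, n_water)   # direction inside the water
    return A2, r3
```

The residual of equation (4) can then be formed by comparing the water-segment ray (A_2, r_3) with the direction toward the object point A.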
The nonlinear distortion parameters are accounted for as additional terms Δx, Δy in equation (4). These terms are taken in the form of the Brown-Conrady model [34, 35]:
$$\Delta x = a_0\,y + x\,(a_1 r^2 + a_2 r^4 + a_3 r^6) + a_4\,(r^2 + 2x^2) + 2 a_5\,x y, \qquad (5)$$

$$\Delta y = a_0\,x + y\,(a_1 r^2 + a_2 r^4 + a_3 r^6) + a_5\,(r^2 + 2y^2) + 2 a_4\,x y, \qquad (6)$$

with $r^2 = x^2 + y^2$.
Here x_a, y_a are the coordinates of a point on the image and a_0, ..., a_5 are camera interior orientation parameters: a_0 is the coefficient of affine distortion; a_1, a_2, a_3 are the coefficients of radial distortion; a_4, a_5 are the coefficients of tangential distortion.
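Equations (5)-(6) translate directly into code; a minimal sketch (assuming the image coordinates are already reduced to the principal point) is:

```python
def brown_conrady(x, y, a):
    """Distortion corrections of eqs. (5)-(6); a = (a0, ..., a5) are the
    affine, radial and tangential coefficients estimated during calibration."""
    a0, a1, a2, a3, a4, a5 = a
    r2 = x * x + y * y
    radial = a1 * r2 + a2 * r2 ** 2 + a3 * r2 ** 3
    dx = a0 * y + x * radial + a4 * (r2 + 2 * x * x) + 2 * a5 * x * y
    dy = a0 * x + y * radial + a5 * (r2 + 2 * y * y) + 2 * a4 * x * y
    return dx, dy
```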
The vector vel = (x_p, y_p, m_x, m_y, a_0, ..., a_5)^T of interior orientation parameters is estimated by the calibration procedure [32]; it includes the coordinates of the principal point, the image scales and the additional parameters correspondingly, the spatial coordinates of the reference points being known from independent precise measurements. The unknown parameters are determined by least squares estimation, using the image coordinates of a set of test-field reference points as observations [36, 37].
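Conceptually the calibration is a nonlinear least squares problem; the schematic sketch below assumes a function project(p, X) implementing the refractive imaging model of equation (4) together with the distortion terms (5)-(6), and an initial guess p0 for vel (both are placeholders, not part of the original software):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, X_ref, xy_obs, project):
    """p       - interior orientation parameters (vel);
    X_ref   - known 3D coordinates of the test-field reference points;
    xy_obs  - their measured image coordinates, shape (N, 2);
    project - refractive imaging model, returns predicted (x, y)."""
    xy_pred = np.array([project(p, X) for X in X_ref])
    return (xy_pred - xy_obs).ravel()

# result = least_squares(residuals, p0, args=(X_ref, xy_obs, project))
# result.x then contains the estimated interior orientation parameters.
```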
Table 2: Interior orientation parameters
The laboratory setup was used to evaluate the developed technique. First, the calibration technique for the optical multimedia case was evaluated. The results of the estimation of the interior orientation parameters at the laboratory hydrodynamic tunnel were compared with the results of calibration at a special multimedia calibration stand [38]. Table 2 presents the results of the calibration for both cases, demonstrating good correspondence of the two different calibrations.
Figure 4: Results of measurement comparison: (a) CAD model of the wing used for 3D printing; (b) comparison of the 3D scan with the CAD model used for 3D printing.
To obtain another kind of estimate of the accuracy of 3D measurements, experimental 3D scanning of a reference object was carried out. The stereolithography (SLA) model of a wing (Figure 4(a)) designed for experiments in the hydrodynamic tunnel was used as the reference object, and 3D scanning was performed with the SLA model placed in the working part of the laboratory hydrodynamic tunnel. The result of comparing the CAD model of the wing with the 3D scan of the SLA model is shown in Figure 4(b). CloudCompare software (https://www.cloudcompare.org) was used to align and to compare the 3D models. CloudCompare is open-source software developed for 3D point cloud and mesh processing, alignment and comparison.
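The comparison itself was performed in CloudCompare; an equivalent cloud-to-cloud check can also be scripted, e.g. as below, where ref_pts is assumed to be a dense sampling of the CAD surface after the two models have been aligned:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(scan_pts, ref_pts):
    """Nearest-neighbour distances from the 3D scan to the reference cloud;
    scan_pts, ref_pts: (N, 3) arrays of point coordinates."""
    dist, _ = cKDTree(ref_pts).query(scan_pts, k=1)
    return dist.mean(), dist.std()   # mean and spread of the surface error
```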
Figure 5: Stereo pair of the flow and 3D reconstruction of the flow in the working part of the laboratory setup: (a) stereo pair of the flow; (b) a time series of 3D reconstructions of the flow.
Figure 4(b) shows a high level of correspondence between the surface 3D reconstruction in the multimedia case and the CAD model, the mean error between the surfaces being about 0.03 mm.
The second stage of experimental evaluation was aimed at 3D registration, 3D reconstruction and 3D visualization of the flow jets in the laboratory hydrodynamic tunnel. Figure 5 shows a stereo pair of images from the left and right cameras and the results of 3D reconstruction of the flow jets.
A combined 3D scanning and jet detection technique was applied for 3D reconstruction and 3D visualization of the flow jets. 3D scanning allows reconstructing the working part of the laboratory hydrodynamic tunnel with the SLA model of a wing installed there. 3D reconstruction of the flow jets was carried out with a jet detection algorithm. For robust jet detection in the image, previously acquired images of the working part of the hydrodynamic tunnel were used to separate the background from the jet images. The detected jets were then reconstructed by the photogrammetric technique.
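A minimal sketch of such background subtraction is given below (the detector actually implemented in the “Mosca” software may differ; the threshold and kernel size are illustrative):

```python
import cv2
import numpy as np

def detect_jets(frame, background, thresh=30):
    """Separate coloured jets from the static working-part background.
    frame, background: BGR images of the working part with and without jets."""
    diff = cv2.absdiff(frame, background)                  # change w.r.t. the empty tunnel
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # suppress isolated noise pixels
    return mask                                            # binary mask of jet pixels
```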
Figure 5(b) demonstrates several frames from a sequence of 3D registrations of the flow jets. The developed technique allows visualizing and analyzing the 3D evolution of the flow jets in time.
The technique for accurate 3D visualization of the flow motion in
a hydrodynamic tunnel has been developed. The basis for an accurate 3D
reconstruction of the shape of the stream jets is the calibration of the
photogrammetric motion capture system using a developed image formation model
that takes into account refraction at the interface of optical media.
Experimental evaluation of the developed technique using a laboratory hydrodynamic tunnel shows high accuracy of 3D measurements for spatio-temporal visualization of the flow. The developed technique provides accurate 3D visualization of the flow jets in time, thus allowing analysis of the 3D evolution of the flow. The experiments proved the applicability of the developed techniques of optical system calibration and flow motion 3D visualization for use in aircraft icing studies.
The reported study was supported by Russian Foundation for Basic
Research (RFBR) according to the research project 19-29-13040.
[1] Eker, R., Elvanoglu, N., Ucar, Z., Bilici, E., Aydın, A.: 3d modelling of a historic windmill: PPK-aided terrestrial photogrammetry vs smartphone app. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022, 787–792 (2022),
https://www.intarchphotogrammremotesensspatialinfsci.net/XLIIIB22022/787/2022/
[2] Girelli, V.A., Tini, M.A., D’Apuzzo, M.G., Bitelli, G.: 3d digitisation in cultural heritage knowledge and preservation: The case of the neptune statue in bologna and its archetype. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020, 1403–1408 (2020),
https://www.intarchphotogrammremotesensspatialinfsci.
[3] Knyaz, V.A., Kniaz, V.V.,
Remondino, F., Zheltov, S.Y., Gruen, A.: 3d reconstruction of a complex grid
structure combining uas images and deep learning. Remote Sensing 12(19), 3128
(Sep 2020),
http://dx.doi.org/10.3390/rs12193128
[4] Andreev, S.V., Bondarev,
A.E., Bondarenko, A.V., Vizilter, Y.V., Galaktionov, V.A., Gudkov, A.V.,
Zheltov, S.Y., Zhukov, V.T., Ilovayskaya, E.B., Knyaz, V.A., Manukovsky, K.V.,
Novikova, N.D., Ososkov, M.V., Silaev, N.Z., Feodoritova, O.B., Bondareva,
N.A.: Modelling and visualisation of blade assembly with complicated shape for
power turbine.
Scientific
Visualization 7(4) (2015)
[5] Lin, J., Foucaut, J.M.,
Laval, J.P., Pérenne, N., Stanislas, M.: Assessment of Different SPIV
Processing Methods for an Application to Near-Wall Turbulence, pp. 191–221.
Springer Berlin Heidelberg, Berlin, Heidelberg (2008),
https://doi.org/10.1007/978354073528110
[6] Kniaz, V.V.: Fast
instantaneous center of rotation estimation algorithm for a skid-steered robot.
In: Remondino, F., Shortis, M.R. (eds.) Videometrics, Range Imaging, and
Applications XIII. vol. 9528, pp. 194 – 204. International Society for Optics
and Photonics, SPIE (2015),
https://doi.org/10.1117/12.2184834
[7] Chatzitofis, A., Albanis, G., Zioulis, N.,
Thermos, S.: A low-cost & real-time motion capture system. In: Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp.
21453–21458 (June 2022)
[8] Kniaz, V.V.: Robust vision-based pose estimation algorithm for
an uav with known gravity vector. The International Archives of the
Photogrammetry, Remote Sensing and Spatial Information Sciences XLIB5, 63–68 (2016),
https://www.intarchphotogrammremotesensspatialinfsci.net/XLIB5/63/2016/
[9] Des mouvements de l’air lorsqu’il rencontre des surfaces de différentes formes. Comptes rendus hebdomadaires des séances de l’Académie des sciences 131, 160–163 (1900)
[10] Chanetz, B., Délery, J., Gilliéron, P., Gnemmi,
P., Gowree, E.R., Perrier, P.: Flow Visualisation Techniques, pp. 165–182.
Springer International Publishing, Cham (2020),
https://doi.org/10.1007/97830303556237
[11] Fermigier, M.: The use of images in fluid mechanics. Comptes
Rendus Mécanique 345(9), 595–604 (2017),
https://www.sciencedirect.com/science/article/pii/S1631072117300918,
a century of fluid mechanics: 1870–1970
[12] Lasinger, K., Vogel, C.,
Schindler, K.: Volumetric flow estimation for incompressible fluids using the
stationary stokes equations. In: 2017 IEEE International Conference on Computer
Vision (ICCV). pp. 2584–2592 (Oct 2017)
[13] Schanz, D., Gesemann, S., Schröder, A.: Shake-The-Box: Lagrangian particle tracking at high particle image densities. Experiments in Fluids 57(5), 70 (2016),
https://doi.org/10.1007/s0034801621571
[14] Lasinger, K., Vogel, C.,
Pock, T., Schindler, K.: 3d fluid flow estimation with integrated particle
reconstruction. International Journal of Computer Vision 128(4), 1012–1027 (2020),
https://doi.org/10.1007/s11263019012616
[15] Rubbert, A., Schröder, W.: Iterative particle matching for three-dimensional particle tracking velocimetry. Experiments in Fluids 61(2), 58 (2020),
https://doi.org/10.1007/s0034802028912
[16] Barbu, I., Herzet, C., Mémin, E.: Joint Estimation of Volume and Velocity in TomoPIV. In: 10th International Symposium on Particle Image Velocimetry – PIV13. p. 45. Delft, Netherlands (Jul 2013),
https://hal.archivesouvertes.fr/hal00880712
[17] Huang, Q., Wang, H., Koltun,
V.: Single-view reconstruction via joint analysis of image and shape
collections. ACM Trans. Graph. 34(4) (Jul 2015),
https://doi.org/10.1145/2766890
[18] Roth, S., Richter, S.R.: Matryoshka networks: Predicting 3d
geometry via nested shape layers. In: 2018 IEEE/CVF Conference on Computer
Vision and Pattern Recognition. pp.
1936–1944
(June 2018)
[19] Kniaz, V.V., Remondino, F.,
Knyaz, V.A.: Generative adversarial networks for single photo 3d
reconstruction. ISPRS International Archives of the Photogrammetry, Remote
Sensing and Spatial Information Sciences XLII2/W9, 403–408 (2019),
https://www.intarchphotogrammremotesensspatialinfsci.net/XLII2W9/403/2019/
[20] Knyaz, V.: Machine learning
for scene 3d reconstruction using a single image. Proc. SPIE 11353, Optics,
Photonics and Digital Technologies for Imaging Applications VI 11353, 1135321
(2020),
https://doi.org/10.1117/12.2556122
[21] Remondino, F., Fraser, C.:
Digital camera calibration methods: Considerations and comparisons. ISPRS International
Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
XXXVI5, 266–272 (September 2006)
[22] Vo, M.N., Wang, Z., Luu, L., Ma, J.: Advanced geometric
camera calibration for machine vision. Optical Engineering 50(11), 1 – 4
(2011),
https://doi.org/10.1117/1.3647521
[23] Raffel, M., Willert, C.E.,
Scarano, F., Kähler, C.J., Wereley, S.T., Kompenhans, J.: Stereoscopic
PIV, pp. 285–307. Springer International Publishing, Cham (2018),
https://doi.org/10.1007/9783319688527_8
[24] Teich, M., Grottke, J.,
Radner, H., Büttner, L., Czarske, J.W.: Adaptive particle image velocimetry
based on sharpness metrics. J. Eur. Opt. Soc.-Rapid Publ. 14(5) (2018),
https://jeos.springeropen.com/articles/10.1186/s4147601800730
[25] Sedlazeck, A., Koch, R.:
Perspective and non-perspective camera models in underwater imaging – overview
and error analysis. In: Dellaert, F., Frahm, J.M., Pollefeys, M.,
Leal-Taixé, L., Rosenhahn, B. (eds.) Outdoor and Large-Scale Real-World
Scene Analysis. pp.
212–242.
Springer Berlin Heidelberg, Berlin, Heidelberg (2012)
[26] Murase, T., Tanaka, M.,
Tani, T., Miyashita, Y., Ohkawa, N., Ishiguro, S., Suzuki, Y., Kayanne, H.,
Yamano, H.: A photogrammetric correction procedure for light refraction effects
at a two-medium boundary. Photogrammetric Engineering and Remote Sensing 74(9),
1129–1136 (2008),
http://www.documentation.ird.fr/hor/{PAR}00002751
[27] González-Vera, A.S., Wilting, T.J.S., Holten, A.P.C., van Heijst, G.J.F., Duran-Matute, M.: High-resolution single-camera photogrammetry: incorporation of refraction at a
fluid interface. Exp Fluids 61(3) (2020),
https://doi.org/10.1007/s003480192826y
[28] Knyaz, V., Zheltov, S.:
Accuracy evaluation of structure from motion surface 3D reconstruction. In:
Remondino, F., Shortis, M.R. (eds.) Videometrics, Range Imaging, and
Applications XIV. vol. 10332, pp. 200 – 209. International Society for Optics
and Photonics, SPIE (2017),
https://doi.org/10.1117/12.2272021
[29] Skarlatos, D., Agrafiotis,
P.: A novel iterative water refraction correction algorithm for use in
structure from motion photogrammetric pipeline. Journal of Marine Science and
Engineering 6(3) (2018),
https://www.mdpi.com/20771312/6/3/77
[30] Menna, F., Nocerino, E.,
Fassi, F., Remondino, F.: Geometric and optic characterization of a
hemispherical dome port for underwater photogrammetry. Sensors 16(1) (2016),
https://www.mdpi.com/14248220/16/1/48
[31] Chadebecq, F., Vasconcelos, F., Lacher, R.,
Maneas, E., Desjardins, A., Ourselin, S., Vercauteren, T., Stoyanov, D.:
Refractive two-view reconstruction for underwater 3d vision. International
Journal of Computer Vision (2019),
https://doi.org/10.1007/s11263019012189
[32] Knyaz,
V.A., Stepaniants, D.G., Tsareva, O.: Optical system calibration for 3d
measurements in hydrodynamic tunnel. Computer Optics 45(1), 58–65 (2021),
http://computeroptics.ru
[33] Knyaz,
V.A.: Scalable photogrammetric motion capture system “mosca”: Development and
application. ISPRS International Archives of the Photogrammetry, Remote Sensing
and Spatial Information Sciences XL5/W6, 43–49 (May 2015),
https://www.intarchphotogrammremotesensspatialinfsci.net/XL5W6/43/2015/
[34] Brown,
D.: Decentering distortion of lenses. Photogrammetric Engineering 32(3),
444–462 (1966)
[35] Beyer,
H.: Advances in characterization and calibration of digital imaging systems.
Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XXIX, 545–555 (1992)
[36]
Knyaz, V.A.: Automated calibration technique for photogrammetric system based
on a multimedia projector and a ccd camera. ISPRS International Archives of the
Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVI5, 1–5
(2006),
https://www.isprs.org/proceedings/XXXVI/part5/
[37] Kniaz, V.V.,
Grodzitskiy, L., Knyaz, V.A.: Deep learning for coded target detection. The International
Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
XLIV2/W12021, 125–130 (2021),
https://www.intarchphotogrammremotesensspatialinfsci.net/XLIV2W12021/125/2021/
[38] Knyaz,
V.A., Ippolitov, E.V., Novikov, M.M.: Accuracy assessment of optical 3D measurements
in hydrodynamic tunnel. In: Lehmann, P., Osten, W., Jr., A.A.G. (eds.)
Optical Measurement Systems for Industrial Inspection XII. vol. 11782, pp. 326
– 336. International Society for Optics and Photonics, SPIE (2021),
https://doi.org/10.1117/12.2592622