Selective laser melting (SLM) is a layer-by-layer additive manufacturing technology in which a laser beam with various intensity distributions heats and consolidates a fine metal powder. Today SLM is the most rapidly developing and widely studied additive manufacturing technology [1-4]. During its industrial implementation we faced the problems of visual control, analysis and data visualization. A large number of scientific works are devoted to various aspects of developing technological solutions based on computer vision methods and algorithms, for example [5-14]. The problems considered there can often be solved with explicit linear algorithms; typical examples are finding a barcode or QR code, determining the position of a product, adjusting a processing path, analyzing the number and quality of holes in perforated material, determining the geometry of a working sheet, weld quality control, and determining the geometry of flat products or of products made by selective laser melting. These tasks are solved in the developed software environment with an emphasis on accessibility and ease of use: the user can collect data from various image sources, define a processing sequence from various image processing filters (Fig. 1), and perform analysis with the prepared software tools. For this purpose the LAMachineVision software platform was created.
Fig. 1. Sequence of image processing.
The software platform uses a visual programming approach, which has proven effective in giving a wide range of users the ability to solve machine vision problems; the same principle underlies development environments and platforms such as LabVIEW [11] and Cognex VisionPro. LAMachineVision is written in C# on the .NET Framework 4.7.2. The main user interface is built with WPF, which relies on DirectX graphics technology and the declarative XAML language; the user interfaces of the individual tools (various filters and processors) are implemented with WPF and Windows Forms using GDI/GDI+ graphics. Because the platform is built on C# and the .NET Framework, its main target operating system is Windows 7 and higher. The core of the platform follows the Model-View-ViewModel (MVVM) pattern.
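As an illustration of the MVVM pattern used by the platform (the class and property names below are hypothetical and are not taken from the LAMachineVision source), a minimal view model in C# can look as follows:

using System.ComponentModel;
using System.Runtime.CompilerServices;

// Hypothetical view model illustrating the MVVM pattern; names are illustrative only.
public class FilterViewModel : INotifyPropertyChanged
{
    private string _name = "Untitled filter";

    // The view binds to this property in XAML; changing it notifies the view.
    public string Name
    {
        get => _name;
        set { _name = value; OnPropertyChanged(); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged([CallerMemberName] string propertyName = null) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}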
Using the link establishment function, the user forms a connected directed graph along which the system subsequently transfers data from sources to filters and further along the user-specified route. The only constraint on this construction is the absence of cycles in the solution graph.
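The paper does not specify how the acyclicity constraint is checked; a straightforward way to verify it (a sketch that assumes nodes are identified by integer indices and edges are stored in an adjacency list) is a depth-first search for back edges:

using System.Collections.Generic;

// A sketch of cycle detection for the solution graph via depth-first search.
// The node identifiers and adjacency representation are assumptions; the paper
// only states that the graph must be acyclic.
public static class SolutionGraph
{
    public static bool HasCycle(IReadOnlyDictionary<int, List<int>> adjacency)
    {
        var state = new Dictionary<int, int>(); // 0 = unvisited, 1 = in progress, 2 = done

        bool Visit(int node)
        {
            state.TryGetValue(node, out int s);
            if (s == 1) return true;   // back edge found -> cycle
            if (s == 2) return false;  // already fully explored
            state[node] = 1;
            if (adjacency.TryGetValue(node, out var next))
                foreach (int n in next)
                    if (Visit(n)) return true;
            state[node] = 2;
            return false;
        }

        foreach (int node in adjacency.Keys)
            if (Visit(node)) return true;
        return false;
    }
}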
The high level of abstraction removes a wide range of restrictions for developers: they can implement their own filters and processors and embed them into the platform through its API (Application Programming Interface, the set of classes, procedures, functions, structures and constants through which one piece of software interacts with another). This, in turn, lets third-party users, including technologists and researchers without programming skills, solve a wide range of machine vision tasks. For a new filter to be embedded in the solution graph, it is enough that its outer software shell implements the simple IProcessing interface with a small number of properties and methods:
• string Name { get; set; }
• int Num { get; set; }
• ObservableCollection<IInputStep> InputSteps { get; set; }
• ObservableCollection<IOutputStep> OutputSteps { get; set; }
• ObservableCollection<Info> InfoInputSteps { get; set; }
• ObservableCollection<Info> InfoOutputSteps { get; set; }
• void DoProcess();
• void UpdateInfo();
• Page UiPage { get; set; }
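Collected into a single C# declaration, the interface looks as follows; the containing namespace and the definitions of IInputStep, IOutputStep and Info are not given in the paper, so placeholder declarations are added here only to make the listing self-contained:

using System.Collections.ObjectModel;
using System.Windows.Controls; // Page (WPF)

// Placeholder declarations for platform types referenced by the interface;
// their real definitions are not published in the paper.
public interface IInputStep { }
public interface IOutputStep { }
public class Info { }

// The IProcessing interface with the members listed above.
public interface IProcessing
{
    string Name { get; set; }
    int Num { get; set; }
    ObservableCollection<IInputStep> InputSteps { get; set; }
    ObservableCollection<IOutputStep> OutputSteps { get; set; }
    ObservableCollection<Info> InfoInputSteps { get; set; }
    ObservableCollection<Info> InfoOutputSteps { get; set; }
    void DoProcess();
    void UpdateInfo();
    Page UiPage { get; set; }
}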
Information is transferred through input/output "ports", also with a high level of abstraction: no additional requirements are imposed on either the user interface or the specific implementation. Thus, each filter and processor is an object that implements the IProcessing interface, receives the data it works on from its input ports, performs some action, and passes the result further along the solution graph through its output ports.
This approach makes it possible to use both custom implementations of machine vision algorithms and third-party libraries and frameworks; in particular, AForge.NET and OpenCV were used. To connect various data sources, interaction has been implemented with Basler machine vision cameras through the Pylon SDK, with Hikvision cameras through the Hikvision SDK, and with IP cameras through the DirectShow framework.
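As an illustration of such a data source, the sketch below grabs one frame through the publicly documented pylon .NET API (Basler.Pylon); how the frame is wrapped into a platform source object is not described in the paper and is omitted here:

using System;
using Basler.Pylon;

// A sketch of grabbing a single frame from a Basler camera with the pylon .NET API.
// Conversion of the raw buffer into a platform image object is omitted.
public static class BaslerSource
{
    public static byte[] GrabOneFrame(out int width, out int height)
    {
        using (var camera = new Camera()) // first camera found on the system
        {
            camera.Open();
            camera.StreamGrabber.Start();
            using (IGrabResult result = camera.StreamGrabber.RetrieveResult(5000, TimeoutHandling.ThrowException))
            {
                if (!result.GrabSucceeded)
                    throw new InvalidOperationException("Grab failed: " + result.ErrorDescription);
                width = result.Width;
                height = result.Height;
                byte[] pixels = (byte[])result.PixelData;
                var copy = new byte[pixels.Length];
                pixels.CopyTo(copy, 0);   // copy out, since the grab buffer may be reused
                camera.StreamGrabber.Stop();
                return copy;
            }
        }
    }
}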
One of the capabilities implemented by the platform is interaction with various camera models. In particular, Basler matrix (area-scan) cameras (Fig. 2) were tested; such cameras are mainly used in factory automation, traffic monitoring and retail, as well as in medicine and the life sciences.
Fig. 2. Matrix camera.
The data from the camera are then used for analysis or for video recording (Fig. 3) of the processes under study. Machine vision cameras make it possible to monitor processes in environments inaccessible to an observer, in particular in DMD (direct metal deposition) technology for manufacturing metal parts.
Fig. 3. Video recording of the DMD process of welding a turbine blade.
One of the most common machine vision tasks is object recognition; a special case of it in industry is finding a sheet in the working field of a machine. One of the possible solutions of this problem has been implemented using the capabilities of the platform under consideration, with a Basler matrix camera as the data source. Sheets can be located in different parts of the working field, have different shapes and consist of different materials, so the task is formulated as a search for abstract objects in the image, for which various image comparison filters are used. The idea of the approach is simple: capture an image of the empty workspace, then capture a new image with the sheet already in place and recognize its location by comparing the two images. In addition, the implemented algorithm for correcting the radial distortion caused by wide-angle lenses is demonstrated (Fig. 4).
Fig. 4. Correction of distortion.
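The paper does not state which distortion model is used; a common way to implement such a correction with OpenCV (shown here through the OpenCvSharp wrapper, with placeholder values in place of real calibration results) is:

using OpenCvSharp;

// A sketch of radial distortion correction with OpenCV (OpenCvSharp wrapper).
// The camera matrix and distortion coefficients are placeholders; in practice
// they come from a prior calibration (e.g. Cv2.CalibrateCamera on a chessboard pattern).
public static class DistortionCorrection
{
    public static Mat Undistort(Mat distorted)
    {
        using (var cameraMatrix = new Mat(3, 3, MatType.CV_64FC1,
            new double[] { 1200, 0, 640, 0, 1200, 512, 0, 0, 1 })) // fx, cx, fy, cy placeholders
        using (var distCoeffs = new Mat(1, 5, MatType.CV_64FC1,
            new double[] { -0.25, 0.08, 0, 0, 0 }))                // k1, k2, p1, p2, k3 placeholders
        {
            var corrected = new Mat();
            Cv2.Undistort(distorted, corrected, cameraMatrix, distCoeffs);
            return corrected;
        }
    }
}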
Several filter options have been implemented that determine the difference between two images, making it possible to compare the field with a sheet and the field without one and to find the desired object in the image. For color images there are several similar processors based on different metrics of the color distance between pixels, as well as several methods for finding corners in an image based on different corner detection algorithms.
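The platform's own comparison filters are not published; purely as an example, a comparison of the empty field with the field containing a sheet, followed by corner detection with the FAST detector of [14], can be sketched with OpenCV (OpenCvSharp wrapper) as follows, where the threshold values are illustrative:

using OpenCvSharp;

// A sketch of comparing the empty working field with the field containing a sheet
// and detecting corner candidates on the difference image.
public static class SheetSearch
{
    public static KeyPoint[] FindSheetCorners(Mat emptyField, Mat fieldWithSheet)
    {
        using (var diff = new Mat())
        using (var gray = new Mat())
        using (var binary = new Mat())
        {
            Cv2.Absdiff(emptyField, fieldWithSheet, diff);                // per-pixel difference
            Cv2.CvtColor(diff, gray, ColorConversionCodes.BGR2GRAY);      // single-channel difference
            Cv2.Threshold(gray, binary, 30, 255, ThresholdTypes.Binary);  // binarize the changed region

            // Corner candidates on the binarized difference (FAST detector, ref. [14]).
            return Cv2.FAST(binary, 20, true);
        }
    }
}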
An algorithm for determining the contours of an object from the previously found corners has been developed and implemented (Fig. 5, 6).
Fig. 5. Boundary search algorithm (inner boundary).
Fig. 6. Boundary search algorithm (outer boundary).
The idea of the algorithm is simple: using the filters mentioned earlier, the image is binarized (most often by comparison with the background image) and the set of corners is determined; the upper-left unused corner is then taken from this set. If the pixel above it is black, the contour is traversed counterclockwise; if it is white, clockwise. In this way the contours are divided into outer and inner ones.
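A minimal sketch of this direction rule (it assumes a binarized image in which true marks a white object pixel; the boundary-following routine itself and the bookkeeping of used corners are not reproduced here):

// Traversal direction chosen from the pixel above the starting corner, as described above.
public enum Traversal { Clockwise, CounterClockwise }

public static class ContourTracing
{
    public static Traversal ChooseTraversal(bool[,] binary, int cornerX, int cornerY)
    {
        bool topIsWhite = cornerY > 0 && binary[cornerY - 1, cornerX];
        return topIsWhite ? Traversal.Clockwise          // white above -> clockwise bypass
                          : Traversal.CounterClockwise;  // black above -> counterclockwise bypass
    }
}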
Fig. 7. Found sheets.
The quality of the result depends on the image quality, the lighting and how accurately the processing parameters are chosen. The same approach is applied to finding the geometry of the fused layer in the SLM process: the background image is a photo of the previous layer, and the new image is a photo of the next fused layer. The problem of finding and analyzing the quality of holes during perforation is solved in a similar way.
Data on the fused layer of powder material are processed in the same way when products are manufactured by selective laser melting. After the geometry of each layer has been determined, it is compared with the pre-calculated geometry and a so-called "layer map" is formed (Fig. 8), onto which the data of the temperature model are subsequently superimposed.
Fig. 8. Composition of temperature data on the product layer (one of the representation models).
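The color scale of the layer map is not given in the paper; a minimal sketch of superimposing temperature values onto layer pixels with a linear two-color ramp (the temperature range and the blue-to-red ramp are illustrative choices, not the platform's representation model) could be:

using System;
using System.Drawing;

// A sketch of coloring the layer map by temperature with a linear color ramp.
public static class LayerMapColoring
{
    public static Color TemperatureToColor(double t, double tMin, double tMax)
    {
        double u = Math.Max(0, Math.Min(1, (t - tMin) / (tMax - tMin))); // normalize to [0, 1]
        int r = (int)(255 * u);        // hotter -> more red
        int b = (int)(255 * (1 - u));  // colder -> more blue
        return Color.FromArgb(r, 0, b);
    }
}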
Tracking systems are widely used in laser processing of products made of various materials. When pulsed laser radiation in the picosecond and femtosecond range is used for micromachining, the parameters of the technological process are often highly sensitive to deviations of the focal plane of the laser-optical system from the product surface. In laser micromachining, micron-scale deviations are critical, so accurate and fast methods are required to compensate, in real time, for the displacement of the product surface relative to the optical system during processing. The machine vision algorithms and methods included in the developed software automation platform make it possible to maintain the required parameters of the micromachining process.
For the processing of dielectrics and metals, a system for automatic control of the laser beam focus position (Fig. 9) is proposed, based on the idea of a laser optical rangefinder (triangulator), which has the following advantages:
1. non-contact measurement, which is safe for the workpiece;
2. the ability to work with dielectrics;
3. no mechanical moving joints;
4. the possibility of using the system in cleanrooms;
5. high performance, i.e. high measurement and processing speed, and high accuracy;
6. the possibility of obtaining a wide range of characteristics by changing the source, the detector and the geometry.
In general terms, automatic control systems for the laser beam focus position of special technological equipment based on a laser optical rangefinder can be classified:
1. by their basic characteristics;
2. by the location of the laser rangefinder relative to the optical system of the working laser;
3. by the possibility of synchronous operation of the measurement system, the computations of the automatic focus position control system, and the control system of the kinematic axes of the CNC machine tool;
4. by the design of the rangefinder;
5. by the type of adjustment along the Z axis;
6. by the capabilities of built-in analysis of the measured data. A number of devices can pre-filter the data, ensuring the stability and repeatability of the results.
Fig. 9. Common block scheme of the automatic control system for the position of the laser beam focus.
With respect to the location of the laser rangefinder relative to the optical system of the working laser, the proposed automatic focus position control system (Fig. 9) is coaxial, which keeps the tracking system operable for any laser processing trajectory in the XY plane. With respect to the type of adjustment along the Z axis, two versions are possible: moving the sample itself (Z1) or moving the objective (Z2). Focus adjustment by moving the objective is recommended, because the objective is lighter than the product together with its holding and alignment tooling, which allows more dynamic focus adjustment. The system is synchronous, i.e. focus adjustment is performed during laser processing. By type of execution the system is distributed, i.e. it is ready for integration into an opto-mechanical laser processing system.
The automatic control
system for the position of the laser beam focus includes:
1. "Source (LD)"
- a laser diode (Laser diode LD), the source of laser radiation of the tracking
system. The LD emission wavelength for most dielectrics is in the range of
650~900nm. This makes it possible to use optical coatings of optical components
between the first and second harmonics of the operating range of laser
radiation, as well as the possibility of integrating a coaxial video system
with illumination. There is a possibility of amplitude modulation of radiation
by external control.
2. "Detector" -
a matrix or line camera with a digital interface.
3. "OD1",
"OD2", "OD3" - optical dividers, optical elements, the
dielectric coatings of which are selected depending on the radiation wavelength
of the "Source (LD)" and the working laser for micromachining
"Laser". The presence of "OD3" is due to the possibility of
using a coaxial video channel for automation and application of machine vision
algorithms.
4. "Tracking
controller" - the main controller of the automatic focus position control
system, which reads and processes data from the detector in real time,
mathematical processing of the received data and calculation of the focus
position, control of the Z coordinate, by sending commands to the CNC system of
the laser machine. This controller works autonomously, a communication line of
this controller with the CNC system is also provided for transferring
parameters and monitoring states. The controller of the automatic focus
position control system is implemented on the basis of a field-programmable
gate array FPGA.
5. "SHC" - a
software and hardware complex of a laser system, including a software module
for automatic tracking of the sample profile in the CNC system of the laser
installation, as well as a user interface for controlling, configuring and
testing the automatic focus position control system. The software and hardware
complex can work autonomously without the participation of the CNC.
6. Z axis (Z1/Z2) - can be implemented with any type of drive that provides the required accuracy and dynamics: linear drives, piezo actuators, etc. The Z axis is controlled by the interaction of the controller with the appropriate driver.
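As a minimal sketch only (the actual FPGA implementation is not described in the paper, and any pre-filtering of the sensor data is omitted), the sub-pixel position of the reflected spot on the line detector can be estimated by an intensity centroid:

// A sketch of estimating the spot position on the line detector by intensity centroid.
// The sensor data type and the absence of background subtraction are assumptions.
public static class SpotLocator
{
    public static double PeakPosition(ushort[] line)
    {
        double sum = 0, weighted = 0;
        for (int i = 0; i < line.Length; i++)
        {
            sum += line[i];
            weighted += (double)i * line[i];
        }
        return sum > 0 ? weighted / sum : -1; // sub-pixel position in pixels, or -1 if no signal
    }
}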
The collimated laser beam of the automatic focus position control system travels from the LD source through the power lens to the product surface; part of this radiation is reflected from the surface, passes back through the lens, and the reflected beam is recorded on the linear detector. The position of the peak on the linear sensor corresponds to the current distance between the lens and the workpiece surface. One of the peak positions is stored in the tracking controller as the focus position. While tracking the sample surface, the controller moves the lens in the direction that drives the current peak position toward the stored focus position. The variable length of the return path to the camera allows the scale of the automatic focus position control system to be adjusted, i.e. the accuracy and measurement range for a fixed camera size. Adjusting the collimation of the source beam allows the size of the laser spot on the product surface to be set.
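A minimal sketch of one such tracking iteration is given below; the proportional gain, the pixel-to-millimeter conversion factor and the IZAxis abstraction are illustrative assumptions, since the paper only states that the lens is moved so that the current peak tends toward the stored focus position:

using System;

// A sketch of one tracking iteration: move the Z axis so that the current peak
// on the line sensor tends toward the stored focus position.
public interface IZAxis
{
    void MoveRelative(double millimeters);
}

public static class FocusTracker
{
    public static void TrackStep(IZAxis zAxis, double currentPeakPx, double storedFocusPx,
                                 double mmPerPixel, double gain = 0.5, double maxStepMm = 0.05)
    {
        double errorPx = storedFocusPx - currentPeakPx;     // deviation on the line sensor
        double dz = gain * mmPerPixel * errorPx;            // proportional Z correction
        dz = Math.Max(-maxStepMm, Math.Min(maxStepMm, dz)); // limit the step for safety
        zAxis.MoveRelative(dz);
    }
}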
Thus, the possible configurations of surface tracking systems have been analyzed, a configuration has been selected, and its composition, structure and principle of operation have been described.
As a result of the work carried out, it became possible to:
1. Solve a number of machine vision and scientific visualization tasks within the industrial implementation of new laser technologies, in particular the technology of selective laser melting.
2. Create a specialized machine vision software platform that simplifies the solution of a wide range of machine vision and scientific visualization tasks.
3. Provide video recording of processes in isolated environments, determination of object boundaries in images, analysis and processing of visual data, and the formation and presentation of a picture of heat distribution in a three-dimensional object based on data obtained from a numerical experiment in accordance with the mathematical model of the object.
4. Combine the calculated data on the product geometry and the data obtained by analyzing video from visual observation tools with data on the thermal distribution of the parameters of the selective laser melting process.
5. Create a system for automatic control of the laser beam focus position, designed to be built into laser micromachining systems and laser systems for additive manufacturing, which can significantly improve the quality, repeatability and productivity of laser micromachining processes using pulsed laser radiation of the picosecond and femtosecond range.
The created machine vision software platform has been
tested and implemented in software and technological solutions used in high-tech
equipment.
The authors are
grateful to the management of the group of companies "Lasers and Apparatus
TM" for assistance in the material and technical support of experimental
studies and modeling of the process under consideration.
[1] Vedyashkina A.V. et al. Optical-electronic complex for studying heat and mass transfer processes by a non-contact laser method // Scientific Visualization. 2019. Vol. 11, No. 3. P. 43-53.
[2] Rometsch P., Jia Q., Yang K., Wu X. Aluminum alloys for selective laser melting – towards improved performance // Additive Manufacturing for the Aerospace Industry. 2019. P. 301-325.
[3] Mazur M. et al. Mechanical properties of Ti6Al4V and AlSi12Mg lattice structures manufactured by Selective Laser Melting (SLM) // Laser Additive Manufacturing: Materials, Design, Technologies, and Applications. 2016. P. 119-161.
[4] Mirkoohi E., Seivers D., Garmestani H., Liang S. Heat Source Modeling in Selective Laser Melting // Materials. 2019. No. 12. 2052.
[5] Sokolov S.M., Boguslavsky A.A., Fedorov N.G., Vinogradov P.V. A machine vision system for informational support of automatic landing and runway motion of aircraft // Izvestiya SFedU. Engineering Sciences. 2015. No. 1. P. 96-109.
[6] Kaushik S., Jain A., Chaudhary T., Chauhan N.R. Machine vision based automated inspection approach for clutch friction disc (CFD) // Materials Today: Proceedings. 2022. Vol. 62, Part 1. P. 151-157.
[7] Stepanov D.N. Methods and algorithms for determining the position and orientation of an unmanned aerial vehicle using onboard video cameras // Software & Systems (Programmnye produkty i sistemy). 2014. No. 1. P. 150-157.
[8] Krasnobaev A.A. A review of algorithms for detecting simple image elements and an analysis of the possibility of their hardware implementation [Online] // Keldysh Institute of Applied Mathematics, Russian Academy of Sciences. 2005. URL: http://www.keldysh.ru/papers/2005/prep114/prep2005_114.html (accessed 17.08.2022).
[9] Alpatov B.A., Babayan P.V., Balashov O.E., Stepashkin A.I. Methods of automatic detection and tracking of objects. Image processing and control. Moscow: Radiotekhnika, 2008. 176 p.
[10] Rakhmatulin I. Neural networks, deep learning and machine vision in agriculture. A brief review for 2021 [Online] // Preprint. 2021. URL: https://www.researchgate.net/publication/350280155_Nejroseti_glubokoe_obucenie_masinnoe_zrenie_v_selskom_hozajstve_Kratkij_obzor_dla_2021_goda (accessed 17.08.2022).
[11] Ivanov P.V., Boikov A.V. Advantages of using the LabVIEW software package for creating machine vision systems // Journal of Mining Institute (Zapiski Gornogo instituta). 2011. Vol. 192. P. 216-218.
[12] Bolotova Yu.A., Druki A.A., Spitsyn V.G. Methods and algorithms for intelligent processing of digital images: textbook. Tomsk: Tomsk Polytechnic University, 2016. 208 p.
[13] Smith S.M., Brady J.M. SUSAN - A New Approach to Low Level Image Processing // International Journal of Computer Vision. 1997. Vol. 23. P. 45-78.
[14] Rosten E., Drummond T. Machine Learning for High-Speed Corner Detection // In: Leonardis A., Bischof H., Pinz A. (eds) Computer Vision – ECCV 2006. Lecture Notes in Computer Science, Vol. 3951. 2006. P. 430-443.
[15] Molotkov A.A., Tretyakova O.N. On possible approaches to visualization of the selective laser melting process // Scientific Visualization. 2019. Vol. 11, No. 4. P. 1-12.
[16] Lebedkin I.F., Molotkov A.A., Tretyakova O.N. Mathematical modeling of complex heat transfer in the development of laser SLM technologies [Online] // Trudy MAI. 2018. No. 101. URL: https://trudymai.ru/upload/iblock/118/Lebyedkin_Molotkov_Tretyakova_rus.pdf?lang=ru&issue=101
[17] Molotkov A.A., Tretyakova O.N. Visualization of the results of modeling the selective laser melting process // GraphiCon 2019 (Bryansk, September 23-26, 2019): Proceedings of the international conference / Bryansk State Technical University. Bryansk, 2019. P. 78-81.
[18] Kondratenko V.S., Saprykin D.L., Tretyakova O.N., Tuzhilin D.N. Development of an automatic focus adjustment control system for the technology of laser micromachining of materials // Pribory. 2022. No. 4. P. 26-31.
[19] Tretiyakova O.N., Molotkov A.A. About the development of applied software for mechatronic systems of SLM technology // MATEC Web of Conferences. 2022. Vol. 362 (XXII International Conference on Computational Mechanics and Modern Applied Software Systems, CMMASS 2021). Article 01031. 8 p. DOI: https://doi.org/10.1051/matecconf/202236201031. Published online 14 September 2022.
[20] Molotkov A.A., Tretyakova O.N. Visualization and analysis of visual data in the additive technology of production of optoelectronic devices // Proceedings of the international scientific and technical conference "GraphiCon 2022", September 19-22, 2022. Ryazan.