Accepted papers
Visualization of Results of Bibliometric Analysis of Scilit Platform Data on AI & Machine Learning for 2021-2023
B.N. Chigarev
Accepted: 2024-12-16
Abstract
The aim of this study was to demonstrate the ability to visualize the results of bibliometric analysis of Scilit platform data on the topic "AI & Machine Learning" in order to identify publications reflecting specific issues of the topic. Data source. Bibliometric records exported from the Scilit platform on the topic "AI & Machine Learning" for the years 2021–2023 were used. For each year, 6,000 records were downloaded in CSV and RIS formats. Programs and utilities used. VOSviewer, Scimago Graphica, Inkscape, the FP-growth utility, the GSDMM algorithm. Services used: Elicit, QuillBot, Litmaps. Results. It has been shown that bibliometric data from the open-access abstract database Scilit can serve as a high-quality alternative to subscription-only databases. Data exported from the Scilit platform require preprocessing to bring them into a format that can be processed by programs such as VOSviewer and Scimago Graphica. The GSDMM and FP-growth algorithms are effective for structuring bibliometric data for subsequent visualization. The Scimago Graphica software provides wide possibilities for building compound diagrams, in particular for representing the keyword network in coordinates important for bibliometric analysis, such as the average year of publication and the average normalized citation, as well as for building an alluvial diagram of co-occurrence of more than two keywords. The possibility of using services such as elicit.com, quillbot.com and app.litmaps.com to accelerate the selection of publications on the topic under study is shown.
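As an illustration of how FP-growth can structure exported keyword data before visualization, the following minimal Python sketch mines frequent keyword co-occurrences with the mlxtend library; the keyword lists and the support threshold are illustrative assumptions, not the data or utility actually used in the study.

import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Each bibliographic record contributes one "transaction" of keywords (toy data)
records = [
    ["machine learning", "deep learning", "medical imaging"],
    ["machine learning", "natural language processing"],
    ["deep learning", "medical imaging", "classification"],
]
encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit(records).transform(records),
                      columns=encoder.columns_)
# Frequent keyword sets can then be fed to VOSviewer / Scimago Graphica as co-occurrence data
itemsets = fpgrowth(onehot, min_support=0.5, use_colnames=True)
print(itemsets.sort_values("support", ascending=False))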
Stochastic Semantics of Big Data (Parallel Computing and Visualization)
D. V. Manakov, P. A. Vasev
Accepted: 2024-12-02
Abstract
First of all, the paper considers the problem of verification, or formalization, of an online visualization and parallel computing system from the point of view of dynamic systems, as a development of the theory of computational complexity for random processes. Considering problems involving truly big data inevitably leads to a block approach, which is also used both in information theory and in stochastic differential equations. Graph signals were chosen as a natural metaphor: a graph in whose nodes a spectral function is defined; in the examples considered, this is a function of color (RGB), height, or amount of data. In parallel computing, a block can be associated with a computing unit (processor), and the problem of entropy (performance) maximization can be considered. In the developed online visualization and concurrent computing system for geometric parallelization, it is possible to implement and compare a stationary random process (equiprobable messages implemented using broadcasting and mixins) and a steady-state random process (point-to-point messages), which have different analytical solutions. Together, this allows concluding that the proposed implementation of a stationary process has a certain novelty; in addition, it was intended to be more convenient for automated parallelization. The problems of automatic load balancing (an interpolation problem) and optimal scalability of parallel computing (an extrapolation problem) are also considered. Not much has been done in the field of visualization verification; for example, mesh visualization has been proposed to be considered as a parameterized model of a white-noise random process. Of course, this work cannot be considered complete, but the direction that the authors call stochastic semantics is clearly promising.
The authors intend to take a close look at established perturbed processes in the field of visualization, including those that take the human factor into account (sketches of the formalization are given in the form of a discussion).
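For the entropy (performance) maximization reading mentioned in the abstract above, a minimal formulation in our own notation (not necessarily the authors') is:

\[
H(p_1,\dots,p_n) = -\sum_{i=1}^{n} p_i \log p_i, \qquad p_i = \frac{d_i}{\sum_{j=1}^{n} d_j},
\]

where \(d_i\) is the amount of data assigned to computing unit \(i\); \(H\) attains its maximum \(\log n\) only when all \(p_i = 1/n\), i.e. when the blocks are distributed evenly across the processors.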
Two-channel high-temperature combustion imaging system based on high-speed cameras EVERCAM F 1000-16-C
F.A. Gubarev, L.Yu. Davydova, M.S. Tsiron
Accepted: 2024-11-24
Abstract
The paper presents the results of using Evercam F 1000-16-C high-speed cameras for high-speed visualization of laser initiation and high-temperature combustion of an Al-CuO thermite mixture. The possibility of determining process parameters from the results of high-speed imaging is demonstrated. Two visualization modes are considered: synchronous operation of two cameras to obtain images from two angles, and synchronous operation of two cameras as part of a laser monitor with a copper bromide vapor brightness amplifier. In the case of direct video recording, one of the cameras acts as the master, and the recording frequency is set in the service program. It is proposed to use the two-angle video recording mode to study the spread of flame in a volume. For the first time, Evercam F 1000-16-C cameras were used as part of a laser monitor with a copper bromide vapor brightness amplifier. Laser monitoring, combined with direct video recording, makes it possible to study the surface of a sample in the area of interaction with the igniting laser and the flame propagation in one of the planes. A feature of operating Evercam cameras as part of a laser monitor is the need to generate trains of clock pulses synchronized with the radiation pulses of the brightness amplifier and the radiation pulse of the igniting laser. In this case, both cameras work in slave mode. The synchronization unit is designed around the STM32F103C8T6 microcontroller board and has galvanically isolated input and output signals.
Development of a programmable 16-frame electron-optical camera NANOGATE-22/16 and its application for measuring the space-time characteristics of fast-flowing processes in ballistics and explosion physics
S.I. Gerasimov, M.I. Krutik, V.S. Rozhentsov, D.Yu. Smirnov
Accepted: 2024-11-24
Abstract
The paper presents the main technical characteristics and application results of the programmable electron-optical camera NANOGATE-22/16, developed at NPP NANOSCAN LLC, Moscow. Frames from characteristic experiments in the field of explosion physics are presented. The electron-optical camera is an 8-channel system consisting of one input lens, a mirror-lens unit for dividing the image into eight channels (an additional lens, an octagonal mirror prism, eight mirrors) and the electron-optical channels themselves (K-1 to K-8). The data obtained by recording images of a fast-flowing process are transmitted through eight fiber-optic communication lines to a transceiver that converts the signals at the eight optical inputs into a signal at a single USB-3 output connected to the corresponding computer input. All 16 registered images are visualized on the computer monitor. The dust- and moisture-proof housing of the electron-optical camera allows it to be used in test-range conditions.
The Isotropy Property in Local Computer Geometry
A.V. Tolok, N.B. Tolok
Accepted: 2024-11-18
Abstract
The article considers the isotropy property in local computer geometry. The basic principles of applying this property to the computer representation of data about the domain of a function are demonstrated using a function of two arguments as an example. The scope of application of the isotropy property in areas such as algebraic transformations and data packing and encoding is considered. An example of the use of isotropy in algebraic transformations is given for the product of two functions. An example is given of shaping the domain of local functions of a paraboloid surface describing a circle, based on the domain of local functions describing the surface for a square. The possibility of a computer representation of the domain of local functions by a single graphical M-image is examined.
Additive Manufacturing of Personalized Brain-Computer Interface Headsets Reinforced by Scientific Visualization
D.A. Chiruhin, K.V. Ryabinin
Accepted: 2024-11-17
Abstract
Recently, considerable attention has been attracted to brain-computer human-machine interfaces (BCI) based on electroencephalography (EEG). This emerging technology allows touchless control over digital systems, in which the commands are derived from human brain activity. In the ideal case, this means controlling systems virtually by thought, but in reality simpler approaches are also in high demand, such as reacting to concentration, relaxation, or specific emotions. Modern BCIs are based on detecting so-called brain waves, the electromagnetic field oscillations induced by brain neurons. These waves are captured by electrodes either implanted in the brain or placed on top of the head. Obviously, placing electrodes on the head is preferable for non-medical applications of BCI because it is completely harmless to the person. To achieve this, special headsets are needed that can be put on the head like a helmet and ensure the correct positions of the electrodes mounted on them. In this regard, wearing comfort and anatomical accuracy of headsets play an important role in ensuring both the ergonomics and the precision of BCI. This paper focuses on automating the manufacturing of personalized EEG headsets for BCI. A technological chain is proposed and corresponding software tools are developed to support the complete cycle of BCI headset production for a particular person. The production steps include 3D scanning of the head, interactive editing of the electrode location system, and automatic generation of a collapsible head cap model with sockets for EEG electrodes optimized for 3D printing. The performance of the pipeline has been validated in practice. The accuracy of electrode placement has been evaluated by comparison with the head cap of professional medical equipment and found sufficient for BCI. The headset model editing and customization tools are powered by scientific visualization and cognitive graphics techniques so as to be friendly to a wide range of users, including those with no dedicated IT skills.
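One step of such a pipeline, fitting electrode positions to an individual 3D head scan, can be illustrated by the Python sketch below, which snaps nominal electrode coordinates onto a scanned scalp mesh with the trimesh library; the file name, coordinates, and the use of trimesh are illustrative assumptions, not the tools developed by the authors.

import numpy as np
import trimesh

scalp = trimesh.load("head_scan.ply")            # assumed 3D scan of the person's head
# Illustrative nominal electrode coordinates in the scan's frame (mm), e.g. rough 10-20 positions
nominal = np.array([[0.0, 85.0, 40.0],
                    [35.0, 60.0, 55.0],
                    [-35.0, 60.0, 55.0]])
query = trimesh.proximity.ProximityQuery(scalp)
snapped, distance, _ = query.on_surface(nominal)  # project each nominal point onto the scalp surface
for point, offset in zip(snapped, distance):
    print(f"socket centre {point}, offset from nominal {offset:.1f} mm")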
Automation of Visual Inspection of Photomasks for Microelectronic Products
T.S. Khodataeva
Accepted: 2024-11-08
Abstract
The choice of this research topic was prompted by a request to develop and implement automatic optical inspection at a semiconductor plant. The purpose of the work was to develop requirements, carry out the technical design, and create a system for optical control of geometry deviations from the photomask drawing and of the conductive pattern obtained with a manual stencil printer, using relatively inexpensive equipment. As a result of the research, a pilot design sample of an automated optical control system based on relatively inexpensive equipment was created, and a computational algorithm was developed that increases the performance of the visual image recognition system and makes it possible to significantly reduce the percentage of defects. The work describes the implemented algorithm for obtaining images with the optical system and for extracting images from the drawing file. Regardless of how the image is obtained (optical system or scanning electron microscope), the choice of a criterion for comparing the obtained image with the reference one remains of interest. The paper studies quantitative empirical metrics, Mean Square Error and Peak Signal-to-Noise Ratio, for different noise-reduction methods: Block-Matching and 3D Filtering (BM3D) and the classical spatial filtering method of Gaussian blur. The sensitivity of the structural similarity index metric to structural distortions of images after noise reduction is checked, taking into account the minimum sizes of the structural elements of metal-ceramic housings. Based on the Rosner test applied to the obtained values of the structural similarity metric, images containing defects are identified. The user interface provides for displaying the image area with a defect on the operator's screen.
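The comparison metrics mentioned above can be reproduced with scikit-image; the sketch below computes MSE, PSNR and the SSIM map for an inspected image against a reference rendered from the drawing file. The array names are assumptions, and the outlier screening of SSIM values (the Rosner test in the paper) is only indicated in a comment.

from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def compare(reference, inspected):
    """Compare an inspected image with the reference rendered from the drawing file."""
    data_range = reference.max() - reference.min()
    mse = mean_squared_error(reference, inspected)
    psnr = peak_signal_noise_ratio(reference, inspected, data_range=data_range)
    ssim, ssim_map = structural_similarity(reference, inspected,
                                           data_range=data_range, full=True)
    return mse, psnr, ssim, ssim_map

# SSIM values gathered over many fields of view would then be screened for outliers
# (the paper applies the Rosner / generalized ESD test); a defect candidate shows up as a low SSIM.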
Visualization of Points of a Multidimensional Information Text Array on an Elastic Map for Assessing the Cluster Structure of Data
A.E. Bondarev
Accepted: 2024-11-08
Abstract
The article presents the results of computational experiments on displaying the points of an original multidimensional information array on the elastic map scan in order to assess the relative positions of semantic proximity areas and thereby improve the processing of text information. Elastic maps are considered as a tool supporting analytical work with text information. As previous works show, in order to obtain distances corresponding to the cluster picture of the studied multidimensional volume, it is necessary to use distances measured on the elastic map itself, which reflects the cluster portrait of the studied multidimensional data volume. The paper presents the cluster structures of the points of the studied multidimensional volume obtained in this way on the elastic map scan in the plane of the first two principal components. An analysis of the relative positions of clusters of different configurations at different points in time is presented.
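The display plane referred to above, spanned by the first two principal components, can be obtained for example as in the following Python sketch; the toy corpus and the use of scikit-learn are assumptions for illustration, and the fitting of the elastic map itself (the elastic net of nodes stretched over the data) is not reproduced here.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA

docs = ["first text fragment of the information array",
        "second text fragment close to the first one",
        "a fragment on an unrelated subject"]           # stand-in corpus
X = TfidfVectorizer().fit_transform(docs).toarray()     # one multidimensional point per fragment
coords = PCA(n_components=2).fit_transform(X)           # PC1-PC2 plane used for the map scan
print(coords)  # points of the array in the plane onto which the elastic map scan is displayed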
Research into the Application of Additive Technologies in the Development of Tools for Microdeformation of Sheet Blanks Made from Non-Ferrous Metals and Alloys
M.A. Petrov, D.H. Tran
Accepted: 2024-11-08
Abstract
The paper presents the results of manufacturing stamping tools for microforming operations. The tools and their main parts were produced by different additive manufacturing technologies and from different materials (polymers and metals). A non-contact 3D scanning system was used to obtain metrological information on the accuracy of individual parts and of the assembly/subassembly. It was found that, in order to match the dimensions of the prototypes to the drawing dimensions and stay within the tolerance field, the initial 3D model must be designed taking into account the specifics of the 3D printing technology, the mechanical post-processing, and the performance characteristics of the tool, which can be obtained from the results of wear tests.
The Impact of Input Data Density on the Performance of Generative Neural Networks
N.A. Bondareva
Accepted: 2024-11-08
Abstract
The paper provides a brief overview of generative neural networks and considers the role of information in training generative neural networks. In the digital environment, each object is surrounded by a vast information field, including unordered information and a set of references to it. The density of the object's information field determines the ability of technologies such as artificial intelligence to recreate its image based on the collected data. The more data is available, the more accurately and completely the digital image can be recreated. The paper considers a number of problems arising from the use of text-to-image networks and possible methods for solving them. The article considers various aspects of the role of personal data and possible ethical and social consequences in the era of generative technologies, as well as the prospects and risks of further development of generative neural networks in specialized areas such as medicine and manufacturing. The rapid development of neural network technologies can have a significant impact on education and social phenomena.
Analysis of Open Well Datasets
D.O. Makienko, I.V. Safonov
Accepted: 2024-10-19
Abstract
Recently, the number of studies devoted to the use of machine learning methods in geophysics has been increasing significantly. Examples of such investigations include the prediction of rock properties and the separation of rock types according to quantitative characteristics. Annotated datasets are required to build machine-learning-based models and to evaluate their quality. This paper analyzes open labeled well datasets and related research. We consider data containing well logs, rock images, laboratory results, and labeled zonation by lithotype. Methods for visualizing well data are presented. We provide recommendations for oil and gas companies on the preferable format for making well data publicly available.
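A typical way to visualize the well-log part of such datasets is sketched below with the lasio and matplotlib libraries; the file name and curve mnemonics are assumptions for illustration.

import lasio
import matplotlib.pyplot as plt

las = lasio.read("well.las")                   # hypothetical LAS file from an open dataset
df = las.df()                                  # DataFrame of curves indexed by depth
fig, axes = plt.subplots(1, 2, figsize=(6, 8), sharey=True)
for ax, curve in zip(axes, ["GR", "RHOB"]):    # gamma ray and bulk density, if present
    ax.plot(df[curve], df.index)
    ax.set_xlabel(curve)
axes[0].set_ylabel("Depth")
axes[0].invert_yaxis()                         # depth increases downwards
plt.tight_layout()
plt.show()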
Application of PyTorch3D and NeRF computer vision tools for building a point cloud of a three-dimensional model and determining the camera position of still images in space
V.V. Konkov, A.B. Zamchalov
Accepted: 2024-10-03
Abstract
Recently, computer graphics has come to play a key role in solving computer vision problems. The problem of converting 2D images into 3D models remains relevant, as it requires precise determination of the camera position and the construction of accurate 3D models of objects. Traditional methods are often limited in application and do not offer a comprehensive solution. This study examines the use of the PyTorch3D and NeRF libraries to determine the camera position in 3D space and to create a 3D model of an object from a single 2D image. For data preparation, a hardware and software system was used that includes a stepper motor control device providing manual and sequential positioning of the camera and its return to the initial position, a shooting control system that generates a comprehensive set of photos at each camera position, and a mechanism for sending data to a remote computer for further processing. The PyTorch3D library was selected in the course of the study to explore the possibilities of converting 2D images into 3D models and of determining the position of an object in the photos. The processing included several steps: building a point cloud to generate a 3D volumetric model of the object, determining the camera position in 3D space from a single 2D image using inverse problem algorithms, and constructing a 3D object using differentiable rendering, 3D voxels, and 3D meshes. The results of this study showed successful determination of the camera position in 3D space and construction of a 3D object model from a single 2D image, demonstrating the advantages of the PyTorch3D library over other existing tools. These findings can be applied in the development of software and hardware systems for creating 3D images from 2D photographs. The study confirmed the relevance and effectiveness of using the PyTorch3D library to solve the problems of converting 2D images into 3D models. Further work will be aimed at expanding the functionality of the system and applying it in various areas of computer vision.
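The camera-position step can be illustrated with PyTorch3D's differentiable silhouette rendering, in the spirit of the library's camera-optimization example; the mesh file, reference silhouette, initial pose and loop length below are assumptions, not the authors' exact pipeline.

import math
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (FoVPerspectiveCameras, look_at_view_transform,
                                RasterizationSettings, MeshRasterizer,
                                MeshRenderer, SoftSilhouetteShader)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
mesh = load_objs_as_meshes(["object.obj"], device=device)       # assumed object model
cameras = FoVPerspectiveCameras(device=device)
raster_settings = RasterizationSettings(
    image_size=256,
    blur_radius=math.log(1.0 / 1e-4 - 1.0) * 1e-4,              # soft edges so gradients exist
    faces_per_pixel=50)
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=SoftSilhouetteShader())

# ref_silhouette: binary mask extracted from the single 2D photo (assumed to be given)
ref_silhouette = torch.load("ref_silhouette.pt").to(device)

# Optimize spherical camera coordinates so the rendered silhouette matches the photo
dist = torch.tensor([3.0], device=device, requires_grad=True)
elev = torch.tensor([10.0], device=device, requires_grad=True)
azim = torch.tensor([30.0], device=device, requires_grad=True)
opt = torch.optim.Adam([dist, elev, azim], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    R, T = look_at_view_transform(dist, elev, azim, device=device)
    image = renderer(mesh, R=R, T=T)                             # alpha channel holds the silhouette
    loss = ((image[..., 3] - ref_silhouette) ** 2).mean()
    loss.backward()
    opt.step()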
Visualization of methods of machine learning. GUI programming
Accepted: 2024-09-24
Abstract
The technologies of artificial intelligence and machine learning have made a fundamental leap in their capabilities in the last five years. The growth of processing power and the emergence of ever more effective machine learning methods allow AI not only to solve the tasks most typical of the field, such as statistical analysis and optimization of mathematical processes, but also to find new applications in related fields of research, as well as practical applications, including those on the free market available to the mass consumer. Image, audio, and animation generation, and self-learning models for controlling robotic platforms and virtual mechanical models: these and many more novel applications of recent years have led to a media boom around AI and a growing interest from developers and authors in various fields and industries.
That being said, the methods for developing, researching, testing, and integrating AI have largely remained unchanged and still require knowledge of programming languages and machine learning libraries, as well as deep understanding of and experience in the narrow field of AI. This barrier of specialization not only demands the inclusion of machine learning specialists in the development of otherwise trivial computer applications typical for the field of AI, but also prevents small teams and independent developers from using the latest advances in these technologies without significant monetary and time investments in studying the subject.
We offer a novel solution to this issue in the form of a prototype graphical interface that allows a user without technical education and without knowledge of programming languages to develop and tune various architectures of neural networks and other machine learning methods, including methods of unsupervised machine learning, and to test these methods on a wide range of experimental tasks, from mathematical equations to controlling virtual mechanical models in a simulated physical environment. In this article, we give a brief description of the structure and organization of this GUI, its fundamental principles of operation, and its capabilities.
Use of Hadamard matrices in single-pixel imaging
Denis V. Sych
Accepted: 2024-08-13
Abstract
Single-pixel imaging is a method of computational imaging that makes it possible to obtain images of objects using a photodetector that has no spatial resolution. In this method, the object is illuminated by light with a special spatio-temporal structure (light patterns), and a single-pixel photodetector measures the total amount of light reflected from the object. The possibility of obtaining an image and the image quality are closely related to the properties of the applied patterns and computational algorithms. In this paper, we consider patterns obtained from modified Hadamard matrices and study the features of image calculation in single-pixel imaging. We show that both the sampling time and the computational resources required to obtain images can be reduced by modifying the pattern system. The proposed theoretical method can be used in the practical implementation of single-pixel imaging in an experiment.
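A minimal numerical illustration of Hadamard-pattern single-pixel imaging (independent of the modifications proposed in the paper) is sketched below: rows of a Hadamard matrix serve as ±1 patterns, the single-pixel detector yields one inner product per pattern, and orthogonality gives the reconstruction.

import numpy as np
from scipy.linalg import hadamard

n = 64                                   # e.g. an 8x8 scene flattened into a vector
H = hadamard(n)                          # entries are +1/-1 and H @ H.T = n * I
scene = np.random.rand(n)                # stand-in for the object's reflectivity
measurements = H @ scene                 # one single-pixel reading per pattern
# In practice each +/-1 pattern is realized as the difference of two binary illuminations
reconstruction = (H.T @ measurements) / n
assert np.allclose(reconstruction, scene)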
Numerical visualization of vortex wakes behind large particles
A.A. Mochalov, A.Yu. Varaksin
Accepted: 2024-08-11
Abstract
An attempt has been made to visualize the flow formed in the wake of large particles moving in an ascending turbulent air flow in a channel. Numerical modeling was performed using a simplified version of the approach known in the English-language literature as "two-way coupling" (TWC), which takes into account the reverse effect of the particles on the gas characteristics. The particle motion was calculated in an approximate manner, so the method used is called "quasi-two-way coupling", TWC(Q). The results of numerical modeling of the characteristics of turbulent wakes behind large moving particles based on the Reynolds-averaged Navier-Stokes (RANS) equations are presented.
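For context, the point-particle momentum equation typically underlying such two-way-coupled RANS simulations has the generic form below (our notation; the exact drag closure used by the authors may differ):

\[
\frac{d\mathbf{v}_p}{dt} = \frac{f(\mathrm{Re}_p)}{\tau_p}\left(\mathbf{u} - \mathbf{v}_p\right) + \mathbf{g},
\qquad
\tau_p = \frac{\rho_p d_p^2}{18\mu},
\]

where \(\mathbf{u}\) is the local gas velocity, \(\tau_p\) the particle relaxation time, and \(f(\mathrm{Re}_p)\) a drag correction for finite particle Reynolds number; the reverse effect on the gas enters the RANS equations as an additional momentum source term.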
Deep Learning for Effective Visualization and Classification of Recyclable Material Labels
V.O. Kuzevanov, D.V. Tikhomirova
Accepted: 2024-08-06
Abstract
This paper presents an example of a system for improving the sorting of recyclables by using deep learning techniques to automatically detect, classify and visualize recycling codes on product packages. The authors discuss various approaches to optical character recognition and to object detection in a video stream or an image. The authors have developed and proposed a combination of neural networks for the detection and classification of recycling codes. The proposed neural network system is intended to support efficient recycling by automating the identification of recycling symbols, thereby streamlining the sorting and processing of recyclables.
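One possible detection-plus-classification arrangement of the kind described is sketched below with an off-the-shelf detector and classifier; the model names, weight files and class count are purely illustrative assumptions and do not reproduce the authors' networks.

import torch
from PIL import Image
from torchvision import models, transforms
from ultralytics import YOLO

detector = YOLO("recycling_code_detector.pt")           # assumed fine-tuned detection weights
classifier = models.resnet18(num_classes=8)             # e.g. resin codes 1-7 plus "other"
classifier.load_state_dict(torch.load("recycling_code_classifier.pt"))  # assumed weights
classifier.eval()
to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

image = Image.open("package.jpg").convert("RGB")
for box in detector(image)[0].boxes:                    # detected recycling-code regions
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    crop = image.crop((x1, y1, x2, y2))
    with torch.no_grad():
        label = classifier(to_tensor(crop).unsqueeze(0)).argmax(dim=1).item()
    print(f"code region at ({x1:.0f}, {y1:.0f}): class {label}")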
Visualization Metaphors in the Tasks of Exploratory Analysis of Heterogeneous Data
R.A. Isaev, A.G. Podvesovskii, A.A. Zakharova
Accepted: 2024-08-03
Abstract
The subject of the study is the construction and application of visual models using the concept of visualization metaphors in the context of exploratory analysis of heterogeneous data. This study considers improved variants of previously proposed visualization metaphors that can serve as a basis for building visual models. A technology for exploratory analysis of heterogeneous data based on the joint use of different visualization metaphors is proposed. The process of visual data exploration at the exploratory analysis stage using the proposed technology is shown to be iterative and multi-scenario, contingent upon the analysis goals. The software tool developed to implement the proposed technology is described, along with its additional functionality for calculating and exporting quantitative characteristics of the visual model. The software tool is then demonstrated in the context of exploratory analysis of a synthetic data set. Future directions for the proposed approach to the construction of visual models, the technology of exploratory data analysis, and the software tool supporting it are outlined.
Visualizing the Impact of Machine Learning on Cardiovascular Disease Prediction: A Comprehensive Analysis of Research Trends
Jeena Joseph, K Kartheeban
Accepted: 2024-08-01
Abstract
Cardiovascular diseases (CVDs) continue to have a negative impact on global health, which highlights the need for accurate and efficient prediction methods. Machine learning (ML) techniques have recently shown potential as tools for forecasting CVD. This paper presents a comprehensive analysis of research trends in the field, focusing on visualizing the impact of ML on cardiovascular disease prediction. We used data visualization techniques to identify patterns and trends in an extensive database of scholarly publications on this subject published in Scopus between 1991 and 2023. The analysis reveals substantial growth in research output, demonstrating the growing demand for ML-based CVD prediction. It identifies essential stakeholders and potential collaborators while highlighting the institutions and authors who have contributed most to this domain. The study also identifies high-impact journals that have published significant research in this domain, helping researchers select appropriate outlets for dissemination. The study helps researchers identify the most critical areas for further research and fosters cooperation among subject-matter experts by offering insightful information about the development of machine-learning-based cardiovascular disease prediction. The data are analyzed using the tools VOSviewer and Biblioshiny.
Exploring the Research Landscape of Business Applications of Robotic Process Automation Through Bibliometric Analysis
Shamini James, S. Karthik, Binu Thomas
Accepted: 2024-08-01
Abstract
The field of business process optimization and automation has seen the emergence of robotic process automation (RPA) as a disruptive technology. This research aims to give a systematic bibliometric analysis of the research ecosystem of robotic process automation in business in order to identify trends, patterns, and developments in this quickly developing area. Bibliometric methodologies such as co-authorship analysis, keyword analysis, citation patterns, and publishing trends are applied in this work. Research papers from the Scopus scientific database are incorporated into the analysis through the identification of key authors, organizations, and countries that have made a substantial contribution to the growth of the RPA literature. The study also explores the temporal evolution of RPA research, highlighting the development of research areas over time and identifying pockets of active research as well as prospective paradigm shifts. By examining citation networks, the research reveals key publications that have significantly influenced the course of RPA research.
The results of this bibliometric analysis enable scholars, practitioners, and policymakers to develop a more detailed grasp of the RPA research landscape in business. This study provides a roadmap for future research directions in robotic process automation by identifying research gaps and emerging trends in business management.
Mapping the Knowledge Base on Visual Reality Technology and the Manufacturing Industry
Geofrey Rwezimula, Zhang Guoxing, Wakara Ibrahimu Nyabakora
Accepted: 2024-08-01
Abstract
Virtual reality applications provide users with more than just realistic sight; users may also sense touch, hear, and even interact with virtual objects. With these significant advancements, virtual reality has recently seen growth surges in a number of sectors, including the manufacturing industry. It has been successful in drawing attention from both academia and industry, and it is important to understand how researchers are approaching the application of this technology. Therefore, the goal of this research is to examine the body of research on the connection between virtual reality and the manufacturing industry. The bibliometric study was carried out using the Scopus database, and the sampling procedure followed PRISMA. VOSviewer was used to analyze 2,037 publications. This revealed the expansion of the network, the most active contributing stakeholders, the backdrop of the intellectual framework, the research gaps, and the most popular topics. Papers pertaining to the influence of virtual reality on the manufacturing industry, collected from the Scopus database starting in 1992, were included. The terms "augmented reality," "virtual reality," "process simulation," "industrial internet of things," "industry 4.0 technologies," and "3D technologies" have been widely used since 1992. Contemporary themes represented on the density map include "artificial intelligence" and "human-robot interaction." The significance of the findings for researchers lies in their relevance to the past, present, and future, along with the identification of knowledge gaps.
Spectral evaluation of the vital state of Quercus robur L. under simulated drought conditions
P.A. Zybinskaya, A.V. Tretyakova, P.A. Krylov
Accepted: 2024-07-24
Abstract
Non-destructive spectral methods of analysis are increasingly being used to study the content of plant metabolites and to evaluate morpho-physiological and biochemical indicators as well as the vital state. Visualization of the vital state through spectral profiles can provide a more detailed picture of plant adaptation to stress. To model experimental drought, 5–6-month-old Quercus robur L. seedlings were divided into three groups: a control group and experimental groups with and without watering (drought), with 15 seedlings in each group. Spectral evaluation of leaf blades was performed using a portable spectroradiometer SpectraPen SP110 Uvis and a plant analyzer Dualex Scientific+ at 0 (control), 168 (one week), and 336 (two weeks) hours. As a result of the spectral analysis, spectrograms of radiation absorption by Q. robur leaf blades were obtained, as well as the total content of chlorophylls, flavonols and anthocyanins under watering and drought conditions. The study revealed changes in the absorption spectrograms of Q. robur leaves related to the content of metabolites. The difference in absorption peaks between the groups became more pronounced over time under the influence of drought. The pigment content in the leaf blades varied during the experiment, which indicates plant adaptation to stress. The preliminary results of the study can be used to expand knowledge about ways to evaluate the vital state of woody plants in the field.
Calculation of a parallax panoramogram in autostereoscopic systems with inconsistent monitor and lens raster parameters
N.V. Kondratiev, Yu.N. Ovechkis, A.I. Vinokur, D.A. Arsentiev
Accepted: 2024-07-11
Abstract
A significant disadvantage of the multi-view autostereoscopic method is a drop in image resolution as the number of points of view increases. An effective means of increasing the resolution is the use of an inclined lens raster together with vertical encoding of the colors of the points of view. Algorithmically simple coding is obtained at optimal tilt angles, the tangent of which is 1 divided by a multiple of 3 (1/3, 1/6, etc.). This requirement imposes significant restrictions on the coordination of the geometric parameters of the equipment, namely the display panel and the lens raster. The approaches to spatial color coding proposed in this article, and the algorithms implementing them, make it possible to significantly expand the possibilities of creating autostereoscopic displays. The experimental work carried out convincingly confirms the theoretical conclusions. The main practical result is the developed software that allows fine-tuning of the raster tilt angle and calculation of a multi-view parallax panoramogram for a specific set of equipment.
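The subpixel-to-view assignment at the heart of such coding can be written compactly; the function below follows the widely used slanted-raster interleaving rule (van Berkel-style) and is given as a simplified illustration rather than the algorithm developed by the authors.

import math

def view_index(k, l, tan_alpha, lens_pitch_sub, n_views):
    """Assign a view number to sub-pixel column k in pixel row l.
    k: horizontal sub-pixel index (3 sub-pixels per pixel), l: row index,
    tan_alpha: raster slope in pixel units, lens_pitch_sub: lens pitch in sub-pixels."""
    phase = (k - 3.0 * l * tan_alpha) % lens_pitch_sub
    return math.floor(phase * n_views / lens_pitch_sub)

# Example: tangent 1/6, seven views, lens pitch of 4.5 sub-pixels
print([view_index(k, 0, 1.0 / 6.0, 4.5, 7) for k in range(9)])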
On the visualization of subattractor under mixed tidal forcing
Stepan Elistratov, Ivan But
Accepted: 2024-05-18
Abstract
One of the principal conditions for the appearance of a wave attractor is periodic external forcing. The real forcing in natural basins, caused by tidal interaction, is more complex than the monochromatic forcing usually used in investigations of internal wave attractors. Multi-frequency forcing may lead to the formation of multiple wave attractors, some of which may be of low energy, which complicates their detection. In this article we simulate mixed forcing for an internal wave attractor flow and visualize the subattractor formed by this type of forcing using several methods, including Proper Orthogonal Decomposition. It is shown that the latter method reveals the subattractor even in the case of a highly turbulent flow.
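Proper Orthogonal Decomposition of the kind used here can be computed from a snapshot matrix via the SVD, as in the following sketch; the snapshot file and the choice of which mode to inspect are assumptions.

import numpy as np

snapshots = np.load("snapshots.npy")                             # assumed shape (n_grid_points, n_times)
snapshots = snapshots - snapshots.mean(axis=1, keepdims=True)    # remove the mean flow
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)                                     # relative energy of each POD mode
print(energy[:10])
# A low-energy subattractor typically shows up in one of the trailing modes:
# U[:, k] is the spatial structure of mode k and Vt[k] its time coefficient.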
Visualization of turbulent wakes behind large particles
A.A. Mochalov, A.Yu. Varaksin
Accepted: 2024-05-17
Abstract
An attempt was made to visualize the flow formed in the wake of large particles moving in a downward turbulent airflow in a channel. The paper also considers the possibilities of reconstructing the velocity fields behind a large particle from visual data. A diagram of the experimental setup is given (geometry of the working area, auxiliary and main equipment). The PIV (Particle Image Velocimetry) system is briefly described. A technique for visualizing a multiphase "gas-solid particles" flow is proposed. Original images of large particles (spheres) are shown. The results of the experimental determination of the characteristics of the wake vortex behind the rear critical point of a large particle are presented.