This paper is devoted to new methods of organizing interactive museum exhibits by means of additive technologies, programmable microelectronics and multimedia. Traditionally, a museum exhibition is distanced from the visitor and appears as a mere visual image, but the new trend is to let the visitor inside the exhibition by providing different forms of interaction. Modern museums constantly search for new ways of presenting cultural and natural heritage, taking into account that exhibitions should at the same time be scientifically accurate, attractive, memorable and, ideally, accessible to a wide audience including disabled people. In this regard, tangible user interfaces based on the Internet of Things and combined with scientific visualization make up a very promising set of technologies that brings cyber-physical museum exhibits to life. These exhibits are an alloy of physical museum items and virtual multimedia content, providing the visitor with a unique experience of interactivity. The key feature of cyber-physical exhibits is their tangibility, whereby they become accessible to visually impaired people and deliver much more information to regular visitors.
There are many successful attempts to build cyber-physical exhibits within the museum space. However, there is a lack of a methodological basis for this, as well as a lack of high-level tools to seamlessly integrate the related technologies into the existing museum infrastructure. In this paper we propose using the ontology-driven adaptive multiplatform scientific visualization system SciVi as a software basis for cyber-physical museum exhibits. This system was previously used successfully for steering hardware and software solvers in different application domains, monitoring lightweight robotics systems and supporting custom hardware human-machine interfaces. Consequently, it contains the mechanisms necessary to adapt to third-party digital infrastructure (including that of museums) and to build all the required middleware and visualizers within it.
We tested our approach by developing two cyber-physical exhibits: a tangible bonobo skull in the State Darwin Museum (Moscow) and tangible titanophone skulls in the Museum of Permian Antiquities (Perm). The bonobo exhibit is a custom joystick in the form of the corresponding skull that steers a Sketchfab-rendered 3D model of a bonobo head. The titanophone exhibit allows visitors to discover the age variability of the titanophone synapsid in an interactive way.
Keywords: museum, scientific visualization, Internet of Things, tangible interface, cyber-physical system, additive manufacturing, ontology engineering.
The idea of tangible user interfaces (TUI) was first introduced by Hiroshi Ishii in 1997 [1]. Nowadays, with the rise of Internet of Things (IoT) technologies [2], TUI are experiencing a rebirth on the basis of modern programmable microelectronics. The alloy of TUI and IoT opens a gate for so-called cyber-physical systems (CPS) [3]: systems where the virtual and real worlds are tightly interconnected. Real and virtual objects in these systems complement each other: manipulations of real objects affect virtual ones and vice versa. CPS are all about dissolving the border between the real world and cyberspace as much as is possible with the software and hardware means used. Ideally, this border should be imperceptible to the user. CPS enable the so-called Fourth Industrial Revolution [4], opening up new ways of human-machine interaction and, as a result, new horizons of digital technologies.
One of the many practical uses of CPS is interactive exhibits in museums. Currently, the set of IoT-based technologies related to museums is combined into the so-called Smart Museum concept [5]. Smart Museum covers methods and means to monitor visitors’ activity, implement indoor navigation and compose interactive exhibitions. In this sense, building CPS from museum items and related digital content is a logical step in Smart Museum development.
Organizing the museum space in the form of CPS can significantly increase the attractiveness and memorability of the exposition, as well as provide visitors with a wide variety of information and, as a result, expand the museum’s potential as a scientific, cultural and educational platform. However, the practical implementation of CPS within existing museums faces notable difficulties:
1. Seamless integration with the existing digital infrastructure of the museum: new digital means (like IoT-based TUI or new multimedia content) should reuse the software and hardware already installed in the museum. The digital infrastructure of the museum should be enriched by new technologies, not replaced in their favor.
2. Lack of high-level tools to deploy and control CPS, taking into account the small number of IT specialists on museum staff.
In our previous work we suggested methods and high-level means to automate the development of hardware human-machine interfaces based on ontology engineering and IoT technologies [6], and used IoT technologies together with scientific visualization to create interactive museum exhibits [7].
The aim of the current work is to synthesize the results of these previous studies and formulate on their basis a general concept for creating museum CPS. The proposed concept is applied in two practical cases by creating cyber-physical exhibits in the State Darwin Museum (Moscow, Russia) and the Museum of Permian Antiquities (Perm, Russia).
Normally, museum items may not be touched. However, museums are constantly searching for ways to dissolve the showcase glass and allow the visitor to contact the exhibits without jeopardizing the physical condition of cultural and/or natural heritage. The closer this contact is, the more information the visitor obtains.
Classical technologies easily extend visual contact with the exhibit by an audio channel, and present some digitized information in the form of a so-called “live label”: a monitor (or even touchscreen) displaying related multimedia content near the exhibit showcase. However, these traditional means lack tangibility. In this sense, CPS technologies are a way to enrich the museum exhibition with haptics and thereby enlarge the spectrum of information received by the visitor. Moreover, graspable items support museum inclusion: tactile contact allows blind and visually impaired persons to learn much more about the exhibits than just hearing the traditional voiceovers.
As stated in [8], “An ideal museum will be
one where visitors can appreciate the charm of tactile culture with hands,
fingertips and whole body, not simply learn through looking at exhibits”.
There are many examples of the successful use of tangible interfaces within museums, whose quality in both pragmatic and hedonic aspects has been proven in practice [9–15]. A wide range of technologies is utilized in building cyber-physical exhibits, from programmable microcontrollers with custom-developed circuits (as in [14, 15]) to complicated prefabricated touch and display surfaces (as in [11, 12]).
The essence of a museum CPS is the concept behind it: the idea of the cyber-physical exhibit. Modern technologies are just a toolset to embody this idea. However, analyzing the ways the most notable ideas were implemented (for example, the results obtained by the authors of [9–15]), we can point out the lack of standards and the absence of a common methodology for creating museum CPS.
Trying to fill this methodological gap, we propose a technological stack to automate, or at least simplify, the creation of cyber-physical museum exhibits. For the museum space, we suggest the following structure of CPS:
1. Physical objects: museum items with IoT-based sensors, which can detect interactions with the visitors, as well as actuators (motors, solenoids, etc.), which can provide tangible feedback. Hereafter this part is denoted as P1.
2. Virtual objects: multimedia content that enriches the museum items with visualizations, voiceovers, sounds, videos, etc. The software implementation should be based on scientific visualization techniques to ensure the required level of scientific reliability of the information provided to the visitor. Hereafter this part is denoted as P2.
3. Display: a device that shows the virtual objects. It can be a stationary monitor or projector (with a sound system on board, if required) installed in the museum, or the personal mobile device of the visitor (smartphone or tablet computer). Hereafter this part is denoted as P3.
The practical implementation of this kind of CPS faces the following problems, whose solutions are proposed as part of the formulated concept.
First of all, most museum items, due to their fragility, disrepair and/or uniqueness, cannot be equipped with any electronic or electromechanical devices. Also, it is often not possible to organize direct (for example, tactile) interaction of visitors with these items. In this case we propose building the CPS physical objects from high-quality reconstructions of museum items. The creation of such reconstructions can be automated by additive manufacturing [16] using 3D scanners and 3D printers.
Secondly, museums nowadays already have some multimedia content presented on different types of displays: “live labels” (monitors displaying supplementary materials about the exhibits), multimedia stands (for example, virtual exhibits with gesture-based interfaces powered by optical gesture detection systems like MS Kinect), interactive kiosks (displaying interactive materials or providing mini-games related to the exhibits) and so on. This content and equipment operate inside a certain digital infrastructure deployed in the museum. The museum workers responsible for the support of this infrastructure are used to the technologies involved and are aware of the content they have. So, when introducing CPS, it is crucial to reuse these technologies and this content to reduce financial costs and to lower the entry threshold for employees. In other words, the introduction of CPS should enrich the existing digital infrastructure, not replace it. This in turn requires high-level adaptive tools for organizing CPS, which enable the integration of different TUI with existing content management systems and visualization means (including scientific visualization software).
As a toolbox that meets the specified requirements, we propose using the SciVi adaptive multiplatform scientific visualization system developed in our previous studies [6]. This system’s behavior is fully governed by an ontology knowledge base, which ensures its adaptivity and scalability [6]. The possibility of using SciVi as the basis for transforming existing digital museum infrastructures into full-fledged CPS rests on the following system functions, available to users through a high-level graphical interface:
1. Automated mechanisms of integration (including bidirectional communication) with third-party data sources, including static file storages and dynamic data generators (software and hardware solvers) [17].
2. An extensible set of customizable preprocessing mechanisms for the visualized data [17].
3. An extensible set of customizable mechanisms, visual objects and graphical scenes for data rendering [17].
4. Customizable mechanisms for communication with custom hardware human-machine interfaces [6].
5. Automated generation of lightweight SciVi clones, which can run as firmware for electronic devices within an IoT ecosystem [7].
6. A middleware operation mode, in which SciVi acts as a proxy enabling various combinations of different hardware within the IoT ecosystem, different solvers generating data, and different visualizers rendering these data [6].
The above-mentioned features allow SciVi to act as software “glue” that integrates IoT devices, data generators, data storages and data visualizers into a solid ecosystem, in particular into a CPS. It is worth noting that if some of the CPS elements already exist inside the museum’s digital infrastructure, they can be taken unchanged thanks to the adaptive mechanisms of SciVi. Only the missing elements have to be created from scratch or bought as factory-made modules. This enables seamless integration of the CPS into the museum exhibition.
The proposed concept has been put into
practice by creating various cyber-physical museum exhibits. The two most
interesting cases so far are described below.
The cyber-physical exhibit “Bonobo” is a part of the Bonobo photo exhibition [18] in the State Darwin Museum, Moscow, Russia. This exhibit is based on a 3D model of the bonobo (Pan paniscus Schwarz, 1929) skull, created by 3D scanning the original skull in the Royal Museum for Central Africa, Tervuren, Belgium [19]. The rendering of this model is shown in Fig. 1. It is accessible on the Sketchfab cloud service [20], one of the largest online platforms for publishing, sharing, discovering, buying and selling 3D graphics content.
Fig. 1. Pan paniscus skull by Royal
Museum for Central Africa on Sketchfab.
Sketchfab is widely used in the State Darwin Museum as a service to display 3D models of different exhibits. For the Bonobo photo exhibition, a special visualization was composed using Sketchfab capabilities: the skull model is combined with a semi-transparent model of the living individual. The most interesting anatomical features of the skull are pinned and annotated with textual information and corresponding voiceovers. The process of model preparation is shown in Fig. 2. In terms of the museum CPS structure described in Section 2, this content corresponds to P2 (the virtual part) of the exhibit.
Fig. 2. Setting up annotated key pins and the general configuration of the 3D model on Sketchfab.
The P3 (display part) of the CPS was already present in the State Darwin Museum: a kiosk with a monitor, WiFi access and a WebGL-capable Web browser. This kiosk is ideally suited to displaying models using the standard Sketchfab player.
So, the only missing part of the CPS was P1, the physical object. It was created by 3D printing the bonobo skull and using this 3D print as a shell for an IoT device. According to the idea of this exhibit, rotation of the 3D-printed skull should be tracked and synchronized with the rotation of the 3D model on the screen, while touching the anatomical key points of the skull should trigger the corresponding annotations and voiceover playbacks. The 3D-printed bonobo skull thereby plays the role of a tangible custom “joystick” for the Sketchfab visualizer, allowing visitors to explore the real and virtual models simultaneously.
To achieve this functionality, the IoT device incorporates a GY-85 inertial measurement unit (IMU) and push buttons connected to an ESP8266 WiFi-enabled microcontroller powered by a 1200 mAh battery. A photo of the device is shown in Fig. 3.
Fig. 3. The IoT device on which the “Bonobo” cyber-physical exhibit is based.
The SciVi system was used to automate the creation of two software components: the firmware for the IoT device and the middleware that transmits commands from the IoT device to the Sketchfab service. The operating schema is shown in Fig. 4.
Fig. 4. Operating schema of the
“Bonobo” cyber-physical exhibit.
The firmware for the ESP8266 (generated by SciVi in C++) performs the following steps:
1. Connect to the WiFi network using the SSID and password stored in the data memory (these settings are made in SciVi when tuning the firmware generation).
2. Start the main loop:
a. Poll the push buttons connected via GPIO pins.
b. Obtain the acceleration and angular velocity measured by the IMU.
c. Fuse the data from step (b) into an orientation quaternion using the Mahony filter [21] (outlined after this list).
d. Transmit the quaternion and the indices of the pushed buttons as a JSON message via a WebSocket connection over WiFi to the middleware.
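In outline, the Mahony filter corrects the gyroscope rate with an error term derived from the accelerometer’s gravity estimate before integrating the quaternion kinematics. A simplified form of this explicit complementary filter (see [21] for the full derivation) is

\[
e = a \times \hat{v}(q), \qquad
\omega' = \omega + k_P\,e + k_I \int e\,dt, \qquad
\dot{q} = \tfrac{1}{2}\,q \otimes (0, \omega'),
\]

where \(a\) is the normalized accelerometer vector, \(\hat{v}(q)\) is the gravity direction predicted from the current orientation estimate \(q\), \(\omega\) is the measured angular velocity, and \(k_P\), \(k_I\) are the filter gains; \(q\) is re-normalized after each integration step.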
The middleware (generated by SciVi in JavaScript, HTML5 and CSS as a standalone Web page) accomplishes the following:
1. On loading:
a. Connect to Sketchfab via the Internet utilizing the Sketchfab Viewer API [22], load the viewer and display the 3D model of the bonobo head (skull and semi-transparent flesh).
b. Connect to the IoT device via local WiFi utilizing the standard WebSocket API of the browser. Currently, a static IP address is used for the IoT device; the museum router is responsible for assigning it. However, client-side network discovery features can easily be added in the future if required.
2. On receiving a message via WebSocket from the IoT device:
a. If one of the buttons is pushed, activate the corresponding annotation of the 3D model using the Sketchfab Viewer API and play the corresponding voiceover using the HTML5 audio API.
b. If the orientation has changed, apply it to the 3D model using the Sketchfab Viewer API (a minimal sketch of this flow follows the list).
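To illustrate these steps, a minimal middleware sketch is given below. The initialization pattern and the gotoAnnotation / setCameraLookAt calls belong to the public Sketchfab Viewer API [22], but the model UID, the addresses, the message schema, the voiceover files and the camera-orbit approach to mirroring the skull’s rotation are our assumptions; the calls actually used by the exhibit are documented in [23].

```js
// Minimal middleware sketch. Assumes the Sketchfab Viewer API script is
// loaded and an <iframe id="viewer"> exists on the page. The model UID,
// addresses, file names and message schema are illustrative placeholders.
const iframe = document.getElementById('viewer');
const client = new Sketchfab(iframe);

client.init('MODEL_UID_PLACEHOLDER', {
  success: (api) => {
    api.start();
    api.addEventListener('viewerready', () => listenToIoT(api));
  },
  error: () => console.error('Sketchfab viewer failed to initialize')
});

// Rotate vector v by unit quaternion q = [w, x, y, z] (v' = q v q*).
function rotate(q, v) {
  const [w, x, y, z] = q;
  const [vx, vy, vz] = v;
  const tx = 2 * (y * vz - z * vy);
  const ty = 2 * (z * vx - x * vz);
  const tz = 2 * (x * vy - y * vx);
  return [vx + w * tx + y * tz - z * ty,
          vy + w * ty + z * tx - x * tz,
          vz + w * tz + x * ty - y * tx];
}

function listenToIoT(api) {
  const ws = new WebSocket('ws://192.168.1.50:81/'); // assumed static IP assigned by the museum router
  const voiceovers = [new Audio('pin0.mp3'), new Audio('pin1.mp3')]; // hypothetical audio files
  ws.onmessage = (event) => {
    // Assumed message shape: { "quat": [w, x, y, z], "buttons": [indices] }.
    const msg = JSON.parse(event.data);
    if (msg.buttons && msg.buttons.length > 0) {
      const i = msg.buttons[0];
      api.gotoAnnotation(i);                   // activate the pinned annotation
      if (voiceovers[i]) voiceovers[i].play(); // HTML5 audio playback
    }
    // One possible way to mirror the skull's orientation: orbit the camera
    // around the model (the real exhibit may instead rotate the model node
    // itself; see the blog post [23] for the exact approach).
    api.setCameraLookAt(rotate(msg.quat, [0, -2, 0]), [0, 0, 0], 0);
  };
}
```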
Technical details about the Sketchfab API functions utilized are described in our Sketchfab blog post [23]. To make the generation of the middleware possible within SciVi, the needed subset of the Sketchfab API was described in the middleware ontology [6].
A photo of the exhibit is shown in Fig. 5; the exhibit in action can be viewed in the video in Fig. 6.
Fig. 5. Photo of the “Bonobo”
cyber-physical exhibit.
Fig. 6. “Bonobo” cyber-physical
exhibit in action.
The main exhibition of the Museum of Permian Antiquities contains an anatomical recast of the titanophone (Titanophoneus potens Efremov, 1938), a large predatory synapsid of the order Dinocephalia that lived in the Middle Permian. This order is characterized by significant intraspecific variation: the differences between young and adult individuals are sufficient to mistake them for different species, or even different genera. Efremov described the titanophone in 1938 on the basis of two skeletons: No. 157/1, Paleontological Institute, Russian Academy of Sciences [24] (hereafter denoted as No. 157/1) and No. 157/3, Paleontological Institute, Russian Academy of Sciences (hereafter denoted as No. 157/3). The titanophone has such wide age polymorphism that Orlov even attributed No. 157/3 to a separate genus as Doliosauriscus adamanteus Orlov, 1958. However, Ivakhnenko proved that No. 157/3 is in fact an adult titanophone [25].
The Museum of Permian Antiquities possesses a titanophone recast, created under the supervision of Ivakhnenko, that shows a middle-aged animal. But this recast gives no idea of the age-related variability. To fill this gap, we decided to create a cyber-physical exhibit that shows the age differences in an interactive way, using the shape morphing technique to visualize the growing process as a transition between the skulls of young and adult titanophones.
Research on titanophone paleobiology was conducted in [26]. During this research, the skeletons belonging to the genus Titanophoneus were measured. The skull measurements of No. 157/1 and No. 157/3 are presented in Fig. 7.
Fig. 7. Skull sketches with
measurements: No. 157/1 (a) and No. 157/3 (b).
Based on the sketches and the measurements taken, scientifically accurate 3D models of No. 157/1 and No. 157/3 were created using the Blender 3D editor. Fig. 8 demonstrates the model creation process. It must be noted that both the No. 157/1 and No. 157/3 3D models are made with the same topology (the number of vertices and the connections between these vertices are the same for both models), which enables trivial morphing based on the interpolation of vertex positions. This is why we created the models manually and did not use 3D scanning for this task.
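Thanks to the identical topology, the morphing itself reduces to linear interpolation of corresponding vertex positions: for a morphing parameter \(t \in [0, 1]\),

\[
v_i(t) = (1 - t)\,v_i^{157/1} + t\,v_i^{157/3},
\]

where \(v_i^{157/1}\) and \(v_i^{157/3}\) are the positions of the \(i\)-th vertex in the young and adult skull models, respectively.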
Fig. 8. Using Blender 2.8 tools to make correctly sized 3D models of the titanophone skulls.
The P1 (physical part) of the exhibit consists of the two created models of titanophone skulls printed on a modified Anet A8 3D printer. They are placed on platforms at a distance of 40 cm from each other. The platform with the No. 157/1 print is equipped with an IoT device that constantly measures the distance to the nearest obstacle. This distance is then interpreted as the “age” of the titanophone, which is displayed to the visitor in the form of a 3D model (P2, the virtual part of the exhibit) rendered as a morph from No. 157/1 to No. 157/3. The visitor can place a hand between the skulls, altering the measured distance and in this way exploring the ontogenetic changes of the animal. As the skulls presented are in fact reproducible prints, they can be touched. So, the visitor can not only discover the ontogenesis of the titanophone, but also haptically examine the skulls by which this animal was described.
The IoT device consists of a distance measurement sensor and an ESP8266 microcontroller powered by a 1200 mAh battery or, optionally, by a 5 V power line. We tested two range sensors: the time-of-flight VL53L0X and the ultrasonic HC-SR04. While the ultrasonic sensor is 4 times cheaper, both work quite well. However, the time-of-flight sensor ensures better accuracy and is more stable in its measurements. It also possesses a narrower radiation pattern, which reduces false measurements when the visitor is close to the exhibit: the ultrasonic sensor sometimes detects other parts of the visitor’s body, not just the hand between the skulls, which leads to incorrect results and a lower quality of experience. The time-of-flight sensor has no such problem and therefore performs better. A photo of the device is shown in Fig. 9.
Fig. 9. The physical part of the “Titanophone” cyber-physical exhibit: the 3D-printed skull of a young titanophone individual on top of the platform (left) and the electronic components inside the platform (right).
The P3 (display part) of the exhibit is normally a lightweight computer with a monitor installed in the museum (for example, a Raspberry Pi can be used). But under certain circumstances, it may even be the visitor’s personal mobile device.
The operating schema of the entire cyber-physical exhibit is shown in Fig. 10.
Fig. 10. Operating schema of the
“Titanophone” cyber-physical exhibit.
The SciVi system was used to generate both the firmware for the ESP8266 microcontroller and the visualization client. The firmware was generated in C++ and contains the following main steps:
1. Start up the WiFi access point (the SSID and password are set up in SciVi when tuning the firmware generation).
2. Start the main loop:
a. Poll the range sensor (either a VL53L0X connected to the ESP8266 via I2C, or an HC-SR04 connected to digital input pins).
b. Transmit the distance as a JSON message via a WebSocket connection over WiFi to the visualization client (an illustration of this message is given after the list).
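For illustration, the client side of this WebSocket channel might look as follows. The port, the payload shape and the millimetre units are our assumptions (the exact schema is defined during SciVi firmware generation); 192.168.4.1 is the usual default address of an ESP8266 software access point.

```js
// Hedged sketch of receiving the firmware's range messages.
// Payload shape (e.g. {"distance": 273}, millimetres) is an assumption.
const ws = new WebSocket('ws://192.168.4.1:81/');
ws.onmessage = (event) => {
  const { distance } = JSON.parse(event.data); // assumed field name
  console.log('range to the nearest obstacle:', distance, 'mm');
};
```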
The visualization client was generated in JavaScript, HTML5 and CSS as a standalone Web page. The rendering is based on the Three.js engine [27]. The following main actions are performed on the client:
1. On loading:
a. Load the models of the No. 157/1 and No. 157/3 skulls (stored as static resources in the Stanford triangle format [28]).
b. Initialize morph targets [29] from the loaded models. As mentioned above, the topologies of both models are identical, so simple shape morphing is possible. This kind of morphing is supported out of the box by the Three.js engine.
c. Connect to the IoT device via local WiFi utilizing the standard WebSocket API of the browser. The IP address is static, because the IoT device acts as a hotspot.
2. On receiving a message via WebSocket from the IoT device:
a. If the HC-SR04 range sensor is used, perform moving-average smoothing of the measurements to compensate for jitter. If the VL53L0X range sensor is used, take the measurements as they are, because this sensor compensates for jitter itself.
b. Calculate the morphing parameter t. Since the skulls are at a distance of 40 cm from each other, t is obtained by dividing the measured range by 40 cm and clamping the result to [0; 1].
c. Render the morphed model based on the loaded morph targets and the calculated parameter t (a sketch of the client follows the list).
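To make this logic concrete, below is a minimal sketch of such a morphing client built on the standard Three.js morph-target mechanism [29]. The file names, the WebSocket address, the JSON field name and the millimetre units are our assumptions; the scene setup is reduced to the essentials, and the client generated by SciVi is certainly richer.

```js
import * as THREE from 'three';
import { PLYLoader } from 'three/examples/jsm/loaders/PLYLoader.js';

// Basic scene setup (reduced to the essentials for this sketch).
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 0, 3);
scene.add(new THREE.DirectionalLight(0xffffff, 1));

const loader = new PLYLoader();
const history = []; // moving-average window for HC-SR04 jitter
let mesh = null;

// Load both PLY models (hypothetical file names); identical topology assumed.
loader.load('skull_157_1.ply', (young) => {
  loader.load('skull_157_3.ply', (adult) => {
    young.computeVertexNormals();
    // Register the adult skull's vertex positions as a morph target.
    young.morphAttributes.position = [adult.attributes.position];
    mesh = new THREE.Mesh(young, new THREE.MeshStandardMaterial());
    mesh.updateMorphTargets(); // older Three.js versions may also need { morphTargets: true } on the material
    scene.add(mesh);
  });
});

const ws = new WebSocket('ws://192.168.4.1:81/'); // IoT device hotspot (assumed address)
ws.onmessage = (event) => {
  const { distance } = JSON.parse(event.data); // assumed field, millimetres
  history.push(distance);
  if (history.length > 8) history.shift();     // moving average over the last 8 samples
  const d = history.reduce((s, x) => s + x, 0) / history.length;
  if (mesh) {
    // 40 cm = 400 mm between the skulls; clamp t to [0, 1].
    mesh.morphTargetInfluences[0] = Math.min(Math.max(d / 400, 0), 1);
  }
};

(function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
})();
```

Note that registering the adult skull’s vertex positions as a morph target of the young skull’s geometry is possible only because both models share the same topology, which is exactly the design choice motivated in the modeling step above.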
All the needed software components, such as support for the range sensors and morph targets, were integrated into SciVi by extending the related ontologies (the ontology of electronic components and the ontology of visual objects). A demo of the created exhibit is accessible online [30] and shown in Fig. 11. In this demo, the physical part is replaced by a slider at the bottom of the page. By moving this slider, the user can choose intermediate ontogenetic stages of the titanophone.
As a result of the reported research, a concept for deploying CPS in the museum space was proposed. The key idea of this concept is the use of the adaptive multiplatform scientific visualization and visual analytics system SciVi. The adaptive mechanisms of this system allow using it as efficient middleware for enriching the museum’s digital infrastructure with new multimedia content management tools. Thus, seamless integration of new interactive features into the existing museum exhibition is achieved, for example, the creation of IoT-based tangible interfaces to existing museum objects. The scientific visualization methods of SciVi ensure scientifically correct rendering of the data, contributing to the reliability of the multimedia content related to the exhibit.
The viability and efficiency of the proposed concept have been tested in practice by creating two cyber-physical museum exhibits: the bonobo skull in the State Darwin Museum and the titanophone skulls in the Museum of Permian Antiquities. The use of TUI in these exhibits allowed us to expand the range of information they provide by supporting haptic interaction. This increases the attractiveness of the corresponding museum objects and contributes to museum inclusion, making these objects accessible to visually impaired visitors.
In the future we plan to study the protection of wireless cyber-physical exhibits from theft, as well as the further enrichment of the digital infrastructure of museums with tangible user interfaces.
We thank Aurore Mathys from the Royal Museum for Central Africa (Tervuren, Belgium), who kindly provided the 3D model of the bonobo skull.
We thank Constantine Tarasenko, Andrey Sennikov and Valeriy Golubev from the Paleontological Institute, Russian Academy of Sciences (Moscow, Russia), who kindly provided access to the titanophone holotype and lectotype.
1. Ishii, H., Ullmer, B. Tangible Bits: Towards Seamless Interfaces Between People, Bits and Atoms // CHI '97 Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems. – ACM, 1997. – PP. 234–241. DOI: 10.1145/258549.258715.
3. Sanfelice, R. Analysis and Design of Cyber-Physical Systems. A Hybrid Control Systems Approach // Cyber-Physical Systems: From Theory to Practice / Rawat, D., Rodrigues, J., Stojmenovic, I. – CRC Press, 2015. – PP. 3–31. DOI: 10.1201/b19290-3.
5. Chianese, A., Piccialli, F. Designing a Smart Museum: when Cultural Heritage Joins IoT // Third International Conference on Technologies and Applications for Smart Cities (I-TASC'14). – 2014. – 7 p. DOI: 10.1109/NGMAST.2014.21.
6. Ryabinin, K., Chuprina, S., Belousov, K. Ontology-Driven Automation of IoT-Based Human-Machine Interfaces Development // Lecture Notes in Computer Science. – Springer, 2019. – Vol. 11540. – PP. 110–124. DOI: 10.1007/978-3-030-22750-0_9.
7. Ryabinin, K., Kolesnik, M. Using IoT Devices Powered by Scientific Visualization Tools to Create Interactive Paleontological Museum Exhibitions // Proceedings of the 28th International Conference on Computer Graphics and Vision “GraphiCon 2018”. – Tomsk, 2018. – PP. 70–73.
8. Hirose, K. Research on Methods of “Touching the World” – The Aim of the Exhibit Area of Tactile Learning in Japan's National Museum of Ethnology // Disability Studies Quarterly. – 2013. – Vol. 33, No. 3. DOI: 10.18061/dsq.v33i3.3743.
9. Alboul, L., Beer, M., Nisiotis, L. Merging Realities in Space and Time: Towards a New Cyber-Physical Eco-Society // Cyber-Physical Systems for Social Applications / Dimitrova, M., Wagatsuma, H. – IGI Global, 2019. – PP. 156–183.
10. Fischer, T., Herr, C.M., Burry, M.C., Frazer, J.H. Tangible Interfaces to Explain Gaudi's Use of Ruled-Surface Geometries: Interactive Systems Design for Haptic, Nonverbal Learning // Automation in Construction. – Elsevier, 2003. – PP. 467–471. DOI: 10.1016/S0926-5805(03)00031-1.
11. Hsieh, C., Liu, I., Yu, N., Chiang, Y., Wu, H., Chen, Y., Hung, Y. Yongzheng Emperor's Interactive Tabletop: Seamless Multimedia System in a Museum Context // Proceedings of the International Conference on Multimedia. – ACM, 2010. – PP. 1453–1456. DOI: 10.1145/1873951.1874242.
12. Muntean, R., Hennessy, K., Antle, A., Rowley, S., Wilson, J., Matkin, B., Eckersley, R., Tan, P., Wakkary, R. Belongings: a Tangible Interface for Intangible Cultural Heritage // Proceedings of the Conference on Electronic Visualisation and the Arts. – British Computer Society, 2015. – PP. 360–366. DOI: 10.14236/ewic/eva2015.41.
13. Okerlund, J., Segreto, E., Grote, C., Westendorf, L., Scholze, A., Littrell, R., Shaer, O. SynFlo: A Tangible Museum Exhibit for Exploring Bio-Design // Proceedings of TEI'16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction. – ACM, 2016. – PP. 141–149. DOI: 10.1145/2839462.2839488.
14. Vaz, R., Fernandes, P., Veiga, A. Proposal of a Tangible User Interface to Enhance Accessibility in Geological Exhibitions and the Experience of Museum Visitors // Procedia Computer Science. – Elsevier, 2016. – Vol. 100. – PP. 832–839. DOI: 10.1016/j.procs.2016.09.232.
15. Vaz, R., Fernandes, P., Veiga, A. Designing an Interactive Exhibitor for Assisting Blind and Visually Impaired Visitors in Tactile Exploration of Original Museum Pieces // Procedia Computer Science. – Elsevier, 2018. – Vol. 138. – PP. 561–570. DOI: 10.1016/j.procs.2018.10.076.
16. Hassanin, H., Jiang, K. Chapter 10 – Net Shape Manufacture of Freestanding Ceramic Micro-components through Soft Lithography // Micromanufacturing Engineering and Technology (Second Edition). – Elsevier, 2015. – PP. 239–256. DOI: 10.1016/B978-0-323-31149-6.00010-4.
17. Ryabinin, K., Chuprina, S. High-Level Toolset for Comprehensive Visual Data Analysis and Model Validation // Procedia Computer Science. – Elsevier, 2017. – Vol. 108. – PP. 2090–2099. DOI: 10.1016/j.procs.2017.05.050.
21. Mahony, R., Hamel, T., Pflimlin, J. Nonlinear Complementary Filters on the Special Orthogonal Group // IEEE Transactions on Automatic Control. – IEEE, 2008. – Vol. 53, No. 5. – PP. 1203–1218. DOI: 10.1109/TAC.2008.923738.
24. Efremov, I. Some New Permian Reptiles of the USSR // Compt. Rend. (Dokl.) Acad. Sci. USSR. Paleontol. – 1938. – Vol. 19, No. 9. – PP. 771–776.
25. Ivakhnenko, M. Eotherapsids from the East European Placket // Paleontological Journal. – 2003. – No. 37. – PP. 339–465.
26. Kolesnik, M. Modern Reconstruction of Titanophoneus potens Efremov (Synapsida, Dinocephalia) // MSc Thesis, The University of Opole. – Opole, 2019. – 60 p. URL: http://apd.uni.opole.pl/ (last accessed 14.10.2019).