Data Sonification with vlrMemos


Sonification in vlrMemos applies to the physiological signals and the variability data. By default, during recording or playback, these signals and data produce no sound; only the parameter values are displayed, for a single channel or for the average of the channels.
Optionally, a sound will be generated per channel (multichannel output will be possible) using a high-quality sonification algorithm: Spectral Mapping Sonification.

For the moment, sonification is not yet implemented in vlrMemos.
Below we first present some results from studies and research, along with some example implementations.
We then describe what will be implemented in vlrMemos.
For a comprehensive overview of the field, refer to the Sonification Handbook, at the following address:
Sonification Handbook


Artistic Creation:

Artistic creation (musification) is possible by sonifying data such as:
- DNA (deoxyribonucleic acid).
- Solar activity.
- Weather records.
- Tides.
- etc.
See the Sonification Handbook for more information.


Physiological Data:

Sonification can be used as a method complementary to visualization.
Below is a link to a study on the sonification of the heart rate.
Recent studies have shown, for example, that one can hear the difference between a normal heart rate and an abnormal one through the sonification of ECG signals.
Sonification of the Heart Rate


Cancer Detection:

In medicine, certain types of cancer can be detected by sound. Audio analysis could speed up the examination of a cancer biopsy, which currently can take several weeks.
Converting stem-cell data into sounds could enable instant, non-invasive cancer diagnosis during a routine check-up.
For more information:
Cancer Detection by Sonification (1)
Cancer Detection by Sonification (2)
Cancer Detection by Sonification (3)


Higgs Boson Data Sonification:

A simple algorithm is used:
- The same number is always associated with the same note.
- The melody follows exactly the same pattern as the data.
More information at the following address:
Higgs Boson Data Sonification
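The number-to-note rule above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the code used for the actual Higgs sonification; the scale and the sample values are assumptions.

```python
def numbers_to_notes(values, scale=("C", "D", "E", "F", "G", "A", "B")):
    """Map each distinct number to a fixed note of the scale.

    The same number always yields the same note, so the melody follows
    exactly the same pattern as the data.
    """
    # Assign notes in order of first appearance; wrap around the scale
    # if there are more distinct values than notes.
    mapping = {}
    melody = []
    for v in values:
        if v not in mapping:
            mapping[v] = scale[len(mapping) % len(scale)]
        melody.append(mapping[v])
    return melody

# Identical data points produce identical notes:
print(numbers_to_notes([25, 26, 25, 27, 26]))  # ['C', 'D', 'C', 'E', 'D']
```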

With this algorithm, the following sounds are obtained:

- Solo Piano:



- Piano and Marimba:



- More Instruments:
(Piano, Marimba, Xylophone, Flute, Double Bass and Percussion).




Electric Cars:

According to experts, electric cars, being nearly silent, present a danger to pedestrians, cyclists, the visually impaired, etc., and should be sonified without increasing noise pollution.
Special versions of the iOS and Android operating systems exist for cars. Much data can be retrieved (wheel speed, GPS data, etc.). Some of this data should allow efficient sonification, but the algorithms are still at the research stage.
For more information on the subject:
Sonification of the Electric Cars (1)
Sonification of the Electric Cars (2)
Android Auto
CarPlay




vlrMemos:

We will use spectral mapping sonification.
Spectral mapping sonification makes it possible to monitor all frequencies or a specific frequency band.
For more information, see the Sonification Handbook or read the following article:
Spectral Mapping Sonification

We will transform the frequency-domain data into musical notes:
- The note amplitudes will be derived from the spectral energies.
- The note timing will follow the frame durations, with a time-compression factor.
- The note pitches will be derived from the spectral centroids (centers of mass of the spectrum).
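The three mappings above can be sketched as a single frame-to-note function. This is a hypothetical helper under assumed names; vlrMemos does not expose this API yet, and the velocity scaling is for illustration only.

```python
import math

def frame_to_note(magnitudes, freqs, frame_ms, compression=10.0):
    """Map one spectral frame to a (midi_note, velocity, duration_ms) triple.

    - pitch    <- spectral centroid (center of mass of the spectrum)
    - velocity <- total spectral energy, clipped to the MIDI range
    - duration <- frame duration divided by a time-compression factor
    """
    energy = sum(m * m for m in magnitudes)
    centroid = sum(f * m for f, m in zip(freqs, magnitudes)) / sum(magnitudes)
    # Convert the centroid (Hz) to the nearest MIDI note number (A4 = 440 Hz = 69).
    midi_note = round(69 + 12 * math.log2(centroid / 440.0))
    velocity = min(127, int(energy))  # naive scaling, assumption for this sketch
    return midi_note, velocity, frame_ms / compression
```

For example, a 1000 ms frame whose energy sits entirely at 440 Hz yields MIDI note 69 (A4) with a 100 ms duration when the compression factor is 10.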

By default, for each channel, we will consider an area consisting of all the frequencies, with the possibility of limiting that area. An instrument will be assigned to the area (by default, the piano).
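Limiting the area amounts to keeping only the spectral bins inside a frequency range. A minimal sketch, with assumed names (this is not the vlrMemos API):

```python
def limit_area(freqs, magnitudes, f_lo=0.0, f_hi=float("inf")):
    """Keep only the spectral bins whose frequency lies in [f_lo, f_hi].

    With the default bounds, the area covers all frequencies.
    """
    kept = [(f, m) for f, m in zip(freqs, magnitudes) if f_lo <= f <= f_hi]
    return [f for f, _ in kept], [m for _, m in kept]

# Restrict a spectrum to the 200-1000 Hz band:
band_freqs, band_mags = limit_area([100.0, 500.0, 2000.0],
                                   [1.0, 2.0, 3.0],
                                   f_lo=200.0, f_hi=1000.0)
```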

We will make it possible to assign one instrument to the low frequencies and another instrument to the high frequencies, in order to study the low/high power ratios.
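The low/high power ratio mentioned above can be computed by splitting the spectrum at a cutoff frequency; each side would then be rendered with its own instrument. A sketch under assumed names:

```python
def low_high_ratio(freqs, magnitudes, split_hz):
    """Split the spectrum at split_hz and return (low_power, high_power, ratio).

    The ratio low_power / high_power is the quantity to study; each side
    can be assigned its own instrument.
    """
    low = sum(m * m for f, m in zip(freqs, magnitudes) if f < split_hz)
    high = sum(m * m for f, m in zip(freqs, magnitudes) if f >= split_hz)
    return low, high, (low / high if high else float("inf"))
```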

We will make it possible to assign one instrument to the foreground (the points of greatest magnitude) and another to the background (the most energetic bands). This will isolate the background noise, or allow the study of its influence and evolution.
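One possible reading of this foreground/background separation is sketched below: the foreground is the single bin of greatest magnitude, the background the most energetic fixed-width band. The selection rules are an assumption; the actual criteria are still to be designed.

```python
def foreground_background(freqs, magnitudes, band_hz=100.0):
    """Separate a frame into foreground and background.

    Foreground: frequency of the bin of greatest magnitude.
    Background: the band of width band_hz with the highest total energy.
    """
    # Foreground: the strongest single bin.
    fg = max(zip(freqs, magnitudes), key=lambda fm: fm[1])[0]
    # Background: scan fixed-width bands and keep the most energetic one.
    best_start, best_energy = 0.0, -1.0
    start = 0.0
    while start <= max(freqs):
        e = sum(m * m for f, m in zip(freqs, magnitudes)
                if start <= f < start + band_hz)
        if e > best_energy:
            best_start, best_energy = start, e
        start += band_hz
    return fg, (best_start, best_start + band_hz)
```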

We will use a library able to synthesize sounds from SoundFonts, such as TiMidity++ or FluidSynth. This approach will make it possible to customize the instruments simply by replacing the fonts.

Multichannel output will be possible, each channel being treated as a track of a MIDI file. In multichannel mode, this approach will give an idea of the synchrony between channels via their spectral coherence.
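The track-per-channel layout can be sketched with plain data structures, one track per physiological channel, each with its own MIDI channel and General MIDI program. The channel names and program numbers below are illustrative assumptions; the real implementation would go through a MIDI library and a SoundFont synthesizer.

```python
def channels_to_tracks(channels, programs=None):
    """Build one MIDI-like track per physiological channel.

    channels: {name: [(midi_note, velocity, duration_ms), ...]}
    programs: optional {name: general_midi_program} (0 = acoustic grand piano).
    """
    tracks = []
    for i, (name, notes) in enumerate(channels.items()):
        program = (programs or {}).get(name, 0)  # default instrument: piano
        tracks.append({"name": name, "midi_channel": i,
                       "program": program, "events": list(notes)})
    return tracks

# Two channels, the second rendered with a marimba (General MIDI program 12):
tracks = channels_to_tracks(
    {"channel-1": [(69, 90, 100.0)], "channel-2": [(72, 80, 100.0)]},
    programs={"channel-2": 12},
)
```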