The starting point of this research is the observation that an image represents reality flattened onto a two-dimensional surface, whereas sound moves through space and therefore suffers no such dimensional loss. The aim is to add a further layer of depth to reality through sound generated from visual elements (i.e. light and color). In doing so, the research investigates whether it is possible to enhance the experience of depth by translating light into sound. This study analyses the internal structure of light and sound as waves in order to find a valid model of translation. Furthermore, to see how the brain responds to such stimuli, it examines synaesthesia, a phenomenon of multisensory experience. Historical examples such as color organs, abstract cinema and (live) immersive installations illustrate the connection between sound and color, and we will see how neuroscience and technology have helped visually impaired people to compensate for their impairments. The dossier concludes that hearing has a wider range than sight, which is why an experience of space built on sound will be more intense. This conclusion introduces an experimental project that transmits from one medium (the visual) to another (the aural), a kind of synthetic art. By rendering the audible from the visible (i.e. translating light input into audio output) with the Pure Data software, the project’s main concepts, such as void, perspective and time, are exposed.
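As an aside on the range comparison: visible light spans roughly 430–750 THz (less than one octave), while hearing spans roughly 20 Hz–20 kHz (almost ten octaves). One simple translation model, sketched below in Python purely as an illustration and not as the project's actual Pure Data patch, is octave reduction: halving an optical frequency repeatedly until it lands inside the audible band, so that a "pitch class" of the light wave is preserved.

```python
# Illustrative sketch only (assumed mapping, not the author's method):
# transpose a light wave's frequency down by whole octaves until it is audible.

AUDIBLE_MAX_HZ = 20_000.0  # approximate upper limit of human hearing
AUDIBLE_MIN_HZ = 20.0      # approximate lower limit of human hearing

def light_to_audio_hz(light_hz: float) -> float:
    """Halve an optical frequency octave by octave until it is <= 20 kHz."""
    f = light_hz
    while f > AUDIBLE_MAX_HZ:
        f /= 2.0  # one octave down: same wave "shape", lower register
    return f

# Green light at ~540 THz drops 35 octaves into the audible band.
green_tone = light_to_audio_hz(540e12)
```

Because the whole visible spectrum fits inside a single octave, every color maps into one audible octave under this scheme, which makes the asymmetry the text describes concrete: sound has roughly ten times more octaves of range to articulate space with than light does.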
More information at petarkufner.org