In Search of Virtual Synaesthesia

by Josie Thaddeus-Johns

The sun is going down. The green water beneath the bridge you’re standing on is glistening. The last few rays warm the apples of your cheeks, as if the sky is telling you how beautiful you are. You long to stay in this moment, but relent and raise your phone.

When will we stop taking photos of sunsets? We know that the lens always fails to capture their silken glow. Translating that reality to a .jpg is near impossible. But we continue to try to freeze the unfreezable, sticking up our smartphones in a bid to record the experience, blindly trusting technology’s capacity to replicate this liquid vision. It’s only later, when we look back at the shot, that we admit: ‘Huh. That didn’t capture what I saw at all.’

On a warm day in March at Ravensbourne University, Carl H. Smith, Principal Research Fellow and director of the institution’s Learning Technology Resource Centre, is telling me to think of the night sky. I conjure up its dark carpet of stars and the haunting moon that illuminates our clear nights as he continues: ‘When you take a photo of the moon, you see it as a tiny dot, but actually, your eyes see it as a moon. The camera’s squashing everything.’

Why is it so hard to photograph these transcendent natural experiences? Well, it doesn’t help that a standard camera doesn’t fully capture what we actually see. ‘Traditional photography shuts down your field of view by removing your peripheral vision,’ Smith explains. ‘Your normal vision is approximately 180 degrees.’ That is, by using a camera, we are eliminating everything that we see that’s not directly in front of us. This has important implications for any attempt to capture and transmit subjective experience.

But there’s more. Our brains don’t just look out at the night sky and use the data our eyes deliver; they make sense of it and use it to create patterns. As neuroscientist and artist Shama Rahman tells me, most of what we see is formulated by our brains. ‘The vast majority of what we’re seeing is from the internal model in our brain, until there’s some sort of mismatch between what the internal model is predicting and what the external model (i.e. the world) is showing.’ So, for example, you look down—you expect to see the floor, and you do. And your brain will maintain that visual model until something changes—a bug walks across your foot, a shadow shifts, or rain starts to pour onto the previously dry pavement. From a brain-physiology point of view, ‘seeing’ mostly consists of waiting for the status quo to change.

Our poor little smartphone cameras also attempt to preserve our memories using linear perspective, a 500-year-old convention that has profoundly shaped images and how we consume them, perhaps irrevocably. ‘Linear perspective underpins every imaging technology, but is fundamentally flawed,’ says Smith. ‘We are all so familiar with the narrow field-of-view of photographs that we think it is natural.’

Linear perspective is now even used in 3-D media, which only makes virtual reality seem more virtual and less real. ‘This way of imaging the world leads to a sense of remoteness. If a VR scene is constructed using traditional, linear-perspective photography, then this may be part of what contributes to the nausea that long periods in a VR headset can induce,’ says Smith. Our brains might be able to understand linear perspective’s trick, but our bodies have to find a way to exist within VR. Our sense of proprioception (understanding where we are within a space) needs to match up to what we see.

Fovography is a new way of representing the world visually without using linear perspective. Created by researchers at Cardiff Metropolitan University, it attempts a more natural depiction of our reality. ‘Fovographs can capture the entire field of view and have much greater depth and breadth than conventional photographs or computer graphic renderings,’ Smith says, as he takes me through a slideshow in his Ravensbourne office: images shown in linear perspective, fisheye, and, finally, fovography. It’s a picture that so clearly replicates the experience of viewing through someone’s eyes that it feels intimate and personal. The image shows an e-reader tablet, its owner’s legs stretched out on the sofa beneath him. Out of the window, houses. Close up, the shadow of the nose that our brains see and ignore. The image is curved like a fisheye lens, and yet things that should be straight still look straight. I can see much more than I’d expect.

The most crucial change, and one that’s unsurprising when we consider VR’s immersion problem with proprioception, is that I’m included in the image. Smith sees linear perspective, philosophically, as making a Cartesian split between subject and object, whereas fovography sees the world within our subjectivity: with a body attached, even if it might not be mine. This image of ‘me’ (5′1″, female) is ‘me’ as a man, with long legs that stretch over a whole sofa in a smart, terraced house.

In reality, we never perceive anything through linear perspective, our own limbs included. So, in VR, when you look down at your body, you see those phantom limbs in linear perspective, which is not how we naturally see ourselves. ‘We’re minimising our reality,’ says Smith. ‘We’re operating in this constant translation between how we perceive and how it’s recorded.’

Recently, I have been practising walking ‘masc’. On the way to the grocery store, and definitely on my way back from the bar late at night, I’ve begun a performance that’s out of my comfort zone, requiring me to unravel all the quirks of motion that instinctively feel right. The other week, I did this when I was already walking very close to a young guy, who was completely ignoring me, until, as an experiment, I moved my weight to the centre of my body, spread my footprints wider, and kept my trunk still as I moved forward. It took just three of my attempted man-steps for him to turn around to check who was walking so near to him. Was I a threat? He couldn’t see me until he turned around, but he did, contextually, ‘see’ that my walk had changed. What did he sense that made him have to ensure I wasn’t dangerous?

Smith works on ‘context engineering’—focusing less on engineering our sensory content and more on manipulating and creating the context for those experiences. ‘This is achieved where we are enabled to reconfigure our own perception and cognitive abilities directly (individually or in groups) as the primary content. This means that the lenses through which we experience the world are becoming more adjustable than ever,’ he says. Context engineering explores those things that we sense subconsciously—all the ways that looking at a sunset is different from looking at a photo of a sunset. The sounds, the smells, the peripheral sensations of that sunset are what create our subjective, unreplicable, inexplicably existential experience. What else surrounds the sun as we look at it? Shimmering water reflecting that burning star, the wind in our ears, the feeling of something being over: another summer day lost.

Virtual reality promises to offer us access to novel experiences. Artists, startups and innovators of all stripes suggest that this is a way to increase empathy, to allow us to feel a moment more fully. For example, the World Economic Forum commissioned ‘immersive journalist’ Nonny de la Peña’s Project Syria, which plunges us into the experience of child refugees, with the aim of bringing us closer to the news stories that we read every day. Walking a mile in someone else’s shoes has now become living a minute in someone else’s senses.

And yet, we’re currently making do with a greatly impoverished version of those sense experiences. Virtual reality, like film, often has sound attached, but what about taste? What about smell? How much more immersed could we be?

Storytellers in an array of creative fields are beginning to explore these areas. For example, artist Grace Boyle has created the VR project The Feelies with perfumer Nadjib Achaibou. Named after a Huxleyan medium that offers tactile as well as visual effects, the project comprises two VR films that map the scents of the Amazon rainforest and orchestrate temperature, wind and orientation.

But do we really want to create a VR experience that truly mimics reality? ‘It depends how you approach this,’ says Rahman. ‘You can look at VR as a way to imitate something that’s in real life or you can look at it as a simulacrum—creating a situation of something that used to be or will be or never was real.’ If used for solely artistic purposes, virtual reality, like any medium, can create its own way of expressing reality, just as photography, films, even novels do.

When considered this way, virtual reality becomes less about emulating every single sense that we already have and use, and more about exploring the boundaries of those senses using technology, and even offering new ones: imagine wandering through a forest in VR with a carbon-dioxide sensor interfacing with our bodies, so that CO2 levels create a vibration, for example. ‘Remarkably, we’re starting to see that when you give the brain access to information from different input devices rather than its usual “sensors”, it soon allows you to add to your arsenal of senses by translating this input,’ Rahman says. In this example, it’s not that we are directly sensing the carbon-dioxide levels, she cautions, but that through the vibrations an almost synaesthetic experience—‘a sense modality’, as she calls it—is created.

Artists outside of VR are also working with these sense modalities to make us reconsider what we really ‘count’ as sensing. This is particularly relevant to people who sense, for example, sound without using their ears. Sound artist Christine Sun Kim is deaf, and she uses her artwork to explore the ways that sound can be communicated through other senses—vision, for example. In one series of drawings, she creates something like an idiosyncratic musical score with the ps and fs of piano and forte to mean quiet and loud, respectively. The Sound of Anticipation (2016) transmits something that is not a recognisable sound through a visual language, and yet connects to how Sun Kim perceives audio. A group of heads all instantly turning depicts a sudden clamour, for example.

Likewise, Tarek Atoui’s ongoing project Within (2008–present) works with deaf people, as well as other sound artists, to create musical instruments that translate their sounds through visual or haptic (tactile) cues. To take a simple example, compare how running a stick along a finely ridged plane sounds to running it along one with fewer bumps. Though most people hear this through their eardrums, the sound is comprehensible via other senses, too. Atoui describes his project as ‘decolonialising’ sound from a phonocentric world.

As these artists reveal, our brain already translates experiences in a way analogous to synaesthesia, from sense to sense. Rahman suggests the example of the words ‘bouba’ and ‘kiki’. If you were asked to name a shape for each of those words, you would probably choose a bulbous shape for the former and a sharp one for the latter. So, there are certainly ways in which our brains can comprehend across senses, and significantly, in a way that is not arbitrary—lots of brains agree on these translations. ‘These things already exist, it’s about utilising them,’ Rahman says.

Rahman herself was the first artist-in-residence of the mi.mu gloves, which allow sound to be triggered and modulated through gestures and choreography—unlike a traditional laptop and synth setup, which leaves a non-expert audience in the dark when attempting to understand the sounds created by an electronic artist.

A few months ago, I attended a premiere of a VR film in a Berlin club, where the film was also shot. The same DJs were on the decks at the premiere as those inside the headset, which you could experience by jumping around the club’s different dance areas, watching dancers rave out, smokers light up, or the DJs twiddle knobs in 360-degree closeup. I loved the anonymous dancefloor feeling—that I could stare at someone’s amazing moves, or make my own weird ones, without worrying what others would think. Until I went to the chill-out area, that is, where I spotted a friend, in the film, chatting, VR cigarette in hand, only centimetres away. ‘Hi!’ I wanted to say. ‘What happened with that job you applied for last week?’ And ‘wow, I love your top.’ But, I was invisible—in fact, I was from the future. I took the headset off and tried not to feel hurt that she would ignore me in the middle of a club, when I thought we were friends. Like a dream where someone’s betrayed me, it felt as if I’d done something wrong, as if I shouldn’t be disturbing her when she was busy looking cool.

I left the club and texted my friend. She was pleased I’d seen her in the film. Looking back, I realised that even if my VR experience had been made more immersive through sensory context engineering, that lonely feeling still would have arisen. And so, whatever direction the technology progresses in, one thing is certain: VR needs to find ways of connecting us rather than isolating us. Luckily, that’s one of the key priorities for those invested in its future, so hopefully, soon enough, all of us will experience a virtual reality that connects as well as contextualises.

**********

In Search of Virtual Synaesthesia is taken from Somesuch Stories #3. Buy it online or in select stores.

***

Photograph by Suze Olbrich