I have just returned from my recent adventure to South by South West (SXSW) and it was something of an information overload. The event is the annual mashup of creative, techy wizardry that results in over 80,000 people descending upon downtown Austin, Texas. And descend they did. The excitement was high, the queues long, and visitors rushed around carrying with them a perpetual fear of missing out.
The UK was definitely out in force this year. As well as the stellar line-up of musicians performing across the festival, I spotted representatives from Arts Council England, Marshmallow Laser Feast, Factory 42, BDH, Philharmonia, Punchdrunk, Royal Shakespeare Company, BBC, British Underground, Rewind, AHRC, The Guardian, Virtual Umbrella, Sheffield DocFest, Melody VR, Igloo, Kainos, Vochlea, Sceenic, and many, many more.
But from somewhere amongst the mass of keynotes, pop-up stands, brand takeovers, elevator pitches, surprise gigs, product launches and grilled cheese sandwiches, I came away with five key insights into the trends that are likely to shape the next year or so of our immersive evolution.
The notion of volumetric performance capture has been around for a while, but despite being regarded as a complex and still (relatively) expensive production technique, it now feels like it is becoming much more central to the immersive production conversation.
This was evidenced by Nonny de la Pena’s keynote talk, which carried a strong message about the potential for volumetric capture and photogrammetry in the creation of engaging and realistic environments, arguing that “room scale, walk around, volumetric VR is the most effective platform for creating spatial presence.”
In addition, Ben Stein (former General Manager of 8i) hosted a panel on ‘How Volumetric Humans Impact Storytelling in VR/AR’ alongside Steve Sullivan (GM, Microsoft Mixed Reality Capture), Cedric Gamelin (Senior Producer, Emblematic Group) and Peter Martin (CEO, Valis), and a poll of the room demonstrated a crowd both familiar with volumetric capture and with an appetite to learn more about its promising future.
Of course, we are only just scratching the surface when it comes to understanding the potential this technology has, and naturally much of the conversation revolves around the challenges still to be addressed. For a creator like Peter Martin, number one on his wishlist is a portable studio that can be taken into the wild to capture people wherever they are. For Steve Sullivan, the biggest research challenges for volumetric capture include re-lighting the captured performances (such that your virtual human can be lit in a way consistent with their virtual environment) and being able to effectively overwrite performances (such that captured performances can be seamlessly edited together, or indeed tweaked to achieve the desired, natural-looking effect). Indeed, it's challenges like these that make it so important that facilities like the Dimension Studio in London are used for both creative and technical research alike.
Proving that volumetric capture is delivering incredible and impressive results, the Virtual Cinema showcased a number of new productions that made use of this immersive technology, including Awake: Episode One (from Start VR) and Hold the World (from SkyVR and Factory 42).
Having taken all of this in, I am certain that we're going to see a big increase in conversations about volumetric capture over the next year, both in terms of its potential and its problems. After all, that's exactly what happens when new technology ends up in the hands of a critical mass of content makers; they tend to break it. But strangely, that's exactly what you want if you're hoping to make the transition from research lab to mainstream production technique, and I'm excited to see the next wave of products and experiences that make use of this exciting technology. To quote the words of Nonny de la Pena, "You don't experience your world as flat, you experience it with volume, so why would you want your media to be that way?"
Another notable theme at SXSW was a drive for sound-led augmented reality. Bose made a particularly big noise (I'm not sorry about that pun) with their Bose AR technology, which is due for release in the first half of 2018. The company showed off an array of form factors, from super-light eyewear with built-in speakers through to noise-cancelling headphones, all featuring built-in motion sensors. These, together with your phone's GPS, mean that you can experience audio that reacts, in real time, to both your location and the movement of your head. The demos were fairly simple, but were designed to highlight just how this could be used to create a more reactive and personal 'soundtrack' to your everyday life. What Bose need now, though, is an array of eager content makers to help make this a reality.
But while some are focussing on AR devices that emit sound, there are others taking it one step further. Imagine an AR device in your ear that not only plays sound, but also listens to you and the world around you and adjusts its output accordingly. Well, it's not far off. Poppy Crum, Chief Scientist at Dolby Laboratories, gave a great talk on the possibilities of these wearable, always-on 'hearables' that extend our agency by augmenting our experience of the world in a hyper-personal way. According to the IEEE, who have been backing this work, "hearables listen, record, and some even have electroencephalography (EEG) technologies that analyse their wearer's brain waves to identify preferences."
In theory, these devices could sense everything from the direction of our attention to the anxiety expressed in our voices, which can in turn be used to adapt the media we receive. Crum's argument is that, while audio devices often fall into one of three classes – entertainment, lifestyle, or health – it's when these converge that we achieve something that works so harmoniously with our individual bodies that it "becomes a partner, rather than an assistant." Of course, these 'hearables' are not without complications, not least in terms of the problems to be solved around privacy, data protection and trust, all of which are challenges we are going to have to face in many areas of augmented reality in the years to come.
But for now, when it comes to the possibilities of AR, I am inspired by Crum’s assertion that she “truly believe[s] that the ear will drive the visual elements we want to see in our AR glasses.” It’s easy to get carried away by the visuals, but we definitely need to take sound-led AR just as seriously.
When contemplating the impact of VR on the creative industries, it’s often easy to focus on the entertaining, consumer facing content which gets most of the press. But it’s just as important to remember the huge potential it might have for improving and speeding up other creative processes, such as the pre-visualisation stage of film-making.
Frank Patterson, President of Pinewood Atlanta Studios, hosted a panel on ‘Bringing Big Budget Virtual Production Tools to Indie Filmmakers’ which featured Chris Edwards (CEO of The Third Floor), Wes Ball (Director, Maze Runner Film Series) and Shannon Justison (Senior Visualisation Supervisor, The Third Floor).
Edwards showed off a range of pre-vis tools, including the way VR is used to visualise film sets, which even features a set of virtual camera lenses through which the user can line up their shot exactly as they would see it through a real camera. According to him, this form of virtual production is used on almost all of The Third Floor's big-ticket films, but the challenge is getting it adopted by smaller companies. The panel were convinced that innovations like this can not only make big savings, but bring big gains in creativity to the movie business as well. As Wes Ball puts it, "movies cost too damn much," which is why we see a lot of the same films again and again: they're ultimately considered safe investments. He argues that, if we can bring the cost down, people can take more risks with what they fund and create, which is obviously good news for filmmakers and movie fans alike.
Hopefully, we’re going to see a lot more of these examples filtering down through the ecosystem in the not-too-distant future.
SXSW also featured much conversation around immersive, location-based entertainment (with or without VR and AR technologies). From immersive ‘Secret Cinema’ style experiences, to escape room quests, through to multiplayer VR challenges, this was a hot topic that one panel dubbed “The Future of Fun.”
An example was The Atrium, shown at the SXSW Virtual Cinema, an experience created by Meow Wolf (an art collective known for their creation of physical, interactive ‘sets’ for audiences to explore). Billed as their ‘first mixed reality installation,’ it attracted a pretty solid queue every day that it was on.
Also, barely a day went by without some reference to "Sleep No More" – the famed immersive theatre production created by Punchdrunk – and the possibilities that AR or VR could bring to this style of site-specific, shared entertainment experience. Of course, there are many different forms this could ultimately take, but many are convinced this type of physical, augmented installation has a lot of potential yet to be explored. If Secret Cinema and Sleep No More have taught us anything, it's that there is a significant audience out there who actively seek the thrill and excitement of these exclusive experiences. And not only that, they are also willing to both pay for them and go miles out of their way to find them. As the VR/AR sector continues its quest to reach its 'early majority' of consumers, this kind of experience could provide the ideal playground for creators and technologists to reach a sizeable and savvy audience, eager to try new things.
Of course, with all this noise and excitement about, there inevitably comes a good dose of the scary 'what if' scenarios. These revolved around the ever-improving ways of mimicking or re-creating virtual humans, and the potential consequences of not being able to distinguish fact from fiction.
From mimicking a specific human voice, to re-animating volumetrically captured people, or realigning mouth movements to sync with a voiceover track, the potentially ominous combination is a piece of media that appears to be a record of a real person saying and doing things that they never actually said or did, with astonishing realism. Fake news suddenly takes on a whole new dimension, and this is a reality we need to face head-on. In my view, it is futile to suggest that we should, or will, stop making the advances that make this possible, and indeed there are a number of great advantages driving this work along. Instead, we need to work harder at envisaging the problems earlier, and at thinking through how we might cope with them. The concept of 'fake news' took many people by surprise, and now we are struggling to adapt our processes and regulations retrospectively. Perhaps with hyper-realistic virtual humans we have an opportunity to look further ahead and prepare ourselves better for this bright new world, whatever it brings.
Now that the dust has settled and the good citizens of Austin have their city back, it's been interesting to reflect on these themes and what they'll mean for the immersive landscape in the next year or so. As with any big event, SXSW generates a lot of noise that can be hard to cut through, but in the end we're left with some really exciting creative opportunities, as well as a few important ethical questions that we need to resolve sooner rather than later. My prediction? It's going to be a great year.