© The Financial Times Ltd 2016
FT and 'Financial Times' are trademarks of The Financial Times Ltd.
The Financial Times and its journalism are subject to a self-regulation regime under the FT Editorial Code of Practice.
July 3, 2015 10:09 am
That the barriers between the virtual and real worlds are breaking down has become a truism of today’s tech industry. Since Facebook paid $2bn for virtual reality company Oculus more than a year ago, the race has been on to come up with the most creative approaches to virtual and “mixed” reality. At times, it can feel as though the new forms of play, socialisation and work this will lead to are limited only by the imagination of the technologists.
But the outpouring of experimentation that is under way is starting to provoke some intriguing questions. Among them: how will the real world bleed into the virtual — and vice versa?
And would people prefer to live in a virtual world, interacting with representations of reality — or in the real world, interacting with virtual objects?
These are the kinds of question thrown up by the groundbreaking work of Massachusetts Institute of Technology graduate student Abe Davis and his collaborators.
They begin by videoing everyday objects. Tiny vibrations or movements in things like house plants or chip packets, created by sound waves or other environmental factors, are often invisible to the naked eye.
Subjected to close analysis, though, they yield information that can be used to achieve surprising results.
Davis showed last year how he had captured video of vibrations caused by sound waves and translated the images back into sound.
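The core trick can be illustrated with a toy computation. This is not Davis’ actual pipeline, which recovers far subtler, sub-pixel motion from video; the frame rate, tone and brightness model below are invented purely for the sketch. The idea: if a sound wave makes an object’s image wobble even slightly from frame to frame, collapsing each frame to a single number yields a time series whose spectrum reveals the sound’s frequency.

```python
import numpy as np

# Toy illustration (not Davis' algorithm): a vibrating surface modulates
# pixel brightness frame to frame; averaging each frame gives a 1-D
# signal whose dominant frequency matches the driving sound.
fps = 2000            # assumed high-speed capture rate
tone_hz = 440         # hypothetical tone vibrating the object
n_frames = 2000

t = np.arange(n_frames) / fps
# Synthetic "video": 8x8 frames whose brightness wobbles with the tone,
# plus a little sensor noise.
rng = np.random.default_rng(0)
frames = (0.5 + 0.01 * np.sin(2 * np.pi * tone_hz * t)[:, None, None]
          + 0.001 * rng.standard_normal((n_frames, 8, 8)))

signal = frames.mean(axis=(1, 2))   # per-frame mean intensity
signal -= signal.mean()             # remove the DC offset

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n_frames, d=1 / fps)
recovered = freqs[spectrum.argmax()]
print(recovered)                    # → 440.0 (Hz)
```

In the real work the per-frame "wobble" is extracted with far more sophisticated motion analysis, but the final step is the same in spirit: a sequence of tiny image changes is reinterpreted as an audio waveform.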
You only need to see him film a chip packet through a soundproof window, and use it to recreate words that were being spoken in the room, to feel the chill of the surveillance state.
Now, he has taken the idea in a different direction, capturing movements — say, a bush moving gently in a breeze — and using them to build a model of how the object would move if subjected to other physical stimuli.
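The idea of the second project — identify a physical model from observed motion, then ask the model "what if?" — can be sketched with a one-dimensional stand-in. The damped oscillator, its parameters and the fitting steps below are illustrative assumptions, not Davis’ method:

```python
import numpy as np

# Toy version of the idea: observe an object's small free vibration,
# fit a damped-oscillator model to it, then use only the fitted model
# to predict the response to a new, hypothetical stimulus.
fps = 240                                  # assumed camera frame rate
t = np.arange(0, 4, 1 / fps)               # 4 s of "video"
omega, zeta = 2 * np.pi * 1.5, 0.02        # hypothetical true parameters
wd = omega * np.sqrt(1 - zeta ** 2)
observed = np.exp(-zeta * omega * t) * np.cos(wd * t)   # tiny vibration

# Step 1: recover the vibration frequency from the observation.
spectrum = np.abs(np.fft.rfft(observed))
freqs = np.fft.rfftfreq(len(t), 1 / fps)
f_est = freqs[spectrum.argmax()]

# Step 2: recover damping from the logarithmic decrement between the
# first and last crests of the decaying waveform.
peaks = [i for i in range(1, len(observed) - 1)
         if observed[i] > observed[i - 1] and observed[i] > observed[i + 1]]
n = len(peaks) - 1
delta = np.log(observed[peaks[0]] / observed[peaks[-1]]) / n
zeta_est = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)

# Step 3: predict the response to a new stimulus (a unit impulse) using
# only the identified model — no further observation of the object needed.
w_est = 2 * np.pi * f_est
predicted = np.exp(-zeta_est * w_est * t) * np.sin(w_est * t)
print(round(f_est, 2), round(zeta_est, 3))   # ≈ 1.5 Hz, ≈ 0.02
```

Davis works with real objects and far richer motion models, but the principle is the same: once the dynamics have been identified from passive observation, the object can be "pushed" virtually and its response simulated.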
Davis’ work turns augmented reality on its head. Rather than making it appear as though virtual objects are moving about in the real world — something attempted by companies like Microsoft, with its HoloLens headset, and US start-up Magic Leap — he holds out the possibility of capturing and manipulating real-world objects in virtual space.
There is considerable overlap between the potential applications of technologies like these.
Microsoft, for instance, has demonstrated how someone wearing its augmented reality goggles could assemble a virtual object out of virtual components that appear to float before them in the real world, viewing the object from different angles before sending the design to a 3D printer to be made “real”.
By contrast, Davis’ simulations, built directly from reality, could be manipulated in virtual space to predict how the real objects would respond to physical forces.
Whether ingenious ideas like these see the light of day is now a question of how fast the cost of computer processing falls and how quickly the algorithms are refined — and whether the imagination of the inventors can keep up.