Friday, 26 February 2016

Immersive experiences roundup at MWC 2016

A quick look through my favourite virtual experiences at Mobile World Congress 2016.

So the single best thing for me at MWC 2016 was the fourth of Nokia’s OZO demos, by a Finnish band called Husky Rescue.  Forget the GearVR chicken wire and the 30fps-only capture; those things will get there in time (more on that later).  This was one of the best things I’ve seen for showing what a virtual experience can be like.

It’s a very simple, short piece: the lead singer pulls you in on a rope through the musicians, hands you a balloon (yes, it's Scandinavian cinematic electro-pop…), then sings to you with really evocative eye-to-eye contact. 

It shows how to do so many of the things that are often missing in 3D 360 content.  The tracking shot adds movement and helps you explore the space.  The singer utterly sells the experience and offers an otherwise impossible view from inside the track, within the band, the sound and the space, and a truly personal one-on-one experience.  The rendering and stitching don’t jar, and everyone is at a fairly similar focus distance, so it’s really comfortable to look around in as well.  It’s a really great experience that makes that piece of music much more special than watching it on TV or hearing it through headphones on the tube.  Well worth 3 minutes of anyone's time.  Unfortunately you can’t download it yet – I’d love to see the full thing in a Rift (Nokia, please release!).  However, it was explained that they want the first content news splashes to come from the dedicated content producers who are starting to receive the first OZO units. 
  
I was less keen on the other content in the Nokia demo, so that’s probably a good thing.  The Husky Rescue video had the most time spent on it, and it showed.  The pirates suffer from looking fake and staged.  The band on the roof was OK but very static, and felt a lot like a detached TV experience; the stitching behind you and at your feet wasn't great either.  The NASA control room and space-suit water-testing area was a good example of the sort of content that could be cool, but not a lot happens, there are a lot of near-field and deep-field objects, and it’s not totally clear what is interesting to look at. 

But thoroughly good work, Nokia: you had the camera there, and real content for people to see on the sort of Gear VR headset that people are likely to consume these short, powerful experiences on.  Getting out there and getting it done.

Elsewhere there were so many things all converging on solving those frame-rate and resolution issues to raise the level of presence.

Qualcomm were showing their new 820 chip in a prototype phone-and-headset setup with 1k x 1k per eye.  The content was a dragon flying around a dungeon at a stated 60fps.  The screen-door effect wasn’t bad, and it was a convincing display of hardware acceleration.  But that really isn’t going to be enough for VR: pushing up towards 8k looks like the target, and everyone wants it without wires, so that means power-limited portable devices with batteries and consumption constraints.
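As a back-of-envelope check on why that resolution jump is so demanding, here is a quick sketch of the raw pixel throughput involved.  The "8k-class" figures are my own illustrative assumptions, not specs from the show floor:

```python
# Back-of-envelope pixel throughput: why "8k-class" VR is hard.
# The target resolution and frame rate below are illustrative assumptions.

def pixel_rate(width, height, eyes, fps):
    """Raw pixels per second the GPU must shade."""
    return width * height * eyes * fps

demo_2016 = pixel_rate(1024, 1024, 2, 60)   # the Qualcomm 820 demo as stated
future    = pixel_rate(3840, 3840, 2, 90)   # a hypothetical "8k-class" target

print(f"{demo_2016 / 1e6:.0f} Mpx/s vs {future / 1e6:.0f} Mpx/s "
      f"(about {future / demo_2016:.0f}x more work)")
```

Roughly a twenty-fold increase in shading work, on a battery, without wires – which is why the rendering cleverness further down matters so much.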

8k really is awesome.  I stood for quite a while outside SK Telecom's 8k live encoder feeding a giant Samsung screen showing nothing more interesting than basketball (not my game to watch).  I love the counter-argument to headsets giving the impression of something the size of a wall: just make something the size of a wall!  But it’s so much energy, material and cost – headsets will win, though they’ll need to get lighter and become more like glasses than goggles, which they will.  Still, something that big certainly impresses.



Getting there is going to need software, hardware and fundamental improvements to render above the roughly 1k-per-eye resolution the CV1 and Vive will bring, while getting the weight and energy down.  If Qualcomm were showing some of the hardware helping in that direction, companies like TheEyeTribe and Imagination were showing some of the rendering cleverness that will help us get there.

TheEyeTribe were showing two-camera, five-LED rigs for eye tracking integrated into a GearVR and a Rift.  Eye tracking is very cool, allowing difficult-to-spoof authentication, user identification, and foveated rendering: putting the most rendering effort where you are actually looking.  They have been specialising for a while in limited device formats, getting the power consumption right down, and there is a lot of interest from the device manufacturers.  I don’t know which eye tracking company will come out on top, but I see this getting built into the Gen2 headsets to offload some of that rendering workload, and I liked TheEyeTribe’s approach a lot.

Imagination were showing their PowerVR Wizard real-time ray tracing on chips applicable at smartphone power levels.  How about combining that with eye tracking to get next-gen localised rendering with ray tracing?  That’s the sort of thing that will help take us to where the VR experience needs to go.

Today's wires will have to go – whilst I loved the Yellow Submarine display, and they integrated the look and feel of the umbilical cords well, SK Telecom’s display seemed to spend at least half of its time closed with a maintenance sign outside, and the Leap Motions glued to the front weren’t doing anything.  But full respect to SK Telecom: they had a really fun stand, packing a lot of things into a sensible amount of space.

That sort of Leap Motion hand-sensing technology will quickly have to become built in.  I absolutely loathe the touchpad on the GearVR; it really breaks presence, fumbling for a control on your head with a motion you never make otherwise.   Eye tracking, voice control, contextual understanding, gestures – basically interacting naturally is the way forward for VR experiences.  

Games have well-refined control pads that gamers are used to – they work well and don’t break the experience.  But for shopping, bands, events, a wider audience – you need something more natural.  

Intel was showing their ‘1.5’ version RealSense prototype: twin depth trackers and cameras on a prototype smartphone placed into a headset.  It’s not there yet – you can only see your hands at more or less full reach, close to the headset the cameras can’t get a read, and collision detection was a bit slow – but it’s enough to give you your hands back in VR, and I really like that.  Looking forward to v2.


Qualcomm’s depth-sensing demos had also come a long way since last year; I was pretty impressed with the speed of creating a 3D render of a gentleman at the stand.  Definitely something you can start bringing into the retail environment.

Samsung were doing a great job of selling the GearVR in general.  There wasn’t a big smartphone release this year, and I don’t think it can be long before they look at depth sensing, stereoscopic cameras and hand detection as big differentiating features for their flagships – assuming enough people with Galaxy Notes buy (and use!) the GearVR. 

Even the telcos were starting to realise the impact VR content is going to have on them, particularly in terms of the bandwidth it requires, and were including that in their 5G demos; last year I just got blank looks trying to explain that.  


Anyhow, looping back to the start, Nokia were streaming live (near-live?) content from their OZO at 15-20Mbit/s during their launch event via Wi-Fi.   A bit tricky for 4G consumer speeds, but well within the capabilities of test ultrafast 5G networks like the one near me in Bristol.
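A quick sanity check on that quoted stream rate shows why the telcos should be paying attention – a sustained VR stream chews through data allowances fast:

```python
# Convert the 15-20 Mbit/s OZO stream rate quoted above into data volume.

def gigabytes_per_hour(mbit_per_s):
    """Sustained bitrate (Mbit/s) to data volume per hour (GB, 1 GB = 1000 MB)."""
    return mbit_per_s * 3600 / 8 / 1000

for rate in (15, 20):
    print(f"{rate} Mbit/s ≈ {gigabytes_per_hour(rate):.2f} GB per hour")
```

Around 7-9 GB per viewer per hour – comfortably beyond a typical 2016 4G data plan, but a nice showcase workload for a 5G trial network.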


Fancy a go at a telco live stream to many GearVRs, Nokia?  I know a couple of UK telcos that would be interested...
