Awesome comment made about VR recently

When you watch someone using HMD VR, you see them looking at stuff you can’t see. You see their arms moving, you hear them laugh maniacally or scream in terror… It’s like watching someone high on some drug. VR is the drug that’s potentially more addictive than cocaine.

Posted in Uncategorized

Are 360° Videos going to kill VR?

TL;DR: I’m going to rag on Google’s 360° action flick “Help” as the acme of horrible 360° mobile filmmaking.

I’m not a huge fan of 360° immersive videos. They are terribly hard to do right – and by “right” I mean entertaining, interesting, and intuitive. Unfortunately I see a trend amongst the companies pushing 360° video of labeling it as a “VR” experience. In reality, most of the 360° videos I see are utter crap, with prominent stitching artifacts and horrible focus on the action (or the utter lack of any). I see this when traditionally focused (i.e. film and TV) directors try to create a 360° experience, making the naive assumption that what they know works, and that 360° video is just some new tech like 3D – you can just do what you’ve always done in the “new” medium and it’ll turn out OK. Sorry, no. That will work with 180° videos – in fact I think that’s the ultimate direction a lot of “VR” video will take – but with 360° you have to think about the experience in a totally new way.

It all comes down to vision and talent – the ability to imagine yourself in the center of an interesting experience and decide how you are going to present it. This can be greatly successful if it’s done so that the user can easily and intuitively focus on the interesting parts. It works best when it’s an immersive experience with either one direction of interest, or when the entire view is interesting.

Some examples of getting it right:

WarCraft: Skies of Azeroth – This is the full deal: you are just riding a big eagle, looking around (like you would if you really could ride an eagle). It’s the total environment that’s interesting.

AirPano’s The Land of Bears, Kurile Lake, Kamchatka, Russia, which naturally lets you focus on the bears as you fly over them from a drone’s perspective. It’s good because you get to focus on the interesting bits – they’re right there, easily trackable. Want to follow a particular bear? It’s as easy as turning your head.

Google’s own Spotlight Stories’ A 360° World, which is undeniably cute, easy to follow, and an interesting Cardboard experience, but it probably would have been better with a traditional film focus, keeping the action in front of you. Still, it’s not bad, and it’s easy to follow the main action.

Some horribly done ones:

Pretty much any 360° concert video. Plop a sphere of GoPros off to the side of the stage and what do you get? A weird video where you get 90° of interesting content, a big view of the crowd, and a slice of the backstage. Yeah, you can rotate around, but I don’t go to a concert to spend much (if any) time looking at the crowd or the stage hands – I come to see the performer. And the tiny sliver I see of them isn’t really a great experience.

Nike’s The Neymar Jr. Effect – a pukefest that breaks a lot of the rules of good VR experience design. Really, who thought up this shit?

You play as the “head” of a Brazilian soccer player – I mean really… they chopped his head off, placed the viewpoint there (see the image), and you get to ride along – bouncing on his shoulders – able to swing freely through 360° of nauseating motion. All you control is the orientation; you bounce along while he skitters down the field and scores a goal. And the opening is entirely a 2D introduction, with helpful arrows pointing you back to the front. Nike probably spent a ton of money on this – guess what – it’s utter crap.

And finally there’s the thing that drove me to write this: Google Spotlight Stories’ latest release, the live-action “Help” from director Justin Lin. Now don’t get me wrong, there are things I really like about this film – it’s interesting, the production quality is great, the acting (what you can see of it on a phone) is good, and the FX are well done. Unfortunately there are two major flaws that make it really hard to like as an experience.

Most of the time you are placed in the middle of the action. Which really sucks, as there are people running away from the monster in front of you, and the monster chasing them behind you. You can focus on one or the other, but you are always seeing only half the story. This is absolutely NOT the way to do an immersive experience. Immersive does NOT mean I have to swing my head wildly back and forth like a terrier trying to break the neck of a chicken. And yes, I could watch from one orientation and then from the other, but that would be like watching an argument by first hearing one person’s entire monologue, then the other’s. Neither one is satisfying by itself. So placing the viewer between the antagonist and protagonist is an utterly bad immersive experience.

So much for the viewing experience – what made it much worse was trying to watch this on my phone. It wasn’t my head I was swinging back and forth, it was my phone. Wearing an HMD would have made it a better experience; with a phone, I have a tiny little window through which I have to peer to see what’s going on. I was in a swivel chair, wildly spinning back and forth, squinting at the tiny screen trying to catch sight of the action. This made the entire experience horribly taxing, frustrating, and non-immersive. Apparently I’m not the only one, either. Didn’t anyone actually test this setup before releasing it?

The problem is that when there is just one interesting bit of action, I don’t want it spread all over a virtual sphere. And in “Help” there was just the monster chasing the folks for a bit – placing me between them did not make it a 360° experience. You can’t break a single bit of action into a panorama – at least no one has done this successfully yet. This is where traditional film and TV direction does not translate to any sort of VR experience – you can’t take the traditional single point of interest, plop a spherical camera rig nearby, and expect it to be instantly “immersive”. This is where I think it’ll morph into a 180° experience – that’s more along the lines of traditional cinema, but gives me the (small) ability to move my focus a bit if I want to see more. Really, with two years of planning, you’d think the execution would have been better thought out.

Don’t get me wrong, I think we’ll eventually figure out how to consistently create an immersive VR action experience. Full disclosure: the company I work for makes stereo 360° videos all the time. But at least ours are either focused on the entire environment, or there’s a single point of interest that moves about the scene. We try *really* hard to make sure the user experience is good – that it’s not taxing or frustrating. It kills me when I see companies attempting the whole “immersive 360° VR experience” while being totally clueless about how to pull it off. Even Google, who I applaud for trying this stuff out, apparently can’t claim success 100% of the time.

Posted in FILM

Uploading your GearVR APKs faster – Part 1

One of the recent GearVR projects we did involved an APK with over 350MB of video files embedded in it. Needless to say, it was getting really tedious to make a change and upload the APK – which took about 2½ minutes on average. This really ate into our debugging process. And before you say it: we were using a pre-loaded video file for most of development (a best practice for app development), but when you are working on the “final” build, you’ve turned off all the shortcuts you implemented to speed development and are working on the “final” version. Upload was only one part of the overall build process, but one of the tweaks we decided to implement from the project post-mortem was to do anything that would speed up the edit-debug cycle. It turns out that the variety of devices we have on hand exhibit a wide range of upload speeds.
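
If you want to put numbers on your own devices, a quick-and-dirty harness along these lines will do. This is just a sketch, not our actual tooling – it assumes adb is on your PATH and exactly one device is connected, and the APK name is illustrative:

```java
import java.io.IOException;

// Quick-and-dirty install timer (a sketch, not our actual tooling).
// Assumes adb is on PATH and exactly one device is connected.
public class InstallTimer {
    public static void main(String[] args)
            throws IOException, InterruptedException {
        String apk = args.length > 0 ? args[0] : "app-release.apk"; // illustrative name
        long start = System.nanoTime();
        Process p = new ProcessBuilder("adb", "install", "-r", apk)
                .inheritIO()   // show adb's own progress output
                .start();
        int exitCode = p.waitFor();
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("adb install exited %d after %.1f s%n", exitCode, seconds);
    }
}
```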

It turns out we could take our old GearVR development platforms – the Samsung S5 “Moonlight” phones (the precursors to the Note 4, which had their own headsets) – and use them as development devices for our current project. In fact, since the S5s came with USB 3 ports, we did some investigation into whether it was possible to enable a high-speed USB 3 connection. Turns out, it is. (Thanks to Nick Fox-Geig for pointing out the USB 3 ports!)

If you connect an S5 with a USB 3 cable, you’ll need to explicitly connect it as a USB 3.0 Media device – this option only appears if you are using a USB 3.0 cable plugged into a USB 3.0 capable socket.

I should note that this is a bit deceptive, as the USB 3.0 capability will auto-shut off after about 5 minutes.

I should also note that when we were iterating on the video itself, we had a specialized video viewer app: the art department would upload the video to the device, and the app would look for that video and play it. We’ve since made this part of our standard review pipeline whenever a largish video is being iterated on – that way the dev team does not need to get involved.
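
For illustration, the core of such a viewer can be tiny. This is just a sketch, not our actual app – the file name and location are made up, and it assumes the art team pushes the clip with adb and that the app can read external storage:

```java
import java.io.File;

import android.app.Activity;
import android.net.Uri;
import android.os.Bundle;
import android.os.Environment;
import android.widget.VideoView;

// Sketch of a drop-in video review app (names and paths are made up).
// Assumes the art team pushes the clip with:
//   adb push review.mp4 /sdcard/Movies/review.mp4
// and that the manifest requests READ_EXTERNAL_STORAGE.
public class VideoReviewActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        VideoView view = new VideoView(this);
        setContentView(view);

        File movie = new File(
                Environment.getExternalStoragePublicDirectory(
                        Environment.DIRECTORY_MOVIES),
                "review.mp4");
        if (movie.exists()) {
            // Play whatever the art department last pushed.
            view.setVideoURI(Uri.fromFile(movie));
            view.start();
        } else {
            finish(); // nothing to review yet
        }
    }
}
```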

Posted in Android, GearVR, Virtual Reality

I’ve got a new toy….an HTC Vive dev kit!

Got my HTC Vive! Sweet! After a prelim checkout, we’ve decided to give it its own space. Need to mount the lighthouses, plus extension cords (really, Valve? 3-foot cords for something that’s supposed to be mounted near the ceiling?).

The lighthouses are otherwise totally independent; you just need to run a (50′) sync cable between them.

Going to ceiling-mount the HMD cables so they drop in the center of the “play area”. The HMD connects to the PC. The controllers are (optionally) wireless. Batteries (plus spares) included. Plus chargers!

Kudos to Valve – it’s excellently packaged, with great, clear instructions (bitching about the short power cables aside). Best of all, they included Amazon product URLs for a bunch of accessories! This is the way dev kits should be delivered.

Now to let the Vive duke it out with the Morpheus!

Posted in Augmented Reality, Virtual Reality, Vive

Thoughts on Mobile VR

Most of my previous VR work and experimentation has been on the DK1 and DK2. It’s tough to get the “full” experience going on these, particularly if you’re looking for something equivalent to the rendering quality of the most recent video games – you have many, many restrictions on what you can throw at the GPU before you start dropping below 60FPS. At 60FPS you get roughly 16.7ms per frame, and a stereo render of the scene has to fit inside it; when we start requiring 90FPS that budget shrinks to about 11.1ms, so the improvements in hardware will probably just let us maintain the status quo – unless some additional innovations come down the line, like a smarter stereo rendering pipeline or SLI support.

I’ve played with Google Cardboard, and it’s better than I expected, but it’s still a poor man’s introduction to VR. Not to knock it too much – it’s a fine, portable, low-cost VR experience, but hardly the immersive experience I expect from VR. So I wasn’t thrilled when I got the job of working on a GearVR title. Meh – a nice phone with a custom plastic Cardboard variant.

I’ve since come to realize that maybe I was a bit too hasty. The rendering quality is pretty good (the Samsungs are pretty nice phones), and combined with some good-quality lenses, faster tracking hardware, and a comfortable, WIRELESS HMD, it’s actually a pretty impressive combination for VR. And while I’m looking forward to working on some Vive and CV1 titles, I’ve really come to appreciate the GearVR for a passive, wireless VR experience. Every VR virgin I’ve put in one has basically dropped their jaw with an “OMG this is Awesome!” So consider me a convert.

Posted in Android, GearVR, Virtual Reality

From out of Virtual Left Field – Magic Leap arrives!

There’s a new player in town that’s bound to shake things up in the VR/AR community. Magic Leap has pretty much come out of nowhere and thudded to earth with the subtlety of an asteroid strike.

Even if you doubt this, just look at some of the particulars. Magic Leap is a Hollywood, Florida start-up. Rony Abovitz, the eccentric President, CEO & Founder of Magic Leap, was a co-founder of surgical robotics firm Mako Surgical, which was sold for $1.65 billion in December of 2013. Now he’s hit the ground running and apparently has tech that’s turning heads and garnering a lot of capital. Magic Leap received $50 million in a first round of investment in February. What’s left a lot of folks gaping is the $542 million they got in a second round in October – with a large part of that coming from Google (corporate, not the investment arm), plus Qualcomm, Legendary Entertainment, and some other VC firms.

So now they have nearly $600 million to play with – they are hiring like mad. But what are they doing? From their press release:

With our founding principles our team dug deep into the physics of the visual world, and dug deep into the physics and processes of our visual and sensory perception. We created something new. We call it a Dynamic Digitized Lightfield Signal™ (you can call it a Digital Lightfield™). It is biomimetic, meaning it respects how we function naturally as humans (we are humans after all, not machines).

In time, we began adding a number of other technologies to our Digital Lightfield: hardware, software, sensors, core processors, and a few things that just need to remain a mystery. The result of this combination enabled our technology to deliver experiences that are so unique, so unexpected, so never-been-seen-before, they can only be described as magical.

We are building a world-class team of experience developers, and are reaching out to application wizards, game developers, story-tellers, musicians, and artists who are motivated by just wanting to make cool stuff.

So what does this mean? According to some accounts from folks who’ve seen the technology, and if you glean info from their patents, it’s apparently a very realistic projection system that either projects onto something over the eyes or actually into the eyes. As creepy as that sounds, it’s called a Virtual Retinal Display, and I remember work being done on it at the Human Interface Technology Lab at the University of Washington in the ’90s. Couple that with some sort of head-mounted display (not only to mount the projectors, but also a front-facing RGBZ camera, speakers, headphones, and a location/orientation tracker) and you’ve got a pretty awesome VR *or* AR system. It should be possible to calculate where (direction AND distance) the user is looking and tailor the display to be focused correctly – this from the reference to a lightfield signal (see Lytro). Throw in some input gloves (ideally haptic) and you’ve got a pretty awesome platform. The awesome part is that it straddles the line between the very practical (think Google Glass on steroids – a HUD that can interact with the real-world scene your eyes are currently viewing) and the very entertainment-oriented – a totally immersive VR experience with a game/VR HUD that encompasses your total field of view, or one that can project high-def movies into your eyes.

I find the most fascinating thing to be the involvement and investment by Google corporate. Given the bad press Glass has garnered in the last year, and the lack of any mention of Glass at Google I/O this year, one starts wondering. The folks at Google are smart – perhaps it’s time to spin off the tech. Florida is pretty far from Silicon Valley.

Posted in Augmented Reality, Graphics Hardware, Virtual Reality

Android 5.0 “Lollipop” debuts the OpenGL-ES 3.1 API

Android 5.0 – Lollipop – arrives. And it’s got lots of developer goodies in it.

ART Runtime

Google has replaced Dalvik (the Java-esque runtime, a just-in-time compiling virtual machine) with ART – the new Android RunTime – an ahead-of-time compiling virtual machine. This should make apps faster, and it also brings 64-bit support in automatically for Java apps. Native code will need to be recompiled, but the new NDK adds 64-bit support for all platforms. 64-bit is important because it frees you from the roughly 3GB usable memory limit of a 32-bit address space.
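
If you want to sanity-check which ABIs a device actually exposes (and thus whether your 64-bit native libraries will get picked up), Lollipop adds a Build.SUPPORTED_ABIS array. A minimal sketch (class name is mine):

```java
import android.os.Build;
import android.util.Log;

// Minimal sketch: log the ABIs this device supports (API 21+ only).
// On a 64-bit Lollipop device you should see arm64-v8a (or x86_64)
// listed before the 32-bit ABIs.
public class AbiLogger {
    public static void logSupportedAbis() {
        for (String abi : Build.SUPPORTED_ABIS) {
            Log.i("AbiLogger", "Supported ABI: " + abi);
        }
    }
}
```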

OpenGL-ES 3.1 and Android Extension Pack

The Android 5.0 SDK is what devs will use to develop for Android Lollipop, and alongside OpenGL-ES 3.1 there’s a new “Android Extension Pack” (AEP) that will help game devs take advantage of newer graphics hardware – hardware increasingly shared with PCs. Out of the box with ES 3.1 you get support for compute shaders, much better instancing support, new compressed texture formats, better texture size support, and more renderbuffer formats.
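
If you want to gate the new features at runtime, a minimal sketch might look like the following (the class and method names are mine, not from any official sample) – note that glDispatchCompute has to be called on a thread with a current ES 3.1 context:

```java
import android.app.ActivityManager;
import android.content.Context;
import android.content.pm.ConfigurationInfo;
import android.opengl.GLES31;

// Sketch: gate ES 3.1-only features at runtime (class/method names mine).
public class Gles31Support {
    // No GL context needed for this check.
    public static boolean hasGles31(Context context) {
        ActivityManager am = (ActivityManager)
                context.getSystemService(Context.ACTIVITY_SERVICE);
        ConfigurationInfo info = am.getDeviceConfigurationInfo();
        return info.reqGlEsVersion >= 0x00030001; // major 3, minor 1
    }

    // Must be called on a thread with a current ES 3.1 context, with a
    // compute program already compiled and linked into programHandle.
    public static void runComputePass(int programHandle) {
        GLES31.glUseProgram(programHandle);
        GLES31.glDispatchCompute(16, 16, 1); // work-group counts, not sizes
        GLES31.glMemoryBarrier(GLES31.GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
    }
}
```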

With the AEP you’ll get access to newer features of the underlying graphics hardware that aren’t in the OpenGL-ES 3.1 spec. Intel HD Graphics, Adreno 420, Mali T6xx, PowerVR Series 6, and NVIDIA Tegra K1 should all have some extensions supported, so you might find support for tessellation and geometry shaders (likely on most), ASTC texture formats, new interpolation models, etc.
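
Detecting the AEP itself is easy – it’s exposed both as a system feature and as a single aggregate GL extension string. A rough sketch (class name mine):

```java
import android.content.Context;
import android.content.pm.PackageManager;
import android.opengl.GLES20;

// Sketch: two ways to detect the Android Extension Pack (class name mine).
public class AepSupport {
    // Via PackageManager -- no GL context needed.
    public static boolean hasAepFeature(Context context) {
        return context.getPackageManager()
                .hasSystemFeature(PackageManager.FEATURE_OPENGLES_EXTENSION_PACK);
    }

    // Via the GL extension string -- requires a current GL context.
    public static boolean hasAepExtension() {
        String ext = GLES20.glGetString(GLES20.GL_EXTENSIONS);
        return ext != null && ext.contains("GL_ANDROID_extension_pack_es31a");
    }
}
```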

This brings Android graphics on par with the capabilities of PCs and consoles – it’s now nearly the same API, just not as fast (yet). This will change.

Posted in Android, Graphics Hardware, OpenGL

Android OpenGL-ES 3.0 surpasses 25%

OpenGL-ES 3.0 is now on over 25% of Android devices.

[Chart: OpenGL-ES version distribution over time, from the Android Dashboard]

The folks at Google must have been busy the last month – they forgot to update the Dashboard for October – which is why there’s a gap between the last two points. (Yes, somebody noticed). With the release of Lollipop we should soon start to see OpenGL-ES 3.1 numbers as well.

Posted in Android, OpenGL