When the $99 GearVR hits the market this year and folks learn that they can get a great VR experience from it, I really think it's going to take off and build momentum. With the obvious (to consumers) advantage of having Netflix and Hulu and games available on your GearVR, that extra nudge to try VR and get converted into a VR enthusiast isn't that tough a call. Pretty much *anyone* who's tried a GearVR with good VR content comes away wanting one. It takes entertainment to a whole other level. Great for consumers. Great for content creators. Bad for VR devs.
Developing for GearVR is a monumental pain. Take the disadvantages of Android programming in general, then throw in the fact that you're testing on a device that you not only have to upload the APK to, but that you have to strap to your face to see if anything actually looks right. Add the fact that you do NOT have a live USB connection to get the debug log spew while the phone is docked in the headset; you're forced to either pull the log off the device after a run or (better) create a WiFi connection to your device and watch the spew scroll by. You eventually learn a bunch of tricks to ease the pain, but in the end you'll still need to strap it on to do final testing.
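For what it's worth, the WiFi log-watching trick can be done with stock adb. A minimal sketch, assuming the phone and dev machine share a WiFi network; the device IP and the `Unity` log tag are placeholders for whatever your setup uses:

```shell
# Hypothetical device IP -- substitute your phone's address on the local WiFi.
DEVICE_IP=192.168.1.42

wireless_logcat() {
  adb tcpip 5555                  # with the phone still on USB, switch adbd to TCP mode
  adb connect "$DEVICE_IP:5555"   # now unplug the cable and dock the phone in the GearVR
  adb logcat -s Unity             # stream the log spew over WiFi, filtered to one tag
}
```

Once connected, `adb logcat` keeps streaming while the headset is on someone's face; if the phone drops off the network, re-run the `adb connect` step.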
Don’t get me wrong, I love the VR aspects of GearVR – Samsung and Oculus are pushing out a high-quality consumer VR product, and its mobility blows me away. Back when I was in Developer Relations for Intel, I had to tell my in-laws that I went to game companies and made the game code better – that's easier to understand than optimizing rendering pipeline operations, reducing the number of samplers, ruthless culling, optimizing cache line alignment, or multi-threading a rendering engine. Now all my in-laws know I produce “Virtual Reality Experiences,” a claim quickly followed by a GearVR demo. They get it right away. It’s an awesome platform for mobile VR (and regular video entertainment and games as well). Did I mention that developing for it is a real pain?
So one of the inevitable steps of GearVR development is uploading the APK to the device. When we had a huge video file, we were smart and just copied it to the phone and loaded it programmatically, not making it part of the APK until the very end. But eventually you need to turn off all the dev shortcuts and start testing the entire APK in near-final form. Build-and-upload time becomes critical, especially at 3am.
One thing we did was try to choose the devices that could upload the fastest. It turns out that, while its GPU was slow, if we were interested in how things behaved and looked rather than how they performed, we could get away with using an S5 “Moonlight” and running it in USB3 mode.
I did some performance testing and here’s a graph of relative performance. Note that prior to the S6, there could be hardware (incl. GPU) variations within a model that would give wildly varying performance, including upload speeds. (Only one variant of the Note 4 was tested.) So if you end up in the test-edit-upload-test cycle, you might find you can save yourself some time and frustration by optimizing upload speeds to your testing devices.
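If you want to compare your own devices, a rough sketch of how to time an install per device with stock adb and the shell's `time` builtin; the serial and APK path are placeholders (serials come from `adb devices`):

```shell
# Time a full reinstall of an APK to one specific device.
# Both arguments are placeholders: a device serial and a path to your APK.
time_upload() {
  local serial=$1 apk=$2
  time adb -s "$serial" install -r "$apk"   # -r replaces an existing install
}
```

Run it once per attached device against the same APK and compare the wall-clock times; that's essentially all our "relative performance" comparison amounted to.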
“Be prepared. Luck is truly where preparation meets opportunity.”
I am giving a talk at the NYVR Meetup about how to maximize your chances of getting a job in the VR industry. Since it’s pretty much a brand-new industry, hardly anyone has experience. This means that you can maximize your chances by doing some groundwork now.
Here are the basic tenets:
- Hang out with people who are already doing what you want to do.
These folks are your intellectual stimulation. They know stuff – take the opportunity to learn from them. Informal groups, meetups, conferences, hackathons, or your own team are all ways to hang out with like-minded folks.
- Educate yourself on the topic. Become the expert.
Never turn down an opportunity to learn. Find out the answers to questions. Push yourself to learn, ideally by building a real demo.
- Commit to working on a project. And finish it.
Ideally as part of a group.
The tools you should be using (learning) are:
Unity (C#), Unreal (C++), Maya, Max, Photoshop, VideoStitch.
And the demo should be running on:
Oculus, Vive, GearVR, Cardboard, stereo/360 video rigs
- Share your results
Publish your results: blog, webpage, Facebook, YouTube, etc. Don’t forget to show your passion when you talk about what you’ve done. If your passion fails to show through, you won’t be a convincing hire.
“Experience is what you get when you didn’t get what you wanted. And experience is often the most valuable thing you have to offer.”
Hardly anybody in VR has multi-year on-the-job experience – so what we look for is self-driven folks who’ve gone the extra mile to gain experience on their own time.
I talked about two videos folks should watch.
At Oculus Connect, Samsung finally finalized its consumer GearVR plans – they are coming out with an updated $99 GearVR that will universally support any of Samsung’s four new flagship phones: the Galaxy Note 5, the Galaxy S6, the Galaxy S6 Edge, and the Galaxy S6 Edge+. They’ve added some differentiating contours to the trackpad, so there's less chance of fumbling around trying to find it. They’ve gotten rid of the overhead strap and made the side straps a little softer.
Plus Hulu, Netflix, Fox, Lionsgate, Twitch, Vimeo, and Facebook join Oculus Video, supporting that movie-on-your-face concept. Lots of games too, including some retro games from Oculus Arcade (we'll have to see how that works) and, of course, Minecraft. And I’d be remiss if I didn’t mention that the Samsung announcement had a few seconds of Framestore’s ‘Gear VR Test Drive’ video.
When you watch someone using HMD VR, you see them looking at stuff you can’t see. You see their arms moving, you hear them laugh maniacally or scream in terror… It’s like watching someone high on some drug. VR is the drug that’s potentially more addictive than cocaine.
TLDR: I’m going to rag on Google’s 360° action flick “Help” as the acme of horrible 360° mobile filmmaking.
I’m not a huge fan of 360° immersive videos. They are terribly hard to do right – and by “right” I mean entertaining, interesting, and intuitive. Unfortunately, I see a trend among the companies pushing 360° video of labeling it a “VR” experience. In reality, most of the 360° videos I see are utter crap, with prominent stitching artifacts and horrible focus on the action (or the utter lack of any). I see this when traditionally focused (i.e. film and TV) directors try to create a 360° experience, making the naive assumption that what they know works and that 360° video is just some new tech like 3D – that you can do what you’ve always done in the “new” medium and it’ll turn out OK. Sorry, no. That approach will work with 180° videos – in fact I think that's the ultimate direction a lot of “VR” video will take – but with 360° you have to think about the experience in a totally new way.
It all comes down to vision and talent – the ability to imagine yourself in the center of an interesting experience and decide how you are going to present it. This can be greatly successful – if it’s done so that the user can easily and intuitively focus on the interesting parts. It works best when it’s an immersive experience with either one direction of interest or an entire view that's interesting.
Some examples of getting it right are:
WarCraft: Skies of Azeroth – This is the full deal: you are just riding on a big eagle, looking around (like you would if you really could ride an eagle). It’s the total environment that’s interesting.
AirPano’s The Land of Bears, Kurile Lake, Kamchatka, Russia, which naturally lets you focus on the bears as you fly over them from a drone’s perspective. It’s good because you get to focus on the interesting bits – they're right there, easily trackable. You want to follow a particular bear? It’s as easy as turning your head.
Google’s own Spotlight Stories’ A 360° World, which is undeniably cute, is easy to follow and an interesting Cardboard experience, but would probably have been better with a traditional film focus keeping the action in front of you. Still, it’s not bad and it’s easy to follow the main action.
Some horribly done ones are:
Pretty much any 360° concert video. Plop a sphere of GoPros off to the side of the stage and what do you get? A weird video where you get 90° of interesting content, a big view of the crowd, and a slice of the backstage. Yes, you can rotate around, but I don’t go to a concert to spend much (if any) time looking at the crowd or the stage hands – I come to see the performer. And the tiny sliver I see of them isn’t really a great experience.
Nike’s The Neymar Jr. Effect – a pukefest that breaks a lot of the good VR experience rules. Really who thought up this shit?
You play as the “head” of a Brazilian soccer player – I mean really… they chopped his head off, placed the viewpoint there (see the image), and you get to ride along – bouncing on his shoulders – able to swing through a full 360° of nauseating motion. All you control is the orientation; you bounce along while he skitters down the field and scores a goal. And the beginning is an entirely 2D introduction with helpful arrows pointing you back to the front. Nike probably spent a ton of money on this – guess what – it’s utter crap.
And finally there’s the thing that drove me to write this, Google Spotlight Stories’ latest release – from director Justin Lin – the live-action “Help”. Now don’t get me wrong, there are things I really like about this film – it’s interesting, the production quality is great, the acting (what you can see of it on a phone) is good, and the FX are well done. Unfortunately there are two major flaws that make it really hard to like as an experience.
Most of the time you are placed in the middle of the action. Which really sucks, since there are people running away from the monster in front of you and the monster chasing them behind you. You can focus on one or the other, but you are always seeing only half the story. This is absolutely NOT the way to do an immersive experience. Immersive does NOT mean I have to swing my head wildly back and forth like a terrier trying to break the neck of a chicken. And yes, I could watch from one orientation and then from the other, but that would be like watching an argument by first hearing one person’s entire monologue, then the other’s. Neither half is satisfying by itself. So placing the viewer between the antagonist and protagonist is an utterly bad immersive experience.
So much for the viewing experience – what made it much worse was trying to do this on my phone. It wasn’t my head I was swinging back and forth, it was my phone. Wearing an HMD would have made it a better experience. With a phone, I have a tiny little window through which I have to peer to see what’s going on. I was in a swivel chair, wildly spinning back and forth, squinting at the tiny screen trying to catch sight of the action. That made the entire experience horribly taxing, frustrating, and non-immersive. Apparently I’m not the only one, either. Didn’t anyone actually test this setup before releasing it?
The problem is that when there is just one interesting bit of action, I don’t want it spread all over a virtual sphere. In “Help” there was just the monster chasing the folks for a bit – placing me between them did not make it a 360° experience. You can’t break a single bit of action into a panorama – at least no one has done it successfully yet. This is where traditional film and TV direction does not translate to any sort of VR experience: you can’t take the traditional single point of interest, plop a spherical camera rig nearby, and expect it to be instantly “immersive”. This is why I think it’ll morph into a 180° experience – closer to traditional cinema, but giving me the (small) ability to shift my focus a bit if I want to see more. Really, with two years of planning, you’d think the execution would have been better thought out.
Don’t get me wrong, I do think we’ll eventually figure out how to consistently create an immersive VR action experience. Full disclosure: the company I work for makes stereo 360° videos all the time. But at least they are either focused on the entire environment, or there’s a single point of interest that moves about the scene. We try *really* hard to make sure the user experience is good – that it’s not taxing or frustrating. It kills me when I see companies attempt the whole “immersive 360° VR experience” while being totally clueless about how to pull it off. Even Google, who I applaud for trying this stuff out, apparently can’t claim success 100% of the time.
One of our recent GearVR projects involved an APK with over 350MB of video files embedded in it. Needless to say, it got really tedious to make a change and upload the APK – which took about 2½ minutes on average. This really ate into our debugging process. And before you say it: yes, we used a pre-loaded video file for most of development (this is a best practice for app development), but when you are working on the “final” build, you’ve turned off all the shortcuts you implemented to speed development. Upload was only a small part of the overall build process, but one of the tweaks we decided on in the project post-mortem was to do anything that would speed up the edit-debug cycle. It turns out that the variety of devices we have on hand exhibits a wide range of upload speeds.
It turns out that we could take our old GearVR development platforms – the precursor to the Note 4 – the Samsung S5 “Moonlight” (which had its own headset) and use them for our current project. In fact, since the S5s came with USB 3 plugs, we investigated whether it was possible to enable a high-speed USB 3 connection. Turns out, it is. (Thanks to Nick Fox-Geig for pointing out the USB 3 ports!)
If you connect an S5 with a USB 3.0 cable, you’ll need to connect it as a USB 3.0 media device – an option that only appears if you are using a USB 3.0 cable in a USB 3.0-capable socket.
I should note that when we were iterating on the video, we had a specialized video-viewer app: the art department would upload the video to the device, and the app would look for it and play it. We’ve since made this a standard part of our review pipeline whenever a largish video is being iterated on – that way the dev team doesn’t need to get involved.
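The push-the-video-separately workflow boils down to one adb command. A hedged sketch: the `/sdcard/Movies` destination and the `review.mp4` filename are assumptions standing in for whatever path your viewer app actually scans:

```shell
# Push a work-in-progress video to where the viewer app looks for it.
# The destination path and filename are hypothetical -- match your app's scan path.
push_review_video() {
  local video=$1
  adb push "$video" /sdcard/Movies/review.mp4
}
```

Since `adb push` transfers only the video, the artists can iterate without anyone rebuilding or re-uploading the (much larger) APK.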