A 20-megapixel VR display and foveated rendering

tl;dr: Some companies (including Google) have announced soon-to-be-available (2018) very-high-resolution VR/AR displays. To deliver on these, two problems need to be overcome. First, there isn’t enough memory bandwidth or GPU compute available (to be solved with foveated rendering techniques), which in turn adds the second problem: gaze tracking to control the foveated rendering. Turns out that both of these are understood, solvable problems. Coming soon to many HMDs.

In a previous post from two years ago I estimated that a VR “Retina” display would require a resolution of about 7410×6840/eye. That was based on the FOV specs of the first consumer crop of HMDs. I missed the first subtle reveal (March 2017) that Google is working with a partner on a 20 Mpixel/eye OLED display. While this is excellent news, how does it measure up to the guesstimates of my two-year-old post? And more importantly, what does it portend for the future?

While (VP of VR at Google) Clay Bavor’s SID Display Week keynote (March 2017) video was a bit light on specifics, we can guess at some of the values. My original calculations were for a 130°/eye horizontal FOV; we’ll keep my 120° for the vertical FOV. The new display is 200° total horizontal (up from the original 190° total FOV), which gives us a 140°/eye FOV. For a Retina display (at 57 pixels/degree) this means we’d need a resolution of 7,980×6,840. The accepted resolution value for the human fovea is 60 pixels/degree, but we’ll use my original 57 for now.

So plugging our 140° horizontal and 120° vertical FOV values into 20 Mpixels gives us a guess of:

34.5 PPD for the new display (PPD = Pixels per Degree)

Or a display of about (rounding to more display-normal values) 4960×4096 – roughly a 6:5 aspect ratio.

4960×4096 = 20.3Mpixels.
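For the curious, here’s that arithmetic as a quick Python sketch (the FOV and pixel-count values are my guesses from above, not announced specs):

```python
import math

h_fov, v_fov = 140, 120   # guessed per-eye FOV, in degrees
pixels = 20e6             # the claimed 20 Mpixel/eye display

# Assume uniform pixel density: pixels = (h_fov * ppd) * (v_fov * ppd)
ppd = math.sqrt(pixels / (h_fov * v_fov))
print(f"{ppd:.1f} PPD")                               # ~34.5
print(f"{h_fov * ppd:.0f} x {v_fov * ppd:.0f}")       # ~4830 x 4140, pre-rounding
print(f"Retina target: {h_fov * 57} x {v_fov * 57}")  # 7980 x 6840 @ 57 PPD
```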

Now, onto practical matters. Pushing 40 Mpixels per frame (20 Mpixels times two eyes) is a sh*t-ton of data, especially at the refresh rates (90-120Hz) we need for good VR – at a minimum around 40GB/s. Current HDMI 2.1 and DisplayPort bandwidths can come close, but, as is noted in the video and by MIL-STD-1472, there’s “central vision” and “peripheral vision”.
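Here’s a back-of-the-envelope sketch of the raw, uncompressed pixel rate, assuming plain 24-bit color (deeper color depth, HDR, and link overhead all push the real number higher):

```python
pixels_per_eye = 4960 * 4096   # ~20.3 Mpixels, the guess from above
eyes = 2
bits_per_pixel = 24            # assumed: plain 8-bit RGB, no HDR

for hz in (90, 120):
    gbytes = pixels_per_eye * eyes * hz * bits_per_pixel / 8e9
    print(f"{hz} Hz: {gbytes:.1f} GB/s raw")   # ~11.0 and ~14.6 GB/s
```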

Central vision is the high-def, color- and feature-resolving area – the part used for “gist recognition” – while peripheral vision is less color-perceptive and much more sensitive to motion detection, i.e. temporal changes in “pixel” intensity. It also turns out that the information your brain interpolates from each of these fields is different, so this can be used to fine-tune the perceived scene – it’s not all just blurrier pixels. It’s nicely illustrated by this video:

Look at one ball and the other seems to follow the contours. From the Illusion of the Year 2012 finalists.

These regions are also called the “Window” (for central vision) and the “Scotoma” (for peripheral vision).

From: Adam M. Larson and Lester C. Loschky, “The contributions of central versus peripheral vision to scene gist recognition,” Journal of Vision, September 2009.

In general, peripheral vision loses resolution of color, detail, and shape, which brings us to “foveated rendering” – rendering just the Window in high resolution and the Scotoma at lower resolution, typically with some sort of blending operation in between.

from the Microsoft Research Paper

And in practice it looks something like this:

It turns out that this has been an area of research for a while, and with the resolution needs of VR and AR, it has seen quite an uptick in research papers.

In general, most techniques generate three rendering regions: the central (high-resolution) region, the outer (lower-resolution) region, and an intermediate, interpolated region. A surprising result is that some latency in gaze tracking is acceptable, and the particular technique used to render the foveated area has a significant effect on the perceived smoothness of the experience.
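As a minimal sketch of the three-region idea (the radii and the quarter-resolution floor here are hypothetical values for illustration, not numbers from any of the papers):

```python
import numpy as np

def foveation_scale(width, height, gaze_px, inner_r=0.15, outer_r=0.35):
    """Per-pixel resolution scale: 1.0 inside the central Window,
    0.25 out in the Scotoma, linearly blended in between.
    Radii are fractions of screen width (hypothetical values)."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze_px[0], ys - gaze_px[1]) / width
    t = np.clip((dist - inner_r) / (outer_r - inner_r), 0.0, 1.0)
    return 1.0 - 0.75 * t

# e.g. drive a renderer's shading rate from the tracked gaze point
scale = foveation_scale(1280, 1024, gaze_px=(640, 512))
```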

Techniques that are contrast-preserving and temporally stable gave the best results. The recent Nvidia and Google papers highlight a few different techniques, and the Google paper also introduces some post-processing to better accommodate the lens distortion of HMDs when using foveated rendering. Turns out that simply LERPing between the regions isn’t at all the best method.

If you’re interested in reading more in depth:

Understanding Pixel Density and Eye-Limiting Resolution

Understanding Foveated Rendering

A multi-resolution saliency framework to drive foveation

Latency Requirements for Foveated Rendering in Virtual Reality

Perceptually-Based Foveated Virtual Reality

Introducing a New Foveation Pipeline for Virtual/Mixed Reality

This all sounds great, but there’s one other thing we need to get this to actually work – eye or gaze tracking. After all, you can move your eyes (and your gaze) without moving your head. To make foveated rendering work you need to know just where on the display the user is gazing – and know it in time to react to it. (The Nvidia paper states that they found up to a 50-70ms lag in updating the Window acceptable.) Since the gaze can land just about anywhere in the FOV, this requires scanning the eyes to detect gaze direction. This is also a very active field of research,

Unlocking the potential of eye tracking technology

with significant progress. (See products by Tobii, SMI, and Fraunhofer’s bidirectional OLED microdisplay.) It’s pretty much a solved problem; it just needs to be put into HMDs.

The basic technique is to shine infrared light on the eyes (inside the HMD) and look at the reflections. It’s some fairly simple math to extract the pupil’s location and deduce the gaze direction from the glints of light reflected off the eye – all doable with some simple vision software. The output is then mapped to a pixel location on the HMD display.
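A minimal sketch of that last mapping step, assuming a simple angle-proportional lens model (a real HMD would substitute a per-user calibration and its lens distortion profile for these constants):

```python
def gaze_to_pixel(yaw_deg, pitch_deg, width=4960, height=4096,
                  h_fov=140.0, v_fov=120.0):
    """Map a gaze direction (degrees off the display's optical axis)
    to a display pixel, assuming angle maps linearly to position."""
    x = (yaw_deg / h_fov + 0.5) * width
    y = (pitch_deg / v_fov + 0.5) * height
    return (min(max(int(x), 0), width - 1),
            min(max(int(y), 0), height - 1))

print(gaze_to_pixel(10.0, -5.0))   # a bit right of, and above, center
```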

Did you know that eye tracking is already in Windows? Microsoft’s Eye Control came with the Windows 10 Fall Creators Update. It only works with an external tracker from Tobii, but it’s there.

So, there you have it – you can soon expect to see foveated displays, which introduce the requirement for gaze tracking – but gaze tracking is also desirable for VR/AR in its own right. So we’ll soon have very nice HMDs that give us about half-retina resolution, with gaze tracking thrown in for free. VR is about to get a ton better.


First view of the Magic Leap HMD drops, let’s guess the FOV

So Magic Leap just announced its HMD, the “Lightwear” (plus accessories). I was really hoping for something a bit more wraparound and less cyberpunk. Immersion is really helped by a good field of view (FOV), but immersion might not be what ML is focusing on. The FOV shown looks kinda small. Can we guesstimate the FOV?

From an article in Rolling Stone, we get this quote:

Magic Leap’s Lightwear doesn’t offer you a field of view that matches your eyes. Instead, the Magic Leap creations appear in a field of view that is roughly the shape of a rectangle on its side. Because the view is floating in space, I couldn’t measure it, so I did the next best thing: I spent a few minutes holding out first a credit card in front of my face and then my hands to try to be able to describe how big that invisible frame is. The credit card was much too small. I ended up with this: The viewing space is about the size of a VHS tape held in front of you with your arms half extended. It’s much larger than the HoloLens, but it’s still there.

Ahh! Some numbers. Half of my arm’s length, holding a VHS tape, is 318mm. A VHS tape is 187mm long. Using that trig I learned and never thought would be useful in real life, we get a half-angle of 16.4°, which we double for an HMD FOV of 32.8° – roughly the same as the HoloLens (35°). Since the article also says the FOV is larger than the HoloLens, I must have a longer arm than the writer of the article. The takeaway is that it’s comparable, and not a great increase in FOV at all. Later in the article Abovitz admits that they are working to improve the FOV, probably in later versions. For reference, most folks sit so that their desktop monitor fills about a 70° FOV.
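The back-of-the-envelope trig, for anyone who wants to plug in their own arm (the measurements are mine, not official specs):

```python
import math

arm_half = 318.0   # mm: half my arm's length
vhs_long = 187.0   # mm: the long edge of a VHS tape

half_angle = math.degrees(math.atan((vhs_long / 2) / arm_half))
print(f"FOV ~ {2 * half_angle:.1f} deg")   # ~32.8, vs ~35 for HoloLens
```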

Basically, the setup is a battery/computer pack you wear on a belt or in a pocket, a single controller (very Oculus-puck-like), and the headset. It’s said to be very comfortable. No specs on CPU, GPU, display, or battery life. Now to get my hands on one.


Samsung HMD Odyssey for Windows Mixed Reality – a review

While I wasn’t exactly *thrilled* to learn that I’d need to add yet another HMD to the list, I was in fact really eager to play around with Microsoft’s Mixed Reality platform. Earlier this week I finally received my first MS MR headset – the $499 Samsung HMD Odyssey. I chose the Odyssey for a couple of reasons, the largest two being:

  1. It has the “best” resolution of any of the MR HMDs – 1440×1600/eye, 90Hz OLED displays with a wide (110°) FOV.
  2. It comes with controllers.

Installation:

Anyone who’s done as many HMD setups as I have has come to love the Vive and revile the Oculus. I was somewhat dreading what I’d find, but for the most part it went OK. The most annoying (and time-consuming) part was getting my PC updated to the Windows 10 Fall Creators Update, version 1709, without which you’ll never get to do anything with Windows MR. After more than one false start – and finally using the Windows 10 Update Assistant (get it here) to force the update – many hours of effort and downloading later, I got it installed. The Odyssey comes with its own bit of software, very much like a Vive or Rift would have. After the Win10 update, plugging everything in nearly worked on the first try, with the exception that I had to install Bluetooth drivers to actually use the MR controllers (a requirement for all of MS’s MR controllers). That done, everything worked, and much like a Vive, it first had me mark out the play area (seated is also an option).

Ergonomics:

The Odyssey’s controllers are Samsung’s own, and have a slightly (IMO) nicer design than the stock ones. They are about as clunky as the Vive wands – which means not nearly as nice as the Oculus Touch or the upcoming Vive “Knuckles” controllers.

The HMD is a bit front-heavy, but it has good integrated headphones and a feel somewhat like the Sony VR HMD. Unlike the Sony, the lenses don’t have a dolly that lets you move them out from the HMD frame, nor do they flip up like some of the other MS MR HMDs do – probably the biggest missing feature. There’s some soft rubber around the nose, the edges of which start digging into my nose after wearing the headset for a bit – if it annoys me enough, either some tape or a trim with scissors should fix that. It could be better balanced, lighter, and wireless, but it’s not a lot different from many of the other HMDs out there.

Tracking:

HMD

The HMD tracks fine, even when you are facing away from the monitor. Since it needs light to track successfully, it’s likely doing some image processing to track objects in the room as you move and rotate, in addition to using the built-in compass, gyroscope, and accelerometer. Overall (in the play area) it’s pretty accurate, and the boundaries of the play area pop up when you (or a controller) get too close to an edge.

Controllers

I was a bit worried that the inside-out tracking would suck, but it’s much better than I expected. Overall it’s not *quite* as accurate or fast as the Vive controllers, but it’s good enough and doesn’t get in the way – the controllers are fairly accurately tracked and the lag is minimal. They are positionally tracked by the HMD’s cameras, and that tracking extends just outside the visible display range. Rotational and accelerometer information is sent by the controllers themselves, so orientation is always tracked.

Overall:

The visual fidelity is great, and the tracking has sold me on inside-out as a viable method for tracking controllers or (eventually) hands. I’m not in love with the ergonomics, but it’s usable for extended periods of time. It’s ready for social interaction and anything not requiring a fine degree of control or extended keyboard input – those will come later, but this is certainly a viable start.


Microsoft Mixed Reality announcement

Microsoft held a press conference on Oct. 3rd in San Francisco, where they announced a few interesting things about their OS support for mixed reality, due to go public with a Win10 update mid-month.

  • They bought AltspaceVR
  • There’s a slew of VR/AR HMDs coming out in a few weeks, priced around $400-$500
  • Steam games will be available
  • All in all, it’s starting to look like VR/AR is becoming more mainstream.

Here’s the video



The Mars Bus keeps rolling along…100 awards and counting!

I’m really surprised at how well the Mars Bus has been received. It was a cutting-edge VR project using technology never put together before, and so far the total is a staggering 100 awards – many of them the highest prize awarded, including 5 Gold Lions from Cannes. (In case you’re wondering why I care – I was the dev lead on this project.)

5x Gold Lions Cannes Lions 2016

8x Silver Lions Cannes Lions 2016

5x Bronze Lions Cannes Lions 2016

1x Innovation Lion Cannes Lions 2016

Grand Clio Awards 2016

2x Gold Clio Awards 2016

4x Silver Clio Awards 2016

2x Gold IAB MIXX Awards

2x Grandy ANDY Awards 2017

8x Gold ANDY Awards 2017

1x Silver ANDY Awards 2017

1x Gold Epica Awards 2016

8x Gold ADC Awards 2017

2x Silver ADC Awards 2017

1x Bronze ADC Awards 2017

Gold Ciclope 2016

2x Yellow Pencil D&AD 2017

2x Graphite Pencil D&AD 2017

1x Wood Pencil D&AD 2017

3x Gold The Hollywood A List Awards

17x Gold One Show 2017

1x Silver One Show 2017

1x Winner Webbys 2017

2x Peoples Choice Webbys 2017

2x Grand Prize New York Festival

6x Gold New York Festival

3x Silver New York Festival 

3x Winner Project Isaac Awards

3x Gold Shots Awards 2016 

1x Winner AdAge Creativity Awards


Virtual Computing

Mike Abrash, Chief Scientist at Oculus, gave, as always, an inspiring keynote at this year’s F8 conference. In it he coined the term “virtual computing” to glom together VR, AR, the cloud, and how they’ll all mix together. Best quote:

20 or 30 years from now, I predict that instead of carrying stylish smartphones everywhere, we’ll wear stylish glasses. Those glasses will offer VR, AR, and everything in between, and we’ll wear them all day and use them in almost every aspect of our lives. The distinction between VR and AR will vanish. The real and virtual worlds will just mix and match throughout the day according to our needs.

The only part I disagree with is the 20-30 years – it’s too powerful an enabling technology to wait that long. Think of something 20x more powerful and useful than a smartphone, and it gets rid of the phone. Five or six years before we start seeing early commercial adopters; in 10 years it’ll be unthinkable not to have them. In 20-30 years they’ll just look way cooler.

Abrash talking at F8 2017 – Skip to 48:00


The Soul of a New Machine? 1st move – Avatar firm gets VC Investments

(To repurpose a title from Tracy Kidder.) A number of years ago I was dev lead at a groundbreaking company called LifeF/X, where we made fairly believable avatar heads – ones that were pretty far down the uncanny valley – believable enough that you thought you were talking to video of a real person, albeit a person who had some issues. We had many things in our favor: a low render target size (about 300×300 pixels for the face) plus some really advanced facial animation software – nothing like it exists today, unfortunately.

The lead scientist/facial animator behind that software was Dr. Mark Sagar. After LifeF/X spent all the VC money and folded, he went on to Sony and Weta to become their expert in facial animation. If you’ve seen Spider-Man or King Kong or Avatar, you’ve seen Mark’s work close up. He then returned to university research, where he’s been working on an extremely creepy (sorry, Mark!) AI/avatar project called Baby X.

I find it creepy because he’s creating a neural network that starts off with roughly a human baby’s level of understanding and is taught through interaction. All good, sound tech – but the fact that the model is based on a real baby’s interactions – his daughter, in fact – is a bit unnerving to me. The animation isn’t quite out of the uncanny valley, so it’s a bit creepy to watch, but the progress is real. Baby X is powered by an artificial brain with inputs layered in through an artificial nervous system, and it’s designed to be plugged in to other AI systems that may deliver higher-level thought.

I bring this up because he’s gotten the tech far enough along to attract US$7.5M in VC money for a spin-off company – Soul Machines. From the press release:

Soul Machines is a developer of intelligent, emotionally responsive avatars that augment and enrich the user experience for Artificial Intelligence (AI) platforms.

So here we see the first VC investment in a company creating AI designed for human interaction – think of your personal assistant (à la Siri, Cortana, Alexa, Google Now, etc.), but one that understands human emotions, has its own emotional state, and shows up as a human on your PDA, computer, or AR glasses – whenever you need it (or it needs you). You talk; it listens, understands, and responds. The future is getting closer.


‘The Field Trip to Mars,’ the Single Most Awarded Campaign at Cannes

“Inside ‘The Field Trip to Mars,’ the Single Most Awarded Campaign at Cannes 2016: McCann and Framestore dissect their Lockheed Martin marvel” – so reads the Adweek headline discussing the Lockheed Martin Mars Bus and its huge success at the Cannes Lions festival.

It is one of the most demanding, large-scale, mobile, group VR experiences ever created. Framestore, working with McCann NY, successfully launched Lockheed Martin’s Project Beyond: Mars Bus for the 2016 USA Science and Engineering Festival at the Washington DC convention center on April 15th, where we ran about 2,400 thrilled riders through the bus over the two days of the convention, plus took school kids out for the actual “live” VR experience the day before.


What other project involves a school bus equipped with opaque/transparent windows, transparent/opaque 4K monitors, six rack-mounted gaming PCs, its own private network, an A/C unit and diesel generator, and a suite of real-time GPS, inertial sensors, compass, and laser velocity readers? The software consisted of a custom-built data fusion application that spit out real-time velocity, heading, acceleration, and positional data to a suite of PCs running the Unreal Engine Mars Bus simulation through various “windows” onto Mars. All packed into a moving school bus. A combination that’s never been tried before – and we pulled it off.


I came up with the overall architecture of the software applications. The software team was then able to test out proofs-of-concept that let us push the limits of what had been attempted before. We quickly hit upon using Unreal running a driving simulation and needed to figure out how to make Mars drivable – both in a software simulation and in a real-world sense. The actions that the bus went through in the real world had to be simulated in the Mars sim in real time, marrying reality with simulation.


A real map of DC streets was used to generate the “drivable” area of “Mars,” making it possible to drive through the DC metropolitan area (215 sq km) while avoiding the “randomly” placed rocks and mountains of Mars. Thus the 24 kids got to drive around DC while seeing Mars go by on the windows. If the bus stopped, turned, or went over a bump, so did the kids – making it the largest mobile, real-time group VR experience I believe has ever been created. There were some highlights we wanted the kids to experience: a drive-by of the Mars Rover, a pass through a futuristic Mars Base Camp, and a Martian sandstorm – the last taking advantage of our 500-watt 5.1 sound system, which was plenty loud in an enclosed bus; the impact sounds provided a visceral feel that things were actually hitting the bus. (See the experience video.)

There was a server rack of gaming PCs in the back of the bus – four dedicated to driving the 80-inch 4K monitors, and one doing the data acquisition and data fusion, which then squirted position, velocity, and direction info out onto the network via UDP to be picked up by the five simulation PCs – the additional one showing the real-time, top-down bus position on the road map, serving as a witness to the accuracy of the simulation. An Xbox controller was attached so that the position/speed could be manually modified if needed, or if the GPS system went south. We never needed to use it. Finally, there was one PC for the “monkey” – the poor sod who had to sit in the back with the servers, in a dark tiny room on a folding chair, whose job it was to start and, if necessary, restart the experience. That PC also provided some systems monitoring, a lot of which was added at the last minute because we initially had some trouble with the PCs failing to respond. We finally added software that would ping each PC every second to make sure it was responding; the simulation status and PC status were all shown on the experience control display.
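As a rough sketch of the two plumbing pieces described above – the UDP telemetry squirt and the once-a-second liveness ping – here’s the shape of the idea in Python (the message format, port numbers, and addresses are hypothetical; the production software was a custom application):

```python
import json
import socket
import time

SIM_PCS = ["192.168.1.11", "192.168.1.12", "192.168.1.13",
           "192.168.1.14", "192.168.1.15"]   # hypothetical addresses
TELEMETRY_PORT, PING_PORT = 5005, 5006       # hypothetical ports

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def broadcast_telemetry(pos, vel, heading):
    """Send one fused GPS/inertial sample to every simulation PC."""
    msg = json.dumps({"t": time.time(), "pos": pos,
                      "vel": vel, "heading": heading}).encode()
    for ip in SIM_PCS:
        sock.sendto(msg, (ip, TELEMETRY_PORT))

def ping_all(timeout=0.5):
    """Liveness check: each PC is expected to echo the ping back."""
    status = {}
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    for ip in SIM_PCS:
        try:
            s.sendto(b"ping", (ip, PING_PORT))
            s.recvfrom(16)
            status[ip] = "ok"
        except socket.timeout:
            status[ip] = "DOWN"
    return status
```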

Once in the convention center we switched over to a canned ride, and once word got out about the experience, we had lines with well over an hour’s wait to ride the bus. The ooohs and ahhhs we got, plus the frequent clapping at the end by the thrilled riders, made the demanding three-month effort to bring it together a cherished accomplishment. Overall it was a pretty daring experience to pull off, given that we were attempting things never before attempted, using technology in ways it had never been used together.


The final result was a lot of recognition when the project was shown to the general public. The Field Trip to Mars was the single most awarded campaign at the 2016 Cannes Lions International Festival of Creativity, taking a total of 19 Lions across 11 categories:

  • 1 Innovation Lion
  • 5 Gold Lions
  • 8 Silver Lions
  • 5 Bronze Lions

It was also nominated for a Titanium Lion – one of just 22 nominations out of the 43,101 total entries across all categories from around the world.

ADWEEK also liked the bus: it won top honors from ADWEEK’s Project Isaac, winning one Creative Invention award, one Event/Experience Invention award, and the highest award, the Gravity.

In all, it was one of the most challenging projects I have ever had the pleasure to work on, and with an extraordinary team of engineers, artists, and producers working furiously during the last week, it came together and created one heck of a VR experience.

Ron’s video links for the Mars Bus:

7 Days to go

less than 24 hours to go…

The experience from inside

It was all worth it!

Framestore

Framestore’s Mars Bus page (click on credits to see who worked on it)

External sites

Lockheed Martin’s new magic school bus wants to virtually take kids to Mars

Lockheed Mars Bus Experience

Alexander Rea’s page

Anima Patil-Sabale‘s video as a passenger
