More on Space Filling Algorithms…

A few years ago I wrote about Procedurally Distributing Game Objects – essentially how to sprinkle objects in game space so that they appear randomly distributed while remaining somewhat equidistant, making the density of the distribution nearly homogeneous. These kinds of uniform distributions are frequently used in techniques built on Monte Carlo simulation, such as ray tracing. A nice paper from Pixar’s Per Christensen, Andrew Kensler and Charlie Kilpatrick – Progressive Multi-Jittered Sample Sequences – adds three new sequential distributions for use in generating space-filling distributions. The paper goes in depth on measuring the distributions, including some nice Fourier analyses – a great example of how to measure distribution and “randomness”.
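These sequences build on classic stratified (jittered) sampling: one random sample per grid cell, which keeps samples roughly equidistant while still looking random. A minimal sketch of plain jittering (not the paper’s progressive multi-jittered construction – just the starting point it improves on):

```python
import random

def jittered_samples(n):
    """n*n stratified samples in [0,1)^2: one uniformly random point
    inside each cell of an n x n grid. Samples look random but stay
    roughly equidistant, so density is nearly homogeneous."""
    cell = 1.0 / n
    points = []
    for row in range(n):
        for col in range(n):
            x = (col + random.random()) * cell
            y = (row + random.random()) * cell
            points.append((x, y))
    return points

points = jittered_samples(4)  # 16 samples, one per cell of a 4x4 grid
```

The paper’s contribution is making such sequences *progressive* – any prefix of the sequence is itself well distributed – which plain jittering doesn’t give you.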

Posted in Code, Graphics, Technology | Leave a comment

Case Study: Can XR provide any benefits in learning tasks?

I was taken with the results of a study co-run by Siemens and Daqri, which Colin Couper of Daqri brought to my attention in a talk at AWE 2018. I had been meaning to post about it, and then someone asked a question on Quora to which the study was a great answer.

TLDR: Can AR provide any benefits to education and the learning of tasks? Yes – using AR provided better efficiency, reliability, confidence, and speed, with less likelihood of error. Workers needed much less oversight during training and learned much faster and with greater confidence.

In the summer of 2016, Siemens was investigating AR and teamed up with Daqri (an innovative industrial HMD manufacturer). Among other things Siemens manufactures huge gas turbines and it takes 3 years to train up turbine fitters to work on them. Siemens wanted to see if AR could improve that process.

It was a pilot project where four engineers with experience ranging from novice to trainer used Daqri’s helmets to assemble parts of a 1,000-pound gas burner – a complex component of a gas turbine.

It was better than successful.

To summarize, using AR:

  • One novice estimated that it would have taken him a full day to assemble the burner – he did it in 45 minutes, independently (roughly a 90% reduction). The expert trainer took 40 minutes.
  • All participants noted increased confidence. When you have floating 3D diagrams and checklists (the HMD supports voice, so it’s all hands-free), they all felt more confident and “safe” about what they were doing.
  • With the AR guidelines, the chance of errors is greatly reduced. Even the experts said it provided a nice reminder of all the things they had to keep straight.

The results were a success so Siemens is expanding the study to a much larger group. WSJ Article on the study follow-up.

I think there’s something to this AR training idea.

Here’s Colin Couper’s AWE 2018 talk about the study:

Here’s a link to the study results. (sign up required)

Posted in Augmented Reality, Technology, Virtual Reality, VR/AR/XR, Windows Mixed Reality | Leave a comment

A 20Megapixel VR display and Foveated rendering

tl;dr: Some companies (including Google) have announced soon-to-be-available (2018) very-high-resolution VR/AR displays. To deliver on these, two problems need to be overcome: there isn’t enough memory bandwidth or GPU compute available (to be solved with foveated rendering techniques), which in turn adds the problem of gaze tracking to control the foveated rendering. It turns out that both of these are understood, solvable problems. Coming soon to many HMDs.

In a previous post from two years ago I estimated that a VR “Retina” display would require a resolution of about 7410×6840/eye. That was based on the FOV specs of the first consumer crop of HMDs. I missed the first subtle reveal (March 2017) that Google is working with a partner on a 20 Mpixel/eye OLED display. While this is excellent news, how does it measure up to the guesstimates of my two-year-old post? And more importantly, what does it portend for the future?

While (VP of VR at Google) Clay Bavor’s SID Display Week keynote (March 2017) video was a bit light on specifics, we can guess at some of the values. My original calculations were for a 130°/eye horizontal FOV; we’ll keep my 120° for the vertical FOV. The new display is 200° horizontal (up from the original 190° total FOV), which gives us a 140° per-eye FOV. For a Retina display (at 57 pixels/degree) this means we’d need a resolution of 7,980 × 6,840. The accepted resolution value for the human fovea is 60 pixels/degree, but we’ll use my original 57 for now.

So spreading 20 Mpixels over our 140° horizontal and 120° vertical FOV gives us a guess of:

34.5 PPD for the new display (PPD = pixels per degree)

Or a display of about 4960×4096 (rounding to more display-typical values) – roughly a 7:6 aspect ratio.

4960×4096 = 20.3 Mpixels.
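The back-of-the-envelope math is just spreading the pixel budget uniformly over the viewing angles (a sketch using my guessed FOV values):

```python
import math

# Back-of-the-envelope: spread 20 Mpixels uniformly over an assumed
# 140 x 120 degree per-eye FOV (both values guessed from the keynote).
h_fov, v_fov = 140.0, 120.0
pixels = 20e6

# Uniform density means ppd^2 * (h_fov * v_fov) = pixels
ppd = math.sqrt(pixels / (h_fov * v_fov))  # ~34.5 pixels/degree

width = h_fov * ppd   # ~4830 pixels
height = v_fov * ppd  # ~4140 pixels
```

Rounding those toward display-typical values is where the 4960×4096 (≈20.3 Mpixel) figure comes from.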

Now, on to practical matters. Pushing 40 Mpixels per frame (20 Mpixels for each of two eyes) is a sh*t-ton of data, especially at the refresh rates (90–120Hz) we need for good VR – at a minimum around 40GB/s. Current HDMI 2.1 and DisplayPort bandwidths can come close, but, as is noted in the video and by MIL-STD-1472, there’s “central vision” and “peripheral vision”.
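As a rough sanity check on that bandwidth figure (a sketch – the bit depths here are my own assumptions, since raw bandwidth depends heavily on color depth):

```python
def display_bandwidth_gbs(pixels_per_eye, eyes, hz, bits_per_pixel):
    """Raw, uncompressed display bandwidth in gigabytes per second."""
    bits_per_second = pixels_per_eye * eyes * hz * bits_per_pixel
    return bits_per_second / 8 / 1e9

# 20 Mpixel/eye, two eyes, 120 Hz, standard 24-bit color:
sdr = display_bandwidth_gbs(20e6, 2, 120, 24)  # ~14.4 GB/s
# Deeper buffers (e.g. 64-bit HDR render targets) approach 40 GB/s:
hdr = display_bandwidth_gbs(20e6, 2, 120, 64)  # ~38.4 GB/s
```

Either way it’s far more than we want to shade and ship every frame, which is the motivation for what follows.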

Central vision is the high-definition, color, feature-resolving area – responsible for what’s called “gist recognition” – while peripheral vision is less color-perceptive and much more sensitive to motion detection, i.e. temporal changes in “pixel” intensity. It also turns out that the information your brain interpolates from each of these fields is different, so this can be used to fine-tune the perceived scene – it’s not all just blurrier pixels. It’s nicely illustrated by this video:

Look at one ball and the other seems to follow the contours. From the Illusion of the Year 2012 finalists.

These regions are also called the “Window” (for central vision) and the “Scotoma” (for peripheral vision):

From: Journal of Vision, Sept. 2009, “The contributions of central versus peripheral vision to scene gist recognition” by Adam M. Larson and Lester C. Loschky.

In general, peripheral vision loses resolution of color, detail, and shape, which brings us to “foveated rendering”: rendering just the Window in high resolution and the Scotoma in lower resolution, typically with some sort of blending operation in between.

from the Microsoft Research Paper

And in practice it looks something like this:

It turns out that this has been an area of research for a while, and with the resolution needs of VR and AR, it has seen quite an uptick in research papers.

In general, most techniques generate three rendering regions: the central (high-resolution) region, the outer (lower-resolution) region, and an intermediate, interpolated region. A surprising result is that some latency in tracking was acceptable; in particular, the technique used to render the foveated area has a significant effect on the perceived smoothness of the experience.
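A minimal sketch of that three-region idea, as a per-pixel shading-rate decision (the radii and the function itself are made up for illustration – real implementations work on tiles and blend smoothly between regions):

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, inner_r=200.0, outer_r=500.0):
    """Choose a shading rate for a pixel from its distance to the gaze
    point: full rate in the central region, coarse in the periphery,
    and an intermediate rate standing in for the blended transition
    band. The radii are illustrative, not tuned values."""
    d = math.hypot(px - gaze_x, py - gaze_y)
    if d <= inner_r:
        return 1   # central region: shade every pixel
    if d <= outer_r:
        return 2   # transition: one shade per 2x2 block
    return 4       # periphery: one shade per 4x4 block
```

Shading the periphery at one sample per 4×4 block is where the bulk of the compute and bandwidth savings comes from, since the periphery covers most of the display area.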

Techniques that were contrast-preserving and temporally stable gave the best results. The recent Nvidia and Google papers highlight a few different techniques. The Google paper also introduces some post-processing techniques to better accommodate the lens distortion of HMDs when using foveated rendering. It turns out that simply LERPing between the regions isn’t at all the best method.

If you’re interested in reading more in depth:

Understanding Pixel Density and Eye-Limiting Resolution

Understanding Foveated Rendering

A multi-resolution saliency framework to drive foveation

Latency Requirements for Foveated Rendering in Virtual Reality

Perceptually-Based Foveated Virtual Reality

Introducing a New Foveation Pipeline for Virtual/Mixed Reality

This all sounds great, but there’s one other thing we need to get this to actually work: eye (or gaze) tracking. After all, you can move your eyes (and your gaze) without moving your head. To make foveated rendering work, you need to know where on the display the user is gazing – and in time to react to it. (The Nvidia paper states that they found up to a 50–70ms lag in updating the Window acceptable.) Since the gaze can be just about anywhere in the FOV, it requires being able to scan the eyes to detect gaze direction. This too is a very active field of research,

Unlocking the potential of eye tracking technology

with significant progress. (See products by Tobii, SMI, and Fraunhofer’s bi-directional OLED microdisplay.) It’s pretty much a solved problem; it just needs to be put into HMDs.

The basic technique is to shine infrared light on the eyes (inside the HMD) and look at the reflections. Some simple math extracts the pupil’s location and deduces the gaze direction from the highlights reflected off the eye. This can all be done with some simple vision software, and the output is then mapped to a pixel location on the HMD display.
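As a toy illustration of that last mapping step (the linear pupil-offset-to-angle model and every constant here are my own assumptions, not any vendor’s calibration):

```python
def gaze_to_pixel(pupil_dx, pupil_dy,
                  sensitivity_deg=30.0,
                  screen_w=4960, screen_h=4096,
                  fov_h=140.0, fov_v=120.0):
    """Map a normalized pupil offset from the eye camera (-1..1 on
    each axis) to a gaze angle, then to a display pixel. The linear
    model, the sensitivity constant, and the screen/FOV numbers are
    all illustrative."""
    angle_x = pupil_dx * sensitivity_deg      # degrees off-axis
    angle_y = pupil_dy * sensitivity_deg
    px = (angle_x / fov_h + 0.5) * screen_w   # angle -> pixel
    py = (angle_y / fov_v + 0.5) * screen_h
    return px, py

# Eyes centered -> middle of the display:
cx, cy = gaze_to_pixel(0.0, 0.0)
```

A real system replaces the linear model with a per-user calibration (look at a few known targets, fit the mapping), but the output is the same: a display pixel to center the foveated Window on.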

Did you know that eye tracking is already in Windows? Microsoft’s Eye Control came with the Windows 10 Fall Creators Update. It only works with an external tracker from Tobii, but it’s there.

So, there you have it: you can soon expect to see foveated displays, which introduce the requirement for gaze tracking – but gaze tracking is also desirable for VR/AR in its own right. So we’ll soon have very nice HMDs that give us about half-Retina resolution with gaze tracking thrown in for free. VR is about to get a ton better.

Posted in Augmented Reality, Graphics, Hardware, Technology, Virtual Reality, Windows Mixed Reality | Leave a comment

First view of the Magic Leap HMD drops, let’s guess the FOV

So Magic Leap just announced its HMD, the “Lightwear” (plus accessories). I was really hoping for something a bit more wraparound and less cyberpunk. Immersion is really helped by a good field of view (FOV), but immersion might not be what ML is focusing on. The FOV shown looks kinda small. Can we guesstimate the FOV?

From an article in Rolling Stone, we get this quote:

Magic Leap’s Lightwear doesn’t offer you a field of view that matches your eyes. Instead, the Magic Leap creations appear in a field of view that is roughly the shape of a rectangle on its side. Because the view is floating in space, I couldn’t measure it, so I did the next best thing: I spent a few minutes holding out first a credit card in front of my face and then my hands to try to be able to describe how big that invisible frame is. The credit card was much too small. I ended up with this: The viewing space is about the size of a VHS tape held in front of you with your arms half extended. It’s much larger than the HoloLens, but it’s still there.

Ahh, some numbers! Half of my arm’s length holding a VHS tape is 318mm, and a VHS tape is 187mm long. Using that trig I learned and never thought would be useful in real life, we get a half-angle of 16.4°, which we double for an HMD FOV of 32.8° – roughly the same as the HoloLens (35°). Since the article also says the FOV is larger than the HoloLens’s, I must have a longer arm than the writer of the article. The takeaway is that it’s comparable, and not a great increase in FOV at all. Later in the article Abovitz admits that they are working to improve the FOV, probably in later versions. For reference, most folks view their desktop computer monitors at about a 70° FOV.
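That trig can be checked directly (a sketch using the article’s 318mm half-extended arm distance and a 187mm VHS tape):

```python
import math

def angular_size_deg(size_mm, distance_mm):
    """Full angle subtended by an object of the given size held at
    the given distance: 2 * atan((size / 2) / distance)."""
    return 2 * math.degrees(math.atan((size_mm / 2) / distance_mm))

# A 187 mm VHS tape at a half-extended arm's length of 318 mm:
vhs_fov = angular_size_deg(187, 318)  # ~32.8 degrees
```

Plug in your own arm length to see how sensitive the estimate is – a shorter arm (smaller distance) gives a noticeably wider FOV, which is probably where my number and the article’s impression diverge.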

Basically, the setup is a battery/computer-pack you wear on a belt or in a pocket, a single controller (very Oculus puck-like), and the headset. Said to be very comfortable. No specs on CPU, GPU, display or battery life.  Now to get my hands on one.

Update July 31 2018:  From leaked documentation “Magic Leap One has a horizontal FOV of 40 degrees, a vertical FOV of 30 degrees, and a diagonal FOV of 50 degrees.”

Posted in Augmented Reality, Hardware, Virtual Reality | Leave a comment

Samsung HMD Odyssey for Windows Mixed Reality – a review

While I wasn’t exactly *thrilled* to learn that I’d need to add yet another HMD to the list, I was in fact really eager to play around with Microsoft’s Mixed Reality platform. Earlier this week I finally received my first MS MR headset: the $499 Samsung HMD Odyssey. I chose it for a couple of reasons, the largest two being:

  1. It has the “best” resolution of any of the MR HMDs – 1440×1600 per eye, 90Hz OLED displays with a wide (110°) FOV.
  2. It comes with controllers.

Installation:

Anyone who’s done as many HMD setups as I have comes to love the Vive and revile the Oculus. I was somewhat dreading what I’d find, but for the most part it went OK. The most annoying (and time-consuming) part was getting my PC updated to the Windows 10 Fall Creators Update, version 1709, without which you’ll never get to do anything with Windows MR. After more than one false start, I used the Windows 10 Update Assistant (get it here) to actually force the update; after many hours of effort and downloading it finally installed. The Odyssey comes with its own bit of software, very much like a Vive or Rift would. After the Win10 update, plugging everything in nearly worked on the first try – the exception being that I had to install Bluetooth drivers to actually use the MR controllers (a requirement for all of MS’s MR controllers). That done, everything worked, and much like a Vive, it first had me mark out the play area (seated is also an option).

Ergonomics:

The Odyssey’s controllers are Samsung’s own and have a slightly (IMO) nicer design than the stock ones. They are about as clunky as the Vive wands – which means not nearly as nice as the Oculus Touch or the upcoming Vive “Knuckles” controllers.

The HMD is a bit front-heavy, but it has good integrated headphones and a feel somewhat like the Sony VR HMD. Unlike the Sony, the lenses don’t have a dolly that lets you move them out from the HMD frame, nor do they flip up like some of the other MS MR HMDs do – probably the biggest missing feature. There’s some soft rubber around the nose, the edges of which start digging into my nose after wearing the HMD for a bit; if it annoys me enough, either some tape or a trim with scissors should fix that. It could be better balanced, lighter, and wireless, but it’s not a lot different from many of the other HMDs out there.

Tracking:

HMD

The HMD tracks fine, even when you are facing away from the monitor. Since it needs light to track successfully, it’s likely doing some image processing to track objects in the room as you move and rotate, in addition to using the built-in compass, gyroscope, and accelerometer. Overall (in the play area) it’s pretty accurate, and the boundaries of the play area pop up when you (or a controller) get too close to an edge.

Controllers

I was a bit worried that the inside-out tracking would suck, but it’s much better than I expected. Overall it’s not *quite* as accurate or fast as the Vive controllers, but it’s good enough and doesn’t get in the way – the controllers are fairly accurately tracked and the lag is minimal. They are positionally tracked by the HMD cameras, and the tracking extends just outside the display range. Rotational and accelerometer information is sent by the controllers themselves and thus is always tracked.

Overall:

The visual fidelity is great, and the tracking has sold me on inside-out as a viable method for tracking controllers or (eventually) hands. I’m not in love with the ergonomics, but it’s certainly usable for extended periods of time. It’s ready for social interaction and anything not requiring a fine degree of control or extended keyboard input. Those will come later, but this is certainly a viable start.

Posted in Augmented Reality, Virtual Reality, Windows Mixed Reality | Leave a comment

Microsoft Mixed Reality announcement

Microsoft held a press conference on Oct. 3rd in San Francisco, where they announced a few interesting things about their OS support for mixed reality, due to go public with a Win10 update mid-month.

  • They bought AltspaceVR
  • There’s a slew of VR/AR HMDs coming out in a few weeks, priced around $400–$500
  • Steam games will be available
  • All in all, it’s starting to look like VR/AR is becoming more mainstream.

Here’s the video

 

Posted in Uncategorized | Leave a comment

The Mars Bus keeps rolling along…100 awards and counting!

I’m really surprised at how well the Mars Bus has been received. It was a cutting-edge VR project using technology never put together before, and so far the total is a staggering 100 awards – many of them the highest prize awarded, including 5 Gold Lions from Cannes. (In case you’re wondering why I care: I was the dev lead on this project.)

5x Gold Lions Cannes Lions 2016

8x Silver Lions Cannes Lions 2016

5x Bronze Lions Cannes Lions 2016

1x Innovation Lion Cannes Lions 2016

Grand Clio Awards 2016

2x Gold Clio Awards 2016

4x Silver Clio Awards 2016

2x Gold IAB MIXX Awards

2x Grandy ANDY Awards 2017

8x Gold ANDY Awards 2017

1x Silver ANDY Awards 2017

1x Gold Epica Awards 2016

8x Gold ADC Awards 2017

2x Silver ADC Awards 2017

1x Bronze ADC Awards 2017

Gold Ciclope 2016

2x Yellow Pencil D&AD 2017

2x Graphite Pencil D&AD 2017

1x Wood Pencil D&AD 2017

3x Gold The Hollywood A List Awards

17x Gold One Show 2017

1x Silver One Show 2017

1x Winner Webbys 2017

2x Peoples Choice Webbys 2017

2x Grand Prize New York Festival

6x Gold New York Festival

3x Silver New York Festival 

3x Winner Project Isaac Awards

3x Gold Shots Awards 2016 

1x Winner AdAge Creativity Awards

Posted in Framestore, Virtual Reality | Leave a comment

Virtual Computing

Mike Abrash, chief scientist at Oculus, gave, as always, an inspiring keynote at this year’s F8 conference. In it he coined the term “virtual computing” to glom together VR, AR, the cloud, and how it’ll all mix together. Best quote:

20 or 30 years from now, I predict that instead of carrying stylish smartphones everywhere, we’ll wear stylish glasses. Those glasses will offer VR, AR, and everything in-between and we’ll wear them all day and use them in almost every aspect of our lives. The distinction between VR and AR will vanish. The real and virtual worlds will just mix and match throughout the day according to our needs.

The only part I disagree with is the 20–30 years – it’s too powerful a metaphoric, enabling technology to wait that long. Think of something 20x more powerful and useful than a smartphone; it gets rid of the phone. Five or six years before we start seeing early commercial adopters; in 10 years it’ll be unthinkable not to have them. In 20–30 years they’ll just look way cooler.

Abrash talking at F8 2017 – Skip to 48:00

Posted in Augmented Reality, Technology, Virtual Reality | Leave a comment