Largest shared VR installation ever?

I’m currently pretty busy building out and managing the development of what may be the largest shared VR installation ever. It’s designed to surround roughly 25-30 people sharing a single virtual experience: one person directs the experience in real time while everyone else is along for the ride, so to speak. We don’t yet have the physical space to set this up, so I built a small-scale prototype to test the proof of concept and (assuming that goes well) to validate our rendering strategy. The first step was the monitors. Here are four 55″ 4K monitors (we could not fit the desired five):

The next step is the PC hardware. We’re trying to determine exactly how many 4K displays we can drive from one beefy PC. Since the PCs have special requirements, we’re specifying the hardware: a water-cooled Intel Core i7 6700K CPU, 32GB of memory, a 1TB SSD, and a water-cooled Nvidia 980 Ti GPU. Here are the parts:

So far it’s all come together pretty well. We’ve had multiple folks stop by to gawk at the displays – we have a synchronized scene running across all the monitors (which I can’t show yet). The 4K displays really do look good. I can share that, unfortunately, one beefy PC cannot handle two 4K displays at 60fps; 30fps is pushing it. The final installation is anywhere from 8 to 10 4K screens, all rendering synchronized, different views of some out-of-this-world scenes. Stay tuned for some in-game renderings when the project makes its public appearance.

Posted in Hardware, Technology, Virtual Reality

Simple Exponential Smoothing, explained

I was helping a coworker deal with some noisy real-world data. Normally my first instinct is to reach for an averaging algorithm. I’ve frequently used one to output a smoothed frames-per-second (FPS) value for a graphics program. Averaging is a reasonable choice when you have a fairly high sample rate and want a smoothed value that doesn’t jump around but still feels responsive. However, I now usually use a better algorithm that eliminates most of the issues with averaging. Ideally you want a smoothing algorithm that weighs recent values more heavily than older ones. Keeping weights and an array of values has all sorts of startup and storage problems, but there’s a much simpler way: it’s called simple exponential smoothing, and it’s incredibly simple yet can be tuned to the desired amount of responsiveness.

The way it works is to define a “smoothing constant” α, a number between 0 and 1. If α is small (i.e., close to 0), more weight is given to older observations; if α is large (i.e., close to 1), more weight is given to more recent observations.

Implementations typically define a series S that represents the current smoothed value (i.e., the local mean) of the data as estimated from observations up to the present. The smoothed value S_t is computed recursively from its own previous value and the current measured value X_t, like this:

S_t = α·X_t + (1 − α)·S_{t−1}

The calculation is pretty simple: you take the previous smoothed value S_{t−1} and the current raw value X_t, and apply the formula to get the current smoothed value S_t. (This is called the component form; there are other forms that are a little more complicated, and there’s also a version that predicts the next value rather than just smoothing the values.)

If we’re using it for frequently updated raw values, like this graph of FPS, we can tune the constant easily. I let the rate run at 60 FPS for about half a second, drop to 30 FPS for half a second, then return to 60 FPS, with a single-frame drop to 45 FPS towards the end. Plotting the simple exponentially smoothed value for various values of α shows how the smoothed curves look.

You can see that for α = 0.10 the curve shows a very gradual drop – a little too slow for the purpose we want it for. Conversely, for values of 0.75 or greater the response is a bit too quick to show a smoothly changing FPS value. For FPS measurements I typically use α = 0.5.

The great thing about this function is that it’s simple to implement and use, and fairly easy to tune. The value of α that’s optimal for your particular need depends on the sample frequency and the responsiveness you desire. The only tricky implementation detail is the initial smoothed value, which I usually handle by providing the “expected” value as the initial S_{t−1}; it quickly gets smoothed to a more valid value.
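For the curious, here’s a minimal Python sketch of the whole thing (the function and variable names are mine), including the “expected value” seeding trick:

```python
def smooth_series(samples, alpha=0.5, initial=60.0):
    """Simple exponential smoothing: S_t = alpha*X_t + (1 - alpha)*S_{t-1}.

    `initial` seeds S_{t-1} with an "expected" value so there is no
    startup special case; it is quickly smoothed toward the real data.
    """
    s = initial
    smoothed = []
    for x in samples:
        s = alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

# Half a second at 60 FPS, then half a second at 30 FPS:
fps = [60.0] * 30 + [30.0] * 30
print(smooth_series(fps, alpha=0.5)[-1])  # settles near 30
```

With α = 0.5 the estimate halves its distance to the raw value every frame, which is why the startup guess stops mattering after only a handful of samples.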


Posted in Code, Miscellaneous

Realtime editing in VR is (almost) here


There are two huge problems with creating content for VR, and Epic has addressed the major one: being able to interactively edit in VR. This is how VR-specific content will be created from now on – in VR, for VR. The UI can only get more intuitive from here on out.

I’ve been busy working out some of the hardware kinks in a massive VR space, but I wanted to take the time to belatedly comment on an announcement Epic made last week. They not only managed to get the Unreal Editor running in VR, but have hacked up a VR UI that lets you access (most?) of the editing features from inside VR. This is awesome. It’s something we (Framestore) were kicking around, thinking of hacking an implementation together, but now we don’t have to. Thank you, Epic, for providing (and supporting) tools that make VR content creation sooooo much simpler.

Here’s a screen shot of manipulating an object in 3D – you can translate, rotate and scale the object in VR!

They have also implemented some menu items in VR – note that you pull up a menu and then make the selection using a controller (in this case a Vive controller).

For non-textual things like selecting a material, they have a more traditional menu palette.

You can see some videos and read a summary of it here.

Epic will be making a more formal announcement at GDC on Wednesday, March 16 – hopefully with a soon-to-follow release of a VR-enabled engine update or source code. I’ve seen some comments downplaying the usefulness of this development, but I think those folks are missing the point (or have never developed VR content): you’ll use the tools that are fastest for initial development, and I can’t really see folks starting a project from scratch by climbing into VR. The resolutions and ergonomics suck.


Anyone who’s worked on VR content knows the loop: try a work-in-progress level in VR, get out of VR, make a tweak, get back into VR, and so on. Just being able to climb into VR and make edits in situ is a huge step forward. There’s still a place for traditional editing that doesn’t require literally waving your hands around (which gets tiring pretty quickly), but for that final bit of tweaking you can now immerse yourself in the environment the user will be in. This is a fantastic step in the right direction. It’s a bear to implement, and I’m really glad Epic is taking on VR content editing in such an enthusiastic manner. The fact that Sweeney is narrating is just the icing on the cake.

Posted in Technology, Virtual Reality

‘Battle for Avengers Tower’ wins Best Animated VR Film at VRFest 2016

Kudos to the entire Framestore VR Studio team!

Posted in Virtual Reality

Best Practices in VR Design

I just ran across @VRBoy’s post on VR design practices. Yes, yes, and yes. In particular, Performance and Testing are the two areas I constantly see folks neglect.

Best Practices in VR Design

And by testing I mean not only making sure the app works, but that the overall implementation is suited to VR. I see a lot of creatives who think that because they can make a good 2D image or video, that skill translates to VR. No one seems to actually *test* their apps in VR before release, as if what they see on a monitor is what it’s like in VR.


Posted in Virtual Reality

Musings on “Retina Display” quality for Virtual Reality

tl;dr: How much resolution do VR HMDs need to approach that of a “Retina Display”? With the current optics setup, about 10x more than they have now – roughly 22,000 × 10,060 pixels per display.

I work for Framestore’s Virtual Reality Studio. Occasionally I ponder big questions. Sometimes these paths cross. We also make a lot of videos – both the incorrectly named 360º videos (I’m firmly in the “this is NOT virtual reality” camp; they just use the same hardware as VR) and backdrops for VR experiences.

I was wondering just how good we should make our videos for VR without going overboard. You want the resolution of the video to be just good enough that it either hits the resolution of the display OR the maximum resolution of the eye – whichever is lower. So this got me thinking: what IS the resolution the typical person can discern when wearing an HMD? Hmm… where to start…

Let’s take a guess that Apple’s definition of a Retina Display is a good resolution to shoot for. It’s generally accepted as 300 ppi (pixels per inch) at 10-12 inches from the eye. (Really – who holds a display that close to their face? No matter…)

OK, so let’s convert this into a useful VR measurement using the formula:

PPD (pixels per degree) = 2 · D · R · tan(0.5º)

where:
R = resolution in pixels per unit distance
D = viewing distance in the same units

For R = 300 ppi and D ≈ 11 inches (the middle of that 10-12 inch range), PPD ≈ 57.
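As a quick sanity check, the formula is easy to evaluate in a few lines of Python (the 11-inch viewing distance is my assumption, picked from the middle of the 10-12 inch range above):

```python
import math

def pixels_per_degree(ppi, distance_in):
    """PPD = 2 * D * R * tan(0.5 degrees).

    A 1-degree arc at distance D spans about 2 * D * tan(0.5 deg);
    multiply by the panel density R to get pixels per degree.
    """
    return 2 * distance_in * ppi * math.tan(math.radians(0.5))

# Apple-style "Retina": 300 ppi viewed from roughly 11 inches
ppd = pixels_per_degree(300, 11)  # ~57 pixels per degree
```

Nudging the distance within the 10-12 inch range moves the result a few PPD either way, so ~57 is a reasonable round figure to carry forward.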

So 57 PPD is going to be our definition of “Retina Display”. We need to pair this with the VR FOV (field of view) to calculate how big a VR display has to be. Well, since this is speculative, let’s go with the full value: the average full FOV.

So just how big is the average FOV, horizontally and vertically? There are quite a few opinions, but since I’m a US taxpayer, let’s let the US Government – specifically the United States Department of Defense – answer this (and many, many other human factors questions) in MIL-STD-1472G, “Human Engineering”.

Looking through this wonderful, rich document – which contains many useful items for the aspiring VR/AR UI & UX designer – we arrive at the following diagram:


So we can see that according to the DOD, the average FOV is 190º horizontal and 120º vertically.

But… the horizontal value is for both eyes. If we instead use the horizontal FOV of a single eye – 95º from eye center outward and 35º from eye center inward – we get about 130º of FOV per eye.

At 57 PPD, this gives us a 7,410 × 6,840 pixel display per eye, or 14,820 × 6,840 for a full stereo FOV retina display using current lens optics with a single split screen.

I might be wrong, but current HMDs are nowhere near that… Let’s compare to a DK2.

An Oculus DK2’s maximum FOV is 95º horizontal and 106º vertical (from ovrHmdDesc::MaxEyeFov – not using the 100-degree value). Assuming we still have 35º from eye center inward, this gives us 64º of horizontal FOV per eye for a DK2.

Thus we’d need a resolution of 3,648 × 6,042 to make a “Retina Display” at this FOV for one eye. A DK2 is 1080 × 1200 per eye. Using the current split-screen left-eye/right-eye setup, 7,296 × 6,042 is the “Retina Display” resolution, while 2160 × 1200 is the DK2’s.

But we’re not done. Due to the lenses, most HMDs render a smaller area than the full panel resolution. If you plot it out for a standard DK2, you get the actual pixels used (white) vs. the unused pixels (black):


Thus only about 68% of the pixels in the display are actually used to render what the eye can see.

So my final tally is that for an HMD to have a retina display, using lenses to bend a flat display, we’d need about 10,900 × 10,060 pixels per eye. That’s about 85 times the pixels of a DK2 – or, put another way, about a factor of 10x the current linear resolution.
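The whole tally can be reproduced with a little Python, using the numbers from this post (dividing each axis by the used-pixel fraction follows the arithmetic above):

```python
PPD = 57                           # "retina" pixels per degree
EYE_H_FOV, EYE_V_FOV = 130, 120    # per-eye field of view, degrees
USED_FRACTION = 0.68               # share of panel pixels the lens actually uses
DK2_W, DK2_H = 1080, 1200          # DK2 panel resolution per eye

# Ideal per-eye render resolution at retina density:
render_w = PPD * EYE_H_FOV         # 7410
render_h = PPD * EYE_V_FOV         # 6840

# Panel resolution needed once the lens waste is factored in:
panel_w = render_w / USED_FRACTION  # ~10,900
panel_h = render_h / USED_FRACTION  # ~10,060

# Pixel-count ratio versus a DK2 panel:
ratio = (panel_w * panel_h) / (DK2_W * DK2_H)  # ~85x the pixels
```

Running this lands on the same ~10,900 × 10,060 per-eye figure and the ~85x multiplier quoted above.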

Considering that I currently have to get a render done in about 5 milliseconds on my current target hardware (Oculus’s CV1), it looks like my role as chief performance optimization engineer is a pretty safe occupation for the near future.

It’s also a good thing to know, because when we’re doing high-quality rendered video we should keep the high-quality source archived (at 20K × 30K as the target – because 360º!): while a 10x jump in resolution sounds like a lot, I have no doubt that we’re going to iterate rapidly on VR hardware improvements and that number will soon be well within the realm of possibility.






Posted in Augmented Reality, GearVR, Graphics Hardware, Virtual Reality

Making the most of your GearVR

The GearVR has a micro-USB port that can be used for a number of different things. The trick is to plug in the USB device *before* you plug in the phone! We use the following type of adapter in our GearVRs to connect any regular USB Type-A device.


This lets us power the fan for extended game playing or movie watching (it makes the battery last longer), or take screenshots by connecting a keyboard – just hit Print Screen and the screenshot will show up in the phone’s screenshots folder. You can connect any HID USB device.

Posted in GearVR, Virtual Reality

Framestore’s VR Studio is pulling ahead in the VR space

My plan for domination of VR content creation space continues …muhahahahahaha

(As told by AdAge)

Production Company Standout 2015: Framestore

Shop Solidified Rep as VR Pro

Virtual reality has been the industry’s darling of late and remains top of mind for agencies and advertisers. Visual effects powerhouse Framestore planted its foot firmly in the space by creating a dedicated division to the field and has now become the destination shop for those seeking to create sophisticated virtual reality experiences.

… the article continues on in a favorable tone :-)

Really, it’s the excellent team of folks working at Framestore that makes this kind of stuff possible. The Framestore reputation for the highest quality work continues in our VR productions. Did I mention that we are hiring?

Posted in Framestore, Virtual Reality | Leave a comment