Let The Computer Figure It Out: PID Controllers – Theory

I’m going to start some posts on how to “Let The Computer Figure It Out”. I see rational folks sometimes use trial and error to figure something out – a totally valid methodology – but occasionally there’s a need to respond to a dynamic system in code, or folks are just plain not taking advantage of the fact that you have a frikkin computer at your disposal and there’s no reason NOT to let it do the math for you. There are a few techniques that will enable you to just let the computer figure it out for you. This is the first one.

It turns out that it’s fairly easy to code these implementations in software, and today I’m going to discuss one of the most basic and useful controller equations – that of the Proportional-Integral-Derivative controller, or PID controller.

Occasionally you need to fine-tune some parameters according to changing conditions – you basically want something that will adjust to meet a set of conditions. I frequently see folks make educated guesses and try to get values that are in the ballpark of being acceptable. This works for one-offs, but it’s really simple to get a computer to fine-tune things for you. In my Chemical Engineering past I learned about control theory, and it turns out these techniques are used in many other fields as well – across engineering, in some AI fields, in finance, and pretty much anywhere there’s a need for a mechanical controller. Physical PID controllers are all around us, and computer implementations are used for everything from smart thermostats, HVAC systems, and robotics to AIs that drive cars, missile targeting systems, etc. Anywhere you need a system to respond to changing conditions, you can probably use a PID controller.

A PID controller is used when you have an output value that (typically) responds to some adjustment value – think of it as a dial where you can turn the dial and make the output value go up or down. The PID controller is given control of the dial and monitors the output. If the output deviates from the desired target value (called the setpoint), the controller will adjust the dial. Here’s the equation for a PID controller:

u(t) = Kp e(t) + Ki ∫ e(τ) dτ + Kd de(t)/dt

The value e(t) is the error, or deviation of the actual output from the setpoint, as a function of time. The three parts on the right side are (in order) the proportional, integral, and derivative terms – hence the PID name. The K values are the controller constants; they adjust how much each part of the PID equation contributes to the final value, and are how the PID controller is tuned to be responsive.

[Image: pid]

The Proportional Control

The proportional control is basically how much the dial gets turned when there is an error in the output value. It’s directly proportional to the difference between the setpoint and the actual value. In some cases you can just use the error directly to set the control and you’re done – but particularly in physical systems or dynamic computer systems, you will be constantly adjusting the setpoint to adapt the output to new conditions, and that’s when the rest of the PID terms come into play. Think about a hot water tank: the heat comes on till the water reaches the setpoint, then the heat shuts off. Residual heat will raise the temperature a bit more, overshooting the setpoint – but since we can’t cool the water, we need to wait for it to naturally lose heat till we accumulate a significant error, at which point the heat will kick on again. Hot water tanks and home heating systems are a special sort of controller – typically called bang-bang controllers, because they are either on or off (hence bang-bang) with no other states – thus they are just P controllers, with no adjustment of the control other than on or off. The proportional part is the main contributor to the controller value – it’s proportional to the error. The larger the error, the larger the adjustment to the controller.
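That bang-bang behavior can be sketched in a few lines (hypothetical water-heater numbers and names of my own choosing, not any real controller):

```python
def bang_bang(temp, setpoint, deadband=2.0, heater_on=False):
    """On/off (bang-bang) control: the only states are heat on or heat off.

    The heater shuts off once the setpoint is reached, then stays off
    until the temperature drifts more than `deadband` degrees below
    the setpoint -- the "significant error" described above.
    """
    if temp >= setpoint:
        return False          # at or above target: heat off
    if temp < setpoint - deadband:
        return True           # error is significant: heat back on
    return heater_on          # inside the band: keep the current state
```

The deadband keeps the heater from chattering on and off right at the setpoint.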

The Integral Control

The integral part is the part of the PID equation that takes into account any steady-state or constant forces changing the output value – like heat loss in a water tank, or trying to aim at a moving target. The integral part is the integration of the error values over time – thus it provides adjustment to the controller if error builds up over time. A controller can be just a PI controller, and in many cases this is good enough: the proportional part makes the gross adjustments, and the integral part keeps a small part of the controller active to offset any bias in the system.

The Derivative Control

The derivative part can be considered the part that rapidly adjusts to a change in the error – the derivative part serves to adjust the direction of the control – hence when the error goes from positive to negative (e.g. we just moved through the setpoint value) the derivative changes sign and serves to damp down any oscillations in the controller. The derivative part is frequently used when the process changes rapidly and you want the setpoint to be very closely monitored and need the controller to be very quick to adjust. However, if you have a noisy system, including derivative may make things worse.
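Putting the three terms together: the code post comes next time, but as a preview, here’s a minimal discrete-time sketch (the class and parameter names are mine, not any particular library’s API):

```python
class PID:
    """Minimal discrete-time PID controller sketch.

    kp, ki, kd weight the proportional, integral, and derivative
    terms; dt is the time step between updates.
    """
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured):
        error = self.setpoint - measured
        self.integral += error * self.dt                  # accumulate error over time
        derivative = (error - self.prev_error) / self.dt  # rate of change of error
        self.prev_error = error
        # u(t) = Kp*e(t) + Ki*integral(e) + Kd*de/dt
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Each call to `update` turns the “dial” by an amount driven by the current error, the accumulated error, and how fast the error is changing.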


Now a PID controller has one input and one output – but frequently you can use a bunch of them in tandem to control more than just one value. It’s even common to have PID controller values feeding into other PID controllers when you have a more complicated process to control. Next time – the code.

Posted in Code, Control Theory | Leave a comment

Largest shared VR installation ever?

I’m currently pretty busy building out and managing the development of what may be the largest shared VR installation ever. It’s designed to surround up to about 25-30 people, and they will be sharing a virtual experience – so while one person will be directing the experience in real-time, everyone else is along for the ride, so to speak. We don’t have the physical space yet to set this up, so I had to build a small-scale prototype for testing out the proof-of-concept and (assuming that goes well) to validate our rendering strategy. The first step was the monitors; here are four (we could not fit the desired five) 4K 55″ monitors.

[Image: wallOMonitors]

The next step is the PC hardware. We’re trying to determine exactly how many 4K displays we can drive from one beefy PC. Since the PCs have special requirements, we’re specifying the hardware – a water-cooled Intel Core i7 6700K CPU, 32GB memory, a 1TB SSD, and a water-cooled Nvidia 980 Ti GPU. Here are the parts:

[Image: PCEquip]

So far it’s all come together pretty well. We’ve had multiple folks stop by to gawk at the displays – we have a synchronized scene running across all the monitors (which I can’t show yet). The 4K displays really do look pretty good. I can share that, unfortunately, no, one beefy PC cannot handle two 4K displays running at 60fps – 30fps is pushing it. The final installation size is anywhere from 8 to 10 4K screens, all synchronized, rendering different views of some out-of-this-world scenes. Stay tuned for some in-game rendering when the project makes its public appearance.

Posted in Hardware, Technology, Virtual Reality | Leave a comment

Simple Exponential Smoothing, explained

I was assisting a coworker dealing with some noisy real-world data. Normally my first instinct is to use some averaging algorithm. Frequently I’ve used something like this to output a smoothed value of frames-per-second (FPS) for a graphics program. An averaging algorithm is a good choice since you’ve got a fairly rapid number of samples per second and you want a smoothed value that doesn’t jump around but still seems responsive. However, I now usually use a better algorithm that eliminates a lot of the issues of an averaging algorithm. Ideally you’d want a smoothing algorithm that lets you weight recent values higher than older values. Keeping weights and an array of values has all sorts of startup and storage problems, but there’s a much simpler way – it’s called simple exponential smoothing, and it’s incredibly simple, yet able to be tuned to the desired amount of responsiveness.

The way it works is to denote a “smoothing constant”  α.  This constant is a number between 0 and 1. If α is small (i.e., close to 0), more weight is given to older observations. If α is large (i.e., close to 1), more weight is given to more recent observations.

Implementations typically define a series S that represents the current smoothed value (i.e., local mean value) of the series as estimated from the data up to the present. The smoothed value S_t is computed recursively from its own previous value and the current measured value X_t, like this:

S_t = α·X_t + (1 − α)·S_(t−1)

The calculation is pretty simple. You take the previous smoothed value, S_(t−1), and the current raw value, X_t, and use the function to get the current smoothed value, S_t. (This is called the component form; there are other forms that are a little more complicated. There’s also a version that lets you predict the next value rather than just smoothing the values.)
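In code, the whole algorithm is a one-liner plus a single stored value (a minimal sketch; the function and variable names are my own):

```python
def smooth(prev_s, x, alpha=0.5):
    """Simple exponential smoothing: S_t = alpha*X_t + (1 - alpha)*S_(t-1)."""
    return alpha * x + (1.0 - alpha) * prev_s

# Seed with an "expected" value, then feed raw FPS samples as they arrive.
s = 60.0
for fps in (60.0, 60.0, 30.0, 30.0):
    s = smooth(s, fps)
```

Note there’s no array of history to keep – the entire state is the single previous smoothed value.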

If we are using it for frequently updated raw values, like this graph of FPS, we can easily tune the constant like so. I’ve let it run at 60 FPS for about half a second then the rate drops to 30 FPS for half a second before returning to 60 FPS. There’s a single frame drop to 45 FPS towards the end. We can plot the simple exponential smoothed value with various values of α, and see how the smooth curves look.

[Image: ExponentialSmoothing]

You can see that for α = 0.10, the curve shows a very gradual drop – a little too slow for the purpose we want it for. Conversely, for values of 0.75 or greater, we get a response that’s a bit too quick to show a smoothly changing FPS value. For FPS measurements I typically use α = 0.5.

The great thing about this function is that it’s simple to implement and use, and fairly easy to tune. The value of α that’s optimal for your particular need depends upon the frequency of samples, plus the responsiveness you desire. The only tricky implementation detail is the initial smoothed value, which I usually handle by providing the “expected” value as the initial S_(t−1) value. It quickly gets smoothed to a more valid value.


Posted in Code, Miscellaneous | Leave a comment

Realtime editing in VR is (almost) here


There are two huge problems with creating content for VR – Epic has addressed the major one: being able to interactively edit in VR. This is the way VR-specific content will be created from now on – in VR, for VR. The UI can only get more intuitive from here on out.

I’ve been busy working out some of the hardware kinks on a massive VR space, but I wanted to take the time to belatedly comment on an announcement Epic made last week. They not only managed to get the Unreal Editor running in VR, but have hacked up some VR UI that enables you to access (most?) of the editing features from inside VR. This is awesome. This is something we (Framestore) were kicking about, thinking of hacking some implementation together, but now we don’t have to. Thank you Epic for providing (and supporting) tools to make VR content creation sooooo much simpler.

Here’s a screen shot of manipulating an object in 3D – you can translate, rotate and scale the object in VR!

[Image: UnrealVREdit]

They have also implemented some menu items in VR – note that you pull up a menu and then make the selection using a (in this case Vive) controller.

[Image: UnrealVREdit3]

For non-textual things, like selecting a material, they have a more traditional menu palette.

[Image: UnrealVREdit2]

You can see some videos and read a summary of it here.

Epic will be making a more formal announcement at GDC on Wednesday, March 16 – hopefully with a soon-to-follow release of a VR-enabled engine update or source code. I’ve seen some comments that downplay the usefulness of this development, but I think most of those folks are missing the point (or have never developed VR content) – you’ll use the tools that are the fastest for initial development. I can’t really see folks starting a project from scratch by climbing into VR; the resolutions and ergonomics suck.


For anyone who’s worked on VR content – tried a work-in-progress level in VR, gotten out of VR, done a tweak, gotten back into VR, etc., etc. – just being able to climb into VR and make edits in situ is a huge step forward. There’s still a traditional place for editing that doesn’t require literally waving your hands around (which gets pretty tiring pretty quickly), but for that final bit of tweaking, you can now immerse yourself in the environment the user will be in. This is a fantastic step in the right direction. It’s a bear to implement, and I’m really glad Epic is taking on VR content editing in such an enthusiastic manner. The fact that Sweeney is narrating is just the icing on the cake.

Posted in Technology, Virtual Reality | Leave a comment

‘Battle for Avengers Tower’ wins Best Animated VR Film at VRFest 2016

Kudos to the entire Framestore VR Studio team!

Posted in Virtual Reality | Leave a comment

Best Practices in VR Design

I just ran across @VRBoy‘s post on VR Design Practices. Yes, yes, and yes. In particular, Performance and Testing are the two areas I constantly see folks forget.

Best Practices in VR Design

And by testing I mean not only making sure the app works, but that the overall implementation is suited for VR. I see a lot of creatives who think that just because they can make a good 2D image or video, that it translates to VR. No one seems to actually *test* their apps in VR before they release, as if what they see on a monitor is what it’s like in VR.


Posted in Virtual Reality | Leave a comment

Musings on “Retina Display” quality for Virtual Reality

tl;dr: How much resolution do VR HMDs need to approach that of a “Retina Display”? Using the current optics setup, about 10x more than they have now – roughly 22,000 × 10,060 pixels per display.

I work for Framestore’s Virtual Reality Studio. Occasionally I ponder big questions. Sometimes these paths cross. We also make a lot of videos – both for the incorrectly named 360º videos (of which I’m firmly in the “this is NOT Virtual Reality” camp, they just use the same hardware as VR) and as backdrops for VR experiences.

I was wondering just how good we should make our videos for VR without going overboard. You just want to make the resolution of the video good enough that it either hits the resolution of the display OR the maximum resolution of the eye – whichever is lower. So this got me to thinking: what IS the resolution the typical person can discern when wearing an HMD? Hmm… where to start…

Well, let’s take a guess that Apple’s definition of a Retina Display is a good resolution to shoot for. The definition of a Retina Display is generally accepted as 300 ppi (pixels/inch) at 10-12 inches from the eye. (Really – who holds a display in front of their face? – no matter…)

OK, so let’s convert this into some form of useful VR measurement, using the formula:

PPD (pixels/degree) = 2 · D · R · tan(½°)

where
R = resolution in pixels per unit distance
D = viewing distance in the same units

Plugging in R = 300 pixels/inch at D ≈ 11 inches gives 2 × 11 × 300 × tan(0.5°) ≈ 57 PPD.

So 57 PPD is going to be our definition of “Retina Display”. We need to convert this to the VR FOV (Field of View) to calculate how big a VR display has to be. Well, since this is speculative, let’s go with the full value – the average full FOV.

(Update –  I have since discovered that the accepted resolution value for the human fovea is 60 pixels/degree)

So now, just how big is the average FOV, both horizontally and vertically? Well, there are quite a few opinions, but since I’m a US taxpayer, let’s let the US Government – specifically the United States Department of Defense – answer this (and many, many other human factors questions) in MIL-STD-1472G, “Human Engineering”: https://www.document-center.com/standards/show/MIL-STD-1472


So if we look through this wonderful, rich document, which contains many, many useful items for the aspiring VR/AR UI & UX designer, we arrive at the following diagram:


So we can see that, according to the DOD, the average FOV is 190º horizontally and 120º vertically.

But… the horizontal value is for both eyes. If we instead use the horizontal FOV of a single eye – 95º from the eye center outward and 35º from the eye center inward – we get about 130º FOV per eye.

For 57 PPD, this gives us a 7,410 × 6,840 pixel display per eye, or 14,820 × 6,840 for a full stereo-FOV retina display using current lens optics with a single split screen.

I might be wrong, but current HMDs are nowhere near that… Let’s compare to a DK2.

An Oculus DK2’s max FOV is 95º horizontal and 106º vertical (from vrHmdDesc::MaxEyeFov – not using the 100-degree value). So, assuming we still have 35º from the eye center inward, this gives 64º per-eye horizontal FOV for a DK2.

Thus we’d need a resolution of 3,648 × 6,042 to make a “Retina Display” at this FOV for one eye. A DK2 is 1080 × 1200 per eye. If we use the current split-screen setup for left eye/right eye, 7,296 × 6,042 is the “Retina Display” resolution, while 2160 × 1200 is the DK2’s.

But we’re not done. You see, due to the lenses, most HMDs render out a smaller area than the actual resolution. If you plot it out for a standard DK2, you’ll get the actual pixels used (white) vs. the unused pixels (black):


Thus only about 68% of the pixels in the display are actually used to render what the eye can see.

So my final tally is that for an HMD to have a retina display, using lenses to bend the flat display, we’d need about 10,900 × 10,060 pixels per eye. This is about 85 times the pixels of a DK2 – or, put another way, about a factor of 10x the current resolution.
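The whole chain of arithmetic above fits in a few lines (a back-of-the-envelope sketch using the numbers from this post; the rounding choices are mine):

```python
from math import radians, tan

# "Retina" pixels/degree: 300 ppi viewed from ~11 inches (~57.6; the post uses 57).
ppd = int(2 * 11 * 300 * tan(radians(0.5)))

per_eye_retina = (ppd * 130, ppd * 120)  # 7410 x 6840 for a 130-by-120-degree eye FOV
dk2_retina = (ppd * 64, ppd * 106)       # 3648 x 6042 at the DK2's per-eye FOV

usable = 0.68                            # fraction of the panel the lenses actually use
needed = (round(per_eye_retina[0] / usable),
          round(per_eye_retina[1] / usable))

dk2_pixels = 1080 * 1200                 # DK2 per-eye pixel count
ratio = (needed[0] * needed[1]) / dk2_pixels  # roughly 85x the pixels of a DK2
```

Scaling each dimension by 1/0.68 (rather than the total pixel area) is the same choice the tally above makes.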

Considering that I currently have to try to get a render done in about 5 milliseconds on my current target hardware (Oculus’ CV1), it looks like my role as chief performance optimization engineer is a pretty safe occupation for the near future.

It’s also a good thing to know, as now when we’re doing high-quality rendered video we should keep the high-quality stuff archived (at 20K × 30K as the target because, 360!) – because while a 10x jump in resolution sounds like a lot, I have no doubt that we’re going to rapidly iterate on VR hardware improvements, and that number will soon be well within the realm of possibility.






Posted in Augmented Reality, GearVR, Graphics Hardware, Virtual Reality | Leave a comment

Making the most of your GearVR

The GearVR has a micro-USB port on it that can be used for a number of different things. The trick is to plug in the USB device *before* you plug in the phone! We use the following type of adapter in our GearVRs to connect up any regular USB Type-A device.


This allows us to power the fan for extended game playing or movie watching (makes the battery last longer), or to take screenshots by connecting a keyboard – you just press Print Screen and the screenshot will show up in the phone’s screenshots folder. You can connect up any HID USB device.

Posted in GearVR, Virtual Reality | Leave a comment