The Soul of a New Machine? 1st move – Avatar firm gets VC Investments

(to repurpose a title from Tracy Kidder). A number of years ago I was the dev lead for a groundbreaking company called LifeF/X. We made fairly believable avatar heads – ones that were pretty far down the uncanny valley – believable enough that you thought you were talking to video of a real person, albeit a person who had some issues. We had many things in our favor: a low render target size (about 300×300 pixels for the face) plus some really advanced facial animation software – nothing like it exists today, unfortunately.

The lead scientist and facial animator behind that software was Dr. Mark Sagar. After LifeF/X spent all the VC money and folded, he went on to Sony and Weta to become their expert in facial animation. If you’ve seen Spider-Man, King Kong, or Avatar, you’ve seen Mark’s work close up. He then returned to university research, where he’s been working on an extremely creepy (sorry Mark!) AI/avatar project called Baby X.

I find it creepy because he’s creating a neural network that starts off with roughly a human baby’s level of understanding and is taught through interaction. All good, sound tech – but the fact that the model is based on a real baby’s interactions – his daughter, in fact – is a bit unnerving to me. The animation isn’t quite out of the uncanny valley, so it’s a bit creepy to watch, but the progress is real. Baby X is powered by an artificial brain with inputs layered in through an artificial nervous system. It’s designed to be plugged into other AI systems that may deliver higher-level thought.

I bring this up because he’s gotten the tech far enough along to attract US$7.5M in VC money for a spin-off company – Soul Machines. From the press release:

Soul Machines is a developer of intelligent, emotionally responsive avatars that augment and enrich the user experience for Artificial Intelligence (AI) platforms.

So, here we see the first VC investment in a company creating AI designed for human interaction – think of your personal assistant (à la Siri, Cortana, Alexa, Google Now, etc.), but one that understands human emotions, has its own emotional state, and shows up as a human on your PDA, computer, or AR glasses – whenever you need her (or she needs you). You talk; they listen, understand, and respond. The future is getting closer.

Posted in Augmented Reality, Digital Intelligence, Technology, Virtual Reality

‘The Field Trip to Mars,’ the Single Most Awarded Campaign at Cannes

“Inside ‘The Field Trip to Mars,’ the Single Most Awarded Campaign at Cannes 2016: McCann and Framestore dissect their Lockheed Martin marvel” – so reads the Adweek headline discussing the Lockheed Martin Mars Bus and its huge success at the Cannes Lions festival.

It is one of the most demanding, large-scale, mobile, group VR experiences ever created. Framestore, working with McCann NY, successfully launched Lockheed Martin’s Project Beyond: Mars Bus for the 2016 U.S.A. Science and Engineering Festival at the convention center in Washington, DC on April 15th, where we ran about 2,400 thrilled riders through the experience over the two days of the convention, plus took school kids out for the actual “live” VR experience the day before.


What other project involves a school bus equipped with opaque/transparent windows, transparent/opaque 4K monitors, six rack-mounted gaming PCs, its own private network, an A/C unit and diesel generator, and a suite of real-time GPS, inertial sensors, a compass, and laser velocity readers? The software consisted of a custom-built data fusion application that pushed real-time velocity, heading, acceleration, and positional data to a suite of PCs running the Unreal Engine Mars Bus simulation through various “windows” onto Mars. All of it packed into a moving school bus – a combination that had never been tried before – and we pulled it off.


I came up with the overall architecture of the software applications. The software team was then able to test proofs-of-concept that let us push the limits of what had been attempted before. We quickly hit upon using Unreal to run a driving simulation and needed to figure out how to make Mars drivable – both in the software simulation and in a real-world sense: the actions the bus went through in the real world had to be simulated in real time in the Mars sim, marrying reality with simulation.


A real map of DC streets was used to generate the “drivable” area of “Mars” – making it possible to drive through the DC metropolitan area (215 sq km) while avoiding the “randomly” placed rocks and mountains of Mars. Thus the 24 kids got to drive around DC while watching Mars go by in the windows. If the bus stopped, turned, or went over a bump, so did the kids – making it, I believe, the largest mobile, real-time, group VR experience ever created. There were some highlights we wanted the kids to experience: a drive-by of the Mars rover, a pass through a futuristic Mars base camp, and a Martian sandstorm – the latter taking advantage of our 500-watt 5.1 sound system, which was plenty loud in an enclosed bus; the impact sounds provided a visceral feel that things were actually hitting the bus. (See the experience video.)
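
Mapping the bus’s real-world position onto the drivable Mars map comes down to turning GPS fixes into local metric offsets that the engine can consume directly. As a rough illustration (my own sketch, not the project’s actual code), an equirectangular approximation is accurate enough at metro scale:

    #include <cmath>

    // Illustration only: convert a GPS fix into metric east/north offsets from a
    // chosen origin. An equirectangular approximation is plenty accurate over a
    // ~215 sq km metro area.
    struct LatLon   { double latDeg;  double lonDeg; };
    struct LocalPos { double xMeters; double yMeters; };  // east / north of the origin

    constexpr double kEarthRadiusM = 6371000.0;
    constexpr double kPi           = 3.14159265358979323846;
    constexpr double kDegToRad     = kPi / 180.0;

    LocalPos GpsToLocal(const LatLon& origin, const LatLon& fix)
    {
        const double dLat = (fix.latDeg - origin.latDeg) * kDegToRad;
        const double dLon = (fix.lonDeg - origin.lonDeg) * kDegToRad;
        // East-west metres shrink by cos(latitude); north-south metres do not.
        const double x = kEarthRadiusM * dLon * std::cos(origin.latDeg * kDegToRad);
        const double y = kEarthRadiusM * dLat;
        return { x, y };
    }

Those offsets can then be fed into the simulation as world coordinates and constrained to the “drivable” street network.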

There was a server rack of gaming PCs in the back of the bus: four were dedicated to driving the 80-inch 4K monitors, and one PC handled data acquisition and data fusion, pushing position, velocity, and heading info out onto the network via UDP to be picked up by the five simulation PCs – the additional one showing the real-time, top-down bus position on the road map, serving as a witness to the accuracy of the simulation. An Xbox controller was attached so the position and speed could be manually adjusted if needed or if the GPS system went south; we never needed to use it. Finally, there was one PC for the “monkey” – the poor sod who had to sit in the back with the servers, in a dark, tiny room, on a folding chair, whose job it was to start (and, if necessary, restart) the experience. That PC also provided some systems monitoring, a lot of which was added at the last minute because we initially had trouble with PCs failing to respond. We finally added software that would ping each PC every second to make sure it was responding; the simulation status and PC status were all shown on the experience control display.
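
The telemetry hop itself was conceptually simple: one sender pushing a small, fixed-size packet onto the bus’s private network every update, with every simulation PC listening for it. Here’s a minimal sketch of what a sender like that can look like (the packet layout, port, and broadcast address are placeholders I made up, not the actual bus protocol):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>

    // Hypothetical telemetry packet - in the real system the fields would be
    // filled in by the GPS/inertial/velocity fusion step each update.
    #pragma pack(push, 1)
    struct TelemetryPacket {
        double   latitude;     // degrees
        double   longitude;    // degrees
        float    headingDeg;   // compass heading
        float    speedMps;     // fused velocity
        float    accelMps2;    // longitudinal acceleration
        uint32_t sequence;     // lets receivers notice dropped packets
    };
    #pragma pack(pop)

    int main()
    {
        const int sock = socket(AF_INET, SOCK_DGRAM, 0);   // error handling omitted
        const int enable = 1;
        setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &enable, sizeof(enable));

        sockaddr_in dest{};
        dest.sin_family = AF_INET;
        dest.sin_port   = htons(5005);                        // hypothetical port
        inet_pton(AF_INET, "192.168.1.255", &dest.sin_addr);  // hypothetical broadcast address

        TelemetryPacket pkt{};
        uint32_t seq = 0;
        while (true) {
            pkt.sequence = seq++;
            sendto(sock, &pkt, sizeof(pkt), 0,
                   reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
            usleep(16667);  // roughly 60 updates per second
        }
    }

Each receiver just binds the same port and unpacks the struct; shipping a raw struct like this is only reasonable because every machine on the closed network shares the same architecture and packing.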

Once in the convention center we switched over to a canned ride, but once word got out about the experience, we had lines with well over an hour’s wait to ride the bus. The ooohs and ahhhs we got, plus the frequent applause from thrilled riders at the end, made the demanding three-month effort to bring it all together a cherished accomplishment. It was a pretty daring experience to pull off, given that we were attempting things never before attempted, using technology in ways it had never been used together.


The final result was a lot of recognition when the project was shown to the general public. The Field Trip to Mars was the single most awarded campaign at the 2016 Cannes Lions International Festival of Creativity, taking a total of 19 Lions across 11 categories:

  • 1 Innovation Lion
  • 5 Gold Lions
  • 8 Silver Lions
  • 5 Bronze Lions

It was also one of just 22 entries nominated for a Titanium Lion, out of 43,101 total entries across all categories from around the world.

ADWEEK also liked the bus: it won top honors at ADWEEK’s Project Isaac Awards, winning one Creative Invention award, one Event/Experience Invention award, and the highest award, the Gravity.

In all, it was one of the most challenging VR experiences I have ever had the pleasure to work on, and with an extraordinary team of engineers, artists, and producers working furiously through the last week, it all came together and created one heck of a VR experience.

Ron’s video links for the Mars Bus:

7 Days to go

less than 24 hours to go…

The experience from inside

It was all worth it!

Framestore

Framestore’s Mars Bus page (click on credits to see who worked on it)

External sites

Lockheed Martin’s new magic school bus wants to virtually take kids to Mars

Lockheed Mars Bus Experience

Alexander Rea’s page

Anima Patil-Sabale’s video as a passenger

Posted in Augmented Reality, Technology, Virtual Reality

Let The Computer Figure It Out: PID Controllers – Theory

I’m going to start some posts on how to “Let The Computer Figure It Out”. I see rational folks sometimes use trial and error to figure something out – a totally valid methodology – but occasionally you need to respond to a dynamic system in code, or you’re just plain not taking advantage of the fact that you have a frikkin computer at your disposal and there’s no reason NOT to let it do the math for you. There are a few techniques that will enable you to just let the computer figure it out for you. This is the first one.

It turns out that it’s fairly easy to code these implementations in software, and today I’m going to discuss one of the most basic and useful controller equations – that of the Proportional-Integral-Derivative controller, or PID controller.

Occasionally you need to fine-tune some parameters according to changing conditions – you basically want something that will adjust to meet a set of conditions. I frequently see folks make educated guesses and try to get values that are in the ballpark of acceptable. This works for one-offs, but it’s really simple to get a computer to fine-tune things for you. In my chemical engineering past I learned about control theory, and it turns out these techniques are used in many other fields as well: most engineering disciplines, some AI fields, finance, and pretty much anywhere there’s a need for mechanical controllers. Physical PID controllers are all around us, while computer implementations are used for everything from smart thermostats and HVAC systems to robotics, AIs that drive cars, missile targeting systems, and so on. Anywhere you need a system to respond to changing conditions, you can probably use a PID controller.

A PID controller is used when you have an output value that (typically) responds to some adjustment value – think of it as a dial you can turn to make the output value go up or down. The PID controller is given control of the dial and monitors the output. If the output deviates from the desired target value (called the setpoint), the controller adjusts the dial. Here’s the equation for a PID controller:

u(t) = Kp·e(t) + Ki·∫ e(τ) dτ + Kd·de(t)/dt

The value e(t) is the error – the deviation of the actual output from the setpoint – as a function of time. The three terms on the right side are (in order) the proportional, integral, and derivative terms – hence the PID name. The K values are the controller constants; they adjust how much each part of the equation contributes to the final value, and they are how the PID controller is tuned to be responsive.
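
As a preview of the code (a minimal, discrete-time sketch rather than a production-hardened controller), the integral becomes a running sum and the derivative a finite difference between successive errors:

    // Minimal discrete PID sketch. The gains Kp, Ki, Kd are the tuning knobs.
    class PidController {
    public:
        PidController(double kp, double ki, double kd)
            : kp_(kp), ki_(ki), kd_(kd) {}

        // setpoint: desired output; measured: current output; dt: seconds since last call (> 0).
        double Update(double setpoint, double measured, double dt)
        {
            const double error = setpoint - measured;
            integral_ += error * dt;                              // accumulates steady-state bias
            const double derivative = (error - prevError_) / dt;  // rate of change of the error
            prevError_ = error;
            return kp_ * error + ki_ * integral_ + kd_ * derivative;
        }

    private:
        double kp_, ki_, kd_;
        double integral_  = 0.0;
        double prevError_ = 0.0;
    };

Each sample you call Update() with the setpoint and the measured value, then apply the returned correction to your “dial”; tuning is just choosing the three gains.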

The Proportional Control

The proportional control is basically how much the dial gets turned when there is an error in the output value. It’s directly proportional to the difference between the setpoint and the actual value. In some cases you can just use the error directly to set the control and you’re done, but particularly in physical systems or dynamic computer systems, you will be constantly adjusting the setpoint to match new conditions, and that’s when the rest of the PID terms come into play. Think about a hot water tank: the heat comes on until the water reaches the setpoint, then the heat shuts off. Residual heat will raise the temperature a bit more, overshooting the setpoint – but since we can’t cool the water, we have to wait for it to naturally lose heat until a significant error accumulates, at which point the heat kicks on again. Hot water tanks and home heating systems are a special sort of controller – typically called bang-bang controllers – because they have only two states, on or off (hence “bang-bang”). They are effectively just P controllers, with no adjustment of the control other than on or off (see the sketch below). The proportional part is the main contributor to the controller value – it’s proportional to the error: the larger the error, the larger the adjustment to the controller.
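
For comparison, the water-heater style of bang-bang control needs nothing more than a threshold test – here with a small hysteresis band so it doesn’t chatter right at the setpoint (the numbers are illustrative, not from any real heater):

    // Bang-bang (on/off) control with a hysteresis band: the heater switches on
    // once the temperature droops far enough below the setpoint and off again
    // once the setpoint is reached.
    struct BangBangHeater {
        double setpoint   = 60.0;  // target water temperature in degrees C (illustrative)
        double hysteresis = 2.0;   // allowed droop before the heater kicks back on
        bool   heatingOn  = false;

        bool Update(double measuredTemp)
        {
            if (measuredTemp <= setpoint - hysteresis) heatingOn = true;
            else if (measuredTemp >= setpoint)         heatingOn = false;
            return heatingOn;
        }
    };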

The Integral Control

The integral part is the part of the PID equation that takes into account any steady-state or constant forces that are changing the output value – like heat loss in a water tank, or trying to aim at a moving target. It is the integration of the error values over time, so it provides adjustment to the controller if error builds up over time. A controller can be just a PI controller, and in many cases this is good enough: the proportional part makes the gross adjustments, while the integral part keeps a small part of the controller active to offset any bias in the system.

The Derivative Control

The derivative part can be considered the part that rapidly adjusts to a change in the error – it serves to adjust the direction of the control. When the error goes from positive to negative (e.g., we just moved through the setpoint value), the derivative changes sign and serves to damp down any oscillations in the controller. The derivative part is frequently used when the process changes rapidly and you need the setpoint to be very closely tracked and the controller to be very quick to adjust. However, if you have a noisy system, including the derivative may make things worse.

Summary

Now, a PID controller has one input and one output, but frequently you can use a bunch of them in tandem to control more than one value. It’s even common to have PID controller outputs feed into other PID controllers when you have a more complicated process to control. Next time – the code.

Posted in Code, Control Theory

Largest shared VR installation ever?

I’m currently pretty busy building out and managing the development of what may be the largest shared VR installation ever. It’s designed to surround roughly 25-30 people sharing a virtual experience: one person directs the experience in real time while everyone else is along for the ride, so to speak. We don’t have the physical space yet to set this up, so I had to build a small-scale prototype to test the proof of concept and (assuming that goes well) to validate our rendering strategy. The first step was the monitors; here are four (we could not fit the desired five) 55″ 4K monitors.

wallOMonitorsThe next step is the PC hardware. We’re trying to determine exactly how many 4K displays we can drive from one beefy PC. Since the PC’s have special requirements, we’re specifying the hardware – a water cooled Intel Core i7 6700K CPU, 32GB memory, 1TB SSD, a water cooled Nvidia 980ti GPU. Here’s the parts;

So far it’s all come together pretty well. We’ve had multiple folks stop by to gawk at the displays – we have a synchronized scene running across all the monitors (which I can’t show yet), and the 4K displays really do look pretty good. I can share that, unfortunately, one beefy PC cannot handle two 4K displays running at 60fps; 30fps is pushing it. The final installation will be anywhere from 8 to 10 4K screens, all rendering different, synchronized views of some out-of-this-world scenes. Stay tuned for some in-game renders when the project makes its public appearance.

Posted in Hardware, Technology, Virtual Reality

Simple Exponential Smoothing, explained

I was helping a coworker deal with some noisy real-world data. Normally my first instinct is to use some averaging algorithm; I’ve frequently used something like that to output a smoothed value of frames-per-second (FPS) for a graphics program. An averaging algorithm is a reasonable choice when you’ve got a fairly rapid stream of samples and you want a smoothed value that doesn’t jump around but still seems responsive. However, I now usually use a better algorithm that eliminates a lot of the issues with averaging. Ideally you want a smoothing algorithm that lets you weight recent values more heavily than older ones. Keeping weights and an array of values has all sorts of startup and storage problems, but there’s a much simpler way: it’s called simple exponential smoothing, and it’s incredibly simple yet able to be tuned to the desired amount of responsiveness.

The way it works is to define a “smoothing constant” α, a number between 0 and 1. If α is small (i.e., close to 0), more weight is given to older observations. If α is large (i.e., close to 1), more weight is given to more recent observations.

Implementations typically define a series S that represents the current smoothed value (i.e., the local mean value) of the data as estimated from the samples seen so far. The smoothed value S_t is computed recursively from its own previous value and the current measured value X_t, like this:

S_t = α·X_t + (1 − α)·S_{t−1}

The calculation is pretty simple: you take the previous smoothed value, S_{t−1}, and the current raw value, X_t, and apply the formula to get the current smoothed value, S_t. (This is called the component form; there are other forms that are a little more complicated. There’s also a version that lets you predict the next value rather than just smoothing the current values.)

If we are using it for frequently updated raw values, like this graph of FPS, we can easily tune the constant. In this example I’ve let it run at 60 FPS for about half a second, then the rate drops to 30 FPS for half a second before returning to 60 FPS; there’s also a single-frame drop to 45 FPS towards the end. We can plot the exponentially smoothed value for various values of α and see how the smoothed curves look.

You can see that for α = 0.10 the curve shows a very gradual drop – a little too slow for our purpose. Conversely, for values of 0.75 or greater, we get a response that’s a bit too quick to show a smoothly changing FPS value. For FPS measurements I typically use α = 0.5.

The great thing about this function is that it’s simple to implement and use, and fairly easy to tune. The value of α that’s optimal for your particular need depends on the frequency of samples plus the responsiveness you desire. The only tricky implementation detail is the initial smoothed value, which I usually sidestep by providing the “expected” value as the initial S_{t−1}; it quickly gets smoothed toward a valid value.
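
Putting the update rule and that seeding trick together, a sketch looks like this (α and the seed are whatever suits your data; the class name is my own):

    // Simple exponential smoothing: one state variable, one multiply-add per sample.
    class ExponentialSmoother {
    public:
        // alpha in (0, 1]: larger = more responsive. seed is the "expected" starting value.
        ExponentialSmoother(double alpha, double seed)
            : alpha_(alpha), smoothed_(seed) {}

        double AddSample(double raw)
        {
            smoothed_ = alpha_ * raw + (1.0 - alpha_) * smoothed_;
            return smoothed_;
        }

        double Value() const { return smoothed_; }

    private:
        double alpha_;
        double smoothed_;
    };

    // For an FPS readout: ExponentialSmoother fps(0.5, 60.0);
    // then each frame: display(fps.AddSample(1.0 / frameTimeSeconds));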

 

Posted in Code, Miscellaneous

Realtime editing in VR is (almost) here

tldr:

There are two huge problems with creating content for VR, and Epic has addressed the major one: being able to interactively edit in VR. This is the way VR-specific content will be created from now on – in VR, for VR. The UI can only get more intuitive from here on out.

I’ve been busy working out some of the hardware kinks in a massive VR space, but I wanted to take the time to belatedly comment on an announcement Epic made last week. They not only managed to get the Unreal Editor running in VR, but have also hacked up a VR UI that lets you access (most?) of the editing features from inside VR. This is awesome. This is something we (Framestore) were kicking around, thinking of hacking some implementation together ourselves, but now we don’t have to. Thank you, Epic, for providing (and supporting) tools that make VR content creation sooooo much simpler.

Here’s a screen shot of manipulating an object in 3D – you can translate, rotate and scale the object in VR!

They have also implemented some menu items in VR – note that you pull up a menu and then make a selection using a controller (in this case a Vive controller).

For non-textual things, like selecting a material, they have a more traditional menu palette.

You can see some videos and read a summary of it here.

Epic will be making a more formal announcement at GDC on Wednesday, March 16 – hopefully with a soon-to-follow release of a VR-enabled engine update or source code. I’ve seen some comments that downplay the usefulness of this development, but I think those folks are missing the point (or have never developed VR content). You’ll use the tools that are fastest for initial development – I can’t really see folks starting a project from scratch by climbing into VR; the resolutions and ergonomics suck.

But….

For anyone who’s worked on VR content – tried a work-in-progress level in VR, gotten out of VR, made a tweak, gotten back into VR, and so on – just being able to climb into VR and make edits in situ is a huge step forward. There’s still a place for the traditional editing workflow that doesn’t require literally waving your hands around (which gets pretty tiring pretty quickly), but for that final bit of tweaking, you can now immerse yourself in the environment the user will be in. This is a fantastic step in the right direction. It’s a bear to implement, and I’m really glad Epic is taking on VR content editing in such an enthusiastic manner. The fact that Sweeney is narrating is just the icing on the cake.

Posted in Technology, Virtual Reality

‘Battle for Avengers Tower’ wins Best Animated VR Film at VRFest 2016

Kudos to the entire Framestore VR Studio team!

Posted in Virtual Reality

Best Practices in VR Design

I just ran across @VRBoy’s post on VR design practices. Yes, yes, and yes. In particular, Performance and Testing are the two areas I constantly see folks forgetting about.

Best Practices in VR Design

And by testing I mean not only making sure the app works, but that the overall implementation is suited for VR. I see a lot of creatives who think that just because they can make a good 2D experience or video, it will translate to VR. No one seems to actually *test* their apps in VR before release, as if what they see on a monitor is what it’s like in VR.

 

Posted in Virtual Reality