Animating the ‘Face’ of Reflexion Health: How We Brought VERA to Life

At Reflexion Health, we’re proud to be a leader in virtual physical therapy rehabilitation solutions. And we couldn’t do it without our FDA-cleared Virtual Exercise Rehabilitation Assistant (VERA™) platform — or Vera, as we call her.

A lifelike PT assistant who walks patients through the form and pace for their exercises, Vera is the centerpiece of our virtual PT platform. Engaging and responsive, she’s one of the reasons why researchers from a major medical center found our platform to offer “the advantage of cost savings, convenience, at-home monitoring, and coordination of care, all of which are geared to improve adherence and overall patient satisfaction.”1

But Vera wouldn’t exist without the hard work of our world-class development team. In this series, we’re lifting the veil on these essential but behind-the-scenes members of our virtual PT team. In this post, Senior Software Engineer Michael Graessle discusses the unique challenges that came along with our goal of providing Vera with world-class animation — and how those challenges were overcome to bring Vera to life.  

Animating the ‘Face’ of Reflexion Health

We needed a way to easily and effectively create animations for Vera’s face so she could talk along with the audio that we’d recorded. There are hundreds of things that Vera can say, so we needed to find a process to sync up audio with the face movements without having to touch up the animation or create it by hand — an expensive and time-consuming process.

Hand animation is a wonderful art, but it wouldn’t have been the right approach for Vera. The face is complex to animate by hand, or any other way. Consider all the parts that need to be animated on the face — the lips, tongue, eyes, eyebrows, cheeks, and so on — and then consider that each of these features needs to have unique, lifelike qualities.

Beyond the face, there was also the sheer volume of additional animation that needed to be done. Take a Pixar movie like Toy Story, for instance. It’s a different type of project, but the animation alone still takes at least six months for an animated feature film — and that’s with the work of a huge team of professionals.

With Vera, we had a similar mandate of capturing a vast variety of facial and physical movements — not to tell a story in a feature film, but to ensure that the many patients who work with Vera find her as engaging as possible, with lifelike movements they could recognize as their own. 

All this could easily add up to a time sink! And Reflexion Health is still an early-stage company, without the limitless time and resources of a large movie animation studio. The simple fact was that there was no time for delays on the march to launch: We needed to find a better, faster, more efficient way. 

Bringing VERA to Life

Virtual PT succeeds in part thanks to its ability to automate redundant tasks. So, we took a cue from our own playbook, and applied this philosophy of automation to some of the animation we needed to do. 

To automate the process of Vera’s face animation, we created our own proprietary animation pipeline using home-grown tools and third-party software like FaceFX. Given an audio file as input, this software creates corresponding animation based on the phonemes — the distinct units of sound — that closely match the words being said.

Next, we use a custom script to feed each audio file into FaceFX for processing. Written in the Python programming language to work with the FaceFX API, this script lets us more easily interface with the software, transforming the audio we feed it into animated movements. The result is a comprehensive library of animations that, though they’re technically automated, are no less convincing and engaging than if they were carefully handcrafted.
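In spirit, that batch step looks something like the sketch below. The FaceFX API itself isn’t shown — `process_with_facefx` is a hypothetical placeholder for the real call, and the directory layout is assumed — but the surrounding loop captures the idea: walk the library of recorded audio clips and produce one piece of animation per clip.

```python
from pathlib import Path

def process_with_facefx(audio_path: Path) -> str:
    # Hypothetical stand-in for the actual FaceFX API call, which takes
    # an audio clip and returns phoneme-timed facial animation data.
    return f"animation for {audio_path.name}"

def build_animation_library(audio_dir: str) -> dict:
    """Feed every recorded .wav clip through the (stubbed) FaceFX step,
    keyed by clip name so the app can look animations up later."""
    library = {}
    for clip in sorted(Path(audio_dir).glob("*.wav")):
        library[clip.stem] = process_with_facefx(clip)
    return library
```

Because the loop is driven entirely by the files on disk, adding a new line for Vera is just a matter of dropping in a new recording and re-running the script.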

Because the process is automated, we only needed to create one piece of animation that contained the necessary phonemes, then match each phoneme up with a corresponding facial expression from Vera. In this way, the software learned what Vera’s facial expression should look like when she says a diverse range of sounds.
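The heart of that learning step is a simple lookup from phoneme to facial pose. The sketch below illustrates the idea only — the phoneme symbols and pose names are invented for illustration, not Vera’s actual data:

```python
# Illustrative phoneme-to-pose table: each phoneme detected in the
# audio maps to a hand-authored facial pose. Symbols and pose names
# here are hypothetical examples, not the production mapping.
PHONEME_TO_POSE = {
    "AA": "open_jaw",      # as in "f-a-ther"
    "IY": "wide_smile",    # as in "s-ee"
    "M":  "lips_closed",   # as in "m-at"
    "F":  "lip_on_teeth",  # as in "f-un"
}

def poses_for_phonemes(phonemes):
    """Map a sequence of detected phonemes to facial poses,
    falling back to a neutral pose for anything unmapped."""
    return [PHONEME_TO_POSE.get(p, "neutral") for p in phonemes]
```

Once the table exists, every new audio clip reuses it, which is why one carefully authored set of expressions can drive hundreds of spoken lines.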

Once we hit upon it, our process for creating Vera’s animation — and especially her facial expressions — made it possible for Vera to come to life as an engaging avatar that patients find interactive and even fun, without bogging down our small startup with unrealistic labor hours.

With this innovation in place, we now have the ability to process a large number of facial animations for Vera very rapidly, allowing us to add new things Vera can say in the future as well. And we’re also free to move on to other challenges — which can be an unending list for healthcare tech startups (and the DevOps engineers who work for them). 

Michael Graessle is a Senior Software Engineer for Reflexion Health.


1 Chughtai et al. The Role of Virtual Rehabilitation in Total and Unicompartmental Knee Arthroplasty. J Knee Surg. 2018 Mar 16. doi: 10.1055/s-0038-1637018. [Epub ahead of print]