Digital Human Likenesses (DHLs) are an increasingly important facet of our screen experiences. We now enjoy recreations of deceased or digitally re-aged actors in our favourite movies; we play with lifelike synthetic human characters in videogames; we engage with personalised digital agents in online transactions; and we control and interact with believable avatar representations of both real and imaginary people in virtual environments. Furthermore, new immersive experiences are emerging for entertainment, education, information provision and commerce, and with each of these the need for a ready supply of believable digital humans to engage, transact and empathise with continues to grow.
Key to the believability of these digital incarnations is the lifelike representation of the human face. The ways in which our faces respond physically – both consciously and subconsciously – are direct manifestations of our inner feelings. Facial expressions are not only complex but also highly individualised. The notion that everyone exhibits ‘primary emotions’ in the same way – accepted as a given for most of recent history – has now been substantially challenged, as has the idea that these emotions are universal across cultures. Indeed, one of the key ways we get to know people is by learning to ‘read’ their individual facial expressions.
The simplified ‘emotion expression archetypes’ of smiles, grimaces, frowns and so on have been a staple of Western cartoon animation for many years, and are the basis for the emojis we use in text messaging platforms. But photorealistic depiction of the human face demands a more nuanced and individualised approach. Photorealistic digital humans often fall into the ‘uncanny valley’, where the virtual character’s animated facial expression falls short of the audience’s expectation of ‘true to life’. While believability is achieved in high-end (pre-rendered) film production, these are extremely labour-intensive endeavours involving many person-months of artistic and technical effort. Real-time, spontaneous facial expression approaching even a fraction of the believability of a human actor has until now been very difficult to achieve.
Utilising the latest Machine Learning, Computer Vision and CGI rendering technologies, our Empathic Avatars system will enable the creation of digital human likenesses able to respond convincingly to external stimuli through believable facial expression. These will be individuated both to the specific physical characteristics of the generated face and to the psychological and behavioural traits selectable or designable by the user, artist or director.