C-CATS

Research | Development | Production

Among the Cherry Trees

Among the Cherry Trees is a short visual poem about hesitation, fear of commitment, and the dangers that lie therein. The singer/dancer is a siren: the custodian of the cherry trees and their exquisite flowering petals. Just like her floral manifestations, she can only be with you for a passing moment in the seasons of time. She invites you to join her in her sacred quest to protect the natural world; to immerse yourself in the beauty and grace of her music and movement; to cast off material things; and to help save what is fragile and precious.


Credits

DANCE & CHOREOGRAPHY
Mansi Harvey

MUSIC & VOICE
Alice Mills

TECHNICAL DIRECTOR
Matt O’Dell

PRODUCTION & DIRECTION
Jon Weinbren

PRODUCTION MANAGER
Roxie Oliveira

ENVIRONMENT DESIGN
Nazia Zaman

VP TECHNICAL TEAM
Harry Piercy
Graham Keith

DIRECTOR OF PHOTOGRAPHY
Bojan Brbora

1st ASSISTANT CAMERA
Ellie Thompson

2nd ASSISTANT CAMERA
Dan MacDuff

EDITORS
Roxie Oliveira
Cezara Hertanu

PRODUCTION DESIGN
Abbie Cornwell

ART DEPARTMENT ASSISTANTS
Cezara Hertanu
Alice Mills

MIXING ENGINEER
Damian Pace


Among the Cherry Trees (1:09)

Behind the Scenes (4:48)


Remember the Future

Credits

WRITERS
Tom Hill
Jon Weinbren

ANIMATION DIRECTOR
Izabela Barszcz

BACKGROUND PAINTER
Kate Mercer

LEAD ANIMATOR
Milda Kargaudaitė

ANIMATION
Maria Belik
Sofia Negri
Luke Ramsay

ANIMATION ASSISTANTS
Beatriz Rosa
Savion Alexander

MUSIC
Seasons of Time | Back to You
Composed and Performed by Alice Mills
Guitars by Johan Beavis Berry
Mixed by Cardamon Rozzi

EDITORS
Anselem Nkoro
Jon Weinbren

PRODUCTION ADMINISTRATOR
Pina Stamp

EXECUTIVE PRODUCERS
Lorenzo Fiaramonti
Nathalie Hinds

PRODUCED and DIRECTED
by
Jon Weinbren

Awards (so far)


Altered Perceptions


Empathic Avatars

Digital Human Likenesses (DHLs) are an increasingly important facet of our screen experiences. We now enjoy recreations of deceased or digitally de-aged actors in our favourite movies; we play with lifelike synthetic human characters in videogames; we engage with personalised digital agents in online transactions; and we control and interact with believable avatar representations of both real and imaginary people in virtual environments. Furthermore, new immersive experiences are emerging for entertainment, education, information provision and commerce, and with each of these the need for a ready supply of believable digital humans to engage, transact and empathise with continues to grow.

Key to the believability of these digital incarnations is the life-like representation of the human face. The ways in which our faces physically respond – both consciously and subconsciously – are direct manifestations of our inner feelings. Facial expressions are not only complex but also highly individualised. The notion that everyone exhibits ‘primary emotions’ in the same way – accepted as a ‘given’ for most of recent history – has now been substantially challenged, as has the idea that these emotions are universal across individuals and cultures. In fact, one of the key ways we ‘get to know’ people is by training ourselves to be able to ‘read’ their individuated facial expressions.

The simplified ‘emotion expression archetypes’ of smiles, grimaces, frowns etc. have been a staple of Western cartoon animation for many years, and are the basis for the emojis we use in text-messaging platforms. But when we ‘up our game’ to photo-realistic depiction of the human face, a more nuanced and individualised approach is needed. Photorealistic digital humans often fall into the ‘uncanny valley’, where the virtual character’s animated facial expression falls short of the audience’s expectation of ‘true-to-life’. While believability has been achieved in high-end (pre-rendered) film production, these are extremely labour-intensive endeavours involving many person-months of artistic and technical effort. Real-time spontaneity in facial expression that approaches even a fraction of the believability of a human actor has, until now, been very difficult to achieve.

Utilising the latest Machine Learning, Computer Vision and CGI rendering technologies, our Empathic Avatars system will enable the creation of digital human likenesses able to respond convincingly to external stimuli through believable facial expression. These will be individuated both to the specific physical characteristics of the generated face and to psychological and behavioural traits selectable or designable by the user/artist/director.
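The kind of individuation described above can be sketched, very roughly, as a mapping from detected emotional stimuli to per-avatar facial blendshape weights, shaped by a personality profile. Everything in this sketch (the trait names, emotion labels and blendshape channels) is an illustrative assumption, not the actual Empathic Avatars implementation:

```python
from dataclasses import dataclass

@dataclass
class PersonalityProfile:
    """Hypothetical per-avatar behavioural traits on a 0..1 scale."""
    expressiveness: float = 0.5  # how strongly inner emotion reaches the face
    smile_bias: float = 0.0      # habitual tendency toward smiling

# Hypothetical mapping from detected emotions to facial blendshape channels.
EMOTION_TO_BLENDSHAPES = {
    "joy":      {"mouth_smile": 1.0, "cheek_raise": 0.6},
    "surprise": {"brow_raise": 1.0, "jaw_open": 0.4},
    "sadness":  {"brow_inner_up": 0.8, "mouth_frown": 0.7},
}

def expression_weights(emotion_probs, profile):
    """Blend detected emotion probabilities into individuated
    blendshape weights, scaled by the avatar's personality."""
    weights = {}
    for emotion, p in emotion_probs.items():
        for channel, strength in EMOTION_TO_BLENDSHAPES.get(emotion, {}).items():
            weights[channel] = weights.get(channel, 0.0) + p * strength
    # Personality shaping: overall expressiveness, plus a habitual smile bias.
    weights = {c: min(1.0, w * profile.expressiveness) for c, w in weights.items()}
    weights["mouth_smile"] = min(1.0, weights.get("mouth_smile", 0.0) + profile.smile_bias)
    return weights

# Example: the same stimulus produces different faces on different avatars.
stimulus = {"joy": 0.7, "surprise": 0.3}
reserved = expression_weights(stimulus, PersonalityProfile(expressiveness=0.3))
warm = expression_weights(stimulus, PersonalityProfile(expressiveness=0.9, smile_bias=0.2))
```

In a full system the emotion probabilities would come from a computer-vision or ML front end, and the blendshape weights would drive a real-time CGI face rig; here the point is only that individuation can live in a compact, per-avatar parameter layer between the two.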


ASC: Automating the Subjective Cineaste


REACT


An Avatar Prepares


(Re)Animating Stanislavsky