C-CATS

Research | Development | Production

Ready Set Virtual

Basics

RSV is a quintessentially inclusive endeavour, which provides high-end virtual location and studio facilities to anyone, from anywhere. The only qualifications are craft, creativity and storytelling prowess. Actors project their performances into the RSV ‘realm’ using low-cost performance capture technologies for movement, gesture, expression and voice. Their digital equivalents are manifest within the set, and perform exactly as each actor drives (or ‘puppets’) them. The cinematographer and camera crew will have set up lights and camera choreography, and the production design and art crew will have built the virtual environments, props and set. The director and AD can then virtually ‘shoot’ different takes, which are recorded in their entirety as if on a real film set and auto-logged for an editor to assemble. In addition, any visual effects can be designed and cued in real time within the virtual set (realm) where needed, to be enhanced in the post-production phase.
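
By way of a purely illustrative sketch, each actor’s low-cost rig might stream per-frame packets of movement, gesture, expression and voice into the realm, which applies them to that actor’s digital equivalent. RSV’s actual capture protocol is not specified here, so the schema and the realm interface below are assumptions:

    from dataclasses import dataclass, field

    # Hypothetical per-frame packet from an actor's low-cost capture rig;
    # the schema and the 'realm' interface are assumptions, not RSV's own API.
    @dataclass
    class CaptureFrame:
        actor_id: str
        timestamp: float                                 # seconds since session start
        skeleton: dict = field(default_factory=dict)     # joint name -> rotation
        blendshapes: dict = field(default_factory=dict)  # facial shape -> weight 0..1
        audio_chunk: bytes = b""                         # voice, kept in-stream for lip-sync

    def drive_double(realm, frame: CaptureFrame) -> None:
        """Puppet the actor's digital equivalent with one captured frame."""
        double = realm.doubles[frame.actor_id]           # the actor's avatar in the set
        double.pose(frame.skeleton)                      # movement and gesture
        double.express(frame.blendshapes)                # facial expression
        double.speak(frame.audio_chunk)                  # voice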

RSV is not only a unique learning tool, it is also a useful previsualisation mechanism and, ultimately, a wholly workable ‘virtual production’ set-up. Teams and individuals can work remotely at a distance, or within each other’s vicinity. The interface tools will be made to mimic real physical devices and controls (camera adjustments, dolly moves, lighting consoles, sound recording mixers, etc.).

Players and Participants

The system incorporates synchronous and asynchronous involvement from a wide range of filmmaking and performance talents, integrating roles which would traditionally have been considered ‘on-set’ with those which previously would have been confined to the domain of post-production. Directing, producing, cinematography, sound, production design and other live-action ‘departments’ work synergistically with editing, VFX, music, sound design, atmospherics, makeup, costume and more, all of which are enabled through the digital manifestations of their crafts within this seamless virtual world. Meanwhile, actors and performers work with modern low-overhead performance capture tools, with technical and creative support on hand whenever required.

Operation

In most cases, the crew, cast and creative technicians will be a small subset of a full film production operation. Actors will use motion capture suits and face capture devices; characters can be either pre-built or custom designed. Environments can be pre-made or custom crafted. Everyone logs into the system remotely from wherever they are. The virtual set is positioned, adjusted and lit; cameras are placed with virtual tracks and movement mechanisms devised. Background action is pre-choreographed where needed. Atmospheric and practical effects can either be created synchronously ‘in virtuo’, or added afterwards in layers of cumulative output.
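
As a minimal sketch of how such a session might be assembled (every class and parameter below is hypothetical, invented for illustration rather than drawn from RSV itself):

    from dataclasses import dataclass, field

    # Minimal, hypothetical model of session set-up; not an actual RSV interface.
    @dataclass
    class Camera:
        name: str
        lens_mm: float
        track: tuple | None = None           # (start_xyz, end_xyz) for a virtual dolly move

    @dataclass
    class Session:
        scene: str
        environment: str = ""                # pre-made or custom crafted
        lights: dict = field(default_factory=dict)
        cameras: list = field(default_factory=list)
        performers: list = field(default_factory=list)

        def add_camera(self, name: str, lens_mm: float, track=None) -> Camera:
            cam = Camera(name, lens_mm, track)
            self.cameras.append(cam)
            return cam

    session = Session(scene="Sc. 12")
    session.environment = "prebuilt_loft_interior"
    session.lights["key"] = {"kind": "spot", "intensity": 0.8}
    session.add_camera("A-cam", lens_mm=35,
                       track=((0.0, 1.6, 4.0), (2.0, 1.6, 2.0)))  # dolly from mark to mark
    session.performers.append({"actor": "remote_performer_1", "rig": "mocap+face"})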

A ‘take’ occurs just like on a real film set. After it is recorded, it can be reviewed if needed, notes given to actors and crew, and further takes recorded. All takes can be marked up with continuity or performance notes and, once each scene has been covered in its entirety, placed in the dailies bank for editing, post effects or the sound mix.
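
The take log itself might look something like the following sketch; the record structure and field names are assumptions for illustration, not the actual auto-logging format:

    from dataclasses import dataclass, field

    # Hypothetical auto-logged take record and dailies bank; illustrative only.
    @dataclass
    class Take:
        scene: str
        number: int
        duration_s: float
        notes: list = field(default_factory=list)  # continuity / performance notes
        circled: bool = False                      # flagged as a preferred take

    dailies_bank: list = []

    def wrap_take(take: Take, notes, circled: bool = False) -> None:
        """After review: attach notes, mark any selection, file in the dailies bank."""
        take.notes.extend(notes)
        take.circled = circled
        dailies_bank.append(take)

    wrap_take(Take(scene="Sc. 12", number=3, duration_s=48.0),
              notes=["prop glass moved between set-ups", "stronger read on final line"],
              circled=True)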

The whole process is multipurpose. It can be used as (i) an inexpensive training environment for crews and actors; (ii) a previsualisation tool for larger productions; and (iii) a final-output filmmaking engine, particularly well suited to innovative short-form drama but applicable to almost anything, even high-end episodic television production.

RSV takes current ‘Virtual Production’ and makes it even more virtual. Limited ‘photo-live’ action is possible through the use of remotely composited video streams. While this is unlikely to match the verisimilitude afforded by full LED-volume live-action cinematography, the system’s remote collaboration, which removes the requirement for physical co-presence, makes it far more accessible to all filmmakers, artists and performers, whatever their level or location.


Empathic Avatars

Digital Human Likenesses (DHLs) are an increasingly important facet of our screen experiences. We now enjoy recreations of deceased or age-inconsistent actors in our favourite movies; we play with synthetic lifelike human characters in videogames; we engage with personalised digital agents in online transactions; and we control and interact with believable avatar representations of both real and imaginary people in virtual environments. Furthermore, new immersive experiences are emerging for entertainment, education, information provision and commerce, and with each of these the need for a ready supply of believable digital humans for us to engage, transact and empathise with grows and grows.

Key to the believability of these digital incarnations is the lifelike representation of the human face. The ways in which our faces physically respond, both consciously and subconsciously, are direct manifestations of our inner feelings. Facial expressions are not only complex but also highly individualised. The notion that everyone exhibits ‘primary emotions’ in the same way, universally across individuals and cultures, was accepted as a given for most of recent history but has now been substantially challenged. In fact, one of the key ways we ‘get to know’ people is by learning to ‘read’ their individuated facial expressions.

The simplified ‘emotion expression archetypes’ of smiles, grimaces, frowns and so on have been a staple of Western cartoon animation for many years, and are the basis of the emojis we use in text-messaging platforms. But when we ‘up our game’ to photorealistic depiction of the human face, a more nuanced and individualised approach is needed. Photorealistic digital humans often fall into the ‘uncanny valley’, where the virtual character’s animated facial expression falls short of the audience’s expectation of ‘true to life’. While believability is achieved in high-end (pre-rendered) film production, these are extremely labour-intensive endeavours involving many man-months of artistic and technical effort. Real-time spontaneity in facial expression that approaches even a fraction of the believability of a human actor has until now been very difficult to achieve.

Utilising the latest Machine Learning, Computer Vision and CGI rendering technologies, our Empathic Avatars system will enable the creation of digital human likenesses able to respond convincingly to external stimuli through believable facial expression. These will be individuated both to the specific physical characteristics of the generated face and to the psychological and behavioural traits selectable or designable by the user, artist or director.
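
To make the idea of trait-individuated response concrete, here is a deliberately simplified sketch. The trait names, the linear blending and the blendshape targets are all illustrative assumptions, standing in for what would in practice be learned models:

    # Deliberately simplified sketch of trait-individuated expression; the traits,
    # the linear blend and the blendshape targets are illustrative assumptions,
    # standing in for what would in practice be learned ML models.
    TRAITS = {"expressiveness": 0.7, "warmth": 0.9, "reactivity": 0.4}  # designable per avatar

    def expression_weights(valence: float, intensity: float, traits: dict) -> dict:
        """Map an external stimulus (valence -1..1, intensity 0..1) to facial
        blendshape weights (0..1), modulated by the avatar's individual traits."""
        drive = intensity * traits["reactivity"]
        smile = max(0.0, valence) * traits["warmth"] * drive
        brow = max(0.0, -valence) * drive               # negative valence knits the brow
        scale = traits["expressiveness"]
        return {
            "mouth_smile": min(1.0, smile * scale),
            "brow_lower": min(1.0, brow * scale),
            "eye_wide": min(1.0, drive * scale * 0.5),  # arousal widens the eyes
        }

    # A pleasant, moderately intense stimulus for this particular avatar:
    print(expression_weights(valence=0.8, intensity=0.6, traits=TRAITS))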


Remotely Theatrical


ASC: Automating the Subjective Cineaste


REACT


EVNE (Exploring the Virtual Nature of Emotion)