Self-Reflective-Journal: Week Two: 13/03/25
For the second week we went over virtual production and its pipeline. We learnt about 'on-set virtual production' (OSVP), an entertainment technology for television and film production in which LED panels are used as a backdrop for a set, displaying computer-generated imagery in real time. We then looked into the MoCap pipeline and shot coverage to get an idea of how our pre-production planning will go and the types of camera angles and shots needed for the scene. As a group we knew our actor would be Jasmine, so she would be the performer for the photogrammetry in week three.



Then we headed down to WG to the Motion Capture Studio with technician Blair Yate for a Faceware demo and framing exercise. We got to take turns trying on the facial tracking headgear, which was recording live footage of the face in real time. Jasmine tried it on first. It looked really fun seeing the various faces we could pull while watching the MetaHuman replicate them. I then tried the headgear. It was too big for me so it kept sliding on my head, but otherwise it was very cool seeing how the technology was all live tracking.





After class we did a rehearsal with our actor, Jasmine, with me standing in as Sonny. This was very helpful as Blair and Greg were there to watch our rehearsal take and give good feedback, and it helped us understand our scene. The whole class had left, so we had the space and confidence to move around and experiment with camera shots, replicating what we saw in the reference footage from the actual movie in our own performance take.











← Rehearsal Video
We then got a briefing on how to prepare for the facial photogrammetry recording. Faceware doesn't require tracking dots to process the footage, though applying some dots to the actor's face can help keep decisions more consistent during tracking. Applying two different coloured lipsticks on the actor would help visually separate the lips when they move, and using liquid eyeliner to make dots on the face would give the system extra tracking points. These would have to match the layout of the markers in Analyzer. Analyzer also requires a “neutral frame” to act as a reference point so it can accurately track the facial performance.
Afterwards, as a group, we were told to fill in a 'U, Robot' performance questionnaire to get a sense of character building and acting cues. During the evening the group made a copy of the document and we all took time filling in the questions and the viewpoint sentence.
← 'U, Robot' Questionnaire
Then we did ROM training data of Dr Kennedy's face, focusing on the brows and the eyes. I followed each ROM tutorial and the facial ROM shape list to make sure I was doing all the right dedicated shapes on the face. I started with the brows, keyframing where the most elevated parts moved, then trained the keys and tracked the data. I then started a new 'fwt' eyes file and made keyframes; after saving the pre-tracked version I trained and tracked the eyes and nose. I zipped the fwt eyes file and folder and then opened the footage test video file in Analyzer. On frame zero (the neutral frame) I posed the brows, nose and eyes, then imported the tracking models: I made a folder for the tracking files and brought in the eye and brow tracking models. These two files synced up and I was able to see all the posing tracked and put together.

← Faceware Analyzer

Final Tracked eyes and brows video →

