Real-time Lip-syncing and Audio Recording
If your webcam has a microphone, you can capture facial expressions and record the voice synchronously with the Face3D tracker by using the Motion LIVE 2D plugin.
Alternatively, install a microphone on your computer if an iPhone is the capture device, as the iPhone's TrueDepth camera can only transfer facial image data.
Not only can the actor's voice be recorded, but the facial expressions, along with lip-sync (lip shapes), can also be captured and generated automatically at the same time. The benefits are:
- Recording live audio with the facial capture in one session.
- Recorded audio generates lip-sync data in real time.
- Blend lip-sync and motion capture to make a cohesive performance.
(Watch Tutorial - Audio Recording and Real-time Lip Syncing)
- In Cartoon Animator, apply a character and make sure it is selected.
- Go to Plugins >> Motion LIVE 2D >> Motion LIVE 2D to open the Motion LIVE 2D panel.
- Connect to a facial capture device in the Motion LIVE 2D panel. For more information about the gear connection, please refer to the Workflow for Facial Mocap section.
- The Record Audio checkbox is activated by default when audio input is detected. You can choose an appropriate microphone for your voice from the drop-down list.
- Activate the Lip Sync checkbox to recognize lip shapes from your voice in real time.
Note
From the Lip Sync drop-down list, you can choose either Real-time or Post-process.
No matter which one you choose, the lip performance (the changing speed of the lip shapes) is the same in the real-time preview. The only difference is the lip visemes generated on the Lips track in the Timeline.
Real-time
- The new simplified lip-sync solution for creating lip shapes; you can control the Sensitivity of the lip-shape changing speed in real time, which suits live broadcasts.
- More viseme keys are generated in the Lips track.

Post-process
- The traditional audio lip-sync method, which generates more precise lip shapes and allows further editing for post production.
- Fewer but more accurate viseme keys are generated in the Lips track.
- Speak while you are capturing the facial expressions in Record mode.
- Play back the project to observe the character's facial expressions and lip-sync.
When Real-time is selected in the Lip Sync drop-down list, you can define the changing speed of the lip shapes by dragging the Sensitivity slider.
This slider is disabled when Post-process is chosen, though the real-time preview still shows the same result.
The higher the Sensitivity value, the faster the lip shapes change, and the more viseme keys are produced in the Lips track.
Sensitivity = 2 vs. Sensitivity = 4 (comparison images)
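The effect of the Sensitivity slider can be sketched conceptually. The snippet below is not Motion LIVE 2D code; it is a hypothetical illustration assuming that a higher Sensitivity lowers the change threshold at which a new viseme key is committed, so the same mouth signal yields more keys.

```python
# Conceptual illustration (hypothetical, not the Motion LIVE 2D API):
# a higher Sensitivity shrinks the change threshold, so more viseme
# keys are produced from the same audio-driven mouth-openness signal.

def viseme_keys(mouth_values, sensitivity):
    """Emit (frame, value) keys whenever the mouth signal changes
    by more than a threshold that shrinks as sensitivity grows."""
    threshold = 1.0 / sensitivity  # assumed relationship
    keys = [(0, mouth_values[0])]
    last = mouth_values[0]
    for frame, value in enumerate(mouth_values[1:], start=1):
        if abs(value - last) >= threshold:
            keys.append((frame, value))
            last = value
    return keys

signal = [0.0, 0.2, 0.5, 0.6, 0.9, 0.4, 0.1]
low = viseme_keys(signal, sensitivity=2)   # threshold 0.5 -> 2 keys
high = viseme_keys(signal, sensitivity=4)  # threshold 0.25 -> 5 keys
```

Under this assumption, Sensitivity = 4 records more (and denser) keys in the Lips track than Sensitivity = 2, matching the comparison above.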
There are two options for capturing your lip-sync in real time: Visually (Lip-sync OFF) and From Audio (Lip-sync ON).

Visually (Lip-sync OFF)
This is the default option: the audio (the waveform of the voice) is recorded while the mouth is captured via the webcam Face3D Tracker based on the visuals.
Because it captures the mouth visually instead of creating visemes from the sounds, the captured facial motions are recorded in the Facial Clip track only.
Please refer to the Facial Clips and Keys and Voice Clip sections for more information about modifying the facial animation of the character.

From Audio (Lip-sync ON)
Enable the Lip Sync feature to use an audio input: speak clearly and precisely into a microphone so that Motion LIVE 2D can detect the audio and generate viseme lip-sync based on it.
Because it animates the lips based on the sounds, not only is the audio waveform recorded in the Voice track, but visemes are also generated in the Lips track for further editing in the Lips Editor.
In the current version of Motion LIVE 2D, if you don't make any sound when Lip Sync is ON, there may be no mouth animation at all.
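The audio-driven path above can be pictured as a phoneme-to-viseme mapping. The sketch below is purely conceptual, with an assumed, simplified mapping table and hypothetical function names; it is not part of Motion LIVE 2D, but it shows how recognized sounds could become the viseme keys (such as the "WOO" and "Oh" shapes) that land in the Lips track.

```python
# Conceptual sketch (assumed mapping and names, not the Motion LIVE 2D
# API): post-process lip-sync maps recognized phonemes to viseme keys.

PHONEME_TO_VISEME = {  # simplified, hypothetical mapping
    "AA": "Ah", "OW": "Oh", "UW": "WOO",
    "M": "M_B_P", "B": "M_B_P", "P": "M_B_P",
    "F": "F_V", "V": "F_V",
}

def visemes_from_phonemes(phoneme_timings):
    """Convert (start_frame, phoneme) pairs into viseme keys,
    collapsing consecutive duplicate visemes into a single key."""
    keys = []
    for frame, phoneme in phoneme_timings:
        viseme = PHONEME_TO_VISEME.get(phoneme, "None")
        if not keys or keys[-1][1] != viseme:
            keys.append((frame, viseme))
    return keys

# e.g. the word "boom" as B-UW-M:
print(visemes_from_phonemes([(0, "B"), (3, "UW"), (9, "M")]))
# -> [(0, 'M_B_P'), (3, 'WOO'), (9, 'M_B_P')]
```

This also makes the note above intuitive: with no sound there are no phonemes, hence no viseme keys and no mouth animation.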
In Cartoon Animator, open the Timeline panel and click the Face button in order to view the recorded data in their corresponding tracks.
You can find the facial image data stored in the Facial Clip (A) track, and the sound waves and lip-sync visemes stored in the Voice (B) and Lips (C) sub-tracks of the Voice Clip main track.
Drag the play head to any desired frame to hear the audio. If the lip-sync keys are not generated well, follow the steps below to modify the viseme keys:
- Double-click the viseme key in the Lips track.
- In the Lips Editor dialog that opens, replace the lip-sync viseme with another one.
Note
If a G3 360 or G3 character's mouth feature is converted to Smooth Mode, you can drag the Expressiveness slider of the selected viseme in the Lips Editor to exaggerate or diminish the strength of the mouth opening.
- The lip viseme shape on the character will change accordingly.
Original viseme: WOO
Modified viseme: Oh
Once the lip-sync recording is finished, you can collect and export the voice clip in Cartoon Animator, then apply it to other characters.
- Open the Timeline panel (F3) and click the Face button; you can find the recorded audio data stored in the Voice Clip main track.
- Make sure the character or the voice clip is selected. Go to the Content Manager >> Animation >> Face folder and click the Add button to save the clip in *.ctFCS format under the Custom tab.
- Drag the saved voice clip onto the character to which you want to apply the audio data. The character will then start the face animation with the audio.
Note
Please refer to the Facial Clips and Keys and Voice Clip sections for more information about modifying the facial animation of the character.