Generating Drive and Driven Animation

You can design the animation trigger conditions and their corresponding driven animations for head and facial parts such as the brow and eyelid, as well as behaviors like blink and saccade.

Workflow of Drive and Driven Animation

Refer to the illustration below to explore the relationship between the Driver and Driven nodes, the Driver Priority, and Mood conditions for speech output.

  1. Create the Driver nodes that trigger animations.
  2. Create the Driven nodes that supply source animation to a Driver node.
  3. Set the Driver Priority to define which driver’s output takes precedence when animations overlap.
  4. Specify the Mood to apply different animations based on real-time emotional tags inferred from the LLM’s response.
  5. Send the driver outputs to the root for speech output.

After the driver outputs are merged, the AI Assistant system dynamically selects and blends the appropriate animation clips according to the emotion detected during speech. It selects a Mood Track in the Mood Selector node based on the emotion detected in the current sentence (only one mood is selected at a time).
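The node graph itself is assembled in the editor, but the selection rules it applies can be summarized in a short sketch. The Python below is a minimal, hypothetical model of those rules only: the class, function, and clip names are illustrative assumptions, not part of the product API.

```python
# Hypothetical model of Driver Priority resolution and Mood Track selection.
# The names below are illustrative only; they do not come from the product API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DriverOutput:
    name: str          # e.g. "brow", "eyelid", "blink", "saccade"
    priority: int      # higher value wins when outputs overlap
    clip: str          # the driven animation clip this driver sources

def resolve_overlap(outputs: list[DriverOutput]) -> Optional[DriverOutput]:
    """Pick the output whose driver has the highest priority."""
    return max(outputs, key=lambda o: o.priority, default=None)

def select_mood_track(detected_emotion: str, mood_tracks: dict[str, str]) -> str:
    """Choose exactly one Mood Track for the current sentence; fall back to Neutral."""
    return mood_tracks.get(detected_emotion, mood_tracks["Neutral"])

# Example: a blink driver and an eyelid driver fire at the same time.
overlapping = [
    DriverOutput("blink", priority=2, clip="blink_fast"),
    DriverOutput("eyelid", priority=1, clip="eyelid_relaxed"),
]
winner = resolve_overlap(overlapping)          # -> the higher-priority "blink" output
mood_tracks = {"Neutral": "idle_neutral", "Joy": "smile_bright"}
track = select_mood_track("Joy", mood_tracks)  # -> "smile_bright"
```

In the actual graph this resolution happens inside the Driver Priority and Mood Selector nodes; the sketch only makes the precedence rule and the one-mood-at-a-time behavior explicit.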

To create a Mood Selector, right-click on an empty area of the graph and select Driver > Mood Selector from the context menu.

This node provides a variety of mood states. The system prompts the LLM to append one of the following Mood Tags to the end of each sentence it returns. After you connect the mood to a Driver or Driver Priority node, the Drive and Driven Animation is generated once that emotion is detected during speech.

  • Neutral
  • Amazement
  • Anger
  • Cheekiness
  • Disgust
  • Fear
  • Grief
  • Joy
  • Out of breath
  • Pain
  • Sadness
* You can rename emotion tags or disable unwanted tags in the LLM asset.
* If a sentence does not contain a mood tag, or the LLM does not support prompt conditioning, the default emotion is Neutral.
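For illustration, the Neutral fallback described in the note above can be expressed as a small parsing sketch. The bracketed tag format and the helper function below are assumptions made for demonstration only; the real tag syntax and handling are defined by the LLM asset.

```python
# Hypothetical parser for a Mood Tag appended to an LLM sentence.
# The "[Joy]" tag format is an assumption for illustration; the real
# syntax is configured in the LLM asset.
import re

MOOD_TAGS = {
    "Neutral", "Amazement", "Anger", "Cheekiness", "Disgust",
    "Fear", "Grief", "Joy", "Out of breath", "Pain", "Sadness",
}

def extract_mood(sentence: str) -> tuple[str, str]:
    """Return (sentence without tag, mood). Defaults to Neutral when no
    recognized tag is present or the LLM did no prompt conditioning."""
    match = re.search(r"\[([^\]]+)\]\s*$", sentence)
    if match and match.group(1) in MOOD_TAGS:
        return sentence[:match.start()].rstrip(), match.group(1)
    return sentence, "Neutral"

print(extract_mood("That is wonderful news! [Joy]"))  # ('That is wonderful news!', 'Joy')
print(extract_mood("Please hand me the report."))     # ('Please hand me the report.', 'Neutral')
```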