
Basic functionality


Runtime

Text to Speech

To create speech from text at runtime, call the Lipsync API manager, then the Text to Speech node. This node generates a sound wave that you can use in your runtime scripts.

The plugin currently supports ElevenLabs and Azure as TTS engines. For correct operation, specify the TTS engine you need in the corresponding field, along with the Voice ID.
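
The same flow can be driven from C++. Below is a minimal sketch; ULipsyncApiManager, TextToSpeech, and ETTSEngine are illustrative names inferred from the node names above, not the plugin's verified C++ API.

```cpp
// Hypothetical sketch: ULipsyncApiManager, TextToSpeech, and ETTSEngine are
// illustrative names, not the plugin's verified C++ API. The equivalent
// Blueprint flow is: Lipsync API Manager -> Text to Speech.
#include "Kismet/GameplayStatics.h"

void AMyTalkingActor::SpeakLine(const FString& Text)
{
    // Assumed accessor for the plugin's API manager.
    ULipsyncApiManager* Manager = ULipsyncApiManager::Get();

    // Specify the TTS engine (ElevenLabs or Azure) and the Voice ID.
    USoundWave* Speech = Manager->TextToSpeech(
        Text,
        ETTSEngine::ElevenLabs,   // assumed enum value
        TEXT("YOUR_VOICE_ID"));   // Voice ID from your TTS provider

    if (Speech)
    {
        UGameplayStatics::PlaySoundAtLocation(this, Speech, GetActorLocation());
    }
}
```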

Audio to lip sync

To create animation from audio at runtime, call the Lipsync API manager, then the Audio to Lipsync node. This node generates facial speech animation that you can use in your runtime scripts.

When generating an animation for an avatar, specify the correct avatar skeleton and the appropriate mapping; otherwise the animation may not play at all, or may play incorrectly.
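
Continuing the hypothetical sketch above; AudioToLipsync, ELipsyncMapping, and the AvatarSkeleton member are again illustrative names, not the plugin's verified API.

```cpp
// Hypothetical sketch: AudioToLipsync and ELipsyncMapping are illustrative
// names, not the plugin's verified C++ API. The equivalent Blueprint flow
// is: Lipsync API Manager -> Audio to Lipsync.
UAnimSequence* AMyTalkingActor::GenerateLipsync(USoundWave* Speech)
{
    ULipsyncApiManager* Manager = ULipsyncApiManager::Get(); // assumed accessor

    // The skeleton and mapping must match the avatar; otherwise the
    // animation may not play, or may play incorrectly.
    return Manager->AudioToLipsync(
        Speech,
        AvatarSkeleton,               // USkeleton* that matches your avatar
        ELipsyncMapping::MetaHuman);  // assumed mapping enum
}
```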

Manual

Text to speech

To create an audio file, right-click in an empty area of the Content Browser. In the context menu that appears, select Create Speech from Text.

In the window that appears, select the TTS Engine, choose a suitable voice, and enter the text you want to voice. Click Generate and wait for the *.wav file to appear in the Content Browser.

Audio to lip sync

To create an animation file, right-click on the sound file in the Content Browser. In the context menu that appears, select Create Lipsync Animation.

Then choose an emotion, an avatar skeleton, and a suitable mapping. If you are using a MetaHuman, select the MetaHuman mapping; if you are using an avatar with ARKit blendshapes, select the Custom mapping. Then click Generate and wait for the animation file to appear in the Content Browser.
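
Playing the generated animation back uses standard engine calls. A minimal sketch, assuming FaceMesh is the skeletal mesh component of your avatar's face and FaceAnim is the animation asset created above:

```cpp
// Standard Unreal API: play the generated lip sync animation on the
// avatar's face mesh. "FaceMesh" and "FaceAnim" are illustrative names.
#include "Components/SkeletalMeshComponent.h"
#include "Animation/AnimSequence.h"

void AMyTalkingActor::PlayLipsync(UAnimSequence* FaceAnim)
{
    if (FaceMesh && FaceAnim)
    {
        // Switches the mesh to single-node animation mode and plays once.
        FaceMesh->PlayAnimation(FaceAnim, /*bLooping=*/false);
    }
}
```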

Screenshots referenced on this page: the Text to Speech node, the Audio to Lipsync node, the Content Browser context menu, the TTS window, the context menu from the *.wav file, and the window for generating Lip Sync animation files.