Basic functionality

Runtime

Text to Speech

To create speech from text at runtime, get the Lipsync API manager, then call Text to Speech. This node creates a sound wave that you can use in runtime scripts.

Text to speech node

The plugin currently supports ElevenLabs and Azure. For correct operation, specify the TTS engine you need in the corresponding field, along with the Voice ID.
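For orientation, here is a minimal C++ sketch of the same runtime flow. The plugin is normally driven through Blueprint nodes, so ULipsyncApiManager, FTextToSpeechRequest, ETTSEngine, FOnSpeechGenerated, and AMyAvatarActor below are assumed names modeled on the node described above, not the plugin's confirmed API; check the plugin headers for the exact types and signatures.

```cpp
// Hypothetical runtime TTS sketch. ULipsyncApiManager, FTextToSpeechRequest,
// ETTSEngine, and FOnSpeechGenerated are assumed names modeled on the
// Blueprint node; consult the plugin headers for the real signatures.
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundWave.h"

void AMyAvatarActor::SpeakLine()
{
    // Acquire the plugin's manager (assumed accessor).
    ULipsyncApiManager* Manager = ULipsyncApiManager::Get(GetWorld());

    // Engine (ElevenLabs or Azure) and the Voice ID from your provider.
    FTextToSpeechRequest Request;
    Request.Engine  = ETTSEngine::ElevenLabs;
    Request.VoiceId = TEXT("YOUR_VOICE_ID");
    Request.Text    = TEXT("Hello! This line is synthesized at runtime.");

    // Generation is asynchronous: the sound wave arrives in a callback
    // rather than being returned directly.
    Manager->CreateSpeechFromText(Request,
        FOnSpeechGenerated::CreateLambda([this](USoundWave* SoundWave)
        {
            if (SoundWave)
            {
                UGameplayStatics::PlaySound2D(this, SoundWave);
            }
        }));
}
```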

Audio to Lip Sync

To create animation from audio at runtime, get the Lipsync API manager, then call Audio to Lipsync. This node creates facial speech animation that you can use in runtime scripts.

When generating an animation for an avatar, specify the correct avatar skeleton and set the matching mapping; otherwise the animation may not play, or may play incorrectly.

Audio to lip sync node
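The same flow can be sketched in C++. As with the TTS example above, ULipsyncApiManager, FAudioToLipsyncRequest, ELipsyncMapping, FOnLipsyncGenerated, and AMyAvatarActor are hypothetical names standing in for the plugin's actual API.

```cpp
// Hypothetical runtime Audio to Lipsync call; the request struct, mapping
// enum, and delegate are assumptions modeled on the Blueprint node.
#include "Animation/AnimSequence.h"
#include "Components/SkeletalMeshComponent.h"

void AMyAvatarActor::AnimateSpeech(USoundWave* SpeechWave)
{
    ULipsyncApiManager* Manager = ULipsyncApiManager::Get(GetWorld());
    USkeletalMeshComponent* Face = FindComponentByClass<USkeletalMeshComponent>();

    FAudioToLipsyncRequest Request;
    Request.SoundWave = SpeechWave;
    // The skeleton and mapping must match the avatar; a mismatch is the
    // usual reason the animation fails to play or plays incorrectly.
    Request.Skeleton  = Face->GetSkeletalMeshAsset()->GetSkeleton();
    Request.Mapping   = ELipsyncMapping::MetaHuman; // Custom for ARKit avatars

    Manager->CreateLipsyncAnimation(Request,
        FOnLipsyncGenerated::CreateLambda([Face](UAnimSequence* Animation)
        {
            if (Animation)
            {
                Face->PlayAnimation(Animation, /*bLooping=*/false);
            }
        }));
}
```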

Manual

Text to Speech

To create an audio file, right-click in an empty area of the Content Browser. In the context menu that appears, select Create Speech from Text.

Context menu in Content Browser

In the window that appears, select a TTS engine, choose a suitable voice, and enter the text you want to voice. Click Generate and wait for the *.wav file to appear in the Content Browser.

TTS window

Audio to Lip Sync

To create an animation file, right-click on the sound file in the Content Browser. In the context menu that appears, select Create Lipsync Animation.

Context menu from the *.wav file

Then choose an emotion, an avatar skeleton, and a suitable mapping. If you are using a MetaHuman, select the MetaHuman mapping; if you are using an avatar with ARKit blendshapes, select the Custom mapping. Then click Generate and wait for the animation file to appear in the Content Browser.

Window for generating Lip Sync animation files