Basic functionality
To create speech from text, call the Lipsync API Manager and then the Text to Speech node. This node generates a sound wave in runtime scripts.
Currently, the plugin supports the ElevenLabs and Azure TTS engines. For correct operation, specify the TTS engine you need in the corresponding field, along with the Voice ID.
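For orientation, here is a minimal C++ sketch of what this runtime call could look like. The manager class, request struct, and function names (ULipsyncAPIManager, FTTSRequest, TextToSpeech) are assumptions for illustration, not the plugin's actual identifiers; in Blueprints you would place the Text to Speech node directly.

```cpp
// Hypothetical sketch of the Text to Speech runtime call.
// ULipsyncAPIManager, FTTSRequest, ETTSEngine, and TextToSpeech are
// assumed names for illustration only.
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundWave.h"

void SpeakLine(UObject* WorldContext)
{
    ULipsyncAPIManager* Manager = ULipsyncAPIManager::Get(); // assumed accessor

    FTTSRequest Request;
    Request.Engine  = ETTSEngine::ElevenLabs; // or ETTSEngine::Azure
    Request.VoiceID = TEXT("your-voice-id");
    Request.Text    = TEXT("Hello! Nice to meet you.");

    // The node produces a sound wave that can be played like any other sound.
    if (USoundWave* Wave = Manager->TextToSpeech(Request))
    {
        UGameplayStatics::PlaySound2D(WorldContext, Wave);
    }
}
```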
To create animation from audio, call the Lipsync API Manager and then the Audio to Lipsync node. This node generates facial speech animation in runtime scripts.
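As a similarly hedged sketch, the runtime call might look like the following; ULipsyncAPIManager and AudioToLipsync are again assumed names, not the plugin's actual API.

```cpp
// Hypothetical sketch of the Audio to Lipsync runtime call.
// ULipsyncAPIManager and AudioToLipsync are assumed names for illustration.
#include "Animation/AnimSequence.h"
#include "Sound/SoundWave.h"

UAnimSequence* GenerateLipsync(USoundWave* SpeechWave)
{
    ULipsyncAPIManager* Manager = ULipsyncAPIManager::Get(); // assumed accessor

    // The node consumes a sound wave and returns a facial animation asset.
    return SpeechWave ? Manager->AudioToLipsync(SpeechWave) : nullptr;
}
```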
To create an audio file in the editor, right-click an empty area of the Content Browser. In the context menu that appears, select Create Speech from Text.
In the window that appears, select a TTS engine, choose a suitable voice, and enter the text you want voiced. Click Generate and wait for the *.wav file to appear in the Content Browser.
To create an animation file, right-click on the sound file in the Content Browser. In the context menu that appears, select Create Lipsync Animation.
Then choose an emotion, an avatar skeleton, and a suitable mapping: if you are using a MetaHuman, select the MetaHuman mapping; if you are using an avatar with ARKit blendshapes, select the Custom mapping. Then click Generate and wait for the animation file to appear.
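Once the sound wave and animation assets exist, they can be used like any other Unreal Engine assets. The sketch below plays them together on an avatar's face mesh using standard engine calls (UGameplayStatics::PlaySound2D and USkeletalMeshComponent::PlayAnimation); the function and variable names are illustrative.

```cpp
#include "Animation/AnimSequence.h"
#include "Components/SkeletalMeshComponent.h"
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundWave.h"

// Play generated speech audio and its lipsync animation together.
// FaceMesh is assumed to be the avatar's face skeletal mesh component;
// SpeechWave and LipsyncAnim are the assets created in the steps above.
void PlaySpeech(USkeletalMeshComponent* FaceMesh,
                USoundWave* SpeechWave,
                UAnimSequence* LipsyncAnim)
{
    if (FaceMesh && SpeechWave && LipsyncAnim)
    {
        UGameplayStatics::PlaySound2D(FaceMesh, SpeechWave);
        FaceMesh->PlayAnimation(LipsyncAnim, /*bLooping=*/false);
    }
}
```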