
TTS (text-to-speech)

Configure and generate the audio that will be used to create the animation in one of two ways (a scripting alternative for both input types is sketched after the steps):

  1. Text-Based Option:

    1. To generate audio from text, first create a text component in Blender under the "Scripts" tab.

    2. Then, select it from the “Text components” dropdown.

    3. Choose a voice engine from the “Voice engine” dropdown.

    4. Click the “Select voice” button to open a pop-up window where you can choose the language.

  2. Audio-Based Option:

    1. Click the folder icon and upload the audio file you want to use.

    2. Adjust the settings, then click the “Process audio” button to generate the audio track. Check the Status field for progress.

      Status    | Description                      | Source
      ----------|----------------------------------|---------------
      ___       | audio not uploaded               | audio or text
      loaded    | audio uploaded from a file       | audio
      generated | audio generated from the server  | text

    3. Once the track is generated, click “Play / Stop audio” to listen, and click “Save audio” to save the track.
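
Both inputs can also be prepared from Blender's Python console instead of the panel UI. The sketch below is a minimal example that only uses standard `bpy` datablock calls; the EasyVFX panel's own property names are not assumed, and the datablock name and audio path are placeholders.

```python
import bpy

# Text-based option: create a text datablock and fill it with the line to
# be spoken. Once it exists in the .blend file it can be selected from the
# "Text components" dropdown.
script = bpy.data.texts.new("LipSyncScript")
script.write("Hello, this line will be converted to speech.")

# Audio-based option: load an existing recording as a sound datablock,
# the scripted equivalent of picking a file with the folder icon.
sound = bpy.data.sounds.load("/path/to/recording.wav", check_existing=True)

print(script.name, sound.name)
```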

If the audio doesn’t play when you click the “Play / Stop audio” button, go to Edit > Preferences > System in Blender. Scroll to the Sound sub-panel at the bottom and change the “Audio Device” setting.
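
If opening the Preferences window is inconvenient, the same setting can be read and changed from the Python console. This is a small sketch; `'OPENAL'` is only an example identifier, and the valid choices depend on the platform and Blender build.

```python
import bpy

prefs = bpy.context.preferences.system
print("Current audio device:", prefs.audio_device)

# Switch the playback backend and persist the change. Valid identifiers
# match the entries of the "Audio Device" dropdown under
# Edit > Preferences > System; 'OPENAL' may not exist on every build.
prefs.audio_device = 'OPENAL'
bpy.ops.wm.save_userpref()
```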