Microsoft Text-to-Speech (TTS)
The microsoft text-to-speech integration uses the TTS engine of the Microsoft Speech Service.
Configuration
To enable text-to-speech with Microsoft, add the following lines to your configuration.yaml file.
After changing the configuration.yaml file, restart Home Assistant to apply the changes. The integration is now shown on the integrations page under Settings > Devices & services. Its entities are listed on the integration card itself and on the Entities tab.
# Example configuration.yaml entry
tts:
- platform: microsoft
api_key: YOUR_API_KEY
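Once configured, Home Assistant exposes this engine through a tts.microsoft_say service. A minimal action sketch (the media player entity ID is a placeholder):

```yaml
# Action sketch: speak a message on a media player.
# media_player.living_room is a placeholder entity ID.
service: tts.microsoft_say
data:
  entity_id: media_player.living_room
  message: "Hello from Home Assistant"
```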
Configuration Variables
api_key
Your API key for the Microsoft Speech Service. Required.

language
The language to use. Note that if you set the language to anything other than the default, you will need to specify a matching voice type as well. For the supported languages, check the list of available languages.

gender
The gender to use for the voice. Accepted values are Female and Male.

type
The voice type to use. Accepted values are listed as the service name mapping in the documentation. Default: JennyNeural.

rate
Change the rate of speaking, in percent. Example values: 25, 50.

volume
Change the volume of the output, in percent. Example values: -20, 70.

pitch
Change the pitch of the output. Example value: high.

contour
Change the contour of the output, in percentages. This overrides the pitch setting. See the W3 SSML specification. Example value: (0,0) (100,100).

region
The region of your API endpoint. See the documentation. Example value: eastus.

Not all Azure regions support high-quality neural voices. Use this overview to check the availability of neural voices per region. New users (any Azure Speech resource created after August 31st, 2021) can only use neural voices.

If you set the language to anything other than the default en-us, you will need to specify a matching voice type as well.
Full configuration example
A full configuration sample including optional variables:
# Example configuration.yaml entry
tts:
- platform: microsoft
api_key: YOUR_API_KEY
language: en-gb
gender: Male
type: RyanNeural
rate: 20
volume: -50
pitch: high
contour: (0, 0) (100, 100)
region: eastus
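The configured engine can also be used from an automation. A hedged sketch, assuming a media_player.living_room entity exists (the alias and entity ID are placeholders):

```yaml
# Automation sketch: announce sunset via the microsoft TTS engine.
automation:
  - alias: "Sunset announcement"
    trigger:
      - platform: sun
        event: sunset
    action:
      - service: tts.microsoft_say
        data:
          entity_id: media_player.living_room
          message: "The sun has set."
```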