TTS models
This document lists all text-to-speech models supported in sherpa-onnx.
Monolingual
The following table lists models by language.
Mixed-lingual
The following lists models that support multiple languages.
Chinese+English
This section lists text-to-speech models for Chinese+English.
kokoro-multi-lang-v1_0
Info about this model
This model is kokoro v1.0 and it is from https://huggingface.co/hexgrad/Kokoro-82M. It supports both Chinese and English.
Number of speakers | Sample rate (Hz) |
---|---|
53 | 24000 |
Meaning of speaker prefix
Prefix | Meaning | sid range | Number of speakers |
---|---|---|---|
af | American female | 0 - 10 | 11 |
am | American male | 11 - 19 | 9 |
bf | British female | 20 - 23 | 4 |
bm | British male | 24 - 27 | 4 |
ef | Spanish female | 28 | 1 |
em | Spanish male | 29 | 1 |
ff | French female | 30 | 1 |
hf | Hindi female | 31 - 32 | 2 |
hm | Hindi male | 33 - 34 | 2 |
if | Italian female | 35 | 1 |
im | Italian male | 36 | 1 |
jf | Japanese female | 37 - 40 | 4 |
jm | Japanese male | 41 | 1 |
pf | Brazilian Portuguese female | 42 | 1 |
pm | Brazilian Portuguese male | 43 - 44 | 2 |
zf | Chinese female | 45 - 48 | 4 |
zm | Chinese male | 49 - 52 | 4 |
speaker ID to speaker name (sid -> name)
The mapping from speaker ID (sid) to speaker name is given below:
0 - 3 | 0 -> af_alloy | 1 -> af_aoede | 2 -> af_bella | 3 -> af_heart |
4 - 7 | 4 -> af_jessica | 5 -> af_kore | 6 -> af_nicole | 7 -> af_nova |
8 - 11 | 8 -> af_river | 9 -> af_sarah | 10 -> af_sky | 11 -> am_adam |
12 - 15 | 12 -> am_echo | 13 -> am_eric | 14 -> am_fenrir | 15 -> am_liam |
16 - 19 | 16 -> am_michael | 17 -> am_onyx | 18 -> am_puck | 19 -> am_santa |
20 - 23 | 20 -> bf_alice | 21 -> bf_emma | 22 -> bf_isabella | 23 -> bf_lily |
24 - 27 | 24 -> bm_daniel | 25 -> bm_fable | 26 -> bm_george | 27 -> bm_lewis |
28 - 31 | 28 -> ef_dora | 29 -> em_alex | 30 -> ff_siwis | 31 -> hf_alpha |
32 - 35 | 32 -> hf_beta | 33 -> hm_omega | 34 -> hm_psi | 35 -> if_sara |
36 - 39 | 36 -> im_nicola | 37 -> jf_alpha | 38 -> jf_gongitsune | 39 -> jf_nezumi |
40 - 43 | 40 -> jf_tebukuro | 41 -> jm_kumo | 42 -> pf_dora | 43 -> pm_alex |
44 - 47 | 44 -> pm_santa | 45 -> zf_xiaobei | 46 -> zf_xiaoni | 47 -> zf_xiaoxiao |
48 - 51 | 48 -> zf_xiaoyi | 49 -> zm_yunjian | 50 -> zm_yunxi | 51 -> zm_yunxia |
52 | 52 -> zm_yunyang |
speaker name to speaker ID (name -> sid)
The mapping from speaker name to speaker ID (sid) is given below:
0 - 3 | af_alloy -> 0 | af_aoede -> 1 | af_bella -> 2 | af_heart -> 3 |
4 - 7 | af_jessica -> 4 | af_kore -> 5 | af_nicole -> 6 | af_nova -> 7 |
8 - 11 | af_river -> 8 | af_sarah -> 9 | af_sky -> 10 | am_adam -> 11 |
12 - 15 | am_echo -> 12 | am_eric -> 13 | am_fenrir -> 14 | am_liam -> 15 |
16 - 19 | am_michael -> 16 | am_onyx -> 17 | am_puck -> 18 | am_santa -> 19 |
20 - 23 | bf_alice -> 20 | bf_emma -> 21 | bf_isabella -> 22 | bf_lily -> 23 |
24 - 27 | bm_daniel -> 24 | bm_fable -> 25 | bm_george -> 26 | bm_lewis -> 27 |
28 - 31 | ef_dora -> 28 | em_alex -> 29 | ff_siwis -> 30 | hf_alpha -> 31 |
32 - 35 | hf_beta -> 32 | hm_omega -> 33 | hm_psi -> 34 | if_sara -> 35 |
36 - 39 | im_nicola -> 36 | jf_alpha -> 37 | jf_gongitsune -> 38 | jf_nezumi -> 39 |
40 - 43 | jf_tebukuro -> 40 | jm_kumo -> 41 | pf_dora -> 42 | pm_alex -> 43 |
44 - 47 | pm_santa -> 44 | zf_xiaobei -> 45 | zf_xiaoni -> 46 | zf_xiaoxiao -> 47 |
48 - 51 | zf_xiaoyi -> 48 | zm_yunjian -> 49 | zm_yunxi -> 50 | zm_yunxia -> 51 |
52 - 52 | zm_yunyang -> 52 |
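If you want to resolve speaker names to IDs programmatically rather than reading the tables above, the mapping can be kept as a small dictionary. Below is a minimal Python sketch; the dictionary entries are copied from the table above (only a few are shown) and the helper function is our own, not part of sherpa-onnx:
# A few entries of the name -> sid mapping for kokoro-multi-lang-v1_0,
# copied from the table above; extend it with the remaining speakers as needed.
NAME_TO_SID = {
    "af_heart": 3,
    "bf_alice": 20,
    "zf_xiaobei": 45,
    "zm_yunyang": 52,
}

def sid_for(name: str) -> int:
    # The returned value is what you pass as `sid` when generating audio.
    return NAME_TO_SID[name]

print(sid_for("zf_xiaobei"))  # prints 45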
Samples
For the following text:
This model supports both Chinese and English. 小米的核心价值观是什么?答案
是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习.
我在研究 machine learning。What do you think 中英文说的如何呢?
今天是 2025年6月18号.
sample audios for different speakers are listed below:
Speaker 0 - af_alloy
Speaker 1 - af_aoede
Speaker 2 - af_bella
Speaker 3 - af_heart
Speaker 4 - af_jessica
Speaker 5 - af_kore
Speaker 6 - af_nicole
Speaker 7 - af_nova
Speaker 8 - af_river
Speaker 9 - af_sarah
Speaker 10 - af_sky
Speaker 11 - am_adam
Speaker 12 - am_echo
Speaker 13 - am_eric
Speaker 14 - am_fenrir
Speaker 15 - am_liam
Speaker 16 - am_michael
Speaker 17 - am_onyx
Speaker 18 - am_puck
Speaker 19 - am_santa
Speaker 20 - bf_alice
Speaker 21 - bf_emma
Speaker 22 - bf_isabella
Speaker 23 - bf_lily
Speaker 24 - bm_daniel
Speaker 25 - bm_fable
Speaker 26 - bm_george
Speaker 27 - bm_lewis
Speaker 28 - ef_dora
Speaker 29 - em_alex
Speaker 30 - ff_siwis
Speaker 31 - hf_alpha
Speaker 32 - hf_beta
Speaker 33 - hm_omega
Speaker 34 - hm_psi
Speaker 35 - if_sara
Speaker 36 - im_nicola
Speaker 37 - jf_alpha
Speaker 38 - jf_gongitsune
Speaker 39 - jf_nezumi
Speaker 40 - jf_tebukuro
Speaker 41 - jm_kumo
Speaker 42 - pf_dora
Speaker 43 - pm_alex
Speaker 44 - pm_santa
Speaker 45 - zf_xiaobei
Speaker 46 - zf_xiaoni
Speaker 47 - zf_xiaoxiao
Speaker 48 - zf_xiaoyi
Speaker 49 - zm_yunjian
Speaker 50 - zm_yunxi
Speaker 51 - zm_yunxia
Speaker 52 - zm_yunyang
kokoro-multi-lang-v1_1
Info about this model
This model is kokoro v1.1-zh and it is from https://huggingface.co/hexgrad/Kokoro-82M-v1.1-zh. It supports both Chinese and English.
Number of speakers | Sample rate (Hz) |
---|---|
103 | 24000 |
Meaning of speaker prefix
Prefix | Meaning | sid range | Number of speakers |
---|---|---|---|
af | American female | 0 - 1 | 2 |
bf | British female | 2 | 1 |
zf | Chinese female | 3 - 57 | 55 |
zm | Chinese male | 58 - 102 | 45 |
speaker ID to speaker name (sid -> name)
The mapping from speaker ID (sid) to speaker name is given below:
0 - 3 | 0 -> af_maple | 1 -> af_sol | 2 -> bf_vale | 3 -> zf_001 |
4 - 7 | 4 -> zf_002 | 5 -> zf_003 | 6 -> zf_004 | 7 -> zf_005 |
8 - 11 | 8 -> zf_006 | 9 -> zf_007 | 10 -> zf_008 | 11 -> zf_017 |
12 - 15 | 12 -> zf_018 | 13 -> zf_019 | 14 -> zf_021 | 15 -> zf_022 |
16 - 19 | 16 -> zf_023 | 17 -> zf_024 | 18 -> zf_026 | 19 -> zf_027 |
20 - 23 | 20 -> zf_028 | 21 -> zf_032 | 22 -> zf_036 | 23 -> zf_038 |
24 - 27 | 24 -> zf_039 | 25 -> zf_040 | 26 -> zf_042 | 27 -> zf_043 |
28 - 31 | 28 -> zf_044 | 29 -> zf_046 | 30 -> zf_047 | 31 -> zf_048 |
32 - 35 | 32 -> zf_049 | 33 -> zf_051 | 34 -> zf_059 | 35 -> zf_060 |
36 - 39 | 36 -> zf_067 | 37 -> zf_070 | 38 -> zf_071 | 39 -> zf_072 |
40 - 43 | 40 -> zf_073 | 41 -> zf_074 | 42 -> zf_075 | 43 -> zf_076 |
44 - 47 | 44 -> zf_077 | 45 -> zf_078 | 46 -> zf_079 | 47 -> zf_083 |
48 - 51 | 48 -> zf_084 | 49 -> zf_085 | 50 -> zf_086 | 51 -> zf_087 |
52 - 55 | 52 -> zf_088 | 53 -> zf_090 | 54 -> zf_092 | 55 -> zf_093 |
56 - 59 | 56 -> zf_094 | 57 -> zf_099 | 58 -> zm_009 | 59 -> zm_010 |
60 - 63 | 60 -> zm_011 | 61 -> zm_012 | 62 -> zm_013 | 63 -> zm_014 |
64 - 67 | 64 -> zm_015 | 65 -> zm_016 | 66 -> zm_020 | 67 -> zm_025 |
68 - 71 | 68 -> zm_029 | 69 -> zm_030 | 70 -> zm_031 | 71 -> zm_033 |
72 - 75 | 72 -> zm_034 | 73 -> zm_035 | 74 -> zm_037 | 75 -> zm_041 |
76 - 79 | 76 -> zm_045 | 77 -> zm_050 | 78 -> zm_052 | 79 -> zm_053 |
80 - 83 | 80 -> zm_054 | 81 -> zm_055 | 82 -> zm_056 | 83 -> zm_057 |
84 - 87 | 84 -> zm_058 | 85 -> zm_061 | 86 -> zm_062 | 87 -> zm_063 |
88 - 91 | 88 -> zm_064 | 89 -> zm_065 | 90 -> zm_066 | 91 -> zm_068 |
92 - 95 | 92 -> zm_069 | 93 -> zm_080 | 94 -> zm_081 | 95 -> zm_082 |
96 - 99 | 96 -> zm_089 | 97 -> zm_091 | 98 -> zm_095 | 99 -> zm_096 |
100 - 102 | 100 -> zm_097 | 101 -> zm_098 | 102 -> zm_100 |
speaker name to speaker ID (name -> sid)
The mapping from speaker name to speaker ID (sid) is given below:
0 - 3 | af_maple -> 0 | af_sol -> 1 | bf_vale -> 2 | zf_001 -> 3 |
4 - 7 | zf_002 -> 4 | zf_003 -> 5 | zf_004 -> 6 | zf_005 -> 7 |
8 - 11 | zf_006 -> 8 | zf_007 -> 9 | zf_008 -> 10 | zf_017 -> 11 |
12 - 15 | zf_018 -> 12 | zf_019 -> 13 | zf_021 -> 14 | zf_022 -> 15 |
16 - 19 | zf_023 -> 16 | zf_024 -> 17 | zf_026 -> 18 | zf_027 -> 19 |
20 - 23 | zf_028 -> 20 | zf_032 -> 21 | zf_036 -> 22 | zf_038 -> 23 |
24 - 27 | zf_039 -> 24 | zf_040 -> 25 | zf_042 -> 26 | zf_043 -> 27 |
28 - 31 | zf_044 -> 28 | zf_046 -> 29 | zf_047 -> 30 | zf_048 -> 31 |
32 - 35 | zf_049 -> 32 | zf_051 -> 33 | zf_059 -> 34 | zf_060 -> 35 |
36 - 39 | zf_067 -> 36 | zf_070 -> 37 | zf_071 -> 38 | zf_072 -> 39 |
40 - 43 | zf_073 -> 40 | zf_074 -> 41 | zf_075 -> 42 | zf_076 -> 43 |
44 - 47 | zf_077 -> 44 | zf_078 -> 45 | zf_079 -> 46 | zf_083 -> 47 |
48 - 51 | zf_084 -> 48 | zf_085 -> 49 | zf_086 -> 50 | zf_087 -> 51 |
52 - 55 | zf_088 -> 52 | zf_090 -> 53 | zf_092 -> 54 | zf_093 -> 55 |
56 - 59 | zf_094 -> 56 | zf_099 -> 57 | zm_009 -> 58 | zm_010 -> 59 |
60 - 63 | zm_011 -> 60 | zm_012 -> 61 | zm_013 -> 62 | zm_014 -> 63 |
64 - 67 | zm_015 -> 64 | zm_016 -> 65 | zm_020 -> 66 | zm_025 -> 67 |
68 - 71 | zm_029 -> 68 | zm_030 -> 69 | zm_031 -> 70 | zm_033 -> 71 |
72 - 75 | zm_034 -> 72 | zm_035 -> 73 | zm_037 -> 74 | zm_041 -> 75 |
76 - 79 | zm_045 -> 76 | zm_050 -> 77 | zm_052 -> 78 | zm_053 -> 79 |
80 - 83 | zm_054 -> 80 | zm_055 -> 81 | zm_056 -> 82 | zm_057 -> 83 |
84 - 87 | zm_058 -> 84 | zm_061 -> 85 | zm_062 -> 86 | zm_063 -> 87 |
88 - 91 | zm_064 -> 88 | zm_065 -> 89 | zm_066 -> 90 | zm_068 -> 91 |
92 - 95 | zm_069 -> 92 | zm_080 -> 93 | zm_081 -> 94 | zm_082 -> 95 |
96 - 99 | zm_089 -> 96 | zm_091 -> 97 | zm_095 -> 98 | zm_096 -> 99 |
100 - 102 | zm_097 -> 100 | zm_098 -> 101 | zm_100 -> 102 |
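Because the speaker prefix encodes language and gender, you can also pick a voice by prefix instead of by name. A short Python sketch; the ID ranges are taken from the prefix table above and the helper is our own, not part of sherpa-onnx:
# sid ranges per prefix for kokoro-multi-lang-v1_1, from the table above.
PREFIX_RANGES = {
    "af": range(0, 2),     # American female
    "bf": range(2, 3),     # British female
    "zf": range(3, 58),    # Chinese female
    "zm": range(58, 103),  # Chinese male
}

def sids_for_prefix(prefix: str):
    # All speaker IDs whose names start with the given prefix.
    return list(PREFIX_RANGES[prefix])

print(sids_for_prefix("zm")[:3])  # [58, 59, 60], i.e. zm_009, zm_010, zm_011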
Samples
For the following text:
This model supports both Chinese and English. 小米的核心价值观是什么?答案
是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习.
我在研究 machine learning。What do you think 中英文说的如何呢?
今天是 2025年6月18号.
sample audios for different speakers are listed below:
Speaker 0 - af_maple
Speaker 1 - af_sol
Speaker 2 - bf_vale
Speaker 3 - zf_001
Speaker 4 - zf_002
Speaker 5 - zf_003
Speaker 6 - zf_004
Speaker 7 - zf_005
Speaker 8 - zf_006
Speaker 9 - zf_007
Speaker 10 - zf_008
Speaker 11 - zf_017
Speaker 12 - zf_018
Speaker 13 - zf_019
Speaker 14 - zf_021
Speaker 15 - zf_022
Speaker 16 - zf_023
Speaker 17 - zf_024
Speaker 18 - zf_026
Speaker 19 - zf_027
Speaker 20 - zf_028
Speaker 21 - zf_032
Speaker 22 - zf_036
Speaker 23 - zf_038
Speaker 24 - zf_039
Speaker 25 - zf_040
Speaker 26 - zf_042
Speaker 27 - zf_043
Speaker 28 - zf_044
Speaker 29 - zf_046
Speaker 30 - zf_047
Speaker 31 - zf_048
Speaker 32 - zf_049
Speaker 33 - zf_051
Speaker 34 - zf_059
Speaker 35 - zf_060
Speaker 36 - zf_067
Speaker 37 - zf_070
Speaker 38 - zf_071
Speaker 39 - zf_072
Speaker 40 - zf_073
Speaker 41 - zf_074
Speaker 42 - zf_075
Speaker 43 - zf_076
Speaker 44 - zf_077
Speaker 45 - zf_078
Speaker 46 - zf_079
Speaker 47 - zf_083
Speaker 48 - zf_084
Speaker 49 - zf_085
Speaker 50 - zf_086
Speaker 51 - zf_087
Speaker 52 - zf_088
Speaker 53 - zf_090
Speaker 54 - zf_092
Speaker 55 - zf_093
Speaker 56 - zf_094
Speaker 57 - zf_099
Speaker 58 - zm_009
Speaker 59 - zm_010
Speaker 60 - zm_011
Speaker 61 - zm_012
Speaker 62 - zm_013
Speaker 63 - zm_014
Speaker 64 - zm_015
Speaker 65 - zm_016
Speaker 66 - zm_020
Speaker 67 - zm_025
Speaker 68 - zm_029
Speaker 69 - zm_030
Speaker 70 - zm_031
Speaker 71 - zm_033
Speaker 72 - zm_034
Speaker 73 - zm_035
Speaker 74 - zm_037
Speaker 75 - zm_041
Speaker 76 - zm_045
Speaker 77 - zm_050
Speaker 78 - zm_052
Speaker 79 - zm_053
Speaker 80 - zm_054
Speaker 81 - zm_055
Speaker 82 - zm_056
Speaker 83 - zm_057
Speaker 84 - zm_058
Speaker 85 - zm_061
Speaker 86 - zm_062
Speaker 87 - zm_063
Speaker 88 - zm_064
Speaker 89 - zm_065
Speaker 90 - zm_066
Speaker 91 - zm_068
Speaker 92 - zm_069
Speaker 93 - zm_080
Speaker 94 - zm_081
Speaker 95 - zm_082
Speaker 96 - zm_089
Speaker 97 - zm_091
Speaker 98 - zm_095
Speaker 99 - zm_096
Speaker 100 - zm_097
Speaker 101 - zm_098
Speaker 102 - zm_100
Arabic
This section lists text-to-speech models for Arabic.
vits-piper-ar_JO-kareem-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ar/ar_JO/kareem/low
Number of speakers | Sample rate (Hz) |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
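Both the C and Python examples below assume that the model archive has been downloaded and extracted in the current directory. A minimal Python sketch for doing that is given here; the URL is a hypothetical placeholder, so replace it with the actual address from the model download section above:
import tarfile
import urllib.request

# Hypothetical placeholder: use the real address from the
# "Model download address" section above.
url = "https://example.com/vits-piper-ar_JO-kareem-low.tar.bz2"
archive = "vits-piper-ar_JO-kareem-low.tar.bz2"

urllib.request.urlretrieve(url, archive)   # download the archive
with tarfile.open(archive, "r:bz2") as f:  # assuming a .tar.bz2 archive, as used by sherpa-onnx model releases
    f.extractall(".")                      # creates the vits-piper-ar_JO-kareem-low/ directory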
C API
You can use the following code to play with vits-piper-ar_JO-kareem-low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ar_JO-kareem-low/ar_JO-kareem-low.onnx";
config.model.vits.tokens = "vits-piper-ar_JO-kareem-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-ar_JO-kareem-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "كيف حالك اليوم؟";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared. Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and you have downloaded the model from the download address above. You can use the following code to play with vits-piper-ar_JO-kareem-low.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-ar_JO-kareem-low/ar_JO-kareem-low.onnx",
            lexicon="",
            data_dir="vits-piper-ar_JO-kareem-low/espeak-ng-data",
            tokens="vits-piper-ar_JO-kareem-low/tokens.txt",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="كيف حالك اليوم؟",
                     sid=0,
                     speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
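soundfile can also write an uncompressed WAV file instead of an MP3, and the duration of the generated audio is simply the number of samples divided by the sample rate. A small follow-up to the example above:
# Write 16-bit PCM WAV instead of MP3 (same samples, same sample rate).
sf.write("test.wav", audio.samples, samplerate=audio.sample_rate, subtype="PCM_16")

# Duration of the generated audio in seconds.
print(len(audio.samples) / audio.sample_rate)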
Samples
For the following text:
كيف حالك اليوم؟
sample audios for different speakers are listed below:
Speaker 0
vits-piper-ar_JO-kareem-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ar/ar_JO/kareem/medium
Number of speakers | Sample rate (Hz) |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ar_JO-kareem-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ar_JO-kareem-medium/ar_JO-kareem-medium.onnx";
config.model.vits.tokens = "vits-piper-ar_JO-kareem-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-ar_JO-kareem-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "كيف حالك اليوم؟";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared. Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and you have downloaded the model from the download address above. You can use the following code to play with vits-piper-ar_JO-kareem-medium.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-ar_JO-kareem-medium/ar_JO-kareem-medium.onnx",
            lexicon="",
            data_dir="vits-piper-ar_JO-kareem-medium/espeak-ng-data",
            tokens="vits-piper-ar_JO-kareem-medium/tokens.txt",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="كيف حالك اليوم؟",
                     sid=0,
                     speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
كيف حالك اليوم؟
sample audios for different speakers are listed below:
Speaker 0
Catalan
This section lists text-to-speech models for Catalan.
vits-piper-ca_ES-upc_ona-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ca/ca_ES/upc_ona/medium
Number of speakers | Sample rate (Hz) |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ca_ES-upc_ona-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ca_ES-upc_ona-medium/ca_ES-upc_ona-medium.onnx";
config.model.vits.tokens = "vits-piper-ca_ES-upc_ona-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-ca_ES-upc_ona-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Si vols estar ben servit, fes-te tu mateix el llit";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared. Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and you have downloaded the model from the download address above. You can use the following code to play with vits-piper-ca_ES-upc_ona-medium.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-ca_ES-upc_ona-medium/ca_ES-upc_ona-medium.onnx",
            lexicon="",
            data_dir="vits-piper-ca_ES-upc_ona-medium/espeak-ng-data",
            tokens="vits-piper-ca_ES-upc_ona-medium/tokens.txt",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Si vols estar ben servit, fes-te tu mateix el llit",
                     sid=0,
                     speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Si vols estar ben servit, fes-te tu mateix el llit
sample audios for different speakers are listed below:
Speaker 0
vits-piper-ca_ES-upc_ona-x_low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ca/ca_ES/upc_ona/x_low
Number of speakers | Sample rate (Hz) |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ca_ES-upc_ona-x_low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ca_ES-upc_ona-x_low/ca_ES-upc_ona-x_low.onnx";
config.model.vits.tokens = "vits-piper-ca_ES-upc_ona-x_low/tokens.txt";
config.model.vits.data_dir = "vits-piper-ca_ES-upc_ona-x_low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Si vols estar ben servit, fes-te tu mateix el llit";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared. Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and you have downloaded the model from the download address above. You can use the following code to play with vits-piper-ca_ES-upc_ona-x_low.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-ca_ES-upc_ona-x_low/ca_ES-upc_ona-x_low.onnx",
            lexicon="",
            data_dir="vits-piper-ca_ES-upc_ona-x_low/espeak-ng-data",
            tokens="vits-piper-ca_ES-upc_ona-x_low/tokens.txt",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Si vols estar ben servit, fes-te tu mateix el llit",
                     sid=0,
                     speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Si vols estar ben servit, fes-te tu mateix el llit
sample audios for different speakers are listed below:
Speaker 0
vits-piper-ca_ES-upc_pau-x_low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ca/ca_ES/upc_pau/x_low
Number of speakers | Sample rate (Hz) |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ca_ES-upc_pau-x_low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ca_ES-upc_pau-x_low/ca_ES-upc_pau-x_low.onnx";
config.model.vits.tokens = "vits-piper-ca_ES-upc_pau-x_low/tokens.txt";
config.model.vits.data_dir = "vits-piper-ca_ES-upc_pau-x_low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Si vols estar ben servit, fes-te tu mateix el llit";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared. Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and you have downloaded the model from the download address above. You can use the following code to play with vits-piper-ca_ES-upc_pau-x_low.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-ca_ES-upc_pau-x_low/ca_ES-upc_pau-x_low.onnx",
            lexicon="",
            data_dir="vits-piper-ca_ES-upc_pau-x_low/espeak-ng-data",
            tokens="vits-piper-ca_ES-upc_pau-x_low/tokens.txt",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Si vols estar ben servit, fes-te tu mateix el llit",
                     sid=0,
                     speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Si vols estar ben servit, fes-te tu mateix el llit
sample audios for different speakers are listed below:
Speaker 0
Chinese
This section lists text-to-speech models for Chinese.
matcha-icefall-zh-baker
Info about this model
This model is trained using the code from https://github.com/k2-fsa/icefall/tree/master/egs/baker_zh/TTS/matcha
It supports only Chinese.
Number of speakers | Sample rate (Hz) |
---|---|
1 | 22050 |
Samples
For the following text:
某某银行的副行长和一些行政领导表示,他们去过长江和长白山;
经济不断增长。2024年12月31号,拨打110或者18920240511。123456块钱。
当夜幕降临,星光点点,伴随着微风拂面,我在静谧中感受着时光的流转,
思念如涟漪荡漾,梦境如画卷展开,我与自然融为一体,
沉静在这片宁静的美丽之中,感受着生命的奇迹与温柔.
sample audios for different speakers are listed below:
Speaker 0
Czech
This section lists text-to-speech models for Czech.
vits-piper-cs_CZ-jirka-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/cs/cs_CZ/jirka/low
Number of speakers | Sample rate (Hz) |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-cs_CZ-jirka-low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-cs_CZ-jirka-low/cs_CZ-jirka-low.onnx";
config.model.vits.tokens = "vits-piper-cs_CZ-jirka-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-cs_CZ-jirka-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Co můžeš udělat dnes, neodkládej na zítřek. ";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared. Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and you have downloaded the model from the download address above. You can use the following code to play with vits-piper-cs_CZ-jirka-low.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-cs_CZ-jirka-low/cs_CZ-jirka-low.onnx",
            lexicon="",
            data_dir="vits-piper-cs_CZ-jirka-low/espeak-ng-data",
            tokens="vits-piper-cs_CZ-jirka-low/tokens.txt",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Co můžeš udělat dnes, neodkládej na zítřek. ",
                     sid=0,
                     speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Co můžeš udělat dnes, neodkládej na zítřek.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-cs_CZ-jirka-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/cs/cs_CZ/jirka/medium
Number of speakers | Sample rate (Hz) |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-cs_CZ-jirka-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-cs_CZ-jirka-medium/cs_CZ-jirka-medium.onnx";
config.model.vits.tokens = "vits-piper-cs_CZ-jirka-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-cs_CZ-jirka-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Co můžeš udělat dnes, neodkládej na zítřek. ";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared. Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and you have downloaded the model from the download address above. You can use the following code to play with vits-piper-cs_CZ-jirka-medium.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-cs_CZ-jirka-medium/cs_CZ-jirka-medium.onnx",
            lexicon="",
            data_dir="vits-piper-cs_CZ-jirka-medium/espeak-ng-data",
            tokens="vits-piper-cs_CZ-jirka-medium/tokens.txt",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Co můžeš udělat dnes, neodkládej na zítřek. ",
                     sid=0,
                     speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Co můžeš udělat dnes, neodkládej na zítřek.
sample audios for different speakers are listed below:
Speaker 0
Danish
This section lists text-to-speech models for Danish.
vits-piper-da_DK-talesyntese-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/da/da_DK/talesyntese/medium
Number of speakers | Sample rate (Hz) |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-da_DK-talesyntese-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-da_DK-talesyntese-medium/da_DK-talesyntese-medium.onnx";
config.model.vits.tokens = "vits-piper-da_DK-talesyntese-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-da_DK-talesyntese-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Hvis du går langsomt, men aldrig stopper, når du ender frem til dit mål.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared. Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and you have downloaded the model from the download address above. You can use the following code to play with vits-piper-da_DK-talesyntese-medium.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-da_DK-talesyntese-medium/da_DK-talesyntese-medium.onnx",
            lexicon="",
            data_dir="vits-piper-da_DK-talesyntese-medium/espeak-ng-data",
            tokens="vits-piper-da_DK-talesyntese-medium/tokens.txt",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Hvis du går langsomt, men aldrig stopper, når du ender frem til dit mål.",
                     sid=0,
                     speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Hvis du går langsomt, men aldrig stopper, når du ender frem til dit mål.
sample audios for different speakers are listed below:
Speaker 0
Dutch
This section lists text-to-speech models for Dutch.
- vits-piper-nl_BE-nathalie-medium
- vits-piper-nl_BE-nathalie-x_low
- vits-piper-nl_NL-pim-medium
- vits-piper-nl_NL-ronnie-medium
vits-piper-nl_BE-nathalie-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/nl/nl_BE/nathalie/medium
Number of speakers | Sample rate (Hz) |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-nl_BE-nathalie-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-nl_BE-nathalie-medium/nl_BE-nathalie-medium.onnx";
config.model.vits.tokens = "vits-piper-nl_BE-nathalie-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-nl_BE-nathalie-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "God schiep het water, maar de Nederlander schiep de dijk";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared. Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and you have downloaded the model from the download address above. You can use the following code to play with vits-piper-nl_BE-nathalie-medium.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-nl_BE-nathalie-medium/nl_BE-nathalie-medium.onnx",
            lexicon="",
            data_dir="vits-piper-nl_BE-nathalie-medium/espeak-ng-data",
            tokens="vits-piper-nl_BE-nathalie-medium/tokens.txt",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="God schiep het water, maar de Nederlander schiep de dijk",
                     sid=0,
                     speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
God schiep het water, maar de Nederlander schiep de dijk
sample audios for different speakers are listed below:
Speaker 0
vits-piper-nl_BE-nathalie-x_low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/nl/nl_BE/nathalie/x_low
Number of speakers | Sample rate (Hz) |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-nl_BE-nathalie-x_low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-nl_BE-nathalie-x_low/nl_BE-nathalie-x_low.onnx";
config.model.vits.tokens = "vits-piper-nl_BE-nathalie-x_low/tokens.txt";
config.model.vits.data_dir = "vits-piper-nl_BE-nathalie-x_low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "God schiep het water, maar de Nederlander schiep de dijk";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared. Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and you have downloaded the model from the download address above. You can use the following code to play with vits-piper-nl_BE-nathalie-x_low.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-nl_BE-nathalie-x_low/nl_BE-nathalie-x_low.onnx",
            lexicon="",
            data_dir="vits-piper-nl_BE-nathalie-x_low/espeak-ng-data",
            tokens="vits-piper-nl_BE-nathalie-x_low/tokens.txt",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="God schiep het water, maar de Nederlander schiep de dijk",
                     sid=0,
                     speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
God schiep het water, maar de Nederlander schiep de dijk
sample audios for different speakers are listed below:
Speaker 0
vits-piper-nl_NL-pim-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/nl/nl_NL/pim/medium
Number of speakers | Sample rate (Hz) |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-nl_NL-pim-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-nl_NL-pim-medium/nl_NL-pim-medium.onnx";
config.model.vits.tokens = "vits-piper-nl_NL-pim-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-nl_NL-pim-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "God schiep het water, maar de Nederlander schiep de dijk";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared. Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and you have downloaded the model from the download address above. You can use the following code to play with vits-piper-nl_NL-pim-medium.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-nl_NL-pim-medium/nl_NL-pim-medium.onnx",
            lexicon="",
            data_dir="vits-piper-nl_NL-pim-medium/espeak-ng-data",
            tokens="vits-piper-nl_NL-pim-medium/tokens.txt",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="God schiep het water, maar de Nederlander schiep de dijk",
                     sid=0,
                     speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
God schiep het water, maar de Nederlander schiep de dijk
sample audios for different speakers are listed below:
Speaker 0
vits-piper-nl_NL-ronnie-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/nl/nl_NL/ronnie/medium
Number of speakers | Sample rate (Hz) |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-nl_NL-ronnie-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-nl_NL-ronnie-medium/nl_NL-ronnie-medium.onnx";
config.model.vits.tokens = "vits-piper-nl_NL-ronnie-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-nl_NL-ronnie-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "God schiep het water, maar de Nederlander schiep de dijk";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model.
You can use the following code to play with vits-piper-nl_NL-ronnie-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-nl_NL-ronnie-medium/nl_NL-ronnie-medium.onnx",
lexicon="",
data_dir="vits-piper-nl_NL-ronnie-medium/espeak-ng-data",
tokens="vits-piper-nl_NL-ronnie-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="God schiep het water, maar de Nederlander schiep de dijk",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
God schiep het water, maar de Nederlander schiep de dijk
sample audios for different speakers are listed below:
Speaker 0
English
This section lists text to speech models for English.
- kokoro-en-v0_19
- vits-piper-en_GB-alan-low
- vits-piper-en_GB-alan-medium
- vits-piper-en_GB-alba-medium
- vits-piper-en_GB-aru-medium
- vits-piper-en_GB-cori-high
- vits-piper-en_GB-cori-medium
- vits-piper-en_GB-jenny_dioco-medium
- vits-piper-en_GB-northern_english_male-medium
- vits-piper-en_GB-semaine-medium
- vits-piper-en_GB-southern_english_female-low
- vits-piper-en_GB-southern_english_female-medium
- vits-piper-en_GB-southern_english_male-medium
- vits-piper-en_GB-vctk-medium
- vits-piper-en_US-amy-low
- vits-piper-en_US-amy-medium
- vits-piper-en_US-arctic-medium
- vits-piper-en_US-bryce-medium
- vits-piper-en_US-danny-low
- vits-piper-en_US-glados-high
- vits-piper-en_US-hfc_female-medium
- vits-piper-en_US-hfc_male-medium
- vits-piper-en_US-joe-medium
- vits-piper-en_US-john-medium
- vits-piper-en_US-kathleen-low
- vits-piper-en_US-kristin-medium
- vits-piper-en_US-kusal-medium
- vits-piper-en_US-l2arctic-medium
- vits-piper-en_US-lessac-high
- vits-piper-en_US-lessac-low
- vits-piper-en_US-lessac-medium
- vits-piper-en_US-libritts-high
- vits-piper-en_US-libritts_r-medium
- vits-piper-en_US-ljspeech-high
- vits-piper-en_US-ljspeech-medium
- vits-piper-en_US-norman-medium
- vits-piper-en_US-reza_ibrahim-medium
- vits-piper-en_US-ryan-high
- vits-piper-en_US-ryan-low
- vits-piper-en_US-ryan-medium
- vits-piper-en_US-sam-medium
kokoro-en-v0_19
Info about this model
This model is kokoro v0.19 and it is from https://huggingface.co/hexgrad/kLegacy
It supports only English.
Number of speakers | Sample rate |
---|---|
11 | 24000 |
Meaning of speaker prefix
Prefix | Meaning | sid range | Number of speakers |
---|---|---|---|
af | American female | 0 - 4 | 5 |
am | American male | 5 - 6 | 2 |
bf | British female | 7 - 8 | 2 |
bm | British male | 9 - 10 | 2 |
speaker ID to speaker name (sid -> name)
The mapping from speaker ID (sid) to speaker name is given below:
0 - 3 | 0 -> af | 1 -> af_bella | 2 -> af_nicole | 3 -> af_sarah |
4 - 7 | 4 -> af_sky | 5 -> am_adam | 6 -> am_michael | 7 -> bf_emma |
8 - 10 | 8 -> bf_isabella | 9 -> bm_george | 10 -> bm_lewis |
speaker name to speaker ID (name -> sid)
The mapping from speaker name to speaker ID (sid) is given below:
0 - 3 | af -> 0 | af_bella -> 1 | af_nicole -> 2 | af_sarah -> 3 |
4 - 7 | af_sky -> 4 | am_adam -> 5 | am_michael -> 6 | bf_emma -> 7 |
8 - 10 | bf_isabella -> 8 | bm_george -> 9 | bm_lewis -> 10 |
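If you select speakers programmatically, it can help to keep this mapping in code. The snippet below is only an illustrative sketch transcribed from the table above; the dictionary and function names are made up for this example and are not part of sherpa-onnx:
# Hypothetical helper: speaker name -> speaker ID (sid) for kokoro-en-v0_19,
# transcribed from the table above.
KOKORO_EN_V0_19_SID = {
    "af": 0, "af_bella": 1, "af_nicole": 2, "af_sarah": 3,
    "af_sky": 4, "am_adam": 5, "am_michael": 6, "bf_emma": 7,
    "bf_isabella": 8, "bm_george": 9, "bm_lewis": 10,
}
def speaker_id(name: str) -> int:
    """Return the sid to pass to tts.generate(), e.g. 'bf_emma' -> 7."""
    return KOKORO_EN_V0_19_SID[name]
print(speaker_id("am_michael"))  # prints 6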
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0 - af
Speaker 1 - af_bella
Speaker 2 - af_nicole
Speaker 3 - af_sarah
Speaker 4 - af_sky
Speaker 5 - am_adam
Speaker 6 - am_michael
Speaker 7 - bf_emma
Speaker 8 - bf_isabella
Speaker 9 - bm_george
Speaker 10 - bm_lewis
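Unlike the Piper models below, no API snippet is shown here for kokoro-en-v0_19. The following Python sketch illustrates how a call could look; the file names inside the archive (model.onnx, voices.bin, tokens.txt, espeak-ng-data) and the OfflineTtsKokoroModelConfig field names are assumptions rather than facts taken from this page, so please verify them against the official sherpa-onnx Python examples before use:
import sherpa_onnx
import soundfile as sf
# Sketch only: the paths and the Kokoro config fields below are assumptions.
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        kokoro=sherpa_onnx.OfflineTtsKokoroModelConfig(
            model="kokoro-en-v0_19/model.onnx",
            voices="kokoro-en-v0_19/voices.bin",
            tokens="kokoro-en-v0_19/tokens.txt",
            data_dir="kokoro-en-v0_19/espeak-ng-data",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(
    text="Friends fell out often because life was changing so fast.",
    sid=0,  # 0 -> af; see the sid -> name table above
    speed=1.0,
)
sf.write("test.wav", audio.samples, samplerate=audio.sample_rate)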
vits-piper-en_GB-alan-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_GB/alan/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_GB-alan-low.tar.bz2
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-alan-low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-alan-low/en_GB-alan-low.onnx";
config.model.vits.tokens = "vits-piper-en_GB-alan-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-alan-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from
https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_GB-alan-low.tar.bz2
You can use the following code to play with vits-piper-en_GB-alan-low:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-alan-low/en_GB-alan-low.onnx",
lexicon="",
data_dir="vits-piper-en_GB-alan-low/espeak-ng-data",
tokens="vits-piper-en_GB-alan-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_GB-alan-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_GB/alan/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-alan-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-alan-medium/en_GB-alan-medium.onnx";
config.model.vits.tokens = "vits-piper-en_GB-alan-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-alan-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model.
You can use the following code to play with vits-piper-en_GB-alan-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-alan-medium/en_GB-alan-medium.onnx",
lexicon="",
data_dir="vits-piper-en_GB-alan-medium/espeak-ng-data",
tokens="vits-piper-en_GB-alan-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_GB-alba-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_GB/alba/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-alba-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-alba-medium/en_GB-alba-medium.onnx";
config.model.vits.tokens = "vits-piper-en_GB-alba-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-alba-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model.
You can use the following code to play with vits-piper-en_GB-alba-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-alba-medium/en_GB-alba-medium.onnx",
lexicon="",
data_dir="vits-piper-en_GB-alba-medium/espeak-ng-data",
tokens="vits-piper-en_GB-alba-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_GB-aru-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_GB/aru/medium
Number of speakers | Sample rate |
---|---|
12 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-aru-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-aru-medium/en_GB-aru-medium.onnx";
config.model.vits.tokens = "vits-piper-en_GB-aru-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-aru-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model.
You can use the following code to play with vits-piper-en_GB-aru-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-aru-medium/en_GB-aru-medium.onnx",
lexicon="",
data_dir="vits-piper-en_GB-aru-medium/espeak-ng-data",
tokens="vits-piper-en_GB-aru-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
Speaker 1
Speaker 2
Speaker 3
Speaker 4
Speaker 5
Speaker 6
Speaker 7
Speaker 8
Speaker 9
Speaker 10
Speaker 11
vits-piper-en_GB-cori-high
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_GB/cori/high
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-cori-high with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-cori-high/en_GB-cori-high.onnx";
config.model.vits.tokens = "vits-piper-en_GB-cori-high/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-cori-high/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model.
You can use the following code to play with vits-piper-en_GB-cori-high:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-cori-high/en_GB-cori-high.onnx",
lexicon="",
data_dir="vits-piper-en_GB-cori-high/espeak-ng-data",
tokens="vits-piper-en_GB-cori-high/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_GB-cori-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_GB/cori/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-cori-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-cori-medium/en_GB-cori-medium.onnx";
config.model.vits.tokens = "vits-piper-en_GB-cori-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-cori-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model.
You can use the following code to play with vits-piper-en_GB-cori-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-cori-medium/en_GB-cori-medium.onnx",
lexicon="",
data_dir="vits-piper-en_GB-cori-medium/espeak-ng-data",
tokens="vits-piper-en_GB-cori-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_GB-jenny_dioco-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_GB/jenny_dioco/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-jenny_dioco-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-jenny_dioco-medium/en_GB-jenny_dioco-medium.onnx";
config.model.vits.tokens = "vits-piper-en_GB-jenny_dioco-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-jenny_dioco-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model.
You can use the following code to play with vits-piper-en_GB-jenny_dioco-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-jenny_dioco-medium/en_GB-jenny_dioco-medium.onnx",
lexicon="",
data_dir="vits-piper-en_GB-jenny_dioco-medium/espeak-ng-data",
tokens="vits-piper-en_GB-jenny_dioco-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_GB-northern_english_male-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_GB/northern_english_male/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-northern_english_male-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-northern_english_male-medium/en_GB-northern_english_male-medium.onnx";
config.model.vits.tokens = "vits-piper-en_GB-northern_english_male-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-northern_english_male-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model.
You can use the following code to play with vits-piper-en_GB-northern_english_male-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-northern_english_male-medium/en_GB-northern_english_male-medium.onnx",
lexicon="",
data_dir="vits-piper-en_GB-northern_english_male-medium/espeak-ng-data",
tokens="vits-piper-en_GB-northern_english_male-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_GB-semaine-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_GB/semaine/medium
Number of speakers | Sample rate |
---|---|
4 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-semaine-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-semaine-medium/en_GB-semaine-medium.onnx";
config.model.vits.tokens = "vits-piper-en_GB-semaine-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-semaine-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model.
You can use the following code to play with vits-piper-en_GB-semaine-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-semaine-medium/en_GB-semaine-medium.onnx",
lexicon="",
data_dir="vits-piper-en_GB-semaine-medium/espeak-ng-data",
tokens="vits-piper-en_GB-semaine-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
Speaker 1
Speaker 2
Speaker 3
vits-piper-en_GB-southern_english_female-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_GB/southern_english_female/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-southern_english_female-low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-southern_english_female-low/en_GB-southern_english_female-low.onnx";
config.model.vits.tokens = "vits-piper-en_GB-southern_english_female-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-southern_english_female-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model.
You can use the following code to play with vits-piper-en_GB-southern_english_female-low:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-southern_english_female-low/en_GB-southern_english_female-low.onnx",
lexicon="",
data_dir="vits-piper-en_GB-southern_english_female-low/espeak-ng-data",
tokens="vits-piper-en_GB-southern_english_female-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_GB-southern_english_female-medium
Info about this model
This model is converted from https://huggingface.co/csukuangfj/vits-piper-en_GB-southern_english_female-medium
Number of speakers | Sample rate |
---|---|
6 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-southern_english_female-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-southern_english_female-medium/en_GB-southern_english_female-medium.onnx";
config.model.vits.tokens = "vits-piper-en_GB-southern_english_female-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-southern_english_female-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model.
You can use the following code to play with vits-piper-en_GB-southern_english_female-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-southern_english_female-medium/en_GB-southern_english_female-medium.onnx",
lexicon="",
data_dir="vits-piper-en_GB-southern_english_female-medium/espeak-ng-data",
tokens="vits-piper-en_GB-southern_english_female-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
Speaker 1
Speaker 2
Speaker 3
Speaker 4
Speaker 5
vits-piper-en_GB-southern_english_male-medium
Info about this model
This model is converted from https://huggingface.co/csukuangfj/vits-piper-en_GB-southern_english_male-medium
Number of speakers | Sample rate |
---|---|
8 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-southern_english_male-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-southern_english_male-medium/en_GB-southern_english_male-medium.onnx";
config.model.vits.tokens = "vits-piper-en_GB-southern_english_male-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-southern_english_male-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model.
You can use the following code to play with vits-piper-en_GB-southern_english_male-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-southern_english_male-medium/en_GB-southern_english_male-medium.onnx",
lexicon="",
data_dir="vits-piper-en_GB-southern_english_male-medium/espeak-ng-data",
tokens="vits-piper-en_GB-southern_english_male-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
Speaker 1
Speaker 2
Speaker 3
Speaker 4
Speaker 5
Speaker 6
Speaker 7
vits-piper-en_GB-vctk-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_GB/vctk/medium
Number of speakers | Sample rate |
---|---|
109 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_GB-vctk-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_GB-vctk-medium/en_GB-vctk-medium.onnx";
config.model.vits.tokens = "vits-piper-en_GB-vctk-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_GB-vctk-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_GB-vctk-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_GB-vctk-medium/en_GB-vctk-medium.onnx",
lexicon="",
data_dir="vits-piper-en_GB-vctk-medium/espeak-ng-data",
tokens="vits-piper-en_GB-vctk-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
Speaker 1
Speaker 2
Speaker 3
Speaker 4
Speaker 5
Speaker 6
Speaker 7
Speaker 8
Speaker 9
Speaker 10
Speaker 11
Speaker 12
Speaker 13
Speaker 14
Speaker 15
Speaker 16
Speaker 17
Speaker 18
Speaker 19
Speaker 20
Speaker 21
Speaker 22
Speaker 23
Speaker 24
Speaker 25
Speaker 26
Speaker 27
Speaker 28
Speaker 29
Speaker 30
Speaker 31
Speaker 32
Speaker 33
Speaker 34
Speaker 35
Speaker 36
Speaker 37
Speaker 38
Speaker 39
Speaker 40
Speaker 41
Speaker 42
Speaker 43
Speaker 44
Speaker 45
Speaker 46
Speaker 47
Speaker 48
Speaker 49
Speaker 50
Speaker 51
Speaker 52
Speaker 53
Speaker 54
Speaker 55
Speaker 56
Speaker 57
Speaker 58
Speaker 59
Speaker 60
Speaker 61
Speaker 62
Speaker 63
Speaker 64
Speaker 65
Speaker 66
Speaker 67
Speaker 68
Speaker 69
Speaker 70
Speaker 71
Speaker 72
Speaker 73
Speaker 74
Speaker 75
Speaker 76
Speaker 77
Speaker 78
Speaker 79
Speaker 80
Speaker 81
Speaker 82
Speaker 83
Speaker 84
Speaker 85
Speaker 86
Speaker 87
Speaker 88
Speaker 89
Speaker 90
Speaker 91
Speaker 92
Speaker 93
Speaker 94
Speaker 95
Speaker 96
Speaker 97
Speaker 98
Speaker 99
Speaker 100
Speaker 101
Speaker 102
Speaker 103
Speaker 104
Speaker 105
Speaker 106
Speaker 107
Speaker 108
vits-piper-en_US-amy-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/amy/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_US-amy-low.tar.bz2
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-amy-low
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-amy-low/en_US-amy-low.onnx";
config.model.vits.tokens = "vits-piper-en_US-amy-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-amy-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from
https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_US-amy-low.tar.bz2
You can use the following code to play with vits-piper-en_US-amy-low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-amy-low/en_US-amy-low.onnx",
lexicon="",
data_dir="vits-piper-en_US-amy-low/espeak-ng-data",
tokens="vits-piper-en_US-amy-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-amy-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/amy/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-amy-medium
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-amy-medium/en_US-amy-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-amy-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-amy-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_US-amy-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-amy-medium/en_US-amy-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-amy-medium/espeak-ng-data",
tokens="vits-piper-en_US-amy-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-arctic-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/arctic/medium
Number of speakers | Sample rate |
---|---|
18 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-arctic-medium
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-arctic-medium/en_US-arctic-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-arctic-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-arctic-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_US-arctic-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-arctic-medium/en_US-arctic-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-arctic-medium/espeak-ng-data",
tokens="vits-piper-en_US-arctic-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
Speaker 1
Speaker 2
Speaker 3
Speaker 4
Speaker 5
Speaker 6
Speaker 7
Speaker 8
Speaker 9
Speaker 10
Speaker 11
Speaker 12
Speaker 13
Speaker 14
Speaker 15
Speaker 16
Speaker 17
vits-piper-en_US-bryce-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/bryce/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-bryce-medium
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-bryce-medium/en_US-bryce-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-bryce-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-bryce-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_US-bryce-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-bryce-medium/en_US-bryce-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-bryce-medium/espeak-ng-data",
tokens="vits-piper-en_US-bryce-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-danny-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/danny/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-danny-low
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-danny-low/en_US-danny-low.onnx";
config.model.vits.tokens = "vits-piper-en_US-danny-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-danny-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_US-danny-low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-danny-low/en_US-danny-low.onnx",
lexicon="",
data_dir="vits-piper-en_US-danny-low/espeak-ng-data",
tokens="vits-piper-en_US-danny-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-glados-high
Info about this model
This model is converted from https://github.com/rhasspy/piper/issues/187#issuecomment-1805709037
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-glados-high
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-glados-high/en_US-glados-high.onnx";
config.model.vits.tokens = "vits-piper-en_US-glados-high/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-glados-high/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_US-glados-high
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-glados-high/en_US-glados-high.onnx",
lexicon="",
data_dir="vits-piper-en_US-glados-high/espeak-ng-data",
tokens="vits-piper-en_US-glados-high/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-hfc_female-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/hfc_female/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-hfc_female-medium
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-hfc_female-medium/en_US-hfc_female-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-hfc_female-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-hfc_female-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_US-hfc_female-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-hfc_female-medium/en_US-hfc_female-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-hfc_female-medium/espeak-ng-data",
tokens="vits-piper-en_US-hfc_female-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-hfc_male-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/hfc_male/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-hfc_male-medium
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-hfc_male-medium/en_US-hfc_male-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-hfc_male-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-hfc_male-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_US-hfc_male-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-hfc_male-medium/en_US-hfc_male-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-hfc_male-medium/espeak-ng-data",
tokens="vits-piper-en_US-hfc_male-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-joe-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/joe/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-joe-medium
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-joe-medium/en_US-joe-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-joe-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-joe-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_US-joe-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-joe-medium/en_US-joe-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-joe-medium/espeak-ng-data",
tokens="vits-piper-en_US-joe-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-john-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/john/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-john-medium
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-john-medium/en_US-john-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-john-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-john-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_US-john-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-john-medium/en_US-john-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-john-medium/espeak-ng-data",
tokens="vits-piper-en_US-john-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-kathleen-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/kathleen/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-kathleen-low
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-kathleen-low/en_US-kathleen-low.onnx";
config.model.vits.tokens = "vits-piper-en_US-kathleen-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-kathleen-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_US-kathleen-low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-kathleen-low/en_US-kathleen-low.onnx",
lexicon="",
data_dir="vits-piper-en_US-kathleen-low/espeak-ng-data",
tokens="vits-piper-en_US-kathleen-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-kristin-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/kristin/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-kristin-medium
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-kristin-medium/en_US-kristin-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-kristin-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-kristin-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_US-kristin-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-kristin-medium/en_US-kristin-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-kristin-medium/espeak-ng-data",
tokens="vits-piper-en_US-kristin-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-kusal-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/kusal/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-kusal-medium
with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-kusal-medium/en_US-kusal-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-kusal-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-kusal-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx
via
pip install sherpa-onnx
and you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-en_US-kusal-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-kusal-medium/en_US-kusal-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-kusal-medium/espeak-ng-data",
tokens="vits-piper-en_US-kusal-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
Sample audio for each speaker is listed below:
Speaker 0
vits-piper-en_US-l2arctic-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/l2arctic/medium
Number of speakers | Sample rate |
---|---|
24 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-l2arctic-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-l2arctic-medium/en_US-l2arctic-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-l2arctic-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-l2arctic-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-l2arctic-medium with the Python API.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-l2arctic-medium/en_US-l2arctic-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-l2arctic-medium/espeak-ng-data",
tokens="vits-piper-en_US-l2arctic-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
Sample audio clips are provided for Speaker 0 through Speaker 23.
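Since this model has 24 speakers, it can be handy to render the same sentence with every voice and compare them. The following sketch reuses the configuration from the Python example above; the num_speakers property and the output file names are assumptions for illustration (if num_speakers is not available in your sherpa-onnx version, hard-code range(24)):
import sherpa_onnx
import soundfile as sf

config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-en_US-l2arctic-medium/en_US-l2arctic-medium.onnx",
            lexicon="",
            data_dir="vits-piper-en_US-l2arctic-medium/espeak-ng-data",
            tokens="vits-piper-en_US-l2arctic-medium/tokens.txt",
        ),
        num_threads=1,
    ),
)
tts = sherpa_onnx.OfflineTts(config)

text = "The easiest thing in the world was to lose touch with someone."

# Generate one WAV file per speaker id (0 .. 23 for this model).
# num_speakers is assumed to report the speaker count of the loaded model.
for sid in range(tts.num_speakers):
    audio = tts.generate(text=text, sid=sid, speed=1.0)
    sf.write(f"l2arctic-speaker-{sid}.wav", audio.samples, samplerate=audio.sample_rate)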
vits-piper-en_US-lessac-high
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/lessac/high
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-lessac-high with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-lessac-high/en_US-lessac-high.onnx";
config.model.vits.tokens = "vits-piper-en_US-lessac-high/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-lessac-high/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-lessac-high with the Python API.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-lessac-high/en_US-lessac-high.onnx",
lexicon="",
data_dir="vits-piper-en_US-lessac-high/espeak-ng-data",
tokens="vits-piper-en_US-lessac-high/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
Sample audio for each speaker is listed below:
Speaker 0
vits-piper-en_US-lessac-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/lessac/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-lessac-low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-lessac-low/en_US-lessac-low.onnx";
config.model.vits.tokens = "vits-piper-en_US-lessac-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-lessac-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-lessac-low with the Python API.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-lessac-low/en_US-lessac-low.onnx",
lexicon="",
data_dir="vits-piper-en_US-lessac-low/espeak-ng-data",
tokens="vits-piper-en_US-lessac-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
Sample audio for each speaker is listed below:
Speaker 0
vits-piper-en_US-lessac-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/lessac/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-lessac-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-lessac-medium/en_US-lessac-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-lessac-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-lessac-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-lessac-medium with the Python API.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-lessac-medium/en_US-lessac-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-lessac-medium/espeak-ng-data",
tokens="vits-piper-en_US-lessac-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
Sample audio for each speaker is listed below:
Speaker 0
vits-piper-en_US-libritts-high
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/libritts/high
Number of speakers | Sample rate |
---|---|
904 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-libritts-high with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-libritts-high/en_US-libritts-high.onnx";
config.model.vits.tokens = "vits-piper-en_US-libritts-high/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-libritts-high/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-libritts-high with the Python API.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-libritts-high/en_US-libritts-high.onnx",
lexicon="",
data_dir="vits-piper-en_US-libritts-high/espeak-ng-data",
tokens="vits-piper-en_US-libritts-high/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
Sample audio clips are provided for Speaker 0 through Speaker 903.
vits-piper-en_US-libritts_r-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/libritts_r/medium
Number of speakers | Sample rate |
---|---|
904 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-libritts_r-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-libritts_r-medium/en_US-libritts_r-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-libritts_r-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-libritts_r-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-libritts_r-medium with the Python API.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-libritts_r-medium/en_US-libritts_r-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-libritts_r-medium/espeak-ng-data",
tokens="vits-piper-en_US-libritts_r-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
Sample audio clips are provided for Speaker 0 through Speaker 903.
vits-piper-en_US-ljspeech-high
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/ljspeech/high
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-ljspeech-high with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-ljspeech-high/en_US-ljspeech-high.onnx";
config.model.vits.tokens = "vits-piper-en_US-ljspeech-high/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-ljspeech-high/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-ljspeech-high with the Python API.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-ljspeech-high/en_US-ljspeech-high.onnx",
lexicon="",
data_dir="vits-piper-en_US-ljspeech-high/espeak-ng-data",
tokens="vits-piper-en_US-ljspeech-high/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
Sample audio for each speaker is listed below:
Speaker 0
vits-piper-en_US-ljspeech-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/ljspeech/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-ljspeech-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-ljspeech-medium/en_US-ljspeech-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-ljspeech-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-ljspeech-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-ljspeech-medium with the Python API.
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-ljspeech-medium/en_US-ljspeech-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-ljspeech-medium/espeak-ng-data",
tokens="vits-piper-en_US-ljspeech-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
Sample audio for each speaker is listed below:
Speaker 0
vits-piper-en_US-norman-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/norman/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-norman-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-norman-medium/en_US-norman-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-norman-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-norman-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-norman-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-norman-medium/en_US-norman-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-norman-medium/espeak-ng-data",
tokens="vits-piper-en_US-norman-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-reza_ibrahim-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/reza_ibrahim/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-reza_ibrahim-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-reza_ibrahim-medium/en_US-reza_ibrahim-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-reza_ibrahim-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-reza_ibrahim-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-reza_ibrahim-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-reza_ibrahim-medium/en_US-reza_ibrahim-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-reza_ibrahim-medium/espeak-ng-data",
tokens="vits-piper-en_US-reza_ibrahim-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-ryan-high
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/ryan/high
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-ryan-high
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-ryan-high/en_US-ryan-high.onnx";
config.model.vits.tokens = "vits-piper-en_US-ryan-high/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-ryan-high/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-ryan-high
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-ryan-high/en_US-ryan-high.onnx",
lexicon="",
data_dir="vits-piper-en_US-ryan-high/espeak-ng-data",
tokens="vits-piper-en_US-ryan-high/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-ryan-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/ryan/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_US-ryan-low.tar.bz2
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-ryan-low
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-ryan-low/en_US-ryan-low.onnx";
config.model.vits.tokens = "vits-piper-en_US-ryan-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-ryan-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded the model from
https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_US-ryan-low.tar.bz2
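If you prefer to fetch and unpack the archive from Python instead of using a browser or command-line tools, here is a minimal sketch that relies only on the standard library; the URL and file name are the ones given above, and extracting into the current directory creates the vits-piper-en_US-ryan-low folder used below.
import tarfile
import urllib.request

url = "https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_US-ryan-low.tar.bz2"
filename = "vits-piper-en_US-ryan-low.tar.bz2"

# Download the archive and extract it into the current directory.
urllib.request.urlretrieve(url, filename)
with tarfile.open(filename, "r:bz2") as tar:
    tar.extractall(".")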
You can use the following code to play with vits-piper-en_US-ryan-low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-ryan-low/en_US-ryan-low.onnx",
lexicon="",
data_dir="vits-piper-en_US-ryan-low/espeak-ng-data",
tokens="vits-piper-en_US-ryan-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-ryan-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/ryan/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-ryan-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-ryan-medium/en_US-ryan-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-ryan-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-ryan-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-ryan-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-ryan-medium/en_US-ryan-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-ryan-medium/espeak-ng-data",
tokens="vits-piper-en_US-ryan-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-en_US-sam-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_US/sam/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-en_US-sam-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-en_US-sam-medium/en_US-sam-medium.onnx";
config.model.vits.tokens = "vits-piper-en_US-sam-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-en_US-sam-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-en_US-sam-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-en_US-sam-medium/en_US-sam-medium.onnx",
lexicon="",
data_dir="vits-piper-en_US-sam-medium/espeak-ng-data",
tokens="vits-piper-en_US-sam-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Friends fell out often because life was changing so fast. The easiest thing in the world was to lose touch with someone.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Friends fell out often because life was changing so fast.
The easiest thing in the world was to lose touch with someone.
sample audios for different speakers are listed below:
Speaker 0
Finnish
This section lists text to speech models for Finnish.
vits-piper-fi_FI-harri-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/fi/fi_FI/harri/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-fi_FI-harri-low
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-fi_FI-harri-low/fi_FI-harri-low.onnx";
config.model.vits.tokens = "vits-piper-fi_FI-harri-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-fi_FI-harri-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Sateenkaaren päässä on kultaa, mutta vain ne, jotka siihen uskovat, voivat sen löytää.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-fi_FI-harri-low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-fi_FI-harri-low/fi_FI-harri-low.onnx",
lexicon="",
data_dir="vits-piper-fi_FI-harri-low/espeak-ng-data",
tokens="vits-piper-fi_FI-harri-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Sateenkaaren päässä on kultaa, mutta vain ne, jotka siihen uskovat, voivat sen löytää.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Sateenkaaren päässä on kultaa, mutta vain ne, jotka siihen uskovat, voivat sen löytää.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-fi_FI-harri-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/fi/fi_FI/harri/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-fi_FI-harri-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-fi_FI-harri-medium/fi_FI-harri-medium.onnx";
config.model.vits.tokens = "vits-piper-fi_FI-harri-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-fi_FI-harri-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Sateenkaaren päässä on kultaa, mutta vain ne, jotka siihen uskovat, voivat sen löytää.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-fi_FI-harri-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-fi_FI-harri-medium/fi_FI-harri-medium.onnx",
lexicon="",
data_dir="vits-piper-fi_FI-harri-medium/espeak-ng-data",
tokens="vits-piper-fi_FI-harri-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Sateenkaaren päässä on kultaa, mutta vain ne, jotka siihen uskovat, voivat sen löytää.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Sateenkaaren päässä on kultaa, mutta vain ne, jotka siihen uskovat, voivat sen löytää.
sample audios for different speakers are listed below:
Speaker 0
French
This section lists text to speech models for French.
- vits-piper-fr_FR-gilles-low
- vits-piper-fr_FR-siwis-low
- vits-piper-fr_FR-siwis-medium
- vits-piper-fr_FR-tom-medium
- vits-piper-fr_FR-upmc-medium
vits-piper-fr_FR-gilles-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/fr/fr_FR/gilles/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-fr_FR-gilles-low
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-fr_FR-gilles-low/fr_FR-gilles-low.onnx";
config.model.vits.tokens = "vits-piper-fr_FR-gilles-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-fr_FR-gilles-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Pas de nouvelles, bonnes nouvelles.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-fr_FR-gilles-low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-fr_FR-gilles-low/fr_FR-gilles-low.onnx",
lexicon="",
data_dir="vits-piper-fr_FR-gilles-low/espeak-ng-data",
tokens="vits-piper-fr_FR-gilles-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Pas de nouvelles, bonnes nouvelles.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Pas de nouvelles, bonnes nouvelles.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-fr_FR-siwis-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/fr/fr_FR/siwis/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-fr_FR-siwis-low
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-fr_FR-siwis-low/fr_FR-siwis-low.onnx";
config.model.vits.tokens = "vits-piper-fr_FR-siwis-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-fr_FR-siwis-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Pas de nouvelles, bonnes nouvelles.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-fr_FR-siwis-low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-fr_FR-siwis-low/fr_FR-siwis-low.onnx",
lexicon="",
data_dir="vits-piper-fr_FR-siwis-low/espeak-ng-data",
tokens="vits-piper-fr_FR-siwis-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Pas de nouvelles, bonnes nouvelles.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Pas de nouvelles, bonnes nouvelles.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-fr_FR-siwis-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/fr/fr_FR/siwis/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-fr_FR-siwis-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-fr_FR-siwis-medium/fr_FR-siwis-medium.onnx";
config.model.vits.tokens = "vits-piper-fr_FR-siwis-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-fr_FR-siwis-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Pas de nouvelles, bonnes nouvelles.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-fr_FR-siwis-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-fr_FR-siwis-medium/fr_FR-siwis-medium.onnx",
lexicon="",
data_dir="vits-piper-fr_FR-siwis-medium/espeak-ng-data",
tokens="vits-piper-fr_FR-siwis-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Pas de nouvelles, bonnes nouvelles.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Pas de nouvelles, bonnes nouvelles.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-fr_FR-tom-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/fr/fr_FR/tom/medium
Number of speakers | Sample rate |
---|---|
1 | 44100 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-fr_FR-tom-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-fr_FR-tom-medium/fr_FR-tom-medium.onnx";
config.model.vits.tokens = "vits-piper-fr_FR-tom-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-fr_FR-tom-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Pas de nouvelles, bonnes nouvelles.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-fr_FR-tom-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-fr_FR-tom-medium/fr_FR-tom-medium.onnx",
lexicon="",
data_dir="vits-piper-fr_FR-tom-medium/espeak-ng-data",
tokens="vits-piper-fr_FR-tom-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Pas de nouvelles, bonnes nouvelles.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Pas de nouvelles, bonnes nouvelles.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-fr_FR-upmc-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/fr/fr_FR/upmc/medium
Number of speakers | Sample rate |
---|---|
2 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-fr_FR-upmc-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-fr_FR-upmc-medium/fr_FR-upmc-medium.onnx";
config.model.vits.tokens = "vits-piper-fr_FR-upmc-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-fr_FR-upmc-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Pas de nouvelles, bonnes nouvelles.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-fr_FR-upmc-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-fr_FR-upmc-medium/fr_FR-upmc-medium.onnx",
lexicon="",
data_dir="vits-piper-fr_FR-upmc-medium/espeak-ng-data",
tokens="vits-piper-fr_FR-upmc-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Pas de nouvelles, bonnes nouvelles.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Pas de nouvelles, bonnes nouvelles.
sample audios for different speakers are listed below:
Speaker 0
Speaker 1
Georgian
This section lists text to speech models for Georgian.
vits-piper-ka_GE-natia-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ka/ka_GE/natia/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ka_GE-natia-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ka_GE-natia-medium/ka_GE-natia-medium.onnx";
config.model.vits.tokens = "vits-piper-ka_GE-natia-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-ka_GE-natia-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "ღვინო თბილისში, საქართველო სამტრედში";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and that you have downloaded and extracted the model.
You can use the following code to play with vits-piper-ka_GE-natia-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-ka_GE-natia-medium/ka_GE-natia-medium.onnx",
lexicon="",
data_dir="vits-piper-ka_GE-natia-medium/espeak-ng-data",
tokens="vits-piper-ka_GE-natia-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="ღვინო თბილისში, საქართველო სამტრედში",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
ღვინო თბილისში, საქართველო სამტრედში
sample audios for different speakers are listed below:
Speaker 0
German
This section lists text to speech models for German.
- vits-piper-de_DE-eva_k-x_low
- vits-piper-de_DE-glados-high
- vits-piper-de_DE-glados-low
- vits-piper-de_DE-glados-medium
- vits-piper-de_DE-glados_turret-high
- vits-piper-de_DE-glados_turret-low
- vits-piper-de_DE-glados_turret-medium
- vits-piper-de_DE-karlsson-low
- vits-piper-de_DE-kerstin-low
- vits-piper-de_DE-pavoque-low
- vits-piper-de_DE-ramona-low
- vits-piper-de_DE-thorsten-high
- vits-piper-de_DE-thorsten-low
- vits-piper-de_DE-thorsten-medium
- vits-piper-de_DE-thorsten_emotional-medium
vits-piper-de_DE-eva_k-x_low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/de/de_DE/eva_k/x_low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-eva_k-x_low
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-eva_k-x_low/de_DE-eva_k-x_low.onnx";
config.model.vits.tokens = "vits-piper-de_DE-eva_k-x_low/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-eva_k-x_low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-eva_k-x_low:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-eva_k-x_low/de_DE-eva_k-x_low.onnx",
lexicon="",
data_dir="vits-piper-de_DE-eva_k-x_low/espeak-ng-data",
tokens="vits-piper-de_DE-eva_k-x_low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
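Since every Piper voice on this page ships the same three files (the .onnx model, tokens.txt, and espeak-ng-data), you can wrap the boilerplate above in a small helper. The sketch below is illustrative; make_piper_tts is a hypothetical name, not part of the sherpa_onnx API.
import sherpa_onnx


def make_piper_tts(model_dir: str, model_file: str,
                   num_threads: int = 1) -> sherpa_onnx.OfflineTts:
    """Build an OfflineTts for a Piper voice directory laid out as above."""
    config = sherpa_onnx.OfflineTtsConfig(
        model=sherpa_onnx.OfflineTtsModelConfig(
            vits=sherpa_onnx.OfflineTtsVitsModelConfig(
                model=f"{model_dir}/{model_file}",
                lexicon="",
                data_dir=f"{model_dir}/espeak-ng-data",
                tokens=f"{model_dir}/tokens.txt",
            ),
            num_threads=num_threads,
        ),
    )
    if not config.validate():
        raise ValueError(f"Please check the files under {model_dir}")
    return sherpa_onnx.OfflineTts(config)


# The same call pattern works for any of the voices listed on this page.
tts = make_piper_tts("vits-piper-de_DE-eva_k-x_low", "de_DE-eva_k-x_low.onnx")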
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-glados-high
Info about this model
This model is converted from https://huggingface.co/systemofapwne/piper-de-glados
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-glados-high with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-glados-high/de_DE-glados-high.onnx";
config.model.vits.tokens = "vits-piper-de_DE-glados-high/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-glados-high/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-glados-high:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-glados-high/de_DE-glados-high.onnx",
lexicon="",
data_dir="vits-piper-de_DE-glados-high/espeak-ng-data",
tokens="vits-piper-de_DE-glados-high/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
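If you want a rough idea of how fast this voice runs on your machine, you can time generation and compare it against the duration of the produced audio (the real-time factor). The sketch below continues from the example above, reusing the tts object; the numbers will of course depend on your hardware.
import time

# `tts` is the OfflineTts built from the config shown above.
start = time.perf_counter()
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
                     sid=0,
                     speed=1.0)
elapsed = time.perf_counter() - start

# Duration of the generated audio in seconds.
audio_seconds = len(audio.samples) / audio.sample_rate
print(f"Generated {audio_seconds:.2f} s of audio in {elapsed:.2f} s "
      f"(real-time factor {elapsed / audio_seconds:.2f})")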
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-glados-low
Info about this model
This model is converted from https://huggingface.co/systemofapwne/piper-de-glados
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-glados-low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-glados-low/de_DE-glados-low.onnx";
config.model.vits.tokens = "vits-piper-de_DE-glados-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-glados-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-glados-low:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-glados-low/de_DE-glados-low.onnx",
lexicon="",
data_dir="vits-piper-de_DE-glados-low/espeak-ng-data",
tokens="vits-piper-de_DE-glados-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-glados-medium
Info about this model
This model is converted from https://huggingface.co/systemofapwne/piper-de-glados
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-glados-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-glados-medium/de_DE-glados-medium.onnx";
config.model.vits.tokens = "vits-piper-de_DE-glados-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-glados-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-glados-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-glados-medium/de_DE-glados-medium.onnx",
lexicon="",
data_dir="vits-piper-de_DE-glados-medium/espeak-ng-data",
tokens="vits-piper-de_DE-glados-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-glados_turret-high
Info about this model
This model is converted from https://huggingface.co/systemofapwne/piper-de-glados
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-glados_turret-high with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-glados_turret-high/de_DE-glados_turret-high.onnx";
config.model.vits.tokens = "vits-piper-de_DE-glados_turret-high/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-glados_turret-high/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-glados_turret-high:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-glados_turret-high/de_DE-glados_turret-high.onnx",
lexicon="",
data_dir="vits-piper-de_DE-glados_turret-high/espeak-ng-data",
tokens="vits-piper-de_DE-glados_turret-high/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
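The speed argument of generate() scales the speaking rate: values above 1.0 speak faster, values below 1.0 slower. The sketch below reuses the tts object constructed above and writes one file per speed; the file names are arbitrary.
import soundfile as sf

text = "Alles hat ein Ende, nur die Wurst hat zwei."
for speed in (0.8, 1.0, 1.2):
    audio = tts.generate(text=text, sid=0, speed=speed)
    sf.write(f"test-speed-{speed}.wav",
             audio.samples,
             samplerate=audio.sample_rate)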
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-glados_turret-low
Info about this model
This model is converted from https://huggingface.co/systemofapwne/piper-de-glados
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-glados_turret-low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-glados_turret-low/de_DE-glados_turret-low.onnx";
config.model.vits.tokens = "vits-piper-de_DE-glados_turret-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-glados_turret-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-glados_turret-low:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-glados_turret-low/de_DE-glados_turret-low.onnx",
lexicon="",
data_dir="vits-piper-de_DE-glados_turret-low/espeak-ng-data",
tokens="vits-piper-de_DE-glados_turret-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-glados_turret-medium
Info about this model
This model is converted from https://huggingface.co/systemofapwne/piper-de-glados
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-glados_turret-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-glados_turret-medium/de_DE-glados_turret-medium.onnx";
config.model.vits.tokens = "vits-piper-de_DE-glados_turret-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-glados_turret-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-glados_turret-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-glados_turret-medium/de_DE-glados_turret-medium.onnx",
lexicon="",
data_dir="vits-piper-de_DE-glados_turret-medium/espeak-ng-data",
tokens="vits-piper-de_DE-glados_turret-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-karlsson-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/de/de_DE/karlsson/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-karlsson-low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-karlsson-low/de_DE-karlsson-low.onnx";
config.model.vits.tokens = "vits-piper-de_DE-karlsson-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-karlsson-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-karlsson-low:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-karlsson-low/de_DE-karlsson-low.onnx",
lexicon="",
data_dir="vits-piper-de_DE-karlsson-low/espeak-ng-data",
tokens="vits-piper-de_DE-karlsson-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
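Instead of writing a file, you can also play the generated samples directly. The sketch below assumes the third-party sounddevice package (pip install sounddevice), which is not required by sherpa-onnx itself, and reuses the audio object from the example above.
import numpy as np
import sounddevice as sd

# Convert the float samples to a numpy array and play them.
samples = np.asarray(audio.samples, dtype=np.float32)
sd.play(samples, samplerate=audio.sample_rate)
sd.wait()  # block until playback has finished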
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-kerstin-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/de/de_DE/kerstin/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-kerstin-low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-kerstin-low/de_DE-kerstin-low.onnx";
config.model.vits.tokens = "vits-piper-de_DE-kerstin-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-kerstin-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-kerstin-low:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-kerstin-low/de_DE-kerstin-low.onnx",
lexicon="",
data_dir="vits-piper-de_DE-kerstin-low/espeak-ng-data",
tokens="vits-piper-de_DE-kerstin-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-pavoque-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/de/de_DE/pavoque/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-pavoque-low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-pavoque-low/de_DE-pavoque-low.onnx";
config.model.vits.tokens = "vits-piper-de_DE-pavoque-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-pavoque-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-pavoque-low:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-pavoque-low/de_DE-pavoque-low.onnx",
lexicon="",
data_dir="vits-piper-de_DE-pavoque-low/espeak-ng-data",
tokens="vits-piper-de_DE-pavoque-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-ramona-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/de/de_DE/ramona/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-ramona-low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-ramona-low/de_DE-ramona-low.onnx";
config.model.vits.tokens = "vits-piper-de_DE-ramona-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-ramona-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-ramona-low:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-ramona-low/de_DE-ramona-low.onnx",
lexicon="",
data_dir="vits-piper-de_DE-ramona-low/espeak-ng-data",
tokens="vits-piper-de_DE-ramona-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
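For longer inputs you may prefer to synthesize sentence by sentence and join the results into a single file. A minimal sketch, reusing the tts object from the example above; the second sentence and the 0.3-second pause are just illustrative choices.
import numpy as np
import soundfile as sf

sentences = [
    "Alles hat ein Ende, nur die Wurst hat zwei.",
    "Übung macht den Meister.",
]

pieces = []
for s in sentences:
    audio = tts.generate(text=s, sid=0, speed=1.0)
    pieces.append(np.asarray(audio.samples, dtype=np.float32))
    # Insert a short pause between sentences.
    pieces.append(np.zeros(int(0.3 * audio.sample_rate), dtype=np.float32))

sf.write("test-all.wav", np.concatenate(pieces),
         samplerate=audio.sample_rate)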
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-thorsten-high
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/de/de_DE/thorsten/high
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-thorsten-high with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-thorsten-high/de_DE-thorsten-high.onnx";
config.model.vits.tokens = "vits-piper-de_DE-thorsten-high/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-thorsten-high/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-thorsten-high:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-thorsten-high/de_DE-thorsten-high.onnx",
lexicon="",
data_dir="vits-piper-de_DE-thorsten-high/espeak-ng-data",
tokens="vits-piper-de_DE-thorsten-high/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-thorsten-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/de/de_DE/thorsten/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-thorsten-low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-thorsten-low/de_DE-thorsten-low.onnx";
config.model.vits.tokens = "vits-piper-de_DE-thorsten-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-thorsten-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-thorsten-low:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-thorsten-low/de_DE-thorsten-low.onnx",
lexicon="",
data_dir="vits-piper-de_DE-thorsten-low/espeak-ng-data",
tokens="vits-piper-de_DE-thorsten-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-thorsten-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/de/de_DE/thorsten/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-thorsten-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-thorsten-medium/de_DE-thorsten-medium.onnx";
config.model.vits.tokens = "vits-piper-de_DE-thorsten-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-thorsten-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-thorsten-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-thorsten-medium/de_DE-thorsten-medium.onnx",
lexicon="",
data_dir="vits-piper-de_DE-thorsten-medium/espeak-ng-data",
tokens="vits-piper-de_DE-thorsten-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-de_DE-thorsten_emotional-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/de/de_DE/thorsten_emotional/medium
Number of speakers | Sample rate |
---|---|
8 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-de_DE-thorsten_emotional-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-de_DE-thorsten_emotional-medium/de_DE-thorsten_emotional-medium.onnx";
config.model.vits.tokens = "vits-piper-de_DE-thorsten_emotional-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-de_DE-thorsten_emotional-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Alles hat ein Ende, nur die Wurst hat zwei.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run the following before you run /tmp/test-piper:
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via pip install sherpa-onnx and have downloaded the model.
You can use the following code to play with vits-piper-de_DE-thorsten_emotional-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-de_DE-thorsten_emotional-medium/de_DE-thorsten_emotional-medium.onnx",
lexicon="",
data_dir="vits-piper-de_DE-thorsten_emotional-medium/espeak-ng-data",
tokens="vits-piper-de_DE-thorsten_emotional-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Alles hat ein Ende, nur die Wurst hat zwei.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
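Since this voice has 8 speakers (sid 0 through 7, one per emotional style), you can generate the same sentence with each of them and compare. The sketch below reuses the tts object from the example above; the output file names are arbitrary.
import soundfile as sf

text = "Alles hat ein Ende, nur die Wurst hat zwei."
for sid in range(8):
    audio = tts.generate(text=text, sid=sid, speed=1.0)
    sf.write(f"test-sid-{sid}.wav",
             audio.samples,
             samplerate=audio.sample_rate)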
Samples
For the following text:
Alles hat ein Ende, nur die Wurst hat zwei.
sample audios for different speakers are listed below:
Speaker 0
Speaker 1
Speaker 2
Speaker 3
Speaker 4
Speaker 5
Speaker 6
Speaker 7
Greek
This section lists text to speech models for Greek.
vits-piper-el_GR-rapunzelina-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/el/el_GR/rapunzelina/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what an ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-el_GR-rapunzelina-low
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-el_GR-rapunzelina-low/el_GR-rapunzelina-low.onnx";
config.model.vits.tokens = "vits-piper-el_GR-rapunzelina-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-el_GR-rapunzelina-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Όταν το δέντρο είναι μικρό, το στρέβλεις· όταν είναι μεγάλο, το λυγίζεις.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
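The example above does not check for failures. As a defensive sketch, assuming (which this document does not state) that creation returns NULL when the model files cannot be loaded, you could replace the creation line with:
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
if (!tts) {
  // Most likely a model, tokens, or data_dir path in the config is wrong.
  fprintf(stderr, "Failed to create the offline TTS object\n");
  return -1;
}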
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-el_GR-rapunzelina-low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-el_GR-rapunzelina-low/el_GR-rapunzelina-low.onnx",
lexicon="",
data_dir="vits-piper-el_GR-rapunzelina-low/espeak-ng-data",
tokens="vits-piper-el_GR-rapunzelina-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Όταν το δέντρο είναι μικρό, το στρέβλεις· όταν είναι μεγάλο, το λυγίζεις.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Όταν το δέντρο είναι μικρό, το στρέβλεις· όταν είναι μεγάλο, το λυγίζεις.
sample audio clips for different speakers are listed below:
Speaker 0
Hungarian
This section lists text to speech models for Hungarian.
vits-piper-hu_HU-anna-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/hu/hu_HU/anna/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-hu_HU-anna-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-hu_HU-anna-medium/hu_HU-anna-medium.onnx";
config.model.vits.tokens = "vits-piper-hu_HU-anna-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-hu_HU-anna-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Ha északról fúj a szél, a lányok nem lógnak együtt.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-hu_HU-anna-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-hu_HU-anna-medium/hu_HU-anna-medium.onnx",
lexicon="",
data_dir="vits-piper-hu_HU-anna-medium/espeak-ng-data",
tokens="vits-piper-hu_HU-anna-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Ha északról fúj a szél, a lányok nem lógnak együtt.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Ha északról fúj a szél, a lányok nem lógnak együtt.
sample audio clips for different speakers are listed below:
Speaker 0
vits-piper-hu_HU-berta-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/hu/hu_HU/berta/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-hu_HU-berta-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-hu_HU-berta-medium/hu_HU-berta-medium.onnx";
config.model.vits.tokens = "vits-piper-hu_HU-berta-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-hu_HU-berta-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Ha északról fúj a szél, a lányok nem lógnak együtt.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-hu_HU-berta-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-hu_HU-berta-medium/hu_HU-berta-medium.onnx",
lexicon="",
data_dir="vits-piper-hu_HU-berta-medium/espeak-ng-data",
tokens="vits-piper-hu_HU-berta-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Ha északról fúj a szél, a lányok nem lógnak együtt.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Ha északról fúj a szél, a lányok nem lógnak együtt.
sample audio clips for different speakers are listed below:
Speaker 0
vits-piper-hu_HU-imre-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/hu/hu_HU/imre/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-hu_HU-imre-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-hu_HU-imre-medium/hu_HU-imre-medium.onnx";
config.model.vits.tokens = "vits-piper-hu_HU-imre-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-hu_HU-imre-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Ha északról fúj a szél, a lányok nem lógnak együtt.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-hu_HU-imre-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-hu_HU-imre-medium/hu_HU-imre-medium.onnx",
lexicon="",
data_dir="vits-piper-hu_HU-imre-medium/espeak-ng-data",
tokens="vits-piper-hu_HU-imre-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Ha északról fúj a szél, a lányok nem lógnak együtt.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Ha északról fúj a szél, a lányok nem lógnak együtt.
sample audio clips for different speakers are listed below:
Speaker 0
Icelandic
This section lists text to speech models for Icelandic.
- vits-piper-is_IS-bui-medium
- vits-piper-is_IS-salka-medium
- vits-piper-is_IS-steinn-medium
- vits-piper-is_IS-ugla-medium
vits-piper-is_IS-bui-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/is/is_IS/bui/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-is_IS-bui-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-is_IS-bui-medium/is_IS-bui-medium.onnx";
config.model.vits.tokens = "vits-piper-is_IS-bui-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-is_IS-bui-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Farðu með allt, eða farðu ekki.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-is_IS-bui-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-is_IS-bui-medium/is_IS-bui-medium.onnx",
lexicon="",
data_dir="vits-piper-is_IS-bui-medium/espeak-ng-data",
tokens="vits-piper-is_IS-bui-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Farðu með allt, eða farðu ekki.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Farðu með allt, eða farðu ekki.
sample audio clips for different speakers are listed below:
Speaker 0
vits-piper-is_IS-salka-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/is/is_IS/salka/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-is_IS-salka-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-is_IS-salka-medium/is_IS-salka-medium.onnx";
config.model.vits.tokens = "vits-piper-is_IS-salka-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-is_IS-salka-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Farðu með allt, eða farðu ekki.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-is_IS-salka-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-is_IS-salka-medium/is_IS-salka-medium.onnx",
lexicon="",
data_dir="vits-piper-is_IS-salka-medium/espeak-ng-data",
tokens="vits-piper-is_IS-salka-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Farðu með allt, eða farðu ekki.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Farðu með allt, eða farðu ekki.
sample audio clips for different speakers are listed below:
Speaker 0
vits-piper-is_IS-steinn-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/is/is_IS/steinn/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-is_IS-steinn-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-is_IS-steinn-medium/is_IS-steinn-medium.onnx";
config.model.vits.tokens = "vits-piper-is_IS-steinn-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-is_IS-steinn-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Farðu með allt, eða farðu ekki.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-is_IS-steinn-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-is_IS-steinn-medium/is_IS-steinn-medium.onnx",
lexicon="",
data_dir="vits-piper-is_IS-steinn-medium/espeak-ng-data",
tokens="vits-piper-is_IS-steinn-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Farðu með allt, eða farðu ekki.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Farðu með allt, eða farðu ekki.
sample audio clips for different speakers are listed below:
Speaker 0
vits-piper-is_IS-ugla-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/is/is_IS/ugla/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-is_IS-ugla-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-is_IS-ugla-medium/is_IS-ugla-medium.onnx";
config.model.vits.tokens = "vits-piper-is_IS-ugla-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-is_IS-ugla-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Farðu með allt, eða farðu ekki.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-is_IS-ugla-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-is_IS-ugla-medium/is_IS-ugla-medium.onnx",
lexicon="",
data_dir="vits-piper-is_IS-ugla-medium/espeak-ng-data",
tokens="vits-piper-is_IS-ugla-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Farðu með allt, eða farðu ekki.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Farðu með allt, eða farðu ekki.
sample audio clips for different speakers are listed below:
Speaker 0
Italian
This section lists text to speech models for Italian.
vits-piper-it_IT-paola-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/it/it_IT/paola/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-it_IT-paola-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-it_IT-paola-medium/it_IT-paola-medium.onnx";
config.model.vits.tokens = "vits-piper-it_IT-paola-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-it_IT-paola-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Se vuoi andare veloce, vai da solo; se vuoi andare lontano, vai insieme.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-it_IT-paola-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-it_IT-paola-medium/it_IT-paola-medium.onnx",
lexicon="",
data_dir="vits-piper-it_IT-paola-medium/espeak-ng-data",
tokens="vits-piper-it_IT-paola-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Se vuoi andare veloce, vai da solo; se vuoi andare lontano, vai insieme.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Se vuoi andare veloce, vai da solo; se vuoi andare lontano, vai insieme.
sample audio clips for different speakers are listed below:
Speaker 0
vits-piper-it_IT-riccardo-x_low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/it/it_IT/riccardo/x_low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-it_IT-riccardo-x_low
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-it_IT-riccardo-x_low/it_IT-riccardo-x_low.onnx";
config.model.vits.tokens = "vits-piper-it_IT-riccardo-x_low/tokens.txt";
config.model.vits.data_dir = "vits-piper-it_IT-riccardo-x_low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Se vuoi andare veloce, vai da solo; se vuoi andare lontano, vai insieme.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-it_IT-riccardo-x_low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-it_IT-riccardo-x_low/it_IT-riccardo-x_low.onnx",
lexicon="",
data_dir="vits-piper-it_IT-riccardo-x_low/espeak-ng-data",
tokens="vits-piper-it_IT-riccardo-x_low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Se vuoi andare veloce, vai da solo; se vuoi andare lontano, vai insieme.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Se vuoi andare veloce, vai da solo; se vuoi andare lontano, vai insieme.
sample audio clips for different speakers are listed below:
Speaker 0
Kazakh
This section lists text to speech models for Kazakh.
vits-piper-kk_KZ-iseke-x_low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/kk/kk_KZ/iseke/x_low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-kk_KZ-iseke-x_low
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-kk_KZ-iseke-x_low/kk_KZ-iseke-x_low.onnx";
config.model.vits.tokens = "vits-piper-kk_KZ-iseke-x_low/tokens.txt";
config.model.vits.data_dir = "vits-piper-kk_KZ-iseke-x_low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Әлемнің жұлдыздары сенің көзің, жаным.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-kk_KZ-iseke-x_low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-kk_KZ-iseke-x_low/kk_KZ-iseke-x_low.onnx",
lexicon="",
data_dir="vits-piper-kk_KZ-iseke-x_low/espeak-ng-data",
tokens="vits-piper-kk_KZ-iseke-x_low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Әлемнің жұлдыздары сенің көзің, жаным.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Әлемнің жұлдыздары сенің көзің, жаным.
sample audio clips for different speakers are listed below:
Speaker 0
vits-piper-kk_KZ-issai-high
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/kk/kk_KZ/issai/high
Number of speakers | Sample rate |
---|---|
6 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-kk_KZ-issai-high
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-kk_KZ-issai-high/kk_KZ-issai-high.onnx";
config.model.vits.tokens = "vits-piper-kk_KZ-issai-high/tokens.txt";
config.model.vits.data_dir = "vits-piper-kk_KZ-issai-high/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Әлемнің жұлдыздары сенің көзің, жаным.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-kk_KZ-issai-high
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-kk_KZ-issai-high/kk_KZ-issai-high.onnx",
lexicon="",
data_dir="vits-piper-kk_KZ-issai-high/espeak-ng-data",
tokens="vits-piper-kk_KZ-issai-high/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Әлемнің жұлдыздары сенің көзің, жаным.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Әлемнің жұлдыздары сенің көзің, жаным.
sample audio clips for different speakers are listed below:
Speaker 0
Speaker 1
Speaker 2
Speaker 3
Speaker 4
Speaker 5
vits-piper-kk_KZ-raya-x_low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/kk/kk_KZ/raya/x_low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-kk_KZ-raya-x_low
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-kk_KZ-raya-x_low/kk_KZ-raya-x_low.onnx";
config.model.vits.tokens = "vits-piper-kk_KZ-raya-x_low/tokens.txt";
config.model.vits.data_dir = "vits-piper-kk_KZ-raya-x_low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Әлемнің жұлдыздары сенің көзің, жаным.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-kk_KZ-raya-x_low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-kk_KZ-raya-x_low/kk_KZ-raya-x_low.onnx",
lexicon="",
data_dir="vits-piper-kk_KZ-raya-x_low/espeak-ng-data",
tokens="vits-piper-kk_KZ-raya-x_low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Әлемнің жұлдыздары сенің көзің, жаным.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Әлемнің жұлдыздары сенің көзің, жаным.
sample audio clips for different speakers are listed below:
Speaker 0
Latvian
This section lists text to speech models for Latvian.
vits-piper-lv_LV-aivars-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/lv/lv_LV/aivars/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-lv_LV-aivars-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-lv_LV-aivars-medium/lv_LV-aivars-medium.onnx";
config.model.vits.tokens = "vits-piper-lv_LV-aivars-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-lv_LV-aivars-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Zeme nenes augļus, ja tēvs sēj, bet māte auž.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run it:
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the address listed above.
You can use the following code to play with vits-piper-lv_LV-aivars-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-lv_LV-aivars-medium/lv_LV-aivars-medium.onnx",
lexicon="",
data_dir="vits-piper-lv_LV-aivars-medium/espeak-ng-data",
tokens="vits-piper-lv_LV-aivars-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Zeme nenes augļus, ja tēvs sēj, bet māte auž.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Zeme nenes augļus, ja tēvs sēj, bet māte auž.
sample audio clips for different speakers are listed below:
Speaker 0
Luxembourgish
This section lists text to speech models for Luxembourgish.
vits-piper-lb_LU-marylux-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/lb/lb_LU/marylux/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-lb_LU-marylux-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-lb_LU-marylux-medium/lb_LU-marylux-medium.onnx";
config.model.vits.tokens = "vits-piper-lb_LU-marylux-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-lb_LU-marylux-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Op der Haaptstrooss sinn all Stroossen Brécken, awer d'Dier kann iwwerall erreecht ginn.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-lb_LU-marylux-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-lb_LU-marylux-medium/lb_LU-marylux-medium.onnx",
lexicon="",
data_dir="vits-piper-lb_LU-marylux-medium/espeak-ng-data",
tokens="vits-piper-lb_LU-marylux-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Op der Haaptstrooss sinn all Stroossen Brécken, awer d'Dier kann iwwerall erreecht ginn.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Op der Haaptstrooss sinn all Stroossen Brécken, awer d'Dier kann iwwerall erreecht ginn.
sample audio for different speakers is listed below:
Speaker 0
Malayalam
This section lists text to speech models for Malayalam.
vits-piper-ml_IN-arjun-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ml/ml_IN/arjun/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ml_IN-arjun-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ml_IN-arjun-medium/ml_IN-arjun-medium.onnx";
config.model.vits.tokens = "vits-piper-ml_IN-arjun-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-ml_IN-arjun-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "മണ്ണ് മരിക്കുമ്പോൾ കാട്ടിലെ വെള്ളവും മരിക്കുന്നു.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-ml_IN-arjun-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-ml_IN-arjun-medium/ml_IN-arjun-medium.onnx",
lexicon="",
data_dir="vits-piper-ml_IN-arjun-medium/espeak-ng-data",
tokens="vits-piper-ml_IN-arjun-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="മണ്ണ് മരിക്കുമ്പോൾ കാട്ടിലെ വെള്ളവും മരിക്കുന്നു.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
മണ്ണ് മരിക്കുമ്പോൾ കാട്ടിലെ വെള്ളവും മരിക്കുന്നു.
sample audio for different speakers is listed below:
Speaker 0
vits-piper-ml_IN-meera-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ml/ml_IN/meera/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ml_IN-meera-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ml_IN-meera-medium/ml_IN-meera-medium.onnx";
config.model.vits.tokens = "vits-piper-ml_IN-meera-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-ml_IN-meera-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "മണ്ണ് മരിക്കുമ്പോൾ കാട്ടിലെ വെള്ളവും മരിക്കുന്നു.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-ml_IN-meera-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-ml_IN-meera-medium/ml_IN-meera-medium.onnx",
lexicon="",
data_dir="vits-piper-ml_IN-meera-medium/espeak-ng-data",
tokens="vits-piper-ml_IN-meera-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="മണ്ണ് മരിക്കുമ്പോൾ കാട്ടിലെ വെള്ളവും മരിക്കുന്നു.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
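If config.validate() fails, the most common cause is a wrong path. As an optional illustration (not part of the official example), you can check that the expected files exist before building the config:
from pathlib import Path

model_dir = Path("vits-piper-ml_IN-meera-medium")
required = [
    model_dir / "ml_IN-meera-medium.onnx",
    model_dir / "tokens.txt",
    model_dir / "espeak-ng-data",
]
missing = [str(p) for p in required if not p.exists()]
if missing:
    raise FileNotFoundError(f"Missing model files: {missing}")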
Samples
For the following text:
മണ്ണ് മരിക്കുമ്പോൾ കാട്ടിലെ വെള്ളവും മരിക്കുന്നു.
sample audio for different speakers is listed below:
Speaker 0
Nepali
This section lists text to speech models for Nepali.
vits-piper-ne_NP-chitwan-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ne/ne_NP/chitwan/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ne_NP-chitwan-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ne_NP-chitwan-medium/ne_NP-chitwan-medium.onnx";
config.model.vits.tokens = "vits-piper-ne_NP-chitwan-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-ne_NP-chitwan-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "घाँसको पातले पहाडलाई अभिवादन गर्दै झुक्छ।";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-ne_NP-chitwan-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-ne_NP-chitwan-medium/ne_NP-chitwan-medium.onnx",
lexicon="",
data_dir="vits-piper-ne_NP-chitwan-medium/espeak-ng-data",
tokens="vits-piper-ne_NP-chitwan-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="घाँसको पातले पहाडलाई अभिवादन गर्दै झुक्छ।",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
घाँसको पातले पहाडलाई अभिवादन गर्दै झुक्छ।
sample audio for different speakers is listed below:
Speaker 0
vits-piper-ne_NP-google-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ne/ne_NP/google/medium
Number of speakers | Sample rate |
---|---|
18 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ne_NP-google-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ne_NP-google-medium/ne_NP-google-medium.onnx";
config.model.vits.tokens = "vits-piper-ne_NP-google-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-ne_NP-google-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "घाँसको पातले पहाडलाई अभिवादन गर्दै झुक्छ।";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-ne_NP-google-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-ne_NP-google-medium/ne_NP-google-medium.onnx",
lexicon="",
data_dir="vits-piper-ne_NP-google-medium/espeak-ng-data",
tokens="vits-piper-ne_NP-google-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="घाँसको पातले पहाडलाई अभिवादन गर्दै झुक्छ।",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
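Since this model has 18 speakers (sid 0 through 17), you can reuse the tts object built above to synthesize the same text with every voice. The loop below is a sketch that writes one WAV file per speaker:
import soundfile as sf

# Assumes `tts` was created from the config shown above.
text = "घाँसको पातले पहाडलाई अभिवादन गर्दै झुक्छ।"

# Valid speaker IDs for this model are 0 through 17.
for sid in range(18):
    audio = tts.generate(text=text, sid=sid, speed=1.0)
    sf.write(f"speaker-{sid}.wav", audio.samples, samplerate=audio.sample_rate)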
Samples
For the following text:
घाँसको पातले पहाडलाई अभिवादन गर्दै झुक्छ।
sample audio for different speakers is listed below:
Speaker 0
Speaker 1
Speaker 2
Speaker 3
Speaker 4
Speaker 5
Speaker 6
Speaker 7
Speaker 8
Speaker 9
Speaker 10
Speaker 11
Speaker 12
Speaker 13
Speaker 14
Speaker 15
Speaker 16
Speaker 17
vits-piper-ne_NP-google-x_low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ne/ne_NP/google/x_low
Number of speakers | Sample rate |
---|---|
18 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ne_NP-google-x_low with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ne_NP-google-x_low/ne_NP-google-x_low.onnx";
config.model.vits.tokens = "vits-piper-ne_NP-google-x_low/tokens.txt";
config.model.vits.data_dir = "vits-piper-ne_NP-google-x_low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "घाँसको पातले पहाडलाई अभिवादन गर्दै झुक्छ।";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-ne_NP-google-x_low:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-ne_NP-google-x_low/ne_NP-google-x_low.onnx",
lexicon="",
data_dir="vits-piper-ne_NP-google-x_low/espeak-ng-data",
tokens="vits-piper-ne_NP-google-x_low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="घाँसको पातले पहाडलाई अभिवादन गर्दै झुक्छ।",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
घाँसको पातले पहाडलाई अभिवादन गर्दै झुक्छ।
sample audio for different speakers is listed below:
Speaker 0
Speaker 1
Speaker 2
Speaker 3
Speaker 4
Speaker 5
Speaker 6
Speaker 7
Speaker 8
Speaker 9
Speaker 10
Speaker 11
Speaker 12
Speaker 13
Speaker 14
Speaker 15
Speaker 16
Speaker 17
Norwegian
This section lists text to speech models for Norwegian.
vits-piper-no_NO-talesyntese-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/no/no_NO/talesyntese/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-no_NO-talesyntese-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-no_NO-talesyntese-medium/no_NO-talesyntese-medium.onnx";
config.model.vits.tokens = "vits-piper-no_NO-talesyntese-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-no_NO-talesyntese-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Uskyldig kan stormen veroorzaken";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-no_NO-talesyntese-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-no_NO-talesyntese-medium/no_NO-talesyntese-medium.onnx",
lexicon="",
data_dir="vits-piper-no_NO-talesyntese-medium/espeak-ng-data",
tokens="vits-piper-no_NO-talesyntese-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Uskyldig kan stormen veroorzaken",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
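If you want to synthesize several sentences into a single file, one simple approach (an illustration only, assuming audio.samples can be converted to a NumPy array) is to generate them one at a time and concatenate the samples:
import numpy as np
import soundfile as sf

# Assumes `tts` was created from the config shown above.
sentences = [
    "Uskyldig kan stormen veroorzaken",
    # add more sentences here
]

pieces = []
for s in sentences:
    audio = tts.generate(text=s, sid=0, speed=1.0)
    pieces.append(np.asarray(audio.samples, dtype=np.float32))

sf.write("test-all.wav", np.concatenate(pieces), samplerate=audio.sample_rate)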
Samples
For the following text:
Uskyldig kan stormen veroorzaken
sample audio for different speakers is listed below:
Speaker 0
Persian
This section lists text to speech models for Persian.
- vits-piper-fa_IR-amir-medium
- vits-piper-fa_IR-ganji-medium
- vits-piper-fa_IR-ganji_adabi-medium
- vits-piper-fa_IR-gyro-medium
- vits-piper-fa_IR-reza_ibrahim-medium
vits-piper-fa_IR-amir-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/fa/fa_IR/amir/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-fa_IR-amir-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-fa_IR-amir-medium/fa_IR-amir-medium.onnx";
config.model.vits.tokens = "vits-piper-fa_IR-amir-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-fa_IR-amir-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-fa_IR-amir-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-fa_IR-amir-medium/fa_IR-amir-medium.onnx",
lexicon="",
data_dir="vits-piper-fa_IR-amir-medium/espeak-ng-data",
tokens="vits-piper-fa_IR-amir-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
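To get a rough feel for synthesis speed on your machine, you can time the call and compare it with the length of the generated audio. This is only an illustrative sketch; it reuses the tts object built above and the audio.samples and audio.sample_rate attributes shown in the example:
import time

# Assumes `tts` was created from the config shown above.
text = "همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود."

start = time.time()
audio = tts.generate(text=text, sid=0, speed=1.0)
elapsed = time.time() - start

audio_seconds = len(audio.samples) / audio.sample_rate
print(f"Generated {audio_seconds:.2f} s of audio in {elapsed:.2f} s "
      f"(real-time factor {elapsed / audio_seconds:.2f})")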
Samples
For the following text:
همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.
sample audio for different speakers is listed below:
Speaker 0
vits-piper-fa_IR-ganji-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/fa/fa_IR/ganji/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-fa_IR-ganji-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-fa_IR-ganji-medium/fa_IR-ganji-medium.onnx";
config.model.vits.tokens = "vits-piper-fa_IR-ganji-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-fa_IR-ganji-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-fa_IR-ganji-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-fa_IR-ganji-medium/fa_IR-ganji-medium.onnx",
lexicon="",
data_dir="vits-piper-fa_IR-ganji-medium/espeak-ng-data",
tokens="vits-piper-fa_IR-ganji-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.
sample audio for different speakers is listed below:
Speaker 0
vits-piper-fa_IR-ganji_adabi-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/fa/fa_IR/ganji_adabi/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-fa_IR-ganji_adabi-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-fa_IR-ganji_adabi-medium/fa_IR-ganji_adabi-medium.onnx";
config.model.vits.tokens = "vits-piper-fa_IR-ganji_adabi-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-fa_IR-ganji_adabi-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-fa_IR-ganji_adabi-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-fa_IR-ganji_adabi-medium/fa_IR-ganji_adabi-medium.onnx",
lexicon="",
data_dir="vits-piper-fa_IR-ganji_adabi-medium/espeak-ng-data",
tokens="vits-piper-fa_IR-ganji_adabi-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.
sample audio for different speakers is listed below:
Speaker 0
vits-piper-fa_IR-gyro-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/fa/fa_IR/gyro/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-fa_IR-gyro-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-fa_IR-gyro-medium/fa_IR-gyro-medium.onnx";
config.model.vits.tokens = "vits-piper-fa_IR-gyro-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-fa_IR-gyro-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-fa_IR-gyro-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-fa_IR-gyro-medium/fa_IR-gyro-medium.onnx",
lexicon="",
data_dir="vits-piper-fa_IR-gyro-medium/espeak-ng-data",
tokens="vits-piper-fa_IR-gyro-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.
sample audio for different speakers is listed below:
Speaker 0
vits-piper-fa_IR-reza_ibrahim-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/fa/fa_IR/reza_ibrahim/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-fa_IR-reza_ibrahim-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-fa_IR-reza_ibrahim-medium/fa_IR-reza_ibrahim-medium.onnx";
config.model.vits.tokens = "vits-piper-fa_IR-reza_ibrahim-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-fa_IR-reza_ibrahim-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-fa_IR-reza_ibrahim-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-fa_IR-reza_ibrahim-medium/fa_IR-reza_ibrahim-medium.onnx",
lexicon="",
data_dir="vits-piper-fa_IR-reza_ibrahim-medium/espeak-ng-data",
tokens="vits-piper-fa_IR-reza_ibrahim-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
همانطور که کوه ها در برابر باد و باران پایدارند، اما به مرور زمان خرد و پخش می شوند، انسان نیز باید در برابر مشکلات قوی باشد، اما با خرد و خویشتن داری در زندگی به پیش برود.
sample audio for different speakers is listed below:
Speaker 0
Polish
This section lists text to speech models for Polish.
vits-piper-pl_PL-darkman-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/pl/pl_PL/darkman/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-pl_PL-darkman-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-pl_PL-darkman-medium/pl_PL-darkman-medium.onnx";
config.model.vits.tokens = "vits-piper-pl_PL-darkman-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-pl_PL-darkman-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Nieważne, za kogo walczysz, i tak popełnisz błąd";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-pl_PL-darkman-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-pl_PL-darkman-medium/pl_PL-darkman-medium.onnx",
lexicon="",
data_dir="vits-piper-pl_PL-darkman-medium/espeak-ng-data",
tokens="vits-piper-pl_PL-darkman-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Nieważne, za kogo walczysz, i tak popełnisz błąd",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Nieważne, za kogo walczysz, i tak popełnisz błąd
sample audio for different speakers is listed below:
Speaker 0
vits-piper-pl_PL-gosia-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/pl/pl_PL/gosia/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-pl_PL-gosia-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-pl_PL-gosia-medium/pl_PL-gosia-medium.onnx";
config.model.vits.tokens = "vits-piper-pl_PL-gosia-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-pl_PL-gosia-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Nieważne, za kogo walczysz, i tak popełnisz błąd";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address listed above.
You can use the following code to play with vits-piper-pl_PL-gosia-medium:
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-pl_PL-gosia-medium/pl_PL-gosia-medium.onnx",
lexicon="",
data_dir="vits-piper-pl_PL-gosia-medium/espeak-ng-data",
tokens="vits-piper-pl_PL-gosia-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Nieważne, za kogo walczysz, i tak popełnisz błąd",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
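The speed argument controls the speaking rate; values above 1.0 are expected to speak faster and values below 1.0 slower. The sketch below reuses the tts object from the example above and writes one file per speed setting:
import soundfile as sf

# Assumes `tts` was created from the config shown above.
text = "Nieważne, za kogo walczysz, i tak popełnisz błąd"

for speed in (0.8, 1.0, 1.2):
    audio = tts.generate(text=text, sid=0, speed=speed)
    sf.write(f"test-speed-{speed}.wav", audio.samples, samplerate=audio.sample_rate)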
Samples
For the following text:
Nieważne, za kogo walczysz, i tak popełnisz błąd
sample audio for different speakers is listed below:
Speaker 0
vits-piper-pl_PL-mc_speech-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/pl/pl_PL/mc_speech/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-pl_PL-mc_speech-medium with the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-pl_PL-mc_speech-medium/pl_PL-mc_speech-medium.onnx";
config.model.vits.tokens = "vits-piper-pl_PL-mc_speech-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-pl_PL-mc_speech-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Nieważne, za kogo walczysz, i tak popełnisz błąd";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-pl_PL-mc_speech-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-pl_PL-mc_speech-medium/pl_PL-mc_speech-medium.onnx",
lexicon="",
data_dir="vits-piper-pl_PL-mc_speech-medium/espeak-ng-data",
tokens="vits-piper-pl_PL-mc_speech-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Nieważne, za kogo walczysz, i tak popełnisz błąd",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
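Note that soundfile can only write MP3 files when the underlying libsndfile has MP3 support. If yours does not, a minimal fallback (reusing the audio object from the example above) is to write a 16-bit WAV instead:
# Fallback: write a 16-bit PCM WAV if your libsndfile build cannot encode MP3
sf.write("test.wav", audio.samples, samplerate=audio.sample_rate, subtype="PCM_16")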
Samples
For the following text:
Nieważne, za kogo walczysz, i tak popełnisz błąd
sample audios for different speakers are listed below:
Speaker 0
Portuguese
This section lists text to speech models for Portuguese.
- vits-piper-pt_BR-cadu-medium
- vits-piper-pt_BR-edresson-low
- vits-piper-pt_BR-faber-medium
- vits-piper-pt_BR-jeff-medium
- vits-piper-pt_PT-tugão-medium
vits-piper-pt_BR-cadu-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/pt/pt_BR/cadu/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-pt_BR-cadu-medium using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-pt_BR-cadu-medium/pt_BR-cadu-medium.onnx";
config.model.vits.tokens = "vits-piper-pt_BR-cadu-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-pt_BR-cadu-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Marinha sem vento, não chega a porto";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-pt_BR-cadu-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-pt_BR-cadu-medium/pt_BR-cadu-medium.onnx",
lexicon="",
data_dir="vits-piper-pt_BR-cadu-medium/espeak-ng-data",
tokens="vits-piper-pt_BR-cadu-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Marinha sem vento, não chega a porto",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Marinha sem vento, não chega a porto
sample audios for different speakers are listed below:
Speaker 0
vits-piper-pt_BR-edresson-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/pt/pt_BR/edresson/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-pt_BR-edresson-low using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-pt_BR-edresson-low/pt_BR-edresson-low.onnx";
config.model.vits.tokens = "vits-piper-pt_BR-edresson-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-pt_BR-edresson-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Marinha sem vento, não chega a porto";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-pt_BR-edresson-low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-pt_BR-edresson-low/pt_BR-edresson-low.onnx",
lexicon="",
data_dir="vits-piper-pt_BR-edresson-low/espeak-ng-data",
tokens="vits-piper-pt_BR-edresson-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Marinha sem vento, não chega a porto",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Marinha sem vento, não chega a porto
sample audios for different speakers are listed below:
Speaker 0
vits-piper-pt_BR-faber-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/pt/pt_BR/faber/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-pt_BR-faber-medium using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-pt_BR-faber-medium/pt_BR-faber-medium.onnx";
config.model.vits.tokens = "vits-piper-pt_BR-faber-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-pt_BR-faber-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Marinha sem vento, não chega a porto";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-pt_BR-faber-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-pt_BR-faber-medium/pt_BR-faber-medium.onnx",
lexicon="",
data_dir="vits-piper-pt_BR-faber-medium/espeak-ng-data",
tokens="vits-piper-pt_BR-faber-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Marinha sem vento, não chega a porto",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
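The speed argument above controls the speaking rate; in sherpa-onnx, values larger than 1.0 produce faster speech and values smaller than 1.0 produce slower speech. A purely illustrative variation, reusing the same tts object (the output file name is just an example):
# Same sentence, roughly 20% faster
fast_audio = tts.generate(text="Marinha sem vento, não chega a porto",
                          sid=0,
                          speed=1.2)
sf.write("test-fast.wav", fast_audio.samples, samplerate=fast_audio.sample_rate)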
Samples
For the following text:
Marinha sem vento, não chega a porto
sample audios for different speakers are listed below:
Speaker 0
vits-piper-pt_BR-jeff-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/pt/pt_BR/jeff/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-pt_BR-jeff-medium using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-pt_BR-jeff-medium/pt_BR-jeff-medium.onnx";
config.model.vits.tokens = "vits-piper-pt_BR-jeff-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-pt_BR-jeff-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Marinha sem vento, não chega a porto";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-pt_BR-jeff-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-pt_BR-jeff-medium/pt_BR-jeff-medium.onnx",
lexicon="",
data_dir="vits-piper-pt_BR-jeff-medium/espeak-ng-data",
tokens="vits-piper-pt_BR-jeff-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Marinha sem vento, não chega a porto",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Marinha sem vento, não chega a porto
sample audios for different speakers are listed below:
Speaker 0
vits-piper-pt_PT-tugão-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/pt/pt_PT/tugão/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-pt_PT-tugão-medium using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-pt_PT-tugão-medium/pt_PT-tugão-medium.onnx";
config.model.vits.tokens = "vits-piper-pt_PT-tugão-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-pt_PT-tugão-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Marinha sem vento, não chega a porto";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-pt_PT-tugão-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-pt_PT-tugão-medium/pt_PT-tugão-medium.onnx",
lexicon="",
data_dir="vits-piper-pt_PT-tugão-medium/espeak-ng-data",
tokens="vits-piper-pt_PT-tugão-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Marinha sem vento, não chega a porto",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Marinha sem vento, não chega a porto
sample audios for different speakers are listed below:
Speaker 0
Romanian
This section lists text to speech models for Romanian.
vits-piper-ro_RO-mihai-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ro/ro_RO/mihai/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ro_RO-mihai-medium using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ro_RO-mihai-medium/ro_RO-mihai-medium.onnx";
config.model.vits.tokens = "vits-piper-ro_RO-mihai-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-ro_RO-mihai-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Un foc fără lemne se stinge, o lume fără poveste moare.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-ro_RO-mihai-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-ro_RO-mihai-medium/ro_RO-mihai-medium.onnx",
lexicon="",
data_dir="vits-piper-ro_RO-mihai-medium/espeak-ng-data",
tokens="vits-piper-ro_RO-mihai-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Un foc fără lemne se stinge, o lume fără poveste moare.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Un foc fără lemne se stinge, o lume fără poveste moare.
sample audios for different speakers are listed below:
Speaker 0
Russian
This section lists text to speech models for Russian.
- vits-piper-ru_RU-denis-medium
- vits-piper-ru_RU-dmitri-medium
- vits-piper-ru_RU-irina-medium
- vits-piper-ru_RU-ruslan-medium
vits-piper-ru_RU-denis-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ru/ru_RU/denis/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ru_RU-denis-medium using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ru_RU-denis-medium/ru_RU-denis-medium.onnx";
config.model.vits.tokens = "vits-piper-ru_RU-denis-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-ru_RU-denis-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Если курица укусит, ей отрубят голову.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-ru_RU-denis-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-ru_RU-denis-medium/ru_RU-denis-medium.onnx",
lexicon="",
data_dir="vits-piper-ru_RU-denis-medium/espeak-ng-data",
tokens="vits-piper-ru_RU-denis-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Если курица укусит, ей отрубят голову.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Если курица укусит, ей отрубят голову.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-ru_RU-dmitri-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ru/ru_RU/dmitri/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ru_RU-dmitri-medium using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ru_RU-dmitri-medium/ru_RU-dmitri-medium.onnx";
config.model.vits.tokens = "vits-piper-ru_RU-dmitri-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-ru_RU-dmitri-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Если курица укусит, ей отрубят голову.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-ru_RU-dmitri-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-ru_RU-dmitri-medium/ru_RU-dmitri-medium.onnx",
lexicon="",
data_dir="vits-piper-ru_RU-dmitri-medium/espeak-ng-data",
tokens="vits-piper-ru_RU-dmitri-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Если курица укусит, ей отрубят голову.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Если курица укусит, ей отрубят голову.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-ru_RU-irina-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ru/ru_RU/irina/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ru_RU-irina-medium using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ru_RU-irina-medium/ru_RU-irina-medium.onnx";
config.model.vits.tokens = "vits-piper-ru_RU-irina-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-ru_RU-irina-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Если курица укусит, ей отрубят голову.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-ru_RU-irina-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-ru_RU-irina-medium/ru_RU-irina-medium.onnx",
lexicon="",
data_dir="vits-piper-ru_RU-irina-medium/espeak-ng-data",
tokens="vits-piper-ru_RU-irina-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Если курица укусит, ей отрубят голову.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Если курица укусит, ей отрубят голову.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-ru_RU-ruslan-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/ru/ru_RU/ruslan/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-ru_RU-ruslan-medium using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-ru_RU-ruslan-medium/ru_RU-ruslan-medium.onnx";
config.model.vits.tokens = "vits-piper-ru_RU-ruslan-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-ru_RU-ruslan-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Если курица укусит, ей отрубят голову.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-ru_RU-ruslan-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-ru_RU-ruslan-medium/ru_RU-ruslan-medium.onnx",
lexicon="",
data_dir="vits-piper-ru_RU-ruslan-medium/espeak-ng-data",
tokens="vits-piper-ru_RU-ruslan-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Если курица укусит, ей отрубят голову.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Если курица укусит, ей отрубят голову.
sample audios for different speakers are listed below:
Speaker 0
Serbian
This section lists text to speech models for Serbian.
vits-piper-sr_RS-serbski_institut-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/sr/sr_RS/serbski_institut/medium
Number of speakers | Sample rate |
---|---|
2 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-sr_RS-serbski_institut-medium using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-sr_RS-serbski_institut-medium/sr_RS-serbski_institut-medium.onnx";
config.model.vits.tokens = "vits-piper-sr_RS-serbski_institut-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-sr_RS-serbski_institut-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Круг не може постојати без свог центра, а нација не може постојати без својих хероја.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-sr_RS-serbski_institut-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-sr_RS-serbski_institut-medium/sr_RS-serbski_institut-medium.onnx",
lexicon="",
data_dir="vits-piper-sr_RS-serbski_institut-medium/espeak-ng-data",
tokens="vits-piper-sr_RS-serbski_institut-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Круг не може постојати без свог центра, а нација не може постојати без својих хероја.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
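Since this model has two speakers, you may want to generate the same sentence with both of them. A minimal sketch, reusing the tts object and the soundfile import from the example above (the output file names are just illustrative):
# Generate one WAV per speaker; valid speaker ids for this model are 0 and 1
for sid in range(2):
    audio = tts.generate(
        text="Круг не може постојати без свог центра, а нација не може постојати без својих хероја.",
        sid=sid,
        speed=1.0,
    )
    sf.write(f"speaker-{sid}.wav", audio.samples, samplerate=audio.sample_rate)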
Samples
For the following text:
Круг не може постојати без свог центра, а нација не може постојати без својих хероја.
sample audios for different speakers are listed below:
Speaker 0
Speaker 1
Slovak
This section lists text to speech models for Slovak.
vits-piper-sk_SK-lili-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/sk/sk_SK/lili/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-sk_SK-lili-medium using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-sk_SK-lili-medium/sk_SK-lili-medium.onnx";
config.model.vits.tokens = "vits-piper-sk_SK-lili-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-sk_SK-lili-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Kto nepozná strach, nepozná vôľu.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-sk_SK-lili-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-sk_SK-lili-medium/sk_SK-lili-medium.onnx",
lexicon="",
data_dir="vits-piper-sk_SK-lili-medium/espeak-ng-data",
tokens="vits-piper-sk_SK-lili-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Kto nepozná strach, nepozná vôľu.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Kto nepozná strach, nepozná vôľu.
sample audios for different speakers are listed below:
Speaker 0
Slovenian
This section lists text to speech models for Slovenian.
vits-piper-sl_SI-artur-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/sl/sl_SI/artur/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-sl_SI-artur-medium using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-sl_SI-artur-medium/sl_SI-artur-medium.onnx";
config.model.vits.tokens = "vits-piper-sl_SI-artur-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-sl_SI-artur-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Kto sa nebojí, nie je hlúpy.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-sl_SI-artur-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-sl_SI-artur-medium/sl_SI-artur-medium.onnx",
lexicon="",
data_dir="vits-piper-sl_SI-artur-medium/espeak-ng-data",
tokens="vits-piper-sl_SI-artur-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Kto sa nebojí, nie je hlúpy.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
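If you prefer to hear the result immediately instead of writing a file, one option is the third-party sounddevice package (an extra dependency assumed here; it is not required by sherpa-onnx). A minimal sketch, reusing the audio object from the example above:
import sounddevice as sd

# Play the generated samples through the default output device and block until playback finishes
sd.play(audio.samples, samplerate=audio.sample_rate)
sd.wait()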
Samples
For the following text:
Kto sa nebojí, nie je hlúpy.
sample audios for different speakers are listed below:
Speaker 0
Spanish
This section lists text to speech models for Spanish.
- vits-piper-es_ES-carlfm-x_low
- vits-piper-es_ES-davefx-medium
- vits-piper-es_ES-sharvard-medium
- vits-piper-es_MX-ald-medium
- vits-piper-es_MX-claude-high
vits-piper-es_ES-carlfm-x_low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/es/es_ES/carlfm/x_low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select
arm64-v8a
.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-es_ES-carlfm-x_low using the C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-es_ES-carlfm-x_low/es_ES-carlfm-x_low.onnx";
config.model.vits.tokens = "vits-piper-es_ES-carlfm-x_low/tokens.txt";
config.model.vits.data_dir = "vits-piper-es_ES-carlfm-x_low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find required header file and library files inside /tmp/sherpa-onnx/shared
.
Assume you have saved the above example file as /tmp/test-piper.c
.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and that you have downloaded the model from the download address given above.
You can use the following code to play with vits-piper-es_ES-carlfm-x_low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-es_ES-carlfm-x_low/es_ES-carlfm-x_low.onnx",
lexicon="",
data_dir="vits-piper-es_ES-carlfm-x_low/espeak-ng-data",
tokens="vits-piper-es_ES-carlfm-x_low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-es_ES-davefx-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/es/es_ES/davefx/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-es_ES-davefx-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-es_ES-davefx-medium/es_ES-davefx-medium.onnx";
config.model.vits.tokens = "vits-piper-es_ES-davefx-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-es_ES-davefx-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-es_ES-davefx-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-es_ES-davefx-medium/es_ES-davefx-medium.onnx",
lexicon="",
data_dir="vits-piper-es_ES-davefx-medium/espeak-ng-data",
tokens="vits-piper-es_ES-davefx-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-es_ES-sharvard-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/es/es_ES/sharvard/medium
Number of speakers | Sample rate |
---|---|
2 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-es_ES-sharvard-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-es_ES-sharvard-medium/es_ES-sharvard-medium.onnx";
config.model.vits.tokens = "vits-piper-es_ES-sharvard-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-es_ES-sharvard-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-es_ES-sharvard-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-es_ES-sharvard-medium/es_ES-sharvard-medium.onnx",
lexicon="",
data_dir="vits-piper-es_ES-sharvard-medium/espeak-ng-data",
tokens="vits-piper-es_ES-sharvard-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
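This model has 2 speakers (see the table above), so sid can be 0 or 1. The following minimal sketch reuses the tts object from the example above to render the same sentence once per speaker:
import soundfile as sf
# Reuse the `tts` object created in the example above and write one file
# per speaker of this 2-speaker model (speaker IDs 0 and 1).
text = "Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta."
for sid in range(2):
    audio = tts.generate(text=text, sid=sid, speed=1.0)
    sf.write(f"speaker-{sid}.wav", audio.samples, samplerate=audio.sample_rate)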
Samples
For the following text:
Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.
sample audios for different speakers are listed below:
Speaker 0
Speaker 1
vits-piper-es_MX-ald-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/es/es_MX/ald/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-es_MX-ald-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-es_MX-ald-medium/es_MX-ald-medium.onnx";
config.model.vits.tokens = "vits-piper-es_MX-ald-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-es_MX-ald-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-es_MX-ald-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-es_MX-ald-medium/es_MX-ald-medium.onnx",
lexicon="",
data_dir="vits-piper-es_MX-ald-medium/espeak-ng-data",
tokens="vits-piper-es_MX-ald-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-es_MX-claude-high
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/es/es_MX/claude/high
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-es_MX-claude-high
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-es_MX-claude-high/es_MX-claude-high.onnx";
config.model.vits.tokens = "vits-piper-es_MX-claude-high/tokens.txt";
config.model.vits.data_dir = "vits-piper-es_MX-claude-high/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-es_MX-claude-high
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-es_MX-claude-high/es_MX-claude-high.onnx",
lexicon="",
data_dir="vits-piper-es_MX-claude-high/espeak-ng-data",
tokens="vits-piper-es_MX-claude-high/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Cuando te encuentres ante una puerta cerrada, no olvides que a veces el destino cierra una puerta para que te desvíes hacia un camino que lleva a una ventana que nunca habrías encontrado por tu cuenta.
sample audios for different speakers are listed below:
Speaker 0
Swahili
This section lists text to speech models for Swahili.
vits-piper-sw_CD-lanfrica-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/sw/sw_CD/lanfrica/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-sw_CD-lanfrica-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-sw_CD-lanfrica-medium/sw_CD-lanfrica-medium.onnx";
config.model.vits.tokens = "vits-piper-sw_CD-lanfrica-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-sw_CD-lanfrica-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Mtu mmoja hawezi kuiba mazingira.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-sw_CD-lanfrica-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-sw_CD-lanfrica-medium/sw_CD-lanfrica-medium.onnx",
lexicon="",
data_dir="vits-piper-sw_CD-lanfrica-medium/espeak-ng-data",
tokens="vits-piper-sw_CD-lanfrica-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Mtu mmoja hawezi kuiba mazingira.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Mtu mmoja hawezi kuiba mazingira.
sample audios for different speakers are listed below:
Speaker 0
Swedish
This section lists text to speech models for Swedish.
vits-piper-sv_SE-lisa-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/sv/sv_SE/lisa/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-sv_SE-lisa-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-sv_SE-lisa-medium/sv_SE-lisa-medium.onnx";
config.model.vits.tokens = "vits-piper-sv_SE-lisa-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-sv_SE-lisa-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Liten skog, med många träd";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-sv_SE-lisa-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-sv_SE-lisa-medium/sv_SE-lisa-medium.onnx",
lexicon="",
data_dir="vits-piper-sv_SE-lisa-medium/espeak-ng-data",
tokens="vits-piper-sv_SE-lisa-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Liten skog, med många träd",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
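The speed argument in the example above controls how fast the generated speech is; in sherpa-onnx, values above 1.0 are faster and values below 1.0 are slower. The following minimal sketch reuses the tts object from the example above to render the same sentence at a few speeds:
import soundfile as sf
# Reuse the `tts` object created in the example above; larger speed -> faster speech.
text = "Liten skog, med många träd"
for speed in (0.8, 1.0, 1.2):
    audio = tts.generate(text=text, sid=0, speed=speed)
    sf.write(f"lisa-speed-{speed}.wav", audio.samples, samplerate=audio.sample_rate)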
Samples
For the following text:
Liten skog, med många träd
sample audios for different speakers are listed below:
Speaker 0
vits-piper-sv_SE-nst-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/sv/sv_SE/nst/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-sv_SE-nst-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-sv_SE-nst-medium/sv_SE-nst-medium.onnx";
config.model.vits.tokens = "vits-piper-sv_SE-nst-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-sv_SE-nst-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Liten skog, med många träd";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-sv_SE-nst-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-sv_SE-nst-medium/sv_SE-nst-medium.onnx",
lexicon="",
data_dir="vits-piper-sv_SE-nst-medium/espeak-ng-data",
tokens="vits-piper-sv_SE-nst-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Liten skog, med många träd",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Liten skog, med många träd
sample audios for different speakers are listed below:
Speaker 0
Turkish
This section lists text to speech models for Turkish.
vits-piper-tr_TR-dfki-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/tr/tr_TR/dfki/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-tr_TR-dfki-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-tr_TR-dfki-medium/tr_TR-dfki-medium.onnx";
config.model.vits.tokens = "vits-piper-tr_TR-dfki-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-tr_TR-dfki-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Bir evin duvarları, bir adamın sözü, bir kadının gülü kırılmaz";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-tr_TR-dfki-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-tr_TR-dfki-medium/tr_TR-dfki-medium.onnx",
lexicon="",
data_dir="vits-piper-tr_TR-dfki-medium/espeak-ng-data",
tokens="vits-piper-tr_TR-dfki-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Bir evin duvarları, bir adamın sözü, bir kadının gülü kırılmaz",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Bir evin duvarları, bir adamın sözü, bir kadının gülü kırılmaz
sample audios for different speakers are listed below:
Speaker 0
vits-piper-tr_TR-fahrettin-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/tr/tr_TR/fahrettin/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-tr_TR-fahrettin-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-tr_TR-fahrettin-medium/tr_TR-fahrettin-medium.onnx";
config.model.vits.tokens = "vits-piper-tr_TR-fahrettin-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-tr_TR-fahrettin-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Bir evin duvarları, bir adamın sözü, bir kadının gülü kırılmaz";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-tr_TR-fahrettin-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-tr_TR-fahrettin-medium/tr_TR-fahrettin-medium.onnx",
lexicon="",
data_dir="vits-piper-tr_TR-fahrettin-medium/espeak-ng-data",
tokens="vits-piper-tr_TR-fahrettin-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Bir evin duvarları, bir adamın sözü, bir kadının gülü kırılmaz",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Bir evin duvarları, bir adamın sözü, bir kadının gülü kırılmaz
sample audios for different speakers are listed below:
Speaker 0
vits-piper-tr_TR-fettah-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/tr/tr_TR/fettah/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-tr_TR-fettah-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-tr_TR-fettah-medium/tr_TR-fettah-medium.onnx";
config.model.vits.tokens = "vits-piper-tr_TR-fettah-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-tr_TR-fettah-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Bir evin duvarları, bir adamın sözü, bir kadının gülü kırılmaz";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-tr_TR-fettah-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-tr_TR-fettah-medium/tr_TR-fettah-medium.onnx",
lexicon="",
data_dir="vits-piper-tr_TR-fettah-medium/espeak-ng-data",
tokens="vits-piper-tr_TR-fettah-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Bir evin duvarları, bir adamın sözü, bir kadının gülü kırılmaz",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Bir evin duvarları, bir adamın sözü, bir kadının gülü kırılmaz
sample audios for different speakers are listed below:
Speaker 0
Ukrainian
This section lists text to speech models for Ukrainian.
vits-piper-uk_UA-lada-x_low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/uk/uk_UA/lada/x_low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-uk_UA-lada-x_low
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-uk_UA-lada-x_low/uk_UA-lada-x_low.onnx";
config.model.vits.tokens = "vits-piper-uk_UA-lada-x_low/tokens.txt";
config.model.vits.data_dir = "vits-piper-uk_UA-lada-x_low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Ви не можете навчити коня, якщо не відвикнете від годівлі.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-uk_UA-lada-x_low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-uk_UA-lada-x_low/uk_UA-lada-x_low.onnx",
lexicon="",
data_dir="vits-piper-uk_UA-lada-x_low/espeak-ng-data",
tokens="vits-piper-uk_UA-lada-x_low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Ви не можете навчити коня, якщо не відвикнете від годівлі.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
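If you prefer not to depend on soundfile, the following is a minimal sketch that writes the generated audio as a 16-bit PCM WAV file with the standard-library wave module. It reuses the audio object from the example above and assumes audio.samples are float values in the range [-1, 1], which is what the sherpa-onnx Python API returns.
import wave
import numpy as np
# Convert float samples in [-1, 1] to 16-bit PCM and write them with the
# standard-library wave module (audio comes from the example above).
samples = np.clip(np.asarray(audio.samples), -1.0, 1.0)
pcm16 = (samples * 32767).astype(np.int16)
with wave.open("test-lada.wav", "wb") as f:
    f.setnchannels(1)                   # the model generates mono audio
    f.setsampwidth(2)                   # 2 bytes per sample = 16-bit PCM
    f.setframerate(audio.sample_rate)   # 16000 Hz for this model
    f.writeframes(pcm16.tobytes())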
Samples
For the following text:
Ви не можете навчити коня, якщо не відвикнете від годівлі.
sample audios for different speakers are listed below:
Speaker 0
vits-piper-uk_UA-ukrainian_tts-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/uk/uk_UA/ukrainian_tts/medium
Number of speakers | Sample rate |
---|---|
3 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-uk_UA-ukrainian_tts-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-uk_UA-ukrainian_tts-medium/uk_UA-ukrainian_tts-medium.onnx";
config.model.vits.tokens = "vits-piper-uk_UA-ukrainian_tts-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-uk_UA-ukrainian_tts-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Ви не можете навчити коня, якщо не відвикнете від годівлі.";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-uk_UA-ukrainian_tts-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-uk_UA-ukrainian_tts-medium/uk_UA-ukrainian_tts-medium.onnx",
lexicon="",
data_dir="vits-piper-uk_UA-ukrainian_tts-medium/espeak-ng-data",
tokens="vits-piper-uk_UA-ukrainian_tts-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Ви не можете навчити коня, якщо не відвикнете від годівлі.",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Ви не можете навчити коня, якщо не відвикнете від годівлі.
sample audios for different speakers are listed below:
Speaker 0
Speaker 1
Speaker 2
Vietnamese
This section lists text to speech models for Vietnamese.
vits-piper-vi_VN-25hours_single-low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/vi/vi_VN/25hours_single/low
Number of speakers | Sample rate |
---|---|
1 | 16000 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-vi_VN-25hours_single-low
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-vi_VN-25hours_single-low/vi_VN-25hours_single-low.onnx";
config.model.vits.tokens = "vits-piper-vi_VN-25hours_single-low/tokens.txt";
config.model.vits.data_dir = "vits-piper-vi_VN-25hours_single-low/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Nước cũ đào gỗ mới, sông cũ chảy nước mới";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-vi_VN-25hours_single-low
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-vi_VN-25hours_single-low/vi_VN-25hours_single-low.onnx",
lexicon="",
data_dir="vits-piper-vi_VN-25hours_single-low/espeak-ng-data",
tokens="vits-piper-vi_VN-25hours_single-low/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Nước cũ đào gỗ mới, sông cũ chảy nước mới",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Nước cũ đào gỗ mới, sông cũ chảy nước mới
sample audios for different speakers are listed below:
Speaker 0
vits-piper-vi_VN-vais1000-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/vi/vi_VN/vais1000/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
The following table shows the Android TTS Engine APK with this model for sherpa-onnx v1.12.1
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-vi_VN-vais1000-medium
with C API.
#include <stdio.h>
#include <string.h>
#include "sherpa-onnx/c-api/c-api.h"
int main() {
SherpaOnnxOfflineTtsConfig config;
memset(&config, 0, sizeof(config));
config.model.vits.model = "vits-piper-vi_VN-vais1000-medium/vi_VN-vais1000-medium.onnx";
config.model.vits.tokens = "vits-piper-vi_VN-vais1000-medium/tokens.txt";
config.model.vits.data_dir = "vits-piper-vi_VN-vais1000-medium/espeak-ng-data";
config.model.num_threads = 1;
const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);
int sid = 0; // speaker id
const char *text = "Nước cũ đào gỗ mới, sông cũ chảy nước mới";
const SherpaOnnxGeneratedAudio *audio =
SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);
SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
"./test.wav");
// You need to free the pointers to avoid memory leak in your app
SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
SherpaOnnxDestroyOfflineTts(tts);
printf("Saved to ./test.wav\n");
return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header file and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -lsherpa-onnx-c-api -lonnxruntime -o /tmp/test-piper /tmp/test-piper.c
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH
# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and you have downloaded and extracted the model from the download address given above.
You can use the following code to play with vits-piper-vi_VN-vais1000-medium
import sherpa_onnx
import soundfile as sf
config = sherpa_onnx.OfflineTtsConfig(
model=sherpa_onnx.OfflineTtsModelConfig(
vits=sherpa_onnx.OfflineTtsVitsModelConfig(
model="vits-piper-vi_VN-vais1000-medium/vi_VN-vais1000-medium.onnx",
lexicon="",
data_dir="vits-piper-vi_VN-vais1000-medium/espeak-ng-data",
tokens="vits-piper-vi_VN-vais1000-medium/tokens.txt",
),
num_threads=1,
),
)
if not config.validate():
    raise ValueError("Please check your config")
tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(text="Nước cũ đào gỗ mới, sông cũ chảy nước mới",
sid=0,
speed=1.0)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
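If you are unsure about the properties of a downloaded model, you can also query them from the created TTS object before generating audio. The following is a minimal sketch; it assumes the Python bindings expose num_speakers and sample_rate as read-only properties of sherpa_onnx.OfflineTts, so check help(sherpa_onnx.OfflineTts) if your installed version differs.

import sherpa_onnx

# Build the same config as in the example above; the paths point to the
# extracted model directory.
config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-vi_VN-vais1000-medium/vi_VN-vais1000-medium.onnx",
            lexicon="",
            data_dir="vits-piper-vi_VN-vais1000-medium/espeak-ng-data",
            tokens="vits-piper-vi_VN-vais1000-medium/tokens.txt",
        ),
        num_threads=1,
    ),
)
tts = sherpa_onnx.OfflineTts(config)

# num_speakers and sample_rate are assumed property names; verify with
# help(sherpa_onnx.OfflineTts) on your installed version.
print("number of speakers:", tts.num_speakers)  # expected: 1 for this model
print("sample rate (Hz):", tts.sample_rate)     # expected: 22050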
Samples
For the following text:
Nước cũ đào gỗ mới, sông cũ chảy nước mới
sample audio for each speaker is listed below:
Speaker 0
vits-piper-vi_VN-vivos-x_low
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/vi/vi_VN/vivos/x_low
Number of speakers | Sample rate |
---|---|
65 | 16000 |
Model download address
Android APK
Android TTS Engine APKs built with this model are provided for sherpa-onnx v1.12.1, one per ABI. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-vi_VN-vivos-x_low with the C API.
#include <stdio.h>
#include <string.h>

#include "sherpa-onnx/c-api/c-api.h"

int main() {
  SherpaOnnxOfflineTtsConfig config;
  memset(&config, 0, sizeof(config));
  config.model.vits.model = "vits-piper-vi_VN-vivos-x_low/vi_VN-vivos-x_low.onnx";
  config.model.vits.tokens = "vits-piper-vi_VN-vivos-x_low/tokens.txt";
  config.model.vits.data_dir = "vits-piper-vi_VN-vivos-x_low/espeak-ng-data";
  config.model.num_threads = 1;

  const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);

  int sid = 0;  // speaker id
  const char *text = "Nước cũ đào gỗ mới, sông cũ chảy nước mới";

  const SherpaOnnxGeneratedAudio *audio =
      SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);

  SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate, "./test.wav");

  // You need to free the pointers to avoid memory leak in your app
  SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
  SherpaOnnxDestroyOfflineTts(tts);

  printf("Saved to ./test.wav\n");

  return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH

# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and have downloaded and extracted the model from the address given above.
You can use the following code to play with vits-piper-vi_VN-vivos-x_low with the Python API.
import sherpa_onnx
import soundfile as sf

config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-vi_VN-vivos-x_low/vi_VN-vivos-x_low.onnx",
            lexicon="",
            data_dir="vits-piper-vi_VN-vivos-x_low/espeak-ng-data",
            tokens="vits-piper-vi_VN-vivos-x_low/tokens.txt",
        ),
        num_threads=1,
    ),
)

if not config.validate():
    raise ValueError("Please check your config")

tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(
    text="Nước cũ đào gỗ mới, sông cũ chảy nước mới",
    sid=0,
    speed=1.0,
)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Nước cũ đào gỗ mới, sông cũ chảy nước mới
sample audio for each speaker is listed below:
Speaker 0 through Speaker 64 (one audio sample per speaker; the embedded audio players are omitted here). A sketch for generating these samples yourself is given after this list.
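To reproduce a sample for every speaker locally, you can loop over the speaker IDs with the Python API shown above. This is a minimal sketch; it assumes the model has been downloaded and extracted to the current directory, and it writes one WAV file per speaker.

import sherpa_onnx
import soundfile as sf

config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-vi_VN-vivos-x_low/vi_VN-vivos-x_low.onnx",
            lexicon="",
            data_dir="vits-piper-vi_VN-vivos-x_low/espeak-ng-data",
            tokens="vits-piper-vi_VN-vivos-x_low/tokens.txt",
        ),
        num_threads=1,
    ),
)
if not config.validate():
    raise ValueError("Please check your config")

tts = sherpa_onnx.OfflineTts(config)
text = "Nước cũ đào gỗ mới, sông cũ chảy nước mới"

# This model has 65 speakers (sid 0 to 64); generate one file per speaker.
for sid in range(65):
    audio = tts.generate(text=text, sid=sid, speed=1.0)
    sf.write(f"speaker-{sid}.wav", audio.samples, samplerate=audio.sample_rate)
    print(f"Saved speaker-{sid}.wav")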
Welsh
This section lists text to speech models for Welsh.
vits-piper-cy_GB-bu_tts-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/cy/cy_GB/bu_tts/medium
Number of speakers | Sample rate |
---|---|
7 | 22050 |
Model download address
Android APK
Android TTS Engine APKs built with this model are provided for sherpa-onnx v1.12.1, one per ABI. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-cy_GB-bu_tts-medium with the C API.
#include <stdio.h>
#include <string.h>

#include "sherpa-onnx/c-api/c-api.h"

int main() {
  SherpaOnnxOfflineTtsConfig config;
  memset(&config, 0, sizeof(config));
  config.model.vits.model = "vits-piper-cy_GB-bu_tts-medium/cy_GB-bu_tts-medium.onnx";
  config.model.vits.tokens = "vits-piper-cy_GB-bu_tts-medium/tokens.txt";
  config.model.vits.data_dir = "vits-piper-cy_GB-bu_tts-medium/espeak-ng-data";
  config.model.num_threads = 1;

  const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);

  int sid = 0;  // speaker id
  const char *text = "Ni all y gwynt ei hunan ei ddilyn, ac felly mae’n rhaid i’r gŵyr ddod i’r gorwel i weld y llwybr yn gyfarwydd";

  const SherpaOnnxGeneratedAudio *audio =
      SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);

  SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate, "./test.wav");

  // You need to free the pointers to avoid memory leak in your app
  SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
  SherpaOnnxDestroyOfflineTts(tts);

  printf("Saved to ./test.wav\n");

  return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH

# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and have downloaded and extracted the model from the address given above.
You can use the following code to play with vits-piper-cy_GB-bu_tts-medium with the Python API.
import sherpa_onnx
import soundfile as sf

config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-cy_GB-bu_tts-medium/cy_GB-bu_tts-medium.onnx",
            lexicon="",
            data_dir="vits-piper-cy_GB-bu_tts-medium/espeak-ng-data",
            tokens="vits-piper-cy_GB-bu_tts-medium/tokens.txt",
        ),
        num_threads=1,
    ),
)

if not config.validate():
    raise ValueError("Please check your config")

tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(
    text="Ni all y gwynt ei hunan ei ddilyn, ac felly mae’n rhaid i’r gŵyr ddod i’r gorwel i weld y llwybr yn gyfarwydd",
    sid=0,
    speed=1.0,
)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
Samples
For the following text:
Ni all y gwynt ei hunan ei ddilyn, ac felly mae’n rhaid i’r gŵyr ddod i’r gorwel i weld y llwybr yn gyfarwydd
sample audio for each speaker is listed below:
Speaker 0 through Speaker 6 (one audio sample per speaker; the embedded audio players are omitted here).
vits-piper-cy_GB-gwryw_gogleddol-medium
Info about this model
This model is converted from https://huggingface.co/rhasspy/piper-voices/tree/main/cy/cy_GB/gwryw_gogleddol/medium
Number of speakers | Sample rate |
---|---|
1 | 22050 |
Model download address
Android APK
Android TTS Engine APKs built with this model are provided for sherpa-onnx v1.12.1, one per ABI. If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
https://k2-fsa.github.io/sherpa/onnx/tts/apk-engine.html
C API
You can use the following code to play with vits-piper-cy_GB-gwryw_gogleddol-medium with the C API.
#include <stdio.h>
#include <string.h>

#include "sherpa-onnx/c-api/c-api.h"

int main() {
  SherpaOnnxOfflineTtsConfig config;
  memset(&config, 0, sizeof(config));
  config.model.vits.model =
      "vits-piper-cy_GB-gwryw_gogleddol-medium/cy_GB-gwryw_gogleddol-medium.onnx";
  config.model.vits.tokens = "vits-piper-cy_GB-gwryw_gogleddol-medium/tokens.txt";
  config.model.vits.data_dir = "vits-piper-cy_GB-gwryw_gogleddol-medium/espeak-ng-data";
  config.model.num_threads = 1;

  const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);

  int sid = 0;  // speaker id
  const char *text = "Ni all y gwynt ei hunan ei ddilyn, ac felly mae’n rhaid i’r gŵyr ddod i’r gorwel i weld y llwybr yn gyfarwydd";

  const SherpaOnnxGeneratedAudio *audio =
      SherpaOnnxOfflineTtsGenerate(tts, text, sid, 1.0);

  SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate, "./test.wav");

  // You need to free the pointers to avoid memory leak in your app
  SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
  SherpaOnnxDestroyOfflineTts(tts);

  printf("Saved to ./test.wav\n");

  return 0;
}
In the following, we describe how to compile and run the above C example.
Use shared library (dynamic link)
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared
cmake -DSHERPA_ONNX_ENABLE_C_API=ON -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared ..
make
make install
You can find the required header files and library files inside /tmp/sherpa-onnx/shared.
Assume you have saved the above example file as /tmp/test-piper.c.
Then you can compile it with the following command:
gcc -I /tmp/sherpa-onnx/shared/include -L /tmp/sherpa-onnx/shared/lib -o /tmp/test-piper /tmp/test-piper.c -lsherpa-onnx-c-api -lonnxruntime
Now you can run
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-piper
You probably need to run
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH

# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
before you run /tmp/test-piper.
Use static library (static link)
Please see the documentation at
https://k2-fsa.github.io/sherpa/onnx/c-api/index.html
Python API
Assume you have installed sherpa-onnx via
pip install sherpa-onnx
and have downloaded and extracted the model from the address given above.
You can use the following code to play with vits-piper-cy_GB-gwryw_gogleddol-medium with the Python API.
import sherpa_onnx
import soundfile as sf

config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-cy_GB-gwryw_gogleddol-medium/cy_GB-gwryw_gogleddol-medium.onnx",
            lexicon="",
            data_dir="vits-piper-cy_GB-gwryw_gogleddol-medium/espeak-ng-data",
            tokens="vits-piper-cy_GB-gwryw_gogleddol-medium/tokens.txt",
        ),
        num_threads=1,
    ),
)

if not config.validate():
    raise ValueError("Please check your config")

tts = sherpa_onnx.OfflineTts(config)
audio = tts.generate(
    text="Ni all y gwynt ei hunan ei ddilyn, ac felly mae’n rhaid i’r gŵyr ddod i’r gorwel i weld y llwybr yn gyfarwydd",
    sid=0,
    speed=1.0,
)
sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
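If you want to hear the result directly instead of writing it to a file, you can play the generated samples with a third-party audio library. The sketch below uses the sounddevice package (pip install sounddevice), which is not part of sherpa-onnx; it is only one possible way to play the float samples returned by generate().

import numpy as np
import sherpa_onnx
import sounddevice as sd

config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        vits=sherpa_onnx.OfflineTtsVitsModelConfig(
            model="vits-piper-cy_GB-gwryw_gogleddol-medium/cy_GB-gwryw_gogleddol-medium.onnx",
            lexicon="",
            data_dir="vits-piper-cy_GB-gwryw_gogleddol-medium/espeak-ng-data",
            tokens="vits-piper-cy_GB-gwryw_gogleddol-medium/tokens.txt",
        ),
        num_threads=1,
    ),
)
tts = sherpa_onnx.OfflineTts(config)

audio = tts.generate(
    text="Ni all y gwynt ei hunan ei ddilyn, ac felly mae’n rhaid i’r gŵyr ddod i’r gorwel i weld y llwybr yn gyfarwydd",
    sid=0,
    speed=1.0,
)

# audio.samples holds float samples in [-1, 1]; sounddevice can play
# them directly at the model's sample rate.
samples = np.asarray(audio.samples, dtype=np.float32)
sd.play(samples, samplerate=audio.sample_rate)
sd.wait()  # block until playback finishes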
Samples
For the following text:
Ni all y gwynt ei hunan ei ddilyn, ac felly mae’n rhaid i’r gŵyr ddod i’r gorwel i weld y llwybr yn gyfarwydd
sample audio for each speaker is listed below: