kokoro-multi-lang-v1_0
| Info about this model | Download the model | Android APK | Python API | C API |
| C++ API | Rust API | Node.js API | Dart API | Swift API |
| C# API | Kotlin API | Java API | Pascal API | Go API |
| Samples |
Info about this model
This model is kokoro v1.0 and it is from https://huggingface.co/hexgrad/Kokoro-82M
It supports both Chinese and English.
| Number of speakers | Sample rate |
|---|---|
| 53 | 24000 |
Meaning of speaker prefix
| Prefix | Meaning | sid range | Number of speakers |
|---|---|---|---|
| af | American female | 0 - 10 | 11 |
| am | American male | 11 - 19 | 9 |
| bf | British female | 20 - 23 | 4 |
| bm | British male | 24 - 27 | 4 |
| ef | Spanish female | 28 | 1 |
| em | Spanish male | 29 | 1 |
| ff | French female | 30 | 1 |
| hf | Hindi female | 31 - 32 | 2 |
| hm | Hindi male | 33 - 34 | 2 |
| if | Italian female | 35 | 1 |
| im | Italian male | 36 | 1 |
| jf | Japanese female | 37 - 40 | 4 |
| jm | Japanese male | 41 | 1 |
| pf | Brazilian Portuguese female | 42 | 1 |
| pm | Brazilian Portuguese male | 43 - 44 | 2 |
| zf | Chinese female | 45 - 48 | 4 |
| zm | Chinese male | 49 - 52 | 4 |
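The two-letter prefix encodes the voice's language/accent (first letter) and gender (second letter). As an illustration only (this helper is not part of sherpa-onnx), the prefix can be decoded like this:

```python
# Decode a Kokoro speaker-name prefix into (language/accent, gender).
# Illustrative helper only; not part of the sherpa-onnx API.
PREFIX_LANG = {
    "a": "American English",
    "b": "British English",
    "e": "Spanish",
    "f": "French",
    "h": "Hindi",
    "i": "Italian",
    "j": "Japanese",
    "p": "Brazilian Portuguese",
    "z": "Chinese",
}


def decode_prefix(prefix: str):
    # First letter selects the language/accent, second letter the gender
    lang = PREFIX_LANG[prefix[0]]
    gender = "female" if prefix[1] == "f" else "male"
    return lang, gender


print(decode_prefix("af"))  # ('American English', 'female')
print(decode_prefix("zm"))  # ('Chinese', 'male')
```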
speaker ID to speaker name (sid -> name)
The mapping from speaker ID (sid) to speaker name is given below:
| 0 - 3 | 0 -> af_alloy | 1 -> af_aoede | 2 -> af_bella | 3 -> af_heart |
| 4 - 7 | 4 -> af_jessica | 5 -> af_kore | 6 -> af_nicole | 7 -> af_nova |
| 8 - 11 | 8 -> af_river | 9 -> af_sarah | 10 -> af_sky | 11 -> am_adam |
| 12 - 15 | 12 -> am_echo | 13 -> am_eric | 14 -> am_fenrir | 15 -> am_liam |
| 16 - 19 | 16 -> am_michael | 17 -> am_onyx | 18 -> am_puck | 19 -> am_santa |
| 20 - 23 | 20 -> bf_alice | 21 -> bf_emma | 22 -> bf_isabella | 23 -> bf_lily |
| 24 - 27 | 24 -> bm_daniel | 25 -> bm_fable | 26 -> bm_george | 27 -> bm_lewis |
| 28 - 31 | 28 -> ef_dora | 29 -> em_alex | 30 -> ff_siwis | 31 -> hf_alpha |
| 32 - 35 | 32 -> hf_beta | 33 -> hm_omega | 34 -> hm_psi | 35 -> if_sara |
| 36 - 39 | 36 -> im_nicola | 37 -> jf_alpha | 38 -> jf_gongitsune | 39 -> jf_nezumi |
| 40 - 43 | 40 -> jf_tebukuro | 41 -> jm_kumo | 42 -> pf_dora | 43 -> pm_alex |
| 44 - 47 | 44 -> pm_santa | 45 -> zf_xiaobei | 46 -> zf_xiaoni | 47 -> zf_xiaoxiao |
| 48 - 51 | 48 -> zf_xiaoyi | 49 -> zm_yunjian | 50 -> zm_yunxi | 51 -> zm_yunxia |
| 52 | 52 -> zm_yunyang |
speaker name to speaker ID (name -> sid)
The mapping from speaker name to speaker ID (sid) is given below:
| 0 - 3 | af_alloy -> 0 | af_aoede -> 1 | af_bella -> 2 | af_heart -> 3 |
| 4 - 7 | af_jessica -> 4 | af_kore -> 5 | af_nicole -> 6 | af_nova -> 7 |
| 8 - 11 | af_river -> 8 | af_sarah -> 9 | af_sky -> 10 | am_adam -> 11 |
| 12 - 15 | am_echo -> 12 | am_eric -> 13 | am_fenrir -> 14 | am_liam -> 15 |
| 16 - 19 | am_michael -> 16 | am_onyx -> 17 | am_puck -> 18 | am_santa -> 19 |
| 20 - 23 | bf_alice -> 20 | bf_emma -> 21 | bf_isabella -> 22 | bf_lily -> 23 |
| 24 - 27 | bm_daniel -> 24 | bm_fable -> 25 | bm_george -> 26 | bm_lewis -> 27 |
| 28 - 31 | ef_dora -> 28 | em_alex -> 29 | ff_siwis -> 30 | hf_alpha -> 31 |
| 32 - 35 | hf_beta -> 32 | hm_omega -> 33 | hm_psi -> 34 | if_sara -> 35 |
| 36 - 39 | im_nicola -> 36 | jf_alpha -> 37 | jf_gongitsune -> 38 | jf_nezumi -> 39 |
| 40 - 43 | jf_tebukuro -> 40 | jm_kumo -> 41 | pf_dora -> 42 | pm_alex -> 43 |
| 44 - 47 | pm_santa -> 44 | zf_xiaobei -> 45 | zf_xiaoni -> 46 | zf_xiaoxiao -> 47 |
| 48 - 51 | zf_xiaoyi -> 48 | zm_yunjian -> 49 | zm_yunxi -> 50 | zm_yunxia -> 51 |
| 52 - 52 | zm_yunyang -> 52 |
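The APIs below take a numeric `sid`. If you prefer to select a voice by name, the tables above can be transcribed into plain dictionaries (an illustrative sketch, not part of sherpa-onnx):

```python
# Name -> sid table for kokoro-multi-lang-v1_0, transcribed from the
# mapping tables above. Illustrative only; not part of sherpa-onnx.
NAME_TO_SID = {
    "af_alloy": 0, "af_aoede": 1, "af_bella": 2, "af_heart": 3,
    "af_jessica": 4, "af_kore": 5, "af_nicole": 6, "af_nova": 7,
    "af_river": 8, "af_sarah": 9, "af_sky": 10, "am_adam": 11,
    "am_echo": 12, "am_eric": 13, "am_fenrir": 14, "am_liam": 15,
    "am_michael": 16, "am_onyx": 17, "am_puck": 18, "am_santa": 19,
    "bf_alice": 20, "bf_emma": 21, "bf_isabella": 22, "bf_lily": 23,
    "bm_daniel": 24, "bm_fable": 25, "bm_george": 26, "bm_lewis": 27,
    "ef_dora": 28, "em_alex": 29, "ff_siwis": 30, "hf_alpha": 31,
    "hf_beta": 32, "hm_omega": 33, "hm_psi": 34, "if_sara": 35,
    "im_nicola": 36, "jf_alpha": 37, "jf_gongitsune": 38, "jf_nezumi": 39,
    "jf_tebukuro": 40, "jm_kumo": 41, "pf_dora": 42, "pm_alex": 43,
    "pm_santa": 44, "zf_xiaobei": 45, "zf_xiaoni": 46, "zf_xiaoxiao": 47,
    "zf_xiaoyi": 48, "zm_yunjian": 49, "zm_yunxi": 50, "zm_yunxia": 51,
    "zm_yunyang": 52,
}

# Reverse table: sid -> name
SID_TO_NAME = {sid: name for name, sid in NAME_TO_SID.items()}

print(NAME_TO_SID["zf_xiaoxiao"])  # 47
print(SID_TO_NAME[0])  # af_alloy
```

You can then pass, e.g., `sid=NAME_TO_SID["zf_xiaoxiao"]` to the generate call in the Python example below.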
Download the model
Model download address
https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/kokoro-multi-lang-v1_0.tar.bz2
Android APK
The Android TTS Engine APK with this model is built for sherpa-onnx v1.13.2.
If you don't know what ABI is, you probably need to select arm64-v8a.
The source code for the APK can be found at
https://github.com/k2-fsa/sherpa-onnx/tree/master/android/SherpaOnnxTtsEngine
Please refer to the documentation for how to build the APK from source code.
More Android APKs can be found at
Python API
Assume you have installed sherpa-onnx via
```shell
pip install sherpa-onnx
```
and you have downloaded the model from
https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/kokoro-multi-lang-v1_0.tar.bz2
You can use the following code to play with kokoro-multi-lang-v1_0
```python
import sherpa_onnx
import soundfile as sf

config = sherpa_onnx.OfflineTtsConfig(
    model=sherpa_onnx.OfflineTtsModelConfig(
        kokoro=sherpa_onnx.OfflineTtsKokoroModelConfig(
            model="kokoro-multi-lang-v1_0/model.onnx",
            voices="kokoro-multi-lang-v1_0/voices.bin",
            tokens="kokoro-multi-lang-v1_0/tokens.txt",
            data_dir="kokoro-multi-lang-v1_0/espeak-ng-data",
            lexicon="kokoro-multi-lang-v1_0/lexicon-us-en.txt,kokoro-multi-lang-v1_0/lexicon-zh.txt",
        ),
        num_threads=1,
    ),
)

if not config.validate():
    raise ValueError("Please check your config")

tts = sherpa_onnx.OfflineTts(config)

audio = tts.generate(
    text="This model supports both Chinese and English. 小米的核心价值观是什么?答案是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习. 我在研究 machine learning。What do you think 中英文说的如何呢?今天是 2025年6月18号.",
    sid=0,
    speed=1.0,
)

sf.write("test.mp3", audio.samples, samplerate=audio.sample_rate)
```
C API
You can use the following code to play with kokoro-multi-lang-v1_0 with C API.
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "sherpa-onnx/c-api/c-api.h"

static int32_t ProgressCallback(const float *samples, int32_t num_samples,
                                float progress, void *arg) {
  fprintf(stderr, "Progress: %.3f%%\n", progress * 100);
  // return 1 to continue generating
  // return 0 to stop generating
  return 1;
}

int main(int argc, char *argv[]) {
  SherpaOnnxOfflineTtsConfig config;
  memset(&config, 0, sizeof(config));
  config.model.kokoro.model = "kokoro-multi-lang-v1_0/model.onnx";
  config.model.kokoro.voices = "kokoro-multi-lang-v1_0/voices.bin";
  config.model.kokoro.tokens = "kokoro-multi-lang-v1_0/tokens.txt";
  config.model.kokoro.data_dir = "kokoro-multi-lang-v1_0/espeak-ng-data";
  config.model.kokoro.lexicon =
      "kokoro-multi-lang-v1_0/lexicon-us-en.txt,"
      "kokoro-multi-lang-v1_0/lexicon-zh.txt";
  config.model.num_threads = 1;

  // If you don't want to see debug messages, please set it to 0
  config.model.debug = 0;

  const char *text =
      "This model supports both Chinese and English. 小米的核心价值观是什么?"
      "答案是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习. "
      "我在研究 machine learning。What do you think 中英文说的如何呢?"
      "今天是 2025年6月18号.";

  const SherpaOnnxOfflineTts *tts = SherpaOnnxCreateOfflineTts(&config);

  SherpaOnnxGenerationConfig gen_cfg;
  memset(&gen_cfg, 0, sizeof(gen_cfg));
  gen_cfg.sid = 0;
  gen_cfg.speed = 1.0;

#if 0
  // If you don't want to use a callback, then please enable this branch
  const SherpaOnnxGeneratedAudio *audio =
      SherpaOnnxOfflineTtsGenerateWithConfig(tts, text, &gen_cfg, NULL, NULL);
#else
  const SherpaOnnxGeneratedAudio *audio = SherpaOnnxOfflineTtsGenerateWithConfig(
      tts, text, &gen_cfg, ProgressCallback, NULL);
#endif

  SherpaOnnxWriteWave(audio->samples, audio->n, audio->sample_rate,
                      "./test.wav");

  // You need to free the pointers to avoid memory leaks in your app
  SherpaOnnxDestroyOfflineTtsGeneratedAudio(audio);
  SherpaOnnxDestroyOfflineTts(tts);

  printf("Saved to ./test.wav\n");

  return 0;
}
```
Use shared library (dynamic link)
```shell
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared

cmake \
  -DSHERPA_ONNX_ENABLE_C_API=ON \
  -DCMAKE_BUILD_TYPE=Release \
  -DBUILD_SHARED_LIBS=ON \
  -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared \
  ..

make
make install
```

You can find the required header files and library files inside /tmp/sherpa-onnx/shared.

Assume you have saved the above example file as /tmp/test-kokoro.c.

Then you can compile it with the following command (the source file is listed before the libraries so that the linker resolves symbols correctly):

```shell
gcc \
  -I /tmp/sherpa-onnx/shared/include \
  -L /tmp/sherpa-onnx/shared/lib \
  -o /tmp/test-kokoro \
  /tmp/test-kokoro.c \
  -lsherpa-onnx-c-api \
  -lonnxruntime
```

Now you can run it:

```shell
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-kokoro
```

You probably need to run

```shell
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH

# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
```

before you run /tmp/test-kokoro.
Use static library (static link)
Please see the documentation at
C++ API
You can use the following code to play with kokoro-multi-lang-v1_0 with C++ API.
```cpp
#include <cstdint>
#include <cstdio>
#include <string>

#include "sherpa-onnx/c-api/cxx-api.h"

static int32_t ProgressCallback(const float *samples, int32_t num_samples,
                                float progress, void *arg) {
  fprintf(stderr, "Progress: %.3f%%\n", progress * 100);
  // return 1 to continue generating
  // return 0 to stop generating
  return 1;
}

int main(int argc, char *argv[]) {
  using namespace sherpa_onnx::cxx;  // NOLINT

  OfflineTtsConfig config;
  config.model.kokoro.model = "kokoro-multi-lang-v1_0/model.onnx";
  config.model.kokoro.voices = "kokoro-multi-lang-v1_0/voices.bin";
  config.model.kokoro.tokens = "kokoro-multi-lang-v1_0/tokens.txt";
  config.model.kokoro.data_dir = "kokoro-multi-lang-v1_0/espeak-ng-data";
  config.model.kokoro.lexicon =
      "kokoro-multi-lang-v1_0/lexicon-us-en.txt,"
      "kokoro-multi-lang-v1_0/lexicon-zh.txt";
  config.model.num_threads = 1;

  // If you don't want to see debug messages, please set it to 0
  config.model.debug = 0;

  std::string filename = "./test.wav";
  std::string text =
      "This model supports both Chinese and English. 小米的核心价值观是什么?"
      "答案是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习. "
      "我在研究 machine learning。What do you think 中英文说的如何呢?"
      "今天是 2025年6月18号.";

  auto tts = OfflineTts::Create(config);

  GenerationConfig gen_cfg;
  gen_cfg.sid = 0;
  gen_cfg.speed = 1.0;  // larger -> faster speech

#if 0
  // If you don't want to use a callback, then please enable this branch
  GeneratedAudio audio = tts.Generate(text, gen_cfg);
#else
  GeneratedAudio audio = tts.Generate(text, gen_cfg, ProgressCallback);
#endif

  WriteWave(filename, {audio.samples, audio.sample_rate});

  fprintf(stderr, "Input text is: %s\n", text.c_str());
  fprintf(stderr, "Speaker ID is: %d\n", gen_cfg.sid);
  fprintf(stderr, "Saved to: %s\n", filename.c_str());

  return 0;
}
```
Use shared library (dynamic link)
```shell
cd /tmp
git clone https://github.com/k2-fsa/sherpa-onnx
cd sherpa-onnx
mkdir build-shared
cd build-shared

cmake \
  -DSHERPA_ONNX_ENABLE_C_API=ON \
  -DCMAKE_BUILD_TYPE=Release \
  -DBUILD_SHARED_LIBS=ON \
  -DCMAKE_INSTALL_PREFIX=/tmp/sherpa-onnx/shared \
  ..

make
make install
```

You can find the required header files and library files inside /tmp/sherpa-onnx/shared.

Assume you have saved the above example file as /tmp/test-kokoro.cc.

Then you can compile it with the following command (the source file is listed before the libraries so that the linker resolves symbols correctly):

```shell
g++ \
  -std=c++17 \
  -I /tmp/sherpa-onnx/shared/include \
  -L /tmp/sherpa-onnx/shared/lib \
  -o /tmp/test-kokoro \
  /tmp/test-kokoro.cc \
  -lsherpa-onnx-cxx-api \
  -lsherpa-onnx-c-api \
  -lonnxruntime
```

Now you can run it:

```shell
cd /tmp
# Assume you have downloaded the model and extracted it to /tmp
./test-kokoro
```

You probably need to run

```shell
# For Linux
export LD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$LD_LIBRARY_PATH

# For macOS
export DYLD_LIBRARY_PATH=/tmp/sherpa-onnx/shared/lib:$DYLD_LIBRARY_PATH
```

before you run /tmp/test-kokoro.
Use static library (static link)
Please see the documentation at
Rust API
You can use the following code to play with kokoro-multi-lang-v1_0 with Rust API.
```rust
use sherpa_onnx::{
    GenerationConfig, OfflineTts, OfflineTtsConfig, OfflineTtsKokoroModelConfig,
};

fn main() {
    let config = OfflineTtsConfig {
        model: sherpa_onnx::OfflineTtsModelConfig {
            kokoro: OfflineTtsKokoroModelConfig {
                model: Some("kokoro-multi-lang-v1_0/model.onnx".into()),
                voices: Some("kokoro-multi-lang-v1_0/voices.bin".into()),
                tokens: Some("kokoro-multi-lang-v1_0/tokens.txt".into()),
                data_dir: Some("kokoro-multi-lang-v1_0/espeak-ng-data".into()),
                lexicon: Some("kokoro-multi-lang-v1_0/lexicon-us-en.txt,kokoro-multi-lang-v1_0/lexicon-zh.txt".into()),
                ..Default::default()
            },
            num_threads: 2,
            debug: false,
            ..Default::default()
        },
        ..Default::default()
    };

    let tts = OfflineTts::create(&config).expect("Failed to create OfflineTts");
    println!("Sample rate: {}", tts.sample_rate());
    println!("Num speakers: {}", tts.num_speakers());

    let text = "This model supports both Chinese and English. 小米的核心价值观是什么?答案是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习. 我在研究 machine learning。What do you think 中英文说的如何呢?今天是 2025年6月18号.";

    let gen_config = GenerationConfig {
        sid: 0,
        speed: 1.0,
        ..Default::default()
    };

    let audio = tts
        .generate_with_config(
            text,
            &gen_config,
            Some(|_samples: &[f32], progress: f32| -> bool {
                println!("Progress: {:.1}%", progress * 100.0);
                true
            }),
        )
        .expect("Generation failed");

    let filename = "./test.wav";
    if audio.save(filename) {
        println!("Saved to: {}", filename);
    } else {
        eprintln!("Failed to save {}", filename);
    }
}
```
Please refer to the Rust API documentation for how to build and run the above Rust example.
Node.js API
You need to install the sherpa-onnx-node npm package first:
```shell
npm install sherpa-onnx-node
```
You can use the following code to play with kokoro-multi-lang-v1_0 with the Node.js addon API.
```javascript
const sherpa_onnx = require('sherpa-onnx-node');

function createOfflineTts() {
  const config = {
    model: {
      kokoro: {
        model: 'kokoro-multi-lang-v1_0/model.onnx',
        voices: 'kokoro-multi-lang-v1_0/voices.bin',
        tokens: 'kokoro-multi-lang-v1_0/tokens.txt',
        dataDir: 'kokoro-multi-lang-v1_0/espeak-ng-data',
        lexicon: 'kokoro-multi-lang-v1_0/lexicon-us-en.txt,kokoro-multi-lang-v1_0/lexicon-zh.txt',
      },
      debug: true,
      numThreads: 1,
      provider: 'cpu',
    },
    maxNumSentences: 1,
  };
  return new sherpa_onnx.OfflineTts(config);
}

const tts = createOfflineTts();

const text =
    'This model supports both Chinese and English. 小米的核心价值观是什么?答案是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习. 我在研究 machine learning。What do you think 中英文说的如何呢?今天是 2025年6月18号.';

const generationConfig = new sherpa_onnx.GenerationConfig({
  sid: 0,
  speed: 1.0,
  silenceScale: 0.2,
});

let start = Date.now();
const audio = tts.generate({text, generationConfig});
let stop = Date.now();

const elapsed_seconds = (stop - start) / 1000;
const duration = audio.samples.length / audio.sampleRate;
const real_time_factor = elapsed_seconds / duration;

console.log('Wave duration', duration.toFixed(3), 'seconds');
console.log('Elapsed', elapsed_seconds.toFixed(3), 'seconds');
console.log(
    `RTF = ${elapsed_seconds.toFixed(3)}/${duration.toFixed(3)} =`,
    real_time_factor.toFixed(3));

const filename = 'test.wav';
sherpa_onnx.writeWave(
    filename, {samples: audio.samples, sampleRate: audio.sampleRate});

console.log(`Saved to ${filename}`);
```
Please refer to the Node.js API documentation for more details.
Dart API
You can use the following code to play with kokoro-multi-lang-v1_0 with Dart API.
```dart
import 'package:sherpa_onnx/sherpa_onnx.dart' as sherpa_onnx;

void main() {
  final kokoro = sherpa_onnx.OfflineTtsKokoroModelConfig(
    model: 'kokoro-multi-lang-v1_0/model.onnx',
    voices: 'kokoro-multi-lang-v1_0/voices.bin',
    tokens: 'kokoro-multi-lang-v1_0/tokens.txt',
    dataDir: 'kokoro-multi-lang-v1_0/espeak-ng-data',
  );

  final modelConfig = sherpa_onnx.OfflineTtsModelConfig(
    kokoro: kokoro,
    numThreads: 1,
    debug: true,
  );

  final config = sherpa_onnx.OfflineTtsConfig(
    model: modelConfig,
    maxNumSentences: 1,
  );

  final tts = sherpa_onnx.OfflineTts(config);

  final genConfig = sherpa_onnx.OfflineTtsGenerationConfig(
    sid: 0,
    speed: 1.0,
    silenceScale: 0.2,
  );

  final audio = tts.generateWithConfig(
    text:
        'This model supports both Chinese and English. 小米的核心价值观是什么?答案是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习. 我在研究 machine learning。What do you think 中英文说的如何呢?今天是 2025年6月18号.',
    config: genConfig,
  );
  tts.free();

  sherpa_onnx.writeWave(
    filename: 'test.wav',
    samples: audio.samples,
    sampleRate: audio.sampleRate,
  );

  print('Saved to test.wav');
}
```
Please refer to the Dart API documentation for more details.
Swift API
You can use the following code to play with kokoro-multi-lang-v1_0 with Swift API.
```swift
func run() {
  let kokoro = sherpaOnnxOfflineTtsKokoroModelConfig(
    model: "kokoro-multi-lang-v1_0/model.onnx",
    voices: "kokoro-multi-lang-v1_0/voices.bin",
    tokens: "kokoro-multi-lang-v1_0/tokens.txt",
    dataDir: "kokoro-multi-lang-v1_0/espeak-ng-data"
  )

  let modelConfig = sherpaOnnxOfflineTtsModelConfig(kokoro: kokoro)
  var ttsConfig = sherpaOnnxOfflineTtsConfig(model: modelConfig)
  let tts = SherpaOnnxOfflineTtsWrapper(config: &ttsConfig)

  let text = "This model supports both Chinese and English. 小米的核心价值观是什么?答案是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习. 我在研究 machine learning。What do you think 中英文说的如何呢?今天是 2025年6月18号."

  var genConfig = SherpaOnnxGenerationConfigSwift()
  genConfig.sid = 0
  genConfig.speed = 1.0
  genConfig.silenceScale = 0.2

  let audio = tts.generateWithConfig(
    text: text, config: genConfig, callback: nil, arg: nil)

  let filename = "test.wav"
  let ok = audio.save(filename: filename)
  if ok == 1 {
    print("Saved to \(filename)")
  } else {
    print("Failed to save \(filename)")
  }
}

@main
struct App {
  static func main() {
    run()
  }
}
```
Please refer to the Swift API documentation for more details.
C# API
You can use the following code to play with kokoro-multi-lang-v1_0 with C# API.
```csharp
using SherpaOnnx;

var config = new OfflineTtsConfig();
config.Model.Kokoro.Model = "kokoro-multi-lang-v1_0/model.onnx";
config.Model.Kokoro.Voices = "kokoro-multi-lang-v1_0/voices.bin";
config.Model.Kokoro.Tokens = "kokoro-multi-lang-v1_0/tokens.txt";
config.Model.Kokoro.DataDir = "kokoro-multi-lang-v1_0/espeak-ng-data";
config.Model.Kokoro.Lexicon = "kokoro-multi-lang-v1_0/lexicon-us-en.txt,kokoro-multi-lang-v1_0/lexicon-zh.txt";
config.Model.NumThreads = 1;
config.Model.Debug = 1;
config.Model.Provider = "cpu";
config.MaxNumSentences = 1;

var tts = new OfflineTts(config);

var text = "This model supports both Chinese and English. 小米的核心价值观是什么?答案是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习. 我在研究 machine learning。What do you think 中英文说的如何呢?今天是 2025年6月18号.";

OfflineTtsGenerationConfig genConfig = new OfflineTtsGenerationConfig();
genConfig.Sid = 0;
genConfig.Speed = 1.0f;
genConfig.SilenceScale = 0.2f;

var audio = tts.GenerateWithConfig(text, genConfig, null);

var ok = audio.SaveToWaveFile("./test.wav");
if (ok)
{
    Console.WriteLine("Saved to ./test.wav");
}
else
{
    Console.WriteLine("Failed to save ./test.wav");
}
```
Please refer to the C# API documentation for more details.
Kotlin API
You can use the following code to play with kokoro-multi-lang-v1_0 with Kotlin API.
```kotlin
package com.k2fsa.sherpa.onnx

fun main() {
    val config = OfflineTtsConfig(
        model = OfflineTtsModelConfig(
            kokoro = OfflineTtsKokoroModelConfig(
                model = "kokoro-multi-lang-v1_0/model.onnx",
                voices = "kokoro-multi-lang-v1_0/voices.bin",
                tokens = "kokoro-multi-lang-v1_0/tokens.txt",
                dataDir = "kokoro-multi-lang-v1_0/espeak-ng-data",
                lexicon = "kokoro-multi-lang-v1_0/lexicon-us-en.txt,kokoro-multi-lang-v1_0/lexicon-zh.txt",
            ),
            numThreads = 1,
            debug = true,
        ),
    )
    val tts = OfflineTts(config = config)

    val genConfig = GenerationConfig(
        sid = 0,
        speed = 1.0f,
        silenceScale = 0.2f,
    )

    val audio = tts.generateWithConfigAndCallback(
        text = "This model supports both Chinese and English. 小米的核心价值观是什么?答案是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习. 我在研究 machine learning。What do you think 中英文说的如何呢?今天是 2025年6月18号.",
        config = genConfig,
        callback = ::callback,
    )

    audio.save(filename = "test.wav")
    tts.release()
    println("Saved to test.wav")
}

fun callback(samples: FloatArray): Int {
    // 1 means to continue
    // 0 means to stop
    return 1
}
```
Please refer to the Kotlin API documentation for more details.
Java API
You can use the following code to play with kokoro-multi-lang-v1_0 with Java API.
```java
import com.k2fsa.sherpa.onnx.*;

public class TtsDemo {
  public static void main(String[] args) {
    var kokoro = new OfflineTtsKokoroModelConfig();
    kokoro.setModel("kokoro-multi-lang-v1_0/model.onnx");
    kokoro.setVoices("kokoro-multi-lang-v1_0/voices.bin");
    kokoro.setTokens("kokoro-multi-lang-v1_0/tokens.txt");
    kokoro.setDataDir("kokoro-multi-lang-v1_0/espeak-ng-data");
    kokoro.setLexicon("kokoro-multi-lang-v1_0/lexicon-us-en.txt,kokoro-multi-lang-v1_0/lexicon-zh.txt");

    var modelConfig = new OfflineTtsModelConfig();
    modelConfig.setKokoro(kokoro);
    modelConfig.setNumThreads(1);
    modelConfig.setDebug(true);

    var config = new OfflineTtsConfig();
    config.setModel(modelConfig);
    config.setMaxNumSentences(1);

    var tts = new OfflineTts(config);

    var text = "This model supports both Chinese and English. 小米的核心价值观是什么?答案是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习. 我在研究 machine learning。What do you think 中英文说的如何呢?今天是 2025年6月18号.";

    var genConfig = new GenerationConfig();
    genConfig.setSid(0);
    genConfig.setSpeed(1.0f);
    genConfig.setSilenceScale(0.2f);

    var audio = tts.generateWithConfigAndCallback(text, genConfig, (samples) -> {
      // 1 means to continue, 0 means to stop
      return 1;
    });

    audio.save("test.wav");
    tts.release();
    System.out.println("Saved to test.wav");
  }
}
```
Please refer to the Java API documentation for more details.
Pascal API
You can use the following code to play with kokoro-multi-lang-v1_0 with Pascal API.
```pascal
program test_kokoro;

{$mode objfpc}

uses
  SysUtils,
  sherpa_onnx;

var
  Config: TSherpaOnnxOfflineTtsConfig;
  Tts: TSherpaOnnxOfflineTts;
  Audio: TSherpaOnnxGeneratedAudio;
  GenConfig: TSherpaOnnxGenerationConfig;
begin
  FillChar(Config, SizeOf(Config), 0);
  Config.Model.Kokoro.Model := 'kokoro-multi-lang-v1_0/model.onnx';
  Config.Model.Kokoro.Voices := 'kokoro-multi-lang-v1_0/voices.bin';
  Config.Model.Kokoro.Tokens := 'kokoro-multi-lang-v1_0/tokens.txt';
  Config.Model.Kokoro.DataDir := 'kokoro-multi-lang-v1_0/espeak-ng-data';
  Config.Model.Kokoro.Lexicon := 'kokoro-multi-lang-v1_0/lexicon-us-en.txt,kokoro-multi-lang-v1_0/lexicon-zh.txt';
  Config.Model.NumThreads := 1;
  Config.Model.Debug := True;
  Config.MaxNumSentences := 1;

  Tts := TSherpaOnnxOfflineTts.Create(@Config);

  GenConfig.Sid := 0;
  GenConfig.Speed := 1.0;
  GenConfig.SilenceScale := 0.2;

  Audio := Tts.GenerateWithConfig('This model supports both Chinese and English. 小米的核心价值观是什么?答案是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习. 我在研究 machine learning。What do you think 中英文说的如何呢?今天是 2025年6月18号.', @GenConfig, nil);

  WriteWave('./test.wav', Audio.Samples, Audio.N, Audio.SampleRate);
  WriteLn('Saved to ./test.wav');

  Audio.Free;
  Tts.Free;
end.
```
Please refer to the Pascal API documentation for more details.
Go API
You can use the following code to play with kokoro-multi-lang-v1_0 with Go API.
```go
package main

import (
	"fmt"

	sherpa "github.com/k2-fsa/sherpa-onnx-go/sherpa_onnx"
)

func main() {
	config := sherpa.OfflineTtsConfig{
		Model: sherpa.OfflineTtsModelConfig{
			Kokoro: sherpa.OfflineTtsKokoroModelConfig{
				Model:   "kokoro-multi-lang-v1_0/model.onnx",
				Voices:  "kokoro-multi-lang-v1_0/voices.bin",
				Tokens:  "kokoro-multi-lang-v1_0/tokens.txt",
				DataDir: "kokoro-multi-lang-v1_0/espeak-ng-data",
				Lexicon: "kokoro-multi-lang-v1_0/lexicon-us-en.txt,kokoro-multi-lang-v1_0/lexicon-zh.txt",
			},
			NumThreads: 1,
			Debug:      true,
		},
		MaxNumSentences: 1,
	}

	tts := sherpa.NewOfflineTts(&config)
	defer tts.Delete()

	text := "This model supports both Chinese and English. 小米的核心价值观是什么?答案是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习. 我在研究 machine learning。What do you think 中英文说的如何呢?今天是 2025年6月18号."

	genConfig := sherpa.GenerationConfig{
		Sid:          0,
		Speed:        1.0,
		SilenceScale: 0.2,
	}

	audio := tts.GenerateWithConfig(text, &genConfig, nil)

	filename := "./test.wav"
	sherpa.WriteWave(filename, audio.Samples, audio.SampleRate)
	fmt.Printf("Saved to %s\n", filename)
}
```
Please refer to the Go API documentation for more details.
Samples
For the following text:
This model supports both Chinese and English. 小米的核心价值观是什么?答案
是真诚热爱!有困难,请拨打110 或者18601200909。I am learning 机器学习.
我在研究 machine learning。What do you think 中英文说的如何呢?
今天是 2025年6月18号.
sample audio clips for the different speakers are listed below: