Export to ncnn

We support exporting the following models to ncnn:

  • Zipformer transducer models

  • LSTM transducer models

  • ConvEmformer transducer models

We also provide sherpa-ncnn for performing speech recognition with the exported models using ncnn. It has been tested on the following platforms:

  • Linux

  • macOS

  • Windows

  • Android

  • iOS

  • Raspberry Pi

  • MAIX-III AXera-Pi (爱芯派)

  • RV1126

sherpa-ncnn is self-contained and can be statically linked to produce a binary containing everything needed. Please refer to its documentation for details:

  • https://k2-fsa.github.io/sherpa/ncnn/index.html
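
To give a concrete picture of how the exported files are consumed, here is a minimal usage sketch based on the sherpa-ncnn Python package. The constructor arguments, method names, and file names below are assumptions drawn from the sherpa-ncnn examples, not an authoritative API reference; please check the documentation linked above for the exact usage.

    # Rough sketch only; verify the exact API against the sherpa-ncnn documentation.
    import numpy as np
    import sherpa_ncnn  # assumed to be installed, e.g. via `pip install sherpa-ncnn`

    # File names are placeholders for the tokens file and the .param/.bin files
    # produced by pnnx for the encoder, decoder, and joiner.
    recognizer = sherpa_ncnn.Recognizer(
        tokens="tokens.txt",
        encoder_param="encoder_jit_trace-pnnx.ncnn.param",
        encoder_bin="encoder_jit_trace-pnnx.ncnn.bin",
        decoder_param="decoder_jit_trace-pnnx.ncnn.param",
        decoder_bin="decoder_jit_trace-pnnx.ncnn.bin",
        joiner_param="joiner_jit_trace-pnnx.ncnn.param",
        joiner_bin="joiner_jit_trace-pnnx.ncnn.bin",
        num_threads=4,
    )

    # Feed 16 kHz mono audio as float32 samples in [-1, 1]; zeros stand in here.
    samples = np.zeros(16000, dtype=np.float32)
    recognizer.accept_waveform(16000, samples)
    recognizer.input_finished()
    print(recognizer.text)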

  • Export streaming Zipformer transducer models to ncnn
    • 1. Download the pre-trained model
    • 2. Install ncnn and pnnx
    • 3. Export the model via torch.jit.trace()
    • 4. Export torchscript model via pnnx
    • 5. Test the exported models in icefall
    • 6. Modify the exported encoder for sherpa-ncnn
  • Export ConvEmformer transducer models to ncnn
    • 1. Download the pre-trained model
    • 2. Install ncnn and pnnx
    • 3. Export the model via torch.jit.trace()
    • 4. Export torchscript model via pnnx
    • 5. Test the exported models in icefall
    • 6. Modify the exported encoder for sherpa-ncnn
    • 7. (Optional) int8 quantization with sherpa-ncnn
  • Export LSTM transducer models to ncnn
    • 1. Download the pre-trained model
    • 2. Install ncnn and pnnx
    • 3. Export the model via torch.jit.trace()
    • 4. Export torchscript model via pnnx
    • 5. Test the exported models in icefall
    • 6. Modify the exported encoder for sherpa-ncnn
    • 7. (Optional) int8 quantization with sherpa-ncnn
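
All three recipes above share the same overall flow: trace the encoder, decoder, and joiner with torch.jit.trace(), save the resulting TorchScript files, and convert each of them with pnnx. The snippet below is a self-contained sketch of the tracing step on a stand-in module; the real model classes, file names, and pnnx commands are given in the per-model pages listed above.

    # Illustrative sketch of steps 3-4 above, using a toy module in place of the
    # real transducer encoder.
    import torch

    class TinyEncoder(torch.nn.Module):
        """Stand-in for the actual encoder; only the tracing workflow matters here."""

        def __init__(self):
            super().__init__()
            self.proj = torch.nn.Linear(80, 512)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x has shape (batch, time, feature), e.g. 80-dim fbank features.
            return torch.relu(self.proj(x))

    model = TinyEncoder().eval()
    example = torch.rand(1, 100, 80)  # dummy input whose shapes the trace will record

    with torch.no_grad():
        traced = torch.jit.trace(model, example)

    traced.save("encoder_jit_trace-pnnx.pt")  # placeholder file name

    # The saved .pt file is then converted offline with pnnx, for example:
    #     pnnx encoder_jit_trace-pnnx.pt
    # which produces the .ncnn.param and .ncnn.bin files that sherpa-ncnn loads.
    # See the per-model pages above for the exact commands and required changes.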