Technologies

Core Technologies

Speech Recognition Technology

The speech recognition technology independently developed by Fano Labs (有光科技) applies deep learning to massive volumes of speech data, enabling accurate recognition of Mandarin and English as well as dialects and less widely spoken languages such as Cantonese and Sichuanese.

Through training on real-world industry data and continuous refinement of our models and algorithms, our speech recognition engine is specifically tuned for variables such as dialect accents, industry-specific terminology, and background noise, significantly improving its accuracy and stability across different environments.

Key Features

  • Support for a wide range of dialects and less widely spoken languages
  • Self-learning: recognition accuracy improves continuously
  • Reinforcement training: accurate recognition of industry-specific terminology
  • Customized development: flexible deployment

Application Areas

  • Voice chatbots
  • Speech transcription and recording
  • Speech analytics systems
  • Voiceprint recognition
  • Voice input
  • Voice assistants
  • Smart home
  • Wearable devices

Research Papers

  • Domain Adaptation of End-to-end Speech Recognition in Low-resource Settings

    Lahiru Samarakoon, Brian Mak, and Albert Y.S. Lam. IEEE Workshop on Spoken Language Technology (IEEE SLT 2018), Athens, Greece, Dec. 2018.

    End-to-end automatic speech recognition (ASR) has simplified the traditional ASR system building pipeline by eliminating the need for multiple components and for expert linguistic knowledge to create pronunciation dictionaries. Therefore, end-to-end ASR fits well when building systems for new domains. However, one major drawback of end-to-end ASR is that it requires a larger amount of labeled speech than traditional methods. In this paper, we therefore explore domain adaptation approaches for end-to-end ASR in low-resource settings. We show that joint domain identification and speech recognition by inserting a symbol for the domain at the beginning of the label sequence, factorized hidden layer adaptation, and a domain-specific gating mechanism improve the performance for a low-resource target domain. Furthermore, we also show the robustness of the proposed adaptation methods to an unseen domain when only 3 hours of untranscribed data are available, with relative improvements of up to 8.7%.


  • Subspace Based Sequence Discriminative Training of LSTM Acoustic Models with Feed-Forward Layers

    Lahiru Samarakoon, Brian Mak, and Albert Y.S. Lam. International Symposium on Chinese Spoken Language Processing (ISCSLP 2018), Taipei, Taiwan, Nov. 2018.

    State-of-the-art automatic speech recognition (ASR) systems use sequence discriminative training for improved performance over the frame-level cross-entropy (CE) criterion. Even though sequence discriminative training improves long short-term memory (LSTM) recurrent neural network (RNN) acoustic models (AMs), it is not clear whether these systems achieve optimal performance, due to overfitting. This paper investigates the effect of state-level minimum Bayes risk (sMBR) training on LSTM AMs and shows that the conventional way of performing sMBR by updating all LSTM parameters is not optimal. We investigate two methods to improve the performance of sequence discriminative training of LSTM AMs. First, more feed-forward (FF) layers are included between the last LSTM layer and the output layer so that those additional FF layers may benefit more from sMBR training. Second, a subspace is estimated as an interpolation of rank-1 matrices when performing sMBR for the LSTM layers of the AM. Our methods are evaluated on the benchmark AMI single distant microphone (SDM) task. We find that the proposed approaches provide a 1.6% absolute improvement over a strong sMBR-trained LSTM baseline.
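
To make the first paper's joint domain identification idea concrete, here is a minimal Python sketch. It is our illustration rather than the authors' code: a domain symbol is inserted at the beginning of each label sequence, so a single end-to-end model learns to emit the domain before the transcript. The tag names and helper functions below are hypothetical.

    # Minimal sketch of the domain-tag idea (tag names are assumed,
    # not taken from the paper): prepend a domain symbol to each label
    # sequence so one model jointly predicts domain and transcript.
    DOMAIN_TAGS = ["<dom:callcenter>", "<dom:meeting>", "<dom:broadcast>"]

    def add_domain_tag(transcript_tokens, domain_id):
        """Prepend the domain symbol to a training label sequence."""
        return [DOMAIN_TAGS[domain_id]] + transcript_tokens

    def split_hypothesis(decoded_tokens):
        """Treat the first emitted token as the predicted domain;
        the remainder is the recognized transcript."""
        if decoded_tokens and decoded_tokens[0] in DOMAIN_TAGS:
            return decoded_tokens[0], decoded_tokens[1:]
        return None, decoded_tokens

    labels = add_domain_tag(["hello", "world"], domain_id=0)
    # labels == ["<dom:callcenter>", "hello", "world"]
    domain, text = split_hypothesis(labels)

Because the domain tag is just another output symbol, no separate classifier is needed; the same sequence model handles both tasks.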
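
The second paper's subspace idea can be sketched similarly: during sMBR, rather than updating all LSTM parameters, the weight update is constrained to an interpolation of rank-1 matrices. The NumPy illustration below shows only that parameterization, with assumed shapes and a stand-in gradient; it is a sketch of the technique, not the paper's implementation.

    import numpy as np

    # Sketch (shapes and gradient are stand-ins): constrain the sMBR
    # update of a weight matrix to a subspace spanned by K rank-1
    # matrices, W = W0 + sum_k alpha_k * outer(u_k, v_k).
    rng = np.random.default_rng(0)
    out_dim, in_dim, K = 512, 256, 8

    W0 = rng.standard_normal((out_dim, in_dim)) * 0.01  # CE-trained weights, frozen
    U = rng.standard_normal((K, out_dim)) * 0.01        # rank-1 left factors
    V = rng.standard_normal((K, in_dim)) * 0.01         # rank-1 right factors
    alpha = np.zeros(K)                                 # interpolation weights

    def effective_weight(W0, U, V, alpha):
        """Return W = W0 + sum_k alpha_k * outer(U[k], V[k])."""
        return W0 + np.einsum("k,ko,ki->oi", alpha, U, V)

    # An sMBR-style step would backpropagate the sequence loss into
    # alpha (and optionally U and V) only, leaving W0 untouched.
    grad_alpha = rng.standard_normal(K)  # placeholder for a real gradient
    alpha -= 0.1 * grad_alpha
    W = effective_weight(W0, U, V, alpha)

Restricting the update to these low-rank factors, instead of the full weight matrix, is presumably what curbs the overfitting that the paper associates with updating all LSTM parameters.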