Open Source Text Processing Project: pocketsphinx-ruby

pocketsphinx-ruby: Ruby speech recognition with Pocketsphinx

Project Website: None
Github Link:


This gem provides Ruby FFI bindings for Pocketsphinx, a lightweight speech recognition engine, specifically tuned for handheld and mobile devices, though it works equally well on the desktop. Pocketsphinx is part of the CMU Sphinx Open Source Toolkit For Speech Recognition.

Pocketsphinx’s SWIG interface was initially considered for this gem, but was dropped in favor of FFI for many of the reasons outlined here; most importantly, ease of maintenance and JRuby support.

The goal of this project is to make it as easy as possible for the Ruby community to experiment with speech recognition. Please do contribute fixes and enhancements.


This gem depends on Pocketsphinx (libpocketsphinx) and Sphinxbase (libsphinxbase and libsphinxad). The current stable versions (0.8) are from late 2012 and are now outdated. Build them manually from source, or, on OS X, install the latest development (potentially unstable) versions using Homebrew as follows (more information here).

Add the Homebrew tap:

$ brew tap watsonbox/cmu-sphinx
You’ll see some warnings as these formulae conflict with those in the main repository, but that’s fine.

Install the libraries:

$ brew install --HEAD watsonbox/cmu-sphinx/cmu-sphinxbase
$ brew install --HEAD watsonbox/cmu-sphinx/cmu-sphinxtrain # optional
$ brew install --HEAD watsonbox/cmu-sphinx/cmu-pocketsphinx
You can test continuous recognition as follows:

$ pocketsphinx_continuous -inmic yes
Then add this line to your application’s Gemfile:

gem 'pocketsphinx-ruby'
And then execute:

$ bundle
Or install it yourself as:

$ gem install pocketsphinx-ruby

The LiveSpeechRecognizer is modeled on the same class in Sphinx4. It uses the Microphone and Decoder classes internally to provide a simple, high-level recognition interface:

require 'pocketsphinx-ruby' # Omitted in subsequent examples

Pocketsphinx::LiveSpeechRecognizer.new.recognize do |speech|
  puts speech
end
The AudioFileSpeechRecognizer decodes directly from an audio file by coordinating interactions between an AudioFile and Decoder.

recognizer = Pocketsphinx::AudioFileSpeechRecognizer.new

recognizer.recognize('spec/assets/audio/goforward.raw') do |speech|
  puts speech # => "go forward ten meters"
end
These two classes split speech into utterances by detecting silence between them. By default this uses Pocketsphinx’s internal Voice Activity Detection (VAD) which can be configured by adjusting the vad_postspeech, vad_prespeech, and vad_threshold configuration settings.
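As a sketch of how those settings might be adjusted, assuming the gem exposes a `Pocketsphinx::Configuration.default` object with hash-style access to Pocketsphinx settings and that recognizers accept a configuration argument (the exact API may differ; the parameter values below are illustrative, not recommendations):

```ruby
require 'pocketsphinx-ruby'

# Build a default configuration and tune the VAD settings
# before handing it to a recognizer.
configuration = Pocketsphinx::Configuration.default
configuration['vad_threshold']  = 4   # higher values ignore quieter background noise
configuration['vad_prespeech']  = 20  # speech frames required to trigger start of utterance
configuration['vad_postspeech'] = 50  # silence frames required to trigger end of utterance

recognizer = Pocketsphinx::LiveSpeechRecognizer.new(configuration)
```

Raising vad_threshold is typically the first adjustment to try in noisy environments, since it controls how loud audio must be before it is treated as speech at all.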
