Open Source Text Processing Project: kljensen-snowball

Go implementation of the Snowball stemmers

Project Website: None

Github Link:


A Go (golang) implementation of the Snowball stemmer for natural language processing.

Latest release: v0.3.4 (2013-05-19)
Go versions tested: go1.0.3
Languages available: English, Spanish (español), French (le français), Russian (ру́сский язы́к)
License: MIT

Here is a minimal Go program that uses this package to stem a single word.

package main

import (
	"fmt"

	"github.com/kljensen/snowball"
)

func main() {
	stemmed, err := snowball.Stem("Accumulations", "english", true)
	if err == nil {
		fmt.Println(stemmed) // Prints "accumul"
	}
}

Organization & Implementation

The code is organized as follows:

The top-level snowball package has a single exported function snowball.Stem, which is defined in snowball/snowball.go.
The stemmer for each language is defined in a “sub-package”, e.g. snowball/spanish.
Each language exports a Stem function: e.g. spanish.Stem, which is defined in snowball/spanish/stem.go.
Code that is common to multiple languages may go in a separate package, e.g. the small romance package.
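
Based on the description above, the package layout looks roughly like the sketch below (the exact file names beyond snowball.go, stem.go, and snowballword.go are assumptions):

```
snowball/
├── snowball.go            exports snowball.Stem
├── snowballword/
│   └── snowballword.go    the SnowballWord struct
├── romance/               code shared by Romance languages
└── spanish/
    └── stem.go            exports spanish.Stem
```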
Some notes about the implementation:

In order to ensure the code is easily extended to non-English languages, I avoided using bytes and byte arrays, and instead perform all operations on runes. See snowball/snowballword/snowballword.go and the SnowballWord struct.
In order to avoid casting strings into slices of runes numerous times, this implementation uses a single slice of runes stored in the SnowballWord struct for each word that needs to be stemmed.
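The byte/rune distinction matters for the non-English alphabets. A minimal standalone sketch (not part of this package) showing why indexing by bytes would go wrong on accented characters:

```go
package main

import "fmt"

func main() {
	word := "logía" // a Spanish suffix containing an accented character

	// Indexing the string directly yields bytes: "í" occupies two bytes in UTF-8,
	// so byte positions do not line up with characters.
	fmt.Println(len(word)) // 6 bytes

	// Converting to a rune slice gives one element per character,
	// which is what suffix and region logic needs to operate on.
	runes := []rune(word)
	fmt.Println(len(runes)) // 5 runes
}
```

Converting once and storing the []rune on the word struct, as described above, avoids repeating this conversion at every step.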
In spite of the foregoing, readability requires that some strings be kept around and repeatedly cast into slices of runes. For example, in the Spanish stemmer, one step requires removing suffixes with acute accents such as “ución”, “logía”, and “logías”. If I were to hard-code those suffixes as slices of runes, the code would be substantially less readable.
Instead of carrying around the word regions R1, R2, & RV as separate strings (or slices of runes), we carry around the index where each of these regions begins. These are stored as R1start, R2start, & RVstart on the SnowballWord struct. I believe this is a relatively efficient way of storing R1 and R2.
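For illustration, here is a standalone sketch of how such a region start can be computed over a rune slice, following the standard Snowball definition (R1 is the region after the first non-vowel that follows a vowel, and R2 is that definition applied again within R1). The function and vowel set are simplifications of mine, not this package’s actual code:

```go
package main

import "fmt"

// isVowel reports whether r is an English vowel (a simplification;
// each language defines its own vowel set).
func isVowel(r rune) bool {
	switch r {
	case 'a', 'e', 'i', 'o', 'u':
		return true
	}
	return false
}

// regionStart returns the index just after the first non-vowel that
// follows a vowel, or len(rs) if no such position exists.
func regionStart(rs []rune) int {
	for i := 1; i < len(rs); i++ {
		if !isVowel(rs[i]) && isVowel(rs[i-1]) {
			return i + 1
		}
	}
	return len(rs)
}

func main() {
	rs := []rune("beautiful")
	r1 := regionStart(rs)           // R1 begins after "beaut"
	r2 := r1 + regionStart(rs[r1:]) // R2: same rule applied within R1
	fmt.Println(string(rs[r1:]))    // "iful"
	fmt.Println(string(rs[r2:]))    // "ul"
}
```

Storing only the integers r1 and r2 is enough to answer “does this suffix lie in R1?” with a single index comparison, which is what makes the index-based representation cheap.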
The code does not use any maps or regular expressions 1) for kicks, and 2) because I thought they’d negatively impact the performance. (But, mostly for #1; I realize #2 is silly.)
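Without regular expressions or maps, suffix tests reduce to plain rune comparisons. A standalone sketch of the idea (the helper name and signature are mine, not the package’s):

```go
package main

import "fmt"

// hasSuffixRunes reports whether word ends with suffix, comparing
// rune by rune; no maps or regular expressions involved.
func hasSuffixRunes(word, suffix []rune) bool {
	if len(suffix) > len(word) {
		return false
	}
	offset := len(word) - len(suffix)
	for i, r := range suffix {
		if word[offset+i] != r {
			return false
		}
	}
	return true
}

func main() {
	word := []rune("terminología")
	fmt.Println(hasSuffixRunes(word, []rune("logía"))) // true
	fmt.Println(hasSuffixRunes(word, []rune("ución"))) // false
}
```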
I end up refactoring the snowballword package a bit every time I implement a new language.
Clearly, the Go implementation of these stemmers is verbose relative to the Snowball language. However, it is much better than the Java version and others.
Future work

I’d like to implement the Snowball stemmer in more languages. If you can help, I would greatly appreciate it: please fork the project and send a pull request!

(Also, if you are interested in creating a larger NLP project for Go, please get in touch.)

Related work

I know of a few other stemmers available in Go:

stemmer by Dmitry Chestnykh. His project also implements the Snowball (Porter2) English stemmer as well as the Snowball German stemmer.
porter-stemmer – an implementation of the original Porter stemming algorithm.
go-stem by Alex Gonopolskiy. Also the original Porter algorithm.
paicehusk by Aaron Groves. This package implements the Paice/Husk stemmer.
golibstemmer by Richard Johnson. This provides Go bindings for the libstemmer C library.
snowball by Miki Tebeka. Also, I believe, Go bindings for the C library.
