Open Source Text Processing Project: Wapiti

Wapiti – A simple and fast discriminative sequence labelling toolkit

Project Website:
Github Link:

Description

Wapiti is a very fast toolkit for segmenting and labeling sequences with discriminative models. It is based on maxent models, maximum entropy Markov models, and linear-chain CRFs, and offers various optimization and regularization methods to improve both the computational complexity and the prediction performance of standard models. Wapiti has been ranked first on the sequence tagging task on the MLcomp website for more than a year.
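
For a quick orientation, here is a hedged sketch of the typical workflow, following the command-line interface described in the Wapiti manual; the file names (pattern.txt, train.txt, test.txt) are placeholders:

    # Train a model from a pattern file and labeled data
    wapiti train -p pattern.txt train.txt model
    # Apply the trained model to unseen data
    wapiti label -m model test.txt output.txt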

Features

Handle large label and feature sets
Wapiti has been used to train models with more than a thousand labels and models with several billion features. Training time still increases with the size of these sets, but provided you have enough computing power and memory, Wapiti will handle them without problems.

L-BFGS, OWL-QN, SGD-L1, BCD, and RPROP training algorithms
Wapiti implements all the standard training algorithms. All of them are highly optimized and can be combined to improve both computational efficiency and generalization performance.
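
For illustration, the algorithm is selected with the --algo (-a) option; the names below follow the Wapiti manual (l-bfgs is the default, and L-BFGS combined with an L1 penalty is trained with OWL-QN):

    # Pick the optimizer: l-bfgs, sgd-l1, bcd, rprop, rprop+ or rprop-
    wapiti train -a sgd-l1 -p pattern.txt train.txt model
    # L-BFGS with a non-zero L1 penalty switches to OWL-QN
    wapiti train -a l-bfgs -1 0.5 -p pattern.txt train.txt model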

L1, L2, or Elastic-net regularization
Wapiti provides several regularization methods which reduce overfitting and enable efficient feature selection.
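
As a sketch, the two penalty weights are exposed as --rho1 (-1) and --rho2 (-2) in the manual; setting both to non-zero values gives the elastic-net (the values below are arbitrary):

    # Pure L1: sparse model, aggressive feature selection
    wapiti train -1 0.5 -2 0.0 -p pattern.txt train.txt model
    # Elastic-net: combine the L1 and L2 penalties
    wapiti train -1 0.5 -2 0.0001 -p pattern.txt train.txt model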

Powerful feature extraction system
Wapiti uses an extended version of the CRF++ pattern language for extracting features, which reduces both the amount of pre-processing required and the size of data files.
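
A minimal pattern file in the CRF++-compatible subset might look as follows; column 0 is assumed to hold the word, %x[off,col] reads the observation at the given row offset and column, and B adds label-bigram features:

    # Unigram features over the previous, current and next word
    U00:%x[-1,0]
    U01:%x[0,0]
    U02:%x[1,0]
    # Conjunction of the previous and current word
    U03:%x[-1,0]/%x[0,0]
    # Label bigram features
    B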

Multi-threaded and vectorized implementation
To further improve performance, all optimization algorithms can take advantage of SSE instructions when available. The quasi-Newton and RPROP optimizers are parallelized and scale very well on multi-processor machines.
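
For example, the number of worker threads is set with --nthread (-t) per the manual:

    # Train with RPROP on 8 worker threads
    wapiti train -a rprop -t 8 -p pattern.txt train.txt model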

N-best Viterbi output
Viterbi decoding can output the classical best label sequence as well as the n best ones. Decoding can be done with the classical Viterbi algorithm for CRFs or through posteriors, which are slower but generally lead to better results and give normalized scores.
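
In label mode, the manual documents --nbest (-n) for n-best output, --score (-s) to print scores, and --post (-p) for posterior decoding; for example:

    # Output the 5 best label sequences for each input sequence, with scores
    wapiti label -m model -n 5 -s test.txt output.txt
    # Decode through the posteriors instead of classical Viterbi
    wapiti label -m model -p test.txt output.txt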

Compact model creation
When used with L1 or elastic-net penalties, Wapiti is able to remove unused features and create compact models which load faster and use less memory, speeding up labeling.
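
Per the manual, compaction is requested at training time with the --compact (-c) switch:

    # Train with an L1 penalty and strip zero-weight features from the saved model
    wapiti train -c -1 0.5 -p pattern.txt train.txt model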

Sparse forward-backward
A specific sparse forward-backward procedure is used during training to take advantage of the sparsity of the model and speed up computation.
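
If memory serves, this is exposed as the --sparse (-s) training option (treat the flag name as an assumption); it pays off once a penalty has driven many feature weights to zero:

    # Enable sparse forward-backward during training (flag name per recollection of the manual)
    wapiti train -s -1 0.5 -p pattern.txt train.txt model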

Written in standard C99+POSIX
Wapiti's source code is written almost entirely in standard C99 and should work on any computer. However, the multi-threading code uses POSIX threads and the SSE code targets the x86 platform; both are optional and can be disabled or rewritten for other platforms.

Open source (BSD Licence)

