NLP-Caffe: natural language processing with Caffe
Project Website: None
Github Link: https://github.com/Russell91/nlpcaffe
NLP-Caffe is a pull request on the Caffe framework, which was developed by Yangqing Jia and Evan Shelhamer, among other members of the BVLC lab at Berkeley and a large number of independent online contributors.
This fork makes it easier for NLP users to get started without merging C++ code. The current example constructs a language model for a small subset of Google's Billion Word corpus. It uses a two-layer LSTM architecture that processes in excess of 15,000 words per second and achieves a perplexity of 79. More examples, covering machine translation with the encoder-decoder model and character-level RNNs, are in the works. This code will eventually be merged into the Caffe master branch. This work was funded by the Stanford NLP Group, under the guidance of Chris Manning, and with the invaluable expertise of Thang Luong.
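For readers unfamiliar with the perplexity figure quoted above: perplexity is the exponential of the average per-word negative log-likelihood that the language model assigns to held-out text, so lower is better. A minimal sketch of the computation (this is a generic illustration of the metric, not code from the NLP-Caffe repository):

```python
import math

def perplexity(neg_log_likelihoods):
    """Perplexity = exp(mean per-word negative log-likelihood, in nats).

    neg_log_likelihoods: one -ln p(word | context) value per test word,
    as produced by a cross-entropy loss over the vocabulary.
    """
    return math.exp(sum(neg_log_likelihoods) / len(neg_log_likelihoods))

# A model whose average cross-entropy loss is ln(79) nats per word
# has perplexity 79, matching the figure reported for the LSTM example.
nlls = [math.log(79.0)] * 1000
print(round(perplexity(nlls), 2))  # 79.0
```

In practice the per-word losses come directly from the softmax cross-entropy layer during evaluation, so perplexity can be read off the averaged test loss with a single `exp`.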