Open Source Text Processing Project: textacy

textacy: higher-level NLP built on spaCy

Project Website: https://textacy.readthedocs.io

Github Link: https://github.com/chartbeat-labs/textacy

Description

textacy is a Python library for performing higher-level natural language processing (NLP) tasks, built on the high-performance spaCy library. With the basics — tokenization, part-of-speech tagging, dependency parsing, etc. — offloaded to another library, textacy focuses on tasks facilitated by the ready availability of tokenized, POS-tagged, and parsed text.

Features
Stream text, JSON, CSV, and spaCy binary data to and from disk
Clean and normalize raw text, before analyzing it
Explore included corpora of Congressional speeches and Supreme Court decisions, or stream documents from standard Wikipedia pages and Reddit comments datasets
Access and filter basic linguistic elements, such as words and ngrams, noun chunks and sentences
Extract named entities, acronyms and their definitions, direct quotations, key terms, and more from documents
Compare strings, sets, and documents by a variety of similarity metrics
Transform documents and corpora into vectorized and semantic network representations
Train, interpret, visualize, and save sklearn-style topic models using LSA, LDA, or NMF methods
Identify a text’s language, display key words in context (KWIC), true-case words, and navigate a parse tree
… and more!
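
As a minimal sketch of the API (function names have shifted across textacy releases, so treat this as indicative rather than exact; it assumes an installed spaCy English model):

import textacy
import textacy.extract

text = "Since the 1990s, Baidu and Google have both invested heavily in natural language processing."

# Build a spaCy Doc through textacy, using a small English pipeline.
doc = textacy.make_spacy_doc(text, lang="en_core_web_sm")

# Basic linguistic elements: filtered bigrams and named entities.
print(list(textacy.extract.ngrams(doc, 2, filter_stops=True)))
print(list(textacy.extract.entities(doc)))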

Open Source Text Processing Project: vivekn sentiment

Sentiment analysis using machine learning techniques

Project Website: http://sentiment.vivekn.com/

Github Link: https://github.com/vivekn/sentiment

Description

Sentiment analysis using machine learning techniques.

Check info.py for the training and testing code. A demo of the tool is available on the project website.

Refer to the following paper for more information about the algorithms used:

http://arxiv.org/abs/1305.6143

This tool works by examining individual words and short sequences of words (n-grams) and comparing them with a probability model built from a pre-labeled training set of IMDb movie reviews. It can also detect negations in phrases, e.g., the phrase “not bad” will be classified as positive despite containing two words that are individually negative.
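
The core idea, an n-gram probability model plus negation handling, can be sketched generically. The following Python sketch uses only unigram features and add-one smoothing for brevity; it illustrates the approach, not the repository's actual code:

from collections import defaultdict
import math

NEGATIONS = {"not", "no", "never", "n't"}

def tokenize_with_negation(text):
    # Prefix tokens that follow a negation word with "not_", so that
    # "not bad" yields the feature "not_bad" rather than plain "bad".
    tokens, negate = [], False
    for tok in text.lower().split():
        if tok in NEGATIONS:
            negate = True
            continue
        tokens.append("not_" + tok if negate else tok)
        negate = False
    return tokens

class UnigramNaiveBayes:
    def __init__(self):
        self.counts = {"pos": defaultdict(int), "neg": defaultdict(int)}
        self.totals = {"pos": 0, "neg": 0}

    def train(self, text, label):
        for tok in tokenize_with_negation(text):
            self.counts[label][tok] += 1
            self.totals[label] += 1

    def classify(self, text):
        scores = {}
        for label in ("pos", "neg"):
            # Sum log-probabilities with add-one (Laplace) smoothing.
            scores[label] = sum(
                math.log((self.counts[label][tok] + 1) / (self.totals[label] + 2))
                for tok in tokenize_with_negation(text)
            )
        return max(scores, key=scores.get)

model = UnigramNaiveBayes()
model.train("great fun and not bad at all", "pos")
model.train("boring and bad acting", "neg")
print(model.classify("not bad"))  # negation maps "bad" to "not_bad", a positive feature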

Open Source Deep Learning Project: Paddle

Paddle: PArallel Distributed Deep LEarning

Project Website: http://www.paddlepaddle.org/

Github Link: https://github.com/baidu/Paddle

Description

PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible and scalable deep learning platform, originally developed by Baidu scientists and engineers for the purpose of applying deep learning to many products at Baidu.

Features

Flexibility

PaddlePaddle supports a wide range of neural network architectures and optimization algorithms. It is easy to configure complex models such as a neural machine translation model with an attention mechanism or complex memory connections.
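
For illustration, here is a minimal model definition using the present-day PaddlePaddle Python API (note this API differs substantially from the configuration style of the original 2016 release):

import paddle

# A small feed-forward classifier; real Paddle models, e.g. attention-based
# NMT, compose the same layer primitives at much larger scale.
model = paddle.nn.Sequential(
    paddle.nn.Linear(784, 256),
    paddle.nn.ReLU(),
    paddle.nn.Linear(256, 10),
)
optimizer = paddle.optimizer.Adam(learning_rate=1e-3, parameters=model.parameters())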

Efficiency

In order to unleash the power of heterogeneous computing resources, optimization occurs at different levels of PaddlePaddle, including computing, memory, architecture and communication. The following are some examples:

Optimized math operations through SSE/AVX intrinsics, BLAS libraries (e.g. MKL, ATLAS, cuBLAS) or customized CPU/GPU kernels.
Highly optimized recurrent networks which can handle variable-length sequences without padding.
Optimized local and distributed training for models with high dimensional sparse data.

Scalability

With PaddlePaddle, it is easy to use many CPUs/GPUs and machines to speed up your training. PaddlePaddle can achieve high throughput and performance via optimized communication.

Connected to Products

In addition, PaddlePaddle is designed to be easily deployable. At Baidu, PaddlePaddle has been deployed in products and services with a vast number of users, including ad click-through rate (CTR) prediction, large-scale image classification, optical character recognition (OCR), search ranking, computer virus detection, and recommendation. We hope you can also exploit the capability of PaddlePaddle to make a huge impact in your own products.

Open Source Text Processing Project: Stanford Temporal Tagger

Stanford Temporal Tagger

Project Website: http://nlp.stanford.edu/software/sutime.html

Github Link: None

Description

SUTime is a library for recognizing and normalizing time expressions. That is, it will convert “next wednesday at 3pm” to something like “2016-02-17T15:00” (depending on the assumed current reference time). SUTime is available as part of the Stanford CoreNLP pipeline and can be used to annotate documents with temporal information. It is a deterministic rule-based system designed for extensibility. The currently available rules support only English.
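
SUTime itself is a Java library, but as a quick sketch, the third-party python-sutime wrapper exposes it from Python (assuming the wrapper and the CoreNLP jars are installed):

from sutime import SUTime

sutime = SUTime(mark_time_ranges=True)
# Each result carries the matched span and a normalized TIMEX3 value.
print(sutime.parse("Let's meet next Wednesday at 3pm."))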

SUTime was developed using TokensRegex, a generic framework for defining patterns over text and mapping them to semantic objects. An included set of PowerPoint slides and the Javadoc for SUTime provide an overview of this package.

SUTime was written by Angel Chang. These programs also rely on classes developed by others as part of the Stanford JavaNLP project.

There is a paper describing SUTime. You’re encouraged to cite it if you use SUTime.

Angel X. Chang and Christopher D. Manning. 2012. SUTIME: A Library for Recognizing and Normalizing Time Expressions. 8th International Conference on Language Resources and Evaluation (LREC 2012).

Open Source Deep Learning Project: dlib

dlib: A toolkit for making real world machine learning and data analysis applications in C++

Project Website: http://dlib.net

Github Link: https://github.com/davisking/dlib

Description

Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems. It is used in both industry and academia in a wide range of domains including robotics, embedded devices, mobile phones, and large high performance computing environments. Dlib’s open source licensing allows you to use it in any application, free of charge.
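
Although the toolkit is C++, dlib also ships official Python bindings; here is a minimal face-detection sketch (the image path is a placeholder):

import dlib

# HOG-based frontal face detector bundled with dlib.
detector = dlib.get_frontal_face_detector()
img = dlib.load_rgb_image("photo.jpg")  # placeholder path
# The second argument upsamples the image once, to find smaller faces.
faces = detector(img, 1)
print("Detected %d face(s)" % len(faces))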

To follow or participate in the development of dlib, subscribe to dlib on GitHub. Also be sure to read the how-to-contribute page if you intend to submit code to the project.

Open Source Deep Learning Project: torchnet

torchnet: Torch on steroids

Project Website: None

Github Link: https://github.com/torchnet/torchnet

Description

torchnet is a framework for Torch which provides a set of abstractions aimed at encouraging code re-use and modular programming.

At the moment, torchnet provides four sets of important classes:

Dataset: handling and pre-processing data in various ways.
Engine: training/testing machine learning algorithms.
Meter: measuring performance or any other quantity.
Log: output performance or any other string to file / disk in a consistent manner.
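
torchnet itself is written in Lua, but the division of labor between these classes is easy to picture. The following is a hypothetical Python analogue of the Meter/Engine pattern, not torchnet's actual API:

class AverageMeter:
    """Tracks the running average of a quantity (cf. torchnet's Meter)."""
    def __init__(self):
        self.sum, self.n = 0.0, 0

    def add(self, value):
        self.sum += value
        self.n += 1

    def value(self):
        return self.sum / self.n if self.n else 0.0

def train_engine(dataset, step_fn, meter):
    """A bare-bones training loop (cf. torchnet's Engine)."""
    for sample in dataset:
        loss = step_fn(sample)  # forward/backward/update for one sample
        meter.add(loss)
    return meter.value()

meter = AverageMeter()
avg_loss = train_engine([1.0, 2.0, 3.0], step_fn=lambda s: s * 0.5, meter=meter)
print(avg_loss)  # 1.0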

Open Source Deep Learning Project: OpenNN

OpenNN – Open Neural Networks Library

Project Website: http://www.opennn.net/

Github Link: https://github.com/Artelnics/OpenNN

Description

OpenNN is an open source class library written in the C++ programming language which implements neural networks, a main area of deep learning research. It is intended for advanced users with strong C++ and machine learning skills.

The library implements any number of layers of non-linear processing units for supervised learning. This deep architecture allows the design of neural networks with universal approximation properties.

The main advantage of OpenNN is its high performance: the library stands out in terms of execution speed and memory allocation, and it is constantly optimized and parallelized to maximize its efficiency. It is aimed at predictive analytics applications.

Some typical applications of OpenNN are function regression (modelling), pattern recognition (classification) and time series prediction (forecasting).

The documentation is composed of tutorials and examples that offer a complete overview of the library. It can be found at the official OpenNN site.

CMakeLists.txt is the build file for CMake; it is also used by the CLion IDE.

The .pro files are project files for the Qt Creator IDE, which can be downloaded from its site. Note that OpenNN does not make use of the Qt library.

OpenNN is developed by Artelnics, a company specialized in artificial intelligence.

Open Source Deep Learning Project: ELEKTRONN

ELEKTRONN: A highly configurable toolkit for training 3d/2d CNNs and general Neural Networks

Project Website: http://elektronn.org/

Github Link: https://github.com/ELEKTRONN/ELEKTRONN

Description

ELEKTRONN is a deep learning toolkit that makes powerful neural networks accessible to scientists outside of the machine learning community.

ELEKTRONN is a highly configurable toolkit for training 3D/2D CNNs and general Neural Networks.

It is written in Python 2 and based on Theano, which allows CUDA-enabled GPUs to significantly accelerate the pipeline.

The package includes a sophisticated training pipeline designed for classification/localisation tasks on 3D/2D images. Additionally, the toolkit offers training routines for tasks on non-image data.

ELEKTRONN was created by Marius Killinger and Gregor Urban at the Max Planck Institute for Medical Research to solve connectomics tasks.

Open Source Deep Learning Project: ConvNet

ConvNet: Convolutional Neural Networks for Matlab

Project Website: None

Github Link: https://github.com/sdemyanov/ConvNet

Description

Convolutional Neural Networks for Matlab, including the Invariant Backpropagation (IBP) algorithm. Has versions for GPU and CPU, written in CUDA, C++ and Matlab. All versions work identically. The GPU version uses kernels from Alex Krizhevsky’s library ‘cuda-convnet2’.

A convolutional neural network (CNN) is a type of deep learning classification algorithm that can learn useful features from raw data by itself. Learning is performed by tuning the network’s weights. CNNs consist of several layers, usually alternating convolutional and subsampling layers. A convolutional layer filters its input with a small matrix of weights and applies a non-linear function to the result. A subsampling layer contains no weights and simply reduces the size of its input by an averaging or max-pooling operation. The last layer is fully connected by weights to all outputs of the previous layer, and its output is also modified by a non-linear function. If your network consists of only fully connected layers, you get a classic neural net.

The learning process consists of two steps, a forward pass and a backward pass, which repeat for all objects in the training set. On the forward pass, each layer transforms the output of the previous layer according to its function. The output of the last layer is compared with the label values and the total error is computed. On the backward pass, the corresponding transformation happens with the derivatives of the error with respect to the outputs and weights of each layer. After the backward pass has finished, the weights are changed in the direction that decreases the total error. This is performed for a batch of objects simultaneously in order to decrease the sample bias. After all the objects have been processed, the procedure may repeat for different batch splits.
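
As a sketch of that loop, assuming a one-layer sigmoid network and squared error (generic Python/NumPy, not the repository's Matlab code):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))                     # batch of 8 objects, 4 features
y = rng.integers(0, 2, size=(8, 1)).astype(float)   # binary labels
W = rng.standard_normal((4, 1)) * 0.1
lr = 0.1

for epoch in range(100):
    # Forward pass: transform the input, apply the non-linearity.
    out = 1.0 / (1.0 + np.exp(-X @ W))   # sigmoid output layer
    err = out - y                        # derivative of squared error (up to a factor)
    # Backward pass: propagate derivatives of the error to the weights.
    grad = X.T @ (err * out * (1 - out)) / len(X)
    # Update weights in the direction that decreases the total error.
    W -= lr * grad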

Open Source Deep Learning Project: neuralnetworks

neuralnetworks: Deep Neural Networks with GPU support

Project Website: None

Github Link: https://github.com/ivan-vasilev/neuralnetworks

Description

This is a Java implementation of some of the algorithms for training deep neural networks. GPU support is provided via OpenCL and Aparapi. The architecture is designed with modularity, extensibility and pluggability in mind.

Git structure

I’m using the git-flow model. The most stable (but older) sources are available in the master branch, while the latest ones are in the develop branch.

If you want to use the previous Java 7 compatible version you can check out this release.

Neural network types

Multilayer perceptron
Restricted Boltzmann Machine
Autoencoder
Deep belief network
Stacked autoencoder
Convolutional networks with max pooling, average pooling and stochastic pooling
Maxout networks (work in progress)

Training algorithms

Backpropagation – supports multilayer perceptrons, convolutional networks and dropout.
Contrastive divergence and persistent contrastive divergence, implemented following published guidelines.
Greedy layer-wise training for deep networks – works for stacked autoencoders and DBNs, but supports any kind of training.

All the algorithms support GPU execution.

Out-of-the-box supported datasets are MNIST, CIFAR-10/CIFAR-100 (experimental, not much tested), IRIS and XOR, but you can easily implement your own.

Experimental support for RGB image preprocessing operations – affine transformations, cropping, and color scaling (see Generaltest.java -> testImageInputProvider).

Activation functions

Logistic
Tanh
Rectifiers
Softplus
Softmax
Weighted sum

All the functions support GPU execution. They can be applied to all types of networks and all training algorithms. You can also implement new activations.
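
These activations have standard mathematical definitions; here is a NumPy sketch for reference (the library itself is Java, so this is purely illustrative; the weighted-sum activation is simply the identity on the layer's linear output):

import numpy as np

def logistic(x):   # a.k.a. sigmoid
    return 1.0 / (1.0 + np.exp(-x))

def rectifier(x):  # ReLU
    return np.maximum(0.0, x)

def softplus(x):   # smooth approximation of the rectifier
    return np.log1p(np.exp(x))

def softmax(x):
    e = np.exp(x - np.max(x))  # shift for numerical stability
    return e / e.sum()

x = np.array([-1.0, 0.0, 2.0])
print(logistic(x), np.tanh(x), rectifier(x), softplus(x), softmax(x), sep="\n")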