Use Python’s NLTK suite of libraries to maximize your Natural Language Processing capabilities.

* Quickly get to grips with Natural Language Processing, from Text Analysis and Text Mining to beyond
* Learn how machines and crawlers interpret and process natural languages
* Easily work with huge amounts of data and learn how to handle distributed processing
* Part of Packt’s Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

In Detail

Natural Language Processing is used everywhere: in search engines, spell checkers, mobile phones, computer games, even your washing machine. Python’s Natural Language Toolkit (NLTK) suite of libraries has rapidly emerged as one of the most efficient tools for Natural Language Processing. If you want to employ nothing less than the best techniques in Natural Language Processing, this book is your answer.

Python Text Processing with NLTK 2.0 Cookbook is a handy, illustrative guide that walks you through Natural Language Processing techniques in a step-by-step manner. It demystifies the advanced features of text analysis and text mining using the comprehensive NLTK suite. The book cuts the preamble short so you can dive right into the science of text processing with a practical, hands-on approach.

Start by learning to tokenize text. Get an overview of WordNet and how to use it. Learn the basics as well as the advanced features of stemming and lemmatization. Discover various ways to replace words with simpler and more common (read: more searched) variants. Create your own corpora and write custom corpus readers for JSON files as well as for data stored in MongoDB. Use and manipulate POS taggers. Transform and normalize parsed chunks to produce a canonical form without changing their meaning. Dig into feature extraction and text classification.
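To give a flavor of the concepts above: the recipes themselves use NLTK, but a toy, from-scratch sketch can show what tokenization and stemming mean. The regex tokenizer and naive suffix-stripping stemmer below are illustrative only (far cruder than NLTK’s Porter stemmer), and all names are hypothetical:

```python
import re

def tokenize(text):
    # Split text into lowercase word tokens -- a toy stand-in for a real tokenizer
    return re.findall(r"[a-z']+", text.lower())

SUFFIXES = ("ing", "ed", "es", "s")

def stem(word):
    # Naive suffix stripping; keeps at least a 3-letter stem.
    # This is only a sketch of the idea, not the Porter algorithm NLTK implements.
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

tokens = tokenize("The cats were chasing mice and jumping over fences.")
stems = [stem(t) for t in tokens]
```

Real stemmers handle many more cases (e.g. doubled consonants, `-ational` endings), which is exactly why the book reaches for NLTK instead of hand-rolled rules.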
Learn how to handle huge amounts of data without any loss of efficiency or speed. This book will teach you all that and beyond, in a hands-on, learn-by-doing manner. Make yourself an expert in using NLTK for Natural Language Processing with this handy companion.

What you will learn from this book

* Learn text categorization and topic identification
* Learn stemming and lemmatization, and how to go beyond the usual spell checker
* Replace negations with antonyms in your text
* Learn to tokenize text into lists of sentences and words, and gain an insight into WordNet
* Transform and manipulate chunks and trees
* Learn advanced features of corpus readers and create your own custom corpora
* Tag different parts of speech by creating, training, and using a part-of-speech tagger
* Improve accuracy by combining multiple part-of-speech taggers
* Learn how to do partial parsing to extract small chunks of text from a part-of-speech tagged sentence
* Produce an alternative canonical form without changing the meaning by normalizing parsed chunks
* Learn how search engines use Natural Language Processing to process text
* Make your site more discoverable by learning how to automatically replace words with more searched equivalents
* Parse dates, times, and HTML
* Train and manipulate different types of classifiers

Approach

The learn-by-doing approach of this book will enable you to dive right into the heart of text processing from the very first page. Each recipe is carefully designed to fulfill your appetite for Natural Language Processing. Packed with numerous illustrative examples and code samples, it will make the task of using NLTK for Natural Language Processing easy and straightforward.

Who this book is written for

This book is for Python programmers who want to quickly get to grips with using NLTK for Natural Language Processing. Familiarity with basic text processing concepts is required. Programmers experienced in NLTK will also find it useful.
Students of linguistics will find it invaluable.
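As a taste of the part-of-speech tagging topics listed above, here is a toy sketch of the idea behind a unigram (lookup) tagger: remember each word’s most frequent tag from training data and back off to a default for unseen words. This is only an illustration of the concept (NLTK provides trained taggers for real use), and all names below are hypothetical:

```python
from collections import Counter, defaultdict

def train_unigram_tagger(tagged_sents):
    # Count how often each word carries each tag, then keep the most frequent tag.
    # Sketch of the idea behind a unigram tagger, not NLTK's implementation.
    counts = defaultdict(Counter)
    for sent in tagged_sents:
        for word, tag_ in sent:
            counts[word.lower()][tag_] += 1
    return {word: c.most_common(1)[0][0] for word, c in counts.items()}

def tag(model, words, default="NN"):
    # Look up each word's learned tag; back off to a default tag for unseen words
    return [(w, model.get(w.lower(), default)) for w in words]

train = [
    [("the", "DT"), ("dog", "NN"), ("barks", "VBZ")],
    [("the", "DT"), ("cat", "NN"), ("sleeps", "VBZ")],
]
model = train_unigram_tagger(train)
tagged = tag(model, ["The", "dog", "sleeps"])
```

Combining several such taggers with backoff, as the book covers, is how accuracy improves beyond any single lookup table.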
About the Author
Jacob Perkins has been an avid user of open source software since high school, when he first built his own computer and didn’t want to pay for Windows. At one point he had five operating systems installed, including RedHat Linux, OpenBSD, and BeOS. While at Washington University in St. Louis, Jacob took classes in Spanish and poetry writing, and worked on an independent study project that eventually became his Master’s Project: WUGLE, a GUI for manipulating logical expressions. In his free time, he wrote the Gnome2 version of Seahorse (a GUI for encryption and key management), which has since been translated into over a dozen languages and is included in the default Gnome distribution. After getting his MS in Computer Science, Jacob tried to start a web development studio with some friends, but since no one knew anything about web development, it didn’t work out as planned. Once he’d actually learned web development, he went off and co-founded another company called Weotta, which sparked his interest in Machine Learning and Natural Language Processing. Jacob is currently the CTO/Chief Hacker for Weotta and blogs about what he’s learned along the way at http://streamhacker.com/. He also applies this knowledge to produce text processing APIs and demos at http://text-processing.com/. This book is a synthesis of his knowledge on processing text using Python, NLTK, and more.