Text Processing Course: Natural Language Processing by Columbia University

Name: Natural Language Processing

Website:

Description

Natural language processing (NLP) deals with the application of computational models to text or speech data. Application areas within NLP include automatic (machine) translation between languages; dialogue systems, which allow a human to interact with a machine using natural language; and information extraction, where the goal is to transform unstructured text into structured (database) representations that can be searched and browsed in flexible ways. NLP technologies are having a dramatic impact on the way people interact with computers, on the way people interact with each other through the use of language, and on the way people access the vast amount of linguistic data now in electronic form. From a scientific viewpoint, NLP involves fundamental questions of how to structure formal models (for example, statistical models) of natural language phenomena, and of how to design algorithms that implement these models.

In this course you will study mathematical and computational models of language, and the application of these models to key problems in natural language processing. The course has a focus on machine learning methods, which are widely used in modern NLP systems: we will cover formalisms such as hidden Markov models, probabilistic context-free grammars, log-linear models, and statistical models for machine translation. The curriculum closely follows a course currently taught by Professor Collins at Columbia University, and previously taught at MIT.
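To give a concrete flavor of one of these formalisms, the sketch below shows Viterbi decoding for a toy hidden Markov model tagger. It is a minimal illustration only, not taken from the course materials: the tag set, vocabulary, and probabilities are invented for the example, and a real tagger would estimate them from annotated data.

import math

# Toy model: three tags and a three-word vocabulary, invented for illustration.
states = ["D", "N", "V"]                    # determiner, noun, verb
start_p = {"D": 0.6, "N": 0.3, "V": 0.1}    # P(first tag)
trans_p = {                                 # P(current tag | previous tag)
    "D": {"D": 0.1, "N": 0.8, "V": 0.1},
    "N": {"D": 0.1, "N": 0.3, "V": 0.6},
    "V": {"D": 0.5, "N": 0.4, "V": 0.1},
}
emit_p = {                                  # P(word | tag)
    "D": {"the": 0.9, "dog": 0.05, "barks": 0.05},
    "N": {"the": 0.05, "dog": 0.8, "barks": 0.15},
    "V": {"the": 0.05, "dog": 0.15, "barks": 0.8},
}

def viterbi(words):
    # best[i][s]: log-probability of the best tag sequence ending in tag s at position i
    # back[i][s]: the previous tag on that best sequence
    best = [{s: math.log(start_p[s] * emit_p[s][words[0]]) for s in states}]
    back = [{}]
    for i in range(1, len(words)):
        best.append({})
        back.append({})
        for s in states:
            prev, score = max(
                ((p, best[i - 1][p] + math.log(trans_p[p][s] * emit_p[s][words[i]])) for p in states),
                key=lambda item: item[1],
            )
            best[i][s], back[i][s] = score, prev
    # Follow the backpointers from the best final tag to recover the sequence.
    tags = [max(states, key=lambda s: best[-1][s])]
    for i in range(len(words) - 1, 0, -1):
        tags.append(back[i][tags[-1]])
    return list(reversed(tags))

print(viterbi(["the", "dog", "barks"]))     # prints ['D', 'N', 'V']

The sketch works in log space to avoid numerical underflow on longer sentences; the course covers how such models are defined and how their parameters are estimated.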

About the Instructors

Michael Collins
Vikram S. Pandit Professor of Computer Science
Columbia University
Michael Collins is the Vikram S. Pandit Professor of Computer Science at Columbia University. Michael received bachelor's and MPhil degrees from Cambridge University, and a PhD from the University of Pennsylvania. He was then a researcher at AT&T Labs (1999-2002) and an assistant/associate professor at MIT (2003-2010), before joining Columbia University in January 2011. His research areas are natural language processing and machine learning, with a focus on problems such as statistical parsing, structured prediction in machine learning, and applications including machine translation, dialogue systems, and speech recognition. Michael is a Fellow of the Association for Computational Linguistics, and has received various awards including a Sloan Fellowship, an NSF CAREER Award, and best paper awards at several conferences.

