
A Tidy Data Model for Natural Language Processing

This talk introduces the R package cleanNLP, which provides fast tools for converting a textual corpus into a set of normalized tables.
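As a rough sketch of that workflow, the lines below annotate a two-document toy corpus and inspect the resulting token table. The cnlp_init_udpipe() and cnlp_annotate() names follow a later release of cleanNLP than the one presented in the talk (which initialized a CoreNLP backend), so treat the exact interface as illustrative rather than definitive.

    library(cleanNLP)

    # Initialize a backend; udpipe is used here only because it requires no
    # external dependencies -- the talk itself focuses on the CoreNLP backend.
    cnlp_init_udpipe()

    # A toy corpus: one element per document, with names used as document ids.
    corpus <- c(
      doc1 = "The cleanNLP package returns normalized tables.",
      doc2 = "Each token becomes one row, keyed by document and sentence."
    )

    # Annotate the corpus; the result bundles one tidy table per annotation task.
    anno <- cnlp_annotate(corpus)

    # One row per token, with lemma and part-of-speech columns alongside the ids.
    head(anno$token)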

The underlying natural language processing pipeline uses Stanford's CoreNLP library, exposing annotation tasks for text written in English, French, German, and Spanish (de Marneffe et al. 2014, 2016).

Annotators include tokenization, part-of-speech tagging, named entity recognition, entity linking, sentiment analysis, dependency parsing, coreference resolution, and information extraction (Lee et al. 2011).

The functionality provided by the package applies the tidy data philosophy (Wickham 2014) to the processing of raw textual data.

Together, these contributions simplify the process of doing exploratory data analysis over a corpus of text.

The output works seamlessly with tidy data tools as well as with other programming and graphing systems.
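For example, because each annotation is returned as an ordinary data frame, a standard dplyr pipeline can summarize it directly. The sketch below counts the most frequent noun lemmas, assuming the anno object from the earlier sketch and assuming the token table exposes lemma and upos columns, as it does in recent releases of the package.

    library(dplyr)

    # Treat the token table like any other tidy data frame.
    anno$token %>%
      filter(upos == "NOUN") %>%       # keep tokens tagged as nouns
      count(lemma, sort = TRUE) %>%    # tally lemma frequencies
      slice_head(n = 10)               # ten most common noun lemmas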

The talk will illustrate the basic usage of the cleanNLP package, explain the rationale behind the underlying data model, and work through an example based on a corpus containing the text of every State of the Union address delivered by a United States President (Peters 2016).



Source: useR 2017