Difference between revisions of "Parsing/home"
Revision as of 17:40, 30 January 2019
Background
An experimentation environment for data-driven dependency parsing is maintained for NLPL under the coordination of Uppsala University (UU). Initially, the software and data are commissioned on the Norwegian Abel supercomputing cluster.
Preprocessing Tools
Additionally, a variety of tools for sentence splitting, tokenization, lemmatization, and related preprocessing tasks are available through the NLPL installations of the Natural Language Toolkit (NLTK) and spaCy: Natural Language Processing in Python.