Parsing/home

From Nordic Language Processing Laboratory

Latest revision as of 09:18, 15 January 2020


= Background =

An experimentation environment for data-driven dependency parsing is maintained for NLPL under the coordination of Uppsala University (UU). The data is available on the Norwegian Saga cluster and on the Finnish Puhti cluster; the software is available on the Norwegian Saga cluster.

Initially, software and data were commissioned on the Norwegian Abel supercluster; see the Abel page for legacy information.

= Preprocessing Tools =

Additionally, a variety of tools for sentence splitting, tokenization, lemmatization, and related preprocessing steps are available through the NLPL installations of the Natural Language Toolkit (NLTK) and spaCy (Natural Language Processing in Python).
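As an illustration, tokenization with the NLPL-installed NLTK might look as follows. This is a minimal sketch, assuming NLTK is on the Python path (e.g. via the NLPL module installations); the example sentence is illustrative only.

```python
# Minimal tokenization sketch with NLTK (assumed available on the
# Python path). TreebankWordTokenizer needs no downloaded data;
# sentence splitting with nltk.sent_tokenize additionally requires
# the Punkt models, fetched via nltk.download("punkt").
from nltk.tokenize import TreebankWordTokenizer

tokenizer = TreebankWordTokenizer()
tokens = tokenizer.tokenize("NLPL maintains a parsing environment on Saga.")
```

Lemmatization and the other preprocessing steps follow the same pattern; for details we refer to the official NLTK and spaCy documentation.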

= Parsing Systems =


* [http://wiki.nlpl.eu/index.php/Parsing/udpipe UDPipe]
* [http://wiki.nlpl.eu/index.php/Parsing/turboparser TurboParser]

Additionally, parsers are available in several toolkits installed by NLPL: [http://wiki.nlpl.eu/index.php/Parsing/stanfordnlp StanfordNLP], [https://www.nltk.org/ NLTK], [https://spacy.io/ spaCy]. For the parsers in these toolkits we refer to the official documentation.
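UDPipe and the other UD parsers above read and write the CoNLL-U format (one token per line, ten tab-separated fields, blank line between sentences). A minimal sketch of reading such output in plain Python; the sample sentence is illustrative, not taken from the NLPL data:

```python
# Field names as defined by the CoNLL-U specification.
CONLLU_FIELDS = ["id", "form", "lemma", "upos", "xpos",
                 "feats", "head", "deprel", "deps", "misc"]

def read_conllu(text):
    """Yield one sentence at a time as a list of token dicts."""
    sentence = []
    for line in text.splitlines():
        line = line.rstrip("\n")
        if not line.strip():          # blank line ends a sentence
            if sentence:
                yield sentence
                sentence = []
        elif line.startswith("#"):    # skip comment/metadata lines
            continue
        else:
            fields = line.split("\t")
            sentence.append(dict(zip(CONLLU_FIELDS, fields)))
    if sentence:                      # final sentence without trailing blank
        yield sentence

sample = """\
# text = Dogs bark .
1\tDogs\tdog\tNOUN\t_\t_\t2\tnsubj\t_\t_
2\tbark\tbark\tVERB\t_\t_\t0\troot\t_\t_
3\t.\t.\tPUNCT\t_\t_\t2\tpunct\t_\t_
"""

sentences = list(read_conllu(sample))
```

In practice one would use a dedicated reader (e.g. the `conllu` Python package) rather than hand-rolled parsing, but the sketch shows the structure the parsers produce.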

= Training and Evaluation Data =
