Eosc/norbert/benchmark

From Nordic Language Processing Laboratory
 
* PoS tagging: [https://github.com/UniversalDependencies/UD_Norwegian-NynorskLIA LIA] + NDT [https://github.com/UniversalDependencies/UD_Norwegian-Bokmaal Bokmaal] / [https://github.com/UniversalDependencies/UD_Norwegian-Nynorsk Nynorsk]
 
* Dependency parsing: [https://github.com/UniversalDependencies/UD_Norwegian-NynorskLIA LIA] + NDT [https://github.com/UniversalDependencies/UD_Norwegian-Bokmaal Bokmaal] / [https://github.com/UniversalDependencies/UD_Norwegian-Nynorsk Nynorsk]
* NER: [https://github.com/ltgoslo/norne NorNE]
 
* Co-reference resolution (annotation ongoing)
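
The UD treebanks above are distributed in CoNLL-U format. As a minimal sketch of what a tagging benchmark involves (the inline sample sentence and the trivial all-NOUN baseline below are illustrative stand-ins, not taken from the treebanks), one can read the gold UPOS column and score a tagger's predictions by accuracy:

```python
# Hedged sketch: reading gold UPOS tags from CoNLL-U (the format of the
# UD treebanks above) and scoring a tagger by accuracy.  The sample string
# and the all-NOUN "tagger" are illustrative, not real benchmark data.

SAMPLE = "\n".join([
    "# text = Dette er en test .",
    "\t".join("1 Dette dette PRON _ _ 0 _ _ _".split()),
    "\t".join("2 er være AUX _ _ 1 _ _ _".split()),
    "\t".join("3 en en DET _ _ 4 _ _ _".split()),
    "\t".join("4 test test NOUN _ _ 1 _ _ _".split()),
    "\t".join("5 . $. PUNCT _ _ 1 _ _ _".split()),
])

def read_upos(conllu_text):
    """Yield (form, upos) pairs; skip comments, multiword tokens, empty nodes."""
    for line in conllu_text.splitlines():
        if not line or line.startswith("#"):
            continue
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:  # multiword range / empty node
            continue
        yield cols[1], cols[3]

def tagging_accuracy(gold, predicted):
    """Fraction of tokens whose predicted tag matches the gold tag."""
    assert len(gold) == len(predicted)
    return sum(g == p for g, p in zip(gold, predicted)) / len(gold)

gold = [upos for _, upos in read_upos(SAMPLE)]
predicted = ["NOUN"] * len(gold)          # trivial baseline for illustration
print(tagging_accuracy(gold, predicted))  # 1 of 5 tags match -> 0.2
```

The same reader applies unchanged to the Bokmaal, Nynorsk, and LIA test splits; only the file being read differs.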
  

Revision as of 11:55, 23 June 2021

Emerging Thoughts on Benchmarking

The following would be natural places to start. For most of these tasks we have baseline numbers to compare against, but no existing set-ups where we could simply plug in a Norwegian BERT and run, so we may need to identify suitable code for existing BERT-based architectures (e.g. for English) to re-use. For the first task, however (document-level SA on NoReC), Jeremy has an existing set-up for using mBERT that we could perhaps build on.
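
For the NoReC set-up, the "plug in a Norwegian BERT" idea can be sketched with the Hugging Face transformers library (an assumption; the page does not name a framework, and the checkpoint name and six-way label scheme below are illustrative): the pretrained encoder gets a fresh classification head, and swapping mBERT for a Norwegian model is just a change of checkpoint string.

```python
# Hedged sketch: set-up for document-level sentiment classification on NoReC.
# Assumes the Hugging Face `transformers` library; the mBERT checkpoint name
# and the six-way label scheme are illustrative assumptions, not this page's.

def build_classifier(checkpoint="bert-base-multilingual-cased", num_labels=6):
    """Pretrained encoder plus a randomly initialised classification head.

    To benchmark a Norwegian BERT in the same pipeline, only `checkpoint`
    needs to change; tokenisation, training, and evaluation code are reused.
    """
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=num_labels
    )
    return tokenizer, model
```

Calling `build_classifier()` downloads the checkpoint on first use; passing a Norwegian BERT checkpoint string instead of the mBERT default is the only change needed to benchmark the Norwegian model with an otherwise identical set-up.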

NLP tasks

Lexical

Text classification

Other