Eosc/norbert/benchmark
Revision as of 11:47, 23 June 2021
Emerging Thoughts on Benchmarking
The following would be natural places to start. For most of these, we have baseline numbers to compare to, but no existing set-ups where we could simply plug in a Norwegian BERT and run. We may therefore need to identify suitable code for existing BERT-based architectures (e.g. for English) to re-use. For the first task, though (document-level SA on NoReC), Jeremy has an existing set-up using mBERT that we could perhaps re-use.
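Whatever set-up we end up re-using, each task boils down to the same harness: load a labelled split, produce predictions, and score them (macro-F1 is the usual choice for the skewed polarity classes). A minimal sketch of that harness, with a majority-class baseline standing in for the model; the 3-way label set is an assumption for illustration:

```python
from collections import Counter

def macro_f1(gold, pred, labels):
    """Macro-averaged F1 over the given label set."""
    f1s = []
    for lab in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == lab and p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def majority_baseline(train_labels, n_test):
    """Predict the most frequent training label for every test item."""
    most_common = Counter(train_labels).most_common(1)[0][0]
    return [most_common] * n_test

# Toy 3-way polarity data (hypothetical labels, not from NoReC itself).
train = ["positive", "positive", "negative", "neutral", "positive"]
gold = ["positive", "negative", "neutral", "positive"]
pred = majority_baseline(train, len(gold))
print(round(macro_f1(gold, pred, ["negative", "neutral", "positive"]), 3))
```

Any BERT-based system plugged into the harness only needs to replace `majority_baseline` with its own prediction function; the scoring stays the same across models.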
NLP tasks
- Structured sentiment analysis: NoReC_fine
- Sentence-level 2/3-way polarity: NoReC_sentences
- Negation cues and scopes (evaluation is still being developed): NoReC_neg
Linguistic pipeline (dependency parsing or PoS tagging)
Lexical
- Word sense disambiguation in context
- Norwegian synonyms (for static models)
- Norwegian analogies (for static models)
- NorSentLex: Sentiment lexicon (for static models)
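Evaluating static models on the analogy set amounts to the standard vector-offset test (a : b :: c : ?), answered by 3CosAdd nearest-neighbour search. A minimal sketch with toy 2-d vectors; in practice the embeddings would be loaded from the trained static model, and the Norwegian word pairs here are purely illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def analogy(emb, a, b, c):
    """Return the word d maximising cos(emb[d], emb[b] - emb[a] + emb[c]),
    excluding the three query words (3CosAdd)."""
    target = [x - y + z for x, y, z in zip(emb[b], emb[a], emb[c])]
    best, best_sim = None, -2.0
    for w, v in emb.items():
        if w in (a, b, c):
            continue
        sim = cosine(v, target)
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# Toy embeddings: mann : kvinne :: konge : dronning
emb = {
    "mann": [1.0, 0.0],
    "kvinne": [1.0, 1.0],
    "konge": [2.0, 0.0],
    "dronning": [2.0, 1.0],
}
print(analogy(emb, "mann", "kvinne", "konge"))  # -> dronning
```

Accuracy on the analogy benchmark is then simply the fraction of quadruples where the returned word matches the gold fourth word.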
Text classification
- NoReC: document-level ratings
- Talk of Norway
- NorDial