Difference between revisions of "Eosc/norbert/benchmark"

From Nordic Language Processing Laboratory
== Emerging Thoughts on Benchmarking ==

The following would be natural places to start. For most of these, while we do have baseline numbers to compare to, we do not have existing set-ups where we could simply plug in a Norwegian BERT and run, so we may need to identify suitable code for existing BERT-based architectures (e.g. for English) that we can reuse. For the first task, though (document-level SA on NoReC), Jeremy has an existing set-up using mBERT that we could perhaps use.
 
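A minimal, hypothetical sketch (not the project's actual code) of what "plugging in" a BERT could look like for the first task, document-level SA on NoReC cast as rating prediction, using the Hugging Face Transformers API. The model name and the 1–6 rating scale are assumptions on our part:

```python
# Illustrative sketch: rating prediction on NoReC with a BERT-style
# encoder.  Assumes NoReC reviews carry a 1-6 ("dice") rating.

NUM_RATINGS = 6  # assumed 1-6 rating scale

def rating_to_label(rating: int) -> int:
    """Map a 1-6 review rating to a 0-based class index for the head."""
    if not 1 <= rating <= NUM_RATINGS:
        raise ValueError(f"rating out of range: {rating}")
    return rating - 1

def build_model(model_name: str = "bert-base-multilingual-cased"):
    """Load a pretrained encoder with a fresh sequence-classification
    head; swapping in a Norwegian BERT is only a change of model_name."""
    from transformers import (AutoTokenizer,
                              AutoModelForSequenceClassification)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=NUM_RATINGS)
    return tokenizer, model
```

The same scaffolding would carry over to the other document-classification datasets below, with only the label set changing.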
== Linguistic pipeline ==
*[https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-10/ NDT]; for dependency parsing or PoS tagging (perhaps best to use the UD version)
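One practical detail when reusing BERT-based code for PoS tagging (and the other token-level tasks below) is aligning word-level tags with subword tokens. A small illustrative helper, assuming the common convention of labelling only the first subword of each word and masking the rest with -100 (PyTorch's cross-entropy `ignore_index`); the function name is ours:

```python
def align_labels(word_ids, word_labels, ignore_index=-100):
    """Expand word-level tags to subword tokens.

    word_ids: per-token word index as produced by a subword tokenizer,
              with None for special tokens ([CLS], [SEP], padding).
    word_labels: one tag id per word.
    Only the first subword of each word keeps its tag; continuation
    subwords and special tokens are masked with ignore_index.
    """
    labels, prev = [], None
    for wid in word_ids:
        if wid is None:          # special token
            labels.append(ignore_index)
        elif wid != prev:        # first subword of a new word
            labels.append(word_labels[wid])
        else:                    # continuation subword
            labels.append(ignore_index)
        prev = wid
    return labels
```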
== Document classification ==
 
*[https://github.com/ltgoslo/norec_fine NoReC]; for document-level sentiment analysis (i.e. rating prediction). Note that we would want to use a newer version than the current official release: it has 10k more sentences and is soon to be officially released.
 
*[https://github.com/ltgoslo/talk-of-norway Talk of Norway]
*[https://github.com/jerbarnes/norwegian_dialect NorDial]
== Other ==
 
*[https://github.com/ltgoslo/norec_fine NoReC_fine]; subset of documents from NoReC annotated with fine-grained sentiment (e.g. for predicting target expression + polarity)
 
 
 
*[https://github.com/ltgoslo/norne NorNE]; for named entity recognition, extends NDT (also available for the UD version)
 
*NoReC_neg; soon to be released; adds negation cues and scopes to the same subset of sentences as in NoReC_fine.
 
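The span-style annotations above (named entities, sentiment targets, negation cues and scopes) are commonly reduced to token classification for BERT via a BIO encoding. A small illustrative converter, assuming spans given as (start, end, label) token offsets with end exclusive; this is one common scheme, not necessarily what the datasets' own tooling uses:

```python
def spans_to_bio(n_tokens, spans):
    """Convert (start, end, label) spans over a sentence of n_tokens
    tokens (end exclusive) to BIO tags: B- on the first token of a
    span, I- on the rest, O elsewhere.  Assumes non-overlapping spans."""
    tags = ["O"] * n_tokens
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags
```

With such an encoding in place, NER on NorNE, target extraction on NoReC_fine, and scope resolution on NoReC_neg could all reuse the same token-classification fine-tuning loop.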
 

Revision as of 11:10, 23 June 2021
