Eosc/norbert/benchmark

Emerging Thoughts on Benchmarking

These would be natural places to start:

  • NoReC (https://github.com/ltgoslo/norec_fine); for document-level sentiment analysis (i.e. rating prediction)
  • NoReC_fine (https://github.com/ltgoslo/norec_fine); for fine-grained sentiment analysis (e.g. predicting target expressions and their polarity)
  • NDT (https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-10/); for dependency parsing or PoS tagging (perhaps best to use the UD version)
  • NorNE (https://github.com/ltgoslo/norne); for named entity recognition; extends NDT (also available for the UD version); see the loading sketch below
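
As a rough illustration of how the treebank-based resources above (NDT with NorNE annotations) could be read for benchmarking, the sketch below loads a UD-formatted .conllu file with the third-party conllu package. This is a minimal sketch under stated assumptions, not taken from any of the repositories: the file name is a placeholder, and the assumption that entity labels appear under a name= key in the MISC column should be checked against the NorNE documentation.

  # A minimal sketch, assuming a local UD-style .conllu file (e.g. from the UD
  # release of NDT, with NorNE entity annotations merged in). The file name is
  # a placeholder. Requires the third-party "conllu" package (pip install conllu).
  from conllu import parse_incr

  TREEBANK = "no_bokmaal-ud-train.conllu"  # placeholder file name

  with open(TREEBANK, encoding="utf-8") as f:
      for sentence in parse_incr(f):
          # Each token behaves like a dict of UD fields: form, lemma, upos, head,
          # deprel, misc, ... (older conllu versions use "upostag"/"xpostag").
          tokens = [token["form"] for token in sentence]
          pos_tags = [token["upos"] for token in sentence]
          # Assumption: entity labels sit in the MISC column under a "name" key
          # (e.g. name=B-PER); adjust the key if the release uses another convention.
          entities = [(token["misc"] or {}).get("name", "O") for token in sentence]
          print(tokens, pos_tags, entities)
          break  # only the first sentence, as a quick sanity check

Analogous loaders would be needed for NoReC and NoReC_fine, whose data formats are documented in their respective repositories.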