Emerging Thoughts on Benchmarking
These would be natural places to start:

- NoReC, for document-level sentiment analysis (i.e. rating prediction): https://github.com/ltgoslo/norec
- NoReC_fine, for fine-grained sentiment analysis (e.g. predicting target expressions and their polarity): https://github.com/ltgoslo/norec_fine
- NDT, for dependency parsing or PoS tagging (perhaps best to use the UD version; a CoNLL-U reading sketch follows this list): https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-10/
- NorNE, for named entity recognition; extends NDT (also available in the UD version): https://github.com/ltgoslo/norne
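
Since NDT, NorNE and their UD versions are all distributed in CoNLL-U format, a single reading loop covers the PoS-tagging, dependency-parsing and NER data above. A minimal sketch, assuming the third-party conllu Python package and a UD-style file name (both are assumptions, not prescribed by this page); the "name" key used for NorNE entity labels in the MISC column is likewise inferred from the released files and may differ:

<syntaxhighlight lang="python">
from conllu import parse_incr  # pip install conllu

# Illustrative file name; substitute the actual NDT/NorNE release file.
with open("no_bokmaal-ud-train.conllu", encoding="utf-8") as f:
    for sentence in parse_incr(f):
        for token in sentence:
            form = token["form"]      # surface form
            upos = token["upos"]      # universal PoS tag (PoS tagging)
            head = token["head"]      # index of syntactic head (parsing)
            deprel = token["deprel"]  # dependency relation label
            misc = token["misc"] or {}
            # NorNE stores entity labels in the MISC column; the exact
            # key ("name") is an assumption based on the released files.
            entity = misc.get("name", "O")
            print(form, upos, head, deprel, entity)
</syntaxhighlight>

The same loop can drive either task setup: feed (form, upos) pairs to a tagger, or (head, deprel) to a parser evaluation, without any dataset-specific preprocessing.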