Eosc/norbert/benchmark
Revision as of 20:45, 3 December 2020
Emerging Thoughts on Benchmarking
The following would be natural places to start. For most of these tasks we have baseline numbers to compare against, but no existing set-ups where we could simply plug in a Norwegian BERT and run, so we may need to identify suitable code for existing BERT-based architectures (e.g. for English) to re-use. For the first task, though (document-level SA on NoReC), Jeremy has an existing set-up using mBERT that we could perhaps build on.
- NoReC: for document-level sentiment analysis (i.e. rating prediction)
- NoReC_fine: for fine-grained sentiment analysis (e.g. predicting target expression + polarity)
- NDT: for dependency parsing or PoS tagging (perhaps best to use the UD version)
- NorNE: for named entity recognition; extends NDT (also available in the UD version)
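For the first task, a minimal sketch of what "plugging in" a Norwegian BERT could look like, using the Hugging Face transformers library. The model path is a placeholder (no Norwegian BERT checkpoint is named in the notes), and framing rating prediction as 6-way classification over NoReC's 1-6 rating scale is an assumption, not the established set-up.

```python
# Sketch: document-level sentiment analysis on NoReC as rating prediction,
# treated here as 6-way classification over the 1-6 rating scale.
# The model name below is a placeholder, not a real published checkpoint.

def rating_to_label(rating: int) -> int:
    """Map a NoReC rating (assumed 1-6) to a zero-based class label."""
    assert 1 <= rating <= 6, "NoReC ratings are assumed to be on a 1-6 scale"
    return rating - 1

def build_model_and_tokenizer(model_name: str = "path/to/norwegian-bert"):
    """Load a BERT checkpoint with a fresh 6-way classification head.

    Requires: pip install transformers
    """
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=6  # one class per NoReC rating value
    )
    return model, tokenizer
```

The same pattern (swap the checkpoint name, keep the task head) is what re-using an existing English BERT set-up would amount to for the other tasks as well, with `AutoModelForTokenClassification` instead for NER on NorNE.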