Norwegian Large-scale Language Models
Welcome to the emerging collection of large-scale contextualized and generative language models for the Norwegian language. NorLM (or, more recently, NORA.LLM) originated as a joint initiative of the projects EOSC-Nordic (European Open Science Cloud), SANT (Sentiment Analysis for Norwegian), and HPLT (High-Performance Language Technologies), in collaboration with the AI Laboratory of the National Library of Norway and the National e-Infrastructure Services, coordinated by the Language Technology Group (LTG) at the University of Oslo.
We are working to provide these models and supporting tools for researchers and developers in Natural Language Processing (NLP) for the Norwegian language. We do so in the hope of facilitating scientific experimentation with and practical applications of state-of-the-art NLP architectures, as well as enabling others to develop their own large-scale models, for example for domain- or application-specific tasks, language variants, or even languages other than Norwegian.
Under the auspices of the NLPL use case in EOSC-Nordic, we are coordinating with colleagues in Denmark, Finland, and Sweden on a collection of large contextualized language models for the Nordic languages, including language variants or related groups of languages, as linguistically or technologically appropriate.
Available Models
At this stage of development, Norwegian models for four architecture families are available (a brief usage sketch follows the list):
- NorELMo: LSTM-Based Architectures
- NorBERT: Transformer-Based Architectures
- NorT5: Combined Encoder–Decoder Architecture
- NorMistral & NorBLOOM: Generative Language Models
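As a quick orientation, the following is a minimal sketch, not an official recipe, of how one of the generative models could be queried through the Huggingface Transformers library. The model identifier "norallm/normistral-7b-warm" is an assumption for illustration only and should be verified on the Huggingface Hub before use.

# A minimal sketch: load a NorMistral checkpoint from the Huggingface Hub
# and generate a short Norwegian continuation with greedy decoding.
# The identifier "norallm/normistral-7b-warm" is assumed; check the Hub.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "norallm/normistral-7b-warm"  # hypothetical identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Norge er et land i"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))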
We emphatically welcome all kinds of user feedback, including suggestions for improvement or for additional types of Norwegian contextualized language models or associated tools. Please contact us via the NorLM technical coordinator, Andrey Kutuzov.
License and Access
All Norwegian language models from the NorLM initiative are publicly available for download under open source licenses, either from the NLPL Vectors Repository, or through the Huggingface Hub. The NorBERT model is also included with the Huggingface Transformers Library.
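As an illustration of Hub access, the sketch below loads NorBERT with the Transformers library and fills in a masked Norwegian word. The model identifier "ltg/norbert" is an assumption and should be checked against the Huggingface Hub or the NLPL Vectors Repository.

# A minimal masked-language-modelling sketch with NorBERT via Transformers.
# The model identifier "ltg/norbert" is an assumption; verify it on the Hub.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "ltg/norbert"  # hypothetical identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = f"Oslo er hovedstaden i {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring token for the masked position.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted))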
Related Work
Our paper "Large-Scale Contextualised Language Modelling for Norwegian" was presented at the NoDaLiDa 2021 conference.
Acknowledgements
The NorLM resources are being developed on the Norwegian national supercomputing services operated by UNINETT Sigma2, the National Infrastructure for High Performance Computing and Data Storage in Norway. Software provisioning was financially supported through the European EOSC-Nordic project; data preparation and evaluation were supported by the Norwegian SANT and the Horizon Europe HPLT projects. We are indebted to all funding agencies involved, the University of Oslo, and the Norwegian taxpayer.