Difference between revisions of "Community/workshop/2019/program"
The following is the scientific programme for
[http://wiki.nlpl.eu/index.php/Community/workshop The First NLPL Workshop on Deep Learning for Natural Language Processing]
| || || '''Session 1''' Chair: Leon Derczynski
|-
| 09:20 || 09:40 || [http://svn.nlpl.eu/outreach/dl4nlp/2019/W19-0501.pdf Timothee Mickus, Denis Paperno and Matthieu Constant: <i>Mark my Word: A Sequence-to-Sequence Approach to Definition Modeling</i>]
|-
| 09:40 || 10:00 || [http://svn.nlpl.eu/outreach/dl4nlp/2019/W19-0502.pdf Robin Kurtz, Daniel Roxbo and Marco Kuhlmann: <i>Improving Semantic Dependency Parsing with Syntactic Features</i>]
|-
| 10:00 || 10:30 || '''Coffee Break'''
| || || '''Session 2''' Chair: Sara Stymne
|-
| 11:30 || 11:50 || [http://svn.nlpl.eu/outreach/dl4nlp/2019/W19-0503.pdf Andrey Kutuzov and Elizaveta Kuzmenko: <i>To Lemmatize or Not to Lemmatize: How Word Normalisation Affects ELMo Performance in Word Sense Disambiguation</i>]
|-
| 11:50 || 12:10 || [http://svn.nlpl.eu/outreach/dl4nlp/2019/W19-0504.pdf Samuel Rönnqvist, Jenna Kanerva, Tapio Salakoski and Filip Ginter: <i>Is Multilingual BERT Fluent in Language Generation?</i>]
|-
| 12:10 || 12:30 || [http://svn.nlpl.eu/outreach/dl4nlp/2019/W19-0505.pdf Vinit Ravishankar, Memduh Gökırmak, Lilja Øvrelid and Erik Velldal: <i>Multilingual Probing of Contextual Sentence Encoders</i>]
|-
| 12:30 || 14:00 || '''Lunch Break'''
| || || '''Session 3''' Chair: Lilja Øvrelid
|-
| 15:30 || 15:50 || [http://svn.nlpl.eu/outreach/dl4nlp/2019/W19-0506.pdf Nicolaj Filrup Rasmussen, Kristian Nørgaard Jensen, Marco Placenti and Thai Wang: <i>Cross-Domain Sentiment Classification using Vector Embedded Domain Representations</i>]
|-
| 15:50 || 16:10 || [http://svn.nlpl.eu/outreach/dl4nlp/2019/W19-0507.pdf Matthias Damaschk, Tillmann Dönicke and Florian Lux: <i>Multiclass Text Classification on Unbalanced, Sparse and Noisy Data</i>]
|-
| 16:10 || 16:25 || Bjørn Lindi: <i>A Teaser for the NLPL Infrastructure</i>
|-
| 16:25 || 16:40 || '''Discussion and Closing'''
|}
Latest revision as of 18:07, 18 September 2019