#EMNLP2023

WikiResearch @wikiresearch
2024-01-18

RT by @wikiresearch: The video of our presentation w/ @rnav_arora of our paper »Transparent Stance Detection in Multilingual Wikipedia Editor Discussions«, predicting Wikipedia policies for content moderation, is now online at
youtu.be/UUuC6Q1SIoM?t=2190 twitter.com/frimelle/status/17

WikiResearch @wikiresearch
2024-01-15

RT by @wikiresearch: Excited to start the new year by presenting our paper on Transparent Stance Detection in Multilingual Wikipedia Editor Discussions w/ @rnav_arora @IAugenstein at the @Wikimedia Research Showcase!
Online, 17.01., 17:30 UTC

mediawiki.org/wiki/Wikimedia_R @wikiresearch twitter.com/frimelle/status/17

WikiResearch @wikiresearch
2023-12-28

A paper on the topic by Max Glockner (UKP Lab), @ievaraminta Staliūnaitė (University of Cambridge), James Thorne (KAIST AI), Gisela Vallejo (University of Melbourne), Andreas Vlachos (University of Cambridge) and Iryna Gurevych was accepted to TACL and has just been presented at #EMNLP2023.

📄 arxiv.org/abs/2104.00640

➡️ sigmoid.social/@UKPLab/1115613

2023-12-19

At #EMNLP2023, our colleague Jonathan Tonglet presented his master's thesis, conducted at KU Leuven. Find out more about »SEER: A Knapsack Approach to Exemplar Selection for In-Context HybridQA« in this thread 🧵:

➡️ sigmoid.social/@UKPLab/1113743

A picture of Jonathan Tonglet at #EMNLP2023
Niloufar Salehi @Niloufar@hci.social
2023-12-13

Many models produce outputs that are hard to verify for an end user.

🏆 Our new #emnlp2023 paper won an outstanding paper award for showing that a secondary quality estimation model can help users decide when to rely on the model output.

We ran a controlled experiment showing that a calibrated quality estimation model can make physicians twice as accurate at deciding when to rely on a translation model's output.
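The decision aid described above can be sketched as a simple gate: a calibrated quality-estimation (QE) score determines whether the user should rely on the model output or verify it first. The function name, threshold, and scores below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: use a calibrated QE score to decide whether
# a user should rely on a translation or verify it by other means.

def should_rely(qe_score: float, threshold: float = 0.7) -> bool:
    """Return True if the calibrated QE score suggests the output
    is reliable enough to act on without further verification."""
    return qe_score >= threshold

# Route each translation (id, QE score) to "rely" or "verify".
outputs = [("T1", 0.92), ("T2", 0.41), ("T3", 0.78)]
decisions = {tid: ("rely" if should_rely(s) else "verify")
             for tid, s in outputs}
print(decisions)  # {'T1': 'rely', 'T2': 'verify', 'T3': 'rely'}
```

The key point from the paper is calibration: the gate only helps users if the QE score actually tracks translation quality.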

Paper: arxiv.org/pdf/2310.16924v1.pdf

2023-12-11

A group photo from the poster presentation of »AmbiFC: Fact-Checking Ambiguous Claims with Evidence«, co-authored by our colleague Max Glockner, @ievaraminta, James Thorne, Gisela Vallejo, Andreas Vlachos and Iryna Gurevych. #EMNLP2023

2023-12-11

A successful #EMNLPMeeting has come to an end! A group photo of our colleagues Yongxin Huang, Jonathan Tonglet, Aniket Pramanick, Sukannya Purkayastha, Dominic Petrak and Max Glockner, who represented the UKP Lab in Singapore! #EMNLP2023

2023-12-09

You can find our paper here:
📃 arxiv.org/abs/2311.00408
and our code here:
💻 github.com/UKPLab/AdaSent

Check out the work of our authors Yongxin Huang, Kexin Wang, Sourav Dutta, Raj Nath Patel, Goran Glavaš and Iryna Gurevych! (6/🧵) #EMNLP2023 #AdaSent #NLProc

2023-12-09

What makes the difference 🧐 ?

We attribute the effectiveness of the sentence encoding adapter to the consistency between the pre-training and DAPT objectives of the base PLM. If the base PLM is domain-adapted with a different loss, the adapter is no longer compatible, which is reflected in a performance drop. (5/🧵) #EMNLP2023

2023-12-09

AdaSent decouples DAPT and SEPT by storing the sentence-encoding ability in an adapter, which is trained only once in the general domain and then plugged into various DAPT-ed PLMs. It can match or surpass the performance of DAPT→SEPT, with more efficient training. (4/🧵) #EMNLP2023
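The plug-in idea can be sketched as a bottleneck adapter that turns a backbone's token states into a sentence embedding. This is a minimal NumPy sketch, not the authors' implementation: the weights are random stand-ins, and the random tensors play the role of token states from two different DAPT-ed backbones.

```python
import numpy as np

rng = np.random.default_rng(0)

class SentenceAdapter:
    """Bottleneck adapter mapping token states to a sentence embedding.
    In AdaSent's scheme it is trained once on general-domain data and
    then reused with any DAPT-ed backbone (weights here are random)."""
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        self.W_down = rng.standard_normal((hidden, bottleneck)) * 0.02
        self.W_up = rng.standard_normal((bottleneck, hidden)) * 0.02

    def __call__(self, token_states: np.ndarray) -> np.ndarray:
        # Residual bottleneck (down-project, ReLU, up-project) per
        # token, then mean pooling over the sequence dimension.
        h = token_states + np.maximum(token_states @ self.W_down, 0) @ self.W_up
        return h.mean(axis=1)  # shape: (batch, hidden)

adapter = SentenceAdapter()  # trained once, general domain
for states in (rng.standard_normal((2, 10, 768)),   # stand-ins for token
               rng.standard_normal((2, 10, 768))):  # states of two DAPT-ed PLMs
    print(adapter(states).shape)  # (2, 768) sentence embeddings
```

The design point is that only the small adapter carries the sentence-encoding skill, so swapping the (much larger) domain-adapted backbone requires no retraining.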

2023-12-09

Domain-adapted sentence embeddings can be created by applying general-domain SEPT on top of a domain-adapted base PLM (DAPT→SEPT). But this requires repeating the same SEPT procedure on each DAPT-ed PLM for every domain, which is computationally inefficient. (3/🧵) #EMNLP2023

2023-12-09

In our #EMNLP2023 paper we demonstrate AdaSent's effectiveness in extensive experiments on 17 different few-shot sentence classification datasets! It matches or surpasses the performance of full SEPT on DAPT-ed PLM (DAPT→SEPT) while substantially reducing training costs. (2/🧵)

2023-12-09

Need a lightweight solution for few-shot domain-specific sentence classification?

We propose #AdaSent!
🚀 Up to 7.2-point accuracy gain in 8-shot classification with 10K unlabeled examples
🪶 Small backbone with 82M parameters
🧩 Reusable general sentence adapter across domains
(1/🧵) #EMNLP2023

2023-12-09

Which factors shape #NLProc research over time? This was the topic of the talk by our colleague Aniket Pramanick at #EMNLP2023!

Learn more about the paper by him, Yufang Hou, Saif M. Mohammad & Iryna Gurevych here: 📑 arxiv.org/abs/2305.12920

2023-12-08

If you are around at #EMNLP2023, look out for our colleague Sukannya Purkayastha, who today presented our paper on the use of Jiu-Jitsu argumentation in #PeerReview, authored by her, Anne Lauscher (Universität Hamburg) and Iryna Gurevych.

📑 arxiv.org/abs/2311.03998

2023-12-08

Check out the full paper on arXiv and the code on GitLab – we look forward to your thoughts and feedback! (9/9) #NLProc #eRisk #EMNLP2023

Paper 📄 arxiv.org/abs/2211.07624
Code ⌨️ gitlab.irlab.org/anxo.pvila/se

2023-12-08

We also illustrate how our semantic retrieval pipeline provides interpretability of the symptom estimation, highlighting the most relevant sentences. (8/🧵) #EMNLP2023 #NLProc
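The highlighting idea above can be sketched as retrieval: score each of a user's sentences against a symptom description and surface the top match. This sketch uses a simple bag-of-words cosine as a stand-in similarity; the paper's pipeline uses semantic retrieval, and the example sentences are invented for illustration.

```python
from collections import Counter
import math

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts (illustrative
    stand-in for the semantic similarity used in the paper)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

symptom = "loss of interest in daily activities"
sentences = [
    "I watched a movie yesterday",
    "I have lost all interest in my daily activities",
    "The weather was nice",
]
# Rank sentences by relevance to the symptom; highlight the top one.
ranked = sorted(sentences, key=lambda s: bow_cosine(symptom, s), reverse=True)
print(ranked[0])  # "I have lost all interest in my daily activities"
```

Surfacing the retrieved sentences gives users evidence for each symptom estimate instead of an opaque score, which is the interpretability claim above.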

2023-12-08

Our approaches achieve good performance on two Reddit benchmark collections (DCHR metric). (7/🧵) #EMNLP2023 #NLProc
