Bio

I’m a postdoc at the University of Copenhagen researching sustainable machine learning and scientific fact checking. Previously, I was a PhD fellow in the CopeNLU group, where I worked on automated fact checking, automatic understanding and analysis of science communication, and domain adaptation. I received my master’s degree from the University of California, San Diego, and have worked at IBM Research and at the Allen Institute for Artificial Intelligence on the Semantic Scholar project. I also write occasionally on Substack. Outside of science I like making music, playing Dungeons and Dragons :dragon_face:, and rhythm games :arrow_left::arrow_down::arrow_up::arrow_right:.

News

  • (20/07/2023) “Modeling Information Change in Science Communication with Semantically Matched Paraphrases” received an honorable mention (top 5 submission) at the International Conference on Computational Social Science!

  • (25/06/2023) I was awarded a two-year postdoc fellowship from the Danish Data Science Academy to work on NLP for science communication!

  • (01/02/2023) Started a postdoc at the University of Copenhagen on sustainable machine learning

  • (06/10/2022) “Modeling Information Change in Science Communication with Semantically Matched Paraphrases” was accepted to EMNLP 2022!

  • (15/03/2022) Gave an invited talk about science communication and misinformation detection at Elsevier

  • (24/02/2022) One paper accepted to ACL on generating scientific claims for zero-shot scientific fact checking! This work was done during my internship at AI2

  • (21/01/2022) Gave an invited talk about exaggeration detection in science for Search Engines Amsterdam

  • (01/09/2021) Our paper on few-shot learning for exaggeration detection in science was accepted to EMNLP 2021

  • (02/08/2021) One paper published in Findings of ACL

  • (01/06/2021) Started an internship at AI2 on the Semantic Scholar team with Lucy Wang, working on scientific claim generation

  • (01/03/2021) Gave a talk at ETH Zürich about cite-worthiness detection.

  • (15/09/2020) Two main conference papers and one Findings paper accepted to EMNLP 2020. Announcement thread

  • (19/07/2020) New website is now live!

  • (08/07/2020) We hosted a series of limited-attendance meetups at the University of Copenhagen to watch the ACL live sessions, with plenty of interesting discussions and good company :smile:

  • (05/03/2020) Preprint of our work on claim check-worthiness detection (w/ Isabelle Augenstein) is now available: https://arxiv.org/pdf/2003.02736.pdf

  • (01/10/2019) Started my PhD in natural language processing and machine learning at the University of Copenhagen

Featured Publications

Transformer Based Multi-Source Domain Adaptation

Dustin Wright and Isabelle Augenstein

Published in EMNLP, 2020

We demonstrate that, when using large pretrained transformer models, mixture-of-experts methods can lead to significant gains in multi-source domain adaptation settings, while domain adversarial training does not. We provide evidence that such models are relatively robust across domains, making homogeneous predictions despite being fine-tuned on different domains.

Download here
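For readers unfamiliar with the mixture-of-experts idea mentioned above, the sketch below shows one common way to combine per-domain classification heads with a learned gate on top of a transformer's pooled representation. This is only an illustrative example, not the paper's implementation; all names (MixtureOfExperts, hidden_size, num_domains) are hypothetical.

```python
# Minimal mixture-of-experts sketch (illustrative only, not the paper's code):
# one classification head ("expert") per source domain, combined per example
# by a learned softmax gate over a pooled transformer representation.
import torch
import torch.nn as nn


class MixtureOfExperts(nn.Module):
    def __init__(self, hidden_size: int, num_domains: int, num_labels: int):
        super().__init__()
        # One linear classification head per source domain.
        self.experts = nn.ModuleList(
            [nn.Linear(hidden_size, num_labels) for _ in range(num_domains)]
        )
        # Gating network: one weight per expert for each example.
        self.gate = nn.Linear(hidden_size, num_domains)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        # pooled: [batch, hidden_size], e.g. a transformer's [CLS] vector.
        weights = torch.softmax(self.gate(pooled), dim=-1)          # [batch, num_domains]
        logits = torch.stack([e(pooled) for e in self.experts], 1)  # [batch, num_domains, num_labels]
        # Weighted sum of the experts' logits.
        return (weights.unsqueeze(-1) * logits).sum(dim=1)          # [batch, num_labels]


# Usage with random features standing in for transformer output.
moe = MixtureOfExperts(hidden_size=768, num_domains=3, num_labels=2)
fake_cls = torch.randn(4, 768)
print(moe(fake_cls).shape)  # torch.Size([4, 2])
```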