Dr Simone Stumpf

Reader in Responsible and Interactive AI, University of Glasgow

2023-03-26

Starting off with the Doctoral Consortium at #IUI2023. Happy to be a mentor for the future generation of HCI/AI researchers.

Dr Simone Stumpf boosted:
Prof. Emily M. Bender (she/her) @emilymbender@dair-community.social
2023-03-23

Apropos #openai refusing to disclose any information about the training data for #GPT4 and #Google being similarly cagey about #Bard...

From the Stochastic Parrots paper, written in late 2020 and published in March 2021:

@timnitGebru @meg

Screencap: "When we rely on ever larger datasets we risk incurring documentation debt,¹⁸ i.e. putting ourselves in a situation where the datasets are both undocumented and too large to document post hoc. While documentation allows for potential accountability [13, 52, 86], undocumented training data perpetuates harm without recourse. Without documentation, one cannot try to understand training data characteristics in order to mitigate some of these attested issues or even unknown ones. The solution, we propose, is to budget for" Footnote 18: "On the notion of documentation debt as applied to code, rather than data, see [154]."

Screencap: Running header "Bender and Gebru, et al." and text "documentation as part of the planned costs of dataset creation, and only collect as much data as can be thoroughly documented within that budget."

Screencap: "As a part of careful data collection practices, researchers must adopt frameworks such as [13, 52, 86] to describe the uses for which their models are suited and benchmark evaluations for a variety of conditions. This involves providing thorough documentation on the data used in model building, including the motivations underlying data selection and collection processes. This documentation should reflect and indicate researchers’ goals, values, and motivations in assembling data and creating a given model. It should also make note of potential users and stakeholders, particularly those that stand to be negatively impacted by model errors or misuse. We note that just because a model might have many different applications doesn’t mean that its developers don’t need to consider stakeholders. An exploration of stakeholders for likely use cases can still be informative around potential risks, even when there is no way to guarantee that all use cases can be explored." 2nd–4th sentences highlighted in blue.

Screencap: "In summary, we advocate for research that centers the people who stand to be adversely affected by the resulting technology, with a broad view on the possible ways that technology can affect people. This, in turn, means making time in the research process for considering environmental impacts, for doing careful data curation and documentation, for engaging with stakeholders early in the design process, for exploring multiple possible paths towards longterm goals, for keeping alert to dual-use scenarios, and finally for allocating research effort to harm mitigation in such cases." Highlighted in blue: "for doing careful data curation and documentation"
2023-03-23

5 days of AI + HCI coming up at #IUI2023. Looking forward to the conference and exploring the intersection of these important fields.

2023-01-06

Happy to announce that I have taken on the leadership of the GIST research section here at University of Glasgow. Onwards and upwards!

2023-01-05

Want certified online training on designing inclusive technology, whatever the gender of your users? Get started now: gendermag.org/onlinetraining.p

2022-12-23

Last post before the new year. Have stopped checking my email. Didn’t get everything done I wanted to do but practising not beating myself up about it. Let’s all enjoy a festive time and happy holidays everyone!

2022-12-19

We’ll be presenting our TiiS paper about lay users finding and fixing ‘fairness bugs’ in Sydney. Woohoo! @nkwyri

2022-12-02

If you work in interactive machine learning you will need to be equally good at HCI and AI. Discuss.

Dr Simone Stumpf boosted:
2022-12-01

The 10th Heidelberg Laureate Forum will be held September 24–29, 2023 in Heidelberg, Germany. It brings together Laureates in computing with 200 early-career researchers, who can apply directly or be nominated. Nominators need to register using the ACM “Organization code”: ACM49263.

Deadline to apply: February 11, 2023

For more info, see heidelberg-laureate-forum.org/.

2022-12-01

Off to give a seminar and examine a PhD tomorrow. Travelling for only the second time since the pandemic started. Excited and slightly apprehensive at the same time, as I’ve forgotten most airport etiquette.

2022-11-29

Very busy start to the week, working on Responsible AI proposals and projects. Hot area to be in.

2022-11-23

Super fun week giving guest lectures on Responsible AI to our grad apprentices here in Glasgow and on Explainable AI to students at Cambridge. I hope that many follow in my footsteps to tackle these important issues.
