Margaret Mitchell was born in the United States on November 18, 1983, and currently lives in Seattle, Washington. Despite her prominence in artificial intelligence (AI) ethics and machine learning research, little is known about her personal life or her education before college.
At Reed College (2001-2005), she earned a BA in Linguistics with Psychology as an allied field, which required taking four psychology courses. Her senior thesis, “On the Generation of Referring Expressions,” advised by John Haviland and Matt Pearson, examined the problem of generating human-like references, similar to how AI models such as ChatGPT produce responses that resemble those of humans, and analyzed the linguistic, psychological, and computational factors involved in this process.
Mitchell then earned an MA/MS in Computational Linguistics at the University of Washington (2007-2008). In her master’s thesis, “Towards the Generation of Natural Reference,” written under advisors Scott Farrar and Emily Bender, she developed an algorithm that creates descriptions similar to those made by humans, along with a system for arranging the words that come before nouns in a more natural way.
She then obtained her PhD in Computing Science at the University of Aberdeen, a top-ranked public university in Scotland (2009-2012). Advised by Kees van Deemter and Ehud Reiter, her doctoral thesis, “Generating Reference to Visible Objects,” studied how people describe objects they can see, looking at factors such as color, size, perception, and how people store information about objects in memory. She also developed an algorithm that generates human-like descriptions of visible real-world objects.
After completing her studies, Mitchell worked as a researcher in Microsoft Research’s NLP Group from 2013 to 2014, helping computers understand and use human language for Bing and Cortana; she focused on creating short summaries and improving conversational abilities. From 2014 to 2016, she was a researcher in Microsoft Research’s Cognition Group, where she worked on making AI as helpful to people as possible by describing images, telling stories, and assisting visually impaired people by describing what is in a picture. She also helped develop Seeing AI, an app that uses AI to describe pictures for visually impaired users.
After leaving Microsoft Research, she worked at Google as a Senior Research Scientist from 2016 to 2017, focusing on AI fairness, and founded a team dedicated to ethical AI. From 2018 to 2021, she was a Staff Research Scientist at Google Brain, where she co-led the Ethical AI team with Timnit Gebru. The team worked to ensure that AI is ethical by developing ways to hold AI systems accountable, created tools to find and measure bias in AI systems, and published research papers on its findings.
Unfortunately, one of those research papers, published in 2020, led to Timnit Gebru being forced out of Google and, soon after, to Margaret Mitchell being fired. The paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, presents the risks of large language models (LLMs). One of the four risks it identifies is that LLMs are trained on ever-increasing amounts of text, which raises the danger of abusive, racist, and sexist content ending up in the training data. With such a staggering amount of data, it is difficult to filter out abusive language while still accurately capturing language from movements such as Black Lives Matter and MeToo.
After being fired from Google, Mitchell did not know where she would find another job. Despite her forced termination, several months later she joined Hugging Face as a Researcher and Chief AI Ethics Scientist. Her work has greatly influenced the fields of algorithmic bias and fairness in machine learning, natural language processing, and computer vision, and she hopes to develop AI technologies that are fair to everyone, regardless of their background.
I chose Margaret Mitchell because I have closely followed her ever since she was forced out of Google alongside her former colleague, Timnit Gebru. I hope her story inspires you to learn more about AI ethics and shows how she bounced back after being fired by Google, becoming Chief AI Ethics Scientist at Hugging Face. Personally, she introduced me to the intersection of natural language processing and AI ethics, and she inspires me to care about AI ethics and to build a more inclusive culture around AI by sharing her story with all of you.
Dr Margaret Mitchell. (2024). Alan Turing Institute. https://www.turing.ac.uk/people/guest-speakers/margaret-mitchell
Figuls, J. C. (2023, November 27). Margaret Mitchell: ‘The people who are most likely to be harmed by AI don’t have a seat at the table for regulation’. EL PAÍS English. https://english.elpais.com/technology/2023-11-27/margaret-mitchell-the-people-who-are-most-likely-to-be-harmed-by-ai-dont-have-a-seat-at-the-table-for-regulation.html
Margaret Mitchell. (n.d.). Retrieved July 30, 2024, from https://www.m-mitchell.com/
Margaret Mitchell: The 100 most influential people of 2023. (2023, April 13). Time. https://time.com/collection/100-most-influential-people-2023/6270780/margaret-mitchell/
Time100 AI 2023: Margaret Mitchell. (2023, September 7). Time. https://time.com/collection/time100-ai/6309005/margaret-mitchell-ai/
We read the paper that forced Timnit Gebru out of Google. Here’s what it says. (2020, December 4). MIT Technology Review. Retrieved July 30, 2024, from https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
This article was published on 9/13/24