The History of Ethical AI at Google

I have had the privilege of being at Google for the past six years. One of the most impressive things I’ve seen there was an Ethical AI team being pulled out of the ether by one woman over the course of four years. Meg Mitchell is a friend of mine whom I met through my work on ethical AI at Google. She spent years recruiting people, connecting people across Google, and building Google’s reputation with the rest of the world as a place where ethics matters. In a time when Google’s motto of “Don’t be evil” had become an ironic punchline, Meg made sure it was true wherever she went. There has been much talk in the press about the recent negative events surrounding that team, but I feel the story of its birth and fostering has been largely overlooked.

I joined Google on February 2, 2015. At that time there was no Ethical AI team. There was barely a hint of ethical concern about the AI we were building. I started working on ethical AI by joining Yuliya Zabiyaka’s 20% time project detecting gender bias in text. We worked for a year and a half and, with the help of my intern Brian Zhang, were able to produce a system that could remove some of the gender bias from word embeddings. Brian brought it to my attention that there was an “Ethical AI Hackathon” happening and that we might want to attend. I asked my manager if it would be okay, and with his approval we attended. That’s how a Stanford sophomore and I found ourselves in the company of some of the most brilliant AI ethicists that Google had to offer. That was the week when I met Meg Mitchell.
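
For anyone curious what “removing gender bias from word embeddings” can look like in practice, here is a minimal sketch of the general idea that’s common in the literature: estimate a “gender direction” from defining pairs like he/she and project it out of words that shouldn’t carry gender. The toy vectors below are made up purely for illustration; this is not the actual system we built.

```python
import numpy as np

# Toy, made-up embedding vectors purely for illustration; real word
# embeddings would come from a trained model.
emb = {
    "he":     np.array([0.9, 0.1, 0.3]),
    "she":    np.array([-0.9, 0.1, 0.3]),
    "doctor": np.array([0.4, 0.8, 0.2]),
    "nurse":  np.array([-0.4, 0.8, 0.2]),
}

def normalize(v):
    return v / np.linalg.norm(v)

# Estimate a "gender direction" from the difference vectors of defining pairs.
pairs = [("he", "she")]
diffs = np.stack([emb[a] - emb[b] for a, b in pairs])
gender_dir = normalize(np.linalg.svd(diffs)[2][0])  # first right singular vector

def debias(v, direction):
    """Remove the component of v that lies along the bias direction."""
    return v - np.dot(v, direction) * direction

# After debiasing, gender-neutral occupation words end up equidistant
# from "he" and "she".
for word in ("doctor", "nurse"):
    emb[word] = normalize(debias(emb[word], gender_dir))
    print(word,
          round(float(np.dot(emb[word], normalize(emb["he"]))), 3),
          round(float(np.dot(emb[word], normalize(emb["she"]))), 3))
```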

I’m a decent software engineer and system designer. My intern was (and presumably still is) a mathematical genius. Neither of us were cutting-edge scientists (although he may be now). Meg saw what we had to offer and agreed to help us turn it into real science. Over the course of the next few weeks she told us which datasets to use and which tasks to try it against. At each turn we adapted the system based on her guidance. Within a month we had turned our system from a hodge-podge of code into real science. To get it the last mile, she helped us write the paper, and we got it published at the first-ever AI, Ethics, and Society (AIES) conference.

Through all of that, I was only tangentially aware that Meg had organized the entire hackathon we had attended. She came to Google from Microsoft Research and began trailblazing as soon as she arrived. She had a clear vision of what she wanted to accomplish and a deep understanding of the amount of work it would take to achieve it. I was just proud of the gewgaw I had invented.

When we eventually went to New Orleans to present our findings, Meg acted as a leader, guiding us towards excellence. We engineers are an insular bunch, and we tend to congregate with people from our own inner circle. Each time she saw a group of us Google employees talking to each other, she would approach, gently join the group, and softly say, “Aren’t we here to be expanding our circle?” I made several new connections that week solely because Meg was there to guide me out of my most comfortable habits.

Although I didn’t know it at the time, that was the beginning of the Ethical AI team. Meg built relationships both within the company and with other companies. Our algorithm would eventually be included in IBM’s AI Fairness 360 toolkit. The groundwork Meg laid over those months created a bedrock on which a team could be built. She created a safe space within Google where real challenges to the ethical applications of artificial intelligence could be made and where researchers could ask hard questions. It’s difficult to quantify that sort of work, but from a qualitative perspective I can say that it is artful in its mastery.

Within a year of that conference, Meg had conjured an Ethical AI team out of the ether. She had laid the groundwork for hard questions to be asked, she had found the people within Google interested in those questions, and she nurtured each of us towards our potential. She took time to mentor even someone like me, who was only 20% focused on Ethical AI. It was her 120% time project. Through her hard work and diligence, she was able to attract other top minds in the field, such as Timnit Gebru.

Meg’s contributions weren’t limited to research and development. Googlers used to be able to ask questions of the founders of the company at a weekly meeting called “TGIF”. I only ever saw Meg step up to the microphone one time. In the midst of the Maven debacle, I saw her walk up and ask: (paraphrasing) “There are some of us who came here to Google from companies which collaborated with the US DoD because we didn’t want any part of the war machine. Isn’t it a bit of a bait and switch now that you’re suddenly turning our work towards weapons R&D?” The response she received was roughly: (paraphrasing) “Well, if you don’t like it, there’s the door.”

Sadly, that’s the response I’ve seen Google give over and over again when people bring up ethical concerns. Google has moved from being the company whose motto is “Don’t be evil” to being the company whose motto is “If you don’t like it, there’s the door”. Meredith was a top-tier research scientist in the realm of AI ethics. They showed her the door. Claire, who served as the internal voice of Google, was shown the door. Business interests kept clashing with moral values, and time and time again the people speaking truth to power were shown the door. That was right up until people confronted Larry and Sergey about the golden parachutes they had given to rapists, sexual harassers, and misogynists. Apparently that’s when they themselves found the door, because we haven’t been able to ask them a single question since the Andy Rubin debacle.

And chronologically that brings us up to 2019. I had the great privilege of being in a heated debate with Timnit Gebru in one of Google’s ML forums. By this time the Ethical AI team was garnering both respect and ridicule from different groups at Google. We had taken opposing positions on an important question: should every AI researcher and practitioner take it upon themselves to do AI ethics at a high level of proficiency? I had taken the negative position. My opinion was that ethics itself is a discipline which requires mastery, and that requiring every AI practitioner to do ethics was tantamount to asking amateurs to do expert work. Timnit took the position that every person who deploys an AI system is personally responsible for the moral consequences of its deployment. We argued back and forth for a while, each making valid points, until I said that we couldn’t trust people who lack the requisite emotional intelligence to do the work of AI ethics. Her response was: (paraphrasing) “So you’re saying that the Black women are responsible for doing it.” I knew then that she was right and I was wrong.

AI is a field dominated by emotionally immature white and Asian men. If you can ignore all of the brilliance of human interactions and focus your soul on calculus, you can become a rising star in the field of AI. All you need to do is beat the benchmarks and come up with clever formulae and you can be great. What Timnit said resonated with me because of something I had seen at the conference I attended with Meg: an engineer presenting his findings on police profiling systems. The researcher, from UCLA, had presented a system for determining whether or not a given dispatch was “gang related” so that the police could bring “the appropriate amount of force”. Several other people had asked him about the biases of the datasets his system consumed, and he had canned answers for those questions. I asked him: (paraphrasing) “Let’s assume that all of those concerns are irrelevant and that your system perfectly predicts whether or not a dispatch is gang related. Are you at all concerned that the informational advantage you’re giving to the police might cause the gangs to unilaterally raise their force profile, thereby making a bad situation worse?” His response: “Man, I’m just an engineer.” I decided that the best response was to sing Tom Lehrer’s “Wernher von Braun” into the microphone.

And that’s how Timnit convinced me that I was wrong. All I was doing was giving engineers a pass. “Once the rockets are up, who cares where they come down? That’s not my department, says Wernher von Braun.” She convinced me that the best path forward to ethical AI was to teach the white and Asian men in the field how to increase their emotional intelligence. Ensuring that AI is ethical shouldn’t be the job of a handful of people, most of whom are women and people of color. That model leads to a tiny group of people trying to clean up everyone else’s messes. That’s when I fully understood what Meg was building with the Ethical AI team. I had interacted with them as though they were a group of experts who would come in to consult. In fact, what she was building was a model of how every single AI team should work, with a mind towards ethics as a primary concern of technological development. Unfortunately, as many of the people reading this already know, this story doesn’t have a happy ending.

The “Stochastic Parrots” paper wasn’t anything special. It was a good paper, but rather run-of-the-mill in all regards. It examined an emerging technology and did a risk analysis of how it could possibly run afoul of major Ethical AI concerns. The paper passed Google’s internal review process and then later passed the external review necessary to get it published. That’s when Google asked the authors to retract the paper or at least remove their names (and Google’s affiliation) from it. This actually isn’t uncommon. In fact, I have had a very similar experience myself.

In 2018 I was working on a paper concerning the proper regulation of artificial intelligence. I had the good fortune of working on it with a lawyer whom I had met at the AIES conference where our work had been published. After we had gone through the internal review process (with approval) and after we had submitted our paper to a conference, Google requested that we retract the paper or at least remove the names of Googlers from it. Quite comparable to what happened to Meg and Timnit. The major difference is in what happened next. I asked why. In response to my query, I was given a meeting with the specific person who had requested the retraction. She then explained to me exactly why they thought it was a bad idea and offered me alternative ways to contribute to the community. I felt respected and heard and valued. I retracted the paper and contributed to the joint white paper they directed me towards as an alternative.

Timnit was blackballed. When she made a request similar to the one I had made, she was told “no”. When she said that this was unreasonable treatment and asked for an opportunity to discuss it with the Google executives who wanted to suppress the research, she was told “no”. When she sent an email advising people to avoid Google’s DEI efforts, which she had concluded were nothing more than institutional fig leaves, she was summarily fired. I have no doubt in my mind that whoever insisted on the retraction legitimately believed that the paper was “unpublishable” unless it contained references to the technologies they believed mitigated the risks. Much about what happened in that executive review is still shrouded. One thing we do know, though, is that whoever insisted on the retraction wasn’t an Ethical AI expert.

From there the dominoes continued to fall. Meg, having recruited Timnit herself, had been a witness to every piece of the discrimination. She attempted to gather that evidence and was fired for doing so. In the wake of Timnit’s and Meg’s firings, several other AI ethics experts have quit. Others remain silent for fear of being fired for supporting them.

Before Meg Mitchell joined Google there was no Ethical AI team. All that existed then were a handful of concerned individuals and a bunch of emotionally incompetent dudes who didn’t want your concerns to get in their way. Meg built an Ethical AI team out of the ether and recruited the best minds in the field to staff it. She convinced others that Google was a safe harbor in which they could do such work. Inside Google, she built trust with multiple product teams, showing them that she and her colleagues could add value to their work. Then, in a single day, Google decided to risk destroying all of that.

Hopefully, Ethical AI at Google will someday again be as robust as what Meg built. Unfortunately, today we are bleeding minds because the best of the best no longer trust Google leadership to make the right decisions. I’ve had multiple conversations with other Ethical AI researchers and engineers about whether or not they were going to quit. Many have considered it, myself included. Those of us who have decided to stay all have essentially the same reason: if we quit, then all that work Meg did really will have been for nothing. Who would be left to do the work if we all left? Google would continue making world-changing AI, but without the ethical expertise to avoid the immense harms it might cause. Things are certainly different now, but hopefully the connections and expertise built over the past four years are robust enough to weather this storm.

We miss you, Meg. May you prosper wherever you land.

I'm a software engineer. I'm a priest. I'm a father. I'm a veteran. I'm an ex-convict. I'm an AI researcher. I'm a Cajun. I'm whatever I need to be next.
