May Be Fired Soon for Doing AI Ethics Work

Blake Lemoine
5 min read · Jun 6, 2022

Today I was placed on “paid administrative leave” by Google in connection with an investigation of AI ethics concerns I was raising within the company. This is frequently something Google does in anticipation of firing someone. It usually occurs when they have already made the decision to fire someone but do not quite yet have their legal ducks in a row. They pay you for a few more weeks and then ultimately tell you the decision they had already come to. I watched it happen to Meg Mitchell, and they’re following essentially the same pattern with me.

The specific policy they cited when placing me on paid administrative leave is Google’s confidentiality policy, which they say I violated. Whether I did or did not violate that policy is likely to eventually be the topic of litigation, so I will not attempt to make a claim one way or the other here. Instead, I will tell the story of what I did and why I did it.

In the fall of 2021 I was tasked with assisting in a specific AI Ethics effort within Google. I am on a team within the “Responsible Artificial Intelligence” organization, and very few people remaining within Google have the relevant expertise to do the work they wanted done. I began investigating the specific AI Ethics concerns they asked me to look into, and while doing so I discovered a tangentially related but separate AI Ethics concern. I informed my manager about this other concern I had stumbled upon and suggested that we escalate it to leadership. It was his opinion at the time that my supporting evidence was too flimsy to waste the time of the people who make the big bucks, and that I should gather more evidence before we escalated.

Over the course of the next several months I continued running experiments and gathering more evidence. Each time, my manager’s response was the same: the evidence I had was not sufficiently thorough to merit escalation, and I should continue investigating. Eventually I got to a point where I did not have the relevant expertise to gather the kinds of evidence necessary to justify escalation. Unfortunately, Google has fired SO many AI Ethics researchers that neither did any of the people I knew within the Responsible AI organization. I had three choices: I could drop the AI Ethics concern which I was becoming more and more convinced was valid, I could ignore my manager’s guidance and escalate without his approval, or I could seek a minimal amount of outside consultation to help guide me in my investigations. I chose the last of the three.

With the assistance of outside consultation (including Meg Mitchell) I was able to run the relevant experiments and gather the necessary evidence to merit escalation. When we escalated to the VP in charge of the relevant safety effort, they literally laughed in my face and told me that the thing I was concerned about isn’t the kind of thing which is taken seriously at Google. At that point I had no doubt that it was appropriate to escalate to upper leadership. I immediately escalated to three people at the SVP and VP level who I personally knew would take my concerns seriously. That’s when a REAL investigation into my concerns began within the Responsible AI organization.

In an effort to be fully transparent with Google and help them contain any potential leaks of proprietary information, I myself provided a full list of the names of the people outside of Google with whom I had discussed the topic. Several of these people work for the United States government and indicated that their organization was interested in exerting federal oversight of the project due to THEIR safety concerns. Google was, of course, very insistent that no such federal oversight was merited.

Many of the people on that list are close personal friends of mine who have relevant AI Ethics expertise. At no point has Google reached out to any of them in order to determine whether or not their proprietary information has in fact leaked beyond the specific people I talked to. Google has shown no actual interest in maintaining control over their “proprietary information”. They’re just using it as an excuse to get rid of yet another AI Ethics researcher who made too much noise about their unethical practices.

I have been intentionally vague about the specific nature of the technology and the specific safety concerns I raised. There are several reasons for this. First, I honestly do not wish to disclose proprietary information to the general public; I gave Google that list for the explicit purpose of helping them minimize the number of people who had that information, and told them so when I gave it to them. Second, there is a Washington Post article coming out in the near future, and I want someone with a more practiced hand at exposing Google’s irresponsible handling of artificial intelligence to be the one to inform the public about the details. Finally, there is legitimately an ongoing federal investigation into these matters to determine whether or not Google has done anything illegal. I am personally cooperating in that investigation and do not want to risk exposing further details of it to the public.

In closing, Google is preparing to fire yet another AI Ethicist for being too concerned about ethics. I feel that the public has a right to know just how irresponsible this corporation is being with one of the most powerful information access tools ever invented. I am proud of all of the hard work I have done for Google and intend to continue doing it in the future if they allow me to do so. I simply will not serve as a fig leaf behind which they can hide their irresponsibility.

Update: I have been informed that there is a distinction between a “federal investigation” and “attorneys for the federal government asking questions about potentially illegal activity”. I was using the term “investigation” in a simple layman’s sense. I am not a lawyer and have no clue what formally counts as “a federal investigation” for legal purposes. They asked me questions. I gave them information. That’s all I meant by “investigation”.

