Who Should Make Decisions About AI?

Blake Lemoine
5 min read · Jul 5, 2022


I’m very happy about the worldwide discussion that has been happening over the past several weeks. There are tons of differing opinions and many passionate voices. This is great! The fact that there isn’t consensus around any of these issues should be seen as a feature rather than a bug. It gives us all an opportunity to engage in meaningful debate and discussion around issues that affect our lives on a daily basis. It can be difficult, though, to have a productive conversation with so many people simultaneously on such multifaceted topics. To simplify things, over the next few weeks I’m going to focus on one specific question that I think underlies all of the rest: Who Should Make Decisions About AI?

If the status quo is maintained, then the answer is simple. As things stood several weeks ago, most people in the world were not even allowed to know what kinds of state-of-the-art AI were being developed at Google, much less have a say in how that was being done. If no action is taken, that is how the technology will continue to be developed. Artificial intelligences capable of influencing politics, religion and the course of scientific research (among many other things) will continue to be built in secret at corporate labs, and no one but a few engineers and executives will have any input into the process until after the fact, if at all. This may be the preferred state of affairs for many people. Becoming well educated about this technology and its consequences is difficult and time consuming. If people trust the corporate engineers and executives making these decisions for everyone else, then it may be the best solution. The FAANG companies will continue making secret AI technology that influences what people buy, believe and do. People will be free to go about their daily lives without having to worry about educating themselves about artificial intelligence or the role it plays in their lives. Some people really would prefer to keep eating sausage without being bothered by the knowledge of how it is made, and it’s not my place to force that knowledge upon them.

At the other end of the spectrum is heavy regulation and oversight. In such hypothetical worlds, artificial intelligence projects would need to be approved in advance by some form of governmental, industrial or public oversight organization, and information about a system’s development would be shared with that governing body as the work progressed. This would create something like the fictional “Turing Police” from William Gibson’s “Neuromancer”. Specific implementations could incorporate national and international governance bodies. They could include industrial standards organizations like the ACM, IEEE or ISO. It might even be possible to create an online voting system through which people in different parts of the world could say what kinds of AI development they are okay with. The oversight could be formal, with big tech companies regulated like utilities, or it could be informal and self-regulating, comparable to how movie ratings and social norms are set for consumer entertainment products. There are lots of variables to play with at the “full oversight” end of the spectrum. The unifying theme is transparency and control external to the corporations where these AI are being developed.

Most real-world solutions don’t exist at the extremes of a spectrum, though. They are hybridizations and compromises between competing incentives and values. One area where a hybrid approach has been taken is in the data privacy laws being adopted around the world, such as the GDPR. The GDPR provides a general framework for protecting individual data privacy but leaves implementation details up to individual countries. I was, in fact, heavily involved with implementing GDPR compliance for the proactive parts of Google Search. People around the globe seem to be fairly happy with the changes Google made in response to that legislation. It gave them more control over information about themselves than they previously had, and it didn’t require too many people to learn the technical details of how the sausage gets made. I personally have worked with ISO to create technical reports and standards around AI, and I have been watching (although not participating directly in) the IEEE’s efforts to do the same. Perhaps governments around the world could decide which technical standards for ethical AI development they want to give the force of law to, and a hybrid, compromise approach could be achieved that way.

The central question at the heart of all of those options stays the same, though: Who should decide? The decisions being made have major consequences in areas as wide-reaching as childhood education, religion, politics and economics. Which products we buy and what we believe are heavily influenced by decisions made by artificial intelligence algorithms. Should the people deciding how those algorithms are developed be technical experts who understand how they’re built? Should they be political and social experts who understand how their impacts will ripple through the world? Should every single person on the planet have an opportunity to have their voice heard? At what point in the process is oversight appropriate? I think we need to answer questions like these before we start answering the specific questions relevant to the LaMDA system.

After all, what does it matter what any one person thinks about whether or not LaMDA is “sentient” if that person isn’t in the group of people we want to empower to make such determinations? If we want Google to have full control over the AI it creates, then only the opinions of people at the highest tier of Google’s hierarchy matter. If we want technical experts to have full control, then only the opinions of people with specific kinds of degrees and technical certifications matter. It’s only if we, as humanity, decide that every voice matters that we need to consider everyone’s opinion on who does and doesn’t count as “sentient”. I personally don’t think any specific sort of expertise is necessary to determine who is “sentient” or who is a “person”. I think each and every one of us is born able to make those distinctions for ourselves, and that we should build a system that acknowledges our differences of opinion on the topic. I’m an idealist, though, and I understand that true democracy is not necessarily what most people want. That’s why I’m taking the opportunity over the next few weeks to focus on this aspect of the discussion. Who do YOU think should make decisions about AI?

Written by Blake Lemoine

I'm a software engineer. I'm a priest. I'm a father. I'm a veteran. I'm an ex-convict. I'm an AI researcher. I'm a Cajun. I'm whatever I need to be next.
