Will Griffin is Chief Ethics Officer at Hypergiant, an enterprise AI company. He is the recipient of the 2021 IEEE Award for Distinguished Ethical Practices and the creator of Hypergiant’s Top of Mind Ethics (TOME) framework, which won the Communitas Award for Excellence in AI Ethics. His past entrepreneurial work has also earned him the prestigious IAB/Brandweek Silver Medal for Innovation and the culturally significant NAACP Image Award. He is currently a guest lecturer on AI ethics in the University of Texas Department of Computer Science and at Penn State Law School, and he has given a TEDx talk on ethics.

In this Q&A, Griffin discusses the need for universities to introduce mandatory ethics courses into engineering, computer science, and artificial intelligence (AI) degree requirements. The article has been edited for length. 

What is Hypergiant and what is your role there?

Hypergiant is an enterprise AI company. We develop emerging tech solutions in three main sectors related to national security: critical infrastructure, space, and defense. My role is to vet our projects against an ethical framework we developed, to ensure that our designers and developers keep ethics at the heart of the design and development of everything we create.

Why is it important for organizations, particularly those developing AI, to adopt an ethics framework into their policies?

Because ethics, as we define it, is concerned with any actions that aid or hinder human beings and their well-being. AI, because it substitutes for and augments human thinking, has a direct impact on ethics and the well-being of human beings. And because the computing power of AI, and the solutions it provides, is so great, it has the potential to touch tens of millions, hundreds of millions, even billions of people. When you have that impact on so many people, you have an obligation to consider the ethical impact, whether it aids or hinders people’s well-being. So what we believe, from our point of view and our framework, is that companies (and we break it down to the individual level: designers and developers) have a duty to embed ethics into everything they design and develop, because they have a direct impact on the well-being of humans.

What does a good AI ethics framework consist of?

Hypergiant’s framework is intended to make sure that designers and developers are thoughtful. Quite often, the bad use cases that you see, like privacy violations or the unintended consequences of AI applications (such as job losses when companies adopt robotic process automation, or automated systems that discriminate against women and minorities applying for jobs or mortgages), happen because the designers or developers were not thoughtful. They did not have ethics at the top of mind when they designed and developed their solution. So our approach is what we call “Top of Mind Ethics,” or TOME. TOME is put in the hands of designers and developers to help make them more thoughtful. There are three main steps to TOME.

The first step is the Law of Goodwill. Is there a positive intent for this use case? Most often, designers, developers, and engineers can answer this question easily, because there is usually a good reason for what they want to do.

Step two is the Categorical Imperative, which works as a maxim test: if every company in our industry, and every industry in the world, used technology in the way that we’re contemplating with this use case, what would the world look like? And would that be desirable? That step makes our designers, developers, and engineers think not just about us as the stakeholder, and not just about the client or customer we’re working for as a stakeholder, but about everyone who will ever be touched by this technology as a stakeholder, and, ultimately, all of society as a stakeholder.

The third step is the Law of Humanity. The question is: are people being used as a means to an end with this use case, or are people the primary beneficiaries of what we’re designing and developing? It’s not enough that our company intends to be good, or that the designers, developers, and engineers are good people in their lives broadly; we have to apply this framework to the individual use case and the solution we’re designing.
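Hypergiant does not publish TOME as software, but as a rough sketch of how a design team might record these three checks during a review, here is one hypothetical way to encode them in Python. The class, field names, and the example use case are illustrative assumptions, not part of the actual framework.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TomeReview:
    """Record of a Top of Mind Ethics (TOME) review for a single AI use case."""
    use_case: str
    # Step 1, Law of Goodwill: what is the positive intent behind the use case?
    goodwill_intent: str = ""
    # Step 2, Categorical Imperative: if every company used technology this way,
    # would the resulting world be desirable for every stakeholder?
    universalizable: Optional[bool] = None
    # Step 3, Law of Humanity: are people the primary beneficiaries, rather than
    # merely a means to an end?
    people_benefit: Optional[bool] = None
    notes: List[str] = field(default_factory=list)

    def passes(self) -> bool:
        """The use case clears TOME only when all three checks are satisfied."""
        return (
            bool(self.goodwill_intent)
            and self.universalizable is True
            and self.people_benefit is True
        )

# Hypothetical example: reviewing a resume-screening model.
review = TomeReview(
    use_case="Resume-screening model for an internal hiring tool",
    goodwill_intent="Reduce time-to-hire for understaffed teams",
    universalizable=False,   # broad adoption could entrench biased screening
    people_benefit=False,    # applicants become a means to an efficiency end
    notes=["Needs a bias audit and human review before any deployment."],
)
print(review.passes())  # False -> the use case needs rework before it ships
```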

Is AI ethics being taught enough at the university level?

No, AI ethics is not being taught enough at the university level, but it’s emerging. All the schools that are leaders in engineering – Stanford, Harvard, Carnegie Mellon, the University of Texas, for example – are teaching ethics courses. But those courses should be mandatory as part of engineering and computer science degrees. And I argue that mandatory ethics needs to be embedded into the science and engineering magnet high schools around the country. It needs to start early on. Why do I say that? Because engineering and computer science are way behind in the teaching of ethics and ethical reasoning. Law school, business school, and medical school all require students to learn ethics and professional responsibility, but you can go to most colleges and get an engineering degree without taking a basic philosophy course.

So how can engineering, computer science, or AI educators at the college level incorporate ethics into their curricula?

You can look at comparable schools. Stanford makes its course materials widely available online, and Harvard makes its Embedded EthiCS materials widely available. The World Economic Forum, which has been a leading convener on ethics and responsible AI and responsible tech, makes its frameworks publicly available. So the resources are now out there. And I think, as part of the latest National Science Foundation funding, they are creating a clearinghouse and a database of AI resources, including ethics. That probably won’t be available until the end of the year or the beginning of next year, but it’s funded. There’s no shortage of information and frameworks out there that you can use to teach engineers, designers, and developers, and that will make them more thoughtful. What’s missing is the requirement. If you go to law school, you don’t have a choice about whether you’re going to take an ethics or professional responsibility course. It’s required.

Why is it important for students at the college or even the high school level to start learning about AI ethics now? 

Because the impact of AI, and the power of the technology, is so great that it’s going to touch almost every human being on the face of the earth. And when someone has the capacity to design and develop technology that can touch everyone on the face of the earth, they need to have obligations and duties to protect the well-being of the people who will be affected by those technologies.

What would a mandatory course in AI ethics consist of?

I think the first step in an AI ethics course is: what is ethics itself, and how do we get to it? So you’ll talk about philosophy, you’ll talk about morals, you’ll talk about the moral traditions of different peoples from around the world. You start there, just to know there’s a diversity of schools of thought about what morals and values are. And then there needs to be a discussion within the particular context of technology: which technologies have been regulated for ethical reasons?

Nuclear weapons technology is the perfect example. It’s the most devastating, most powerful, and most highly regulated technology in the world. So highly regulated that it has been used in war only once. The weapons still exist, and the technology continues to be refined, but it has only been used that one time, because after the world saw its devastating impact, an entire ethical and regulatory regime emerged to prevent its use ever again. And obviously, there have been plenty of wars since then. But it hasn’t been used, because the stakeholders have realized over time that the use of that weapon means mutually assured destruction. And there’s nothing that more negatively impacts the well-being of human beings than the destruction of human beings. So I think that needs to be studied, from the Manhattan Project, to the Union of Concerned Scientists, to nuclear nonproliferation. Once people study that technology, and the role that ethics, morals, and regulation played in governing it, that will help them understand how we should be looking at AI technology use cases.

So first, in a mandatory AI ethics course, you will study the history of morals and ethics, then the history of technology and ethics. Then you start to look at individual use cases and examples, now publicly available, where things went wrong. And then you can look at cases where things went right, where ethics actually averted harm.

We have an understanding of what nuclear war would look like. Is there any understanding of what AI getting out of control might look like?

I use the nuclear weapons example as the ultimate physical manifestation of the unethical use of technology. But there are actual human beings whose lives are being affected every day by AI. In some countries, robotic process automation has cost workers their jobs, and AI systems have been used to round up whole groups of people into internment camps. Software used by some companies and banks to make hiring and lending decisions contains algorithms that discriminate against women and minorities. That happens every day.

What are some of the best AI ethics tools for university educators to perhaps use for themselves or to introduce their students to?

Some of the tools I would teach them about first include the World Economic Forum’s high-level checklist, which allows your students to think strategically. I think that’s important, so they can have a global understanding of what they’re trying to accomplish. The World Economic Forum has toolkits for the C-suite and for boards of directors, and they even have toolkits for children; excellent resources. The Massachusetts Institute of Technology (MIT) has specific resources for middle school students, which I think are very good. So you can introduce students at a young age, but even if they’re in college and have not taken any ethics or ethics-in-tech courses, they can start out with that very basic level of understanding. The actual engineering tools are more data science projects. Google has something called Model Cards, Microsoft has a tool set called Transparency Notes, and IBM has something called AI Fairness 360, all publicly available. On GitHub, you’ll find repositories of artificial intelligence guidelines, principles, codes of ethics, standards, and regulations.

If I were at a school and just starting, I would look at what Stanford is teaching, the syllabi they make publicly available, and then what Harvard puts out for Embedded EthiCS. I think those are two good places for academics to start.
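As a concrete starting point with the engineering-side tools Griffin mentions, here is a minimal sketch (assuming Python with the pandas and aif360 packages installed) that uses IBM’s open-source AI Fairness 360 toolkit to compute two common group-fairness metrics on a toy hiring table. The column names, group definitions, and data values are illustrative assumptions, not drawn from any real dataset.

```python
# pip install aif360 pandas
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy hiring data: 'sex' is the protected attribute (1 = privileged group,
# 0 = unprivileged group) and 'hired' is the outcome we audit. Illustrative only.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.7, 0.6, 0.4, 0.9, 0.7, 0.6, 0.4],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Wrap the DataFrame in an AIF360 dataset object.
dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

# Compare favorable-outcome rates between the two groups.
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (1.0 means parity).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: gap in favorable-outcome rates (0.0 means parity).
print("Statistical parity difference:", metric.statistical_parity_difference())
```

An exercise like this lets students see a fairness concern expressed as a number they can measure, argue about, and try to improve, which pairs naturally with the higher-level checklists and frameworks mentioned above.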

How can engineering faculty advocate for mandatory AI ethics courses at their universities?

First, you bring it up to the dean of the faculty, because ultimately the faculty votes on these degree requirements. Then you syndicate it among your colleagues. Trust me, there’s no one at any school in the arts and sciences who will ever be upset that the engineers are asking for an ethical component to their degree requirements. The reason the courses aren’t required at the moment (this is a supposition on my part) is that the faculty members within the engineering departments were not trained in ethics, so they are not advocating for it to become a requirement. So it’s going to take an external push: other parts of the administration, or in some cases the board of regents, or employers are going to start mandating that schools produce people who are trained in ethics. That happened in the business schools after the 1980s, when leveraged buyouts and insider trading scandals pushed public policy regulators, in addition to employers, to press business schools to offer or require what ultimately became things like leadership and ethics development within the curriculum. Then a bunch of these ethics courses began to emerge in business schools. It’s going to take external shocks like that to the system before engineering schools make ethics a required element of their curriculum, so that instead of just having courses available, they actually make them part of the degree requirements. It’s inevitable, because every day there are just too many use cases with unintended consequences.

Want to learn more about AI ethics for your classroom? These resources can help get you started:

SIDEBAR:

Hypergiant’s Top of Mind Ethics (TOME)

STEP 1) Establishing Goodwill (the use case has a positive intent).

STEP 2) Applying the Categorical Imperative Test (the use case can be applied broadly without a negative ethical impact).

STEP 3) Conforming to the Law of Humanity (the use case has a direct benefit for people and society).
