The Sweetness Test

Photo by Maxwell Nelson on Unsplash

The New York Times ended its recent article on Geoffrey Hinton's resignation from Google's AI (artificial intelligence) team with a Robert Oppenheimer quote that Hinton would often use to defend technological advances. Oppenheimer, known most famously for his work on the atomic bomb, once said, "When you see something that is technically sweet, you go ahead and do it."


The "Sweetness Test" is a common one in a world driven by innovation and technical progress. Isaac Asimov, the author of the 1950 sci-fi novel I-Robot once said, “If knowledge can create problems, it is not through ignorance that we can solve them.” 


The underlying message aligns with the sweetness test. As we learn and explore, we need to let our minds follow what intrigues them and solve the problems created by our exploration with the very things we have created.


The same year I, Robot came out, Alan Turing introduced his Turing Test (or the Imitation Game) and began the race toward today's "AI Moment." His test was simple: "if a computer acts, reacts, and interacts like a sentient being, then call it sentient" (Encyclopedia Britannica).


Arguably, all the progress we have seen since 1950 (personal computers, the Internet, social media, mobile technology, etc.) has been tied to modern humanity's drive to create ever more capable machines while wrestling with an increasing crisis of definition. Namely: Who am I, and what is my purpose? Unfortunately, our questioning has not led to clarity; instead, our answers have become less and less predictable as people give up on shared truth (a common moral framework). The lack of a common moral platform on which we can stand means that our response to advances in AI tends to be purely pragmatic (the ends justify the means). And this is compounded by the democratization of data, open-source resources, and expanding Internet access, which are putting huge amounts of data and increasingly powerful tools in the hands of anyone with an interest (as highlighted by a recently leaked internal Google memo).


The moral crisis we are experiencing has left us unable to consistently turn the "sweetness" we taste in our technical advances into the common good we long for. Without an understanding of who we are and our greater purpose, our decisions about which innovations to pursue are as subjective as our ability to describe whether a chocolate bar is bitter or sweet.


I would like to propose the two discussions I believe everyone needs to be having right now:

  • Who are we and what is our purpose? - Moral Framework
  • How should we then live? - Discernment

Moral Framework

The reason there is so much anxiety about the role of AI is that we are filled with anxiety about who we are. Our inventiveness has outpaced our confidence in our moral framework. So when we are faced with a question about whether a machine should do this or that, we aren't able to confidently draw from who we are and then imagine what a machine should be able to do in support of that identity. As a Jesus follower, I believe that instead of drawing our morals from the one who created us, we are surrendering our moral boundaries to the very things we create and asking them to do the job we are unwilling to do. Because the machines we create have no soul, they are unable to replace God in our lives, no matter how much we want them to.


The well-known 20th-century author Francis Schaeffer said this about humanity's response to Psalm 23, which begins "The Lord is my shepherd..." In his book How Should We Then Live?, Schaeffer wrote, "As my son Frankie put it, Humanism has changed the Twenty-third Psalm: They began - I am my shepherd. Then - Sheep are my shepherd. Then - Everything is my shepherd. Finally - Nothing is my shepherd."

In this AI Moment, I would extend Schaeffer's progression with one more line: "... - AI is my shepherd."


Discernment

The next few years will require a maturity in our discernment that can only come if we are building our lives within a solid moral framework. There are two fundamental areas of concern that will require discernment. The first is what Geoffrey Hinton is focused on: the potential for AI to destroy life as we know it (which he describes in this interview). The second, and more immediate, concern is what Tristan Harris and Aza Raskin discuss: the ability of AI to generate misinformation, deepfake images, and fake voices, among other things. When we turn on the news, answer the phone, receive a direct message, or watch a video, we will have to discern whether the content is real or fake.


The second concern will require us to be aware of our surroundings, in tune with the people in our lives, and up to speed on what is really happening in the broader world. The first will require those involved in designing, marketing, and regulating the technology to be highly attuned to what will help people flourish rather than lead to their demise. In short, the level of sophisticated discernment required of us far outstrips what we have been accustomed to and trained for. Our future requires a level of self-awareness, critical thinking, and situational insight that I believe will only be possible with God's help and our intentional preparation.


A New Sweetness Test

With these two discussions in play, I would now like to propose a new sweetness test to replace the one Oppenheimer coined so many years ago and Hinton used to defend his work:


"When you see something that is humanly sweet (things that are aligned with a shared moral framework and strong commitment to discerning in community), you go ahead and do it."


Here are some questions you can ask yourself:

  1. Does this advance affirm and strengthen my unique status as a human created by God?
  2. If I build this thing, will it help people live out a shared moral framework and treat each other with love and respect?
  3. Will this innovation lead to people flourishing or floundering?
  4. Do we have shared agreement on how to use this new invention for good?
  5. Do we have the structures in place to defend the helpless from evil uses?

In a recent TED Talk, Sal Khan, founder of Khan Academy, shared his organization's latest work to give every student who wants one a personal tutor. The potential to help students is clear, and the power of the technology is amazing. Toward the end of his talk, he addressed the issue raised in this article. He challenged the audience this way: "...obviously there's many potential positive use cases, but perhaps the most powerful use case and perhaps the most poetic use case is if AI, artificial intelligence, can be used to enhance HI, human intelligence, human potential and human purpose."


Khan eloquently describes the second part of my modified "Sweetness Test," but I'm afraid that without addressing the first part, our moral framework, no amount of focus on human purpose will lead to the flourishing lives we all desire.


Read the next article on AI, "A Courageous Response."
