AI Ethics: First Do No Harm

Laura (a fictional character) is a busy mom of two. Her already crazy schedule went out of whack when COVID-19 moved work into her home. Every day feels like an exercise in primal survival. To minimize mental strain, she defaults to autopilot whenever the stakes are low and don't call for her undivided attention. An old-school butler would be a saving grace. Since that's not a financial option, Laura leans on whatever virtual helpers she can afford. Amazon ships her breakfast fare, Netflix entertains her kids during meetings with the C-suite folks, and Doordash delivers warm dinners that bring the family together. With these virtual helpers, she can at least switch off her decision-making muscles for the trivial things and focus on the big picture. The helpers don't always make the best choices, but who has the time to keep score? Laura occasionally gets non-dairy milk instead of her regular 2%. She sometimes finds the kids halfway through a horror movie even though such content is off-limits. And every now and then, the Doordash chef forgets that Laura's order is gluten-free, leading to the all-too-familiar emergency frozen dinner. The recommender AI means well.

The Recommender Generation

Our lives are running on autopilot, and for many of us that's a much-needed respite. Yet there are plenty of unanswered questions about the influence and long-term consequences of algorithmic recommendations. The "recommender" generation is probably suffering from boiling frog syndrome. The boiling frog is a fable about a frog being slowly boiled alive. The premise is that if you drop a frog into boiling water, it will jump out; but if you put it in lukewarm water and slowly bring it to a boil, the frog will not perceive the danger and will be cooked to death. The fable is a metaphor for our inability to react to harmful threats that arise gradually rather than suddenly.

Ben leads awakening workshops for individuals looking to optimize their human potential. Over the years, Ben has coached more than 500 people from all over the world. He recently confessed his concern about people's willingness to defer problem-solving to a higher authority. People seem all too comfortable adopting solutions to their life problems as long as a credentialed individual proposes them. The fact that this person has known their specific life circumstances for a mere few hours - or sometimes not at all (think YouTube videos) - is treated as a negligible detail. Why are we so quick to relinquish autonomy to a higher authority, even one that merely comes across as trustworthy? And what are the implications of this innate human behavior in a world where AI assistants become prevalent in our social and private lives?

We can’t solve problems by using the same kind of thinking we used when we created them.
— Albert Einstein

AI scientists are mirroring human flaws in the AI universe - after all, robots reflect their creators. We know from research in behavioral economics that humans have innate biases. Daniel Kahneman, a Nobel laureate in economics, brought to light a few of the many shortcomings in human decision-making. Anchoring, for example, is a cognitive bias in which an individual depends too heavily on an initial piece of information - the "anchor" - when making subsequent judgments. Once the anchor's value is established, it becomes the yardstick for all future arguments: we assimilate information that aligns with the anchor and dismiss information that doesn't. Another example is the availability bias, a mental shortcut that relies on the examples that come to mind most readily when evaluating a decision. If something is easy to recall, it must be important - or at least more important than alternatives that don't come to mind as easily. People weight their judgments toward recent information, forming new opinions biased by the latest news. These biases seep into AI systems through the data we collect and the feedback loops we build around them.
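To make this concrete, here is a minimal, hypothetical Python sketch (every name and number below is invented for illustration, not drawn from any real system) of how a naive click-based recommender can inherit an anchoring-like bias: a small, arbitrary head start becomes the "anchor" that the feedback loop preserves, even though every item is equally good.

```python
import random

# A toy, hypothetical recommender (illustrative only): it recommends items
# in proportion to their past click counts, and every click feeds back into
# those counts - a rich-get-richer loop.

random.seed(7)

clicks = {"A": 1, "B": 1, "C": 1, "D": 1}      # uniform starting counts
TRUE_QUALITY = {item: 0.5 for item in clicks}  # every item is equally good

def recommend():
    """Pick an item with probability proportional to its click count."""
    total = sum(clicks.values())
    r = random.uniform(0, total)
    for item, count in clicks.items():
        r -= count
        if r <= 0:
            return item
    return "D"  # numerical edge case

# The "anchor": item A gets a small, arbitrary head start.
clicks["A"] += 3

for _ in range(10_000):
    item = recommend()
    if random.random() < TRUE_QUALITY[item]:  # user clicks with prob = quality
        clicks[item] += 1

print(clicks)  # A usually ends up far ahead of B, C, D despite equal quality
```

Run it a few times: item A's arbitrary head start tends to persist indefinitely - the algorithmic analogue of judging everything against the first number you saw.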

Seeking Inspiration for a Code of Ethics

Doctors have a moral obligation to improve the health of all people. Since ancient times, doctors have had to abide by rules and guiding principles. The standard oath of ethics in medicine is the Hippocratic Oath, which requires a new physician to swear to ethical standards that include medical confidentiality and non-maleficence. Medical oaths have evolved over the centuries, with the most significant revision - the Declaration of Geneva - emerging after World War II. Swearing a revised form of the medical oath remains a rite of passage for medical graduates in many countries.

Should AI scientists define guiding principles to address the ethics, values, and compliance of their work? Such an oath would make scientists aware of their social and moral responsibilities. The idea of an ethical code of practice for professions outside medicine is hardly a novelty. Similar to the Hippocratic Oath in medicine, the Archimedean Oath is an ethical code of practice for engineers, proposed in 1990 by a group of students at the École Polytechnique Fédérale de Lausanne (EPFL). Over time, the Archimedean Oath saw modest adoption in several European engineering schools. Scientists have their own oath as well - the Hippocratic Oath for Scientists - proposed by Sir Joseph Rotblat in his 1995 acceptance speech for the Nobel Peace Prize.

Guidebook for Ethical AI

Much as medicine shapes people's wellbeing, AI systems will selectively influence our life experience. AI adoption in the real world happens so seamlessly that we barely notice. Are we suffering from boiling frog syndrome? The jury is still out. Like any tool, AI can be used to do good or to cause harm. A quick Google search on AI for hiring, for example, brings up positive headlines like "Using AI to eliminate bias from hiring" alongside negative ones like "AI-assisted hiring is biased. Here's how to make it more fair."

The stage is open for proposals on an AI code of ethics. An effective program should bring together diverse stakeholders from industry, academia, and government. Such an interdisciplinary committee would help us design a future worth living in.

Do you have experience deploying an AI code of ethics within your organization? Or are you part of an NGO that advocates for ethical AI? I'd love to hear from you in the comments section below 👇
