The Stanford Computer Forum provides a platform to discuss the latest advancements in computer science and engineering. At its April meeting, I realized how far we have come since Asimov's three laws of robotics. The conversation around robotics has moved well beyond dystopian visions of robotic overlords and apocalyptic notions of cyborgs, toward the more practical question of how to augment society. Isaac Asimov, the master of the science-fiction genre and author of the Foundation series, proposed three laws devised to protect humans in their interactions with robots:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
At the Computer Forum meetup, guest speaker Shannon Vallor, Professor and Chair of the Department of Philosophy at Santa Clara University, spoke about Artificial Intelligence's ethical imperative and how to humanize machine values. It was an interesting talk. Dr. Vallor is also the William J. Rewak Professor at Santa Clara University, affiliated with the Markkula Center for Applied Ethics; president of the Society for Philosophy & Technology; an executive board member of the Foundation for Responsible Robotics; and a member of the IEEE Standards Association's Global Initiative for Ethical Considerations in the Design of Autonomous Systems.
Our times have been called AI's coming of age; the talk raised more questions than it answered, and that was by design. The presentation focused on task-specific AI as opposed to AGI (artificial general intelligence), or strong AI. As Andrew Ng once said:
> If you're trying to understand AI's near-term impact, don't think "sentience." Instead think "automation on steroids."
In the context of weak AI, intelligence is defined by behavioral competence alone, which does not require the system to mirror human patterns of thinking or reasoning, and does not assume consciousness (self-awareness), understanding (worldly knowledge), sentience (the ability to feel or suffer), judgment (moral discernment), or agency (responsibility, freedom). Speaking of AI as augmented cognition, Dr. Vallor argued that AI can counter harmful biases in human thinking, but only if designed with this aim in mind. AI can also model a much larger space of action possibilities, helping to surpass the cognitive limits of human 'bandwidth' and speed. This enables and augments human cognition, extending it artificially, since AI is not bound to our evolutionary cognitive heuristics and can discover new solutions by thinking outside the box.
However, any discussion of a human-AI future is simultaneously marked by fears and hopes. We are afraid of superintelligence, robot overlords, Skynet, a jobless future, and a WALL-E-style dystopia. At the same time, we also secretly (or not so secretly) hope for amplified humanity, AI-human partnerships, and the recovery of leisure, where humans can focus on higher-level cognitive activities, investing time in curing the ills of humanity (with AI's assistance, of course) and bringing about a new 'Age of Reason'. The typical anxieties around AI range from skepticism to concerns about safety, opacity, autonomy, accountability, power, diversity, and, most prominently, the loss of human vigor and wisdom. This leads to the key engineering question: how can AI be designed safely and benevolently, in ways that do not expose human beings to undue harm even when doing so would advance a system goal? For us engineers, the people designing and developing intelligent systems, a monumental question stares us in the face: how can we promote AI design that recognizes and gives appropriate weight to human suffering? From a systems security perspective, it also raises a critical question: how can AI, given access to critical data or systems essential to human well-being, be designed to recognize and resist hacking, abuse, or malware that aims to weaponize AI against vital human interests?
Are robots going to take our jobs? A big question mark around unemployment, and "is my job robot-proof?", also stares us squarely in the face. How do we manage the risks to human well-being posed by technological unemployment from AI/robotic automation? How can we ensure that the economic benefits of AI are justly distributed rather than shared only with the wealthiest, or with the people designing and developing these systems? How can we detect and mitigate unjust discrimination in machine judgments? One concrete starting point for that last question is to measure how a model's decisions differ across groups, as in the sketch below.
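To make "detecting discrimination in machine judgments" a bit more concrete, here is a minimal Python sketch of one of the simplest bias audits, a demographic parity check that compares favorable-decision rates across groups. This is my own illustration, not anything presented in the talk; the function names and the data are entirely hypothetical.

```python
# A minimal sketch of one common bias check: demographic parity.
# All data below is made up purely for illustration.

def selection_rate(decisions, groups, group):
    """Fraction of favorable decisions (1 = approve) for one group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rates across all groups.
    A gap near 0 suggests parity; a large gap warrants investigation."""
    rates = {g: selection_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical loan decisions (1 = approved) and applicant groups.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
    gap, rates = demographic_parity_gap(decisions, groups)
    print(f"selection rates: {rates}, parity gap: {gap:.2f}")
```

A check like this does not settle whether a system is fair (parity is only one of several competing fairness criteria), but it turns an abstract worry into a number that can be monitored and questioned.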
Societal roles are yet another cause for concern. How do we ensure that AI is not designed in ways that intrude upon, weaken, or damage human social relations and values? And what kinds of social roles should artificial agents occupy when they may interact with us as caregivers for the elderly and sick, nannies and babysitters, teachers and teaching assistants, law-enforcement agents and soldiers, legal and judicial advisors, and even potential romantic partners or friends? This broadens the query: how "should" an AI system behave when it is placed in a position of responsibility, and who would be responsible if an autonomous, self-learning agent injures or harms a human? Which social, political, and economic institutions, groups, or persons will be responsible for steering or governing AI technology, and how can we promote responsible AI design?
Dr. Vallor argues that even the most ardent opponents of AI acknowledge the advantages AI has over us humans in a variety of areas, including optimality, efficiency, decision-making speed, precision, reliability, readability (an informational advantage), compressibility, replicability, and invulnerability:
- Optimality, i.e. 'right/good' = a maximized utility function, which reduces complex value environments to a single measure but risks stasis once the optimum is reached (see the sketch after this list)
- Efficiency, i.e. all values instrumentalized relative to system goals/function and speed
- Decisional advantage over humans (speed of decision making)
- Precision: a calculation advantage over humans
- Reliability: a stability advantage over humans
- Readability: an informational advantage over humans
- Compressibility: lossless reduction of informational complexity, a computational advantage
- Replicability: an economic advantage (a force multiplier)
- Invulnerability: non-biological advantages over humans (physical and affective invulnerabilities)
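As a toy illustration of the "stasis" risk in the first bullet (my own sketch, not the speaker's; the actions and utilities are made up), consider an agent that maximizes a fixed utility function: once the optimum is found, nothing in the objective prompts it to revisit the answer, even after the world changes.

```python
# Toy illustration of optimization-induced stasis (hypothetical example):
# an agent that maximizes a fixed utility function stops "exploring"
# once it finds the optimum, even if the world changes around it.

def best_action(actions, utility):
    """Pick the action with the highest score under a fixed utility function."""
    return max(actions, key=utility)

if __name__ == "__main__":
    actions = ["commute by car", "commute by train", "work remotely"]

    # The utility function encodes yesterday's values (say, travel time).
    fixed_utility = {"commute by car": 0.7,
                     "commute by train": 0.5,
                     "work remotely": 0.9}.get

    print("optimal action:", best_action(actions, fixed_utility))

    # Suppose the environment changes (a new constraint makes remote work
    # infeasible). The frozen utility function knows nothing of this, so
    # the agent keeps returning the same "optimal" answer. Staying optimal
    # under a stale value model is exactly the stasis risk.
    print("after the world changes:", best_action(actions, fixed_utility))
```

The point is not that optimization is bad, but that "optimal" is always relative to a value model, and a value model that never gets revisited quietly freezes yesterday's priorities in place.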
There are also concerns about what happens when AI goes "rogue". AI 'phrenology' may treat facial or other biometrics as criminal or risk indicators, and a new explosion of spurious correlations may follow. In the case of bias laundering, training data permeated with racial, gender, and class biases is magically transformed into an "objective" decision-support system for judges, employers, bankers, and teachers, who may then use it to justify their own inherent biases, now supported by machines. There is also the prospect of gaming vital human systems, an AI 'arbitrage' in which algorithms ruthlessly exploit differences in human and machine capacities to manipulate, undermine, or wrongfully profit from vital human institutions (news, politics, etc.). The sketch below shows one mechanism behind bias laundering.
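To show how bias laundering can happen mechanically, here is a deliberately tiny sketch (hypothetical data invented by me, not from the talk): the protected attribute is withheld from the model, but a correlated proxy variable, here a zip code, carries the historical bias through anyway.

```python
# Toy sketch of bias laundering via a proxy variable (hypothetical data).
# The protected attribute 'group' is dropped from the features, but zip
# code correlates with it, so a model fit on "neutral" data still
# reproduces the biased historical decisions.
from collections import defaultdict

# Historical decisions were biased: group "b" applicants were mostly denied.
# Zip 94000 is predominantly group "a"; zip 95000 predominantly group "b".
history = [
    # (zip_code, group, past_decision) -- 'group' is NOT shown to the model
    (94000, "a", 1), (94000, "a", 1), (94000, "b", 1), (94000, "a", 1),
    (95000, "b", 0), (95000, "b", 0), (95000, "a", 0), (95000, "b", 0),
]

# "Train" the simplest possible model: historical approval rate per zip.
totals, approvals = defaultdict(int), defaultdict(int)
for zip_code, _group, decision in history:
    totals[zip_code] += 1
    approvals[zip_code] += decision

def predict(zip_code):
    """Approve if the historical approval rate for this zip is >= 0.5."""
    return 1 if approvals[zip_code] / totals[zip_code] >= 0.5 else 0

# The model never saw 'group', yet its outputs track it closely, because
# zip code acts as a proxy for group membership.
for zip_code in (94000, 95000):
    print(zip_code, "->", predict(zip_code))
```

Dropping the protected attribute is therefore not enough; the laundering happens upstream, in the historical labels, which is why outcome-level audits like the parity check sketched earlier still matter.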
After bringing these very important and challenging questions front and center, Dr. Vallor summed up her answer in what she calls the artificial intelligence mirror. She concludes that all designed human artifacts are mirrors of human values, including artificial agents. Machine values are therefore human values too; AI mirrors still reflect human values, though externalized and transformed. AI mirrors may magnify, reduce, narrow, invert, distort, center, or decenter human values.
For us AI system designers and implementers, the key lesson of this approach is that responsible AI design takes deliberate account of these mirror effects on human values, and of their ethical impact on human lives and institutions.
Some of the approaches and potential recommendations include mapping machine values against the human values being optimized and impacted, looking for value gaps, overlaps, conflicts, and tradeoffs; and practicing 'AI pre-mortems', i.e., assuming the project will go wrong and working through the failure modes: where, when, and in how many ways might it fail, who will be harmed, and how?
Finally, AI disaster planning is also an important part of an overall AI system design strategy: what is the worst possible way this technology could be used, and how sound are my mitigation strategies?
While humanizing machine values, Dr. Vallor warns us to beware of 'reverse adaptation' and to audit AI outcomes against robustly human as well as machine values. She recommends aiming to conserve moral complexity, not reduce it. Last but not least, we must seek and design workarounds and allowances for human values that are not easily optimized or reduced to algorithmic representation, such as compassion, justice, fairness, love, hope, responsibility, creativity, liberty, and dignity.