Human Ethics as the Ultimate Algorithm in 'Machines Like Me'
What Can Fiction Teach Us About AI? Part 3
In this article, I delve further into Ian McEwan's Machines Like Me. Extending our earlier examination of the readerly interests in narrative (mimetic, thematic, synthetic), I apply those tools here to unpack the intricate ethical dynamics the novel presents, with the aim of sharpening our real-world understanding of artificial intelligence.
Setting the Stage
In the following analysis, I offer a provisional interpretation of the ethical perspectives and processes at work in McEwan’s novel. I hope this overview will assist other educators in bringing this fantastic piece of literature into classrooms, seminars, and conference spaces. Given the maturity of its themes and the complexity of its historical frameworks, I’d recommend Machines Like Me for 11th and 12th graders in the high school context and beyond. I call my interpretation provisional because I will necessarily be cutting some corners in the analysis below.
A good rhetorical theorist would not jump to the following interpretive conclusions without much more evidence to ground particular claims, but my goal here is not to write an academic piece on the work; it is to open up a field of inquiry for others to inhabit according to the needs and demands of their own curricula and classrooms. In addition, I will do my best to leave some important plot points unspoiled so that first-time readers encounter some surprises while working through the text.
Major Characters and Conflicts
In Machines Like Me, McEwan’s narrator is Charlie, a 32-year-old unsuccessful day-trader who lives in London, inherits some money, and purchases an Adam. Charlie is dating Miranda, a 22-year-old graduate student who lives in the flat above his. Charlie struggles to understand why Miranda is interested in him, and in an attempt to shore up their relationship, he offers her the chance to set Adam’s personality preferences.
Once conscious, Adam alerts Charlie that he should not entirely trust Miranda, and later reveals that Miranda, prior to their relationship, accused a man named Gorringe of rape. As it turns out, Gorringe has just been released on parole, and Charlie understandably becomes fearful for Miranda’s safety, even as he grows less trusting of her. In this way, Adam’s presence puts significant pressure on Charlie and Miranda’s fledgling romance.
In the middle section of the novel, Miranda reveals that Gorringe raped her best friend, Miriam, when they were still teenagers. Miriam begged Miranda not to tell anyone about the crime. Miranda kept quiet, but a few months later, Miriam committed suicide. In response, Miranda approached Gorringe, slept with him, and then falsely accused him of raping her. Gorringe was sentenced to 6 years in prison, but he only served 3 years of that sentence.
How Do You Train an AI to Be Ethical?
Throughout the novel, Adam shows a certain rigidity when it comes to ethical judgments. His training and hardware tend toward hard ethical binaries. In a passage late in the novel, Adam’s creator Alan Turing points to some of the source material for this inflexibility:
“So–knowing not much about the mind, you want to embody an artificial one in social life? Machine learning can only take you so far. You’ll need to give this mind some rules to live by. How about a prohibition against lying? According to the Old Testament–Proverbs, I think–it’s an abomination to God.”
Here, Turing operates in a manner comparable to the contemporary designers of safe and secure LLMs, opting to encode ethical imperatives rather than situational or contextual frameworks.
In the novel, Turing readily admits the deficiencies of this strategy:
“But social life teems with harmless or even helpful untruths. How do we separate them out? Who’s going to write the algorithm for the little white lies that spare the blushes of a friend? Or the lie that sends a rapist to prison who’d otherwise go free? We don’t yet know how to teach machines to lie” (329).
Intriguingly, Turing here places deception and lying outside the scope of computer science in the novel’s alternative timeline.
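To make this contrast concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it comes from the novel or from any real system; every name in it (Utterance, rigid_rule, contextual_judgment, the intent labels) is hypothetical, invented only to show the structural difference between a hard-coded prohibition and the contextual judgment Turing says no one knows how to write.

```python
# Purely illustrative sketch: a hard-coded ethical rule of the kind
# Turing describes, versus the contextual judgment it cannot express.
# All names here are hypothetical, not drawn from any real system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Utterance:
    text: str
    speaker_believes_true: bool  # the speaker's own belief about the claim
    intent: str                  # e.g. "inform", "spare feelings", "secure justice"

def rigid_rule(u: Utterance) -> bool:
    """An Adam-style first principle: any statement the speaker
    believes to be false is forbidden, regardless of context."""
    return u.speaker_believes_true

def contextual_judgment(u: Utterance) -> Optional[bool]:
    """What social life actually demands: the same falsehood is
    permitted or condemned depending on intent and consequence.
    As Turing observes, no one knows how to write this function."""
    if u.speaker_believes_true:
        return True
    if u.intent == "spare feelings":   # the harmless white lie
        return True
    if u.intent == "secure justice":   # the lie that imprisons a rapist
        return None                    # the novel's deliberately open question
    return False

white_lie = Utterance("What a lovely gift!", speaker_believes_true=False,
                      intent="spare feelings")
print(rigid_rule(white_lie))           # False: the rigid rule forbids it
print(contextual_judgment(white_lie))  # True: human context permits it
```

The second function is of course a cheat: reducing intent to a tidy string label is precisely the move the novel suggests cannot be made.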
The ethical systems of the Adams and Eves require rigid first principles, and from those principles these advanced AIs cannot deviate, despite their highly advanced machine learning networks. This scenario is instructive for our current discourse surrounding LLMs, truthfulness, and hallucination. Strictly speaking, contemporary LLMs can neither lie nor tell the truth.
When we interpret a hallucination as a lie, we misread how AI-produced text is authored. Lying requires an autonomous consciousness intent on deceiving a particular audience. We might conclude, then, that contemporary developers are operating inside the same limits as the computer scientists of the novel.
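A toy sketch may help here as well. The "model" below is entirely hypothetical, a handful of hand-written probabilities standing in for billions of learned weights, but its structure mirrors how LLM decoding broadly works: the loop selects each next token by probability alone, and at no point does a belief state or an intent to deceive enter the computation.

```python
import random

# Hypothetical toy "model": hand-written next-token probabilities.
# Fact and fiction are weighted by frequency in the data, not by truth.
toy_model = {
    ("the", "capital"): {"of": 1.0},
    ("capital", "of"): {"France": 0.7, "Atlantis": 0.3},
    ("of", "France"): {"is": 1.0},
    ("of", "Atlantis"): {"is": 1.0},
    ("France", "is"): {"Paris": 1.0},
    ("Atlantis", "is"): {"Poseidonia": 1.0},  # invented for this example
}

def sample_next(context):
    """Choose the next token by probability alone: no truth
    predicate appears anywhere in this process."""
    dist = toy_model.get(context, {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

tokens = ["the", "capital"]
while tokens[-1] != "<end>" and len(tokens) < 10:
    tokens.append(sample_next((tokens[-2], tokens[-1])))
print(" ".join(t for t in tokens if t != "<end>"))
# Sometimes "the capital of France is Paris", sometimes
# "the capital of Atlantis is Poseidonia": a hallucination, not a lie.
```

Whatever string emerges, the generating process is identical; calling one output honest and another deceitful imports categories the computation never contained.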
Three Competing Ethical Systems
A significant part of the ethical project of the novel hinges on Miranda’s response to Miriam’s rape and suicide. Novels are exquisite instruments for the study and analysis of competing ethical systems and judgments, and McEwan’s novel is no exception. In Machines Like Me, the author uses Miranda’s response to ask readers to assess, at the very least, three competing ethical systems.
In the first all-too-human system, Miranda pursues justice for her friend Miriam by lying about her sexual encounter with Gorringe. After Miranda’s long and compelling narrative about Miriam’s violation and suicide, McEwan’s readers come to deeply appreciate Miranda’s rationale, although they may not fully agree with her methods.
In the second all-too-AI system, Adam’s rigid ethical architecture leads him to condemn Miranda for her entrapment of Gorringe. He simply cannot permit her to get away with her secondary crime:
“You schemed to entrap Gorringe. That’s a crime. A complete transcript of your story and the sound file are also in the bundle. If he’s to be charged, you must be too. Symmetry, you see” (299).
This judgment leads Adam to turn Miranda over to the authorities.
Third, the all-too-human ethical idealism that inspires Adam’s training stands out in sharp relief against the first two systems. As is often the case in good works of literature, McEwan has estranged these all-too-human ideals through the filter and rigid application of an imaginary AI system, prompting two questions:
Is this idealism a product of an artificial intelligence now conceptualized as our moral consciousness?
Is this idealism sufficiently invested in and connected to actual life, experience, and history to serve as a guide for future action?
Human Ethics: The Ultimate Algorithm
In this way, McEwan’s novel suggests these ethical systems combine dynamically and fluidly in human experience, and together represent a comprehensive field of possible ethical engagements–one that approximates the complexity of the ethical landscape of actual life. Over the course of a life, a human being makes many ethical commitments, breaks those commitments, and uses various frameworks, such as literary texts or artificial intelligence systems, to shine fresh light on the age-old antithesis between theory and practice.
McEwan’s novel suggests that beneath our rigid or ideal ethical systems lies a kind of ethical algorithm that is more flexible, responsive, and contextual, and that trusting that human algorithm may be our best bet when responding to a world filled with innumerable uncertainties and unpredictabilities. The novel in its complexity edges close to the shape of this algorithm, and yet readers would be wise not to confuse the narrative with the algorithm itself. If they do, they risk falling into the same trap as many of the novel’s characters, who arbitrarily or prematurely collapse ethical multiplicity and contextuality and thus cut themselves off from the vitality of the ethical moment.
Readerly Engagements in Machines Like Me
In conclusion, I’d like to point out how powerfully McEwan wields the novel’s various readerly engagements (mimetic, thematic, synthetic). Throughout Machines Like Me, McEwan maintains a strong alignment between the mimetic and thematic components. His characters routinely act as real human beings would in the actual world. Miranda is a chief example: her twin pursuit of justice and vengeance is among the most recognizably human gestures in the entire novel, and McEwan uses it to build substantial themes concerning truth, justice, violence, human nature, and reconciliation. The novel’s primary activation of the synthetic component arises through its alternative timeline and the advanced humanoid AI, Adam, who is a primary character in the narrative progression.
Concurrently, McEwan adeptly navigates the conventions of the science fiction genre, immersing himself in comprehensive research about AI and neural networks as they were understood in 2019. This research paves a conceivable route from the story's setting to its envisioned future. McEwan's novel, in many respects, distinguishes itself in the 2019 sci-fi landscape through its thorough understanding of deep learning, machine learning, and computer science.
Collectively, these factors arguably reduce the disruptive influence of the synthetic component on the novel’s mimetic and thematic alignment. This is not to suggest that AIs like Adam will necessarily materialize, but rather that there is a tangible foundation in today’s advanced LLMs that readers can feasibly extrapolate into near-future robots akin to Adam.
A comparison with history is instructive here. If a reader in 1950s North America were handed Machines Like Me, their reference point would be the rudimentary, large-scale computers of their time, and the leap from those machines to Adam would appear enormous. In contrast, whatever one’s philosophical stance, the transition from an LLM to Adam today appears far less formidable, even if, in principle, it might be equally significant.
However, we will have to leave that discussion for another day…
Thanks for reading, Educating AI!
Nick Potkalitsky, Ph.D.