Religion and Artificial Minds: How AI Reflects Our Ethical Issues
by Andy Pal, ’26
On October 29, 2024, I was delighted to attend Religion and Artificial Minds, the second event from this year’s Series on Religion and Society. The lecture featured two insightful speakers: Dr. Anne Foerst, a Lutheran theologian and Professor of Computer Science at St. Bonaventure University, and Dr. Ankur Gupta, a Computer Science and Software Engineering Professor at Butler University. Following introductions by Daniel Meyers, Director of The Compass Center, and Dr. James McGrath, Chair in New Testament Language and Literature, the speakers examined the ethical and theological implications presented by AI in today’s society.
Dr. Foerst opened the lecture by addressing key misconceptions surrounding AI. She explained that the large language models (LLMs) behind many AI systems lack self-awareness and are ethically blind. Because AI detects patterns without self-doubt or a moral compass, it can surface patterns that humans cannot, but it also falsely treats the sexist, racist, and classist ideas embedded in its training data as facts.
Dr. Gupta built upon her comments, arguing that we must design AI dispassionately, in ways that reflect our values and prevent these machines from producing bigoted or incorrect outputs. In particular, AI must be explicitly programmed to detect the specific patterns we seek rather than remain hyper-general, which yields only broad and skewed results.
During the post-lecture discussion, Dr. McGrath added that, while AI can provide a tempting array of information, we should never rely on it blindly. These systems can be abused to spread misinformation and accelerate online truth decay. With the rise of online forums that reinforce confirmation bias and leave no room for gray area or ambiguity, we must exercise our ability to think critically and ensure that AI benefits society rather than harms it.
Before the event concluded, Dr. Foerst humorously admitted that if a robot became self-aware enough to function as a “human” in our society, she would happily welcome and even willingly baptize it. However, she cautioned that we must resist the urge to anthropomorphize machines that are not as smart or capable as we often pretend.
As AI becomes an increasingly large part of my daily life, I am grateful for the valuable reminder not to fear AI but to recognize its inability to think ethically or emotionally. Humans are needed for these perspectives, and AI poses a threat only when we deploy it irresponsibly.