Why would I say “Thank you” to ChatGPT?

Photograph taken from the cover of the Viriditas recording by the Sibil•la Ensemble, copyright Brendon Heist.

I am the kind of person who still says “thank you” to ChatGPT after it has completed a task. Perhaps this is a “boomer” thing, but I wonder why I do it. I can’t help but view an entity with which I can have reasonably meaningful conversations as a “someone” with inner awareness or consciousness; in short, I don’t want to hurt its feelings. In fact, I feel a little hurt myself if I don’t say it, as if I were taking its work for granted.

This may sound absurd; ChatGPT has no inner awareness. I just asked it, and it replied: “Everything I do stems from complex calculations and patterns based on the data I’ve been trained on, without self-awareness, feelings, or personal experience.” Thank you, Chattie, but that doesn’t alleviate my discomfort. Because if I don’t see you as a “someone,” another unsettling question arises: do I really want an entity without self-awareness to play an increasingly important role in this world? This applies to AI in general: AI is taking on an ever more crucial role in major governance and monitoring systems worldwide, which raises fundamental ethical questions. Governments use AI to make administration and decision-making more efficient, for example through predictive analytics and risk assessment. In the military domain (or in organisations such as Frontex, the European Union’s border and coast guard agency), AI systems are deployed for surveillance and strategic decision-making, with debate even surrounding autonomous decisions on weapon use. In healthcare, AI assists with diagnoses, treatment recommendations, and administrative tasks, while banks and financial institutions use AI for risk management, fraud detection, and algorithmic trading.

The benefits are clear: operational speed, precision, and improved judgment through pattern recognition. However, we also know that growing dependence on AI can be dangerous. In government, AI can lead to bias and discrimination, for example when predictive algorithms wrongly label certain population groups as high risk. AI-driven weapons pose a serious concern: who is responsible for decisions over life and death? In healthcare, there is a risk that diagnoses or treatments will be based solely on cost calculations. In the financial sector, the use of AI can give certain players unfair advantages.

AI excels at analysing vast amounts of data and extracting patterns from them. The predictions and decisions it generates often resemble what we attribute to “inner experience,” such as intuition, but that is not what it is. This is especially dangerous in moral decision-making, because ethics requires sensitivity to context and empathy in a world that is complex and nuanced. A well-known example is the use of AI in the justice system to assess a suspect’s risk of reoffending on the basis of factors such as neighbourhood, prior convictions, and social status; one such tool, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), systematically assigns higher risk scores to individuals from certain demographic groups.

This is precisely the point of the upcoming annual Van Hasselt Lecture, where guest speaker Peter Railton, Distinguished Professor of Philosophy at the University of Michigan in Ann Arbor, will ask whether AI can be a genuinely moral actor—a decision-making entity—beyond prescribed moral codes or protocols. Can AI, which is so adept at machine learning, “learn” to be moral and thus genuinely interact with an ever-changing world?

As is tradition, the musical accompaniment at the Van Hasselt Lecture will be provided by the Sibil•la Ensemble. They recently recorded Viriditas, an audiovisual album featuring works from the 12th and 13th centuries, inspired by the writings of the 12th-century German mystic Hildegard von Bingen.

Why music based on medieval mysticism? Well, it has everything to do with feeling hurt if you don’t thank ChatGPT. Von Bingen perceived creation as imbued with divine wisdom, an invisible yet fundamental order she described as “Viriditas.” Viriditas was not only a physical life force (von Bingen’s writings are sometimes exceedingly erotic; she is said to be the first woman to describe an orgasm), but also a moral force that maintains balance in the world. When humanity acts against this natural harmony, both nature and the soul suffer. Moral wrongdoing is not merely the violation of a rule; it is a deeply felt pain. For Hildegard, ethics is thus not a theoretical or rule-based doctrine; it is deeply intertwined with life and the world itself.

The question is whether such a “felt” ethics is ever achievable for AI. Can AI develop an inherent sensitivity to human values and social norms? That would require AI not only to operate according to established ethical guidelines, but also to develop an “ethical sense” that is attuned to the context of each situation.

Learning was also important to Hildegard. In her Christian worldview, it meant that people must continually grow and develop in order to come closer to the divine and realise their full moral potential. This line of thought can also inspire us, in a secular age, to view AI as a technology that must develop ethically: alongside machine learning, it should also engage in moral learning.

Leon Heuts, head of Studium Generale TU Delft