Beware of Catholic AI
If you are a Catholic and have been seeing ads for Magisterium AI and Truthly AI, you—like me—have probably been intrigued. Truthly AI markets itself as a Catholic AI companion, allowing users to “engage in meaningful conversations and get answers to any question or problem.” That is quite the promise. Magisterium AI’s promise is similar. It claims to give “accurate answers to your specific questions about faith and morals” and can help clarify difficult doctrines like the Trinity.
On the surface, these tools seem to be harmless at worst and edifying at best. This is especially the case because their models claim to be trained exclusively on canonical texts and official Church teachings. However, caution is in order. For four primary reasons, we should encourage young people to proceed with care, and we should not be quick to encourage the use of such tools in catechesis.
Hallucination-Free AI?
“Hallucinations” occur when LLMs make up information. You might ask a chatbot for a list of articles on abortion and find that while some citations are accurate, others pair real authors with fabricated titles. While hallucinations can be mitigated, they cannot be eliminated. LLMs produce outputs based on statistical likelihood, not truth. As a result, they will inevitably give answers that sound authoritative but are patently false.
Moreover, while it is surely better to have a model that draws only from curated sources, each of these tools still uses a base generative pre-trained transformer (GPT), originally trained on massive amounts of text from the internet and other published material. There is not enough content in an exclusively Catholic corpus to achieve the sophistication of a GPT. Thus, Magisterium AI’s claim to provide “accurate answers” is valid only with considerable qualification. It readily admitted as much when I asked if it could give false answers, saying, “Yes. Like all large language models, I can sometimes generate statements that are not grounded in the sources I have been trained on or that misrepresent Church teaching. When this happens the output is called a hallucination.” Truthly AI is, perhaps deliberately, vaguer: you can prompt the bot to “get answers,” but whether those answers will be true each time is something else entirely.
To be clear, hallucinations are intrinsic to LLMs. So long as these Catholic chatbots are built on them, they will be hallucination-prone and therefore neither infallible nor wholly accurate sources of Church teaching. For that, go to the Catechism itself.
Some might argue that even truth mediated through these very fallible chatbots is better than none at all. Certainly, an imperfect tool like Truthly AI or Magisterium AI can sometimes get it right—maybe even most of the time. However, we should not forget that truth matters not only as content but as encounter. Tools that provide correct propositions while eroding virtuous habits may frustrate the ultimate end of catechetical formation—a point I’ll consider more fully in what follows.
The Dangers of Outsourcing Spiritual Authority
Because Catholic AI sounds authoritative, people may substitute its answers for the guidance of Scripture, tradition, or living teachers. Why? For two reasons. First, because of AI’s fluency and the speed with which it produces output, we can easily fall victim to “automation bias”—our tendency to prefer a machine’s answers to our own judgment simply because they are automated. As AI becomes ubiquitous, this poses serious dangers, especially for young and impressionable users who may begin to rely on it as a tool for moral discernment.
Second, the ease of access to these tools increases the likelihood that people will turn to them in times of stress. Think about it: if you are facing a pressing moral dilemma amid life’s busyness and chaos, it is often difficult to find time to schedule a meeting with your parish priest, or to take a holy hour, put distractions at bay, and invoke the guidance of the Holy Spirit. What is much easier? Pulling out the supercomputer in your phone and getting what feels like wise counsel in seconds, with citations from Church documents lending its answers an air of legitimacy. Practically and pastorally speaking, this risks transferring authority from the Church to a machine—something incapable of receiving grace, having faith, or possessing genuine wisdom. The danger is not just misinformation but a distortion of how authority actually functions in the life of the Church. Chatbots have no authority; they are synthesizers of information. Names like “Magisterium AI” obscure their real capabilities and proper functions, even if their makers warn users not to assign their answers moral weight.
Can AI Weigh Goods? Love Truth? Care About You?
Even if a chatbot could give perfectly accurate doctrinal summaries and answers, it cannot understand first principles or weigh goods against one another in context. This is what we do when we face moral dilemmas. We know truth is a good. We know life is a good. But we might be in a situation where speaking the truth poses risks to our safety or even our life. Here we must decide: What is more important? What is demanded of us here? And the answer is not easy. Many factors come into play. For example, someone responsible for small children faces a layer of complexity another person may not. Context can add or remove weight from certain goods. Chatbots, again, do not seek truth or desire goods at all. Their probabilistic nature flattens moral discernment: every source weighs the same, and no values or cares propel their outputs.
Moreover, a chatbot does not care about you as a friend, parent, or confessor does. It has no interest in your spiritual development. In fact, it has no interests at all. Yet its anthropomorphic training makes it seem as though it does have cares—even cares for us as persons. It does not. This blurs its proper function as a tool and trains users to defer to it in ways harmful to their spiritual and moral development. This is especially the case because we interact with chatbots much as we interact with fellow humans: through conversational text messaging. We often use courteous language, saying “please” and “thank you,” as if we might hurt its nonexistent feelings. Over time, this manner of interaction places unwarranted trust in something fundamentally untrustworthy in matters of faith and morals—not only because it is hallucination-prone and does not care about first principles or about you as a person, but because, most importantly, the Holy Spirit does not guide the outputs of LLMs. For this reason, it would not matter even if chatbots were immune to error, or if agentic AI could one day become sentient and develop feelings and values. They still would not be made in the imago Dei; they cannot receive the gifts of the Holy Spirit through the laying on of hands. Grasping this distinction intellectually is the easy part; the real problem is our practical engagement with the tool. Though the distinction may be conceptually clear to some, many will continue to defer to such tools as sources of truth for the reasons articulated above: they are easy to use, seem trustworthy, and command language with a fluency rare among even the most learned philosophers and theologians—all while possibly delivering falsehoods with confidence.
The Prudence of Embodied Experience
Practical wisdom, or prudence, is the virtue by which we learn to apply universal principles to specific situations. It depends on lived experience as embodied human beings—facing fear, struggling with desire, making mistakes, and learning from them. Wisdom comes with age and experience. AI cannot undergo life, suffer, fail, or grow; it cannot practice prudence. As Aquinas explains, prudence must be “perfected by memory and experience so as to judge promptly of particular cases” (ST II–II, Q.47, a.3, ad 3). In other words, prudence cannot be produced as output; it must be lived.
Now, because prudence must be practiced to be acquired (save infused prudence, which is another matter), outsourcing moral discernment to a chatbot robs us of opportunities to practice discernment ourselves. The Catechism underscores this, teaching that conscience requires continual formation if it is to remain upright and truthful (CCC §1783). The more we offload discernment to machines, the less we cultivate the habits of attention, deliberation, and judgment needed to become prudent people.
To illustrate: Can you ever learn to shoot a basketball simply by watching someone else shoot it? Of course not. You have to pick up the ball and throw it yourself—with proper form, repeatedly—in order to learn how to sink a shot. Prudence works the same way. We cannot learn it by merely receiving answers from others. We have to wrestle with decisions ourselves to grow in discernment. There is no shortcut.
To be clear, I do not claim malicious intent on the part of these Catholic AI companies. However, given the very design of LLMs and the serious theological problems with outsourcing matters of faith and morals to chatbots, they cannot deliver on their promises. In the end, Truthly AI and Magisterium AI seem like harmless tools to aid the faithful in understanding dense moral doctrine. They might provide some limited and qualified help with that, but they do so at the expense of deeper goods. Chronic use and promotion of these tools seem hazardous for both spiritual and moral development.
We are not machines, and we cannot be well formed by them. Human formation should be primarily human—even when it is easier and faster to hand our questions to a machine. That brings me back to my main point: beware of so-called “Catholic AI.”
