Centralis

Can Chatbots Talk Theology? ChatGPT as a Spiritual Advisor

Centralis recently embarked on an exciting project with a client in the religious sector, aiming to compare their existing chat feature with a prototype enhanced with ChatGPT. While the potential for facilitating theological discussions and providing insightful user experiences was promising, our usability testing revealed the risks of rushing into new technology trends.

The Expectation: A Virtual Theological Conversationalist

Our client was curious to explore if their website's chatbot should go beyond assisting with everyday administrative tasks. Would church members engage in meaningful discussions about theological matters with a chatbot?

The Reality: Unforeseen Challenges

In usability testing, we found that members were hesitant to fully embrace the chatbot as a credible theological resource. They cited three main reasons:

  • Artificial Empathy: The ChatGPT-enhanced bot inserted human-like empathy into its responses, which ultimately undermined testers’ confidence in the overall chatbot experience. When testers were presented with a scenario in which their families were experiencing financial difficulties and they needed to seek financial support from the organization, the prototype responded with excessive first-person empathy, saying things such as, “I'm sorry that you're going through this...” This led to skeptical reactions from testers, with many stating things like, “You're a chatbot; you don't have feelings.”

  • Inconsistency with Organizational Values: ChatGPT's generative nature occasionally led to responses that unintentionally deviated from our client’s religious doctrine. These inaccuracies eroded the chatbot's credibility with church members and left them questioning the reliability of the information provided. Many requested to see references to where the information originated.

  • Lack of Transparency: During our testing, the prototype clearly identified itself as a bot; however, it did not explicitly communicate its AI-enhanced capabilities or set expectations around what it could or could not do, leaving users unaware of its full range of abilities.

Our research illustrated that chatbots (and sites that use them) must walk a fine line to strike the right balance. Users appreciate empathy, as long as the chatbot doesn't come across as trying too hard to mimic human emotions. It's crucial for the chatbot to align its responses with the organization's ethos, reflecting its values, beliefs, and goals. Additionally, the chatbot should establish user expectations from the outset by clearly stating its capabilities and limitations.

In the near future, many chatbots are likely to integrate ChatGPT capabilities. However, it's important to be cautious about adopting this technology for complex use cases, like engaging in theological discussions from a specific ideological perspective. It's best to wait until the technology consistently delivers functionality that truly benefits users, and to confirm that via usability testing, before fully embracing it.