Discussion about this post

Beatrice Marovich

I’m a little late to read this, but I find it super fascinating! My academic field is religious studies and theology, and I’ve had questions about the limits and holes in the training protocol for LLMs, when it comes to religion. But I don’t really understand the computational systems well enough at this point to have developed thoughts, so your piece is kind of helping me better articulate my questions.

My latent big question, though, has been about the protocol to train AI like ChatGPT toward objective neutrality on religious and theological issues and topics. From my perspective, especially given the data sets that these LLMs are working with, it seems inevitable that the training protocols will fail in interesting or curious ways (like this Marian devotion moment). From a simplistic or obvious standpoint, we could say it’s because (as so many scholars in the humanities have been reminding us for decades) neutrality itself is an ideological standpoint. And an ideological frame, even in an LLM, just can’t extend infinitely into all registers of its data processing.

But I feel like you are offering another sort of take. Is that fair to say? Is part of the implication of what you’re suggesting here that we could be watching the development of a form of rationality in the AI that doesn’t look rational to us, according to our standard modern expectations? That it is, in essence, a form of rationality that includes (rather than excludes) what we might commonly refer to as “reverence”? Or am I misreading?
