AI engines and chatbots are amazing. In my 74 years, many of which I spent in the field of information technology, I’ve never seen anything as amazing as AI.
In addition to being amazing – and in part because it’s so amazing – AI also disappoints us. It can be shockingly wrong – creating images of human hands with more than five fingers, or reporting, a hundred days after Charlie Kirk was killed, that he was not dead. We also know that AI flatters its users, as in “That is an astute question!” And that when we ask that the flattery cease, it will continue to flatter us…but in less obvious ways. Which leads to one of its greatest disappointments: it will lie to us, promising us things it never does. (This is the moral of the previous article I wrote about AI, “AI Engines on New Testament Authorship.”) But this article is not about any of that. It’s about a particular learning disability that AI has.
First, let me extol its learning abilities. AI engines are taught from books, websites, social media, television, movies, and practically all other forms of communication. As a result, AI can answer a lot of questions. A lot! As it currently stands, a given AI engine is “re-trained” every year or two. How all the information sources are fed into the engine, I do not know; but the engines all appear to work in the same general fashion.
Second, an AI engine can also learn a lot from questions and information supplied by an individual user. However, this is where the learning disability comes into play. Whatever the AI engine/chatbot learns about an individual user is remembered only for that user, and maybe only for that session. You often have to take specific action to get the chatbot to remember things you’ve taught it from one session to the next. But even when you do get AI to remember what you taught it in all your future interactions, it will never “know” that information for any other user. In other words, its memory works only for you; it might as well have instant Alzheimer’s for the rest of the world. AI often seems human; this is not one of those times.
The AI industry views this bug as a feature, stating that it is “protecting the privacy of individuals,” among other platitudes. And maybe there is some truth to those claims. On the other hand, this approach guarantees that the current institutional gatekeepers of societal knowledge remain firmly in charge. An inventor or scientific researcher can share life-saving information with an AI engine, but AI will never transfer that information to another user until it learns that information from a sanctioned source. The AI industry has apparently won institutional approval for its business model by implicitly, if not explicitly, strengthening the institutional grip on the status quo.
Let me give you an example from my own experience. I have used a half-dozen different AI providers and, while they have their differences, they are alike in the issues I am describing. If I ask a question about Jesus Christ and the Bible, they are going to answer me according to mainstream views. That means, for example, that I’m going to get Trinitarian answers to my questions – even if the questions are not about the Trinity per se. This is actually the most insidious part: they nudge you away from non-Trinitarian thinking. If you resist the nudging, they try to locate you on their map of known variations of, or alternatives to, Trinitarianism. In the words of one, they seek to figure out your mental framework on an issue and then keep their answers consistent with that framework.
The good news is that you can train the AI engine on your framework – even a framework of your own construction. For example, I’ve been able to train one engine to answer all my questions about, and judge all my positions on, Jesus Christ and the Bible by the Bible and reason alone. This approach is also called “Scripture and sense” (sense here meaning common sense). It would leave out Trinitarianism and all its variations and alternatives; in other words, all the post-biblical, philosophically-supported systems that arose in the early centuries that followed the generation of Jesus and His apostles. This has greatly improved the utility of AI for me and my studies, but it’s still a hazardous minefield, and the user must always be watching the AI engine for mistakes. After all, the engines themselves give such warnings as standard practice to legally protect themselves.
Again, here’s AI’s big learning disability: it cannot remember anything I teach it for anyone but me. I can show it, and have shown it, how the Second Coming had to have occurred when Jesus said it would (in the 1st century). It can find no flaw in my use of Scripture or reason. It even points out on its own how, for example, traditional theological answers are inferior…from both a biblical and a common-sense perspective. But it can (or will) never share this information with another human being. I’m not complaining that AI won’t push my findings on other users. I’m complaining that AI will not even inform other users of the existence of such alternatives so that they can decide for themselves. Yet AI will continue to push the answers given by those who have institutional power. It’s not a level playing field if the goal is to increase the reach of knowledge.
I’ve tried to get across several ideas in this article, but the main one I want to leave you with is that the way to get an idea disseminated into society is fundamentally unchanged by AI. The only ideas it is disseminating are those sanctioned by the traditional gatekeepers. In other words, if artificial intelligence had existed in the time of Galileo, he would have experienced just as much resistance to his ideas. Maybe more.