I know this is unpopular as hell, but I believe LLMs have the potential to do more good than bad for learning, as long as you don't use them for critical things. So no health-related questions, or anything it would be totally unacceptable to get wrong.
The ability to learn about most subjects in a really short time from a “private tutor” makes it an effective, if flawed, tool.
Let's say it gets historical facts wrong 10% of the time. Is the world better off if people learn a lot more, even with some errors here and there? Most people seem to know almost no history at all.
Currently people know very little about critical topics that are important to a society. This ignorance is politically and societally very damaging, maybe a lot more damaging than the source being 10% wrong. And if you ask it about social issues, you get more empathetic answers and views than in mainstream political discourse: “Criminals are criminals for societal reasons”, “Human rights are important”, etc.
Yes, I know the truth can be manipulated, so the model has to be neutral, which some LLMs probably aren't or won't be.
Am I totally crazy for thinking this?


I spent years in IT. We would often run into problems we didn't know the solution to. The first step was to hit up searches and see what people were saying.
You NEVER randomly take the first solution and apply it to your production machine. You review the process and the reasoning behind the solution, and see how it fits your case. Then you move to the next solution and see how the two compare. Then you keep digging, reading comments and arguments, and sussing out how these might apply to your situation. Depending on the situation, you build a test rig to duplicate the issue and test fixes. You pick up other things along the way, and your knowledge and skill set improve.
This is completely different from what I'm personally seeing with AI. People expect the answers to be correct and just go with them. They're missing the second part of the education, which is learning the nitty-gritty and the hows and whys. They want a shortcut to knowledge, and there isn't one.
It's like someone who spends years learning how to play, write music, and record their instrument versus someone with GarageBand lining up premade loops end to end and calling it a song. One of these people is a musician and one is not.
Our previous mechanic's garage hired a couple of guys new to the industry. They weren't being trained by anyone but admitted to me that they relied on ChatGPT. The service and quality of work got so bad we stopped using them.
If you run with the 10% wrong figure, that's still far too high an error rate. Would you expect 10% of a university education to be completely wrong?