So one of the big problems with ChatGPT-type AI as it stands today is that it really wants to give you an answer. It's kind of like a dog trying to please its owner. You get lots of false positives because you've given it the job of providing an answer, so that's what it will do until you correct it for lying.
NFL refs are like this in a lot of ways. You give them a job and a rulebook that is needlessly complicated, then make them enforce it on a game moving way too fast for them to actually see. So we get primetime games with 20+ accepted penalties because, well, we've got the whistles and we're here, so might as well do the job. They are out there looking for penalties rather than officiating the game.