AI has no independent ability to vet data from the sources it pulls. If I showed you and an AI a picture of a pig flying and a pig playing in mud, the AI has no basis to decide that one of those pictures is more realistic than the other, as you can easily do.
AI cannot tell a reliable source from an unreliable one. Any biases the AI has are going to come either from the programmer(s) who tailor it to favor one thing over another according to their own biases, OR from being given the ability to weight "more popular" posts... as in, things that get searched for or clicked most often and things with more "thumbs up" ratings... but those are not at all reliable ways to separate truth from fiction.
People are flawed, but we at least have the innate ability to treat each new encounter on its own, so even with our biases we have a chance each time to recalibrate and decide between two choices. The AI simply does not have this ability and is limited by its algorithm and the available data.
If I gave you nothing but fictional garbage page after page after page... you still have the ability to determine whether what I'm telling you is complete fabricated crap. Maybe you won't, but you can... an AI cannot make those kinds of decisions. Will it ever be able to? That starts to be the conversation about sentience.
Sentience, in my book anyway, is the ability (for good or bad) to make a decision based on information not provided. Humans have a tendency to jump to conclusions, sometimes the wrong ones, but we can interpolate missing information and link seemingly unrelated things to make a leap in logic from data not present. Of course, to prove a fact, we have to reverse engineer from there and see if our conclusion makes sense... but we can do that. AI cannot.
The AI can produce fabulous renderings of artwork from prompts... but it can't tell how many fingers it put on a person's hand or whether it has rendered that incorrectly. IF it could, it would never make that mistake... but it's damned hard to get an AI to not make that mistake. That's the easiest example I can come up with... humans can take a look at a picture generated by AI and spot the flaws in seconds... the AI clearly can't do that.