I don't know how AI engines work -- happily I've been out of IT long enough -- but I'm confident the algorithms are programmed intentionally and are given rules to follow, both about themselves and about how to deal with certain content. Different engines may handle the same content differently. They may construct responses that ultimately reflect their designers' predilections, overt or tacit, effectively leaning one way or the other on certain issues.
Consider that today's large social media sites have armies of watchers constantly going through content and ranking it in ways that make it more or less visible, transparent or hidden. Presumably the guidelines those watchers follow can be turned into algorithmic controls within an AI engine.
I'm somewhat sceptical of the notion that an AI simply regurgitates what it scoops up, and I'm very sceptical that human content could influence how an AI engine acts or presents information. Over time we may get a sense of how different engines respond -- not unlike studying a polling provider for several years to understand how their practices tend to lean one way or the other. Regardless of the subject matter, will there be, or can there be, a 'neutral' AI? I don't know.
Agree. It follows its programming, and there is an open question about how neutral programming can be.
My understanding is that the chatbots are predictive statistical engines: they look for word patterns and guess the next word based on extensive pattern recognition. There is also a programmed-in verification and correction step the AI can use to check whether its word prediction was right and to correct itself if it was wrong. OK, most of that is beyond my comprehension. No problem. But no one can explain exactly what is going on inside that process -- how it actually, practically works! That leads to the fear of machines going wild.
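To make the "predictive statistical engine" idea concrete, here is a deliberately tiny sketch: a bigram model that counts which word follows which in some training text, then guesses the most common successor. Real chatbots use neural networks trained on vast corpora rather than a lookup table, but the basic move -- predict the next word from patterns seen before -- is the same. The training text and function names here are invented purely for illustration.

```python
from collections import Counter, defaultdict

# A toy "training corpus" -- real systems ingest billions of words.
training_text = (
    "the cat sat on the mat "
    "the cat chased the mouse "
    "the cat ate the fish"
)

# Count which word follows which in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Guess the next word: the most frequently observed successor."""
    if word not in follows:
        return None  # never saw this word followed by anything
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" -- the most common word after "the" above
print(predict_next("sat"))   # "on"
```

Chaining such predictions word by word produces fluent-looking text with no understanding behind it, which is roughly the intuition the "statistical parrot" criticism rests on.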
I believe the applied programs, such as its use in radiology, work similarly: look for patterns in as much data as can be provided. AI doesn't get tired, and it can notice minute patterns that escape the human eye (it simply has greater resolution), especially an eye that has been reading images all day long and is getting tired. Again, it can learn from its mistakes.
Overall, what I was trying to say (without the help of AI) is that AI is not creative even when it appears to be, and it requires the creative output of humans, who become anonymous in this immense data scrape. For example, fed enough fantasy art like the piece posted (which seems to me to combine several male fantasies into the possibility of a pixie woman who can fly), the AI sees patterns in the way images are composed and can iterate on them. The programmer, presumably, adds the rules that govern the iteration. Suddenly, AI seems to be creating. I assume music follows the same scenario.
But creative folks can also use AI to create. An audiophile friend sent me an example of fantasy art (I don't know what it's actually called) that a buddy had made by explaining very specifically to the AI what he wanted. It was quite impressive. But it is based upon the creative work of many, many original artists. If it's for the artist's own amusement, that seems fine. What about if he sells it?