By day, I'm a software developer and computer historian. I give lectures on computer history, telling people the marvelous story of technology, electronics and computers, from the first triode vacuum tube, invented by Lee De Forest in 1906, and the first digital computers to the current wave of AIs, and everything in between. I'm amazed that the field of artificial intelligence, which since the 1950s has been in a permanent state of "the next five years will be revolutionary", has finally produced results -- and in what a way!
At the same time, it's sad that every time a new technology arrives, people react negatively, perhaps out of fear. For example, in the 1560s the first Russian printing shop opened in Moscow. Months later it was burned down by copyists, who saw it as a threat to their livelihood. The same happened with the invention of photography (it would end art), the phonograph record (it would end music), the radio (it would end music, again), television (it was going to be the end of cinema), the computer (everyone would lose their jobs), the home Internet (it would end music and literature, one more time). And so on. Not a single one of those prophecies has come true, of course. But that doesn't stop us from falling into the same trap again and again. It's part of the human condition, I'm afraid.
The empty criticism of AI doesn't surprise me. In fact, what would have surprised me is a lack of criticism.
That said, AIs, at least in their current form, are limited. For starters, they have been trained on texts written by humans, so they are not immune to human bias. And they aren't really "intelligent", at least not in the sense in which most people use the word. It's technically true that they are just very elaborate Markov chains, but what that means is that they excel at producing coherent sentences and paragraphs without really understanding them. And thus they are unable to tell fact from fiction, or to assign a "level of trustworthiness" to each piece of information, because they don't even operate on pieces of information. Consequently, they are known to start hallucinating (yes, a term has already been coined) when they can't find facts on which to base their response. And, even worse, they excel at making it look credible. That's a very big problem, because people are using AIs like oracles and trusting every word they say.
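To make the Markov chain analogy concrete, here is a minimal sketch in Python. It's purely my illustration, not how any real language model is built: a word-level Markov text generator that learns which words follow which, then strings together statistically plausible continuations with no notion of meaning or truth. The corpus.txt filename is just a placeholder for any long text file.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each sequence of `order` words to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, order=2, length=30):
    """Walk the chain, picking each next word by observed frequency alone."""
    out = list(random.choice(list(chain)))
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:  # dead end: this sequence was never continued in the corpus
            break
        out.append(random.choice(followers))
    return " ".join(out)

if __name__ == "__main__":
    # "corpus.txt" is a placeholder; the longer the corpus, the more convincing the output.
    with open("corpus.txt", encoding="utf-8") as f:
        chain = build_chain(f.read())
    print(generate(chain))
```

Feed it enough text and it produces sentences that often read fluently, yet it "knows" nothing about their content. Modern language models are enormously more sophisticated, but that is the gist of the criticism: fluency is not understanding.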
Take the attorney who asked an AI to prepare his brief, and then handed the resulting text to the judge without verifying it. The AI cited six cases, none of which existed, and the judge rejected the brief and sanctioned the attorney. It would have been easy to avoid: a quick check of the cases cited by the AI would have been enough. The problem was blind faith, not the use of an AI.
AIs are here to stay. But they are a new tool, and we must learn how to use them and how to make the most of them. To achieve that, we must not be afraid of using them, but, at the same time, we must know when and how they can fail.
The use of AIs to generate "art" is a different debate. They are more than capable of it, of course. In fact, the attorney's brief citing six invented cases is more fiction than anything else. But I guess that in this case the matter hinges more on what we understand by art, and how we value the output of an AI. And this post is already too long.