to prove that it's a tool you can use to express your fantasies in the way you want.
I've occasionally been looking into the ai-art channel on the Discord, so yeah, I see how you've been creating these from the ground up.
There is very definitely a creative process going on there.
And obviously, besides diving deep into the technology, gathering experience, etc., it also requires a discerning eye and a good visual sense.
I think a lot of people don't realise there is any application of these generative technologies other than "go to a website and type a prompt".
Anyway, regarding some of the ChatGPT comments:
That implies that both programs recognise that the question is about letters of the alphabet, and they have at least some grasp of the point that "ms" by itself may be unclear, ...
But having made quite a good fist of dealing with the tricky bit, how do they both fall flat on their virtual faces with the simple counting?
I think what often happens is that we look at ChatGPT and its responses and implicitly assume that this is in fact supposed to be "Artificial General Intelligence".
Only so long as we assume that does it seem surprising that the "Intelligence" is sometimes so seemingly "stupid".
Now, some years back there was already a bit of a flurry of reporting around an older language model (I think it was GPT-2), but at the time hardly anybody seemed to fall into that trap.
It was just seen as a text generator.
The reason might be that the newer models are so good at creating text that we emotionally tend to infer there ought to be an actual mind behind it, and get disappointed when it turns out it's not a general-purpose thought engine!
i.e. ... an artificial mind that parses a natural language question in English into sequences of logic and meaning, extracts any possible task it's asked to do, mentally executes the task (even if it is as simple as counting letters), reaches a conclusion/result, and then formulates this conclusion again into natural, human-readable language.
This is encouraged by the loose usage of the term "artificial intelligence".
Hence we expect it to "recognise" what a question is, have a "grasp" of things and obviously be able to execute tasks like counting.
But these things just don't process language like we do.
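A concrete way to see this: the model never receives individual letters at all; the text is chopped into sub-word tokens and turned into numbers first. Here's a rough sketch of the difference, using OpenAI's open-source tiktoken tokenizer library (the example word is just one I picked for illustration, not from the question above):

```python
# Rough sketch: the "human view" of a word vs. what a model actually receives.
# Needs the tiktoken library (pip install tiktoken).
import tiktoken

word = "mississippi"              # arbitrary example word
print(word.count("s"))            # -> 4; counting letters is trivial when you can see them

enc = tiktoken.get_encoding("cl100k_base")    # tokenizer used by the newer OpenAI models
ids = enc.encode(word)
print(ids)                                    # the integer token ID(s) the model actually gets
print([enc.decode([i]) for i in ids])         # the corresponding sub-word chunks, not letters
# The model only ever sees those integers, so "how many s's are in this word?"
# has to be answered from learned statistical patterns, not by actually counting.
```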
From an earlier version of ChatGPT:
You can see the token system glitching there.
(For anyone more deeply interested, start here: https://www.lesswrong.com/posts/aPe...arp-plus-prompt-generation#The_plot_thickens_ )
This isn't the way anyone, even a foreign speaker with limited vocabulary, would try to parse an apparently made-up term like SolidGoldMagikarp. You wouldn't mistake it for being identical with "distribute". You'd likely guess it's a carp that's magic and made of solid gold, and perhaps guess that it's an item from a video game or so.
The point of this example isn't to make fun of how bad ChatGPT is. The point is to see how utterly alien it is in its way of arriving at 'language generation'.
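By the way, if I'm reading the LessWrong post right, the weirdness comes from strings like " SolidGoldMagikarp" having ended up as single, almost-never-trained tokens in the old GPT-2/GPT-3 vocabulary. You can poke at that yourself with the same tokenizer library; the encoding names are real, the rest is just my sketch:

```python
# Sketch: compare how an older and a newer OpenAI tokenizer carve up the glitch string.
import tiktoken

old = tiktoken.get_encoding("r50k_base")      # GPT-2 / GPT-3 era vocabulary
new = tiktoken.get_encoding("cl100k_base")    # later-model vocabulary

s = " SolidGoldMagikarp"    # the leading space matters for these BPE tokenizers
for name, enc in [("r50k_base", old), ("cl100k_base", new)]:
    ids = enc.encode(s)
    print(name, ids, [enc.decode([i]) for i in ids])
# If I remember the post correctly, the old vocabulary returns a single ID here
# (the "glitch token"), while the newer one splits it into ordinary pieces.
```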
This has nothing to do with semantics or any set of theories in linguistics about how humans learn language (and therefore it's also ridiculous that some people are using it in an attempt to 'disprove' certain theories or make claims about linguistics).
And considered like that, I find it quite amazing that it creates so much text that seems understandable to humans.
This also says something about us.
And how we project intent, meaning, and the assumption of consciousness into things...
Also we should equally consider that not all language generated by humans comes out of deep logical thought!
But then how did the incident I mentioned above with the lawyer happen? The cases the bot cited don't exist, so it didn't find them in a database. That it would invent them from whole cloth really is quite concerning.
No, it is actually what should be expected, and what is intended by the idea of a language model, I'd say!
The accomplishment is that the model can create something which "sounds and is structured like something that would fulfil the expectation of what a (court filing | scientific paper | crime fiction | sermon | address to the union ...) should look like".
As the NYT article mentions, the lawyer using ChatGPT
"was unaware of the possibility that its content could be false. He had, he told Judge Castel, even asked the program to verify that the cases were real. It had said yes."
Being true & right isn't its job. Of course it would output 'yes!'
Imagine someone is writing a story, movie, or storyboard for an adventure game, etc., where they want to cite from fictional court filings from a case that plays a role in the story. Imagine maybe also that they're writing the story in a non-English language, but because of the situation in the story the court cases are supposed to take place in the US, and the writer is familiar with neither the US court system nor legal English. ChatGPT will be able to generate convincing-enough-sounding US English legalese to be used as snippets, quotes, screenshots, etc. to give a realistic-seeming background.
'Passable-sounding generated legalese' will usually contain references to prior court decisions, just like 'passable-sounding generated scientific writing' will contain citations of studies in academic journals.
So, if ChatGPT's job is to generate 'text that sounds right', that text needs to contain these decisions and citations, and so it's ChatGPT's job to put them in there. Whether they actually exist is not really the point. The inverse conclusion isn't right either, though: it doesn't always make everything up.
Now, for anyone who expects that the 'OpenAI' company has delivered to the public, for free, an artificial mind that out of the box can replace any profession which has writing as its output ... it's a bit of a letdown.
But if you understand it as them showing off their tech (and meanwhile collecting tons of usage data and feedback valuable to them), it's another story.
the bots can read the entirety of the medical literature, including obscure case reports, papers in foreign languages, etc., which no human can do. They will find some match of a puzzling set of symptoms to something out there and solve the puzzle. But if it simply makes shit up, I'll stay with a human doctor going on their experience
Umm, I'm pretty sure the idea isn't to use ChatGPT for this.
ChatGPT is not the only possible application of machine learning, and in fact a lot of this has been going on for quite a while.
And of course the idea is to highlight possible connections and then have them investigated.