Discussion about A.I.

It happens so frequently that a term has been coined to describe it. When an AI doesn't have enough facts to generate content and starts creating from the void, it is said to be "hallucinating".

For me, that is the real danger of AIs, not the possibility of their ending all of humanity's jobs. But it isn't the end of the world, either. This is no different from, say, an intern, or any other underpaid or underqualified worker: if pressed to generate some content, they will write it, even if they don't know the subject. Thus, just as an intern's work should be supervised, an AI's output should also be put to the test.

Anyway, hallucinations are good for generating fiction. At least until what you want is porn or torture fiction and the AI refuses to write it. Maybe you could put an AI to use generating the background for a cross story, telling it that you want a detective story about betrayal and murder? One in which someone is wrongly sentenced to death for a crime they have not committed, or something along those lines. Then you adapt the plot and replace the actual execution with a good old crossing... :angel2:
I have seen the term "hallucinatory stochastic parrot" for these programs. They don't understand anything; they just stitch stuff from their feeds together using probabilities, and churn it out whether it makes sense or not.

If the feed is wrong (e.g. Russian propaganda about Ukraine), the output is going to be flawed, to say the least, and sometimes they just make stuff up.
 
When an AI doesn't have enough facts to generate content and starts creating from the void, it is said to be "hallucinating".
They cannot create anything from the void. From a real void, they are mute. It's all just statistically predicting the next string from whatever data they have at their disposal, with some semantic vectors and constraints. There's absolutely no intelligence, no dreams, no awareness, no innate knowledge, and of course no hallucinations. Heck, I think most AI engines still have no ''idea'' what it was that they ''wrote'' in the sentence before or after the one you are looking at.

I heard one of the architects of these systems describe them succinctly as ''statistics on steroids''. Don't fall into the hype trap and infer something that isn't there.
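The "statistics on steroids" point can be made concrete with a toy next-word predictor. This is a deliberately minimal sketch (the corpus and names are invented for illustration; real LLMs use neural networks over tokens, not word-count tables), just to show what "predicting the next string from data" means:

```python
from collections import Counter, defaultdict

# Tiny made-up training "corpus"; real models train on billions of tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows each word in the data.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    if not counts:
        return None  # a real void: no data, nothing to predict
    return counts.most_common(1)[0][0]

print(predict_next("sat"))    # -> on ("sat" is always followed by "on" here)
print(predict_next("xyzzy"))  # -> None: faced with a true void, it is mute
```

Nothing in those tables knows what a cat is; the model only reproduces the statistics of its feed, which is exactly the point being made above.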
 
If the feed is wrong (e.g. Russian propaganda about Ukraine), the output is going to be flawed, to say the least, and sometimes they just make stuff up.
Exactly. It is the same as if you put an 18-year-old boy or girl, fresh out of secondary school, to write about something they don't understand. They can take information off the Internet and chain it into a report, but they will probably fail to understand which sources are reliable.

They cannot create anything from the void. From a real void, they are mute.
You are right, in a sense. When I said "from the void", I meant that the AI couldn't find any related information on the Internet. In that case, the AI would start using unrelated facts. Let's say, for example, we ask an AI to write about the sexual preferences of Shakespeare, of whom we don't know much apart from his works. Surely the AI would start chaining together facts about the sexual customs of late 16th-century Britain, maybe mixing them with some names from the era, or with people who lived in the same period. It could say, perhaps, that he was a lover of Elizabeth I.
 
It depends on the model. ChatGPT 3.5 is usually fairly good at acknowledging when something is outside its training, but with the right prompts you can make any bot spew bullshit.

People are stupid and malicious, and the tech itself is emerging. This is what happens when something quasi-revolutionary gets popular.
 
They cannot create anything from the void. From a real void, they are mute. It's all just statistically predicting the next string from whatever data they have at their disposal, with some semantic vectors and constraints. There's absolutely no intelligence, no dreams, no awareness, no innate knowledge, and of course no hallucinations. Heck, I think most AI engines still have no ''idea'' what it was that they ''wrote'' in the sentence before or after the one you are looking at.

I heard one of the architects of these systems describe them succinctly as ''statistics on steroids''. Don't fall into the hype trap and infer something that isn't there.
But then how did the incident I mentioned above with the lawyer happen? The cases the bot cited don't exist, so it didn't find them in a database. That it would invent them from whole cloth really is quite concerning. And I'm not excusing the attorney involved in the slightest; he is responsible for anything he submits to the court, and the judge will sanction him, not the bot. Though it would be interesting to think about how courts might exercise authority over bots: "ChatGPT, I find you in contempt and sentence you to 30 days in jail!"...
 
But then how did the incident I mentioned above with the lawyer happen? The cases the bot cited don't exist, so it didn't find them in a database. That it would invent them from whole cloth really is quite concerning. And I'm not excusing the attorney involved in the slightest; he is responsible for anything he submits to the court, and the judge will sanction him, not the bot. Though it would be interesting to think about how courts might exercise authority over bots: "ChatGPT, I find you in contempt and sentence you to 30 days in jail!"...

AI models don't have to "invent [legal cases] from whole cloth." As TomiRex noted, that's not how they work anyway.

Rather, an AI has entire genres of fictional legal dramas from which to learn, and they can learn real cases and change the details. ChatGPT will roleplay with or without regard for reality, so I really have to question how this attorney phrased his prompts.
 
These programs can be really dangerous: I just read an article about a lawyer who used an AI chatbot to write a brief for a lawsuit he was working on. The bot wrote it, citing a number of precedent cases that supported his position. Unfortunately, none of those cases existed; the program had literally made them up. When the judge found out, he was, shall we say, a bit peeved, and is considering what sanctions to apply to this attorney.
Outside of it being an interesting experiment, why do we want an AI to write stories or paint pictures? We're too lazy?
 
Outside of it being an interesting experiment, why do we want an AI to write stories or paint pictures? We're too lazy?
AIs won't take me away from writing. I write for the pleasure of creating something, even if most of what I write isn't read by anybody but me.

As a reader, I value the writer's effort, and enjoy it when I find a connection with him/her, neither of which is possible with a machine-generated story. Thus, I guess I wouldn't enjoy reading texts generated by AIs. But I haven't tried, so I can't say for sure.
 
That is the case I was referring to above
These programs can be really dangerous: I just read an article about a lawyer who used an AI chatbot to write a brief for a lawsuit he was working on. The bot wrote it, citing a number of precedent cases that supported his position. Unfortunately, none of those cases existed; the program had literally made them up. When the judge found out, he was, shall we say, a bit peeved, and is considering what sanctions to apply to this attorney.
One of the uses being touted for AI is in medical diagnosis. The bots can read the entirety of the medical literature, including obscure case reports, papers in foreign languages, etc., which no human can do. They will find some match of a puzzling set of symptoms to something out there and solve the puzzle. But if it simply makes shit up, I'll stay with a human doctor going on their experience, thank you very much.

Some of us love reading stories, not writing them; some may not have free time for writing. The AI will generate 1k-2k words in just one minute, so with our prompts it could write exactly what we are in the mood to read, every time.
You could, of course, commission a story from a human writer (hint, hint ;)).

While I understand the attraction of "I want this, this and this and it gives it to me", the truth is we often don't know what really will appeal to us until we happen upon it. Some of the things I have read that moved me the most were books I happened upon by writers I didn't know on topics that I wouldn't have put on my top 100 list of interests until I started reading. If AI removes that element of serendipity, it will greatly diminish the experience.
 
That is the case I was referring to above

One of the uses being touted for AI is in medical diagnosis. The bots can read the entirety of the medical literature, including obscure case reports, papers in foreign languages, etc., which no human can do. They will find some match of a puzzling set of symptoms to something out there and solve the puzzle. But if it simply makes shit up, I'll stay with a human doctor going on their experience, thank you very much.


You could, of course, commission a story from a human writer (hint, hint ;)).

While I understand the attraction of "I want this, this and this and it gives it to me", the truth is we often don't know what really will appeal to us until we happen upon it. Some of the things I have read that moved me the most were books I happened upon by writers I didn't know on topics that I wouldn't have put on my top 100 list of interests until I started reading. If AI removes that element of serendipity, it will greatly diminish the experience.
When you write code, you don't just "trust it"; you think of some examples and see if the code handles them correctly. Getting good tests is the hardest part of programming. In the attorney's case, look up three of the citations. If you can't find them, well...
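The habit described above is just spot-checking output against inputs whose answers you already know. A minimal sketch (the function and values are made up for illustration, not taken from any real test suite):

```python
def sum_of_squares(numbers):
    """Example function under test: returns the sum of n*n over the input."""
    return sum(n * n for n in numbers)

# Don't just trust it: probe it with cases whose answers you already know.
assert sum_of_squares([]) == 0          # empty input
assert sum_of_squares([3]) == 9         # single element
assert sum_of_squares([1, 2, 3]) == 14  # 1 + 4 + 9
print("all spot checks passed")
```

Checking three citations from a brief is the same move: a handful of known-answer probes catches most fabrications cheaply.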
 
You could, of course, commission a story from a human writer (hint, hint ;)).
Why can't I do both?

A good story is a good story; I don't care if it is written by an AI or by a human hand. I never said I would stop reading stories written by humans. It isn't one or the other.

Commissions take a lot of effort and time, and they aren't cheap.
 
While I understand the attraction of "I want this, this and this and it gives it to me", the truth is we often don't know what really will appeal to us until we happen upon it. Some of the things I have read that moved me the most were books I happened upon by writers I didn't know on topics that I wouldn't have put on my top 100 list of interests until I started reading. If AI removes that element of serendipity, it will greatly diminish the experience.
And this is why human writers will remain valuable... but the AI can also surprise you sometimes. Especially if you redo a prompt several times.

(Sometimes it might even give you an idea for a commission, or a story of your own, when the AI comes up with something but doesn't follow through on it well.)
 
People can hate on AI all they want, but it is not going away. AI tech is here to stay. Fifty years ago, some people didn't like computers. Thirty years ago, some people didn't like the internet. People have been complaining about social media for the past several years, and now it is AI. All of the aforementioned are here; they are not going anywhere. AI is the same. It is a new tool to do things with. It can inspire, it can create, it is a lot of fun. While it will be a while before I post any more of that content here, I will be posting it elsewhere. I and the AI create some pretty cool shit, but you won't see any here!

MH
 
Only a few months ago everyone was getting their mind blown by the level of advancement that had happened. And now everyone is parroting the same tired lines, like inferior LLMs themselves. “Oh it’s not capable of reason. It only writes the most likely next word. It can’t…” and on and on. You are just so desperate to seem cool that you deny that this is even a major technological milestone. I know that many of you are simply used to the capabilities now, but it’s worth remembering all of the things we straight up didn’t have a year ago, or five years ago.

I know that these can’t be your real opinions about AI. You are scared. You have no idea where it is going to go. Don’t act like you don’t feel that way because right now, nobody knows.

I demand some fucking honesty from you guys. All I see is unearned naïve hostility, cheap cynics, and regurgitated propaganda. When the next breakthrough happens, six months after that, the great majority of Computer Science experts we have here and elsewhere will be sure to chime in that it is nothing more than an “applied linear regression model with iterative simulated reinforcement” or whatever technobabble du jour your seven minute YouTube explainers tell you.

We tricked sand into doing math and put tens of billions of transistors into tiny wafers for the price of a one-bedroom apartment in a small town. We took video cards, which were once cheap expansion cards originally made to show video games at slightly higher quality, and turned them into workhorse machines that can turn words into images.

None of you. Not a single fucking one of you could have figured out a tenth of the steps needed to get from vacuum tubes to our first-wave generative AI. Acting anything but impressed is just disrespectful to the mountains of genius it took to make this happen.

General computing is the single most significant development of the 20th century. Yes, the century with the atom bombs.

Malign it all you want; your takes will age like milk.
 
By day I'm a software developer and computer historian. I give lectures on computer history, explaining to people the marvelous story of technology, electronics and computers, from the first triode vacuum tube, invented by Lee De Forest in 1906, and the first digital computers, to the current wave of AIs, and everything in between. I'm amazed that the field of artificial intelligence, which since the '50s has been in a permanent state of "the next five years will be revolutionary", has finally produced results -- and in what a way!

At the same time, it's sad that every time a new technology arrives, people react negatively, perhaps out of fear. For example, in the 1560s, the first Russian printing shop opened in Moscow. Months later, it was burned down by copyists, who saw it as a menace to their work. The same happened with the invention of photography (it would end art), the phonograph record (it would end music), the radio (it would end music, again), the television (it was going to be the end of cinema), the computer (everyone would lose their jobs), the home Internet (it would end music and literature, one more time). And so on. Not a single one of those prophecies has been fulfilled, of course. But that doesn't stop us from falling into the same trap again and again. It's part of the human condition, I'm afraid.

The empty criticism of AI doesn't surprise me. In fact, what would have surprised me would have been a lack of criticism.

That said, AIs, at least in their current form, are limited. For starters, they have been trained on texts written by humans, so they are not immune to human bias. And they aren't really "intelligent", at least not in the sense in which most people use the word. It's technically true that they are just very elaborate Markov chains, but what that means is that they just excel at creating coherent sentences and paragraphs, without really understanding them. And thus, they are unable to tell fact from fiction, or to assign a "level of trustworthiness" to each piece of information. Because they don't even work on pieces of information. Consequently, they are known to start hallucinating (yes, a term has already been coined) when they can't find facts on which to base their response. And, even worse, they excel at making it look credible. That's a very big problem, because people are using the AIs like oracles, and trusting every word they say.

Such as the attorney who asked an AI to prepare his defense, and then gave the resulting text to the judge without verifying it. The AI cited six cases, none of which existed, and the judge turned it down and indicted the attorney for falsehood. It would have been easy to avoid: a quick check of the cases cited by the AI would have been enough. The problem was blind faith, not the use of an AI.

AIs are here to stay. But they are a new tool, and we must learn how to use it, how to make the most of it. To achieve that, we must not be afraid to use them, but, at the same time, we must know when and how they can fail.

The use of AIs to generate "art" is a different debate. They are more than capable of it, of course. In fact, the attorney's report citing six invented cases is more fiction than anything else. But I guess that in this case the matter lies more in what we understand by art, and how we value the output of an AI. And this post is already too long :angel2: .
 
Only a few months ago everyone was getting their mind blown by the level of advancement that had happened. And now everyone is parroting the same tired lines, like inferior LLMs themselves. “Oh it’s not capable of reason. It only writes the most likely next word. It can’t…” and on and on. You are just so desperate to seem cool that you deny that this is even a major technological milestone. I know that many of you are simply used to the capabilities now, but it’s worth remembering all of the things we straight up didn’t have a year ago, or five years ago.

I know that these can’t be your real opinions about AI. You are scared. You have no idea where it is going to go. Don’t act like you don’t feel that way because right now, nobody knows.

I demand some fucking honesty from you guys. All I see is unearned naïve hostility, cheap cynics, and regurgitated propaganda. When the next breakthrough happens, six months after that, the great majority of Computer Science experts we have here and elsewhere will be sure to chime in that it is nothing more than an “applied linear regression model with iterative simulated reinforcement” or whatever technobabble du jour your seven minute YouTube explainers tell you.

We tricked sand into doing math and put tens of billions of transistors into tiny wafers for the price of a one-bedroom apartment in a small town. We took video cards, which were once cheap expansion cards originally made to show video games at slightly higher quality, and turned them into workhorse machines that can turn words into images.

None of you. Not a single fucking one of you could have figured out a tenth of the steps needed to get from vacuum tubes to our first-wave generative AI. Acting anything but impressed is just disrespectful to the mountains of genius it took to make this happen.

General computing is the single most significant development of the 20th century. Yes, the century with the atom bombs.

Malign it all you want; your takes will age like milk.

You aren't replying to anyone in particular, so I have no idea who is scared, or what kind of honesty you're demanding from whom.
 
You are scared. You have no idea where it is going to go. Don’t act like you don’t feel that way because right now, nobody knows.
I am scared shitless! Because so many people believe in A.I., which in my opinion is the worst kind of religion. But also because too many people and institutions are willing to give computers decisive powers: self-driving cars, weapon systems, and so on. While computers do not and never will understand shit! They are not alive, do not have any grasp of reality, and cannot understand the consequences of their decisions. In fact, it is a complete mistake to even speak of 'them', as if they were persons. A machine is just a thing, and a very dangerous thing as well, like a knife, a bullet, a missile, but autonomous.

We humans still live in a mythical state. Our history has been too short to overcome our naivety. But our toys have become so dangerous, I fear we simply cannot survive this. At least not as free, critical, self-thinking individuals.

Some progress is good, some progress is dangerous, but all progress is misunderstood. Nobody could have imagined traffic jams when the first car was invented. Or how many deaths would be caused by traffic accidents. Imagine what we can't see when it comes to computer technology. When the Internet was invented, we were all so very hopeful and enthusiastic. Now it is mostly a marketing tool. We welcomed social media and had no idea how it would split the world, create bubbles, and bring populism to immense heights.

Yes, I am scared and with reason.
 
Fear is the most basal of our emotions, the one inherited from our reptile brain. It freezes the deer when it sees a car's lights at night, rooting it in place and condemning it to death. Our human brain adds the capability of rationalizing on top of that, which allows us to control fear and keep moving in an emergency. If the deer had that, it would be able to overcome its paralysis and flee from the car.

I wish we could, as a species, overcome our fears and face AI as it is, with its lights and shadows. AI is here to stay. It is our duty to decide what we do about it. Fear and denial will not solve anything. In fact, they probably will make things worse.

But who am I to tell humanity what has to be done...
 