But I do look forward to a future shared with AI - I'd just rather it was Star Trek post-scarcity utopia AI rather than Skynet or Matrix AI.
Rationalia relies on voluntary donations. There is no obligation of course, but if you value this place and want to see it continue please consider making a small donation towards the forum's running costs.
Details on how to do that can be found here.
"It isn't necessary to imagine the world ending in fire or ice.
There are two other possibilities: one is paperwork, and the other is nostalgia."
Frank Zappa
"This is how humanity ends; bickering over the irrelevant."
Clinton Huxley » 21 Jun 2012 » 14:10:36 GMT
International disaster, gonna be a blaster
Gonna rearrange our lives
International disaster, send for the master
Don't wait to see the white of his eyes
International disaster, international disaster
Price of silver droppin' so do yer Christmas shopping
Before you lose the chance to score (Pembroke)
The New York Times is suing Microsoft and OpenAI, the creator of ChatGPT, claiming millions of its news articles have been misused by the tech companies to train their AI-powered chatbots.
It's the first time one of America's big traditional media companies has taken on the new technology in court. And it sets up a showdown over the increasingly contentious use of copyrighted content to fuel artificial intelligence software.
The legal complaint, which demands a jury trial in a New York district court, says the bots' creators have refused to recognise copyright protections afforded by legislation and the US Constitution. It says the bots, including those incorporated into Microsoft products like its Bing search engine, have repurposed the Times's content to compete with it.
"Times journalists go where the story is, often at great risk and cost, to inform the public about important and pressing issues," the Times's complaint argues.
"Their essential work is made possible through the efforts of a large and expensive organization that provides legal, security, and operational support.
"Defendants' unlawful use of The Times's work to create artificial intelligence products that compete with it threatens The Times's ability to provide that service."
The Times wants the court to hold Microsoft and OpenAI responsible "for the billions of dollars in statutory and actual damages that they owe". It's also requested the "destruction" of parts of the chatbots that incorporate Times content.
It's a difficult question, because the AI does what every human being does: it learns by reading. It doesn't reproduce content. No one would sue a human author, although a human couldn't be creative without having read a lot of other people's work first.
If you put your ideas into the world, they will be consumed. So what's the problem here? Is it that the consumer is able to produce another product using parts of yours, or that the consumer reaches more people in the end? I'm afraid I don't get it. At best, maybe you can charge AI companies more than the average consumer for access to your product -- good luck with that.
"With less regulation on the margins we expect the financial sector to do well under the incoming administration" —money manager
However, a lot of the content generated by the AI on topics reported in the newspaper doesn't just have the gist of the report, it's almost word for word...
Nah, that's wrong. Large Language Models will generate their own wording and often their own facts.
The only exception is Bing Copilot, which is more like an AI-powered web search. It does quote from websites, but it also references its sources with links.
If you read through the news report I posted, rather than just the excerpt, you will come to a section showing a word-for-word identity example, which the newspaper is using in its legal case...
Oh, but they are cheating. They are specifically asking for the news article. The AI doesn't generate that content in response to a general question, and it doesn't pretend the text is its own creation. It quotes an article it has seen because it was asked for exactly that.
That's a different issue, and it could be prevented by adjusting the AI's morality rules (its "alignment") so that quoting larger parts of copyrighted work verbatim is not allowed.
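Mechanically, such a rule could be enforced after generation rather than baked into the model. The sketch below is purely illustrative (no vendor is known to use exactly this): it flags output that shares any long word-for-word run with a protected source, using simple n-gram overlap.

```python
def shared_ngrams(output: str, source: str, n: int = 8) -> set:
    """Return the n-word sequences that appear verbatim in both texts."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(output) & ngrams(source)

def looks_like_verbatim_copy(output: str, source: str, n: int = 8) -> bool:
    """Flag output that reproduces any run of n or more words from the source."""
    return bool(shared_ngrams(output, source, n))

# Invented example texts, not real NYT content.
article = "The quick brown fox jumps over the lazy dog near the riverbank at dawn"
paraphrase = "A fast fox leapt across a sleeping dog by the river"
copied = "Reports say the quick brown fox jumps over the lazy dog near the river"

print(looks_like_verbatim_copy(paraphrase, article))  # False
print(looks_like_verbatim_copy(copied, article))      # True
```

The threshold `n` is the knob: a small value catches common stock phrases as false positives, a large value misses shorter lifted passages. A real system would also need to normalise punctuation and compare against many sources at once.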
A year and a half ago, an item was posted in this thread describing lawyers getting burned for using generative AI to help draft their work: the AI cited case precedent that was pure hallucination, not once but several times. You'd think other lawyers would have taken note, but y'know, they're just sooo busy.
Demonstrating yet again that uncritically trusting the output of generative AI is dangerous, attorneys involved in a product liability lawsuit have apologized to the presiding judge for submitting documents that cite non-existent legal cases.
The lawsuit began with a complaint filed in June, 2023, against Walmart and Jetson Electric Bikes over a fire allegedly caused by a hoverboard. The blaze destroyed the plaintiffs' house and caused serious burns to family members, it is said.
Last week, Wyoming District Judge Kelly Rankin issued an order to show cause that directs the plaintiffs' attorneys to explain why they should not be sanctioned for citing eight cases that do not exist in a January 22, 2025 filing.
...
As noted by Judge Rankin, eight of the nine citations in the January motion were pulled from thin air or lead to cases with different names. Pointing to some of the past instances where AI chatbots have hallucinated in legal proceedings over the past few years – Mata v. Avianca, Inc, United States v. Hayes, and United States v. Cohen – the judge's order asks the attorneys who signed the filing to explain why they should not be punished.
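The failure mode is avoidable in principle: every citation can be checked against an authoritative index before filing. A toy sketch of that check is below; the hard-coded set and the made-up "Smith v. Imaginary Corp" citation are illustrative stand-ins for what would really be a query against a legal research database.

```python
# Hypothetical verified index; in practice this would be a database lookup,
# not a hard-coded set. The three real case names are taken from the article.
VERIFIED_CASES = {
    "Mata v. Avianca, Inc.",
    "United States v. Hayes",
    "United States v. Cohen",
}

def unverified_citations(citations: list, index: set) -> list:
    """Return the cited case names that cannot be found in the verified index."""
    return [c for c in citations if c not in index]

# One real citation, one invented one that should be flagged.
filing = ["United States v. Hayes", "Smith v. Imaginary Corp"]
print(unverified_citations(filing, VERIFIED_CASES))  # ['Smith v. Imaginary Corp']
```

The point is the workflow, not the code: any citation the index can't confirm gets pulled and checked by a human before the document is signed, which is exactly the step the sanctioned attorneys skipped.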