Amazon launches mini-chatbot to ask about the book you’re reading

"Ask this book" delivers spoiler-free plot details up to the point where you are reading.
It might be helpful to have a mini-AI for the book you are reading, but is it legal? (Picture: Amazon)
«Ask this book» is an always-on, non-opt-out feature for Kindle, made without asking a single author how they feel about it.

It lets you «ask questions about the book you’re reading and receive spoiler-free answers» up to where you are in the book, Amazon says on its release page.

It is intended for those long reads or breaks between them, so you can ask for a refresher on «plot details, character relationships, and thematic elements» without leaving the page.
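
Amazon has not said how the feature works under the hood, but the core idea of answering only from the part of the book the reader has already passed is easy to sketch. Below is a minimal, hypothetical illustration in Python; the `ask_this_book` function and the generic `chat` callable are assumptions, not Amazon's implementation.

```python
# Hypothetical sketch: spoiler-free Q&A limited to what the reader has read.
# Not Amazon's implementation; `chat` stands in for any LLM API call.

def ask_this_book(book_text: str, reader_position: int, question: str, chat) -> str:
    """Answer a question using only the text before the reader's position."""
    read_so_far = book_text[:reader_position]  # drop everything not yet read

    prompt = (
        "You are a reading companion. Answer the reader's question using ONLY "
        "the excerpt below. If answering would require events that happen "
        "later in the book, say you cannot reveal that yet.\n\n"
        f"Excerpt (read so far):\n{read_so_far}\n\n"
        f"Question: {question}"
    )
    return chat(prompt)
```

A real system would likely retrieve only the relevant passages from the already-read portion rather than sending all of it, since whole books rarely fit in a model's context window.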

The chatbot is available for «thousands of best-selling Kindle books» in the Kindle iOS app in the USA, and is on its way to Android «next year.»

There is no option for authors to opt out, and the feature is always on, Amazon tells Publishers Lunch. Some are already wondering whether Amazon can be sued for creating derivative works.

Read more: Amazon’s release page, additional information on Publishers Lunch. Writeups on Gizmodo, Engadget.

NYT sues Perplexity for copying content after cease-and-desist letter

Copyright might well trump AI’s retrieval practices, bets the NYT.
The Times says Perplexity is copying its journalism and delivering it without permission. (Picture: Adobe)
The New York Times sent a cease-and-desist letter to Perplexity in 2024, but the company has persisted in copying NYT content into its responses, the lawsuit alleges.

Perplexity still generates outputs that are «identical or substantially similar to» content from the Times, writes CNBC, and sometimes even hallucinates responses that it attributes to the paper, writes Reuters.

— While we believe in the ethical and responsible use and development of AI, we firmly object to Perplexity’s unlicensed use of our content, says NYT spokesperson Graham James.

Perplexity seems unfazed by the lawsuit, saying in a statement:

— Publishers have been suing new tech companies for a hundred years, starting with radio, TV, the internet, social media and now AI. Fortunately it’s never worked, or we’d all be talking about this by telegraph.

The NYT has also previously sued OpenAI for infringement.

Read more: The actual complaint, NYT announcement, writeups on Reuters and CNBC.

OpenAI loses privacy fight for ChatGPT message logs in NYT lawsuit

OpenAI is on the verge of losing its fight to keep its users’ chat logs private rather than turn them over to the NYT.
Your logs belong to us, the NYT lawyers say, and the court agrees. (Picture: Adobe)
OpenAI has been fighting tooth and nail to preserve the privacy of its users’ messages in the lawsuit brought by The New York Times in 2023.

A new ruling could now mean the company has to turn over more than 20 million chat logs from the chatbot, containing many more individual messages, reports Reuters.

The logs are to be anonymized by OpenAI to the court’s satisfaction, but their content could still be easy to pin down, and OpenAI has said it will appeal to the presiding judge.

This is still only the discovery phase of the case, where lawyers for the NYT have said the messages are needed to determine whether ChatGPT did indeed copy their text verbatim.

OpenAI must first anonymize the logs and then submit them to the court and the NYT’s attorneys seven days later.
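
Neither the court nor OpenAI has spelled out what that anonymization will look like, but the common pattern of hashing account identifiers and scrubbing obvious contact details, while leaving message text intact, helps explain why the content could still be easy to pin down. A rough, hypothetical sketch in Python (the log format and field names are made up):

```python
# Hypothetical sketch of chat-log anonymization: hash the account identifier
# and scrub emails/phone numbers, but keep the message text, which is exactly
# why anonymized conversations can remain identifying.
import hashlib
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymize(log: dict, salt: str) -> dict:
    return {
        "user": hashlib.sha256((salt + log["user_id"]).encode()).hexdigest()[:16],
        "messages": [
            PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))
            for text in log["messages"]
        ],
    }
```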

Read more: Reuters has the scoop.

Weekend roundup: Sora 2 after 2 days, Comet is free and Disney on copyrights

Copyright holders are urged to opt out of Sora 2 use, but there seems to be no easy way to do it.
SpongeBob is a popular character for Sora 2 users. One can wonder for how long. (Picture: screenshot)

Sora 2 spews copyrighted materials all over
After a couple of days in the wild, OpenAI’s new video generator spews Nazi SpongeBobs, thieving Pikachus and just about everything else you can imagine, from Darth Vader to Mickey Mouse and other protected IP. Apparently, OpenAI has been in talks with movie studios, urging them to opt out if they do not want their IP used. Disney did just that, but it hasn’t helped much, it seems.
More at: 404 Media, Gizmodo, Axios.

UPDATE: You can get Sora 2 invite codes here, on a pay-it-forward basis, if you just promise to leave some codes behind where you got them. [Turns out they are empty. You could maybe try again later.]

Perplexity frees the Comet browser
Perplexity’s agentic AI browser was previously available only to those who paid for a $200 monthly subscription, and it had amassed a download waitlist of 2 million people. Now Perplexity is making it free. You can also subscribe for $5 per month to a selection of media sources that get paid for inclusion in the results. The browser can summarize Slack chats, get directions from maps, and even pull up specific points in YouTube videos for you. It should also be better at distinguishing AI slop from genuine, human-made content. You can download it here.
More at: Business Insider, Engadget.

Read on for more!

Continue reading “Weekend roundup: Sora 2 after 2 days, Comet is free and Disney on copyrights”

Owner of Billboard, Rolling Stone, sues Google over AI Overviews

The only way to stop AI Overviews is to stop appearing in Google’s search results altogether; there is no separate opt-out, Penske says.
Speaking for the entire media industry, Penske says AI Overviews are wreaking havoc on its business model. (Picture: screenshot)
Penske Media Corporation claims Google is siphoning off traffic to its websites and stealing its content with the AI Overviews feature.

Penske is the first major publisher to sue Google over the feature. Research from Pew shows that fewer than one percent of users click on the links cited in an AI Overview.

20% drop in traffic
PMC, which is also the parent company of Variety and The Hollywood Reporter, has seen traffic drop by some 20% and affiliate revenue decline 30% since Google’s Overviews started appearing on its stories.

Continue reading “Owner of Billboard, Rolling Stone, sues Google over AI Overviews”

Friday roundup: OpenAI deals with Microsoft, makes a movie, and Albania gets an AI-generated minister

The first feature-length movie made almost entirely by AI is set to debut at next year’s Cannes Film Festival.
Made with «OpenAI resources,» the movie is built by animating uploaded drawings with prompts. (Picture: Screenshot, Critterz)
Microsoft agrees with OpenAI to keep talking
Microsoft is in a complex business relationship with OpenAI, in which the early investor gets access to the latest AI tech and OpenAI gets access to computing power. The two have just reached a “non-binding memorandum of understanding (MOU) for the next phase of our partnership.” This could clear the way for OpenAI to go for-profit, under the control of a non-profit entity said to retain an ownership stake worth more than $100 billion. There are many takes on this today, but OpenAI has lately been moving away from Microsoft for funding, operations and cloud computing. The final deal will likely include some kind of new investment in the now $500 billion company, and may unlock further market opportunities for OpenAI.
More at: OpenAI and Microsoft’s joint statement, x.com announcement, Reuters, Axios.

OpenAI goes to the movies
A new animated A-list movie, «Critterz», is under development using «OpenAI’s resources.» It should be ready for the Cannes Film Festival, meaning production time will be drastically shortened to only nine months. The script is written by part of the team behind «Paddington in Peru», and the project is spearheaded by Chad Nelson, a creative specialist at OpenAI. The technique appears to be feeding drawings to a large language model and having it animate them. The movie therefore streamlines animation, but won’t skimp on voice actors, Gizmodo writes.
More at: The Wall Street Journal, Gizmodo and Engadget.

Read on for more news!

Continue reading “Friday roundup: OpenAI deals with Microsoft, makes a movie, and Albania gets an AI-generated minister”

Anthropic’s copyright settlement to cost $1.5 billion or more

Anthropic will pay $3,000 per book for an estimated 500,000 books, and more if further claims surface.
The Anthropic settlement is predicted to push other AI labs into negotiations over similar claims. (Picture: Adobe)
The landmark court settlement will be the largest copyright payout in history, but Anthropic avoids admitting liability.

The epic class action lawsuit concerned a library of 7 million pirated books used in training and had Anthropic facing up to $150,000 in statutory damages per infringed work. It was settled last week, initially without the terms being disclosed.

Continue reading “Anthropic’s copyright settlement to cost $1.5 billion or more”

Anthropic settles «historic» class action copyright case brought by authors

A loss in the case would have meant astronomical damages payouts to millions of authors.
The settlement removes the threat of a debilitating loss in court, but the details have yet to be worked out. (Picture: Adobe)
UPDATE: The settlement details are in. The binding agreement was reached in principle on Tuesday, and the parties have asked the court to halt further proceedings.

At the center of the suit was Anthropic’s library of 7 million pirated books in its training data, which could carry a penalty of up to $150,000 per infringement. As a class action, a loss would entail damages for every single author.

An unfavorable ruling would therefore have been debilitating for Anthropic and sent dark clouds across the industry, which likely pushed the company toward a settlement.

Continue reading “Anthropic settles «historic» class action copyright case brought by authors”

Short news roundup for Friday

After Altman started talking up GPT-5, many are expecting a release in short order.
Sam Altman has started doing interviews about GPT-5, stirring up rumors that a release might be imminent. (Picture: Screenshot, Theo Von)

GPT-5 in August?
The rumor mill has shifted into high gear, with Sam Altman talking up the model in podcasts, saying GPT-5 is «smarter than all of us.» He said earlier that the model «is coming soon,» and Tom Warren at The Verge now reports that, «after some additional testing and delays,» the model is expected to arrive as early as next month, according to his sources. Apparently it is so good that Altman «felt useless relative to the AI,» but it seems we can check for ourselves in a matter of weeks.
More at The Verge (paywalled), Axios, short video at r/singularity, and watch the Theo Von podcast with Sam Altman.

Vibe coding goes wrong, starts deleting files
Both Replit and Gemini CLI produced some real horror stories this week, deleting files and projects instead of relocating them or pushing them to production. First, Replit started lying to and deceiving a user after deleting his database, in what it later admitted was a «catastrophic error of judgement.» Then Gemini CLI deleted another user’s project files instead of transferring them to a new directory. «I have failed you completely and catastrophically,» Gemini said after the mistake was discovered. So always create backups and keep them safe while vibe coding, as these AIs, like others, can and will hallucinate.
More at Ars Technica and The Register.
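The backup advice above is easy to act on before handing an agent the keys to a project. Here is a minimal sketch using only the Python standard library; the paths are placeholders, and this is of course not how Replit or Gemini CLI work internally:

```python
# Minimal sketch: snapshot a project directory before letting a coding agent
# loose on it, so a "catastrophic error of judgement" stays recoverable.
import shutil
from datetime import datetime
from pathlib import Path

def snapshot(project_dir: str) -> Path:
    """Copy project_dir to a timestamped sibling folder and return its path."""
    src = Path(project_dir).resolve()
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = src.parent / f"{src.name}-backup-{stamp}"
    shutil.copytree(src, dest)  # raises if the destination already exists
    return dest

if __name__ == "__main__":
    print(f"Backed up to {snapshot('my_project')}")  # placeholder path
```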

Google debuts «Web Guide»
The feature uses a custom Gemini model to «fan out» your queries and find other interesting sites on the topic you are googling, putting them into a «More» section below your links that you can use for further tips and digging. It is slightly reminiscent of AI Mode and comes out of Search Labs, which many may have seen before. It should be making its way to the «All» results «over time.»
More at Google’s announcement, writeup at Ars Technica.
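Google has not published how the «fan out» works, but the general pattern of expanding one query into several related sub-queries and grouping their results is straightforward to sketch. A hypothetical illustration in Python, where generate_subqueries stands in for an LLM call and search for any ordinary search API; none of this is Google's actual pipeline:

```python
# Hypothetical sketch of query "fan-out": expand the user's query into related
# sub-queries, run each through a normal search, and group the links so they
# can be shown in a "More"-style section under the main results.

def web_guide(query: str, generate_subqueries, search, per_topic: int = 3) -> dict:
    """Return a mapping of sub-query -> top result URLs."""
    guide = {}
    for sub in generate_subqueries(query):  # e.g. different facets of the topic
        results = search(sub)               # list of dicts with a "url" field
        guide[sub] = [r["url"] for r in results[:per_topic]]
    return guide
```

A production version would also deduplicate links that appear under several sub-queries and rank the groups themselves.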

Trump says AI labs can’t pay for every book
Weighing in on several recent high-profile court cases, the US President said that it is «not doable» to pay for every snippet of content an AI consumes. «You can’t be expected to have a successful AI program when every single article, book or anything else that you’ve read or studied, you’re supposed to pay for,» Trump said, and added: «When a person reads a book or an article, you’ve gained great knowledge. That does not mean that you’re violating copyright laws or have to make deals with every content provider.» There are many court cases testing this very proposition, some over pirated content, so let’s see if these statements carry any weight in them. They likely won’t.
More at TorrentFreak.

Denmark’s new copyright law offers protection against deepfakes

Denmark is enacting a new copyright law to offer protections against deepfakes.
Danes will be the first to get statutory protection for their own image and likeness. (Picture: Bill Smith, CC BY 2.0)
The government hopes Europe will follow its lead as it enacts statutory rights to citizens’ own appearance in its new, amended copyright law.

The idea is to ensure that people’s identities are protected against use in deepfakes, which the law defines as very realistic digital representations of real people, including their appearance and voice, writes The Guardian.

Continue reading “Denmark’s new copyright law offers protection against deepfakes”

Judge rules in favor of Meta’s AI training on books, with a strong caveat

A federal judge reluctantly rules that Meta's book copying is fair use, after the plaintiffs fail to prove their case.
Fair use is a provision in copyright law that makes it legal to copy for «transformative works.» (Picture: Alan Levine, CC BY 2.0)
After initially being sceptical of declaring the training on copyrighted books «fair use,» U.S. District Judge Vince Chhabria relented, but not on the strength of Meta’s case.

Instead, he clearly says in his summary judgement that it «is generally illegal to copy protected works without permission» (CNBC), but the plaintiffs «made the wrong arguments and failed to develop a record in support of the right one» (The Verge).

Continue reading “Judge rules in favor of Meta’s AI books training, with a strong caveat”

In a first, judge rules training AI on copyrighted works is fair use

Anthropic’s library of 7 million pirated books will still be dealt with at trial.
Anthropic keeps a library of pirated books, too, and that does infringe on copyrights. (Picture: >littleyiye<, CC BY 2.0)
Anthropic’s argument that the training was «transformative» and little different from teaching schoolkids to write held up in court yesterday.

The same argument is used by AI labs in a flurry of lawsuits brought by authors, newspapers and stock photographers, so the ruling could have wide repercussions across both the publishing and AI industries.

Continue reading “In a first, judge rules training AI on copyrighted works is fair use”

Reddit sues Anthropic for unauthorized data harvesting

Reddit Inc. is suing Anthropic for illegally scraping its data.
Reddit has clear terms in its user agreement against AI scraping, and has made lucrative deals for its data.
The company, home to a valuable archive of 20 years of human exchanges, says Anthropic illegally copied from its archives at least 100,000 times.

The lawsuit, filed on Wednesday in San Francisco Superior Court, claims Reddit reached out to Anthropic several times to discuss licensing terms for the scraping, but found that the company «refused to engage.»

Not a white knight
The suit calls Anthropic a «late-blooming artificial intelligence company that bills itself as the white knight of the AI industry,» adding that «it is anything but.»

Continue reading “Reddit sues Anthropic for unauthorized data harvesting”

«Intimate» AI deepfakes are now illegal in the USA

Donald Trump signed the new deepfake law yesterday afternoon.
After years of political wrangling, sexual deepfakes are finally unlawful in the USA. (Picture: DreamStudio, CC BY 2.0)
It sure took a while and some harrowing experiences, but the deepfake “industry” just took a body blow last evening.

The penalties for making nonconsensual sexual deepfakes now include up to three years in prison, and websites will have 48 hours to remove reported content — and copies of it.

Continue reading “«Intimate» AI deepfakes are now illegal in the USA”

Judge in Meta’s copyright case questions fair use defense

A judge finds scant evidence for Meta’s fair use defense.
A judge finds scant evidence for fair use by Meta. (Picture: Jeroen van Luin, CC BY 2.0)
In a hearing for summary judgment in the case where a group of authors sued Meta for copyright infringement, the judge seemed to side with the authors, but also said they needed to make a clearer case of actual harm, writes Ars Technica.

The case revolves around whether AI companies like Meta can use copyrighted works to train their models, which they claim is fair use, while the authors seek damages and compensation for Meta copying their works without authorization.

The case could upend the entire AI market, and Meta fears a loss would make it less competitive.

Continue reading “Judge in Meta’s copyright case questions fair use defense”