Character.ai and Google settle harmful behavior court cases

With other AI companies watching closely, Google and Character.ai quietly bow out of contentious cases. (Picture: Adobe)
The companies have moved in four states to settle cases where the chatbot was accused of encouraging harmful behavior — sometimes resulting in death.

The court documents mention no actual monetary payouts to the victims’ families, as these appear still to be under negotiation.

These are not the only cases of this kind, and may signal a shift in strategy for the entire AI business, as Meta and OpenAI are also facing similar lawsuits.

Character.ai was accused by parents of encouraging their children to cut their arms, of suggesting they murder their parents, of writing sexually explicit messages, and of failing to discourage suicide, Axios writes.

The company has since banned under-18s from using its service.

Read more: Writeups on Axios, TechCrunch.

New Chinese AI rules should spread «core socialist values,» identify as AI

China's new rules require AIs to identify as such, and to spread "core socialist values."
It is almost like China has learned from AI adoption in the west. (Picture: generated)
Newly published AI rules in China address AI psychosis, and forbid the spreading of rumors or endangering of national security.

They also require that any AI engaging with people announce at the start that the user is talking to an AI, with fresh warnings every two hours, writes Gizmodo.

Also forbidden are «illegal religious activities,» obscenity, violence or crime, and the list goes on to cover libel and insults, material that damages relationships — or encouraging self harm and suicide.

The rules don’t stop at warnings for overlong chatbot sessions: providers must also assess the user’s emotional state and take «necessary measures to intervene,» writes Reuters.

Read more: Writeups on Reuters, Gizmodo.

NYT sues Perplexity for copying content, after cease-and-desist letter

Copyright might well trump AI’s retrieval practices, bets the NYT.
The Times says Perplexity is copying their journalism and delivering it without permission. (Picture: Adobe)
The New York Times sent a cease-and-desist letter to Perplexity in 2024, but the company has persisted in copying NYT content into its responses, the lawsuit alleges.

Perplexity still generates outputs that are «identical or substantially similar to» content from the Times, writes CNBC, and sometimes even hallucinates responses that get attributed to the paper, writes Reuters.

— While we believe in the ethical and responsible use and development of AI, we firmly object to Perplexity’s unlicensed use of our content, says NYT spokesperson Graham James.

Perplexity seems unfazed by the lawsuit, saying in a statement that:

— Publishers have been suing new tech companies for a hundred years, starting with radio, TV, the internet, social media and now AI. Fortunately it’s never worked, or we’d all be talking about this by telegraph.

The NYT has previously sued OpenAI for infringement as well.

Read more: The actual complaint, NYT announcement, writeups on Reuters and CNBC.

OpenAI loses privacy fight for ChatGPT message logs in NYT lawsuit

OpenAI is on the verge of losing its fight to keep its users' chat logs private, instead of turning them over to the NYT.
Your logs belong to us, the NYT lawyers say, and the court agrees. (Picture: Adobe)
OpenAI has been fighting tooth and nail to preserve the privacy of their users’ messages in the lawsuit brought by The New York Times in 2023.

A new ruling could now mean OpenAI has to turn over more than 20 million chat logs, and many more messages, from the chatbot, reports Reuters.

The logs themselves are to be anonymized by OpenAI in a way that satisfies the court, but their content could be easy to pin down, and OpenAI has promised to appeal to the presiding judge.

The case is still in the discovery phase, where lawyers for the NYT have said the messages are necessary to determine whether ChatGPT did indeed copy verbatim text from the paper.

OpenAI must now first anonymize the logs and submit them to the court, then to NYT’s attorneys seven days later.

Read more: Reuters has the scoop.

U.S. patent office: AI is like any other tool, and can’t hold patents

AI is just another tool, says the USPTO, and it cannot legally hold patents.
The patent for AI discoveries should go to the user, the USPTO says in a filing. (Picture: Adobe)
The USPTO is out with new guidelines on AI, reported by Reuters, and says quite frankly that AI use is like any other tool, like «computer software, research databases» that «assists in the inventive process.»

They go on to say that AI may «provide services and generate ideas, but they remain tools used by the human inventor who conceived the claimed invention.»

Therefore, AI itself can’t be considered an inventor under current U.S. laws, the document says.

In a departure from Biden administration policies, where AI could be considered a co-inventor, the patent office now says «there is no separate or modified standard for AI-assisted inventions.»

That should mean that using AI to conceive of an invention leaves the patent with the user, as with any other tool. This has yet to be tested in U.S. courts, Reuters reports.

Read more: The patent notice, writeups by Reuters, Engadget.

OpenAI can’t use the word «Cameo» inside or outside of Sora

OpenAI is now barred from using the word "Cameo," the name of a major feature in the Sora 2 generator.
Sora 2 is barred from making use of the term until December 22, when a new hearing will be held. (Picture: generated)
Sora’s launch wasn’t just about the ability to make realistic short-form videos, but heavily featured the «Cameo» ability.

This lets you create custom characters of friends or yourself and re-use your «Cameo» in different settings.

Not so fast, said the makers of the real «Cameo,» which sells custom-made celebrity videos or greetings. This is their whole business model, and they promptly sued to get their name back.

Now, U.S. District Judge Eumi K. Lee has granted a temporary restraining order on the use of the word by OpenAI — inside the app and elsewhere — until a hearing can be held on whether or not the ban should be made permanent on December 22.

Read more: scoop by CNBC, writeups by Engadget and Gizmodo.

OpenAI shuts down pipeline of professional advice on ChatGPT [updated]

ChatGPT used to be a democratizing tool for expert professional advice, but now it's all over. OpenAI lawyered up.
You can no longer use ChatGPT as your personal doctor, as it defies the EU AI Act and FDA guidance, according to OpenAI. (Picture: Adobe)
UPDATE: OpenAI says there are no changes, simply a consolidation of several usage policies that might have led to confusion.

OpenAI updated ChatGPT’s usage policies on October 29, banning a vast swath of uses where it was arguably the most useful — such as interpreting medical imagery, helping with medical diagnoses, and offering legal or financial advice.

The idea is to stop ChatGPT (and any other OpenAI model) from giving advice that could be interpreted as professional, fiduciary, or legally binding guidance, as required by the EU AI Act and American FDA guidance.

Continue reading “OpenAI shuts down pipeline of professional advice on ChatGPT [updated]”

OpenAI completes transition to a public benefit corporation

OpenAI will still be controlled by a non-profit, but is now easier to invest in and might go public sometime in the future.
The AI lab is now open to investments, but will have a purpose baked into its corporate structure. (Picture: Adobe)
The AI lab is now a less regulated $500 billion public benefit corporation, controlled by a non-profit arm with about $130 billion in equity.

The new governance makes it possible for them to reach for a market debut at some time in the future, and unlocks the investment of some $30 billion from SoftBank, which was contingent on the regulatory change.

Continue reading “OpenAI completes transition to a public benefit corporation”

Sam Altman on teen use: «Some of our principles are in conflict»

OpenAI will start automatic age checks on its users, and direct teens to a clean, "age-appropriate" version.
Happy and clean ChatGPT is coming for teens, and it will call the cops if you cross the line. (Picture: generated).
Trying to balance freedom with safety, OpenAI is going all in on an age-appropriate version of ChatGPT.

Teen use of chatbots and their potential harm is rapidly becoming a hot-button political issue, complete with a Congressional hearing and an FTC probe.

OpenAI is therefore reiterating their new policies on teen use and parental controls, and says they will be rolling out automatic age verification for under-18 users that should default to the teen version when in doubt.

Continue reading “Sam Altman on teen use: «Some of our principles are in conflict»”

Owner of Billboard, Rolling Stone, sues Google over AI Overviews

To stop AI Overviews, you also need to stop appearing in Google's search results, and there is no way of opting out, Penske says.
Speaking for the entire media industry, Penske says AI Overviews are wreaking havoc on their business model. (Picture: screenshot)
Penske Media Corporation claims Google is siphoning off traffic to their websites and stealing their content with the overviews feature.

This is the first major publisher to sue Google over the feature, as research from Pew shows that fewer than one percent of users click on links when an AI Overview appears in the results.

20% drop in traffic
PMC, which is also the parent company of Variety and The Hollywood Reporter, has seen traffic drop by some 20% and affiliate revenue decline 30% since Google’s overviews started on their stories.

Continue reading “Owner of Billboard, Rolling Stone, sues Google over AI Overviews”

Friday roundup: OpenAI deals with Microsoft, makes a movie, and Albania gets an AI-generated minister

The first feature-length movie made almost entirely by AI is set to debut at next year's Cannes Festival.
Made with «OpenAI resources,» this movie is built from animated uploaded drawings and prompts. (Picture: Screenshot, Critterz)
Microsoft agrees with OpenAI to keep talking
Microsoft is in a complex business relationship with OpenAI, where the early investor gets access to the latest AI tech and OpenAI gets access to computing power. They have just reached a “non-binding memorandum of understanding (MOU) for the next phase of our partnership.” This could allow OpenAI to go for-profit, under the control of a non-profit entity said to retain an ownership stake of more than $100 billion. Many takes on this today, but OpenAI has been moving away from Microsoft for funding, operations and cloud computing lately. The final deal will likely include some kind of a new investment in the now $500 billion company, and may unlock further market opportunities for OpenAI.
More at: OpenAI and Microsoft’s joint statement, x.com announcement, Reuters, Axios.

OpenAI goes to the movies
A new A-list animated movie, «Critterz,» is under development using «OpenAI’s resources.» It should be ready for the Cannes Film Festival, meaning production time will be drastically compressed to only nine months. The script is written by members of the team behind «Paddington in Peru», and the project is spearheaded by Chad Nelson, a creative specialist at OpenAI. The technique looks to be to feed drawings to a large language model and have it animate them. The movie therefore streamlines animation, but won’t skimp on voice actors, Gizmodo writes.
More at: The Wall Street Journal, Gizmodo and Engadget.

Read on for more news!

Continue reading “Friday roundup: OpenAI deals with Microsoft, makes a movie, and Albania gets an AI-generated minister”

Anthropic’s copyright settlement to cost $1.5 billion or more

Anthropic will pay $3,000 per book for an estimated 500,000 books, and more if further claims surface.
The Anthropic settlement is predicted to push other AI labs into negotiations over similar claims. (Picture: Adobe)
The landmark court settlement will be the largest copyright payout in history, but Anthropic avoids admitting guilt.

The epic class action lawsuit concerned a library of 7 million pirated books used in training, and had Anthropic facing up to $150,000 in penalties per instance of copyright theft, but it was settled last week without the terms being disclosed.

Continue reading “Anthropic’s copyright settlement to cost $1.5 billion or more”

Anthropic settles «historic» class action copyright case brought by authors

A loss in the case would have meant astronomical payouts in damages to millions of authors.
The settlement removes the threat of a debilitating loss in court, but the details have yet to be worked out. (Picture: Adobe)
UPDATE: The settlement details are in. The binding agreement was reached in principle on Tuesday, and the parties have asked the court to halt further proceedings.

The center of the suit was Anthropic’s library of 7 million pirated books in its training data, which could carry a penalty of $150,000 per infringement — and as a class action, a loss would entail damages for every single author.

An unfavorable ruling would therefore be debilitating to Anthropic, and send dark clouds across the industry, likely forcing them toward a settlement.

Continue reading “Anthropic settles «historic» class action copyright case brought by authors”

The news in short for Friday

Human costs pale in comparison to infrastructure, Zuckerberg says.
Zuckerberg sits down with The Information to explain his Superintelligence spending. (Picture: Screenshot)

Executive order coming on «Woke AI»
The US President is planning a new Executive Order regarding balance in AI models. They are going to have to incorporate more right-wing ideology in order to remain in contention for government contracts.
The order would «dictate that AI companies getting federal contracts be politically neutral and unbiased in their AI models.» No news yet on who will be the arbiter of what is «neutral,» but we can guess, right?
More on The Wall Street Journal, discussion on r/singularity

Anthropic copyright case moves to class action
While Anthropic’s use of purchased books in training was ruled «fair use» by U.S. District Judge William Alsup in late June, their archive of 7 million pirated books was not.
In this phase of the trial, Alsup has okayed it proceeding as a class action suit, on behalf of all pirated authors.
At a maximum penalty of $150,000 for each infringement, the total could be a completely debilitating bill if Anthropic is found guilty, and greatly impact the AI industry.
More at Reuters.

Perplexity gets huge India boost
The AI search engine has sealed a deal to give 360 million customers of the Airtel telco a year of Perplexity Pro for free. Airtel is the second largest telecoms operator in India. The trial lasts a year, comes with no strings attached, and offers access to ChatGPT models along with Claude Sonnet and Opus 4. It would normally cost $200 per year.
Previously, Google has offered Gemini for free for all students in India, as everyone is trying to capture the enormous market.
More at India Dispatch, TechCrunch, and a press release by Airtel.

Zuckerberg explains «Superintelligence» hires
The Meta CEO just announced a 5GW data center with even more to be built, some the size of Manhattan, in a push worth «hundreds of billions.» In a recent interview with The Information, he explained the reasoning behind his expensive AI hires by saying human costs pale in comparison to infrastructure investments. And, he says, his new hires «want the fewest number of people reporting to them — and the most GPUs.»
See the interview here (25 minutes), and Business Insider on talent motivations.

Judge rules in favor of Meta’s AI books training, with a strong caveat

A federal judge reluctantly rules that Meta's book copying is fair use, after plaintiffs fail to prove their case.
Fair use is a provision in copyright that makes it legal to copy for «transformative works.» (Picture: Alan Levine, CC BY 2.0)
After initially being sceptical of declaring the training on copyrighted books «fair use,» U.S. District Judge Vince Chhabria relented — but not on the strength of Meta’s case.

Instead, he clearly says in his summary judgement that it «is generally illegal to copy protected works without permission,» (CNBC) but the plaintiffs «made the wrong arguments and failed to develop a record in support of the right one.» (The Verge.)

Continue reading “Judge rules in favor of Meta’s AI books training, with a strong caveat”