WhatsApp tentatively allows AI chatbots competing with Meta in Europe

WhatsApp sets steep prices for rival AI access. (Picture: generated)
As the European Commission considers «interim measures» against the messaging app for banning chatbots not made by Meta, WhatsApp is slightly opening the door to rivals in Europe.

The platform has over 3 billion users and is considered a «gatekeeper» under EU law, subject to demands for equal access.

The compromise Meta is rolling out is that rival chatbots will be allowed on the platform, but have to pay their way.

The fees range from €0.0490 to €0.1323 for «non-template messages.» That could ratchet up quickly, considering that chatbot sessions cover multiple messages across millions of users, writes TechCrunch.
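To get a feel for how quickly those fees could add up, here is a back-of-the-envelope sketch. The fee range is from the article; the traffic figures (users, messages per user) are purely hypothetical, for illustration only.

```python
# Per-message fees for «non-template messages», as reported.
FEE_LOW = 0.0490   # EUR, low end of the range
FEE_HIGH = 0.1323  # EUR, high end of the range

# Hypothetical monthly usage of a rival chatbot on WhatsApp.
users = 1_000_000        # assumed monthly active users
messages_per_user = 20   # assumed non-template messages per user

total_messages = users * messages_per_user
cost_low = total_messages * FEE_LOW
cost_high = total_messages * FEE_HIGH

print(f"{total_messages:,} messages -> "
      f"€{cost_low:,.0f} to €{cost_high:,.0f} per month")
```

Even at these modest assumed volumes, the bill lands in the high six to low seven figures per month, which is the «ratchet up quickly» dynamic TechCrunch points to.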

The European Commission is said to be «analyzing» how this move «might affect its interim measures» as well as the broader investigation, Reuters reports.

Read more: Reuters and TechCrunch.

Amodei officially says Anthropic won’t drop Pentagon safeguards

Dario Amodei at TechCrunch Disrupt, 2023. (Picture: TechCrunch (CC BY 2.0))
Following last Friday’s meeting and ultimatum from the Pentagon, which set a deadline to respond by this Friday, Amodei says Anthropic will not comply with the demands.

The Anthropic CEO says they will «work to enable a smooth transition,» after denying the US military use of their AI for mass surveillance or autonomous killing.

— In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values, writes Amodei. — Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.

Continue reading “Amodei officially says Anthropic won’t drop Pentagon safeguards”

EU investigating WhatsApp AI ban, considering «interim measures»

The EU might decide that WhatsApp has to open for competing AI bots sooner rather than later. (Picture: European Commission)
The European Commission said yesterday that it had notified Meta of possible action to open up WhatsApp to rival AI chatbots.

Meta banned all AI chatbots other than Meta AI from WhatsApp on January 15th. While the EU can take a long time to investigate antitrust allegations, it is considering issuing an early order to «avoid Meta’s new policy irreparably harming competition in Europe,» says Teresa Ribera, the EU’s Executive Vice-President for Clean, Just and Competitive Transition.

WhatsApp has over 3 billion users worldwide and qualifies as a gatekeeper in EU parlance, subject to rules on equal access.

Meta says that «There are many AI options and people can use them from app stores, operating systems, devices, websites, and industry partnerships,» in a statement to Reuters.

The process following this formal notification is that parties can examine the EU’s files, reply in writing and then receive a hearing. After that, the Commission will consider «interim measures,» such as restoring access for competitors, even as the case moves forward in their systems.

Read more: Statement by the EC, writeup on Reuters.

The EU wants equal access for other AI models on Google’s Android

The EU wants other AI labs to have the same hooks in Android that Gemini has. (Picture: generated)
— The aim is to ensure that third-party providers have an equal opportunity to innovate and compete in the rapidly evolving AI landscape on smart mobile devices, their statement says, per Engadget.

It’s an investigation («proceeding») opened under the Digital Markets Act (DMA), a law made to ensure major platform owners don’t abuse their power, and Google now has six months to find a workable solution.

Gemini enjoys system-level and app-level access on Android, and many competitors have flagged this as a violation of the DMA.

— We are concerned that further rules, which are often driven by competitor grievances rather than the interests of consumers, will compromise user privacy, security, and innovation, says Clare Kelly, Google’s Senior Competition Counsel, to Reuters.

If the issue is not resolved, the DMA allows fines of up to 10% of a company’s global revenue.

Read more: The Commission’s statement, Engadget, Reuters.

Character.ai and Google settle harmful behavior court cases

With other AI companies watching closely, Google and Character.ai quietly bow out of contentious cases. (Picture: Adobe)
The companies have moved in four states to settle cases where the chatbot was accused of encouraging harmful behavior — sometimes resulting in death.

The court documents contain no actual monetary payouts to the victims’ families, as these seem still to be under negotiation.

These are not the only cases of this kind, and may signal a shift in strategy for the entire AI business, as Meta and OpenAI are also facing similar lawsuits.

Character.ai was accused by parents of encouraging their children to cut their arms, suggesting they murder their parents, writing sexually explicit messages, and failing to discourage suicide, Axios writes.

They have since banned under-18s from using their service.

Read more: Writeups on Axios, TechCrunch.

New Chinese AI rules should spread «core socialist values,» identify as AI

China's new rules require AIs to identify as such, and to spread "core socialist values."
It is almost like China has learned from AI adoption in the west. (Picture: generated)
Newly published AI rules in China address AI psychosis, and forbid the spreading of rumors or content that endangers national security.

They also require that any AI that engages with people should announce at the start that they are talking to an AI, with new warnings every two hours, writes Gizmodo.

Also forbidden are «illegal religious activities,» obscenity, violence and crime, and the list goes on to cover libel and insults, material that damages relationships, and the encouragement of self-harm and suicide.

Providers shouldn’t just warn users against chatting with AI bots for too long, but should also assess the user’s emotional state and take «necessary measures to intervene,» writes Reuters.

Read more: Writeups on Reuters, Gizmodo.

NYT sues Perplexity for copying content after cease-and-desist letter

Copyright might well trump AI retrieval practices, bets the NYT.
The Times says Perplexity is copying their journalism and delivering it without permission. (Picture: Adobe)
The New York Times sent a cease-and-desist letter to Perplexity in 2024, but the company has persisted in copying NYT content in its responses, the lawsuit alleges.

Perplexity still generates outputs that are «identical or substantially similar to» content from the Times, writes CNBC, and sometimes even hallucinates responses that get attributed to them, writes Reuters.

— While we believe in the ethical and responsible use and development of AI, we firmly object to Perplexity’s unlicensed use of our content, says NYT spokesperson Graham James.

Perplexity seems unfazed by the lawsuit, saying in a statement that:

— Publishers have been suing new tech companies for a hundred years, starting with radio, TV, the internet, social media and now AI. Fortunately it’s never worked, or we’d all be talking about this by telegraph.

The NYT has previously also sued OpenAI for infringement.

Read more: The actual complaint, NYT announcement, writeups on Reuters and CNBC.

OpenAI loses privacy fight for ChatGPT message logs in NYT lawsuit

OpenAI is on the verge of losing its fight to keep their users' chat logs private, instead of turning them over to the NYT.
Your logs belong to us, the NYT lawyers say, and the court agrees. (Picture: Adobe)
OpenAI has been fighting tooth and nail to preserve the privacy of their users’ messages in the lawsuit brought by The New York Times in 2023.

A new judgment could now mean they have to turn over more than 20 million chat logs, comprising many more individual messages, from the chatbot, reports Reuters.

The logs themselves are to be anonymized by OpenAI in a way that satisfies the court, but their content could be easy to pin down, and OpenAI has promised to appeal to the presiding judge.

This is merely the discovery phase of the ongoing case, where lawyers for the NYT have said the messages are necessary to discover whether ChatGPT did indeed copy verbatim text from them.

OpenAI must now first anonymize the logs, and then submit them to the court, and NYT’s attorneys, seven days later.

Read more: Reuters has the scoop.

U.S. patent office: AI is like any other tool, and can’t hold patents

AI is just another tool, says the USPTO, and it cannot legally hold patents.
The patent for AI discoveries should go to the user, the USPTO says in a filing. (Picture: Adobe)
The USPTO is out with new guidelines on AI, reported by Reuters, and says quite frankly that AI is like any other tool, such as «computer software, research databases» that «assists in the inventive process.»

They go on to say that AI may «provide services and generate ideas, but they remain tools used by the human inventor who conceived the claimed invention.»

Therefore, AI itself can’t be considered an inventor under current U.S. laws, the document says.

In a departure from Biden administration policies, where AI could be considered a co-inventor, the patent office now says «there is no separate or modified standard for AI-assisted inventions.»

That should mean that using AI to conceive of an invention means the user gets the patent, as with any other tool. This has yet to be tested in U.S. courts, Reuters reports.

Read more: The patent notice, writeups by Reuters, Engadget.

OpenAI can’t use the word «Cameo» inside or outside of Sora

OpenAI is now barred from using the "Cameo" word, which is a major feature in the Sora 2 generator
Sora 2 is barred from using the term until December 22, when a new hearing will be held. (Picture: generated)
Sora’s launch wasn’t just about the ability to make realistic short-form videos, but heavily featured the «Cameo» ability.

This lets you create custom characters of friends or yourself and re-use your «Cameo» in different settings.

Not so fast, said the makers of the real «Cameo,» which sells custom-made celebrity videos or greetings. This is their whole business model, and they promptly sued to get their name back.

Now, U.S. District Judge Eumi K. Lee has granted a temporary restraining order on the use of the word by OpenAI — inside the app and elsewhere — until a hearing can be held on whether or not the ban should be made permanent on December 22.

Read more: scoop by CNBC, writeups by Engadget and Gizmodo.

OpenAI shuts down pipeline of professional advice on ChatGPT [updated]

Reddit used to be a democratizing tool for expert professional advice, but now it's all over. OpenAI lawyered up.
You can no longer use ChatGPT as your personal doctor, as it defies the EU AI Act and FDA guidance, according to OpenAI. (Picture: Adobe)
UPDATE: OpenAI says there are no changes, simply a consolidation of several usage policies that might have led to confusion.

OpenAI updated ChatGPT’s usage policies on October 29, banning a vast swath of content where it was arguably the most useful, such as interpreting medical imagery, helping with medical diagnoses, and offering legal or financial advice.

The idea is to stop ChatGPT (and any other OpenAI model) from giving advice that could be interpreted as professional, fiduciary, or legally binding guidance, as required by the EU AI Act and American FDA guidance.

Continue reading “OpenAI shuts down pipeline of professional advice on ChatGPT [updated]”

OpenAI completes transition to a public benefit corporation

OpenAI will still be controlled by a non-profit, but is now easier to invest in and might go public sometime in the future.
The AI lab is now open to investments, but will have a purpose baked into its corporate structure. (Picture: Adobe)
The AI lab is now a less regulated $500 billion public benefit corporation, controlled by a non-profit arm with about $130 billion in equity.

The new governance makes it possible for them to reach for a market debut at some time in the future, and unlocks the investment of some $30 billion from SoftBank, which was contingent on the regulatory change.

Continue reading “OpenAI completes transition to a public benefit corporation”

Sam Altman on teen use: «Some of our principles are in conflict»

OpenAI will start automatic age checks on its users, and direct teens to clean, "age-appropriate" version.
Happy and clean ChatGPT is coming for teens, and it will call the cops if you cross the line. (Picture: generated).
Trying to balance freedom with safety, OpenAI is going all in on an age-appropriate version of ChatGPT.

Teen use of chatbots and their potential harm is rapidly becoming a hot-button political issue, complete with a Congressional hearing and an FTC probe.

OpenAI is therefore reiterating their new policies on teen use and parental controls, and says they will be rolling out automatic age verification for under-18 users that should default to the teen version when in doubt.

Continue reading “Sam Altman on teen use: «Some of our principles are in conflict»”

Owner of Billboard, Rolling Stone, sues Google over AI Overviews

To stop AI Overviews, you also need to stop appearing in Google's search results, and there is no way of opting out, Penske says.
Speaking for the entire media industry, Penske says AI Overviews are creating havoc on their business model. (Picture: screenshot)
Penske Media Corporation claims Google is siphoning off traffic to their websites and stealing their content with the overviews feature.

Penske is the first major publisher to sue Google over the feature, and research from Pew shows that fewer than one percent of users click a link in results that carry an AI Overview.

20% drop in traffic
PMC, which is also the parent company of Variety and The Hollywood Reporter, has seen traffic drop by some 20% and affiliate revenue decline 30% since Google’s overviews started on their stories.

Continue reading “Owner of Billboard, Rolling Stone, sues Google over AI Overviews”

Friday roundup: OpenAI deals with Microsoft, makes a movie, and Albania gets an AI-generated minister

The first feature length movie made almost entirely by AI is set to debut at next year's Cannes Festival.
Made with «OpenAI resources,» this movie is built from animated uploaded drawings and prompts. (Picture: Screenshot, Critterz)
Microsoft agrees with OpenAI to keep talking
Microsoft is in a complex business relationship with OpenAI, where the early investor gets access to the latest AI tech and OpenAI gets access to computing power. They have just reached a “non-binding memorandum of understanding (MOU) for the next phase of our partnership.” This could allow OpenAI to go for-profit, under the control of a non-profit entity said to retain an ownership stake of more than $100 billion. Many takes on this today, but OpenAI has been moving away from Microsoft for funding, operations and cloud computing lately. The final deal will likely include some kind of a new investment in the now $500 billion company, and may unlock further market opportunities for OpenAI.
More at: OpenAI and Microsoft’s joint statement, x.com announcement, Reuters, Axios.

OpenAI goes to the movies
A new animated A-list movie, «Critterz,» is under development using «OpenAI’s resources.» It should be ready for the Cannes Film Festival, meaning production will be drastically sped up, to only nine months. The script is written by part of the team behind «Paddington in Peru,» and the project is spearheaded by Chad Nelson, a creative specialist at OpenAI. The technique appears to be feeding drawings to a large language model and having it animate them. The movie thus streamlines animation, but won’t skimp on voice actors, Gizmodo writes.
More at: The Wall Street Journal, Gizmodo and Engadget.

Read on for more news!

Continue reading “Friday roundup: OpenAI deals with Microsoft, makes a movie, and Albania gets an AI-generated minister”