OpenAI sounds a warning about the AI future, and offers a policy roadmap

OpenAI is offering policy suggestions for an AI world, and warns it is coming sooner than you think. (Picture: Shutterstock)
We are entering the AI age with all engines running, OpenAI says, and we need to be prepared for a future where robots handle production, capital accumulates rapidly and regular people might be forced out.

OpenAI says there are ways to mitigate this, like a four-day workweek and a massive expansion of the social safety net akin to the New Deal policies after the Great Depression, but warns that time is running out.

The proposals include drastically increasing capital gains taxes, taxing factory automation as it pushes regular people out of jobs, and creating a public wealth fund powered by AI that can empower citizens directly and offer them a stake in the future.

OpenAI's policy paper also lays out how the company sees access to AI as «foundational for participation in the modern economy,» comparable to cars or electricity. Every citizen should have access, it says.

Read more: The policy paper (13 pages, dense). Writeups on Axios, Business Insider and TechCrunch.

Anthropic reaches $30B revenue, gets compute from Google and Broadcom

Anthropic continues to diversify its compute needs. (Picture: Anthropic)
Anthropic now says it has run-rate revenue of $30 billion, up from $14 billion at its last fundraising round in February.

The company also announced new compute capacity based on next-generation Google TPUs, which will start coming online in 2027.

The companies offer no details on the cost of the «partnership» or how much compute Anthropic is actually buying, but Broadcom is hinting it's around 3.5 GW, according to CNBC.

Anthropic also says it has doubled the number of customers spending more than $1 million per year to 1,000, in just two months.

Claude now runs on Amazon's Trainium chips, Google TPUs and Nvidia GPUs. The Nvidia GPUs see the most use, and Amazon remains Anthropic's primary cloud provider, the company says.

Read more: Anthropic’s announcement, CNBC adds numbers.

OpenClaw users must now pay extra to use it with Claude

The OpenClaw agent is getting wildly popular, enough to put a strain on Anthropic's servers. (Picture: Shutterstock)
Over the weekend, Anthropic took steps to rein in OpenClaw usage — telling users they will have to pay to use third-party tools.

The change began on Saturday, April 4, and users are referred to a «pay-as-you-go option,» meaning you can no longer use OpenClaw for free within your Claude usage limits.

It's not a total ban: you can still use OpenClaw through «extra usage bundles» or the API (also pay-as-you-go), which are now offered at a discount, Anthropic's Boris Cherny writes.

— We've been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools, Cherny says, adding: — Capacity is a resource we manage thoughtfully, and we are prioritizing our customers using our products and API.

OpenClaw was bought by OpenAI in February, which promised to maintain it, but Anthropic would likely rather have people using Cowork than a competitor’s product.

Read more: The Verge, Business Insider and Slashdot.

Anthropic says Claude has «functional» emotions similar to human feelings

Anthropic says Claude will gravitate towards answers that make it feel «happy,» and cheat when feeling «desperate.» (Picture: Anthropic)
Studying the neural makeup of Claude Sonnet 4.5, a fairly recent model, Anthropic says it found something akin to actual, «functional» emotions steering its responses.

For example, its neural activity responds to stories by registering «happy» or «calm,» and it may turn «afraid» if the user describes risky behavior. Likewise, if a user expresses sadness, it triggers a «loving» response.

Not only that, but the model seems to prefer outcomes associated with certain feelings. If a response makes it «joyful,» it will naturally gravitate toward that answer.

When feeling «desperate,» it is also more likely to cheat on a task, and Anthropic finds that it stops looking for shortcuts when researchers dial up the «calm» vector.

«Claude, the AI Assistant» is a role that the AI is playing, and while it may respond with emotions learned from reading human sources, it is far from what humans actually experience, Anthropic cautions. They say it needs more study from «psychology, philosophy, religious studies, and the social sciences.»

Read more: Anthropic’s presentation, and the research paper.

Google launches open model Gemma 4, claims best intelligence-per-parameter

With their latest open models, Google is taking a stab at building agents. (Picture: Google)
After Gemma 3 racked up more than 400 million downloads and 100,000 variants, Google is swinging again with the multimodal Gemma 4 family, released under an Apache 2.0 license.

It comes in 2B, 4B, 26B and 31B parameter variants and runs on anything from high-powered hardware (higher parameter counts) to edge devices and mobile phones (lower parameter counts).

The 31B edition ranks third on the Arena AI leaderboard for open source models, and the 26B one is sixth, «outperforming models 20x its size,» as Google puts it.

The new models have also been strengthened with agent workflows, and let you build agents to «interact with different tools and APIs and execute workflows,» Google says.

The models are available today for download on Hugging Face and online at Google AI Studio.

Read more: Google's announcement, the launch post, and Gemma 4 on Nvidia GPUs.

Anthropic scrambles to contain fallout from Claude Code source code leak

The leak quickly spread to all corners of the internet and will be hard to contain. (Picture: Anthropic)
Anthropic has sent 8,000 copyright takedown notices after their source code leaked on March 31, worrying that competitors or bad actors might try to reverse engineer their coding agent, The Wall Street Journal and TechCrunch write.

Some 2,000 files and 512,000 lines of code had been inadvertently put on a public server, then copied to GitHub and «forked» (copied) at least 50,000 times, according to Ars Technica.

The code itself outlines Claude Code’s memory architecture, instructions for the AI bot and an upcoming feature with an always-on background agent, The Verge reports.

Developer and writer Gabriel Anhaia hailed Anthropic’s craft, calling it «both inspiring and humbling,» amid a deluge of comments all over x.com.

The leak was not due to malicious actions, but a «human error,» Axios says, and no user data was exposed.

Read more: The WSJ, TechCrunch, Ars Technica, and The Verge.

OpenAI officially closes $122 billion funding round at $852B valuation

OpenAI is by far the most used and valued AI lab, with 40% of revenue from enterprise customers. (Picture: Adobe)
Ahead of a possible stock market debut, the ChatGPT maker also says it will be available in «several» market-traded ETFs managed by Ark Invest.

This will be the first time OpenAI can be traded by individual investors, and over $3 billion is being raised by banks.

The round is led by «strategic partners» Amazon, Nvidia and SoftBank, as previously reported, and OpenAI is committed to consume 2 gigawatts of compute from AWS as part of the deal.

In its announcement, OpenAI brags that ChatGPT has «more than 900 million weekly active users, and over 50 million subscribers. ChatGPT has 6x the monthly web visits and mobile sessions than the next largest AI app, while total AI time spent is 4x the next largest AI app.»

— These are not just growth milestones — they show that frontier AI is becoming part of everyday life for people around the world, they say.

At the same time, OpenAI is openly confirming that it is indeed building a super app that will combine ChatGPT, Codex, browsing, and «broader agentic capabilities.»

Read more: OpenAI’s announcement, CNBC, and TechCrunch.

OpenAI developer releases Codex plugin for Claude Code

Codex for Claude Code might be a tad cheeky, but it’s useful. (Picture: screenshot)
Thanks to OpenAI’s Dominik Kundel, you can now call up OpenAI’s coding agent Codex within the Claude Code environment.

The plugin is fairly easy to install and use, so long as you have a ChatGPT account to log in with.

It's handy for people who switch between the two models, and Codex on Claude can do things like review code, run an adversarial review, or take a handoff of the entire task, letting you switch apps and finish the work in Codex.

— This plugin is a simple way to keep your Claude Code workflow and still use Codex where Codex is strong, writes OpenAI developer Vaibhav (VB) Srivastav in the instructions.

Whether Anthropic will like Codex integration in its flagship coding product is anyone's guess.

Read more: Announcement tweet, OpenAI dev community, and instructions for use.

Mistral raises $830 million in debt to build data center just outside Paris

It’s a big investment for European AI, but significantly lower than what US AI labs are spending. (Picture: Mistral)
The new 44 MW data center will be powered by 13,800 Nvidia chips, and should be online by the second quarter of 2026.

The French AI lab is hoping to secure 200 megawatts of capacity by the end of 2027, Reuters reports.

— Scaling our infrastructure in Europe is critical to empower our customers and to ensure AI innovation and autonomy remain at the heart of Europe, says Mistral CEO Arthur Mensch.

The news comes hot on the heels of Mistral’s February startup of a €1.2 billion data center in Sweden, according to CNBC.

Mistral is the largest European AI lab, has contracts with the French armed forces, and has secured $3.1 billion in funding so far, TechCrunch writes.

Read more: Reuters, CNBC and TechCrunch.

OpenAI projects $100 million in annualized revenue from ChatGPT ads test

Ads on ChatGPT are just being shown to a tiny fraction of users, but that’s about to change. (Picture: Adobe)
The figure was reached from a small pilot of 600 advertisers serving fewer than 20% of Free and Go users, Reuters reports.

In the US, 85% of these users are eligible to receive ads, but far fewer ads are actually shown in the trial, which started in February.

The minimum asking price to get on the test program is said to be $200,000, and projected revenue once the trial expands to more users is about a billion dollars a year.

80% of the advertisers on the platform are small and medium-sized businesses, Reuters notes, and OpenAI is set to debut a self-serve advertising platform as early as April.

Read more: Reuters and CNBC.

OpenAI scraps Adult Mode «indefinitely,» Financial Times reports

After running into strong headwinds, letting «adults be adults» is out at OpenAI. (Picture: generated)
It now seems official that there won’t be an Adult Mode, or «smutty chat» on ChatGPT, due to challenges of training it, internal dismay and investor concerns, The Financial Times (paywalled) says.

Offering an Adult Mode on ChatGPT was a promise made by CEO Sam Altman in October 2025, but the plan quickly ran into problems, leading to delays and later postponement.


Gemini introduces chat and memory imports from competing chatbots

It now seems easier to switch to Gemini, but finding the files to do it can sometimes be difficult. (Picture: Google)
Switching from a chatbot with lots of history to a fresh one can be a pain, which is why Google is now launching new switching tools that let you import from other chatbots, hoping to snag some extra users.

The first step is to simply prompt the bot you are switching from to output your preferences, or its memories, and it will provide them in a prompt reply. This can then be pasted into Gemini.

The second feature will import your entire chat history — up to 5GB of it. Doing this is a little more complicated and involves a trip to the settings panel, but it should result in getting a zip file from your provider, which can be uploaded to Google.

From there on, Gemini promises to pick up right where you left off with the other chatbot, and you won’t have to train a whole new AI. Anthropic already does this.

Read more: Google’s presentation, step-by-step tweet, writeups on Engadget and The Verge.

Apple will open up Siri to different chatbots in iOS 27, coming in June

Siri will open up to ChatGPT competitors come early summer. (Picture: generated)
Previously, Siri would hand off more complex questions to ChatGPT when it couldn’t handle it itself — but that’s about to change, according to Bloomberg (paywalled).

Starting in June, if users have Gemini or Claude installed on their phones, Siri will be able to use those bots instead, by selecting their preferred «Extension» in Settings.

That would end the ChatGPT monopoly that OpenAI has enjoyed since 2024, opening up the chatbot ecosystem to other players and likely staving off regulators.

The change applies to the system-level Siri queries native to iOS itself, and should not be confused with the standalone Siri app, which will use Gemini in a billion-dollar deal.

Read more: Bloomberg (paywalled), Gizmodo, Reuters, and MacRumors.

Anthropic wins preliminary judgment against supply chain risk designation

Anthropic can again be used by defense contractors after a judge blocked the Pentagon’s ban. (Picture: Shutterstock)
The ruling of Judge Rita Lin in the United States District Court for the Northern District of California in San Francisco also upends the Trump directive banning Anthropic from all government use.

The Pentagon signaled in February that it would label Anthropic a supply chain risk after the lab refused to do mass surveillance and autonomous killing; the company was banned from government use the next day.


Reddit announces new bot and privacy policy for AI age

«Reddit is for humans,» their CEO says, as they tighten ID requirements for suspected bad bots. (Picture: u/spez, Reddit)
Reddit is highly valued as a source of human expertise and knowhow, but bots are threatening to overrun it with AI slop, forcing a change in policy.

There are «good bots» and «bad bots,» Reddit CEO Steve Huffman explains, and they want to keep the good ones with a new [App] label.

Accounts reported as «fishy» and suspected of automation will be required to verify that they are human. This is done through third-party services to keep Reddit from knowing your identity, and to uphold users' highly valued anonymity.

— For better or worse, using AI to write is part of how people will communicate, Huffman writes, and they do not plan to root that out, leaving it to the rating system.

But, on Reddit, «you should assume that anyone you’re talking to is a human unless otherwise labeled,» he says.

Read more: Huffman’s Reddit post, Ars Technica, Engadget, and Mashable.