
CEO Sam Altman promised an Adult Mode for ChatGPT in October 2025, but the plan quickly ran into problems, leading to delays and, later, postponement.
Continue reading “OpenAI scraps Adult Mode «indefinitely,» Financial Times reports”

The first step is simply to prompt the bot you are switching from to output your preferences or its memories; it will provide them in its reply, which can then be pasted into Gemini.
The second feature imports your entire chat history — up to 5GB of it. This is a little more involved and requires a trip to the settings panel, but it results in a zip file from your provider, which can then be uploaded to Google.
From there, Gemini promises to pick up right where you left off with the other chatbot, and you won’t have to train a whole new AI. Anthropic already does this.
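The export step above hands you a zip archive of structured chat data. As a rough illustration of what such a bundle might look like (the filenames and JSON schema here are invented for the example, not any provider's actual format), here is a stdlib-only Python sketch:

```python
import json
import zipfile

# Hypothetical chat-history records; real export schemas vary by provider.
conversations = [
    {"title": "Trip planning", "messages": [
        {"role": "user", "content": "Find me train routes to Bergen"},
        {"role": "assistant", "content": "Here are three options..."},
    ]},
]

# Bundle each conversation as its own JSON file inside the archive.
with zipfile.ZipFile("chat_export.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for i, conv in enumerate(conversations):
        zf.writestr(f"conversation_{i}.json", json.dumps(conv, indent=2))

# Round-trip check: the archive should contain the same data we wrote.
with zipfile.ZipFile("chat_export.zip") as zf:
    restored = json.loads(zf.read("conversation_0.json"))
```

The import side would simply read such an archive back and merge the messages into the new assistant's history.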
Read more: Google’s presentation, step-by-step tweet, writeups on Engadget and The Verge.

Starting in June, if users have Gemini or Claude installed on their phones, Siri will be able to use those bots instead, once their preferred «Extension» is set in Settings.
That would end the ChatGPT monopoly OpenAI has enjoyed since 2024 and open the chatbot ecosystem to other players, likely staving off regulators.
The opening applies to the system-level Siri queries native to iOS itself, and should not be confused with the standalone Siri app, which will use Gemini under a billion-dollar deal.
Read more: Bloomberg (paywalled), Gizmodo, Reuters, and MacRumors.

In February, the Pentagon signaled it would label Anthropic a supply chain risk after the lab refused to support mass surveillance and autonomous killing; Anthropic was banned from government use the next day.
Continue reading “Anthropic wins preliminary judgment against supply chain risk designation”

There are «good bots» and «bad bots,» Reddit CEO Steve Huffman explains, and the site wants to keep the good ones, marked with a new [App] label.
Accounts reported as «fishy» and suspected of automation will be required to verify that they are human. This is done through third-party services to keep Reddit from knowing your identity — and to uphold the site’s highly valued anonymity.
— For better or worse, using AI to write is part of how people will communicate, Huffman writes, and they do not plan to root that out, leaving it to the rating system.
But, on Reddit, «you should assume that anyone you’re talking to is a human unless otherwise labeled,» he says.
Read more: Huffman’s Reddit post, Ars Technica, Engadget, and Mashable.

That means Apple can run «distillation» on the model: using it to generate answers and reasoning across a wide array of tasks, then training smaller, more capable Apple models on the output, MacRumors says.
Distillation is a controversial technique, and many of the big AI labs have been accusing Chinese startups of doing it to make their own models more capable.
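In its simplest form, distillation trains the smaller model to match the larger model's «soft» output probabilities rather than hard labels. A minimal sketch of that idea, using toy logits and a hand-rolled gradient step rather than any lab's actual pipeline:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distill_grad(student_logits, teacher_logits, T=2.0):
    """Gradient (up to a 1/T factor) of the soft-label cross-entropy
    with respect to the student's logits; zero when student matches teacher."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student's current predictions
    return q - p

# Toy run: nudge a 3-class student toward the teacher's soft labels.
teacher_logits = np.array([4.0, 1.0, 0.5])
student_logits = np.zeros(3)
for _ in range(2000):
    student_logits -= 0.5 * distill_grad(student_logits, teacher_logits)
```

In a real setup the «teacher» is the large model answering many prompts, and the gradient step runs through the student network's parameters; the loss, however, is the same soft-target matching shown here.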
Apple can also tinker with Gemini, to make it give responses that Apple likes, MacRumors writes.
The Gemini model is optimized for chatbots and coding, and might not always produce the kinds of answers that Apple wants, they note.
Read more: The Information (paywalled), MacRumors.

This comes after a trial period in which Copilot trained on internal Microsoft engineers’ data, which GitHub says «improved model performance.»
GitHub will not train on your entire code repositories, only on your interactions with Copilot — including accepted outputs, inputs sent to the model, and «code context.»
GitHub is hardly alone here: Anthropic and OpenAI have been doing the same for more than half a year, and it’s common industry practice.
If you don’t like Copilot training on your data, you can opt out on the Copilot features page.
Read more: GitHub’s announcement, How-To Geek.

Co-developed with Meta, the chip is claimed to deliver twice the performance per rack compared to x86 platforms.
Agentic AI compute is expected to require more than four times the current capacity per gigawatt in data centers, and both Arm and Meta expect the design to iterate across several generations.
The AGI CPU is projected to lift Arm’s revenue by «billions» of dollars, Reuters reports, and has over fifty launch partners, including OpenAI, Amazon AWS, Google Cloud, and Meta.
Read more: Arm presser, Meta presser, product page, Reuters, and The Verge.

The video generation app had amassed 920 million users since December 2025 and was for a while the number one app on the App Store, before declining to #165 recently.
Closing the free app, estimated to cost $15 million per day to run, frees up resources for OpenAI’s recent focus on coding and business; internally, Sora was labelled a «side quest.»
With Sora discontinued, OpenAI is also leaving behind a $1 billion deal with Disney — which had licensed some of its characters for use on the platform. Disney says they are open to new investments, and «respects» OpenAI’s decision.
Read more: The Wall Street Journal, Reuters, and Tibor Blaho.

Available as a research preview for Pro and Max subscribers, it will identify what tools it needs to complete a task, and then ask for connectors to, say, the Finder on the Mac or Chrome.
Anthropic warns that the feature is «still early» and can make mistakes, as well as being vulnerable to threats. It can also be slower than doing the task yourself.
The feature works especially well with Dispatch, Anthropic says, a tool released last week to let you start a task from your mobile and finish it up on the computer.
With it, you can get Claude to check your emails in the morning, or pull updates from spreadsheets, or «spin up a Claude Code session» directly from your phone.
Read more: Anthropic’s announcement, Anthropic on Dispatch, and Engadget.

That means the test showing ads to about 5% of users is coming to an end, and the full rollout will begin just after Easter.
The limited advertising has so far been a success; the main complaint from advertisers is that the rollout is going too slowly, according to CNBC. Most are happy and ready to spend more — with more varied ads.
— We’re encouraged by early signals from users and participating brands, and continue to see strong interest from advertisers, OpenAI tells CNBC.
The advertising program on Free and Go tiers is expected to earn OpenAI about $1 billion per year, and usher in a third tier for advertisers in addition to Search, Social, and Retail.
Read more: The Information (paywalled), Reuters, and CNBC.

The role calls for long, PhD-level experience in «chemical weapons and/or explosives defence,» the LinkedIn post says.
It would be helpful if the person has an «understanding of radiological materials,» the posting goes on, and says the candidate will be «tackling critical problems in preventing catastrophic misuse.»
OpenAI is not far behind in worrying about these issues and has a similar job post open, though it is looking for someone with machine-learning experience from red-teaming, to safeguard its AI’s responses.
Using any AI for developing these kinds of weapons is of course against all the labs’ terms of use, but as the models grow more capable, they also need more safeguards.
Read more: Anthropic’s job post, OpenAI’s job post, writeups on the BBC and Mashable.

The app will make it easier for teams within OpenAI to work together, the WSJ reports, and will help other users with productivity-related tasks, as they double down on enterprise users.
The standalone ChatGPT app will not be affected by the move, although the paper notes that OpenAI feels it has lost attention by focusing on «side quests» like the Sora app — now rumored to be folded into ChatGPT proper.
OpenAI’s Fidji Simo will be leading the super app effort, and she tweets that:
— When new bets start to work, like we’re seeing now with Codex, it’s very important to double down on them and avoid distractions.
Read more: The Wall Street Journal and CNBC.

The main focus of the deal is inference workloads, the process of producing answers from a trained model in response to a query, which is growing apace with AI’s general expansion.
— Inference is hard. It’s wickedly hard, Buck told Reuters. — To be the best at inference, it is not a one chip pony. We actually use all seven chips.
Amazon is betting on a broad mix of chips, Reuters reports, and says in their press release that they are buying Blackwell and Vera Rubin chips.
From what Reuters understands, they will also be buying a number of the newly released Groq 3 LPX servers — which are optimized for inference and can do 700 million tokens per second.
Read more: Reuters report, Amazon press release.

Some of the most beloved and, importantly, most used Python developer tools come from the company, which will now be backed by OpenAI.
The deal for roughly 32 employees will strengthen Codex by integrating the tools that have «hundreds of millions of downloads per month,» according to Astral themselves.
OpenAI will continue to maintain the open-source projects, and with access to them — and to the engineers’ know-how — Codex’s AI agents will be able to work more closely with the tools.
Read more: OpenAI’s announcement, Astral’s announcement, and CNBC.