Gemini introduces chat and memory imports from competing chatbots

It now seems easier to switch to Gemini, but finding the files to do it can sometimes be difficult. (Picture: Google)
Switching from a chatbot with lots of history to a fresh one can be a pain, which is why Google is now launching new switching tools that let you import chats and memories from other chatbots, hoping to snag some extra users in the process.

The first step is to simply prompt the bot you are switching from to output your preferences, or its memories of you, and it will return them as a text reply. This can then be pasted into Gemini.

The second feature will import your entire chat history — up to 5GB of it. Doing this is a little more complicated and involves a trip to the settings panel, but it should result in a zip file from your provider, which can then be uploaded to Google.
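The export format varies by provider, but a quick sanity check on the zip before uploading can save a failed import. A minimal sketch, assuming the export is an ordinary zip archive (the function name and the 5GB cap check are illustrative, not part of any official tool):

```python
import zipfile

def summarize_export(path: str) -> dict:
    """Count the files and total uncompressed size in a chat-export zip."""
    summary = {"files": 0, "bytes": 0}
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            if info.is_dir():
                continue
            summary["files"] += 1
            summary["bytes"] += info.file_size
    # Gemini's import limit is reported as 5GB.
    summary["under_5gb"] = summary["bytes"] < 5 * 1024**3
    return summary
```

Running this on the downloaded archive before heading to Gemini tells you whether the export is complete and within the size limit.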

From there on, Gemini promises to pick up right where you left off with the other chatbot, so you won’t have to train a whole new AI. Anthropic already offers a similar import feature.

Read more: Google’s presentation, step-by-step tweet, writeups on Engadget and The Verge.

Apple will open up Siri to different chatbots in iOS 27, coming in June

Siri will open up to ChatGPT competitors come early summer. (Picture: generated)
Previously, Siri would hand off more complex questions to ChatGPT when it couldn’t handle it itself — but that’s about to change, according to Bloomberg (paywalled).

Starting in June, if users have Gemini or Claude installed on their phones, Siri will be able to use those bots instead, by registering their preferred «Extension» in Settings.

That would end the monopoly that OpenAI’s ChatGPT has enjoyed since 2024 and open up the chatbot ecosystem to other players, likely staving off regulators.

The opening applies to the system-level Siri queries native to iOS itself, and should not be confused with the standalone Siri app, which will run on Gemini under a billion-dollar deal.

Read more: Bloomberg (paywalled), Gizmodo, Reuters, and MacRumors.

Anthropic wins preliminary judgment against supply chain risk designation

Anthropic can again be used by defense contractors after a judge blocked the Pentagon’s ban. (Picture: Shutterstock)
The ruling, by Judge Rita Lin of the United States District Court for the Northern District of California in San Francisco, also upends the Trump directive banning Anthropic from all government use.

The Pentagon signaled in February that it would label Anthropic a supply chain risk, after the lab refused to allow mass surveillance and autonomous killing; the ban from government use followed the next day.

Continue reading “Anthropic wins preliminary judgment against supply chain risk designation”

Reddit announces new bot and privacy policy for AI age

«Reddit is for humans,» their CEO says, as they tighten ID requirements for suspected bad bots. (Picture: u/spez, Reddit)
Reddit is highly valued as a source of human expertise and know-how, but bots are threatening to overrun it with AI slop, forcing a change in policy.

There are «good bots» and «bad bots,» Reddit CEO Steve Huffman explains, and the company wants to keep the good ones, marking them with a new [App] label.

Accounts reported as «fishy» and suspected of automation will be required to verify that they are human. Verification is handled by third-party services, so Reddit never learns your identity, preserving the anonymity the site prizes.

— For better or worse, using AI to write is part of how people will communicate, Huffman writes, and they do not plan to root that out, leaving it to the rating system.

But, on Reddit, «you should assume that anyone you’re talking to is a human unless otherwise labeled,» he says.

Read more: Huffman’s Reddit post, Ars Technica, Engadget, and Mashable.

Apple able to extract model responses from their custom Gemini solution

With Gemini running on Apple’s own servers, they have wide access and permission to customize it. (Picture: generated)
With Google’s bespoke Gemini model running on their internal servers, Apple will have full access to the AI, The Information (paywalled) writes.

That means they can run «distillation» on the model: using it to generate answers and reasoning across a wide array of tasks, then training smaller, more capable Apple models on that output, MacRumors says.

Distillation is a controversial technique, and many of the big AI labs have been accusing Chinese startups of doing it to make their own models more capable.
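As an illustration of the technique (not Apple’s actual pipeline), distillation typically trains a small «student» model to match a large «teacher» model’s softened output distribution. A minimal numpy sketch of the soft-label loss, with all names and the temperature value chosen for the example:

```python
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Softmax with temperature; higher temperature gives softer distributions."""
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened targets.

    Minimizing this makes the student inherit the teacher's relative
    preferences between outputs, not just its top answer.
    """
    p_teacher = softmax(np.asarray(teacher_logits, dtype=float), temperature)
    p_student = softmax(np.asarray(student_logits, dtype=float), temperature)
    return float(-np.sum(p_teacher * np.log(p_student + 1e-12)))
```

A student whose logits already match the teacher’s attains the minimum of this loss (the teacher’s own entropy); training pushes the student’s weights toward that minimum across many sampled tasks.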

Apple can also tinker with Gemini, to make it give responses that Apple likes, MacRumors writes.

The Gemini model is optimized for chatbots and coding, and might not always produce the kinds of answers that Apple wants, they note.

Read more: The Information (paywalled), MacRumors.

Github Copilot to start training on user interactions from April 24

Github is coming for your code, after a successful trial on internal Microsoft data. (Picture: Github)
If you have ever used Copilot to complete your code on Github, your data can now be «used to train and improve our AI models,» Github says.

This comes after a trial period in which Copilot fed on internal Microsoft engineers’ data, which Github says «improved model performance.»

They will not train on your entire code repositories, only on your interactions with Copilot, including accepted outputs, inputs sent to the model, and «code context.»

Github is hardly alone here: Anthropic and OpenAI have been training on user interactions for more than half a year, and it’s common industry practice.

If you don’t like Copilot training on your data, you can opt out on the Copilot features page.

Read more: Github’s announcement, How-to Geek.

Arm releases its first physical silicon chip, the AGI CPU, for agentic inference

Arm says agent workflows are set to rise four times, and their new CPU is tailor made for the process. (Picture: Arm)
Precisely catching the fastest-growing trend in AI computing, Arm says its new CPU is tailor-made for agent workloads.

Co-developed with Meta, the chip is claimed to deliver twice the performance per rack compared to x86 platforms.

Agentic AI compute is expected to require more than four times the current capacity per gigawatt in data centers, and both Arm and Meta expect the design to iterate across several generations.

The AGI CPU is projected to lift Arm’s revenue by «billions» of dollars, Reuters reports, and has over fifty launch partners, including OpenAI, Amazon AWS, Google Cloud, and Meta.

Read more: Arm presser, Meta presser, product page, Reuters, and The Verge.

OpenAI axes Sora video app and API, and it won’t live on in ChatGPT

The expensive «side quest» of Sora video generation is officially at an end. (Picture: Shutterstock)
Contrary to earlier rumors, the app won’t be integrated into ChatGPT, write The Wall Street Journal and Reuters, citing an internal email from CEO Sam Altman.

The video generation app had amassed 920 million users since December 2025 and was for a while the number one app on the App Store, before declining to #165 recently.

Closing the free app, which was estimated to cost $15 million per day to run, frees up resources for OpenAI’s recent focus on coding and business; internally, Sora had been labelled a «side quest.»

With Sora discontinued, OpenAI is also leaving behind a $1 billion deal with Disney — which had licensed some of its characters for use on the platform. Disney says they are open to new investments, and «respects» OpenAI’s decision.

Read more: The Wall Street Journal, Reuters and Tibor Blaho.

Claude Code and Cowork get computer use agent, works with phone

Code and Cowork from anywhere on your mobile phone; they now seamlessly hand off tasks. (Picture: Anthropic)
Anthropic’s most popular apps can now spin up an agent to use your computer to complete tasks — and you can even start it from your mobile.

Available as a research preview for Pro and Max subscribers, it will identify what tools it needs to complete a task, and then ask for connectors to, say, the Finder on the Mac or Chrome.

Anthropic warns that the feature is «still early» and can make mistakes, and that it remains vulnerable to security threats. It can also be slower than doing the task yourself.

The feature works especially well with Dispatch, Anthropic says, a tool released last week to let you start a task from your mobile and finish it up on the computer.

With it, you can get Claude to check your emails in the morning, or pull updates from spreadsheets, or «spin up a Claude Code session» directly from your phone.

Read more: Anthropic’s announcement, Anthropic on Dispatch, and Engadget.

As OpenAI prepares to show ads to all Free and Go users, advertisers are giddy

Everyone on Free and Go plans will be getting ads before long. (Picture: screenshot)
According to The Information (paywalled), OpenAI will soon end its ads «experiment» and move to a full advertising service in «the coming weeks,» Reuters reports.

That means the test showing ads to about 5% of users is coming to an end, and the full rollout will begin just after Easter.

The limited advertising has so far been a success; the main complaint from advertisers, according to CNBC, is that the rollout is going too slowly. Most are happy and ready to spend more, on more varied ads.

— We’re encouraged by early signals from users and participating brands, and continue to see strong interest from advertisers, OpenAI tells CNBC.

The advertising program on Free and Go tiers is expected to earn OpenAI about $1 billion per year, and usher in a third tier for advertisers in addition to Search, Social, and Retail.

Read more: The Information (paywalled), Reuters, and CNBC.

Labs are hiring experts to protect against «catastrophic misuse»

As their models grow more capable, so does the potential for WMD misuse, and AI labs want to stay ahead of the curve. (Picture: Adobe)
Anthropic is hiring a weapons expert, the BBC reports.

The role calls for someone with long, PhD-level experience in «chemical weapons and/or explosives defence,» the LinkedIn post says.

It would be helpful if the person has an «understanding of radiological materials,» the posting goes on, and says the candidate will be «tackling critical problems in preventing catastrophic misuse.»

OpenAI is not far behind in worrying about these issues and has a similar job post open, though it is looking for someone with machine-learning experience from red-teaming, to safeguard its AI’s responses.

Using any AI for developing these kinds of weapons is of course against all the labs’ terms of use, but as the models grow more capable, they also need more safeguards.

Read more: Anthropic’s job post, OpenAI’s job post, writeups on the BBC and Mashable.

OpenAI plans to combine Codex, ChatGPT and Atlas in «super app»

Feeling that OpenAI has lost focus, attention turns to putting all eggs in one basket. (Picture: generated)
According to The Wall Street Journal, the new app will include agentic capabilities, and signals another step in the company’s recent quest to refocus on coding and business users.

The app will make it easier for teams within OpenAI to work together, the WSJ reports, and will help other users with productivity-related tasks, as they double down on enterprise users.

The standalone ChatGPT app will not be affected by the move, although the paper notes that OpenAI feels it has lost attention by focusing on «side quests» like the Sora app — now rumored to get included in ChatGPT proper.

OpenAI’s Fidji Simo will be leading the super app effort, and she tweets that:

— When new bets start to work, like we’re seeing now with Codex, it’s very important to double down on them and avoid distractions.

Read more: The Wall Street Journal and CNBC.

Amazon to buy one million Nvidia chips, focusing on inference and Groq

Nvidia’s newly released Groq 3 LPX servers are already in demand. (Picture: Amazon)
Nvidia executive Ian Buck confirms to Reuters that the company will sell the chips to Amazon, with deliveries starting this year and wrapping up in 2027.

The deal’s main focus is inference workloads, the process of generating answers and completing tasks from an AI query, which is growing apace with AI’s general expansion.

— Inference is hard. It’s wickedly hard, Buck told Reuters. — To be the best at inference, it is not a one-chip pony. We actually use all seven chips.

Amazon is betting on a broad mix of chips, Reuters reports, and says in their press release that they are buying Blackwell and Vera Rubin chips.

From what Reuters understands, they will also be buying a number of the newly released Groq 3 LPX servers — which are optimized for inference and can do 700 million tokens per second.

Read more: Reuters report, Amazon press release.

Codex grows to 2 million weekly users, acquires Python developers Astral

With the popular developers joining, Codex moves in closer on the software stack. (Picture: Shutterstock)
Announcing that Codex has seen a 3x increase in users and 5x more actual usage this year, reaching 2 million weekly active users, OpenAI says it is buying Python developer tool company Astral.

Some of the most beloved and, importantly, most used Python developer tools come from the company, and they will now be supported by OpenAI.

The deal for roughly 32 employees will strengthen Codex by integrating the tools that have «hundreds of millions of downloads per month,» according to Astral themselves.

OpenAI will continue to maintain the open source projects, and with access to them, and to the engineers’ know-how, Codex’s AI agents will be able to work more closely with the tools.

Read more: OpenAI’s announcement, Astral’s announcement, and CNBC.

«Vibe design» by Gemini — Google updates Stitch for the AI age

Design help from Google? If it floats your boat. (Picture: Google)
Promising to let «anyone» create layouts with natural language prompts and turn them into «high-fidelity UI designs,» Stitch is supposed to let you «vibe design» your projects.

It is intended to let you «explore ideas quickly» with a «high quality outcome.»

The app can take input from text, images, or code, and provides you with an entire design language that you can pick and choose from, with an «infinite» canvas storing your ideas.

It should be equally good at designing for the web and for apps, though the results come across as somewhat boilerplate and generic.

I tried to get it to brainstorm a little about improving the design of this webpage, and the results were terrible, but it might be worth it for other projects.

The improved Stitch is available at stitch.withgoogle.com and can be accessed for free anywhere Gemini is available.

Read more: Google’s introduction, launch tweet.