Reddit announces new bot and privacy policy for AI age

«Reddit is for humans,» its CEO says, as the company tightens ID requirements for suspected bad bots. (Picture: u/spez, Reddit)
Reddit is highly valued as a source of human expertise and know-how, but bots threaten to overrun it with AI slop, forcing a change in policy.

There are «good bots» and «bad bots,» Reddit CEO Steve Huffman explains, and the company wants to keep the good ones, marking them with a new [App] label.

Accounts reported as «fishy» and suspected of automation will be required to verify that they are human. This is done through third-party services, so Reddit never learns your identity and the platform’s highly valued anonymity is upheld.

«For better or worse, using AI to write is part of how people will communicate,» Huffman writes. The company does not plan to root that out, leaving it to the rating system.

But, on Reddit, «you should assume that anyone you’re talking to is a human unless otherwise labeled,» he says.

Read more: Huffman’s Reddit post, Ars Technica, Engadget, and Mashable.

Apple able to extract model responses from its custom Gemini solution

With Gemini running on Apple’s own servers, the company has wide access and permission to customize it. (Picture: generated)
With Google’s bespoke Gemini model running on Apple’s internal servers, the company will have full access to the AI, The Information (paywalled) writes.

That means Apple can run «distillation» on the model: using it to generate answers and reasoning across a wide array of tasks, then training smaller, more capable Apple models on that output, MacRumors says.
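In outline, the technique is simple: the large «teacher» model’s output probabilities become soft training targets for a smaller «student,» which learns to imitate the teacher’s whole output distribution rather than just its top answer. A minimal sketch of the core loss in plain Python (illustrative only; the function names are ours, and this is not Apple’s or Google’s actual pipeline):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, optionally softened by a temperature."""
    z = [x / temperature for x in logits]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in z]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this over many teacher-generated examples pushes the student
    toward the teacher's full output distribution, not just its top answer.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

A student whose logits already match the teacher’s gets a loss of zero; a higher temperature softens the targets so low-probability alternatives still carry signal.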

Distillation is a controversial technique, and many of the big AI labs have accused Chinese startups of using it to make their own models more capable.

Apple can also tinker with Gemini to make it give responses that Apple likes, MacRumors writes.

The Gemini model is optimized for chatbots and coding, and might not always produce the kinds of answers that Apple wants, they note.

Read more: The Information (paywalled), MacRumors.

GitHub Copilot to start training on user interactions from April 24

GitHub is coming for your code, after a successful trial on internal Microsoft data. (Picture: GitHub)
If you have ever used GitHub Copilot to complete your code, your data can now be «used to train and improve our AI models,» GitHub says.

This comes after a trial period in which Copilot trained on data from internal Microsoft engineers, which GitHub says «improved model performance.»

GitHub will not train on your entire code repositories, only on your interactions with Copilot: accepted outputs, inputs sent to the model, and «code context.»

GitHub is hardly alone here: Anthropic and OpenAI have been training on user interactions for more than half a year, and it is common industry practice.

If you don’t like Copilot training on your data, you can opt out on the Copilot features page.

Read more: GitHub’s announcement, How-To Geek.

Arm releases its first physical silicon chip, the AGI CPU, for agentic inference

Arm says agent workflows are set to rise fourfold, and its new CPU is tailor-made for the job. (Picture: Arm)
Catching the fastest-growing trend in AI computing at just the right moment, Arm says its new CPU is tailor-made for agent workloads.

Co-developed with Meta, the chip is claimed to deliver twice the performance per rack compared to x86 platforms.

Agentic AI compute is expected to require more than four times the current capacity per gigawatt in data centers, and both Arm and Meta expect the design to iterate across several generations.

The AGI CPU is projected to lift Arm’s revenue by «billions» of dollars, Reuters reports, and has over fifty launch partners, including OpenAI, Amazon AWS, Google Cloud, and Meta.

Read more: Arm presser, Meta presser, product page, Reuters, and The Verge.

OpenAI axes Sora video app and API, and it won’t live on in ChatGPT

The expensive «side quest» of Sora video generation is officially at an end. (Picture: Shutterstock)
Contrary to earlier rumors, the app won’t be integrated into ChatGPT, write The Wall Street Journal and Reuters, citing an internal email from CEO Sam Altman.

The video generation app had amassed 920 million users since December 2025 and was for a while the number one app on the App Store, before declining to #165 recently.

Closing the free app, which is estimated to cost $15 million per day to run, frees up resources for OpenAI’s recent focus on coding and business; internally, the app was labelled a «side quest.»

With Sora discontinued, OpenAI is also leaving behind a $1 billion deal with Disney, which had licensed some of its characters for use on the platform. Disney says it is open to new investments and «respects» OpenAI’s decision.

Read more: The Wall Street Journal, Reuters and Tibor Blaho.

Claude Code and Cowork get a computer-use agent, works from your phone

Code and Cowork from anywhere on your mobile phone; they now seamlessly hand off tasks. (Picture: Anthropic)
Anthropic’s most popular apps can now spin up an agent to use your computer to complete tasks — and you can even start it from your mobile.

Available as a research preview for Pro and Max subscribers, it will identify what tools it needs to complete a task, and then ask for connectors to, say, the Finder on the Mac or Chrome.

Anthropic warns that the feature is «still early» and can make mistakes, and that it is vulnerable to security threats. It can also be slower than doing the task yourself.

The feature works especially well with Dispatch, a tool released last week that lets you start a task on your phone and finish it on the computer, Anthropic says.

With it, you can get Claude to check your emails in the morning, or pull updates from spreadsheets, or «spin up a Claude Code session» directly from your phone.

Read more: Anthropic’s announcement, Anthropic on Dispatch, and Engadget.

As OpenAI prepares to show ads to all Free and Go users, advertisers are giddy

Everyone on Free and Go plans will be getting ads soon. (Picture: screenshot)
According to The Information (paywalled), OpenAI will soon end its advertising «experiment» and roll out a full advertising service in «the coming weeks,» Reuters reports.

That means the test showing ads to about 5% of users is coming to an end, and the full rollout will start just after Easter.

The limited advertising has so far been a success. The main complaint from advertisers is that the rollout is going too slowly, according to CNBC; most are happy and ready to spend more, with more varied ads.

«We’re encouraged by early signals from users and participating brands, and continue to see strong interest from advertisers,» OpenAI tells CNBC.

The advertising program on Free and Go tiers is expected to earn OpenAI about $1 billion per year, and usher in a third tier for advertisers in addition to Search, Social, and Retail.

Read more: The Information (paywalled), Reuters, and CNBC.

Labs are hiring experts to protect against «catastrophic misuse»

As their models grow more capable, so does the potential for WMD misuse, and AI labs want to stay ahead of the curve. (Picture: Adobe)
Anthropic is hiring a weapons expert, the BBC reports.

The role calls for long, PhD-level experience in «chemical weapons and/or explosives defence,» the LinkedIn post says.

It would be helpful if the person has an «understanding of radiological materials,» the posting goes on, and says the candidate will be «tackling critical problems in preventing catastrophic misuse.»

OpenAI is not far behind in worrying about these issues and has a similar job post open, though it is looking for someone with machine-learning experience from red-teaming to safeguard its AI’s responses.

Using any AI for developing these kinds of weapons is of course against all the labs’ terms of use, but as the models grow more capable, they also need more safeguards.

Read more: Anthropic’s job post, OpenAI’s job post, writeups on the BBC and Mashable.

OpenAI plans to combine Codex, ChatGPT and Atlas in «super app»

Feeling that it has lost focus, OpenAI turns its attention to putting all its eggs in one basket. (Picture: generated)
According to The Wall Street Journal, the new app will include agentic capabilities, and signals another step in the company’s recent quest to refocus on coding and business users.

The app will make it easier for teams within OpenAI to work together, the WSJ reports, and will help other users with productivity-related tasks, as they double down on enterprise users.

The standalone ChatGPT app will not be affected by the move, although the paper notes that OpenAI feels it has lost attention by focusing on «side quests» like the Sora app, now rumored to be folded into ChatGPT proper.

OpenAI’s Fidji Simo will lead the super app effort. «When new bets start to work, like we’re seeing now with Codex, it’s very important to double down on them and avoid distractions,» she tweets.

Read more: The Wall Street Journal and CNBC.

Amazon to buy one million Nvidia chips, focusing on inference and Groq

Nvidia’s newly released Groq 3 LPX servers are already in demand. (Picture: Amazon)
Nvidia executive Ian Buck confirms to Reuters that the company will sell the chips to Amazon starting this year, with deliveries closing in 2027.

The main focus of the deal is inference workloads: the process of producing completions and answers from an AI query, which is growing apace with AI’s general expansion.

«Inference is hard. It’s wickedly hard,» Buck told Reuters. «To be the best at inference, it is not a one chip pony. We actually use all seven chips.»

Amazon is betting on a broad mix of chips, Reuters reports, and says in its press release that it is buying Blackwell and Vera Rubin chips.

From what Reuters understands, Amazon will also buy a number of the newly released Groq 3 LPX servers, which are optimized for inference and can process 700 million tokens per second.

Read more: Reuters report, Amazon press release.

Codex grows to 2 million weekly users, acquires Python tool maker Astral

With the popular developers joining, Codex moves in closer on the software stack. (Picture: Shutterstock)
Announcing that Codex has seen a 3x increase in users and 5x more actual usage this year, reaching 2 million weekly active users, OpenAI says it is buying Python developer tool company Astral.

Some of the most beloved and, importantly, most used Python developer tools come from Astral, and they will now be supported by OpenAI.

The deal for roughly 32 employees will strengthen Codex by integrating the tools that have «hundreds of millions of downloads per month,» according to Astral themselves.

OpenAI will continue to maintain the open source projects, and with access to them and the engineers’ know-how, Codex’s AI agents will be able to work more closely with the tools.

Read more: OpenAI’s announcement, Astral’s announcement, and CNBC.

«Vibe design» by Gemini — Google updates Stitch for the AI age

Design help from Google? If it floats your boat. (Picture: Google)
Promising to let «anyone» create layouts with natural language prompts and turn them into «high-fidelity UI designs,» Stitch is supposed to let you «vibe design» your projects.

It is intended to let you «explore ideas quickly» with a «high quality outcome.»

The app can take input from text, images, or code, and provides you with an entire design language that you can pick and choose from, with an «infinite» canvas storing your ideas.

It should be equally good at designing for the web and for apps, but the results come out somewhat boilerplate and generic.

I tried to get it to brainstorm a little about improving the design of this webpage, and the results were terrible, but it might be worth it for other projects.

The improved Stitch is available at stitch.withgoogle.com and can be accessed for free anywhere Gemini is available.

Read more: Google’s introduction, launch tweet.

OpenAI upgrades GPT-5.3-instant to be «less clickbait-y» in its responses

“If you want, I can also explain…”-clickbait should largely be gone from the model after the latest update. (Picture: generated)
GPT-5.3-instant is the model most people on Plus and Pro subscriptions encounter daily. It was supposed to be «less cringe» and offer «fewer lectures.»

But many users had noticed that its answers to simple queries had become filled with follow-up teasers: «one strange trick,» «would you like me to tell you three things that…» and «You’ll never believe…»

These teaser-style responses were not just annoying but sometimes frustrating, as if the bot had been optimized for engagement and tried to keep the conversation going after already answering the query.

The good news is that as of March 16, 2026, OpenAI has upgraded the model to show less of this slop, and users should already be noticing an improvement in «follow-up tone.»

Read more: OpenAI’s update page, Android Headlines.

Anthropic surveys 81,000 people in 159 countries about their thoughts on AI

Most respondents hail AI for the learning experience, but some worry about agency and thinking less. (Picture: Anthropic)
Capturing a wide sentiment across the world, the survey also breaks down what people expect, and their fears and hopes on AI.

«For the first time, AI has enabled us to collect rich, open-ended interviews at extraordinary scale,» Anthropic writes. «We believe this is the largest and most multilingual quantitative survey ever conducted.»

It finds that the USA is most worried about the future with AI, while Brazil, India and most of Southeast Asia are generally positive toward it.

As for what people expect and hope for from AI, the results are varied, but the top answers are «Professional excellence» (18.8%), «Personal transformation» (13.7%), and «Life management» (13.5%).

The responses on whether AI has actually delivered on any of those aspirations fall short, though: 32% say it helped with productivity, while the second-largest group, at 28.9%, says that «AI hasn’t delivered.»

The survey found that, globally, 67% of respondents have a positive view of AI.

Read more: the full survey on Anthropic.

ChatGPT’s «adult mode» hotly debated at OpenAI, will be smutty, but not porn

There are several roadblocks for Adult Mode, should it ever come to pass. (Picture: generated)
According to The Wall Street Journal, the upcoming «adult mode» for ChatGPT is hitting some internal snags.

Touted by CEO Sam Altman as letting «adults be adults» in October 2025, it was later delayed and then deprioritized last week.

It now seems the company’s internal advisory board is against going forward with the feature, saying it could foster «unhealthy emotional dependence,» Mashable writes.

Also holding back the launch is the fact that ChatGPT’s age checks aren’t very good, with a rather large 12% error rate in identifying kids and teens, The Verge reports.

Some 100 million under-18s use ChatGPT every week, which would mean around 12 million of them could be misclassified as adults and exposed to «sexualized conversations.»
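The arithmetic behind that figure is a straight multiplication; as a quick sanity check (variable names are ours):

```python
weekly_under18_users = 100_000_000   # under-18s reportedly using ChatGPT weekly
age_check_error_rate = 0.12          # error rate The Verge reports for age checks

# round() avoids float truncation: 100_000_000 * 0.12 is not exact in binary
misclassified_as_adults = round(weekly_under18_users * age_check_error_rate)
print(misclassified_as_adults)  # 12000000
```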

The feature is currently postponed due to «other priorities,» but it is said to skip images, voice, and video in favor of pure text, and will supposedly be «smutty,» not «pornographic,» The Verge says.

Read more: The Wall Street Journal (paywalled), Mashable and The Verge.