Anthropic upgrades Claude’s free tier with file handling, connectors and skills

The free tier on Claude is leveling up, getting the most popular paid features. (Picture: Anthropic)
These features, powered by Sonnet 4.5, were previously available only on the paid tiers.

But now, Claude can create and manipulate Office files and PDFs for free.

Connectors are also available, making it possible to link Claude to Slack, Canva and other services.

Anthropic is also making Skills free. These are saved prompts and workflows that act as templates, ready to be invoked at any time for repetitive tasks.

Finally, «Compaction» is becoming available on the free tier, which «summarizes earlier context automatically, so long conversations can continue without starting over.»
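Anthropic hasn’t published how Compaction works internally, but the general idea of context compaction can be sketched in a few lines. In this toy version, `count_tokens` and `summarize` are hypothetical stand-ins (a real system would use an actual tokenizer and a model-generated summary):

```python
# Toy sketch of context compaction: once a conversation grows past a
# token budget, older messages are collapsed into a single summary
# message so the chat can continue without starting over.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def summarize(messages: list[str]) -> str:
    # Stand-in for a model-generated summary; here we just keep the
    # first few words of each old message.
    snippets = [" ".join(m.split()[:3]) for m in messages]
    return "[Summary of earlier conversation: " + "; ".join(snippets) + "]"

def compact(messages: list[str], budget: int, keep_recent: int = 2) -> list[str]:
    total = sum(count_tokens(m) for m in messages)
    if total <= budget or len(messages) <= keep_recent:
        return messages  # under budget, nothing to do
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Replace everything except the most recent messages with one summary.
    return [summarize(old)] + recent

history = [
    "please review this long report about quarterly sales figures",
    "the report shows growth in three of four regions",
    "what should we focus on next quarter",
    "let us draft an action plan together",
]
compacted = compact(history, budget=10)
print(len(compacted))  # four messages reduced to a summary plus two recent ones
```

A production implementation would also have to decide what must never be summarized away (system prompts, tool results in flight), which is where the real complexity lives.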

Together, these comprise «Claude’s most-used features,» Anthropic says.

Read more: Launch thread, writeup on Engadget.

Nvidia standardizing on GPT-5.3-Codex internally for ~30k engineers

OpenAI just scored a big win for its coding platform. (Picture: generated)
OpenAI CEO Sam Altman has been touting the latest Codex coding model all week, praising the team and how excited everyone is. Adoption is growing too, and now Nvidia is backing it with a company-wide rollout:

Codex is now rolling out to all engineers at the company, in close cooperation with the OpenAI team who built in «cloud-managed admin controls» and fail-safe processing.

OpenAI even helped onboard the Nvidia team, saying «It’s shocking how quickly they’ve adopted Codex» and that Nvidia moves like a giant startup.

GPT-5.3-Codex was only launched last week, and is perceived as a possible profit engine, competing with Claude Code for enterprise customers.

ByteDance is developing in-house AI chips, to be manufactured by Samsung

Nvidia chips are available in China, but users need permission to buy them. (Picture: Adobe)
Not much is known about the AI inference chips, or how they compare to Nvidia’s offerings, but ByteDance is going to be making about 100,000 of them «this year,» and then scale up to 350,000 units, according to Reuters.

ByteDance has been known to work with US chip producer Broadcom, and started seriously hiring chip specialists in 2022.

The new chips are set to be produced by Samsung in a deal that also includes memory chips, sweetening the arrangement.

Development is far enough along that Reuters’ sources say engineering samples are due by late March, the last step before mass production.

A spokesperson for the company did not deny the report outright, but called the information «inaccurate,» Reuters writes.

Most US frontier labs are developing their own chips, as are Alibaba and Baidu.

Read the scoop at Reuters.

Anthropic’s Cowork app comes to Windows, available to all paid tiers

The Cowork agent app was previously only available as a research preview on the Mac, but it’s now out on Windows for all paid tiers.

The agent lets you create summaries and manipulate files on your computer, and can access Slack, Google Calendar and Office files.

Read more about it here.

EU investigating WhatsApp AI ban, considering «interim actions»

The EU might decide that WhatsApp has to open for competing AI bots sooner rather than later. (Picture: European Commission)
The European Commission said yesterday that it had notified Meta of possible action to open up WhatsApp to rival AI chatbots.

Meta banned all AI chatbots other than Meta AI from WhatsApp on January 15th. While EU antitrust investigations can take a long time, the Commission is considering an early order to «avoid Meta’s new policy irreparably harming competition in Europe,» says Teresa Ribera, the EU’s Executive Vice-President for Clean, Just and Competitive Transition.

WhatsApp has over 3 billion users worldwide and qualifies as a gatekeeper in EU parlance, subject to rules on equal access.

Meta says that «There are many AI options and people can use them from app stores, operating systems, devices, websites, and industry partnerships,» in a statement to Reuters.

The process following this formal notification is that the parties can examine the EU’s files, reply in writing and then receive a hearing. After that, the Commission will consider «interim measures,» such as restoring access for competitors, even as the main case proceeds.

Read more: Statement by the EC, writeup on Reuters.

New «Chat» model coming this week, Altman says in memo to staff

After enjoying great success with GPT-5.3 on Codex, it seems to be ChatGPT’s turn this week. (Picture: generated)
While touting a return to more than 10% monthly growth for ChatGPT and «insane» Codex growth, Sam Altman also said they are preparing to launch «an updated Chat model» this week, writes CNBC.

That would likely be GPT-5.3, which debuted last week in Codex. It’s a blazingly fast, more capable successor to 5.2, and is said to be one of the first models used in building itself.

The new Codex model has been a great success, fueling 60% growth in overall usage in the past week alone.

That model is also available to Free and Go users for a limited time, and that access is now being extended, though possibly with reduced limits, Altman says:

— We want everyone to be able to try Codex and start building.

Read the full scoop on CNBC

Ads are now live on Free and Go tiers of ChatGPT in the USA

Ads are supposed to finance giving free users the latest tech and the most messages, OpenAI says. (Picture: OpenAI)
The ads will be clearly labeled, separated from ChatGPT responses, and won’t influence what the chatbot says. Chats will be «kept private» from advertisers, OpenAI says.

— Our goal is for ads to support broader access to more powerful ChatGPT features while maintaining the trust people place in ChatGPT for important and personal tasks, they write.

Paid tiers other than Go won’t see any ads at all. The stated rationale is that ad revenue lets the ad-supported tiers get more queries and responses while staying fast and responsive.

Users can turn off ads in exchange for lower message limits, and under-18s won’t be shown ads at all. Ads will also be disabled for «sensitive topics» such as health, mental health and politics.

The test is limited to the U.S. market for now, and the plan is for ads to eventually make up a little less than half of OpenAI’s income, CNBC reports.

Read more: OpenAI’s announcement. Writeups on Engadget, The Verge, and Gizmodo.

Anthropic unlocks fast mode for Claude Code, 2.5x faster at 6x the price

For «work like rapid iteration or live debugging,» Anthropic is letting users go fast by toggling «/fast» in the console.

The feature is available in a «limited research preview» in the API for paid users with «extra usage» enabled, and there is a waiting list to get on it.

It’s the same model with the same capabilities; nothing changes other than the speed, and the cost.

The price for fast mode is $30 per million input tokens and $150 per million output tokens, with a 50% discount until February 16.
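At those rates, the cost of a fast-mode session is easy to estimate. A quick sketch using the listed prices (the token counts below are made-up examples, not real usage figures):

```python
# Fast-mode list prices per million tokens, per the announcement.
INPUT_PER_M = 30.0    # USD per 1M input tokens
OUTPUT_PER_M = 150.0  # USD per 1M output tokens
DISCOUNT = 0.5        # 50% off until February 16

def fast_mode_cost(input_tokens: int, output_tokens: int,
                   discounted: bool = False) -> float:
    """Estimate the USD cost of a fast-mode session."""
    cost = (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M
    return cost * (DISCOUNT if discounted else 1.0)

# Example: a session with 2M input tokens and 500k output tokens.
full = fast_mode_cost(2_000_000, 500_000)         # 2*30 + 0.5*150 = 135.0
promo = fast_mode_cost(2_000_000, 500_000, True)  # 67.5 during the promo
print(full, promo)
```

Output-heavy workloads dominate the bill: at these prices, one output token costs as much as five input tokens.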

Read more: Details on Claude, launch post.

OpenAI releases GPT-5.3-Codex, faster and more capable

The new coding model is 25% faster, letting it complete long-running tasks in a shorter time frame.

It’s the first OpenAI model built with its own help. The team used early versions of it to debug, manage deployments and diagnose test results, and says it was impressed with its capabilities.

Continue reading “OpenAI releases GPT-5.3-Codex, faster and more capable”

Anthropic upgrades Claude Opus to 4.6

Opus 4.6 should outperform most other frontier models as of now. (Picture: Anthropic)
It’s a point release, but Claude just got a whole lot more capable, and now has a 1 million token context window.

It should be better at everyday tasks, and along with upgrades to Claude in Excel, Anthropic is launching Claude in PowerPoint in beta with this release.

It also supports «agent teams,» letting you «spin up multiple agents that work in parallel as a team that coordinates autonomously.»
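Anthropic hasn’t detailed how agent teams are implemented, but the fan-out/fan-in pattern the description suggests can be sketched with a thread pool. Here `run_agent` is a hypothetical stand-in for a real model call:

```python
# Toy sketch of the "agent team" pattern: a coordinator splits work
# into subtasks, runs one agent per subtask concurrently, then
# collects the results in order.
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Stand-in for an LLM agent working on one subtask.
    return f"result for {task!r}"

def run_team(tasks: list[str]) -> list[str]:
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        # pool.map preserves task order even though agents run concurrently.
        return list(pool.map(run_agent, tasks))

results = run_team(["research", "draft", "review"])
print(results)
```

The hard part a real system adds on top is the «coordinates autonomously» piece: agents sharing intermediate findings and re-planning, rather than just returning independent results.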

Opus 4.6 was also built by Claude, in what seems to have become an industry standard to use their own coding tools for new models. GPT-5.3-Codex was built in a similar manner.

As for benchmarks, it beats most frontier models on almost every one: 65.4% on the coding benchmark Terminal-Bench 2.0, 68.8% on the difficult ARC-AGI-2, and 53% on Humanity’s Last Exam for general reasoning.

Also new with this model is «Adaptive thinking,» which lets Claude decide for itself when to use deeper reasoning, plus user-set «Effort» levels for each query, which can save tokens.

Read more: Anthropic’s introduction, TechCrunch, CNBC. Discussion on r/Singularity.

Alexa+ exits beta and is now available for free to U.S. Amazon Prime users

Alexa+ is a powerful, Anthropic-based home assistant. (Picture: Amazon)
Launched in beta in March 2025, the Alexa+ generative AI model is a huge upgrade to the older «plain» Alexa assistant.

It can handle multiple complex requests and act like an agent, ordering Ubers, reserving restaurant tables or booking concert tickets. It also handles home automation tasks.

80% of American households have Amazon Prime, amounting to some 180 million users, and 70 million people have some kind of Echo device with Alexa on it.

That is a huge user base to start from for a semi-new agentic assistant, which is partially powered by Anthropic’s models.

The Alexa+ assistant can also be accessed through an app, or at Alexa.com, and non-Prime users can pay $20 a month for access.

Read more: Amazon’s announcement, writeups on The Verge and CNBC.

Alphabet set to double AI spending as Google owner hits record revenue

AI spending increases twofold at Alphabet this year. (Picture: generated)
The Google owner is set to join Amazon and Meta in spending more than $100 billion on AI this year, as its 2025 revenue tops $400 billion.

The headline capex figure of $175 to $185 billion compares with $91 billion spent in 2025; cloud VP Amin Vahdat has said the company needs to double capacity every six months.

For 2026, Meta plans to spend $135 billion, Microsoft expects a decrease from last quarter’s $37 billion, and Amazon clocks in at $146 billion, according to CNBC.

Combined, Big Tech looks set to cross $500 billion in AI spending this year, Reuters reports.

As for Google’s AI push, it seems to be on the rise, with 8 million enterprise subscriptions sold in 2025 and 750 million monthly active users, up from 650 million last quarter.

Read more: Alphabet’s numbers, writeups at CNBC, Reuters, TechCrunch.

OpenAI hires Head of Preparedness after very public job listing

Dylan Scandinaro’s profile picture on x.com. (Picture: screenshot)
Anthropic safety engineer Dylan Scandinaro has agreed to join OpenAI in a crucial role: ensuring the company can keep growing while mitigating risks.

The job post went viral almost instantly in December, with CEO Sam Altman warning of biological and hacking risks and saying things were moving so fast they urgently needed someone for the «stressful job,» ready to «jump into the deep end pretty much immediately.»

On the hiring, Altman says he has found the best candidate for the job and is «extremely excited» to welcome Scandinaro, who sees great benefits ahead but also warns of «irrecoverable harm» if things are not handled correctly.

— Things are about to move quite fast and we will be working with extremely powerful models soon. This will require commensurate safeguards to ensure we can continue to deliver tremendous benefits, Altman writes.

Read more: The Verge, Bloomberg.

Apple’s Xcode gets full agentic coding support for Claude and Codex

The new Xcode frees the developer from coding, so you can focus on innovation. (Picture: Apple)
Xcode 26.3 brings full support for Claude and Codex, from idea iteration to file creation and structural editing.

Agents can also verify their work, and «collaborate throughout the entire development cycle,» Apple says.

They can search documentation, update project settings and «explore file structures.» Users can also track what the agents are doing in the sidebar and steer their progress.

It only takes a single click to switch between models and pick the one best suited for the task, MacRumors writes.

Although Apple has worked exclusively with Anthropic and OpenAI to implement the features, any agent can be used, so long as it supports the Model Context Protocol.

Read more: Apple’s release, writeups on MacRumors, Engadget.