Tech

A new, challenging AGI test stumps most AI models

Last updated: March 24, 2025 8:29 pm
OnlyTrustedInfo.com

The Arc Prize Foundation, a nonprofit co-founded by prominent AI researcher François Chollet, announced in a blog post on Monday that it has created a new, challenging test to measure the general intelligence of leading AI models.

So far, the new test, called ARC-AGI-2, has stumped most models.


“Reasoning” AI models like OpenAI’s o1-pro and DeepSeek’s R1 score between 1% and 1.3% on ARC-AGI-2, according to the Arc Prize leaderboard. Powerful non-reasoning models including GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Flash score around 1%.

The ARC-AGI tests consist of puzzle-like problems where an AI has to identify visual patterns from a collection of different-colored squares, and generate the correct “answer” grid. The problems were designed to force an AI to adapt to new problems it hasn’t seen before.
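The grid-puzzle setup described above can be sketched in a few lines of Python. ARC-AGI tasks are distributed as JSON-style structures in which each grid is a 2-D array of integers 0–9 (one integer per color), with a handful of "train" input/output pairs and a held-out "test" input. The toy rule below (mirror each row left to right) is an invented example for illustration, not an actual ARC-AGI-2 puzzle:

```python
# A toy task in the ARC-AGI structure: grids are 2-D lists of integers
# 0-9, each integer standing for one color. The transformation rule here
# (mirror the grid left-to-right) is a made-up example, far simpler than
# real ARC-AGI-2 problems.
toy_task = {
    "train": [
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[3, 0, 0], [0, 4, 0]], "output": [[0, 0, 3], [0, 4, 0]]},
    ],
    "test": [{"input": [[5, 0], [0, 6]]}],
}

def mirror_lr(grid):
    """Hypothetical candidate rule: flip each row left to right."""
    return [row[::-1] for row in grid]

# A solver must infer the rule from the train pairs alone...
assert all(mirror_lr(p["input"]) == p["output"] for p in toy_task["train"])

# ...then generate the answer grid for the unseen test input.
answer = mirror_lr(toy_task["test"][0]["input"])
print(answer)  # [[0, 5], [6, 0]]
```

Because each task uses a different hidden rule, a model cannot memorize answers; it has to infer the transformation from the few training pairs in front of it.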


The Arc Prize Foundation had over 400 people take ARC-AGI-2 to establish a human baseline. On average, “panels” of these people got 60% of the test’s questions right — much better than any of the models’ scores.

A sample question from ARC-AGI-2 (credit: Arc Prize).

In a post on X, Chollet claimed ARC-AGI-2 is a better measure of an AI model’s actual intelligence than the first iteration of the test, ARC-AGI-1. The Arc Prize Foundation’s tests are aimed at evaluating whether an AI system can efficiently acquire new skills outside the data it was trained on.

Chollet said that unlike ARC-AGI-1, the new test prevents AI models from relying on “brute force” — extensive computing power — to find solutions. Chollet previously acknowledged this was a major flaw of ARC-AGI-1.

To address the first test’s flaws, ARC-AGI-2 introduces a new metric: efficiency. It also requires models to interpret patterns on the fly instead of relying on memorization.

“Intelligence is not solely defined by the ability to solve problems or achieve high scores,” Arc Prize Foundation co-founder Greg Kamradt wrote in a blog post. “The efficiency with which those capabilities are acquired and deployed is a crucial, defining component. The core question being asked is not just, ‘Can AI acquire [the] skill to solve a task?’ but also, ‘At what efficiency or cost?’”


ARC-AGI-1 was unbeaten for roughly five years until December 2024, when OpenAI released its advanced reasoning model, o3, which outperformed all other AI models and matched human performance on the evaluation. However, as we noted at the time, o3’s performance gains on ARC-AGI-1 came with a hefty price tag.

The version of OpenAI's o3 model, o3 (low), that was first to reach new heights on ARC-AGI-1, scoring 75.7% on the test, managed a measly 4% on ARC-AGI-2 while using $200 worth of computing power per task.

Comparison of frontier AI model performance on ARC-AGI-1 and ARC-AGI-2 (credit: Arc Prize).

The arrival of ARC-AGI-2 comes as many in the tech industry are calling for new, unsaturated benchmarks to measure AI progress. Hugging Face co-founder Thomas Wolf recently told TechCrunch that the AI industry lacks sufficient tests to measure the key traits of so-called artificial general intelligence, including creativity.

Alongside the new benchmark, the Arc Prize Foundation announced a new Arc Prize 2025 contest, challenging developers to reach 85% accuracy on ARC-AGI-2 while spending only $0.42 per task.
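The contest's twin targets make the efficiency framing concrete: a submission has to clear both an accuracy bar and a cost-per-task bar at once. The sketch below is an illustration of that bookkeeping using the figures reported in this article; the scoring function is an assumption for demonstration, not the Arc Prize Foundation's official methodology:

```python
# Hedged sketch of the accuracy-vs-cost check implied by the Arc Prize
# 2025 targets. Numbers come from the article; the function itself is an
# illustration, not the official evaluation code.

def meets_prize_target(accuracy, cost_per_task,
                       target_accuracy=0.85, max_cost_per_task=0.42):
    """Contest targets per the article: >=85% accuracy at <=$0.42 per task."""
    return accuracy >= target_accuracy and cost_per_task <= max_cost_per_task

# o3 (low), per the article: roughly 4% on ARC-AGI-2 at ~$200 of compute per task.
print(meets_prize_target(0.04, 200.0))  # False: misses on both accuracy and cost
# A hypothetical system exactly at the thresholds:
print(meets_prize_target(0.85, 0.42))   # True
```

Tying the prize to cost as well as accuracy means a model cannot win by brute-forcing solutions with arbitrarily large compute budgets, which is exactly the flaw Chollet says ARC-AGI-2 was designed to close.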
