Arc Prize Foundation’s New ARC-AGI-2 Test Poses a Significant Challenge to Leading AI Models
A groundbreaking test by the Arc Prize Foundation evaluates AI’s general intelligence, with most models scoring below 2%.

The Arc Prize Foundation, co-founded by AI researcher François Chollet, has released ARC-AGI-2, a new benchmark designed to measure general intelligence in AI. The test presents visual puzzle-like tasks a model has never seen before, so it can’t succeed by memorizing answers; it has to reason its way to a solution on the fly. So far, the results are humbling: even flagship models from OpenAI and DeepSeek manage scores of barely over 1%, while human test panels average around 60%. It’s a pointed reality check for artificial intelligence.
But here’s the kicker: ARC-AGI-2 isn’t just about whether an AI can solve the puzzles. It also measures how efficiently the AI solves them, so models can no longer brute-force their way through by burning unlimited compute on every task. With the accompanying Arc Prize 2025 contest, the foundation is challenging developers to reach 85% accuracy while spending no more than about $0.42 per task. It’s not just a competition; it’s a wake-up call for leaner, smarter AI development. Game on.
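
To make that efficiency bar concrete, here is a minimal sketch, in Python, of how a submission run might be checked against the contest’s two stated targets of 85% accuracy and roughly $0.42 per task. The `TaskResult` record and `meets_arc_prize_2025_targets` function are illustrative assumptions, not the Arc Prize Foundation’s actual scoring harness:

```python
from dataclasses import dataclass

# Hypothetical per-task result record; the real Arc Prize harness may differ.
@dataclass
class TaskResult:
    solved: bool      # did the model produce the correct output?
    cost_usd: float   # compute cost of attempting this task, in dollars

def meets_arc_prize_2025_targets(results: list[TaskResult],
                                 accuracy_target: float = 0.85,
                                 cost_target_usd: float = 0.42) -> bool:
    """Check a run against the contest's stated goals:
    at least 85% of tasks solved, at an average cost of roughly $0.42 per task or less."""
    if not results:
        return False
    accuracy = sum(r.solved for r in results) / len(results)
    avg_cost = sum(r.cost_usd for r in results) / len(results)
    return accuracy >= accuracy_target and avg_cost <= cost_target_usd

# Example: a run that solves 9 of 10 tasks at an average cost of $0.30 qualifies.
run = [TaskResult(solved=(i != 0), cost_usd=0.30) for i in range(10)]
print(meets_arc_prize_2025_targets(run))  # True
```

The point of pairing the accuracy check with an average-cost check is that a system can’t qualify simply by spending its way to a higher score.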