Tech

Researchers say they’ve discovered a new method of ‘scaling up’ AI, but there’s reason to be skeptical

Last updated: March 19, 2025 12:03 pm
OnlyTrustedInfo.com

Have researchers discovered a new AI “scaling law”? That’s what some buzz on social media suggests — but experts are skeptical.

AI scaling laws, more empirical observations than formal laws, describe how the performance of AI models improves as the size of the datasets and computing resources used to train them increases. Until roughly a year ago, scaling up "pre-training" (training ever-larger models on ever-larger datasets) was the dominant law by far, at least in the sense that most frontier AI labs embraced it.

Pre-training hasn't gone away, but two additional scaling laws, post-training scaling and test-time scaling, have emerged to complement it. Post-training scaling is essentially tuning a model's behavior after its initial training, while test-time scaling entails applying more computing to inference (i.e., running models) to drive a form of "reasoning" (see models like DeepSeek's R1).

Google and UC Berkeley researchers recently proposed in a paper what some commentators online have described as a fourth law: “inference-time search.”

Inference-time search has a model generate many possible answers to a query in parallel and then select the “best” of the bunch. The researchers claim it can boost the performance of a year-old model, like Google’s Gemini 1.5 Pro, to a level that surpasses OpenAI’s o1-preview “reasoning” model on science and math benchmarks.
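The mechanism can be sketched in a few lines. The snippet below is a toy illustration, not the paper's implementation: `sample_answer` is a hypothetical stand-in for one stochastic model completion, and majority voting (self-consistency) stands in for the self-verification step the researchers perform with the model itself.

```python
import random
from collections import Counter

def sample_answer(question, rng):
    # Stand-in for one stochastic model completion: usually correct,
    # occasionally off by one. A real system would call the model here.
    correct = sum(question)
    return correct if rng.random() < 0.7 else correct + rng.choice([-1, 1])

def inference_time_search(question, n_samples=200, seed=0):
    # Generate many candidate answers "in parallel", then select the
    # "best" of the bunch. Here the most frequent answer wins; the paper
    # instead has the model verify its own candidates.
    rng = random.Random(seed)
    candidates = [sample_answer(question, rng) for _ in range(n_samples)]
    best, _count = Counter(candidates).most_common(1)[0]
    return best

print(inference_time_search((2, 3, 5)))  # majority recovers the correct sum, 10
```

Even with a 30% per-sample error rate, 200 draws make the correct answer dominate the pool, which is the intuition behind scaling this axis.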
“[B]y just randomly sampling 200 responses and self-verifying, Gemini 1.5 — an ancient early 2024 model — beats o1-preview and approaches o1,” Eric Zhao, a Google doctoral fellow and one of the paper’s co-authors, wrote in a series of posts on X. “The magic is that self-verification naturally becomes easier at scale! You’d expect that picking out a correct solution becomes harder the larger your pool of solutions is, but the opposite is the case!”

Several experts say that the results aren’t surprising, however, and that inference-time search may not be useful in many scenarios.

Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, told TechCrunch that the approach works best when there’s a good “evaluation function” — in other words, when the best answer to a question can be easily ascertained. But most queries aren’t that cut-and-dried.

“[I]f we can’t write code to define what we want, we can’t use [inference-time] search,” he said. “For something like general language interaction, we can’t do this […] It’s generally not a great approach to actually solving most problems.”
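Guzdial's point can be made concrete. For a query with a checkable answer, the "evaluation function" really is just a few lines of code; the hypothetical root-checking example below verifies candidates by substitution, something no short program can do for open-ended language queries.

```python
def is_root(candidate, a, b, c, tol=1e-9):
    # Easy case: verifying a candidate root of a*x^2 + b*x + c = 0
    # is just plugging it back into the equation.
    return abs(a * candidate**2 + b * candidate + c) < tol

candidates = [1.0, 2.0, 3.0]  # hypothetical model outputs
good = [x for x in candidates if is_root(x, 1, -5, 6)]
print(good)  # [2.0, 3.0] — both roots of x^2 - 5x + 6 verify

# No analogous function exists for "summarize this article well":
# there is no short program that scores open-ended language output.
```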

Mike Cook, a research fellow at King’s College London specializing in AI, agreed with Guzdial’s assessment, adding that it highlights the gap between “reasoning” in the AI sense of the word and our own thinking processes.

“[Inference-time search] doesn’t ‘elevate the reasoning process’ of the model,” Cook said. “[I]t’s just a way of us working around the limitations of a technology prone to making very confidently supported mistakes […] Intuitively if your model makes a mistake 5% of the time, then checking 200 attempts at the same problem should make those mistakes easier to spot.”
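The arithmetic behind Cook's intuition is simple, assuming each attempt errs independently: all N attempts are wrong with probability p^N, so even a handful of samples almost surely puts a correct answer in the pool; the hard part, as he notes, remains picking it out.

```python
def p_at_least_one_correct(p_mistake=0.05, attempts=200):
    # If each attempt independently errs with probability p_mistake,
    # every attempt is wrong with probability p_mistake ** attempts.
    return 1 - p_mistake ** attempts

print(round(p_at_least_one_correct(attempts=3), 6))  # already 0.999875
print(p_at_least_one_correct(attempts=200))          # indistinguishable from 1.0
```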

That inference-time search may have limitations is sure to be unwelcome news to an AI industry looking to scale up model “reasoning” compute-efficiently. As the co-authors of the paper note, reasoning models today can rack up thousands of dollars of computing on a single math problem.

It seems the search for new scaling techniques will continue.
