Nous Research, the San Francisco-based artificial intelligence startup, on Tuesday released an open-source mathematical reasoning system called Nomos 1 that achieved near-elite human performance on this year's William Lowell Putnam Mathematical Competition, one of the most prestigious and notoriously difficult undergraduate math contests in the world.
The Putnam is known for its difficulty: while a perfect score is 120, this year's top score was 90, and the median was just 2. Nomos 1, by contrast, scored 87 points, a result that would have ranked second out of 3,988 participants in the 2024 competition, according to the company.
The release marks an inflection point in the rapidly accelerating race to build AI systems capable of sophisticated mathematical reasoning. Unlike the massive, compute-intensive models deployed by major technology companies, Nomos 1 achieves its results with a comparatively compact architecture: 30 billion parameters with roughly 3 billion active at any given time, using a mixture-of-experts design based on Alibaba's Qwen3 model.
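The reason only about 3 billion of the 30 billion parameters are active per token is the mixture-of-experts routing: a small gating network selects a handful of expert sub-networks for each input, and only those run. The following is a minimal NumPy sketch of top-k expert routing under stated assumptions; it is illustrative only, and all names, shapes, and the choice of k are hypothetical rather than details of Nomos 1's actual architecture.

```python
import numpy as np

def moe_forward(x, experts, gate_weights, k=2):
    """Illustrative top-k mixture-of-experts layer: only k of the
    len(experts) expert networks run for a given token, so the active
    parameter count is a small fraction of the total."""
    scores = x @ gate_weights                # router logits, one per expert
    top_k = np.argsort(scores)[-k:]          # indices of the k highest-scoring experts
    probs = np.exp(scores[top_k])
    probs /= probs.sum()                     # softmax over the selected experts only
    # Weighted sum of only the selected experts' outputs.
    return sum(p * experts[i](x) for p, i in zip(probs, top_k))

rng = np.random.default_rng(0)
dim, n_experts = 8, 16
# Each "expert" here is just a random linear map standing in for a feed-forward block.
expert_mats = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]
experts = [lambda x, W=W: x @ W for W in expert_mats]
gate = rng.normal(size=(dim, n_experts))

y = moe_forward(rng.normal(size=dim), experts, gate, k=2)
print(y.shape)  # (8,) -- output dimension unchanged; only 2 of 16 experts ran
```

In this toy setup, 14 of the 16 expert matrices are never touched for a given token, which is the mechanism behind "30 billion total, 3 billion active."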
"This score would rank #2/3988 in 2024 and marks our first step with Hillclimb AI towards creating a SOTA AI mathematician," Nous Research announced on social media Tuesday.
The same base model scored 24 points without Nous Research's specialized training
Perhaps most striking is the gap between Nomos 1 and its base model. When Nous Research ran the same Qwen3-30B-A3B-Thinking-2507 model through an identical testing harness, it scored just 24 out of 120, a result that underscores the critical importance of post-training optimization and specialized reasoning techniques over raw model scale.
"Nomos 1 achieved an 87/120 with 8 perfect scores," the company stated, noting that the performance difference "is largely due to post-training and data quality rather than the harness."
The results were verified through blind grading by a human expert who had previously finished in the top 200 on the Putnam. Nous Research provided the anonymized submissions to the grader, then published the full set of de-anonymized records and the runbooks used to generate them on GitHub.
Why the Putnam competition is considered the ultimate test of mathematical reasoning
The William Lowell Putnam Mathematical Competition is an annual mathematics competition for undergraduate students enrolled at institutions of higher learning in the United States and Canada. It is widely considered the most prestigious university-level mathematics competition in the world.
The notoriously brutal Putnam is more of a mathematical sporting event than an academic test. The exam consists of two three-hour sessions separated by a two-hour break. There are 12 questions in total, six per session, and each question is worth 10 points, for a total of 120 points.
Putnam questions are not the kind that appear on regular exams or in textbooks. They are more like puzzles than calculations, often requiring students to find alternative ways to represent a problem before a solution can unfold.
Last year, nearly 4,000 students across the continent wrote the Putnam. Sixty-one percent scored three points or fewer, according to the Mathematical Association of America, which organizes the competition. The top score was 90 out of 120.
Many Putnam Fellows have gone on to become distinguished researchers in mathematics and other fields, including three Fields Medalists (John Milnor, David Mumford, and Daniel Quillen) and two Nobel laureates in physics (Richard Feynman and Kenneth Wilson).
Inside the two-phase reasoning system that powers Nomos 1's mathematical breakthroughs
Nomos 1 is a specialization of Qwen's Qwen3-30B-A3B-Thinking model, optimized for mathematical problem-solving and proof-writing in natural language. The system was developed in collaboration with Hillclimb AI.
What distinguishes Nomos 1 from simple model inference is its sophisticated reasoning harness, an open-source framework that orchestrates how the model approaches and solves problems. The harness operates in two distinct phases within a three-hour time limit, mirroring the actual Putnam competition structure.
In the solving phase, parallel workers simultaneously tackle problems using a priority-based system. Each worker picks a problem, generates a submission, then scores its own work on a scale of 1 to 7. Problems with the fewest perfect scores receive priority, ensuring the system focuses its compute on the hardest challenges. This process continues until either all problems have achieved a target number of self-critiqued perfect scores or time runs out.
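The solving-phase scheduling loop described above can be sketched as follows. This is a simplified, serial reconstruction under stated assumptions, not Nous Research's published harness code: the model call and the self-critique are replaced by stand-ins, and the target count and iteration budget are arbitrary.

```python
import random

def solving_phase(problems, n_target=3, max_iters=200, seed=0):
    """Sketch of the solving phase: repeatedly pick the problem with the
    fewest self-assessed perfect scores, attempt it, and self-grade the
    attempt on a 1-7 scale (7 = perfect). Stops once every problem has
    n_target perfect scores or the iteration budget runs out."""
    rng = random.Random(seed)
    perfect_counts = {p: 0 for p in problems}
    submissions = {p: [] for p in problems}
    for _ in range(max_iters):
        if all(c >= n_target for c in perfect_counts.values()):
            break
        # Priority rule: the least-solved problem goes first.
        p = min(problems, key=lambda q: perfect_counts[q])
        attempt = f"attempt-{len(submissions[p])}"  # stand-in for a model-generated proof
        self_score = rng.randint(1, 7)              # stand-in for the self-critique step
        submissions[p].append((attempt, self_score))
        if self_score == 7:
            perfect_counts[p] += 1
    return submissions, perfect_counts

subs, counts = solving_phase([f"A{i}" for i in range(1, 7)])
print(counts)
```

In the real harness the workers run in parallel, but the priority rule is the same: compute flows to whichever problem currently has the fewest perfect self-scores.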
The finalization phase begins 15 minutes before the time limit (or at 50% for shorter runs) and employs a two-stage selection process. First, a consolidation step groups submissions by conclusion and attempts to identify the correct group, which is not necessarily the majority group. Then, a pairwise single-elimination tournament determines the final submission for each problem.
"Our open source reasoning system consists of a solving phase, where workers attempt a least-solved problem and self-assess, followed by a finalization phase, which consolidates submissions to choose a final submission for each problem," Nous Research explained.
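The finalization phase's two stages, consolidation by conclusion followed by a single-elimination bracket, can be sketched like this. The judge interface and the toy data are assumptions for illustration; the real harness would use model-based comparisons rather than the stand-in heuristics shown here.

```python
from collections import defaultdict

def finalize(submissions, judge):
    """Sketch of the finalization phase. Stage 1 (consolidation): group
    submissions by their final conclusion and ask the judge which group is
    correct, not simply which is largest. Stage 2: run a single-elimination
    bracket of pairwise matches within that group to pick one submission."""
    groups = defaultdict(list)
    for sub in submissions:
        groups[sub["conclusion"]].append(sub)
    pool = groups[judge.pick_group(list(groups))]
    # Single-elimination: pairwise matches until one submission remains.
    while len(pool) > 1:
        winners = [judge.better(pool[i], pool[i + 1])
                   for i in range(0, len(pool) - 1, 2)]
        if len(pool) % 2:          # odd submission out gets a bye
            winners.append(pool[-1])
        pool = winners
    return pool[0]

class ToyJudge:
    """Stand-in judge: takes the first conclusion group and prefers longer proofs."""
    def pick_group(self, conclusions):
        return conclusions[0]
    def better(self, a, b):
        return a if len(a["proof"]) >= len(b["proof"]) else b

subs = [{"conclusion": "x=3", "proof": "p" * n} for n in (5, 9, 2)]
best = finalize(subs, ToyJudge())
print(len(best["proof"]))  # 9 -- the strongest submission survives the bracket
```

The design point the sketch captures is that correctness, not popularity, picks the group, and the bracket then needs only pairwise judgments rather than a global ranking.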
How Nomos 1 compares to mathematical AI systems from DeepSeek, Google, and OpenAI
The Nomos 1 results arrive amid a flurry of advances in mathematical reasoning AI. DeepSeek's model, DeepSeekMath-V2, scored 118 out of 120 points on questions from the 2024 William Lowell Putnam Mathematical Competition, beating the top human score of 90. The model also performed at the level of gold-medal winners in the International Mathematical Olympiad.
This year, Google's advanced Gemini model operated end-to-end in natural language, producing rigorous mathematical proofs directly from the official problem descriptions, all within the 4.5-hour competition time limit. Google achieved this year's result using an advanced version of Gemini Deep Think.
What makes Nomos 1's achievement notable is not raw performance (it trails DeepSeek's 118/120) but rather its accessibility and efficiency. At 30 billion parameters with only 3 billion active, the model can run on consumer-grade hardware, a stark contrast to the massive compute clusters required by frontier models from OpenAI and Google.
Hermes 4.3 arrived just six days earlier, trained on a decentralized blockchain network
The Nomos 1 announcement follows closely on the heels of Nous Research's December 3 release of Hermes 4.3, a general-purpose language model that marked another significant milestone for the company.
Hermes 4.3, based on ByteDance's Seed-OSS-36B-Base model, is the first production model that Nous Research trained entirely on its Psyche network, a distributed training infrastructure that uses a novel optimizer called DisTrO to coordinate training across nodes spread throughout data centers over the open internet, secured by consensus on the Solana blockchain.
The company trained Hermes 4.3 both through traditional centralized methods and on the Psyche network, specifically to verify that distributed training could match or exceed centralized performance for production workloads. The Psyche-trained version outperformed the centralized version across a suite of downstream tasks, the company reported.
"The training run proved stable throughout, averaging 144k tokens/second spread across 24 Psyche nodes," Nous Research stated. "Using DisTrO's overlapped collective strategy, the entirety of the P2P communications were hidden by the training time, effectively achieving equal throughput to traditional, centralized training."
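The claim that communication was "hidden by the training time" can be illustrated with a toy timing model: if the gradient exchange for one step runs concurrently with the compute of the next step, it only adds wall-clock time when it is slower than the compute it hides behind. The numbers and the schedule below are illustrative assumptions, not measurements from the Psyche network.

```python
def run_schedule(n_steps, t_compute, t_comm, overlap):
    """Toy timing model for overlapped collectives: with overlap enabled,
    step n's gradient exchange runs concurrently with step n+1's compute,
    so each intermediate step costs max(t_compute, t_comm) instead of
    t_compute + t_comm."""
    if not overlap:
        return n_steps * (t_compute + t_comm)
    # First step's compute, then pipelined steps, then the final exchange,
    # which has no following compute to hide behind.
    return t_compute + (n_steps - 1) * max(t_compute, t_comm) + t_comm

serial = run_schedule(100, t_compute=10, t_comm=4, overlap=False)
pipelined = run_schedule(100, t_compute=10, t_comm=4, overlap=True)
print(serial, pipelined)  # 1400 1004 -- communication cost is almost fully hidden
```

With communication cheaper than compute, the overlapped schedule approaches pure-compute throughput, which is the effect the quote describes.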
Hermes 4.3 also achieved state-of-the-art results on RefusalBench, a new benchmark that measures a model's willingness to be helpful across a variety of scenarios commonly restricted by other models. The model answered 74.60% of RefusalBench questions in non-reasoning mode, surpassing its predecessor Hermes 4 70B (59.50%) and outperforming closed models including Grok 4 (51.30%) and Gemini 2.5 Pro (24.23%).
Small models with smart training are closing the gap with trillion-parameter giants
Together, the two releases in a single week signal Nous Research's strategic bet: that smaller, more efficient models with sophisticated post-training techniques and reasoning harnesses can compete with, and in some cases outperform, the massive models developed by better-funded rivals.
For enterprise decision-makers, the implications are significant. Mathematical reasoning capabilities have applications far beyond academic competitions: they are essential for formal verification, theorem proving, scientific modeling, cryptographic analysis, and any domain requiring rigorous logical deduction.
The open-source nature of both releases (Nomos 1 is available under the Apache 2.0 license on Hugging Face, with the full reasoning harness on GitHub) means that organizations can deploy these capabilities on their own infrastructure without relying on API calls to major cloud providers.
"For the first time, anyone can run or access a state-of-the-art AI mathematician," one observer noted on social media. "This lowers the barrier to serious math research, proof verification, modeling complex systems, advanced reasoning work."
The key contributors to Nomos 1 include Roger Jin, who led the training; Jeffrey Quesnelle and Dakota Mahan, who built the infrastructure; Chen Guang, who advised; and Ryan Teknium and Jeffrey Quesnelle, who provided leadership. The model was developed with contributions from Hillclimb AI and a team of math experts including Samuel Kim, Miron Yurkevich, and others.
The race to build AI mathematicians is accelerating faster than anyone predicted
The 86th Putnam Competition took place on Saturday, December 6, 2025, just three days before Nous Research released Nomos 1. The timing underscores how rapidly the field is moving: companies are now releasing mathematical AI systems capable of near-elite human performance within days of the competitions they are designed to solve.
Competition in mathematical AI has intensified dramatically in recent months. In July, an advanced version of Google DeepMind's Gemini model and an experimental reasoning model from OpenAI both achieved gold-medal status at the IMO 2025. DeepSeek's new model matched their performance, solving five out of six problems.
But the resource requirements for these frontier systems remain prohibitive for most organizations. OpenAI's o1-pro is estimated at over 1.8 trillion parameters; Google's Gemini 2.5 Pro likely exceeds 400 billion. Nomos 1, by contrast, achieves competitive results with a fraction of that footprint.
The gap between massive frontier models and efficient open-source alternatives is narrowing. And for organizations that need mathematical reasoning capabilities without the budget for hyperscale compute, that gap may have just closed enough to matter.
As one observer put it on social media: "This marks a significant leap for AI math models that are small enough to run on your laptop."
A laptop that can now outperform nearly 4,000 of the continent's best undergraduate mathematicians.