Phi-4 proves that a 'data-first' SFT methodology is the new differentiator

Editorial Team
21 Min Read



AI engineers often chase performance by scaling up LLM parameters and data, but the trend toward smaller, more efficient, and better-focused models has accelerated.

The Phi-4 fine-tuning methodology is the cleanest public example of a training approach that smaller enterprise teams can copy. It shows how a carefully chosen dataset and fine-tuning strategy can make a 14B model compete with much larger ones.

The Phi-4 reasoning model was trained on just 1.4 million carefully chosen prompt-response pairs. Instead of brute force, the Microsoft Phi-4 research team focused on "teachable" examples at the edge of the model's abilities and on rigorous data curation.

The Phi-4 reasoning playbook demonstrates how strategic data curation with replicable SFT and RL can elevate a 14B model beyond much larger counterparts.

Why Phi-4 stands apart

Smaller reasoning models, such as OpenAI's o1-mini and Google's Gemma, are becoming more common, and models like Alibaba's Qwen3 (8B and 14B) are seeing massive adoption across use cases. That adoption is important, but it doesn't displace the value of Phi-4 as an experimental proof: Phi-4 was designed as a testbed for a data-first training methodology, and its documentation reads like a practical playbook for teams that want to replicate that approach.

The Phi-4 team has shared a repeatable SFT playbook built on a 1.4-million prompt-response set. It's organized around teachable edge examples: questions that are neither too easy nor too difficult, chosen to push the model's reasoning. Each domain, such as math or code, is tuned separately and then combined with synthetic rewrites that turn complex tasks into forms that can be checked automatically.

The paper outlines the data selection and filtering process in enough detail for smaller teams to reproduce it with open-source models and evaluators. For enterprise teams, that level of transparency turns a research result into a practical, copyable training recipe they can implement and measure quickly.

The data-first philosophy: Why less can be more

Traditional approaches to LLM reasoning have often relied on massively scaling datasets to encourage generalization. Phi-4 reasoning takes a different path, showing that carefully curated data can achieve comparable or even better results with far less.

The team assembled a dataset covering STEM, coding, and safety. Despite its small size, the resulting model outperformed models trained on orders of magnitude more data.

In benchmarks, the 14B Phi-4 reasoning model outperformed OpenAI's o1-mini and DeepSeek's 70B distilled model across most reasoning tasks, and approached the full DeepSeek-R1 (671B) on challenging math (AIME) questions.

With just 14 billion parameters, Phi-4 reasoning delivers the following results when compared with other leading models:

| Benchmark (task) | Phi-4 reasoning | Comparison model (size) | Comparison score | Date / Source |
| --- | --- | --- | --- | --- |
| AIME 2024 (math olympiad) | 75.3% | o1-mini | 63.6% | Microsoft Phi-4 model card (Apr 2025), Hugging Face |
| AIME 2025 (math olympiad) | 62.9% | DeepSeek-R1-Distill-70B | 51.5% | Microsoft Phi-4 model card (Apr 2025), Hugging Face |
| OmniMath | 76.6% | DeepSeek-R1-Distill-70B | 63.4% | Microsoft Phi-4 model card (Apr 2025), Hugging Face |
| GPQA-Diamond (graduate-level science) | 65.8% | o1-mini | 60.0% | Microsoft Phi-4 model card (Apr 2025), Hugging Face |
| OmniMath (same benchmark, different comparison) | 76.6% | Claude-3.7-Sonnet | 54.6% | Microsoft Phi-4 model card (Apr 2025), Hugging Face |

Table: Phi-4 reasoning performance across benchmarks compared with other models. Source: Microsoft

The key to this is filtering for quality over quantity. Much generic data is either too easy (the base model already knows it) or too hard (no learning signal). The Phi-4 team explicitly discards such examples. "Given the strong baseline reasoning capabilities of Phi-4, many initial seed questions are already handled competently," they note. "To make further learning impactful, we specifically target seeds situated at the edge of Phi-4's current abilities."

In practice, they rely on LLM-based evaluation. For each candidate question, a strong reference model (like GPT-4) generates an "answer key," and the answers from weaker models are compared against it. If the weaker model disagrees often enough, that signals a teachable gap. Those questions are retained, while trivially solved or entirely unsolvable questions are dropped.

For example, a simple arithmetic problem might be dropped (too easy), and an extremely obscure theorem proof might be dropped (too hard) as well. But a moderately challenging geometry problem that Phi-4 gets wrong is included.
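To make the filtering criterion concrete, here is a minimal sketch of edge-of-ability selection. The pass-rate thresholds, data format, and exact-match grader are illustrative assumptions, not details from the paper:

```python
# Minimal sketch of "teachable example" filtering. Thresholds, data format,
# and the exact-match grader are illustrative assumptions.

def grade(candidate: str, reference: str) -> bool:
    # Placeholder grader; in practice an LLM judge or math-aware checker.
    return candidate.strip() == reference.strip()

def is_teachable(samples: list[str], answer_key: str,
                 min_pass: float = 0.2, max_pass: float = 0.8) -> bool:
    pass_rate = sum(grade(s, answer_key) for s in samples) / len(samples)
    # Drop questions that are too easy (already mastered) or too hard
    # (no learning signal); keep the ones at the edge of ability.
    return min_pass <= pass_rate <= max_pass

seed = [
    {"prompt": "2 + 2 = ?", "answer_key": "4",
     "samples": ["4", "4", "4", "4"]},        # too easy -> dropped
    {"prompt": "Hard geometry problem ...", "answer_key": "12",
     "samples": ["12", "10", "12", "15"]},    # sometimes missed -> kept
]
curated = [q for q in seed if is_teachable(q["samples"], q["answer_key"])]
print([q["prompt"] for q in curated])
```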

This "sweet spot" approach ensures every example forces the model to stretch its reasoning. By focusing on multi-step problems rather than rote recall, they pack maximum learning into 1.4M examples.

As the authors explain, training on these carefully chosen seeds "leads to broad generalization across both reasoning-specific and general-purpose tasks." In effect, Phi-4 reasoning demonstrates that intelligent data selection can outperform brute-force scaling.

Independent domain optimization

Phi-4 reasoning's data is grouped by domain (math, coding, puzzles, safety, etc.). Rather than mixing everything at once, the team tunes each domain's mix separately and then merges them.

This relies on an additive property: optimizing the math data in isolation and the code data in isolation yields mixture weights that, when combined, still give gains in both areas. In practice, they first tuned the math dataset to saturation on math benchmarks, then did the same for code, and finally simply added the code data into the math recipe. The result was improved performance on both math and coding tasks, without retraining from scratch.

This modular approach offers clear practical benefits. A small team can first refine just the math dataset, achieve strong math performance, and then later add the coding data without redoing the math tuning.

However, the Phi-4 authors caution that scaling this strategy to many domains remains an open question. While the approach "worked very well" for their math+code mix, they note, "it is not known whether this strategy can scale to dozens or hundreds of domains," a direction they acknowledge as a valuable area for future research. In short, the additive strategy is effective, but expanding into new domains must be approached carefully, as it may introduce unforeseen interactions.

Despite potential pitfalls, the additive strategy proved effective in Phi-4 reasoning. By treating each domain independently, the team avoided complex joint optimization and narrowed the search space for data mixtures. This approach allows incremental scaling of domains. Teams can begin by tuning the math SFT, then incorporate the code dataset, and later expand to more specialized tasks, all while maintaining prior performance gains.

This is a practical advantage for resource-constrained teams. Instead of requiring a large group of experts to manage a complex, multi-domain dataset, a small team can focus on one data silo at a time.

Synthetic data transformation

Some reasoning problems, such as abstract proofs or creative tasks, are difficult to verify automatically. Yet automated verification (for RL reward shaping) is very useful. Phi-4 reasoning tackled this by transforming hard prompts into easier-to-check forms.

For example, the team rewrote a subset of coding problems as word puzzles or converted some math problems to have concise numeric answers. These "synthetic seed data" preserve the underlying reasoning challenge but make correctness easier to check. Think of it as giving the model a simplified version of the riddle that still teaches the same logic.

This engineering hack enables downstream RL to use clean reward signals on tasks that would otherwise be too open-ended.

Here's an example of synthetic data transformation:

| Raw web data | Synthetic data |
| --- | --- |
| On the sides AB and BC of triangle ABC, points M and N are taken, respectively. It turns out that the perimeter of △AMC is equal to the perimeter of △CNA, and the perimeter of △ANB is equal to the perimeter of △CMB. Prove that △ABC is isosceles. | ABC is a triangle with AB=13 and BC=10. On the sides AB and BC of triangle ABC, points M and N are taken, respectively. It turns out that the perimeter of △AMC is equal to the perimeter of △CNA, and the perimeter of △ANB is equal to the perimeter of △CMB. What is AC? |

Table: Rewriting seed data from the web (left) into verifiable synthetic questions for SFT and RL (right). Source: Microsoft

Note that by assigning numeric values (AB=13, BC=10) and asking "What is AC?", the answer becomes a single number, which can be easily checked for correctness.
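That single-number property is what makes automated reward checking cheap. Below is a minimal sketch of such a verifiable reward function; the assumption that the final answer is the last number in the model's output, and the binary reward scheme, are illustrative choices, not Microsoft's actual grader:

```python
import re

def extract_final_number(model_output: str) -> float | None:
    # Assume the model states its final answer as the last number it writes.
    matches = re.findall(r"-?\d+(?:\.\d+)?", model_output)
    return float(matches[-1]) if matches else None

def verifiable_reward(model_output: str, gold_answer: float) -> float:
    # Clean binary reward signal suitable for RL: 1.0 if correct, else 0.0.
    predicted = extract_final_number(model_output)
    return 1.0 if predicted is not None and predicted == gold_answer else 0.0

# Example: the transformed geometry question now has one checkable number.
print(verifiable_reward("...so the answer is AC = 12.", 12.0))  # 1.0
```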

Other teams have applied similar domain-specific tricks. For example, chemistry LLMs like FutureHouse's ether0 model generate molecules under strict pKa or structural constraints, using crafted reward functions to ensure valid chemistry.

In mathematics, the Kimina-Prover model by Numina translates natural-language theorems into the Lean formal system, so reinforcement learning can verify correct proofs. These examples highlight how synthetic augmentation, when paired with verifiable constraints, can push models to perform well in highly specialized domains.
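For intuition, here is what a formally stated theorem looks like in Lean. Once a statement is expressed this way, a candidate proof either compiles or it doesn't, yielding an unambiguous binary verification signal (this toy example is illustrative and not taken from Kimina-Prover):

```lean
-- Toy example: a formally stated theorem that the Lean checker either
-- accepts (the proof compiles) or rejects, with no ambiguity in between.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```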

In practical terms, engineers should embrace synthetic data but keep it grounded. Heuristics like "convert to numeric answers" or "decompose a proof into checkable steps" can make training safer and more efficient. At the same time, keep a pipeline of real (organic) problems as well, to ensure breadth.

The key is balance. Use synthetic transformations to unlock difficult verification problems, but don't rely on them exclusively. Real-world diversity still matters. Following this approach, the model is guided toward a clearly defined, discrete objective.


Practical implementation for enterprises

AI teams looking to apply Phi-4 reasoning's insights can follow a series of concrete steps to implement the approach effectively.

Identifying the model's edge

Detect your model's "edge" by identifying where the base LLM struggles. One technique is to use its confidence or agreement scores. For example, generate multiple answers per prompt (using a tool like vLLM for fast sampling) and see where consensus breaks. These prompts at the margin of confidence are your teachable examples. By focusing on these low-confidence questions rather than the questions it already gets right, you ensure every new example is worth learning.
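Here is a minimal sketch of that consensus check using vLLM's sampling API. The model choice, last-line answer extraction, and agreement threshold are illustrative assumptions, not a prescribed recipe:

```python
from collections import Counter
from vllm import LLM, SamplingParams

# Illustrative model choice; swap in whichever base model you fine-tune.
llm = LLM(model="microsoft/phi-4")
params = SamplingParams(n=8, temperature=0.8, max_tokens=512)

prompts = ["Solve: ..."]  # your candidate seed questions
edge_prompts = []
for request in llm.generate(prompts, params):
    # Crude answer extraction: take the last line of each sampled completion.
    answers = [(out.text.strip().splitlines() or [""])[-1]
               for out in request.outputs]
    # Share of samples agreeing with the most common answer.
    top_share = Counter(answers).most_common(1)[0][1] / len(answers)
    if top_share < 0.6:  # low consensus: likely at the model's edge
        edge_prompts.append(request.prompt)
```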

Isolating domains for targeted tuning

Tune one domain at a time rather than mixing all data genres upfront. Pick the highest-value domain for your app (math, code, legal, etc.) and craft a small SFT dataset for just that. Iterate on the mixture (balancing difficulty, source types, etc.) until performance saturates on domain-specific benchmarks. Then freeze that mix and add the next domain. This modular tuning follows Phi-4 reasoning's "additive" strategy. It avoids cross-talk since you preserve gains in domain A even as you improve domain B.
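As a rough illustration of the additive recipe, the sketch below tunes two domain mixtures independently and then concatenates them. The dataset names and sampling weights are placeholders, not Phi-4's actual mixture:

```python
# Phase A: tune the math mixture alone until math benchmarks saturate.
math_mix = {
    "math_olympiad_curated": 0.6,   # placeholder names; relative sampling
    "math_word_problems":    0.4,   # weights renormalized by the trainer
}

# Phase B: tune the code mixture alone in the same way.
code_mix = {
    "code_contests_curated": 0.7,
    "code_debugging":        0.3,
}

# Additive merge: concatenate the two tuned mixtures without re-searching
# the joint space; each domain keeps the weights found in isolation.
combined_mix = {**math_mix, **code_mix}
print(combined_mix)
```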

Expanding with synthetic augmentation

Leverage synthetic augmentation when gold-standard answers are scarce or unverifiable. For instance, if you need to teach a proof assistant but can't autocheck proofs, transform them into arithmetic puzzles or shorter proofs that can be verified. Use your LLM to rewrite or generate these variants (Phi-4 used this to turn complex word problems into numeric ones).

Synthetic augmentation also lets you grow data cheaply. Once you have a validated small set, you can "multiply" it by having the LLM generate paraphrases, variations, or intermediate reasoning steps.
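One lightweight way to implement such rewrites is a prompt template sent to a strong LLM. The wording and the `llm_call` hook below are hypothetical, not the prompts Microsoft used:

```python
REWRITE_TEMPLATE = """You will be given a math problem that asks for a proof.
Rewrite it as a closely related problem whose answer is a single number,
preserving the same core reasoning steps.

Problem:
{problem}

Return only the rewritten problem."""

def make_verifiable_variant(problem: str, llm_call) -> str:
    # llm_call is any function that sends a prompt to a strong LLM
    # (e.g., an API or local-model client) and returns its text reply.
    return llm_call(REWRITE_TEMPLATE.format(problem=problem))
```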

Scaling through a two-phase strategy

Use a two-phase training strategy that begins with exploration followed by scaling. In Phase 1 (exploration), run short fine-tuning experiments on a focused dataset (e.g., one domain) with limited compute. Track a few key metrics (benchmarks or held-out tasks) each run. Rapidly iterate hyperparameters and data mixes.

The Phi-4 paper demonstrates that this speeds up progress, as small experiments helped the team discover a robust recipe before scaling up. Only once you see consistent gains do you move to Phase 2 (scaling), where you combine your verified recipes across domains and train longer (in Phi-4's case, ~16 billion tokens). Although this stage is more compute-intensive, the risk is significantly reduced by the prior experimentation.

Watch for trigger points such as a significant uplift on validation tasks or stable metric trends. When these appear, it's time to scale. If not, refine the recipe further first. This disciplined two-phase loop saves resources and keeps the team agile.
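The exploration loop can be expressed compactly, as in the sketch below. The `run_sft` and `evaluate` helpers and the uplift trigger are assumptions standing in for your own training and evaluation stack:

```python
def explore(candidate_mixes, run_sft, evaluate, uplift_trigger=0.03):
    """Phase 1: cheap runs over candidate data mixes. Return the best recipe
    only if it clears the trigger, signaling it is worth scaling up."""
    baseline = evaluate(None)  # base model score on held-out benchmarks
    best_mix, best_score = None, baseline
    for mix in candidate_mixes:
        model = run_sft(mix)       # short, limited-compute fine-tuning run
        score = evaluate(model)    # a few key metrics per run
        if score > best_score:
            best_mix, best_score = mix, score
    if best_score - baseline >= uplift_trigger:
        return best_mix  # trigger reached: move to Phase 2 and scale up
    return None          # otherwise, keep refining the recipe first
```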

In practice, many teams at Hugging Face and elsewhere have followed similar advice. For example, while developing the conversational model SmolLM2, the team noticed poor chat performance in Phase 1. They then generated ~500K synthetic multi-turn dialogues and re-trained, which "significantly improved both downstream performance and its overall 'vibes,'" as one researcher reports. This is a concrete win, achieved through a targeted synthetic data injection based on an initial feedback loop.

How to do this now

Here's a simple checklist you can follow to put these ideas into action.

  1. Pick a target domain/task. Choose one area (e.g., math, coding, or a specific application) where you need better performance. This keeps the project focused.

  2. Collect a small seed dataset. Gather, say, a few thousand prompt–answer pairs in that domain from existing sources (textbooks, GitHub, etc.).

  3. Filter for edge-of-ability examples. Use a strong model (e.g., GPT-4) to create an answer key for each prompt. Run your base model on these prompts. Keep examples that the base model often misses; discard ones it already solves or is hopeless on. This yields "teachable" examples.

  4. Fine-tune your model (Phase 1). Run a short SFT job on this curated data. Track performance on a held-out set or benchmark. Iterate: refine the data mix, remove easy questions, add new teachable ones, until gains taper off.

  5. Add synthetic examples if needed. If some concepts lack auto-verifiable answers (like long proofs), create simpler numeric or single-answer variants using your LLM. This provides clean rewards for RL. Keep a balance with real problems.

  6. Expand to the next domain. Once one domain is tuned, "freeze" its dataset. Pick a second high-value domain and repeat steps 3 to 5 to tune that data mix. Finally, merge the data for both domains and do a final, longer training run (Phase 2).

  7. Track benchmarks rigorously. Use a consistent evaluation methodology (like majority-voting runs) to avoid misleading results. Only proceed to full-scale training if small experiments show clear improvements. (See the sketch after this list.)
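Here is a minimal sketch of majority-vote scoring over sampled answers, with made-up data; real pipelines would normalize answers before voting:

```python
from collections import Counter

def majority_vote_score(sampled_answers_per_question, gold_answers):
    """Score each question by its most common sampled answer, which gives a
    more stable benchmark signal than a single greedy generation."""
    correct = 0
    for samples, gold in zip(sampled_answers_per_question, gold_answers):
        majority = Counter(samples).most_common(1)[0][0]
        correct += majority == gold
    return correct / len(gold_answers)

# Example: 2 questions, 5 sampled answers each.
print(majority_vote_score([["12", "12", "10", "12", "9"],
                           ["7", "5", "5", "5", "5"]],
                          ["12", "5"]))  # 1.0
```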

Limits and trade-offs

Despite the effectiveness of the Phi-4 training strategy, several limitations and practical concerns remain. One key challenge is domain scaling. While Phi-4's additive strategy worked well for math and code, it has yet to be proven across many domains. The authors acknowledge that it remains an open question whether this approach can scale smoothly to dozens of topics.

Another concern is the use of synthetic data. Relying too heavily on synthetic rewrites can reduce the diversity of the dataset, so it's essential to maintain a balance between real and synthetic examples to preserve the model's ability to reason effectively.

Finally, while the repeatable SFT strategy helps reduce computational costs, it doesn't eliminate the need for thoughtful curation. Though the approach is more efficient than brute-force scaling, it still requires careful data selection and iteration.

Lessons from Phi-4

The Phi-4 reasoning story is clear: bigger isn't always better for reasoning models. Instead of blindly scaling, the team asked where learning happens and engineered their data to hit that sweet spot. They show that "the benefit of careful data curation for supervised fine-tuning extends to reasoning models." In other words, with a smart curriculum, you can squeeze surprising capability out of modest models.

For engineers, the takeaway is actionable. You don't need a billion-dollar cluster or an endless web crawl to improve reasoning. For resource-strapped teams, this is good news: a careful data strategy lets you punch above your weight.

Phi-4 reasoning proves that systematic data and training design, not sheer parameter count, drives superior reasoning. By focusing on teachable data and iterative tuning, even a 14B model surpassed much larger rivals. For AI teams today, this offers a practical blueprint: refine the data, iterate fast, and scale only when the signals are right. These steps can unlock breakthrough reasoning performance without breaking the bank.
