Microsoft isn't like OpenAI, Google, or Meta, particularly when it comes to large language models. While other tech giants prefer to release a flood of models, almost overwhelming users with choices, Microsoft releases only a few, but those models consistently make it big among developers worldwide. In its latest release, Microsoft has introduced two reasoning models: Phi-4-Reasoning and Phi-4-Reasoning-Plus, both trained on the base Phi-4 model. The two Phi-4-Reasoning models compete with heavyweights like o1, o3-mini, and DeepSeek R1. In this blog, we will dive into the technical details, architecture, training methods, and performance of the Phi-4-Reasoning models.
Let's explore the Phi-4-Reasoning models.
What is Phi-4-Reasoning?
Phi-4 is not new in the LLM world. This small but mighty language model broke the internet when it was released last year. Now, to cater to the growing demand for reasoning models, Microsoft has released the Phi-4-Reasoning models. These are 14B-parameter models that excel at complex reasoning tasks involving mathematics, coding, and STEM questions. Unlike the general-purpose Phi-4 series, Phi-4-Reasoning is specifically optimized for long-chain reasoning, that is, the ability to break complex multi-step problems down systematically into logical steps.
Also Read: Phi-4: Redefining Language Models with Synthetic Data
Phi-4 Reasoning Models
The two reasoning models released by Microsoft are:
- Phi-4-Reasoning: A reasoning model trained using supervised fine-tuning (SFT) on high-quality datasets. This model is preferred for tasks that require faster responses with guided performance constraints.
- Phi-4-Reasoning-Plus: An enhanced reasoning model that has been further trained using reinforcement learning (RL) to improve its performance, but it generates almost 50% more tokens than its counterpart. This increases latency, so the model is recommended for high-accuracy tasks.
Both 14B models currently support only text input, and Microsoft has released them as open-weight models so developers can freely test and fine-tune them based on their needs. Here are some key highlights of the models:
| Details | Phi-4-Reasoning Models |
|---|---|
| Developer | Microsoft Research |
| Model Variants | Phi-4-Reasoning, Phi-4-Reasoning-Plus |
| Base Architecture | Phi-4 (14B parameters), dense decoder-only Transformer |
| Training Method | Supervised fine-tuning on chain-of-thought data; the Plus variant adds reinforcement learning (RLHF) |
| Training Duration | 2.5 days on 32× H100-80G GPUs |
| Training Data | 16B tokens total (~8.3B unique), from synthetic prompts and filtered public-domain data |
| Training Period | January – April 2025 |
| Data Cutoff | March 2025 |
| Input Format | Text input, optimized for chat-style prompts |
| Context Length | 32,000 tokens |
| Output Format | Two sections: a reasoning chain-of-thought block followed by a summarization block |
| Release Date | April 30, 2025 |
Key Features of Phi-4-Reasoning Models
For Phi-4-Reasoning, the team took several innovative steps involving data selection and training methodology, as well as performance evaluation. Some of the key things they did were:
Data-Centric Training
Data curation for training the Phi-4-Reasoning models relied not just on sheer quantity but placed equal emphasis on the quality of the data. The team specifically chose data that sat at the "edge" of the model's capabilities, ensuring the training data was solvable, but not easily.
The main steps involved in building the dataset for the Phi-4 models were:
- Seed Database: The Microsoft team started with publicly available datasets like AIME and GPQA. These datasets contained problems in algebra and geometry involving multi-step reasoning.
- Synthetic Reasoning Chains: To get comprehensive, detailed, step-by-step reasoned responses to these problems, the Microsoft team relied on OpenAI's o3-mini model.
For example, for the question "What is the derivative of sin(x²)?", o3-mini gave the following output:
Step 1: Apply the chain rule: d/dx sin(u) = cos(u) · du/dx.
Step 2: Let u = x² ⇒ du/dx = 2x.
Final Answer: cos(x²) · 2x.
These synthetically generated chains of well-reasoned responses gave a clear blueprint for how a model should structure its own reasoning.
- Selecting "Teachable Moments": The developer team deliberately went for prompts that challenged the base Phi-4 model while remaining solvable. These included problems on which Phi-4 initially showed around 50% accuracy. This approach ensured that the training process avoided "easy" data that merely reinforced existing patterns, and focused instead on "structured reasoning".
The team essentially wanted the Phi-4-Reasoning models to learn the way we humans usually do.
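A minimal sketch of such a "teachable moments" filter, assuming a hypothetical `solve_rate()` helper that measures the base model's empirical accuracy on each prompt (this is not Microsoft's actual pipeline, just an illustration of the selection idea):

```python
def select_teachable_prompts(prompts, solve_rate, low=0.3, high=0.7):
    """Keep prompts the base model solves sometimes but not reliably.

    solve_rate: callable mapping a prompt to the base model's
    empirical accuracy in [0, 1] (hypothetical helper).
    """
    return [p for p in prompts if low <= solve_rate(p) <= high]

# Toy example: pretend accuracies for three prompts.
rates = {"easy": 0.95, "edge": 0.5, "impossible": 0.05}
selected = select_teachable_prompts(rates, rates.get)
print(selected)  # → ['edge']
```

Prompts that are almost always solved (or almost never solved) carry little training signal, so only the "edge" prompt survives the filter.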
Supervised Fine-Tuning (SFT)
Supervised fine-tuning (SFT) is the process of improving a pre-trained language model by training it on carefully chosen input–output pairs with high-quality responses. For the Phi-4-Reasoning models, this meant starting with the base Phi-4 model and then refining it on reasoning-focused tasks. Essentially, Phi-4-Reasoning was trained to learn and follow the step-by-step reasoning patterns seen in o3-mini's responses.
Training Details
- Batch Size: Kept at 32. This small batch size allowed the model to focus on individual examples without being overwhelmed by extra noise.
- Learning Rate: Set to 7e-5, a moderate rate that avoids overshooting the optimal weights during updates.
- Optimizer: A standard AdamW optimizer was used. This deep learning optimizer balances speed and stability.
- Context Length: Extended to 32,768 tokens, double the 16K-token limit of the base Phi-4 model. This allowed the model to handle longer context.
Using SFT during early training allowed the model to internalize the structured reasoning patterns in the curated data before any further refinement.
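For reference, here is how the reported hyperparameters might map onto a Hugging Face `TrainingArguments` call. This is an illustrative sketch, not Microsoft's actual training script; the output directory, gradient-accumulation split, and precision settings are assumptions:

```python
from transformers import TrainingArguments

# Illustrative only: mirrors the reported SFT hyperparameters.
args = TrainingArguments(
    output_dir="phi4-reasoning-sft",   # assumed path
    per_device_train_batch_size=1,     # assumed per-device split
    gradient_accumulation_steps=32,    # effective batch size of 32
    learning_rate=7e-5,
    optim="adamw_torch",               # standard AdamW
    bf16=True,                         # assumed precision
)
```

These arguments would then be passed to a `Trainer` along with the curated chain-of-thought dataset.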
Reinforcement Learning
Reinforcement learning teaches a model to do better through feedback on its generated outputs. The model gets a reward whenever it answers correctly and is penalized whenever it responds incorrectly. RL was used to further train the Phi-4-Reasoning-Plus model, refining its math-solving skills by evaluating responses for both accuracy and structured approach.
How does RL work here?
- Reward Design: The model received +1 for each correct response and -0.5 for an incorrect one. It was also penalized for repetitive phrases like "Let's see.. Let's see..".
- Algorithm: The team used GRPO (Group Relative Policy Optimization), a variant of RL that balances exploration and exploitation.
- Results: Phi-4-Reasoning-Plus achieved 82.5% accuracy on AIME 2025, while Phi-4-Reasoning scored just 71.4%. It also showed improved performance on Omni-MATH and TSP (Traveling Salesman Problem).
RL training allowed the model to refine its steps iteratively and helped reduce hallucinations in the generated outputs.
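A toy sketch of the reward shaping described above. The exact reward terms Microsoft used are not public; the repetition check and its penalty magnitude here are assumptions for illustration:

```python
import re

def reward(answer: str, is_correct: bool) -> float:
    """Toy reward: +1 for a correct answer, -0.5 for an incorrect one,
    with an extra penalty for immediately repeated phrases."""
    r = 1.0 if is_correct else -0.5
    # Penalize a phrase repeated back-to-back, e.g. "Let's see Let's see".
    if re.search(r"\b(\w[\w' ]{2,}?)\W+\1\b", answer):
        r -= 0.25  # assumed penalty magnitude
    return r

print(reward("x = 4", True))                          # → 1.0
print(reward("Let's see Let's see, maybe 5", False))  # → -0.75
```

In GRPO, rewards like this are computed for a group of sampled responses per prompt, and the policy is updated toward the responses that score above the group average.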
Architecture of the Phi-4-Reasoning Models
The core architecture of the Phi-4-Reasoning models is the same as the base Phi-4 model, but some key modifications were made to support reasoning tasks.
- Two placeholder tokens from Phi-4 were repurposed to help the model differentiate between raw input and internal reasoning: `<think>` marks the start of a reasoning block, and `</think>` marks its end.
- The Phi-4-Reasoning models got an extended context window of 32K tokens to handle the longer reasoning chains.
- The models use rotary position embeddings to better track token positions in long sequences, helping them stay coherent.
- The models are trained to work efficiently on consumer hardware, including devices like mobiles, tablets, and desktops.
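Given this output format, separating the reasoning block from the final summary is a simple string operation. A minimal sketch, assuming the `<think>`/`</think>` delimiters appear verbatim in the generated text:

```python
def split_reasoning(text: str):
    """Split a model response into (reasoning, answer).

    Assumes the response wraps its chain of thought in
    <think>...</think>, followed by the final summary.
    """
    start, end = "<think>", "</think>"
    if start in text and end in text:
        inner = text.split(start, 1)[1]
        reasoning, answer = inner.split(end, 1)
        return reasoning.strip(), answer.strip()
    return "", text.strip()

demo = "<think>2x by the chain rule, times cos(x^2).</think> Final Answer: 2x·cos(x²)"
print(split_reasoning(demo)[1])  # → Final Answer: 2x·cos(x²)
```

This kind of post-processing is handy when you only want to show users the summary while logging the full chain of thought.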
Phi-4-Reasoning Models: Benchmark Performance
The Phi-4-Reasoning models were evaluated on various benchmarks to compare their performance against different models across diverse tasks.
- AIME 2025: A benchmark that tests advanced math and reasoning at recent exam difficulty. Phi-4-Reasoning-Plus outperforms most top models like o1 and Claude 3.7 Sonnet, but still trails o3-mini-high.
- Omni-MATH: A benchmark that evaluates diverse math reasoning across topics and levels. Both Phi-4-Reasoning and Phi-4-Reasoning-Plus outperform almost all models, trailing only DeepSeek R1.
- GPQA: A benchmark that tests model performance on graduate-level expert QA reasoning. The two Phi reasoning models lag behind giants like o1, o3-mini-high, and DeepSeek R1.
- SAT: A benchmark that evaluates U.S. high-school-level academic reasoning (a math + verbal mix). Phi-4-Reasoning-Plus stands among the top 3 contenders, with Phi-4-Reasoning following close behind.
- Maze: This benchmark tests navigation and decision-pathfinding reasoning. Here, the Phi-4-Reasoning models lag behind top-tier models like o1 and Claude 3.7 Sonnet.
On other benchmarks like Spatial Map, TSP, and BA Calendar, both Phi-4-Reasoning models perform decently.
Also Read: How to Fine-Tune Phi-4 Locally?
How to Access the Phi-4-Reasoning Models?
The two Phi-4-Reasoning models are available on Hugging Face:
Click the links to visit the Hugging Face page where you can access these models. In the top-right corner of the screen, click "Use This Model", select "Transformers", and copy the following code:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="microsoft/Phi-4-reasoning")
pipe(messages)
```
Since this is a 14B-parameter model, it requires around 40+ GB of VRAM (GPU). You can run these models on either Colab Pro or Runpod. For this blog, we ran the model on Runpod using an A100 GPU.
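That 40+ GB figure follows from a quick back-of-the-envelope calculation: weights in bf16 take 2 bytes per parameter, and the activations and KV cache add further overhead on top (the ~45% overhead factor below is an assumption, not a measured value):

```python
params = 14e9            # 14B parameters
bytes_per_param = 2      # bf16/fp16
weights_gb = params * bytes_per_param / 1e9
print(weights_gb)        # → 28.0 (GB for the weights alone)

# Assumed overhead for activations + KV cache pushes this past 40 GB.
print(round(weights_gb * 1.45, 1))  # → 40.6
```

This is why a single 40 GB A100 is roughly the minimum comfortable configuration for full-precision inference.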
Install the Required Libraries
First, ensure you have the transformers library installed. You can install it using pip:
```shell
pip install transformers
```
Load the Model
Once the libraries are installed, you can load the Phi-4-Reasoning model in your notebook:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="microsoft/Phi-4-reasoning", max_new_tokens=4096)
```
Make sure to set max_new_tokens=4096: the model generates its entire reasoning chain, and a lower token count can often cut its output off midway.
Phi-4-Reasoning: Hands-On Applications
We will now test the Phi-4-Reasoning model on two tasks involving logical thinking and reasoning. Let's start.
Task 1: Logical Thinking
Input:
```python
messages = [
    {"role": "user", "content": """A team is to be selected from among ten persons — A, B, C, D, E, F, G, H, I and J — subject to the following conditions. Exactly two among E, J, I and C must be selected. If F is selected, then J cannot be selected. Exactly one among A and C must be selected. Unless A is selected, E cannot be selected. If and only if G is selected, D must not be selected. If D is not selected, then H must be selected. The size of a team is defined as the number of members in the team. In how many ways can the team of size 6 be selected, if it includes E? And what is the largest possible size of the team?"""},
]
```
Output:
```python
from IPython.display import Markdown

Markdown(pipe(messages)[0]["generated_text"][1]["content"])
```
The model thinks thoroughly. It does a great job of breaking the entire problem down into small steps. The problem consists of two tasks; within the given token window, the model answered the first task but couldn't generate an answer for the second. What was interesting was the approach the model took towards solving the problem: it started by understanding the question and mapping out all the possibilities, and then went ahead to solve each task, sometimes repeating the logic it had pre-established.
Task 2: Explain the Working of LLMs to an 8-Year-Old Kid
Input:
```python
messages = [
    {"role": "user", "content": """Explain how LLMs work by comparing their working to the photosynthesis process in a plant, so that an 8 year old kid can actually understand"""},
]
```
Output:
```python
from IPython.display import Markdown

Markdown(pipe(messages)[0]["generated_text"][1]["content"])
```
The model hallucinates a bit while generating the response to this problem, but finally produces a response that draws an analogy between how LLMs work and the photosynthesis process. It keeps the language simple and even adds a disclaimer at the end.
Phi-4-Reasoning vs o3-mini: Comparison
In the last section, we saw how the Phi-4-Reasoning model performs on complex problems. Now let's compare its performance against OpenAI's o3-mini by looking at the output generated by the two models for the same task.
Phi-4-Reasoning
Input:
```python
from IPython.display import Markdown

messages = [
    {"role": "user", "content": """Suppose players A and B are playing a game with fair coins. To begin the game A and B both flip their coins simultaneously. If A and B both get heads, the game ends. If A and B both get tails, they both flip again simultaneously. If one player gets heads and the other gets tails, the player who got heads flips again until he gets tails, at which point the players flip again simultaneously. What is the expected number of flips until the game ends?"""},
]
Output = pipe(messages)
```
Output:
```python
Markdown(Output[0]["generated_text"][1]["content"])
```

o3-mini
Input:
```python
# client is an OpenAI() instance from the openai SDK
response = client.responses.create(
    model="o3-mini",
    input="""Suppose players A and B are playing a game with fair coins. To begin the game A and B both flip their coins simultaneously. If A and B both get heads, the game ends. If A and B both get tails, they both flip again simultaneously. If one player gets heads and the other gets tails, the player who got heads flips again until he gets tails, at which point the players flip again simultaneously. What is the expected number of flips until the game ends?""",
)
```
Output:
```python
print(response.output_text)
```

To check the detailed output, you can refer to the following GitHub link.
Result Evaluation
Both models give accurate answers. Phi-4-Reasoning breaks the problem into many detailed steps and thinks through each one before reaching the final answer. o3-mini, on the other hand, blends its thinking and final response more smoothly, making the output clean and ready to use. Its answers are also more concise and direct.
Applications of Phi-4-Reasoning Models
The Phi-4-Reasoning models open up a world of possibilities. Developers can use them to build intelligent systems for different industries. Here are a few areas where the Phi-4-Reasoning models can truly excel:
- Their strong performance on coding benchmarks (like LiveCodeBench) suggests applications in code generation, debugging, algorithm design, and automated software development.
- Their ability to generate detailed reasoning chains makes them well suited for answering complex questions that require multi-step inference and logical deduction.
- Their planning abilities could be leveraged in logistics, resource management, game-playing, and autonomous systems requiring sequential decision-making.
- The models can also contribute to systems for robotics, autonomous navigation, and tasks involving the interpretation and manipulation of spatial relationships.
Conclusion
The Phi-4-Reasoning models are open-weight and built to compete with top paid reasoning models like DeepSeek R1 and OpenAI's o3-mini. Since they are not instruction-tuned, their answers may not always follow a clear, structured format like some popular models, but this can improve over time or with custom fine-tuning. Microsoft's new models are powerful reasoning tools with strong performance, and they're only going to get better from here.