Alibaba Group has introduced QwenLong-L1, a new framework that enables large language models (LLMs) to reason over extremely long inputs. The development could unlock a new wave of enterprise applications that require models to understand and draw insights from extensive documents such as detailed corporate filings, lengthy financial statements, or complex legal contracts.
The challenge of long-form reasoning for AI
Recent advances in large reasoning models (LRMs), particularly through reinforcement learning (RL), have significantly improved their problem-solving capabilities. Research shows that when trained with RL fine-tuning, LRMs acquire skills similar to human "slow thinking," where they develop sophisticated strategies to tackle complex tasks.
However, these improvements are primarily seen when models work with relatively short pieces of text, typically around 4,000 tokens. The ability of these models to scale their reasoning to much longer contexts (e.g., 120,000 tokens) remains a major challenge. Such long-form reasoning requires a robust understanding of the entire context and the ability to perform multi-step analysis. "This limitation poses a significant barrier to practical applications requiring interaction with external knowledge, such as deep research, where LRMs must collect and process information from knowledge-intensive environments," the developers of QwenLong-L1 write in their paper.
The researchers formalize these challenges into the concept of "long-context reasoning RL." Unlike short-context reasoning, which often relies on knowledge already stored within the model, long-context reasoning RL requires models to accurately retrieve and ground relevant information from lengthy inputs. Only then can they generate chains of reasoning based on that incorporated information.
Training models for this via RL is tricky and often results in inefficient learning and unstable optimization. Models struggle to converge on good solutions or lose their ability to explore diverse reasoning paths.
QwenLong-L1: A multi-stage approach
QwenLong-L1 is a reinforcement learning framework designed to help LRMs transition from proficiency with short texts to robust generalization across long contexts. The framework enhances existing short-context LRMs through a carefully structured, multi-stage process:
Warm-up Supervised Fine-Tuning (SFT): The model first undergoes an SFT phase, where it is trained on examples of long-context reasoning. This stage establishes a solid foundation, enabling the model to ground information accurately in long inputs. It helps develop fundamental capabilities in understanding context, generating logical reasoning chains, and extracting answers.
Curriculum-Guided Phased RL: At this stage, the model is trained through multiple phases, with the target length of the input documents gradually increasing. This systematic, step-by-step approach helps the model stably adapt its reasoning strategies from shorter to progressively longer contexts, avoiding the instability often seen when models are abruptly trained on very long texts.
Difficulty-Aware Retrospective Sampling: The final training stage incorporates challenging examples from the preceding training phases, ensuring the model keeps learning from the hardest problems. This prioritizes difficult instances and encourages the model to explore more diverse and complex reasoning paths. (A sketch of the full three-stage loop follows below.)
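To make the three stages concrete, here is a minimal sketch of the training loop as described above. It is not the released implementation: every function, length cap, and threshold here is a hypothetical placeholder chosen for illustration, and the RL step stands in for whatever policy-optimization algorithm the recipe actually uses.

```python
"""Minimal sketch of QwenLong-L1's three-stage recipe, reconstructed from the
description in this article. All helpers and constants are placeholders."""

import random
from dataclasses import dataclass


@dataclass
class Example:
    context_len: int    # tokens in the input document
    difficulty: float   # e.g., 1 - observed pass rate during training


def supervised_fine_tune(model, data):
    # Placeholder for the warm-up SFT phase on long-context reasoning traces.
    return model


def rl_policy_update(model, batch):
    # Placeholder for one RL policy-optimization step (e.g., a PPO-style update).
    return model


def train(model, sft_data, rl_data,
          length_caps=(20_000, 60_000, 120_000),  # hypothetical curriculum
          hard_threshold=0.8, batch_size=32):
    # Stage 1: warm-up SFT grounds the model before any RL begins.
    model = supervised_fine_tune(model, sft_data)

    hard_pool = []  # difficult examples carried forward between phases
    for cap in length_caps:
        # Stage 2: curriculum phase — only inputs up to the current length cap.
        phase = [ex for ex in rl_data if ex.context_len <= cap]
        # Stage 3: difficulty-aware retrospective sampling — mix the hardest
        # examples from earlier phases back into the current batch.
        pool = phase + hard_pool
        batch = random.sample(pool, k=min(batch_size, len(pool)))
        model = rl_policy_update(model, batch)
        hard_pool.extend(ex for ex in batch if ex.difficulty > hard_threshold)
    return model
```

The key design point the sketch tries to capture is that each phase widens the length cap rather than jumping straight to maximum-length inputs, while the hard-example pool keeps earlier, still-unsolved problems in rotation.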
Beyond this structured training, QwenLong-L1 also uses a distinct reward system. While training for short-context reasoning tasks often relies on strict rule-based rewards (e.g., a correct answer in a math problem), QwenLong-L1 employs a hybrid reward mechanism. This combines rule-based verification, which ensures precision by checking for strict adherence to correctness criteria, with an "LLM-as-a-judge." The judge model compares the semantics of the generated answer with the ground truth, allowing for more flexibility and better handling of the diverse ways correct answers can be expressed in long, nuanced documents.
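A small sketch of what such a hybrid reward could look like follows. The helper names (extract_final_answer, exact_match, llm_judge_agrees) and the choice to combine the two signals with max() are assumptions for illustration, not the paper's exact formulation.

```python
"""Hypothetical sketch of a hybrid rule-based / LLM-judge reward."""


def extract_final_answer(response: str) -> str:
    # Placeholder: in practice, parse the model's tagged or boxed final answer.
    return response.strip().splitlines()[-1]


def exact_match(answer: str, truth: str) -> bool:
    # Rule-based verification: strict normalized string comparison.
    return answer.strip().lower() == truth.strip().lower()


def llm_judge_agrees(answer: str, truth: str) -> bool:
    # Placeholder: in practice, prompt a judge LLM to decide whether the
    # answer is semantically equivalent to the reference.
    return answer.strip().lower() == truth.strip().lower()


def hybrid_reward(response: str, ground_truth: str) -> float:
    answer = extract_final_answer(response)
    rule_score = 1.0 if exact_match(answer, ground_truth) else 0.0
    judge_score = 1.0 if llm_judge_agrees(answer, ground_truth) else 0.0
    # Either signal can grant the reward: the rule check preserves precision,
    # while the judge tolerates correct answers phrased differently.
    return max(rule_score, judge_score)
```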
Putting QwenLong-L1 to the test
The Alibaba team evaluated QwenLong-L1 using document question-answering (DocQA) as the primary task. This scenario is highly relevant to enterprise needs, where AI must understand dense documents to answer complex questions.
Experimental results across seven long-context DocQA benchmarks demonstrated QwenLong-L1's capabilities. Notably, the QwenLong-L1-32B model (based on DeepSeek-R1-Distill-Qwen-32B) achieved performance comparable to Anthropic's Claude-3.7 Sonnet Thinking, and outperformed models like OpenAI's o3-mini and Qwen3-235B-A22B. The smaller QwenLong-L1-14B model also outperformed Google's Gemini 2.0 Flash Thinking and Qwen3-32B.

An important finding for real-world applications is that RL training leads the model to develop specialized long-context reasoning behaviors. The paper notes that models trained with QwenLong-L1 become better at "grounding" (linking answers to specific parts of a document), "subgoal setting" (breaking down complex questions), "backtracking" (recognizing and correcting their own mistakes mid-reasoning), and "verification" (double-checking their answers).
For instance, while a base model might get sidetracked by irrelevant details in a financial document or stuck in a loop of over-analyzing unrelated information, the QwenLong-L1-trained model demonstrated an ability to engage in effective self-reflection. It could successfully filter out distractor details, backtrack from incorrect paths, and arrive at the correct answer.
Techniques like QwenLong-L1 could significantly expand the utility of AI in the enterprise. Potential applications include legal tech (analyzing thousands of pages of legal documents), finance (deep research on annual reports and financial filings for risk assessment or investment opportunities) and customer service (analyzing long customer interaction histories to provide more informed support). The researchers have released the code for the QwenLong-L1 recipe and the weights for the trained models.