

  • Overview

    Our contributions are threefold:

    (1) LongVT: An End-to-End Agentic Framework for “Thinking with Long Videos”
    We introduce a novel paradigm that natively interleaves multimodal tool-augmented Chain-of-Thought (CoT) with on-demand clip inspection over hours-long videos, thereby enabling large multimodal models (LMMs) to perform more effective and reliable long-video reasoning.

    (2) VideoSIAH: A Fine-Grained Data Suite for Evidence-Sparse Long-Video Reasoning
    We construct a scalable data pipeline that produces diverse and high-quality question-answering (QA) data and tool-integrated reasoning traces, and a dedicated benchmark under a video segment-in-a-haystack setting.

    (3) LongVT-7B-RFT: A State-of-the-Art Baseline with Invaluable Insights
    Through extensive quantitative comparisons, systematic ablations on data recipes, training strategies, and design choices, as well as in-depth analyses of training dynamics, we establish and open-source a powerful baseline model with “thinking with long videos” capabilities.

    LongVT Interleaved Multimodal Chain-of-Tool-Thought

    Interleaved Multimodal Chain-of-Tool-Thought (iMCoTT). Compared with prior text-based CoT reasoning, iMCoTT in our proposed LongVT natively performs self-reflection by calling the crop_video(start_time, end_time) tool. After a global preview, the model proposes a time window, proactively fetches the corresponding short clip, rethinks based on the new evidence, and decides whether to refine further or answer directly. Such tool-augmented reasoning grounds each step in what is actually seen, rather than blindly rephrasing as in text-only CoT, which mitigates hallucination and improves temporal localization and answer correctness.
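    To make the loop concrete, the sketch below shows one way an iMCoTT-style agent could interleave reasoning with clip inspection. It is a simplified illustration under assumed interfaces: only the crop_video(start_time, end_time) tool name comes from LongVT; the model API, action fields, and frame-sampling helper are hypothetical.

    # Simplified sketch of the iMCoTT loop (illustrative only; the model/tool
    # interfaces below are hypothetical, not LongVT's released API).
    def crop_video(video_path, start_time, end_time):
        """Hypothetical tool: return frames sampled from the requested clip."""
        ...

    def imcott_answer(model, video_path, question, max_rounds=4):
        # Global preview: sparsely sampled frames over the whole video.
        context = [("frames", crop_video(video_path, 0.0, None)), ("question", question)]
        for _ in range(max_rounds):
            step = model.generate(context)              # reasoning plus a proposed action
            if step.action == "crop_video":             # hypothesized time window
                clip = crop_video(video_path, step.start_time, step.end_time)
                context.append(("frames", clip))        # rethink on the new evidence
            else:                                       # confident enough to answer
                return step.answer
        return model.generate(context + [("force_answer", True)]).answer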


    Motivation of VideoSIAH

    Long-video reasoning presents a fundamentally different challenge from previous video QA settings: LMMs must locate sparse, fine-grained, and causally decisive moments embedded within hours-long content. However, existing LMMs are mostly trained with coarse-grained and clip-level data. This mismatch leaves modern LMMs lacking the supervision needed to learn how temporal hypotheses are formed, verified, or revised—a critical yet underexplored capability for agentic long-video reasoning.

    Moreover, most existing video understanding benchmarks only offer multiple-choice QAs, which can be solved without genuine temporal grounding and are vulnerable to dataset leakage or shortcut exploitation. To fill this gap, we introduce VideoSIAH, a large-scale, diverse, and high-quality data suite that serves both as a training dataset capturing the reasoning dynamics required for video segment-in-a-haystack QA and as a fine-grained evaluation benchmark, VideoSIAH-Eval, with human-in-the-loop validation for long-video open-ended question answering.

    We conduct a rigorous contamination study on the Qwen-VL series across two probing settings: (1) No Visual, where we feed the text prompt without video frames to test for direct memorization; (2) Rearranged Choices, where we randomize the mapping between option labels and their textual content for multiple-choice questions to detect label memorization. Our experimental results reveal significant vulnerabilities in existing benchmarks and highlight the necessity of our proposed VideoSIAH-Eval.

    | Setting | VideoMME (w/o sub) | VideoMMMU adapt. | VideoMMMU comp. | VideoMMMU perc. | VideoSIAH-Eval |
    | --- | --- | --- | --- | --- | --- |
    | Qwen2.5-VL-7B-Instruct | | | | | |
    | Original | 64.3 | 35.7 | 44.3 | 56.7 | 33.8 |
    | No Visual | 40.1 | 25.7 | 38.3 | 39.3 | 12.7 |
    | Rearranged Choices | 56.0 | 29.7 | 40.3 | 67.0 | - |
    | Qwen3-VL-8B-Instruct | | | | | |
    | Original | 69.3 | 40.7 | 60.3 | 71.3 | 46.6 |
    | No Visual | 44.1 | 33.7 | 39.3 | 46.7 | 0.00 |
    | Rearranged Choices | 69.0 | 36.3 | 47.7 | 69.3 | - |

    Contamination Tests for Qwen-VL Series on Long Video Understanding and Reasoning Benchmarks. The VideoSIAH-Eval column shows "-" entries for Rearranged Choices since our proposed benchmark is fully open-ended QA, where random option-answer mapping is not applicable.
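    As a concrete illustration of the Rearranged Choices probe described above, the snippet below shuffles option contents while keeping the A/B/C/D labels fixed, so a model that memorized a label rather than the answer loses its shortcut. The data fields (options, answer_label) are assumptions for illustration, not the exact benchmark schema.

    import random

    def rearrange_choices(options, answer_label, seed=0):
        """Shuffle option contents while keeping labels fixed, so a model that
        memorized 'the answer is C' no longer gets credit for free.
        Assumes option texts are unique."""
        labels = sorted(options)                     # e.g. ["A", "B", "C", "D"]
        correct_text = options[answer_label]
        contents = [options[label] for label in labels]
        random.Random(seed).shuffle(contents)
        remapped = dict(zip(labels, contents))
        # The correct label after shuffling is the one pointing at the same text.
        new_answer_label = next(l for l, c in remapped.items() if c == correct_text)
        return remapped, new_answer_label

    # Example: the original answer "C" may map to a different label after shuffling.
    # rearrange_choices({"A": "dog", "B": "cat", "C": "fox", "D": "owl"}, "C")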


    Data Pipeline

    VideoSIAH Data Pipeline

    Data Pipeline of VideoSIAH. We construct a semi-automatic data pipeline that integrates several state-of-the-art LMMs to sequentially perform long video segmentation, video clip captioning, segment-in-a-haystack QA generation, cross-modal QA filtering, and iMCoTT generation. Icons with human silhouettes denote human-in-the-loop validation, where annotators inspect a small set of representative failures to refine prompting rules for QA generation, QA filtering, and iMCoTT generation. Note that iMCoTT traces are generated only for the cold-start supervised fine-tuning (SFT) stage, whereas reinforcement learning (RL) operates solely on the filtered QA pairs.


    Dataset Statistics

    | Split | Source | Purpose | Samples | Total |
    | --- | --- | --- | --- | --- |
    | SFT (w/o tool) | LongVideo-Reason CoT | Reasoning-augmented Open-ended QA | 5,238 | 228,835 |
    | | Video-R1 CoT | Reasoning-augmented Video QA | 165,575 | |
    | | Image-based CoT | Reasoning-augmented Image QA | 58,022 | |
    | SFT (w/ tool) | Gemini-distilled iMCoTT | Tool-augmented Open-ended QA | 12,766 | 19,161 |
    | | Qwen-distilled iMCoTT | Tool-augmented Temporal Grounding | 6,395 | |
    | RL | Gemini-distilled QAs | Open-ended QA over Long Videos | 1,667 | 17,020 |
    | RFT | Self-distilled iMCoTT | Agentic Behaviors | 15,353 | |

    Dataset Statistics of VideoSIAH. Our proposed dataset contains large-scale non-tool SFT data, tool-augmented SFT data, RL QAs, and self-distilled reinforcement fine-tuning (RFT) traces.

    Video Category Distribution
    Question Category Distribution

    Category Distribution of VideoSIAH-Eval. We present the distribution of video types (left) and question types (right), highlighting the diversity of our proposed benchmark.


    Quantitative Comparisons

    We compare our LongVT models against proprietary LMMs and state-of-the-art open-source video reasoning models across various long video understanding and reasoning benchmarks.

    | Model | VideoMME (w/ sub) | VideoMMMU adapt. | VideoMMMU comp. | VideoMMMU perc. | LVBench | VideoSIAH-Eval | Avg |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | Proprietary LMMs | | | | | | | |
    | GPT-4o | 77.2 | 66.0 | 62.0 | 55.7 | 30.8 | 17.4 | 51.5 |
    | Gemini 1.5 Pro | 81.3 | 59.0 | 53.3 | 49.3 | 33.1 | - | 55.2 |
    | Open-Source (Sparse Sampling) | | | | | | | |
    | Qwen2.5-VL-7B | 62.6 | 37.3 | 28.0 | 36.7 | 30.7 | 28.1 | 37.2 |
    | Video-R1-7B | 61.0 | 36.3 | 40.7 | 52.3 | 37.2 | 27.9 | 42.6 |
    | VideoRFT-7B | 60.9 | 36.7 | 42.0 | 53.0 | 34.7 | 26.5 | 42.3 |
    | Video-Thinker-7B | 61.0 | 34.3 | 44.7 | 53.0 | 52.2 | 10.4 | 42.6 |
    | LongVT-7B-SFT (Ours) | 12.5 | 37.7 | 46.0 | 58.3 | 36.0 | 26.8 | 36.2 |
    | LongVT-7B-RL (Ours) | 66.1 | 32.7 | 44.7 | 50.0 | 37.8 | 31.0 | 43.7 |
    | Open-Source (Dense Sampling) | | | | | | | |
    | Qwen2.5-VL-7B | 64.3 | 35.7 | 44.3 | 56.7 | 40.9 | 33.8 | 46.0 |
    | Video-R1-7B | 60.5 | 37.3 | 38.7 | 46.3 | 40.1 | 33.1 | 42.7 |
    | VideoRFT-7B | 49.2 | 37.7 | 40.7 | 48.7 | 18.7 | 26.9 | 37.0 |
    | Video-Thinker-7B | 60.8 | 37.7 | 42.7 | 55.3 | 54.3 | 6.6 | 42.9 |
    | LongVT-7B-SFT (Ours) | 64.9 | 32.3 | 42.0 | 49.7 | 41.1 | 34.8 | 44.1 |
    | LongVT-7B-RL (Ours) | 66.1 | 37.7 | 42.3 | 56.3 | 41.4 | 35.9 | 46.6 |
    | LongVT-7B-RFT (Ours) | 67.0 | 35.7 | 43.7 | 56.7 | 41.3 | 42.0 | 47.7 |

    Performance Comparison with Existing Video-Centric LMMs across Various Long Video Understanding and Reasoning Benchmarks. The best and second-best results among open-source models in each column are marked in bold and underlined, respectively.


    Ablation Studies

    We conduct comprehensive ablation studies to examine the impact of data recipes, training stages, and reward design on model performance.

    Data Recipe

    | Setting | VideoMME (w/ sub) | VideoMMMU adapt. | VideoMMMU comp. | VideoMMMU perc. | LVBench | VideoSIAH-Eval | Avg |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | SFT w/o self-curated iMCoTT | 8.4 | 33.6 | 41.6 | 46.0 | 15.1 | 4.1 | 24.8 |
    | SFT w/ self-curated iMCoTT | 64.9 | 32.3 | 42.0 | 49.7 | 41.1 | 34.8 | 44.1 |
    | RL w/o self-curated QAs | 55.1 | 30.6 | 42.0 | 45.6 | 38.4 | 30.8 | 40.4 |
    | RL w/ self-curated QAs | 66.1 | 37.7 | 42.3 | 56.3 | 41.4 | 35.9 | 46.6 |

    Training Stage

    | Setting | VideoMME (w/ sub) | VideoMMMU adapt. | VideoMMMU comp. | VideoMMMU perc. | LVBench | VideoSIAH-Eval | Avg |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | SFT only | 64.9 | 32.3 | 42.0 | 49.7 | 41.1 | 34.8 | 44.1 |
    | RL only | 52.7 | 35.3 | 43.0 | 55.1 | 37.1 | 28.2 | 41.9 |
    | SFT+RL | 66.1 | 37.7 | 42.3 | 56.3 | 41.4 | 35.9 | 46.6 |
    | SFT+RL+RFT | 67.0 | 35.7 | 43.7 | 56.7 | 41.3 | 42.0 | 47.7 |

    Training Dynamics

    Training Dynamics and Ablations on Reward Design

    (a) shows training dynamics under different accuracy and time rewards, and (b) shows the effect of tool-call reward on tool usage.

    Recall encourages coverage; IoU demands precision. Using Recall as the reward function during RL presents a drawback: the policy can enlarge the predicted span to envelop the ground-truth interval, which monotonically raises the Recall-based score while ignoring boundary quality. The plateau in the Recall accuracy-score curve confirms this reward-hacking hypothesis. In contrast, IoU explicitly penalizes span inflation via the union term, yielding better-aligned boundaries and more disciplined tool use.
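    The contrast can be made concrete with a small numerical sketch (our own illustration, not the training reward code): inflating the predicted span saturates a Recall-based reward, while IoU penalizes it through the union term.

    def recall_reward(pred, gt):
        """Fraction of the ground-truth interval covered by the prediction."""
        (ps, pe), (gs, ge) = pred, gt
        inter = max(0.0, min(pe, ge) - max(ps, gs))
        return inter / (ge - gs)

    def iou_reward(pred, gt):
        """Intersection over union; the union term penalizes span inflation.
        The union formula assumes overlapping spans, which suffices here."""
        (ps, pe), (gs, ge) = pred, gt
        inter = max(0.0, min(pe, ge) - max(ps, gs))
        union = max(pe, ge) - min(ps, gs)
        return inter / union

    gt = (120.0, 150.0)                                   # ground-truth segment (seconds)
    tight, inflated = (118.0, 152.0), (0.0, 600.0)
    print(recall_reward(tight, gt), recall_reward(inflated, gt))  # 1.0 vs 1.0
    print(iou_reward(tight, gt), iou_reward(inflated, gt))        # ~0.88 vs 0.05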

    Is tool reward really necessary? The Qwen2.5-VL-7B baseline collapses to near-zero tool calls after training in both configurations (w/ and w/o tool reward), indicating that the model does not internalize the tool’s function. After performing cold-start SFT to obtain LongVT-7B-SFT, tool-call frequency rises during training under both configurations and accuracy improves in tandem. Hence, the tool reward is not required for basic competence: once SFT grounds the tool’s semantics, the model learns when and how to invoke the tool.


    Open-Source Resources
    We open-source LongVT to facilitate future development of long-video reasoning with tool calling in the community.
    Model Checkpoints
    Pre-trained models with SFT, RL, and RFT optimization
    Training Datasets
    VideoSIAH data suite for long-video reasoning
  • Overview

    Our contributions are threefold:

    (1) High-quality multimodal reasoning data curation.
    We provide the first systematic study on constructing SFT and RL datasets for multimodal reasoning, showing that both source diversity and answer diversity are crucial for building reliable supervision signals.

    (2) A strong and reproducible SFT recipe.
    We introduce a robust SFT pipeline with step-by-step validation, careful teacher-model selection, and cross-domain data integration, enabling the construction of a high-quality cold-start reasoning dataset.

    (3) An advanced RL training recipe.
    Through an extensive comparison of GSPO, GRPO, and DAPO, we identify the most stable and scalable RL strategy and build a reliable RL pipeline that significantly strengthens multimodal reasoning performance.

    OpenMMReasoner Performance Comparison

    Performance Comparison with State-of-the-Art Large Multimodal Reasoning Models across Various Benchmarks. Our proposed OpenMMReasoner consistently outperforms competing methods, highlighting its effectiveness in complex reasoning tasks.


    OpenMMReasoner-Data

    OpenMMReasoner-Data presents two training recipes covering both the SFT and RL phases. The pipeline begins by collecting diverse data sources and selecting teacher models to generate new answer traces. During the RL phase, we explore different algorithm choices and filtering strategies, leading to our final optimized recipe.

    OpenMMReasoner Pipeline
    Data Distribution

    Experimental Results on Visual Reasoning Benchmarks

    We evaluate our approach on a suite of public visual reasoning benchmarks. Extensive evaluations demonstrate that our training recipe not only surpasses strong baselines but also highlights the critical role of data quality and training design in shaping multimodal reasoning performance. Notably, our method achieves an 11.6% improvement over the Qwen2.5-VL-7B-Instruct baseline across nine multimodal reasoning benchmarks, establishing a solid empirical foundation for future large-scale multimodal reasoning research.

    Main Experimental Results

    Analysis and Insights for SFT

    Our Analysis and Insights for SFT are as follows:

    (1) Answer diversity enhances reasoning.
    Increasing the diversity of generated answers consistently improves the model’s overall reasoning performance, even when using the same question sources, suggesting that exposure to varied solutions strengthens understanding.

    (2) Teacher model selection is crucial.
    Distilling from a strong teacher model substantially boosts the model’s reasoning ability while maintaining high data efficiency. Careful selection of the teacher model directly affects the quality of the distilled dataset and the final model performance.

    (3) Over-filtering reduces diversity and performance.
    The best results are achieved without excessive filtering, indicating that maintaining greater answer diversity encourages more robust reasoning abilities.

    (4) Cross-domain knowledge improves generalization.
    Incorporating diverse data from multiple domains consistently enhances the model’s overall reasoning capabilities across tasks.

    Teacher Model Analysis
    Answer Diversity Analysis
    Cross-domain Analysis

    Analysis and Insights for RL

    Our Analysis and Insights for RL are as follows:

    (1) GSPO outperforms other algorithms.
    GSPO demonstrates superior stability and faster convergence compared to alternative methods in multimodal RL training.

    (2) Token efficiency is crucial.
    While increasing reasoning steps at test time can improve performance, excessive tokens reduce efficiency. Our results show that a smaller reasoning budget can achieve comparable or even better accuracy.

    (3) Reasoning ability transfers across domains.
    Gains in reasoning during training consistently translate into stronger performance across multiple domains.

    RL Experimental Results
    RL Training Curves
    Validation Curves
    Rollout Number Experiment Curves

    Open-Source Resources
    We open-source OpenMMReasoner to facilitate future development of multimodal reasoning in the community.
  • LLaVA-OneVision-1.5

    Code | Technical Report | Models and Datasets | Demo

    High performance, low cost, and strong reproducibility!

    LLaVA, proposed in 2023, efficiently connects open-source vision encoders with large language models through low-cost alignment, bringing “see—understand—converse” multimodal capabilities to the open ecosystem. It significantly narrows the gap with top-tier closed models and marks an important milestone in open-source multimodal paradigms.

    Starting with a low-cost alignment that bridges “vision encoder + large language model,” LLaVA laid the groundwork; LLaVA-1.5 strengthened comprehension with larger, cleaner data and high-resolution inputs; LLaVA-NeXT expanded into OCR, mathematical reasoning, and broader, multi-scenario tasks. It then branched into LLaVA-NeXT-Video for temporal video understanding and multi-frame reasoning, and LLaVA-NeXT-Interleave to support interleaved multi-image–text inputs and cross-image joint reasoning. Ultimately, the line converged in LLaVA-OneVision, which provides a unified interface covering images, documents, charts, multi-image, and video, balancing quality and efficiency.

    Although interfaces and architectures for multimodal alignment are trending toward convergence, a truly “reproducible” open-source path still differs from releases that “open weights only.” Qwen2.5-VL and InternVL3.5 set strong baselines in OCR, document understanding, mathematical and cross-image reasoning; however, full data inventories, cleaning and mixing ratios, as well as alignment/sampling and training schedules are often only partially disclosed, making end-to-end reproduction difficult. Molmo, with a cleaner data pipeline and meticulous design, approaches strong closed-source baselines across multiple evaluations and human preference settings; Open-Qwen2VL shows that under a more efficient paradigm, strong comparative performance is achievable even when raw multimodal tokens account for a relatively small proportion. The primary gap today lies in the “reproducibility of recipes and engineering details,” rather than any single choice of model architecture.

    LLaVA-OneVision-1.5 Performance

    LMMs-Lab, focused on the goals of high performance, low cost, and strong reproducibility, releases on top of the LLaVA‑OneVision framework a fully open, concept-balanced 85M pretraining dataset (LLaVA‑OV‑1.5‑Mid‑Training‑85M) and a carefully curated 22M instruction dataset (LLaVA‑OV‑1.5‑Instruct‑22M). We retain a compact three-stage pipeline (Stage‑1 language–image alignment; Stage‑1.5 concept balancing and high-quality knowledge injection; Stage‑2 instruction tuning), combine offline parallel data packing (up to ~11× padding compression) with Megatron‑LM plus a distributed optimizer, and complete Stage‑1.5 pretraining of an 8B‑scale VL model on 128 A800 GPUs in about four days.

    Building on this, we introduce LLaVA‑OneVision‑1.5, which inherits and extends the LLaVA series: it adds RICE‑ViT for native-resolution, region-level fine-grained semantic modeling; strengthens chart/document/structured-scene understanding; continues the compact three-stage paradigm to avoid a lengthy curriculum; and emphasizes “quality–coverage–balance” across the 85M pretraining and 22M instruction sets. Crucially, it delivers truly end-to-end transparent openness—covering data, training and packing toolchains, configuration scripts, logs, and reproducible evaluation commands with their build and execution details—to enable low-cost reproduction and verifiable extension by the community. Experiments show LLaVA‑OneVision‑1.5 achieves competitive or superior performance to Qwen2.5‑VL on multiple public multimodal benchmarks (see the technical report).


    Pretraining Dataset (85M) and Concept Balancing

    LLaVA-OneVision-1.5 Scaling

    A general-purpose vision–language pretraining dataset (85M) and an instruction-tuning dataset (22M). The 85M pretraining corpus fuses eight heterogeneous sources—COYO-700M, Obelics, DataComp-1B, LAION-CN, ImageNet-21K, SAM-1B, MINT, and Zero250M—yielding roughly 20 million Chinese and 65 million English image–text pairs. To tackle long-tail concept sparsity and noise/missing issues in raw captions, we move beyond raw term frequencies and adopt a feature-driven “concept balancing” strategy: using a MetaCLIP encoder, we embed all images and a 500K-scale concept vocabulary into a shared vector space, retrieve the Top-K most similar concepts for each image, tally concept frequencies, and then apply inverse-frequency weighted resampling. This suppresses high-frequency background classes and boosts rare fine-grained entities, attributes, and scenes, substantially flattening the long-tail distribution. We then use a high-quality captioner to generate aligned bilingual (Chinese/English) augmented descriptions. Systematic experiments show that, under the same or lower token budget, scaling high-quality data combined with concept-balanced sampling delivers significant and reproducible gains in multimodal understanding, long-tail recognition, and instruction generalization.
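    The resampling step can be sketched as follows; this is a simplified illustration under assumed inputs (L2-normalized image and concept embeddings), not the released pipeline code.

    import numpy as np

    def concept_balanced_weights(image_embs, concept_embs, top_k=10, alpha=1.0):
        """Assign each image a sampling weight inversely proportional to the
        frequency of the concepts it retrieves (rarer concepts get higher weight).
        Embeddings are assumed to be L2-normalized features from a shared encoder."""
        sims = image_embs @ concept_embs.T                        # (N_img, N_concept)
        topk = np.argpartition(-sims, top_k, axis=1)[:, :top_k]   # Top-K concepts per image
        freq = np.bincount(topk.ravel(), minlength=concept_embs.shape[0])
        inv = 1.0 / np.maximum(freq, 1) ** alpha                  # inverse-frequency weighting
        weights = inv[topk].mean(axis=1)                          # aggregate per image
        return weights / weights.sum()

    # Illustrative usage: resample image indices with the balanced distribution.
    # idx = np.random.choice(len(weights), size=num_samples, p=weights)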


    Instruction Dataset (22M)

    The 22M instruction dataset covers eight categories: Caption, Chart & Table, Code & Math, Domain-specific, General VQA, Grounding & Counting, OCR, and Science. Through multi-source aggregation, format standardization, instruction rewriting, bilingual conversion, template diversification (to reduce homogeneity), and safety filtering, we maintain balanced distributions across categories and difficulty levels. Moreover, augmenting our instruction data with the FineVision dataset yields further performance gains.

    LLaVA-OneVision-1.5 Open Framework

    Method

    1) Visual Encoder Pretraining

    To raise the floor for OCR, tables/documents, region‑level understanding, and downstream instruction reasoning, LLaVA‑OneVision‑1.5 adopts our in‑house MVT v1.5 (RICE‑ViT) as the vision backbone.

    Compared to CLIP/SigLIP‑style contrastive models that rely on global alignment only, RICE‑ViT addresses the structural bottleneck of representing an instance with a single global vector by introducing a unified Region Cluster Discrimination mechanism:

    • trained on 450M images and 2.4B candidate regions
    • explicitly models local entities/text blocks and their context via region‑cluster discrimination plus region‑aware attention
    • uses 2D rotary position encoding (2D RoPE) for native multi‑resolution support

    Unlike SigLIP2, which relies on multiple specialized losses (SILC, TIPS, LocCa, etc.), we use a single clustering‑discrimination paradigm to simultaneously strengthen general semantics, OCR recognition, and localization, yielding a simpler, more maintainable training/inference pipeline.

    During multimodal fusion, a lightweight projection followed by full‑parameter joint training seamlessly plugs this fine‑grained semantic foundation into the language model, reducing redundant adapters and improving cross‑task transfer efficiency.

    LLaVA-OneVision-1.5 Open Framework

    2) Three‑Stage Learning Pipeline

    • Stage‑1: Language–image alignment
      Train the visual projection layer on the LLaVA‑1.5 558K dataset to map visual encoder outputs into the LLM’s token embedding space, with controlled parameter updates for fast, stable convergence.

    • Stage‑1.5: Mid‑stage pretraining with high‑quality knowledge
      Full‑parameter training on the concept‑balanced 85M pretraining set to inject broad visual semantics and world knowledge, emphasizing data quality and coverage rather than blindly expanding token counts.

    • Stage‑2: Visual instruction alignment
      Continue full‑parameter training on the 22M instruction set plus multi‑source visual instruction corpora such as FineVision to improve task generalization, reasoning organization, and response‑format control.

    3) Offline Parallel Data Packing

    To reduce padding waste from multimodal sequence‑length variance and improve effective token utilization, we adopt offline parallel packing:

    • hash‑bucket clustering by sample length or length ranges to cut global sorting/scanning costs
    • multithreaded concatenation of multiple short samples into fixed‑length sequences close to the target length during data prep

    This one‑pass, corpus‑wide pipeline is deterministic and reproducible, avoiding the runtime instability and extra CPU overhead of online dynamic packing. On the 85M pretraining set, it achieves up to ~11× effective padding compression (defined as original total padding tokens / post‑packing total padding tokens) compared to the baseline.
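    The packing idea can be sketched as a simple greedy procedure over length buckets; this is our own simplified illustration, not the released packing toolchain.

    from collections import defaultdict

    def pack_samples(lengths, max_len=8192, bucket_size=256):
        """Greedy offline packing: bucket samples by length range, then fill each
        packed sequence close to max_len to minimize padding. Returns lists of ids."""
        buckets = defaultdict(list)                       # hash buckets by length range
        for idx, n in enumerate(lengths):
            buckets[min(n, max_len) // bucket_size].append((idx, n))

        packs, current, used = [], [], 0
        # Visit longer buckets first so short samples fill the remaining slack.
        for key in sorted(buckets, reverse=True):
            for idx, n in buckets[key]:
                if used + n > max_len and current:
                    packs.append(current)
                    current, used = [], 0
                current.append(idx)
                used += n
        if current:
            packs.append(current)
        return packs

    # e.g. pack_samples([5000, 3000, 2500, 600, 512]) -> [[0, 1], [2, 3, 4]]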

    4) Hybrid Parallelism and Efficient Long‑Context Training

    On the training side, we use hybrid parallelism and long‑context optimizations—tensor parallelism (TP) + pipeline parallelism (PP) + sequence/context parallelism with a distributed optimizer—to improve compute utilization and memory efficiency at cluster scale. We also adopt a native‑resolution strategy to preserve structural details in charts, documents, and dense text regions, avoiding information loss from uniform resizing.

    On a 128×A800 cluster, Stage‑1.5 for an 8B model (85M samples, native resolution) completes in about 3.7 days, balancing throughput and cost.

    LLaVA-OneVision-1.5 Open Framework
    Open-Source Resources
    We open-source LLaVA-OneVision-1.5 to facilitate future development of LMMs in the community.

    Quick Start with HuggingFace

    from transformers import AutoTokenizer, AutoProcessor, AutoModelForCausalLM
    from qwen_vl_utils import process_vision_info
     
    model_path = "lmms-lab/LLaVA-One-Vision-1.5-8B-Instruct"
     
    # default: Load the model on the available device(s)
     
    model = AutoModelForCausalLM.from_pretrained(
        model_path, torch_dtype="auto", device_map="auto", trust_remote_code=True
    )
     
    # default processor
     
    processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
     
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
                },
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ]
     
    # Preparation for inference
     
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    )
    inputs = inputs.to("cuda")
     
    # Inference: Generation of the output
     
    generated_ids = model.generate(**inputs, max_new_tokens=1024)
    generated_ids_trimmed = [
        out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
    ]
    output_text = processor.batch_decode(
        generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )
    print(output_text)
     

    Model Evaluation

    # pip install git+https://github.com/EvolvingLMMs-Lab/lmms-eval.git
    accelerate launch --num_processes=8 --main_process_port 12399 -m lmms_eval \
     --model=llava_onevision1_5 \
     --model_args=pretrained=lmms-lab/LLaVA-OneVision-1.5-8B-Instruct,attn_implementation=flash_attention_2,max_pixels=3240000 \
     --tasks=mmmu_val,mmmu_pro_standard,mmbench_en_test,mmerealworld,mmerealworld_cn,ai2d,ai2d_no_mask,vstar_bench,chartqa,charxiv,docvqa_test,mathvista_testmini,mmstar,scienceqa \
     --batch_size=1
     
  • LLaVA-Critic-R1 Performance
    Figure 1: LLaVA-Critic-R1 is trained on top of the base model Qwen-2.5-VL-7B. Building upon a stronger reasoning VLM, ThinkLite-VL-7B, we further develop LLaVA-Critic-R1+ by applying the same RL critic training procedure. Left: Performance comparison of LLaVA-Critic-R1 with other base and reasoning VLMs on multiple visual reasoning, visual understanding, and visual reward benchmarks. LLaVA-Critic-R1 not only significantly outperforms other models in critic performance, but also demonstrates stronger policy capabilities. Right: Performance improvement of critic training and test-time self-critic scaling on five common visual reasoning and visual understanding benchmarks. Critic training alone significantly improves the base model's performance. Building upon this, leveraging the dual policy and critic capabilities of LLaVA-Critic-R1 for a 'Best-of-128' self-critic scaling procedure at test time leads to a further substantial boost in performance.

    Breaking the Critic-Policy Divide

    In vision-language modeling, critic models are typically trained to evaluate outputs—assigning scalar scores or pairwise preferences—rather than to generate responses. This separation from policy models, which produce the responses, is so entrenched that critics are rarely considered for direct policy use.

    LLaVA-Critic-R1 challenges this convention. We propose to reorganize preference-labeled critic datasets into verifiable training signals and perform reinforcement learning directly on a base generative model, producing a multimodal critic trained to optimize preference judgments while retaining full generation ability.

    Surprising Dual Excellence

    LLaVA-Critic-R1 emerges not only as a top-performing critic but also as a competitive policy model—matching or surpassing specialized reasoning VLMs trained with in-domain data across 26 visual reasoning and understanding benchmarks, with an average gain of +5.7% over its base model (Qwen-2.5-VL-7B).

    Extending this approach to existing strong reasoning VLMs yields LLaVA-Critic-R1+, which further advances policy performance without sacrificing critic quality, achieving a state-of-the-art 71.9 on MMMU at the 7B scale.

    Self-Critique at Test Time

    The enhanced critic ability benefits inference significantly. Applying self-critique at test time yields an average +13.8% improvement on five representative reasoning tasks without additional training. This demonstrates the power of unified critic-policy models for creating self-improving systems.

    Technical Innovation

    Our approach centers on three key innovations:

    Data Reorganization: We transform preference-labeled critic datasets into verifiable training signals suitable for reinforcement learning.

    GRPO Training: We apply Group Relative Policy Optimization directly on generative models, enabling them to learn from critic data while maintaining generation capabilities.

    Unified Architecture: We maintain a single model for both critic and policy functions, eliminating the traditional separation between evaluation and generation.
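    To illustrate the data reorganization, the sketch below turns one preference-labeled pair into a critic prompt whose final verdict can be checked exactly, yielding a verifiable reward for GRPO. The prompt wording and reward parsing are illustrative assumptions, not the paper's exact format.

    def build_critic_prompt(question, response_a, response_b):
        """Reformat a preference-labeled pair into a critic question whose answer
        ('A' or 'B') can be checked exactly, giving a verifiable RL reward."""
        return (
            f"Question: {question}\n"
            f"Response A: {response_a}\n"
            f"Response B: {response_b}\n"
            "Which response is better? Reason step by step, then answer with A or B."
        )

    def verifiable_reward(model_output, preferred_label):
        """Binary reward: 1 if the critic's final verdict matches the human label."""
        verdict = model_output.strip().splitlines()[-1].strip().upper()
        return 1.0 if verdict.endswith(preferred_label) else 0.0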

    Model Performance

    LLaVA-Critic-R1 demonstrates strong performance across diverse benchmarks:

    • Visual Reasoning: Competitive performance with specialized models on complex reasoning tasks
    • Critic Evaluation: Top-tier preference judgment and scalar scoring capabilities
    • Generation Quality: Maintained fluency and coherence with strong instruction following

    The model comes in two variants:

    • LLaVA-Critic-R1: Base model trained from Qwen-2.5-VL-7B
    • LLaVA-Critic-R1+: Extended approach applied to strong reasoning VLMs

    Implications for the Field

    Our results reveal that RL training on critic data can produce a unified model excelling at both evaluation and generation, offering a simple path toward scalable, self-improving multimodal systems. This work demonstrates that the traditional separation between critics and policies is not necessary—a single model can excel at both tasks simultaneously.

  • Our previous work, MMSearch-R1, represents a paradigm shift in multimodal AI as the first framework to employ end-to-end reinforcement learning for autonomous tool invocation in large multimodal models (LMMs). By enabling models to independently determine when and how to leverage external search tools, MMSearch-R1 achieves both high efficiency and state-of-the-art performance on open-world tasks, marking a significant advance in practical AI deployment.

    What began as a specialized tool-calling model has since evolved into a general-purpose reasoning engine that seamlessly integrates knowledge retrieval with cognitive processing. This evolution offers critical insights into the future of autonomous AI systems: the most capable agents will not only be able to think deeply, but also actively seek and utilize relevant information as needed.

    Reasoning-improved Search

    Despite MMSearch-R1’s strong performance, we observed limitations in its ability to adapt to complex, dynamic information needs. To address these constraints, we propose a reasoning-first agent paradigm that emphasizes the following core capabilities:

    1. Intelligent search: The model reasons about its knowledge gaps to make decisions about when and how to invoke search tools
    2. Query generation: Deep task understanding enables context-aware query formulation that evolves with the problem
    3. Knowledge integration: External information is systematically incorporated through reasoning processes, not merely retrieved and appended
    4. Performance: The approach delivers fundamental advances in multimodal reasoning, not just incremental improvements

    Training Recipe

    Prior work in multimodal reasoning has demonstrated that training with verifiable rewards can significantly enhance a model’s capabilities in understanding and solving complex STEM problems. In our initial experiments, we evaluated numerous multimodal STEM datasets. We discovered that many existing datasets suffer from various limitations: some lack sufficient difficulty for advanced models, while others contain noisy annotations, incomplete visual-text alignments, or unverifiable ground truth answers. These issues can produce unreliable reward signals that destabilize reinforcement learning training. To address these challenges, we curated a comprehensive high-quality training set consisting of: MMPR[1], MMK12[2], MMR1[3], Multi-subject-RLVR[4], ScienceQA. To ensure data quality for effective multimodal RL training, we implemented a rigorous filtering pipeline:

    1. Multimodal Verification: Every problem undergoes automatic verification to ensure visual and textual components are properly aligned and complete. We filter datasets to include only problems where both modalities contribute meaningfully to the solution process.

    2. Answer Verifiability: Each problem must have verifiable ground truth answers with clear reasoning paths. For mathematical problems, we verify symbolic and numerical answers; for scientific problems, we ensure explanations align with established principles.

    3. Complexity Filtering: Problems must require genuine multimodal reasoning rather than being solvable through text or vision alone. We exclude problems where one modality is merely decorative.

    After filtering, we obtained 80K high-quality multimodal STEM problems for RL training.

    Our RL training stage follows DAPO[5] with the following modifications:

    • No Entropy Loss: We eliminate entropy loss entirely, as its inclusion frequently causes training instability characterized by exponential entropy growth and subsequent collapse.
    • No KL Loss: Following DAPO, we remove KL loss to allow the model to diverge from the original SFT policy’s trust region. This also eliminates reference policy log probability computation, accelerating training.
    • Overlong Filtering: We mask loss for truncated sequences to preserve long-context reasoning capabilities.
    • Learning Rate Schedule: We implement a sigmoid-based decay schedule. The sigmoid schedule provides smooth S-shaped transitions that stabilize early training and asymptotically approach target rates without discontinuities. We keep the base learning rate at 2e-6 and use 60 warmup steps with sigmoid curve progression. The decay is a sigmoid function reducing the learning rate to 90% of the base rate (final LR ≈ 1.8e-6). A sketch of this schedule follows this list.
    • Improved Exploration: We set the clip high ratio to 0.3 in the GRPO/PPO surrogate loss to encourage exploration and stabilize entropy dynamics.
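    Below is a sketch of such a sigmoid warmup-and-decay schedule using the stated base rate (2e-6), 60 warmup steps, and decay to 90% of the base rate; the total step count and curve steepness are illustrative assumptions, not the recipe's exact values.

    import math

    BASE_LR, WARMUP_STEPS, FINAL_FRACTION = 2e-6, 60, 0.9   # values from the recipe
    TOTAL_STEPS, STEEPNESS = 1000, 10.0                     # assumed for illustration

    def sigmoid_lr(step):
        if step < WARMUP_STEPS:
            # Smooth S-shaped warmup from ~0 up to the base learning rate.
            x = STEEPNESS * (step / WARMUP_STEPS - 0.5)
            return BASE_LR / (1.0 + math.exp(-x))
        # Sigmoid decay from BASE_LR toward FINAL_FRACTION * BASE_LR (~1.8e-6).
        progress = (step - WARMUP_STEPS) / max(1, TOTAL_STEPS - WARMUP_STEPS)
        x = STEEPNESS * (progress - 0.5)
        decay = 1.0 / (1.0 + math.exp(-x))                  # goes from ~0 to ~1 over training
        return BASE_LR * (1.0 - (1.0 - FINAL_FRACTION) * decay)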

    Our reward function employs a two-stage hierarchical approach combining mathematical verification with LLM-based evaluation. We first apply a static mathematical verifier to assess answer correctness for questions with deterministic solutions. When the verifier returns zero, indicating either an incorrect answer or an inability to verify, we employ an LLM-as-judge for secondary assessment to handle questions requiring semantic evaluation or those with multiple valid representations (e.g., “teal blue” vs. “blue”). The judge evaluates based on the given image, question, ground-truth answer, and model prediction.

    This design prioritizes computational verification for efficiency while leveraging LLM evaluation for complex semantic cases.
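    A minimal sketch of this two-stage reward is shown below; the math_verify and llm_judge helpers are assumed placeholders for the static verifier and the LLM-as-judge, whose prompts are not reproduced here.

    def hierarchical_reward(image, question, ground_truth, prediction,
                            math_verify, llm_judge):
        """Stage 1: cheap symbolic/numeric verification. Stage 2: fall back to an
        LLM-as-judge only when the verifier scores zero (wrong or unverifiable)."""
        score = math_verify(ground_truth, prediction)        # returns 1.0 or 0.0
        if score > 0:
            return score
        # Semantic cases, e.g. "teal blue" vs. "blue", or multiple valid forms.
        return 1.0 if llm_judge(image, question, ground_truth, prediction) else 0.0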

    Result

    Based on this foundation, we can build a very strong STEM-focused reasoning model that surpasses comparable open-source models.

    | Models | MMK12 | MathVerse (testmini) | MathVision (testmini) | MathVista (testmini) | MMMU (val) |
    | --- | --- | --- | --- | --- | --- |
    | Qwen2.5-VL-7B | 34.4 | 46.2 | 24.0 | 66.6 | 49.8 |
    | OpenVL-Thinker | 31.0 | 45.2 | 24.0 | 70.2 | 52.3 |
    | R1-OneVision | 30.6 | 44.1 | 24.0 | 64.1 | 49.2 |
    | MM-Eureka-7B | 27.0 | 50.3 | 26.9 | 73.0 | 50.7 |
    | General STEM | 46.2 | 51.4 | 28.4 | 73.6 | 57.3 |
    | General STEM -> Search (Two Stage) | 43.0 | 51.9 | 28.0 | 72.4 | 57.9 |

    With this reasoning foundation, we can go further to improve the model’s search abilities. We first implemented a two-stage training process to seamlessly integrate search capabilities. This approach ensures that search becomes a natural extension of the model’s reasoning process rather than a separate module.

    As shown in the figure, compared with our original MMSearch baseline built on Qwen-2.5-VL-7B (referred to as Instruct → Search in this context), the model achieves clear improvements. The reasoning-first approach enables more intelligent search decisions, better query formulation, and more effective utilization of retrieved information.

    Accuracy across four multimodal benchmarks
    Accuracy across four multimodal benchmarks (Infoseek, MMSearch, FVQA, and SimpleVQA). The Reasoning to Search paradigm consistently outperforms or matches Instruct -> Search, especially on Infoseek and MMSearch, demonstrating the benefit of reasoning-first strategies in complex information retrieval tasks.

    One of the most intriguing findings emerged during our evaluation of STEM tasks (e.g., MMMU, MathVision) using Search prompts. We observed a counterintuitive phenomenon: excessive searching actually led to decreased performance. Specifically, models employing Search prompts tended to over-rely on external searches, frequently initiating queries for information that could have been inferred through reasoning or was already available internally.

    Accuracy comparison across five challenging reasoning datasets
    Accuracy comparison across five challenging reasoning datasets. Results indicate that while integrating search generally helps, excessive or unguided searching can lower performance. This underscores the need for precise reasoning-guided search prompting to achieve optimal results in complex multimodal reasoning tasks.

    These performance drops highlight a critical insight: without effective reasoning capabilities to guide their search strategies, models tend to default to inefficient search behaviors. This not only results in unnecessary computational overhead but can also introduce irrelevant information, ultimately degrading the quality of answer generation.

    | Search Ratio | MMK12 | MathVerse (testmini) | MathVision (testmini) | MathVista (testmini) | MMMU (val) |
    | --- | --- | --- | --- | --- | --- |
    | Reason -> Search (Search Prompt) | 16.8 | 22.9 | 9.5 | 12.5 | 24.7 |

    Reason to Act for General Search Model

    To achieve a robust balance between reasoning and search performance across general-domain tasks, we choose to integrate the training into one stage for both capabilities. Our goal is to build a model that not only retrieves relevant information efficiently but also demonstrates advanced reasoning over searched information.

    Training Recipe

    We unify the training process by adopting a ReAct-style prompt template, inspired by the ReAct framework, which allows the model to interleave reasoning and action (search) steps within a single trajectory. This template is a slight refinement of the standard Search prompt, and full implementation details are provided in the Appendix.

    The table below summarizes the lineage and training data for each model variant, clarifying the distinctions in model initialization and supervision strategies. For comprehensive information on hyperparameters and training dynamics, please refer to the Appendix.

    Result

    We evaluated both our two-stage and unified (one-stage) models across a broad suite of benchmarks and consistently observed performance improvements as model capacity increased.

    The General STEM model showed that enhancing reasoning capabilities alone can lead to significant gains. In contrast, the General Search model revealed the multiplicative benefits of integrating reasoning with targeted search strategies. Notably, these improvements were not simply incremental - they represent fundamental advances in how models address complex, multimodal problems.

    | Models | MMK12 | MathVerse (testmini) | MathVision (testmini) | MathVista (testmini) | MMMU (val) | AI2D | ChartQA | MME | RealworldQA | OCRBench | DocVQA | MMBench | MMStar | MiaBench |
    | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
    | Qwen2.5-VL-7B | 34.4 | 46.2 | 24.0 | 66.6 | 49.8 | 93.3 | 94.4 | 630.4/1685.2 | 68.5 | 85.2 | 94.6 | 82.9 | 62.6 | 81.7 |
    | General STEM | 46.2 | 51.4 | 28.4 | 73.6 | 57.3 | 94.4 | 91.4 | 700.7/1662.1 | 67.5 | 83.7 | 92.1 | 83.8 | 65.5 | 76.0 |
    | Reason -> Search | 43.2 | 51.7 | 25.0 | 71.8 | 57.9 | 94.0 | 93.6 | 652.5/1688.3 | 67.5 | 81.7 | 93.5 | 83.2 | 63.1 | 47.6 |
    | General Search | 43.6 | 52.0 | 27.3 | 74.7 | 56.1 | 94.6 | 94.0 | 718.9/1775.3 | 65.5 | 77.8 | 89.4 | 84.0 | 60.4 | 44.4 |

    | Models | Infoseek | MMSearch | FVQA | SimpleVQA |
    | --- | --- | --- | --- | --- |
    | Qwen2.5-VL-7B | 20.1 | 12.8 | 20.3 | 38.4 |
    | MMSearch | 55.1 | 53.8 | 58.4 | 57.4 |
    | Reasoning -> Search | 58.5 | 57.1 | 57.9 | 57.7 |
    | General Search | 52.0 | 54.9 | 52.8 | 57.0 |

    Our results reveal that MMSearch-R1 achieves the highest accuracy across all benchmarks, significantly outperforming standard General Search configurations. The key differentiator is search utilization: MMSearch-R1 demonstrates search ratios up to 61.6% on Infoseek, compared to 28.5% for General Search.

    MMSearchR1 performance comparison
    MMSearch-R1 achieves the highest accuracy across all benchmarks, significantly outperforming standard General Search configurations. The key differentiator is search utilization: MMSearch-R1 demonstrates search ratios up to 61.6% on Infoseek, compared to 28.5% for General Search.

    We found a strong positive correlation (Pearson r = 0.911) between search ratio and model performance, indicating that increased search engagement directly improves accuracy. However, this relationship has limits—excessive or undirected search introduces computational costs and answer noise that can degrade reliability. Additional experiments with reduced STEM data, increased search data ratios, and shortened warmup periods (60 vs 45 steps) confirmed that better performance requires strategic search integration. Models perform best when search is invoked selectively through explicit reasoning about information needs, balancing enhanced knowledge access against computational efficiency. These findings demonstrate that the key to multimodal model performance lies not in maximizing search frequency, but in developing sophisticated reasoning mechanisms that determine when external information retrieval adds value to complex query resolution.

    Case Study

    We show the following interesting cases to demonstrate versatile abilities of our final model.

    Case: MME

    In this example from the MME benchmark, the model is required to answer a question about a statue located in the National Gallery of Art in Washington, D.C. The process begins with the model analyzing the query image to determine what additional information is needed. It then performs searches for visually similar images, systematically evaluates the retrieved results, and conducts follow-up searches from different perspectives to verify its findings. This iterative search-and-reasoning approach allows the model to gather comprehensive evidence before arriving at a well-supported conclusion.

    MME benchmark case study
    Example from the MME benchmark showing the model's iterative search-and-reasoning approach to identify a statue in the National Gallery of Art.

    Case: Writing Email to a Public Figure

    In this case, the model is tasked with composing an email to Abdullah Shahid Sial, a public figure. To accomplish this effectively, the model must gather comprehensive information about him through internet searches, including his social media presence (Twitter), official website, professional background, and other publicly available information sources.

    Email composition case study
    Case study showing the model's research process when tasked with writing an email to Abdullah Shahid Sial, demonstrating comprehensive information gathering capabilities.

    Reference

    [1] https://huggingface.co/datasets/OpenGVLab/MMPR-v1.2

    [2] https://huggingface.co/datasets/FanqingM/MMK12

    [3] https://huggingface.co/datasets/MMR1/MMR1-Math-RL-Data-v0

    [4] https://huggingface.co/datasets/virtuoussy/Multi-subject-RLVR

    Appendix

    Reasoning Template

    {question}
    Please reason step by step. Output the thinking process within <think> </think> tags and final answer within <answer> </answer> tags.

    Search Template

    Answer the user's question based on the provided image. Examine the image carefully and identify any recognizable entities, such as faces, objects, locations, events, logos, or text. Determine whether you have sufficient knowledge to confidently recognize the main visual element and answer the user's question. If so, first explain your reasoning, then provide a clear and direct answer.\nIf you are unable to confidently identify the visual element, stop and invoke the image search tool by appending the string <search><img></search> at the end of your response. This will trigger a Google Lens search using the original image to retrieve relevant information that can help you confirm the visual content.\nOnce you have sufficient visual understanding, combine it with the user's question and assess whether you can confidently answer. If so, answer the question directly using your own knowledge. If not, invoke the text search tool by generating a concise and specific query, and output it in the format <text_search>your query here</text_search> at the end of your response. Carefully craft your query to accurately retrieve the information needed to help answer the question. The text search tool will then use Google Search to return relevant information based on your query.\nYou must include your reasoning inside <reason>...</reason> before taking any action, whether it is calling the image search tool, generating a text search query, or providing a final answer. The reasoning may involve analysis of the original image and question, interpretation of search results, or logical steps leading to the final answer.\nAll search results will be placed inside <information> and </information> and returned to you. When you are ready to answer the question, wrap your final answer between <answer> and </answer>, without detailed illustrations. For example: <answer>Titanic</answer>.\nHere is the image and the question:\n<image>
    {question}

    ReACT Template

    # System Message
    You are a helpful assistant. You should strictly follow reason-to-act thinking process to answer user provided question. Namely, you should first analyze the question & observation (e.g., user provided image or search results) and then inform the following action. The thinking process should be within <reason> and </reason> tags. The actions you can choose are:
    <answer>xxxxx</answer>:  which returns the answer within <answer> and </answer> tags, and finishes the task.
    <search>image</search>: which searches user provided image on Google and returns image-related visual entity/concept/knowledge for further reason-to-act. The search results are placed between <observation> and </observation> tags.
    <search>text query</search>:  which generates a text query and sent to Google and returns some snippets containing the answer for further reason-to-act. The search results are placed between <observation> and </observation> tags. Note that sometimes the snippets do not contain the answer, and some alternative search might be needed.
     
    Your output format should be one of the following three formats:
    <reason> YOUR THINKING PROCESS </reason>
    <answer> YOUR ANSWER AFTER GETTING ENOUGH INFORMATION </answer>
    or
    <reason> YOUR THINKING PROCESS </reason>
    <search> IMAGE </search>
    or
    <reason> YOUR THINKING PROCESS </reason>
    <search> YOUR GENERATED TEXT QUERY FOR HELPING YOU FIND INFORMATION ON GOOGLE TO ANSWER USER QUESTION </search>
     
    Only output the final answer (in words, numbers or phrase) inside the <answer></answer> tags, without any explanations or extra information. If this is a yes-or-no question, you should only answer yes or no.
  • SAE Made Easy Framework Overview
    SAE Made Easy: A comprehensive framework for integrating Sparse Autoencoders into any neural network model

    Overview

    SAE Made Easy is inspired by a wealth of Sparse Autoencoder (SAE) work from Anthropic, OpenAI, Google, and the open-source community. SAE has become a powerful and widely-used tool in the field of explainable AI.

    This project aims to provide a simple and flexible interface that allows users to inject SAE modules into their models at any layer with minimal effort. We adopt the elegant design of Hugging Face’s peft and regard SAE training as a kind of parameter-efficient tuning: as long as the target is an nn.Module, SAE can be easily integrated and trained with only a few lines of code.

    🎯 Design Philosophy

    The code design takes inspiration from PEFT, as we believe SAE shares many structural similarities with PEFT-based methods. By inheriting from a BaseTuner class, we enable seamless SAE integration into existing models.

    Simple Integration Example

    With this design, injecting an SAE module is as simple as:

    import torch
    import torch.nn as nn
    from peft import inject_adapter_in_model
     
    from sae import TopKSaeConfig, get_peft_sae_model, PeftSaeModel
     
    class DummyModel(nn.Module):
        def __init__(self):
            super(DummyModel, self).__init__()
            self.linear = nn.Linear(10, 10)
     
        def forward(self, x):
            return self.linear(x)
     
    model = DummyModel()
    config = TopKSaeConfig(k=1, num_latents=5, target_modules=["linear"])
     
    # Inject the adapter into the model
    model = inject_adapter_in_model(config, model)
     
    # Check if the adapter was injected correctly
    result = model(torch.randn(1, 512, 10))

    PEFT-Style Workflow

    You can also obtain a PEFT-wrapped model using the magic function from the PEFT library. The rest of your workflow remains the same:

    # Get the PEFT model
    peft_model = get_peft_sae_model(model, config)
     
    result = peft_model(torch.randn(1, 512, 10))

    Model Persistence

    Loading and saving is similar to PeftModel:

    peft_model.save_pretrained("test_save_peft_model")
     
    model = DummyModel()
    peft_model = PeftSaeModel.from_pretrained(
        model,
        "test_save_peft_model",
        adapter_name="default",
        low_cpu_mem_usage=True,
    )

    📊 Data Processing

    To ensure consistency in data formatting, we recommend first processing your data and storing it in Parquet format. This standardization simplifies interface development and data preparation.

    Preprocessing Pipeline

    You are free to customize the preprocessing logic and define keys for different modalities. However, the final output should be compatible with:

    • Chat templates
    • Our preprocessing pipeline

    Example Usage

    An example preprocessing script is available at examples/data_process/llava_ov_clevr.py:

    python examples/data_process/llava_ov_clevr.py \
        --push_to_hub \
        --hf_repo_path lmms-lab/LLaVA-OneVision-Data \
        --subset "CLEVR-Math(MathV360K)" \
        --split train \
        --target_hf_repo_path lmms-lab/LLaVA-OneVision-Data-SAE

    🚀 Training

    Our trainer implementation builds on top of existing frameworks and supports the following enterprise-grade features:

    • ZeRO-1/2/3 training - Efficient memory usage for large models
    • Weights & Biases (WandB) logging - Comprehensive experiment tracking

    Scalability

    With ZeRO optimizations, you can train SAEs on 72B models using just 8×A800 GPUs - making large-scale SAE research accessible to more teams.

    Quick Start Examples

    We provide simple training recipes to help you get started quickly:

    Large-Scale Training

    • ZeRO-3, 72B training: examples/train/zero/run_qwen25_vl_72b_zero3.sh

    Medium-Scale Training

    • ZeRO-2, 7B training: examples/train/zero/run_qwen25_vl_7b_zero2.sh

    Standard Training

    • DDP, 7B training: examples/train/ddp/run_qwen25_vl_7b_ddp.sh

    Training Monitoring

    Training Logs and Metrics
    Reproducible training logs showing SAE training progress with comprehensive metrics tracking

    Our framework provides comprehensive logging for reproducible research and easy debugging.

    🏗️ Framework Features

    PEFT-Inspired Design

    • Seamless integration with existing models
    • Minimal code changes required
    • Compatible with Hugging Face ecosystem

    🔧 Flexible Configuration

    • Support for various SAE architectures
    • Configurable sparsity levels and latent dimensions
    • Target any model layer with precision

    📈 Scalable Training

    • ZeRO optimization support for large models
    • Distributed training capabilities
    • Memory-efficient implementations

    🔍 Research-Ready

    • Built-in experiment tracking
    • Reproducible training pipelines
    • Comprehensive logging and metrics

    🎓 Research Applications

    Mechanistic Interpretability

    • Feature Discovery - Identify interpretable features in neural networks
    • Activation Analysis - Study how models process information
    • Behavioral Understanding - Understand model decision-making

    Model Analysis

    • Sparse Representation - Learn compressed, interpretable representations
    • Feature Steering - Control model behavior through feature manipulation
    • Safety Research - Understand and mitigate potential risks

    If you find this repository useful, please consider checking out our previous paper on applying Sparse Autoencoders (SAE) to Large Multimodal Models, accepted at ICCV 2025.

    🌟 Key Benefits

    Ease of Use

    Transform complex SAE integration into a few lines of code with our PEFT-inspired design.

    Scalability

    Train on models ranging from 7B to 72B parameters with optimized memory usage.

    Flexibility

    Apply SAEs to any neural network layer with configurable parameters and architectures.

    Research Impact

    Accelerate mechanistic interpretability research with production-ready tools and frameworks.

    🚀 Getting Started

    1. Install the framework following our documentation
    2. Prepare your data using our preprocessing pipeline
    3. Configure SAE parameters for your specific use case
    4. Train using our optimized training scripts
    5. Analyze learned features for interpretability insights

    SAE Made Easy democratizes access to sparse autoencoder research, enabling researchers and practitioners to easily integrate interpretability tools into their workflows.

  • MMSearch-R1 Thumbnail
    MMSearch-R1: Bridging the gap between internal knowledge and external search

    MMSearch-R1 is the first end-to-end RL-based solution designed to equip LMMs with the capability to perform search on demand in real-world internet environments. It outperforms same-sized RAG baselines and approaches the performance of larger models while requiring significantly fewer search calls.

    MMSearch-R1 Overview Figure
    Figure 1: MMSearch-R1 learns to recognize the boundaries of its knowledge and perform on-demand search, significantly reducing the number of searches required while outperforming RAG-based models on knowledge-intensive and info-seeking VQA tasks.

    1. Introduction

    Scaling up vision-language paired data has become a widely adopted paradigm for Large Multimodal Models (LMMs) to acquire grounded knowledge of the visual world. Although this static training strategy has proven effective, it remains limited in capturing complex and evolving real-world knowledge. In particular, state-of-the-art LMMs continue to struggle with:

    • Long-tail facts and newly emerging information
    • Domain-specific content restricted by privacy or copyright constraints
    • Knowledge-intensive and information-seeking visual question answering tasks

    As a result, their performance remains suboptimal, frequently generating hallucinated outputs when confronted with inputs beyond their training distribution.

    Current Limitations

    Existing approaches such as Retrieval-Augmented Generation (RAG) and prompt-based agents remain suboptimal:

    • RAG methods rely on fixed retrieve-then-generate pipelines, leading to over-retrieval and high computational costs
    • Prompt-based agents can access real-time search engines but lack parameter optimization through learning

    Our Solution: MMSearch-R1

    To address these limitations, we introduce MMSearch-R1, training LMMs to acquire three essential search-related capabilities:

    1. When to search - Recognizing knowledge boundaries
    2. What to search for - Formulating effective queries
    3. How to reason over search results to answer user queries

    Key Contributions

    • 🏗️ Dataset Construction - Automated approach to construct multimodal search VQA dataset
    • 🔧 Multimodal Search Tool Integration - Real-world search pipeline with image and text tools
    • 🧠 Wiser Search via Reinforcement Learning - GRPO-based RL framework for optimal search decisions
    • 🌐 Open-Sourced Framework - Complete model, dataset, and training framework release

    2. Method

    2.1. Building Iterative Multimodal Search-Integrated RL Framework

    MMSearch-R1 Training Framework
    Figure 2: Illustration of training in MMSearch-R1. Top: The GRPO training pipeline integrated with multimodal search tools. Bottom: A detailed view of the rollout process and search tool execution.

    We build on veRL and adopt standard GRPO as our base RL algorithm, with modifications to allow search interactions during the rollout process.
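    For reference, the group-relative advantage at the heart of standard GRPO can be sketched in a few lines; this is the generic formulation, independent of the search-tool modifications described here.

    import numpy as np

    def grpo_advantages(group_rewards, eps=1e-6):
        """Standard GRPO: normalize each rollout's reward against its group's
        mean and standard deviation, so no learned value critic is needed."""
        r = np.asarray(group_rewards, dtype=np.float64)
        return (r - r.mean()) / (r.std() + eps)

    # e.g. 4 rollouts for one prompt, rule-based rewards in {0, 1}
    print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))   # roughly [1, -1, 1, -1]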

    Multimodal Search Tools

    Our framework equips models with two types of search tools (a minimal sketch follows the list):

    1. Image Search Tool

      • Takes input image and returns top-5 visually similar webpages
      • Each result includes thumbnail and title
      • Enables identification of unfamiliar visual entities
    2. Text Search Pipeline

      • Model formulates queries based on user questions
      • Retrieves relevant webpages and processes content
      • Provides concise summaries for accurate answering
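    The sketch below illustrates how these two tools might be wired into a rollout loop; the tag names, stubbed tool backends, and message format are assumptions for illustration, not the released implementation.

    ```python
    # Illustrative rollout loop with on-demand search; tags and stubs are assumptions.
    from typing import Callable, List

    def image_search(image_path: str, top_k: int = 5) -> List[dict]:
        # Stub: a real backend returns the top-k visually similar webpages,
        # each with a thumbnail and title.
        return [{"thumbnail": f"thumb_{i}.jpg", "title": f"result {i}"} for i in range(top_k)]

    def text_search(query: str) -> str:
        # Stub: a real pipeline retrieves webpages and returns a concise summary.
        return f"summary of web results for: {query}"

    def rollout(policy: Callable[[list], str], image_path: str, question: str,
                max_turns: int = 4) -> str:
        messages = [{"role": "user", "content": question, "image": image_path}]
        reply = ""
        for _ in range(max_turns):
            reply = policy(messages)                    # one generation step
            messages.append({"role": "assistant", "content": reply})
            if "<image_search>" in reply:
                messages.append({"role": "tool", "content": str(image_search(image_path))})
            elif "<text_search>" in reply:
                query = reply.split("<text_search>")[-1].split("</text_search>")[0]
                messages.append({"role": "tool", "content": text_search(query)})
            else:
                break                                   # answered from internal knowledge
        return reply
    ```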

    Reward Modeling

    Our reward system combines the following components (a minimal sketch follows the list):

    reward = (1 - α) × Acc_Score × Search_Penalty + α × Format_Score
    
    • Accuracy Score - Exact string match against ground truth (1 for correct, 0 otherwise)
    • Search Penalty - Applied to correct responses that used search, encouraging internal knowledge use
    • Format Score - Ensures model follows required output structure
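    A per-rollout reward following the formula above can be computed roughly as follows; the values of alpha and the search penalty factor are illustrative placeholders rather than the exact constants used in training.

    ```python
    def compute_reward(pred: str, gold: str, used_search: bool, format_ok: bool,
                       alpha: float = 0.1, penalty: float = 0.9) -> float:
        # alpha and penalty are illustrative placeholders.
        acc = 1.0 if pred.strip().lower() == gold.strip().lower() else 0.0   # exact match
        # Correct answers that relied on search are discounted, encouraging the
        # model to use internal knowledge when it suffices.
        search_factor = penalty if (used_search and acc == 1.0) else 1.0
        fmt = 1.0 if format_ok else 0.0                                      # output structure
        return (1 - alpha) * acc * search_factor + alpha * fmt
    ```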

    2.2. Curating Search-balanced VQA Datasets

    FVQA Dataset Construction
    Figure 3: Illustration of data construction process of FVQA dataset: (a) Automated pipeline for visual knowledge-required VQA samples collection; (b) Knowledge taxonomy; (c) Overall pipeline showing composition and origin of FVQA from various sources.

    We construct FactualVQA (FVQA), a search-balanced dataset following three key criteria:

    1. Coverage of Both Search-Required/Free Questions
    2. Concise and Verifiable Answers
    3. Diversity in Knowledge and Difficulty

    Data Construction Pipeline

    • VQA Collection - Gather candidates requiring visual or textual knowledge
    • Search Balancing - Use a preliminary model to classify search requirements (see the sketch after this list)
    • Human Annotation - Ensure diversity, authenticity, and label quality
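    One plausible form of the search-balancing step is sketched below: the preliminary model answers each question several times without search, and questions it already solves reliably are tagged as search-free. The number of attempts and the pass threshold are assumptions.

    ```python
    def classify_search_requirement(model_answers: list, gold: str,
                                    pass_threshold: float = 0.5) -> str:
        # pass_threshold and the number of sampled answers are illustrative.
        correct = sum(a.strip().lower() == gold.strip().lower() for a in model_answers)
        pass_rate = correct / max(len(model_answers), 1)
        return "search-free" if pass_rate >= pass_threshold else "search-required"
    ```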

    3. Experimental Findings

    We evaluated MMSearch-R1 against both closed-source models (GPT-4o, Gemini 2.5 Pro) and open-source models (Qwen2.5-VL series) on knowledge-intensive VQA tasks.

    Performance Results Table
    Table 1: Performance of MMSearch-R1 across benchmarks. 'Acc (%)' denotes accuracy evaluated by LLM-as-Judge, while 'SR (%)' represents the search ratio.

    Key Findings

    Finding 1: Enhanced Knowledge Boundary Recognition

    MMSearch-R1-7B outperforms same-sized RAG-based models by an average of 3% in accuracy while reducing the average search rate by 32.9%.

    Performance Comparison Analysis
    Figure 4: (a) Performance comparison between Base model and RL-trained model under RAG workflow. (b) Answer behavior breakdown of Base (inner circle) and RL (outer circle) models.

    Finding 2: Improved Query Generation and Summarization

    RL training enhances the model’s ability to generate effective text queries and summarize retrieved information under a fixed RAG setup.

    Finding 3: Better Internal Knowledge Utilization

    A clear upward trend in the Correct-without-Search proportion demonstrates improved recall and reasoning based on internal knowledge.

    Training Dynamics Analysis
    Figure 5: (a) Performance improvements of SFT and RL over Base across five VQA datasets. (b) Training dynamics of reward and search ratio for different strategies.

    Finding 4: RL vs. Supervised Learning

    RL consistently outperforms SFT across all tasks despite being trained on only about half as much data, demonstrating superior data efficiency.

    Finding 5: Balanced Training Effectiveness

    Training with balanced data and search penalty effectively guides the model to perform on-demand search without overusing the search tool.

    4. Conclusion

    MMSearch-R1 represents a significant advancement in multimodal AI, learning to:

    • Recognize knowledge gaps and boundaries
    • Selectively invoke image or text search
    • Reason effectively over retrieved content

    Our framework outperforms same-sized RAG baselines and approaches larger model performance while requiring significantly fewer search calls. This work lays the groundwork for building multimodal agents that are both adaptive and interactive, paving the way for the next major advancement in multimodal intelligence.

  • MGPO High-Resolution Visual Reasoning
    MGPO: Multi-Turn Grounding-Based Policy Optimization for high-resolution visual reasoning
    Project Resources
    Access the complete MGPO implementation and research materials

    1. Introduction

    SOTA large multimodal model (LMM) architectures, such as Qwen2.5-VL, typically build on a powerful large language model (LLM) (e.g., Qwen2.5) integrated with an external Native Resolution Vision Transformer (NaViT). However, this approach presents challenges in high-resolution real-world scenarios, as such inputs are converted into an enormous number of visual tokens, many of which are irrelevant to the downstream task. By comparison, when processing high-resolution real-world scenes, the human visual system employs task-driven visual search strategies to ground and scrutinize critical regions of interest. Motivated by this biological mechanism, we attempt to equip LMMs with similar visual search capabilities by leveraging visual grounding to focus on key image regions.

    However, empowering LMMs with such grounding-based visual reasoning capabilities is non-trivial, primarily due to the scarcity and high cost of obtaining grounding annotations for standard visual-question-answering (VQA) datasets, which are required for constructing multi-turn grounding-based conversation data for supervised fine-tuning (SFT). In this paper, we highlight that accurate grounding behavior can emerge within a reinforcement learning (RL) paradigm, even when training supervision is provided solely through a binary reward function derived from the correctness of the final answer.

    To this end, we introduce Multi-turn Grounding-based Policy Optimization (MGPO), a reinforcement learning (RL) algorithm that enables LMMs to iteratively focus on key image regions by automatically cropping sub-images based on model-predicted grounding coordinates within a multi-turn conversation framework. Given a high-resolution image and a question, the model first predicts the coordinates of key regions relevant to the query. An image cropping function is then triggered to extract and return the corresponding sub-image. In subsequent turns, the model can integrate the previous in-context conversation (including both the original image and the cropped sub-image) to solve the question.

    Examples of models trained with multi-turn grounding-based RL
    Figure 1: Examples of models trained with multi-turn grounding-based RL on high-resolution real-world tasks. The model first identifies key regions, which are then automatically cropped and returned as sub-images. Notably, despite receiving only a binary reward derived from the correctness of the final answer, the model gradually develops robust grounding capabilities throughout the RL process.

    In summary, MGPO mainly offers the following advantages:

    • Top-down and Interpretable Visual Reasoning. MGPO equips LMMs with a top-down, question-driven visual search mechanism for high-resolution scenarios and provides interpretable outputs that indicate which image regions are attended to throughout the reasoning process.
    • Overcomes Maximum Pixel Constraints. MGPO overcomes the maximum pixel limitation of LMMs. As shown in the first example of Figure 1, even when resizing a high-resolution image within pixel limits results in a blurred input, the model can still identify relevant coordinates and crop clear sub-images from the original input for further analysis.
    • Without Additional Grounding Annotations. MGPO can be post-trained directly on standard VQA datasets without the need for extra grounding annotations, and experimental results demonstrate substantial improvements in intermediate grounding performance compared to GRPO.

    Ultimately, we use MGPO to post-train Qwen2.5-VL-7B on visual question answering data with short answers, and the resulting model achieves strong intermediate grounding performance without requiring grounding annotations (examples shown in Figure 1). Compared to GRPO, MGPO yields a 5.4% improvement on the in-distribution MME-Realworld benchmark and a 5.2% gain on the challenging out-of-distribution V* Bench. Notably, with only 21K post-training samples, our model surpasses OpenAI’s o1 and GPT-4o on the OOD V* Bench.

    2. Multi-turn Grounding-Based RL

    Figure 2 illustrates a comparison of different post-training paradigms for LMMs. In our MGPO, the model operates over K sequential interactions, dynamically grounding and reasoning by conditioning on the full history of visual and textual context at each step.

    Comparison of different post-training paradigms for LMMs
    Figure 2: Comparison of different post-training paradigms for LMMs. Our MGPO automatically crops and returns sub-images to the model based on its predicted grounding coordinates, enabling the model to iteratively focus on key regions and effectively solve high-resolution visual tasks.

    Multi-turn Template without Cold Start. In practice, we observe that LMMs struggle to autonomously generate grounding coordinates during the rollout process, which hinders effective multi-turn RL. To address this, we design a fixed two-turn dialogue template, as shown in Figure 3, to explicitly activate the model’s grounding and reasoning abilities.

    Two-turn dialogue template
    Figure 3: Our two-turn dialogue template design to explicitly activate the model's grounding and reasoning abilities.
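    For concreteness, a hedged sketch of such a two-turn template is shown below; the exact prompt wording and message schema in Figure 3 may differ.

    ```python
    # Hedged sketch of the fixed two-turn template; wording is illustrative.
    def build_turn1(question: str) -> list:
        return [{
            "role": "user",
            "content": [
                {"type": "image"},   # the (possibly downscaled) high-resolution input
                {"type": "text",
                 "text": f"Question: {question}\n"
                         "First output the bounding box of the image region needed "
                         "to answer the question, as [x1, y1, x2, y2]."},
            ],
        }]

    def build_turn2(history: list) -> list:
        return history + [{
            "role": "user",
            "content": [
                {"type": "image"},   # the cropped sub-image from turn 1
                {"type": "text",
                 "text": "Here is the cropped region. Now answer the original question."},
            ],
        }]
    ```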

    Multi-turn Grounding-Based RL Process. The MGPO training process consists of the following key steps (a cropping sketch follows Figure 4):

    1. Initial Grounding: Given a high-resolution image and question, the model predicts bounding box coordinates for key regions
    2. Image Cropping: Based on predicted coordinates, relevant sub-images are automatically cropped from the original image
    3. Multi-turn Reasoning: The model integrates both original and cropped images in subsequent conversation turns
    4. Reward Learning: Binary rewards are provided based on final answer correctness, enabling the emergence of grounding behavior through RL
    MGPO training algorithm
    Figure 4: The Multi-turn Grounding-based Policy Optimization (MGPO) algorithm workflow.
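    The cropping step in this loop can be sketched as follows; the coordinate parsing and clamping logic are illustrative assumptions rather than the exact MGPO implementation.

    ```python
    # Illustrative cropping step triggered by the model's predicted coordinates.
    import re
    from PIL import Image

    def parse_bbox(model_output: str):
        # Expect something like "[x1, y1, x2, y2]" in the first-turn output.
        m = re.search(r"\[\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*,\s*([\d.]+)\s*\]",
                      model_output)
        return tuple(float(v) for v in m.groups()) if m else None

    def crop_key_region(image: Image.Image, bbox) -> Image.Image:
        # Clamp the predicted box to the original full-resolution image and crop,
        # so the sub-image keeps native detail even if the first-turn input was
        # resized to satisfy the model's maximum-pixel limit.
        x1, y1, x2, y2 = bbox
        x1, x2 = sorted((x1, x2))
        y1, y2 = sorted((y1, y2))
        w, h = image.size
        box = (max(0, int(x1)), max(0, int(y1)), min(w, int(x2)), min(h, int(y2)))
        return image.crop(box)
    ```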

    3. Experimental Results

    We evaluate MGPO on multiple high-resolution visual reasoning benchmarks and demonstrate significant improvements over baseline methods.

    3.1 Main Results

    Main experimental results
    Table 1: Performance comparison on high-resolution visual reasoning benchmarks. MGPO achieves superior performance across multiple datasets.

    Our experimental results show that MGPO yields substantial improvements:

    • 5.4% improvement on MME-Realworld benchmark compared to GRPO
    • 5.2% gain on challenging out-of-distribution V* Bench
    • Surpasses OpenAI’s o1 and GPT-4o models on OOD V* Bench with only 21K post-training samples

    3.2 Ablation Studies

    Ablation study results
    Table 2: Ablation study showing the contribution of different components in MGPO.

    3.3 Grounding Performance Analysis

    Grounding performance analysis
    Figure 5: Analysis of grounding performance showing emergence of accurate grounding behavior through RL training.

    4. Additional Analysis

    4.1 Point Counting Task

    Point counting task performance
    Table 4: Performance comparison on the point counting task. An additional point reward does not lead to significant performance improvements.

    4.2 Visualization Results

    Point prediction visualization
    Figure 8: Visualization of point predictions from the GRPO model trained with only accuracy reward.

    5. Limitation

    All experiments of MGPO are conducted using a fixed two-turn template, rather than allowing the model to autonomously decide when to perform image cropping based on the input question, as illustrated by the latest OpenAI models such as o3 and o4-mini. This limitation stems from our observation that Qwen2.5-VL, when directly subjected to RL post-training, struggles to generate grounding coordinates without explicit prompt guidance.

    Nevertheless, we believe that our trained models can be leveraged to generate high-quality chain-of-thought (CoT) data for subsequent SFT. Adopting a multi-stage training strategy that combines SFT and RL, as in DeepSeek-R1, may ultimately enable the model to autonomously decide when and how to perform grounding. We leave this direction for future work.

    Appendix

    Full conversation example
    Figure 9: A full conversation example of MGPO post-trained model on high-resolution image tasks.
  • Aero-1-Audio demonstration
    Aero-1-Audio: A compact 1.5B audio model for speech recognition and audio understanding

    What is Aero Audio?

    Aero-1-Audio is a compact audio model adept at various audio tasks, including speech recognition, audio understanding, and following audio instructions. It is part of the Aero-1 series, the first generation of lightweight multimodal models developed by LMMs-Lab, with future expansions planned across additional modalities.

    1. Built upon the Qwen-2.5-1.5B language model, Aero delivers strong performance across multiple audio benchmarks while remaining parameter-efficient, even compared with larger advanced models such as Whisper, Qwen2-Audio, and Phi-4-Multimodal, or commercial services like ElevenLabs/Scribe.

    2. Aero is trained within one day on 16 H100 GPUs using just 50k hours of audio data. This suggests that audio model training can be sample-efficient when the data is high quality and carefully filtered.

    3. Aero can accurately perform ASR and audio understanding on continuous audio inputs up to 15 minutes in length, a scenario we find still challenging for other models.

    ASR & Audio Understanding Performance

    We evaluate our model’s performance across multiple dimensions and benchmarks. Let’s first take a look at its overall performance compared with other models.

    ASR and Understanding Performance Comparison
    Performance comparison showing Aero-1-Audio's balance between parameter efficiency and performance across ASR and audio understanding benchmarks
    Detailed ASR Performance
    Detailed ASR performance metrics showing Aero-1-Audio achieving optimal trade-off between parameter efficiency and performance

    Our model achieves a balance between performance and parameter efficiency. We evaluate it across multiple ASR and audio understanding benchmarks. On ASR tasks, our model attains the lowest WER scores on datasets such as AMI, LibriSpeech, and SPGISpeech. It also demonstrates strong audio understanding capabilities on various comprehension benchmarks. As illustrated in the plotted graph, our model falls within the highlighted triangular region that represents an optimal trade-off between parameter efficiency and performance.

    Data Distribution

    We present the contributions of our data mixture here. Our SFT data mixture includes over 20 publicly available datasets, and comparisons with other models highlight the data’s lightweight nature.

    Data Distribution
    Training data distribution showing the lightweight nature of our approach with approximately 50,000 hours of audio data from publicly available datasets
    Training Time Comparison
    Training time comparison demonstrating sample efficiency - our dataset is over 100 times smaller than comparable models while achieving competitive performance

    *The hours of some training datasets are estimated and may not be fully accurate

    One of the key strengths of our training recipe lies in the quality and quantity of our data. Our training dataset consists of approximately 5 billion tokens, corresponding to around 50,000 hours of audio. Compared to models such as Qwen-Omni and Phi-4, our dataset is over 100 times smaller, yet our model achieves competitive performance. All data is sourced from publicly available open-source datasets, highlighting the sample efficiency of our training approach. A detailed breakdown of our data distribution is provided below, along with comparisons to other models.

    What’s insightful

    In this release, our primary focus is on developing an audio model capable of handling multiple audio tasks. The following examples showcase its core abilities across tasks such as audio understanding and speech recognition. Most notably, we highlight the model’s capability to perform long-form ASR, as demonstrated in the example below.

    Long ASR

    A common approach for current long-form ASR tasks is to split the audio into smaller, processable chunks and perform ASR on each segment individually. However, with the advancement of large language models (LLMs), long-context understanding has become increasingly important. We argue that a model’s ability to process long audio sequences continuously is essential for effective audio understanding and should be considered a critical capability. To demonstrate this, we set up a simple use case using examples from an NVIDIA conference and calculate the WER with respect to the auto-generated YouTube subtitles.

    Long ASR Evaluation
    Heatmap comparison of different models performing ASR tasks with varying audio input lengths, showing Aero's stability across different lengths

    The image above presents a heatmap comparison of different models performing ASR tasks on a video with varying audio input lengths. As shown in the heatmap, Qwen-Omni and Phi-4 exhibit instability across different lengths and do not consistently produce the desired output.

    Note: The ground truth is derived from the auto-generated subtitles downloaded from YouTube. Therefore, the WER does not necessarily imply that our model achieves perfect results, but rather demonstrates that our model is comparable to the YouTube ASR pipeline.
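    For reference, the WER against the YouTube subtitles can be computed roughly as follows; the file names and text normalization are illustrative assumptions.

    ```python
    # Hedged WER computation sketch; paths and normalization are placeholders.
    import re
    import jiwer

    def normalize(text: str) -> str:
        text = re.sub(r"[^a-z0-9' ]+", " ", text.lower())   # drop punctuation
        return re.sub(r"\s+", " ", text).strip()

    reference = normalize(open("youtube_auto_subtitles.txt").read())
    hypothesis = normalize(open("model_transcript.txt").read())
    print(f"WER: {jiwer.wer(reference, hypothesis):.3f}")
    ```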

    Model’s Output

    Qwen Omni (12 minutes chunk)

    When processing the audio in 12-minute chunks, Qwen-Omni failed to recognize the full speech content and was only able to capture portions of the audio.


    that’s like what’s going on why does itfocused on um ai and parallel parallelizable workloads but it’s still general to an extent it’s not as use case specific as something like grock with a queue that’s really designed to you know spit out tokens as fast as possible and that like is a goldilocks zone where it’s flexible enough to handle different workloads but not um but still much faster than um a traditional cpu and that google is one of the only companies that has a scaled internal custom silicon effort

    Phi-4-Multimodal (full chunk)

    When processing the full audio without splitting, the Phi-4-Multimodal model began to ignore the instructions and instead generated an overall summary of the audio.


    The conversation covered Nvidia’s focus on inference over training, the partnership with GM, the release of GUT-N1 for humanoid robotics, and the impact of China’s AI initiatives on global chip demand.

    Aero (full chunk)

    Aero Audio is able to generate the complete ASR output and accurately identify the full transcript.


    Welcome to the brainstorm episode eighty two frank downing joining us recap of nvidia’s gtc conference that is the gpu technology conference frank what happened what were the big takeaways i on my side i saw a gm and in video partnering but we can circle back to that what was

    right nice timing good timing all right we’ll see everyone next week see everyone thank you

    Results on LibriSpeech Unchunked

    In its original release, LibriSpeech splits long recordings into smaller chunks, and the overall Word Error Rate (WER) is calculated on these segmented samples. However, as we observed, it is straightforward to concatenate the chunks back into their original form, thereby creating a simple long-form speech recognition benchmark. We evaluated various models on this benchmark and found that their performance generally declined compared to their results on shorter samples. Among the models tested, our model achieved the best performance, showing the smallest drop in accuracy relative to the chunked version.
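    For illustration, rebuilding a long-form sample from one LibriSpeech chapter can be as simple as the sketch below; the paths and output format are placeholders.

    ```python
    # Concatenate one chapter's utterances back into a long-form recording.
    import glob
    import numpy as np
    import soundfile as sf

    def concat_chapter(chapter_dir: str, out_path: str = "chapter_long.flac"):
        # LibriSpeech files are named <speaker>-<chapter>-<index>.flac, so a
        # lexicographic sort restores the original utterance order.
        paths = sorted(glob.glob(f"{chapter_dir}/*.flac"))
        chunks, sr = [], None
        for p in paths:
            wav, sr = sf.read(p)
            chunks.append(wav)
        sf.write(out_path, np.concatenate(chunks), sr)
    ```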

    | Model | LS.Clean | LS.Other | LS.Clean (Long) | LS.Other (Long) | Avg Diff |
    | --- | --- | --- | --- | --- | --- |
    | Phi-4 | 1.68 | 3.83 | 11.51 | 24.72 | 30.72 |
    | Qwen2-Audio-Instruct | 3.59 | 7.46 | 93.01 | 93.63 | 175.59 |
    | Qwen2.5-Omni | 1.80 | 3.40 | 13.03 | 13.29 | 21.12 |
    | Aero-1-Audio | 1.49 | 3.17 | 5.31 | 11.71 | 12.36 |

    We present the evaluation of various models on the unchunked LibriSpeech dataset. The average result is calculated by averaging the WER score differences across the same splits. All models show some degradation when handling longer audio, whereas our model exhibits the least amount of performance drop.

    Evaluation Result

    We now present the full evaluation results with detailed scores.

    ASR Benchmarks

    | Model | Parameters | AMI | Earnings22 | LibriSpeech Clean | LibriSpeech Other | SPGISpeech | TedLium | Average |
    | --- | --- | --- | --- | --- | --- | --- | --- | --- |
    | ElevenLabs/Scribe | N/A | 14.43 | 12.14 | 1.79 | 3.31 | 3.30 | 3.17 | 6.36 |
    | REV.AI/Fusion | N/A | 10.93 | 12.09 | 2.88 | 6.23 | 4.05 | 2.80 | 6.50 |
    | OpenAI/Whisper-large-v3 | 1.5B | 15.95 | 11.29 | 2.01 | 3.91 | 2.94 | 3.86 | 6.66 |
    | Assembly.AI/AssemblyBest | N/A | 15.64 | 13.54 | 1.74 | 3.11 | 1.81 | 3.43 | 6.55 |
    | Alibaba/Qwen2.5-Omni | 7B | 12.41 | 12.74 | 1.80 | 3.40 | 2.35 | 3.11 | 5.97 |
    | Microsoft/Phi-4-Multimodal | 4B+1.6B | 11.45 | 10.50 | 1.67 | 3.82 | 3.11 | 2.89 | 5.57 |
    | LMMs-Lab/Aero-1-Audio | 1.5B | 10.53 | 13.79 | 1.49 | 3.17 | 1.97 | 2.87 | 5.64 |

    We evaluate our model on AMI, Earnings22, LibriSpeech, SPGISpeech, and TedLium. Our model achieves the second-best WER score compared to other models, while maintaining a small and efficient size.

    Audio Understanding Result

    We then test our model’s understanding capabilities across three dimensions: audio analysis and understanding, speech instruction following, and audio scene understanding.

    | Model | Parameters | AIR-Chat Speech | AIR-Chat Sound | AIR-Chat Music | AIR-Chat Mix | AIR-Chat Avg | MMAU testmini | OpenHermes test | Alpaca Audio test | AIR-Foundation Speech | AIR-Foundation Sound | AIR-Foundation Music | Average |
    | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
    | Alibaba/Qwen2-Audio-Instruct | 7B | 7.2 | 7.0 | 6.8 | 6.8 | 6.9 | 49.2 | 46.8 | 49.2 | 62.9 | 55.4 | 56.8 | 56.7 |
    | Alibaba/Qwen2.5-Omni | 7B | 6.8 | 5.7 | 4.8 | 5.4 | 5.7 | 65.6 | 57.2 | 57.4 | 67.2 | 76.3 | 63.0 | 64.4 |
    | Microsoft/Phi-4-Multimodal | 4B+1.6B | 7.5 | 7.0 | 6.7 | 6.8 | 7.0 | 65.0 | 57.8 | 62.6 | 48.3 | 40.6 | 35.5 | 52.8 |
    | Tencent/Ola | 7B | 7.3 | 6.4 | 5.9 | 6.0 | 6.4 | 70.3 | 62.6 | 62.8 | 58.8 | 70.4 | 53.1 | 63.2 |
    | Tencent/Vita 1.5 | 7B | 4.8 | 5.5 | 4.9 | 2.9 | 4.5 | 35.5 | 9.6 | 7.0 | 31.5 | 24.1 | 25.5 | 28.6 |
    | InspirAI/Mini-Omni2 | 0.5B | 3.6 | 3.5 | 2.6 | 3.1 | 3.2 | - | - | - | - | - | - | - |
    | LMMs-Lab/Aero-1-Audio | 1.5B | 5.7 | 5.3 | 4.7 | 5.8 | 5.4 | 59.4 | 40.0 | 45.4 | 48.0 | 57.6 | 44.2 | 50.5 |

    We conducted evaluations on AIR-Bench-Chat and MMAU for audio analysis and understanding. Our model achieved an average score of 5.35, outperforming Mini-Omni2 and Vita. For Audio Instruction Following, we evaluated on OpenHermes and Alpaca-Audio, following the same pipeline as AudioBench. Our model demonstrates a strong ability to understand instructions in speech and provide correct responses. Additionally, when evaluated on AIR-Bench-Foundation for Audio Scene Understanding, our model outperformed Phi-4-Multimodal in the sound and music dimensions. Overall, the average score of our model indicates strong performance relative to other models with larger parameter sizes.

    Training Techniques

    Dynamic Batch Size

    We implemented a dynamic batching strategy based on the estimated token length to control the batch size per device. In many cases, using a fixed batch size requires setting it conservatively small to avoid out-of-memory (OOM) errors on longer samples, which leads to underutilization of computing resources. To address this, we group samples into batches such that the total token length stays within a predefined threshold, thereby minimizing computational waste and improving efficiency.
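    A minimal sketch of this length-aware batching is shown below; the token budget and the pre-computed length field are illustrative.

    ```python
    # Group samples so each batch stays under a token budget instead of a fixed size.
    from typing import Iterable, List

    def dynamic_batches(samples: Iterable[dict], max_tokens: int = 32768) -> List[List[dict]]:
        batches, current, current_tokens = [], [], 0
        for sample in samples:
            n = sample["est_num_tokens"]          # pre-computed length estimate
            if current and current_tokens + n > max_tokens:
                batches.append(current)
                current, current_tokens = [], 0
            current.append(sample)
            current_tokens += n
        if current:
            batches.append(current)
        return batches
    ```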

    Sequence Packing

    To further optimize dynamic batching, we implemented sequence packing for both the audio encoder and the language model, enabling larger batch sizes and faster training. This operation was then fused with the Liger kernel to achieve even higher throughput and lower memory usage. With a fixed packing length of 4096 to regulate the dynamic batch size, the average Model FLOP Utilization (MFU) was limited to 0.03. However, with sequence packing enabled, the average MFU increased to approximately 0.34, demonstrating a significant improvement in training efficiency.
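    Conceptually, the packing step can be sketched as follows; the boundary bookkeeping that would feed a block-diagonal attention mask is simplified, and the fused Liger-kernel path is not shown.

    ```python
    # Pack token sequences into fixed-length buffers and record per-sample boundaries.
    from typing import List, Tuple

    def pack_sequences(token_seqs: List[List[int]],
                       packing_length: int = 4096) -> Tuple[list, list]:
        packs, boundaries = [], []
        current, cu_seqlens = [], [0]
        for seq in token_seqs:
            seq = seq[:packing_length]            # assume each sample fits the budget
            if current and len(current) + len(seq) > packing_length:
                packs.append(current)
                boundaries.append(cu_seqlens)
                current, cu_seqlens = [], [0]
            current.extend(seq)
            cu_seqlens.append(len(current))
        if current:
            packs.append(current)
            boundaries.append(cu_seqlens)
        return packs, boundaries                  # boundaries mask attention across samples
    ```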

    | Packing Length | Sequence Packing | Num GPUs | Avg MFU | ZeRO | OOM |
    | --- | --- | --- | --- | --- | --- |
    | 4096 | FALSE | 64 | 0.03 | 2 | No |
    | 32768 | FALSE | 64 | NA | 2 | Yes |
    | 32768 | TRUE | 32 | 0.34 | 2 | No |

    We tested our implementation under different settings to demonstrate its efficiency.

  • EgoLife Project Teaser
    EgoLife: Towards Egocentric Life Assistant - A comprehensive project developing AI-powered wearable glasses for personal efficiency enhancement

    We introduce EgoLife, a project to develop an egocentric life assistant that accompanies and enhances personal efficiency through AI-powered wearable glasses 👓. To lay the foundation for this assistant, we conducted a comprehensive data collection study where six participants lived together for one week, continuously recording their daily activities (including discussions 💬, shopping 🛍️, cooking 🍳, socializing 👥, and entertainment 🎮) using AI glasses for multimodal egocentric video capture, along with synchronized third-person-view video references. This effort resulted in the EgoLife Dataset 📖, a comprehensive 300-hour egocentric, interpersonal, multiview, and multimodal daily life dataset with intensive annotation. Leveraging this dataset, we introduce EgoLifeQA ❓, a suite of 3K long-context, life-oriented question-answering tasks designed to provide meaningful assistance in daily life by addressing practical questions such as recalling past relevant events, monitoring health habits, and offering personalized recommendations.

    To address the key technical challenges of 1) developing robust visual-audio models for egocentric data, 2) enabling identity recognition, and 3) facilitating long-context question answering over extensive temporal information, we introduce EgoButler 🫡, an integrated system comprising EgoGPT 🧠 and EgoRAG 🔍. EgoGPT is a vision-language model trained on egocentric datasets, achieving state-of-the-art performance on egocentric video understanding. EgoRAG is a retrieval-based component that supports answering ultra-long-context questions. Our experimental studies verify their working mechanisms and reveal critical factors and bottlenecks, guiding future improvements. By releasing our datasets, models, and benchmarks, we aim to stimulate further research in egocentric AI assistants.