
Tags: #multimodal

  • Multimodal-SAE Banner
    Multimodal-SAE: First demonstration of SAE-based feature interpretation in Large Multimodal Models

    Overview

    For the first time in the multimodal domain, we demonstrate that features learned by Sparse Autoencoders (SAEs) in a smaller Large Multimodal Model (LMM) can be effectively interpreted by a larger LMM. Our work introduces the use of SAEs to analyze the open-semantic features of LMMs, providing a breakthrough solution for feature interpretation across various model scales.

    Inspiration and Motivation

    This research is inspired by Anthropic’s remarkable work on applying SAEs to interpret features in large-scale language models. In multimodal models, we discovered intriguing features that:

    • Correlate with diverse semantics across visual and textual modalities
    • Can be leveraged to steer model behavior for precise control
    • Enable deeper understanding of LMM functionality and decision-making

    Technical Approach

    SAE Training Pipeline

    The Sparse Autoencoder (SAE) is trained using a targeted approach; a minimal sketch follows the list:

    1. Integration Strategy - SAE integrated into a specific layer of the model
    2. Frozen Architecture - All other model components remain frozen during training
    3. Training Data - Utilizes the LLaVA-NeXT dataset for comprehensive multimodal coverage
    4. Feature Learning - Learns sparse, interpretable representations of multimodal features
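
    A minimal sketch of this setup, assuming a standard ReLU SAE with an L1 sparsity penalty trained on activations captured from the frozen model (the dimensions, loss weighting, and training step below are illustrative assumptions, not the paper's exact configuration):

    ```python
    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        """Over-complete SAE that reconstructs one layer's activations."""
        def __init__(self, d_model: int, d_hidden: int):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_hidden)
            self.decoder = nn.Linear(d_hidden, d_model)

        def forward(self, x):
            f = torch.relu(self.encoder(x))   # sparse feature activations
            return self.decoder(f), f

    def sae_loss(x, x_hat, f, l1_coeff=1e-3):
        # Reconstruction error plus an L1 penalty that encourages sparsity.
        return (x - x_hat).pow(2).mean() + l1_coeff * f.abs().mean()

    # Illustrative training step. `acts` stands in for residual-stream
    # activations captured (e.g., via a forward hook) at the chosen layer
    # of the frozen LMM; only the SAE's parameters are updated.
    sae = SparseAutoencoder(d_model=4096, d_hidden=4096 * 8)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

    acts = torch.randn(32, 4096)              # stand-in activation batch
    x_hat, f = sae(acts)
    loss = sae_loss(acts, x_hat, f)
    loss.backward()
    opt.step()
    ```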

    Auto-Explanation Pipeline

    Our novel auto-explanation pipeline analyzes visual features through the following steps; a sketch of the loop follows the list:

    • Activation Region Analysis - Identifies where features activate in visual inputs
    • Semantic Correlation - Maps features to interpretable semantic concepts
    • Cross-Modal Understanding - Leverages larger LMMs for feature interpretation
    • Automated Processing - Scalable interpretation without manual annotation
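
    A hypothetical sketch of that loop, with the helper callables (`feature_acts`, `crop_active_region`, `ask_lmm`) standing in for project-specific code rather than the paper's actual implementation:

    ```python
    from typing import Callable, List

    def explain_feature(
        feature_id: int,
        images: List[object],
        feature_acts: Callable[[object, int], List[float]],  # per-patch activations
        crop_active_region: Callable[[object, int], object],
        ask_lmm: Callable[[List[object], str], str],
        top_k: int = 16,
    ) -> str:
        """Hypothetical auto-explanation loop: rank images, localize the
        activation region, then ask a larger LMM to name the shared concept."""
        # 1. Rank images by the feature's peak activation anywhere in the image.
        ranked = sorted(images, key=lambda im: -max(feature_acts(im, feature_id)))
        top = ranked[:top_k]
        # 2. Localize where the feature fires (e.g., crop or highlight patches).
        regions = [crop_active_region(im, feature_id) for im in top]
        # 3. Ask the larger LMM what the highlighted regions have in common.
        return ask_lmm(regions, "What visual concept do these regions share?")
    ```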

    Feature Steering and Control

    Feature Steering Demonstration
    Demonstration of feature steering: the learned features can be used to control model behavior and generate desired outputs.

    Behavioral Control Capabilities

    The learned features enable precise model steering (see the sketch after this list) by:

    • Selective Feature Activation - Amplifying specific semantic features
    • Behavioral Modification - Directing model attention and responses
    • Interpretable Control - Understanding why specific outputs are generated
    • Fine-Grained Manipulation - Precise control over model behavior
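
    A minimal sketch of steering with a single feature, assuming the SAE layout above: add a scaled copy of the feature's decoder direction to the layer's activations during inference. The `strength` knob is an illustrative assumption, not a published value.

    ```python
    import torch

    def steer_with_feature(acts: torch.Tensor,
                           decoder_weight: torch.Tensor,
                           feature_id: int,
                           strength: float = 8.0) -> torch.Tensor:
        """Add a scaled copy of one SAE feature's decoder direction to the
        layer's activations, nudging generation toward that concept."""
        direction = decoder_weight[:, feature_id]      # (d_model,) column
        direction = direction / direction.norm()
        return acts + strength * direction             # broadcasts over tokens

    # Usage idea: inside a forward hook at the SAE's layer, replace the layer
    # output with steer_with_feature(output, sae.decoder.weight, feature_id=123).
    ```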

    Key Contributions

    🔬 First Multimodal SAE Implementation

    Pioneering application of SAE methodology to multimodal models, opening new research directions in mechanistic interpretability.

    🎯 Cross-Scale Feature Interpretation

    Demonstration that smaller LMMs can learn features interpretable by larger models, enabling scalable analysis approaches.

    🎮 Model Steering Capabilities

    Practical application of learned features for controllable model behavior and output generation.

    🔄 Auto-Explanation Pipeline

    Automated methodology for interpreting visual features without requiring manual semantic labeling.

    Research Impact

    Mechanistic Interpretability Advancement

    This work represents a significant advancement in understanding how multimodal models process and integrate information across modalities.

    Practical Applications

    • Model Debugging - Understanding failure modes and biases
    • Controllable Generation - Steering model outputs for specific applications
    • Safety and Alignment - Better control over model behavior
    • Feature Analysis - Deep understanding of learned representations

    Future Directions

    Our methodology opens new research avenues in:

    1. Cross-Modal Feature Analysis - Understanding feature interactions across modalities
    2. Scalable Interpretability - Extending to larger and more complex models
    3. Real-Time Steering - Dynamic control during inference
    4. Safety Applications - Preventing harmful or biased outputs

    Technical Details

    Architecture Integration

    The SAE is carefully integrated to:

    • Preserve Model Performance - Minimal impact on original capabilities
    • Capture Rich Features - Learn meaningful sparse representations
    • Enable Interpretation - Facilitate analysis by larger models
    • Support Steering - Allow runtime behavioral modification

    Evaluation Methodology

    Our approach is validated through:

    • Feature Interpretability - Qualitative analysis of learned features
    • Steering Effectiveness - Quantitative measurement of behavioral control
    • Cross-Model Validation - Testing interpretation across different model sizes
    • Semantic Consistency - Verifying feature stability and meaning

    Conclusion

    Multimodal-SAE represents a breakthrough in multimodal mechanistic interpretability, providing the first successful demonstration of SAE-based feature interpretation in the multimodal domain. Our work enables:

    • Deeper Understanding of how LMMs process multimodal information
    • Practical Control over model behavior through feature steering
    • Scalable Interpretation methods for increasingly complex models
    • Foundation Research for future advances in multimodal AI safety and control

    This research establishes a new paradigm for understanding and controlling Large Multimodal Models, with significant implications for AI safety, controllability, and interpretability research.

  • The development of video large multimodal models (LMMs) has been hindered by the difficulty of curating large amounts of high-quality raw data from the web. To address this, we consider an alternative approach, creating a high-quality synthetic dataset specifically for video instruction-following, namely LLaVA-Video-178K. This dataset includes key tasks such as detailed captioning, open-ended question-answering (QA), and multiple-choice QA. By training on this proposed dataset, in combination with existing visual instruction tuning data, we introduce LLaVA-Video, a new video LMM. Our experiments demonstrate that LLaVA-Video achieves strong performance across various video benchmarks, highlighting the effectiveness of our dataset. We plan to release the dataset, its generation pipeline, and the model checkpoints.

    Video Instruction-Following Data Synthesis

    A high-quality dataset for video instruction tuning is crucial for developing effective video-language models. We identify a key factor in building such datasets: ensuring richness and diversity in both video content and its language annotations. We perform a comprehensive survey of existing video benchmarks, covering various public video captioning and question-answering datasets, and identify ten unique video sources that contribute to over 40 video-language benchmarks. From each source, we select videos that exhibit significant temporal dynamics. To maintain diversity in the annotations, we establish a pipeline capable of generating detailed captions for videos of any length. Additionally, we define 16 types of questions that guide GPT-4o in creating question-answer pairs to assess the perceptual and reasoning skills of video-language models.

    Video Sources

    Video Sources for LLaVA-Video
    Video sources in the proposed LLaVA-Video-178K: the relationship between the 10 video sources we utilize and other existing video-language datasets.

    We noticed that although different video-language datasets focus on various video understanding tasks, most are sourced from ten main video sources, which offer a wide range of video data from different websites, viewpoints, and domains. The relationship between these ten selected video sources and other datasets is shown in the figure above. We select dynamic videos from these sources; the selection logic is detailed in the paper.

    Automated Generation for Video Detail Description

    LLaVA-Video Data Creation
    The video detail description creation pipeline. A three-level creation pipeline is considered, with each level developed via a recurrent approach. Note that t is the index of the time interval at its own level, and T is the index of the last time interval.

    For selected videos, we use GPT-4o to systematically describe their content. We start by sampling video frames at one frame per second (fps). However, due to the input size constraints of GPT-4o, we cannot use all sampled frames at once. Instead, we describe the videos sequentially, as shown in the figure above, creating descriptions at three distinct levels.
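
    A sketch of the recurrent idea at one level, assuming a hypothetical `caption_chunk` wrapper around a GPT-4o call: frames are processed in fixed-size chunks, and each call is conditioned on the description accumulated so far. The chunk size is illustrative.

    ```python
    from typing import Callable, List

    def describe_video(frames: List[object],
                       caption_chunk: Callable[[List[object], str], str],
                       chunk_size: int = 10) -> str:
        """Describe a long video chunk by chunk, carrying the running
        description forward so each call has the prior context."""
        summary = ""
        for t in range(0, len(frames), chunk_size):
            chunk = frames[t:t + chunk_size]
            summary = caption_chunk(chunk, summary)  # new frames + context so far
        return summary
    ```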

    Automated Generation for Video Question Answering

    In addition to detailed video descriptions, our dataset includes a variety of question-answer pairs designed for complex interactions. This setup improves the video understanding model's ability to handle real-life queries. We refer to public video question-answering benchmarks to organize these questions into 16 specific categories. Given a detailed video description, we use GPT-4o to generate at most one question-answer pair for each type of question. Please refer to the paper for more details on the question types and the generation process.
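
    A sketch of this generation loop under stated assumptions: the prompt wording and the NONE convention are illustrative, and `ask_gpt4o` stands in for an actual API call.

    ```python
    from typing import Callable, Dict, List, Optional

    def generate_qa(description: str,
                    question_types: List[str],
                    ask_gpt4o: Callable[[str], Optional[str]]) -> Dict[str, str]:
        """Generate at most one QA pair per question type from a detailed
        video description."""
        pairs: Dict[str, str] = {}
        for qtype in question_types:
            prompt = (f"Video description:\n{description}\n\n"
                      f"Write one '{qtype}' question about the video and answer "
                      f"it, or reply NONE if this type does not apply.")
            reply = ask_gpt4o(prompt)
            if reply and reply.strip() != "NONE":
                pairs[qtype] = reply
        return pairs
    ```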

    Dataset Statistics

    We carefully select from our collected data sources to form a balanced and comprehensive collection, resulting in a total of 178K videos and 1.3M instruction-following samples. This includes 178K captions, 960K open-ended QAs, and 196K multiple-choice QAs.

    Dataset Comparison

    We provide a comparison of high-quality instruction-following video-language datasets, with a focus on synthetic data created with strong AI models, as shown in Table 1.

    A broad collection of dynamic videos. In terms of video sources, although LLaVA-Hound contains the largest number of videos, 44% of its video data are sourced from WebVid, where most videos are static. ShareGPT4Video includes 30% of its videos from Pexels, Pixabay, and Mixkit, which are aesthetically pleasing but mostly static. Additionally, the majority of its videos come from Panda-70M, which consists of short clips cut from longer videos, suggesting simpler plots. In contrast, we carefully select video sources that offer dynamic, untrimmed videos with complex plots, which are crucial for developing a powerful video understanding model.

    High frames per second. Regarding frame sampling in language annotations, the proposed dataset uses 1 FPS, while other datasets use much lower frame rates. LLaVA-Hound uniformly samples 10 frames from a video of any length, for an average of 0.008 FPS (roughly 10 frames over a 20-minute video), which may miss fine details. ShareGPT4Video picks key frames using CLIP based on frame uniqueness; this method can also miss subtle changes in the video, because CLIP embeddings do not capture fine-grained dynamics well. Our method samples at 1 FPS without any key-frame selection algorithm, ensuring that detailed temporal information is expressed in the annotations with high coverage.
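
    The effective sampling rates follow directly from frame count and duration; as a quick check (the 1250-second average duration is inferred from the reported 0.008 FPS, not taken from the dataset):

    ```python
    def effective_fps(num_frames: int, duration_s: float) -> float:
        return num_frames / duration_s

    print(effective_fps(10, 1250))    # fixed 10 frames -> 0.008 FPS
    print(effective_fps(1250, 1250))  # 1 FPS sampling stays at 1.0 regardless
    ```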

    Diverse tasks. The proposed dataset covers three common task types, including captioning, free-form QA, and closed-form QA, while existing datasets cover only a subset. Meanwhile, the quality and number of samples in our dataset are higher.

  • LLaVA-OneVision
    LLaVA-OneVision: A unified model for single-image, multi-image, and video understanding

    Overview

    We present LLaVA-OneVision, a family of open large multimodal models (LMMs) developed by consolidating our insights into data, models, and visual representations in the LLaVA-NeXT blog series. LLaVA-OneVision is the first single model that can simultaneously push the performance boundaries of open LMMs in three important computer vision scenarios: single-image, multi-image, and video scenarios.

    Key Features

    Unified Architecture

    LLaVA-OneVision is designed to have a similar maximum visual token count across different scenarios, enabling flexible extension to multiple visual signal types while maintaining consistent performance.
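
    One way to picture this token budgeting, as a sketch with made-up numbers (the per-view token count and the pooling factor are assumptions, not the model's published configuration): each scenario trades high-resolution crops for images or frames so that all three land near the same visual-token ceiling.

    ```python
    BASE = 729  # assumed tokens per encoded crop/frame

    def visual_tokens(scenario: str, n: int) -> int:
        if scenario == "single-image":   # base view plus n high-res crops
            return (1 + n) * BASE
        if scenario == "multi-image":    # n images, one view each
            return n * BASE
        if scenario == "video":          # n frames, 2x2-pooled per frame
            return n * (BASE // 4)
        raise ValueError(scenario)

    # Roughly the same ceiling in all three scenarios:
    print(visual_tokens("single-image", 8),   # 6561
          visual_tokens("multi-image", 9),    # 6561
          visual_tokens("video", 36))         # 6552
    ```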

    Model Sizes

    • 0.5B parameters - Lightweight deployment
    • 7B parameters - Balanced performance
    • 72B parameters - State-of-the-art capabilities

    Emerging Capabilities

    The design of LLaVA-OneVision enables strong transfer learning across different modalities and scenarios, yielding impressive emerging capabilities:

    1. Cross-Scenario Understanding

    Seamlessly process and understand content across single images, multiple images, and videos within a unified framework.

    2. Advanced Visual Analysis

    • Diagram and table interpretation - Understanding complex visual structures
    • Multi-screenshot interaction - Analyzing relationships across multiple screens
    • Set-of-mark object referencing - Precise object identification and tracking

    3. Video Capabilities

    • Image-to-video generation understanding - Comprehending temporal transitions
    • Video analysis and comparison - Deep understanding of video content
    • Multi-camera video interpretation - Processing footage from multiple viewpoints
    • Detailed video subject description - Rich, contextual video narration

    Strong Transfer Learning

    Importantly, the design of LLaVA-OneVision allows strong transfer learning across different modalities/scenarios. In particular, strong video understanding and cross-scenario capabilities are demonstrated through task transfer from images to videos, showcasing the model’s ability to generalize learned representations across visual domains.

    Development Roadmap

    LLaVA-OneVision represents a significant milestone in our iterative improvements through the LLaVA-NeXT series, focusing on:

    • Enhanced reasoning capabilities
    • Improved OCR performance
    • Expanded world knowledge
    • Advanced multimodal understanding
  • LongVA Visual Needle-in-a-Haystack Heatmap
    LongVA's performance on Visual Needle-In-A-Haystack benchmark showing accurate retrieval across long video sequences

    Overview

    Gemini has amazed the world with its capability to understand hour-long videos. However, we still lack an open-source alternative with similar capabilities. Our latest research presents an innovative solution towards long-video LMMs, shifting the focus from reducing the number of visual tokens per frame to leveraging the long-context capabilities of the language model.

    Here, we present our state-of-the-art video model, Long Video Assistant (LongVA), and our novel benchmark, Visual Needle-In-A-Haystack (V-NIAH).

    Key Innovations

    🔄 Long Context Transfer

    We discovered and verified that the long context capability of language models can be directly transferred to the video domain in modality-aligned multi-modal models. On V-NIAH, LongVA is the only open-source model capable of accurately retrieving visual information from inputs with:

    • 2000+ frames
    • 200K+ visual tokens

    🎯 UniRes: Unified Visual Encoding

    We propose UniRes, a unified visual encoding scheme that encodes both images and videos. In UniRes, a video is encoded in the same way as a sequence of multiple image crops.
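
    A minimal sketch of the idea, assuming a generic `vision_encoder` that maps one view to a token grid: the same code path encodes an image's high-resolution crops and a video's frames, so shapes here are illustrative.

    ```python
    import torch

    def encode_unires(vision_encoder, images: torch.Tensor) -> torch.Tensor:
        """Encode n image crops -- or n video frames -- with the identical
        code path and concatenate the resulting tokens into one sequence.

        images: (n, 3, H, W); returns (n * tokens_per_view, d).
        """
        per_view = [vision_encoder(img.unsqueeze(0)).squeeze(0) for img in images]
        return torch.cat(per_view, dim=0)
    ```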

    Key Benefits:

    • Leverages the Long Context Transfer property
    • Enables superior zero-shot performance in video tasks
    • No video-specific training data required

    Performance Highlights

    🏆 State-of-the-Art Results

    LongVA achieves state-of-the-art performance on the comprehensive Video-MME benchmark among 7B models.

    Key Performance Features:

    • Performance increases with denser sampling of video frames
    • Superior zero-shot capabilities on video understanding tasks
    • Comprehensive ablation studies validating improvement sources

    📊 V-NIAH Benchmark

    Our novel Visual Needle-In-A-Haystack (V-NIAH) benchmark provides (an evaluation sketch follows the list):

    • Rigorous evaluation of long-context visual understanding
    • Testing retrieval accuracy across extended video sequences
    • Open-source evaluation framework for the community
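
    A sketch of a V-NIAH-style evaluation loop under simplifying assumptions: the substring-match scoring and the `model` callable are illustrative, not the benchmark's actual protocol.

    ```python
    from typing import Callable, Dict, List

    def v_niah_eval(haystack: List[object], needle: object,
                    question: str, answer: str,
                    model: Callable[[List[object], str], str],
                    depths=(0.0, 0.25, 0.5, 0.75, 1.0)) -> Dict[float, bool]:
        """Insert the needle frame at several depths of a long frame sequence
        and check whether the model can still answer the needle question."""
        results = {}
        for d in depths:
            idx = int(d * len(haystack))
            frames = haystack[:idx] + [needle] + haystack[idx:]
            results[d] = answer.lower() in model(frames, question).lower()
        return results
    ```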

    Technical Architecture

    Multi-Modal Alignment

    LongVA demonstrates that language models’ inherent long-context capabilities can be effectively transferred to visual domains through proper modality alignment.

    Scalable Design

    The architecture scales efficiently with:

    • Increased frame sampling rates
    • Extended sequence lengths
    • Larger visual token counts

    Research Impact

    Open-Source Alternative

    LongVA provides the first viable open-source alternative to proprietary long-video understanding systems, enabling:

    • Academic research advancement
    • Commercial application development
    • Community-driven improvements

    Methodology Innovation

    The long context transfer approach opens new research directions in:

    • Cross-modal capability transfer
    • Efficient video processing
    • Unified multi-modal architectures

    Future Directions

    LongVA establishes a foundation for:

    1. Extended Context Models - Pushing beyond current frame limits
    2. Multi-Modal Transfer Learning - Applying insights to other modalities
    3. Efficient Video Processing - Optimizing computational requirements
    4. Benchmark Development - Creating more comprehensive evaluation metrics

    LongVA Resources
    Complete resources for LongVA, including source code, the evaluation benchmark, and pre-trained models.