
Building the way to intelligence.

Advancing multimodal intelligence through open research. Models, data, and insights, shared as we discover.

Featured Research
OneVision Encoder: Codec-Aligned Sparsity as a Foundational Principle for Multimodal Intelligence
JAN 15, 2026
models
Latest Publications
[01]
LLaVA-OneVision-1.5: Fully Open Framework for Democratized Multimodal Training
SEP 2025
models
[02]
LongVT: Incentivizing "Thinking with Long Videos" via Native Tool Calling
NOV 2025
models
[03]
OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe
NOV 2025
models
© 2026 LMMs-Lab
GitHub · Twitter