News
- [2024-06] 🚀🚀 We release LongVA, a long-context model with state-of-the-art performance on video understanding tasks.
- [2024-06] 🎬🎬 lmms-eval/v0.2 has been upgraded to support video evaluation for video models such as LLaVA-NeXT Video and Gemini 1.5 Pro, across tasks including EgoSchema, PerceptionTest, VideoMME, and more.
- [2024-05] 🚀🚀 We release LLaVA-NeXT Video, a video model with state-of-the-art performance, reaching Google's Gemini-level performance on diverse video understanding tasks.
- [2024-05] 🚀🚀 We release LLaVA-NeXT, with state-of-the-art performance approaching GPT-4V on multiple multimodal benchmarks. The LLaVA model family now scales up to 72B and 110B parameters.
- [2024-03] We release lmms-eval, a toolkit for holistic evaluation across 50+ multimodal datasets and 10+ models.