InternLM-XComposer-2.5

A versatile Large Vision Language Model (LVLM) designed to handle long-context input and output, excelling at a range of text-image comprehension and composition tasks. It achieves performance comparable to GPT-4V with a significantly smaller 7B LLM backend, demonstrating its efficiency and scalability.
