StreamingVLM: Real-Time Understanding for Infinite Video Streams
Abstract: Vision-language models (VLMs) could power real-time assistants and autonomous agents, but they face a critical challenge: understanding near-infinite video streams without escalating latency and memory usage. Processing entire videos with full attention leads to quadratic computational costs and poor performance on long videos. Meanwhile, simple sliding window methods are also flawed, as they either break coherence or suffer from high latency due to redundant...
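As a rough illustration of the trade-off the abstract describes (not the paper's method; the function and parameter names below are invented for this sketch), the following Python snippet contrasts full attention over every frame seen so far, whose cost grows quadratically with stream length, with a fixed-size sliding-window KV cache, which bounds memory and per-step cost but silently discards older context.

```python
from collections import deque
import numpy as np

def full_attention_cost(num_frames: int, tokens_per_frame: int = 64) -> int:
    """Score pairs for attention over every token seen so far: O(T^2) in stream length."""
    total_tokens = num_frames * tokens_per_frame
    return total_tokens * total_tokens

class SlidingWindowKVCache:
    """Toy KV cache keeping only the most recent `window` frames.

    Memory and per-step attention cost stay constant, but evicted frames are
    gone for good, which is the loss of coherence the abstract points to.
    """
    def __init__(self, window: int = 32, tokens_per_frame: int = 64, dim: int = 16):
        self.window = window
        self.tokens_per_frame = tokens_per_frame
        self.dim = dim
        self.keys = deque(maxlen=window)    # one (tokens_per_frame, dim) block per frame
        self.values = deque(maxlen=window)

    def append_frame(self, rng: np.random.Generator) -> None:
        # Stand-in for the keys/values a VLM would produce for one video frame.
        self.keys.append(rng.standard_normal((self.tokens_per_frame, self.dim)))
        self.values.append(rng.standard_normal((self.tokens_per_frame, self.dim)))

    def attend(self, query: np.ndarray) -> np.ndarray:
        """Attention of a single query over the cached window only."""
        k = np.concatenate(self.keys)       # (window * tokens_per_frame, dim) at most
        v = np.concatenate(self.values)
        scores = k @ query / np.sqrt(self.dim)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for step in (100, 1_000, 10_000):
        # Full attention grows quadratically with the number of frames seen...
        print(f"{step:>6} frames | full-attention score pairs: {full_attention_cost(step):,}")
    # ...while the sliding-window cache never holds more than `window` frames.
    cache = SlidingWindowKVCache(window=32)
    for _ in range(100):
        cache.append_frame(rng)
    print("frames kept in window cache:", len(cache.keys))
    print("attention output shape:", cache.attend(rng.standard_normal(cache.dim)).shape)
```

This sketch shows the cached variant of a sliding window; the high-latency variant the abstract also mentions arises when the window is re-encoded from scratch at each step instead of being cached, which is the redundant recomputation being criticized.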