MemCam: Memory-Augmented Camera Control for Consistent Video Generation

Xinhang Gao  ·  Junlin Guan  ·  Shuhan Luo  ·  Wenzhuo Li  ·  Guanghuan Tan  ·  Jiacheng Wang

Guilin University of Electronic Technology


Demo Videos

MemCam maintains consistent scene structure even after 360° camera rotations. When the camera returns to the starting viewpoint, the original scene appearance is faithfully reconstructed.

360° round-trip · scene 01
360° single-direction · scene 02
Open-domain real-world · scene 03
Open-domain real-world · scene 04

Abstract

Interactive video generation has significant potential for scene simulation and video creation. However, existing methods often struggle to maintain scene consistency during long video generation under dynamic camera control, because only limited contextual information is available to the model. To address this challenge, we propose MemCam, a memory-augmented interactive video generation approach that treats previously generated frames as external memory and leverages them as contextual conditioning to achieve controllable camera viewpoints with high scene consistency.

To enable longer and more relevant context, we design a context compression module that encodes memory frames into compact representations and employs co-visibility-based selection to dynamically retrieve the most relevant historical frames, thereby reducing computational overhead while enriching contextual information. Experiments show that MemCam significantly outperforms existing baseline methods in terms of scene consistency, particularly in long video scenarios with large camera rotations.


Method

MemCam is built on the Wan2.1 1.3B DiT and introduces four key components that together enable long-range scene consistency without explicit 3D reconstruction.

MemCam Overview
Module 01

Context Compression Module

Encodes historical frames via spatial 2× downsampling, reducing token count to 1/4 and achieving ~5× inference speedup with minimal quality loss.
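The page does not spell out the compression operator; a minimal sketch, assuming the 2× spatial downsampling is average pooling over the (H, W) latent token grid, which keeps exactly 1/4 of the tokens:

```python
import numpy as np

def compress_context(tokens: np.ndarray) -> np.ndarray:
    """2x average-pool a (H, W, C) token grid, keeping 1/4 of the tokens.
    (Assumed operator -- the page only states '2x downsampling'.)"""
    H, W, C = tokens.shape
    assert H % 2 == 0 and W % 2 == 0
    return tokens.reshape(H // 2, 2, W // 2, 2, C).mean(axis=(1, 3))

grid = np.arange(8 * 8 * 4, dtype=np.float32).reshape(8, 8, 4)
compressed = compress_context(grid)
print(compressed.shape)  # (4, 4, 4): 64 tokens reduced to 16
```

Because attention cost is roughly quadratic in token count, the 4× token reduction is consistent with the reported ~5× end-to-end speedup.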

Module 02

Co-Visibility Retrieval

Uses Monte Carlo FOV overlap estimation to dynamically select the most viewpoint-relevant historical frames, rather than simply the most recent ones.
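One way the Monte Carlo FOV-overlap estimate could look; the camera model (a simple cone frustum with a fixed half-FOV and depth range) and the camera-to-world [R|t] convention are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def fov_overlap(R_a, t_a, R_b, t_b, half_fov=np.radians(35.0),
                depth=(1.0, 5.0), n=4096, rng=None):
    """Monte Carlo estimate of how much of camera A's view volume camera B
    also sees. R, t are assumed camera-to-world poses; +z looks forward."""
    rng = rng or np.random.default_rng(0)
    # Sample points uniformly inside A's viewing cone (in A's camera frame).
    z = rng.uniform(depth[0], depth[1], n)
    r = np.tan(half_fov) * z * np.sqrt(rng.uniform(0.0, 1.0, n))
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    pts_a = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
    # A's camera frame -> world -> B's camera frame.
    pts_w = pts_a @ R_a.T + t_a
    pts_b = (pts_w - t_b) @ R_b          # row-vector form of R_b.T @ (p - t_b)
    # A point is co-visible if it lies in front of B and inside B's cone.
    in_front = pts_b[:, 2] > 0.0
    radial = np.linalg.norm(pts_b[:, :2], axis=1)
    visible = in_front & (radial <= np.tan(half_fov) * pts_b[:, 2])
    return visible.mean()

I, o = np.eye(3), np.zeros(3)
print(fov_overlap(I, o, I, o))  # 1.0 -- identical cameras fully overlap
```

Ranking historical frames by this score lets retrieval prefer frames that actually saw the region the camera is about to revisit, rather than the most recent ones.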

Module 03

Camera Encoder

A single-layer MLP per DiT block that encodes the 3×4 [R|t] camera extrinsics and adds them element-wise to the main feature stream.
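Layer width, initialization, and whether a bias is used are not specified on the page; a minimal numpy sketch of the per-block camera conditioning (the `CameraEncoder` name is hypothetical):

```python
import numpy as np

class CameraEncoder:
    """One linear layer per DiT block: 3x4 [R|t] -> feature dim,
    added element-wise to every token. Init scale is an assumption."""
    def __init__(self, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.standard_normal((12, dim)) * 0.02
        self.b = np.zeros(dim)

    def __call__(self, tokens, extrinsic):
        # tokens: (n_tokens, dim); extrinsic: (3, 4) [R|t] camera matrix.
        cam = extrinsic.reshape(-1) @ self.W + self.b   # (dim,)
        return tokens + cam                             # broadcast add

enc = CameraEncoder(dim=8)
tokens = np.zeros((5, 8))
pose = np.hstack([np.eye(3), np.zeros((3, 1))])
out = enc(tokens, pose)
print(out.shape)  # (5, 8)
```

Because the same vector is added to every token, the pose acts as a global condition on each block rather than a per-token signal.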

Module 04

Segment-wise Inference

Generates the video segment by segment; after each segment the memory bank is updated, and the frames with the highest co-visibility are selected from history as context for the next segment.
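The segment loop above can be sketched as follows; `generate_segment` and `overlap_fn` are stand-ins for the diffusion sampler and the co-visibility estimator, and `context_size` is a hypothetical context budget:

```python
def generate_video(cameras_per_segment, generate_segment, overlap_fn,
                   context_size=4):
    """Segment-wise rollout: memory grows after every segment, and each new
    segment conditions on the highest-co-visibility frames, not the latest."""
    memory = []   # (frame, camera) pairs from all previous segments
    video = []
    for cams in cameras_per_segment:
        # Rank history by co-visibility with this segment's first camera.
        ranked = sorted(memory, key=lambda fc: overlap_fn(cams[0], fc[1]),
                        reverse=True)
        context = [frame for frame, _ in ranked[:context_size]]
        frames = generate_segment(cams, context)  # stand-in for the DiT sampler
        video.extend(frames)
        memory.extend(zip(frames, cams))
    return video
```

Retrieval plus compression keeps the per-segment context bounded, so generation cost stays roughly constant as the video grows.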


Results

Quantitative Comparison

MemCam achieves the best FVD across all settings. The gains are most pronounced in the 360° scenarios, where longer duration and larger camera rotation pose greater challenges. **Bold** = best, _italic_ = second best.

**Context-as-Memory — 90° Round-trip**

| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ | FVD ↓ |
|---|---|---|---|---|
| I2V | 15.81 | 0.452 | 0.470 | _528.51_ |
| DFoT | _16.76_ | 0.474 | 0.393 | 683.59 |
| GeometryForcing | 16.57 | _0.486_ | **0.348** | 557.66 |
| MemCam (Ours) | **17.83** | **0.506** | _0.357_ | **215.71** |

**Context-as-Memory — 360° Round-trip**

| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ | FVD ↓ |
|---|---|---|---|---|
| I2V | 9.75 | 0.332 | 0.603 | 988.82 |
| DFoT | 8.94 | 0.252 | 0.613 | 1188.34 |
| GeometryForcing | _10.07_ | _0.402_ | _0.565_ | _852.05_ |
| MemCam (Ours) | **14.81** | **0.423** | **0.504** | **167.87** |

**RealEstate10K — 90° Round-trip (Zero-shot)**

| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ | FVD ↓ |
|---|---|---|---|---|
| I2V | 16.26 | 0.488 | 0.362 | 552.01 |
| DFoT | 17.17 | 0.505 | 0.399 | 539.89 |
| GeometryForcing | **17.70** | **0.597** | _0.316_ | _519.78_ |
| MemCam (Ours) | _17.61_ | _0.544_ | **0.314** | **269.82** |

**RealEstate10K — 360° Round-trip (Zero-shot)**

| Method | PSNR ↑ | SSIM ↑ | LPIPS ↓ | FVD ↓ |
|---|---|---|---|---|
| I2V | 10.16 | 0.284 | 0.611 | 789.62 |
| DFoT | 10.43 | 0.281 | 0.564 | 1002.39 |
| GeometryForcing | _11.19_ | _0.379_ | _0.405_ | _419.60_ |
| MemCam (Ours) | **16.52** | **0.550** | **0.400** | **131.96** |

Citation

@inproceedings{gao2026memcam,
  title     = {MemCam: Memory-Augmented Camera Control for Consistent Video Generation},
  author    = {Gao, Xinhang and Guan, Junlin and Luo, Shuhan and Li, Wenzhuo and Tan, Guanghuan and Wang, Jiacheng},
  booktitle = {International Joint Conference on Neural Networks (IJCNN)},
  year      = {2026}
}