
Memory Forcing

Spatio-Temporal Memory for Consistent Scene Generation on Minecraft

1 The Chinese University of Hong Kong, Shenzhen    2 Shenzhen Loop Area Institute
3 The University of Hong Kong    4 Voyager Research, Didi Chuxing    5 Microsoft Research
Corresponding Author

Abstract

Autoregressive video diffusion models have proven effective for world modeling and interactive scene generation, with Minecraft gameplay as a representative application. To faithfully simulate play, a model must generate natural content while exploring new scenes and preserve spatial consistency when revisiting explored areas. Under limited computation budgets, it must compress and exploit historical cues within a finite context window, which exposes a trade-off: temporal-only memory lacks long-term spatial consistency, whereas adding spatial memory strengthens consistency but may degrade new-scene generation quality when the model over-relies on insufficient spatial context.

We present Memory Forcing, a learning framework that pairs training protocols with a geometry-indexed spatial memory. Hybrid Training exposes distinct gameplay regimes, guiding the model to rely on temporal memory during exploration and incorporate spatial memory for revisits. Chained Forward Training extends autoregressive training with model rollouts, where chained predictions create larger pose variations and encourage reliance on spatial memory for maintaining consistency. Point-to-Frame Retrieval efficiently retrieves history by mapping currently visible points to their source frames, while Incremental 3D Reconstruction maintains and updates an explicit 3D cache. Extensive experiments demonstrate that Memory Forcing achieves superior long-term spatial consistency and generative quality across diverse environments, while maintaining computational efficiency for extended sequences.
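As a concrete illustration of the Incremental 3D Reconstruction component described above, the sketch below shows a minimal geometry-indexed cache: each key frame along the trajectory contributes reconstructed 3D points, and every cached point is tagged with the frame that produced it. The class and method names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of an incremental 3D point cache. Assumes each key
# frame contributes a set of back-projected 3D world points; names are
# illustrative, not the paper's code.
import numpy as np

class PointCache:
    """Accumulates 3D points, each tagged with its source key frame."""

    def __init__(self):
        self.points = np.empty((0, 3))               # (N, 3) world coords
        self.frame_ids = np.empty((0,), dtype=int)   # source frame per point

    def add_keyframe(self, frame_id: int, points_3d: np.ndarray):
        """Append a new key frame's reconstructed points to the cache."""
        self.points = np.vstack([self.points, points_3d])
        self.frame_ids = np.concatenate(
            [self.frame_ids, np.full(len(points_3d), frame_id)]
        )
```

Tagging points with their source frame is what makes the cache "geometry-indexed": any subset of visible points can later be mapped straight back to historical frames for retrieval.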

Existing Paradigms

MemoryForcing Intro

In prior works, the allocation of memory manifests in two characteristic failure modes. Models that incorporate long-term spatial memory preserve consistency on revisits, as shown in Figure (a), but fail when exploring novel scenes. Conversely, temporal-only models fail to maintain spatial consistency upon revisit, as shown in Figure (b).

Method

MemoryForcing Pipeline Overview

Memory Forcing Pipeline. Our framework combines spatial and temporal memory for video generation. 3D geometry is maintained through streaming reconstruction of key frames along the camera trajectory. During generation, Point-to-Frame Retrieval maps spatial context to historical frames, which are integrated with temporal memory and injected together via memory cross-attention in the DiT backbone. Chained Forward Training creates larger pose variations, encouraging the model to effectively utilize spatial memory for maintaining long-term geometric consistency.
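The Point-to-Frame Retrieval step can be sketched as follows: project the cached 3D points into the current camera, keep those that land inside the image, and let each visible point vote for the key frame it came from; the top-voted frames become the retrieved spatial memory. The pinhole camera model and all names here are assumptions for illustration, not the paper's exact implementation.

```python
# Illustrative Point-to-Frame Retrieval: project cached points into the
# current view and vote for their source frames. Camera conventions
# (intrinsics K, world-to-camera w2c) are assumed, not taken from the paper.
import numpy as np
from collections import Counter

def retrieve_frames(points, frame_ids, K, w2c, img_wh, top_k=4):
    """Return ids of the source frames whose points best cover the current view."""
    n = len(points)
    homo = np.hstack([points, np.ones((n, 1))])   # (N, 4) homogeneous coords
    cam = (w2c @ homo.T).T[:, :3]                 # transform world -> camera
    in_front = cam[:, 2] > 1e-6                   # discard points behind camera
    uv = (K @ cam.T).T                            # pinhole projection
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-6, None)
    w, h = img_wh
    visible = in_front & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
                       & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    votes = Counter(frame_ids[visible].tolist())  # one vote per visible point
    return [fid for fid, _ in votes.most_common(top_k)]
```

Because retrieval only projects points and counts votes, its cost stays roughly constant per step regardless of how long the trajectory grows, which matches the efficiency claim for extended sequences.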

Qualitative Comparison

Long-term Memory Performance

Demonstration of model performance on long-term memory tasks, comparing Ground Truth and model output results.


BibTeX

@misc{huang2025memoryforcingspatiotemporalmemory,
  title={Memory Forcing: Spatio-Temporal Memory for Consistent Scene Generation on Minecraft},
  author={Junchao Huang and Xinting Hu and Boyao Han and Shaoshuai Shi and Zhuotao Tian and Tianyu He and Li Jiang},
  year={2025},
  eprint={2510.03198},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2510.03198},
}