Neural Holography Codex

Neural Holography, Organized as a Growable Research Codex

A growable codex for neural holography, connecting CGH algorithms, learning methods, display hardware, human perception, and the research threads that shape the field.

This site is the high-level guide to this codex: how it is organized, which research threads and sections are already mapped, and where contributors can help extend it next.

Philosophy behind the Codex

Neural holography is an inherently interdisciplinary field where optics, graphics, imaging, vision, display systems, fabrication, perception, and machine learning all overlap. In practice, that means relevant ideas often live in different venues, use different terminology, and are easy to miss if you only follow one community.

This project tries to make those connections easier to inherit and extend. The hope is that a shared codex can support a small Cambrian explosion of reusable methods, sharper comparisons, and more cross-pollination between researchers who would otherwise stay separated by disciplinary boundaries.

The wonderland Ivan Sutherland imagined in The Ultimate Display is still shimmering ahead of us, waiting to be built.

Start with the complete Neural holography map, then branch outward by purpose.

  1. Begin with the codex itself: the full long-form Neural holography survey.
  2. Use browse by section when you want a faster category-first route.
  3. Build method intuition through CGH algorithms and display systems.
  4. Continue with software and labs and research groups to connect ideas to implementation and people.
  5. Use venues and communities, media and resources, and the subtopics pages when you want follow-up context or a narrower thread.

Acknowledgement

This codex began as an ongoing personal survey of neural holographic display research, adapted in part from Brian Chao's awesome-holography, whose structure and collected references helped shape this project.

We are grateful to Brian Chao, and to the authors, maintainers, and researchers whose papers, codebases, talks, and shared resources continue to make the field more accessible. The current codex is maintained by Jinwoo Lee (cinescope@kaist.ac.kr).