Tencent Hunyuan's open-source model that turns text, images, and video into fully navigable, editable 3D worlds — with physics, collisions, and game engine export.
Build optimized prompts for HY-World 2.0. Select your scene type, style, and output format — then copy the generated prompt directly into the model.
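The prompt-builder idea above can be sketched in a few lines: combine a scene type, a style, and detail selections into one descriptive prompt string. This is an illustrative sketch only; the field names and phrasing template are assumptions, not the site's actual builder logic.

```python
def build_prompt(scene_type: str, style: str, details: list[str]) -> str:
    """Assemble a descriptive HY-World 2.0 text prompt from builder selections."""
    parts = [f"A {style} {scene_type}"]   # lead with style + scene type
    parts.extend(details)                 # then append detail phrases
    return ", ".join(parts) + "."

prompt = build_prompt(
    "medieval courtyard", "photorealistic",
    ["stone walls with moss", "soft morning light", "a central fountain"],
)
print(prompt)
# "A photorealistic medieval courtyard, stone walls with moss, soft morning light, a central fountain."
```

Descriptive, comma-separated phrases like this tend to give the model more spatial and material cues than a single short sentence.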
HY-World 2.0 is Tencent's second-generation open-source 3D world generation model, released April 15–16, 2026 by the Hunyuan research team. It represents a new category of AI: not just 3D object generation, but complete, interactive world generation.
Generate entire explorable scenes — not just objects. Full spatial layout with floors, walls, terrain, sky, and interactive elements.
Accepts text prompts, reference images, or video clips as input. Describe a world in words or show it a photo and it builds the 3D scene.
Built-in proprietary rendering engine for high-fidelity previewing and exporting. Handles PBR materials, global illumination, and dynamic shadows.
Export directly to Unity and Unreal Engine with physics colliders, navigation meshes, and LOD levels pre-baked.
Point cloud outputs with spatial semantics are ideal for robot navigation training, digital twins, and sim-to-real transfer workflows.
Generated worlds are fully editable — swap materials, move objects, regenerate sub-regions, or extend the scene in any direction.
From installation to your first generated 3D world in under 30 minutes. Follow these steps to get started.
Ensure you have Python 3.10+, CUDA 11.8+, and a GPU with at least 24GB VRAM. An NVIDIA RTX 3090, 4090, or A100 is recommended for full quality generation.
Clone the official Tencent HunyuanWorld GitHub repository and install dependencies.
Pull the model weights from HuggingFace. The full model is several GB — use huggingface-cli for reliable download with resume support.
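Before pulling several gigabytes of weights, it can be worth sanity-checking the prerequisites from the steps above. This is a minimal sketch, not an official tool: the minimums come from this guide, and the VRAM figure must be supplied by you (e.g. from `nvidia-smi` or your GPU's spec sheet).

```python
import shutil

MIN_PYTHON = (3, 10)   # HY-World 2.0 requires Python 3.10+
MIN_VRAM_GB = 24       # full-quality generation needs a 24GB+ GPU

def meets_requirements(python_version: tuple, vram_gb: int) -> bool:
    """Return True if the host satisfies the documented minimums."""
    return python_version >= MIN_PYTHON and vram_gb >= MIN_VRAM_GB

# An RTX 3090 (24GB) on Python 3.11 passes; a 16GB card does not
# (though Point Cloud-only output may still work on 16GB).
print(meets_requirements((3, 11), 24))  # True
print(meets_requirements((3, 11), 16))  # False

# huggingface-cli must be on PATH for the resumable weight download step.
print("huggingface-cli found:", shutil.which("huggingface-cli") is not None)
```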
Run the inference script with a text prompt. Use descriptive language for best results. The Prompt Builder above will help you craft optimized inputs.
Provide a reference image to guide the style and content of the 3D world. Useful for re-creating real locations or concept art.
Generated scenes are saved in your output directory. Open .glb files in any 3D viewer, import into Unity/Unreal, or view 3DGS files with the built-in WorldLens viewer.
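A small routing helper like the sketch below can sort the output directory by file type. Only the `.glb` extension is confirmed by this guide; `.ply` for Gaussian Splatting and `.pcd` for point clouds are common conventions used here as assumptions.

```python
from pathlib import Path

# Map output file extensions to a downstream tool. Only .glb is confirmed by
# the guide; .ply (3DGS) and .pcd (point cloud) are assumed names.
ROUTES = {
    ".glb": "mesh: open in a 3D viewer or import into Unity/Unreal",
    ".ply": "3DGS: view with the built-in WorldLens viewer",
    ".pcd": "point cloud: feed into robotics / digital-twin pipelines",
}

def route_output(path: str) -> str:
    """Return a short description of where this output file should go."""
    ext = Path(path).suffix.lower()
    return ROUTES.get(ext, "unknown format")

print(route_output("outputs/scene_001.glb"))
print(route_output("outputs/scene_001.PLY"))  # extension match is case-insensitive
```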
HY-World 2.0 supports three distinct output formats, each optimized for different use cases and downstream workflows.
Classic triangle mesh format — the most compatible output for game engines, DCC tools, and 3D printing.
Hyper-photorealistic output using Gaussian Splatting — renders at 30–120 FPS with stunning visual fidelity.
Lightweight spatial representation with semantic labels — perfect for robotics, digital twins, and spatial computing.
How HY-World 2.0 stacks up against Marble, WonderWorld, and other 3D world generation tools as of April 2026.
| Feature | HY-World 2.0 | Marble (Closed) | WonderWorld | Luma AI | Skybox AI |
|---|---|---|---|---|---|
| Open Source | ✓ Yes | ✗ No | ✓ Yes | ✗ No | ✗ No |
| Text-to-3D World | ✓ Full | ✓ Full | ~ Partial | ~ Objects | ~ 360° only |
| Image Input | ✓ | ✓ | ✓ | ✓ | ✗ |
| Video Input | ✓ | ✓ | ✗ | ✓ | ✗ |
| Gaussian Splatting | ✓ | ✓ | ~ Beta | ✓ | ✗ |
| Polygon Mesh Export | ✓ | ✓ | ✓ | ✓ | ✗ |
| Point Cloud Export | ✓ | ✗ | ✗ | ~ Via API | ✗ |
| Physics & Collisions | ✓ Built-in | ~ Manual | ✗ | ✗ | ✗ |
| Navigation Mesh | ✓ Auto | ~ Manual | ✗ | ✗ | ✗ |
| Unity Export | ✓ | ✓ | ✓ | ~ Via plugin | ~ HDRI only |
| Unreal Engine Export | ✓ | ✓ | ~ Manual | ~ Via plugin | ✗ |
| Self-Hostable | ✓ | ✗ SaaS only | ✓ | ✗ API only | ✗ SaaS only |
| Pricing | Free (self-host) | $49–$199/mo | Free (self-host) | $0.01/frame | $19–$79/mo |
| WorldLens Viewer | ✓ Built-in | ✓ | ✗ | ✓ | ~ Limited |
Step-by-step instructions for importing HY-World 2.0 generated scenes into Unity and Unreal Engine for production game development.
Import HY-World 2.0 mesh exports into Unity 2022 LTS or Unity 6. Physics colliders and navmesh data import automatically when using the .glb format.
`--format mesh --export glb`

HY-World 2.0 scenes work seamlessly with UE5's Nanite and Lumen systems. Use the Datasmith plugin for best import fidelity.

`--format mesh --export fbx`

Use the Point Cloud output format for robotics simulation workflows. The semantic labels from HY-World 2.0 map directly to ROS semantic segmentation topics.
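The label-to-topic mapping can be sketched as below: per-point semantic label strings are converted to the integer class IDs a ROS semantic segmentation topic expects. The label set and ID assignments here are illustrative assumptions, not HY-World 2.0's actual schema.

```python
# Illustrative class-ID table; real pipelines would load this from a
# dataset-specific config shared with the ROS node.
CLASS_IDS = {"floor": 0, "wall": 1, "furniture": 2, "unknown": 255}

def labels_to_ids(labels: list[str]) -> list[int]:
    """Map per-point semantic label strings to integer class IDs."""
    return [CLASS_IDS.get(lbl, CLASS_IDS["unknown"]) for lbl in labels]

points = ["floor", "wall", "door", "furniture"]
print(labels_to_ids(points))  # [0, 1, 255, 2] -- "door" falls back to unknown
```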
HY-World 2.0 unlocks new workflows across game development, VR/AR, robotics, film, and enterprise applications.
Generate game level blockouts in minutes. Iterate on layout and atmosphere with text prompts before committing to manual art production.
Build immersive VR environments from natural language descriptions. Export 3DGS format for photorealistic real-time rendering in headsets.
Generate diverse training environments for reinforcement learning agents. Point cloud outputs with physics enable realistic sim-to-real transfer.
Reconstruct real-world spaces from photos or video for architecture visualization, facility management, and urban planning.
Rapidly generate location stand-ins and virtual production sets. Use Gaussian Splatting output for photorealistic background plates.
Open-source availability makes HY-World 2.0 ideal for academic research in 3D generation, neural rendering, and scene understanding.
Everything you need to know about HY-World 2.0 — model capabilities, technical requirements, licensing, and integration.
Model weights are available on HuggingFace at tencent/HY-World, with source code on GitHub at github.com/Tencent/HunyuanWorld.

Hardware requirements: a minimum 24GB VRAM GPU (NVIDIA RTX 3090 or RTX 4090 recommended for consumer hardware; A100 for production), plus CUDA 11.8+, Python 3.10+, and approximately 32GB of system RAM. For Point Cloud output only, a 16GB VRAM GPU may be sufficient.

Cloud options: RunPod or Vast.ai A100 instances work well if you lack local hardware.

HY-World 2.0 is free, open-source, and ready to use. Build game levels, VR worlds, and robotic simulations with AI.