Open Source · Released April 2026

HY-World 2.0
AI 3D World Generator

Tencent Hunyuan's open-source model that turns text, images, and video into fully navigable, editable 3D worlds — with physics, collisions, and game engine export.

🏢 Tencent Hunyuan Team · ⭐ 791+ GitHub Stars · 🆓 Open Source (MIT/Apache) · 🎮 Unity & Unreal Ready
  • 3 Output Formats
  • 3 Input Modalities
  • SOTA Open-Source Rank
  • 791+ GitHub Stars
  • $0 License Cost

3D Scene Prompt Builder

Build optimized prompts for HY-World 2.0. Select your scene type, style, and output format — then copy the generated prompt directly into the model.

Interactive Prompt Builder
Choose your options below to generate a ready-to-use HY-World 2.0 prompt
Scene type: Forest · City · Dungeon · Space · Beach · Mountain
Style: Realistic · Stylized · Anime · Low-Poly
Output format: Mesh (GLB) · 3DGS Splat · Point Cloud
Features: Physics + Nav · Dynamic Lighting · Interactive Props · VR Optimized
Generate a 3D world scene: a dense ancient forest, photorealistic, cinematic lighting, 8K detail, with physics collision enabled, navigable paths --format mesh --export glb --worldlens-render --navigation-mesh
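The builder's output follows a simple template: a scene description followed by CLI flags. A minimal sketch of the same assembly logic in Python — the flag names (`--format`, `--export`, `--worldlens-render`, `--navigation-mesh`) are taken from the example prompt above, while the feature phrasing is illustrative, not the widget's actual wording:

```python
# Sketch of a HY-World 2.0 prompt builder, mirroring the widget above.
# Flag names come from the example prompt; phrasing is illustrative.

def build_prompt(scene: str, style: str, fmt: str = "mesh",
                 export: str = "glb", features: tuple = ()) -> str:
    """Assemble a HY-World 2.0 prompt string from builder selections."""
    parts = [f"Generate a 3D world scene: {scene}, {style}"]
    if "physics" in features:
        parts.append("with physics collision enabled, navigable paths")
    prompt = ", ".join(parts)
    flags = [f"--format {fmt}", f"--export {export}", "--worldlens-render"]
    if "physics" in features:
        flags.append("--navigation-mesh")
    return prompt + " " + " ".join(flags)

print(build_prompt("a dense ancient forest", "photorealistic",
                   features=("physics",)))
```

Swapping `fmt="3dgs", export="splat"` reuses the same template for Gaussian Splatting output.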

What is HY-World 2.0?

HY-World 2.0 is Tencent's second-generation open-source 3D world generation model, released April 15–16, 2026 by the Hunyuan research team. It represents a new category of AI: not just 3D object generation, but complete, interactive world generation.

🌍

World-Scale Generation

Generate entire explorable scenes — not just objects. Full spatial layout with floors, walls, terrain, sky, and interactive elements.

🔤

Multi-Modal Input

Accepts text prompts, reference images, or video clips as input. Describe a world in words or show it a photo and it builds the 3D scene.

WorldLens Renderer

Built-in WorldLens rendering engine for high-fidelity previewing and export. Handles PBR materials, global illumination, and dynamic shadows.

🎮

Game Engine Ready

Export directly to Unity and Unreal Engine with physics colliders, navigation meshes, and LOD levels pre-baked.

🤖

Robotics & Simulation

Point cloud outputs with spatial semantics are ideal for robot navigation training, digital twins, and sim-to-real transfer workflows.

✏️

Editable Scenes

Generated worlds are fully editable — swap materials, move objects, regenerate sub-regions, or extend the scene in any direction.

How to Use HY-World 2.0

From installation to your first generated 3D world in under 30 minutes. Follow these steps to get started.

1

Install Prerequisites

Ensure you have Python 3.10+, CUDA 11.8+, and a GPU with at least 24GB VRAM. An NVIDIA RTX 3090, 4090, or A100 is recommended for full quality generation.

nvidia-smi         # Verify GPU and CUDA version
python3 --version  # Should be 3.10+
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
2

Clone the Repository

Clone the official Tencent HunyuanWorld GitHub repository and install dependencies.

git clone https://github.com/Tencent/HunyuanWorld
cd HunyuanWorld
pip install -r requirements.txt
pip install -e .
3

Download Model Weights

Pull the model weights from HuggingFace. The full model is several GB — use huggingface-cli for reliable download with resume support.

pip install huggingface_hub
huggingface-cli download tencent/HY-World --local-dir ./weights
4

Generate Your First Scene (Text Input)

Run the inference script with a text prompt. Use descriptive language for best results. The Prompt Builder above will help you craft optimized inputs.

python generate.py \
  --prompt "a dense ancient forest with moss-covered trees, photorealistic" \
  --format mesh \
  --export glb \
  --output ./my_forest \
  --worldlens-render
5

Generate from Image Input

Provide a reference image to guide the style and content of the 3D world. Useful for re-creating real locations or concept art.

python generate.py \
  --image ./reference.jpg \
  --prompt "expand this into a full navigable 3D world" \
  --format 3dgs \
  --output ./my_scene
6

Export and View

Generated scenes are saved in your output directory. Open .glb files in any 3D viewer, import into Unity/Unreal, or view 3DGS files with the built-in WorldLens viewer.

# Launch WorldLens viewer
python viewer.py --scene ./my_forest/scene.glb

# Or open in Blender / Unity / Unreal Engine

Output Formats Explained

HY-World 2.0 supports three distinct output formats, each optimized for different use cases and downstream workflows.

Polygonal Mesh

Traditional 3D Geometry

Classic triangle mesh format — the most compatible output for game engines, DCC tools, and 3D printing.

  • Exports as .glb, .obj, .fbx
  • PBR texture maps included
  • Physics colliders pre-generated
  • Navigation mesh auto-built
  • Best for: Unity, Unreal, Blender
  • VRAM: ~24GB recommended
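The .glb container listed above has a fixed 12-byte header defined by the glTF 2.0 binary specification (magic bytes "glTF", a version number, and the total file length). A quick pure-Python sanity check for exported files — generic glTF tooling, not part of HY-World itself:

```python
import struct

def check_glb_header(data: bytes) -> dict:
    """Validate the 12-byte GLB header (glTF 2.0 binary container)."""
    if len(data) < 12:
        raise ValueError("file too short to be a GLB")
    magic, version, length = struct.unpack("<III", data[:12])
    if magic != 0x46546C67:  # ASCII "glTF", little-endian
        raise ValueError("not a GLB file (bad magic)")
    return {"version": version, "length": length}

# Example on a synthetic header (magic "glTF", version 2, length 12):
header = struct.pack("<III", 0x46546C67, 2, 12)
print(check_glb_header(header))  # {'version': 2, 'length': 12}
```

In practice you would pass the first 12 bytes of `scene.glb`; a `version` of 2 is what current engines and viewers expect.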
Gaussian Splatting (3DGS)

Neural Radiance Representation

Hyper-photorealistic output using Gaussian Splatting — renders at 30–120 FPS with stunning visual fidelity.

  • Exports as .splat, .ply
  • Photorealistic rendering quality
  • Real-time navigation in WorldLens
  • Ideal for visual showcase/demo
  • Best for: VR walkthroughs, film viz
  • VRAM: ~32GB for full quality
Point Cloud

Spatial Data Representation

Lightweight spatial representation with semantic labels — perfect for robotics, digital twins, and spatial computing.

  • Exports as .pcd, .las, .e57
  • Semantic segmentation labels
  • RGB + depth channel support
  • Smallest file size
  • Best for: robotics, SLAM, twins
  • VRAM: ~16GB sufficient
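The .pcd format listed above is PCL's Point Cloud Data format: a short plain-text header followed by point rows. A minimal header parser as a sketch — the `label` field standing in for HY-World's semantic labels is hypothetical; the actual field layout will be whatever the exporter writes:

```python
def parse_pcd_header(text: str) -> dict:
    """Parse the header of an ASCII .pcd (Point Cloud Data) file."""
    header = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(" ")
        header[key] = value.split()
        if key == "DATA":  # header ends at the DATA line; points follow
            break
    return header

# Synthetic example with a hypothetical semantic "label" field:
sample = """\
VERSION 0.7
FIELDS x y z label
SIZE 4 4 4 4
TYPE F F F U
COUNT 1 1 1 1
WIDTH 2
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 2
DATA ascii
0.0 0.0 0.0 1
1.0 0.5 0.0 2
"""
hdr = parse_pcd_header(sample)
print(hdr["FIELDS"], hdr["POINTS"])  # ['x', 'y', 'z', 'label'] ['2']
```

Reading the header first lets a pipeline decide how to interpret the rows (field names, sizes, ASCII vs. binary) before loading anything.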

HY-World 2.0 vs Competitors

How HY-World 2.0 stacks up against Marble, WonderWorld, and other 3D world generation tools as of April 2026.

| Feature | HY-World 2.0 | Marble (Closed) | WonderWorld | Luma AI | Skybox AI |
|---|---|---|---|---|---|
| Open Source | ✓ Yes | ✗ No | ✓ Yes | ✗ No | ✗ No |
| Text-to-3D World | ✓ Full | ✓ Full | ~ Partial | ~ Objects | ~ 360° only |
| Image Input | ✓ Yes | | | | |
| Video Input | ✓ Yes | | | | |
| Gaussian Splatting | ✓ Yes | ~ Beta | | | |
| Polygon Mesh Export | ✓ Yes | | | | |
| Point Cloud Export | ✓ Yes | ~ Via API | | | |
| Physics & Collisions | ✓ Built-in | ~ Manual | | | |
| Navigation Mesh | ✓ Auto | ~ Manual | | | |
| Unity Export | ✓ Yes | ~ Via plugin | | | ~ HDRI only |
| Unreal Engine Export | ✓ Yes | ~ Manual | | | ~ Via plugin |
| Self-Hostable | ✓ Yes | ✗ SaaS only | ✓ Yes | ✗ API only | ✗ SaaS only |
| Pricing | Free (self-host) | $49–$199/mo | Free (self-host) | $0.01/frame | $19–$79/mo |
| WorldLens Viewer | ✓ Built-in | ~ Limited | | | |

Unity & Unreal Engine Integration

Step-by-step instructions for importing HY-World 2.0 generated scenes into Unity and Unreal Engine for production game development.

Import HY-World 2.0 mesh exports into Unity 2022 LTS or Unity 6. Physics colliders and navmesh data import automatically when using the .glb format.

  • 01. Generate scene with --format mesh --export glb
  • 02. In Unity: Assets → Import New Asset → select .glb file
  • 03. Open the imported prefab in the Inspector
  • 04. Verify collider components auto-generated on meshes
  • 05. Add NavMesh Surface component to root object
  • 06. Click "Bake" to finalize navigation mesh
  • 07. Drag prefab into your scene — done!

HY-World 2.0 scenes work seamlessly with UE5's Nanite and Lumen systems. Use the Datasmith plugin for best import fidelity.

  • 01. Generate with --format mesh --export fbx
  • 02. Install Datasmith plugin (recommended) or use built-in FBX importer
  • 03. File → Import Into Level → select your .fbx
  • 04. Enable "Generate Collision" and "Import Textures" in dialog
  • 05. For Nanite: right-click mesh → Enable Nanite
  • 06. For navigation: place Nav Mesh Bounds Volume over scene
  • 07. Press P to preview navigation mesh

Robotics & Digital Twins (ROS / Isaac Sim)

Use the Point Cloud output format for robotics simulation workflows. The semantic labels from HY-World 2.0 map directly to ROS semantic segmentation topics.

# Export for robotics
python generate.py \
  --prompt "warehouse interior with shelving and corridors" \
  --format pointcloud \
  --semantic-labels \
  --output ./warehouse_digital_twin

# Load into ROS 2
ros2 run pcl_ros pcd_to_pointcloud ./warehouse_digital_twin/scene.pcd
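Before publishing a labeled cloud to a ROS topic, it is common to summarize it per class as a sanity check. A small sketch of that step — both the label IDs and the class-name table here are hypothetical; HY-World's actual label schema will be documented alongside the model weights:

```python
from collections import Counter

# Hypothetical label-id -> class-name table for a warehouse scene.
CLASS_NAMES = {0: "floor", 1: "wall", 2: "shelf", 3: "corridor"}

def class_counts(points):
    """Count labeled points per semantic class.

    `points` is an iterable of (x, y, z, label_id) tuples, e.g. rows
    parsed from an ASCII .pcd export with a label field.
    """
    counts = Counter(label for _, _, _, label in points)
    return {CLASS_NAMES.get(lbl, f"unknown_{lbl}"): n
            for lbl, n in counts.items()}

pts = [(0.0, 0.0, 0.0, 1), (1.0, 0.0, 0.0, 1), (0.5, 2.0, 0.0, 2)]
print(class_counts(pts))  # {'wall': 2, 'shelf': 1}
```

A summary like this catches export problems (e.g. every point landing in one class) before a simulation run starts.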

Use Cases & Applications

HY-World 2.0 unlocks new workflows across game development, VR/AR, robotics, film, and enterprise applications.

🎮

Game Level Prototyping

Generate game level blockouts in minutes. Iterate on layout and atmosphere with text prompts before committing to manual art production.

🥽

VR/AR Experiences

Build immersive VR environments from natural language descriptions. Export 3DGS format for photorealistic real-time rendering in headsets.

🤖

Robot Simulation Training

Generate diverse training environments for reinforcement learning agents. Point cloud outputs with physics enable realistic sim-to-real transfer.

🏙️

Digital Twins

Reconstruct real-world spaces from photos or video for architecture visualization, facility management, and urban planning.

🎬

Film & Previz

Rapidly generate location stand-ins and virtual production sets. Use Gaussian Splatting output for photorealistic background plates.

🧪

Research & Education

Open-source availability makes HY-World 2.0 ideal for academic research in 3D generation, neural rendering, and scene understanding.

Frequently Asked Questions

Everything you need to know about HY-World 2.0 — model capabilities, technical requirements, licensing, and integration.

What is HY-World 2.0 and what makes it different from other 3D AI tools?
HY-World 2.0 is an open-source AI model developed by Tencent's Hunyuan team, released April 15–16, 2026. What sets it apart from object-level generators (like Shap-E or TripoSR) is that it creates complete, navigable worlds — not just individual objects. Scenes have proper spatial layout, physics-aware geometry, collision data, and navigation meshes built-in. The WorldLens rendering engine enables real-time exploration of generated scenes. It's also the first open-source model in this category to match closed-source commercial tools like Marble in output quality.
How does HY-World 2.0 compare to Marble and other closed-source 3D generation tools?
HY-World 2.0 is the open-source SOTA for 3D world generation, directly competing with closed-source tools like Marble. Key advantages: it's free and self-hostable, supports all three major output formats (mesh, 3DGS, point cloud), includes the WorldLens rendering engine, and enables Unity/Unreal Engine export with physics data intact. Marble offers a polished SaaS experience with faster iteration for non-technical users, but costs $49–$199/month and cannot be self-hosted or customized. For teams needing control, privacy, or custom fine-tuning, HY-World 2.0 is the clear choice.
What output formats does HY-World 2.0 support and which should I use?
HY-World 2.0 supports three main output formats: (1) Polygonal Mesh (.glb, .obj, .fbx) — best for game engine integration, animation, and workflows needing editable geometry; (2) Gaussian Splatting / 3DGS (.splat, .ply) — best for photorealistic visualization, VR walkthroughs, and real-time rendering demos; (3) Point Cloud (.pcd, .las) — best for robotics simulation, digital twins, and spatial computing applications. Choose mesh for Unity/Unreal game development, 3DGS for visual showcase, and point cloud for technical/scientific applications.
Can HY-World 2.0 be used for commercial game development?
Yes. HY-World 2.0 is released under a permissive open-source license (MIT/Apache 2.0) that allows commercial use. Generated 3D scenes can be used in commercial games, VR experiences, and enterprise applications without royalty payments. Always verify the exact license terms in the GitHub repository, as Tencent may have updated the license. The model is designed with game developers in mind — Unity and Unreal Engine integration is first-class, with physics colliders and navigation meshes auto-generated.
Where can I download HY-World 2.0 and what are the GPU requirements?
HY-World 2.0 is available on HuggingFace at tencent/HY-World and GitHub at github.com/Tencent/HunyuanWorld. Hardware requirements: Minimum 24GB VRAM GPU (NVIDIA RTX 3090 or RTX 4090 recommended for consumer hardware; A100 for production). You'll also need CUDA 11.8+, Python 3.10+, and approximately 32GB system RAM. For Point Cloud output only, a 16GB VRAM GPU may be sufficient. Cloud options: RunPod or Vast.ai with A100 instances work well if you lack local hardware.

Start Generating 3D Worlds Today

HY-World 2.0 is free, open-source, and ready to use. Build game levels, VR worlds, and robotic simulations with AI.