Gen AI & ML
This prototype explores how context engineering can enable AI to participate in spatial design workflows.
The system converts architectural blueprints into editable 2D layouts and navigable 3D environments, allowing users to iterate on designs using natural language. After a floor plan is uploaded, a Gemini multimodal model interprets the drawing, extracts spatial structure, and generates corresponding 2D and 3D representations.
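One way to picture the extracted spatial structure is a typed schema like the sketch below. The names (`FloorPlan`, `Room`, `Wall`) are illustrative assumptions, not the project's actual types; the point is that the model's output becomes data the rest of the system can validate and manipulate.

```typescript
// Hypothetical schema for the structured spatial context a
// multimodal model might produce from a blueprint (illustrative names).
interface Point { x: number; y: number }

interface Wall {
  id: string;
  start: Point;
  end: Point;
  thickness: number; // in the plan's units
}

interface Room {
  id: string;
  label: string;   // e.g. "kitchen"
  walls: string[]; // ids of bounding walls
}

interface FloorPlan {
  units: "meters" | "feet";
  walls: Wall[];
  rooms: Room[];
}

// A minimal plan: one wall and one room referencing it.
const plan: FloorPlan = {
  units: "meters",
  walls: [{ id: "w1", start: { x: 0, y: 0 }, end: { x: 4, y: 0 }, thickness: 0.2 }],
  rooms: [{ id: "r1", label: "kitchen", walls: ["w1"] }],
};
```

Because every wall and room has a stable id, both the 2D editor and the 3D scene can be derived views over the same plan data.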
Once the environment is created, users can modify the design by describing changes in natural language, such as adjusting walls, resizing rooms, or changing spatial relationships, and the system updates the model accordingly.
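A natural-language request can be reduced to a structured edit command applied immutably to the layout state. The sketch below assumes a simplified rectangular room model and invented command names (`EditCommand`, `applyEdit`); it is not the project's real API, just an illustration of the pattern.

```typescript
// Simplified room model for illustration only.
interface RoomRect { id: string; label: string; width: number; depth: number }

// A structured command, as might be derived from a request like
// "make the kitchen one meter wider" (hypothetical shape).
type EditCommand =
  | { kind: "resize"; roomId: string; widthDelta: number; depthDelta: number }
  | { kind: "relabel"; roomId: string; label: string };

function applyEdit(rooms: RoomRect[], cmd: EditCommand): RoomRect[] {
  // Return a new array so UI state (e.g. a Zustand store) updates immutably.
  return rooms.map((room) => {
    if (room.id !== cmd.roomId) return room;
    switch (cmd.kind) {
      case "resize":
        return {
          ...room,
          width: room.width + cmd.widthDelta,
          depth: room.depth + cmd.depthDelta,
        };
      case "relabel":
        return { ...room, label: cmd.label };
    }
  });
}

const before: RoomRect[] = [{ id: "r1", label: "kitchen", width: 4, depth: 3 }];
const after = applyEdit(before, { kind: "resize", roomId: "r1", widthDelta: 1, depthDelta: 0 });
// after[0].width === 5; `before` is unchanged
```

Keeping edits as plain data like this also makes each change inspectable and reversible, which matters when an AI proposes the modifications.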
The core experiment focuses on transforming unstructured visual input into structured spatial context that AI systems can reason about and modify reliably. This structured representation enables a workflow where AI acts as a design collaborator rather than a passive generation tool.
The broader question behind this work:
How can AI systems bridge perception, spatial reasoning, and human intent to support iterative design thinking?
Technical stack
Frontend: React, Vite, TypeScript
Styling: Tailwind CSS
State management: Zustand
3D rendering: @react-three/fiber
AI: Gemini multimodal model for blueprint interpretation