
AdaptivAI: Coding a Building into Existence

Architecture without modeling. A system that learns, adapts, and evolves.

The goal of this project is to build a fully automated architectural design pipeline that minimizes manual modeling by integrating generative AI, procedural geometry, and real-time environmental data into a dynamic, self-evolving system. This multi-platform workflow generates an architectural project—from floor plan to structure to façade—without traditional modeling. Architecture is treated like an API: modular, adaptive, and responsive. Authorship is distributed across algorithms, sensors, and machine perception. Though machinic in execution, the process preserves architectural intent—design becomes a continuous dialogue between spatial logic and data.

The Challenge: Architects and urban planners require dynamic design systems that can rapidly respond to changing environmental data and user needs, a task for which traditional, static modeling is ill-suited.

Research & Insights: My research into generative AI and procedural geometry revealed a key insight: architecture can be treated as an open API—modular, adaptive, and responsive. Design can become a continuous dialogue between spatial logic and real-time data.

Technologies Used

Toolchain:

  • Generative AI: Stable Diffusion / Flux · ComfyUI
  • Procedural Geometry: Processing (JavaScript) · Grasshopper
  • Real-Time Feedback: Python · ChatGPT Vision · APIs
  • Simulation & Rendering: Houdini · Blender · Unreal Engine · Unity
  • Development Tools: PyCharm · GitHub

These tools are choreographed into a real-time architectural pipeline where AI generates, interprets, and refines space through continuous feedback loops.
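
To make that loop concrete, here is a minimal Python sketch of the control flow. The stage functions are stand-ins: the real pipeline calls ComfyUI, ChatGPT Vision, and Grasshopper, and the parameter names are illustrative rather than the project's actual interface.

```python
import random

def generate(params):
    """Stand-in for the ComfyUI generation step: returns a candidate design."""
    return {"massing": params["density"] * random.uniform(0.8, 1.2)}

def interpret(design):
    """Stand-in for the vision critique step: scores the candidate (higher is better)."""
    return 1.0 - abs(design["massing"] - 1.0)

def refine(params, score):
    """Stand-in for the procedural step: naively nudges a parameter by the remaining error."""
    params["density"] += 0.1 * (1.0 - score)
    return params

params = {"density": 0.5}
for iteration in range(10):
    design = generate(params)
    score = interpret(design)
    params = refine(params, score)
    print(f"iteration {iteration}: score={score:.3f}, density={params['density']:.3f}")
```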

Workflow Description

Coding a Building into Existence

From prompt to plan. From point cloud to form. From feedback to façade.

AdaptivAI is a self-developed architectural workflow that treats architecture as a generative, machinic process. Instead of modeling manually, I created a system where space is iterated, simulated, and refined through a network of AI models, geometry agents, and procedural operations.

At its core, the pipeline integrates:

  • Generative AI: Stable Diffusion + ComfyUI for floor plan prompts
  • Parametric Geometry: Grasshopper + Processing for semantic volume generation
  • Real-Time Feedback: Python + ChatGPT Vision + API data for live geometry control
  • Remeshing & Simulation: Houdini + Blender + Unreal for volumetric refinement

Every part of the system speaks to another. Outputs are not static—they evolve. The result is not a single building, but a framework for continuous architectural becoming.
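
One way to picture how the parts speak to one another is a shared state record that every stage reads, extends, and passes on. The sketch below is a hypothetical schema, not the project's actual data model; every field name is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DesignState:
    """Hypothetical shared record passed between pipeline stages."""
    prompt: str
    conditioning_map: str | None = None                 # path to site-analysis output
    plan_image: str | None = None                       # path to ComfyUI output
    keywords: list[str] = field(default_factory=list)   # extracted from the X feed
    facade_params: dict[str, float] = field(default_factory=dict)
    history: list[str] = field(default_factory=list)    # audit trail of stages

    def stamp(self, stage: str) -> None:
        """Record which stage touched the state, keeping outputs traceable."""
        self.history.append(stage)

state = DesignState(prompt="coastal research lab, porous façade")
state.conditioning_map = "conditioning_map.png"
state.stamp("site_analysis")
state.keywords = ["calm", "open", "tidal"]
state.stamp("keyword_extraction")
print(state.history)  # ['site_analysis', 'keyword_extraction']
```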

This is not just a workflow—it’s an architectural hypothesis: What if we no longer model buildings, but instead orchestrate their emergence?

Project Visuals

Dynamic form iteration

The project begins by analyzing the base site image in Processing to extract urban features. The resulting feature map is used as a conditioning image in ComfyUI, steering the generation of architectural massing through a custom LoRA-trained diffusion model.
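
The Processing sketch itself isn't reproduced here; as a rough Python/OpenCV stand-in, the feature-extraction step might reduce the site image to an edge map that ComfyUI can consume as a ControlNet-style conditioning image. The file names and thresholds are assumptions.

```python
import cv2

site = cv2.imread("site.png", cv2.IMREAD_GRAYSCALE)     # base site image (assumed path)
site = cv2.GaussianBlur(site, (5, 5), 0)                # suppress texture noise
edges = cv2.Canny(site, threshold1=50, threshold2=150)  # keep street and block outlines
cv2.imwrite("conditioning_map.png", edges)              # fed to ComfyUI as conditioning
```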

Site logic extraction
Feature transfer to Rhino

Site logic and generated features are transferred into Rhino via Grasshopper, enabling further manipulation through geometry agents.
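
The transfer mechanism isn't documented in detail; a plausible minimal handoff is a JSON file written by the analysis script and read by a GhPython component inside Grasshopper. The schema below is illustrative, not the project's actual format.

```python
import json

# Illustrative feature schema; coordinates in metres are an assumption.
features = {
    "site_outline": [[0, 0], [120, 0], [120, 80], [0, 80]],
    "street_edges": [[[0, 40], [120, 40]]],
    "massing_seeds": [[30, 20], [90, 60]],
}

with open("site_features.json", "w") as f:
    json.dump(features, f, indent=2)

# Inside Grasshopper, a GhPython component could then rebuild the data, e.g.:
#   import json
#   import rhinoscriptsyntax as rs
#   data = json.load(open("site_features.json"))
#   seeds = [rs.AddPoint(x, y, 0) for x, y in data["massing_seeds"]]
```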

Façade generation based on social media input

The building skin is generated from keywords extracted from X (formerly Twitter). Using topic frequency and tone, the system modulates façade elements algorithmically.
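
A minimal sketch of that mapping follows, with a tiny hand-rolled valence lexicon standing in for a real NLP step; the output parameters (panel_density, wave_amplitude) are invented for illustration.

```python
from collections import Counter

# Tiny valence lexicon as a stand-in for proper tone analysis.
POSITIVE = {"calm", "open", "light", "community"}
NEGATIVE = {"noise", "dense", "crowded", "dark"}

def facade_params(posts: list[str]) -> dict[str, float]:
    words = [w.strip(".,!?").lower() for post in posts for w in post.split()]
    freq = Counter(words)
    tone = sum(freq[w] for w in POSITIVE) - sum(freq[w] for w in NEGATIVE)
    total = max(len(words), 1)
    return {
        # invented parameters: how finely the skin is panelled ...
        "panel_density": min(1.0, 0.2 + 50 * freq["dense"] / total),
        # ... and how strongly it undulates, shifted by the overall tone
        "wave_amplitude": 0.5 + 0.1 * tone,
    }

print(facade_params(["So calm and open by the water", "too crowded at noon"]))
```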

Computed architectural form in Rhino

The façade reflects real-time API responses derived from the social media input.

Fully textured and rendered environment in Unreal Engine

The system-generated form is visualized with materials, light, and atmosphere.

Parametric control of geometry

Further parametric control maps environmental triggers and API responses to spatial transformations.

Live data processing for geometry logic

Live data—including X posts, weather conditions, and geolocation—is processed using a custom Python script. This directly feeds into the geometry logic.
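
A reduced version of such a script might look like the following. The project doesn't name its data sources, so the public Open-Meteo API stands in for the weather feed and the X posts are passed in directly; the mapping of fields to geometry drivers is an assumption.

```python
import requests

def fetch_weather(lat: float, lon: float) -> dict:
    """Fetch current conditions from Open-Meteo (a stand-in data source)."""
    url = "https://api.open-meteo.com/v1/forecast"
    params = {"latitude": lat, "longitude": lon, "current_weather": "true"}
    return requests.get(url, params=params, timeout=10).json()["current_weather"]

def geometry_inputs(lat: float, lon: float, posts: list[str]) -> dict:
    """Normalize live feeds into the handful of numbers the geometry logic reads."""
    weather = fetch_weather(lat, lon)
    return {
        "wind_direction": weather["winddirection"],  # degrees; could steer louvre angles
        "temperature": weather["temperature"],       # °C; could drive envelope porosity
        "post_volume": len(posts),                   # activity; could scale façade detail
    }

print(geometry_inputs(41.0, 28.97, ["post one", "post two"]))
```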

Alternative site test with different keywords

A second site is tested. Different keywords produce a radically different façade language and volumetric behavior.

Morphing architecture clip 1
Morphing architecture clip 2

PNG sequences of form variations are transformed into videos using ComfyUI’s CogVideo model, creating morphing architecture clips for cinematic presentation.
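
ComfyUI can be driven programmatically for batch jobs like this: a workflow exported in API format is queued over its local HTTP endpoint. The sketch below shows that pattern; the workflow file name and port are assumptions about this setup, not part of the published project.

```python
import json
import requests

# Load a workflow exported from ComfyUI via "Save (API Format)"; the file
# name and the default port below are assumptions about this setup.
with open("cogvideo_workflow.json") as f:
    workflow = json.load(f)

# POST /prompt queues the workflow on a locally running ComfyUI instance.
resp = requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow})
print(resp.json())  # contains a prompt_id that can be polled for the finished video
```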

Interior environments are grown by conditioning ComfyUI with structural maps and keyword themes; the AI fills the space with programmatic logic derived from these inputs.

Finalized Unreal render of an AI-generated, ocean-themed research lab

The render illustrates both spatial realism and narrative immersion.

Environmental UI overlay 1
Environmental UI overlay 2
Environmental UI overlay 3

CogVideo output for environmental UI overlays, designed for Unreal Engine integration. These interfaces represent live data from marine sensors.

Outcome & Impact: The project resulted in a framework for “continuous architectural becoming,” shifting the paradigm from static modeling to orchestrating emergence. This work serves as a conceptual blueprint for the future of design automation, where design tools become intelligent partners in the creative process.

Next Steps

Interested in learning more about this project or discussing potential collaborations? Feel free to contact me or explore my other projects.