Insight
The End of the Sketch? Perhaps Not, With the Help of AI
Far from making designers’ human skills irrelevant, AI is unlocking new creative potential for the hand-drawn sketch.
While the era of architecture studios filled with drafting tables may be far behind us, the hand-drawn sketch isn’t done. In fact, with the advent of artificial intelligence, mastering the sketch may reemerge as a prized skill in the design process.
Most designers today rely on a variety of 3D modeling and rendering software to produce the bulk of their architectural visualizations. The AEC (Architecture, Engineering, Construction) industry’s wholesale adoption of these programs has diminished the value of skilled hand-sketching. Nowadays, these drawings tend to be frenetic in their line work, and their purpose is more conceptual than instructive. The reliance on standardized design tools has also had an unintended homogenizing effect on the built environment, as architectural aesthetics have become confined by the software’s capabilities. Though a handful of standout projects manage to defy convention, the majority are bound by tight schedules and budgets, leading designers to fall back on the standard tools available.
Generative AI, paired with an emerging set of neural network tools, could reverse this trend. With more control over the inputs and outputs of generative AI, designers can create photorealistic images of hypothetical architectural and interior spaces from text prompts and imported, hand-drawn sketches. This capability may allow designers to bypass time-consuming 3D modeling early in the design process. Consequently, they’ll have more time to engage in the uniquely creative and collaborative work of design ideation.
However, these AI models are not without their challenges. They are complex programs that require substantial investment in hardware and in learning and development time. Lee Devore, a Principal at HLW and leader of the firm’s practice technology studio, likens the disparity between these customizable models and other generative image creation tools to the differences between iOS and Linux. He explains, “The mainstream generative tools—Midjourney, DALL-E 2, Bing—are like iOS. Anyone can use them, most people will be happy with the output, and they don’t offer much control. These advanced generative tools are like Linux. Most people will never understand them or need them, but if you take the time to understand and learn them, they offer much more fine-tuning capabilities and the ability to enhance the design process in ways the ‘out-of-the-box’ tools can’t.”
These capabilities extend beyond merely transforming sketches into fully rendered images. Over time, the AI could learn a firm’s architectural vocabulary and visual language by building an archive of sketches, renderings, and prompts. This would enable the AI to grasp the specific ways a firm defines space, its aesthetic preferences, and the unique language it uses to describe its work. There is one catch: architects will need to return to practicing and mastering sketching because the AI can only render what it can recognize.
So, while the drafting table may be gone for good, the human hand isn’t being replaced. In fact, with the integration of generative neural network tools into architectural practice, human imagination and creative ability take on a new place of prominence in the design process.
HLW ArchInsights is a bi-weekly window into the dynamic world of architecture, where we explore industry trends, offer thought-provoking insights, and share the latest news from our firm, guiding you through the ever-evolving landscape of design and innovation.