Around 17 months after Midjourney's public beta launched, I began probing the tool through systematic material studies. Using geometric forms as constants, I tested how the model handled variations in surface treatment—metals, woods, plastics, textured finishes—to understand its capabilities and limitations beyond marketing claims.
This constrained experimental approach revealed specific affordances: where the tool reliably rendered material properties, where its interpretations became ambiguous, and what the gap between prompt and output actually looked like. Rather than debating whether AI is "creative," I was building an empirical understanding of what it could do.
These images capture a particular moment in AI image-generation capability—distinctive lighting characteristics, material-boundary behaviours, and aesthetic coherences that have since evolved. They remain useful reference material, documenting not just the outputs but a methodology for systematically testing new tools rather than accepting surface-level impressions.