Based on “From Fake Perfects to Conversational Imperfects: Exploring Image-Generative AI as a Boundary Object for Participatory Design of Public Spaces,” published in Proceedings of the ACM on Human-Computer Interaction (PACMHCI), CSCW, 2025.

Who wrote it
- José A. Guridi — Cornell University (College of Computing and Information Science)
- Angel Hsing-Chi Hwang — University of Southern California (Annenberg)
- Duarte Santo — Independent scholar (Portugal)
- Maria Goula — Cornell University (College of Agriculture and Life Sciences)
- Cristóbal Cheyre — Cornell University
- Lee Humphreys — Cornell University
- Marco Rangel — Studio-MLA (US)
Affiliations are drawn from the authors’ public preprint summary.

Big ideas
Traditional renderings for parks and plazas often look too perfect: glossy images that shut down discussion because everything already seems “decided.” The authors argue that image-generative AI (e.g., Midjourney, Stable Diffusion) can serve as a boundary object: a shared, editable image that invites people into the design process. Instead of one “fake perfect,” you get many “conversational imperfects” that neighbors can react to, tweak, and debate together.

How they studied it (methodology)
This is a multi-stage, practice-based study combining interviews, workshops, and live prototyping with immigrant communities and researchers. The project unfolded in three arcs:
- Speculative process — Used transcripts from earlier interviews to generate AI sketches of possible public-space designs, then ran reflection workshops on how IGAI (image-generative AI) changes the conversation.
- Piloting — Trial interviews where researchers themselves used IGAI to pressure-test prompts, workflows, and facilitation techniques.
- IGAI-mediated interviews — Community sessions (Los Angeles, Coney Island) in which participants reacted to and iteratively adjusted AI-generated images during the conversation.
This design-research approach let the team observe how the images functioned as social catalysts—when they opened dialogue, when they biased it, and what facilitation guidelines helped.

What they found (digestible takeaways)
- AI images spark participation when they stay “unfinished.” Early, rough, and plural images keep the door open. Over-polished renders shut it.
- Prompting is power. Small wording changes steer aesthetics, scale, and who gets represented. Facilitators need prompt logs and version histories to keep the process transparent.
- Diversity beats realism. Presenting multiple “imperfect” options invites critique (“this path doesn’t feel safe at night”) and co-creation (“could we add more shade where elders sit?”).
- Context matters. City history, climate, safety, and culture must inform the prompts—otherwise the AI defaults to generic “global-north park” imagery.
- Ethics are practical, not abstract. Be explicit about dataset biases, likeness/privacy, and consent for community imagery; revisit these at each iteration round.
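The “prompting is power” point implies keeping an auditable record of how prompts evolve during a session. The paper does not prescribe tooling, so the following is a hypothetical sketch of a prompt log with version history: `PromptLog`, `record`, and `diff_words` are illustrative names, not from the paper.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

def _tokens(s: str) -> set[str]:
    """Split a prompt into comparable words, ignoring commas."""
    return set(s.replace(",", " ").split())

@dataclass
class PromptEntry:
    """One iteration of an IGAI prompt, kept for transparency."""
    prompt: str
    author: str                      # facilitator or participant who proposed it
    note: str = ""                   # e.g. "added shade for elders' seating"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class PromptLog:
    """Append-only version history of prompts for one design session."""
    session: str
    entries: list[PromptEntry] = field(default_factory=list)

    def record(self, prompt: str, author: str, note: str = "") -> int:
        """Log a prompt revision; returns its 1-based version number."""
        self.entries.append(PromptEntry(prompt, author, note))
        return len(self.entries)

    def diff_words(self, v1: int, v2: int) -> set[str]:
        """Words added between two versions, making the small wording
        changes that steer aesthetics and representation visible."""
        return (_tokens(self.entries[v2 - 1].prompt)
                - _tokens(self.entries[v1 - 1].prompt))

# Example: two revisions during a hypothetical community session
log = PromptLog(session="coney-island-01")
log.record("busy boardwalk plaza, benches, evening light", "facilitator")
log.record("busy boardwalk plaza, shaded benches, evening light, lit paths",
           "participant", note="safety at night; shade for elders")
print(log.diff_words(1, 2))   # words added in revision 2
```

An append-only structure is the point here: facilitators can show participants exactly which wording changed between images, which keeps the steering power of prompts visible rather than hidden.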
All of this supports their central claim: IGAI works best as a conversational scaffold.

Why this matters (for cities, firms, and educators)
- Cities & planners: You can move from public “presentations” to public conversations. Iterative, co-edited images lower barriers to entry and surface tacit local knowledge (shade, security, flow).
- Design & real-estate teams: Faster early-stage exploration with community buy-in reduces re-work and controversy later.
- Universities & studios: Teaching with IGAI shifts critique from “polish” to plurality and participation—a healthier habit for civic design.
The paper situates these claims in HCI and participatory-design traditions, while offering a contemporary toolset communities actually respond to.

Sources
- Author/publication page with title, authors, venue, and DOI.
- Preprint summary listing affiliations and detailing the three-stage methodology and fieldwork sites.
- Independent index citing the same article metadata and DOI.