Data Poets – People, Urban Space, and AI

Created by Gaston Welisch, the Data Poets project explores embodied interactions between people and urban spaces through AI, pushing the boundaries of how we experience AI beyond screens and voices. Inspired by the organic aesthetics of ceramics, Gaston wanted to create a physical presence for AI, turning it into an active “experiencer” of the world around us.


The data collected helps people paint a more sensitive representation of a local area through play. The project invites participants to re-appropriate places and give them new, poetic meaning. The Data Poets can be used during cultural events, letting tourists keep a meaningful memory of their visit. They can also be used locally for consultation, to reveal overlooked perspectives of an area. The resulting experiences can be celebrated through exhibitions or publications. The Data Poets bring a poetic approach to Citizen Science (data-collecting by ordinary people): instead of exploring data for scientific reasons, what happens if we have fun with it?

The project comprises three AI devices: The Aesthete, a neon yellow device equipped with a camera, designed to be held at waist height to capture visual stimuli; The Bard, a device resembling a canister and microphone, created for capturing audio data; and The Fountain Printer, a red device worn like a pendant, housing a receipt printer to translate sensory inputs into tangible, printed poems.

These prototypes were crafted using 3D printing, Arduino, and Processing, with selective laser sintering (SLS) and nylon powder providing a tactile quality. GPT-2 and Google Vision were used to craft (often nonsensical but charming) poems.
Most recently, Gaston transitioned the project from physical prototypes to a web-based platform, using GPT-4o and OpenAI’s machine vision capabilities. This new iteration allows users to upload images, generating poems that reflect their sensory experiences of places meaningful to them.
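
The current workflow can be approximated in a few lines: a user’s photo is sent to GPT-4o, which both interprets the image and writes the poem. The sketch below assumes the official OpenAI Python SDK; the prompt wording, file name, and parameters are illustrative, not the project’s actual code.

```python
import base64
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def poem_from_image(image_path: str, place_name: str) -> str:
    """Send an uploaded image to GPT-4o and ask for a short poem about the place."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": f"Write a short, playful poem about {place_name}, "
                             "based only on what you can see in this image."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }
        ],
        max_tokens=200,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical example input: any local photo and a place name.
    print(poem_from_image("park.jpg", "the local park"))
```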


The initial prototypes (2020) used Arduino (microcontrollers for processing user input and connecting to sensors), Processing (main logic), selective laser sintering (3D printing of the physical devices), Google Vision (image recognition and labelling), and GPT-2 (poem generation).
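
For reference, the 2020 pipeline can be approximated as: Google Vision labels the captured image, and GPT-2 continues a prompt seeded with those labels. The sketch below is a Python approximation of that pipeline (the original devices orchestrated it from Processing and Arduino), assuming the google-cloud-vision and transformers packages; function names and prompt text are illustrative.

```python
from google.cloud import vision              # assumes google-cloud-vision is installed and authenticated
from transformers import pipeline, set_seed  # assumes transformers with a GPT-2 checkpoint


def labels_for_image(image_path: str) -> list[str]:
    """Ask Google Vision for label annotations describing the captured scene."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations[:5]]


def poem_from_labels(labels: list[str]) -> str:
    """Seed GPT-2 with the detected labels and let it ramble into an (often nonsensical) poem."""
    set_seed(42)
    generator = pipeline("text-generation", model="gpt2")
    prompt = "A poem about " + ", ".join(labels) + ":\n"
    result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
    return result[0]["generated_text"]


if __name__ == "__main__":
    # Hypothetical example: a frame captured by The Aesthete, saved to disk.
    print(poem_from_labels(labels_for_image("street_capture.jpg")))
```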

The web-based platform (2024) uses GPT-4o both as a generative LLM for creating poetic outputs and as a machine vision system for interpreting images, Python + Flask (backend development and data processing), SQLite (storing user inputs and generated poems), HTML, CSS, and JavaScript (frontend), and GPS coordinates provided by OpenStreetMap.
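
A minimal sketch of how a Flask backend like this might persist submissions in SQLite is shown below; the endpoint name, table schema, and field names are assumptions for illustration, not the project’s actual implementation.

```python
import sqlite3
from flask import Flask, request, jsonify  # assumes Flask is installed

app = Flask(__name__)
DB_PATH = "poems.db"  # illustrative database file


def init_db() -> None:
    """Create a simple table for user submissions and their generated poems."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS poems (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   latitude REAL,
                   longitude REAL,
                   poem TEXT
               )"""
        )


@app.route("/poems", methods=["POST"])
def save_poem():
    """Store a generated poem together with the coordinates of the place it refers to."""
    data = request.get_json()
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "INSERT INTO poems (latitude, longitude, poem) VALUES (?, ?, ?)",
            (data["lat"], data["lon"], data["poem"]),
        )
    return jsonify({"status": "saved"}), 201


if __name__ == "__main__":
    init_db()
    app.run(debug=True)
```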

Project Page | Gaston Welisch
