Trustworthy interactions with large language models
How can we integrate large language models in design effectively while addressing their limitations and ensuring trust?
We initiated this project for our Sensor Lab event space, which lacks windows and therefore natural daylight. To address this, we developed a 'virtual window' concept: an LED wall that brings the light, motion and liveliness of the street indoors.
We first experimented with showing actual pictures on the wall. This taught us that, because of the LED wall's low resolution, users understood the context better from motion and color than from a literal image captured from the street.
As a follow-up, we plan to develop remote control for the LED wall, so users can interact with it from their phones: switching the wall on and off, playing different animations, or adjusting the brightness.
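As a rough sketch of how such phone control could work, the remote interface might boil down to a small set of JSON commands sent to the wall over any transport (for example MQTT or HTTP). The action names and the build_command helper below are hypothetical, not part of the project as built:

```python
import json

# Hypothetical command set for the planned remote control:
# on/off toggles the wall, animation selects a preset, brightness sets intensity.
VALID_ACTIONS = {"on", "off", "animation", "brightness"}

def build_command(action, **params):
    """Serialize one control command for the LED wall as a JSON string."""
    if action not in VALID_ACTIONS:
        raise ValueError(f"unknown action: {action}")
    if action == "brightness" and not 0 <= params.get("level", -1) <= 100:
        raise ValueError("brightness level must be between 0 and 100")
    return json.dumps({"action": action, **params})

# Example: the phone app would send this payload to the wall controller.
payload = build_command("brightness", level=40)
```

Keeping the protocol to a few validated, self-describing commands makes it easy to add new animations later without changing the phone app.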
BMX ride signature visualization
Revealing the unique style of each cyclist
Designing over thirty thousand unique bouquets
How can we design a creative engine for mass-generated personalized compositions?