Written by Cláudia Oliveira & Bart Simons
Designing over thirty thousand unique bouquets

In late 2024, MRM Spain asked us to help launch an ad campaign for Allevia, a leading UK allergy relief brand under their global healthcare client, Opella. The campaign supported allergy-prone travelers by offering personalized Pollen Passports, each featuring a unique bouquet visualization and a discount code to purchase the Allevia medicine. These digital bouquets were generated using real-time pollen and air quality data based on the user’s destination and travel date, meaning we had to build a system capable of creating endless personalized combinations.

With the concept already defined, we quickly identified a few challenges. Firstly, while the data available to us was extensive, it wasn’t feasible to accurately represent every individual plant species across the globe. In addition, with such a broad dataset resulting in these unique bouquets, we needed to find a way to help users understand the meaning behind their visualizations. This meant we also needed to design an interface that could clearly communicate the story behind each generative visualization. With these challenges in mind, we set out to discover workarounds that would enable us to create a meaningful, user-friendly platform for this campaign.

Design space

The wow factor was settled from the start: gorgeous bouquets! The remaining decisions, however, largely revolved around how users would interact with the campaign. The main questions we asked ourselves were:

  • How can we easily and quickly help the user grasp how to interact with the platform?
  • Is there something we can do to make the connection between the visualization and the discount code stronger?
  • Can we take this opportunity to strengthen the brand perception as a source of trust?

To support the visualizations, we created an efficient interface that ensured data transparency and accuracy, pairing each bouquet with simple insights that helped users understand the data and built confidence in both the data and the brand.

Generative bouquets

Discovering an efficient system

So, where to start when you’re about to build a system that allows you to generate tens of thousands of different unique bouquets? We decided to tackle this from the ground up, almost literally, by diving into the plant species that are present in the dataset that was to feed the experience. We were building a bouquet visualization engine, and to design it for both accuracy and aesthetics we needed a sense of how each leaf, petal, stem and other “component” might look.

The approach we originally aimed for was to “modularize” each species. Depending on the look of the plant, we would create a few different assets that could be used and organized in flexible ways to produce many variants of the same plant. To push this further, we prototyped a system that combined these modular assets with truly generative components. This meant our code had to be able to draw those components from scratch, which we would be able to do for simpler shapes like stems or grass.

To test this idea, we built an algorithm to generate unique stems for aster flowers, and then placed randomly selected flower head assets on top. The results were promising, but getting them to the visual fidelity we wanted would take a lot of time and effort. And this was just for the stems of asters, one component of one of over 40 species in the dataset.
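To give a sense of what such a stem generator might look like, here is a minimal sketch: a noisy upward walk producing control points that could later be rendered as a smooth curve. All names and parameters are illustrative assumptions, not the production code.

```typescript
// Generate control points for a swaying stem: start at the base and
// drift left or right a little at each step while climbing upward.
interface Point { x: number; y: number; }

function generateStem(
  height: number,                    // total stem height in pixels
  segments: number,                  // number of steps up the stem
  sway: number,                      // max horizontal drift per step
  rand: () => number = Math.random,  // injectable for reproducibility
): Point[] {
  const points: Point[] = [{ x: 0, y: 0 }];
  let x = 0;
  for (let i = 1; i <= segments; i++) {
    x += (rand() * 2 - 1) * sway;                 // random horizontal drift
    points.push({ x, y: -(height / segments) * i }); // climb in even steps
  }
  return points;
}

const stem = generateStem(300, 8, 12);
console.log(stem.length); // 9 control points, base included
```

In a real renderer these points would feed a curve (for example a Catmull-Rom spline drawn with Canvas2D) rather than being used directly.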

Evaluation and decision-making

Internally, we often describe creative coding as designing a system with rules and variations. It’s about building in randomness, but with just enough control to shape the result. That’s what we ended up doing for the Pollen Passport, but with an important shift in direction.

Instead of combining generative code with modular assets like before, we focused fully on the modular side: creating a rich library of assets for each species, from single petals to entire plant clusters. The system we built, which we called the composition engine, was designed to place and combine these assets based on the pollen data that we fed it with.

In other words, we put the generative plant-building idea on hold, and chose a more controlled, asset-driven approach, one that allowed us to work faster, and stay true to the visual style we aimed for.

Having figured out some of the technical approach brought us to the real question: what makes a bouquet beautiful? We quickly found there’s a lot of research and theory around this, like the 3-5-8 rule. But applying that to the Pollen Passport turned out to be tricky since we weren’t working with real plants, or even in a 3D space; we were positioning and transforming a select set of pre-made assets instead.

The best way to figure out the rules and systems you need is simply to start building. It’s easy to overthink the visuals in theory, but when you introduce randomness into the mix, surprises are inevitable. So rather than designing every detail up front, we let early mockups and drafts of the Pollen Passport guide us. From there, we defined a set of visual principles that became the foundation for the composition engine, including:

  • Plants should be well spaced out
  • There should be minimal overlap between species
  • Visually heavier elements should appear towards the back of the bouquet

These rules gave the system just enough structure to keep things looking intentional, even as elements were arranged dynamically.

Our tech stack for the visualization was intentionally simple and flexible. We used ThreeJS to construct the environment in a semi-3D space, giving us full control over positioning and a natural sense of depth. It also let us layer in custom, data-driven shaders without much overhead. For the 2D elements, we relied on the Canvas2D API, first generating flat visuals, then passing them into the ThreeJS scene as textures. This approach gave us the precision of 2D illustration with the richness of spatial composition.

Rules, whitespace, and asset structure

Based on these visual principles, we started forming rules. We split up the visual into multiple layers, each with different properties and behavior. Back layers were meant for bigger, full plants, such as components with multiple flowers and lots of leaves. Occlusion was minimized by calculating how much overlap a plant has with another plant. By adding assets to the composition engine, we started building and mapping out the puzzle. We continuously refined the logic of these layers by adding or removing elements, and this provided us with something to start iterating upon.

We created a grid system to form the bouquet logic around the principles we defined. Each layer of the bouquet visualization would get its own grid, and for each grid we could run overlap checks based on whether a cell was active or not. If a new plant was added to the bouquet, we could check which cells it hit, and whether those were already active. From there we could decide to include that new plant, throw it away, or reposition it.

However, we ran into some interesting findings when we started testing this part of the algorithm with the first actual plant assets. Logically, these plant assets contain white space: flowers have petals with gaps between them, a flower head might sit on top of a thin stem, leaves stick out from branches, and so on. This white space is treated as part of the image we position on the grid. So white space next to a thin stem could cause a false positive when layered over another plant, flagging the two images as overlapping even though one of them is actually empty space.
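The per-layer grid idea can be sketched as follows: each layer keeps a boolean grid of occupied cells, and a new asset's covered cells are checked before it is placed. The class and cell coordinates here are our own illustrative simplification.

```typescript
// A single layer's occupancy grid: cells flip to true once a plant
// asset is placed over them, so later assets can count collisions.
class LayerGrid {
  private cells: boolean[];
  constructor(private cols: number, private rows: number) {
    this.cells = new Array(cols * rows).fill(false);
  }

  // Indices of cells covered by an axis-aligned box in cell coordinates.
  private cellsFor(x: number, y: number, w: number, h: number): number[] {
    const hits: number[] = [];
    for (let cy = y; cy < y + h; cy++)
      for (let cx = x; cx < x + w; cx++)
        if (cx >= 0 && cy >= 0 && cx < this.cols && cy < this.rows)
          hits.push(cy * this.cols + cx);
    return hits;
  }

  // How many of a candidate asset's cells are already occupied?
  overlap(x: number, y: number, w: number, h: number): number {
    return this.cellsFor(x, y, w, h).filter(i => this.cells[i]).length;
  }

  // Commit an asset to the grid, marking its cells as active.
  place(x: number, y: number, w: number, h: number): void {
    for (const i of this.cellsFor(x, y, w, h)) this.cells[i] = true;
  }
}

const grid = new LayerGrid(10, 10);
grid.place(0, 0, 4, 4);                // first plant occupies a 4x4 block
console.log(grid.overlap(2, 2, 4, 4)); // → 4 colliding cells
```

Based on the overlap count, the engine can then keep, discard, or reposition the candidate asset. Note that this version has exactly the white-space problem described above, since it treats the asset's full bounding box as solid.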

This wasn’t the end of the road. We built another layer on top of the overlap detection logic. Instead of looking at the entire image, we automatically generated either the convex or concave hull of the asset’s shape. For a convex hull, imagine stretching a rubber band around the outermost points of a shape. It naturally pulls into a tight boundary that touches only the furthest edges. This still leaves some white space, so we also implemented concave hull generation, which takes it a step further. That’s more like wrapping something tightly with cling film. If done well, it hugs every contour perfectly.
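The convex-hull step can be sketched with Andrew's monotone chain algorithm over an asset's opaque pixel coordinates. This shows only the convex case; the concave hull used in production hugs contours more tightly and is more involved.

```typescript
// Convex hull via Andrew's monotone chain: sort points, then build the
// lower and upper halves of the "rubber band" boundary.
type Pt = [number, number];

// Cross product sign tells us whether three points turn left or right.
function cross(o: Pt, a: Pt, b: Pt): number {
  return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0]);
}

function convexHull(points: Pt[]): Pt[] {
  const pts = [...points].sort((a, b) => a[0] - b[0] || a[1] - b[1]);
  if (pts.length < 3) return pts;
  const build = (src: Pt[]): Pt[] => {
    const hull: Pt[] = [];
    for (const p of src) {
      // Pop points that would create a clockwise (or collinear) turn.
      while (hull.length >= 2 &&
             cross(hull[hull.length - 2], hull[hull.length - 1], p) <= 0)
        hull.pop();
      hull.push(p);
    }
    hull.pop(); // endpoint is repeated by the other half
    return hull;
  };
  return [...build(pts), ...build([...pts].reverse())];
}

// A square of sample points plus one interior point: only corners survive.
const hull = convexHull([[0, 0], [4, 0], [4, 4], [0, 4], [2, 2]]);
console.log(hull.length); // → 4
```

In practice the input points would come from scanning the asset image for non-transparent pixels, and the resulting polygon replaces the bounding box in the overlap checks.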

By generating these hulls, we finally got the overlap detection working properly. It was a mix of grid placement and shape generation coming together, and we could finally start assembling things. That said, we quickly realized that full control over asset overlap wasn’t enough to create pleasing bouquets. We could now say things like “plant A can only overlap a little with plant B,” and while that was an important piece of the puzzle, that level of control alone didn’t make the bouquets actually look beautiful.

We took a step back and shifted our focus onto something a bit more fundamental: what plants should actually go into a bouquet based on the data a user provides. This part of the composition engine became known as the pollen processor. Its job is to take the raw pollen data and turn it into something the bouquet system can actually work with. Once we pull in the user’s input and run some calculations using pollen and air quality data from the API, we break the whole composition down into a percentage-based distribution.

From there, we pick the top three most allergenic species based on pollen levels. The more pollen a plant is producing, the more dominant it becomes in the bouquet. If only one species is active in the data, we add a second one that’s common in the region, to avoid making the bouquet feel too flat. The goal was to reflect the data, without sacrificing a visually appealing composition.
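The pollen processor's selection step, as described above, can be sketched like this: normalize the raw readings into percentages, keep the top three species, and fall back to a common regional plant when only one species is active. The function name, fallback share, and data shape are our own assumptions.

```typescript
// Turn raw pollen readings into a percentage-based species distribution
// for the bouquet, following the rules described in the article.
interface PollenReading { species: string; level: number; }

function processPollen(
  readings: PollenReading[],
  regionalFallback: string,  // common regional species used as a filler
): Map<string, number> {
  const active = readings
    .filter(r => r.level > 0)
    .sort((a, b) => b.level - a.level)
    .slice(0, 3); // top three most allergenic species

  // Avoid a flat, single-species bouquet: add a regional companion.
  if (active.length === 1)
    active.push({ species: regionalFallback, level: active[0].level * 0.3 });

  // Normalize levels into shares of the bouquet's total weight.
  const total = active.reduce((sum, r) => sum + r.level, 0);
  return new Map(active.map(r => [r.species, r.level / total]));
}

const mix = processPollen(
  [{ species: "aster", level: 80 }, { species: "oak", level: 20 },
   { species: "nettle", level: 20 }, { species: "birch", level: 0 }],
  "grass",
);
console.log(mix.get("aster")); // ≈ 0.67 of the bouquet's weight
```

This matches the Barcelona example later in the article: a dominant species ends up with roughly two-thirds of the visual weight, with the remainder split between the others.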

Balanced bouquets

As mentioned earlier, just having overlap logic wasn’t enough to make the bouquets look good. So we turned our attention to the rules and variations within the bouquet composition itself. At its core, each visualization is built from three plant layers: back, middle, and front. Based on the pollen levels at the user’s destination, we calculate a maximum bouquet weight. High pollen concentration results in a heavier, denser bouquet, while low levels lead to something lighter and more minimal.

This internal weight number helps the algorithm decide which assets to use for each plant type. For every species, we have multiple assets, and we manually assigned visual weights and preferred layers to each one. A thin stem, for example, adds less visual weight than a dense cluster of leaves and branches. That stem is better suited to the middle layer, where it can be partially covered by other elements. The leafy cluster, on the other hand, belongs in the back so it doesn’t overpower the entire bouquet.
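The weight-budget mechanism might be sketched as follows: each species gets a share of the bouquet's maximum weight, and the engine walks the layers picking the heaviest asset that still fits the remaining budget. The asset fields, layer order, and greedy picking strategy are illustrative assumptions on our part.

```typescript
// Pick assets for one species, layer by layer, against a weight budget.
interface Asset {
  species: string;
  layer: "back" | "middle" | "front"; // manually assigned preferred layer
  weight: number;                     // manually assigned visual weight
}

function pickAssets(
  library: Asset[],
  species: string,
  budget: number,
  layerOrder: Array<Asset["layer"]> = ["back", "middle", "front"],
): Asset[] {
  const chosen: Asset[] = [];
  let remaining = budget;
  for (const layer of layerOrder) {
    // Heaviest asset for this species and layer that still fits.
    const candidates = library
      .filter(a => a.species === species && a.layer === layer && a.weight <= remaining)
      .sort((a, b) => b.weight - a.weight);
    if (candidates.length === 0) continue;
    chosen.push(candidates[0]);
    remaining -= candidates[0].weight; // subtract placed weight from the total
  }
  return chosen;
}

const library: Asset[] = [
  { species: "aster", layer: "back", weight: 6 },   // large leafy cluster
  { species: "aster", layer: "middle", weight: 3 }, // thin stem
  { species: "aster", layer: "front", weight: 1 },  // single petal
];
console.log(pickAssets(library, "aster", 10).map(a => a.layer));
// → ["back", "middle", "front"]
```

With a smaller budget the middle-layer stem no longer fits and the engine skips straight to the lighter front-layer petal, which is how high and low pollen levels naturally produce denser or more minimal bouquets.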

Imagine that a user fills in that they’re heading to Barcelona in one week. The data for that date shows a high pollen concentration, with a heavy presence of aster pollen and smaller amounts of oak and nettle. Based on this, the composition engine assigns a high maximum bouquet weight and divides it accordingly. Around two-thirds of the visual weight goes to aster, while the remaining third is split evenly between oak and nettle.

Starting with the back layer, the engine first looks for a high-weight visual for aster. One of the assets we have is a large aster cluster with multiple stems, flowers, and a few leaves. Since aster is the most dominant pollen source, there’s a good chance it lands on that one. Once placed, the visual weight of that asset is subtracted from the bouquet’s total. With not much space left in the back, the engine moves on to the middle layer.

Here, it looks to balance things out with either oak or nettle. It might choose a branching oak asset, something with a few scattered leaves that adds shape without overwhelming the bouquet. Or perhaps a lighter, more vertical nettle branch that cuts through the composition just enough to make its presence felt. Then, in the front layer, there’s just enough weight left for a couple of smaller details, maybe a single aster petal and a small nettle leaf, reintroducing nettle subtly and tying the whole arrangement together.

To achieve a natural feeling, we introduce flexibility in the engine, which occasionally picks an asset that’s slightly above or below the current target weight. It’s a simple rule, but it goes a long way in preventing the bouquets from feeling too predictable, all while staying true to the story we’re trying to tell through the composition.

In addition to the placement logic, which includes overlap detection, the weight system, and the pollen-based plant selection, we defined a wide range of other rules to make sure the bouquets felt balanced and visually cohesive. Some of these rules are about layers, others about plant types, and some are even specific to individual assets. Most bouquets are centered horizontally within the composition, but depending on the combination of elements, some assets are allowed to shift away from that center. For example, if a heavy asset is flipped horizontally, the next one might need to be lighter and flipped in the opposite direction. If one plant has a slight rotation, the next may need to counterbalance it. These are just a few of the many hand-crafted rules that influence the outcome. Much of it came through trial and error—looking at hundreds or even thousands of outputs to spot what worked, what didn’t, and then turning those insights into code the composition engine could understand.
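One of the hand-crafted rules described above, mirroring a heavy flipped asset and counterbalancing its rotation, might look like this in isolation. The field names and the 0.5 counter-rotation factor are hypothetical, chosen only to illustrate the rule.

```typescript
// Adjust the next placement relative to the previous one: mirror the
// flip direction and counter-rotate to offset the previous tilt.
interface Placement { flipX: boolean; rotation: number; weight: number; }

function balanceNext(previous: Placement, next: Placement): Placement {
  return {
    ...next,
    flipX: !previous.flipX, // mirror relative to the previous asset
    rotation: previous.rotation !== 0
      ? -previous.rotation * 0.5 // partial counter-rotation
      : next.rotation,
  };
}

const heavy: Placement = { flipX: true, rotation: 10, weight: 6 };
const light: Placement = { flipX: true, rotation: 0, weight: 2 };
console.log(balanceNext(heavy, light)); // flipX: false, rotation: -5
```

Dozens of small, composable rules like this one are what keep a randomized arrangement looking deliberate.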

To help manage that complexity, and to better understand the effects of our decisions, we built a few internal tools. One of them, the asset editor, let us quickly adjust and swap out data points defined for each asset, which are used by the engine’s rule system. We could tweak the visual weight of a plant and instantly see how that changed bouquets containing it. We could also redefine the area used for overlap detection, nudging the asset into a different position within a given composition.

Conclusion

Could we have found a different solution that didn’t rely on creative coding? Possibly. But would it offer the same flexibility and impact? That’s harder to say. Building this system allowed us to translate the user's input into a visual and engaging experience, while also giving us a solid foundation for future growth. Whether it's expanding the bouquet library or integrating new data sources, the system can evolve easily and organically.

Such a complex system can be challenging to manage, but it produces unique, impressive results that we couldn’t easily achieve any other way. With such broad data, we also found other places for creative coding-inspired solutions, such as the guilloche patterns in the background of each bouquet. All the details about these additions are available in our website case study.

Looking at all the possibilities and the extensive library we’ve created, this landscape became a dream for any creative team. With this rich set of components, the experience could grow into a standalone platform or even be adapted to integrate seamlessly into brand communications.

Without the creative coding solution we found, organizing and translating these massive datasets into engaging images would have been a much trickier task. We might have ended up with just ten different bouquets that change based on the area, or a single bouquet with pop-ups. These options might have given us a more controlled visual outcome, but our goal of making each bouquet truly unique to the user would have fallen short.

Read the full project case study here.
