C°F Experiments: Learning and Growing Through Experimentation
CLEVER°FRANKE’s self-initiated projects and the value of experimentation
How can we actually build a brand identity that generates itself from context? This is the question we set out to answer when we built the AI-driven generative identity system behind ADC’s new branding.
ADC is a consultancy leading the charge of AI transformation, and they wanted a visual identity that could keep up with the ever-evolving AI landscape. That meant they didn’t need a fixed set of templates, but a system that generates unique, on-brand visuals for every piece of content they put out, whether it’s an Instagram post, a LinkedIn banner, a presentation cover, an employee welcome graphic, or something for an event. Crucially, it had to be usable not just by developers and tech-savvy staff, but by everyone at the company.
To address this, we built a visual language that feels optimistic and alive. It’s grounded in computation, but expressed through bright colors and smooth organic shapes rather than the sterile, technical aesthetic that has come to define the look and feel of so many other companies in the AI sector. The new identity is meant to reflect the people and culture at the heart of ADC, not just the technology and field they operate in. You can read more about the full identity in our case study here.
To really understand the AI layer built into the core of the identity, it’s good to know the system that powers it. The generator produces patterns from six SVG shapes: circles, rectangles, polygons, and composite shapes like crosses and pluses. Roughly 60 parameters control the outputs, and 23 of them have the biggest impact on the visual outcome.
The layout is driven by a quadtree. Points are distributed across a 2D space using strategies like noise fields, sine waves or random scatter. Dense areas are subdivided more finely, while sparse areas stay coarse. That’s what makes the patterns stand out: tight clusters of small shapes are laid out next to large, open ones. It’s what makes them feel organic, even though the underlying grid is a data structure.
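The density-driven subdivision can be sketched in a few lines. This is a minimal illustration with made-up thresholds, not ADC’s actual implementation:

```python
# Minimal quadtree sketch: cells subdivide where points cluster, so dense
# regions end up with many small cells and sparse regions keep large ones.
# Thresholds and names are illustrative.

def subdivide(bounds, points, max_points=4, depth=0, max_depth=6):
    """Return leaf cells (x, y, w, h) covering a set of 2D points."""
    x, y, w, h = bounds
    inside = [(px, py) for px, py in points
              if x <= px < x + w and y <= py < y + h]
    if len(inside) <= max_points or depth == max_depth:
        return [bounds]
    hw, hh = w / 2, h / 2
    cells = []
    for qx, qy in [(x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)]:
        cells += subdivide((qx, qy, hw, hh), inside,
                           max_points, depth + 1, max_depth)
    return cells
```

Feed it one tight cluster and a lone point, and the cluster’s quadrant keeps splitting while the rest of the canvas stays as a handful of big cells.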
Each cell in the quadtree gets a shape sized according to the cell’s dimensions and the active parameters. All the shapes are then merged into a single SVG path via Boolean union. The parameters let shapes expand beyond their initial cells, growing, rotating and shifting them. That guarantees plenty of overlap, which is exactly what gives the connected, blobby, organic forms their signature look.
Merging hundreds or thousands of shapes is computationally expensive, so we offload those calculations to a web worker to keep the browser responsive. Smoothing proved to be the harder problem. We work with straight lines and points instead of Bézier curves, and the algorithm struggled with transitions between smooth circular edges and sharp angular ones. We ended up with a hybrid solution: Paper.js for the general merge and smoothing pass, and Chaikin’s corner-cutting algorithm specifically for edges near circular segments. It’s one of those things that sounds simple to build, but took a while to get just right.
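Chaikin’s scheme itself is pleasantly simple: every pass replaces each edge with two points at 1/4 and 3/4 along it, cutting corners until the polyline looks smooth. A sketch of just that corner-cutting step (the Paper.js merge pass is not shown):

```python
# Chaikin's corner cutting: each edge (p, q) becomes two points at 1/4
# and 3/4 along it; repeated passes round off sharp corners.

def chaikin(points, iterations=2, closed=True):
    for _ in range(iterations):
        smoothed = []
        n = len(points)
        last = n if closed else n - 1  # closed paths wrap around
        for i in range(last):
            (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = smoothed
    return points
```

One pass over a square turns its 4 corners into 8 gentler ones; a couple more passes and it approaches a rounded blob.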
The first version of the generator was seed-based: users described the context of their pattern with a few inputs, such as what kind of content it was for, which sector it related to, and some free text about the topic. A user could type something like “LinkedIn, announcement, new colleague Laurent.” All of that was turned into a string, hashed, and fed into a custom PRNG, which rolled every pattern parameter deterministically. Give it the exact same input, and you’d always get the exact same output.
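That seed-based flow can be sketched roughly like this, with FNV-1a as a stand-in hash and Python’s built-in PRNG. The parameter names are illustrative, not the actual config schema:

```python
import random

# Sketch of the seed-based flow: the context string is hashed, the hash
# seeds a PRNG, and every parameter is rolled from it. Identical input
# gives identical output; one changed character changes everything.

def fnv1a(text):
    """64-bit FNV-1a hash of a string (stand-in for the custom hash)."""
    h = 0xcbf29ce484222325
    for byte in text.encode("utf-8"):
        h = ((h ^ byte) * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF
    return h

def roll_params(context):
    rng = random.Random(fnv1a(context))  # deterministic per input string
    return {
        "grid_scale": rng.uniform(0.2, 1.0),
        "noise_strength": rng.uniform(0.0, 0.5),
        "shape": rng.choice(["circle", "rect", "cross", "plus"]),
    }
```

Swapping a single word like “colleague” for “coworker” yields an entirely different hash, and therefore an entirely different parameter set, which is exactly the problem described below.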
The issue with this process was that variation came from the structure of the words rather than their meaning. Changing “colleague” to “coworker” changed the entire pattern: composition, density, everything. The meaning is the same, since the two words are synonyms; only a few characters differ. The context was always there in every input, but the generator had no way of reading it. So the task became building a brand identity that doesn’t just generate from context, but actually understands it.
The answer to making the generator understand context was embeddings. An embedding model takes some kind of input, like text or an image, and sends it through a neural network that has learned to recognize patterns from huge amounts of training data. The network turns the input into a long list of numbers, a “vector”, that captures its meaning. Similar inputs end up nearby in the vector space, while different inputs land farther apart.
Instead of turning the user’s input text into a number, we embed it into a vector space. Texts about similar topics produce similar vectors, and similar vectors produce similar patterns. Reports about AI in healthcare and AI in pharma are close together in the vector space, and their patterns reflect that. It’s a simple shift, but it changes the logic of the generator completely. We go from different characters producing different numbers to different meanings producing different visual characters.
A general purpose embedding model doesn’t know anything about ADC specifically. It knows that “healthcare” and “medicine” are related, but it doesn’t know how ADC talks about healthcare, what their projects look like, or what matters to them as a company. So, we built a custom reference space from ADC’s own content. We gathered close to a hundred public sources talking about ADC: website pages, blog posts, case studies, LinkedIn posts, and press coverage. We used NotebookLM to help with extraction, feeding it ADC’s materials and using it to surface the most distinctive and representative statements about what ADC does, how they work, and what they stand for. This gave us over 100 data points, and we ran them through an embedding model to generate embeddings for each one.
Each of those embeddings needed a pattern to go with it. Assigning these patterns was a manual process. We generated thousands of patterns, then looked at the context of each embedding, and connected the two by hand. This was done by assigning a pattern config—basically a set of parameters—to an embedding so the generator doesn’t have to roll those anymore. So, we had to think about things like: what should a pattern about AI readiness in the public sector look like? What about a new hire announcement? We made these calls ourselves, meaning we put our design thinking into ADC’s context. That curation is what gives the system its initial visual taste, and it’s the starting point from which it grows.
The general flow of the pattern generator works as follows. Everything starts with the user input for which we want to generate a custom pattern. This input is sent to our living identity backend, where the all-MiniLM-L6-v2 embedding model transforms the text into a vector representing its meaning. We then use pgvector’s cosine distance operator to find the 5 vectors in our database closest to the user input.
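pgvector’s cosine distance operator (`<=>`) computes 1 minus the cosine similarity, and `ORDER BY embedding <=> $1 LIMIT 5` returns the five closest rows. Here is a plain-Python sketch of the same lookup, with toy 2D vectors standing in for real 384-dimensional embeddings:

```python
import math

# Plain-Python equivalent of sorting rows by pgvector's cosine distance
# and taking the k nearest. Toy vectors stand in for real embeddings.

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def nearest_neighbors(query, rows, k=5):
    """rows: list of (id, vector) pairs; returns the k closest ids."""
    ranked = sorted(rows, key=lambda r: cosine_distance(query, r[1]))
    return [rid for rid, _ in ranked[:k]]
```

In production the database does this ranking itself; the sketch just makes the distance metric concrete.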
Since we want the identity to evolve over time, we don’t just generate the closest matching pattern. Instead, we use a fusion algorithm with one main hyperparameter, temperature, which dictates how the 5 input patterns are combined. With a low temperature value (0.05), we play it safe: the nearest neighbor dominates, so the result closely resembles the most relevant existing pattern. With a normal temperature value (0.1), we create a balanced blend across all five neighbors, leading to more novel combinations. With a high temperature (0.5), more distant neighbors gain influence, producing more unexpected, creative results.
In general, we use softmax-weighted interpolation to blend the parameters of our five input patterns, with the temperature hyperparameter dictating each neighbor’s weight. Numerical fields like grid scale, noise strength and mask threshold use a weighted average. Boolean fields like “uniform sizing on/off” use probability voting, where each neighbor’s vote is weighted by similarity to determine the outcome. Categorical fields such as “distribution mode” work similarly, with each neighbor voting for its category and the highest weighted score winning.
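Put together, the fusion step looks roughly like this sketch. Field names are illustrative and only two neighbors are shown instead of five; note the boolean check comes before the numeric one, since Python booleans are also ints:

```python
import math

# Softmax-weighted fusion sketch: neighbor similarities become weights,
# temperature controls how sharply the weights favor the closest neighbor.

def softmax_weights(similarities, temperature):
    exps = [math.exp(s / temperature) for s in similarities]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(neighbors, similarities, temperature):
    w = softmax_weights(similarities, temperature)
    fused = {}
    for key in neighbors[0]:
        values = [n[key] for n in neighbors]
        if isinstance(values[0], bool):
            # probability voting: weighted share of True votes
            fused[key] = sum(wi for wi, v in zip(w, values) if v) > 0.5
        elif isinstance(values[0], (int, float)):
            # numeric fields: weighted average
            fused[key] = sum(wi * v for wi, v in zip(w, values))
        else:
            # categorical fields: each neighbor votes for its category
            scores = {}
            for wi, v in zip(w, values):
                scores[v] = scores.get(v, 0.0) + wi
            fused[key] = max(scores, key=scores.get)
    return fused
```

At temperature 0.05 the nearest neighbor’s weight approaches 1 and the output hugs that pattern; at higher temperatures the weights flatten out and distant neighbors pull the blend toward the middle.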
The result is three output patterns: safe (low temperature), neutral (medium temperature) and creative (high temperature). Each of these uniquely generated patterns is stored in our database, using a hash of its seed and configuration as the primary key.
When the system generates the three variants, they all get stored. The user picks their favorite, and that chosen pattern, with its config and embedding, becomes available as a neighbor for future queries. This is what makes the system truly live. Every choice adds a new entry to the embedding space, and each one reflects a real decision by someone at ADC: the context the pattern was made for and the visual preference of the person who chose it.

The next time someone generates a pattern for a similar topic, the system won’t just find our original curated references but also what ADC’s own people have chosen before. Over time, the embedding space shifts from reflecting our initial design kickstart to reflecting ADC’s preferences. Because the same context always produces the same embedding and seed hash, regenerating overwrites previous results rather than duplicating them. The identity doesn’t just provide variation at any given moment; the variation itself evolves over time.
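The overwrite-not-duplicate behavior follows from using a content-derived key. A sketch of the idea, with an in-memory dict standing in for the database and an illustrative hashing scheme:

```python
import hashlib
import json

# Sketch: the primary key is a hash of the context and configuration, so
# regenerating for the same context replaces the row instead of adding one.
# In-memory dict stands in for the real database table.

patterns = {}

def pattern_key(context, config):
    payload = json.dumps({"context": context, "config": config},
                         sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def store_pattern(context, config):
    key = pattern_key(context, config)
    patterns[key] = {"context": context, "config": config}  # upsert
    return key
```

Storing the same context and config twice lands on the same key and overwrites; any change in either produces a new row.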
The living identity is now live. ADC’s team is already using it across their communications, and with every pattern they generate, the embedding space grows. It’s still early, but the system is learning. We’re really excited about where this could go beyond just making content. What if, when new employees join, they get a pattern customized for their role and what they’re excited to work on? What if teams create patterns to mark important milestones or internal events? And with every generation logged, there’s data for keeping an eye on the state of the identity. What’s the average grid density trending toward, which shape distributions are favored, where are the visual preferences heading? It’s not just about metrics, but about watching the identity and its underlying rules evolve over time.
The feedback loop already teaches the system which patterns work for which topics. The next step is to let it reflect on its own data and reshape the boundaries of what it generates. If everyone starts choosing higher-density patterns, should the density ceiling shift up? If certain modes never get picked, should they fade out? At the same time, we need to keep a close eye on this. This is uncharted territory, so we have to make sure the identity doesn’t gradually drift in one direction or lose the variety that makes it work. What we’ve built is a brand identity that adapts to context and grows with the company it represents. We’re eager to see where it goes.