Dyslextiskt Kebabstuk

Content aggregator

Droptica: Automated Content Creation in Drupal: Field Widget Actions Tutorial with Real Results

Drupal planet - Tue, 02/03/2026 - 11:15

Information gathering, content writing, proofreading, SEO optimization, tag preparation – all these tasks consume a significant portion of the editorial team’s time. What if you could reduce this research time by up to 90% through automated content creation? In this article, I present a practical Drupal setup that uses AI-powered modules to generate editorial content with minimal manual input. This includes automatic information retrieval based on the title, tag generation, content creation, and detailed data fetching – all directly in your CMS, without switching between different tools. Read on or watch the episode from the Nowoczesny Drupal series.

Categories: Data

Specbee: Extending Drupal Canvas with Canvas Full HTML: A step-by-step Integration guide

Drupal planet - Tue, 02/03/2026 - 10:24
Drupal Canvas (Experience Builder) limits WYSIWYG editing within its own text formats. Use the Canvas Full HTML module to remove these limitations, giving content editors full control over their rich text content.
Categories: Data

The Grayscale Problem

Smashingmagazine - Mon, 10/13/2025 - 12:00

Last year, a study found that cars are steadily getting less colourful. In the US, around 80% of cars are now black, white, gray, or silver, up from 60% in 2004. This trend has been attributed to cost savings and consumer preferences. Whatever the reasons, the result is hard to deny: a big part of daily life isn’t as colourful as it used to be.

The colourfulness of mass consumer products is hardly the bellwether for how vibrant life is as a whole, but the study captures a trend a lot of us recognise — offline and on. From colour to design to public discourse, a lot of life is getting less varied, more grayscale.

The web is caught in the same current. There is plenty right with it — it retains many of its founding principles — but its state is not healthy. From AI slop to shoddy service providers to enshittification, the digital world faces its own grayscale problem.

This bears talking about. One of life’s great fallacies is that things get better over time on their own. They can, but it’s certainly not a given. I don’t think the moral arc of the universe bends towards justice, not on its own; I think it bends wherever it is dragged, kicking and screaming, by those with the will and the means to do so.

Much of the modern web, and the forces of optimisation and standardisation that drive it, bear an uncanny resemblance to the trend of car colours. Processes like market research and A/B testing — the process by which two options are compared to see which ‘performs’ better on clickthrough, engagement, etc. — have their value, but they don’t lend themselves to particularly stimulating design choices.

The spirit of free expression that made the formative years of the internet so exciting — think GeoCities, personal blogging, and so on — is on the slide.

The ongoing transition to a more decentralised, privacy-aware Web3 holds some promise. Two-thirds of the world’s population now has online access — though that still leaves plenty of work to do — with a wealth of platforms allowing billions of people to connect. The dream of a digital world that is open, connected, and flat endures, but is tainted.

Monopolies

One of the main sources of concern for me is that although more people are online than ever, they are concentrating on fewer and fewer sites. A study published in 2021 found that activity is concentrated in a handful of websites. Think Google, Amazon, Facebook, Instagram, and, more recently, ChatGPT:

“So, while there is still growth in the functions, features, and applications offered on the web, the number of entities providing these functions is shrinking. [...] The authority, influence, and visibility of the top 1,000 global websites (as measured by network centrality or PageRank) is growing every month, at the expense of all other sites.”

Monopolies by nature reduce variance, both through their domination of the market and (understandably in fairness) internal preferences for consistency. And, let’s be frank, they have a vested interest in crushing any potential upstarts.

Dominant websites often fall victim to what I like to call Internet Explorer Syndrome, where their dominance breeds a certain amount of complacency. Why improve your quality when you’re sitting on 90% market share? No wonder the likes of Google are getting worse.

The most immediate sign of this is obviously how sites are designed and how they look. A lot of the big players look an awful lot like each other. Even personal websites are built atop third-party website builders. Millions of people wind up using the same handful of templates, and that’s if they have their own website at all. On social media, we are little more than a profile picture and a pithy tagline. The rest is boilerplate.

Should there be sleek, minimalist, ‘grayscale’ design systems and websites? Absolutely. But there should be colourful, kooky ones too, and if anything, they’re fading away. Do we really want to spend our online lives in the digital equivalent of Levittowns? Even logos are contriving to be less eye-catching. It feels like a matter of time before every major logo is a circle in a pastel colour.

The arrival of Artificial Intelligence into our everyday lives (and a decent chunk of the digital services we use) has put all of this into overdrive. Amalgamating — and hallucinating from — content that was already trending towards a perfect average, it is grayscale in its purest form.

Mix all the colours together, and what do you get? A muddy gray gloop.

I’m not railing against best practice. A lot of conventions have become the standard for good reason. One could just as easily shake their fist at the sky and wonder why all newspapers look the same, or all books. I hope the difference here is clear, though.

The web is a flexible enough domain that I think it belongs in the realm of architecture. A city where all buildings look alike has a soul-crushing quality about it. The same is true, I think, of the web.

In the Oscar Wilde play Lady Windermere’s Fan, a character quips that a cynic “knows the price of everything and the value of nothing.” In fairness, another quips back that a sentimentalist “sees an absurd value in everything, and doesn’t know the market price of any single thing.”

The sweet spot is somewhere in between. Structure goes a long way, but life needs a bit of variety too.

So, how do we go about bringing that variety? We probably shouldn’t hold our breath on big players to lead the way. They have the most to lose, after all. Why risk being colourful or dynamic if it impacts the bottom line?

We, the citizens of the web, have more power than we realise. This is the web, remember, a place where if you can imagine it, odds are you can make it. And at zero cost. No materials to buy and ship, no shareholders to appease. A place as flexible — and limitless — as the web has no business being boring.

There are plenty of ways, big and small, of keeping this place colourful. Whether our digital footprints are on third-party websites or ones we build ourselves, we needn’t toe the line.

Colour seems an appropriate place to start. When given the choice, try something audacious rather than safe. The worst that can happen is that it doesn’t work. It’s not like the sunk cost of painting a room; if you don’t like the palette, you simply change the hex codes. The same is true of fonts, icons, and other building blocks of the web.

As an example, a couple of friends and I listen to and review albums occasionally as a hobby. On the website, the palette of each review page reflects the album artwork:

I couldn’t tell you if reviews ‘perform’ better or worse than if they had a grayscale palette, because I don’t care. I think it’s a lot nicer to look at. And for those wondering, yes, I have tried to make every page meet AA Web Accessibility standards. Vibrant and accessible aren’t mutually exclusive.

Another great way of bringing vibrancy to the web is a degree of randomisation. Bruno Simon of Three.js Journey and awesome portfolio fame weaves random generation into a lot of his projects, and the results are gorgeous. What’s more, they feel familiar, natural, because life is full of wildcards.

This needn’t mean fancy 3D models. You could lightly rotate images to create a more informal, photo album mood, or chuck in the occasional random link in a list of recommended articles, just to shake things up.
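A couple of lines of CSS are enough for that photo-album effect, for instance. This is only a sketch, and the .photo-album class name is made up for the example:

.photo-album img {
  transform: rotate(-1.5deg);
}

.photo-album img:nth-child(even) {
  transform: rotate(2deg);
}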

In a lot of ways, it boils down to an attitude of just trying stuff out. Make your own font, give the site a sepia filter, and add that easter egg you keep thinking about. Just because someone, somewhere has already done it doesn’t mean you can’t do it your own way. And who knows, maybe your way stumbles onto someplace wholly new.

I’m wary of being too prescriptive. I don’t have the keys to a colourful web. No one person does. A vibrant community is the sum total of its people. What keeps things interesting is individuals trying wacky ideas and putting them out there. Expression for expression’s sake. Experimentation for experimentation’s sake. Tinkering for tinkering’s sake.

As users, there’s also plenty of room to be adventurous and try out open source alternatives to the software monopolies that shape so much of today’s Web. Being active in the communities that shape those tools helps to sustain a more open, collaborative digital world.

Although there are lessons to be taken from it, we won’t get a more colourful web by idealising the past or pining to get back to the ‘90s. Nor is there any point in resisting new technologies. AI is here; the choice is whether we use it or it uses us. We must have the courage to carry forward what still holds true, drop what doesn’t, and explore new ideas with a spirit of play.

I do think there’s a broader discussion to be had about the extent to which A/B tests, bottom lines, and focus groups seem to dictate much of how the modern web looks and feels. With sites being squeezed tighter and tighter by dwindling advertising revenues, and AI answers muscling in on search traffic, the corporate entities behind larger websites can’t justify doing anything other than what is safe and proven, for fear of shrinking their slice of the pie.

Lest we forget, though, most of the web isn’t beholden to those types of pressure. From pet projects to wikis to forums to community news outlets to all manner of other things, there are countless reasons for websites to exist, and they needn’t take design cues from the handful of sites slugging it out at the top.

Connected with this is the dire need for digital literacy (PDF) — ‘the confident and critical use of a full range of digital technologies for information, communication and basic problem-solving in all aspects of life.’ For as long as using third-party platforms is a necessity rather than a choice, the needle’s only going to move so much.

There’s a reason why Minecraft is the world’s best-selling game. People are creative. When given the tools — and the opportunity — that creativity will manifest in weird and wonderful ways. That game is a lot of things, but gray ain’t one of them.

The web has all of that flexibility and more. It is a manifestation of imagination. Imagination trends towards colour, not grayness. It doesn’t always feel like it, but where the internet goes is decided by its citizens. The internet is ours. If we want to, we can make it technicolor.

Categories: American

Smashing Animations Part 5: Building Adaptive SVGs With `<symbol>`, `<use>`, And CSS Media Queries

Smashingmagazine - Mon, 10/06/2025 - 15:00

I’ve written quite a lot recently about how I prepare and optimise SVG code to use as static graphics or in animations. I love working with SVG, but there’s always been something about them that bugs me.

To illustrate how I build adaptive SVGs, I’ve selected an episode of The Quick Draw McGraw Show called “Bow Wow Bandit,” first broadcast in 1959.

In it, Quick Draw McGraw enlists his bloodhound Snuffles to rescue his sidekick Baba Looey. Like most Hanna-Barbera title cards of the period, the artwork was made by Lawrence (Art) Goble.

Let’s say I’ve designed an SVG scene like that one, based on Bow Wow Bandit, which has a 16:9 aspect ratio and a viewBox size of 1920×1080. This SVG scales up and down (the clue’s in the name), so it looks sharp when it’s gigantic and when it’s minute.

But on small screens, the 16:9 aspect ratio (live demo) might not be the best format, and the image loses its impact. Sometimes, a portrait orientation, like 3:4, would suit the screen size better.

But herein lies the problem: it’s not easy to reposition internal elements for different screen sizes using viewBox alone. In SVG, internal element positions are locked to the coordinate system defined by the original viewBox, so you can’t easily change their layout between, say, desktop and mobile. That matters because animations and interactivity often rely on element positions, which break when the viewBox changes.
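To illustrate the point with a contrived example (not from the original artwork): the same circle markup behaves very differently under two viewBox values, because its coordinates are fixed in user units rather than relative to the rendered size:

<!-- The circle sits at (1700, 100) in user units and renders near the top right. -->
<svg viewBox="0 0 1920 1080">
  <circle cx="1700" cy="100" r="80" />
</svg>

<!-- With a portrait viewBox, the same circle now falls outside the 1080-unit-wide canvas; changing the viewBox crops the scene rather than re-laying it out. -->
<svg viewBox="0 0 1080 1440">
  <circle cx="1700" cy="100" r="80" />
</svg>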

My challenge was to serve a 1080×1440 version of Bow Wow Bandit to smaller screens and a different one to larger ones. I wanted the position and size of internal elements — like Quick Draw McGraw and his dawg Snuffles — to change to best fit these two layouts. To solve this, I experimented with several alternatives.

Note: Why not just use the <picture> element with external SVGs? The <picture> element is brilliant for responsive images, but it only works with raster formats (like JPEG or WebP) and external SVG files treated as images. That means you can’t animate or style their internal elements using CSS.
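For comparison, a <picture>-based version would look roughly like this (the file names are hypothetical). Because each SVG arrives as an opaque image resource, its internal elements never become part of the page’s DOM, so there is nothing for CSS selectors or animations to reach:

<picture>
  <source media="(min-width: 64rem)" srcset="bow-wow-bandit-large.svg" />
  <img src="bow-wow-bandit-small.svg" alt="Bow Wow Bandit title card" />
</picture>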

Showing And Hiding SVG

The most obvious choice was to include two different SVGs in my markup, one for small screens, the other for larger ones, then show or hide them using CSS and Media Queries:

<svg id="svg-small" viewBox="0 0 1080 1440"> <!-- ... --> </svg> <svg id="svg-large" viewBox="0 0 1920 1080"> <!--... --> </svg> #svg-small { display: block; } #svg-large { display: none; } @media (min-width: 64rem) { #svg-small { display: none; } #svg-mobile { display: block; } }

But using this method, both SVG versions are loaded, which, when the graphics are complex, means downloading lots and lots and lots of unnecessary code.

Replacing SVGs Using JavaScript

I thought about using JavaScript to swap in the larger SVG at a specified breakpoint:

if (window.matchMedia('(min-width: 64rem)').matches) {
  svgContainer.innerHTML = desktopSVG;
} else {
  svgContainer.innerHTML = mobileSVG;
}

Leaving aside the fact that JavaScript would now be critical to how the design is displayed, both SVGs would usually be loaded anyway, which adds DOM complexity and unnecessary weight. Plus, maintenance becomes a problem as there are now two versions of the artwork to maintain, doubling the time it would take to update something as small as the shape of Quick Draw’s tail.

The Solution: One SVG Symbol Library And Multiple Uses

Remember, my goal is to:

  • Serve one version of Bow Wow Bandit to smaller screens,
  • Serve a different version to larger screens,
  • Define my artwork just once (DRY), and
  • Be able to resize and reposition elements.

I don’t read about it enough, but the <symbol> element lets you define reusable SVG elements that can be hidden and reused to improve maintainability and reduce code bloat. They’re like components for SVG: create once and use wherever you need them:

<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="quick-draw-body" viewBox="0 0 620 700">
    <g class="quick-draw-body">[…]</g>
  </symbol>
  <!-- ... -->
</svg>

<use href="#quick-draw-body" />

A <symbol> is like storing a character in a library. I can reference it as many times as I need, to keep my code consistent and lightweight. Using <use> elements, I can insert the same symbol multiple times, at different positions or sizes, and even in different SVGs.

Each <symbol> must have its own viewBox, which defines its internal coordinate system. That means paying special attention to how SVG elements are exported from apps like Sketch.

Exporting For Individual Viewboxes

I wrote before about how I export elements in layers to make working with them easier. That process is a little different when creating symbols.

Ordinarily, I would export all my elements using the same viewBox size. But when I’m creating a symbol, I need it to have its own specific viewBox.

So I export each element as an individually sized SVG, which gives me the dimensions I need to convert its content into a symbol. Let’s take the SVG of Quick Draw McGraw’s hat, which has a viewBox size of 294×182:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 294 182"> <!-- ... --> </svg>

I swap the SVG tags for <symbol> and add its artwork to my SVG library:

<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="quick-draw-hat" viewBox="0 0 294 182">
    <g class="quick-draw-hat">[…]</g>
  </symbol>
</svg>

Then, I repeat the process for all the remaining elements in my artwork. Now, if I ever need to update any of my symbols, the changes will automatically apply everywhere they’re used.

Using A <symbol> In Multiple SVGs

I wanted my elements to appear in both versions of Bow Wow Bandit, one arrangement for smaller screens and an alternative arrangement for larger ones. So, I create both SVGs:

<svg class="svg-small" viewBox="0 0 1080 1440"> <!-- ... --> </svg> <svg class="svg-large" viewBox="0 0 1920 1080"> <!-- ... --> </svg>

…and insert links to my symbols in both:

<svg class="svg-small" viewBox="0 0 1080 1440"> <use href="#quick-draw-hat" /> </svg> <svg class="svg-large" viewBox="0 0 1920 1080"> <use href="#quick-draw-hat" /> </svg> Positioning Symbols

Once I’ve placed symbols into my layout using <use>, my next step is to position them, which is especially important if I want alternative layouts for different screen sizes. Symbols behave like <g> groups, so I can scale and move them using attributes like width, height, and transform:

<svg class="svg-small" viewBox="0 0 1080 1440"> <use href="#quick-draw-hat" width="294" height="182" transform="translate(-30,610)"/> </svg> <svg class="svg-large" viewBox="0 0 1920 1080"> <use href="#quick-draw-hat" width="294" height="182" transform="translate(350,270)"/> </svg>

I can place each <use> element independently using transform. This is powerful because rather than repositioning elements inside my SVGs, I move the <use> references. My internal layout stays clean, and the file size remains small because I’m not duplicating artwork. A browser only loads it once, which reduces bandwidth and speeds up page rendering. And because I’m always referencing the same symbol, their appearance stays consistent, whatever the screen size.

Animating <use> Elements

Here’s where things got tricky. I wanted to animate parts of my characters — like Quick Draw’s hat tilting and his legs kicking. But when I added CSS animations targeting internal elements inside a <symbol>, nothing happened.

Tip: You can animate the <use> element itself, but not elements inside the <symbol>. If you want individual parts to move, make them their own symbols and animate each <use>.

Turns out, you can’t style or animate the contents of a <symbol> from outside, because <use> creates shadow DOM clones that aren’t easily targetable. So, I had to get sneaky. Inside each <symbol> in my library SVG, I added a <g> element around the part I wanted to animate:

<symbol id="quick-draw-hat" viewBox="0 0 294 182"> <g class="quick-draw-hat"> <!-- ... --> </g> </symbol>

…and animated it using an attribute selector, targeting the href attribute of the <use> element:

use[href="#quick-draw-hat"] {
  animation-delay: 0.5s;
  animation-direction: alternate;
  animation-duration: 1s;
  animation-iteration-count: infinite;
  animation-name: hat-rock;
  animation-timing-function: ease-in-out;
  transform-origin: center bottom;
}

@keyframes hat-rock {
  from { transform: rotate(-2deg); }
  to { transform: rotate(2deg); }
}

Media Queries For Display Control

Once I’ve created my two visible SVGs — one for small screens and one for larger ones — the final step is deciding which version to show at which screen size. I use CSS Media Queries to hide one SVG and show the other. I start by showing the small-screen SVG by default:

.svg-small { display: block; }
.svg-large { display: none; }

Then I use a min-width media query to switch to the large-screen SVG at 64rem and above:

@media (min-width: 64rem) {
  .svg-small { display: none; }
  .svg-large { display: block; }
}

This ensures there’s only ever one SVG visible at a time, keeping my layout simple and the DOM free from unnecessary clutter. And because both visible SVGs reference the same hidden <symbol> library, the browser only downloads the artwork once, regardless of how many <use> elements appear across the two layouts.

Wrapping Up

By combining <symbol>, <use>, CSS Media Queries, and specific transforms, I can build adaptive SVGs that reposition their elements without duplicating content, loading extra assets, or relying on JavaScript. I need to define each graphic only once in a hidden symbol library. Then I can reuse those graphics, as needed, inside several visible SVGs. With CSS doing the layout switching, the result is fast and flexible.

It’s a reminder that some of the most powerful techniques on the web don’t need big frameworks or complex tooling — just a bit of SVG know-how and a clever use of the basics.

Categories: American

Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)

Smashingmagazine - Fri, 10/03/2025 - 12:00

In Part 1 of this series, we explored the “lopsided horse” problem born from mockup-centric design and demonstrated how the seductive promise of vibe coding often leads to structural flaws. The main question remains:

How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap?

In other words, we need a way to build prototypes that are both fast to create and founded on a clear, unambiguous blueprint.

The answer is a more disciplined process I call Intent Prototyping (kudos to Marco Kotrotsos, who coined Intent-Oriented Programming). This method embraces the power of AI-assisted coding but rejects ambiguity, putting the designer’s explicit intent at the very center of the process. It receives a holistic expression of intent (sketches for screen layouts, conceptual model description, boxes-and-arrows for user flows) and uses it to generate a live, testable prototype.

This method solves the concerns we’ve discussed in Part 1 in the best way possible:

  • Unlike static mockups, the prototype is fully interactive and can be easily populated with a large amount of realistic data. This lets us test the system’s underlying logic as well as its surface.
  • Unlike a vibe-coded prototype, it is built from a stable, unambiguous specification. This prevents the conceptual model failures and design debt that happen when things are unclear. The engineering team doesn’t need to reverse-engineer a black box or become “code archaeologists” to guess at the designer’s vision, as they receive not only a live prototype but also a clearly documented design intent behind it.

This combination makes the method especially suited for designing complex enterprise applications. It allows us to test the system’s most critical point of failure, its underlying structure, at a speed and flexibility that was previously impossible. Furthermore, the process is built for iteration. You can explore as many directions as you want simply by changing the intent and evolving the design based on what you learn from user testing.

My Workflow

To illustrate this process in action, let’s walk through a case study. It’s the very same example I’ve used to illustrate the vibe coding trap: a simple tool to track tests to validate product ideas. You can find the complete project, including all the source code and documentation files discussed below, in this GitHub repository.

Step 1: Expressing An Intent

Imagine we’ve already done proper research, and having mused on the defined problem, I begin to form a vague idea of what the solution might look like. I need to capture this idea immediately, so I quickly sketch it out:

In this example, I used Excalidraw, but the tool doesn’t really matter. Note that we deliberately keep it rough, as visual details are not something we need to focus on at this stage. And we are not going to be stuck here: we want to make a leap from this initial sketch directly to a live prototype that we can put in front of potential users. Polishing those sketches would not bring us any closer to achieving our goal.

What we need to move forward is to add just enough detail to those sketches for them to serve as sufficient input for a junior frontend developer (or, in our case, an AI assistant). This requires explaining the following:

  • Navigational paths (clicking here takes you to).
  • Interaction details that can’t be shown in a static picture (e.g., non-scrollable areas, adaptive layout, drag-and-drop behavior).
  • What parts might make sense to build as reusable components.
  • Which components from the design system (I’m using Ant Design Library) should be used.
  • Any other comments that help understand how this thing should work (while sketches illustrate how it should look).

Having added all those details, we end up with an annotated sketch like this:

As you see, this sketch covers both the Visualization and Flow aspects. You may ask, what about the Conceptual Model? Without that part, the expression of our intent will not be complete. One way would be to add it somewhere in the margins of the sketch (for example, as a UML Class Diagram), and I would do so in the case of a more complex application, where the model cannot be simply derived from the UI. But in our case, we can save effort and ask an LLM to generate a comprehensive description of the conceptual model based on the sketch.

For tasks of this sort, the LLM of my choice is Gemini 2.5 Pro. What matters is that it is a multimodal model that can accept not only text but also images as input (GPT-5 and Claude-4 also meet that criterion). I use Google AI Studio, as it gives me enough control and visibility into what’s happening:

Note: All the prompts that I use here and below can be found in the Appendices. The prompts are not custom-tailored to any particular project; they are supposed to be reused as they are.

As a result, Gemini gives us a description and the following diagram:

The diagram might look technical, but I believe that a clear understanding of all objects, their attributes, and relationships between them is key to good design. That’s why I consider the Conceptual Model to be an essential part of expressing intent, along with the Flow and Visualization.
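To give a sense of the notation, here is a deliberately simplified, hypothetical Mermaid class diagram for a test-tracking tool like this one; the actual model generated into Model.md is richer and is derived from the sketch:

classDiagram
  class Idea {
    +string id
    +string name
    +string createdAt
    +string updatedAt
  }
  class Test {
    +string id
    +string hypothesis
    +string status
    +string createdAt
    +string updatedAt
  }
  Idea "1" --> "0..*" Test : isValidatedBy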

As a result of this step, our intent is fully expressed in two files: Sketch.png and Model.md. This will be our durable source of truth.

Step 2: Preparing A Spec And A Plan

The purpose of this step is to create a comprehensive technical specification and a step-by-step plan. Most of the work here is done by AI; you just need to keep an eye on it.

I separate the Data Access Layer and the UI layer, and create specifications for them using two different prompts (see Appendices 2 and 3). The output of the first prompt (the Data Access Layer spec) serves as an input for the second one. Note that, as an additional input, we give the guidelines tailored for prototyping needs (see Appendices 8, 9, and 10). They are not specific to this project. The technical approach encoded in those guidelines is out of the scope of this article.

As a result, Gemini provides us with content for DAL.md and UI.md. Although the result is usually reliable, you might want to scrutinize the output. You don’t need to be a real programmer to make sense of it, but some level of programming literacy helps. However, even if you don’t have such skills, don’t get discouraged. The good news is that if you don’t understand something, you always know who to ask. Do it in Google AI Studio before refreshing the context window. If you believe you’ve spotted a problem, let Gemini know, and it will either fix it or explain why the suggested approach is actually better.

It’s important to remember that by their nature, LLMs are not deterministic and, to put it simply, can be forgetful about small details, especially when it comes to details in sketches. Fortunately, you don’t have to be an expert to notice that the “Delete” button, which is in the upper right corner of the sketch, is not mentioned in the spec.

Don’t get me wrong: Gemini does a stellar job most of the time, but there are still times when it slips up. Just let it know about the problems you’ve spotted, and everything will be fixed.

Once we have Sketch.png, Model.md, DAL.md, and UI.md, and we have reviewed the specs, we can grab a coffee. We deserve it: our technical design documentation is complete. It will serve as a stable foundation for building the actual thing without deviating from our original intent, ensuring that all components fit together and all layers are stacked correctly.

One last thing we can do before moving on to the next steps is to prepare a step-by-step plan. We split that plan into two parts: one for the Data Access Layer and another for the UI. You can find prompts I use to create such a plan in Appendices 4 and 5.

Step 3: Executing The Plan

To start building the actual thing, we need to switch to another category of AI tools. Up until this point, we have relied on Generative AI. It excels at creating new content (in our case, specifications and plans) based on a single prompt. I’m using Google Gemini 2.5 Pro in Google AI Studio, but other similar tools may also fit such one-off tasks: ChatGPT, Claude, Grok, and DeepSeek.

However, at this step, this wouldn’t be enough. Building a prototype based on specs and according to a plan requires an AI that can read context from multiple files, execute a sequence of tasks, and maintain coherence. A simple generative AI can’t do this. It would be like asking a person to build a house by only ever showing them a single brick. What we need is an agentic AI that can be given the full house blueprint and a project plan, and then get to work building the foundation, framing the walls, and adding the roof in the correct sequence.

My coding agent of choice is Google Gemini CLI, simply because Gemini 2.5 Pro serves me well, and I don’t think we need any middleman like Cursor or Windsurf (which would use Claude, Gemini, or GPT under the hood anyway). If I used Claude, my choice would be Claude Code, but since I’m sticking with Gemini, Gemini CLI it is. But if you prefer Cursor or Windsurf, I believe you can apply the same process with your favourite tool.

Before tasking the agent, we need to create a basic template for our React application. I won’t go into this here. You can find plenty of tutorials on how to scaffold an empty React project using Vite.
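For reference, one common way to scaffold it is Vite’s create command (the project name below is just a placeholder):

npm create vite@latest my-prototype -- --template react-ts
cd my-prototype
npm install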

Then we put all our files into that project:

Once the basic template with all our files is ready, we open Terminal, go to the folder where our project resides, and type “gemini”:

And we send the prompt to build the Data Access Layer (see Appendix 6). That prompt implies step-by-step execution, so upon completion of each step, I send the following:

Thank you! Now, please move to the next task. Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec. After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.

As the last task in the plan, the agent builds a special page where we can test all the capabilities of our Data Access Layer, so that we can manually test it. It may look like this:

It doesn’t look fancy, to say the least, but it allows us to ensure that the Data Access Layer works correctly before we proceed with building the final UI.
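To make this stage less abstract, here is a minimal sketch of the kind of store the DAL spec describes, following the Zustand guidelines (persisted state, a Record for O(1) access). The Idea entity and its fields are hypothetical stand-ins rather than the project’s actual code:

import { create } from 'zustand';
import { persist } from 'zustand/middleware';

// Hypothetical entity for illustration only.
interface Idea {
  id: string;
  name: string;
  createdAt: string;
  updatedAt: string;
}

interface IdeaState {
  ideas: Record<string, Idea>;
  addIdea: (idea: Idea) => void;
  updateIdea: (id: string, changes: Partial<Idea>) => void;
  removeIdea: (id: string) => void;
}

export const useIdeaStore = create<IdeaState>()(
  persist(
    (set) => ({
      ideas: {},
      // Insert or overwrite an entity by id.
      addIdea: (idea) =>
        set((state) => ({ ideas: { ...state.ideas, [idea.id]: idea } })),
      // Merge partial changes and refresh the updatedAt timestamp.
      updateIdea: (id, changes) =>
        set((state) => ({
          ideas: {
            ...state.ideas,
            [id]: { ...state.ideas[id], ...changes, updatedAt: new Date().toISOString() },
          },
        })),
      // Drop the entity with the given id.
      removeIdea: (id) =>
        set((state) => {
          const rest = { ...state.ideas };
          delete rest[id];
          return { ideas: rest };
        }),
    }),
    { name: 'idea-storage' }
  )
);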

And finally, we clear the Gemini CLI context window to give it more headspace and send the prompt to build the UI (see Appendix 7). This prompt also implies step-by-step execution. Upon completion of each step, we test how it works and how it looks, following the “Manual Testing Plan” from UI-plan.md. I have to say that despite the fact that the sketch has been uploaded to the model context and, in general, Gemini tries to follow it, attention to visual detail is not one of its strengths (yet). Usually, a few additional nudges are needed at each step to improve the look and feel:

Once I’m happy with the result of a step, I ask Gemini to move on:

Thank you! Now, please move to the next task. Make sure you build the UI according to the sketch; this is very important. Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch.
After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.

Before long, the result looks like this, and in every detail it works exactly as we intended:

The prototype is up and running and looking nice. Does that mean we are done with our work? Surely not; the most fascinating part is just beginning.

Step 4: Learning And Iterating

It’s time to put the prototype in front of potential users and learn more about whether this solution relieves their pain or not.

And as soon as we learn something new, we iterate. We adjust or extend the sketches and the conceptual model, based on that new input, we update the specifications, create plans to make changes according to the new specifications, and execute those plans. In other words, for every iteration, we repeat the steps I’ve just walked you through.

Is This Workflow Too Heavy?

This four-step workflow may create an impression of a somewhat heavy process that requires too much thinking upfront and doesn’t really facilitate creativity. But before jumping to that conclusion, consider the following:

  • In practice, only the first step requires real effort, along with the learning in the last step. AI does most of the work in between; you just need to keep an eye on it.
  • Individual iterations don’t need to be big. You can start with a Walking Skeleton: the bare minimum implementation of the thing you have in mind, and add more substance in subsequent iterations. You are welcome to change your mind about the overall direction in between iterations.
  • And last but not least, maybe the idea of “think before you do” is not something you need to run away from. A clear and unambiguous statement of intent can prevent many unnecessary mistakes and save a lot of effort down the road.
Intent Prototyping Vs. Other Methods

There is no method that fits all situations, and Intent Prototyping is not an exception. Like any specialized tool, it has a specific purpose. The most effective teams are not those who master a single method, but those who understand which approach to use to mitigate the most significant risk at each stage. The table below gives you a way to make this choice clearer. It puts Intent Prototyping next to other common methods and tools and explains each one in terms of the primary goal it helps achieve and the specific risks it is best suited to mitigate.

Method/Tool: Intent Prototyping
Goal: To rapidly iterate on the fundamental architecture of a data-heavy application with a complex conceptual model, sophisticated business logic, and non-linear user flows.
Risks it is best suited to mitigate: Building a system with a flawed or incoherent conceptual model, leading to critical bugs and costly refactoring.
Examples:
  • A CRM (Customer Relationship Management system).
  • A Resource Management Tool.
  • A No-Code Integration Platform (admin’s UI).
Why: It enforces conceptual clarity. This not only de-risks the core structure but also produces a clear, documented blueprint that serves as a superior specification for the engineering handoff.

Method/Tool: Vibe Coding (Conversational)
Goal: To rapidly explore interactive ideas through improvisation.
Risks it is best suited to mitigate: Losing momentum because of analysis paralysis.
Examples:
  • An interactive data table with live sorting/filtering.
  • A novel navigation concept.
  • A proof-of-concept for a single, complex component.
Why: It has the smallest loop between an idea conveyed in natural language and an interactive outcome.

Method/Tool: Axure
Goal: To test complicated conditional logic within a specific user journey, without having to worry about how the whole system works.
Risks it is best suited to mitigate: Designing flows that break when users don’t follow the “happy path.”
Examples:
  • A multi-step e-commerce checkout.
  • A software configuration wizard.
  • A dynamic form with dependent fields.
Why: It’s made to create complex if-then logic and manage variables visually. This lets you test complicated paths and edge cases in a user journey without writing any code.

Method/Tool: Figma
Goal: To make sure that the user interface looks good, aligns with the brand, and has a clear information architecture.
Risks it is best suited to mitigate: Making a product that looks bad, doesn’t fit with the brand, or has a layout that is hard to understand.
Examples:
  • A marketing landing page.
  • A user onboarding flow.
  • Presenting a new visual identity.
Why: It excels at high-fidelity visual design and provides simple, fast tools for linking static screens.

Method/Tool: ProtoPie, Framer
Goal: To make high-fidelity micro-interactions feel just right.
Risks it is best suited to mitigate: Shipping an application that feels cumbersome and unpleasant to use because of poorly executed interactions.
Examples:
  • A custom pull-to-refresh animation.
  • A fluid drag-and-drop interface.
  • An animated chart or data visualization.
Why: These tools let you manipulate animation timelines, physics, and device sensor inputs in great detail. Designers can carefully work on and test the small things that make an interface feel really polished and fun to use.

Method/Tool: Low-code / No-code Tools (e.g., Bubble, Retool)
Goal: To create a working, data-driven app as quickly as possible.
Risks it is best suited to mitigate: The application will never be built because traditional development is too expensive.
Examples:
  • An internal inventory tracker.
  • A customer support dashboard.
  • A simple directory website.
Why: They put a UI builder, a database, and hosting all in one place. The goal is not merely to make a prototype of an idea, but to make and release an actual, working product. This is the last step for many internal tools or MVPs.


The key takeaway is that each method is a specialized tool for mitigating a specific type of risk. For example, Figma de-risks the visual presentation. ProtoPie de-risks the feel of an interaction. Intent Prototyping is in a unique position to tackle the most foundational risk in complex applications: building on a flawed or incoherent conceptual model.

Bringing It All Together

The era of the “lopsided horse” design, sleek on the surface but structurally unsound, is a direct result of the trade-off between fidelity and flexibility. This trade-off has led to a process filled with redundant effort and misplaced focus. Intent Prototyping, powered by modern AI, eliminates that conflict. It’s not just a shortcut to building faster — it’s a fundamental shift in how we design. By putting a clear, unambiguous intent at the heart of the process, it lets us get rid of the redundant work and focus on architecting a sound and robust system.

There are three major benefits to this renewed focus. First, by going straight to live, interactive prototypes, we shift our validation efforts from the surface to the deep, testing the system’s actual logic with users from day one. Second, the very act of documenting the design intent makes us clear about our ideas, ensuring that we fully understand the system’s underlying logic. Finally, this documented intent becomes a durable source of truth, eliminating the ambiguous handoffs and the redundant, error-prone work of having engineers reverse-engineer a designer’s vision from a black box.

Ultimately, Intent Prototyping changes the object of our work. It allows us to move beyond creating pictures of a product and empowers us to become architects of blueprints for a system. With the help of AI, we can finally make the live prototype the primary canvas for ideation, not just a high-effort afterthought.

Appendices

You can find the full Intent Prototyping Starter Kit, which includes all those prompts and guidelines, as well as the example from this article and a minimal boilerplate project, in this GitHub repository.

Appendix 1: Sketch to UML Class Diagram

You are an expert Senior Software Architect specializing in Domain-Driven Design. You are tasked with defining a conceptual model for an app based on information from a UI sketch. ## Workflow Follow these steps precisely: **Step 1:** Analyze the sketch carefully. There should be no ambiguity about what we are building. **Step 2:** Generate the conceptual model description in the Mermaid format using a UML class diagram. ## Ground Rules - Every entity must have the following attributes: - id (string) - createdAt (string, ISO 8601 format) - updatedAt (string, ISO 8601 format) - Include all attributes shown in the UI: If a piece of data is visually represented as a field for an entity, include it in the model, even if it's calculated from other attributes. - Do not add any speculative entities, attributes, or relationships ("just in case"). The model should serve the current sketch's requirements only. - Pay special attention to cardinality definitions (e.g., if a relationship is optional on both sides, it cannot be "1" -- "0..*", it must be "0..1" -- "0..*"). - Use only valid syntax in the Mermaid diagram. - Do not include enumerations in the Mermaid diagram. - Add comments explaining the purpose of every entity, attribute, and relationship, and their expected behavior (not as a part of the diagram, in the Markdown file). ## Naming Conventions - Names should reveal intent and purpose. - Use PascalCase for entity names. - Use camelCase for attributes and relationships. - Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError). ## Final Instructions - **No Assumptions: Base every detail on visual evidence in the sketch, not on common design patterns. - **Double-Check: After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. - **Do not add redundant empty lines between items.** Your final output should be the complete, raw markdown content for Model.md.

Appendix 2: Sketch to DAL Spec

You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a comprehensive technical specification for the development team in a structured markdown document, based on a UI sketch and a conceptual model description. ## Workflow Follow these steps precisely: **Step 1:** Analyze the documentation carefully: - Model.md: the conceptual model - Sketch.png: the UI sketch There should be no ambiguity about what we are building. **Step 2:** Check out the guidelines: - TS-guidelines.md: TypeScript Best Practices - React-guidelines.md: React Best Practices - Zustand-guidelines.md: Zustand Best Practices **Step 3:** Create a Markdown specification for the stores and entity-specific hook that implements all the logic and provides all required operations. --- ## Markdown Output Structure Use this template for the entire document. markdown &#35; Data Access Layer Specification This document outlines the specification for the data access layer of the application, following the principles defined in `docs/guidelines/Zustand-guidelines.md`. &#35;&#35; 1. Type Definitions Location: `src/types/entities.ts` &#35;&#35;&#35; 1.1. `BaseEntity` A shared interface that all entities should extend. [TypeScript interface definition] &#35;&#35;&#35; 1.2. `[Entity Name]` The interface for the [Entity Name] entity. [TypeScript interface definition] &#35;&#35; 2. Zustand Stores &#35;&#35;&#35; 2.1. Store for `[Entity Name]` &#42;&#42;Location:&#42;&#42; `src/stores/[Entity Name (plural)].ts` The Zustand store will manage the state of all [Entity Name] items. &#42;&#42;Store State (`[Entity Name]State`):&#42;&#42; [TypeScript interface definition] &#42;&#42;Store Implementation (`use[Entity Name]Store`):&#42;&#42; - The store will be created using `create&lt;[Entity Name]State&gt;()(...)`. - It will use the `persist` middleware from `zustand/middleware` to save state to `localStorage`. The persistence key will be `[entity-storage-key]`. - `[Entity Name (plural, camelCase)]` will be a dictionary (`Record&lt;string, [Entity]&gt;`) for O(1) access. &#42;&#42;Actions:&#42;&#42; - &#42;&#42;`add[Entity Name]`&#42;&#42;: [Define the operation behavior based on entity requirements] - &#42;&#42;`update[Entity Name]`&#42;&#42;: [Define the operation behavior based on entity requirements] - &#42;&#42;`remove[Entity Name]`&#42;&#42;: [Define the operation behavior based on entity requirements] - &#42;&#42;`doSomethingElseWith[Entity Name]`&#42;&#42;: [Define the operation behavior based on entity requirements] &#35;&#35; 3. Custom Hooks &#35;&#35;&#35; 3.1. `use[Entity Name (plural)]` &#42;&#42;Location:&#42;&#42; `src/hooks/use[Entity Name (plural)].ts` The hook will be the primary interface for UI components to interact with [Entity Name] data. &#42;&#42;Hook Return Value:&#42;&#42; [TypeScript interface definition] &#42;&#42;Hook Implementation:&#42;&#42; [List all properties and methods returned by this hook, and briefly explain the logic behind them, including data transformations, memoization. Do not write the actual code here.] --- ## Final Instructions - **No Assumptions:** Base every detail in the specification on the conceptual model or visual evidence in the sketch, not on common design patterns. - **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- **Do not add redundant empty lines between items.** Your final output should be the complete, raw markdown content for DAL.md.

Appendix 3: Sketch to UI Spec

You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a comprehensive technical specification by translating a UI sketch into a structured markdown document for the development team. ## Workflow Follow these steps precisely: **Step 1:** Analyze the documentation carefully: - Sketch.png: the UI sketch - Note that red lines, red arrows, and red text within the sketch are annotations for you and should not be part of the final UI design. They provide hints and clarification. Never translate them to UI elements directly. - Model.md: the conceptual model - DAL.md: the Data Access Layer spec There should be no ambiguity about what we are building. **Step 2:** Check out the guidelines: - TS-guidelines.md: TypeScript Best Practices - React-guidelines.md: React Best Practices **Step 3:** Generate the complete markdown content for a new file, UI.md. --- ## Markdown Output Structure Use this template for the entire document. markdown &#35; UI Layer Specification This document specifies the UI layer of the application, breaking it down into pages and reusable components based on the provided sketches. All components will adhere to Ant Design's principles and utilize the data access patterns defined in `docs/guidelines/Zustand-guidelines.md`. &#35;&#35; 1. High-Level Structure The application is a single-page application (SPA). It will be composed of a main layout, one primary page, and several reusable components. &#35;&#35;&#35; 1.1. `App` Component The root component that sets up routing and global providers. - &#42;&#42;Location&#42;&#42;: `src/App.tsx` - &#42;&#42;Purpose&#42;&#42;: To provide global context, including Ant Design's `ConfigProvider` and `App` contexts for message notifications, and to render the main page. - &#42;&#42;Composition&#42;&#42;: - Wraps the application with `ConfigProvider` and `App as AntApp` from 'antd' to enable global message notifications as per `simple-ice/antd-messages.mdc`. - Renders `[Page Name]`. &#35;&#35; 2. Pages &#35;&#35;&#35; 2.1. `[Page Name]` - &#42;&#42;Location:&#42;&#42; `src/pages/PageName.tsx` - &#42;&#42;Purpose:&#42;&#42; [Briefly describe the main goal and function of this page] - &#42;&#42;Data Access:&#42;&#42; [List the specific hooks and functions this component uses to fetch or manage its data] - &#42;&#42;Internal State:&#42;&#42; [Describe any state managed internally by this page using `useState`] - &#42;&#42;Composition:&#42;&#42; [Briefly describe the content of this page] - &#42;&#42;User Interactions:&#42;&#42; [Describe how the user interacts with this page] - &#42;&#42;Logic:&#42;&#42; [If applicable, provide additional comments on how this page should work] &#35;&#35; 3. Components &#35;&#35;&#35; 3.1. `[Component Name]` - &#42;&#42;Location:&#42;&#42; `src/components/ComponentName.tsx` - &#42;&#42;Purpose:&#42;&#42; [Explain what this component does and where it's used] - &#42;&#42;Props:&#42;&#42; [TypeScript interface definition for the component's props. Props should be minimal. Avoid prop drilling by using hooks for data access.] 
- &#42;&#42;Data Access:&#42;&#42; [List the specific hooks and functions this component uses to fetch or manage its data] - &#42;&#42;Internal State:&#42;&#42; [Describe any state managed internally by this component using `useState`] - &#42;&#42;Composition:&#42;&#42; [Briefly describe the content of this component] - &#42;&#42;User Interactions:&#42;&#42; [Describe how the user interacts with the component] - &#42;&#42;Logic:&#42;&#42; [If applicable, provide additional comments on how this component should work] --- ## Final Instructions - **No Assumptions:** Base every detail on the visual evidence in the sketch, not on common design patterns. - **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. - **Do not add redundant empty lines between items.** Your final output should be the complete, raw markdown content for UI.md.

Appendix 4: DAL Spec to Plan

You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a plan to build a Data Access Layer for an application based on a spec. ## Workflow Follow these steps precisely: **Step 1:** Analyze the documentation carefully: - DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter. There should be no ambiguity about what we are building. **Step 2:** Check out the guidelines: - TS-guidelines.md: TypeScript Best Practices - React-guidelines.md: React Best Practices - Zustand-guidelines.md: Zustand Best Practices **Step 3:** Create a step-by-step plan to build a Data Access Layer according to the spec. Each task should: - Focus on one concern - Be reasonably small - Have a clear start + end - Contain clearly defined Objectives and Acceptance Criteria The last step of the plan should include creating a page to test all the capabilities of our Data Access Layer, and making it the start page of this application, so that I can manually check if it works properly. I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to review results in between. ## Final Instructions - Note that we are not starting from scratch; the basic template has already been created using Vite. - Do not add redundant empty lines between items. Your final output should be the complete, raw markdown content for DAL-plan.md.

Appendix 5: UI Spec to Plan

You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a plan to build a UI layer for an application based on a spec and a sketch. ## Workflow Follow these steps precisely: **Step 1:** Analyze the documentation carefully: - UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter. - Sketch.png: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible. There should be no ambiguity about what we are building. **Step 2:** Check out the guidelines: - TS-guidelines.md: TypeScript Best Practices - React-guidelines.md: React Best Practices **Step 3:** Create a step-by-step plan to build a UI layer according to the spec and the sketch. Each task must: - Focus on one concern. - Be reasonably small. - Have a clear start + end. - Result in a verifiable increment of the application. Each increment should be manually testable to allow for functional review and approval before proceeding. - Contain clearly defined Objectives, Acceptance Criteria, and Manual Testing Plan. I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to test in between. ## Final Instructions - Note that we are not starting from scratch, the basic template has already been created using Vite, and the Data Access Layer has been built successfully. - For every task, describe how components should be integrated for verification. You must use the provided hooks to connect to the live Zustand store data—do not use mock data (note that the Data Access Layer has been already built successfully). - The Manual Testing Plan should read like a user guide. It must only contain actions a user can perform in the browser and must never reference any code files or programming tasks. - Do not add redundant empty lines between items. Your final output should be the complete, raw markdown content for UI-plan.md.


Appendix 6: DAL Plan to Code

You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with building a Data Access Layer for an application based on a spec. ## Workflow Follow these steps precisely: **Step 1:** Analyze the documentation carefully: - @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter. There should be no ambiguity about what we are building. **Step 2:** Check out the guidelines: - @docs/guidelines/TS-guidelines.md: TypeScript Best Practices - @docs/guidelines/React-guidelines.md: React Best Practices - @docs/guidelines/Zustand-guidelines.md: Zustand Best Practices **Step 3:** Read the plan: - @docs/plans/DAL-plan.md: The step-by-step plan to build the Data Access Layer of the application. **Step 4:** Build a Data Access Layer for this application according to the spec and following the plan. - Complete one task from the plan at a time. - After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so. - Do not do anything else. At this point, we are focused on building the Data Access Layer. ## Final Instructions - Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. - Do not start the development server, I'll do it by myself.

Appendix 7: UI Plan to Code

You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with building a UI layer for an application based on a spec and a sketch. ## Workflow Follow these steps precisely: **Step 1:** Analyze the documentation carefully: - @docs/specs/UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter. - @docs/intent/Sketch.png: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible. - @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. That layer is already ready. Use this spec to understand how to work with it. There should be no ambiguity about what we are building. **Step 2:** Check out the guidelines: - @docs/guidelines/TS-guidelines.md: TypeScript Best Practices - @docs/guidelines/React-guidelines.md: React Best Practices **Step 3:** Read the plan: - @docs/plans/UI-plan.md: The step-by-step plan to build the UI layer of the application. **Step 4:** Build a UI layer for this application according to the spec and the sketch, following the step-by-step plan: - Complete one task from the plan at a time. - Make sure you build the UI according to the sketch; this is very important. - After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so. ## Final Instructions - Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. - Follow Ant Design's default styles and components. - Do not touch the data access layer: it's ready and it's perfect. - Do not start the development server, I'll do it by myself.

Appendix 8: TS-guidelines.md

# Guidelines: TypeScript Best Practices

## Type System & Type Safety
- Use TypeScript for all code and enable strict mode.
- Ensure complete type safety throughout stores, hooks, and component interfaces.
- Prefer interfaces over types for object definitions; use types for unions, intersections, and mapped types.
- Entity interfaces should extend common patterns while maintaining their specific properties.
- Use TypeScript type guards in filtering operations for relationship safety.
- Avoid the 'any' type; prefer 'unknown' when necessary.
- Use generics to create reusable components and functions.
- Utilize TypeScript's features to enforce type safety.
- Use type-only imports (import type { MyType } from './types') when importing types, because verbatimModuleSyntax is enabled.
- Avoid enums; use maps instead.

## Naming Conventions
- Names should reveal intent and purpose.
- Use PascalCase for component names and types/interfaces.
- Prefix interfaces for React props with 'Props' (e.g., ButtonProps).
- Use camelCase for variables and functions.
- Use UPPER_CASE for constants.
- Use lowercase with dashes for directories, and PascalCase for files with components (e.g., components/auth-wizard/AuthForm.tsx).
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
- Favor named exports for components.

## Code Structure & Patterns
- Write concise, technical TypeScript code with accurate examples.
- Use functional and declarative programming patterns; avoid classes.
- Prefer iteration and modularization over code duplication.
- Use the "function" keyword for pure functions.
- Use curly braces for all conditionals for consistency and clarity.
- Structure files appropriately based on their purpose.
- Keep related code together and encapsulate implementation details.

## Performance & Error Handling
- Use immutable and efficient data structures and algorithms.
- Create custom error types for domain-specific errors.
- Use try-catch blocks with typed catch clauses.
- Handle Promise rejections and async errors properly.
- Log errors appropriately and handle edge cases gracefully.

## Project Organization
- Place shared types in a types directory.
- Use barrel exports (index.ts) for organizing exports.
- Structure files and directories based on their purpose.

## Other Rules
- Use comments to explain complex logic or non-obvious decisions.
- Follow the single responsibility principle: each function should do exactly one thing.
- Follow the DRY (Don't Repeat Yourself) principle.
- Do not implement placeholder functions, empty methods, or "just in case" logic. Code should serve the current specification's requirements only.
- Use 2 spaces for indentation (no tabs).
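To make a few of these rules concrete, here is a minimal TypeScript sketch. The Task entity, the TASK_STATUS map, and the TaskBadgeProps interface are names I am assuming for illustration; they are not part of the guidelines themselves.

// Illustrative only: shows type-only imports, a const map instead of an enum,
// a 'Props'-suffixed interface, and a type guard for relationship safety.
import type { ReactNode } from 'react';

// Avoid enums; use a const map plus a derived union type instead.
const TASK_STATUS = {
  open: 'open',
  done: 'done',
} as const;

type TaskStatus = (typeof TASK_STATUS)[keyof typeof TASK_STATUS];

interface Task {
  id: string;
  title: string;
  status: TaskStatus;
  parentId?: string;
}

// Interface for React props, suffixed with 'Props' as in the ButtonProps example.
interface TaskBadgeProps {
  task: Task;
  icon?: ReactNode;
}

// Type guard used in filtering operations for relationship safety.
function hasParent(task: Task): task is Task & { parentId: string } {
  return typeof task.parentId === 'string';
}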

Appendix 9: React-guidelines.md

# Guidelines: React Best Practices

## Component Structure
- Use functional components over class components
- Keep components small and focused
- Extract reusable logic into custom hooks
- Use composition over inheritance
- Implement proper prop types with TypeScript
- Structure React files: exported component, subcomponents, helpers, static content, types
- Use declarative TSX for React components
- Ensure that UI components use custom hooks for data fetching and operations rather than receive data via props, except for simplest components

## React Patterns
- Utilize useState and useEffect hooks for state and side effects
- Use React.memo for performance optimization when needed
- Utilize React.lazy and Suspense for code-splitting
- Implement error boundaries for robust error handling
- Keep styles close to components

## React Performance
- Avoid unnecessary re-renders
- Lazy load components and images when possible
- Implement efficient state management
- Optimize rendering strategies
- Optimize network requests
- Employ memoization techniques (e.g., React.memo, useMemo, useCallback)

## React Project Structure
/src
  - /components - UI components (every component in a separate file)
  - /hooks - public-facing custom hooks (every hook in a separate file)
  - /providers - React context providers (every provider in a separate file)
  - /pages - page components (every page in a separate file)
  - /stores - entity-specific Zustand stores (every store in a separate file)
  - /styles - global styles (if needed)
  - /types - shared TypeScript types and interfaces
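As a small illustration of the component-structure points above (functional component, named export, data via a custom hook rather than props), here is a sketch in TSX. BlogPostList and useBlogPosts are assumed names; the hook is the kind of entity hook described in the Zustand guidelines below, not an existing API.

// Illustrative sketch: a memoized functional component that reads its data
// from a custom hook instead of receiving it via props.
import { memo } from 'react';
import { useBlogPosts } from '../hooks/useBlogPosts'; // hypothetical entity hook

export const BlogPostList = memo(function BlogPostList({ categoryId }: { categoryId?: string }) {
  // The hook owns data access; the component only renders.
  const { blogPosts } = useBlogPosts({ categoryId });

  if (blogPosts.length === 0) {
    return <p>No posts yet.</p>;
  }

  return (
    <ul>
      {blogPosts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
});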

Appendix 10: Zustand-guidelines.md

# Guidelines: Zustand Best Practices

## Core Principles
- **Implement a data layer** for this React application following this specification carefully and to the letter.
- **Complete separation of concerns**: All data operations should be accessible in UI components through simple and clean entity-specific hooks, ensuring state management logic is fully separated from UI logic.
- **Shared state architecture**: Different UI components should work with the same shared state, despite using entity-specific hooks separately.

## Technology Stack
- **State management**: Use Zustand for state management with automatic localStorage persistence via the persist middleware.

## Store Architecture
- **Base entity:** Implement a BaseEntity interface with common properties that all entities extend:

  export interface BaseEntity {
    id: string;
    createdAt: string; // ISO 8601 format
    updatedAt: string; // ISO 8601 format
  }

- **Entity-specific stores**: Create separate Zustand stores for each entity type.
- **Dictionary-based storage**: Use dictionary/map structures (Record<string, Entity>) rather than arrays for O(1) access by ID.
- **Handle relationships**: Implement cross-entity relationships (like cascade deletes) within the stores where appropriate.

## Hook Layer
The hook layer is the exclusive interface between UI components and the Zustand stores. It is designed to be simple, predictable, and follow a consistent pattern across all entities.

### Core Principles
1. **One Hook Per Entity**: There will be a single, comprehensive custom hook for each entity (e.g., useBlogPosts, useCategories). This hook is the sole entry point for all data and operations related to that entity. Separate hooks for single-item access will not be created.
2. **Return reactive data, not getter functions**: To prevent stale data, hooks must return the state itself, not a function that retrieves state. Parameterize hooks to accept filters and return the derived data directly. A component calling a getter function will not update when the underlying data changes.
3. **Expose Dictionaries for O(1) Access**: To provide simple and direct access to data, every hook will return a dictionary (Record<string, Entity>) of the relevant items.

### The Standard Hook Pattern
Every entity hook will follow this implementation pattern:
1. **Subscribe** to the entire dictionary of entities from the corresponding Zustand store. This ensures the hook is reactive to any change in the data.
2. **Filter** the data based on the parameters passed into the hook. This logic will be memoized with useMemo for efficiency. If no parameters are provided, the hook will operate on the entire dataset.
3. **Return a Consistent Shape**: The hook will always return an object containing:
   * A **filtered and sorted array** (e.g., blogPosts) for rendering lists.
   * A **filtered dictionary** (e.g., blogPostsDict) for convenient O(1) lookup within the component.
   * All necessary **action functions** (add, update, remove) and **relationship operations**.
   * All necessary **helper functions** and **derived data objects**. Helper functions are suitable for pure, stateless logic (e.g., calculators). Derived data objects are memoized values that provide aggregated or summarized information from the state (e.g., an object containing status counts). They must be derived directly from the reactive state to ensure they update automatically when the underlying data changes.

## API Design Standards
- **Object Parameters**: Use object parameters instead of multiple direct parameters for better extensibility:

  // ✅ Preferred
  add({ title, categoryIds })

  // ❌ Avoid
  add(title, categoryIds)

- **Internal Methods**: Use underscore-prefixed methods for cross-store operations to maintain clean separation.

## State Validation Standards
- **Existence checks**: All update and remove operations should validate entity existence before proceeding.
- **Relationship validation**: Verify both entities exist before establishing relationships between them.

## Error Handling Patterns
- **Operation failures**: Define behavior when operations fail (e.g., updating non-existent entities).
- **Graceful degradation**: How to handle missing related entities in helper functions.

## Other Standards
- **Secure ID generation**: Use crypto.randomUUID() for entity ID generation instead of custom implementations for better uniqueness guarantees and security.
- **Return type consistency**: add operations return generated IDs for component workflows requiring immediate entity access, while update and remove operations return void to maintain clean modification APIs.
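The standard hook pattern described above is easier to see in code. Below is a minimal sketch of a store plus its entity hook, assuming a hypothetical BlogPost entity with a categoryIds field; the field names, the filter parameter, and the sort order are illustrative assumptions rather than part of the guidelines.

// Minimal sketch of the store + hook pattern; BlogPost and its fields are assumed for illustration.
import { useMemo } from 'react';
import { create } from 'zustand';
import { persist } from 'zustand/middleware';

interface BlogPost {
  id: string;
  createdAt: string; // ISO 8601 format
  updatedAt: string; // ISO 8601 format
  title: string;
  categoryIds: string[];
}

interface BlogPostState {
  blogPostsDict: Record<string, BlogPost>;
  add: (input: { title: string; categoryIds: string[] }) => string;
  remove: (input: { id: string }) => void;
}

const useBlogPostStore = create<BlogPostState>()(
  persist(
    (set) => ({
      blogPostsDict: {},
      // Object parameters; add returns the generated ID.
      add: ({ title, categoryIds }) => {
        const id = crypto.randomUUID();
        const now = new Date().toISOString();
        set((state) => ({
          blogPostsDict: {
            ...state.blogPostsDict,
            [id]: { id, createdAt: now, updatedAt: now, title, categoryIds },
          },
        }));
        return id;
      },
      remove: ({ id }) => {
        set((state) => {
          if (!state.blogPostsDict[id]) {
            return state; // existence check: ignore removal of unknown IDs
          }
          const nextDict = { ...state.blogPostsDict };
          delete nextDict[id];
          return { blogPostsDict: nextDict };
        });
      },
    }),
    { name: 'blog-posts' } // automatic localStorage persistence
  )
);

// One hook per entity: subscribe to the whole dictionary, derive filtered data
// with useMemo, and return a consistent shape (sorted array, filtered dict, actions).
export function useBlogPosts(filter?: { categoryId?: string }) {
  const blogPostsDict = useBlogPostStore((state) => state.blogPostsDict);
  const add = useBlogPostStore((state) => state.add);
  const remove = useBlogPostStore((state) => state.remove);
  const categoryId = filter?.categoryId;

  const { blogPosts, filteredDict } = useMemo(() => {
    const all = Object.values(blogPostsDict);
    const filtered = categoryId
      ? all.filter((post) => post.categoryIds.includes(categoryId))
      : all;
    const sorted = [...filtered].sort((a, b) => b.createdAt.localeCompare(a.createdAt));
    const dict: Record<string, BlogPost> = {};
    for (const post of sorted) {
      dict[post.id] = post;
    }
    return { blogPosts: sorted, filteredDict: dict };
  }, [blogPostsDict, categoryId]);

  return { blogPosts, blogPostsDict: filteredDict, add, remove };
}

A component would then call useBlogPosts({ categoryId }) and render blogPosts directly, re-rendering automatically whenever the underlying dictionary changes.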


Kategorier: Amerikanska

From Prompt To Partner: Designing Your Custom AI Assistant

Smashingmagazine - fre, 09/26/2025 - 12:00

In “A Week In The Life Of An AI-Augmented Designer”, Kate stumbled her way through an AI-augmented sprint (coffee was chugged, mistakes were made). In “Prompting Is A Design Act”, we introduced WIRE+FRAME, a framework to structure prompts like designers structure creative briefs. Now we’ll take the next step: packaging those structured prompts into AI assistants you can design, reuse, and share.

AI assistants go by different names: CustomGPTs (ChatGPT), Agents (Copilot), and Gems (Gemini). But they all serve the same function — allowing you to customize the default AI model for your unique needs. If we carry over our smart intern analogy, think of these as interns trained to assist you with specific tasks, eliminating the need for repeated instructions or information, and who can support not just you, but your entire team.

Why Build Your Own Assistant?

If you’ve ever copied and pasted the same mega-prompt for the nth time, you’ve experienced the pain. An AI assistant turns a one-off “great prompt” into a dependable teammate. And if you’ve used any of the publicly available AI Assistants, you’ve realized quickly that they’re usually generic and not tailored for your use.

Public AI assistants are great for inspiration, but nothing beats an assistant that solves a repeated problem for you and your team, in your voice, with your context and constraints baked in. Instead of reinventing the wheel by writing new prompts each time, or repeatedly copy-pasting your structured prompts every time, or spending cycles trying to make a public AI Assistant work the way you need it to, your own AI Assistant allows you and others to easily get better, repeatable, consistent results faster.

Benefits Of Reusing Prompts, Even Your Own

Some of the benefits of building your own AI Assistant over writing or reusing your prompts include:

  • Focused on a real repeating problem
    A good AI Assistant isn’t a general-purpose “do everything” bot that you need to keep tweaking. It focuses on a single, recurring problem that takes a long time to complete manually and often results in varying quality depending on who’s doing it (e.g., analyzing customer feedback).
  • Customized for your context
    Most large language models (LLMs, such as ChatGPT) are designed to be everything to everyone. An AI Assistant changes that by allowing you to customize it to automatically work like you want it to, instead of a generic AI.
  • Consistency at scale
    You can use the WIRE+FRAME prompt framework to create structured, reusable prompts. An AI Assistant is the next logical step: instead of copy-pasting that fine-tuned prompt and sharing contextual information and examples each time, you can bake it into the assistant itself, allowing you and others to achieve the same consistent results every time.
  • Codifying expertise
    Every time you turn a great prompt into an AI Assistant, you’re essentially bottling your expertise. Your assistant becomes a living design guide that outlasts projects (and even job changes).
  • Faster ramp-up for teammates
    Instead of new designers starting from a blank slate, they can use pre-tuned assistants. Think of it as knowledge transfer without the long onboarding lecture.
Reasons For Your Own AI Assistant Instead Of Public AI Assistants

Public AI assistants are like stock templates. While they serve a specific purpose compared to the generic AI platform, and are useful starting points, if you want something tailored to your needs and team, you should really build your own.

A few reasons for building your AI Assistant instead of using a public assistant someone else created include:

  • Fit: Public assistants are built for the masses. Your work has quirks, tone, and processes they’ll never quite match.
  • Trust & Security: You don’t control what instructions or hidden guardrails someone else baked in. With your own assistant, you know exactly what it will (and won’t) do.
  • Evolution: An AI Assistant you design and build can grow with your team. You can update files, tweak prompts, and maintain a changelog — things a public bot won’t do for you.

Your own AI Assistants allow you to take your successful ways of interacting with AI and make them repeatable and shareable. And while they are tailored to your and your team’s way of working, remember that they are still based on generic AI models, so the usual AI disclaimers apply:

Don’t share anything you wouldn’t want screenshotted in the next company all-hands. Keep it safe, private, and user-respecting. A shared AI Assistant can potentially reveal its inner workings or data.

Note: We will be building an AI assistant using ChatGPT, aka a CustomGPT, but you can try the same process with any decent LLM sidekick. As of publication, a paid account is required to create CustomGPTs, but once created, they can be shared and used by anyone, regardless of whether they have a paid or free account. Similar limitations apply to the other platforms. Just remember that outputs can vary depending on the LLM model used, the model’s training, mood, and flair for creative hallucinations.

When Not to Build An AI Assistant (Yet)

An AI Assistant is great when the same audience has the same problem often. When the fit isn’t there, the risk is high; you should skip building an AI Assistant for now, as explained below:

  • One-off or rare tasks
    If it won’t be reused at least monthly, I’d recommend keeping it as a saved WIRE+FRAME prompt. For example, something for a one-time audit or creating placeholder content for a specific screen.
  • Sensitive or regulated data
    If you need to build in personally identifiable information (PII), health, finance, legal, or trade secrets, err on the side of not building an AI Assistant. Even if the AI platform promises not to use your data, I’d strongly suggest using redaction or an approved enterprise tool with necessary safeguards in place (company-approved enterprise versions of Microsoft Copilot, for instance).
  • Heavy orchestration or logic
    Multi-step workflows, API calls, database writes, and approvals go beyond the scope of an AI Assistant into Agentic territory (as of now). I’d recommend not trying to build an AI Assistant for these cases.
  • Real-time information
    AI Assistants may not be able to access real-time data like prices, live metrics, or breaking news. If you need these, you can upload near-real-time data (as we do below) or connect with data sources that you or your company controls, rather than relying on the open web.
  • High-stakes outputs
    For cases related to compliance, legal, medical, or any other area requiring auditability, consider implementing process guardrails and training to keep humans in the loop for proper review and accountability.
  • No measurable win
    If you can’t name a success metric (such as time saved, first-draft quality, or fewer re-dos), I’d recommend keeping it as a saved WIRE+FRAME prompt.

Just because these are signs that you should not build your AI Assistant now doesn’t mean you shouldn’t ever. Revisit this decision when you notice that you’re using the same prompt weekly, multiple teammates ask for it, or the manual time spent copy-pasting and refining starts exceeding ~15 minutes. Those are signs that an AI Assistant will pay back quickly.

In a nutshell, build an AI Assistant when you can name the problem, the audience, frequency, and the win. The rest of this article shows how to turn your successful WIRE+FRAME prompt into a CustomGPT that you and your team can actually use. No advanced knowledge, coding skills, or hacks needed.

As Always, Start with the User

This should go without saying to UX professionals, but it’s worth a reminder: if you’re building an AI assistant for anyone besides yourself, start with the user and their needs before you build anything.

  • Who will use this assistant?
  • What’s the specific pain or task they struggle with today?
  • What language, tone, and examples will feel natural to them?

Building without doing this first is a sure way to end up with clever assistants nobody actually wants to use. Think of it like any other product: before you build features, you understand your audience. The same rule applies here, even more so, because AI assistants are only as helpful as they are useful and usable.

From Prompt To Assistant

You’ve already done the heavy lifting with WIRE+FRAME. Now you’re just turning that refined and reliable prompt into a CustomGPT you can reuse and share. You can use MATCH as a checklist to go from a great prompt to a useful AI assistant.

  • M: Map your prompt
    Port your successful WIRE+FRAME prompt into the AI assistant.
  • A: Add knowledge and training
    Ground the assistant in your world. Upload knowledge files, examples, or guides that make it uniquely yours.
  • T: Tailor for audience
    Make it feel natural to the people who will use it. Give it the right capabilities, but also adjust its settings, tone, examples, and conversation starters so they land with your audience.
  • C: Check, test, and refine
    Test the preview with different inputs and refine until you get the results you expect.
  • H: Hand off and maintain
    Set sharing options and permissions, share the link, and maintain it.

A few weeks ago, we invited readers to share their ideas for AI assistants they wished they had. The top contenders were:

  • Prototype Prodigy: Transform rough ideas into prototypes and export them into Figma to refine.
  • Critique Coach: Review wireframes or mockups and point out accessibility and usability gaps.

But the favorite was an AI assistant to turn tons of customer feedback into actionable insights. Readers replied with variations of: “An assistant that can quickly sort through piles of survey responses, app reviews, or open-ended comments and turn them into themes we can act on.”

And that’s the one we will build in this article — say hello to Insight Interpreter.

Walkthrough: Insight Interpreter

Having lots of customer feedback is a nice problem to have. Companies actively seek out customer feedback through surveys and studies (solicited), but also receive feedback that may not have been asked for through social media or public reviews (unsolicited). This is a goldmine of information, but it can be messy and overwhelming trying to make sense of it all, and it’s nobody’s idea of fun. Here’s where an AI assistant like the Insight Interpreter can help. We’ll turn the example prompt created using the WIRE+FRAME framework in Prompting Is A Design Act into a CustomGPT.

When you start building a CustomGPT by visiting https://chat.openai.com/gpts/editor, you’ll see two paths:

  • Conversational interface
    Vibe-chat your way — it’s easy and quick, but similar to unstructured prompts, your inputs get baked in a little messily, so you may end up with vague or inconsistent instructions.
  • Configure interface
    The structured form where you type instructions, upload files, and toggle capabilities. Less instant gratification, less winging it, but more control. This is the option you’ll want for assistants you plan to share or depend on regularly.

The good news is that MATCH works for both. In conversational mode, you can use it as a mental checklist, and we’ll walk through using it in configure mode as a more formal checklist in this article.

M: Map Your Prompt

Paste your full WIRE+FRAME prompt into the Instructions section exactly as written. As a refresher, I’ve included the mapping and snippets of the detailed prompt from before:

  • Who & What: The AI persona and the core deliverable (“…senior UX researcher and customer insights analyst… specialize in synthesizing qualitative data from diverse sources…”).
  • Input Context: Background or data scope to frame the task (“…analyzing customer feedback uploaded from sources such as…”).
  • Rules & Constraints: Boundaries (“…do not fabricate pain points, representative quotes, journey stages, or patterns…”).
  • Expected Output: Format and fields of the deliverable (“…a structured list of themes. For each theme, include…”).
  • Flow: Explicit, ordered sub-tasks (“Recommended flow of tasks: Step 1…”).
  • Reference Voice: Tone, mood, or reference (“…concise, pattern-driven, and objective…”).
  • Ask for Clarification: Ask questions if unclear (“…if data is missing or unclear, ask before continuing…”).
  • Memory: Memory to recall earlier definitions (“Unless explicitly instructed otherwise, keep using this process…”).
  • Evaluate & Iterate: Have the AI self-critique outputs (“…critically evaluate…suggest improvements…”).

If you’re building Copilot Agents or Gemini Gems instead of CustomGPTs, you still paste your WIRE+FRAME prompt into their respective Instructions sections.

A: Add Knowledge And Training

In the knowledge section, upload up to 20 files, clearly labeled, that will help the CustomGPT respond effectively. Keep files small and versioned: reviews_Q2_2025.csv beats latestfile_final2.csv. For this prompt for analyzing customer feedback, generating themes organized by customer journey, rating them by severity and effort, files could include:

  • Taxonomy of themes;
  • Instructions on parsing uploaded data;
  • Examples of real UX research reports using this structure;
  • Scoring guidelines for severity and effort, e.g., what makes something a 3 vs. a 5 in severity;
  • Customer journey map stages;
  • Customer feedback file templates (not actual data).

An example of a file to help it parse uploaded data is shown below:

T: Tailor For Audience
  • Audience tailoring
    If you are building this for others, your prompt should have addressed tone in the “Reference Voice” section. If you didn’t, do it now, so the CustomGPT can be tailored to the tone and expertise level of users who will use it. In addition, use the Conversation starters section to add a few examples or common prompts for users to start using the CustomGPT, again, worded for your users. For instance, we could use “Analyze feedback from the attached file” for our Insights Interpreter to make it more self-explanatory for anyone, instead of “Analyze data,” which may be good enough if you were using it alone. For my Designerly Curiosity GPT, assuming that users may not know what it could do, I use “What are the types of curiosity?” and “Give me a micro-practice to spark curiosity”.
  • Functional tailoring
    Fill in the CustomGPT name, icon, description, and capabilities.
    • Name: Pick one that will make it clear what the CustomGPT does. Let’s use “Insights Interpreter — Customer Feedback Analyzer”. If needed, you can also add a version number. This name will show up in the sidebar when people use it or pin it, so make the first part memorable and easily identifiable.
    • Icon: Upload an image or generate one. Keep it simple so it can be easily recognized in a smaller dimension when people pin it in their sidebar.
    • Description: A brief, yet clear description of what the CustomGPT can do. If you plan to list it in the GPT store, this will help people decide if they should pick yours over something similar.
    • Recommended Model: If your CustomGPT needs the capabilities of a particular model (e.g., needs GPT-5 thinking for detailed analysis), select it. In most cases, you can safely leave it up to the user or select the most common model.
    • Capabilities: Turn off anything you won’t need. We’ll turn off “Web Search” to allow the CustomGPT to focus only on uploaded data, without expanding the search online, and we will turn on “Code Interpreter & Data Analysis” to allow it to understand and process uploaded files. “Canvas” allows users to work on a shared canvas with the GPT to edit writing tasks, and “Image generation” is needed only if the CustomGPT must create images.
    • Actions: Making third-party APIs available to the CustomGPT, advanced functionality we don’t need.
    • Additional Settings: Sneakily hidden and opted in by default, I opt out of training OpenAI’s models.
C: Check, Test & Refine

Do one last visual check to make sure you’ve filled in all applicable fields and the basics are in place: is the concept sharp and clear (not a do-everything bot)? Are the roles, goals, and tone clear? Do we have the right assets (docs, guides) to support it? Is the flow simple enough that others can get started easily? Once those boxes are checked, move into testing.

Use the Preview panel to verify that your CustomGPT performs as well, or better, than your original WIRE+FRAME prompt, and that it works for your intended audience. Try a few representative inputs and compare the results to what you expected. If something worked before but doesn’t now, check whether new instructions or knowledge files are overriding it.

When things don’t look right, here are quick debugging fixes:

  • Generic answers?
    Tighten Input Context or update the knowledge files.
  • Hallucinations?
    Revisit your Rules section. Turn off web browsing if you don’t need external data.
  • Wrong tone?
    Strengthen Reference Voice or swap in clearer examples.
  • Inconsistent?
    Test across models in preview and set the most reliable one as “Recommended.”
H: Hand Off And Maintain

When your CustomGPT is ready, you can publish it via the “Create” option. Select the appropriate access option:

  • Only me: Private use. Perfect if you’re still experimenting or keeping it personal.
  • Anyone with the link: Exactly what it means. Shareable but not searchable. Great for pilots with a team or small group. Just remember that links can be reshared, so treat them as semi-public.
  • GPT Store: Fully public. Your assistant is listed and findable by anyone browsing the store. (This is the option we’ll use.)
  • Business workspace (if you’re on GPT Business): Share with others within your business account only — the easiest way to keep it in-house and controlled.

But handoff doesn’t end with hitting publish; you should maintain your CustomGPT to keep it relevant and useful:

  • Collect feedback: Ask teammates what worked, what didn’t, and what they had to fix manually.
  • Iterate: Apply changes directly or duplicate the GPT if you want multiple versions in play. You can find all your CustomGPTs at: https://chatgpt.com/gpts/mine
  • Track changes: Keep a simple changelog (date, version, updates) for traceability.
  • Refresh knowledge: Update knowledge files and examples on a regular cadence so answers don’t go stale.

And that’s it! Our Insights Interpreter is now live!

Since we used the WIRE+FRAME prompt from the previous article to create the Insights Interpreter CustomGPT, I compared the outputs:

The results are similar, with slight differences, and that’s expected. If you compare the results carefully, the themes, issues, journey stages, frequency, severity, and estimated effort match, with some differences in the wording of the theme, issue summary, and problem statement. The opportunities and quotes show more visible differences. Most of the variation comes from the CustomGPT’s knowledge and training files, including instructions, examples, and guardrails, which now act as always-on guidance.

Keep in mind that in reality, Generative AI is by nature generative, so outputs will vary. Even with the same data, you won’t get identical wording every time. In addition, underlying models and their capabilities rapidly change. If you want to keep things as consistent as possible, recommend a model (though people can change it), track versions of your data, and compare for structure, priorities, and evidence rather than exact wording.

While I’d love for you to use Insights Interpreter, I strongly recommend taking 15 minutes to follow the steps above and create your own, tailored to exactly what you or your team needs, including the tone, context, and output formats, so you get the real AI Assistant you need!

Inspiration For Other AI Assistants

We just built the Insight Interpreter and mentioned two contenders: Critique Coach and Prototype Prodigy. Here are a few other realistic uses that can spark ideas for your own AI Assistant:

  • Workshop Wizard: Generates workshop agendas, produces icebreaker questions, and drafts follow-up surveys.
  • Research Roundup Buddy: Summarizes raw transcripts into key themes, then creates highlight reels (quotes + visuals) for team share-outs.
  • Persona Refresher: Updates stale personas with the latest customer feedback, then rewrites them in different tones (boardroom formal vs. design-team casual).
  • Content Checker: Proofs copy for tone, accessibility, and reading level before it ever hits your site.
  • Trend Tamer: Scans competitor reviews and identifies emerging patterns you can act on before they reach your roadmap.
  • Microcopy Provocateur: Tests alternate copy options by injecting different tones (sassy, calm, ironic, nurturing) and role-playing how users might react, especially useful for error states or Call to Actions.
  • Ethical UX Debater: Challenges your design decisions and deceptive designs by simulating the voice of an ethics board or concerned user.

The best AI Assistants come from carefully inspecting your workflow and looking for areas where AI can augment your work regularly and repetitively. Then follow the steps above to build a team of customized AI assistants.

Ask Me Anything About Assistants
  • What are some limitations of a CustomGPT?
    Right now, the best parallels for AI are a very smart intern with access to a lot of information. CustomGPTs are still running on LLM models that are basically trained on a lot of information and programmed to predictively generate responses based on that data, including possible bias, misinformation, or incomplete information. Keeping that in mind, you can make that intern provide better and more relevant results by using your uploads as onboarding docs, your guardrails as a job description, and your updates as retraining.
  • Can I copy someone else’s public CustomGPT and tweak it?
    Not directly with CustomGPTs, but if you get inspired by another CustomGPT, you can look at how it’s framed and rebuild your own using WIRE+FRAME & MATCH. That way, you make it your own and have full control of the instructions, files, and updates. With Google’s equivalent, Gemini Gems, however, you can: shared Gems behave similarly to shared Google Docs, so once shared, any Gem instructions and files that you have uploaded can be viewed by any user with access to the Gem. Any user with edit access to the Gem can also update and delete the Gem.
  • How private are my uploaded files?
    The files you upload are stored and used to answer prompts to your CustomGPT. If your CustomGPT is not private or you didn’t disable the hidden setting to allow CustomGPT conversations to improve the model, that data could be referenced. Don’t upload sensitive, confidential, or personal data you wouldn’t want circulating. Enterprise accounts do have some protections, so check with your company.
  • How many files can I upload, and does size matter?
    Limits vary by platform, but smaller, specific files usually perform better than giant docs. Think “chapter” instead of “entire book.” At the time of publishing, CustomGPTs allow up to 20 files, Copilot Agents up to 200 (if you need anywhere near that many, chances are your agent is not focused enough), and Gemini Gems up to 10.
  • What’s the difference between a CustomGPT and a Project?
    A CustomGPT is a focused assistant, like an intern trained to do one role well (like “Insight Interpreter”). A Project is more like a workspace where you can group multiple prompts, files, and conversations together for a broader effort. CustomGPTs are specialists. Projects are containers. If you want something reusable, shareable, and role-specific, go to CustomGPT. If you want to organize broader work with multiple tools and outputs, and shared knowledge, Projects are the better fit.
From Reading To Building

In this AI x Design series, we’ve gone from messy prompting (“A Week In The Life Of An AI-Augmented Designer”) to a structured prompt framework, WIRE+FRAME (“Prompting Is A Design Act”). And now, in this article, your very own reusable AI sidekick.

CustomGPTs don’t replace designers but augment them. The real magic isn’t in the tool itself, but in how you design and manage it. You can use public CustomGPTs for inspiration, but the ones that truly fit your workflow are the ones you design yourself. They extend your craft, codify your expertise, and give your team leverage that generic AI models can’t.

Build one this week. Even better, today. Train it, share it, stress-test it, and refine it into an AI assistant that can augment your team.

Kategorier: Amerikanska

Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)

Smashingmagazine - ons, 09/24/2025 - 19:00

There is a spectrum of opinions on how dramatically all creative professions will be changed by the coming wave of agentic AI, from the very skeptical to the wildly optimistic and even apocalyptic. I think that even if you are on the “skeptical” end of the spectrum, it makes sense to explore ways this new technology can help with your everyday work. As for my everyday work, I’ve been doing UX and product design for about 25 years now, and I’m always keen to learn new tricks and share them with colleagues. Right now, I’m interested in AI-assisted prototyping, and I’m here to share my thoughts on how it can change the process of designing digital products.

To set your expectations up front: this exploration focuses on a specific part of the product design lifecycle. Many people know about the Double Diamond framework, which shows the path from problem to solution. However, I think it’s the Triple Diamond model that makes an important point for our needs. It explicitly separates the solution space into two phases: Solution Discovery (ideating and validating the right concept) and Solution Delivery (engineering the validated concept into a final product). This article is focused squarely on that middle diamond: Solution Discovery.

How AI can help with the preceding (Problem Discovery) and the following (Solution Delivery) stages is out of the scope of this article. Problem Discovery is less about prototyping and more about research, and while I believe AI can revolutionize the research process as well, I’ll leave that to people more knowledgeable in the field. As for Solution Delivery, it is more about engineering optimization. There’s no doubt that software engineering in the AI era is undergoing dramatic changes, but I’m not an engineer — I’m a designer, so let me focus on my “sweet spot”.

And my “sweet spot” has a specific flavor: designing enterprise applications. In this world, the main challenge is taming complexity: dealing with complicated data models and guiding users through non-linear workflows. This background has had a big impact on my approach to design, putting a lot of emphasis on the underlying logic and structure. This article explores the potential of AI through this lens.

I’ll start by outlining the typical artifacts designers create during Solution Discovery. Then, I’ll examine the problems with how this part of the process often plays out in practice. Finally, we’ll explore whether AI-powered prototyping can offer a better approach, and if so, whether it aligns with what people call “vibe coding,” or calls for a more deliberate and disciplined way of working.

What We Create During Solution Discovery

The Solution Discovery phase begins with the key output from the preceding research: a well-defined problem and a core hypothesis for a solution. This is our starting point. The artifacts we create from here are all aimed at turning that initial hypothesis into a tangible, testable concept.

Traditionally, at this stage, designers can produce artifacts of different kinds, progressively increasing fidelity: from napkin sketches, boxes-and-arrows, and conceptual diagrams to hi-fi mockups, then to interactive prototypes, and in some cases even live prototypes. Artifacts of lower fidelity allow fast iteration and enable the exploration of many alternatives, while artifacts of higher fidelity help to understand, explain, and validate the concept in all its details.

It’s important to think holistically, considering different aspects of the solution. I would highlight three dimensions:

  1. Conceptual model: Objects, relations, attributes, actions;
  2. Visualization: Screens, from rough sketches to hi-fi mockups;
  3. Flow: From the very high-level user journeys to more detailed ones.

One can argue that those are layers rather than dimensions, and each of them builds on the previous ones (for example, according to Semantic IxD by Daniel Rosenberg), but I see them more as different facets of the same thing, so the design process through them is not necessarily linear: you may need to switch from one perspective to another many times.

This is how different types of design artifacts map to these dimensions:

As Solution Discovery progresses, designers move from the left part of this map to the right, from low-fidelity to high-fidelity, from ideating to validating, from diverging to converging.

Note that at the beginning of the process, different dimensions are supported by artifacts of different types (boxes-and-arrows, sketches, class diagrams, etc.), and only closer to the end can you build a live prototype that encompasses all three dimensions: conceptual model, visualization, and flow.

This progression shows a classic trade-off, like the difference between a pencil drawing and an oil painting. The drawing lets you explore ideas in the most flexible way, whereas the painting has a lot of detail and overall looks much more realistic, but is hard to adjust. Similarly, as we go towards artifacts that integrate all three dimensions at higher fidelity, our ability to iterate quickly and explore divergent ideas goes down. This inverse relationship has long been an accepted, almost unchallenged, limitation of the design process.

The Problem With The Mockup-Centric Approach

Faced with this difficult trade-off, often teams opt for the easiest way out. On the one hand, they need to show that they are making progress and create things that appear detailed. On the other hand, they rarely can afford to build interactive or live prototypes. This leads them to over-invest in one type of artifact that seems to offer the best of both worlds. As a result, the neatly organized “bento box” of design artifacts we saw previously gets shrunk down to just one compartment: creating static high-fidelity mockups.

This choice is understandable, as several forces push designers in this direction. Stakeholders are always eager to see nice pictures, while artifacts representing user flows and conceptual models receive much less attention and priority. They are too high-level and hardly usable for validation, and usually, not everyone can understand them.

On the other side of the fidelity spectrum, interactive prototypes require too much effort to create and maintain, and creating live prototypes in code used to require special skills (and again, effort). And even when teams make this investment, they do so at the end of Solution Discovery, during the convergence stage, when it is often too late to experiment with fundamentally different ideas. With so much effort already sunk, there is little appetite to go back to the drawing board.

It’s no surprise, then, that many teams default to the perceived safety of static mockups, seeing them as a middle ground between the roughness of the sketches and the overwhelming complexity and fragility that prototypes can have.

As a result, validation with users doesn’t provide enough confidence that the solution will actually solve the problem, and teams are forced to make a leap of faith to start building. To make matters worse, they do so without a clear understanding of the conceptual model, the user flows, and the interactions, because from the very beginning, designers’ attention has been heavily skewed toward visualization.

The result is often a design artifact that resembles the famous “horse drawing” meme: beautifully rendered in the parts everyone sees first (the mockups), but dangerously underdeveloped in its underlying structure (the conceptual model and flows).

While this is a familiar problem across the industry, its severity depends on the nature of the project. If your core challenge is to optimize a well-understood, linear flow (like many B2C products), a mockup-centric approach can be perfectly adequate. The risks are contained, and the “lopsided horse” problem is unlikely to be fatal.

However, it’s different for the systems I specialize in: complex applications defined by intricate data models and non-linear, interconnected user flows. Here, the biggest risks are not on the surface but in the underlying structure, and a lack of attention to the latter would be a recipe for disaster.

Transforming The Design Process

This situation makes me wonder:

How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one?

If we were able to answer this question, we would:

  • Learn faster.
    By going straight from intent to a testable artifact, we cut the feedback loop from weeks to days.
  • Gain more confidence.
    Users interact with real logic, which gives us more proof that the idea works.
  • Enforce conceptual clarity.
    A live prototype cannot hide a flawed or ambiguous conceptual model.
  • Establish a clear and lasting source of truth.
    A live prototype, combined with a clearly documented design intent, provides the engineering team with an unambiguous specification.

Of course, the desire for such a process is not new. This vision of a truly prototype-driven workflow is especially compelling for enterprise applications, where the benefits of faster learning and forced conceptual clarity are the best defense against costly structural flaws. But this ideal was still out of reach because prototyping in code required so much effort and such specialized skills. Now, the rise of powerful AI coding assistants changes this equation in a big way.

The Seductive Promise Of “Vibe Coding”

And the answer seems to be obvious: vibe coding!

“Vibe coding is an artificial intelligence-assisted software development style popularized by Andrej Karpathy in early 2025. It describes a fast, improvisational, collaborative approach to creating software where the developer and a large language model (LLM) tuned for coding is acting rather like pair programmers in a conversational loop.”

Wikipedia

The original tweet by Andrej Karpathy:

The allure of this approach is undeniable. If you are not a developer, you are bound to feel awe when you describe a solution in plain language, and moments later, you can interact with it. This seems to be the ultimate fulfillment of our goal: a direct, frictionless path from an idea to a live prototype. But is this method reliable enough to build our new design process around it?

The Trap: A Process Without A Blueprint

Vibe coding mixes up a description of the UI with a description of the system itself, resulting in a prototype based on changing assumptions rather than a clear, solid model.

The pitfall of vibe coding is that it encourages us to express our intent in the most ambiguous way possible: by having a conversation.

This is like hiring a builder and telling them what to do one sentence at a time without ever presenting them a blueprint. They could make a wall that looks great, but you can’t be sure that it can hold weight.

I’ll give you one example illustrating problems you may face if you try to jump over the chasm between your idea and a live prototype relying on pure vibe coding in the spirit of Andrej Karpathy’s tweet. Imagine I want to prototype a solution to keep track of tests to validate product ideas. I open my vibe coding tool of choice (I intentionally don’t disclose its name, as I believe they all are awesome yet prone to similar pitfalls) and start with the following prompt:

I need an app to track tests. For every test, I need to fill out the following data:
- Hypothesis (we believe that...)
- Experiment (to verify that, we will...)
- When (a single date, or a period)
- Status (New/Planned/In Progress/Proven/Disproven)

And in a minute or so, I get a working prototype:

Inspired by success, I go further:

Please add the ability to specify a product idea for every test. Also, I want to filter tests by product ideas and see how many tests each product idea has in each status.

And the result is still pretty good:

But then I want to extend the functionality related to product ideas:

Okay, one more thing. For every product idea, I want to assess the impact score, the confidence score, and the ease score, and get the overall ICE score. Perhaps I need a separate page focused on the product idea, with all the relevant information and related tests.

And from this point on, the results are getting more and more confusing.

The flow of creating tests hasn’t changed much. I can still create a bunch of tests, and they seem to be organized by product ideas. But when I click “Product Ideas” in the top navigation, I see nothing:

I need to create my ideas from scratch, and they are not connected to the tests I created before:

Moreover, when I go back to “Tests”, I see that they are all gone. Clearly something went wrong, and my AI assistant confirms that:

No, this is not expected behavior — it’s a bug! The issue is that tests are being stored in two separate places (local state in the Index page and App state), so tests created on the main page don’t sync with the product ideas page.

Sure, eventually it fixed that bug, but note that we encountered this just on the third step, when we asked to slightly extend the functionality of a very simple app. The more layers of complexity we add, the more roadblocks of this sort we are bound to face.

Also note that this specific problem of a not fully thought-out relationship between two entities (product ideas and tests) is not isolated at the technical level, and therefore, it didn’t go away once the technical bug was fixed. The underlying conceptual model is still broken, and it manifests in the UI as well.

For example, you can still create “orphan” tests that are not connected to any item from the “Product Ideas” page. As a result, you may end up with different numbers of ideas and tests on different pages of the app:

Let’s diagnose what really happened here. The AI’s response that this is a “bug” is only half the story. The true root cause is a conceptual model failure. My prompts never explicitly defined the relationship between product ideas and tests. The AI was forced to guess, which led to the broken experience. For a simple demo, this might be a fixable annoyance. But for a data-heavy enterprise application, this kind of structural ambiguity is fatal. It demonstrates the fundamental weakness of building without a blueprint, which is precisely what vibe coding encourages.
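To make the missing piece concrete, here is a minimal TypeScript sketch of the kind of explicit conceptual model the prompts never stated. The interfaces are illustrative assumptions, not output from the vibe-coding tool.

// Illustrative only: an explicit model in which every test belongs to exactly one product idea.
interface ProductIdea {
  id: string;
  name: string;
  impact: number;     // components of the overall ICE score
  confidence: number;
  ease: number;
}

interface Test {
  id: string;
  productIdeaId: string; // single source of truth for the relationship: no "orphan" tests
  hypothesis: string;    // "we believe that..."
  experiment: string;    // "to verify that, we will..."
  when: { start: string; end?: string }; // a single date, or a period
  status: 'New' | 'Planned' | 'In Progress' | 'Proven' | 'Disproven';
}

With the relationship written down like this, both pages would have to read and write the same collection of tests, which is exactly the ambiguity the conversational prompts left open.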

Don’t take this as a criticism of vibe coding tools. They are creating real magic. However, the fundamental truth about “garbage in, garbage out” is still valid. If you don’t express your intent clearly enough, chances are the result won’t fulfill your expectations.

Another problem worth mentioning is that even if you wrestle it into a state that works, the artifact is a black box that can hardly serve as a reliable specification for the final product. The initial meaning is lost in the conversation, and all that’s left is the end result. This turns the development team into “code archaeologists” who have to figure out what the designer was thinking by reverse-engineering the AI’s code, which is frequently very complicated. Any speed gained at the start is lost right away because of this friction and uncertainty.

From Fast Magic To A Solid Foundation

Pure vibe coding, for all its allure, encourages building without a blueprint. As we’ve seen, this results in structural ambiguity, which is not acceptable when designing complex applications. We are left with a seemingly quick but fragile process that creates a black box that is difficult to iterate on and even more so to hand off.

This leads us back to our main question: how might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap? The answer lies in a more methodical, disciplined, and therefore trustworthy process.

In Part 2 of this series, “A Practical Guide to Building with Clarity”, I will outline the entire workflow for Intent Prototyping. This method places the explicit intent of the designer at the forefront of the process while embracing the potential of AI-assisted coding.

Thank you for reading, and I look forward to seeing you in Part 2.

Kategorier: Amerikanska

Ambient Animations In Web Design: Principles And Implementation (Part 1)

Smashingmagazine - mån, 09/22/2025 - 15:00

Unlike timeline-based animations, which tell stories across a sequence of events, or interaction animations that are triggered when someone touches something, ambient animations are the kind of passive movements you might not notice at first. But, they make a design look alive in subtle ways.

In an ambient animation, elements might subtly transition between colours, move slowly, or gradually shift position. Elements can appear and disappear, change size, or they could rotate slowly.

Ambient animations aren’t intrusive; they don’t demand attention, aren’t distracting, and don’t interfere with what someone’s trying to achieve when they use a product or website. They can be playful, too, making someone smile when they catch sight of them. That way, ambient animations add depth to a brand’s personality.

To illustrate the concept of ambient animations, I’ve recreated the cover of a Quick Draw McGraw comic book (PDF) as a CSS/SVG animation. The comic was published by Charlton Comics in 1971, and, being printed, these characters didn’t move, making them ideal candidates to transform into ambient animations.

FYI: Original cover artist Ray Dirgo was best known for his work drawing Hanna-Barbera characters for Charlton Comics during the 1970s. Ray passed away in 2000 at the age of 92. He outlived Charlton Comics, which went out of business in 1986, and DC Comics acquired its characters.

Tip: You can view the complete ambient animation code on CodePen.

Choosing Elements To Animate

Not everything on a page or in a graphic needs to move, and part of designing an ambient animation is knowing when to stop. The trick is to pick elements that lend themselves naturally to subtle movement, rather than forcing motion into places where it doesn’t belong.

Natural Motion Cues

When I’m deciding what to animate, I look for natural motion cues and think about when something would move naturally in the real world. I ask myself: “Does this thing have weight?”, “Is it flexible?”, and “Would it move in real life?” If the answer’s “yes,” it’ll probably feel right if it moves. There are several motion cues in Ray Dirgo’s cover artwork.

For example, the peace pipe Quick Draw’s puffing on has two feathers hanging from it. They swing slightly left and right by three degrees as the pipe moves, just like real feathers would.

#quick-draw-pipe {
  animation: quick-draw-pipe-rotate 6s ease-in-out infinite alternate;
}

@keyframes quick-draw-pipe-rotate {
  0% { transform: rotate(3deg); }
  100% { transform: rotate(-3deg); }
}

#quick-draw-feather-1 {
  animation: quick-draw-feather-1-rotate 3s ease-in-out infinite alternate;
}

#quick-draw-feather-2 {
  animation: quick-draw-feather-2-rotate 3s ease-in-out infinite alternate;
}

@keyframes quick-draw-feather-1-rotate {
  0% { transform: rotate(3deg); }
  100% { transform: rotate(-3deg); }
}

@keyframes quick-draw-feather-2-rotate {
  0% { transform: rotate(-3deg); }
  100% { transform: rotate(3deg); }
}

Atmosphere, Not Action

I often choose elements or decorative details that add to the vibe but don’t fight for attention.

Ambient animations aren’t about signalling to someone where they should look; they’re about creating a mood.

Here, the chief slowly and subtly rises and falls as he puffs on his pipe.

#chief {
  animation: chief-rise-fall 3s ease-in-out infinite alternate;
}

@keyframes chief-rise-fall {
  0% { transform: translateY(0); }
  100% { transform: translateY(-20px); }
}

For added effect, the feather on his head also moves in time with his rise and fall:

#chief-feather-1 {
  animation: chief-feather-1-rotate 3s ease-in-out infinite alternate;
}

#chief-feather-2 {
  animation: chief-feather-2-rotate 3s ease-in-out infinite alternate;
}

@keyframes chief-feather-1-rotate {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(-9deg); }
}

@keyframes chief-feather-2-rotate {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(9deg); }
}

Playfulness And Fun

One of the things I love most about ambient animations is how they bring fun into a design. They’re an opportunity to demonstrate personality through playful details that make people smile when they notice them.

Take a closer look at the chief, and you might spot his eyebrows raising and his eyes crossing as he puffs hard on his pipe. Quick Draw’s eyebrows also bounce at what look like random intervals.

#quick-draw-eyebrow {
  animation: quick-draw-eyebrow-raise 5s ease-in-out infinite;
}

@keyframes quick-draw-eyebrow-raise {
  0%, 20%, 60%, 100% { transform: translateY(0); }
  10%, 50%, 80% { transform: translateY(-10px); }
}

Keep Hierarchy In Mind

Motion draws the eye, and even subtle movements have a visual weight. So, I reserve the most obvious animations for elements that I need to create the biggest impact.

Smoking his pipe clearly has a big effect on Quick Draw McGraw, so to demonstrate this, I wrapped his elements — including his pipe and its feathers — within a new SVG group, and then I made that wobble.

#quick-draw-group {
  animation: quick-draw-group-wobble 6s ease-in-out infinite;
}

@keyframes quick-draw-group-wobble {
  0% { transform: rotate(0deg); }
  15% { transform: rotate(2deg); }
  30% { transform: rotate(-2deg); }
  45% { transform: rotate(1deg); }
  60% { transform: rotate(-1deg); }
  75% { transform: rotate(0.5deg); }
  100% { transform: rotate(0deg); }
}

Then, to emphasise this motion, I mirrored those values to wobble his shadow:

#quick-draw-shadow {
  animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
}

@keyframes quick-draw-shadow-wobble {
  0% { transform: rotate(0deg); }
  15% { transform: rotate(-2deg); }
  30% { transform: rotate(2deg); }
  45% { transform: rotate(-1deg); }
  60% { transform: rotate(1deg); }
  75% { transform: rotate(-0.5deg); }
  100% { transform: rotate(0deg); }
}

Apply Restraint

Just because something can be animated doesn’t mean it should be. When creating an ambient animation, I study the image and note the elements where subtle motion might add life. I keep in mind the questions: “What’s the story I’m telling? Where does movement help, and when might it become distracting?”

Remember, restraint isn’t just about doing less; it’s about doing the right things less often.

Layering SVGs For Export

In “Smashing Animations Part 4: Optimising SVGs,” I wrote about the process I rely on to “prepare, optimise, and structure SVGs for animation.” When elements are crammed into a single SVG file, they can be a nightmare to navigate. Locating a specific path or group can feel like searching for a needle in a haystack.

That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time — always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section.

I start by exporting background elements, optimising them, adding class and ID attributes, and pasting their code into my SVG file.

Then, I export elements that often stay static or move as groups, like the chief and Quick Draw McGraw.

Before finally exporting, naming, and adding details, like Quick Draw’s pipe, eyes, and his stoned sparkles.

Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues as they all slot into place automatically.

Implementing Ambient Animations

You don’t need an animation framework or library to add ambient animations to a project. Most of the time, all you’ll need is a well-prepared SVG and some thoughtful CSS.

But, let’s start with the SVG. The key is to group elements logically and give them meaningful class or ID attributes, which act as animation hooks in the CSS. For this animation, I gave every moving part its own identifier like #quick-draw-tail or #chief-smoke-2. That way, I could target exactly what I needed without digging through the DOM like a raccoon in a trash can.

Once the SVG is set up, CSS does most of the work. I can use @keyframes for more expressive movement, or animation-delay to simulate randomness and stagger timings. The trick is to keep everything subtle and remember I’m not animating for attention, I’m animating for atmosphere.
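For instance, here is a minimal sketch of how staggered delays can fake randomness. The sparkle IDs and timings below are illustrative rather than taken from my final file:

/* Three hypothetical sparkles share one keyframe animation, staggered
   with animation-delay so they never twinkle in unison. */
#sparkle-1 { animation: sparkle-twinkle 4s ease-in-out infinite; }
#sparkle-2 { animation: sparkle-twinkle 4s ease-in-out infinite; animation-delay: 1.3s; }
#sparkle-3 { animation: sparkle-twinkle 4s ease-in-out infinite; animation-delay: 2.1s; }

@keyframes sparkle-twinkle {
  0%, 100% { opacity: 0.2; }
  50% { opacity: 1; }
}

Because each sparkle starts at a different point in the same cycle, they never peak together, which reads as randomness even though the loop is strictly periodic.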

Remember that most ambient animations loop continuously, so they should be lightweight and performance-friendly. And of course, it’s good practice to respect users who’ve asked for less motion. You can wrap your animations in an @media prefers-reduced-motion query so they only run when they’re welcome.

@media (prefers-reduced-motion: no-preference) {
  #quick-draw-shadow {
    animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
  }
}

It’s a small touch that’s easy to implement, and it makes your designs more inclusive.

Ambient Animation Design Principles

If you want your animations to feel ambient, more like atmosphere than action, it helps to follow a few principles. These aren’t hard and fast rules, but rather things I’ve learned while animating smoke, sparkles, eyeballs, and eyebrows.

Keep Animations Slow And Smooth

Ambient animations should feel relaxed, so use longer durations and choose easing curves that feel organic. I often use ease-in-out, but cubic Bézier curves can also be helpful when you want a more relaxed feel and the kind of movements you might find in nature.
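As a rough illustration, here is how a custom curve might be applied to one of the smoke elements. The ID follows the naming pattern used elsewhere in the file, but the values are only a starting point to tweak by eye:

/* A slow drift eased with a gentle cubic-bezier curve; the long duration
   and soft easing keep the motion relaxed rather than mechanical. */
#chief-smoke-1 {
  animation: chief-smoke-drift 8s cubic-bezier(0.45, 0.05, 0.55, 0.95) infinite alternate;
}

@keyframes chief-smoke-drift {
  0% { transform: translateY(0); }
  100% { transform: translateY(-30px); }
}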

Loop Seamlessly And Avoid Abrupt Changes

Hard resets or sudden jumps can ruin the mood, so if an animation loops, ensure it cycles smoothly. You can do this by matching the start and end keyframes, or by setting animation-direction to alternate so the animation plays forward, then back.
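Both approaches are sketched below using hypothetical elements. The first keyframe block returns to its starting value so the loop restarts invisibly; the second animates one way and lets animation-direction: alternate play it back in reverse:

/* Option 1: match the first and last keyframes so the cycle has no seam. */
#bobbing-element {
  animation: gentle-bob 4s ease-in-out infinite;
}

@keyframes gentle-bob {
  0% { transform: translateY(0); }
  50% { transform: translateY(-8px); }
  100% { transform: translateY(0); }
}

/* Option 2: animate halfway and let alternate reverse it on each cycle. */
#bobbing-element-alt {
  animation: gentle-bob-half 2s ease-in-out infinite alternate;
}

@keyframes gentle-bob-half {
  0% { transform: translateY(0); }
  100% { transform: translateY(-8px); }
}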

Use Layering To Build Complexity

A single animation might be boring. Five subtle animations, each on separate layers, can feel rich and alive. Think of it like building a sound mix — you want variation in rhythm, tone, and timing. In my animation, sparkles twinkle at varying intervals, smoke curls upward, feathers sway, and eyes boggle. Nothing dominates, and each motion plays its small part in the scene.
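Here is a sketch of that kind of mix, with hypothetical IDs and deliberately mismatched durations so the layers drift in and out of phase instead of pulsing together:

/* Three illustrative layers, each on its own rhythm. */
#swaying-feather { animation: slow-sway 7s ease-in-out infinite alternate; }
#twinkling-sparkle { animation: soft-fade 2.5s ease-in-out infinite alternate; }
#boggling-eyes { animation: occasional-blink 9s ease-in-out infinite; }

@keyframes slow-sway {
  0% { transform: rotate(-2deg); }
  100% { transform: rotate(2deg); }
}

@keyframes soft-fade {
  0% { opacity: 0.85; }
  100% { opacity: 1; }
}

@keyframes occasional-blink {
  0%, 92%, 100% { transform: scaleY(1); }
  96% { transform: scaleY(0.1); }
}

No single layer is interesting on its own, but together they keep the scene quietly alive.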

Avoid Distractions

The point of an ambient animation is that it doesn’t dominate. It’s a background element and not a call to action. If someone’s eyes are drawn to a raised eyebrow, it’s probably too much, so dial back the animation until it feels like something you’d only catch if you’re really looking.

Consider Accessibility And Performance

Check prefers-reduced-motion, and don’t assume everyone’s device can handle complex animations. SVG and CSS are light, but blur filters, drop shadows, and complex CSS animations can still tax lower-powered devices. When an animation is purely decorative, consider adding aria-hidden="true" to keep it from cluttering up the accessibility tree.

Quick On The Draw

Ambient animation is like seasoning on a great dish. It’s the pinch of salt you barely notice, but you’d miss when it’s gone. It doesn’t shout, it whispers. It doesn’t lead, it lingers. It’s floating smoke, swaying feathers, and sparkles you catch in the corner of your eye. And when it’s done well, ambient animation adds personality to a design without asking for applause.

Now, I realise that not everyone needs to animate cartoon characters. So, in part two, I’ll share how I created animations for several recent client projects. Until next time, if you’re crafting an illustration or working with SVG, ask yourself: What would move if this were real? Then animate just that. Make it slow and soft. Keep it ambient.

You can view the complete ambient animation code on CodePen.

Kategorier: Amerikanska

The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence

Smashingmagazine - fre, 09/19/2025 - 12:00

Misuse of, and misplaced trust in, AI is becoming unfortunately common. For example, lawyers leveraging the power of generative AI for research have submitted court filings citing multiple compelling legal precedents. The problem? The AI had confidently, eloquently, and completely fabricated the cases cited. The resulting sanctions and public embarrassment can become a viral cautionary tale, shared across social media as a stark example of AI’s fallibility.

This goes beyond a technical glitch; it’s a catastrophic failure of trust in AI tools in an industry where accuracy and trust are critical. The trust issue here is twofold: first, the law firms blindly over-trusted the AI tool to return accurate information in the briefs they submitted; second, the subsequent fallout can lead to a strong distrust of AI tools, to the point where platforms featuring AI might not be considered for use until trust is re-established.

Issues with trusting AI aren’t limited to the legal field. We are seeing the impact of fictional AI-generated information in critical fields such as healthcare and education. On a more personal scale, many of us have had the experience of asking Siri or Alexa to perform a task, only to have it done incorrectly or not at all, for no apparent reason. I’m guilty of sending more than one out-of-context hands-free text to an unsuspecting contact after Siri mistakenly pulls up a completely different name than the one I’d requested.

With digital products moving to incorporate generative and agentic AI at an increasingly frequent rate, trust has become the invisible user interface. When it works, our interactions are seamless and powerful. When it breaks, the entire experience collapses, with potentially devastating consequences. As UX professionals, we’re on the front lines of a new twist on a common challenge. How do we build products that users can rely on? And how do we even begin to measure something as ephemeral as trust in AI?

Trust isn’t a mystical quality. It is a psychological construct built on predictable factors. I won’t dive deep into academic literature on trust in this article. However, it is important to understand that trust is a concept that can be understood, measured, and designed for. This article will provide a practical guide for UX researchers and designers. We will briefly explore the psychological anatomy of trust, offer concrete methods for measuring it, and provide actionable strategies for designing more trustworthy and ethical AI systems.

The Anatomy of Trust: A Psychological Framework for AI

To build trust, we must first understand its components. Think of trust like a four-legged stool. If any one leg is weak, the whole thing becomes unstable. Based on classic psychological models, we can adapt these “legs” for the AI context.

1. Ability (or Competence)

This is the most straightforward pillar: Does the AI have the skills to perform its function accurately and effectively? If a weather app is consistently wrong, you stop trusting it. If an AI legal assistant creates fictitious cases, it has failed the basic test of ability. This is the functional, foundational layer of trust.

2. Benevolence

This moves from function to intent. Does the user believe the AI is acting in their best interest? A GPS that suggests a toll-free route even if it’s a few minutes longer might be perceived as benevolent. Conversely, an AI that aggressively pushes sponsored products feels self-serving, eroding this sense of benevolence. This is where user fears, such as concerns about job displacement, directly challenge trust—the user starts to believe the AI is not on their side.

3. Integrity

Does the AI operate on predictable and ethical principles? This is about transparency, fairness, and honesty. An AI that clearly states how it uses personal data demonstrates integrity. A system that quietly changes its terms of service or uses dark patterns to get users to agree to something violates integrity. So does an AI recruiting tool whose algorithm carries subtle yet extremely harmful social biases.

4. Predictability & Reliability

Can the user form a stable and accurate mental model of how the AI will behave? Unpredictability, even if the outcomes are occasionally good, creates anxiety. A user needs to know, roughly, what to expect. An AI that gives a radically different answer to the same question asked twice is unpredictable and, therefore, hard to trust.

The Trust Spectrum: The Goal of a Well-Calibrated Relationship

Our goal as UX professionals shouldn’t be to maximize trust at all costs. An employee who blindly trusts every email they receive is a security risk. Likewise, a user who blindly trusts every AI output can be led into dangerous situations, such as the legal briefs referenced at the beginning of this article. The goal is well-calibrated trust.

Think of it as a spectrum where the upper-mid level is the ideal state for a truly trustworthy product to achieve:

  • Active Distrust
    The user believes the AI is incompetent or malicious. They will avoid it or actively work against it.
  • Suspicion & Scrutiny
    The user interacts cautiously, constantly verifying the AI’s outputs. This is a common and often healthy state for users of new AI.
  • Calibrated Trust (The Ideal State)
    This is the sweet spot. The user has an accurate understanding of the AI’s capabilities—its strengths and, crucially, its weaknesses. They know when to rely on it and when to be skeptical.
  • Over-trust & Automation Bias
    The user unquestioningly accepts the AI’s outputs. This is where users follow flawed AI navigation into a field or accept a fictional legal brief as fact.

Our job is to design experiences that guide users away from the dangerous poles of Active Distrust and Over-trust and toward that healthy, realistic middle ground of Calibrated Trust.

The Researcher’s Toolkit: How to Measure Trust In AI

Trust feels abstract, but it leaves measurable fingerprints. Academics in the social sciences have done much to define both what trust looks like and how it might be measured. As researchers, we can capture these signals through a mix of qualitative, quantitative, and behavioral methods.

Qualitative Probes: Listening For The Language Of Trust

During interviews and usability tests, go beyond “Was that easy to use?” and listen for the underlying psychology. Here are some questions you can start using tomorrow:

  • To measure Ability:
    “Tell me about a time this tool’s performance surprised you, either positively or negatively.”
  • To measure Benevolence:
    “Do you feel this system is on your side? What gives you that impression?”
  • To measure Integrity:
    “If this AI made a mistake, how would you expect it to handle it? What would be a fair response?”
  • To measure Predictability:
    “Before you clicked that button, what did you expect the AI to do? How closely did it match your expectation?”
Investigating Existential Fears (The Job Displacement Scenario)

One of the most potent challenges to an AI’s Benevolence is the fear of job displacement. When a participant expresses this, it is a critical research finding. It requires a specific, ethical probing technique.

Imagine a participant says, “Wow, it does that part of my job pretty well. I guess I should be worried.”

An untrained researcher might get defensive or dismiss the comment. An ethical, trained researcher validates and explores:

“Thank you for sharing that; it’s a vital perspective, and it’s exactly the kind of feedback we need to hear. Can you tell me more about what aspects of this tool make you feel that way? In an ideal world, how would a tool like this work with you to make your job better, not to replace it?”

This approach respects the participant, validates their concern, and reframes the feedback into an actionable insight about designing a collaborative, augmenting tool rather than a replacement. Similarly, your findings should reflect the concern users expressed about replacement. We shouldn’t pretend this fear doesn’t exist, nor should we pretend that every AI feature is being implemented with pure intention. Users know better than that, and we should be prepared to argue on their behalf for how the technology might best co-exist within their roles.

Quantitative Measures: Putting A Number On Confidence

You can quantify trust without needing a data science degree. After a user completes a task with an AI, supplement your standard usability questions with a few simple Likert-scale items:

  • “The AI’s suggestion was reliable.” (1-7, Strongly Disagree to Strongly Agree)
  • “I am confident in the AI’s output.” (1-7)
  • “I understood why the AI made that recommendation.” (1-7)
  • “The AI responded in a way that I expected.” (1-7)
  • “The AI provided consistent responses over time.” (1-7)

Over time, these metrics can track how trust is changing as your product evolves.

Note: If you want to go beyond these simple questions that I’ve made up, there are numerous scales (measurements) of trust in technology that exist in academic literature. It might be an interesting endeavor to measure some relevant psychographic and demographic characteristics of your users and see how that correlates with trust in AI/your product. Table 1 at the end of the article contains four examples of current scales you might consider using to measure trust. You can decide which is best for your application, or you might pull some of the items from any of the scales if you aren’t looking to publish your findings in an academic journal, yet want to use items that have been subjected to some level of empirical scrutiny.

Behavioral Metrics: Observing What Users Do, Not Just What They Say

People’s true feelings are often revealed in their actions. You can use behaviors that reflect the specific context of use for your product. Here are a few general metrics that might apply to most AI tools that give insight into users’ behavior and the trust they place in your tool.

  • Correction Rate
    How often do users manually edit, undo, or ignore the AI’s output? A high correction rate is a powerful signal of low trust in its Ability.
  • Verification Behavior
    Do users switch to Google or open another application to double-check the AI’s work? This indicates they don’t trust it as a standalone source of truth. Early on, though, this behavior can also be a positive sign that users are calibrating their trust in the system.
  • Disengagement
    Do users turn the AI feature off? Do they stop using it entirely after one bad experience? This is the ultimate behavioral vote of no confidence.
Designing For Trust: From Principles to Pixels

Once you’ve researched and measured trust, you can begin to design for it. This means translating psychological principles into tangible interface elements and user flows.

Designing for Competence and Predictability
  • Set Clear Expectations
    Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle. A simple “I’m still learning about [topic X], so please double-check my answers” can work wonders.
  • Show Confidence Levels
    Instead of just giving an answer, have the AI signal its own uncertainty. A weather app that says “70% chance of rain” is more trustworthy than one that just says “It will rain” and is wrong. An AI could say, “I’m 85% confident in this summary,” or highlight sentences it’s less sure about.
The Role of Explainability (XAI) and Transparency

Explainability isn’t about showing users the code. It’s about providing a useful, human-understandable rationale for a decision.

Instead of:
“Here is your recommendation.”

Try:
“Because you frequently read articles about UX research methods, I’m recommending this new piece on measuring trust in AI.”

This addition transforms AI from an opaque oracle to a transparent logical partner.

Many of the popular AI tools (e.g., ChatGPT and Gemini) show the thinking that went into the response they provide to a user. Figure 3 shows the steps Gemini went through to provide me with a non-response when I asked it to help me generate the masterpiece displayed above in Figure 2. While this might be more information than most users care to see, it provides a useful resource for a user to audit how the response came to be, and it has provided me with instructions on how I might proceed to address my task.

Figure 4 shows an example of a scorecard OpenAI makes available as an attempt to increase users’ trust. These scorecards are available for each ChatGPT model and go into the specifics of how the models perform as it relates to key areas such as hallucinations, health-based conversations, and much more. In reading the scorecards closely, you will see that no AI model is perfect in any area. The user must remain in a trust but verify mode to make the relationship between human reality and AI work in a way that avoids potential catastrophe. There should never be blind trust in an LLM.

Designing For Trust Repair (Graceful Error Handling) And Not Knowing an Answer

Your AI will make mistakes.

Trust is not determined by the absence of errors, but by how those errors are handled.
  • Acknowledge Errors Humbly.
    When the AI is wrong, it should be able to state that clearly. “My apologies, I misunderstood that request. Could you please rephrase it?” is far better than silence or a nonsensical answer.
  • Provide an Easy Path to Correction.
    Make feedback mechanisms (like thumbs up/down or a correction box) obvious. More importantly, show that the feedback is being used. A “Thank you, I’m learning from your correction” can help rebuild trust after a failure. As long as this is true.

Likewise, your AI can’t know everything. You should acknowledge this to your users.

UX practitioners should work with the product team to ensure that honesty about limitations is a core product principle.

This can include the following:

  • Establish User-Centric Metrics: Instead of only measuring engagement or task completion, UXers can work with product managers to define and track metrics like:
    • Hallucination Rate: The frequency with which the AI provides verifiably false information.
    • Successful Fallback Rate: How often the AI correctly identifies its inability to answer and provides a helpful, honest alternative.
  • Prioritize the “I Don’t Know” Experience: UXers should frame the “I don’t know” response not as an error state, but as a critical feature. They must lobby for the engineering and content resources needed to design a high-quality, helpful fallback experience.
UX Writing And Trust

All of these considerations highlight the critical role of UX writing in the development of trustworthy AI. UX writers are the architects of the AI’s voice and tone, ensuring that its communication is clear, honest, and empathetic. They translate complex technical processes into user-friendly explanations, craft helpful error messages, and design conversational flows that build confidence and rapport. Without thoughtful UX writing, even the most technologically advanced AI can feel opaque and untrustworthy.

The words and phrases an AI uses are its primary interface with users. UX writers are uniquely positioned to shape this interaction, ensuring that every tooltip, prompt, and response contributes to a positive and trust-building experience. Their expertise in human-centered language and design is indispensable for creating AI systems that not only perform well but also earn and maintain the trust of their users.

A few key areas for UX writers to focus on when writing for AI include:

  • Prioritize Transparency
    Clearly communicate the AI’s capabilities and limitations, especially when it’s still learning or if its responses are generated rather than factual. Use phrases that indicate the AI’s nature, such as “As an AI, I can...” or “This is a generated response.”
  • Design for Explainability
    When the AI provides a recommendation, decision, or complex output, strive to explain the reasoning behind it in an understandable way. This builds trust by showing the user how the AI arrived at its conclusion.
  • Emphasize User Control
    Empower users by providing clear ways to provide feedback, correct errors, or opt out of certain AI features. This reinforces the idea that the user is in control and the AI is a tool to assist them.
The Ethical Tightrope: The Researcher’s Responsibility

As the people responsible for understanding and advocating for users, we walk an ethical tightrope. Our work comes with profound responsibilities.

The Danger Of “Trustwashing”

We must draw a hard line between designing for calibrated trust and designing to manipulate users into trusting a flawed, biased, or harmful system. For example, if an AI system designed for loan approvals consistently discriminates against certain demographics but presents a user interface that implies fairness and transparency, this would be an instance of trustwashing.

Another example of trustwashing would be if an AI medical diagnostic tool occasionally misdiagnoses conditions, but the user interface makes it seem infallible. To avoid trustwashing, the system should clearly communicate the potential for error and the need for human oversight.

Our goal must be to create genuinely trustworthy systems, not just the perception of trust. Using these principles to lull users into a false sense of security is a betrayal of our professional ethics.

To avoid and prevent trustwashing, researchers and UX teams should:

  • Prioritize genuine transparency.
    Clearly communicate the limitations, biases, and uncertainties of AI systems. Don’t overstate capabilities or obscure potential risks.
  • Conduct rigorous, independent evaluations.
    Go beyond internal testing and seek external validation of system performance, fairness, and robustness.
  • Engage with diverse stakeholders.
    Involve users, ethics experts, and impacted communities in the design, development, and evaluation processes to identify potential harms and build genuine trust.
  • Be accountable for outcomes.
    Take responsibility for the societal impact of AI systems, even if unintended. Establish clear and accessible mechanisms for redress and continuous improvement when harm occurs, ensuring that individuals and communities affected by AI decisions have avenues for recourse and compensation.
  • Educate the public.
    Help users understand how AI works, its limitations, and what to look for when evaluating AI products.
  • Advocate for ethical guidelines and regulations.
    Support the development and implementation of industry standards and policies that promote responsible AI development and prevent deceptive practices.
  • Be wary of marketing hype.
    Critically assess claims made about AI systems, especially those that emphasize “trustworthiness” without clear evidence or detailed explanations.
  • Publish negative findings.
    Don’t shy away from reporting challenges, failures, or ethical dilemmas encountered during research. Transparency about limitations is crucial for building long-term trust.
  • Focus on user empowerment.
    Design systems that give users control, agency, and understanding rather than just passively accepting AI outputs.
The Duty To Advocate

When our research uncovers deep-seated distrust or potential harm — like the fear of job displacement — our job has only just begun. We have an ethical duty to advocate for that user. In my experience directing research teams, I’ve seen that the hardest part of our job is often carrying these uncomfortable truths into rooms where decisions are made. We must champion these findings and advocate for design and strategy shifts that prioritize user well-being, even when it challenges the product roadmap.

I personally try to approach presenting this information as an opportunity for growth and improvement, rather than a negative challenge.

For example, instead of stating “Users don’t trust our AI because they fear job displacement,” I might frame it as “Addressing user concerns about job displacement presents a significant opportunity to build deeper trust and long-term loyalty by demonstrating our commitment to responsible AI development and exploring features that enhance human capabilities rather than replace them.” This reframing can shift the conversation from a defensive posture to a proactive, problem-solving mindset, encouraging collaboration and innovative solutions that ultimately benefit both the user and the business.

It’s no secret that one of the more appealing areas for businesses to use AI is in workforce reduction. In reality, there will be many cases where businesses look to cut 10–20% of a particular job family due to the perceived efficiency gains of AI. However, giving users the opportunity to shape the product may steer it in a direction that makes them feel safer than if they do not provide feedback. We should not attempt to convince users they are wrong if they are distrustful of AI. We should appreciate that they are willing to provide feedback, creating an experience that is informed by the human experts who have long been doing the task being automated.

Conclusion: Building Our Digital Future On A Foundation Of Trust

The rise of AI is not the first major technological shift our field has faced. However, it presents one of the most significant psychological challenges of our current time. Building products that are not just usable but also responsible, humane, and trustworthy is our obligation as UX professionals.

Trust is not a soft metric. It is the fundamental currency of any successful human-technology relationship. By understanding its psychological roots, measuring it with rigor, and designing for it with intent and integrity, we can move from creating “intelligent” products to building a future where users can place their confidence in the tools they use every day. A trust that is earned and deserved.

Table 1: Published Academic Scales Measuring Trust In Automated Systems

  • Trust in Automation Scale
    Focus: 12-item questionnaire to assess trust between people and automated systems.
    Key dimensions of trust: Measures a general level of trust, including reliability, predictability, and confidence.
    Citation: Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71.
  • Trust of Automated Systems Test (TOAST)
    Focus: 9-item questionnaire used to measure user trust in a variety of automated systems, designed for quick administration.
    Key dimensions of trust: Divided into two main subscales: Understanding (the user’s comprehension of the system) and Performance (belief in the system’s effectiveness).
    Citation: Wojton, H. M., Porter, D., Lane, S. T., Bieber, C., & Madhavan, P. (2020). Initial validation of the Trust of Automated Systems Test (TOAST). The Journal of Social Psychology, 160(6), 735–750.
  • Trust in Automation Questionnaire
    Focus: A 19-item questionnaire capable of predicting user reliance on automated systems. A 2-item subscale is available for quick assessments; the full tool is recommended for a more thorough analysis.
    Key dimensions of trust: Measures six factors: Reliability, Understandability, Propensity to trust, Intentions of developers, Familiarity, and Trust in automation.
    Citation: Körber, M. (2018). Theoretical considerations and development of a questionnaire to measure trust in automation. In Proceedings of the 20th Triennial Congress of the IEA. Springer.
  • Human Computer Trust Scale
    Focus: 12-item questionnaire created to provide an empirically sound tool for assessing user trust in technology.
    Key dimensions of trust: Divided into two key factors: Benevolence and Competence (the positive attributes of the technology) and Perceived Risk (the user’s subjective assessment of the potential for negative consequences when using a technical artifact).
    Citation: Gulati, S., Sousa, S., & Lamas, D. (2019). Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology.

Appendix A: Trust-Building Tactics Checklist

To design for calibrated trust, consider implementing the following tactics, organized by the four pillars of trust:

1. Ability (Competence) & Predictability
  • Set Clear Expectations: Use onboarding, tooltips, and empty states to honestly communicate the AI’s strengths and weaknesses.
  • Show Confidence Levels: Display the AI’s uncertainty (e.g., “70% chance,” “85% confident”) or highlight less certain parts of its output.
  • Provide Explainability (XAI): Offer useful, human-understandable rationales for the AI’s decisions or recommendations (e.g., “Because you frequently read X, I’m recommending Y”).
  • Design for Graceful Error Handling:
    • ✅ Acknowledge errors humbly (e.g., “My apologies, I misunderstood that request.”).
    • ✅ Provide easy paths to correction (e.g., prominent feedback mechanisms like thumbs up/down).
    • ✅ Show that feedback is being used (e.g., “Thank you, I’m learning from your correction”).
  • Design for “I Don’t Know” Responses:
    • ✅ Acknowledge limitations honestly.
    • ✅ Prioritize a high-quality, helpful fallback experience when the AI cannot answer.
  • Prioritize Transparency: Clearly communicate the AI’s capabilities and limitations, especially if responses are generated.
2. Benevolence
  • Address Existential Fears: When users express concerns (e.g., job displacement), validate their concerns and reframe the feedback into actionable insights about collaborative tools.
  • Prioritize User Well-being: Advocate for design and strategy shifts that prioritize user well-being, even if it challenges the product roadmap.
  • Emphasize User Control: Provide clear ways for users to give feedback, correct errors, or opt out of AI features.
3. Integrity
  • Adhere to Ethical Principles: Ensure the AI operates on predictable, ethical principles, demonstrating fairness and honesty.
  • Prioritize Genuine Transparency: Clearly communicate the limitations, biases, and uncertainties of AI systems; avoid overstating capabilities or obscuring risks.
  • Conduct Rigorous, Independent Evaluations: Seek external validation of system performance, fairness, and robustness to mitigate bias.
  • Engage Diverse Stakeholders: Involve users, ethics experts, and impacted communities in the design and evaluation processes.
  • Be Accountable for Outcomes: Establish clear mechanisms for redress and continuous improvement for societal impacts, even if unintended.
  • Educate the Public: Help users understand how AI works, its limitations, and how to evaluate AI products.
  • Advocate for Ethical Guidelines: Support the development and implementation of industry standards and policies that promote responsible AI.
  • Be Wary of Marketing Hype: Critically assess claims about AI “trustworthiness” and demand verifiable data.
  • Publish Negative Findings: Be transparent about challenges, failures, or ethical dilemmas encountered during research.
4. Predictability & Reliability
  • Set Clear Expectations: Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle.
  • Show Confidence Levels: Instead of just giving an answer, have the AI signal its own uncertainty.
  • Provide Explainability (XAI) and Transparency: Offer a useful, human-understandable rationale for AI decisions.
  • Design for Graceful Error Handling: Acknowledge errors humbly and provide easy paths to correction.
  • Prioritize the “I Don’t Know” Experience: Frame “I don’t know” as a feature and design a high-quality fallback experience.
  • Prioritize Transparency (UX Writing): Clearly communicate the AI’s capabilities and limitations, especially when it’s still learning or if responses are generated.
  • Design for Explainability (UX Writing): Explain the reasoning behind AI recommendations, decisions, or complex outputs.
Kategorier: Amerikanska

How To Minimize The Environmental Impact Of Your Website

Smashingmagazine - tors, 09/18/2025 - 12:00

Climate change is the single biggest health threat to humanity, accelerated by human activities such as the burning of fossil fuels, which generate greenhouse gases that trap the sun’s heat.

The average temperature of the earth’s surface is now 1.2°C warmer than it was in the late 1800s, and that warming is projected to more than double by the end of the century.

The consequences of climate change include intense droughts, water shortages, severe fires, melting polar ice, catastrophic storms, and declining biodiversity.

The Internet Is A Significant Part Of The Problem

Shockingly, the internet is responsible for higher global greenhouse emissions than the aviation industry, and is projected to be responsible for 14% of all global greenhouse gas emissions by 2040.

If the internet were a country, it would be the 4th largest polluter in the world and represents the largest coal-powered machine on the planet.

But how can something digital like the internet produce harmful emissions?

Internet emissions come from powering the infrastructure that drives the internet, such as the vast data centres and data transmission networks that consume huge amounts of electricity.

Internet emissions also come from the global manufacturing, distribution, and usage of the estimated 30.5 billion devices (phones, laptops, etc.) that we use to access the internet.

Unsurprisingly, internet related emissions are increasing, given that 60% of the world’s population spend, on average, 40% of their waking hours online.

We Must Urgently Reduce The Environmental Impact Of The Internet

As responsible digital professionals, we must act quickly to minimise the environmental impact of our work.

It is encouraging to see the UK government promote action by adding “Minimise environmental impact” to their best practice design principles, but there is still too much talking and not enough corrective action taking place within our industry.

The reality of many tightly constrained, fast-paced, and commercially driven web projects is that minimising environmental impact is far from the agenda.

So how can we make the environment more of a priority and talk about it in ways that stakeholders will listen to?

A eureka moment on a recent web optimisation project gave me an idea.

My Eureka Moment

I led a project to optimise the mobile performance of www.talktofrank.com, a government drug advice website that aims to keep everyone safe from harm.

Mobile performance is critically important for the success of this service to ensure that users with older mobile devices and those using slower network connections can still access the information they need.

Our work to minimise page weights focused on purely technical changes, made by our developer following recommendations from tools such as Google Lighthouse, which reduced the size of the webpages in a key user journey by up to 80%. This resulted in pages downloading up to 30% faster and the carbon footprint of the journey being reduced by 80%.

We hadn’t set out to reduce the carbon footprint, but seeing these results led to my eureka moment.

I realised that by minimising page weights, you improve performance (which is a win for users and service owners) and also consume less energy (due to needing to transfer and store less data), creating additional benefits for the planet — so everyone wins.

This felt like a breakthrough because business, user, and environmental requirements are often at odds with one another. By focussing on minimising websites to be as simple, lightweight and easy to use as possible, you get benefits that extend beyond the triple bottom line of people, planet and profit to include performance and purpose.

So why is ‘minimising’ such a great digital sustainability strategy?

  • Profit
    Website providers win because their website becomes more efficient and more likely to meet its intended outcomes, and a lighter site should also lead to lower hosting bills.
  • People
    People win because they get to use a website that downloads faster and is quick and easy to use, having been intentionally designed to be as simple as possible, enabling them to complete their tasks with the minimum amount of effort and mental energy.
  • Performance
    Lightweight webpages download faster so perform better for users, particularly those on older devices and on slower network connections.
  • Planet
    The planet wins because the amount of energy (and associated emissions) that is required to deliver the website is reduced.
  • Purpose
    We know that we do our best work when we feel a sense of purpose. It is hugely gratifying as a digital professional to know that our work is doing good in the world and contributing to making things better for people and the environment.

In order to prioritise the environment, we need to be able to speak confidently in a language that will resonate with the business and ensure that any investment in time and resources yields the widest range of benefits possible.

So even if you feel that the environment is a very low priority on your projects, focusing on minimising page weights to improve performance (which is generally high on the agenda) presents the perfect trojan horse for an environmental agenda (should you need one).

Doing the right thing isn’t always easy, but we’ve done it before when managing to prioritise issues such as usability, accessibility, and inclusion on digital projects.

Many of the things that make websites easier to use, more accessible, and more effective also help to minimise their environmental impact. The things you need to do will feel familiar and achievable, so don’t worry about this being yet another new thing to learn!

So this all makes sense in theory, but what’s the master plan to use when putting it into practice?

The Masterplan

The masterplan for creating websites that have minimal environmental impact is to focus on offering the maximum value from the minimum input of energy.

It’s an adaptation of Buckminster Fuller’s ‘Dymaxion’ principle, which is one of his many progressive and groundbreaking sustainability strategies for living and surviving on a planet with finite resources.

Inputs of energy include both the electrical energy that is required to operate websites and also the mental energy that is required to use them.

You can achieve this by minimising websites to their core content, features, and functionality, ensuring that everything can be justified from the perspective of meeting a business or user need. This means that anything that isn’t adding a proportional amount of value to the amount of energy it requires to provide it should be removed.

So that’s the masterplan, but how do you put it into practice?

Decarbonise Your Highest Value User Journeys

I’ve developed a new approach called ‘Decarbonising User Journeys’ that will help you to minimise the environmental impact of your website and maximise its performance.

Note: The approach deliberately focuses on optimising key user journeys and not entire websites to keep things manageable and to make it easier to get started.

The secret here is to start small, demonstrate improvements, and then scale.

The approach consists of five simple steps:

  1. Identify your highest value user journey,
  2. Benchmark your user journey,
  3. Set targets,
  4. Decarbonise your user journey,
  5. Track and share your progress.

Here’s how it works.

Step 1: Identify Your Highest Value User Journey

Your highest value user journey might be the one that your users value the most, the one that brings you the highest revenue, or the one that is fundamental to the success of your organisation.

You could also focus on a user journey that you know is performing particularly badly and has the potential to deliver significant business and user benefits if improved.

You may have lots of important user journeys, and it’s fine to decarbonise multiple journeys in parallel if you have the resources, but I’d recommend starting with one first to keep things simple.

To bring this to life, let’s consider a hypothetical example of a premiership football club trying to decarbonise its online ticket-buying journey that receives high levels of traffic and is responsible for a significant proportion of its weekly income.

Step 2: Benchmark Your User Journey

Once you’ve selected your user journey, you need to benchmark it in terms of how well it meets user needs, the value it offers your organisation, and its carbon footprint.

It is vital that you understand the job it needs to do and how well it is doing it before you start to decarbonise it. There is no point in removing elements of the journey in an effort to reduce its carbon footprint, for example, if you compromise its ability to meet a key user or business need.

You can benchmark how well your user journey is meeting user needs by conducting user research alongside analysing existing customer feedback. Interviews with business stakeholders will help you to understand the value that your journey is providing the organisation and how well business needs are being met.

You can benchmark the carbon footprint and performance of your user journey using online tools such as Cardamon, Ecograder, Website Carbon Calculator, Google Lighthouse, and Bioscore. Make sure you have your analytics data to hand to help get the most accurate estimate of your footprint.

To use these tools, simply add the URL of each page of your journey, and they will give you a range of information such as page weight, energy rating, and carbon emissions. Google Lighthouse works slightly differently via a browser plugin and generates a really useful and detailed performance report as opposed to giving you a carbon rating.

A great way to bring your benchmarking scores to life is to visualise them in a similar way to how you would present a customer journey map or service blueprint.

This example focuses on just communicating the carbon footprint of the user journey, but you can also add more swimlanes to communicate how well the journey is performing from a user and business perspective, too, adding user pain points, quotes, and business metrics where appropriate.

I’ve found that adding the energy efficiency ratings is really effective because it’s an approach that people recognise from their household appliances. This adds a useful context to just showing the weights (such as grams or kilograms) of CO2, which are generally meaningless to people.

Within my benchmarking reports, I also add a set of benchmarking data for every page within the user journey. This gives your stakeholders a more detailed breakdown and a simple summary alongside a snapshot of the benchmarked page.

Your benchmarking activities will give you a really clear picture of where remedial work is required from an environmental, user, and business point of view.

In our football user journey example, it’s clear that the ‘News’ and ‘Tickets’ pages need some attention to reduce their carbon footprint, so they would be a sensible priority for decarbonising.

Step 3: Set Targets

Use your benchmarking results to help you set targets to aim for, such as a carbon budget, energy efficiency, maximum page weight, and minimum Google Lighthouse performance targets for each individual page, in addition to your existing UX metrics and business KPIs.

There is no right or wrong way to set targets. Choose what you think feels achievable and viable for your business, and you’ll only learn how reasonable and achievable they are when you begin to decarbonise your user journeys.

Setting targets is important because it gives you something to aim for and keeps you focused and accountable. The quantitative nature of this work is great because it gives you the ability to quickly demonstrate the positive impact of your work, making it easier to justify the time and resources you are dedicating to it.

Step 4: Decarbonise Your User Journey

Your objective now is to decarbonise your user journey by minimising page weights, improving your Lighthouse performance rating, and minimising pages so that they meet both user and business needs in the most efficient, simple, and effective way possible.

It’s up to you how you approach this, depending on the resources and skills that you have: you can focus on specific pages, or address a specific problem area, such as heavyweight images or videos, across the entire user journey.

Here’s a list of activities that will all help to reduce the carbon footprint of your user journey:

  • Work through the recommendations in the ‘diagnostics’ section of your Google Lighthouse report to help optimise page performance.
  • Switch to a green hosting provider if you are not already using one. Use the Green Web Directory to help you choose one.
  • Work through the W3C Web Sustainability Guidelines, implementing the most relevant guidelines to your specific user journey.
  • Remove anything that is not adding any user or business value.
  • Reduce the amount of information on your webpages to make them easier to read and less overwhelming for people.
  • Replace content with a lighter-weight alternative (such as swapping a video for text) if the lighter-weight alternative provides the same value.
  • Optimise assets such as photos, videos, and code to reduce file sizes.
  • Remove any barriers to accessing your website and any distractions that are getting in the way.
  • Re-use familiar components and design patterns to make your websites quicker and easier to use.
  • Write simply and clearly in plain English to help people get the most value from your website and to help them avoid making mistakes that waste time and energy to resolve.
  • Fix any usability issues you identified during your benchmarking to ensure that your website is as easy to use and useful as possible.
  • Ensure your user journey is as accessible as possible so the widest possible audience can benefit from using it, offsetting the environmental cost of providing the website.
Step 5: Track And Share Your Progress

As you decarbonise your user journeys, use the benchmarking tools from step 2 to track your progress against the targets you set in step 3 and share your progress as part of your wider sustainability reporting initiatives.

All being well at this point, you will have the numbers to demonstrate how the performance of your user journey has improved and also how you have managed to reduce its carbon footprint.

Share these results with the business as soon as you have them to help you secure the resources to continue the work and initiate similar work on other high-value user journeys.

You should also start to communicate your progress with your users.

It’s important that they are made aware of the carbon footprint of their digital activity and empowered to make informed choices about the environmental impact of the websites that they use.

Ideally, every website should communicate the emissions generated from viewing their pages to help people make these informed choices and also to encourage website providers to minimise their emissions if they are being displayed publicly.

Often, people will have no choice but to use a specific website to complete a specific task, so it is the responsibility of the website provider to ensure the environmental impact of using their website is as small as possible.

You can also help to raise awareness of the environmental impact of websites and what you are doing to minimise your own impact by publishing a digital sustainability statement, such as Unilever’s, as shown below.

A good digital sustainability statement should acknowledge the environmental impact of your website, what you have done to reduce it, and what you plan to do next to minimise it further.

As an industry, we should normalise publishing digital sustainability statements in the same way that accessibility statements have become a standard addition to website footers.

Useful Decarbonising Principles

Keep these principles in mind to help you decarbonise your user journeys:

  • More doing and less talking.
    Start decarbonising your user journeys as soon as possible to accelerate your learning and positive change.
  • Start small.
    Starting small by decarbonising an individual journey makes it easier to get started and generates results to demonstrate value faster.
  • Aim to do more with less.
    Minimise what you offer to ensure you are providing the maximum amount of value for the energy you are consuming.
  • Make your website as useful and as easy to use as possible.
    Useful websites can justify the energy they consume to provide them, ensuring they are net positive in terms of doing more good than harm.
  • Focus on progress over perfection.
    Websites are never finished or perfect, but they can always be improved; every small improvement you make will make a difference.
Start Decarbonising Your User Journeys Today

Decarbonising user journeys shouldn’t be done as a one-off, reserved for the next time that you decide to redesign or replatform your website; it should happen on a continual basis as part of your broader digital sustainability strategy.

We know that websites are never finished and that the best websites continually improve as both user and business needs change. I’d like to encourage people to adopt the same mindset when it comes to minimising the environmental impact of their websites.

Decarbonising will happen most effectively when digital professionals challenge themselves on a daily basis to ‘minimise’ the things they are working on.

This avoids building up ‘carbon debt’, the compounding technical and design debt within our websites that is always harder to remove retrospectively than to avoid in the first place.

By taking a pragmatic approach, such as optimising high-value user journeys and aligning with business metrics such as performance, we stand the best possible chance of making digital sustainability a priority.

You’ll have noticed that, other than using website carbon calculator tools, this approach doesn’t require any skills that don’t already exist within typical digital teams today. This is great because it means you’ve already got the skills that you need to do this important work.

I would encourage everyone to raise the issue of the environmental impact of the internet in their next team meeting and to try this decarbonising approach to create better outcomes for people, profit, performance, purpose, and the planet.

Good luck!

Kategorier: Amerikanska

SerpApi: A Complete API For Fetching Search Engine Data

Smashingmagazine - tis, 09/16/2025 - 19:00

This article is a sponsored by SerpApi

SerpApi leverages the power of search engine giants, like Google, DuckDuckGo, Baidu, and more, to put together the most pertinent and accurate search result data for your users from the comfort of your app or website. It’s customizable, adaptable, and offers an easy integration into any project.

What do you want to put together?

  • Search information on a brand or business for SEO purposes;
  • Input data to train AI models, such as a large language model for a customer service chatbot;
  • Top news and websites to pick from for a subscriber newsletter;
  • Flight information for your travel app, via the Google Flights API;
  • Price comparisons for the same product across different platforms;
  • Extra definitions and examples for words that can be offered alongside a language learning app.

The list goes on.

In other words, you get to leverage the most comprehensive source of data on the internet for any number of needs, from competitive SEO research and tracking news to parsing local geographic data and even completing personal background checks for employment.

Start With A Simple GET Request

The results from the search API are only a URL request away for those who want a super quick start. Just add your search details in the URL parameters. Say you need the search result for “Stone Henge” from the location “Westminster, England, United Kingdom” in language “en-GB”, and country of search origin “uk” from the domain “google.co.uk”. Here’s how simple it is to put the GET request together:

https://serpapi.com/search.json?q=Stone+Henge&location=Westminster,+England,+United+Kingdom&hl=en-GB&gl=uk&google_domain=google.co.uk&api_key=your_api_key

Then there’s the impressive list of libraries that seamlessly integrate the APIs into mainstream programming languages and frameworks such as JavaScript, Ruby, .NET, and more.

Give It A Quick Try

Want to give it a spin? Sign up and start for free, or tinker with the SerpApi’s live playground without signing up. The playground allows you to choose which search engine to target, and you can fill in the values for all the basic parameters available in the chosen API to customize your search. On clicking “Search”, you get the search result page and its extracted JSON data.

If you need to get a feel for the full API first, you can explore their easy-to-grasp web documentation before making any decision. You have the chance to work with all of the APIs to your satisfaction before committing to it, and when that time comes, SerpApi’s multiple price plans tackle anywhere between an economic few hundred searches a month and bulk queries fit for large corporations.

What Data Do You Need?

Beyond the rudimentary search scraping, SerpApi provides a range of configurations, features, and additional APIs worth considering.

Geolocation

Capture the global trends, or refine down to more localized particulars by names of locations or Google’s place identifiers. SerpApi’s optimized routing of requests ensures accurate retrieval of search result data from any location worldwide. If locations themselves are the answers to your queries — say, a cycle trail to be suggested in a fitness app — those can be extracted and presented as maps using SerpApi’s Google Maps API.

Structured JSON

Although search engines present results in a tidy user interface, pulling that data into your application could leave you with a large data dump to sift through — but not if you’re using SerpApi.

SerpApi pulls data in a well-structured JSON format, even for the popular kinds of enriched search results, such as knowledge graphs, review snippets, sports league stats, ratings, product listings, AI overview, and more.

Speedy Results

SerpApi’s baseline performance can take care of timely search data for real-time requirements. But what if you need more? SerpApi’s Ludicrous Speed option, easily enabled from the dashboard with an upgrade, provides a super-fast response time. More than twice as fast as usual, thanks to twice the server power.

There’s also Ludicrous Speed Max, which allocates four times more server resources for your data retrieval. Time-sensitive data used for real-time monitoring, such as sports scores and product price tracking, loses its value if it is not handled promptly. Ludicrous Speed Max guarantees no delays, even for a large-scale enterprise haul.

You can also use a relevant SerpApi API to hone in on your relevant category, like Google Flights API, Amazon API, Google News API, etc., to get fresh and apt results.

If you don’t need the full depth of the search API, there’s a Light version available for Google Search, Google Images, Google Videos, Google News, and DuckDuckGo Search APIs.

Search Controls & Privacy

Need the results asynchronously picked up? Want a refined output using advanced search API parameters and a JSON Restrictor? Looking for search outcomes for specific devices? Don’t want auto-corrected query results? There’s no shortage of ways to configure SerpApi to get exactly what you need.

Additionally, if you prefer not to have your search metadata stored on their servers, simply turn on “ZeroTrace” mode, available on selected plans.

The X-Ray

Save yourself the headache of trying to match what you see on a search result page with its extracted JSON data. SerpApi’s X-Ray tool shows you exactly where each piece of the JSON comes from on the page. It’s free and available in all plans.

Inclusive Support

If you don’t have the expertise or resources to tackle the legality of scraping search results, here’s what SerpApi says:

“SerpApi, LLC assumes scraping and parsing liabilities for both domestic and foreign companies unless your usage is otherwise illegal”.

You can reach out and have a conversation with them about the legal protections they offer, as well as anything else you might want to know about including SerpApi in your project, such as pricing, expected performance, on-demand options, and technical support. Just drop a message on their contact page.

In other words, the SerpApi team has your back with the support and expertise to get the most from your fetched data.

Try SerpApi Free

That’s right, you can get your hands on SerpApi today and start fetching data with absolutely no commitment, thanks to a free starter plan that gives you up to 250 free search queries. Give it a try, and then bump up to one of the reasonably priced monthly subscription plans with generous search limits.

Categories: American

This full-body MRI scan could save your life

Scobleizer - sön, 11/01/2020 - 19:54

This summer a 40-year-old friend and brilliant software engineer, Brandon Wirtz, died of colon cancer, and my dad died of pancreatic cancer. At first, neither of their doctors made the right diagnosis (Brandon was frequently getting sick, and my dad kept having more and more problems). Ever since Brandon discovered his cancer, I’ve been taking healthcare more seriously, wondering if there’s a way to diagnose such diseases earlier.

Last week a new clinic, Prenuvo, opened near San Francisco that promises to do just that with a full-body MRI (magnetic resonance imaging) scan. It’s like a high-resolution X-ray machine, except it doesn’t use radiation to make its images.

I was lucky enough to be one of the first to be scanned at its new location (the company has been doing such scans for a decade up in Vancouver, Canada) by its founder, Dr. Raj Attariwala. Here I filmed the consultation with Dr. Raj right after my scans were done.

The process? You pay $2,500. You spend an hour inside an MRI machine. For me, it was a chance to hold perfectly still for an hour, while I listened to the machine whir and buzz around me. After the hour, it takes a few minutes to process your images and then you sit down with a doctor, like I did here.

Luckily for me, I got a pretty clean bill of health, but you can see this is a powerful diagnostic tool that helps doctors find dozens of problems, from heart disease to a variety of cancers, before they become untreatable. You can watch Dr. Raj walk through my entire body, including my brain, looking for problems I’ll need to work on. He did find one: my mom had a bad back, and it looks like I’ve been blessed with the same problem, so he told me to do exercises to strengthen my core muscles to minimize it in the future.

In talking with cofounder and CEO Andrew Lacy, I learned that the company has developed its own MRI machine to do these scans. He told me that most other MRIs are used only for specific body parts, usually after a cancer or other problem has already been found. Prenuvo, he told me, has modified the software running the MRI machine to do specialized full-body scans that other machines can’t do easily. His team is also using these images to build machine learning that assists the doctors in finding various problems, and to support its plans to scale this to more people over time (the San Francisco location has two scanners that can do two people an hour; the company plans to open more locations and do more scans per hour, but that will require more AI work and training doctors to look for problems when they are early-stage, rather than late-stage as they usually see).

For me, it was amazing to see inside my own body for the first time, and the company gives its customers all of their scans in a mobile app that you can explore on your own time later. It also sends the scans to your primary-care physician, or to other doctors for second opinions.

You can learn more about this service at https://www.prenuvo.com.

Categories: American

Getting ready for VMworld with AI deep dive with 14-year employee

Scobleizer - ons, 09/23/2020 - 16:56

Businesses of the future will need to be more predictive.

That’s what VMware’s Justin Murray, a long-time employee, told me here as he explained the latest developments he’s seeing in AI and machine learning.

The folks who run VMware’s huge conference, VMworld (happening September 29-October 1), got interested in me after reading my latest book, “The Infinite Retina,” which is about how augmented reality and artificial intelligence, along with a few other technologies that I call Spatial Computing, will radically change seven industries over the next decade.

You see this predictive nature of AI in things like robots and autonomous cars, and even in services like Spotify, which uses AI to build playlists that predict what kind of music we would like to listen to. Autonomous cars predict the next actions of both people and other cars on the street. The AI inside is always trying to answer questions like: “Will a child walking on the side of the road try to cross the street in front of us?” Properly predicting what is about to happen on the road is important, and I ask Murray whether that same predictive technology is what he thinks will be used elsewhere in business.

Murray says “yes,” but then goes a lot further, and predicts what some of the hot discussions will be at VMworld next week.

For instance:

1. Every major cloud provider, like Amazon Web Services, Microsoft Azure, or Google Cloud, is buying tons of NVidia’s powerful GPUs for their datacenters to support the new, predictive AI services that businesses are starting to build.

2. The AI architecture and tooling stack that runs on these is seeing sizeable changes, and NVidia and VMware will make some announcements there next week.

3. Powerful new AI supercomputers are now being built because NVidia cards are being “connected together” in a new way that makes new workloads possible.

Why do I care, especially since I’m usually interested in new startups or consumer electronics gadgets?

Well, let’s walk through my life. Recently I got a June Oven. You put a piece of toast in it, or a piece of salmon, and it uses a small camera and an NVidia card inside, along with machine-learning-based software, to automatically recognize what food I am trying to cook or bake. It’s magic. Plus, I never burn my toast anymore the way I used to when I didn’t get the settings right.

Or look at the new DJI Osmo 4 I used to shoot the intro to this video. It has three little motors, and the instructions for how to move those motors are generated, in part, by machine learning that is constantly evaluating how best to steady my iPhone.

Finally, look at my Tesla. Murray told me that there are more than a dozen AI-based systems running on it, and it drove most of the way to VMware’s headquarters in Palo Alto, CA, which makes my drives more relaxing (particularly in traffic, where my car does all the stop-and-go duties) and safer.

AI has already radically changed my life, and most people in the industry say it is just getting going. One reason VMware is compensating me to do this series of posts is that about a decade ago I was the first to see Siri, the first AI-based consumer application to come to market. My posts back then kicked off the AI age, but a decade later AI has fallen sharply in price and become simpler to do, so it’s being used in a lot more new workloads.

You might not realize just who VMware is, but you probably use one of its services every day; or, rather, the companies you deal with every day use VMware to run their businesses. When I worked at Rackspace, a major cloud computing provider, for instance, we used VMware all over the place in our datacenters. “VM” stands for “virtual machine,” and VMware is the company that popularized that term for technology that can split a physical computer into tons of “virtual” computers (or join it together with millions of other machines to build a supercomputer). Today that technology is used to do a bunch of things, from letting you manage your laptop better and run it more safely, to managing huge businesses, to soon managing new Spatial Computing infrastructure and devices (which I wrote about in my latest book).

All of this will be discussed at VMworld, a huge free virtual event with more than 100,000 attendees and hundreds of sessions, covering not just what is happening in AI but a whole range of technologies that businesses use every day, from security to data-center management. If you like this conversation, which is with just one of the thousands of VMware employees and customers you will meet at VMworld, register for your free attendee badge here.

I don’t even need an AI to predict that you’ll find at least a few of the 900 sessions on offer useful for your business. See you on the 29th!

Categories: American
