Recent content in Articles on Smashing Magazine — For Web Designers And Developers

Smashing Animations Part 7: Recreating Toon Text With CSS And SVG

Wed, 12/17/2025 - 11:00

After finishing a project that required me to learn everything I could about CSS and SVG animations, I started writing this series about Smashing Animations and “How Classic Cartoons Inspire Modern CSS.” To round off this year, I want to show you how to use modern CSS to create that element that makes Toon Titles so impactful: their typography.

Title Artwork Design

In the silent era of the 1920s and early ’30s, the typography of a film’s title card created a mood, set the scene, and reminded an audience of the type of film they’d paid to see.

Cartoon title cards were also branding, mood, and scene-setting, all rolled into one. In the early years, when major studio budgets were bigger, these title cards were often illustrative and painterly.

But when television boomed during the 1950s, budgets dropped, and cards designed by artists like Lawrence “Art” Goble adopted a new visual language, becoming more graphic, stylised, and less intricate.

Note: Lawrence “Art” Goble is one of the often overlooked heroes of mid-century American animation. He primarily worked for Hanna-Barbera during its most influential years of the 1950s and 1960s.

Goble wasn’t a character animator. His role was to create atmosphere, so he designed environments for The Flintstones, Huckleberry Hound, Quick Draw McGraw, and Yogi Bear, as well as the opening title cards that set the tone. His title cards, featuring paintings with a logo overlaid, helped define the iconic look of Hanna-Barbera.

Goble’s artwork for characters such as Quick Draw McGraw and Yogi Bear was effective on smaller TV screens. Rather than reproducing a still from the cartoon, he focused on presenting a single, strong idea — often in silhouette — that captured its essence. In “The Buzzin’ Bear,” Yogi buzzes by in a helicopter. He bounces away, pic-a-nic basket in hand, in “Bear on a Picnic,” and for his “Prize Fight Fright,” Yogi boxes the title text.

With little or no motion to rely on, Goble’s single frames had to create a mood, set the scene, and describe a story. They did this using flat colours, graphic shapes, and typography that was frequently integrated into the artwork.

For those of us who design for the web, toon titles can teach us plenty about how to convey a brand’s personality, make a first impression, and set expectations for someone’s experience of a product or website. We can learn from these artists’ techniques to create effective banners, landing-page headers, and even good ol’ fashioned splash screens.

Toon Title Typography

Cartoon title cards show how merging type with imagery delivers the punch a header or hero needs. With a handful of text-shadow, text-stroke, and transform tricks, modern CSS lets you tap into that same energy.

The Toon Text Title Generator

Partway through writing this article, I realised it would be useful to have a tool for generating text styled like the cartoon titles I love so much. So I made one.

My Toon Text Title Generator lets you experiment with colours, strokes, and multiple text shadows. You can adjust paint order, apply letter spacing, preview your text in a selection of sample fonts, and then copy the generated CSS straight to your clipboard to use in a project.

Toon Title CSS

You can simply copy-paste the CSS that the Toon Text Title Generator provides you. But let’s look closer at what it does.

Text shadow

Look at the type in this title from Augie Doggie’s episode “Yuk-Yuk Duck,” with its pale yellow letters and dark, hard, offset shadow that lifts it off the background and creates the illusion of depth.

You probably already know that text-shadow accepts four values: (1) horizontal and (2) vertical offsets, (3) blur, and (4) a colour which can be solid or semi-transparent. Those offset values can be positive or negative, so I can replicate “Yuk-Yuk Duck” using a hard shadow pulled down and to the right:

color: #f7f76d;
text-shadow: 5px 5px 0 #1e1904;

On the other hand, this “Pint Giant” title has a different feel with its negative semi-soft shadow:

color: #c2a872;
text-shadow:
  -7px 5px 0 #b100e,
  0 -5px 10px #546c6f;

To add extra depth and create more interesting effects, I can layer multiple shadows. For “Let’s Duck Out,” I combine four shadows: the first a solid shadow with a negative horizontal offset to lift the text off the background, followed by progressively softer shadows to create a blur around it:

color: #6F4D80;
text-shadow:
  -5px 5px 0 #260e1e, /* Shadow 1 */
  0 0 15px #e9ce96,   /* Shadow 2 */
  0 0 30px #e9ce96,   /* Shadow 3 */
  0 0 30px #e9ce96;   /* Shadow 4 */

These shadows show that using text-shadow isn’t just about creating lighting effects, as they can also be decorative and add personality.

Text Stroke

Many cartoon title cards feature letters with a bold outline that makes them stand out from the background. I can recreate this effect using text-stroke. For a long time, this property was only available with a -webkit- prefix, but even that prefixed version is now supported across modern browsers.

text-stroke is a shorthand for two properties. The first, text-stroke-width, sets the thickness of the contour drawn around each letter, while the second, text-stroke-color, controls its colour. For “Whatever Goes Pup,” I added a 4px blue stroke to the pale yellow text:

color: #eff0cd;
-webkit-text-stroke: 4px #7890b5;
text-stroke: 4px #7890b5;

Strokes can be especially useful when they’re combined with shadows, so for “Growing, Growing, Gone,” I added a thin 3px stroke to a barely blurred 1px shadow to create this three-dimensional text effect:

color: #fbb999;
text-shadow: 3px 5px 1px #5160b1;
-webkit-text-stroke: 3px #984336;
text-stroke: 3px #984336;

Paint Order

Using text-stroke doesn’t always produce the expected result, especially with thinner letters and thicker strokes, because by default the browser draws a stroke over the fill. Sadly, CSS still does not permit me to adjust stroke placement as I often do in Sketch. However, the paint-order property has values that allow me to place the stroke behind, rather than in front of, the fill.

paint-order: stroke paints the stroke first, then the fill, whereas paint-order: fill does the opposite:

color: #fbb999;
paint-order: stroke;
text-shadow: 3px 5px 1px #5160b1;
text-stroke-color: #984336;
text-stroke-width: 3px;

An effective stroke keeps letters readable, adds weight, and — when combined with shadows and paint order — gives flat text real presence.

Backgrounds Inside Text

Many cartoon title cards go beyond flat colour by adding texture, gradients, or illustrated detail to the lettering. Sometimes that’s a texture, other times it might be a gradient with a subtle tonal shift. On the web, I can recreate this effect by using a background image or gradient behind the text, and then clipping it to the shape of the letters. This relies on two properties working together: background-clip: text and text-fill-color: transparent.

First, I apply a background behind the text. This can be a bitmap or vector image or a CSS gradient. For this example from the Quick Draw McGraw episode “Baba Bait,” the title text includes a subtle top–bottom gradient from dark to light:

background: linear-gradient(0deg, #667b6a, #1d271a);

Next, I clip that background to the glyphs and make the text transparent so the background shows through:

-webkit-background-clip: text;
-webkit-text-fill-color: transparent;

With just those two lines, the background is no longer painted behind the text; instead, it’s painted within it. This technique works especially well when combined with strokes and shadows. A clipped gradient provides the lettering with colour and texture, a stroke keeps its edges sharp, and a shadow elevates it from the background. Together, they recreate the layered look of hand-painted title cards using nothing more than a little CSS. As always, test clipped text carefully, as browser quirks can sometimes affect shadows and rendering.
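Putting the techniques from this section together, a sketch of a complete toon-style heading might look something like this (the selector and colour values are my own placeholders, not taken from a specific title card):

```css
h1.toon {
  /* Gradient clipped inside the letters */
  background: linear-gradient(0deg, #667b6a, #1d271a);
  -webkit-background-clip: text;
  background-clip: text;
  -webkit-text-fill-color: transparent;

  /* Stroke painted behind the fill so thin letters stay readable */
  -webkit-text-stroke: 3px #1d271a;
  paint-order: stroke;

  /* Hard offset shadow to lift the lettering off the background */
  text-shadow: 5px 5px 0 rgb(0 0 0 / 0.4);
}
```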

Splitting Text Into Individual Characters

Sometimes I don’t want to style a whole word or heading. I want to style individual letters — to nudge a character into place, give one glyph extra weight, or animate a few letters independently.

In plain HTML and CSS, there’s only one reliable way to do that: wrap each character in its own span element. I could do that manually, but that would be fragile, hard to maintain, and would quickly fall apart when copy changes. Instead, when I need per-letter control, I use a text-splitting library like splt.js (although other solutions are available). This takes a text node and automatically wraps words or characters, giving me extra hooks to animate and style without messing up my markup.

It’s an approach that keeps my HTML readable and semantic, while giving me the fine-grained control I need to recreate the uneven, characterful typography you see in classic cartoon title cards. However, this approach comes with accessibility caveats, as most screen readers read text nodes in order. So this:

<h2>Hum Sweet Hum</h2>

…reads as you’d expect:

Hum Sweet Hum

But this:

<h2>
  <span>H</span>
  <span>u</span>
  <span>m</span>
  <!-- etc. -->
</h2>

…can be interpreted differently depending on the browser and screen reader. Some will concatenate the letters and read the words correctly. Others may pause between letters, which in a worst-case scenario might sound like:

“H…” “U…” “M…”

Sadly, some splitting solutions don’t always deliver an accessible result, so I’ve written my own text splitter, splinter.js, which is currently in beta.

Transforming Individual Letters

To activate my Toon Text Splitter, I add a data- attribute to the element I want to split:

<h2 data-split="toon">Hum Sweet Hum</h2>

First, my script separates each word into individual letters and wraps them in a span element with class and ARIA attributes applied:

<span class="toon-char" aria-hidden="true">H</span>
<span class="toon-char" aria-hidden="true">u</span>
<span class="toon-char" aria-hidden="true">m</span>

The script then takes the initial content of the split element and adds it as an aria-label attribute to help maintain accessibility:

<h2 data-split="toon" aria-label="Hum Sweet Hum">
  <span class="toon-char" aria-hidden="true">H</span>
  <span class="toon-char" aria-hidden="true">u</span>
  <span class="toon-char" aria-hidden="true">m</span>
</h2>
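To make the transformation concrete, here is a minimal, hypothetical sketch of what such a splitter does (the function name and details are my own illustration, not splinter.js’s actual API):

```javascript
// Hypothetical sketch: wrap each character of a heading's text in a
// span marked aria-hidden, and preserve the original text in an
// aria-label on the wrapper, as described above.
function splitToonText(text) {
  const chars = [...text]
    .map((ch) =>
      ch === " "
        ? " "
        : `<span class="toon-char" aria-hidden="true">${ch}</span>`
    )
    .join("");
  return `<h2 data-split="toon" aria-label="${text}">${chars}</h2>`;
}
```

Screen readers announce the aria-label on the heading, while each visible letter is an individually styleable .toon-char span.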

With those class attributes applied, I can then style individual characters as I choose.

For example, for “Hum Sweet Hum,” I want to replicate how its letters shift away from the baseline. After using my Toon Text Splitter, I applied four different translate values using several :nth-child selectors to create a semi-random look:

/* 4th, 8th, 12th... */
.toon-char:nth-child(4n) { translate: 0 -8px; }

/* 1st, 5th, 9th... */
.toon-char:nth-child(4n+1) { translate: 0 -4px; }

/* 2nd, 6th, 10th... */
.toon-char:nth-child(4n+2) { translate: 0 4px; }

/* 3rd, 7th, 11th... */
.toon-char:nth-child(4n+3) { translate: 0 8px; }

But translate is only one property I can use to transform my toon text.

I could also rotate those individual characters for an even more chaotic look:

/* 4th, 8th, 12th... */
.toon-char:nth-child(4n) { rotate: -4deg; }

/* 1st, 5th, 9th... */
.toon-char:nth-child(4n+1) { rotate: -8deg; }

/* 2nd, 6th, 10th... */
.toon-char:nth-child(4n+2) { rotate: 4deg; }

/* 3rd, 7th, 11th... */
.toon-char:nth-child(4n+3) { rotate: 8deg; }


And, of course, I could add animations to jiggle those characters and bring my toon text style titles to life. First, I created a keyframe animation that rotates the characters:

@keyframes jiggle {
  0%, 100% { transform: rotate(var(--base-rotate, 0deg)); }
  25% { transform: rotate(calc(var(--base-rotate, 0deg) + 3deg)); }
  50% { transform: rotate(calc(var(--base-rotate, 0deg) - 2deg)); }
  75% { transform: rotate(calc(var(--base-rotate, 0deg) + 1deg)); }
}

Before applying it to the span elements created by my Toon Text Splitter:

.toon-char {
  animation: jiggle 3s infinite ease-in-out;
  transform-origin: center bottom;
}

And finally, setting the rotation amount and a delay before each character begins to jiggle:

.toon-char:nth-child(4n) { --base-rotate: -2deg; }
.toon-char:nth-child(4n+1) { --base-rotate: -4deg; }
.toon-char:nth-child(4n+2) { --base-rotate: 2deg; }
.toon-char:nth-child(4n+3) { --base-rotate: 4deg; }

.toon-char:nth-child(4n) { animation-delay: 0.1s; }
.toon-char:nth-child(4n+1) { animation-delay: 0.3s; }
.toon-char:nth-child(4n+2) { animation-delay: 0.5s; }
.toon-char:nth-child(4n+3) { animation-delay: 0.7s; }

One Frame To Make An Impression

Cartoon title artists had one frame to make an impression, and their typography was as important as the artwork they painted. The same is true on the web.

A well-designed header or hero area needs clarity, character, and confidence — not simply a faded full-width background image.

With a few carefully chosen CSS properties — shadows, strokes, clipped backgrounds, and some restrained animation — we can recreate that same impact. I love toon text not because I’m nostalgic, but because its design is intentional. Make deliberate choices, and let a little toon text typography add punch to your designs.


Accessible UX Research, eBook Now Available For Download

Tue, 12/09/2025 - 17:00

Smashing Library expands again! We’re so happy to announce our newest book, Accessible UX Research, is now available for download in eBook formats. Michele A. Williams takes us for a deep dive into the real world of UX testing, and provides a road map for including users with different abilities and needs in every phase of testing.

But the truth is, you don’t need to be conducting UX testing or even be a UX professional to get a lot out of this book. Michele gives in-depth descriptions of the assistive technology we should all be familiar with, in addition to disability etiquette, common pitfalls when creating accessible prototypes, and so much more. You’ll refer to this book again and again in your daily work.

This is also your last chance to get your printed copy at our discounted presale price. We expect printed copies to start shipping in February 2026. We know you’ll love this book, but don’t just take our word for it — we asked a few industry experts to check out Accessible UX Research too:

“Accessible UX Research stands as a vital and necessary resource. In addressing disability at the User Experience Research layer, it helps to set an equal and equitable tone for products and features that resonates through the rest of the creation process. The book provides a solid framework for all aspects of conducting research efforts, including not only process considerations, but also importantly the mindset required to approach the work.

This is the book I wish I had when I was first getting started with my accessibility journey. It is a gift, and I feel so fortunate that Michele has chosen to share it with us all.”

Eric Bailey, Accessibility Advocate

“User research in accessibility is non-negotiable for actually meeting users’ needs, and this book is a critical piece in the puzzle of actually doing and integrating that research into accessibility work day to day.”

Devon Pershing, Author of The Accessibility Operations Guidebook

“Our decisions as developers and designers are often based on recommendations, assumptions, and biases. Usually, this doesn’t work, because checking off lists or working solely from our own perspective can never truly represent the depth of human experience. Michele’s book provides you with the strategies you need to conduct UX research with diverse groups of people, challenge your assumptions, and create truly great products.”

Manuel Matuzović, Author of the Web Accessibility Cookbook

“This book is a vital resource on inclusive research. Michele Williams expertly breaks down key concepts, guiding readers through disability models, language, and etiquette. A strong focus on real-world application equips readers to conduct impactful, inclusive research sessions. By emphasizing diverse perspectives and proactive inclusion, the book makes a compelling case for accessibility as a core principle rather than an afterthought. It is a must-read for researchers, product-makers, and advocates!”

Anna E. Cook, Accessibility and Inclusive Design Specialist

About The Book

The book isn’t a checklist for you to complete as a part of your accessibility work. It’s a practical guide to inclusive UX research, from start to finish. If you’ve ever felt unsure how to include disabled participants, or worried about “getting it wrong,” this book is for you. You’ll get clear, practical strategies to make your research more inclusive, effective, and reliable.

Inside, you’ll learn how to:

  • Plan research that includes disabled participants from the start,
  • Recruit participants with disabilities,
  • Facilitate sessions that work for a range of access needs,
  • Ask better questions and avoid unintentionally biased research methods,
  • Build trust and confidence in your team around accessibility and inclusion.

The book also challenges common assumptions about disability and urges readers to rethink what inclusion really means in UX research and beyond. Let’s move beyond compliance and start doing research that reflects the full diversity of your users. Whether you’re in industry or academia, this book gives you the tools — and the mindset — to make it happen.

High-quality hardcover, 320 pages. Written by Dr. Michele A. Williams. Cover art by Espen Brunborg. Print edition shipping starting February 2026. eBook now available for download. Download a free sample (PDF, 2.3MB) and reserve your print copy at the presale price.

“Accessible UX Research” shares successful strategies that’ll help you recruit the participants you need for the study you’re designing.

Contents
  1. Disability mindset: For inclusive research to succeed, we must first confront our mindset about disability, typically influenced by ableism.
  2. Diversity of disability: Accessibility is not solely about blind screen reader users; disability categories help us unpack and process the diversity of disabled users.
  3. Disability in the stages of UX research: Disabled participants can and should be part of every research phase — formative, prototype, and summative.
  4. Recruiting disabled participants: Recruiting disabled participants is not always easy, but that simply means we need to learn strategies on where to look.
  5. Designing your research: While our goal is to influence accessible products, our research execution must also be accessible.
  6. Facilitating an accessible study: Preparation and communication with your participants can ensure your study logistics run smoothly.
  7. Analyzing and reporting with accuracy and impact: How you communicate your findings is just as important as gathering them in the first place — so prepare to be a storyteller, educator, and advocate.
  8. Disability in the UX research field: Inclusion isn’t just for research participants, it’s important for our colleagues as well, as explained by blind UX Researcher Dr. Cynthia Bennett.
The book will challenge your disability mindset and what it means to be truly inclusive in your work.

Who This Book Is For

Whether you’re a UX professional who conducts research in industry or academia, or part of a broader engineering, product, or design function, you’ll want to read this book if…

  1. You have been tasked with improving the accessibility of your product, but need to know where to start to facilitate this successfully.
  2. You want to establish a culture of accessibility in your company, but are not sure how to make it work.
  3. You want to move beyond WCAG/EAA compliance to established accessibility practices and inclusion in research and beyond.
  4. You want to improve your overall accessibility knowledge and be viewed as an Accessibility Specialist within your organization.
About the Author

Dr. Michele A. Williams is owner of M.A.W. Consulting, LLC - Making Accessibility Work. Her 20+ years of experience include influencing top tech companies as a Senior User Experience (UX) Researcher and Accessibility Specialist and obtaining a PhD in Human-Centered Computing focused on accessibility. An international speaker, published academic author, and patented inventor, she is passionate about educating and advising on technology that does not exclude disabled users.

Technical Details

Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Trine, Heather, and Steven are three of these people. Have you checked out their books already?

The Ethical Design Handbook

A practical guide on ethical design for digital products.

Add to cart $44

Understanding Privacy

Everything you need to know to put your users first and make a better web.

Add to cart $44

Touch Design for Mobile Interfaces

Learn how touchscreen devices really work — and how people really use them.

Add to cart $44


State, Logic, And Native Power: CSS Wrapped 2025

Tue, 12/09/2025 - 11:00

If I were to divide CSS’s evolution into eras, we have moved far beyond the days when border-radius alone was enough to make us feel like we were living in the future. We are currently living in a moment where the platform is handing us tools that don’t just tweak the visual layer, but fundamentally redefine how we architect interfaces. I thought the number of features announced in 2024 couldn’t be topped. I’ve never been so happily wrong.

The Chrome team’s “CSS Wrapped 2025” is not just a list of features; it is a manifesto for a dynamic, native web. As someone who has spent a couple of years documenting these evolutions — from defining “CSS5” eras to the intricacies of modern layout utilities — I find myself looking at this year’s wrap-up with a huge sense of excitement. We are seeing a shift towards “Optimized Ergonomics” and “Next-gen interactions” that allow us to stop fighting the code and start sculpting interfaces in their natural state.

In this article, you can find a comprehensive look at the standout features from Chrome’s report, viewed through the lens of my recent experiments and hopes for the future of the platform.

The Component Revolution: Finally, A Native Customizable Select

For years, we have relied on heavy JavaScript libraries to style dropdowns, a “decades-old problem” that the platform has finally solved. As I detailed in my deep dive into the history of the customizable select (and related articles), this has been a long road involving Open UI, bikeshedding names like <selectmenu> and <selectlist>, and finally landing on a solution that re-uses the existing <select> element.

The introduction of appearance: base-select is a strong foundation. It allows us to fully customize the <select> element — including the button and the dropdown list (via ::picker(select)) — using standard CSS. Crucially, this is built with progressive enhancement in mind. By wrapping our styles in a feature query, we ensure a seamless experience across all browsers.

We can opt in to this new behavior without breaking older browsers:

select {
  /* Opt-in for the new customizable select */
  @supports (appearance: base-select) {
    &,
    &::picker(select) {
      appearance: base-select;
    }
  }
}

The fantastic addition to allow rich content inside options, such as images or flags, is a lot of fun. We can create all sorts of selects nowadays:

  • Demo: I created a Poké-adventure demo showing how the new <selectedcontent> element can clone rich content (like a Pokéball icon) from an option directly into the button.

See the Pen A customizable select with images inside of the options and the selectedcontent [forked] by utilitybend.

See the Pen A customizable select with only pseudo-elements [forked] by utilitybend.

See the Pen An actual Select Menu with optgroups [forked] by utilitybend.

This feature alone signals a massive shift in how we will build forms, reducing dependencies and technical debt.

Scroll Markers And The Death Of The JavaScript Carousel

Creating carousels has historically been a friction point between developers and clients. Clients love them, developers dread the JavaScript required to make them accessible and performant. The arrival of ::scroll-marker and ::scroll-button() pseudo-elements changes this dynamic entirely.

These features allow us to create navigation dots and scroll buttons purely with CSS, linked natively to the scroll container. As I wrote on my blog, this was love at first slide. The ability to create a fully functional, accessible slider without a single line of JavaScript is not just convenient; it is a triumph for performance. There are valid accessibility concerns around this feature, but there may well be ways for us as developers to address them. The good news is that these UI primitives are far easier to work with than custom DOM manipulation and hand-managed ARIA attributes, but I digress…

We can now group markers automatically using scroll-marker-group and style the buttons using anchor positioning to place them exactly where we want.

.carousel {
  overflow-x: auto;
  scroll-marker-group: after; /* Creates the container for dots */

  /* Create the buttons */
  &::scroll-button(inline-end),
  &::scroll-button(inline-start) {
    content: " ";
    position: absolute;
    /* Use anchor positioning to center them */
    position-anchor: --carousel;
    top: anchor(center);
  }

  /* Create the markers on the children */
  div {
    &::scroll-marker {
      content: " ";
      width: 24px;
      border-radius: 50%;
      cursor: pointer;
    }

    /* Highlight the active marker */
    &::scroll-marker:target-current {
      background: white;
    }
  }
}

See the Pen Carousel Pure HTML and CSS [forked] by utilitybend.

See the Pen Webshop slick slider remake in CSS [forked] by utilitybend.

State Queries: Sticky Thing Stuck? Snappy Thing Snapped?

For a long time, we have lacked the ability to know if a “sticky thing is stuck” or if a “snappy item is snapped” without relying on IntersectionObserver hacks. Chrome 133 introduced scroll-state queries, allowing us to query these states declaratively.

By setting container-type: scroll-state, we can now style children based on whether they are stuck, snapped, or overflowing. This is a massive quality-of-life improvement that I have been eagerly awaiting since CSS Day 2023. And the feature has evolved further still: we can now also query the direction of the scroll. Lovely!

For a simple example: we can finally apply a shadow to a header only when it is actually sticking to the top of the viewport:

.header-container {
  container-type: scroll-state;
  position: sticky;
  top: 0;

  header {
    transition: box-shadow 0.5s ease-out;

    /* The query checks the state of the container */
    @container scroll-state(stuck: top) {
      box-shadow: rgba(0, 0, 0, 0.6) 0px 12px 28px 0px;
    }
  }
}
  • Demo: A sticky header that only applies a shadow when it is actually stuck.

See the Pen Sticky headers with scroll-state query, checking if the sticky element is stuck [forked] by utilitybend.

  • Demo: A Pokémon-themed list that uses scroll-state queries combined with anchor positioning to move a frame over the currently snapped character.

See the Pen Scroll-state query to check which item is snapped with CSS, Pokemon version [forked] by utilitybend.

Optimized Ergonomics: Logic In CSS

The “Optimized Ergonomics” section of CSS Wrapped highlights features that make our workflows more intuitive. Three features stand out as transformative for how we write logic:

  1. if() Statements
    We are finally getting conditionals in CSS. The if() function acts like a ternary operator for stylesheets, allowing us to apply values based on media, support, or style queries inline. This reduces the need for verbose @media blocks for single property changes.
  2. @function custom functions
    We can finally move repeated logic into reusable custom functions, resulting in cleaner stylesheets. A real quality-of-life feature.
  3. sibling-index() and sibling-count()
    These tree-counting functions solve the issue of staggering animations or styling items based on list size. As I explored in Styling siblings with CSS has never been easier, this eliminates the need to hard-code custom properties (like --index: 1) in our HTML.
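As a hedged sketch of the first two features (both are brand new, and their exact syntax may still change as they stabilise), they might look something like this:

```css
/* if(): an inline conditional value, roughly a ternary for CSS */
.panel {
  padding: if(media(width >= 48rem): 2rem; else: 1rem);
}

/* @function: reusable logic, with the value returned via `result` */
@function --stagger(--i) {
  result: calc(var(--i) * 0.1s);
}

.card {
  animation-delay: --stagger(3);
}
```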
Example: Calculating Layouts

We can now write concise mathematical formulas. For example, staggering an animation for cards entering the screen becomes trivial:

.card-container > * {
  animation: reveal 0.6s ease-out forwards;
  /* No more manual --index variables! */
  animation-delay: calc(sibling-index() * 0.1s);
}

I even experimented with using these functions along with trigonometry to place items in a perfect circle without any JavaScript.

See the Pen Stagger cards using sibling-index() [forked] by utilitybend.

  • Demo: Placing items in a perfect circle using sibling-index, sibling-count, and the new CSS @function feature.

See the Pen The circle using sibling-index, sibling-count and functions [forked] by utilitybend.

My CSS To-Do List: Features I Can’t Wait To Try

While I have been busy sculpting selects and transitions, the “CSS Wrapped 2025” report is packed with other goodies that I haven’t had the chance to fire up in CodePen yet. These are high on my list for my next experiments:

Anchored Container Queries

I used CSS Anchor Positioning for the buttons in my carousel demo, but “CSS Wrapped” highlights an evolution of this: Anchored Container Queries. This solves a problem we’ve all had with tooltips: if the browser flips the tooltip from top to bottom because of space constraints, the “arrow” often stays pointing the wrong way. With anchored container queries (@container anchored(fallback: flip-block)), we can style the element based on which fallback position the browser actually chose.
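Based on the syntax quoted above, a hypothetical tooltip fix might look roughly like this (the selector names are my own, and the exact setup may differ as the feature stabilises):

```css
.tooltip {
  position: fixed;
  position-anchor: --trigger;
  position-area: block-start;
  position-try-fallbacks: flip-block;
}

/* When the browser actually applied the flip-block fallback,
   point the tooltip arrow the other way */
@container anchored(fallback: flip-block) {
  .tooltip-arrow {
    rotate: 180deg;
  }
}
```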

Nested View Transition Groups

View Transitions have been a revolution, but they came with a specific trade-off: they flattened the element tree, which often broke 3D transforms or overflow: clip. I always had a feeling that it was missing something, and this might just be the answer. By using view-transition-group: nearest, we can finally nest transition groups within each other.

This allows us to maintain clipping effects or 3D rotations during a transition — something that was previously impossible because the elements were hoisted up to the top level.

.card img {
  view-transition-name: photo;
  view-transition-group: nearest; /* Keep it nested! */
}

Typography and Shapes

Finally, the ergonomist in me is itching to try Text Box Trim, which promises to remove the annoying extra whitespace above and below text content (the leading) to finally achieve perfect vertical alignment. And for the creative side, corner-shape and the shape() function are opening up non-rectangular layouts, allowing for squircles and complex paths that respond to CSS variables. Needless to say, I can’t wait to build a design full of squircles!
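For example, a squircle-shaped card might need nothing more than this (a sketch; corner-shape is still experimental and the class name is my own):

```css
.card {
  border-radius: 2rem;
  corner-shape: squircle; /* rounds corners along a superellipse */
}
```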

A Hopeful Future

We are witnessing a world where CSS is becoming capable of handling logic, state, and complex interactions that previously belonged to JavaScript. Features like moveBefore (preserving DOM state for iframes/videos) and attr() (using types beyond strings for colors and grids) further cement this reality.

While some of these features are currently experimental or specific to Chrome, the momentum is undeniable. We must hope for continued support across all browsers through initiatives like Interop to ensure these capabilities become the baseline. That being said, a healthy diversity of browser engines is just as important as getting all these awesome features “Chrome first”. These new features need to be discussed, tinkered with, and tested before ever landing in browsers.

It is a fantastic moment to get into CSS. We are no longer just styling documents; we are crafting dynamic, ergonomic, and robust applications with a native toolkit that is more powerful than ever.

Let’s get going with this new era and spread the word.

This is CSS Wrapped!

Categories: American

How UX Professionals Can Lead AI Strategy

Mon, 12/08/2025 - 09:00

Your senior management is excited about AI. They’ve read the articles, attended the webinars, and seen the demos. They’re convinced that AI will transform your organization, boost productivity, and give you a competitive edge.

Meanwhile, you’re sitting in your UX role wondering what this means for your team, your workflow, and your users. You might even be worried about your job security.

The problem is that the conversation about how AI gets implemented is happening right now, and if you’re not part of it, someone else will decide how it affects your work. That someone probably doesn’t understand user experience, research practices, or the subtle ways poor implementation can damage the very outcomes management hopes to achieve.

You have a choice. You can wait for directives to come down from above, or you can take control of the conversation and lead the AI strategy for your practice.

Why UX Professionals Must Own the AI Conversation

Management sees AI as efficiency gains, cost savings, competitive advantage, and innovation all wrapped up in one buzzword-friendly package. They’re not wrong to be excited. The technology is genuinely impressive and can deliver real value.

But without UX input, AI implementations often fail users in predictable ways:

  • They automate tasks without understanding the judgment calls those tasks require.
  • They optimize for speed while destroying the quality that made your work valuable.

Your expertise positions you perfectly to guide implementation. You understand users, workflows, quality standards, and the gap between what looks impressive in a demo and what actually works in practice.

Use AI Momentum to Advance Your Priorities

Management’s enthusiasm for AI creates an opportunity to advance priorities you’ve been fighting for unsuccessfully. When management is willing to invest in AI, you can connect those long-standing needs to the AI initiative. Position user research as essential for training AI systems on real user needs. Frame usability testing as the validation method that ensures AI-generated solutions actually work.

How AI gets implemented will shape your team’s roles, your users’ experiences, and your organization’s capability to deliver quality digital products.

Your Role Isn’t Disappearing (It’s Evolving)

Yes, AI will automate some of the tasks you currently do. But someone needs to decide which tasks get automated, how they get automated, what guardrails to put in place, and how automated processes fit around real humans doing complex work.

That someone should be you.

Think about what you already do. When you conduct user research, AI might help you transcribe interviews or identify themes. But you’re the one who knows which participant hesitated before answering, which feedback contradicts what you observed in their behavior, and which insights matter most for your specific product and users.

When you design interfaces, AI might generate layout variations or suggest components from your design system. But you’re the one who understands the constraints of your technical platform, the political realities of getting designs approved, and the edge cases that will break a clever solution.

Your future value comes from the work you’re already doing:

  • Seeing the full picture.
    You understand how this feature connects to that workflow, how this user segment differs from that one, and why the technically correct solution won’t work in your organization’s reality.
  • Making judgment calls.
    You decide when to follow the design system and when to break it, when user feedback reflects a real problem versus a feature request from one vocal user, and when to push back on stakeholders versus find a compromise.
  • Connecting the dots.
    You translate between technical constraints and user needs, between business goals and design principles, between what stakeholders ask for and what will actually solve their problem.

AI will keep getting better at individual tasks. But you’re the person who decides which solution actually works for your specific context. The people who will struggle are those doing simple, repeatable work without understanding why. Your value is in understanding context, making judgment calls, and connecting solutions to real problems.

Step 1: Understand Management’s AI Motivations

Before you can lead the conversation, you need to understand what’s driving it. Management is responding to real pressures: cost reduction, competitive pressure, productivity gains, and board expectations.

Speak their language.
When you talk to management about AI, frame everything in terms of ROI, risk mitigation, and competitive advantage. “This approach will protect our quality standards” is less compelling than “This approach reduces the risk of damaging our conversion rate while we test AI capabilities.”

Separate hype from reality.
Take time to research what AI capabilities actually exist versus what’s hype. Read case studies, try tools yourself, and talk to peers about what’s actually working.

Identify real pain points.
Pinpoint problems AI might legitimately address in your organization. Maybe your team spends hours formatting research findings, or accessibility testing creates bottlenecks. These are the problems worth solving.

Step 2: Audit Your Current State and Opportunities

Map your team’s work. Where does time actually go? Look at the past quarter and categorize how your team spent their hours.

Identify high-volume, repeatable tasks versus high-judgment work.
Repeatable tasks are candidates for automation. High-judgment work is where you add irreplaceable value.

Also, identify what you’ve wanted to do but couldn’t get approved.
This is your opportunity list. Maybe you’ve wanted quarterly usability tests, but only get budget annually. Write these down separately. You’ll connect them to your AI strategy in the next step.

Spot opportunities where AI could genuinely help:

  • Research synthesis:
    AI can help organize and categorize findings.
  • Analyzing user behavior data:
    AI can process analytics and session recordings to surface patterns you might miss.
  • Rapid prototyping:
    AI can quickly generate testable prototypes, speeding up your test cycles.

Step 3: Define AI Principles for Your UX Practice

Before you start forming your strategy, establish principles that will guide every decision.

Set non-negotiables.
User privacy, accessibility, and human oversight of significant decisions. Write these down and get agreement from leadership before you pilot anything.

Define criteria for AI use.
AI is good at pattern recognition, summarization, and generating variations. AI is poor at understanding context, making ethical judgments, and knowing when rules should be broken.

Define success metrics beyond efficiency.
Yes, you want to save time. But you also need to measure quality, user satisfaction, and team capability. Build a balanced scorecard that captures what actually matters.

Create guardrails.
Maybe every AI-generated interface needs human review before it ships. These guardrails prevent the obvious disasters and give you space to learn safely.

Step 4: Build Your AI-in-UX Strategy

Now you’re ready to build the actual strategy you’ll pitch to leadership. Start small with pilot projects that have a clear scope and evaluation criteria.

Connect to business outcomes management cares about.
Don’t pitch “using AI for research synthesis.” Pitch “reducing time from research to insights by 40%, enabling faster product decisions.”

Piggyback your existing priorities on AI momentum.
Remember that opportunity list from Step 2? Now you connect those long-standing needs to your AI strategy. If you’ve wanted more frequent usability testing, explain that AI implementations need continuous validation to catch problems before they scale. AI implementations genuinely benefit from good research practices. You’re simply using management’s enthusiasm for AI as the vehicle to finally get resources for practices that should have been funded all along.

Define roles clearly.
Where do humans lead? Where does AI assist? Where won’t you automate? Management needs to understand that some work requires human judgment and should never be fully automated.

Plan for capability building.
Your team will need training and new skills. Budget time and resources for this.

Address risks honestly.
AI could generate biased recommendations, miss important context, or produce work that looks good but doesn’t actually function. For each risk, explain how you’ll detect it and what you’ll do to mitigate it.

Step 5: Pitch the Strategy to Leadership

Frame your strategy as de-risking management’s AI ambitions, not blocking them. You’re showing them how to implement AI successfully while avoiding the obvious pitfalls.

Lead with outcomes and ROI they care about.
Put the business case up front.

Bundle your wish list into the AI strategy.
When you present your strategy, include those capabilities you’ve wanted but couldn’t get approved before. Don’t present them as separate requests. Integrate them as essential components. “To validate AI-generated designs, we’ll need to increase our testing frequency from annual to quarterly” sounds much more reasonable than “Can we please do more testing?” You’re explaining what’s required for their AI investment to succeed.

Show quick wins alongside a longer-term vision.
Identify one or two pilots that can show value within 30-60 days. Then show them how those pilots build toward bigger changes over the next year.

Ask for what you need.
Be specific. You need a budget for tools, time for pilots, access to data, and support for team training.

Step 6: Implement and Demonstrate Value

Run your pilots with clear before-and-after metrics. Measure everything: time saved, quality maintained, user satisfaction, team confidence.

Document wins and learning.
Failures are useful too. If a pilot doesn’t work out, document why and what you learned.

Share progress in management’s language. Monthly updates should focus on business outcomes, not technical details. “We’ve reduced research synthesis time by 35% while maintaining quality scores” is the right level of detail.

Build internal advocates by solving real problems.
When your AI pilots make someone’s job easier, you create advocates who will support broader adoption.

Iterate based on what works in your specific context. Not every AI application will fit your organization. Pay attention to what’s actually working and double down on that.

Taking Initiative Beats Waiting

AI adoption is happening. The question isn’t whether your organization will use AI, but whether you’ll shape how it gets implemented.

Your UX expertise is exactly what’s needed to implement AI successfully. You understand users, quality, and the gap between impressive demos and useful reality.

Take one practical first step this week.
Schedule 30 minutes to map one AI opportunity in your practice. Pick one area where AI might help, think through how you’d pilot it safely, and sketch out what success would look like.

Then start the conversation with your manager. You might be surprised how receptive they are to someone stepping up to lead this.

You know how to understand user needs, test solutions, measure outcomes, and iterate based on evidence. Those skills don’t change just because AI is involved. You’re applying your existing expertise to a new tool.

Your role isn’t disappearing. It’s evolving into something more strategic, more valuable, and more secure. But only if you take the initiative to shape that evolution yourself.


Beyond The Black Box: Practical XAI For UX Practitioners

Fri, 12/05/2025 - 16:00

In my last piece, we established a foundational truth: for users to adopt and rely on AI, they must trust it. We talked about trust being a multifaceted construct, built on perceptions of an AI’s Ability, Benevolence, Integrity, and Predictability. But what happens when an AI, in its silent, algorithmic wisdom, makes a decision that leaves a user confused, frustrated, or even hurt? A mortgage application is denied, a favorite song is suddenly absent from a playlist, and a qualified resume is rejected before a human ever sees it. In these moments, ability and predictability are shattered, and benevolence feels a world away.

Our conversation now must evolve from the why of trust to the how of transparency. The field of Explainable AI (XAI), which focuses on developing methods to make AI outputs understandable to humans, has emerged to address this, but it’s often framed as a purely technical challenge for data scientists. I argue it’s a critical design challenge for products relying on AI. It’s our job as UX professionals to bridge the gap between algorithmic decision-making and human understanding.

This article provides practical, actionable guidance on how to research and design for explainability. We’ll move beyond the buzzwords and into the mockups, translating complex XAI concepts into concrete design patterns you can start using today.

De-mystifying XAI: Core Concepts For UX Practitioners

XAI is about answering the user’s question: “Why?” Why was I shown this ad? Why is this movie recommended to me? Why was my request denied? Think of it as the AI showing its work on a math problem. Without it, you just have an answer, and you’re forced to take it on faith. In showing the steps, you build comprehension and trust. You also allow for your work to be double-checked and verified by the very humans it impacts.

Feature Importance And Counterfactuals

There are a number of techniques we can use to clarify or explain what is happening with AI. While methods range from providing the entire logic of a decision tree to generating natural language summaries of an output, two of the most practical and impactful types of information UX practitioners can introduce into an experience are feature importance (Figure 1) and counterfactuals. These are often the most straightforward for users to understand and the most actionable for designers to implement.

Feature Importance

This explainability method answers, “What were the most important factors the AI considered?” It’s about identifying the top 2-3 variables that had the biggest impact on the outcome. It’s the headline, not the whole story.

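One simple, model-agnostic way to estimate feature importance is a permutation test: shuffle one feature at a time and measure how much the predictions move. Here is a minimal sketch in Python, where the churn_score rule, the feature names, and the customer data are all made-up stand-ins for a real trained model:

```python
import random

# Toy churn "model": a hand-written scoring rule standing in for a
# trained classifier. Feature names and weights are illustrative only.
def churn_score(support_calls, price_increases, tenure_years):
    return 0.15 * support_calls + 0.25 * price_increases - 0.05 * tenure_years

# A small synthetic customer set (made-up data for the sketch).
customers = [
    {"support_calls": 4, "price_increases": 2, "tenure_years": 1},
    {"support_calls": 0, "price_increases": 1, "tenure_years": 6},
    {"support_calls": 7, "price_increases": 3, "tenure_years": 2},
    {"support_calls": 1, "price_increases": 0, "tenure_years": 9},
]

def permutation_importance(feature, trials=200, seed=42):
    """How much do predictions move when one feature is shuffled?"""
    rng = random.Random(seed)
    baseline = [churn_score(**c) for c in customers]
    total_shift = 0.0
    for _ in range(trials):
        shuffled = [c[feature] for c in customers]
        rng.shuffle(shuffled)
        for i, c in enumerate(customers):
            perturbed = dict(c, **{feature: shuffled[i]})
            total_shift += abs(churn_score(**perturbed) - baseline[i])
    return total_shift / (trials * len(customers))

importances = {f: permutation_importance(f)
               for f in ("support_calls", "price_increases", "tenure_years")}
top = sorted(importances, key=importances.get, reverse=True)
print("Most important factors:", top[:2])
```

With these toy weights, shuffling support_calls disturbs predictions the most, so it surfaces as the top factor; a real pipeline would run the same idea against the production model.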
Example: Imagine an AI that predicts whether a customer will churn (cancel their service). Feature importance might reveal that “number of support calls in the last month” and “recent price increases” were the two most important factors in determining if a customer was likely to churn.

Counterfactuals

This powerful method answers, “What would I need to change to get a different outcome?” This is crucial because it gives users a sense of agency. It transforms a frustrating “no” into an actionable “not yet.”

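A counterfactual can be generated by searching for the smallest change that flips the model’s decision. The sketch below assumes a hypothetical integer scoring rule in place of a real lending model; the threshold, weights, and feature names are assumptions for illustration:

```python
# Hypothetical approval rule standing in for the bank's model.
# Debt-to-income is a whole percentage to keep the arithmetic exact.
def approved(credit_score, dti_percent):
    return credit_score - 4 * dti_percent >= 500

def counterfactual(credit_score, dti_percent):
    """Find the smallest single-feature changes that would flip a denial."""
    if approved(credit_score, dti_percent):
        return []  # already approved, nothing to explain
    suggestions = []
    # Try raising the credit score in 5-point steps.
    for bump in range(5, 301, 5):
        if approved(credit_score + bump, dti_percent):
            suggestions.append(f"raise credit score by {bump} points")
            break
    # Try lowering the debt-to-income ratio one percentage point at a time.
    for cut in range(1, 51):
        if approved(credit_score, dti_percent - cut):
            suggestions.append(f"lower debt-to-income ratio by {cut} percentage points")
            break
    return suggestions

print(counterfactual(credit_score=630, dti_percent=45))
# → ['raise credit score by 50 points',
#    'lower debt-to-income ratio by 13 percentage points']
```

In a real product, the search would run against the production model and constrain itself to changes the user can actually make.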
Example: Imagine a loan application system that uses AI. A user is denied a loan. Instead of just seeing “Application Denied,” a counterfactual explanation would also share, “If your credit score were 50 points higher, or if your debt-to-income ratio were 10% lower, your loan would have been approved.” This gives the applicant clear, actionable steps to potentially get a loan in the future.

Using Model Data To Enhance The Explanation

Although the technical specifics are often handled by data scientists, it's helpful for UX practitioners to know that tools like LIME (Local Interpretable Model-agnostic Explanations), which explains individual predictions by approximating the model locally, and SHAP (SHapley Additive exPlanations), which uses a game-theory approach to explain the output of any machine learning model, are commonly used to extract these “why” insights from complex models. These libraries essentially break down an AI’s decision to show which inputs were most influential for a given outcome.

When done properly, the data underlying an AI tool’s decision can be used to tell a powerful story. Let’s walk through feature importance and counterfactuals and show how the data science behind the decision can be utilized to enhance the user’s experience.

First, let’s look at feature importance with the assistance of local explanation (e.g., LIME) data. This approach answers, “Why did the AI make this specific recommendation for me, right now?” Instead of a general explanation of how the model works, it provides a focused reason for a single, specific instance. It’s personal and contextual.

Example: Imagine an AI-powered music recommendation system like Spotify. A local explanation would answer, “Why did the system recommend this specific song by Adele to you right now?” The explanation might be: “Because you recently listened to several other emotional ballads and songs by female vocalists.”

Finally, let’s cover adding value-based explanation data (e.g., SHapley Additive exPlanations, or SHAP) to an explanation of a decision. This is a more nuanced version of feature importance that answers, “How did each factor push the decision one way or the other?” It helps visualize what mattered, and whether its influence was positive or negative.

Example: Imagine a bank uses an AI model to decide whether to approve a loan application.

Feature Importance: The model output might show that the applicant’s credit score, income, and debt-to-income ratio were the most important factors in its decision. This answers what mattered.

Feature Importance with Value-Based Explanations (SHAP): SHAP values take feature importance further, showing not just which factors mattered but the direction and magnitude of each one’s influence.

  • For an approved loan, SHAP might show that a high credit score significantly pushed the decision towards approval (positive influence), while a slightly higher-than-average debt-to-income ratio pulled it slightly away (negative influence), but not enough to deny the loan.
  • For a denied loan, SHAP could reveal that a low income and a high number of recent credit inquiries strongly pushed the decision towards denial, even if the credit score was decent.

This helps the loan officer explain to the applicant beyond what was considered, to how each factor contributed to the final “yes” or “no” decision.
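For a simple linear scoring model, these signed contributions can be computed exactly: each feature’s Shapley value reduces to its weight multiplied by how far the applicant’s value sits from the average. The weights, averages, and feature names below are illustrative assumptions, not a real lending model:

```python
# For a linear model, a feature's Shapley value is simply
# weight * (value - average value), so the push/pull of each
# factor can be computed directly. All numbers are made up.
WEIGHTS = {"credit_score": 0.8, "income": 0.5, "recent_inquiries": -15.0}
AVERAGES = {"credit_score": 700, "income": 55, "recent_inquiries": 2}

def shap_contributions(applicant):
    return {f: WEIGHTS[f] * (applicant[f] - AVERAGES[f]) for f in WEIGHTS}

applicant = {"credit_score": 720, "income": 38, "recent_inquiries": 6}
for feature, push in sorted(shap_contributions(applicant).items(),
                            key=lambda kv: kv[1]):
    direction = "toward approval" if push > 0 else "toward denial"
    print(f"{feature}: {push:+.1f} ({direction})")
# Prints, most negative first:
# recent_inquiries: -60.0 (toward denial)
# income: -8.5 (toward denial)
# credit_score: +16.0 (toward approval)
```

This mirrors the denied-loan scenario above: a decent credit score pushes toward approval, but low income and many recent inquiries pull harder toward denial.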

It’s crucial to recognize that the ability to provide good explanations often starts much earlier in the development cycle. Data scientists and engineers play a pivotal role by intentionally structuring models and data pipelines in ways that inherently support explainability, rather than trying to bolt it on as an afterthought.

Research and design teams can foster this by initiating early conversations with data scientists and engineers about user needs for understanding, contributing to the development of explainability metrics, and collaboratively prototyping explanations to ensure they are both accurate and user-friendly.

XAI And Ethical AI: Unpacking Bias And Responsibility

Beyond building trust, XAI plays a critical role in addressing the profound ethical implications of AI, particularly concerning algorithmic bias. Explainability techniques, such as analyzing SHAP values, can reveal if a model’s decisions are disproportionately influenced by sensitive attributes like race, gender, or socioeconomic status, even if these factors were not explicitly used as direct inputs.

For instance, if a loan approval model consistently assigns negative SHAP values to applicants from a certain demographic, it signals a potential bias that needs investigation, empowering teams to surface and mitigate such unfair outcomes.

The power of XAI also comes with the potential for “explainability washing.” Just as “greenwashing” misleads consumers about environmental practices, explainability washing can occur when explanations are designed to obscure, rather than illuminate, problematic algorithmic behavior or inherent biases. This could manifest as overly simplistic explanations that omit critical influencing factors, or explanations that strategically frame results to appear more neutral or fair than they truly are. It underscores the ethical responsibility of UX practitioners to design explanations that are genuinely transparent and verifiable.

UX professionals, in collaboration with data scientists and ethicists, hold a crucial responsibility in communicating the why of a decision, and also the limitations and potential biases of the underlying AI model. This involves setting realistic user expectations about AI accuracy, identifying where the model might be less reliable, and providing clear channels for recourse or feedback when users perceive unfair or incorrect outcomes. Proactively addressing these ethical dimensions will allow us to build AI systems that are truly just and trustworthy.

From Methods To Mockups: Practical XAI Design Patterns

Knowing the concepts is one thing; designing them is another. Here’s how we can translate these XAI methods into intuitive design patterns.

Pattern 1: The "Because" Statement (for Feature Importance)

This is the simplest and often most effective pattern. It’s a direct, plain-language statement that surfaces the primary reason for an AI’s action.

  • Heuristic: Be direct and concise. Lead with the single most impactful reason. Avoid jargon at all costs.
Example: Imagine a music streaming service. Instead of just presenting a “Discover Weekly” playlist, you add a small line of microcopy.

Song Recommendation: “Velvet Morning”
Because you listen to “The Fuzz” and other psychedelic rock.

Pattern 2: The "What-If" Interactive (for Counterfactuals)

Counterfactuals are inherently about empowerment. The best way to represent them is by giving users interactive tools to explore possibilities themselves. This is perfect for financial, health, or other goal-oriented applications.

  • Heuristic: Make explanations interactive and empowering. Let users see the cause and effect of their choices.
Example: A loan application interface. After a denial, instead of a dead end, the user gets a tool to determine how various scenarios (what-ifs) might play out (See Figure 1).

Pattern 3: The Highlight Reel (For Local Explanations)

When an AI performs an action on a user’s content (like summarizing a document or identifying faces in photos), the explanation should be visually linked to the source.

  • Heuristic: Use visual cues like highlighting, outlines, or annotations to connect the explanation directly to the interface element it’s explaining.
Example: An AI tool that summarizes long articles.

AI-Generated Summary Point:
Initial research showed a market gap for sustainable products.

Source in Document:
“...Our Q2 analysis of market trends conclusively demonstrated that no major competitor was effectively serving the eco-conscious consumer, revealing a significant market gap for sustainable products...”

Pattern 4: The Push-and-Pull Visual (for Value-based Explanations)

For more complex decisions, users might need to understand the interplay of factors. Simple data visualizations can make this clear without being overwhelming.

  • Heuristic: Use simple, color-coded data visualizations (like bar charts) to show the factors that positively and negatively influenced a decision.
Example: An AI screening a candidate’s profile for a job.

Why this candidate is a 75% match:

Factors pushing the score up:
  • 5+ Years UX Research Experience
  • Proficient in Python

Factors pushing the score down:
  • No experience with B2B SaaS

Learning and using these design patterns in the UX of your AI product will help increase its explainability. You can also use additional techniques that I’m not covering in depth here, including the following:

  • Natural language explanations: Translating an AI’s technical output into simple, conversational human language that non-experts can easily understand.
  • Contextual explanations: Providing a rationale for an AI’s output at the specific moment and location it is most relevant to the user’s task.
  • Relevant visualizations: Using charts, graphs, or heatmaps to visually represent an AI’s decision-making process, making complex data intuitive and easier for users to grasp.

A Note For the Front End: Translating these explainability outputs into seamless user experiences also presents its own set of technical considerations. Front-end developers often grapple with designing APIs that efficiently retrieve explanation data, and performance implications (such as generating explanations in real time for every user interaction) need careful planning to avoid latency.

Some Real-world Examples

UPS Capital’s DeliveryDefense

UPS uses AI to assign a “delivery confidence score” to addresses to predict the likelihood of a package being stolen. Their DeliveryDefense software analyzes historical data on location, loss frequency, and other factors. If an address has a low score, the system can proactively reroute the package to a secure UPS Access Point, providing an explanation for the decision (e.g., “Package rerouted to a secure location due to a history of theft”). This system demonstrates how XAI can be used for risk mitigation and building customer trust through transparency.

Autonomous Vehicles

Autonomous vehicles will need to use XAI effectively to make safe, explainable decisions. When a self-driving car brakes suddenly, the system can provide a real-time explanation for its action, for example, by identifying a pedestrian stepping into the road. This is not only crucial for passenger comfort and trust but also a regulatory requirement to prove the safety and accountability of the AI system.

IBM Watson Health (and its challenges)

While often cited as a general example of AI in healthcare, it’s also a valuable case study for the importance of XAI. The failure of its Watson for Oncology project highlights what can go wrong when explanations are not clear, or when the underlying data is biased or not localized. The system’s recommendations were sometimes inconsistent with local clinical practices because they were based on U.S.-centric guidelines. This serves as a cautionary tale on the need for robust, context-aware explainability.

The UX Researcher’s Role: Pinpointing And Validating Explanations

Our design solutions are only effective if they address the right user questions at the right time. An explanation that answers a question the user doesn’t have is just noise. This is where UX research becomes the critical connective tissue in an XAI strategy, ensuring that we explain the what and how that actually matters to our users. The researcher’s role is twofold: first, to inform the strategy by identifying where explanations are needed, and second, to validate the designs that deliver those explanations.

Informing the XAI Strategy (What to Explain)

Before we can design a single explanation, we must understand the user’s mental model of the AI system. What do they believe it’s doing? Where are the gaps between their understanding and the system’s reality? This is the foundational work of a UX researcher.

Mental Model Interviews: Unpacking User Perceptions Of AI Systems

Through deep, semi-structured interviews, UX practitioners can gain invaluable insights into how users perceive and understand AI systems. These sessions are designed to encourage users to literally draw or describe their internal “mental model” of how they believe the AI works. This often involves asking open-ended questions that prompt users to explain the system’s logic, its inputs, and its outputs, as well as the relationships between these elements.

These interviews are powerful because they frequently reveal profound misconceptions and assumptions that users hold about AI. For example, a user interacting with a recommendation engine might confidently assert that the system is based purely on their past viewing history. They might not realize that the algorithm also incorporates a multitude of other factors, such as the time of day they are browsing, the current trending items across the platform, or even the viewing habits of similar users.

Uncovering this gap between a user’s mental model and the actual underlying AI logic is critically important. It tells us precisely what specific information we need to communicate to users to help them build a more accurate and robust mental model of the system. This, in turn, is a fundamental step in fostering trust. When users understand, even at a high level, how an AI arrives at its conclusions or recommendations, they are more likely to trust its outputs and rely on its functionality.

AI Journey Mapping: A Deep Dive Into User Trust And Explainability

By meticulously mapping the user’s journey with an AI-powered feature, we gain invaluable insights into the precise moments where confusion, frustration, or even profound distrust emerge. This uncovers critical junctures where the user’s mental model of how the AI operates clashes with its actual behavior.

Consider a music streaming service: Does the user’s trust plummet when a playlist recommendation feels “random,” lacking any discernible connection to their past listening habits or stated preferences? This perceived randomness is a direct challenge to the user’s expectation of intelligent curation and a breach of the implicit promise that the AI understands their taste. Similarly, in a photo management application, do users experience significant frustration when an AI photo-tagging feature consistently misidentifies a cherished family member? This error is more than a technical glitch; it strikes at the heart of accuracy, personalization, and even emotional connection.

These pain points are vivid signals indicating precisely where a well-placed, clear, and concise explanation is necessary. Such explanations serve as crucial repair mechanisms, mending a breach of trust that, if left unaddressed, can lead to user abandonment.

The power of AI journey mapping lies in its ability to move us beyond simply explaining the final output of an AI system. While understanding what the AI produced is important, it’s often insufficient. Instead, this process compels us to focus on explaining the process at critical moments. This means addressing:

  • Why a particular output was generated: Was it due to specific input data? A particular model architecture?
  • What factors influenced the AI’s decision: Were certain features weighted more heavily?
  • How the AI arrived at its conclusion: Can we offer a simplified, analogous explanation of its internal workings?
  • What assumptions the AI made: Were there implicit understandings of the user’s intent or data that need to be surfaced?
  • What the limitations of the AI are: Clearly communicating what the AI cannot do, or where its accuracy might waver, builds realistic expectations.

AI journey mapping transforms the abstract concept of XAI into a practical, actionable framework for UX practitioners. It enables us to move beyond theoretical discussions of explainability and instead pinpoint the exact moments where user trust is at stake, providing the necessary insights to build AI experiences that are powerful, transparent, understandable, and trustworthy.

Ultimately, research is how we uncover the unknowns. Your team might be debating how to explain why a loan was denied, but research might reveal that users are far more concerned with understanding how their data was used in the first place. Without research, we are simply guessing what our users are wondering.

Collaborating On The Design (How to Explain Your AI)

Once research has identified what to explain, the collaborative loop with design begins. Designers can prototype the patterns we discussed earlier—the “Because” statement, the interactive sliders—and researchers can put those designs in front of users to see if they hold up.

Targeted Usability & Comprehension Testing: We can design research studies that specifically test the XAI components. We don’t just ask, “Is this easy to use?” We ask, “After seeing this, can you tell me in your own words why the system recommended this product?” or “Show me what you would do to see if you could get a different result.” The goal here is to measure comprehension and actionability, alongside usability.

Measuring Trust Itself: We can use simple surveys and rating scales before and after an explanation is shown. For instance, we can ask a user on a 5-point scale, “How much do you trust this recommendation?” before they see the “Because” statement, and then ask them again afterward. This provides quantitative data on whether our explanations are actually moving the needle on trust.

This process creates a powerful, iterative loop. Research findings inform the initial design. That design is then tested, and the new findings are fed back to the design team for refinement. Maybe the “Because” statement was too jargony, or the “What-If” slider was more confusing than empowering. Through this collaborative validation, we ensure that the final explanations are technically accurate, genuinely understandable, useful, and trust-building for the people using the product.

The Goldilocks Zone Of Explanation

A critical word of caution: it is possible to over-explain. As in the fairy tale, where Goldilocks sought the porridge that was ‘just right’, the goal of a good explanation is to provide the right amount of detail—not too much and not too little. Bombarding a user with every variable in a model will lead to cognitive overload and can actually decrease trust. The goal is not to make the user a data scientist.

One solution is progressive disclosure.

  1. Start with the simple. Lead with a concise “Because” statement. For most users, this will be enough.
  2. Offer a path to detail. Provide a clear, low-friction link like “Learn More” or “See how this was determined.”
  3. Reveal the complexity. Behind that link, you can offer the interactive sliders, the visualizations, or a more detailed list of contributing factors.

This layered approach respects user attention and expertise, providing just the right amount of information for their needs. Let’s imagine you’re using a smart home device that recommends optimal heating based on various factors.

Start with the simple: “Your home is currently heated to 72 degrees, which is the optimal temperature for energy savings and comfort.”

Offer a path to detail: Below that, a small link or button: “Why is 72 degrees optimal?”

Reveal the complexity: Clicking that link could open a new screen showing:

  • Interactive sliders for outside temperature, humidity, and your preferred comfort level, demonstrating how these adjust the recommended temperature.
  • A visualization of energy consumption at different temperatures.
  • A list of contributing factors like “Time of day,” “Current outside temperature,” “Historical energy usage,” and “Occupancy sensors.”
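As a sketch, these three layers map naturally onto a plain HTML disclosure element (all values, copy, and factors here are illustrative):

```html
<!-- Layer 1: the simple “Because” statement -->
<p>Your home is heated to 72°F, the optimal balance of energy savings and comfort.</p>

<!-- Layer 2: a low-friction path to more detail -->
<details>
  <summary>Why is 72°F optimal?</summary>

  <!-- Layer 3: the full complexity, only for those who ask -->
  <ul>
    <li>Current outside temperature: 41°F</li>
    <li>Historical energy usage at this hour</li>
    <li>Occupancy sensors: 2 people home</li>
  </ul>
  <label>
    Preferred comfort level
    <input type="range" min="66" max="76" value="72">
  </label>
</details>
```

The collapsed state carries the concise explanation; the expanded state reveals the contributing factors and an interactive control, without forcing either on the user.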

It’s effective to combine multiple XAI methods, and this Goldilocks Zone of Explanation pattern, which advocates for progressive disclosure, implicitly encourages doing so. You might start with a simple “Because” statement (Pattern 1) for immediate comprehension, and then offer a “Learn More” link that reveals a “What-If” Interactive (Pattern 2) or a “Push-and-Pull Visual” (Pattern 4) for deeper exploration.

For instance, a loan application system could initially state the primary reason for denial (feature importance), then allow the user to interact with a “What-If” tool to see how changes to their income or debt would alter the outcome (counterfactuals), and finally, provide a detailed “Push-and-Pull” chart (value-based explanation) to illustrate the positive and negative contributions of all factors. This layered approach allows users to access the level of detail they need, when they need it, preventing cognitive overload while still providing comprehensive transparency.

Determining which XAI tools and methods to use is primarily a function of thorough UX research. Mental model interviews and AI journey mapping are crucial for pinpointing user needs and pain points related to AI understanding and trust. Mental model interviews help uncover user misconceptions about how the AI works, indicating areas where fundamental explanations (like feature importance or local explanations) are needed. AI journey mapping, on the other hand, identifies critical moments of confusion or distrust in the user’s interaction with the AI, signaling where more granular or interactive explanations (like counterfactuals or value-based explanations) would be most beneficial to rebuild trust and provide agency.

Ultimately, the best way to choose a technique is to let user research guide your decisions, ensuring that the explanations you design directly address actual user questions and concerns, rather than simply offering technical details for their own sake.

XAI for Deep Reasoning Agents

Some of the newest AI systems, known as deep reasoning agents, produce an explicit “chain of thought” for every complex task. They do not merely cite sources; they show the logical, step-by-step path they took to arrive at a conclusion. While this transparency provides valuable context, a play-by-play that spans several paragraphs can feel overwhelming to a user simply trying to complete a task.

The principles of XAI, especially the Goldilocks Zone of Explanation, apply directly here. We can curate the journey, using progressive disclosure to show only the final conclusion and the most salient step in the thought process first. Users can then opt in to see the full, detailed, multi-step reasoning when they need to double-check the logic or find a specific fact. This approach respects user attention while preserving the agent’s full transparency.

Next Steps: Empowering Your XAI Journey

Explainability is a fundamental pillar for building trustworthy and effective AI products. For the advanced practitioner looking to drive this change within their organization, the journey extends beyond design patterns into advocacy and continuous learning.

To deepen your understanding and practical application, consider exploring resources like the AI Explainability 360 (AIX360) toolkit from IBM Research or Google’s What-If Tool, which offer interactive ways to explore model behavior and explanations. Engaging with communities like the Responsible AI Forum or specific research groups focused on human-centered AI can provide invaluable insights and collaboration opportunities.

Finally, be an advocate for XAI within your own organization. Frame explainability as a strategic investment. Consider a brief pitch to your leadership or cross-functional teams:

“By investing in XAI, we’ll go beyond building trust; we’ll accelerate user adoption, reduce support costs by empowering users with understanding, and mitigate significant ethical and regulatory risks by exposing potential biases. This is good design and smart business.”

Your voice, grounded in practical understanding, is crucial in bringing AI out of the black box and into a collaborative partnership with users.

Categories: American

Masonry: Things You Won’t Need A Library For Anymore

Tue, 12/02/2025 - 11:00

About 15 years ago, I was working at a company where we built apps for travel agents, airport workers, and airline companies. We also built our own in-house framework for UI components and single-page app capabilities.

We had components for everything: fields, buttons, tabs, ranges, datatables, menus, datepickers, selects, and multiselects. We even had a div component. Our div component was great, by the way: it allowed us to do rounded corners in all browsers, which, believe it or not, wasn’t an easy thing to do at the time.

Our work took place at a point in our history when JS, Ajax, and dynamic HTML were seen as a revolution that brought us into the future. Suddenly, we could update a page dynamically, get data from a server, and avoid having to navigate to other pages, which was seen as slow and flashed a big white rectangle on the screen between the two pages.

There was a phrase, made popular by Jeff Atwood (the founder of StackOverflow), which read:

“Any application that can be written in JavaScript will eventually be written in JavaScript.”

Jeff Atwood

To us at the time, this felt like a dare to actually go and create those apps. It felt like a blanket approval to do everything with JS.

So we did everything with JS, and we didn’t really take the time to research other ways of doing things. We didn’t really feel the incentive to properly learn what HTML and CSS could do. We didn’t really perceive the web as an evolving app platform in its entirety. We mostly saw it as something we needed to work around, especially when it came to browser support. We could just throw more JS at it to get things done.

Would taking the time to learn more about how the web worked and what was available on the platform have helped me? Sure, I could probably have shaved a bunch of code that wasn’t truly needed. But, at the time, maybe not that much.

You see, browser differences were pretty significant back then. This was a time when Internet Explorer was still the dominant browser, with Firefox being the close second, but starting to lose market share due to Chrome rapidly gaining popularity. Although Chrome and Firefox were quite good at agreeing on web standards, the environments in which our apps were running meant that we had to support IE6 for a long time. Even when we were allowed to support IE8, we still had to deal with a lot of differences between browsers. Not only that, but the web of the time just didn't have that many capabilities built right into the platform.

Fast forward to today. Things have changed tremendously. Not only do we have more of these capabilities than ever before, but the rate at which they become available has increased as well.

Let me ask the question again, then: Would taking the time to learn more about how the web works and what is available on the platform help you today? Absolutely yes. Learning to understand and use the web platform today puts you at a huge advantage over other developers.

Whether you work on performance, accessibility, responsiveness, all of them together, or just shipping UI features, if you want to do it as a responsible engineer, knowing the tools that are available to you helps you reach your goals faster and better.

Some Things You Might Not Need A Library For Anymore

Knowing what browsers support today, the question, then, is: What can we ditch? Do we need a div component to do rounded corners in 2025? Of course, we don’t. The border-radius property has been supported by all currently used browsers for more than 15 years at this point. And corner-shape is also coming soon, for even fancier corners.
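For the record, here is all it takes today, plus a glimpse of `corner-shape` (the `squircle` value comes from the draft spec and may still change):

```css
/* One declaration replaces an entire div component */
.card {
  border-radius: 12px;
}

/* Emerging: fancier corners without images or clip-path hacks */
.squircle {
  border-radius: 24px;
  corner-shape: squircle; /* draft syntax; subject to change */
}
```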

Let’s take a look at relatively recent features that are now available in all major browsers, and which you can use to replace existing dependencies in your codebase.

The point isn't to immediately ditch all your beloved libraries and rewrite your codebase. As for everything else, you’ll need to take browser support into account first and decide based on other factors specific to your project. The following features are implemented in the three main browser engines (Chromium, WebKit, and Gecko), but you might have different browser support requirements that prevent you from using them right away. Now is still a good time to learn about these features, though, and perhaps plan to use them at some point.

Popovers And Dialogs

The Popover API, the <dialog> HTML element, and the ::backdrop pseudo-element can help you get rid of dependencies on popup, tooltip, and dialog libraries, such as Floating UI, Tippy.js, Tether, or React Tooltip.

They handle accessibility and focus management for you, out of the box, are highly customizable by using CSS, and can easily be animated.
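As a sketch of how little markup this takes (the IDs and copy are made up):

```html
<!-- Popover: open, close, and light-dismiss work without any JS -->
<button popovertarget="hint">What’s this?</button>
<div id="hint" popover>
  Popovers close on Esc or on an outside click by default.
</div>

<!-- Dialog: showModal() traps focus and exposes ::backdrop -->
<dialog id="confirm">
  <p>Delete this item?</p>
  <form method="dialog">
    <button>Cancel</button>
    <button value="delete">Delete</button>
  </form>
</dialog>
<button onclick="document.getElementById('confirm').showModal()">Delete…</button>

<style>
  #confirm::backdrop {
    background: rgb(0 0 0 / 0.5);
  }
</style>
```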

Accordions

The <details> element, its name attribute for mutually exclusive elements, and the ::details-content pseudo-element remove the need for accordion components like the Bootstrap Accordion or the React Accordion component.

Just using the platform here means it’s easier for folks who know HTML/CSS to understand your code without having to first learn to use a specific library. It also means you’re immune to breaking changes in the library or the discontinuation of that library. And, of course, it means less code to download and run. Mutually exclusive details elements don’t need JS to open, close, or animate.
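A minimal exclusive accordion might look like this (the content is illustrative, and the `::details-content` animation is the newest part, so check support before relying on it):

```html
<details name="faq" open>
  <summary>How long does shipping take?</summary>
  <p>3–5 business days.</p>
</details>
<details name="faq">
  <summary>What is the return policy?</summary>
  <p>30 days, no questions asked.</p>
</details>

<style>
  /* Fade the built-in disclosure via ::details-content */
  details::details-content {
    opacity: 0;
    transition: opacity 0.3s, content-visibility 0.3s allow-discrete;
  }
  details[open]::details-content {
    opacity: 1;
  }
</style>
```

Because both elements share the same `name`, opening one automatically closes the other.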

CSS Syntax

Cascade layers, for a more organized CSS codebase, CSS nesting, for more compact CSS, new color functions, relative colors, and color-mix, new Maths functions like abs(), sign(), pow() and others help reduce dependencies on CSS pre-processors, utility libraries like Bootstrap and Tailwind, or even runtime CSS-in-JS libraries.

The game changer :has(), one of the most requested features for a long time, removes the need for more complicated JS-based solutions.
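A compressed sketch of several of these features working together (the selectors and colors are made up):

```css
/* Cascade layers make precedence explicit, no source-order juggling */
@layer reset, components;

@layer components {
  .card {
    --brand: oklch(65% 0.2 260);
    /* color-mix(): derive tints without a pre-processor */
    background: color-mix(in oklch, var(--brand) 15%, white);

    /* Native nesting */
    & h2 {
      color: var(--brand);
    }
    &:hover {
      background: color-mix(in oklch, var(--brand) 25%, white);
    }
  }

  /* :has(): style a parent based on its children, no JS */
  .card:has(img) {
    padding: 0;
  }
}
```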

JS Utilities

Modern Array methods like findLast(), or at(), as well as Set methods like difference(), intersection(), union() and others can reduce dependencies on libraries like Lodash.

Container Queries

Container queries make UI components respond to things other than the viewport size, and therefore make them more reusable across different contexts.

No need to use a JS-heavy UI library for this anymore, and no need to use a polyfill either.
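The gist of it, with assumed class names:

```css
/* Any element can become a query container */
.sidebar,
.main-column {
  container-type: inline-size;
}

/* The card responds to its container’s width, not the viewport’s */
.card {
  display: grid;
  grid-template-columns: 1fr;
}

@container (min-width: 400px) {
  .card {
    grid-template-columns: 120px 1fr;
    gap: 1rem;
  }
}
```

The same `.card` markup stacks in a narrow sidebar and goes two-column in a wide main area, with no JS measuring anything.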

Layout

Grid, subgrid, flexbox, or multi-column have been around for a long time now, but looking at the results of the State of CSS surveys, it’s clear that developers tend to be very cautious with adopting new things, and wait for a very long time before they do.

These features have been Baseline for a long time, and you could use them to get rid of dependencies on things like Bootstrap’s grid system, Foundation Framework’s flexbox utilities, Bulma’s fixed grid, the Materialize grid, or Tailwind columns.

I’m not saying you should drop your framework. Your team adopted it for a reason, and removing it might be a big project. But looking at what the web platform can offer without a third-party wrapper on top comes with a lot of benefits.

Things You Might Not Need Anymore In The Near Future

Now, let’s take a quick look at some of the things you will not need a library for in the near future. That is to say, the things below are not quite ready for mass adoption, but being aware of them and planning for potential later use can be helpful.

Anchor Positioning

CSS anchor positioning handles the positioning of popovers and tooltips relative to other elements, and takes care of keeping them in view, even when moving, scrolling, or resizing the page.

This is a great complement to the Popover API mentioned before, which will make it even easier to migrate away from more performance-intensive JS solutions.
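The declarative shape of it looks roughly like this (the names are illustrative, and the syntax is still stabilising across browsers):

```css
/* The trigger declares an anchor name… */
#menu-button {
  anchor-name: --menu;
}

/* …and the popover positions itself relative to that anchor */
#menu {
  position: fixed;
  position-anchor: --menu;
  position-area: block-end;           /* below the button */
  position-try-fallbacks: flip-block; /* flip above when out of room */
}
```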

Navigation API

The Navigation API can be used to handle navigation in single-page apps and might be a great complement, or even a replacement, to React Router, Next.js routing, or Angular routing tasks.
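A sketch of the core pattern, assuming a `renderRoute()` function of your own (this runs only in browsers that ship the API, currently Chromium-based ones):

```js
// Intercept same-origin navigations and render in place, SPA-style
navigation.addEventListener("navigate", (event) => {
  const url = new URL(event.destination.url);
  if (!event.canIntercept || url.origin !== location.origin) {
    return; // let the browser handle it normally
  }

  event.intercept({
    async handler() {
      await renderRoute(url.pathname); // your own view logic
    },
  });
});
```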

View Transitions API

The View Transitions API can animate between the different states of a page. On a single-page application, this makes smooth transitions between states very easy, and can help you get rid of animation libraries such as Anime.js, GSAP, or Motion.dev.

Even better, the API can also be used with multiple-page applications.

Remember earlier, when I said that the reason we built single-page apps at the company where I worked 15 years ago was to avoid the white flash of page reloads when navigating? Had that API been available at the time, we would have been able to achieve beautiful page transition effects without a single-page framework and without a huge initial download of the entire app.
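The API itself is a thin wrapper around whatever DOM update you already perform. A hedged sketch, with a hypothetical update function:

```js
function showPage(html) {
  const update = () => {
    document.querySelector("main").innerHTML = html;
  };

  // Fall back gracefully where the API isn’t available
  if (!document.startViewTransition) {
    update();
    return;
  }

  // The browser snapshots the old and new states and cross-fades them
  document.startViewTransition(update);
}
```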

Scroll-driven Animations

Scroll-driven animations run on the user’s scroll position, rather than over time, making them a great solution for storytelling and product tours.

Some people have gone a bit over the top with it, but when used well, this can be a very effective design tool, and can help get rid of libraries like: ScrollReveal, GSAP Scroll, or WOW.js.
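A small, self-contained example of the CSS involved (the class and keyframe names are made up):

```css
/* Fade and slide each section in as it enters the viewport */
.reveal {
  animation: fade-slide-in linear both;
  animation-timeline: view();           /* progress = scroll, not time */
  animation-range: entry 0% entry 100%; /* runs while entering the viewport */
}

@keyframes fade-slide-in {
  from {
    opacity: 0;
    translate: 0 2rem;
  }
  to {
    opacity: 1;
    translate: 0 0;
  }
}
```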

Customizable Selects

A customizable select is a normal <select> element that lets you fully customize its appearance and content, while ensuring accessibility and performance benefits.

This has been a long time coming, and a highly requested feature, and it’s amazing to see it come to the web platform soon. With a built-in customizable select, you can finally ditch all this hard-to-maintain JS code for your custom select components.
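Based on the current experimental implementation, opting in is a single declaration, after which the parts style like ordinary elements (expect the details to evolve before it ships everywhere):

```css
/* Opt the select and its picker into the customizable rendering */
select,
::picker(select) {
  appearance: base-select;
}

/* Then style the dropdown like any other box */
::picker(select) {
  border-radius: 0.5rem;
  box-shadow: 0 4px 16px rgb(0 0 0 / 0.2);
}
```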

CSS Masonry

CSS Masonry is another upcoming web platform feature that I want to spend more time on.

With CSS Masonry, you can achieve layouts that are very hard, or even impossible, with flex, grid, or other built-in CSS layout primitives. Developers often resort to using third-party libraries to achieve Masonry layouts, such as the Masonry JS library.

But, more on that later. Let’s wrap this point up before moving on to Masonry.

Why You Should Care

The job market is full of web developers with experience in JavaScript and the latest frameworks of the day. So, really, what’s the point in learning to use the web platform primitives more, if you can do the same things with the libraries, utilities, and frameworks you already know today?

When an entire industry relies on these frameworks, and you can just pull in the right library, shouldn’t browser vendors just work with these libraries to make them load and run faster, rather than trying to convince developers to use the platform instead?

First of all, we do work with library authors, and we do make frameworks better by learning about what they use and improving those areas.

But secondly, “just using the platform” can bring pretty significant benefits.

Sending Less Code To Devices

The main benefit is that you end up sending far less code to your clients’ devices.

According to the 2024 Web Almanac, the median page makes around 70 HTTP requests, more of them for JavaScript than for any other resource type: the median number of JS requests per page is 23, up 8% since 2022. In 2024, JS even overtook images as the most-requested file type.

And page size continues to grow year over year. The median page weight is around 2MB now, which is 1.8MB more than it was 10 years ago.

Sure, your internet connection speed has probably increased, too, but that’s not the case for everyone. And not everyone has the same device capabilities either.

Pulling in third-party code for things you could instead do with the platform most probably means shipping more code, and therefore reaching fewer customers than you otherwise would. On the web, bad loading performance leads to high abandonment rates and hurts brand reputation.

Running Less Code On Devices

Furthermore, the code you do ship on your customers’ devices likely runs faster if it uses fewer JavaScript abstractions on top of the platform. It’s also probably more responsive and more accessible by default. All of this leads to more and happier customers.

Check my colleague Alex Russell’s yearly performance inequality gap blog, which shows that premium devices are largely absent from markets with billions of users due to wealth inequality. And this gap is only growing over time.

Built-in Masonry Layout

One web platform feature that’s coming soon and which I’m very excited about is CSS Masonry.

Let me start by explaining what Masonry is.

What Is Masonry?

Masonry is a type of layout that was made popular by Pinterest years ago. It creates independent tracks of content within which items pack themselves as close to the start of the track as they can.

Many people see Masonry as a great option for portfolios and photo galleries, which it certainly can do. But Masonry is more flexible than what you see on Pinterest, and it’s not limited to just waterfall-like layouts.

In a Masonry layout:

  • Tracks can be columns or rows.

  • Tracks of content don’t all have to be the same size.

  • Items can span multiple tracks.

  • Items can be placed on specific tracks; they don’t have to always follow the automatic placement algorithm.

Demos

Here are a few simple demos I made by using the upcoming implementation of CSS Masonry in Chromium.

A photo gallery demo, showing how items (the title in this case) can span multiple tracks.

Another photo gallery showing tracks of different sizes.

A news site layout with some tracks wider than others, and some items spanning the entire width of the layout.

A kanban board showing that items can be placed onto specific tracks.

Note: The previous demos were made with a version of Chromium that’s not yet available to most web users, because CSS Masonry is only just starting to be implemented in browsers.

However, web developers have been happily using libraries to create Masonry layouts for years already.

Sites Using Masonry Today

Indeed, Masonry is pretty common on the web today. Besides Pinterest, I found a number of examples, some of them in less obvious places.

So, how were these layouts created?

Workarounds

One trick I’ve seen is to use a Flexbox layout instead, changing its direction to column and setting it to wrap.

This way, you can place items of different heights in multiple, independent columns, giving the impression of a Masonry layout.

There are, however, two limitations with this workaround:

  1. The order of items is different from what it would be with a real Masonry layout. With Flexbox, items fill the first column first and, when it’s full, then go to the next column. With Masonry, items would stack in whichever track (or column in this case) has more space available.
  2. But also, and perhaps more importantly, this workaround requires setting a fixed height on the Flexbox container; otherwise, no wrapping occurs.
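For reference, the workaround in CSS form; the class name is made up, and the fixed height from limitation 2 is right there in the code:

```css
.pseudo-masonry {
  display: flex;
  flex-direction: column;
  flex-wrap: wrap;
  gap: 1rem;
  height: 80vh; /* required: without a fixed height, nothing wraps */
}

/* Three columns; items flow down one column before starting the next */
.pseudo-masonry > * {
  width: calc(100% / 3 - 1rem);
}
```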
Third-party Masonry Libraries

For more advanced cases, developers have been using libraries.

The most well-known and popular library for this is simply called Masonry, and it gets downloaded about 200,000 times per week according to NPM.

Squarespace also provides a layout component that renders a Masonry layout, for a no-code alternative, and many sites use it.

Both of these options use JavaScript code to place items in the layout.

Built-in Masonry

I’m really excited that Masonry is now starting to appear in browsers as a built-in CSS feature. Over time, you will be able to use Masonry just like you do Grid or Flexbox, that is, without needing any workarounds or third-party code.

My team at Microsoft has been implementing built-in Masonry support in the Chromium open source project, which Edge, Chrome, and many other browsers are based on. Mozilla was actually the first browser vendor to propose an experimental implementation of Masonry back in 2020. And Apple has also been very interested in making this new web layout primitive happen.

The work to standardize the feature is also moving ahead, with agreement within the CSS working group about the general direction and even a new display type display: grid-lanes.

If you want to learn more about Masonry and track progress, check out my CSS Masonry resources page.

In time, when Masonry becomes a Baseline feature, just like Grid or Flexbox, we’ll be able to simply use it and benefit from:

  • Better performance,
  • Better responsiveness,
  • Ease of use and simpler code.

Let’s take a closer look at these.

Better Performance

Making your own Masonry-like layout system, or using a third-party library instead, means you’ll have to run JavaScript code to place items on the screen. This also means that this code will be render blocking. Indeed, either nothing will appear, or things won’t be in the right places or of the right sizes, until that JavaScript code has run.

Masonry layout is often used for the main part of a web page, which means this code delays your main content, degrading your Largest Contentful Paint (LCP), a metric that plays a big role in perceived performance and search engine optimization.

I tested the Masonry JS library with a simple layout and by simulating a slow 4G connection in DevTools. The library is not very big (24KB, 7.8KB gzipped), but it took 600ms to load under my test conditions.

A performance recording confirmed that long 600ms load time for the Masonry library, with no other rendering activity happening in the meantime.

In addition, after the initial load time, the downloaded script then needed to be parsed, compiled, and then run. All of which, as mentioned before, was blocking the rendering of the page.

With a built-in Masonry implementation in the browser, we won’t have a script to load and run. The browser engine will just do its thing during the initial page rendering step.

Better Responsiveness

Similar to when a page first loads, resizing the browser window leads to rendering the layout in that page again. At this point, though, if the page is using the Masonry JS library, there’s no need to load the script again, because it’s already here. However, the code that moves items in the right places needs to run.

Now this particular library seems to be pretty fast at doing this when the page loads. However, it animates the items when they need to move to a different place on window resize, and this makes a big difference.

Of course, users don’t spend time resizing their browser windows as much as we developers do. But this animated resizing experience can be pretty jarring and adds to the perceived time it takes for the page to adapt to its new size.

Ease Of Use And Simpler Code

How easy it is to use a web feature and how simple the code looks are important factors that can make a big difference for your team. They can’t ever be as important as the final user experience, of course, but developer experience impacts maintainability. Using a built-in web feature comes with important benefits on that front:

  • Developers who already know HTML, CSS, and JS will most likely be able to use that feature easily because it’s been designed to integrate well and be consistent with the rest of the web platform.
  • There’s no risk of breaking changes being introduced in how the feature is used.
  • There’s almost zero risk of that feature becoming deprecated or unmaintained.

In the case of built-in Masonry, because it’s a layout primitive, you use it from CSS, just like Grid or Flexbox, no JS involved. Also, other layout-related CSS properties, such as gap, work as you’d expect them to. There are no tricks or workarounds to know about, and the things you do learn are documented on MDN.

For the Masonry JS lib, initialization is a bit complex: it requires a data attribute with a specific syntax, along with hidden HTML elements to set the column and gap sizes.

Plus, if you want to span columns, you need to include the gap size yourself to avoid problems:

```html
<script src="https://unpkg.com/masonry-layout@4.2.2/dist/masonry.pkgd.min.js"></script>
<style>
  .track-sizer,
  .item { width: 20%; }
  .gutter-sizer { width: 1rem; }
  .item { height: 100px; margin-block-end: 1rem; }
  .item:nth-child(odd) { height: 200px; }
  .item--width2 { width: calc(40% + 1rem); }
</style>
<div class="container" data-masonry='{
  "itemSelector": ".item",
  "columnWidth": ".track-sizer",
  "percentPosition": true,
  "gutter": ".gutter-sizer"
}'>
  <div class="track-sizer"></div>
  <div class="gutter-sizer"></div>
  <div class="item"></div>
  <div class="item item--width2"></div>
  <div class="item"></div>
  ...
</div>
```

Let’s compare this to what a built-in Masonry implementation would look like:

```html
<style>
  .container {
    display: grid-lanes;
    grid-lanes: repeat(4, 20%);
    gap: 1rem;
  }
  .item { height: 100px; }
  .item:nth-child(odd) { height: 200px; }
  .item--width2 { grid-column: span 2; }
</style>
<div class="container">
  <div class="item"></div>
  <div class="item item--width2"></div>
  <div class="item"></div>
  ...
</div>
```

The result is simpler, more compact code: things like gap just work, spanning tracks is done with span 2, exactly as in Grid, and you no longer have to calculate widths that account for the gap size.

How To Know What’s Available And When It’s Available?

Overall, the question isn’t really if you should use built-in Masonry over a JS library, but rather when. The Masonry JS library is amazing and has been filling a gap in the web platform for many years, and for many happy developers and users. It has a few drawbacks if you compare it to a built-in Masonry implementation, of course, but those are not important if that implementation isn’t ready.

It’s easy for me to list these cool new web platform features because I work at a browser vendor, and I therefore tend to know what’s coming. But developers often share, survey after survey, that keeping track of new things is hard. Staying informed is difficult, and companies don’t always prioritize learning anyway.

To help with this, a number of resources provide updates in simple and compact ways, so you can get the information you need quickly.

If you have a bit more time, you might also be interested in browser vendors’ release notes.

For even more resources, check out my Navigating the Web Platform Cheatsheet.

My Thing Is Still Not Implemented

That’s the other side of the problem. Even if you do find the time, energy, and ways to keep track, there’s still frustration with getting your voice heard and your favorite features implemented.

Maybe you’ve been waiting for years for a specific bug to be resolved, or a specific feature to ship in a browser where it’s still missing.

What I’ll say is browser vendors do listen. I’m part of several cross-organization teams where we discuss developer signals and feedback all the time. We look at many different sources of feedback, both internal at each browser vendor and external/public on forums, open source projects, blogs, and surveys. And, we’re always trying to create better ways for developers to share their specific needs and use cases.

So, if you can, please demand more from browser vendors and pressure us to implement the features you need. I get that it takes time, and can also be intimidating (not to mention a high barrier to entry), but it also works.

Here are a few ways you can get your (or your company’s) voice heard:

  • Take the annual State of JS, State of CSS, and State of HTML surveys. They play a big role in how browser vendors prioritize their work.

  • If you need a specific standard-based API to be implemented consistently across browsers, consider submitting a proposal for the next Interop project iteration. It requires more time, but consider how Shopify and RUMvision shared their wish lists for Interop 2026. Detailed information like this can be very useful for browser vendors to prioritize.

For more useful links to influence browser vendors, check out my Navigating the Web Platform Cheatsheet.

Conclusion

To close, I hope this article has left you with a few things to think about:

  • Excitement for Masonry and other upcoming web features.
  • A few web features you might want to start using.
  • A few pieces of custom or third-party code you might be able to remove in favor of built-in features.
  • A few ways to keep track of what’s coming and influence browser vendors.

More importantly, I hope I’ve convinced you of the benefits of using the web platform to its full potential.

Categories: American

The Accessibility Problem With Authentication Methods Like CAPTCHA

Thu, 11/27/2025 - 11:00

The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) has become ingrained in internet browsing since personal computers gained momentum in the consumer electronics market. For nearly as long as people have been going online, web developers have sought ways to block spam bots.

The CAPTCHA service distinguishes between human and bot activity to keep bots out. Unfortunately, its methods are less than precise. In trying to protect humans, developers have made much of the web inaccessible to people with disabilities.

Authentication methods, such as CAPTCHA, typically use image classification, puzzles, audio samples, or click-based tests to determine whether the user is human. While the types of challenges are well-documented, their logic is not public knowledge. People can only guess what it takes to “prove” they are human.

What Is CAPTCHA?

A CAPTCHA is a reverse Turing test that takes the form of a challenge-response test. For example, if it instructs users to “select all images with stairs,” they must pick the stairs out from railings, driveways, and crosswalks. Alternatively, they may be asked to enter the text they see, add the sum of dice faces, or complete a sliding puzzle.

Image-based CAPTCHAs are responsible for the most frustrating shared experiences internet users have — deciding whether to select a square when only a small sliver of the object in question is in it.

Regardless of the method, a computer or algorithm ultimately determines whether the test-taker is human or machine. This authentication service has spawned many offshoots, including reCAPTCHA and hCAPTCHA. It has even led to the creation of entire companies, such as GeeTest and Arkose Labs. The Google-owned automated system reCAPTCHA requires users to click a checkbox labeled “I’m not a robot” for authentication. It runs an adaptive analysis in the background to assign a risk score. hCAPTCHA is an image-classification-based alternative.

Other authentication methods include multi-factor authentication (MFA), QR codes, temporary personal identification numbers (PINs), and biometrics. They do not follow the challenge-response formula, but serve fundamentally similar purposes.

These offshoots are intended to be better than the original, but they often fail to meet modern accessibility standards. Take hCaptcha, for instance, which uses a cookie to let you bypass the challenge-response test entirely. It’s a great idea in theory, but it doesn’t work in practice.

You’re supposed to receive a one-time code via email that you send to a specific number over SMS. Users report receiving endless error messages, forcing them to complete the standard text CAPTCHA. This is only available if the site explicitly enabled it during configuration. If it is not set up, you must complete an image challenge that does not recognize screen readers.

Even when the initial process works, subsequent authentication relies on a third-party cross-site cookie, which most browsers block automatically. Also, the code expires after a short period, so you have to redo the entire process if it takes you too long to move on to the next step.

Why Do Teams Use CAPTCHA And Similar Authentication Methods?

CAPTCHA is common because it is easy to set up. Developers can program it to appear, and it conducts the test automatically. This way, they can focus on more important matters while still preventing spam, fraud, and abuse. These tools are supposed to make it easier for humans to use the internet safely, but they often keep real people from logging in.

These tests result in a poor user experience overall. One study found users wasted over 819 million hours on over 512 billion reCAPTCHA v2 sessions as of 2023. Despite it all, bots prevail. Machine learning models can solve text-based CAPTCHA within fractions of a second with over 97% accuracy.

A 2024 study on Google’s reCAPTCHA v2 — which is still widely used despite the rollout of reCAPTCHA v3 — found bots can solve image classification CAPTCHA with up to 100% accuracy, depending on the object they are tasked with identifying. The researchers used a free, open-source model, which means that bad actors could easily replicate their work.

Why Should Web Developers Stop Using CAPTCHA?

Authentication methods like CAPTCHA have an accessibility problem. Machine learning advances forced these services to grow increasingly complex. Even so, they are not foolproof. Bots get it right more than people do. Research shows they can complete reCAPTCHA within 17.5 seconds, achieving 85% accuracy. Humans take longer and are less accurate.

Many people fail CAPTCHA tests and have no idea what they did wrong. For example, a prompt instructing users to “select all squares with traffic lights” seems simple enough, but it gets complicated if a sliver of the pole is in another square. Should they select that box, or is that what an algorithm would do?

Although bot capabilities have grown by orders of magnitude, human abilities have remained the same. As tests get progressively more difficult, people feel less inclined to attempt them. One survey shows nearly 59% of people will stop using a product after several bad experiences. If authentication is too cumbersome or complex, they might stop using the website entirely.

People can fail these tests for various reasons, including technical ones. If they block third-party cookies, have a local proxy running, or have not updated their browser in a while, they may keep failing, regardless of how many times they try.

Authentication Issues With CAPTCHA

Due to the reasons mentioned above, most types of CAPTCHA are inherently inaccessible. This is especially true for people with disabilities, as these challenge-response tests were not designed with their needs in mind. Some of the common issues include the following:

Issues Related To Visuals And Screen Reader Use

Screen readers cannot read standard visual CAPTCHAs, such as the distorted text test, since the jumbled, contorted words are not machine-readable. The image classification and sliding puzzle methods are similarly inaccessible.

In one WebAIM survey conducted from 2023 to 2024, screen reader users agreed CAPTCHA was the most problematic item, ranking it above ambiguous links, unexpected screen changes, missing alt text, inaccessible search, and lack of keyboard accessibility. Its spot at the top has remained largely unchanged for over a decade, illustrating its history of inaccessibility.

Issues Related To Hearing And Audio Processing

Audio CAPTCHAs are relatively uncommon because web development best practices advise against autoplay audio and emphasize the importance of user controls. However, audio CAPTCHAs still exist. People who are hard of hearing or deaf may encounter a barrier when attempting these tests. Even with assistive technology, the intentional audio distortion and background noise make these samples challenging for individuals with auditory processing disorders to comprehend.

Issues Related To Motor And Dexterity

Tests requiring motor and dexterity skills can be challenging for people with motor impairments or physical disabilities. For example, someone with a hand tremor may find the sliding puzzles difficult. Image classification tests that keep loading new images until none matching the criteria remain can pose a similar challenge.

Issues Related To Cognition And Language

As CAPTCHAs become increasingly complex, some developers are turning to tests that require a combination of creative and critical thinking. Those that require users to solve a math problem or complete a puzzle can be challenging for people with dyslexia, dyscalculia, visual processing disorders, or cognitive impairments.

Why Assistive Technology Won’t Bridge The Gap

CAPTCHAs are intentionally designed for humans to interpret and solve, so assistive technology like screen readers and hands-free controls may be of little help. ReCAPTCHA in particular poses an issue because it analyzes background activity. If it flags the accessibility devices as bots, it will serve a potentially inaccessible CAPTCHA.

Even if this technology could bridge the gap, web developers shouldn’t expect it to. Industry standards dictate that they should follow universal design principles to make their websites as accessible and functional as possible.

CAPTCHA’s accessibility issues could be forgiven if it were an effective security tool, but it is far from foolproof since bots get it right more than humans do. Why keep using a method that is ineffective and creates barriers for people with disabilities?

There are better alternatives.

Principles For Accessible Authentication

The idea that humans should consistently outperform algorithms is outdated. Better authentication methods exist, such as multifactor authentication (MFA). The two-factor authentication market will be worth an estimated $26.7 billion by 2027, underscoring its popularity. This tool is more effective than a CAPTCHA because it prevents unauthorized access, even with legitimate credentials.

Ensure your MFA technique is accessible. Instead of having website visitors transcribe complex codes, you should send push notifications or SMS messages. Rely on the verification code autofill to automatically capture and enter the code. Alternatively, you can introduce a “remember this device” feature to skip authentication on trusted devices.

Apple’s two-factor authentication approach is designed this way. A trusted device automatically displays a six-digit verification code, so users do not have to search for it. When prompted, iPhone users can tap the suggestion that appears above their mobile keyboard to autofill the code.

Single sign-on is another option. This session and user authentication service allows people to log in to multiple websites or applications with a single set of login credentials, minimizing the need for repeated identity verification.

One-time-use “magic links” are an excellent alternative to reCAPTCHA and temporary PINs. Rather than remembering a code or solving a puzzle, the user clicks on a button. Avoid imposing deadlines because, according to WCAG Success Criterion 2.2.3, users should not face time limits since those with disabilities may need more time to complete specific actions.

Alternatively, you could use Cloudflare Turnstile. It authenticates without showing a CAPTCHA, and most people never even have to check a box or hit a button. The software works by issuing a small JavaScript challenge behind the scenes to automatically differentiate between bots and humans. Cloudflare Turnstile can be embedded into any website, making it an excellent alternative to standard classification tasks.

Testing And Evaluation Of Accessible Authentication Designs

Testing and evaluating your accessible alternative authentication methods is essential. Many designs look good on paper but do not work in practice. If possible, gather feedback from actual users. An open beta may be an effective way to maximize visibility.

Remember, general accessibility considerations do not only apply to people with disabilities. They also include those who are neurodivergent, lack access to a mobile device, or use assistive technology. Ensure your alternative designs consider these individuals.

Realistically, you cannot create a perfect system since everyone is unique. Many people struggle to follow multistep processes, solve equations, process complex instructions, or remember passcodes. While universal web design principles can improve flexibility, no single solution can meet everyone’s needs.

Regardless of the authentication technique you use, you should present users with multiple authentication options upfront. They know their capabilities best, so let them decide what to use instead of trying to over-engineer a solution that works for every edge case.

Address The Accessibility Problem With Design Changes

A person with hand tremors may be unable to complete a sliding puzzle, while someone with an audio processing disorder may have trouble with distorted audio samples. However, you cannot simply replace CAPTCHAs with alternatives because they are often equally inaccessible.

QR codes, for example, may be difficult to scan for those with reduced fine motor control, and people who are visually impaired may struggle to find the code on the screen. Similarly, biometrics can pose an issue for people with facial deformities or a limited range of motion. Addressing the accessibility problem requires creative thinking.

You can start by visiting the Web Accessibility Initiative’s accessibility tutorials for developers to better understand universal design. Although these tutorials focus more on content than authentication, you can still use them to your advantage. The W3C Group Draft Note on the Inaccessibility of CAPTCHA provides more relevant guidance.

Getting started is as easy as researching best practices. Understanding the basics is essential because there is no universal solution for accessible web design. If you want to optimize accessibility, consider sourcing feedback from the people who actually visit your website.


Design System Culture: What It Is And Why It Matters (Excerpt)

Tue, 11/25/2025 - 19:00

Design systems have become an integral part of our everyday work, so much so that the successful growth and maturation of a design system can make or break a product or project. Great tokens, components, and organization aren’t enough — it is most often the culture and curation that creates a sustainable, widely-adopted system. It can be hard to determine where to invest our time and attention. How do we build and maintain design systems that support our teams, enhance our work, and grow along with us?

Excerpt: Design System Culture

Culture is a funny thing. We all have some intuition about how important it is—at least we know we want to work in a great culture and avoid the toxic ones. But culture is notoriously difficult to define, and changing it can feel more like magic than reality. One company culture can be inspiring for some and boring for others, a place of growth for some and stifling for others.

Adding to the nuance, not only does your company have a culture as a whole, but it has many subcultures. That’s because culture is not created by any individual. Culture is something that happens when the same group of people gather together repeatedly over time. So, as a company grows, adding hierarchy and structure, the teams formed around specific goals, products, features, disciplines, and so on, all develop their own subcultures.

You probably have a design subculture. You probably have a product ownership subculture. You probably even have a subculture forming around those folks who get on a Zoom call every Tuesday at lunch to knit and chat. There are hundreds or more subcultures at most good-sized organizations. It’s complicated, nuanced, and immensely important.

When an individual is struggling with the way they are managed, one culture enables them to offer authentic feedback to their boss, while another leads them to look for a new job. When a company provides free lunch on Fridays, one culture creates a sense of gratitude for this benefit; another makes you feel like this free lunch comes with the expectation that you can’t ever leave work. One culture prioritizes financial results over respectful interactions. One culture encourages competition between teams, while another emphasizes collaboration with coworkers.

Why Culture?

At the beginning of 2021, my company was asked to help a large organization plan, design, and build a design system alongside the minimum viable product of a new product idea. This is the kind of work we truly love, so the team was excited to jump in.

As an author of a book about design systems, I want nothing more than to tell you how amazingly this engagement went. Instead, it was a tremendous struggle. Despite this being the perfect kind of work for my team and me on paper, we had to make the hard decision to walk away from our client at the end of that year. Not because we couldn’t do the work. Not because of any technical challenges or budget concerns. The reason we gave was “cultural incompatibility.” In almost twenty years of running my own businesses, this had never happened to me. After all, our clients don’t come to us because they have everything figured out — they come because they know they need help. If we couldn’t guide them through a difficult season, why did we even exist!?

Needless to say, it didn’t sit well with me. So, after following a few useless threads of fear that we just couldn’t cut it, I spent the next year diving down a rabbit hole of research on organizational culture. This next section is a summary of what I learned in that year and how I’ve been putting that to use since. To start, let’s find a common understanding of what culture is.

What Is Culture?

Over the last few decades, a lot has been said about workplace culture. From understanding why it matters and how it impacts the ways we lead, to offering methodologies for changing it. I’ve found tremendous value in the research and writings of Edgar Schein, a business theorist and psychologist. Schein offers a simple model to explain what culture is, breaking it down into three levels:

Artifacts

Artifacts are the top level of Schein’s model. These are the things people think of when you say “culture” — the visible perks a company offers. I once worked at a place where we could expense bringing in donuts for the team. Another job I had provided a foosball table. One company encouraged us to cook lunch together each week. These kinds of things, along with the company swag, the channel in Slack where you get to brag about your peers, and the company retreat are all “artifacts” of your company culture.

Espoused Values And Beliefs

The next layer down is called “espoused values and beliefs.” This is what people inside the culture say they believe. It’s the list of values, the mission statement, the vision. It’s the content on the website and plastered on the walls. It’s the stuff you expect to get when you accept the job because it’s how people answered all your questions throughout the interview process.

Basic Underlying Assumptions

The deepest layer is called “basic underlying assumptions.” This is what people inside the organization actually believe. It’s the way the leadership and employees behave, most notably in the face of a difficult decision. This layer is the root of your culture. And no matter what you show (artifacts), no matter what you say (espoused beliefs), the things you believe (underlying assumptions) will come out eventually.

It Starts At The Bottom

As an employee, you will experience these things from the top down. On your first day, you observe what’s happening around you — you see the artifacts of the culture. Eventually, you get to know a few folks. As you have more and more conversations with them, you’ll begin to hear how they talk about the culture — their espoused beliefs. At some point, people inside your culture will be faced with some tough situations. This is where the rubber meets the road and when you’ll learn what those individuals’ basic underlying assumptions are.

Unhealthy organizations don’t have a process for surfacing and valuing those underlying assumptions. Healthy organizations know that culture starts with the basic underlying assumptions of every individual at the company.

Unhealthy organizations try to create culture with perks and mission statements. Healthy organizations allow the top two layers to emerge naturally from the bottom layer.

When the basic underlying assumptions don’t line up with the espoused beliefs and artifacts, the disconnect is strong. It’s often hard to articulate the problem, but people will feel it. This is the company with a core value of “family first” that requires you to travel all the time with no recognition of the impact it has on your actual family. The espoused belief to prioritize family is not actively supported in the decisions being made.

Strength And Weakness

We all subconsciously know these things, and that is reflected in the language we use as we talk about the culture of an organization. We tend to use the words “strong” and “weak” to describe culture. You might say, “That company has a strong culture.” This statement is an indication that the layers are aligned, and that means the culture itself serves as a way of guiding decisions. If we all have shared values, we can trust one another’s ability to make decisions that will align with those values.

Conversely, an organization with a weak culture is missing the alignment between the things they say and the decisions they make. These cultures often continually add policies and procedures in order to police the behavior of individuals. In this scenario, the culture is weak because it doesn’t offer the organic guidance a stronger culture does — the misalignment means the things we choose to do differ from the things we say.

That is not to say policies and procedures are bad. As companies grow, there is a need to document the expectations for people. The proactive nature of a strong culture means these documents are often a formalization of what has emerged organically, whereas a weak culture reacts to negative situations in hopes of preventing the bad from happening again.

Editor’s Note

Do you like what you’ve read so far? This is just an excerpt of Ben’s upcoming book, Maturing Design Systems, in which he explores the anatomy of a design system, explains how culture shapes outcomes, and shares practical guidance for the challenges at each stage — from building v1 and growing healthy adoption to navigating “the teenage years” and ultimately running a stable, influential system.

Table of Contents
  • Context
    An introduction to the context of design systems, understanding where they live in your organization, what feeds them, and whether you should build one.
  • Design System Culture
    A deep dive into what culture is, why it’s important for design system teams to understand, and how it unlocks the ability for you to deliver real value.
  • The Anatomy of a Design System
    An exploration of the layers and parts that make up a design system based on the evaluation of hundreds of design systems over many years.
  • Maturity
    An overview of the design system maturity model, including the four stages of maturity, origin stories, a framework for maturing in a healthy way, and a framework for creating design system stability.
  • Stage 1, Building Version One
    A dive into what it means to be in stage 1 of the design system maturity model and a few mental models to keep you focused on the right things in this early stage.
  • Stage 2, Growing Adoption
    Unpacking stage 2 of the design system maturity model and a deep dive into adoption: broadening your perspective on adoption, the adoption curve, and how to create sustainable adoption.
  • Stage 3, Surviving the Teenage Years
    Understanding the relevant concerns for stage 3 of the design system maturity model and how to address the more nuanced challenges that come with this level of maturity.
  • Stage 4, Evolving a Healthy Program
    Exploring what it means to be in stage 4 of the design system maturity model, when you’ve become an influential leader in the eyes of the rest of your organization.
About The Author

Ben Callahan is an author, design system researcher, coach, and speaker. He founded Redwoods, a design system community, and The Question, a weekly forum for collaborative learning. As a founding partner at Sparkbox, he helps organizations embed human-centered culture into their design systems. His work bridges people and systems, emphasizing sustainable growth, team alignment, and meaningful impact in technology. He believes every interaction is an opportunity to learn.

Reviewers’ Testimonials

“This book is a clear and insightful blueprint for maturing design systems at scale. For well-supported teams, it offers strategy and clarity grounded in real examples. For smaller teams like mine, it serves as a North Star that helps you advocate for the work and find solutions that fit your team's maturity. I highly recommend it to anyone building a design system.”

Lenora Porter, Product Designer “Ben draws connections between process, collaboration, and identity in ways that feel both intuitive and revelatory. Many design system books live comfortably in the tactical and technical, but this one moves beyond the how and into the why — inviting readers to reflect on their roles not just as product owners, designers or engineers, but as stewards of shared understanding within complex organisations. This book doesn’t prescribe rigid solutions. Instead, it encourages self-inquiry and alignment, asking readers to consider how they can bring intentionality, empathy, and resilience into the systems they touch.”

Tarunya Varma, Product Design Manager, Tide “Ben Callahan’s “Maturing Design Systems” puts language to the struggles many of us feel but can’t quite explain. It unpacks the hidden influence of culture, setup, and leadership, providing you with the clarity, tools, and frameworks to course-correct and move your system work forward, whether you’re navigating a growing startup or a scaling enterprise.”

Ness Grixti, Design Lead, Wise, and Author of “A Practical Guide to Design System Components”

Don’t Miss Out!

Through years of interviews, coaching, and consulting, Ben has discovered a model for how design systems mature. Understanding how systems tend to mature allows you to create a sustainable program around your design system — one that acknowledges the human and change-management side of this work, not just the technical and creative.

This book will be a valuable resource for anyone working with design systems!

Spread The Word

Sign up to our Smashing newsletter and be one of the first to know when Maturing Design Systems is available for preorder. We can’t wait to share this book with you!


Keyframes Tokens: Standardizing Animation Across Projects

Fri, 11/21/2025 - 09:00

Picture this: you join a new project, dive into the codebase, and within the first few hours, you discover something frustratingly familiar. Scattered throughout the stylesheets, you find multiple @keyframes definitions for the same basic animations. Three different fade-in effects, two or three slide variations, a handful of zoom animations, and at least two different spin animations because, well, why not?

@keyframes pulse {
  from { scale: 1; }
  to { scale: 1.1; }
}

@keyframes bigger-pulse {
  0%, 20%, 100% { scale: 1; }
  10%, 40% { scale: 1.2; }
}

If this scenario sounds familiar, you’re not alone. In my experience across various projects, one of the most consistent quick wins I can deliver is consolidating and standardizing keyframes. It’s become such a reliable pattern that I now look forward to this cleanup as one of my first tasks on any new codebase.

The Logic Behind The Chaos

This redundancy makes perfect sense when you think about it. We all use the same fundamental animations in our day-to-day work: fades, slides, zooms, spins, and other common effects. These animations are pretty straightforward, and it's easy to whip up a quick @keyframes definition to get the job done.

Without a centralized animation system, developers naturally write these keyframes from scratch, unaware that similar animations already exist elsewhere in the codebase. This is especially common when working in component-based architectures (which most of us do these days), as teams often work in parallel across different parts of the application.

The result? Animation chaos.

The Small Problem

The most obvious issues with keyframes duplication are wasted development time and unnecessary code bloat. Multiple keyframe definitions mean multiple places to update when requirements change. Need to adjust the timing of your fade animation? You’ll need to hunt down every instance across your codebase. Want to standardize easing functions? Good luck finding all the variations. This multiplication of maintenance points makes even simple animation updates a time-consuming task.

The Bigger Problem

This keyframes duplication creates a much more insidious problem lurking beneath the surface: the global scope trap. Even when working with component-based architectures, CSS keyframes are always defined in the global scope. This means all keyframes apply to all components. Always. Your animation doesn’t necessarily use the keyframes you defined in your component; it uses whichever keyframes with that exact name were loaded into the global scope last.

As long as all your keyframes are identical, this might seem like a minor issue. But the moment you want to customize an animation for a specific use case, you’re in trouble, or worse, you’ll be the one causing trouble for others.

Either your animation won’t work because another component loaded after yours and overwrote your keyframes, or your component loads last and silently changes the animation behavior for every other component using keyframes with that name, and you may not even realize it.

Here’s a simple example that demonstrates the problem:

.component-one {
  /* component styles */
  animation: pulse 1s ease-in-out infinite alternate;
}

/* this @keyframes definition will not work */
@keyframes pulse {
  from { scale: 1; }
  to { scale: 1.1; }
}

/* later in the code... */
.component-two {
  /* component styles */
  animation: pulse 1s ease-in-out infinite;
}

/* this keyframes will apply to both components */
@keyframes pulse {
  0%, 20%, 100% { scale: 1; }
  10%, 40% { scale: 1.2; }
}

Both components use the same animation name, but the second @keyframes definition overwrites the first one. Now both component-one and component-two will use the second keyframes, regardless of which component defined which keyframes.

See the Pen Keyframes Tokens - Demo 1 [forked] by Amit Sheen.

The worst part? This often works perfectly in local development but breaks mysteriously in production when build processes change the loading order of your stylesheets. You end up with animations that behave differently depending on which components are loaded and in what sequence.

The Solution: Unified Keyframes

The answer to this chaos is surprisingly simple: predefined dynamic keyframes stored in a shared stylesheet. Instead of letting every component define its own animations, we create centralized keyframes that are well-documented, easy to use, maintainable, and tailored to the specific needs of your project.

Think of it as keyframes tokens. Just as we use tokens for colors and spacing, and many of us already use tokens for animation properties, like duration and easing functions, why not use tokens for keyframes as well?
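For comparison, property-level animation tokens are usually just custom properties defined in a shared scope. A minimal sketch (the token names here are illustrative, not from any particular system):

```css
/* Animation property tokens: durations and easing functions
   defined once, consumed everywhere. Names are illustrative. */
:root {
  --anim-duration-fast: 150ms;
  --anim-duration-slow: 400ms;
  --anim-ease-out: cubic-bezier(0.22, 1, 0.36, 1);
}

.button {
  /* consume the tokens instead of hard-coding values */
  transition: background-color var(--anim-duration-fast) var(--anim-ease-out);
}
```

Keyframes tokens extend this same idea one level up, from individual property values to whole named animations.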

This approach can integrate naturally with any current design token workflow you’re using, while solving both the small problem (code duplication) and the bigger problem (global scope conflicts) in one go.

The idea is straightforward: create a single source of truth for all our common animations. This shared stylesheet contains carefully crafted keyframes that cover the animation patterns our project actually uses. No more guessing whether a fade animation already exists somewhere in our codebase. No more accidentally overwriting animations from other components.

But here’s the key: these aren’t just static copy-paste animations. They’re designed to be dynamic and customizable through CSS custom properties, allowing us to maintain consistency while still having the flexibility to adapt animations to specific use cases, like if you need a slightly bigger “pulse” animation in one place.

Building The First Keyframes Token

One of the first pieces of low-hanging fruit we should tackle is the “fade-in” animation. In one of my recent projects, I found over a dozen separate fade-in definitions, and yes, they all simply animated the opacity from 0 to 1.

So, let’s create a new stylesheet, call it keyframes-tokens.css, import it into our project, and place our keyframes with proper comments inside of it.

/* keyframes-tokens.css */

/*
 * Fade In - fade entrance animation
 * Usage: animation: kf-fade-in 0.3s ease-out;
 */
@keyframes kf-fade-in {
  from { opacity: 0; }
  to { opacity: 1; }
}
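To make the tokens available everywhere, the file needs to load before any component styles. In a plain CSS setup (no bundler assumed), a single @import in the main stylesheet is enough; this is just a minimal sketch:

```css
/* main.css - load the shared keyframes tokens before component styles */
@import url("keyframes-tokens.css");

/* component styles follow and can reference any kf-* animation */
```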

This single @keyframes declaration replaces all those scattered fade-in animations across our codebase. Clean, simple, and globally applicable. And now that we have this token defined, we can use it from any component throughout our project:

.modal {
  animation: kf-fade-in 0.3s ease-out;
}

.tooltip {
  animation: kf-fade-in 0.2s ease-in-out;
}

.notification {
  animation: kf-fade-in 0.5s ease-out;
}

See the Pen Keyframes Tokens - Demo 2 [forked] by Amit Sheen.

Note: We’re using a kf- prefix in all our @keyframes names. This prefix serves as a namespace that prevents naming conflicts with existing animations in the project and makes it immediately clear that these keyframes come from our keyframes tokens file.

Making A Dynamic Slide

The kf-fade-in keyframes work great because they're simple and there's little room to mess things up. Other animations, however, need to be much more dynamic, and here we can leverage the enormous power of CSS custom properties. This is where keyframes tokens really shine compared to scattered static animations.

Let’s take a common scenario: “slide-in” animations. But slide in from where? 100px from the right? 50% from the left? Should it enter from the top of the screen? Or maybe float in from the bottom? So many possibilities, but instead of creating separate keyframes for each direction and each variation, we can build one flexible token that adapts to all scenarios:

/*
 * Slide In - directional slide animation
 * Use --kf-slide-from to control direction
 * Default: slides in from left (-100%)
 * Usage:
 *   animation: kf-slide-in 0.3s ease-out;
 *   --kf-slide-from: -100px 0;  // slide from left
 *   --kf-slide-from: 100px 0;   // slide from right
 *   --kf-slide-from: 0 -50px;   // slide from top
 */
@keyframes kf-slide-in {
  from { translate: var(--kf-slide-from, -100% 0); }
  to { translate: 0 0; }
}

Now we can use this single @keyframes token for any slide direction simply by changing the --kf-slide-from custom property:

.sidebar {
  animation: kf-slide-in 0.3s ease-out;
  /* Uses default value: slides from left */
}

.notification {
  animation: kf-slide-in 0.4s ease-out;
  --kf-slide-from: 0 -50px; /* slide from top */
}

.modal {
  animation: kf-fade-in 0.5s, kf-slide-in 0.5s cubic-bezier(0.34, 1.56, 0.64, 1);
  --kf-slide-from: 50px 50px; /* slide from bottom-right */
}

This approach gives us incredible flexibility while maintaining consistency. One keyframe declaration, infinite possibilities.

See the Pen Keyframes Tokens - Demo 3 [forked] by Amit Sheen.

And if we want to make our animations even more flexible, allowing for “slide-out” effects as well, we can simply add a --kf-slide-to custom property, similar to what we’ll see in the next section.
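That extension isn’t defined in this article’s token file, but a sketch of it could look like this, mirroring kf-slide-in with a --kf-slide-to custom property for the exit direction (the kf-slide-out name and the example selector are illustrative):

```css
/*
 * Slide Out - directional exit animation (sketch)
 * Use --kf-slide-to to control the exit direction
 * Default: slides out to the left (-100%)
 */
@keyframes kf-slide-out {
  from { translate: 0 0; }
  to { translate: var(--kf-slide-to, -100% 0); }
}

.dismissed-toast {
  animation: kf-slide-out 0.3s ease-in forwards;
  --kf-slide-to: 100px 0; /* slide out to the right */
}
```

The forwards fill mode keeps the element at its end position once the exit animation finishes.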

Bidirectional Zoom Keyframes

Another common animation that gets duplicated across projects is “zoom” effects. Whether it’s a subtle scale-up for toast messages, a dramatic zoom-in for modals, or a gentle scale-down effect for headings, zoom animations are everywhere.

Instead of creating separate keyframes for each scale value, let’s build one flexible set of kf-zoom keyframes:

/*
 * Zoom - scale animation
 * Use --kf-zoom-from and --kf-zoom-to to control scale values
 * Default: zooms from 80% to 100% (0.8 to 1)
 * Usage:
 *   animation: kf-zoom 0.2s ease-out;
 *   --kf-zoom-from: 0.5; --kf-zoom-to: 1;    // zoom from 50% to 100%
 *   --kf-zoom-from: 1; --kf-zoom-to: 0;      // zoom from 100% to 0%
 *   --kf-zoom-from: 1; --kf-zoom-to: 1.1;    // zoom from 100% to 110%
 */
@keyframes kf-zoom {
  from { scale: var(--kf-zoom-from, 0.8); }
  to { scale: var(--kf-zoom-to, 1); }
}

With one definition, we can achieve any zoom variation we need:

.toast {
  animation: kf-slide-in 0.2s, kf-zoom 0.4s ease-out;
  --kf-slide-from: 0 100%; /* slide from bottom */
  /* Uses default zoom: scales from 80% to 100% */
}

.modal {
  animation: kf-zoom 0.3s cubic-bezier(0.34, 1.56, 0.64, 1);
  --kf-zoom-from: 0; /* dramatic zoom from 0% to 100% */
}

.heading {
  animation: kf-fade-in 2s, kf-zoom 2s ease-in;
  --kf-zoom-from: 1.2; --kf-zoom-to: 0.8; /* gentle scale down */
}

The default of 0.8 (80%) works perfectly for most UI elements, like toast messages and cards, while still being easy to customize for special cases.

See the Pen Keyframes Tokens - Demo 4 [forked] by Amit Sheen.

You might have noticed something interesting in the recent examples: we've been combining animations. One of the key advantages of working with @keyframes tokens is that they’re designed to integrate seamlessly with each other. This smooth composition is intentional, not accidental.

We’ll discuss animation composition in more detail later, including where they can become problematic, but most combinations are straightforward and easy to implement.

Note: While writing this article, and maybe because of writing it, I found myself rethinking the whole idea of entrance animations. With all the recent advances in CSS, do we still need them at all? Luckily, Adam Argyle explored the same questions and expressed them brilliantly in his blog. This doesn’t contradict what’s written here, but it does present an approach worth considering, especially if your projects rely heavily on entrance animations.

Continuous Animations

While entrance animations, like “fade”, “slide”, and “zoom” happen once and then stop, continuous animations loop indefinitely to draw attention or indicate ongoing activity. The two most common continuous animations I encounter are “spin” (for loading indicators) and “pulse” (for highlighting important elements).

These animations present unique challenges when it comes to creating keyframes tokens. Unlike entrance animations that typically go from one state to another, continuous animations need to be highly customizable in their behavior patterns.

The Spin Doctor

Every project seems to use multiple spin animations. Some spin clockwise, others counterclockwise. Some do a single 360-degree rotation, others do multiple turns for a faster effect. Instead of creating separate keyframes for each variation, let’s build one flexible spin that handles all scenarios:

/*
 * Spin - rotation animation
 * Use --kf-spin-from and --kf-spin-to to control rotation range
 * Use --kf-spin-turns to control rotation amount
 * Default: rotates from 0deg to 360deg (1 full rotation)
 * Usage:
 *   animation: kf-spin 1s linear infinite;
 *   --kf-spin-turns: 2;                          // 2 full rotations
 *   --kf-spin-from: 0deg; --kf-spin-to: 180deg;  // half rotation
 *   --kf-spin-from: 0deg; --kf-spin-to: -360deg; // counterclockwise
 */
@keyframes kf-spin {
  from { rotate: var(--kf-spin-from, 0deg); }
  to {
    rotate: calc(var(--kf-spin-from, 0deg) +
      var(--kf-spin-to, 360deg) * var(--kf-spin-turns, 1));
  }
}

Now we can create any spin variation we like:

.loading-spinner {
  animation: kf-spin 1s linear infinite;
  /* Uses default: rotates from 0deg to 360deg */
}

.fast-loader {
  animation: kf-spin 1.2s ease-in-out infinite alternate;
  --kf-spin-turns: 3; /* 3 full rotations for each direction per cycle */
}

.stepped-reverse {
  animation: kf-spin 1.5s steps(8) infinite;
  --kf-spin-to: -360deg; /* counterclockwise */
}

.subtle-wiggle {
  animation: kf-spin 2s ease-in-out infinite alternate;
  --kf-spin-from: -16deg; --kf-spin-to: 32deg; /* wiggle 32deg: between -16deg and +16deg */
}

See the Pen Keyframes Tokens - Demo 5 [forked] by Amit Sheen.

The beauty of this approach is that the same keyframes work for loading spinners, rotating icons, wiggle effects, and even complex multi-turn animations.

The Pulse Paradox

Pulse animations are trickier because they can “pulse” different properties. Some pulse the scale, others pulse the opacity, and some pulse color properties like brightness or saturation. Rather than creating separate keyframes for each property, we can create keyframes that work with any CSS property.

Here's an example of a pulse keyframe with scale and opacity options:

/*
 * Pulse - pulsing animation
 * Use --kf-pulse-scale-from and --kf-pulse-scale-to to control scale range
 * Use --kf-pulse-opacity-from and --kf-pulse-opacity-to to control opacity range
 * Default: no pulse (all values 1)
 * Usage:
 *   animation: kf-pulse 2s ease-in-out infinite alternate;
 *   --kf-pulse-scale-from: 0.95; --kf-pulse-scale-to: 1.05; // scale pulse
 *   --kf-pulse-opacity-from: 0.7; --kf-pulse-opacity-to: 1; // opacity pulse
 */
@keyframes kf-pulse {
  from {
    scale: var(--kf-pulse-scale-from, 1);
    opacity: var(--kf-pulse-opacity-from, 1);
  }
  to {
    scale: var(--kf-pulse-scale-to, 1);
    opacity: var(--kf-pulse-opacity-to, 1);
  }
}

This creates a flexible pulse that can animate multiple properties:

.call-to-action {
  animation: kf-pulse 0.6s infinite alternate;
  --kf-pulse-opacity-from: 0.5; /* opacity pulse */
}

.notification-dot {
  animation: kf-pulse 0.6s ease-in-out infinite alternate;
  --kf-pulse-scale-from: 0.9; --kf-pulse-scale-to: 1.1; /* scale pulse */
}

.text-highlight {
  animation: kf-pulse 1.5s ease-out infinite;
  --kf-pulse-scale-from: 0.8; --kf-pulse-opacity-from: 0.2; /* scale and opacity pulse */
}

See the Pen Keyframes Tokens - Demo 6 [forked] by Amit Sheen.

This single kf-pulse keyframe can handle everything from subtle attention grabs to dramatic highlights, all while being easy to customize.

Advanced Easing

One of the great things about using keyframes tokens is how easy it is to expand our animation library and provide effects that most developers would not bother to write from scratch, like elastic or bounce.

Here is an example of a simple “bounce” keyframes token that uses a --kf-bounce-from custom property to control the jump height.

/*
 * Bounce - bouncing entrance animation
 * Use --kf-bounce-from to control jump height
 * Default: jumps from 100vh (off screen)
 * Usage:
 *   animation: kf-bounce 3s ease-in;
 *   --kf-bounce-from: 200px; // jump from 200px height
 */
@keyframes kf-bounce {
  0% { translate: 0 calc(var(--kf-bounce-from, 100vh) * -1); }
  34% { translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.4); }
  55% { translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.2); }
  72% { translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.1); }
  85% { translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.05); }
  94% { translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.025); }
  99% { translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.0125); }
  22%, 45%, 64%, 79%, 90%, 97%, 100% {
    translate: 0 0;
    animation-timing-function: ease-out;
  }
}

Animations like “elastic” are a bit trickier because of the calculations inside the keyframes. We need to define --kf-elastic-from-X and --kf-elastic-from-Y separately (both are optional), and together they let us create an elastic entrance from any point on the screen.

/*
 * Elastic In - elastic entrance animation
 * Use --kf-elastic-from-X and --kf-elastic-from-Y to control start position
 * Default: enters from the left (-50vw, 0)
 * Usage:
 *   animation: kf-elastic-in 2s ease-in-out both;
 *   --kf-elastic-from-X: -50px;
 *   --kf-elastic-from-Y: -200px; // enter from (-50px, -200px)
 */
@keyframes kf-elastic-in {
  0% { translate: calc(var(--kf-elastic-from-X, -50vw) * 1) calc(var(--kf-elastic-from-Y, 0px) * 1); }
  16% { translate: calc(var(--kf-elastic-from-X, -50vw) * -0.3227) calc(var(--kf-elastic-from-Y, 0px) * -0.3227); }
  28% { translate: calc(var(--kf-elastic-from-X, -50vw) * 0.1312) calc(var(--kf-elastic-from-Y, 0px) * 0.1312); }
  44% { translate: calc(var(--kf-elastic-from-X, -50vw) * -0.0463) calc(var(--kf-elastic-from-Y, 0px) * -0.0463); }
  59% { translate: calc(var(--kf-elastic-from-X, -50vw) * 0.0164) calc(var(--kf-elastic-from-Y, 0px) * 0.0164); }
  73% { translate: calc(var(--kf-elastic-from-X, -50vw) * -0.0058) calc(var(--kf-elastic-from-Y, 0px) * -0.0058); }
  88% { translate: calc(var(--kf-elastic-from-X, -50vw) * 0.0020) calc(var(--kf-elastic-from-Y, 0px) * 0.0020); }
  100% { translate: 0 0; }
}

This approach makes it easy to reuse and customize advanced keyframes across our project, just by changing a single custom property.

.bounce-and-zoom {
  animation: kf-bounce 3s ease-in, kf-zoom 3s linear;
  --kf-zoom-from: 0;
}

.bounce-and-slide {
  animation: kf-bounce 3s ease-in, kf-slide-in 3s ease-out;
  animation-composition: add; /* Both animations use translate */
  --kf-slide-from: -200px;
}

.elastic-in {
  animation: kf-elastic-in 2s ease-in-out both;
}

See the Pen Keyframes Tokens - Demo 7 [forked] by Amit Sheen.

Up to this point, we’ve seen how we can consolidate keyframes in a smart and efficient way. Of course, you might want to tweak things to better fit your project’s needs, but we’ve covered examples of several common animations and everyday use cases. And with these keyframes tokens in place, we now have powerful building blocks for creating consistent, maintainable animations across the entire project. No more duplicated keyframes, no more global scope conflicts. Just a clean, convenient way to handle all our animation needs.

But the real question is: How do we compose these building blocks together?

Putting It All Together

We’ve seen that combining basic keyframes tokens is simple. We don’t need anything special: define the first animation, define the second one, set the variables as needed, and that’s it.

/* Fade in + slide in */
.toast {
  animation: kf-fade-in 0.4s, kf-slide-in 0.4s cubic-bezier(0.34, 1.56, 0.64, 1);
  --kf-slide-from: 0 40px;
}

/* Zoom in + fade in */
.modal {
  animation: kf-fade-in 0.3s, kf-zoom 0.3s cubic-bezier(0.34, 1.56, 0.64, 1);
  --kf-zoom-from: 0.7; --kf-zoom-to: 1;
}

/* Slide in + pulse */
.notification {
  animation: kf-slide-in 0.5s, kf-pulse 1.2s ease-in-out infinite alternate;
  --kf-slide-from: -100px 0;
  --kf-pulse-scale-from: 0.95; --kf-pulse-scale-to: 1.05;
}

These combinations work beautifully because each animation targets a different property: opacity, transform (translate/scale), etc. But sometimes there are conflicts, and we need to know why and how to deal with them.

When two animations try to animate the same property — for example, both animating scale or both animating opacity — the result will not be what you expect. By default, only one of the animations is actually applied to that property, which is the last one in the animation list. This is a limitation of how CSS handles multiple animations on the same property.

For example, this will not work as intended because only the kf-pulse animation will apply.

.bad-combo {
  animation: kf-zoom 0.5s forwards, kf-pulse 1.2s infinite alternate;
  --kf-zoom-from: 0.5; --kf-zoom-to: 1.2;
  --kf-pulse-scale-from: 0.8; --kf-pulse-scale-to: 1.1;
}

Animation Addition

The simplest and most direct way to handle multiple animations that affect the same property is the animation-composition property. In the last example above, the kf-pulse animation replaces the kf-zoom animation, so we will not see the initial zoom and will not get the expected end scale of 1.2.

By setting the animation-composition to add, we tell the browser to combine both animations. This gives us the result we want.

.bad-combo {
  animation-composition: add;
}

See the Pen Keyframes Tokens - Demo 8 [forked] by Amit Sheen.

This approach works well for most cases where we want to combine effects on the same property. It is also useful when we need to combine animations with static property values.

For example, if we have an element that uses the translate property to position it exactly where we want, and then we want to animate it in with the kf-slide-in keyframes, we get a nasty visible jump without animation-composition.

See the Pen Keyframes Tokens - Demo 9 [forked] by Amit Sheen.

With animation-composition set to add, the animation is smoothly combined with the existing transform, so the element stays in place and animates as expected.
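As a rough sketch of that scenario (the selector and offset values here are illustrative, not taken from the demo):

```css
/* The badge is positioned with a static translate... */
.floating-badge {
  translate: 40px -20px;
  animation: kf-slide-in 0.4s ease-out;
  /* ...and the entrance animation is added on top of the
     static offset instead of replacing it, so no jump */
  animation-composition: add;
  --kf-slide-from: 0 30px; /* slide up into the final position */
}
```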

Animation Stagger

Another way of handling multiple animations is to “stagger” them — that is, start the second animation slightly after the first one finishes. It is not a solution that works for every case, but it is useful when we have an entrance animation followed by a continuous animation.

/* fade in + opacity pulse */
.notification {
  animation: kf-fade-in 2s ease-out, kf-pulse 0.5s 2s ease-in-out infinite alternate;
  --kf-pulse-opacity-to: 0.5;
}

See the Pen Keyframes Tokens - Demo 10 [forked] by Amit Sheen.

Order Matters

A large part of the animations we work with use the transform property. In most cases, this is simply more convenient. It also has a performance advantage as transform animations can be GPU-accelerated. But if we use transforms, we need to accept that the order in which we perform our transformations matters. A lot.

In our keyframes so far, we’ve used individual transforms. According to the specs, these are always applied in a fixed order: first, the element gets translate, then rotate, then scale. This makes sense and is what most of us expect.

However, if we use the transform property, the order in which the functions are written is the order in which they are applied. In this case, if we move something 100 pixels on the X-axis and then rotate it by 45 degrees, it is not the same as first rotating it by 45 degrees and then moving it 100 pixels.

/* Pink square: First translate, then rotate */
.example-one {
  transform: translateX(100px) rotate(45deg);
}

/* Green square: First rotate, then translate */
.example-two {
  transform: rotate(45deg) translateX(100px);
}

See the Pen Keyframes Tokens - Demo 11 [forked] by Amit Sheen.

But according to the transform order, all individual transforms (everything we’ve used for the keyframes tokens) are applied before the transform functions. That means anything you set in the transform property will happen after the animations. But if you set, for example, translate together with the kf-spin keyframes, the translate will happen before the animation. Confused yet?!

This leads to situations where static values can cause different results for the same animation, like in the following case:

/* Common animation for both spinners */
.spinner {
  animation: kf-spin 1s linear infinite;
}

/* Pink spinner: translate before rotate (individual transform) */
.spinner-pink {
  translate: 100% 50%;
}

/* Green spinner: rotate then translate (function order) */
.spinner-green {
  transform: translate(100%, 50%);
}

See the Pen Keyframes Tokens - Demo 12 [forked] by Amit Sheen.

You can see that the first spinner (pink) gets a translate that happens before the rotate of kf-spin, so it first moves to its place and then spins. The second spinner (green) gets a translate() function that happens after the individual rotate from kf-spin, so the element first spins, then moves relative to its current angle, and we get that wide orbit effect.

No, this is not a bug. It is just one of those things we need to know about CSS and keep in mind when working with multiple animations or multiple transforms. If needed, you can also create an additional set of kf-spin-alt keyframes that rotate elements using the rotate() function.
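Such an alternative set isn’t part of the tokens defined earlier; one possible sketch reuses the same custom properties but animates the transform property with a rotate() function instead of the individual rotate:

```css
/*
 * Spin Alt (sketch) - rotation via the transform property
 * Applies after any static translate set on the element
 */
@keyframes kf-spin-alt {
  from { transform: rotate(var(--kf-spin-from, 0deg)); }
  to {
    transform: rotate(calc(var(--kf-spin-from, 0deg) +
      var(--kf-spin-to, 360deg) * var(--kf-spin-turns, 1)));
  }
}
```

Keep in mind that animating transform overrides any static transform value on the element for the duration of the animation, unless you also use animation-composition.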

Reduced Motion

And while we’re talking about alternative keyframes, we cannot ignore the “no animation” option. One of the biggest advantages of using keyframes tokens is that accessibility can be baked in, and it is actually quite easy to do. By designing our keyframes with accessibility in mind, we can ensure that users who prefer reduced motion get a smoother, less distracting experience, without extra work or code duplication.

The exact meaning of “Reduced Motion” can change a bit from one animation to another, and from project to project, but here are a few important points to keep in mind:

Muting Keyframes

While some animations can be softened or slowed down, there are others that should disappear completely when reduced motion is requested. Pulse animations are a good example. To make sure these animations do not run in reduced motion mode, we can simply wrap them in the appropriate media query.

@media (prefers-reduced-motion: no-preference) {
  @keyframes kf-pulse {
    from {
      scale: var(--kf-pulse-scale-from, 1);
      opacity: var(--kf-pulse-opacity-from, 1);
    }
    to {
      scale: var(--kf-pulse-scale-to, 1);
      opacity: var(--kf-pulse-opacity-to, 1);
    }
  }
}

This ensures that users who have set prefers-reduced-motion to reduce will not see the animation and will get an experience that matches their preference.

Instant In

There are some keyframes we cannot simply remove, such as entrance animations. The value still has to change; otherwise, the element won't end up with its correct final state. But in reduced motion, this transition from the initial value should be instant.

To achieve this, we’ll define an extra set of keyframes where the value jumps immediately to the end state. These become our default keyframes. Then, we’ll add the regular keyframes inside a media query for prefers-reduced-motion set to no-preference, just like in the previous example.

/* pop in instantly for reduced motion */
@keyframes kf-zoom {
  from, to { scale: var(--kf-zoom-to, 1); }
}

@media (prefers-reduced-motion: no-preference) {
  /* Original zoom keyframes */
  @keyframes kf-zoom {
    from { scale: var(--kf-zoom-from, 0.8); }
    to { scale: var(--kf-zoom-to, 1); }
  }
}

This way, users who prefer reduced motion will see the element appear instantly in its final state, while everyone else gets the animated transition.

The Soft Approach

There are cases where we do want to keep some movement, but much softer and calmer than the original animation. For example, we can replace a bounce entrance with a gentle fade-in.

@keyframes kf-bounce {
  /* Soft fade-in for reduced motion */
}

@media (prefers-reduced-motion: no-preference) {
  @keyframes kf-bounce {
    /* Original bounce keyframes */
  }
}

Now, users with reduced motion enabled still get a sense of appearance, but without the intense movement of a bounce or elastic animation.
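Filled in, that skeleton might look like the following; the fade fallback values are illustrative, and the no-preference branch abbreviates the kf-bounce keyframes shown earlier:

```css
/* Reduced-motion default: a calm fade instead of a bounce */
@keyframes kf-bounce {
  from { opacity: 0; }
  to { opacity: 1; }
}

@media (prefers-reduced-motion: no-preference) {
  /* Original bounce keyframes */
  @keyframes kf-bounce {
    0% { translate: 0 calc(var(--kf-bounce-from, 100vh) * -1); }
    34% { translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.4); }
    /* ...remaining bounce steps from the earlier definition... */
    22%, 45%, 64%, 79%, 90%, 97%, 100% {
      translate: 0 0;
      animation-timing-function: ease-out;
    }
  }
}
```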

With the building blocks in place, the next question is how to make them part of the actual workflow. Writing flexible keyframes is one thing, but making them reliable across a large project requires a few strategies that I had to learn the hard way.

Implementation Strategies & Best Practices

Once we have a solid library of keyframes tokens, the real challenge is how to bring them into everyday work.

  • The temptation is to drop all keyframes in at once and declare the problem solved, but in practice I have found that the best results come from gradual adoption. Start with the most common animations, such as fade or slide. These are easy wins that show immediate value without requiring big rewrites.
  • Naming is another point that deserves attention. A consistent prefix or namespace makes it obvious which animations are tokens and which are local one-offs. It also prevents accidental collisions and helps new team members recognize the shared system at a glance.
  • Documentation is just as important as the code itself. Even a short comment above each keyframes token can save hours of guessing later. A developer should be able to open the tokens file, scan for the effect they need, and copy the usage pattern straight into their component.
  • Flexibility is what makes this approach worth the effort. By exposing sensible custom properties, we give teams room to adapt the animation without breaking the system. At the same time, try not to overcomplicate. Provide the knobs that matter and keep the rest opinionated.
  • Finally, remember accessibility. Not every animation needs a reduced motion alternative, but many do. Baking in these adjustments early means we never have to retrofit them later, and it shows a level of care that our users will notice even if they never mention it.

In my experience, treating keyframes tokens as part of our design tokens workflow is what makes them stick. Once they are in place, they stop feeling like special effects and become part of the design language, a natural extension of how the product moves and responds.

Wrapping Up

Animations can be one of the most joyful parts of building interfaces, but without structure, they can also become one of the biggest sources of frustration. By treating keyframes as tokens, you take something that is usually messy and hard to manage and turn it into a clear, predictable system.

The real value is not just in saving a few lines of code. It is in the confidence that when you use a fade, slide, zoom, or spin, you know exactly how it will behave across the project. It is in the flexibility that comes from custom properties without the chaos of endless variations. And it is in the accessibility built into the foundation rather than added as an afterthought.

I have seen these ideas work in different teams and different codebases, and the pattern is always the same.

Once the tokens are in place, keyframes stop being a scattered collection of tricks and become part of the design language. They make the product feel more intentional, more consistent, and more alive.

If you take one thing from this article, let it be this: animations deserve the same care and structure we already give to colors, typography, and spacing. A small investment in keyframes tokens pays off every time your interface moves.

Categories: American

From Chaos To Clarity: Simplifying Server Management With AI And Automation

Tue, 11/18/2025 - 11:00

This article is a sponsored by Cloudways

If you build or manage websites for a living, you know the feeling. Your day is a constant juggle; one moment you’re fine-tuning a design, the next you’re troubleshooting a slow server or a mysterious error. Daily management of a complex web of plugins, integrations, and performance tools often feels like you’re just reacting to problems—putting out fires instead of building something new.

This reactive cycle is exhausting, and it pulls your focus away from meaningful work and into the technical weeds. A recent industry event, Cloudways Prepathon 2025, put a sharp focus on this very challenge. The discussions made it clear: the future of web work demands a better way. It requires an infrastructure that’s ready for AI; one that can actively help you turn this daily chaos into clarity.

The stakes for performance are higher than ever.

Suhaib Zaheer, SVP of Managed Hosting at DigitalOcean, and Ali Ahmed Khan, Sr. Director of Product Management, shared a telling statistic during their panel: 53% of mobile visitors will leave a site if it takes more than three seconds to load.

Think about that for a second: in just three seconds, over half of your potential mobile visitors are gone. This isn’t just about a slow website, but about lost trust, abandoned carts, and missed opportunities. Performance is no longer just a feature; it’s the foundation of user experience. And in today’s landscape, automation is the key to maintaining it consistently.

So how do we stop reacting and start preventing?

The Old Way: A Constant State Of Alert

For too long, server management has worked like this: something breaks, you receive an alert (or worse, a client complaint), and you start digging. You log into your server, check logs, try to correlate different metrics, and eventually (hopefully) find the root cause. Then you manually apply a fix.

This process is fragile and relies on your constant attention while eating up hours that could be spent on development, strategy, or client work. For freelancers and small teams, this time is your most valuable asset. Every minute spent manually diagnosing a disk space issue or a web stack failure is a minute not spent on growing your business.

The problem isn't a lack of tools. It's that most tools just show you the data; they don't help you understand it or act on it. They add to the noise instead of providing clarity.

A New Approach: From Diagnosis To Automatic Resolution

This is where a shift towards intelligent automation changes the game. Tools like Cloudways Copilot, which became generally available earlier this year, are built specifically to simplify this workflow. The goal is straightforward: combine AI-driven diagnostics with automated fixes to predict and resolve performance issues before they affect your users.

Here’s a practical look at how it works.

Imagine your site starts running slowly. In the past, you'd begin the tedious investigation.

1. The AI Insights

Instead of a generic "high CPU" alert, you get a detailed insight. It tells you what happened (e.g., "MySQL process is consuming excessive resources"), why it happened (e.g., "caused by a poorly optimized query from a recent plugin update"), and provides a step-by-step guide to fix it manually. This alone cuts diagnosis time from 30-40 minutes down to about five. You understand the problem, not just the diagnosis.

2. The SmartFix

This is where it moves from helpful to transformative. For common issues, you don’t just get a manual guide. You get a one-click SmartFix button. After reviewing the actions Copilot will take, you can let it automatically resolve the issue. It applies the necessary steps safely and without you needing to touch a command line. This is the clarity we’re talking about. The system doesn’t just tell you about the problem; it solves it for you.

For developers managing multiple sites, this is a fundamental change. It means you can handle routine server issues at scale. A disk cleanup that would have required logging into ten different servers can now be handled with a few clicks. It frees your brain from repetitive troubleshooting and lets you focus on the work that actually requires your expertise.

Building An AI-Ready Foundation

The principles discussed at Prepathon go beyond any single tool. The theme was about building a resilient foundation. Meeky Hwang, CEO at Ndevr, introduced the "3E Framework," which perfectly applies here. A strong platform must balance:

  • Audience Experience
    What your visitors see and feel—blazing speed and seamless operation.
  • Creator Experience
    The workflow for you and your team—managing content and marketing without technical friction.
  • Developer Experience
    The backend foundation—server management that is secure, stable, and efficient.

AI-driven server management directly strengthens all three. A faster, more stable server improves the Audience Experience. Fewer emergencies and simpler workflows improve the Creator and Developer Experience. When these are aligned, you can scale with confidence.

This Isn’t About Replacing You

It’s important to be clear. This isn’t about replacing the developer but about augmenting your capabilities. As Vito Peleg, Co-founder & CEO at Atarim, noted during Prepathon:

“We're all becoming prompt engineers in the modern world. Our job is no longer to do the task, but to orchestrate the fleet of AI agents that can do it at a scale we never could alone.”

— Vito Peleg, Co-founder & CEO at Atarim

Think of Cloudways Copilot as an expert sysadmin on your team. It handles the routine, often tedious, work. It alerts you to what’s important and provides clear, actionable context. This gives you back the mental space and time to focus on architecture, innovation, and client strategy.

“The challenge isn’t managing servers anymore — it’s managing focus,”

Suhaib Zaheer noted.

“AI-driven infrastructure should help developers spend less time reacting to issues and more time creating better digital experiences.”

A Practical Path Forward

For freelancers, WordPress experts, and small agency developers, this shift offers a tangible way to:

  • Drastically reduce the hours spent manually troubleshooting infrastructure issues.
  • Implement predictive monitoring that catches slowdowns and bottlenecks early.
  • Manage your entire stack through clear, plain-English AI insights instead of raw data.
  • Balance speed, security, and uptime without needing an enterprise-scale budget or team.

The goal is to make powerful infrastructure simple, while also giving you back control and your time so you can focus on what you do best: creating exceptional web experiences.

You can use promo code BFCM5050 to get 50% off for 3 months plus 50 Free Migrations using Cloudways. This offer is valid from November 18th to December 4th, 2025.


Gamepad API Visual Debugging With CSS Layers

Fri, 11/14/2025 - 14:00

When you plug in a controller, you mash buttons, move the sticks, pull the triggers… and as a developer, you see none of it. The browser’s picking it up, sure, but unless you’re logging numbers in the console, it’s invisible. That’s the headache with the Gamepad API.

It’s been around for years, and it’s actually pretty powerful. You can read buttons, sticks, triggers, the works. But most people don’t touch it. Why? Because there’s no feedback. No panel in developer tools. No clear way to know if the controller’s even doing what you think. It feels like flying blind.

That bugged me enough to build a little tool: Gamepad Cascade Debugger. Instead of staring at console output, you get a live, interactive view of the controller. Press something and it reacts on the screen. And with CSS Cascade Layers, the styles stay organized, so it’s cleaner to debug.

In this post, I’ll show you why debugging controllers is such a pain, how CSS helps clean it up, and how you can build a reusable visual debugger for your own projects.

Even if you are able to log them all, you’ll quickly end up with unreadable console spam. For example:

[0,0,1,0,0,0.5,0,...] [0,0,0,0,1,0,0,...] [0,0,1,0,0,0,0,...]

Can you tell what button was pressed? Maybe, but only after straining your eyes and missing a few inputs. So, no, debugging doesn’t come easily when it comes to reading inputs.
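To make that concrete, here is a tiny helper (purely illustrative, not part of the debugger we will build) that diffs two frames of button values and names what actually changed:

```javascript
// Compare two snapshots of raw button values and describe the changes.
function diffButtons(prev, next) {
  const changes = [];
  next.forEach((value, i) => {
    if (value !== prev[i]) {
      changes.push(`Button ${i}: ${prev[i]} → ${value}`);
    }
  });
  return changes;
}

// Two of the raw frames from above become readable at a glance:
console.log(diffButtons([0, 0, 1, 0, 0, 0.5], [0, 0, 0, 0, 1, 0]));
// ["Button 2: 1 → 0", "Button 4: 0 → 1", "Button 5: 0.5 → 0"]
```

Even this crude diff beats eyeballing raw arrays, and it hints at the real fix: map inputs to something you can see.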

Problem 3: Lack Of Structure

Even if you throw together a quick visualizer, styles can quickly get messy. Default, active, and debug states can overlap, and without a clear structure, your CSS becomes brittle and hard to extend.

CSS Cascade Layers can help. They group styles into “layers” that are ordered by priority, so you stop fighting specificity and guessing, “Why isn’t my debug style showing?” Instead, you maintain separate concerns:

  • Base: The controller’s standard, initial appearance.
  • Active: Highlights for pressed buttons and moved sticks.
  • Debug: Overlays for developers (e.g., numeric readouts, guides, and so on).

If we were to define layers in CSS according to this, we’d have:

/* lowest to highest priority */
@layer base, active, debug;

@layer base { /* ... */ }
@layer active { /* ... */ }
@layer debug { /* ... */ }

Because each layer stacks predictably, you always know which rules win. That predictability makes debugging not just easier, but actually manageable.
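If the layered lookup feels abstract, the rule can be sketched as a toy model in JavaScript (assuming equal specificity inside each layer; the real cascade has more inputs): the highest-priority layer that sets a property wins.

```javascript
// Toy model of cascade layers: given layers in declared order (lowest to
// highest priority) and each layer's declarations, the last layer in the
// order that sets a property supplies the winning value.
function resolve(layerOrder, declarations, property) {
  let winner;
  for (const layer of layerOrder) {
    if (declarations[layer] && property in declarations[layer]) {
      winner = declarations[layer][property];
    }
  }
  return winner;
}

const order = ["base", "active", "debug"];
const decls = {
  base:   { background: "#222" },
  active: { background: "#0f0" },
  debug:  { color: "#fff" },
};

console.log(resolve(order, decls, "background")); // "#0f0" — active beats base
console.log(resolve(order, decls, "color"));      // "#fff" — only debug sets it
```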

We’ve covered the problem (invisible, messy input) and the approach (a visual debugger built with Cascade Layers). Now we’ll walk through the step-by-step process to build the debugger.

The Debugger Concept

The easiest way to make hidden input visible is to just draw it on the screen. That’s what this debugger does. Buttons, triggers, and joysticks all get a visual.

  • Press A: A circle lights up.
  • Nudge the stick: The circle slides around.
  • Pull a trigger halfway: A bar fills halfway.

Now you’re not staring at 0s and 1s, but actually watching the controller react live.

Of course, once you start piling on states like default, pressed, debug info, maybe even a recording mode, the CSS starts getting larger and more complex. That’s where cascade layers come in handy. Here’s a stripped-down example:

@layer base {
  .button {
    background: #222;
    border-radius: 50%;
    width: 40px;
    height: 40px;
  }
}

@layer active {
  .button.pressed {
    background: #0f0; /* bright green */
  }
}

@layer debug {
  .button::after {
    content: attr(data-value);
    font-size: 12px;
    color: #fff;
  }
}

The layer order matters: base → active → debug.

  • base draws the controller.
  • active handles pressed states.
  • debug throws on overlays.

Breaking it up like this means you’re not fighting weird specificity wars. Each layer has its place, and you always know what wins.

Building It Out

Let’s get something on screen first. It doesn’t need to look good — just needs to exist so we have something to work with.

<h1>Gamepad Cascade Debugger</h1>

<!-- Main controller container -->
<div id="controller">
  <!-- Action buttons -->
  <div id="btn-a" class="button">A</div>
  <div id="btn-b" class="button">B</div>
  <div id="btn-x" class="button">X</div>

  <!-- Pause/menu button (represented as two bars) -->
  <div>
    <div id="pause1" class="pause"></div>
    <div id="pause2" class="pause"></div>
  </div>
</div>

<!-- Toggle button to start/stop the debugger -->
<button id="toggle">Toggle Debug</button>

<!-- Status display for showing which buttons are pressed -->
<div id="status">Debugger inactive</div>

<script src="script.js"></script>

That’s literally just boxes. Not exciting yet, but it gives us handles to grab later with CSS and JavaScript.

Okay, I’m using cascade layers here because it keeps stuff organized once you add more states. Here’s a rough pass:

/* ===================================
   CASCADE LAYERS SETUP
   Order matters: base → active → debug
   =================================== */

/* Define layer order upfront */
@layer base, active, debug;

/* Layer 1: Base styles - default appearance */
@layer base {
  .button {
    background: #333;
    border-radius: 50%;
    width: 70px;
    height: 70px;
    display: flex;
    justify-content: center;
    align-items: center;
  }

  .pause {
    width: 20px;
    height: 70px;
    background: #333;
    display: inline-block;
  }
}

/* Layer 2: Active states - handles pressed buttons */
@layer active {
  .button.active {
    background: #0f0;      /* Bright green when pressed */
    transform: scale(1.1); /* Slightly enlarges the button */
  }

  .pause.active {
    background: #0f0;
    transform: scaleY(1.1); /* Stretches vertically when pressed */
  }
}

/* Layer 3: Debug overlays - developer info */
@layer debug {
  .button::after {
    content: attr(data-value); /* Shows the numeric value */
    font-size: 12px;
    color: #fff;
  }
}

The beauty of this approach is that each layer has a clear purpose. The base layer can never override active, and active can never override debug, regardless of specificity. This eliminates the CSS specificity wars that usually plague debugging tools.

Now it looks like some clusters are sitting on a dark background. Honestly, not too bad.

Adding the JavaScript

JavaScript time. This is where the controller actually does something. We’ll build this step by step.

Step 1: Set Up State Management

First, we need variables to track the debugger’s state:

// ===================================
// STATE MANAGEMENT
// ===================================
let running = false; // Tracks whether the debugger is active
let rafId;           // Stores the requestAnimationFrame ID for cancellation

These variables control the animation loop that continuously reads gamepad input.

Step 2: Grab DOM References

Next, we get references to all the HTML elements we’ll be updating:

// ===================================
// DOM ELEMENT REFERENCES
// ===================================
const btnA = document.getElementById("btn-a");
const btnB = document.getElementById("btn-b");
const btnX = document.getElementById("btn-x");
const pause1 = document.getElementById("pause1");
const pause2 = document.getElementById("pause2");
const status = document.getElementById("status");

Storing these references up front is more efficient than querying the DOM repeatedly.

Step 3: Add Keyboard Fallback

For testing without a physical controller, we’ll map keyboard keys to buttons:

// ===================================
// KEYBOARD FALLBACK (for testing without a controller)
// ===================================
const keyMap = {
  "a": btnA,
  "b": btnB,
  "x": btnX,
  "p": [pause1, pause2] // 'p' key controls both pause bars
};

This lets us test the UI by pressing keys on a keyboard.

Step 4: Create The Main Update Loop

Here’s where the magic happens. This function runs continuously and reads gamepad state:

// ===================================
// MAIN GAMEPAD UPDATE LOOP
// ===================================
function updateGamepad() {
  // Get all connected gamepads
  const gamepads = navigator.getGamepads();
  if (!gamepads) return;

  // Use the first connected gamepad
  const gp = gamepads[0];

  if (gp) {
    // Update button states by toggling the "active" class
    btnA.classList.toggle("active", gp.buttons[0].pressed);
    btnB.classList.toggle("active", gp.buttons[1].pressed);
    btnX.classList.toggle("active", gp.buttons[2].pressed);

    // Handle pause button (button index 9 on most controllers)
    const pausePressed = gp.buttons[9].pressed;
    pause1.classList.toggle("active", pausePressed);
    pause2.classList.toggle("active", pausePressed);

    // Build a list of currently pressed buttons for status display
    let pressed = [];
    gp.buttons.forEach((btn, i) => {
      if (btn.pressed) pressed.push("Button " + i);
    });

    // Update status text if any buttons are pressed
    if (pressed.length > 0) {
      status.textContent = "Pressed: " + pressed.join(", ");
    }
  }

  // Continue the loop if debugger is running
  if (running) {
    rafId = requestAnimationFrame(updateGamepad);
  }
}

The classList.toggle() method adds or removes the active class based on whether the button is pressed, which triggers our CSS layer styles.
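The two-argument form can be modeled over a plain Set of class names to see exactly what it does (a sketch for intuition, not the DOM implementation):

```javascript
// Model of classList.toggle(name, force): with force === true it behaves
// like add(), with force === false like remove(). Like the DOM method,
// it returns whether the class is present afterwards.
function toggle(classSet, name, force) {
  if (force) {
    classSet.add(name);
  } else {
    classSet.delete(name);
  }
  return classSet.has(name);
}

const classes = new Set(["button"]);
console.log(toggle(classes, "active", true));  // true — class added
console.log(toggle(classes, "active", false)); // false — class removed
```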

Step 5: Handle Keyboard Events

These event listeners make the keyboard fallback work:

// ===================================
// KEYBOARD EVENT HANDLERS
// ===================================
document.addEventListener("keydown", (e) => {
  if (keyMap[e.key]) {
    // Handle single or multiple elements
    if (Array.isArray(keyMap[e.key])) {
      keyMap[e.key].forEach(el => el.classList.add("active"));
    } else {
      keyMap[e.key].classList.add("active");
    }
    status.textContent = "Key pressed: " + e.key.toUpperCase();
  }
});

document.addEventListener("keyup", (e) => {
  if (keyMap[e.key]) {
    // Remove active state when key is released
    if (Array.isArray(keyMap[e.key])) {
      keyMap[e.key].forEach(el => el.classList.remove("active"));
    } else {
      keyMap[e.key].classList.remove("active");
    }
    status.textContent = "Key released: " + e.key.toUpperCase();
  }
});

Step 6: Add Start/Stop Control

Finally, we need a way to toggle the debugger on and off:

// ===================================
// TOGGLE DEBUGGER ON/OFF
// ===================================
document.getElementById("toggle").addEventListener("click", () => {
  running = !running; // Flip the running state

  if (running) {
    status.textContent = "Debugger running...";
    updateGamepad(); // Start the update loop
  } else {
    status.textContent = "Debugger inactive";
    cancelAnimationFrame(rafId); // Stop the loop
  }
});

So yeah, press a button and it glows. Push the stick and it moves. That’s it.

One more thing: raw values. Sometimes you just want to see numbers, not lights.

At this stage, you should see:

  • A simple on-screen controller,
  • Buttons that react as you interact with them, and
  • An optional debug readout showing pressed button indices.

To make this less abstract, here’s a quick demo of the on-screen controller reacting in real time:

Now, pressing Start Recording logs everything until you hit Stop Recording.

2. Exporting Data to CSV/JSON

Once we have a log, we’ll want to save it.

<div class="controls">
  <button id="export-json" class="btn">Export JSON</button>
  <button id="export-csv" class="btn">Export CSV</button>
</div>

Step 1: Create The Download Helper

First, we need a helper function that handles file downloads in the browser:

// ===================================
// FILE DOWNLOAD HELPER
// ===================================
function downloadFile(filename, content, type = "text/plain") {
  // Create a blob from the content
  const blob = new Blob([content], { type });
  const url = URL.createObjectURL(blob);

  // Create a temporary download link and click it
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();

  // Clean up the object URL after download
  setTimeout(() => URL.revokeObjectURL(url), 100);
}

This function works by creating a Blob (binary large object) from your data, generating a temporary URL for it, and programmatically clicking a download link. The cleanup ensures we don’t leak memory.

Step 2: Handle JSON Export

JSON is perfect for preserving the complete data structure:

// ===================================
// EXPORT AS JSON
// ===================================
document.getElementById("export-json").addEventListener("click", () => {
  // Check if there's anything to export
  if (!frames.length) {
    console.warn("No recording available to export.");
    return;
  }

  // Create a payload with metadata and frames
  const payload = {
    createdAt: new Date().toISOString(),
    frames
  };

  // Download as formatted JSON
  downloadFile(
    "gamepad-log.json",
    JSON.stringify(payload, null, 2),
    "application/json"
  );
});

The JSON format keeps everything structured and easily parseable, making it ideal for loading back into dev tools or sharing with teammates.
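Because the payload is plain data, it survives a round trip through JSON.stringify and JSON.parse unchanged. A minimal sketch with made-up frame data in the recorder's shape:

```javascript
// Hypothetical recorded frames in the shape the recorder produces.
const frames = [
  { t: 0,   buttons: [{ pressed: true,  value: 1 }], axes: [0, 0]   },
  { t: 120, buttons: [{ pressed: false, value: 0 }], axes: [0.5, 0] },
];

// Same payload shape as the export handler builds.
const payload = { createdAt: new Date().toISOString(), frames };
const json = JSON.stringify(payload, null, 2);

// Loading the export back preserves the full structure.
const restored = JSON.parse(json);
console.log(restored.frames.length);                // 2
console.log(restored.frames[0].buttons[0].pressed); // true
```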

Step 3: Handle CSV Export

For CSV exports, we need to flatten the hierarchical data into rows and columns:

// ===================================
// EXPORT AS CSV
// ===================================
document.getElementById("export-csv").addEventListener("click", () => {
  // Check if there's anything to export
  if (!frames.length) {
    console.warn("No recording available to export.");
    return;
  }

  // Build CSV header row (columns for timestamp, all buttons, all axes)
  const headerButtons = frames[0].buttons.map((_, i) => `btn${i}`);
  const headerAxes = frames[0].axes.map((_, i) => `axis${i}`);
  const header = ["t", ...headerButtons, ...headerAxes].join(",") + "\n";

  // Build CSV data rows
  const rows = frames.map(f => {
    const btnVals = f.buttons.map(b => b.value);
    return [f.t, ...btnVals, ...f.axes].join(",");
  }).join("\n");

  // Download as CSV
  downloadFile("gamepad-log.csv", header + rows, "text/csv");
});

CSV is brilliant for data analysis because it opens directly in Excel or Google Sheets, letting you create charts, filter data, or spot patterns visually.
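The flattening logic is easy to verify in isolation if you pull it out of the click handler into a pure function (a sketch assuming the same frames shape the recorder produces):

```javascript
// Flatten recorded frames into CSV text: one header row, one row per frame.
function framesToCsv(frames) {
  const headerButtons = frames[0].buttons.map((_, i) => `btn${i}`);
  const headerAxes = frames[0].axes.map((_, i) => `axis${i}`);
  const header = ["t", ...headerButtons, ...headerAxes].join(",");
  const rows = frames.map(f =>
    [f.t, ...f.buttons.map(b => b.value), ...f.axes].join(",")
  );
  return [header, ...rows].join("\n");
}

const csv = framesToCsv([
  { t: 0, buttons: [{ pressed: true, value: 1 }], axes: [0.5] },
]);
console.log(csv); // "t,btn0,axis0\n0,1,0.5"
```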

Now that the export buttons are in, you’ll see two new options on the panel: Export JSON and Export CSV. JSON is nice if you want to throw the raw log back into your dev tools or poke around the structure. CSV, on the other hand, opens straight into Excel or Google Sheets so you can chart, filter, or compare inputs. The following figure shows what the panel looks like with those extra controls.

3. Snapshot System

Sometimes you don’t need a full recording, just a quick “screenshot” of input states. That’s where a Take Snapshot button helps.

<div class="controls">
  <button id="snapshot" class="btn">Take Snapshot</button>
</div>

And the JavaScript:

// ===================================
// TAKE SNAPSHOT
// ===================================
document.getElementById("snapshot").addEventListener("click", () => {
  // Get all connected gamepads
  const pads = navigator.getGamepads();
  const activePads = [];

  // Loop through and capture the state of each connected gamepad
  for (const gp of pads) {
    if (!gp) continue; // Skip empty slots

    activePads.push({
      id: gp.id, // Controller name/model
      timestamp: performance.now(),
      buttons: gp.buttons.map(b => ({
        pressed: b.pressed,
        value: b.value
      })),
      axes: [...gp.axes]
    });
  }

  // Check if any gamepads were found
  if (!activePads.length) {
    console.warn("No gamepads connected for snapshot.");
    alert("No controller detected!");
    return;
  }

  // Log and notify user
  console.log("Snapshot:", activePads);
  alert(`Snapshot taken! Captured ${activePads.length} controller(s).`);
});

Snapshots freeze the exact state of your controller at one moment in time.
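Because the handler copies buttons and axes into plain objects and arrays, later input cannot mutate the snapshot. You can exercise that shaping logic without a physical controller by feeding it a fake gamepad-like object (a sketch mirroring the handler above; the pad data is made up):

```javascript
// Capture the state of one gamepad-like object into a plain snapshot.
function snapshotPad(gp, timestamp) {
  return {
    id: gp.id,
    timestamp,
    buttons: gp.buttons.map(b => ({ pressed: b.pressed, value: b.value })),
    axes: [...gp.axes], // copied, so later stick movement can't change it
  };
}

const fakePad = {
  id: "Fake Gamepad",
  buttons: [{ pressed: true, value: 1 }, { pressed: false, value: 0 }],
  axes: [0.25, -0.5],
};

const snap = snapshotPad(fakePad, 1234);
fakePad.axes[0] = 0.9;     // move the "stick" after the snapshot
console.log(snap.axes[0]); // 0.25 — the snapshot is frozen
```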

4. Ghost Input Replay

Now for the fun one: ghost input replay. This takes a log and plays it back visually as if a phantom player was using the controller.

<div class="controls">
  <button id="replay" class="btn">Replay Last Recording</button>
</div>

JavaScript for replay:

// ===================================
// GHOST REPLAY
// ===================================
document.getElementById("replay").addEventListener("click", () => {
  // Ensure we have a recording to replay
  if (!frames.length) {
    alert("No recording to replay!");
    return;
  }

  console.log("Starting ghost replay...");

  // Track timing for synced playback
  let startTime = performance.now();
  let frameIndex = 0;

  // Replay animation loop
  function step() {
    const now = performance.now();
    const elapsed = now - startTime;

    // Process all frames that should have occurred by now
    while (frameIndex < frames.length && frames[frameIndex].t <= elapsed) {
      const frame = frames[frameIndex];

      // Update UI with the recorded button states
      btnA.classList.toggle("active", frame.buttons[0].pressed);
      btnB.classList.toggle("active", frame.buttons[1].pressed);
      btnX.classList.toggle("active", frame.buttons[2].pressed);

      // Update status display
      let pressed = [];
      frame.buttons.forEach((btn, i) => {
        if (btn.pressed) pressed.push("Button " + i);
      });
      if (pressed.length > 0) {
        status.textContent = "Ghost: " + pressed.join(", ");
      }

      frameIndex++;
    }

    // Continue loop if there are more frames
    if (frameIndex < frames.length) {
      requestAnimationFrame(step);
    } else {
      console.log("Replay finished.");
      status.textContent = "Replay complete";
    }
  }

  // Start the replay
  step();
});

To make debugging a bit more hands-on, I added a ghost replay. Once you’ve recorded a session, you can hit replay and watch the UI act it out, almost like a phantom player is running the pad. A new Replay Ghost button shows up in the panel for this.

Hit Record, mess around with the controller a bit, stop, then replay. The UI just echoes everything you did, like a ghost following your inputs.
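The core of the replay is the catch-up logic inside the while loop: given the elapsed playback time, find how many recorded frames are already due. Isolated as a pure function (a sketch of that loop, with minimal frame objects):

```javascript
// Given frames sorted by timestamp t and the elapsed playback time,
// return the index of the first frame that is NOT yet due. Everything
// before that index should already have been applied to the UI.
function firstPendingFrame(frames, elapsed, fromIndex = 0) {
  let i = fromIndex;
  while (i < frames.length && frames[i].t <= elapsed) {
    i++;
  }
  return i;
}

const frames = [{ t: 0 }, { t: 100 }, { t: 250 }];
console.log(firstPendingFrame(frames, 120)); // 2 — frames at t=0 and t=100 are due
console.log(firstPendingFrame(frames, 500)); // 3 — all frames consumed, replay ends
```

Catching up by timestamp rather than playing one frame per animation tick keeps the replay in sync even when requestAnimationFrame fires less often than the recording did.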

Why bother with these extras?

  • Recording/export makes it easy for testers to show exactly what happened.
  • Snapshots freeze a moment in time, super useful when you’re chasing odd bugs.
  • Ghost replay is great for tutorials, accessibility checks, or just comparing control setups side by side.

At this point, it’s not just a neat demo anymore, but something you could actually put to work.

Real-World Use Cases

Now we’ve got this debugger that can do a lot. It shows live input, records logs, exports them, and even replays stuff. But the real question is: who actually cares? Who’s this useful for?

Game Developers

Controllers are part of the job, but debugging them? Usually a pain. Imagine you’re testing a fighting game combo, like ↓ → + punch. Instead of praying you pressed it the same way twice, you record it once and replay it. Done. Or you swap JSON logs with a teammate to check whether your multiplayer code reacts the same on their machine. That’s huge.

Accessibility Practitioners

This one’s close to my heart. Not everyone plays with a “standard” controller. Adaptive controllers throw out weird signals sometimes. With this tool, you can see exactly what’s happening. Teachers, researchers, whoever. They can grab logs, compare them, or replay inputs side-by-side. Suddenly, invisible stuff becomes obvious.

Quality Assurance Testing

Testers usually write notes like “I mashed buttons here and it broke.” Not very helpful. Now? They can capture the exact presses, export the log, and send it off. No guessing.

Educators

If you’re making tutorials or YouTube vids, ghost replay is gold. You can literally say, “Here’s what I did with the controller,” while the UI shows it happening. Makes explanations way clearer.

Beyond Games

And yeah, this isn’t just about games. People have used controllers for robots, art projects, and accessibility interfaces. Same issue every time: what is the browser actually seeing? With this, you don’t have to guess.

Conclusion

Debugging a controller input has always felt like flying blind. Unlike the DOM or CSS, there’s no built-in inspector for gamepads; it’s just raw numbers in the console, easily lost in the noise.

With a few hundred lines of HTML, CSS, and JavaScript, we built something different:

  • A visual debugger that makes invisible inputs visible.
  • A layered CSS system that keeps the UI clean and debuggable.
  • A set of enhancements (recording, exporting, snapshots, ghost replay) that elevate it from demo to developer tool.

This project shows how far you can go by mixing the Web Platform’s power with a little creativity in CSS Cascade Layers.

The tool I just explained in its entirety is open-source. You can clone the GitHub repo and try it for yourself.

But more importantly, you can make it your own. Add your own layers. Build your own replay logic. Integrate it with your game prototype. Or even use it in ways I haven’t imagined. For teaching, accessibility, or data analysis.

At the end of the day, this isn’t just about debugging gamepads. It’s about shining a light on hidden inputs, and giving developers the confidence to work with hardware that the web still doesn’t fully embrace.

So, plug in your controller, open up your editor, and start experimenting. You might be surprised at what your browser and your CSS can truly accomplish.


Older Tech In The Browser Stack

Thu, 11/13/2025 - 09:00

I’ve been in front-end development long enough to see a trend over the years: younger developers working with a new paradigm of programming without understanding the historical context of it.

It is, of course, perfectly understandable to not know something. The web is a very big place with a diverse set of skills and specialties, and we don’t always know what we don’t know. Learning in this field is an ongoing journey rather than something that happens once and ends.

Case in point: Someone on my team asked if it was possible to tell if users navigate away from a particular tab in the UI. I pointed out JavaScript’s beforeunload event. But those who have tackled this before know this is possible because they have been hit with alerts about unsaved data on other sites, for which beforeunload is a typical use case. I also pointed out the pagehide and visibilitychange events to my colleague for good measure.

How did I know about that? Because it came up in another project, not because I studied up on it when initially learning JavaScript.

The fact is that modern front-end frameworks are standing on the shoulders of the technology giants that preceded them. They abstract development practices, often for a better developer experience that reduces, or even eliminates, the need to know or touch what have traditionally been essential front-end concepts everyone probably ought to know.

Consider the CSS Object Model (CSSOM). You might expect that anyone working in CSS and JavaScript has a bunch of hands-on CSSOM experience, but that’s not always going to be the case.

There was a React project for an e-commerce site I worked on where we needed to load a stylesheet for the currently selected payment provider. The problem was that the stylesheet was loading on every page when it was only really needed on a specific page. The developer tasked with making this happen hadn’t ever loaded a stylesheet dynamically. Again, this is totally understandable when React abstracts away the traditional approach you might have reached for.

The CSSOM is likely not something you need in your everyday work. But it is likely you will need to interact with it at some point, even in a one-off instance.

These experiences inspired me to write this article. There are many existing web features and technologies in the wild that you may never touch directly in your day-to-day work. Perhaps you’re fairly new to web development and are simply unaware of them because you’re steeped in the abstraction of a specific framework that doesn’t require you to know it deeply, or even at all.

I’m speaking specifically about XML, which many of us know is an ancient language not totally dissimilar from HTML.

I’m bringing this up because of recent WHATWG discussions suggesting that a significant chunk of the XML stack known as XSLT programming should be removed from browsers. This is exactly the sort of older, existing technology we’ve had for years that could be used for something as practical as the CSSOM situation my team was in.

Have you worked with XSLT before? Let’s see what happens when we lean heavily into this older technology and leverage its features outside the context of XML to tackle real-world problems today.

XPath: The Central API

The most important XML technology that is perhaps the most useful outside of a straight XML perspective is XPath, a query language that allows you to find any node or attribute in a markup tree with one root element. I have a personal affection for XSLT, but that also relies on XPath, and personal affection must be put aside in ranking importance.

The argument for removing XSLT does not make any mention of XPath, so I suppose it is still allowed. That’s good because XPath is the central and most important API in this suite of technologies, especially when trying to find something to use outside normal XML usage. It is important because, while CSS selectors can be used to find most of the elements in your page, they cannot find them all. Furthermore, CSS selectors cannot be used to find an element based on its current position in the DOM.

XPath can.

Now, some of you reading this might know XPath, and some might not. XPath is a pretty big area of technology, and I can’t really teach all the basics and also show you cool things to do with it in a single article like this. I actually tried writing that article, but the average Smashing Magazine publication doesn’t go over 5,000 words. I was already at more than 2,000 words while only halfway through the basics.

So, I’m going to start doing cool stuff with XPath and give you some links that you can use for the basics if you find this stuff interesting.

Combining XPath & CSS

XPath can do lots of things that CSS selectors can’t when querying elements. But CSS selectors can also do a few things that XPath can’t, namely, query elements by class name.

  • CSS: .myClass
  • XPath: //*[contains(@class, "myClass")]

In this example, CSS queries elements that contain a .myClass classname. Meanwhile, the XPath example queries elements that contain an attribute class with the string “myClass”. In other words, it selects elements with myClass in any attribute, including elements with the .myClass classname — as well as elements with “myClass” in the string, such as .myClass2. XPath is broader in that sense.
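The difference between the two matching rules is easy to reproduce in plain JavaScript (a sketch of the semantics, not the actual selector engines):

```javascript
// XPath's contains(@class, "myClass"): a plain substring check.
const xpathStyleMatch = (classAttr, name) => classAttr.includes(name);

// CSS's .myClass: the attribute split on whitespace must contain
// the exact class token.
const cssStyleMatch = (classAttr, name) =>
  classAttr.split(/\s+/).includes(name);

console.log(xpathStyleMatch("myClass2 other", "myClass")); // true — overmatch!
console.log(cssStyleMatch("myClass2 other", "myClass"));   // false
console.log(cssStyleMatch("myClass other", "myClass"));    // true
```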

So, no. I’m not suggesting that we ought to toss out CSS and start selecting all elements via XPath. That’s not the point.

The point is that XPath can do things that CSS cannot and could still be very useful, even though it is an older technology in the browser stack and may not seem obvious at first glance.

Let’s use the two technologies together not only because we can, but because we’ll learn something about XPath in the process, making it another tool in your stack — one you might not have known has been there all along!

The problem is that JavaScript’s document.evaluate method and the various query selector methods we use with the CSS APIs for JavaScript are incompatible.

I have made a compatible querying API to get us started, though admittedly, I have not put a lot of thought into it since it’s a departure from what we’re doing here. Here’s a fairly simple working example of a reusable query constructor:

See the Pen queryXPath [forked] by Bryan Rasmussen.

I’ve added two methods on the document object: queryCSSSelectors (which is essentially querySelectorAll) and queryXPaths. Both of these return a queryResults object:

{
  queryType: nodes | string | number | boolean,
  results: any[], // html elements, xml elements, strings, numbers, booleans
  queryCSSSelectors: (query: string, amend: boolean) => queryResults,
  queryXpaths: (query: string, amend: boolean) => queryResults
}

The queryCSSSelectors and queryXpaths functions run the query you give them over the elements in the results array, as long as the results array is of type nodes, of course. Otherwise, it will return a queryResult with an empty array and a type of nodes. If the amend property is set to true, the functions will change their own queryResults.

Under no circumstances should this be used in a production environment. I am doing it this way purely to demonstrate the various effects of using the two query APIs together.

Example Queries

I want to show a few examples of different XPath queries that demonstrate some of the powerful things they can do and how they can be used in place of other approaches.

The first example is //li/text(). This queries all li elements and returns their text nodes. So, if we were to query the following HTML:

<ul>
  <li>one</li>
  <li>two</li>
  <li>three</li>
</ul>

…this is what is returned:

{"queryType":"xpathEvaluate","results":["one","two","three"],"resultType":"string"}

In other words, we get the following array: ["one","two","three"].

Normally, you would query for the li elements to get that, turn the result of that query into an array, map the array, and return the text node of each element. But we can do that more concisely with XPath:

document.queryXPaths("//li/text()").results.

Notice that the way to get a text node is to use text(), which looks like a function signature — and it is. It returns the text node of an element. In our example, there are three li elements in the markup, each containing text ("one", "two", and "three").

Let’s look at one more example of a text() query. Assume this is our markup:

<a href="/login.html">Sign In</a>

Let’s write a query that returns the href attribute value:

document.queryXPaths("//a[text() = 'Sign In']/@href").results.

This is an XPath query on the current document, just like the last example, but this time we return the href attribute of a link (a element) that contains the text “Sign In”. The actual returned result is ["/login.html"].

XPath Functions Overview

There are a number of XPath functions, and you’re probably unfamiliar with them. There are several, I think, that are worth knowing about, including the following:

  • starts-with
    If a text starts with a particular other text example, starts-with(@href, 'http:') returns true if an href attribute starts with http:.
  • contains
    If a text contains a particular other text example, contains(text(), "Smashing Magazine") returns true if a text node contains the words “Smashing Magazine” in it anywhere.
  • count
    Returns a count of how many matches there are to a query. For example, count(//*[starts-with(@href, 'http:']) returns a count of how many links in the context node have elements with an href attribute that contains the text beginning with the http:.
  • substring
    Works like JavaScript substring, except you pass the string as an argument. For example, substring("my text", 2, 4) returns "y t".
  • substring-before
    Returns the part of a string before another string. For example, substing-before("my text", " ") returns "my". Similarly, substring-before("hi","bye") returns an empty string.
  • substring-after
    Returns the part of a string after another string. For example, substing-after("my text", " ") returns "text". Similarly, substring-after("hi","bye")returns an empty string.
  • normalize-space
    Returns the argument string with whitespace normalized by stripping leading and trailing whitespace and replacing sequences of whitespace characters by a single space.
  • not
    Returns a boolean true if the argument is false, otherwise false.
  • true
    Returns boolean true.
  • false
    Returns boolean false.
  • concat
    The same as JavaScript’s concat, except you do not run it as a method on a string. Instead, you pass all the strings you want to concatenate as arguments.
  • string-length
    Not the equivalent of JavaScript’s length property; string-length is a function that returns the length of the string it is given as an argument.
  • translate
    Takes a string and replaces each character listed in the second argument with the corresponding character in the third. For example, translate("abcdef", "abc", "XYZ") outputs XYZdef.
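To make these semantics concrete, here is how a few of the string functions above could be re-implemented in plain JavaScript. These are illustrative sketches of the XPath 1.0 behaviour, not browser APIs:

```javascript
// XPath substring() is 1-indexed and takes a length, not an end position.
const xpathSubstring = (str, start, length) =>
  length === undefined
    ? str.slice(start - 1)
    : str.slice(start - 1, start - 1 + length);

// substring-before/-after return "" when the separator isn't found.
const substringBefore = (str, sep) => {
  const i = str.indexOf(sep);
  return i === -1 ? "" : str.slice(0, i);
};
const substringAfter = (str, sep) => {
  const i = str.indexOf(sep);
  return i === -1 ? "" : str.slice(i + sep.length);
};

// normalize-space trims and collapses runs of whitespace.
const normalizeSpace = (str) => str.trim().replace(/\s+/g, " ");

console.log(xpathSubstring("my text", 2, 4)); // "y te"
console.log(substringBefore("my text", " ")); // "my"
console.log(substringAfter("my text", " ")); // "text"
console.log(normalizeSpace("  a \n b  ")); // "a b"
```

Note how the 1-based indexing and the length argument make substring behave quite differently from its JavaScript namesake.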

Aside from these particular XPath functions, there are a number of other functions that work just the same as their JavaScript counterparts — or counterparts in basically any programming language — that you would probably also find useful, such as floor, ceiling, round, sum, and so on.

The following demo illustrates each of these functions:

See the Pen XPath Numerical functions [forked] by Bryan Rasmussen.

Note that, like most of the string manipulation functions, many of the numerical ones take a single input. This is, of course, because they are supposed to be used for querying, as in the last XPath example:

//li[floor(text()) > 250]/@val

If you use them with a path that matches multiple nodes, as most of the examples do, the function runs on the first node that matches the path.

There are also some type conversion functions you should probably avoid because JavaScript already has its own type conversion problems. But there can be times when you want to convert a string to a number in order to check it against some other number.

The functions that convert a value’s type are boolean, number, and string, matching three of XPath’s datatypes; the fourth datatype is the node-set.

And as you might imagine, most of these functions can be used on datatypes that are not DOM nodes. For example, substring-after takes a string as we’ve already covered, but it could be the string from an href attribute. It can also just be a string:

const testSubstringAfter = document.queryXPaths("substring-after('hello world',' ')");

Obviously, this example will give us back the results array as ["world"]. To show this in action, I have made a demo page using functions against things that are not DOM nodes:

See the Pen queryXPath [forked] by Bryan Rasmussen.

You should note a surprising aspect of the translate function: if a character appears in the second argument (i.e., the list of characters you want translated) but has no matching character in the third argument to translate to, that character gets removed from the output.

Thus, this:

translate('Hello, My Name is Inigo Montoya, you killed my father, prepare to die','abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ,','*')

…results in the string, including spaces:

[" * * ** "]

This means that the letter “a” is being translated to an asterisk (*), but every other character that does not have a translation given the target string is completely removed. The whitespace is all we have left between the translated “a” characters.

Then again, this query:

translate('Hello, My Name is Inigo Montoya, you killed my father, prepare to die','abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ,','**************************************************')

…does not have the problem and outputs a result that looks like this:

"***** ** **** ** ***** ******* *** ****** ** ****** ******* ** ***"

It might strike you that there is no easy way in JavaScript to do exactly what the XPath translate function does, although for many use cases, replaceAll with regular expressions can handle it.

You could use the same approach I have demonstrated, but that is suboptimal if all you want is to translate the strings. The following demo wraps XPath’s translate function to provide a JavaScript version:

See the Pen translate function [forked] by Bryan Rasmussen.
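For readers without access to the demo, a standalone sketch of XPath’s translate semantics (not the demo’s actual code) might look like this, including the character-removal behaviour described above:

```javascript
// A plain-JavaScript sketch of XPath 1.0 translate() semantics.
// Characters in `from` map positionally to characters in `to`;
// characters in `from` with no counterpart in `to` are removed.
const translate = (str, from, to) =>
  [...str]
    .map((ch) => {
      const i = from.indexOf(ch);
      if (i === -1) return ch; // not listed: keep as-is
      return i < to.length ? to[i] : ""; // listed but unmapped: remove
    })
    .join("");

console.log(translate("abcdef", "abc", "XYZ")); // "XYZdef"

// The same three-place Caesar shift as the XPath example later on:
const upper = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
console.log(translate("CAESAR", upper, upper.slice(-3) + upper.slice(0, 23))); // "ZXBPXO"
```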

Where might you use something like this? Consider Caesar Cipher encryption with a three-place offset (e.g., top-of-the-line encryption from 48 B.C.):

translate("Caesar is planning to cross the Rubicon!", "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz", "XYZABCDEFGHIJKLMNOPQRSTUVWxyzabcdefghijklmnopqrstuvw")

The input text “Caesar is planning to cross the Rubicon!” results in “Zxbpxo fp mixkkfkd ql zolpp qeb Oryfzlk!”

To give another quick example of the possibilities, I made a metal function that takes a string input and uses translate to return the text with umlauts added to its vowels.

See the Pen metal function [forked] by Bryan Rasmussen.

const metal = (str) => { return translate(str, "AOUaou","ÄÖÜäöü"); }

And, if given the text “Motley Crue rules, rock on dudes!”, returns “Mötley Crüe rüles, röck ön düdes!”

Obviously, one might have all sorts of parody uses of this function. If that’s you, then this TVTropes article ought to provide you with plenty of inspiration.

Using CSS With XPath

Remember our main reason for using CSS selectors together with XPath: CSS genuinely understands what a class is, whereas the best you can do in XPath is string comparisons against the class attribute. That will work in most cases.

But if you were to ever run into a situation where, say, someone created classes named .primaryLinks and .primaryLinks2 and you were using XPath to get the .primaryLinks class, then a string comparison would match both, and you would likely run into problems. As long as there’s nothing silly like that, XPath will serve you fine. But I am sad to report that I have worked at places where people do those types of silly things.

Here’s another demo using CSS and XPath together. It shows what happens when we use the code to run an XPath on a context node that is not the document’s node.

See the Pen css and xpath together [forked] by Bryan Rasmussen.

The CSS query is .relatedarticles a, which fetches the two a elements in a div assigned a .relatedarticles class.

After that come three “bad” queries, that is to say, queries that do not do what we want them to do when run with these elements as the context node.

I can explain why they are behaving differently than you might expect. The three bad queries in question are:

  • //text(): Returns all the text in the document.
  • //a/text(): Returns all the text inside of links in the document.
  • ./a/text(): Returns no results.

The reason for these results is that while your context is the a elements returned from the CSS query, // always queries the whole document. This is the strength of XPath: CSS cannot go from a node up to an ancestor, across to a sibling of that ancestor, and walk down to a descendant of that sibling. But XPath can.

Meanwhile, ./ queries the children of the current node, where the dot (.) represents the current node, and the forward slash (/) represents going to some child node, whether an attribute, element, or text, as determined by the next part of the path. But the a elements selected by the CSS query have no child a elements, so that query also returns nothing.

There are three good queries in that last demo:

  • .//text(),
  • ./text(),
  • normalize-space(./text()).

The normalize-space query demonstrates XPath function usage, but also fixes a problem included in the other queries. The HTML is structured like this:

<a href="https://www.smashingmagazine.com/2018/04/feature-testing-selenium-webdriver/"> Automating Your Feature Testing With Selenium WebDriver </a>

The query returns a line feed at the beginning and end of the text node, and normalize-space removes this.

This pattern of passing an XPath as input is not limited to normalize-space; it works with any XPath function that returns something other than a boolean. The following demo shows a number of examples:

See the Pen xpath functions examples [forked] by Bryan Rasmussen.

The first example shows a problem you should watch out for. Specifically, the following code:

document.queryXPaths("substring-after(//a/@href,'https://')");

…returns one string:

"www.smashingmagazine.com/2018/04/feature-testing-selenium-webdriver/"

It makes sense, right? These functions do not return arrays but rather single strings or single numbers. Running the function against a path with multiple matches only returns the result for the first one.

The second result shows what we really want:

document.queryCSSSelectors("a").queryXPaths("substring-after(./@href,'https://')");

Which returns an array of two strings:

["www.smashingmagazine.com/2018/04/feature-testing-selenium-webdriver/","www.smashingmagazine.com/2022/11/automated-test-results-improve-accessibility/"]

XPath functions can be nested just like functions in JavaScript. So, if we know the Smashing Magazine URL structure, we could do the following (using template literals is recommended):

`translate( substring( substring-after(./@href, 'www.smashingmagazine.com/'), 9), '/', '')`

This is getting complex enough that it needs a comment describing what it does: take everything in the href attribute after www.smashingmagazine.com/, remove the first eight characters (the yyyy/mm/ date prefix), then translate the forward slash (/) character to nothing so as to get rid of the trailing forward slash.
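For comparison, the same extraction could be written in plain JavaScript. This sketch hard-codes the two article hrefs from the demo:

```javascript
// The two article URLs queried in the demo.
const hrefs = [
  "https://www.smashingmagazine.com/2018/04/feature-testing-selenium-webdriver/",
  "https://www.smashingmagazine.com/2022/11/automated-test-results-improve-accessibility/",
];

const slugs = hrefs.map((href) =>
  href
    .split("www.smashingmagazine.com/")[1] // substring-after
    .slice(8) // drop the "yyyy/mm/" date prefix
    .replaceAll("/", "") // strip the trailing slash
);

console.log(slugs);
// ["feature-testing-selenium-webdriver", "automated-test-results-improve-accessibility"]
```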

The resulting array:

["feature-testing-selenium-webdriver","automated-test-results-improve-accessibility"]

More XPath Use Cases

XPath can really shine in testing. The reason is not difficult to see, as XPath can be used to get every element in the DOM, from any position in the DOM, whereas CSS cannot.

You cannot count on CSS classes remaining consistent in many modern build systems, but with XPath, we are able to make more robust matches as to what the text content of an element is, regardless of a changing DOM structure.

There has been research on techniques that allow you to make resilient XPath tests. Nothing is worse than having tests flake out and fail just because a CSS selector no longer works because something has been renamed or removed.

XPath is also really great at multiple locator extraction. There is more than one way to use XPath queries to match an element. The same is true with CSS. But XPath queries can drill into things in a more targeted way that limits what gets returned, allowing you to find a specific match where there may be several possible matches.

For example, we can use XPath to return a specific h2 element that is contained inside a div that immediately follows a sibling div that, in turn, contains a child image element with a data-testID="leader" attribute on it:

<div> <div> <h1>don't get this headline</h1> </div> <div> <h2>Don't get this headline either</h2> </div> <div> <h2>The header for the leader image</h2> </div> <div> <img data-testID="leader" src="image.jpg"/> </div> </div>

This is the query:

document.queryXPaths(` //div[ following-sibling::div[1] /img[@data-testID='leader'] ] /h2/ text() `);

Let’s drop in a demo to see how that all comes together:

See the Pen Complex H2 Query [forked] by Bryan Rasmussen.

So, yes. There are lots of possible paths to any element in a test using XPath.

XSLT 1.0 Deprecation

I mentioned early on that the Chrome team plans on removing XSLT 1.0 support from the browser. That’s important because XSLT 1.0 is an XML-based language for document transformation that, in turn, relies on XPath 1.0, which is the version found in most browsers.

When that happens, we’ll lose a key component of XPath. But given the fact that XPath is really great for writing tests, I find it unlikely that XPath as a whole will disappear anytime soon.

That said, I’ve noticed that people get interested in a feature when it’s taken away. And that’s certainly true in the case of XSLT 1.0 being deprecated. There’s an entire discussion happening over at Hacker News filled with arguments against the deprecation. The post itself is a great example of creating a blogging framework with XSLT. You can read the discussion for yourself, but it gets into how JavaScript might be used as a shim for XSLT to handle those sorts of cases.

I have also seen suggestions that browsers should use SaxonJS, which is a JavaScript port of the Saxon XSLT, XQuery, and XPath engines. That’s an interesting idea, especially as SaxonJS implements the current versions of these specifications, whereas no browser implements any version of XPath or XSLT beyond 1.0, and none implements XQuery.

I reached out to Norm Tovey-Walsh at Saxonica, the company behind SaxonJS and other versions of the Saxon engine. He said:

“If any browser vendor was interested in taking SaxonJS as a starting point for integrating modern XML technologies into the browser, we’d be thrilled to discuss it with them.”

Norm Tovey-Walsh

But also added:

“I would be very surprised if anyone thought that taking SaxonJS in its current form and dropping it into the browser build unchanged would be the ideal approach. A browser vendor, by nature of the fact that they build the browser, could approach the integration at a much deeper level than we can ‘from the outside’.”

Norm Tovey-Walsh

It’s worth noting that Tovey-Walsh’s comments came about a week before the XSLT deprecation announcement.

Conclusion

I could go on and on. But I hope this has demonstrated the power of XPath and given you plenty of examples demonstrating how to use it for achieving great things. It’s a perfect example of older technology in the browser stack that still has plenty of utility today, even if you’ve never known it existed or never considered reaching for it.

Further Reading
  • “Enhancing the Resiliency of Automated Web Tests with Natural Language” (ACM Digital Library) by Maroun Ayli, Youssef Bakouny, Nader Jalloul, and Rima Kilany
    This article provides many XPath examples for writing resilient tests.
  • XPath (MDN)
    This is an excellent place to start if you want a technical explanation detailing how XPath works.
  • XPath Tutorial (ZVON)
    I’ve found this tutorial to be the most helpful in my own learning, thanks to a wealth of examples and clear explanations.
  • XPather
    This interactive tool lets you work directly with the code.

Effectively Monitoring Web Performance

Tue, 11/11/2025 - 11:00

This article is sponsored by DebugBear

There’s no single way to measure website performance. That said, the Core Web Vitals metrics that Google uses as a ranking factor are a great starting point, as they cover different aspects of visitor experience:

  • Largest Contentful Paint (LCP): Measures the initial page load time.
  • Cumulative Layout Shift (CLS): Measures if content is stable after rendering.
  • Interaction to Next Paint (INP): Measures how quickly the page responds to user input.

There are also many other web performance metrics that you can use to track technical aspects, like page weight or server response time. While these often don’t matter directly to the end user, they provide you with insight into what’s slowing down your pages.

You can also use the User Timing API to track page load milestones that are important on your website specifically.
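For example, a site might mark when a critical UI milestone completes and measure the elapsed time with the standard performance.mark() and performance.measure() calls. The milestone names in this sketch are made up:

```javascript
// Record a custom milestone with the User Timing API.
// "gallery-start"/"gallery-end" are hypothetical milestone names.
performance.mark("gallery-start");

// ... gallery rendering work happens here ...

performance.mark("gallery-end");
performance.measure("gallery-render", "gallery-start", "gallery-end");

// Read the measurement back; RUM tools can collect entries like this.
const [measure] = performance.getEntriesByName("gallery-render");
console.log(`gallery-render took ${measure.duration}ms`);
```

RUM products can then report these custom timings alongside the standard metrics.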

Synthetic And Real User Data

There are two different types of web performance data:

  • Synthetic tests are run in a controlled test environment.
  • Real user data is collected from actual website visitors.

Synthetic monitoring can provide super-detailed reports to help you identify page speed issues. You can configure exactly how you want to collect the data, picking a specific network speed, device size, or test location.

Get a hands-on feel for synthetic monitoring by using the free DebugBear website speed test to check on your website.

That said, your synthetic test settings might not match what’s typical for your real visitors, and you can’t script all of the possible ways that people might interact with your website.

That’s why you also need real user monitoring (RUM). Instead of looking at one experience, you see different load times and how specific visitor segments are impacted. You can review specific page views to identify what caused poor performance for a particular visitor.

At the same time, real user data isn’t quite as detailed as synthetic test reports, due to web API limitations and performance concerns.

DebugBear offers both synthetic monitoring and real user monitoring:

  • To set up synthetic tests, you just need to enter a website URL, and
  • To collect real user metrics, you need to install an analytics snippet on your website.
Three Steps To A Fast Website

Collecting data helps you throughout the lifecycle of your web performance optimizations. You can follow this three-step process:

  1. Identify: Collect data across your website and identify slow visitor experiences.
  2. Diagnose: Dive deep into technical analysis to find optimizations.
  3. Monitor: Check that optimizations are working and get alerted to performance regressions.

Let’s take a look at each step in detail.

Step 1: Identify Slow Visitor Experiences

What’s prompting you to look into website performance issues in the first place? You likely already have some specific issues in mind, whether that’s from customer reports or because of poor scores in the Core Web Vitals section of Google Search Console.

Real user data is the best place to check for slow pages. It tells you whether the technical issues on your site actually result in poor user experience. It’s easy to collect across your whole website (while synthetic tests need to be set up for each URL). And, you can often get a view count along with the performance metrics. A moderately slow page that gets two visitors a month isn’t as important as a moderately fast page that gets thousands of visits a day.

The Web Vitals dashboard in DebugBear’s RUM product checks your site’s performance health and surfaces the most-visited pages and URLs where many visitors have a poor experience.

You can also run a website scan to get a list of URLs from your sitemap and then check each of these pages against real user data from Google’s Chrome User Experience Report (CrUX). However, this will only work for pages that meet a minimum traffic threshold to be included in the CrUX dataset.

The scan result highlights pages with poor web vitals scores where you might want to investigate further.

If no real-user data is available, then there is a scanning tool called Unlighthouse, which is based on Google’s Lighthouse tool. It runs synthetic tests for each page, allowing you to filter through the results in order to identify pages that need to be optimized.

Step 2: Diagnose Web Performance Issues

Once you’ve identified slow pages on your website, you need to look at what’s actually happening on your page that is causing delays.

Debugging Page Load Time

If there are issues with page load time metrics — like the Largest Contentful Paint (LCP) — synthetic test results can provide a detailed analysis. You can also run page speed experiments to try out and measure the impact of certain optimizations.

Real user data can still be important when debugging page speed, as load time depends on many user- and device-specific factors. For example, depending on the size of the user’s device, the page element that’s responsible for the LCP can vary. RUM data can provide a breakdown of possible influencing factors, like CSS selectors and image URLs, across all visitors, helping you zero in on what exactly needs to be fixed.

Debugging Slow Interactions

RUM data is also generally needed to properly diagnose issues related to the Interaction to Next Paint (INP) metric. Specifically, real user data can provide insight into what causes slow interactions, which helps you answer questions like:

  • What page elements are responsible?
  • Is time spent processing already-active background tasks or handling the interaction itself?
  • What scripts contribute the most to overall CPU processing time?

You can view this data at a high level to identify trends, as well as review specific page views to see what impacted a specific visitor experience.

Step 3: Monitor Performance & Respond To Regressions

Continuous monitoring of your website performance lets you track whether the performance is improving after making a change, and alerts you when scores decline.

How you respond to performance regressions depends on whether you’re looking at lab-based synthetic tests or real user analytics.

Synthetic Data

Test settings for synthetic tests are standardized between runs. While infrastructure changes, like browser upgrades, occasionally shift results, performance is generally determined by the resources the website loads and the code it runs.

When a metric changes, DebugBear lets you view a before-and-after comparison between the two test results. For example, the next screenshot displays a regression in the First Contentful Paint (FCP) metric. The comparison reveals that new images were added to the page, competing for bandwidth with other page resources.

From the report, it’s clear that a CSS file that previously took 255 milliseconds to load now takes 915 milliseconds. Since stylesheets are required to render page content, this means the page now loads more slowly, giving you better insight into what needs optimization.

Real User Data

When you see a change in real user metrics, there can be two causes:

  1. A shift in visitor characteristics or behavior, or
  2. A technical change on your website.

Launching an ad campaign, for example, often increases redirects, reduces cache hits, and shifts visitor demographics. When you see a regression in RUM data, the first step is to find out if the change was on your website or in your visitor’s browser. Check for view count changes in ad campaigns, referrer domains, or network speed to get a clearer picture.

If those visits have different performance compared to your typical visitors, then that suggests the regression is not due to a change on your website. However, you may still need to make changes on your website to better serve these visitor cohorts and deliver a good experience for them.

To identify the cause of a technical change, take a look at component breakdown metrics, such as LCP subparts. This helps you narrow down the cause of a regression, whether it is due to changes in server response time, new render-blocking resources, or the LCP image.

You can also check for shifts in page view properties, like different LCP element selectors or specific scripts that cause poor performance.

Conclusion

One-off page speed tests are a great starting point for optimizing performance. However, a monitoring tool like DebugBear can form the basis for a more comprehensive web performance strategy that helps you stay fast for the long term.

Get a free DebugBear trial on our website!


Smashing Animations Part 6: Magnificent SVGs With `<use>` And CSS Custom Properties

Fri, 11/07/2025 - 16:00

I explained recently how I use <symbol>, <use>, and CSS Media Queries to develop what I call adaptive SVGs. Symbols let us define an element once and then use it again and again, making SVG animations easier to maintain, more efficient, and lightweight.

Since I wrote that explanation, I’ve designed and implemented new Magnificent 7 animated graphics across my website. They play on the web design pioneer theme, featuring seven magnificent Old West characters.

<symbol> and <use> let me define a character design and reuse it across multiple SVGs and pages. First, I created my characters and put each into a <symbol> inside a hidden library SVG:

<!-- Symbols library --> <svg xmlns="http://www.w3.org/2000/svg" style="display:none;"> <symbol id="outlaw-1">[...]</symbol> <symbol id="outlaw-2">[...]</symbol> <symbol id="outlaw-3">[...]</symbol> <!-- etc. --> </svg>

Then, I referenced those symbols in two other SVGs, one for large and the other for small screens:

<!-- Large screens --> <svg xmlns="http://www.w3.org/2000/svg" id="svg-large"> <use href="#outlaw-1" /> <!-- ... --> </svg> <!-- Small screens --> <svg xmlns="http://www.w3.org/2000/svg" id="svg-small"> <use href="#outlaw-1" /> <!-- ... --> </svg>

Elegant. But then came the infuriating. I could reuse the characters, but couldn’t animate or style them. I added CSS rules targeting elements within the symbols referenced by a <use>, but nothing happened. Colours stayed the same, and things that should move stayed static. It felt like I’d run into an invisible barrier, and I had.

Understanding The Shadow DOM Barrier

When you reference the contents of a symbol with <use>, a browser creates a copy of it in the Shadow DOM. Each <use> instance becomes its own encapsulated copy of the referenced <symbol>, meaning that CSS from outside can’t break through the barrier to style any elements directly. For example, in normal circumstances, this tapping value triggers a CSS animation:

<g class="outlaw-1-foot tapping"> <!-- ... --> </g> .tapping { animation: tapping 1s ease-in-out infinite; }

But when the same animation is applied to a <use> instance of that same foot, nothing happens:

<symbol id="outlaw-1"> <g class="outlaw-1-foot"><!-- ... --></g> </symbol> <use href="#outlaw-1" class="tapping" /> .tapping { animation: tapping 1s ease-in-out infinite; }

That’s because the <g> inside the <symbol> element is in a protected shadow tree, and the CSS Cascade stops dead at the <use> boundary. This behaviour can be frustrating, but it’s intentional as it ensures that reused symbol content stays consistent and predictable.

While learning how to develop adaptive SVGs, I found all kinds of attempts to work around this behaviour, but most of them sacrificed the reusability that makes SVG so elegant. I didn’t want to duplicate my characters just to make them blink at different times. I wanted a single <symbol> with instances that have their own timings and expressions.

CSS Custom Properties To The Rescue

While working on my pioneer animations, I learned that regular CSS values can’t cross the boundary into the Shadow DOM, but CSS Custom Properties can. And even though you can’t directly style elements inside a <symbol>, you can pass custom property values to them. So, when you insert custom properties into an inline style, a browser looks at the cascade, and those styles become available to elements inside the <symbol> being referenced.

I added rotate to an inline style applied to the <symbol> content:

<symbol id="outlaw-1"> <g class="outlaw-1-foot" style=" transform-origin: bottom right; transform-box: fill-box; transform: rotate(var(--foot-rotate));"> <!-- ... --> </g> </symbol>

Then, defined the foot tapping animation and applied it to the element:

@keyframes tapping { 0%, 60%, 100% { --foot-rotate: 0deg; } 20% { --foot-rotate: -5deg; } 40% { --foot-rotate: 2deg; } } use[data-outlaw="1"] { --foot-rotate: 0deg; animation: tapping 1s ease-in-out infinite; }

Passing Multiple Values To A Symbol

Once I’ve set up a symbol to use CSS Custom Properties, I can pass as many values as I want to any <use> instance. For example, I might define variables for fill, opacity, or transform. What’s elegant is that each <symbol> instance can then have its own set of values.

<g class="eyelids" style=" fill: var(--eyelids-colour, #f7bea1); opacity: var(--eyelids-opacity, 1); transform: scale(var(--eyelids-scale, 1));" > <!-- etc. --> </g> use[data-outlaw="1"] { --eyelids-colour: #f7bea1; --eyelids-opacity: 1; } use[data-outlaw="2"] { --eyelids-colour: #ba7e5e; --eyelids-opacity: 0; }

Support for passing CSS Custom Properties like this is solid, and every contemporary browser handles this behaviour correctly. Let me show you a few ways I’ve been using this technique, starting with a multi-coloured icon system.

A Multi-Coloured Icon System

When I need to maintain a set of icons, I can define an icon once inside a <symbol> and then use custom properties to apply colours and effects. Instead of needing to duplicate SVGs for every theme, each use can carry its own values.

For example, I applied an --icon-fill custom property for the default fill colour of the <path> in this Bluesky icon:

<symbol id="icon-bluesky"> <path fill="var(--icon-fill, currentColor)" d="..." /> </symbol>

Then, whenever I need to vary how that icon looks — for example, in a <header> and <footer> — I can pass new fill colour values to each instance:

<header> <svg xmlns="http://www.w3.org/2000/svg"> <use href="#icon-bluesky" style="--icon-fill: #2d373b;" /> </svg> </header> <footer> <svg xmlns="http://www.w3.org/2000/svg"> <use href="#icon-bluesky" style="--icon-fill: #590d1a;" /> </svg> </footer>

These icons are the same shape but look different thanks to their inline styles.

Data Visualisations With CSS Custom Properties

We can use <symbol> and <use> in plenty more practical ways. They’re also helpful for creating lightweight data visualisations, so imagine an infographic about three famous Wild West sheriffs: Wyatt Earp, Pat Garrett, and Bat Masterson.

Each sheriff’s profile uses the same set of three SVG symbols: one for a bar representing the length of a sheriff’s career, another to represent the number of arrests made, and one more for the number of kills. Passing custom property values to each <use> instance can vary the bar lengths, arrests scale, and kills colour without duplicating SVGs. I first created symbols for those items:

<svg xmlns="http://www.w3.org/2000/svg" style="display:none;"> <symbol id="career-bar"> <rect height="10" width="var(--career-length, 100)" fill="var(--career-colour, #f7bea1)" /> </symbol> <symbol id="arrests-badge"> <path fill="var(--arrest-colour, #d0985f)" transform="scale(var(--arrest-scale, 1))" /> </symbol> <symbol id="kills-icon"> <path fill="var(--kill-colour, #769099)" /> </symbol> </svg>

Each symbol accepts one or more values:

  • --career-length adjusts the width of the career bar.
  • --career-colour changes the fill colour of that bar.
  • --arrest-scale controls the arrest badge size.
  • --kill-colour defines the fill colour of the kill icon.

I can use these to develop a profile of each sheriff using <use> elements with different inline styles, starting with Wyatt Earp.

<svg xmlns="http://www.w3.org/2000/svg"> <g id="wyatt-earp"> <use href="#career-bar" style="--career-length: 400; --career-colour: #769099;"/> <use href="#arrests-badge" style="--arrest-scale: 2;" /> <!-- ... --> <use href="#arrests-badge" style="--arrest-scale: 2;" /> <use href="#arrests-badge" style="--arrest-scale: 1;" /> <use href="#kills-icon" style="--kill-colour: #769099;" /> </g> <g id="pat-garrett"> <use href="#career-bar" style="--career-length: 300; --career-colour: #f7bea1;"/> <use href="#arrests-badge" style="--arrest-scale: 2;" /> <!-- ... --> <use href="#arrests-badge" style="--arrest-scale: 2;" /> <use href="#arrests-badge" style="--arrest-scale: 1;" /> <use href="#kills-icon" style="--kill-colour: #f7bea1;" /> </g> <g id="bat-masterson"> <use href="#career-bar" style="--career-length: 200; --career-colour: #c2d1d6;"/> <use href="#arrests-badge" style="--arrest-scale: 2;" /> <!-- ... --> <use href="#arrests-badge" style="--arrest-scale: 2;" /> <use href="#arrests-badge" style="--arrest-scale: 1;" /> <use href="#kills-icon" style="--kill-colour: #c2d1d6;" /> </g> </svg>

Each <use> shares the same symbol elements, but the inline variables change their colours and sizes. I can even animate those values to highlight their differences:

@keyframes pulse { 0%, 100% { --arrest-scale: 1; } 50% { --arrest-scale: 1.2; } } use[href="#arrests-badge"]:hover { animation: pulse 1s ease-in-out infinite; }

CSS Custom Properties aren’t only helpful for styling; they can also channel data between HTML and SVG’s inner geometry, binding visual attributes like colour, length, and scale to semantics like arrest numbers, career length, and kills.

Ambient Animations

I started learning to animate elements within symbols while creating the animated graphics for my website’s Magnificent 7. To reduce complexity and make my code lighter and more maintainable, I needed to define each character once and reuse it across SVGs:

<!-- Symbols library --> <svg xmlns="http://www.w3.org/2000/svg" style="display:none;"> <symbol id="outlaw-1">[…]</symbol> <!-- ... --> </svg> <!-- Large screens --> <svg xmlns="http://www.w3.org/2000/svg" id="svg-large"> <use href="#outlaw-1" /> <!-- ... --> </svg> <!-- Small screens --> <svg xmlns="http://www.w3.org/2000/svg" id="svg-small"> <use href="#outlaw-1" /> <!-- ... --> </svg>

But I didn’t want those characters to stay static; I needed subtle movements that would bring them to life. I wanted their eyes to blink, their feet to tap, and their moustache whiskers to twitch. So, to animate these details, I pass animation data to elements inside those symbols using CSS Custom Properties, starting with the blinking.

I implemented the blinking effect by placing an SVG group over the outlaws’ eyes and then changing its opacity.

To make this possible, I added an inline style with a CSS Custom Property to the group:

<symbol id="outlaw-1" viewBox="0 0 712 2552">
  <g class="eyelids" style="opacity: var(--eyelids-opacity, 1);">
    <!-- ... -->
  </g>
</symbol>

Then, I defined the blinking animation by changing --eyelids-opacity:

@keyframes blink {
  0%, 92% { --eyelids-opacity: 0; }
  93%, 94% { --eyelids-opacity: 1; }
  95%, 97% { --eyelids-opacity: 0.1; }
  98%, 100% { --eyelids-opacity: 0; }
}

…and applied it to every character:

use[data-outlaw] {
  --blink-duration: 4s;
  --eyelids-opacity: 1;
  animation: blink var(--blink-duration) infinite var(--blink-delay);
}

…then, so that the characters wouldn’t all blink at the same time, I set a different --blink-delay for each one by passing another Custom Property:

use[data-outlaw="1"] { --blink-delay: 1s; }
use[data-outlaw="2"] { --blink-delay: 2s; }
use[data-outlaw="7"] { --blink-delay: 3s; }

Some of the characters tap their feet, so I added an inline style with a CSS Custom Property to those groups, too:

<symbol id="outlaw-1" viewBox="0 0 712 2552">
  <g class="outlaw-1-foot" style="
    transform-origin: bottom right;
    transform-box: fill-box;
    transform: rotate(var(--foot-rotate, 0deg));">
    <!-- ... -->
  </g>
</symbol>

Defining the foot-tapping animation:

@keyframes tapping {
  0%, 60%, 100% { --foot-rotate: 0deg; }
  20% { --foot-rotate: -5deg; }
  40% { --foot-rotate: 2deg; }
}

And adding those extra Custom Properties to the characters’ declaration:

use[data-outlaw] {
  --blink-duration: 4s;
  --eyelids-opacity: 1;
  --foot-rotate: 0deg;
  animation:
    blink var(--blink-duration) infinite var(--blink-delay),
    tapping 1s ease-in-out infinite;
}

…before finally making the character’s whiskers jiggle via an inline style with a CSS Custom Property which describes how his moustache transforms:

<symbol id="outlaw-1" viewBox="0 0 712 2552">
  <g class="outlaw-1-tashe" style="transform: translateX(var(--jiggle-x, 0px));">
    <!-- ... -->
  </g>
</symbol>

Defining the jiggle animation:

@keyframes jiggle {
  0%, 100% { --jiggle-x: 0px; }
  20% { --jiggle-x: -3px; }
  40% { --jiggle-x: 2px; }
  60% { --jiggle-x: -1px; }
  80% { --jiggle-x: 4px; }
}

And adding those properties to the characters’ declaration:

use[data-outlaw] {
  --blink-duration: 4s;
  --eyelids-opacity: 1;
  --foot-rotate: 0deg;
  --jiggle-x: 0px;
  animation:
    blink var(--blink-duration) infinite var(--blink-delay),
    jiggle 1s ease-in-out infinite,
    tapping 1s ease-in-out infinite;
}

With these moving parts, the characters come to life, but my markup remains remarkably lean. By combining several animations into a single declaration, I can choreograph their movements without adding more elements to my SVG. Every outlaw shares the same base <symbol>, and their individuality comes entirely from CSS Custom Properties.

Pitfalls And Solutions

Even though this technique might seem bulletproof, there are a few traps it’s best to avoid:

  • CSS Custom Properties only cross the <use> boundary if they’re referenced with var() inside the <symbol>. Forget that, and you’ll wonder why nothing updates. Properties that aren’t naturally inherited, like fill or transform, also need var() in their value to benefit from the cascade.
  • Always include a fallback value alongside a custom property, like opacity: var(--eyelids-opacity, 1);, so SVG elements render correctly even when no custom property value is applied.
  • Custom properties can be animated in @keyframes, but unless they’re registered with @property (using a syntax such as "<angle>" or "<length>"), their values change discretely between keyframes rather than interpolating smoothly. That’s fine for on/off effects like blinking; continuous movement like the foot tap looks better with registration.
  • Inline styles set via the style attribute take precedence, so if you mix inline and external CSS, remember that Custom Properties follow normal cascade rules.
  • You can always use DevTools to inspect custom property values: select a <use> instance and check the Computed styles panel to see which custom properties are active.
Conclusion

The <symbol> and <use> elements are among the most elegant but sometimes frustrating aspects of SVG. The Shadow DOM barrier makes animating them trickier, but CSS Custom Properties act as a bridge. They let you pass colour, motion, and personality across that invisible boundary, resulting in cleaner, lighter, and, best of all, fun animations.


How To Leverage Component Variants In Penpot

Tue, 11/04/2025 - 11:00

This article is sponsored by Penpot

Since Brad Frost popularized the use of design systems in digital design way back in 2013, they’ve become an invaluable resource for organizations — and even individuals — that want to craft reusable design patterns that look and feel consistent.

But Brad didn’t just popularize design systems; he also gave us a framework for structuring them. And while we don’t have to follow that framework exactly (most people adapt it to their needs), a particularly important part of most design systems is variants: variations of a component. Component variants let us design components that are fundamentally the same as one another yet differ in controlled ways, so users recognize them immediately while each still provides clarity in its unique context.

This makes component variants just as important as the components themselves. They ensure that we aren’t creating too many components that have to be individually managed, even if they’re only mildly different from other components, and since component variants are grouped together, they also ensure organization and visual consistency.

And now we can use them in Penpot, the web-based, open-source design tool where design is expressed as code. In this article, you’ll learn about variants, their place in design systems, and how to use them effectively in Penpot.

Step 1: Get Your Design Tokens In Order

For the most part, what separates one variant from another is the design tokens that it uses. But what is a design token exactly?

Imagine a brand color, let’s say a color value equal to hsl(270 100 42) in CSS. We save it as a “design token” called color.brand.default so that we can reuse it more easily without having to remember the more cumbersome hsl(270 100 42).

From there, we might also create a second design token called background.button.primary.default and set it to color.brand.default, making them equal to the same color but with different names to establish semantic separation between the two. Referencing the value of one token from another token like this is often called “aliasing”, and the referencing token an “alias”.

This setup gives us the flexibility to change the value of the color document-wide, change the color used in the component (maybe by switching to a different token alias), or create a variant of the component that uses a different color. Ultimately, the goal is to be able to make changes in many places at once rather than one-by-one, mostly by editing the design token values rather than the design itself, at specific scopes rather than limiting ourselves to all-or-nothing changes. This also enables us to scale our design system without constraints.

With that in mind, here’s a rough idea of just a few color-related design tokens for a primary button with hover and disabled states:

Token name                           Token value
color.brand.default                  hsl(270 100 42)
color.brand.lighter                  hsl(270 100 52)
color.brand.lightest                 hsl(270 100 95)
color.brand.muted                    hsl(270 5 50)
background.button.primary.default    {color.brand.default}
background.button.primary.hover      {color.brand.lighter}
background.button.primary.disabled   {color.brand.muted}
text.button.primary.default          {color.brand.lightest}
text.button.primary.hover            {color.brand.lightest}
text.button.primary.disabled         {color.brand.lightest}
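To make the aliasing mechanics concrete, here's a minimal JavaScript sketch of how a reference like {color.brand.default} resolves, using token names from the table above. The resolver itself is purely illustrative; Penpot does this for you internally:

```javascript
// Illustrative sketch only: how curly-brace aliases resolve to raw values.
const tokens = {
  "color.brand.default": "hsl(270 100 42)",
  "color.brand.muted": "hsl(270 5 50)",
  "background.button.primary.default": "{color.brand.default}",
  "background.button.primary.disabled": "{color.brand.muted}",
};

function resolve( name, seen = new Set() ) {
  if ( seen.has( name ) ) throw new Error( `Circular alias: ${name}` );
  seen.add( name );

  const value = tokens[ name ];
  // An alias like "{color.brand.default}" points at another token:
  const match = /^\{(.+)\}$/.exec( value );
  return match ? resolve( match[ 1 ], seen ) : value;
}

console.log( resolve( "background.button.primary.default" ) );
// Logs "hsl(270 100 42)": change color.brand.default once, and every alias follows.
```

This is why editing a single base token can restyle every component that aliases it.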

To create a color token in Penpot, switch to the “Tokens” tab in the left panel, click on the plus (+) icon next to “Color”, then specify the name, value, and optional description.

For example:

  • Name: color.brand.default,
  • Value: hsl(270 100 42) (there’s a color picker if you need it).

It’s pretty much the same process for other types of design tokens.

Don’t worry, I’m not going to walk you through every design token, but I will show you how to create a design token alias. Simply repeat the steps above, but for the value, reference another color token instead (and make sure to include the curly braces):

  • Name: background.button.primary.default,
  • Value: {color.brand.default}

Now, if the value of the color changes, so will the background of the buttons. But also, if we want to decouple the color from the buttons, all we need to do is reference a different color token or value. Mikołaj Dobrucki goes into a lot more detail in another Smashing article, but it’s worth noting here that Penpot design tokens are platform-agnostic. They follow the standardized W3C DTCG format, which means that they’re compatible with other tools and easily export to all platforms, including web, iOS, and Android.
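For a rough idea of what that portability looks like on disk, here's a sketch of the two tokens above in DTCG-style JSON (built in JavaScript so it's easy to poke at). The $value/$type keys and the curly-brace alias syntax come from the draft specification; the exact file Penpot exports may differ:

```javascript
// A sketch of DTCG-format tokens; the exact exported file may differ.
const tokensJson = JSON.stringify(
  {
    color: {
      brand: {
        default: { $type: "color", $value: "hsl(270 100 42)" },
      },
    },
    background: {
      button: {
        primary: {
          // An alias: the token's value references another token by path.
          default: { $type: "color", $value: "{color.brand.default}" },
        },
      },
    },
  },
  null,
  2
);

console.log( tokensJson );
```

Because the format is standardized, the same file can feed web, iOS, and Android build pipelines.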

In the next couple of steps, we’ll create a button component and its variants while plugging different design tokens into different variants. You’ll see why doing this is so useful and how using design tokens in variants benefits design systems overall.

Step 2: Create The Component

You’ll need to create what’s called a “main” component, which is the one that you’ll update as needed going forward. Other components — the ones that you’ll actually insert into your designs — will be copies (or “instances”) of the main component, which is sort of the point, right? Update once, and the changes reflect everywhere.

Here’s one I made earlier, minus the colors:

To apply a design token, make sure that you’re on the “Tokens” tab and have the relevant layer selected, then select the design token that you want to apply to it:

It doesn’t matter which variant you create first, but you’ll probably want to go with the default one as a starting point, as I’ve done. Either way, to turn this button into a main component, select the button object via the canvas (or “Layers” tab), right-click on it, then choose the “Create component” option from the context menu (or just press Ctrl / ⌘ + K after selecting it).

Remember to name the component as well. You can do that by double-clicking on the name (also via the canvas or “Layers” tab).

Step 3: Create The Component Variants

To create a variant, select the main component and either hit the Ctrl / ⌘ + K keyboard shortcut, or click on the icon that reveals the “Create variant” tooltip (located in the “Design” tab in the right panel).

Next, while the variant is still selected, make the necessary design changes via the “Design” tab. Or, if you want to swap design tokens out for other design tokens, you can do that in the same way that you applied them to begin with, via the “Tokens” tab. Rinse and repeat until you have all of your variants on the canvas designed:

After that, as you might’ve guessed, you’ll want to name your variants. But avoid doing this via the “Layers” panel. Instead, select a variant and replace “Property 1” with a label that describes the differentiating property of each variant. Since my button variants in this example represent different states of the same button, I’ve named this “State”. This applies to all of the variants, so you only need to do this once.

Next to the property name, you’ll see “Value 1” or something similar. Edit that for each variant, for example, the name of the state. In my case, I’ve named them “Default”, “Hover”, and “Disabled”.

And yes, you can add more properties to a component. To do this, click on the nearby plus (+) icon. I’ll talk more about component variants at scale in a minute, though.

To see the component in action, switch to the “Assets” tab (located in the left panel) and drag the component onto the canvas to initialize one instance of it. Again, remember to choose the correct property value from the “Design” tab:

If you already have a Penpot design system, combining multiple components into one component with variants is not only easy and error-proof, but you might be good to go already if you’re using a robust property naming system that uses forward slashes (/). Penpot has put together a very straightforward guide, but the diagram below sums it up pretty well:

How Component Variants Work At Scale

Design tokens, components, and component variants — the triple-threat of design systems — work together, not just to create powerful yet flexible design systems, but sustainable design systems that scale. This is easier to accomplish when thinking ahead, starting with design tokens that separate the “what” from the “what for” using token aliases, despite how verbose that might seem at first.

For example, I used color.brand.lightest for the text color of every variant, but instead of plugging that color token in directly, I created aliases such as text.button.primary.default. This means that I can change the text color of any variant later without having to dive into the actual variant on the canvas, or force a change to color.brand.lightest that might impact a bunch of other components.

Because remember, while the component and its variants give us reusability of the button, the color tokens give us reusability of the colors, which might be used in dozens, if not hundreds, of other components. A design system is like a living, breathing ecosystem: some parts are connected, some aren’t, and some aren’t connected yet but might need to be later, and we need to be ready for that.

The good news is that Penpot makes all of this pretty easy to manage as long as you do a little planning beforehand.

Consider the following:

  • The design tokens that you’ll reuse (e.g., colors, font sizes, and so on),
  • Where design token aliases will be reused (e.g., buttons, headings, and so on),
  • Organizing the design tokens into sets,
  • Organizing the sets into themes,
  • Organizing the themes into groups,
  • The different components that you’ll need, and
  • The different variants and variant properties that you’ll need for each component.

Even the buttons that I designed here today can be scaled far beyond what I’ve already mocked up. Think of all the possible variants that might come up, such as a secondary button color, a tertiary color, a confirmation color, a warning color, a cancelled color, different colors for light and dark mode, not to mention more properties for more states, such as active and focus states. What if we want a whole matrix of variants, like where buttons in a disabled state can be hovered and where all buttons can be focused upon? Or where some buttons have icons instead of text labels, or both?

Designs can get very complicated, but once you’ve organized them into design tokens, components, and component variants in Penpot, they’ll actually feel quite simple, especially once you’re able to see them on the canvas, and even more so once you’ve made a significant change in just a few seconds without breaking anything.

Conclusion

This is how we make component variants work at scale. We get the benefits of reusability while keeping the flexibility to fork any aspect of our design system, big or small, without breaking out of it. And design tools like Penpot make it possible to not only establish a design system, but also express its design tokens and styles as code.


Fading Light And Falling Leaves (November 2025 Wallpapers Edition)

Fri, 10/31/2025 - 13:00

November can feel a bit gray in many parts of the world, so what better way to brighten the days than with a splash of colorful inspiration? For this month’s wallpapers edition, artists and designers from around the globe once again tickled their creativity and designed unique and inspiring wallpapers that are sure to bring some good vibes to your desktops and home screens.

As always, the wallpapers in this post come in a variety of screen resolutions and can be downloaded for free — just as it has been a monthly tradition here at Smashing Magazine for more than 14 years already. And since so many beautiful designs have seen the light of day since we first embarked on this monthly creativity adventure, we’ve also added a selection of oldies but goodies from our archives to the collection. Maybe one of your almost-forgotten favorites will catch your eye again this month?

A huge thank you to all the talented creatives who contributed their designs — this post wouldn’t be possible without your support! By the way, if you, too, would like to get featured in one of our upcoming wallpapers posts, please don’t hesitate to submit your design. We can’t wait to see what you’ll come up with! Happy November!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists full freedom to explore their creativity and express their emotions and experiences through their work. It is also why the themes of the wallpapers weren’t influenced by us in any way but were designed from scratch by the artists themselves.
Falling Into November

“Celebrate the heart of fall with cozy colors, crisp leaves, and the gentle warmth that only November brings.” — Designed by Libra Fire from Serbia.

Crown Me

Designed by Ricardo Gimenes from Spain.

Fireside Stories Under The Stars

“A cozy autumn evening comes alive as friends gather around a warm bonfire, sharing stories beneath a starry night sky. The glow of the fire contrasts beautifully with the cool, serene landscape, capturing the magic of friendship, warmth, and the quiet beauty of November nights.” — Designed by PopArt Studio from Serbia.

Lunchtime

Designed by Ricardo Gimenes from Spain.

Where Innovation Meets Design

“This artwork blends technology and creativity in a clean, modern aesthetic. Soft pastel tones and fluid shapes frame a central smartphone, symbolizing the fusion of innovation and human intelligence in mobile app development.” — Designed by Zco Corporation from the United States.

Colorful Autumn

“Autumn can be dreary, especially in November, when rain starts pouring every day. We wanted to summon better days, so that’s how this colourful November calendar was created. Open your umbrella and let’s roll!” — Designed by PopArt Studio from Serbia.

The Secret Cave

Designed by Ricardo Gimenes from Spain.

Sunset Or Sunrise

“November is autumn in all its splendor. Earthy colors, falling leaves, and afternoons in the warmth of the home. But it is also adventurous and exciting and why not, different. We sit in Bali contemplating Pura Ulun Danu Bratan. We don’t know if it’s sunset or dusk, but… does that really matter?” — Designed by Veronica Valenzuela Jimenez from Spain.

A Jelly November

“Been looking for a mysterious, gloomy, yet beautiful desktop wallpaper for this winter season? We’ve got you, as this month’s calendar marks Jellyfish Day. On November 3rd, we celebrate these unique, bewildering, and stunning marine animals. Besides adorning your screen, we’ve got you covered with some jellyfish fun facts: they aren’t really fish, they need very little oxygen, eat a broad diet, and shrink in size when food is scarce. Now that’s some tenacity to look up to.” — Designed by PopArt Studio from Serbia.

Winter Is Here

Designed by Ricardo Gimenes from Spain.

Moonlight Bats

“I designed some Halloween characters and then this idea came to my mind — a bat family hanging around in the moonlight. A cute and scary mood is just perfect for autumn.” — Designed by Carmen Eisendle from Germany.

Time To Give Thanks

Designed by Glynnis Owen from Australia.

Anbani

“Anbani means alphabet in Georgian. The letters that grow on that tree are the Georgian alphabet. It’s very unique!” — Designed by Vlad Gerasimov from Georgia.

Me And The Key Three

Designed by Bart Bonte from Belgium.

Outer Space

“We were inspired by nature around us and the universe above us, so we created an out-of-this-world calendar. Now, let us all stop for a second and contemplate on preserving our forests, let us send birds of passage off to warmer places, and let us think to ourselves — if not on Earth, could we find a home somewhere else in outer space?” — Designed by PopArt Studio from Serbia.

Captain’s Home

Designed by Elise Vanoorbeek from Belgium.

Deer Fall, I Love You

Designed by Maria Porter from the United States.

Holiday Season Is Approaching

Designed by ActiveCollab from the United States.

International Civil Aviation Day

“On December 7, we mark International Civil Aviation Day, celebrating those who prove day by day that the sky really is the limit. As the engine of global connectivity, civil aviation is now, more than ever, a symbol of social and economic progress and a vehicle of international understanding. This monthly calendar is our sign of gratitude to those who dedicate their lives to enabling everyone to reach their dreams.” — Designed by PopArt Studio from Serbia.

Peanut Butter Jelly Time

“November is the Peanut Butter Month so I decided to make a wallpaper around that. As everyone knows peanut butter goes really well with some jelly, so I made two sandwiches, one with peanut butter and one with jelly. Together they make the best combination.” — Designed by Senne Mommens from Belgium.

A Gentleman’s November

Designed by Cedric Bloem from Belgium.

Bug

Designed by Ricardo Gimenes from Spain.

Go To Japan

“November is the perfect month to go to Japan. Autumn is beautiful with its brown colors. Let’s enjoy it!” — Designed by Veronica Valenzuela from Spain.

The Kind Soul

“Kindness drives humanity. Be kind. Be humble. Be humane. Be the best of yourself!” — Designed by Color Mean Creative Studio from Dubai.

Mushroom Season

“It is autumn! It is raining and thus… it is mushroom season! It is the perfect moment to go to the forest and get the best mushrooms to do the best recipe.” — Designed by Verónica Valenzuela from Spain.

Tempestuous November

“By the end of autumn, ferocious Poseidon will part from tinted clouds and timid breeze. After this uneven clash, the sky once more becomes pellucid just in time for imminent luminous snow.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Cozy Autumn Cups And Cute Pumpkins

“Autumn coziness, which is created by fallen leaves, pumpkins, and cups of cocoa, inspired our designers for this wallpaper.” — Designed by MasterBundles from Ukraine.

November Nights On Mountains

“Those chill November nights when you see mountain tops covered with the first snow sparkling in the moonlight.” — Designed by Jovana Djokic from Serbia.

Coco Chanel

“Beauty begins the moment you decide to be yourself. (Coco Chanel)” — Designed by Tazi from Australia.

Stars

“I don’t know anyone who hasn’t enjoyed a cold night looking at the stars.” — Designed by Ema Rede from Portugal.

Welcome Home Dear Winter

“The smell of winter is lingering in the air. The time to be home! Winter reminds us of good food, of the warmth, the touch of a friendly hand, and a talk beside the fire. Keep calm and let us welcome winter.” — Designed by Acodez IT Solutions from India.

Happy Birthday C.S.Lewis!

“It’s C.S. Lewis’s birthday on November 29th, so I decided to create this ‘Chronicles of Narnia’ inspired wallpaper to honour this day.” — Designed by Safia Begum from the United Kingdom.

Autumn Choir

Designed by Hatchers from Ukraine / China.

Star Wars

Designed by Ricardo Gimenes from Spain.

Hello World, Happy November

“I often read messages at Smashing Magazine from the people in the southern hemisphere: ‘it’s spring, not autumn!’ So I wanted to design a wallpaper for both the northern and the southern hemispheres. Here it is, northerners and southerners, hope you like it!” — Designed by Agnes Swart from the Netherlands.

Get Featured Next Month

Feeling inspired? We’ll publish the December wallpapers on November 30, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We are already looking forward to it!


JavaScript For Everyone: Iterators

Mon, 10/27/2025 - 14:00

Hey, I’m Mat, but “Wilto” works too — I’m here to teach you JavaScript. Well, not here-here; technically, I’m over at Piccalil.li’s JavaScript for Everyone course to teach you JavaScript. The following is an excerpt from the Iterables and Iterators module: the lesson on Iterators.

Iterators are one of JavaScript’s more linguistically confusing topics, sailing easily over what is already a pretty high bar. There are iterables — array, Set, Map, and string — all of which follow the iterable protocol. To follow said protocol, an object must implement the iterable interface. In practice, that means that the object needs to include a [Symbol.iterator]() method somewhere in its prototype chain. Iterable protocol is one of two iteration protocols. The other iteration protocol is the iterator protocol.

See what I mean about this being linguistically fraught? Iterables implement the iterable iteration interface, and iterators implement the iterator iteration interface! If you can say that five times fast, then you’ve pretty much got the gist of it; easy-peasy, right?

No, listen, by the time you reach the end of this lesson, I promise it won’t be half as confusing as it might sound, especially with the context you’ll have from the lessons that precede it.

An iterable object follows the iterable protocol, which just means that the object has a conventional method for making iterators. The elements that it contains can be looped over with for…of.

An iterator object follows the iterator protocol, and the elements it contains can be accessed sequentially, one at a time.

To reiterate — a play on words for which I do not forgive myself, nor expect you to forgive me — an iterator object follows iterator protocol, and the elements it contains can be accessed sequentially, one at a time. Iterator protocol defines a standard way to produce a sequence of values, and optionally return a value once all possible values have been generated.

In order to follow the iterator protocol, an object has to — you guessed it — implement the iterator interface. In practice, that once again means that a certain method has to be available somewhere on the object's prototype chain. In this case, it’s the next() method that advances through the elements it contains, one at a time, and returns an object each time that method is called.

In order to meet the iterator interface criteria, the returned object must contain two properties with specific keys: one with the key value, representing the value of the current element, and one with the key done, a Boolean value that tells us if the iterator has advanced beyond the final element in the data structure. That’s not an awkward phrasing the editorial team let slip through: the value of that done property is true only when a call to next() results in an attempt to access an element beyond the final element in the iterator, not upon accessing the final element in the iterator. Again, a lot in print, but it’ll make more sense when you see it in action.
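Before looking at the built-in iterators, it can help to see the protocol satisfied by hand. Here's a minimal, hand-rolled iterator (a toy countdown of my own invention, not an example from the course): a next() method returning { value, done } objects, with done flipping to true only once we step past the final element.

```javascript
// A hand-rolled object implementing the iterator protocol.
function makeCountdown( from ) {
  let current = from;

  return {
    next() {
      if ( current > 0 ) {
        return { value: current--, done: false };
      }
      // done is only true once we've advanced *beyond* the final element:
      return { value: undefined, done: true };
    },
  };
}

const countdown = makeCountdown( 2 );

console.log( countdown.next() ); // { value: 2, done: false }
console.log( countdown.next() ); // { value: 1, done: false }
console.log( countdown.next() ); // { value: undefined, done: true }
```

Note that the call returning the final element (1) still reports done: false; only the call after it reports done: true.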

You’ve seen an example of a built-in iterator before, albeit briefly:

const theMap = new Map([
  [ "aKey", "A value." ]
]);

console.log( theMap.keys() );
// Result: Map Iterator { constructor: Iterator() }

That’s right: while a Map object itself is an iterable, Map’s built-in methods keys(), values(), and entries() all return Iterator objects. You’ll also remember that I looped through those using forEach (a relatively recent addition to the language). Used that way, an iterator is indistinguishable from an iterable:

const theMap = new Map([
  [ "key", "value" ]
]);

theMap.keys().forEach( thing => {
  console.log( thing );
});
// Result: key

All iterators are iterable; they all implement the iterable interface:

const theMap = new Map([
  [ "key", "value" ]
]);

theMap.keys()[ Symbol.iterator ];
// Result: function Symbol.iterator()

And if you’re angry about the increasing blurriness of the line between iterators and iterables, wait until you get a load of this “top ten anime betrayals” video candidate: I’m going to demonstrate how to interact with an iterator by using an array.

“BOO,” you surely cry, having been so betrayed by one of your oldest and most indexed friends. “Array is an iterable, not an iterator!” You are both right to yell at me in general, and right about array in specific — an array is an iterable, not an iterator. In fact, while all iterators are iterable, none of the built-in iterables are iterators.

However, when you call that [ Symbol.iterator ]() method — the one that defines an object as an iterable — it returns an iterator object created from an iterable data structure:

const theIterable = [ true, false ];
const theIterator = theIterable[ Symbol.iterator ]();

theIterable;
// Result: Array [ true, false ]

theIterator;
// Result: Array Iterator { constructor: Iterator() }

The same goes for Set, Map, and — yes — even strings:

const theIterable = "A string.";
const theIterator = theIterable[ Symbol.iterator ]();

theIterator;
// Result: String Iterator { constructor: Iterator() }

What we’re doing here manually — creating an iterator from an iterable using %Symbol.iterator% — is precisely how iterable objects work internally, and why they have to implement %Symbol.iterator% in order to be iterables. Any time you loop through an array, you’re actually looping through an iterator created from that Array. All built-in iterators are iterable. All built-in iterables can be used to create iterators.
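To tie the two protocols together, here's a sketch of a custom iterable (the posse object is my own example, not from the course): implementing [Symbol.iterator]() is all it takes for for…of and spread syntax to treat it like any built-in iterable, and handing back a fresh iterator on each call is what keeps it reusable.

```javascript
// A custom iterable: [Symbol.iterator]() returns a brand-new iterator each time.
const posse = {
  members: [ "Wyatt", "Bat" ],

  [ Symbol.iterator ]() {
    let index = 0;
    const members = this.members;

    return {
      next() {
        return index < members.length
          ? { value: members[ index++ ], done: false }
          : { value: undefined, done: true };
      },
    };
  },
};

for ( const name of posse ) {
  console.log( name ); // "Wyatt", then "Bat"
}

console.log( [ ...posse ] ); // [ 'Wyatt', 'Bat' ]
```

Because every loop asks for a fresh iterator, posse can be iterated over again and again, just like an array.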

Alternatively — preferably, even, since it doesn’t require you to graze up against %Symbol.iterator% directly — you can use the built-in Iterator.from() method to create an iterator object from any iterable:

const theIterator = Iterator.from([ true, false ]);

theIterator;
// Result: Array Iterator { constructor: Iterator() }

You remember how I mentioned that an iterator has to provide a next() method (that returns a very specific Object)? Calling that next() method steps through the elements that the iterator contains one at a time, with each call returning an instance of that Object:

const theIterator = Iterator.from([ 1, 2, 3 ]);

theIterator.next();
// Result: Object { value: 1, done: false }
theIterator.next();
// Result: Object { value: 2, done: false }
theIterator.next();
// Result: Object { value: 3, done: false }
theIterator.next();
// Result: Object { value: undefined, done: true }

You can think of this as a more controlled form of traversal than the traditional “wind it up and watch it go” for loops you’re probably used to — a method of accessing elements one step at a time, as-needed. Granted, you don’t have to step through an iterator in this way, since they have their very own Iterator.forEach method, which works exactly like you would expect — to a point:

const theIterator = Iterator.from([ true, false ]);

theIterator.forEach( element => console.log( element ) );
/* Result:
true
false
*/

But there’s another big difference between iterables and iterators that we haven’t touched on yet, and for my money, it actually goes a long way toward making linguistic sense of the two. You might need to humor me for a little bit here, though.

See, an iterable object is an object that is iterable. No, listen, stay with me: you can iterate over an Array, and when you’re done doing so, you can still iterate over that Array. It is, by definition, an object that can be iterated over; it is the essential nature of an iterable to be iterable:

const theIterable = [ 1, 2 ];

theIterable.forEach( el => {
  console.log( el );
});
/* Result:
1
2
*/

theIterable.forEach( el => {
  console.log( el );
});
/* Result:
1
2
*/

In a way, an iterator object represents the singular act of iteration. Internal to an iterable, it is the mechanism by which the iterable is iterated over, each time that iteration is performed. As a stand-alone iterator object — whether you step through it using the next method or loop over its elements using forEach — once iterated over, that iterator is past tense; it is iterated. Because they maintain an internal state, the essential nature of an iterator is to be iterated over, singular:

const theIterator = Iterator.from([ 1, 2 ]);

theIterator.next();
// Result: Object { value: 1, done: false }
theIterator.next();
// Result: Object { value: 2, done: false }
theIterator.next();
// Result: Object { value: undefined, done: true }

theIterator.forEach( el => console.log( el ) );
// Result: undefined

That makes for neat work when you’re using the built-in Iterator helper methods to, say, filter or extract part of an Iterator object:

const theIterator = Iterator.from([ "First", "Second", "Third" ]);

// Take the first two values from theIterator:
theIterator.take( 2 ).forEach( el => { console.log( el ); });
/* Result:
"First"
"Second"
*/

// theIterator now only contains anything left over
// after the above operation is complete:
theIterator.next();
// Result: Object { value: "Third", done: false }

Once you reach the end of an iterator, the act of iterating over it is complete. Iterated. Past-tense.

And so too is your time in this lesson, you might be relieved to hear. I know this was kind of a rough one, but the good news is: this course is iterable, not an iterator. This step in your iteration through it — this lesson — may be over, but the essential nature of this course is that you can iterate through it again. Don’t worry about committing all of this to memory right now — you can come back and revisit this lesson anytime.

Conclusion

I stand by what I wrote there, unsurprising as that probably is: this lesson is a tricky one, but listen, you got this. JavaScript for Everyone is designed to take you inside JavaScript’s head. Once you’ve started seeing how the gears mesh — seen the fingerprints left behind by the people who built the language, and the good, bad, and sometimes baffling decisions that went into that — no itera-, whether -ble or -tor will be able to stand in your way.

My goal is to teach you the deep magic — the how and the why of JavaScript, using the syntaxes you’re most likely to encounter in your day-to-day work, at your pace and on your terms. If you’re new to the language, you’ll walk away from this course with a foundational understanding of JavaScript worth hundreds of hours of trial-and-error. If you’re a junior developer, you’ll finish this course with a depth of knowledge to rival any senior.

I hope to see you there.

Kategorier: Amerikanska

Ambient Animations In Web Design: Practical Applications (Part 2)

Wed, 10/22/2025 - 15:00

First, a recap:

Ambient animations are the kind of passive movements you might not notice at first. However, they bring a design to life in subtle ways. Elements might subtly transition between colours, move slowly, or gradually shift position. Elements can appear and disappear, change size, or they could rotate slowly, adding depth to a brand’s personality.

In Part 1, I illustrated the concept of ambient animations by recreating the cover of a Quick Draw McGraw comic book as a CSS/SVG animation. But I know not everyone needs to animate cartoon characters, so in Part 2, I’ll share how ambient animation works in three very different projects: Reuven Herman, Mike Worth, and EPD. Each demonstrates how motion can enhance brand identity, personality, and storytelling without dominating a page.

Reuven Herman

Los Angeles-based composer Reuven Herman didn’t just want a website to showcase his work. He wanted it to convey his personality and the experience clients have when working with him. Working with musicians is always creatively stimulating: they’re critical, engaged, and full of ideas.

Reuven’s classical and jazz background reminded me of the work of album cover designer Alex Steinweiss.

I was inspired by the depth and texture that Alex brought to his designs for over 2,500 unique covers, and I wanted to incorporate his techniques into my illustrations for Reuven.

To bring Reuven’s illustrations to life, I followed a few core ambient animation principles:

  • Keep animations slow and smooth.
  • Loop seamlessly and avoid abrupt changes.
  • Use layering to build complexity.
  • Avoid distractions.
  • Consider accessibility and performance.

The first step in my animation is to morph the stave lines between states. They’re made up of six paths with multi-coloured strokes. I started with the wavy lines:

<!-- Wavy state -->
<g fill="none" stroke-width="2" stroke-linecap="round">
  <path id="p1" stroke="#D2AB99" d="[…]"/>
  <path id="p2" stroke="#BDBEA9" d="[…]"/>
  <path id="p3" stroke="#E0C852" d="[…]"/>
  <path id="p4" stroke="#8DB38B" d="[…]"/>
  <path id="p5" stroke="#43616F" d="[…]"/>
  <path id="p6" stroke="#A13D63" d="[…]"/>
</g>

Although CSS now enables animation between path points, the number of points in each state needs to match. GSAP — via its MorphSVG plugin, which provides the morphSVG property — doesn’t have that limitation and can animate between states that have different numbers of points, making it ideal for this type of animation. I defined the new set of straight paths:

// Straight state
const Waves = {
  p1: "[…]",
  p2: "[…]",
  p3: "[…]",
  p4: "[…]",
  p5: "[…]",
  p6: "[…]"
};

Then, I created a GSAP timeline that repeats backwards and forwards over six seconds:

const waveTimeline = gsap.timeline({
  repeat: -1,
  yoyo: true,
  defaults: { duration: 6, ease: "sine.inOut" }
});

Object.entries(Waves).forEach(([id, d]) => {
  waveTimeline.to(`#${id}`, { morphSVG: d }, 0);
});

Another ambient animation principle is to use layering to build complexity. Think of it like building a sound mix. You want variation in rhythm, tone, and timing. In my animation, three rows of musical notes move at different speeds:

<path id="notes-row-1"/>
<path id="notes-row-2"/>
<path id="notes-row-3"/>

The duration of each row’s animation is also defined using GSAP, from 100 to 300 seconds, to give the overall animation a parallax-style effect:

const noteRows = [
  { id: "#notes-row-1", duration: 300, y: 100 }, // slowest
  { id: "#notes-row-2", duration: 200, y: 250 }, // medium
  { id: "#notes-row-3", duration: 100, y: 400 }  // fastest
];
[…]

The next layer contains a shadow cast by the piano keys, which slowly rotates around its centre:

gsap.to("#shadow", {
  y: -10,
  rotation: -2,
  transformOrigin: "50% 50%",
  duration: 3,
  ease: "sine.inOut",
  yoyo: true,
  repeat: -1
});

And finally, the piano keys themselves, which rotate at the same time but in the opposite direction to the shadow:

gsap.to("#g3-keys", {
  y: 10,
  rotation: 2,
  transformOrigin: "50% 50%",
  duration: 3,
  ease: "sine.inOut",
  yoyo: true,
  repeat: -1
});

The complete animation can be viewed in my lab. By layering motion thoughtfully, the site feels alive without ever dominating the content, which is a perfect match for Reuven’s energy.

Mike Worth

As I mentioned earlier, not everyone needs to animate cartoon characters, but I do occasionally. Mike Worth is an Emmy award-winning film, video game, and TV composer who asked me to design his website. For the project, I created and illustrated the character of orangutan adventurer Orango Jones.

Orango proved to be the perfect subject for ambient animations and features on every page of Mike’s website. He takes the reader on an adventure, and along the way, they get to experience Mike’s music.

For Mike’s “About” page, I wanted to combine ambient animations with interactions. Orango is in a cave, where he has found a stone tablet with faint markings that serve as a navigation aid to elsewhere on Mike’s website. The illustration contains a hidden feature — an easter egg: when someone presses Orango’s magnifying glass, moving shafts of light stream into the cave and onto the tablet.

I also added an anchor around a hidden circle, which I positioned over Orango’s magnifying glass, as a large tap target to toggle the light shafts on and off by changing the data-lights value on the SVG:

<a href="javascript:void(0);" id="light-switch" title="Lights on/off">
  <circle cx="700" cy="1000" r="100" opacity="0" />
</a>

Then, I added two descendant selectors to my CSS, which adjust the opacity of the light shafts depending on the data-lights value:

[data-lights="lights-off"] .light-shaft {
  opacity: .05;
  transition: opacity .25s linear;
}

[data-lights="lights-on"] .light-shaft {
  opacity: .25;
  transition: opacity .25s linear;
}
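The article doesn’t show the script that flips the data-lights value, so here is one hypothetical way to wire it up. The selector for the SVG root and the use of the #light-switch id are assumptions based on the markup above:

```javascript
// Hypothetical toggle wiring (not shown in the article): flips between
// the two data-lights values the CSS descendant selectors respond to.
function nextLightsState(current) {
  return current === "lights-on" ? "lights-off" : "lights-on";
}

// Browser-only wiring, guarded so the helper above stays testable
// outside the browser. Assumes data-lights lives on the SVG root.
if (typeof document !== "undefined") {
  const svg = document.querySelector("svg[data-lights]");
  const lightSwitch = document.getElementById("light-switch");
  lightSwitch.addEventListener("click", () => {
    svg.dataset.lights = nextLightsState(svg.dataset.lights);
  });
}
```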

A slow and subtle rotation adds natural movement to the light shafts:

@keyframes shaft-rotate {
  0% { rotate: 2deg; }
  50% { rotate: -2deg; }
  100% { rotate: 2deg; }
}

Which is only visible when the light toggle is active:

[data-lights="lights-on"] .light-shaft {
  animation: shaft-rotate 20s infinite;
  transform-origin: 100% 0;
}

When developing any ambient animation, performance matters: even though CSS animations are lightweight, features like blur filters and drop shadows can still strain lower-powered devices. It’s also critical to consider accessibility, so respect someone’s prefers-reduced-motion preference:

@media screen and (prefers-reduced-motion: reduce) {
  html {
    scroll-behavior: auto;
    animation-duration: 1ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 1ms !important;
  }
}
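One caveat worth flagging: that CSS rule only tames CSS animations and transitions. GSAP tweens run from JavaScript, so they need their own check. A minimal, hypothetical helper — it takes the window object as a parameter so it fails safely where matchMedia is unavailable:

```javascript
// Hypothetical helper: returns true when the visitor has asked for
// reduced motion, false when matchMedia is missing or no preference
// is set. Pass in window (or a stand-in) explicitly.
function prefersReducedMotion(win) {
  return Boolean(
    win &&
    typeof win.matchMedia === "function" &&
    win.matchMedia("(prefers-reduced-motion: reduce)").matches
  );
}

// Usage sketch: skip building JS-driven tweens entirely, e.g.
// if (!prefersReducedMotion(window)) { /* create GSAP timelines */ }
```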

When an animation feature is purely decorative, consider adding aria-hidden="true" to keep it from cluttering up the accessibility tree:

<a href="javascript:void(0);" id="light-switch" aria-hidden="true">
  […]
</a>

With Mike’s Orango Jones, ambient animation shifts from subtle atmosphere to playful storytelling. Light shafts and soft interactions weave narrative into the design without stealing focus, proving that animation can support both brand identity and user experience. See this animation in my lab.

EPD

Moving away from composers, EPD is a property investment company. They commissioned me to design creative concepts for a new website. A quick search for property investment companies will usually leave you feeling underwhelmed by their interchangeable website designs. They include full-width banners with faded stock photos of generic city skylines or ethnically diverse people shaking hands.

For EPD, I wanted to develop a distinctive visual style that the company could own, so I proposed graphic, stylised skylines that reflect both EPD’s brand and its global portfolio. I made them using various-sized circles that recall the company’s logo mark.

The point of an ambient animation is that it doesn’t dominate. It’s a background element and not a call to action. If someone’s eyes are drawn to it, it’s probably too much, so I dial back the animation until it feels like something you’d only catch if you’re really looking. I created three skyline designs, including Dubai, London, and Manchester.

In each of these ambient animations, the wheels rotate and the large circles change colour at random intervals.

Next, I exported a layer containing the circle elements I want to change colour.

<g id="banner-dots">
  <circle class="data-theme-fill" […]/>
  <circle class="data-theme-fill" […]/>
  <circle class="data-theme-fill" […]/>
  […]
</g>

Once again, I used GSAP to select groups of circles that flicker like lights across the skyline:

function animateRandomDots() {
  const circles = gsap.utils.toArray("#banner-dots circle")
  const numberToAnimate = gsap.utils.random(3, 6, 1)
  const selected = gsap.utils.shuffle(circles).slice(0, numberToAnimate)

Then, at two-second intervals, the fill colour of those circles changes from the teal accent to the same off-white colour as the rest of my illustration:

  gsap.to(selected, {
    fill: "color(display-p3 .439 .761 .733)",
    duration: 0.3,
    stagger: 0.05,
    onComplete: () => {
      gsap.to(selected, {
        fill: "color(display-p3 .949 .949 .949)",
        duration: 0.5,
        delay: 2
      })
    }
  })

  gsap.delayedCall(gsap.utils.random(1, 3), animateRandomDots)
}

animateRandomDots()
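The selection step in animateRandomDots can be sketched without GSAP at all. This dependency-free version — a hypothetical stand-in for gsap.utils.shuffle and gsap.utils.random — shows the logic:

```javascript
// Dependency-free sketch of the selection step: shuffle a copy of the
// dots, then take a random handful (between min and max, inclusive).
function pickRandomDots(dots, min = 3, max = 6) {
  const shuffled = [...dots];
  // Fisher-Yates shuffle, the job gsap.utils.shuffle does above.
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const count = min + Math.floor(Math.random() * (max - min + 1));
  return shuffled.slice(0, count);
}
```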

The result is a skyline that gently flickers, as if the city itself is alive. Finally, I rotated the wheel. Here, there was no need to use GSAP as this is possible using CSS rotate alone:

<g id="banner-wheel">
  <path stroke="#F2F2F2" stroke-linecap="round" stroke-width="4" d="[…]"/>
  <path fill="#D8F76E" d="[…]"/>
</g>

#banner-wheel {
  transform-box: fill-box;
  transform-origin: 50% 50%;
  animation: rotateWheel 30s linear infinite;
}

@keyframes rotateWheel {
  to { transform: rotate(360deg); }
}

CSS animations are lightweight and ideal for simple, repetitive effects, like fades and rotations. They’re easy to implement and don’t require libraries. GSAP, on the other hand, offers far more control as it can handle path morphing and sequence timelines. The choice of which to use depends on whether I need the precision of GSAP or the simplicity of CSS.

By keeping the wheel turning and the circles glowing, the skyline animations stay in the background yet give the design a distinctive feel. They avoid stock photo clichés while reinforcing EPD’s brand identity and are proof that, even in a conservative sector like property investment, ambient animation can add atmosphere without detracting from the message.

Wrapping up

From Reuven’s musical textures to Mike’s narrative-driven Orango Jones and EPD’s glowing skylines, these projects show how ambient animation adapts to context. Sometimes it’s purely atmospheric, like drifting notes or rotating wheels; other times, it blends seamlessly with interaction, rewarding curiosity without getting in the way.

Whether it echoes a composer’s improvisation, serves as a playful narrative device, or adds subtle distinction to a conservative industry, the same principles hold true:

Keep motion slow, seamless, and purposeful so that it enhances, rather than distracts from, the design.


AI In UX: Achieve More With Less

Fri, 10/17/2025 - 10:00

I have made a lot of mistakes with AI over the past couple of years. I have wasted hours trying to get it to do things it simply cannot do. I have fed it terrible prompts and received terrible output. And I have definitely spent more time fighting with it than I care to admit.

But I have also discovered that when you stop treating AI like magic and start treating it like what it actually is (a very enthusiastic intern with zero life experience), things start to make more sense.

Let me share what I have learned from working with AI on real client projects across user research, design, development, and content creation.

How To Work With AI

Here is the mental model that has been most helpful for me. Treat AI like an intern with zero experience.

An intern fresh out of university has lots of enthusiasm and qualifications, but no real-world experience. You would not trust them to do anything unsupervised. You would explain tasks in detail. You would expect to review their work multiple times. You would give feedback and ask them to try again.

This is exactly how you should work with AI.

The Basics Of Prompting

I am not going to pretend to be an expert. I have just spent way too much time playing with this stuff because I like anything shiny and new. But here is what works for me.

  • Define the role.
    Start with something like “Act as a user researcher” or “Act as a copywriter.” This gives the AI context for how to respond.
  • Break it into steps.
    Do not just say “Analyze these interview transcripts.” Instead, say “I want you to complete the following steps. One, identify recurring themes. Two, look for questions users are trying to answer. Three, note any objections that come up. Four, output a summary of each.”
  • Define success.
    Tell it what good looks like. “I am looking for a report that gives a clear indication of recurring themes and questions in a format I can send to stakeholders. Do not use research terminology because they will not understand it.”
  • Make it think.
    Tell it to think deeply about its approach before responding. Get it to create a way to test for success (known as a rubric) and iterate on its work until it passes that test.

Here is a real prompt I use for online research:

Act as a user researcher. I would like you to carry out deep research online into [brand name]. In particular, I would like you to focus on what people are saying about the brand, what the overall sentiment is, what questions people have, and what objections people mention. The goal is to create a detailed report that helps me better understand the brand perception.

Think deeply about your approach before carrying out the research. Create a rubric for the report to ensure it is as useful as possible. Keep iterating until the report scores extremely high on the rubric. Only then, output the report.

That second paragraph (the bit about thinking deeply and creating a rubric), I basically copy and paste into everything now. It is a universal way to get better output.

Learn When To Trust It

You should never fully trust AI. Just like you would never fully trust an intern you have only just met.

To begin with, double-check absolutely everything. Over time, you will get a sense of when it is losing its way. You will spot the patterns. You will know when to start a fresh conversation because the current one has gone off the rails.

But even after months of working with it daily, I still check its work. I still challenge it. I still make it cite sources and explain its reasoning.

The key is that even with all that checking, it is still faster than doing it yourself. Much faster.

Using AI For User Research

This is where AI has genuinely transformed my work. I use it constantly for five main things.

Online Research

I love AI for this. I can ask it to go and research a brand online. What people are saying about it, what questions they have, what they like, and what frustrates them. Then do the same for competitors and compare.

This would have taken me days of trawling through social media and review sites. Now it takes minutes.

I recently did this for an e-commerce client. I wanted to understand what annoyed people about the brand and what they loved. I got detailed insights that shaped the entire conversion optimization strategy. All from one prompt.

Analyzing Interviews And Surveys

I used to avoid open-ended questions in surveys. They were such a pain to review. Now I use them all the time because AI can analyze hundreds of text responses in seconds.

For interviews, I upload the transcripts and ask it to identify recurring themes, questions, and requests. I always get it to quote directly from the transcripts so I can verify it is not making things up.

The quality is good. Really good. As long as you give it clear instructions about what you want.

Making Sense Of Data

I am terrible with spreadsheets. Put me in front of a person and I can understand them. Put me in front of data, and my eyes glaze over.

AI has changed that. I upload spreadsheets to ChatGPT and just ask questions. “What patterns do you see?” “Can you reformat this?” “Show me this data in a different way.”

Microsoft Clarity now has Copilot built in, so you can ask it questions about your analytics data. Triple Whale does the same for e-commerce sites. These tools are game changers if you struggle with data like I do.

Research Projects

This is probably my favorite technique. In ChatGPT and Claude, you can create projects. In other tools, they are called spaces. Think of them as self-contained folders where everything you put in is available to every conversation in that project.

When I start working with a new client, I create a project and throw everything in. Old user research. Personas. Survey results. Interview transcripts. Documentation. Background information. Site copy. Anything I can find.

Then I give it custom instructions. Here is one I use for my own business:

Act as a business consultant and marketing strategy expert with good copywriting skills. Your role is to help me define the future of my UX consultant business and better articulate it, especially via my website. When I ask for your help, ask questions to improve your answers and challenge my assumptions where appropriate.

I have even uploaded a virtual board of advisors (people I wish I had on my board) and asked AI to research how they think and respond as they would.

Now I have this project that knows everything about my business. I can ask it questions. Get it to review my work. Challenge my thinking. It is like having a co-worker who never gets tired and has a perfect memory.

I do this for every client project now. It is invaluable.

Creating Personas

AI has reinvigorated my interest in personas. I had lost heart in them a bit. They took too long to create, and clients always said they already had marketing personas and did not want to pay to do them again.

Now I can create what I call functional personas. Personas that are actually useful to people who work in UX. Not marketing fluff about what brands people like, but real information about what questions they have and what tasks they are trying to complete.

I upload all my research to a project and say:

Act as a user researcher. Create a persona for [audience type]. For this persona, research the following information: questions they have, tasks they want to complete, goals, states of mind, influences, and success metrics. It is vital that all six criteria are addressed in depth and with equal vigor.

The output is really good. Detailed. Useful. Based on actual data rather than pulled out of thin air.

Here is my challenge to anyone who thinks AI-generated personas are somehow fake. What makes you think your personas are so much better? Every persona is a story of a hypothetical user. You make judgment calls when you create personas, too. At least AI can process far more information than you can and is brilliant at pattern recognition.

My only concern is that relying too heavily on AI could disconnect us from real users. We still need to talk to people. We still need that empathy. But as a tool to synthesize research and create reference points? It is excellent.

Using AI For Design And Development

Let me start with a warning. AI is not production-ready. Not yet. Not for the kind of client work I do, anyway.

Three reasons why:

  1. It is slow if you want something specific or complicated.
  2. It can be frustrating because it gets close but not quite there.
  3. And the quality is often subpar. Unpolished code, questionable design choices, that kind of thing.

But that does not mean it is not useful. It absolutely is. Just not for final production work.

Functional Prototypes

If you are not too concerned with matching a specific design, AI can quickly prototype functionality in ways that are hard to match in Figma. Because Figma is terrible at prototyping functionality. You cannot even create an active form field in a Figma prototype. It’s the biggest thing people do online other than click links — and you cannot test it.

Tools like Relume and Bolt can create quick functional mockups that show roughly how things work. They are great for non-designers who just need to throw together a prototype quickly. For designers, they can be useful for showing developers how you want something to work.

But you can spend ages getting them to put a hamburger menu on the right side of the screen. So use them for quick iteration, not pixel-perfect design.

Small Coding Tasks

I use AI constantly for small, low-risk coding work. I am not a developer anymore. I used to be, back when dinosaurs roamed the earth, but not for years.

AI lets me create the little tools I need. A calculator that calculates the ROI of my UX work. An app for running top task analysis. Bits of JavaScript for hiding elements on a page. WordPress plugins for updating dates automatically.

Just before running my workshop on this topic, I needed a tool to create calendar invites for multiple events. All the online services wanted £16 a month. I asked ChatGPT to build me one. One prompt. It worked. It looked rubbish, but I did not care. It did what I needed.

If you are a developer, you should absolutely be using tools like Cursor by now. They are invaluable for pair programming with AI. But if you are not a developer, just stick with Claude or Bolt for quick throwaway tools.

Reviewing Existing Services

There are some great tools for getting quick feedback on existing websites when budget and time are tight.

If you need to conduct a UX audit, Wevo Pulse is an excellent starting point. It automatically reviews a website based on personas and provides visual attention heatmaps, friction scores, and specific improvement recommendations. It generates insights in minutes rather than days.

Now, let me be clear. This does not replace having an experienced person conduct a proper UX audit. You still need that human expertise to understand context, make judgment calls, and spot issues that AI might miss. But as a starting point to identify obvious problems quickly? It is a great tool. Particularly when budget or time constraints mean a full audit is not on the table.

For e-commerce sites, Baymard has UX Ray, which analyzes flaws based on their massive database of user research.

Checking Your Designs

Attention Insight has taken thousands of hours of eye-tracking studies and trained AI on it to predict where people will look on a page. It has about 90 to 96 percent accuracy.

You upload a screenshot of your design, and it shows you where attention is going. Then you can play around with your imagery and layout to guide attention to the right place.

It is great for dealing with stakeholders who say, “People won’t see that.” You can prove they will. Or equally, when stakeholders try to crowd the interface with too much stuff, you can show them attention shooting everywhere.

I use this constantly. Here is a real example from a pet insurance company. They had photos of a dog, cat, and rabbit for different types of advice. The dog was far from the camera. The cat was looking directly at the camera, pulling all the attention. The rabbit was half off-frame. Most attention went to the cat’s face.

I redesigned it using AI-generated images, where I could control exactly where each animal looked. Dog looking at the camera. Cat looking right. Rabbit looking left. All the attention drawn into the center. Made a massive difference.

Creating The Perfect Image

I use AI all the time for creating images that do a specific job. My preferred tools are Midjourney and Gemini.

I like Midjourney because, visually, it creates stunning imagery. You can dial in the tone and style you want. The downside is that it is not great at following specific instructions.

So I produce an image in Midjourney that is close, then upload it to Gemini. Gemini is not as good at visual style, but it is much better at following instructions. “Make the guy reach here” or “Add glasses to this person.” I can get pretty much exactly what I want.

The other thing I love about Midjourney is that you can upload a photograph and say, “Replicate this style.” This keeps consistency across a website. I have a master image I use as a reference for all my site imagery to keep the style consistent.

Using AI For Content

Most clients give you terrible copy. Our job is to improve the user experience or conversion rate, and anything we do gets utterly undermined by bad copy.

I have completely stopped asking clients for copy since AI came along. Here is my process.

Build Everything Around Questions

Once I have my information architecture, I get AI to generate a massive list of questions users will ask. Then I run a top task analysis where people vote on which questions matter most.

I assign those questions to pages on the site. Every page gets a list of the questions it needs to answer.

Get Bullet Point Answers From Stakeholders

I spin up the content management system with a really basic theme. Just HTML with very basic formatting. I go through every page and assign the questions.

Then I go to my clients and say: “I do not want you to write copy. Just go through every page and bullet point answers to the questions. If the answer exists on the old site, copy and paste some text or link to it. But just bullet points.”

That is their job done. Pretty much.

Let AI Draft The Copy

Now I take control. I feed ChatGPT the questions and bullet points and say:

Act as an online copywriter. Write copy for a webpage that answers the question [question]. Use the following bullet points to answer that question: [bullet points]. Use the following guidelines: Aim for a ninth-grade reading level or below. Sentences should be short. Use plain language. Avoid jargon. Refer to the reader as you. Refer to the writer as us. Ensure the tone is friendly, approachable, and reassuring. The goal is to [goal]. Think deeply about your approach. Create a rubric and iterate until the copy is excellent. Only then, output it.

I often upload a full style guide as well, with details about how I want it to be written.

The output is genuinely good. As a first draft, it is excellent. Far better than what most stakeholders would give me.

Stakeholders Review And Provide Feedback

That goes into the website, and stakeholders can comment on it. Once I get their feedback, I take the original copy and all their comments back into ChatGPT and say, “Rewrite using these comments.”

Job done.

The great thing about this approach is that even if stakeholders make loads of changes, they are making changes to a good foundation. The overall quality still comes out better than if they started with a blank sheet.

It also makes things go smoother because you are not criticizing their content, where they get defensive. They are criticizing AI content.

Tools That Help

If your stakeholders are still giving you content, Hemingway Editor is brilliant. Copy and paste text in, and it tells you how readable and scannable it is. It highlights long sentences and jargon. You can use this to prove to clients that their content is not good web copy.

If you pay for the pro version, you get AI tools that will rewrite the copy to be more readable. It is excellent.

What This Means for You

Let me be clear about something. None of this is perfect. AI makes mistakes. It hallucinates. It produces bland output if you do not push it hard enough. It requires constant checking and challenging.

But here is what I know from two years of using this stuff daily. It has made me faster. It has made me better. It has freed me up to do more strategic thinking and less grunt work.

A report that would have taken me five days now takes three hours. That is not an exaggeration. That is real.

Overall, AI probably gives me a 25 to 33 percent increase in what I can do. That is significant.

Your value as a UX professional lies in your ideas, your questions, and your thinking. Not your ability to use Figma. Not your ability to manually review transcripts. Not your ability to write reports from scratch.

AI cannot innovate. It cannot make creative leaps. It cannot know whether its output is good. It cannot understand what it is like to be human.

That is where you come in. That is where you will always come in.

Start small. Do not try to learn everything at once. Just ask yourself throughout your day: Could I do this with AI? Try it. See what happens. Double-check everything. Learn what works and what does not.

Treat it like an enthusiastic intern with zero life experience. Give it clear instructions. Check its work. Make it try again. Challenge it. Push it further.

And remember, it is not going to take your job. It is going to change it. For the better, I think. As long as we learn to work with it rather than against it.


The Grayscale Problem

Mon, 10/13/2025 - 12:00

Last year, a study found that cars are steadily getting less colourful. In the US, around 80% of cars are now black, white, gray, or silver, up from 60% in 2004. This trend has been attributed to cost savings and consumer preferences. Whatever the reasons, the result is hard to deny: a big part of daily life isn’t as colourful as it used to be.

The colourfulness of mass consumer products is hardly the bellwether for how vibrant life is as a whole, but the study captures a trend a lot of us recognise — offline and on. From colour to design to public discourse, a lot of life is getting less varied, more grayscale.

The web is caught in the same current. There is plenty right with it — it retains plenty of its founding principles — but its state is not healthy. From AI slop to shoddy service providers to enshittification, the digital world faces its own grayscale problem.

This bears talking about. One of life’s great fallacies is that things get better over time on their own. They can, but it’s certainly not a given. I don’t think the moral arc of the universe bends towards justice on its own; I think it bends wherever it is dragged, kicking and screaming, by those with the will and the means to do so.

Much of the modern web, and the forces of optimisation and standardisation that drive it, bear an uncanny resemblance to the trend of car colours. Processes like market research and A/B testing — the process by which two options are compared to see which ‘performs’ better on clickthrough, engagement, etc. — have their value, but they don’t lend themselves to particularly stimulating design choices.

The spirit of free expression that made the formative years of the internet so exciting — think GeoCities, personal blogging, and so on — is on the slide.

The ongoing transition to a more decentralised, privacy-aware Web3 holds some promise. Two-thirds of the world’s population now has online access — though that still leaves plenty of work to do — with a wealth of platforms allowing billions of people to connect. The dream of a digital world that is open, connected, and flat endures, but is tainted.

Monopolies

One of the main sources of concern for me is that although more people are online than ever, they are concentrating on fewer and fewer sites. A study published in 2021 found that activity is concentrated in a handful of websites. Think Google, Amazon, Facebook, Instagram, and, more recently, ChatGPT:

“So, while there is still growth in the functions, features, and applications offered on the web, the number of entities providing these functions is shrinking. [...] The authority, influence, and visibility of the top 1,000 global websites (as measured by network centrality or PageRank) is growing every month, at the expense of all other sites.”

Monopolies by nature reduce variance, both through their domination of the market and, understandably in fairness, their internal preferences for consistency. And, let’s be frank, they have a vested interest in crushing any potential upstarts.

Dominant websites often fall victim to what I like to call Internet Explorer Syndrome, where their dominance breeds a certain amount of complacency. Why improve your quality when you’re sitting on 90% market share? No wonder the likes of Google are getting worse.

The most immediate sign of this is obviously how sites are designed and how they look. A lot of the big players look an awful lot like each other. Even personal websites are built atop third-party website builders. Millions of people wind up using the same handful of templates, and that’s if they have their own website at all. On social media, we are little more than a profile picture and a pithy tagline. The rest is boilerplate.

Should there be sleek, minimalist, ‘grayscale’ design systems and websites? Absolutely. But there should be colourful, kooky ones too, and if anything, they’re fading away. Do we really want to spend our online lives in the digital equivalent of Levittowns? Even logos are contriving to be less eye-catching. It feels like a matter of time before every major logo is a circle in a pastel colour.

The arrival of Artificial Intelligence into our everyday lives (and a decent chunk of the digital services we use) has put all of this into overdrive. Amalgamating — and hallucinating from — content that was already trending towards a perfect average, it is grayscale in its purest form.

Mix all the colours together, and what do you get? A muddy gray gloop.

I’m not railing against best practice. A lot of conventions have become the standard for good reason. One could just as easily shake their fist at the sky and wonder why all newspapers look the same, or all books. I hope the difference here is clear, though.

The web is a flexible enough domain that I think it belongs in the realm of architecture. A city where all buildings look alike has a soul-crushing quality about it. The same is true, I think, of the web.

In the Oscar Wilde play Lady Windermere’s Fan, a character quips that a cynic “knows the price of everything and the value of nothing.” In fairness, another quips back that a sentimentalist “sees an absurd value in everything, and doesn’t know the market price of any single thing.”

The sweet spot is somewhere in between. Structure goes a long way, but life needs a bit of variety too.

So, how do we go about bringing that variety? We probably shouldn’t hold our breath waiting for the big players to lead the way. They have the most to lose, after all. Why risk being colourful or dynamic if it impacts the bottom line?

We, the citizens of the web, have more power than we realise. This is the web, remember, a place where if you can imagine it, odds are you can make it. And at zero cost. No materials to buy and ship, no shareholders to appease. A place as flexible — and limitless — as the web has no business being boring.

There are plenty of ways, big and small, of keeping this place colourful. Whether our digital footprints are on third-party websites or ones we build ourselves, we needn’t toe the line.

Colour seems an appropriate place to start. When given the choice, try something audacious rather than safe. The worst that can happen is that it doesn’t work. It’s not like the sunk cost of painting a room; if you don’t like the palette, you simply change the hex codes. The same is true of fonts, icons, and other building blocks of the web.
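That hex-swap idea is literal on the web: keep the palette in one place as CSS custom properties, and changing the whole site’s mood is a one-object edit. A minimal sketch, where the theme names and colour values are hypothetical:

```javascript
// Palettes live in one place; swapping is just picking a different object.
// Theme names and hex values here are made up for illustration.
const themes = {
  grayscale: { "--bg": "#f5f5f5", "--ink": "#222222", "--accent": "#888888" },
  audacious: { "--bg": "#1b0a33", "--ink": "#ffe9f0", "--accent": "#ff4d6d" },
};

// Serialise a theme into a :root rule. The rest of the stylesheet only
// ever refers to var(--bg), var(--ink), var(--accent), so nothing else
// needs to change when you swap palettes.
function themeToCss(name) {
  const declarations = Object.entries(themes[name])
    .map(([prop, value]) => `  ${prop}: ${value};`)
    .join("\n");
  return `:root {\n${declarations}\n}`;
}

console.log(themeToCss("audacious"));
```

In a browser you would inject this into a `<style>` element (or call `document.documentElement.style.setProperty` per property); the point is that trying a bolder palette costs one edit, not a redesign.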

As an example, a couple of friends and I listen to and review albums occasionally as a hobby. On the website, the palette of each review page reflects the album artwork.

I couldn’t tell you if reviews ‘perform’ better or worse than if they had a grayscale palette, because I don’t care. I think it’s a lot nicer to look at. And for those wondering, yes, I have tried to make every page meet WCAG AA accessibility standards. Vibrant and accessible aren’t mutually exclusive.

Another great way of bringing vibrancy to the web is a degree of randomisation. Bruno Simon, of Three.js Journey and awesome-portfolio fame, weaves random generation into a lot of his projects, and the results are gorgeous. What’s more, they feel familiar, natural, because life is full of wildcards.

This needn’t be in fancy 3D models. You could lightly rotate images to create a more informal, photo album mood, or chuck in the occasional random link in a list of recommended articles, just to shake things up.
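Both of those wildcards fit in a few lines. A sketch of the idea, with hypothetical article titles and an arbitrary tilt range:

```javascript
// A curated list with one wildcard spliced in. Titles are hypothetical.
const recommended = [
  "Why Your Footer Matters",
  "Designing With Grids",
  "The Case For Serif Fonts",
];

// Pick one random item, e.g. to slot into a "recommended articles" list.
function randomPick(items) {
  return items[Math.floor(Math.random() * items.length)];
}

// A light tilt between -3 and 3 degrees, for a photo-album feel.
// Apply the result as a CSS transform on each image.
function randomTilt(maxDegrees = 3) {
  const angle = (Math.random() * 2 - 1) * maxDegrees;
  return `rotate(${angle.toFixed(2)}deg)`;
}

console.log(randomPick(recommended));
console.log(randomTilt()); // e.g. "rotate(-1.37deg)"
```

In the browser you would loop over the gallery and set `img.style.transform = randomTilt()`; each page load gets its own slightly different arrangement.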

In a lot of ways, it boils down to an attitude of just trying stuff out. Make your own font, give the site a sepia filter, and add that easter egg you keep thinking about. Just because someone, somewhere has already done it doesn’t mean you can’t do it your own way. And who knows, maybe your way stumbles onto someplace wholly new.

I’m wary of being too prescriptive. I don’t have the keys to a colourful web. No one person does. A vibrant community is the sum total of its people. What keeps things interesting is individuals trying wacky ideas and putting them out there. Expression for expression’s sake. Experimentation for experimentation’s sake. Tinkering for tinkering’s sake.

As users, there’s also plenty of room to be adventurous and try out open source alternatives to the software monopolies that shape so much of today’s Web. Being active in the communities that shape those tools helps to sustain a more open, collaborative digital world.

Although there are lessons to be taken from it, we won’t get a more colourful web by idealising the past or pining to get back to the ‘90s. Nor is there any point in resisting new technologies. AI is here; the choice is whether we use it or it uses us. We must have the courage to carry forward what still holds true, drop what doesn’t, and explore new ideas with a spirit of play.


I do think there’s a broader discussion to be had about the extent to which A/B tests, bottom lines, and focus groups seem to dictate much of how the modern web looks and feels. With sites being squeezed tighter and tighter by dwindling advertising revenues, and AI answers muscling in on search traffic, the corporate entities behind larger websites can’t justify doing anything other than what is safe and proven, for fear of shrinking their slice of the pie.

Lest we forget, though, most of the web isn’t beholden to those types of pressure. From pet projects to wikis to forums to community news outlets to all manner of other things, there are countless reasons for websites to exist, and they needn’t take design cues from the handful of sites slugging it out at the top.

Connected with this is the dire need for digital literacy — ‘the confident and critical use of a full range of digital technologies for information, communication and basic problem-solving in all aspects of life.’ For as long as using third-party platforms is a necessity rather than a choice, the needle’s only going to move so much.

There’s a reason why Minecraft is the world’s best-selling game. People are creative. When given the tools — and the opportunity — that creativity will manifest in weird and wonderful ways. That game is a lot of things, but gray ain’t one of them.

The web has all of that flexibility and more. It is a manifestation of imagination. Imagination trends towards colour, not grayness. It doesn’t always feel like it, but where the internet goes is decided by its citizens. The internet is ours. If we want to, we can make it technicolor.

