Smashing Magazine

Recent content in Articles on Smashing Magazine — For Web Designers And Developers

The 101 Course On Crafting 404 Pages

By Shelby Rogers, 2018-11-01

A lot of people toss around the phrase, “It’s not about the destination. It’s about the journey.” And those people are telling the truth — until they hit a roadblock.

Missed turns or poorly-given directions can cost someone hours on a trip. When you’re on a mission, those hours spent trying to find what you need could ruin the entire experience.

It doesn’t always have to end in disaster. This more optimal scenario could occur: you take a wrong turn, but after stopping at a nearby gas station, you leave with more than just accurate directions to your final destination. You’ve also managed to score a free ice cream cone from the sweet old lady working behind the gas station’s register because she saw you were lost… and wanted to cheer you up.

Often, website visitors can wind up getting turned around. It’s not always their fault. They could’ve typed in the wrong URL (common) or clicked on a broken link (our mistake). Whatever the reason, you now have confused people who only wanted to engage with your website in some way and now can’t. You hold the reins on their navigation. You can guide them back to where you wanted them to go all along, or you can leave them frustrated and in the dark. Do they need to make a U-turn? Did they get off at the wrong exit? Only you can tell them, and the best way to do so is through a 404 error page.

Your website’s 404 error page can deliver either of these scenarios with regard to getting your visitors back on their buyer’s journey. A lackluster 404 page irritates your visitors and chases them away into the hands of a competing website that better guides them to what they’re looking for. That lackluster 404 page has bland messaging with minimal visual elements. It might include a variation of the same serif text: “This page does not exist.” That’s like your web users asking you for directions and you telling them nothing more than “Well, what you’re looking for isn’t here. Good luck.”

Even brands with seemingly clever branding can neglect a 404 page! The owner of this sad excuse for an error page will remain anonymous (but it rhymes with Bards Tragainst Bubanity).

Unfortunately, even some of the world’s best brands use these 404 pages. No navigation. No interesting text. Nothing that reflects their brand messaging. Visitors are left even more disappointed in their encounter than before.

However, there are some 404 pages that go above and beyond. Rather than the stark white of a standard 404 error page, these pages take an opportunity to speak to users in a more personal tone. Excellent 404 pages are exactly like getting an unexpected treat from a friendly face. Well-crafted 404 pages can take visitors who are lost and confused, lift them into a happier mood, and redirect them to a more helpful page on your website.


Take Amazon, for instance. On Amazon Prime Day 2018, Amazon learned firsthand the importance of a decent 404 page. Sure, buyers were still frustrated upon reaching a 404 page — even if it included a puppy. However, could you imagine how much more irritated buyers would’ve been had the 404 page looked clinical, cold, and unhelpful?

Regardless of what tone you want to take, what visuals you want to use, or what copy will best engage your readers, a great 404 page does one thing above all else: it makes website visitors okay with not finding what they need — if only for a moment — and directs them to where they need to go.

While 404 pages vary greatly, the best ones all seem to do two things well:

  1. Support the company’s overall brand and messaging;
  2. Successfully redirect website visitors elsewhere on the site through clear navigation and support.

Thematically, there are a few ways to accomplish the ‘perfect’ 404 page:

1. Nail Down The Overall Tone

If content isn’t your brand’s strong suit, this could be a struggle. However, if you have a sense of your brand’s voice and messaging, you can quickly identify where you can offer something unexpected. Visitors are already going to be disappointed when they hit your 404 page; they’re not getting what they wanted. Your 404 page is an opportunity to show that your brand has humans behind its marketing rather than robotic, cold, automated messages seen elsewhere. In short, move beyond the “this page is unavailable” and its variants.

Regardless of the tone, good 404 pages work like magicians. The best illusionists often acknowledge they’re magicians; they don’t pretend to be something they’re not. 404 pages own up to being an error page; the copy and visuals often reflect that. And then, like any good magician, 404 pages pull the attention away from the problem and put that attention elsewhere. Typically, that’s done with copy that matches the visual elements.

Here are some themes and moods that successful 404 pages have leveraged in the past.

Crack A Joke

A joke (even a corny one) can do wonders for alleviating awkwardness or inconvenience. However, unless your brand is built on crude humor (e.g. Cards Against Humanity, which ironically doesn’t have a good 404 page), it’s best to make the jokes either tongue-in-cheek or punny rather than too crass. This example from ModCloth makes a quick pun but keeps the mood light.

Happy and snappy, this 404 page aligns with the rest of the brand’s fun copy.

Get Clever

It might not be outright funny, but it’s something that gets a visitor’s attention shortly after arriving on your page. It can be a little sassy, snarky, even unexpected. This 404 page from Blizzard Entertainment does a great job at flipping the script both with its visual tone and its copy.

Sarcasm pays off well for the gaming giant’s 404 page.

Be Friendly

A prime example would be LEGO Shop’s 404 page with a friendly customer service rep (albeit a LEGO rep). The friendliness can come from an inviting design or warm copy. Ultimately, it’s anything that culminates in a sense of “oh hey, we’re really sorry about that. Let us try to fix it.”

“If your company’s brand excels in customer service and customer care, maybe taking a tone of genuine friendliness would be most appropriate to carry over brand messaging. If that’s the case, treat your 404 page like an extension of your guest services window.”

Integrate Interactivity

People love to click on things, especially if they’re engaging with the 404 page on desktop. And if they’re engaging with your website, all the better! One of the best examples online of interactivity on a 404 page is from Kualo. The site hosting provider gamified its 404 page into a recreation of Space Invaders, complete with the ability to earn extra lives as you level up. Even more impressive is that Kualo actually offers discounts on its hosting for certain thresholds of points that users reach.

The gamification of Kualo’s 404 keeps users coming back for more chances to win.

Be Thought-Provoking

Yes, your 404 pages can even be educational! 404 pages can offer up resources and links to other helpful spots on your website. It’s an unexpected distraction that could easily keep guests entertained by more information. National Public Radio (NPR) does this exceptionally well. The media outlet provides a collection of features with one major similarity: the stories are about things which have also disappeared.

Topical/Pop-Culture Based

Use this one with caution, as there’s a very good chance you’ll have to change your 404 message if you’re going to be topical. Pop culture references move fast; if you’re not careful, you’ve spent too much time developing a 404 page that will be irrelevant in two weeks. (And this is a cardinal sin for any organization with a target market of Millennials or younger.) The Spotify 404 page recently underwent a shift to keep up with trends. Prior to doing a quick play on Kanye West’s “808s & Heartbreak,” the 404 page featured lyrics from Justin Bieber’s “Sorry.”

2. Craft Visual Elements To Match That Tone

Once you have an idea of the proper tone for your 404 page, visuals are an important next step in the process. Visuals are often the first element of a 404 page people will notice — and thus, the first representation of that page’s desired tone.

Static visuals help emphasize the page copy. Adding in light animation can work together with the text to further a message or tone. For example, Imgur’s 404 page brings its illustrations to life by making the eyes of its characters follow a visitor’s cursor.


Interactivity among the visual elements gives people an opportunity to do what frustrated internet users love to do — click on everything in an attempt to make something useful happen.

3. Nail Down The Navigation Options

You know what tone you want your business to strike. You’ve got an idea of the visuals you’ll use to present that tone. Your website visitors will think it’s great and fun — but only for a moment. Your website still has to get them to what they’re looking for. Clear navigation is the next big step in directing your lost website visitors toward their goals. A 404 page that’s cute but lacks good navigation design is like a sweet old man who is kind but gives you the world’s worst directions.

“After making a good first impression with your 404 page, the immediate next step should be getting website visitors off it and to where they want to be. There should always be clear indications on where they should go next.”

For example, Shutterstock’s 404 page offers three distinct options. Visitors can go back to the previous page, which is helpful if they clicked on the wrong link. They can return to the homepage for more of a hard restart in their navigation — useful if they came in from a search engine, found a broken link, but aren’t quite ready to give up on the website and want to look around. The final option is to report a problem. If someone has been scouring your website for minutes on end and they have an idea of what they’re looking for, they can report that they might have found an issue. At the very least, it gets your web visitors involved with your company, and your development team gets feedback about the accessibility of your website.


In addition to clear navigation, these other navigation-based elements could help your visitors out even more:

  • Chatbots / live chat: Bots are often received one of two ways. Users either find them incredibly annoying or relatively helpful. Bots that pop up within a second of landing on a page often lead visitors to click out of a site entirely as the bot seems intrusive. However, your website can use bots by simply adding a “Click to chat” option. This invites lost visitors who want your help to engage with the bot rather than the bot making a potentially annoying first move.
  • Search Bars: This element can do wonders for websites with a high volume of pages and information. A search bar could also offer up answers to common questions or redirect to an FAQ.

And one final navigation note — make sure those navigation tactics are just as efficient on mobile as they are on desktop. Treat your 404 page as you would any other. In order for it to succeed, it should be easily navigable to a variety of users, especially in a mobile-first world.

While the look of your 404 page is critical, you ideally never want anyone to find it on your website. Knowing the most common 404 errors on your website could give you insights into how to reduce those issues.

How To Track 404 Events Using Google Analytics

What You Need To Start Tracking

The code provided will report 404 events within Google Analytics, so you must have an up-and-running account there to take advantage of this tutorial. You also need access to your site’s 404 template and (this is important) the 404 page must preserve the URL structure of the typed/clicked page. This means that your server can’t just redirect to your 404 page. You must serve the 404 template dynamically with the exact URL that is throwing the error. Most web servers (such as Apache) allow you to do this with a variety of rewrite rules.
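In Apache, for example, an ErrorDocument directive pointing at a local path serves the 404 template internally without changing the requested URL. A minimal sketch — the template path /404.php is an assumption:

# .htaccess or vhost config (sketch): serve the 404 template
# internally while preserving the originally requested URL.
ErrorDocument 404 /404.php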

Tracking 404 Errors With Google Analytics

With Google Analytics, tracking explicit 404 errors is straightforward. Just ensure that your main Google Analytics tracking script is in place, and then add the following code to your 404 error page/template:

<script>
  // Create Tracker - Send to GA
  ga('create', 'UA-11111111-11');
  ga('send', {
    hitType: 'event',
    eventCategory: '404 Response',
    eventAction: window.location.href,
    eventLabel: document.referrer
  });
</script>

You will need to swap out the ID of your specific Google Analytics account. After that, the script works by sending an “event” to Google Analytics. The category is “404 Response”, the action uses JavaScript to pass the URL that throws the error, and the label uses JavaScript to pass along the previous URL the user was on. Through all of this data, you can then see what URLs cause 404 events and where people are accessing those URLs.

Tracking 404 Errors With Google Tag Manager

More and more web managers have decided to move to Google Tag Manager. This tool gives them the capability of embedding a whole host of scripts through a single container. It’s especially useful if you have a lot of tracking scripts from several providers. To begin tracking 404s through Tag Manager, start by creating a “Variable” called “Page Title Variable.” This variable type is a “JavaScript” variable, and the Variable Name is “document.title”:


Essentially, we’re creating a variable that checks for a page’s given title. This is how we will check if we are on a 404 page.

Then create a “Trigger” called “404 Page Title Trigger.” The type is “Page View” and the trigger fires when the “Page Title Variable” contains “404 — Page Not Found” or whatever it is your 404 page title displays as within the browser.


Lastly, you will need to create a “Tag” called “404 Event Tag.” The tag type is “Universal Analytics,” and it passes along the event category, action, and label.


The Variable, Trigger, and Tag all work to pass along the relevant data directly to Google Analytics.

404 Event Reporting

No matter your tracking method (be it through Tag Manager or direct event beacons), your reporting should be the same within Google Analytics. Under “Behavior,” you will see an item called “Events.” Here you will see all reported 404 events. The “Event Action” and “Event Label” dimensions will give you the pertinent data of what URLs are throwing 404 errors and their referring source.


With this in place, you can now regularly monitor your 404 errors and take the necessary steps to minimize their occurrence. In doing so, you optimize your referral sources and provide the best user experience, keeping conversions and engagement on the right path.

What To Do With Your Google Analytics Results

Now that you know how to monitor those 404 errors, what’s a developer to do? The key takeaway from tracking 404 occurrences is to look for patterns that result in those errors. The data should help you determine user intent, cluing you into what your users want. Ideally, you’ll see trends in what brings people to your 404 page, and you can apply that knowledge to adjust your website accordingly.

If your website visitors are stumbling while searching for a page, take the opportunity to create content that fills in that hole. That way people get results they hadn’t previously seen from your site.

Some 404 events could be avoided with a tweak to your website’s design. Make sure the navigation on your pages is clear and directs users to logical ending points. The fix could even be as simple as changing descriptions on a page to paint a clearer picture for users.

Putting It All Together

Tone, images, and navigation — these three elements can transform any 404 page from a ghost town into a pleasant serendipitous stop for your website visitors. And while you don’t want them to stay there forever, you can certainly make sure the time they spend with you is enjoyable before sending them on their way. By regularly monitoring your 404 errors, you can also alleviate some of the ditches, poorly-marked signage, and potholes that frequently derail users. Being proactive and reactive with 404 errors ultimately improves the journey and the destination for your website visitors.

(yk, ra)

Colorful Inspiration For Gray Days (November 2018 Wallpapers)

By Cosima Mielke, 2018-10-31

How about some colorful inspiration for those gray and misty November days? We might have something for you. Just like every month for more than nine years now, artists and designers from across the globe have once again tickled their creativity and designed unique wallpapers that are bound to breathe some fresh life into your desktop.

The wallpapers all come in versions with and without a calendar for November 2018 and can be downloaded for free. As a little bonus goodie, we added a selection of favorites from past November editions to this post. Because, well, some things are too good to be forgotten somewhere down in the archives, right? Enjoy!


Please note that:

  • All images can be clicked on and lead to the preview of the wallpaper,
  • You can feature your work in our magazine by taking part in our Desktop Wallpaper Calendar series. We are regularly looking for creative designers and artists to be featured on Smashing Magazine. Are you one of them?

Outer Space

“This November, we are inspired by the nature around us and the universe above us, so we created an out of this world calendar. Now, let us all stop for a second and contemplate on preserving our forests, let us send birds of passage off to warmer places, and let us think to ourselves — if not on Earth, could we find a home somewhere else in outer space?” — Designed by PopArt Studio from Serbia.

Stars

“I don’t know anyone who hasn’t enjoyed a cold night looking at the stars.” — Designed by Ema Rede from Portugal.

Running Through Autumn Mist

“A small tribute to my German Shepherd who adds joy to those grey November days.” — Designed by Franke Margrete from The Netherlands.

Magical Foliage

“Foliage is the most mystical process of nature to ever occur. Leaves bursting and falling in shades of red, orange, yellow and making the landscape look magical.” — Designed by ATop Digital from India.

Sad Kitty

Designed by Ricardo Gimenes from Sweden.

The Light Of Lights

“Diwali is all about celebrating the victory of good over evil and light over darkness. The hearts of the vast majority are as dark as the night of the new moon. The house is lit with lamps, but the heart is full of the darkness of ignorance. Wake up from the slumber of ignorance. Realize the constant and eternal light of the Soul which neither rises nor sets through meditation and make this festive month even brighter and more vibrant.” — Designed by Intranet Software from India.

Her

“I already had an old sketch that I wanted to try to convert to a digital illustration. The colors of the drawing were inspired by nature that at this time of the year has both the warm of fallen leaves as it has the cold greens of the leaves that make it through winter.” — Designed by Ana Matos from Portugal.

Mesmerizing Monsoon

“Monsoon is all about the chill, the tranquillity that whizzes around, a light drizzle that splashes off our faces, the musty aroma of the earth and more than anything - a sense of liberation. The designer here has depicted the soul of monsoon, one that you would want to heartily soak in.” — Designed by Nafis Mohamed from London.

Universal Children’s Day

“Universal Children’s Day, 20 November. It feels like a dream world, it invites you to let your imagination flow, see the details, and find the child inside you.” — Designed by Luis Costa from Portugal.

Stay Little

“It is believed that childhood is the happiest time cause this period of life cannot be matched with any other phases of life. During this month of November, let’s continue celebrating Children’s Day no matter how old you are, by sharing wishes to your children and childhood friends.” — Designed by Taxi Software from India.

Gezelligheid

“This month’s wallpaper is dedicated to the magical place of Barcelona that has filled my soul with renewed purpose and hope. I wanted to recreate the enchanting Parc Güell where I’m celebrating Thanksgiving with the people I’ve met that have given me so much in so little time.” — Designed by Priscilla Li from the United States.

Falling Rainbow

“I have a maple tree in my yard that sometimes turns these colors in the fall - red on the outer leaves, then yellow, and the inner leaves still green.” — Designed by Hannah Joy Patterson from South Carolina, USA.

Origami In The Night Sky

Designed by Rita Gaspar from Portugal.

Oldies But Goodies

Below you’ll find some November goodies from past years. Please note that these wallpapers don’t come with a calendar.

Colorful Autumn

“Autumn can be dreary, especially in November, when rain starts pouring every day. We wanted to summon better days, so that’s how this colourful November calendar was created. Open your umbrella and let’s roll!” — Designed by PopArt Studio from Serbia.

Time To Give Thanks!

Designed by Glynnis Owen from Australia.

No Shave Movember

“The goal of Movember is to ‘change the face of men’s health.’” — Designed by Suman Sil from India.

Welcome Home Dear Winter

“The smell of winter is lingering in the air. The time to be home! Winter reminds us of good food, of the warmth, the touch of a friendly hand, and a talk beside the fire. Keep calm and let us welcome winter.” — Designed by Acodez IT Solutions from India.

Deer Fall, I Love You!

Designed by Maria Porter from the United States.

Mushroom Season!

“It is autumn! It is raining and thus… it is mushroom season! It is the perfect moment to go to the forest and get the best mushrooms to do the best recipe.” — Designed by Verónica Valenzuela from Spain.

Little Mademoiselle P

“Black-and-white drawing of a little girl.” Designed by Jelena Tšekulajeva from Estonia.

Autumn Wreath

“I love the changing seasons — especially the autumn colors and festivals here around this time of year!” — Designed by Rachel Litzinger from the United States.

November Nights On Mountains

“Those chill November nights when you see mountain tops covered with the first snow sparkling in the moonlight.” — Designed by Jovana Djokic from Serbia.

Hello World, Happy November!

“I often read messages at Smashing Magazine from the people in the southern hemisphere ‘it’s spring, not autumn!’ so I’d liked to design a wallpaper for the northern and the southern hemispheres. Here it is, northerners and southerns, hope you like it!” — Designed by Agnes Swart from the Netherlands.

A Gentleman’s November

Designed by Cedric Bloem from Belgium.

Branches

“The design of trees has always fascinated me. Each one has it’s own unique entanglement of branches. With or without leaves they are always intriguing. Take some time to enjoy the trees around you — and the one on this wallpaper if you’d like!” — Designed by Rachel Litzinger from Chiang Rai, Thailand.

Simple Leaves

Designed by Nicky Somers from Belgium.

Captain’s Home

Designed by Elise Vanoorbeek (Doud) from Belgium.

Me And the Key Three

“This wallpaper is based on screenshots from my latest browser game (I’m an indie games designer).” — Designed by Bart Bonte from Belgium.

Red Leaves

Designed by Evacomics from Singapore.

Autumn Choir

Designed by Hatchers from Ukraine / China.

Real Artists Ship

“A tribute to Steve Jobs, from the crew at Busy Building Things.” Designed by Andrew Power from Canada.

Late Autumn

“The late arrival of Autumn.” Designed by Maria Castello Solbes from Spain.

Autumn Impression

Designed by Agnieszka Malarczyk from Poland.

Flying

Designed by Nindze.com from Russia.

Join In Next Month!

Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience throughout their works. This is also why the themes of the wallpapers weren’t in any way influenced by us, but rather designed from scratch by the artists themselves.

Thank you to all designers for their participation. Join in next month!


Measuring Performance With Server Timing

By Drew McLellan, 2018-10-30

When undertaking any sort of performance optimisation work, one of the very first things we learn is that before you can improve performance you must first measure it. Without being able to measure the speed at which something is working, we can’t tell if the changes being made are improving the performance, having no effect, or even making things worse.

Many of us will be familiar with working on a performance problem at some level. That might be something as simple as trying to figure out why JavaScript on your page isn’t kicking in soon enough, or why images are taking too long to appear on bad hotel wifi. The answer to these sorts of questions is often found in a very familiar place: your browser’s developer tools.

Over the years, developer tools have been improved to help us troubleshoot these sorts of performance issues in the front end of our applications. Browsers now even have performance audits built right in. These can help track down front-end issues, but audits can also surface another source of slowness that we can’t fix in the browser: slow server response times.

“Time to First Byte”

There’s very little that browser optimisations can do to improve a page that is simply slow to build on the server. That cost is incurred between the browser making the request for the file and receiving the response. Studying your network waterfall chart in developer tools will show this delay under the category of “Waiting (TTFB)”. This is how long the browser waits between making the request and receiving the response.


In performance terms, this is known as Time to First Byte: the amount of time it takes before the server starts sending something the browser can begin to work with. Encompassed in that wait time is everything the server needs to do to build the page. For a typical site, that might involve routing the request to the correct part of the application, authenticating the request, making multiple calls to backend systems such as databases, and so on. It could involve running content through templating systems, making API calls out to third-party services, and maybe even things like sending emails or resizing images. Any work that the server does to complete a request is squashed into that TTFB wait that the user experiences in their browser.

Inspecting a document request shows the time the browser spends waiting for the response from the server.

So how do we reduce that time and start delivering the page more quickly to the user? Well, that’s a big question, and the answer depends on your application. That is the work of performance optimisation itself. What we need to do first is measure the performance so that the benefit of any changes can be judged.

The Server Timing Header

The job of Server Timing is not to help you actually time activity on your server. You’ll need to do the timing yourself using whatever toolset your backend platform makes available to you. Rather, the purpose of Server Timing is to specify how those measurements can be communicated to the browser.

The way this is done is very simple, transparent to the user, and has minimal impact on your page weight. The information is sent as a simple set of HTTP response headers.

Server-Timing: db;dur=123, tmpl;dur=56

This example communicates two different timing points named db and tmpl. These aren’t part of the spec - these are names that we’ve picked, in this case to represent some database and template timings respectively.

The dur property is stating the number of milliseconds the operation took to complete. If we look at the request in the Network section of Developer Tools, we can see that the timings have been added to the chart.

A new Server Timing section appears, showing the timings set with the Server-Timing HTTP header.

The Server-Timing header can take multiple metrics separated by commas:

Server-Timing: metric, metric, metric

Each metric can specify three possible properties:

  1. A short name for the metric (such as db in our example)
  2. A duration in milliseconds (expressed as dur=123)
  3. A description (expressed as desc="My Description")

Each property is separated with a semicolon as the delimiter. We could add descriptions to our example like so:

Server-Timing: db;dur=123;desc="Database", tmpl;dur=56;desc="Template processing"

The names are replaced with descriptions when provided.

The only property that is required is the name. Both dur and desc are optional and can be used where required. For example, if you needed to debug a timing problem that was happening on one server or data center and not another, it might be useful to add that information into the response without an associated timing.

Server-Timing: datacenter;desc="East coast data center", db;dur=123;desc="Database", tmpl;dur=56;desc="Template processing"

This would then show up along with the timings.

The "East coast data center" value is shown, even though it has no timings.

One thing you might notice is that the timing bars don’t show up in a waterfall pattern. This is simply because Server Timing doesn’t attempt to communicate the sequence of timings, just the raw metrics themselves.

Implementing Server Timing

The exact implementation within your own application is going to depend on your specific circumstance, but the principles are the same. The steps are always going to be:

  1. Time some operations
  2. Collect together the timing results
  3. Output the HTTP header

In pseudocode, the generation of response might look like this:

startTimer('db')
getInfoFromDatabase()
endTimer('db')

startTimer('geo')
geolocatePostalAddressWithAPI('10 Downing Street, London, UK')
endTimer('geo')

outputHeader('Server-Timing', getTimerOutput())

The basics of implementing something along those lines should be straightforward in any language. A very simple PHP implementation could use the microtime() function for timing operations, and might look something along the lines of the following.

class Timers {
    private $timers = [];

    public function startTimer($name, $description = null) {
        $this->timers[$name] = [
            'start' => microtime(true),
            'desc'  => $description,
        ];
    }

    public function endTimer($name) {
        $this->timers[$name]['end'] = microtime(true);
    }

    public function getTimers() {
        $metrics = [];

        if (count($this->timers)) {
            foreach ($this->timers as $name => $timer) {
                // Convert seconds to milliseconds for the dur property.
                $timeTaken = ($timer['end'] - $timer['start']) * 1000;
                $output = sprintf('%s;dur=%f', $name, $timeTaken);

                if ($timer['desc'] != null) {
                    $output .= sprintf(';desc="%s"', addslashes($timer['desc']));
                }
                $metrics[] = $output;
            }
        }

        return implode(', ', $metrics);
    }
}

A test script would use it as below, here using the usleep() function to artificially create a delay in the running of the script to simulate a process that takes time to complete.

$Timers = new Timers();

$Timers->startTimer('db');
usleep(200000); // Simulate a 200ms database call.
$Timers->endTimer('db');

$Timers->startTimer('tpl', 'Templating');
usleep(300000); // Simulate 300ms of template processing.
$Timers->endTimer('tpl');

$Timers->startTimer('geo', 'Geocoding');
usleep(400000); // Simulate a 400ms geocoding API call.
$Timers->endTimer('geo');

header('Server-Timing: ' . $Timers->getTimers());

Running this code generated a header that looked like this:

Server-Timing: db;dur=201.098919, tpl;dur=301.271915;desc="Templating", geo;dur=404.520988;desc="Geocoding"

The Server Timings set in the example show up in the Timings panel with the delays configured in our test script.

Existing Implementations

Considering how handy Server Timing is, there are relatively few implementations that I could find. The server-timing NPM package offers a convenient way to use Server Timing from Node projects.
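If you’d like to see the general shape of a Node implementation without relying on the package (whose exact API may differ), a minimal sketch using only Node’s built-in http module might look like this — the metric name and description are assumptions:

const http = require('http');

http.createServer((req, res) => {
  // Time some work on the server.
  const start = process.hrtime.bigint();
  // ... database calls, templating, and so on would happen here ...
  const durMs = Number(process.hrtime.bigint() - start) / 1e6;

  // Report the measurement to the browser via the Server-Timing header.
  res.setHeader('Server-Timing', `app;dur=${durMs};desc="App processing"`);
  res.end('Hello');
}).listen(8080);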

If you use a middleware based PHP framework tuupola/server-timing-middleware provides a handy option too. I’ve been using that in production on Notist for a few months, and I always leave a few basic timings enabled if you’d like to see an example in the wild.

For browser support, the best I’ve seen is in Chrome DevTools, and that’s what I’ve used for the screenshots in this article.

Considerations

Server Timing itself adds very minimal overhead to the HTTP response sent back over the wire. The header is very small and is generally safe to send without worrying about restricting it to only internal users. Even so, it’s worth keeping names and descriptions short so that you’re not adding unnecessary overhead.

More of a concern is the extra work you might be doing on the server to time your page or application. Adding extra timing and logging can itself have an impact on performance, so it’s worth implementing a way to turn this on and off when required.

Using a Server Timing header is a great way to make sure all timing information from both the front end and the back end of your application is accessible in one location. Provided your application isn’t too complex, it can be easy to implement and you can be up and running within a very short amount of time.


(ra)

The CSS Working Group At TPAC: What’s New In CSS?

By Rachel Andrew, 2018-10-26

Last week, I attended W3C TPAC as well as the CSS Working Group meeting there. Various changes were made to specifications, and discussions were had which I feel are of interest to web designers and developers. In this article, I’ll explain a little bit about what happens at TPAC, and show some examples and demos of the things we discussed at TPAC for CSS in particular.

What Is TPAC?

TPAC is the Technical Plenary / Advisory Committee Meetings Week of the W3C: a chance for all of the various working groups that are part of the W3C to get together under one roof. The event is held in a different part of the world each year; this year it was in Lyon, France. At TPAC, Working Groups such as the CSS Working Group have their own meetings, just as we do at other times of the year. However, because we are all in one building, people from other groups can more easily come as observers, and cross-working group interests can be discussed.

Attendees of TPAC are typically members of one or more of the Working Groups, working on W3C technologies. They will either be representatives of a member organization or Invited Experts. As with any other meetings of W3C Working Groups, the minutes of all of the discussions held at TPAC will be openly available, usually as IRC logs scribed during the meetings.

The CSS Working Group

The CSS Working Group meet face-to-face at TPAC and on at least two other occasions during the year; this is in addition to our weekly phone calls. At all of our meetings, the various issues raised on the specifications are discussed, and decisions made. Some issues are kept for face-to-face discussions due to the benefits of being able to have them with the whole group, or just being able to all get around a whiteboard or see a demo on screen.

When an issue is discussed in any meeting (whether face-to-face or teleconference), the relevant GitHub issue is updated with the minutes of the discussion. This means if you have an issue you want to keep track of, you can star it and see when it is updated. The full IRC minutes are also posted to the www-style mailing list.

Here is a selection of the things we discussed that I think will be of most interest to you.

CSS Scrollbars

The CSS Scrollbars specification seeks to give a standard way of styling the size and color of scrollbars. If you have Firefox Nightly, you can test it out. To see the examples below, use Firefox Nightly and enable the flags layout.css.scrollbar-width.enabled and layout.css.scrollbar-color.enabled by visiting about:config in Firefox Nightly.

The specification gives us two new properties: scrollbar-width and scrollbar-color. The scrollbar-width property can take a value of auto, thin, none, or a length (such as 1em). It looks as if the length value may be removed from the specification. As you can imagine, it would be possible for a web developer to make a very unusable scrollbar by playing with the width, so it may be better to allow the browser to decide the exact width that makes sense and instead simply indicate a thin or default scrollbar. Firefox has not implemented the length option.

If you use auto as the value, then the browser will use the default scrollbars: thin will give you a thin scrollbar, and none will show no visible scrollbar (but the element will still be scrollable).

In this example I have set scrollbar-width: thin.
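In CSS, that might look like the following minimal sketch (the class name and dimensions are assumptions):

.scroll-box {
  width: 20em;
  height: 10em;
  overflow-y: scroll;    /* make the element scrollable */
  scrollbar-width: thin; /* request the thin scrollbar */
}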

In a browser with support for CSS Scrollbars, you can see this in action in the demo:

See the Pen CSS Scrollbars: scrollbar-width by Rachel Andrew (@rachelandrew) on CodePen.

The scrollbar-color property deals with — as you would expect — scrollbar colors. A scrollbar has two parts which you may wish to color independently:

  • thumb
    The slider that moves up and down as you scroll.
  • track
    The scrollbar background.

The values for the scrollbar-color property are auto, dark, light and <color> <color>.

Using auto as a keyword value will give you the default scrollbar colors for that browser, dark will provide a dark scrollbar, either in the dark mode of that platform or a custom dark mode, and light will provide a light scrollbar, either in the light mode of the platform or a custom light mode.

To set your own colors, you add two colors as the value that are separated by a space. The first color will be used for the thumb and the second one for the track. You should take care that there is enough contrast between the colors, as otherwise the scrollbar may be difficult to use for some people.

In this example, I have set custom colors for the scrollbar elements.
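As a sketch (the two colors here are just example values):

.scroll-box {
  /* The first color styles the thumb, the second the track. */
  scrollbar-color: #d4aa70 #e4e4e4;
}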

In a browser with support for CSS Scrollbars, you can see this in the demo:

See the Pen CSS Scrollbars: scrollbar-color by Rachel Andrew (@rachelandrew) on CodePen.

Aspect Ratio Units

We’ve been using the padding hack in CSS to achieve aspect ratio boxes for some time, however, with the advent of Grid Layout and better ways of sizing content, having a real way to do aspect ratios in CSS has become a more pressing need.

Two issues raised on GitHub relate to this requirement.

There is now a draft spec in Level 4 of CSS Sizing, and the decision of the meeting was that this needed further discussion on GitHub before any decisions can be made. So, if you are interested in this, or have additional use cases, the CSS Working Group would be interested in your comments on those issues.

The :where() Functional Pseudo-Class

Last year, the CSSWG resolved to add a pseudo-class which acted like :matches() but with zero specificity, thus making it easy to override without needing to artificially inflate the specificity of later elements to override something in a default stylesheet.

The :matches() pseudo-class might be new to you as it is a Level 4 Selector, however, it allows you to specify a group of selectors to apply some CSS to. For example, you could write:

.foo a:hover, p a:hover { color: green; }

Or with :matches()

:matches(.foo, p) a:hover { color: green; }

If you have ever had a big stack of selectors just in order to set the same couple of rules, you will see how useful this will be. The following CodePen uses the prefixed names -webkit-any and -moz-any to demonstrate the :matches() functionality. You can also read more about :matches() on MDN.

See the Pen :matches() and prefixed versions by Rachel Andrew (@rachelandrew) on CodePen.

Where we often do this kind of stacking of selectors, and thus where :matches() will be most useful is in some kind of initial, default stylesheet. However, we then need to be careful when overwriting those defaults that any overwriting is done in a way that will ensure it is more specific than the defaults. It is for this reason that a zero specificity version was proposed.
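To illustrate, here is a minimal sketch of the intended behaviour: because :where() contributes zero specificity, a later low-specificity rule can override the default, which would not be the case with :matches():

/* Default stylesheet: :where() adds no specificity, so this
   whole selector counts the same as a plain "a:hover". */
:where(.foo, p) a:hover { color: green; }

/* This later rule now wins, without any artificial inflation. */
a:hover { color: red; }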

The issue that was discussed in the meeting was in regard to naming this pseudo-class, you can see the final resolution here, and if you wonder why various names were ruled out take a look at the full thread. Naming things in CSS is very hard — because we are all going to have to live with it forever! After a lot of debate, the group voted and decided to call this selector :where().

Since the meeting, and while I was writing up this post, a suggestion has been raised to rename :matches() to :is(). Take a look at the issue and comment if you have any strong feelings either way!

Logical Shorthands For Margins And Padding

On the subject of naming things, I’ve written about Logical Properties and Values here on Smashing Magazine in the past, take a look at “Understanding Logical Properties and Values”. These properties and values provide flow relative mappings. This means that if you are using Writing Modes other than a horizontal top to bottom writing mode, such as English, things like margins and padding, widths and height follow the text direction and are not linked to the physical screen dimensions.

For example, for physical margins we have:

  • margin-top
  • margin-right
  • margin-bottom
  • margin-left

The logical mappings for these (assuming horizontal-tb) are:

  • margin-block-start
  • margin-inline-end
  • margin-block-end
  • margin-inline-start

We can have two value shorthands. For example, to set both margin-block-start and margin-block-end as a shorthand, we can use margin-block: 20px 1em. The first value representing the start edge in the block dimension, the second value the end edge in the block dimension.
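As a quick sketch of the two-value shorthands (the selector and values are just examples):

.box {
  margin-block: 20px 1em;  /* block-start, then block-end */
  margin-inline: 2em 3em;  /* inline-start, then inline-end */
}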

We hit a problem, however, when we come to the four-value shorthand margin. That property name is used for physical margins — how do we denote the logical four-value version? Various things have been suggested, including a switch at the top of the file:

@mode "logical";

Or, to use a block that looks a little like a media query:

@mode (flow-mode: relative) { }

Then various suggestions for keyword modifiers, using some punctuation character, or creating a brand new property name:

margin: relative 1em 2em 3em 4em;
margin: 1em 2em 3em 4em !relative;
margin-relative: 1em 2em 3em 4em;
~margin: 1em 2em 3em 4em;

You can read the issue to see the various things that are being considered. Issues discussed were that while the logical version may well end up being generally the default, sometimes you will want things to relate to the screen geometry; we need to be able to have both options in one stylesheet. Having a @mode setting at the top of the CSS could be confusing; it would fail if someone were to copy and paste a chunk of the stylesheet.

My preference is to have some sort of keyword value. That way, if you look at the rule, you can see exactly which mode is being used, even if it does seem slightly inelegant. It is the sort of thing that a preprocessor could deal with for you; if you did indeed want all of your properties and values to use the logical versions.

We didn’t manage to resolve the issue, so if you do have thoughts on which of these might be best, or can see problems with them that we haven’t described, please comment on the issue on GitHub.

Web Platform Tests Discussion

At the CSS Working Group meeting and then during the unconference style Technical Plenary Day, I was involved in discussing how to get more people involved in writing tests for CSS specifications. The Web Platform Tests project aims to provide tests for all of the web platform. These tests then help browser vendors check whether their browser is correct as to the spec. In the CSS Working Group, the aim is that any normative change to a specification which has reached Candidate Recommendation (CR) status, should be accompanied by a test. This makes sense as once a spec is in CR, we are asking browsers to implement that spec and provide feedback. They need to know if anything in the spec changes so they can update their code.

The problem is that we have very few people writing specs, so for spec writers to have to write all the tests will slow the progress of CSS down. We would love to see other people writing tests, as it is a way to contribute to the web platform and to gain deep knowledge of how specifications work. So we met to think about how we could encourage people to participate in the effort. I’ve written on this subject in the past; if the idea of writing tests for the platform interests you, take a look at my 24 Ways article on “Testing the Web Platform”.

On With The Work!

TPAC has added to my personal to-do list considerably. However, I’ve been able to pick up tips about specification editing, test writing, and to come up with a plan to get the Multi-Column Layout specification — of which I’m the co-editor — back to CR status. As someone who is not a fan of meetings, I’ve come to see how valuable these face-to-face meetings are for the web platform, giving those of us contributing to it a chance to share the knowledge we individually are developing. I feel it is important though to then take that knowledge and share it outside of the group in order to help more people get involved with developing as well as using the platform.

If you are interested in how the CSS Working Group functions, and how new CSS is invented and ends up in browsers, check out my 2017 CSSConf.eu presentation “Where Does CSS Come From?” and the information from fantasai in her posts “An Inside View of the CSS Working Group at W3C”.

(il)

Headless WordPress: The Ups And Downs Of Creating A Decoupled WordPress

By Denis Žoljom, 2018-10-26

WordPress came a long way from its start as a simple blog writing tool. A long 15 years later it became the number one CMS choice for developers and non-developers alike. WordPress now powers roughly 30% of the top 10 million sites on the web.

Ever since the REST API was bundled into WordPress core, developers have been able to experiment and use it in a decoupled way, i.e. writing the front-end part by using JavaScript frameworks or libraries. At Infinum, we were (and still are) using WordPress in a ‘classic’ way: PHP for the frontend as well as the backend. After a while, we wanted to give the decoupled approach a go. In this article, I’ll share an overview of what it was that we wanted to achieve and what we encountered while trying to implement our goals.

There are several types of projects that can benefit from this approach. For example, simple presentational sites or sites that use WordPress as a backend are the main candidates for the decoupled approach.

In recent years, the industry has thankfully started paying more attention to performance. However, being an easy-to-use, inclusive, and versatile piece of software, WordPress comes with a plethora of options that are not necessarily utilized in each and every project. As a result, website performance can suffer.

Recommended reading: How To Use Heatmaps To Track Clicks On Your WordPress Website

If long website response times keep you up at night, this is a how-to for you. I will cover the basics of creating a decoupled WordPress and some lessons learned, including:

  1. The meaning of a “decoupled WordPress”
  2. Working with the default WordPress REST API
  3. Improving performance with the decoupled JSON approach
  4. Security concerns

So, What Exactly Is A Decoupled WordPress?

When it comes down to how WordPress is programmed, one thing is certain: it doesn’t follow the Model-View-Controller (MVC) design pattern that many developers are familiar with. Because of its history and for being sort of a fork of an old blogging platform called “b2” (more details here), it’s largely written in a procedural way (using function-based code). WordPress core developers used a system of hooks which allowed other developers to modify or extend certain functionalities.

It’s an all-in-one system that is equipped with a working admin interface; it manages database connection, and has a bunch of useful APIs exposed that handle user authentication, routing, and more.

But thanks to the REST API, you can separate the WordPress backend as a sort of model and controller bundled together that handle data manipulation and database interaction, and use REST API Controller to interact with a separate view layer using various API endpoints. In addition to MVC separation, we can (for security reasons or speed improvements) place the JS App on a separate server like in the schema below:

Decoupled WordPress diagram.

Advantages Of Using The Decoupled Approach

One reason why you may want to use this approach is to ensure a separation of concerns. The frontend and the backend interact via endpoints; each can be on its own separate server which can be optimized specifically for each respective task, i.e. separately running a PHP app and running a Node.js app.

By separating your frontend from the backend, it’s easier to redesign it in the future, without changing the CMS. Also, front-end developers only need to care about what to do with the data the backend provides them. This lets them get creative and use modern libraries like ReactJS, Vue or Angular to deliver highly dynamic web apps. For example, it’s easier to build a progressive web app when using the aforementioned libraries.

Another advantage is reflected in the website security. Exploiting the website through the backend becomes more difficult since it’s largely hidden from the public.

Recommended reading: WordPress Security As A Process

Shortcomings Of Using The Decoupled Approach

First, having a decoupled WordPress means maintaining two separate instances:

  1. WordPress for the backend;
  2. A separate front-end app, including timely security updates.

Second, some of the front-end libraries do have a steeper learning curve. It will either take a lot of time to learn a new language (if you are only accustomed to HTML and CSS for templating), or will require bringing additional JavaScript experts to the project.

Third, by separating the frontend, you are losing the power of the WYSIWYG editor, and the ‘Live Preview’ button in WordPress doesn’t work either.

Working With WordPress REST API

Before we delve deeper into the code, a couple more things about the WordPress REST API. The full power of the REST API in WordPress came with version 4.7 on December 6th, 2016.

What WordPress REST API allows you to do is to interact with your WordPress installation remotely by sending and receiving JSON objects.

Setting Up A Project

Since it comes bundled with the latest WordPress installation, we will be working on the Twenty Seventeen theme. I’m working on Varying Vagrant Vagrants, and have set up a test site with the URL http://dev.wordpress.test/. This URL will be used throughout the article. We’ll also import posts from the wordpress.org Theme Review Team’s repository so that we have some test data to work with. But first, we will get familiar with the default endpoints, and then we’ll create our own custom endpoint.

Access The Default REST Endpoint

As already mentioned, WordPress comes with several built-in endpoints that you can examine by going to the /wp-json/ route:

http://dev.wordpress.test/wp-json/

Either by putting this URL directly in your browser or by adding it in the Postman app, you’ll get a JSON response from the WordPress REST API that looks something like this:

{ "name": "Test dev site", "description": "Just another WordPress site", "url": "http://dev.wordpress.test", "home": "http://dev.wordpress.test", "gmt_offset": "0", "timezone_string": "", "namespaces": [ "oembed/1.0", "wp/v2" ], "authentication": [], "routes": { "/": { "namespace": "", "methods": [ "GET" ], "endpoints": [ { "methods": [ "GET" ], "args": { "context": { "required": false, "default": "view" } } } ], "_links": { "self": "http://dev.wordpress.test/wp-json/" } }, "/oembed/1.0": { "namespace": "oembed/1.0", "methods": [ "GET" ], "endpoints": [ { "methods": [ "GET" ], "args": { "namespace": { "required": false, "default": "oembed/1.0" }, "context": { "required": false, "default": "view" } } } ], "_links": { "self": "http://dev.wordpress.test/wp-json/oembed/1.0" } }, ... "wp/v2": { ...

So in order to get all of the posts in our site by using REST, we would need to go to http://dev.wordpress.test/wp-json/wp/v2/posts. Notice that the wp/v2/ marks the reserved core endpoints like posts, pages, media, taxonomies, categories, and so on.

So, how do we add a custom endpoint?

Create A Custom REST Endpoint

Let’s say we want to add a new endpoint or an additional field to an existing endpoint. There are several ways we can do that. The first can happen automatically when creating a custom post type. For instance, we want to create a documentation endpoint. Let’s create a small test plugin. Create a test-documentation folder in the wp-content/plugins folder, and add a documentation.php file that looks like this:

<?php
/**
 * Test plugin
 *
 * @since 1.0.0
 * @package test_plugin
 *
 * @wordpress-plugin
 * Plugin Name: Test Documentation Plugin
 * Plugin URI:
 * Description: The test plugin that adds rest functionality
 * Version: 1.0.0
 * Author: Infinum
 * Author URI: https://infinum.co/
 * License: GPL-2.0+
 * License URI: http://www.gnu.org/licenses/gpl-2.0.txt
 * Text Domain: test-plugin
 */

namespace Test_Plugin;

// If this file is called directly, abort.
if ( ! defined( 'WPINC' ) ) {
  die;
}

/**
 * Class that holds all the necessary functionality for the
 * documentation custom post type
 *
 * @since 1.0.0
 */
class Documentation {
  /**
   * The plugin name slug
   *
   * @var string
   *
   * @since 1.0.0
   */
  const PLUGIN_NAME = 'documentation-plugin';

  /**
   * The custom post type slug
   *
   * @var string
   *
   * @since 1.0.0
   */
  const POST_TYPE_SLUG = 'documentation';

  /**
   * The custom taxonomy type slug
   *
   * @var string
   *
   * @since 1.0.0
   */
  const TAXONOMY_SLUG = 'documentation-category';

  /**
   * Register custom post type
   *
   * @since 1.0.0
   */
  public function register_post_type() {
    $args = array(
      'label'              => esc_html__( 'Documentation', 'test-plugin' ),
      'public'             => true,
      'menu_position'      => 47,
      'menu_icon'          => 'dashicons-book',
      'supports'           => array( 'title', 'editor', 'revisions', 'thumbnail' ),
      'has_archive'        => false,
      'show_in_rest'       => true,
      'publicly_queryable' => false,
    );

    register_post_type( self::POST_TYPE_SLUG, $args );
  }

  /**
   * Register custom tag taxonomy
   *
   * @since 1.0.0
   */
  public function register_taxonomy() {
    $args = array(
      'hierarchical'          => false,
      'label'                 => esc_html__( 'Documentation tags', 'test-plugin' ),
      'show_ui'               => true,
      'show_admin_column'     => true,
      'update_count_callback' => '_update_post_term_count',
      'show_in_rest'          => true,
      'query_var'             => true,
    );

    register_taxonomy( self::TAXONOMY_SLUG, [ self::POST_TYPE_SLUG ], $args );
  }
}

$documentation = new Documentation();

add_action( 'init', [ $documentation, 'register_post_type' ] );
add_action( 'init', [ $documentation, 'register_taxonomy' ] );

By registering the new post type and taxonomy, and setting the show_in_rest argument to true, WordPress automatically created REST routes in the /wp/v2/ namespace. You now have the http://dev.wordpress.test/wp-json/wp/v2/documentation and http://dev.wordpress.test/wp-json/wp/v2/documentation-category endpoints available. If we add a post to our newly created documentation custom post type, going to the http://dev.wordpress.test/wp-json/wp/v2/documentation endpoint will give us a response that looks like this:

[ { "id": 4, "date": "2018-06-11T19:48:51", "date_gmt": "2018-06-11T19:48:51", "guid": { "rendered": "http://dev.wordpress.test/?post_type=documentation&p=4" }, "modified": "2018-06-11T19:48:51", "modified_gmt": "2018-06-11T19:48:51", "slug": "test-documentation", "status": "publish", "type": "documentation", "link": "http://dev.wordpress.test/documentation/test-documentation/", "title": { "rendered": "Test documentation" }, "content": { "rendered": "

This is some documentation content

\n", "protected": false }, "featured_media": 0, "template": "", "documentation-category": [ 2 ], "_links": { "self": [ { "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation/4" } ], "collection": [ { "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation" } ], "about": [ { "href": "http://dev.wordpress.test/wp-json/wp/v2/types/documentation" } ], "version-history": [ { "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation/4/revisions" } ], "wp:attachment": [ { "href": "http://dev.wordpress.test/wp-json/wp/v2/media?parent=4" } ], "wp:term": [ { "taxonomy": "documentation-category", "embeddable": true, "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation-category?post=4" } ], "curies": [ { "name": "wp", "href": "https://api.w.org/{rel}", "templated": true } ] } } ]

This is a great starting point for our single-page application. Another way we can add a custom endpoint is by hooking to the rest_api_init hook and creating an endpoint ourselves. Let’s add a custom-documentation route that is a bit different than the one we registered. Still working in the same plugin, we can add:

/**
 * Create a custom endpoint
 *
 * @since 1.0.0
 */
public function create_custom_documentation_endpoint() {
  register_rest_route(
    self::PLUGIN_NAME . '/v1',
    '/custom-documentation',
    array(
      'methods'  => 'GET',
      'callback' => [ $this, 'get_custom_documentation' ],
    )
  );
}

/**
 * Create a callback for the custom documentation endpoint
 *
 * @return string JSON that indicates success/failure of the update,
 *                or JSON that indicates an error occurred.
 * @since 1.0.0
 */
public function get_custom_documentation() {
  /* Some permission checks can be added here. */

  // Return only documentation name and tag name.
  $doc_args = array(
    'post_type'   => self::POST_TYPE_SLUG,
    'post_status' => 'publish',
    'perm'        => 'readable',
  );

  $query = new \WP_Query( $doc_args );

  $response = [];
  $counter  = 0;

  // The Loop.
  if ( $query->have_posts() ) {
    while ( $query->have_posts() ) {
      $query->the_post();

      $post_id   = get_the_ID();
      $post_tags = get_the_terms( $post_id, self::TAXONOMY_SLUG );

      $response[ $counter ]['title'] = get_the_title();

      // get_the_terms() returns false (or WP_Error) for posts with no terms.
      if ( ! empty( $post_tags ) && ! is_wp_error( $post_tags ) ) {
        foreach ( $post_tags as $tags_key => $tags_value ) {
          $response[ $counter ]['tags'][] = $tags_value->name;
        }
      }

      $counter++;
    }
  } else {
    $response = esc_html__( 'There are no posts.', 'documentation-plugin' );
  }

  /* Restore original Post Data */
  wp_reset_postdata();

  return rest_ensure_response( $response );
}

And hook the create_custom_documentation_endpoint() method to the rest_api_init hook, like so:

add_action( 'rest_api_init', [ $documentation, 'create_custom_documentation_endpoint' ] );

This will add a custom route at http://dev.wordpress.test/wp-json/documentation-plugin/v1/custom-documentation, with the callback returning the response for that route:

[{ "title": "Another test documentation", "tags": ["Another tag"] }, { "title": "Test documentation", "tags": ["REST API", "test tag"] }]

There are a lot of other things you can do with REST API (you can find more details in the REST API handbook).

Work Around Long Response Times When Using The Default REST API

For anyone who has tried to build a decoupled WordPress site, this is not news: the REST API is slow.

My team and I first encountered WordPress’ strangely lagging REST API on a client site (not a decoupled one), where we used custom endpoints to get the list of locations for a Google map, alongside other meta information created using the Advanced Custom Fields Pro plugin. It turned out that the time to first byte (TTFB), which is used as an indication of the responsiveness of a web server or other network resource, took more than 3 seconds.

After a bit of investigating, we realized that the default REST API calls were actually really slow, especially when we “burdened” the site with additional plugins. So, we did a small test. We installed a couple of popular plugins and encountered some interesting results. The Postman app reported a load time of 1.97 s for a 41.9 KB response. Chrome’s load time was 1.25 s (TTFB was 1.25 s; the content itself downloaded in 3.96 ms). And that was just to retrieve a simple list of posts: no taxonomy, no user data, no additional meta fields.

Why did this happen?

It turns out that accessing the REST API on a default WordPress installation loads the entire WordPress core to serve the endpoints, even though most of it is not used. Also, the more plugins you add, the worse things get. The default REST controller, WP_REST_Controller, is a really big class that does a lot more than is necessary when building a simple web page: it handles route registration, permission checks, creating and deleting items, and so on.

There are two common workarounds for this issue:

  1. Intercept the loading of the plugins, and prevent loading them all when you only need to serve a simple REST response (see the sketch below);
  2. Load only the bare minimum of WordPress and store the data in a transient, from which we then fetch the data using a custom page.
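
The first workaround is not demonstrated later in this article, so here is a minimal sketch of the idea; the REST path check and the plugin whitelist are our own assumptions, not code from the plugin we built:

<?php
/**
 * Sketch of workaround #1, dropped into wp-content/mu-plugins/ so that it
 * runs before regular plugins load. The whitelist below is hypothetical.
 */
add_filter( 'option_active_plugins', function( $plugins ) {
  // Only intervene for REST API requests (pretty permalinks assumed).
  if ( empty( $_SERVER['REQUEST_URI'] ) || false === strpos( $_SERVER['REQUEST_URI'], '/wp-json/' ) ) {
    return $plugins;
  }

  // Keep only the plugins the REST response actually needs.
  $whitelist = array( 'test-documentation/documentation.php' );

  return array_values( array_intersect( $plugins, $whitelist ) );
} );
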
Improving Performance With The Decoupled JSON Approach

When you are working with simple presentation sites, you don’t need all the functionality the REST API offers. Of course, this is where good planning is crucial. You really don’t want to build your site without the REST API, only to decide in a year’s time that you’d like to connect it to another app, or maybe create a mobile app that needs REST API functionality. Do you?

For that reason, we utilized two WordPress features that can help you out when serving simple JSON data:

  • Transients API for caching,
  • Loading the minimum necessary WordPress using SHORTINIT constant.
Creating A Simple Decoupled Pages Endpoint

Let’s create a small plugin that will demonstrate the effect that we’re talking about. First, add a wp-config-simple.php file to your json-transient plugin folder (in wp-content/plugins) that looks like this:

<?php
/**
 * Create simple wp configuration for the routes
 *
 * @since 1.0.0
 * @package json-transient
 */

define( 'SHORTINIT', true );

$parse_uri = explode( 'wp-content', $_SERVER['SCRIPT_FILENAME'] );
require_once filter_var( $parse_uri[0] . 'wp-load.php', FILTER_SANITIZE_STRING );

The define( 'SHORTINIT', true ); call will prevent the majority of WordPress core files from being loaded, as can be seen in the wp-settings.php file.

We still may need some WordPress functionality, so we can require a file (like wp-load.php) manually. Since wp-load.php sits in the root of our WordPress installation, we fetch it by getting the path of our file from $_SERVER['SCRIPT_FILENAME'], and then splitting that string on 'wp-content'. This returns an array with two values:

  1. The root of our installation;
  2. The rest of the file path (which is of no interest to us).

Keep in mind that we’re using the default installation of WordPress, and not a modified one like, for example, the Bedrock boilerplate, which splits WordPress into a different file organization.

Lastly, we require the wp-load.php file, with a little bit of sanitization, for security.

In our init.php file, we’ll add the following:

<?php
/**
 * Test plugin
 *
 * @since 1.0.0
 * @package json-transient
 *
 * @wordpress-plugin
 * Plugin Name: Json Transient
 * Plugin URI:
 * Description: Proof of concept for caching api like calls
 * Version: 1.0.0
 * Author: Infinum
 * Author URI: https://infinum.co/
 * License: GPL-2.0+
 * License URI: http://www.gnu.org/licenses/gpl-2.0.txt
 * Text Domain: json-transient
 */

namespace Json_Transient;

// If this file is called directly, abort.
if ( ! defined( 'WPINC' ) ) {
  die;
}

class Init {
  /**
   * Get the array of allowed types to do operations on.
   *
   * @return array
   *
   * @since 1.0.0
   */
  public function get_allowed_post_types() {
    return array( 'post', 'page' );
  }

  /**
   * Check if post type is allowed to be saved in transient.
   *
   * @param string $post_type Get post type.
   * @return boolean
   *
   * @since 1.0.0
   */
  public function is_post_type_allowed_to_save( $post_type = null ) {
    if ( ! $post_type ) {
      return false;
    }

    $allowed_types = $this->get_allowed_post_types();

    if ( in_array( $post_type, $allowed_types, true ) ) {
      return true;
    }

    return false;
  }

  /**
   * Get page cache name for transient by post slug and type.
   *
   * @param string $post_slug Page slug to save.
   * @param string $post_type Page type to save.
   * @return string
   *
   * @since 1.0.0
   */
  public function get_page_cache_name_by_slug( $post_slug = null, $post_type = null ) {
    if ( ! $post_slug || ! $post_type ) {
      return false;
    }

    $post_slug = str_replace( '__trashed', '', $post_slug );

    return 'jt_data_' . $post_type . '_' . $post_slug;
  }

  /**
   * Get full post data by post slug and type.
   *
   * @param string $post_slug Page slug to do the query by.
   * @param string $post_type Page type to do the query by.
   * @return array
   *
   * @since 1.0.0
   */
  public function get_page_data_by_slug( $post_slug = null, $post_type = null ) {
    if ( ! $post_slug || ! $post_type ) {
      return false;
    }

    $page_output = '';

    $args = array(
      'name'           => $post_slug,
      'post_type'      => $post_type,
      'posts_per_page' => 1,
      'no_found_rows'  => true,
    );

    $the_query = new \WP_Query( $args );

    if ( $the_query->have_posts() ) {
      while ( $the_query->have_posts() ) {
        $the_query->the_post();
        $page_output = $the_query->post;
      }
      wp_reset_postdata();
    }

    return $page_output;
  }

  /**
   * Return page in JSON format.
   *
   * @param string $post_slug Page slug.
   * @param string $post_type Page type.
   * @return json
   *
   * @since 1.0.0
   */
  public function get_json_page( $post_slug = null, $post_type = null ) {
    if ( ! $post_slug || ! $post_type ) {
      return false;
    }

    return wp_json_encode( $this->get_page_data_by_slug( $post_slug, $post_type ) );
  }

  /**
   * Update page transient for caching on the save_post action hook.
   *
   * @param int $post_id Saved post ID provided by the action hook.
   *
   * @since 1.0.0
   */
  public function update_page_transient( $post_id ) {
    $post_status = get_post_status( $post_id );
    $post        = get_post( $post_id );
    $post_slug   = $post->post_name;
    $post_type   = $post->post_type;
    $cache_name  = $this->get_page_cache_name_by_slug( $post_slug, $post_type );

    if ( ! $cache_name ) {
      return false;
    }

    if ( $post_status === 'auto-draft' || $post_status === 'inherit' ) {
      return false;
    } else if ( $post_status === 'trash' ) {
      delete_transient( $cache_name );
    } else {
      if ( $this->is_post_type_allowed_to_save( $post_type ) ) {
        $cache = $this->get_json_page( $post_slug, $post_type );
        set_transient( $cache_name, $cache, 0 );
      }
    }
  }
}

$init = new Init();

add_action( 'save_post', [ $init, 'update_page_transient' ] );

The helper methods in the above code will enable us to do some caching:

  • get_allowed_post_types()
    This method whitelists the post types that we want to expose through our custom ‘endpoint’. You can extend this; in the plugin we’ve actually made, this method is filterable, so that you can use a filter to add additional post types (see the sketch after this list).
  • is_post_type_allowed_to_save()
    This method checks whether the post type we’re trying to fetch the data from is in the allowed array specified by the previous method.
  • get_page_cache_name_by_slug()
    This method returns the name of the transient that the data will be fetched from.
  • get_page_data_by_slug()
    This method performs the WP_Query on the post via its slug and post type, and returns the contents of the post object, which the get_json_page() method then converts to JSON.
  • update_page_transient()
    This will be run on the save_post hook and will overwrite the transient in the database with the JSON data of our post. This last method is known as the “key caching method”.
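
For illustration, a filterable version of get_allowed_post_types() could look like the sketch below; the filter name jt_allowed_post_types is our own assumption, not necessarily the one used in the real plugin:

public function get_allowed_post_types() {
  // The filter name here is hypothetical.
  return apply_filters( 'jt_allowed_post_types', array( 'post', 'page' ) );
}

// A theme or another plugin could then whitelist an additional post type:
add_filter( 'jt_allowed_post_types', function( $post_types ) {
  $post_types[] = 'documentation';
  return $post_types;
} );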

Let’s explain transients in more depth.

Transients API

The Transients API is used to store data in the options table of your WordPress database for a specific period of time. It’s a persisted object cache, meaning that you are storing some object, for example, the results of big and slow queries or full pages, so that it can be persisted across page loads. It is similar to the regular WordPress Object Cache, but unlike WP_Cache, transients will persist data across page loads, whereas WP_Cache (which stores the data in memory) will only hold the data for the duration of a request.

It’s a key-value store, meaning that we can easily and quickly fetch the desired data, similar to what in-memory caching systems like Memcached or Redis do. The difference is that you’d usually need to install those separately on the server (which can be an issue on shared servers), whereas transients are built in with WordPress.

As noted on its Codex page, transients are inherently sped up by caching plugins, since those can store transients in memory instead of in the database. Because of this, the general rule is that you shouldn’t assume a transient is always present in the database, which is why it’s good practice to check for its existence before fetching it:

$transient_name = get_transient( 'transient_name' );

if ( $transient_name === false ) {
  set_transient( 'transient_name', $transient_data, $transient_expiry );
}

You can use transients without an expiration time (as we are doing), and that’s why we implemented a sort of ‘cache-busting’ on post save. In addition to all the great functionality they provide, transients can hold up to 4 GB of data, but we don’t recommend storing anything nearly that big in a single database field.

Recommended reading: Be Watchful: PHP And WordPress Functions That Can Make Your Site Insecure

Final Endpoint: Testing And Verification

The last piece of the puzzle that we need is an ‘endpoint’. I’m using the term loosely here, because it’s not a real endpoint: we are calling a specific file directly to fetch our results. So we can create a test.php file that looks like this:

<?php
/**
 * Generate rest route
 *
 * Route location: /wp-content/plugins/json-transient/test.php?slug=sample-page&type=page
 *
 * @since 1.0.0
 * @package json-transient
 */

// Load simple version of WordPress, this file can be located anywhere.
require_once 'wp-config-simple.php';

// Load init.php so that we can call its methods.
require_once 'init.php';

$init = new Json_Transient\Init();

// Check input and protect it. Note: the original check used ||, which
// always passed; both parameters must be present and non-empty.
if ( ! empty( $_GET['slug'] ) && ! empty( $_GET['type'] ) ) {
  $post_slug = htmlentities( trim( $_GET['slug'] ), ENT_QUOTES );
  $post_type = htmlentities( trim( $_GET['type'] ), ENT_QUOTES );
} else {
  wp_send_json( 'Error, page slug or type is missing!' );
}

// Get transient by name.
$cache = get_transient( $init->get_page_cache_name_by_slug( $post_slug, $post_type ) );

// Return error on false.
if ( $cache === false ) {
  wp_send_json( 'Error, the page does not exist or it is not cached correctly. Please try rebuilding cache and try again!' );
}

// Decode json for output.
wp_send_json( json_decode( $cache ) );

If we go to http://dev.wordpress.test/wp-content/plugins/json-transient/test.php, we’ll see this message:

Error, page slug or type is missing!

So, we’ll need to specify the post type and post slug. When we now go to http://dev.wordpress.test/wp-content/plugins/json-transient/test.php?slug=hello-world&type=post we’ll see:

Error, the page does not exist or it is not cached correctly. Please try rebuilding cache and try again!

Oh, wait! We need to re-save our pages and posts first, so that their transients exist. When you’re starting out, this is easy. But if you already have 100+ pages or posts, re-saving every one of them is a challenging task. This is why we implemented a way to clear the transients in the Decoupled JSON Content plugin, and to rebuild them in a batch (see the sketch below).
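
As a rough sketch (our own code, not the actual plugin implementation), a batch rebuild could loop over all published posts of the allowed types and re-run the caching method:

/**
 * Sketch: rebuild all transients in one batch.
 * Assumes the Init class from init.php above; the function name is ours.
 */
function jt_rebuild_all_transients() {
  $init = new Json_Transient\Init();

  $post_ids = get_posts( array(
    'post_type'      => $init->get_allowed_post_types(),
    'post_status'    => 'publish',
    'posts_per_page' => -1,
    'fields'         => 'ids',
  ) );

  foreach ( $post_ids as $post_id ) {
    $init->update_page_transient( $post_id );
  }
}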

But go ahead and re-save the Hello World post and then open the link again. What you should now have is something that looks like this:

{ "ID": 1, "post_author": "1", "post_date": "2018-06-26 18:28:57", "post_date_gmt": "2018-06-26 18:28:57", "post_content": "Welcome to WordPress. This is your first post. Edit or delete it, then start writing!", "post_title": "Hello world!", "post_excerpt": "", "post_status": "publish", "comment_status": "open", "ping_status": "open", "post_password": "", "post_name": "hello-world", "to_ping": "", "pinged": "", "post_modified": "2018-06-30 08:34:52", "post_modified_gmt": "2018-06-30 08:34:52", "post_content_filtered": "", "post_parent": 0, "guid": "http:\/\/dev.wordpress.test\/?p=1", "menu_order": 0, "post_type": "post", "post_mime_type": "", "comment_count": "1", "filter": "raw" }

And that’s it. The plugin we made has some extra functionality that you can use, but, in a nutshell, this is how you can fetch JSON data from your WordPress installation in a way that is much faster than using the REST API.

Before And After: Improved Response Time

We conducted testing in Chrome, where we could see the total response time and the TTFB separately. We tested response times ten times in a row: first without plugins and then with the plugins added. Also, we tested the response for a list of posts and for a single post.

The results of the test are illustrated in the tables below:

Comparison graph depicting response times of using WordPress REST API vs using the decoupled approach without added plugins. The decoupled approach is 2 to 3 times faster. (Large preview)

Comparison graph depicting response times of using WordPress REST API vs using the decoupled approach with added plugins. The decoupled approach is up to 8 times faster. (Large preview)

As you can see, the difference is drastic.

Security Concerns

There are some caveats that you’ll need to take a good look at. First of all, we are manually loading WordPress core files, which in the WordPress world is a big no-no. Why? Well, besides the fact that manually fetching core files can be tricky (especially if you’re using nonstandard installations such as Bedrock), it could pose some security concerns.

If you decide to use the method described in this article, be sure you know how to fortify your server security.

First, send HTTP headers like these in the test.php file:

header( 'Access-Control-Allow-Origin: your-front-end-app.url' );
header( 'Content-Type: application/json' );

The first header uses CORS to restrict fetching of the contents to your front-end app only when going to the specified file.

Second, disable directory listing for your app, so that the contents of your folders can’t be browsed. You can do this by modifying your nginx settings, or by adding Options -Indexes to your .htaccess file if you’re on an Apache server.
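
For reference, minimal sketches of the corresponding directives (place them in the appropriate config for your setup):

# Apache: .htaccess
Options -Indexes

# nginx: inside the relevant server or location block
autoindex off;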

Adding a token check to the response is also a good measure that can prevent unwanted access. We are actually working on a way to modify our Decoupled JSON plugin so that we can include these security measures by default.

A check for an Authorization header sent by the frontend app could look like this:

if ( ! isset( $_SERVER['HTTP_AUTHORIZATION'] ) ) {
  return;
}

$auth_header = $_SERVER['HTTP_AUTHORIZATION'];

Then you can check if the specific token (a secret that is only shared by the front- and back-end apps) is provided and correct.
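
For example, a constant-time comparison against a shared secret could look like this; JT_API_TOKEN is a hypothetical constant that you would define yourself (for example, in wp-config-simple.php):

// JT_API_TOKEN is a hypothetical constant holding the shared secret.
if ( ! hash_equals( 'Bearer ' . JT_API_TOKEN, $auth_header ) ) {
  wp_send_json( 'Error, invalid token!' );
}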

Conclusion

The REST API is great because it can be used to create fully-fledged apps, creating, retrieving, updating, and deleting data. The downside of using it is its speed.

Obviously, creating an app is different than creating a classic website. You probably won’t need all the plugins we installed. But if you just need the data for presentational purposes, caching data and serving it in a custom file seems like the perfect solution at the moment, when working with decoupled sites.

You may be thinking that creating a custom plugin to speed up the website response time is overkill, but we live in a world in which every second counts. Everyone knows that if a website is slow, users will abandon it. There are many studies that demonstrate the connection between website performance and conversion rates. And if you still need convincing: Google penalizes slow websites.

The method explained in this article solves the speed issue that the WordPress REST API suffers from, and will give you an extra boost when working on a decoupled WordPress project. As we are on our never-ending quest to squeeze that last millisecond out of every request and response, we plan to optimize the plugin even more. In the meantime, please share your ideas on speeding up decoupled WordPress!

(md, ra, yk, il)

Video Playback On The Web: Video Delivery Best Practices (Part 2)

Thu, 10/25/2018 - 14:40
Video Playback On The Web: Video Delivery Best Practices (Part 2) Doug Sillars 2018-10-25T14:40:24+02:00

In my previous post, I examined video trends on the web today, using data from the HTTP Archive. I found that many websites serve the same video content on mobile and desktop, and that many video streams are being delivered at bitrates that are too high to play back on 3G-speed connections. We also discovered that many websites automatically download video to mobile devices, draining customers’ data plans and battery life for videos that might never be played.

TL;DR: In this post, we look at techniques to optimize the speed and delivery of video to your customers, and provide a list of 9 best practices to help you deliver your video assets.

Video Playback Metrics

There are 3 principal video playback metrics in use today:

  1. Video Startup Time
  2. Video Stalling
  3. Video Quality

Since video files are large, optimizing the video to be as small as possible will lead to faster video delivery, speeding up video start, lowering the number of stalls, and minimizing the effect of the quality of the video delivered. Of course, we need to balance startup speed and stalling with the third metric of quality (and higher quality videos generally use more data).

Video Startup

When a user presses play on a video, they expect to be able to watch the video quickly. According to Conviva (a leader in video metric analysis), in Q1 of 2018, 14% of videos (that’s 2.4 billion video plays) never started playing after the user pressed play.

Video Start Breakdown (Large preview)

2.3% of videos (400M video requests) failed to play after the user pressed the play button. 11.54% (2B plays) were abandoned by the user after pressing play. Let’s try to break down what might be causing these issues.

Video Playback Failure

Video playback failure accounted for 2.3% of all video plays. What could lead to this? In the HTTP Archive data, we see 0.3% of all video requests resulting in a 4xx or 5xx HTTP response, so some percentage fail due to bad URLs or server misconfigurations. Another potential issue (one that is not observed in the HTTP Archive data) is videos that are blocked by geolocation (blocked based on the location of the viewer and the licensing of the provider to display the video in that locale).

Video Playback Abandonment

The Conviva report states that 11.5% of all video plays started, but the customer abandoned the playback before the video began playing. The issue here is that the video is not being delivered to the customer fast enough, and they give up. There are many studies on the mobile web where long delays cause abandonment of web pages, and it appears that the same effect occurs with video playback as well.

Research from Akamai shows that viewers will wait for 2 seconds, but for each subsequent second, 5.8% of viewers abandon the video.

Rate of abandonment over time (Large preview)

So what leads to video playback issues? In general, larger files take longer to download, so will delay playback. Let’s look at a few ways that one can speed up the playback of videos. To reduce the number of videos abandoned at startup, we should ‘slim’ down these files as best as possible, so they download (and begin playback) quickly.

MP4: Video Preload

To ensure fast playback on the web, one option is to preload the video onto the device in advance. That way, when your customer clicks ‘play’ the video is already downloaded, and playback will be fast. HTML offers a preload attribute with 3 possible options: auto, metadata and none.
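
As a minimal sketch (the file name is a placeholder):

<video preload="metadata" controls src="product-demo.mp4"></video>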

preload="auto"

When your video is delivered with preload="auto", the browser downloads the entire video file and stores it locally. This permits a large performance improvement for video startup, since the video is available locally on the device, and no network interference will slow the startup.

However, preload="auto" should only be used if there is a high probability that the video will be viewed. If the video is simply resident on your webpage, and it is downloaded each time, this will add a large data penalty to your mobile users, as well as increase your server/CDN costs for delivering the entire video to all of your users.

This website has a section entitled “Video Gallery” with several videos. Each video in this section has preload set to auto, and we can visualize their download in the WebPageTest waterfall as green horizontal lines:

Waterfall of video preload (Large preview)

There is a section called “Video Gallery”, and the files for this small section of the website account for 14.6M (83%) of the page download. The odds that one (of many) videos will be played is probably pretty low, and so utilizing preload="auto" only generates a lot of data traffic for the site.

Webpage data breakdown (Large preview)

For videos that have a high probability of playback (perhaps more than 90% of page views result in a video play), preloading the entire video is a very good idea. But for videos that are unlikely to be played, preload="auto" will only push extra tonnage of content through your servers and to your customers’ mobile (and desktop) devices.

preload="metadata"

When the preload="metadata" attribute is used, an initial segment of the video is downloaded. This allows the player to know the size of the video window, and to perhaps have a second or 2 of video downloaded for immediate playback. The browser simply makes a 206 (partial request) of the video content. By storing a small bit of video data on the device, video startup time is decreased, without a large impact to the amount of data transferred.

On Chrome, metadata is the default choice if no attribute is chosen.

Note: This can still lead to a large amount of video to be downloaded, if the video is large.

For example, on a mobile website with a video set at preload="metadata", we see just one request for video:

(Large preview)

And the request is a partial download, but it still results in 2.7 MB of video being downloaded, because the full video is 1080p, 150 s long, and 97 MB (we’ll talk about optimizing video size in the next sections).

KB usage with video metadata (Large preview)

So, I would recommend that preload="metadata" still only be used when there is a fairly high probability that the video will be viewed by your users, or if the video is small.

preload="none"

preload="none" is the most economical download option, as no video files are downloaded when the page is loaded. This will potentially add a delay in playback, but results in a faster initial page load. For sites with many videos on a single page, it may make sense to add a poster image to the video window, and not download any of the video until it is expressly requested by the end user. All YouTube videos that are embedded on websites never download any video content until the play button is pressed, essentially behaving as if preload="none".

Preload Best Practice: Only use preload="auto" if there is a high probability that the video will be watched. In general, the use of preload="metadata" provides a good balance in data usage vs. startup time, but should be monitored for excessive data usage.

MP4 Video Playback Tips

Now that the video has started, how can we ensure that playback is optimized so that the video does not stall and continues playing? Again, the trick is to make sure the video is as small as possible.

Let’s look at some tricks to optimize the size of video downloads. There are several dimensions of video that can be optimized to reduce the size of the video:

Audio

Video files are split into different “streams”, the most common being the video stream. The second most common stream is the audio track that syncs to the video. In some video playback applications, the audio stream is delivered separately; this allows for different languages to be delivered in a seamless manner.

If your video is played back silently (like a looping GIF, or a background video), removing the audio stream from the video is a quick and easy way to reduce the file size. In one example of a background video, the full file was 5.3 MB, but the audio track (which is never heard) was nearly 300 KB (5% of the file). By simply eliminating the audio, the file will be delivered quickly without wasting bytes.

42% of the MP4 files found on the HTTP Archive have no audio stream.

Best Practice: Remove the audio tracks from videos that are played silently.
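
If you work with FFmpeg, one way to strip the audio track without re-encoding the video stream is the -an flag (file names here are placeholders):

ffmpeg -i input.mp4 -c:v copy -an output-silent.mp4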

Video Encoding

When encoding a video, there are options to reduce the video quality (the number of pixels per frame, or the frames per second). Reducing a high-quality video to be suitable for the web is easy to do, and generally does not affect the quality delivered to your end users. This article is not long enough for an in-depth discussion of all the compression techniques available for video, but as a shorthand: in the x264 and x265 encoders, there is a setting called the Constant Rate Factor (CRF). Using a CRF of 23-28 will generally give a good compression/quality trade-off, and is a great first step into the realm of video compression.
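
As a hedged starting point with FFmpeg and x264 (file names are placeholders; tune the CRF value to your content):

ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset medium -c:a copy output.mp4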

Video Size

Video size can be affected by many dimensions: length, width, and height (you could probably include audio here as well).

Video Duration

The length of the video is generally not a feature that a web developer can adjust. If the video is going to playback for three minutes, it is going to playback for three minutes. In cases in which the video is exceptionally long, tools like preload="none" or streaming the video can allow for a smaller amount of data to be downloaded initially to reduce page load time.

Video Dimensions

18% of all video found in the HTTP Archive is identical on mobile and desktop. Those who have worked with responsive web design know how optimizing images for different viewports can drastically reduce load times since the size of the images is much smaller for smaller screens.

The same holds for video. A website with a 30 MB 2560×1226 background video will have a hard time downloading the video on mobile (and probably on desktop, too!). Resizing the video drastically decreases the file size, and might even allow for three or four different background videos to be served:

Width     Video (MB)
1226      30
1080      8.1
720       4.3
608       3.3
405       1.76
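
The resizing itself can be done with FFmpeg’s scale filter. A minimal sketch producing a 720-pixel-wide version (file names are placeholders; -2 derives an even height that preserves the aspect ratio):

ffmpeg -i background-2560.mp4 -vf scale=720:-2 -c:a copy background-720.mp4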

Now, unfortunately, browsers do not support media queries for video in HTML, meaning that this just does not work:

<video preload="auto" autoplay muted controls source sizes="(max-width:1400px 100vw, 1400px" srcset="small.mp4 200w, medium.mp4 800w, large.mp4 1400w" src="large.mp4" </video>

Therefore, we’ll need to create a small JS wrapper to deliver the videos we want to different screen sizes. But before we go there…

Downloading Video, But Hiding It From View

Another throwback to the early responsive web is to download full-size images, but to hide them on mobile devices. Your customers get all the delay for downloading the large images (and hit to mobile data plan, and extra battery drain, etc.), and none of the benefit of actually seeing the image. This occurs quite frequently with video on mobile. So, as we write our script, we can ensure that smaller screens never request the video that will not appear in the first place.

Retina Quality Videos

You may have different videos for different device screen densities. This can lead to added time to download the videos to your mobile customers. You may wish to prevent retina videos on smaller screen devices, or on devices with limited network bandwidth, falling back to standard-quality videos for these devices. Tools like the Network Information API can provide you with the network throughput, and help you decide which video quality you’d like to serve to your customer.

Downloading Different Video Types Based On Device Size And Network Quality

We’ve just covered a few ways to optimize the delivery of movies to smaller screens, and also noted the inability of the video tag to choose between video types, so here is a quick JS snippet that will use the screen width to:

  • Not deliver video on screens below 500px;
  • Deliver small videos for screens 500-1400;
  • Deliver a larger sized video to all other devices.
<html>
<body>
  <div id="video"></div>
  <div id="text"></div>
  <script>
    // Get screen width and pixel ratio.
    var width = screen.width;
    var dpr = window.devicePixelRatio;

    // Initialise 2 videos:
    // "small" is 960 pixels wide (2.6 MB), "large" is 1920 pixels wide (10 MB).
    var smallVideo = "http://res.cloudinary.com/dougsillars/video/upload/w_960/v1534228645/30s4kbbb_oblsgc.mp4";
    var bigVideo   = "http://res.cloudinary.com/dougsillars/video/upload/w_1920/v1534228645/30s4kbbb_oblsgc.mp4";

    // TODO: add logic for retina videos.
    if (width < 500) {
      console.log("this is a very small screen, no video will be requested");
    } else if (width < 1400) {
      console.log("let's call this mobile sized");
      var videoTag = "<video preload=\"auto\" width=\"100%\" autoplay muted controls src=\"" + smallVideo + "\"></video>";
      console.log(videoTag);
      document.getElementById('video').innerHTML = videoTag;
      document.getElementById('text').innerHTML = "This is a small video.";
    } else {
      var videoTag = "<video preload=\"auto\" width=\"100%\" autoplay muted controls src=\"" + bigVideo + "\"></video>";
      document.getElementById('video').innerHTML = videoTag;
      document.getElementById('text').innerHTML = "This is a big video.";
    }
  </script>
</body>
</html>

This script divides user’s screens into three options:

  1. Under 500 pixels, no video is shown.
  2. Between 500 and 1400, we have a smaller video.
  3. For larger than 1400 pixel wide screens, we have a larger video.

Our page has a responsive video with two different sizes: one for mobile, and another for desktop-sized screens. Mobile users get great video quality, but the file is only 2.6 MB, compared to the 10MB video for desktop.

Animated GIFs

Animated GIFs are big files. While both aGIFs and video files compress the data through the width and height dimensions, only video files have compression on the (often larger) time axis. aGIFs are essentially “flipping through” static GIF images quickly. This lack of compression adds a significant amount of data. Thankfully, it is possible to replace aGIFs with a looping video, potentially saving MBs of data for each request.

<video loop autoplay muted playsinline src="pseudoGif.mp4"></video>

In Safari, there is an even fancier approach: You can place a looping mp4 in the picture tag, like so:

<picture> <source type="video/mp4" loop autoplay srcset="loopingmp4.mp4"> <source type="image/webp" srcset="animated.webp"> <src="animated.gif"> </picture>

In this case, Safari will play the looping MP4, while Chrome (and other browsers that support WebP) will play the animated WebP, with a fallback to the animated GIF. You can read more about this approach in Colin Bendell’s great post.

Third-Party Videos

One of the easiest ways to add video to your website is to simply copy/paste the code from a video sharing service and put it on your site. However, just like adding any third party to your site, you need to be vigilant about what kind of content is added to your page, and how that will affect page load. Many of these “simply paste this into your HTML” widgets add hundreds of KB of JavaScript. Others will download the entire movie (think preload="auto"), and some will do both.

Third-Party Video Best Practice: Trust but verify. Examine how much content is added, and how much it affects your page load time. Also, the behavior might change, so track with your analytics regularly.

Streaming Startup

When a video stream is requested, the server supplies a manifest file to the player, listing every available stream (with dimensions and bitrate information). In HLS streaming, the player generally chooses the first stream in the list to begin playback. Therefore, the stream positioned first in the manifest file should be optimized for video startup on both mobile and desktop (or perhaps alternative manifest files should be delivered to mobile vs. desktop).

In most cases, the startup is optimized by using a lower quality stream to begin playback. Once the player downloads a few segments, it has a better idea of available throughput and can select a higher quality stream for later segments. As a user, you have likely seen this — where the first few seconds of a video looks very pixelated, but a few seconds into playback the video sharpens.

In examining 1,065 manifest files delivered to mobile devices from the HTTP Archive, we find that 59% of videos have an initial bitrate under 1.2 MBPS, and are likely to start streaming without much delay at 1.6 MBPS 3G data rates. 11% use a bitrate between 1.2 and 1.6 MBPS, which may slow the startup on 3G, and 30% have a bitrate above 1.6 MBPS, and are unable to play back at this bitrate on a 3G connection. Based on this data, it appears that ~41% of all videos will not be able to sustain the initial bitrate on mobile, adding to startup delay and possibly increasing the number of stalls during playback.

Initial bitrate for video streams (Large preview)

Streaming Startup Best Practice: Ensure your initial bitrate in the manifest file is one that will work for most of your customers. If the player has to change streams during startup, playback will be delayed and you will lose video views.
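
For illustration, here is a minimal sketch of an HLS master manifest with the lowest-quality stream listed first (the bitrates, resolutions, and file names are our own placeholders, not data from the tested sites):

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=215000,RESOLUTION=320x180
video/215k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=600000,RESOLUTION=640x360
video/600k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2600000,RESOLUTION=1280x720
video/2600k.m3u8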

So, what happens when the video’s bitrate is near (or above) the available throughput? After a few seconds of download without a completed video segment ready for playback, the player stops the download and chooses a lower bitrate video, and begins the process again. The act of downloading a video segment and then abandoning it leads to additional startup delay, which in turn leads to video abandonment.

We can visualize this by building video manifests with different initial bitrates. We test 3 different scenarios: starting with the lowest (215 KBPS), middle (600 KBPS), and highest bitrate (2.6 MBPS).

When beginning with the lowest quality video, playback begins at 11s. After a few seconds, the player begins requesting a higher quality stream, and the picture sharpens.

When starting with the highest bitrate (testing on a 3G connection at 1.6 MBPS), the player quickly realizes that playback cannot occur, and switches to the lowest bitrate video (215 KBPS). The video starts playing at 17s. There is a 6-second delay, and the video quality is the same low quality delivered in the first test.

Using the middle-quality video allows for a bit of a trade-off: the video begins playing at 13s (2 seconds slower), but it is high quality from the start, and there is no jump from pixelated to higher-quality video.

Best Practice for Video Startup: For fastest playback, start with the lowest quality stream. For longer videos, you might consider using a ‘middle quality’ stream at start to deliver sharp video at startup (with a slightly longer delay).

WebPageTest Thumbnails (Large preview)

WebPageTest results: Initial video stream is low, middle and high (from top to bottom). The video starts the fastest with the lowest quality video. It is important to note that the high quality start video at 17s is the same quality as the low quality start at 11s.

Streaming: Continuing Playback

When the video player can determine the optimal video stream for playback and the stream is lower than the available network speed, the video will playback with no issues. There are tricks that can help ensure that the video will deliver in an optimal manner. If we examine the following manifest entry:

#EXT-X-STREAM-INF:BANDWIDTH=912912,PROGRAM-ID=1,CODECS="avc1.42c01e,mp4a.40.2",RESOLUTION=640x360,SUBTITLES="subs" video/600k.m3u8

The information line reports that this stream has a 913 KBPS bitrate, and 640×360 resolution. If we look at the URL that this line points to, we see that it references a 600k video. Examining the video files shows that the video is 600 KBPS, and the manifest is overstating the bitrate.

Overstating The Video Bitrate
  • PRO
    Overstating the bitrate will ensure that when the player chooses a stream, the video will download faster than expected, and the buffer will fill up faster than expected, reducing the possibility of a stall.
  • CON
    By overstating the bitrate, the video delivered will be a lower quality stream. If we look at the entire list of reported vs. actual bitrates:
Reported (KBPS)    Actual (KBPS)    Resolution
913                600              640x360
142                64               320x180
297                180              512x288
506                320              512x288
689                450              412x288
1410               950              853x480
2090               1500             1280x720

For users on a 1.6 MBPS connection, the player will choose the 913 KBPS bitrate, serving the customer 600 KBPS video. However, if the bitrates had been reported accurately, the 950 KBPS bitrate would be used, and would likely have streamed with no issues. While the choices here prevent stalls, they also lower the quality of the delivered video to the consumer.

Best Practice: A small overstatement of video bitrate may be useful to reduce the number of stalls in playback. However, too large a value can lead to reduced quality playback.

Test the Nielsen video in the browser, and see if you can make it jump back and forth.

Conclusion

In this post, we’ve walked through a number of ways to optimize the videos that you present on your websites. By following the best practices illustrated in this post:

  1. preload="auto"
    Only use if there is a high probability that this video will be watched by your customers.
  2. preload="metadata"
    Default in Chrome, but can still lead to large video file downloads. Use with caution.
  3. Silent Videos (looping GIFs or background videos)
    Strip out the audio channel
  4. Video Dimensions
    Consider delivering differently sized video to mobile over desktop. The videos will be smaller, download faster, and your users are unlikely to see the difference (your server load will go down too!)
  5. Video Compression
    Don’t forget to compress the videos to ensure that they are delivered as efficiently as possible.
  6. Don’t ‘hide’ videos
    If the video will not be displayed — don’t download it.
  7. Audit your third-party videos regularly
  8. Streaming
    Start with a lower quality stream to ensure fast startup. (For longer play videos, consider a medium bitrate for better quality at startup)
  9. Streaming
    It’s OK to be conservative on bitrate to prevent stalling, but go too far, and the streams will deliver a lower quality video.

You will find that the video on your page is streamlined for optimal delivery and that your customers will not only delight in the video you present but also enjoy a faster page load time overall.

(dm, ra, il)

Video Playback On The Web: The Current State Of Video (Part 1)

Wed, 10/24/2018 - 13:30
Video Playback On The Web: The Current State Of Video (Part 1) Doug Sillars 2018-10-24T13:30:24+02:00

Usage of video on the web is increasing as devices and networks become faster and more capable of handling video content. Research shows that sites with video increase engagement by 80%. E-Commerce sites with video have higher conversions than sites without video.

But adding video can come at a cost. Videos (being larger files) add to the page load time, and performance research shows that slower pages have the opposite effect of lower customer engagement and conversions. In this article, I’ll examine the important metrics to balance performance and video playback on the web, look at how video is being used today, and provide best practices on delivering video on the web.

One of the first steps to improve customer satisfaction is to speed up the load time of a page. Google has shown that mobile pages that take over three seconds to load lose 53% of their audience to abandonment. Other studies find that on improving site performance, usage and sales increase.

Adding video to a website will increase engagement, but it can also dramatically slow down the load time, so it is clear that a balance must be found between adding videos to your site and not impacting the load time too greatly.

Recommended reading: Front-End Performance Checklist 2018 [PDF, Apple Pages]

Video On The Web Today

To examine the state of video on the web today, I’ll use data from the HTTP Archive. The HTTP Archive uses WebPageTest to scan the performance of 1.2 million mobile and desktop websites every two weeks, and then stores the data in Google BigQuery.

Typically just the main page of each domain is checked (meaning www.cnn.com is run, but www.cnn.com/politics is not). This data can help us understand how the usage of video on the web affects the performance of websites. Mobile tests are run on an emulated Motorola G4 with a 3G internet connection (1.6 MBPS down, 768 KBPS up, 300 ms RTT), and desktop tests run Chrome on a cable connection (5 MBPS down, 1 MBPS up, 28ms RTT). I’ll be using the data set from 1 August 2018.

Sites That Download Video

As a first step to study sites with video, we should look at sites that download video files when the page loads. There are 35k mobile sites and 55k desktop sites with video file downloads in the HTTP Archive data set (that’s 3% of all mobile sites and 4.5% of all desktop sites). Comparing desktop to mobile, we find that 30k of these sites have video on both mobile and desktop (leaving ~5,800 sites on mobile with no video on the desktop).

Mobile and Desktop Sites with Video (Large preview)

The median mobile page with video weighs in at a hefty 7 MB (583% larger than 1.2 MB found for the median mobile site). This increase is not fully accounted for by video alone (2.5 MB). As sites with video tend to be more feature rich and visually engaging, they also use more images (the median site has over 1 MB more), CSS, and Javascript. The table below also shows that the median SpeedIndex (a measurement of how quickly content appears on the screen) for sites with video is 3.7s slower than a typical mobile site, taking a whopping 11.5 seconds to load.

            SpeedIndex    Bytes Total    Bytes Video    Bytes CSS    Bytes Images    Bytes JS
Video       11544         6,963,579      2,526,098      80,327       1,596,062       708,978
All sites   7780          1,201,802      0              40,648       449,585         336,973

This clearly shows that sites that are more interactive and have video content take (on average) longer to load than sites without video. But can we speed up video delivery? What else can we learn from the data at hand?

Video Hosting

When examining video delivery, are the files being served from a CDN or video provider, or are developers hosting the videos on their own servers? By examining the domain of the videos delivered on mobile, we find that 12,163 domains are used to deliver video, indicating that ~49% of sites are serving their own video files. If we stack rank the domains by frequency, we are able to determine the most common video hosting solutions:

Video Domain       Count     %
fbcdn.net          116788    67%
akamaihd.net       11170     6%
googlevideo.com    10394     6%
cloudinary.com     3170      2%
amazonaws.com      1939      1%
cloudfront.net     1896      1%
pixfs.net          1853      1%
akamaized.net      1573      1%
tedcdn.com         1507      1%
contentabc.com     1507      1%
vimeocdn.com       1373      1%
dailymotion.com    1337      1%
teads.tv           1022      1%
youtube.com        1007      1%
adstatic.com       998       1%

Top CDNs and domains

Facebook, Akamai, Google, Cloudinary, AWS, and Cloudfront lead the way, which is not surprising. However, it is interesting to see YouTube and Vimeo so far down the list, as they are two of the most popular video sharing sites.

Let’s look into YouTube, Vimeo and Facebook video delivery:

YouTube Video Counts

By default, pages with an embedded YouTube video do not actually download any video files, just scripts and a placeholder image, so they do not show up in a query looking for sites with video downloads. One of the JavaScript downloads for the YouTube video player is www-embed-player.js. Searching for this file, we find 69k instances on 66,647 mobile sites. These sites have a median SpeedIndex of 10,700 and data usage of 3.31 MB, better than sites with downloaded video, but still slower than sites with no video at all. The increase in data is directly associated with more images and JavaScript (as shown below).

                 SpeedIndex    Bytes Total    Bytes Video    Bytes CSS    Bytes Images    Bytes JS
Video            11544         6,963,579      2,526,098      80,327       1,596,062       708,978
All sites        7780          1,201,802      0              40,648       449,585         336,973
YouTube script   10700         3,310,000      0              126,314      1,733,473       1,005,758

Vimeo Video Counts

There are 14,148 requests for Vimeo videos in the HTTP Archive. I see only 5,848 requests for the player.js file (in the format https://f.vimeocdn.com/p/3.2.0/js/player.js), implying that perhaps there are many videos on one page, or perhaps another location for the video player file. There are 17 different versions of the player present in the HTTP Archive, with the most popular being 3.1.5 and 3.1.4:

URL                                             Count
https://f.vimeocdn.com/p/3.1.5/js/player.js     1832
https://f.vimeocdn.com/p/3.1.4/js/player.js     1057
https://f.vimeocdn.com/p/3.1.17/js/player.js    730
https://f.vimeocdn.com/p/3.1.8/js/player.js     507
https://f.vimeocdn.com/p/3.1.10/js/player.js    432
https://f.vimeocdn.com/p/3.1.15/js/player.js    352
https://f.vimeocdn.com/p/3.1.19/js/player.js    153
https://f.vimeocdn.com/p/3.1.2/js/player.js     117
https://f.vimeocdn.com/p/3.1.13/js/player.js    105

There does not appear to be any performance gain for any Vimeo Library — all of the pages have similar load times.

Note: Using www-embed-player.js for YouTube or https://f.vimeocdn.com/p/*/js/player.js for Vimeo are good fingerprints for browsers with a clean cache, but if the customer has previously browsed a site with an embedded video, this file might already be in the browser cache, and thus will not be re-requested. But, as Andy Davies recently noted, this is not a safe assumption to make.

Facebook Video Counts

It is surprising that, in the HTTP Archive data, 67% of all video requests are from Facebook’s CDN. It turns out that on Chrome, third-party Facebook widgets download 30% of all videos posted inside the widget (this effect does not occur in Safari or Firefox). So a third-party widget added with just a few lines of code is responsible for 57% of all the video seen in the HTTP Archive.

Video File Types

The majority of videos on the pages tested are MP4s. If we look at all the videos downloaded (excluding those from Facebook), we get the following view:

File extension    Video count    %
.mp4              48,448         53%
.ts               18,026         20%
(no extension)    14,926         16%
.webm             3,946          4%
.m4s              2,017          2%
.mpg              1,431          2%
.mov              441            0%
.m4v              407            0%
.swf              251            0%

Of the files with no extension — 10k are googlevideo.com files.

What can we learn about these files? Let’s look at each file type to learn more about the content being delivered.

I used FFPROBE to query the 34k unique MP4 files, and obtained data for 14,700 videos (many of the videos had changed or been removed in the 3 weeks from HTTP Archive capture to analysis).
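
For reference, a per-file version of such a query (the file name is a placeholder) might look like this:

ffprobe -v quiet -print_format json -show_format -show_streams video.mp4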

MP4 Video Data

Of the 14.7k videos in the dataset, 8,564 have audio tracks (58%). Shorter videos that autoplay or videos that play in the background do not require audio, so stripping the audio track is a great way to reduce the file size (and speed the delivery) of your videos.

The next most important aspect to quickly downloading a video are the dimensions. The larger the dimensions (and in the case of video, there are three dimensions to consider: width × height × time), the larger the video file will be.

MP4 Video Duration

Most of the 14k videos studied are short: the median (50th percentile) duration is 21s. However, 10% of the videos surveyed are over 2 minutes in duration. Use cases here will, of course, be divided, but for short video loops, or background videos — shorter videos will use less data, and download faster.

Distribution of Video Duration (Large preview)

MP4 Video Width And Height

The dimensions of the video on the screen decide how many pixels each frame will have to use. The chart below shows the various video widths that are being served to the mobile device. (As a note, the Moto G4 has a screen size of 1080×1920, and the pages are all viewed in portrait mode).

Video Counts by Width (Large preview)

As the data shows, the two most utilized video widths are significantly larger than the G4 screen (when held in portrait mode). A full 49% of all videos are served with a width greater than 1080 pixels wide. The Alcatel 1x, a new Android Go device, has a 480×960-pixel screen. 77% of the videos delivered in the sample set are larger than 480 pixels wide.

As dimensions of videos decrease, so does the files size (and thus time to deliver the video). This is the primary reason to resize videos.

Why are these videos so large? If we correlate the videos served on mobile and desktop, we find that 18% of videos served on mobile are the same videos served on the desktop. This is a ‘problem’ solved for images years ago with responsive images. By serving differently sized videos to different sized devices, we can ensure that beautiful videos are served, but at a size and dimension that makes sense for the device.

MP4 Video Bitrate

The bitrate of the video delivered to the device has a large effect on how well the video will play back. The HTTP Archive tests are run on a 3G connection at 1.6 MBPS download speed. To play back (without stalling), the download has to be faster than the playback. Let’s allocate 80% of the available bitrate to video files (1.3 MBPS): 47% of the videos in the sample set have a bitrate over 1.3 MBPS, meaning that when these videos are played on a 3G connection, they are more likely to stall, leading to unhappy customers. 27% of videos have a bitrate higher than 2.5 MBPS, 10% are higher than 5 MBPS, and 35 of the videos served to mobile devices have a bitrate above 10 MBPS. These larger videos are unlikely to play without stalling on many connections, fixed or mobile.

Observed Video Bitrates (Large preview)

What Leads To Higher Bitrates

Larger videos tend to carry a larger bitrate, as larger dimensioned videos require a lot more data to populate the additional pixels on the device. Cross referencing the bitrate of each video to the width confirms this: videos with width 1280 (orange) and 1920 (gray) have a much broader distribution of bitrates (more data points to the right in the chart). The column marked in yellow denotes the 136 videos with width 1920, and a bitrate between 10-11 MBPS.

Bitrate Vs. Video Width (Large preview)

If we visualize only the videos over 1.6 MBPS, it becomes clear that the higher screen resolutions (1280 and 1920) are responsible for the higher bitrate videos.

High Bitrate and Video Width (Large preview)

MP4: HTTP vs. HTTPS

HTTP2 has redefined content delivery with multiplexed connections — where just one connection per server is required. For large files like video, does HTTP2 provide a meaningful improvement to content delivery? If we look at the stats from the HTTP Archive:

HTTP1 vs. HTTP2 (Large preview)

Omitting the 116k Facebook videos (all sent via HTTP2), we see that it is about a 50:50 split between HTTP1.1 and HTTP2. However, HTTP1.1 can also be served over TLS, and when we filter for HTTPS usage, we find that 81% of video streams are sent via HTTPS, with HTTP2 used slightly more than HTTPS over HTTP1.1 (41% vs. 36%).

HTTP vs. HTTP2 and secure (Large preview)

As you can see, a clean comparison of HTTP1.1 and HTTP2 video delivery speed remains a work in progress.

HLS Video Streaming

Video streaming using adaptive bitrate is an ideal way to deliver video to the end user. Multiple versions of each video are built with different dimensions and bitrates. The list of available streams is presented to the playback device, and the video player on the device can choose the most appropriate stream based on the size of the device screen and the available network conditions. There are 1,065 manifest files (and 14k video stream files) in the HTTP Archive data set that I examined.

Video Stream Playback

One key metric in video streaming is how quickly the video starts. While the manifest file has a list of available streams, the player has no idea of the available network bandwidth at the beginning of playback. It nevertheless has to pick a stream to begin, and it typically chooses the first one in the list. To facilitate a fast video startup, it is therefore important to place the correct stream at the top of your manifest file.
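As an illustration, a simplified HLS master playlist that lists the lowest-bitrate stream first might look like this (the rendition URIs, bandwidths, and resolutions below are placeholders):

    #EXTM3U
    # Lowest bitrate first, so players defaulting to the
    # first entry begin with the fastest-downloading stream.
    #EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=480x270
    low/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=1200000,RESOLUTION=960x540
    mid/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=3500000,RESOLUTION=1920x1080
    high/index.m3u8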

Note: Utilizing the Chrome Network Info API to generate manifest files on the fly might be a good way to quickly optimize video content at startup.
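The API is currently Chrome-specific and its values are estimates, but a sketch of that idea (the manifest URLs here are hypothetical) could look like:

    // navigator.connection is Chrome's Network Information API;
    // downlink estimates the available bandwidth in Mbps.
    const downlink = navigator.connection && navigator.connection.downlink;

    // Point the player at a manifest whose first stream
    // fits the estimated bandwidth.
    const manifest = downlink && downlink < 1.6
      ? '/video/master-low-first.m3u8'  // lowest bitrate listed first
      : '/video/master-default.m3u8';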

One way to ensure that the video starts quickly is to start with the lowest quality video segment, as the download will be the fastest. The initial video quality may be pixelated, but as the player better understands the network quality, it can quickly adjust to a more appropriate (hopefully higher quality) video stream. With that in mind, let’s look at the initial stream bitrates in the HTTP Archive.

Initial Bitrate for video streams (Large preview)

The red line in the above chart denotes 1.5 MBPS (recall that the mobile tests are run at 1.6 MBPS, and not only video content is being downloaded). We see that 30.5% of all the streams (everything to the left of the line) start with an initial bitrate higher than 1.5 MBPS (and are thus unlikely to play back smoothly on a 3G connection), and 17% start above 2 MBPS.

So what happens when the video downloads more slowly than it plays back? Initially, the player will attempt to download the (too large) initial bitrate files but, based on the download speed, will realize the problem. It will then switch to downloading a lower bitrate video, so that downloading is faster than playback. The problem is that the initial download attempt takes time, and adding a delay to video playback startup leads to customer abandonment.

We can also look at the most common bitrates of the .ts files (the segment files that carry the video content) to see what bitrates end up being downloaded on mobile. This data includes the initial bitrates and any subsequent files downloaded during the WebPageTest run:

Observed Mobile Bitrates (Large preview)

We can see two major groupings of video streaming bitrates (100-300 KBPS, and a broader peak from 300-1,000 KBPS). This is where we would expect streams to appear, given that the network speed is capped at 1.6 MBPS.

Comparing the data to desktop, mobile is clearly weighted toward the lower bitrates, while desktop streams show high peaks in the 500-600 and 800-900 KBPS ranges that are not seen on mobile.

Observed mobile and desktop streaming bitrates (Large preview)

Observed bitrates, mobile and desktop, compared to initial bitrate (Large preview)

When we compare the initial bitrates observed (blue) with the actual files downloaded, it is very clear that, on mobile, the bitrate generally decreases during stream playback, indicating that lowering the initial bitrate for video streams might improve video startup performance and prevent stalls early in playback. Desktop bitrates also appear to decrease, though it is also possible that some videos shift up to higher bitrates.

Conclusion

Video content on the web increases customer engagement and satisfaction. Pages that load quickly have the same effect. The addition of video to your website will slow down the page rendering time, necessitating a balance between overall page load and video content. To reduce the size of your video content, ensure that you have versions appropriately sized for mobile device dimensions, and use shorter videos when possible.

If immediate playback of your videos is not essential, follow the path of YouTube and Vimeo: download all of the pieces required to be ready for playback, but hold off on downloading any video segments until the user presses play. Finally, if you are streaming video, start with the lowest quality setting to ensure fast video startup.
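With a plain HTML5 video element, a rough approximation of that pattern (browsers treat preload as a hint, so behavior varies) is:

    <!-- preload="metadata" fetches duration and dimensions but
         defers the media data until the user presses play. -->
    <video controls preload="metadata" poster="poster.jpg">
      <source src="video.mp4" type="video/mp4">
    </video>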

In my next post on video, I will take these general findings, and dig deeper into how to resolve potential issues with examples.

(dm, ra, il)