View full site - the worst anti-pattern

The “view full site” link has fostered a negative behavioural trait. When faced with problems on a dedicated mobile site, users have a tendency to look for the full site link like some sort of safety net.

How often have you followed a search engine link to a broken page on a dedicated mobile site and looked for the full site link in an attempt to solve the problem? It’s instinctive among web-savvy people, but I have witnessed people outside of our industry exhibit the same behaviour in a similarly unconscious manner.

The sad truth is “view full site” can also be used by web developers as a safety net if the small screen experience isn’t good enough. The pinch-to-zoom fallback is there when a poor implementation fails and lazy web developers capitalise on the user’s newfound habits of correcting such website issues by clicking this magical link.

This leaves responsive designers in a difficult position. We are already on the back foot when the user arrives at our lovingly tailored responsive experience — we have to convince the user that our responsive design is the full experience after years of being misled into thinking they are getting half a product by dedicated mobile sites overzealous in removing content and stripping back features.

Part of the problem is that from the average user’s perspective a dedicated mobile site may not look too different to a responsive site on the surface. If they run into problems with our design or perhaps they expected to see some content that is missing because it was simply overlooked, do they feel misled by default?

So how do we fix this problem? By continuing to fight the good fight with ubiquitous experiences and earning back the user’s trust over time — it’s not going to be a quick fix. Bad habits take time to eradicate. The great content swindle needs to stop.

What we can learn from game designers

After watching Indie Game: The Movie during Channel 4’s video games night I couldn’t help but notice the parallels between game design and web design. There are elements of this that I have noticed before, but I thought I’d share some of the facets of video games that aren’t spoken about so often in relation to our industry.

Note that this isn’t a discussion about gamification, rather a look at how game design applies to the web.

Video games are usability success stories

Pong is perhaps one of the greatest examples of usability for a digital product. The original Pong arcade box featured two knobs: one on one side of the box for player one and one on the other side for player two. Twisting the knob on either side moved the corresponding on-screen paddle up and down.

The box had no instructions. The player was left to learn the behaviour and play.

The original Pong machine deserves credit for being a usability success

More complex games that require extra behaviours to progress (let’s use Tomb Raider as an example) produce scenarios where environmental situations encourage you to recall certain behaviours, perform the input combination on the controller to trigger the behaviour and progress. In fact most games are essentially sequences of pattern recognition.

One of the most enjoyable facets of gaming is when the player is in a state of flow1 and they perform the actions almost instinctively. The game then feels more rewarding to play because the player is progressing at the ideal pacing and feels more at one with the game when difficulty level and ability level are in perfect synergy.

In Tomb Raider the player quickly learns to apply input combinations in relation to environmental features. Some ledges require a running jump rather than a regular jump to progress to the next area

Repetition plays a key role in establishing these actions in the user’s mind. If you strip the surface texture graphics and reduce the polygon count in Tomb Raider, you would notice that the environments would look quite similar in that they can all be traversed using the same character behaviours but in varied and more challenging ways — the foundations remain the same from the first level through to the end.

There are many lessons we can learn from how games are designed and built and apply the findings to the web. Input combinations that trigger specific actions could appear in the form of gestures or even as part of a visual language. Think about how games teach you to subconsciously perform such actions using techniques like pacing and repetition to help engrain them in the user’s intuition.

You might think this kind of thinking could lead to a deviation from established web design patterns — it all depends on the implementation. Imagine a first person shooter that didn’t bind the fire action to the right shoulder button on a controller or the left mouse button and instead required you to press a button like ‘x’. It would be infuriating because it simply doesn’t feel right. Design patterns occur in games too; they’re just sometimes hard to spot because they almost feel like second nature — which is a very good thing.

Storytelling and narratives

We can learn a lot from point-and-click adventure games. Not so much in regard to the nature of the input for the genre — my favourite point-and-click adventure was Grim Fandango, which was labelled a point-and-click adventure even though it was controlled entirely by keyboard — I’m talking about how they are structured in terms of their narratives.

All good stories have a beginning, a middle and an end. Point-and-click adventures are (mostly) linear stories made up of sequences that require item collection and puzzle solving to advance to the next sequence. The difference between the web and point-and-click adventures is that the latter wants you to struggle to advance, but not too much. In most cases on the web that’s not what we want to do, but there are certainly elements from this genre that we can learn from and apply to the web. Adventure games try to funnel the player down the linear story without leaving them completely stuck, otherwise the player is less likely to feel compelled to finish the game. It’s the in-between bits that hold this narrative flow together that interest me the most — when the player starts to struggle, the game has a job on its hands to keep the player interested and help them back into the narrative so the game can tell its story.

Characters respond to changes in the dynamics of the game. If you are looking for a specific item to proceed to the next sequence and you’re having a hard time doing so, some specific characters will become aware of the situation at hand and perhaps provide help or hints when spoken to.

In Grim Fandango speaking to characters you have already met can often provide clues as to how to proceed to the next sequence

This sense of situational awareness can apply to the web too. As a user progresses through our own narrative, situational awareness can help the user progress to the next scene. Imagine a user adds an item to their cart in an e-commerce website — the dynamics of the situation have changed from when the user began on their journey and we can help them through to the end of the story. Perhaps the user digresses from the shop after deciding to read a rather interesting article on the store’s blog about how the store has been donating a portion of profits to charity. What if the blog post had a helpful block at the end to inform the user of the change in dynamics? Something to the effect of: “And you can help us donate too by buying that t-shirt in your cart. Would you like to do that now?”

Games speak to the players in plain English. No technical jargon like “Your session has expired” — something informative, clear and to the point, but not abrupt and not too long so that it wastes time either. Narrative content is hard.

We really should listen to video game designers and developers a lot more. I would love to see people like Raph Koster speak at web design conferences and talk about flow theory, establishing core gameplay and environmental mechanics, and the similar challenges we face in both industries. When it comes to telling compelling stories and teaching users to perform instinctual reactions to scenarios, the games industry has been doing this for a long time and has left a lot of good examples for us to learn from.

Potential use cases for script, hover and pointer CSS Level 4 Media Features

Having recently covered the luminosity media feature, I thought I’d take time to document the potential use cases for the other new media features in the CSS Level 4 Media Queries module. Note that at the time of writing these features don’t exist in any browser yet, so the following use cases are theoretical.

Introducing Script

The script media feature does largely the same job as existing feature detection methods, albeit on a more limited scale (browsers that support CSS Level 4 media queries). It checks to see if ECMAScript (commonly in the form of JavaScript on the web) is supported in the current document. That means if a user has JavaScript enabled or disabled in their browser, we can detect its state and make appropriate formatting adjustments in CSS.

There are certain limitations to what the script media feature can detect. If JavaScript is switched off and a page is loaded, the media feature will correctly report a value of “0” and it will report a value of “1” when JavaScript is enabled. But what happens to the media feature result when the user switches JavaScript off whilst viewing the page depends on the individual browser’s behaviour — it may or may not report a value of “0”. It also can’t be used as a way of providing a fallback if JavaScript errors cause something to break1 because the browser will correctly say that the current document is capable of showing scripts and return a value of “1”. The specification hints at future levels of CSS being able to provide fine-grained results of which scripts are allowed to run on a page2.

Use cases for script

Using the “not” keyword in a script media query would allow us to style fallbacks where JavaScript isn’t enabled. It could also be used for progressive enhancement where the presence of JavaScript isn’t taken for granted.
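As a sketch of the fallback case, assuming the “not” keyword combines with the feature as speculated (the class name here is illustrative):

```css
/* Hypothetical fallback when JavaScript is unavailable:
   hide controls that depend on script and let the content flow normally */
@media not (script) {
    .carousel-links {
        display: none;
    }
}
```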

Here is an example of a script media query written with progressive enhancement in mind (assuming we have a JavaScript-powered carousel):


@media (script) {
    .carousel-links {
        display: inline-block;
        padding: 0.3em 0.6em;
        ...
    }
}

Introducing Hover

Similar to the script media feature, hover returns a basic on/off (1/0) result depending on whether the primary pointing system on the device can hover3. Although touch-based devices may have the potential for hover-capable input devices like a mouse, this media query will still report a value of “0” before any peripherals are attached.

The hover media feature solves a very common problem. Some UI elements that require a hover interaction, like mega drop-down navigation menus, often appear on touch-based devices that aren’t capable of hover functions. Their appearance is usually triggered by media queries based on screen dimensions, which is not a reliable method for knowing whether a device is capable of touch and/or hover inputs.

Currently the user agent on a touch device mimics a hover in some cases, applying some sort of :hover and :active state on the first tap of the hover menu item. The point is, it’s not that reliable or user friendly. It takes a bit of guesswork for the user to know a single tap activates hover. Thankfully the hover media query can help us fix this issue by detecting the device’s ability to hover.

Use cases for hover

Upon detection of a hover-capable input device we could format our markup to enhance interfaces for such devices (e.g. drop-down menus, tooltips etc).

If we wanted drop-down menus to only appear on devices that have hover input devices connected, then we would write:


@media (hover) {
    nav ul ul {
        display: none;
        /* Hide the sub menu and position absolutely to the parent li for a CSS-based hover drop-down menu */
        ...
    }
    nav ul li:hover ul {
        display: block;
        /* Show the menu on hover of the parent li */
    }
}

Introducing Pointer

Pointer was generally thought of as a way to differentiate between touch and other peripheral-based inputs. The media query produces 3 different values: none, coarse and fine — the assumption being that when implemented, a device with no pointing mechanism (like the 4th generation Kindles) would report “none”, an iPhone and iPad would report a value of “coarse” and a desktop machine would report “fine”.

Coarse means that the pointer has limited accuracy. According to the spec4, that means that at a zoom level of 1 (your default zoom level) the pointing device would have trouble accurately selecting several small adjacent targets.

Inputs that would give a value of fine would likely be a mouse, a trackpad or even a stylus.

I have a couple of reservations about the pointer media feature and I can see potential misuse or misunderstanding of it.

A potential limitation might be when peripherals are used after the page load. For example, would the media query update its values if I decided to switch to a stylus during browsing? Also, the spec notes that for accessibility reasons some user agents may report a value of “coarse” in an environment where the input would normally be reported as “fine”.

The takeaway from all of this? Don’t rely on a coarse pointer to mean touch. Loading touch UI elements based on the accuracy of the input device is trying to solve a problem with the wrong tools. We should take pointer accuracy at face value — it can only tell us how accurate the input device is, therefore our formatting changes within a pointer media query should only make adjustments for accuracy rather than input type. If you want to write CSS for touch-enabled devices, use Modernizr to detect touch capabilities, because relying on the pointer media feature is like relying on a width media query to definitively say “this browser is a mobile phone”.
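For what it’s worth, a Modernizr-based approach might look something like this — Modernizr toggles touch/no-touch classes on the html element when its touch event detection is included in the build; the .btn class and values are illustrative:

```css
/* Modernizr adds .touch or .no-touch to the <html> element */
.touch .btn {
    padding: 1.2em 2em; /* generous tap target for touch input */
}

.no-touch .btn:hover {
    background: #e9e9e9; /* hover affordance only where touch isn't detected */
}
```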

Use cases for pointer

The specification provides a good use case where you could theoretically enlarge tricky targets like radio buttons based on the accuracy of the pointer. Perhaps “coarse” would be a good default to work from: if we assume that the device is inaccurate then we can progressively enhance for accuracy.

The following code demonstrates how we could potentially improve accuracy on UI elements for devices with inaccurate pointers and reduce their size for devices with accurate pointers:


@media (pointer:coarse) {
  .btn {
        padding: 1.2em 2em;
        font-size: 1.6em;
  }
}

@media (pointer:fine) {
  .btn {
        padding: 0.6em 1em;
        font-size: 1em;
  }
}

The new media features offer great opportunities to progressively enhance our designs. I don’t think any of them should form the foundation for a design, nor do I think that they were ever intended to be used in such a way. What they do provide is a way for us to finesse the details. Make sure to check PalmReader to see when the new media features arrive on your device.

Responding to environmental lighting with CSS Media Queries Level 4

Media Queries Level 4 builds upon the existing media queries specification that many of us use when we build responsive designs today. The Media Queries Level 4 specification introduces four interesting new media features: script, hover, pointer and luminosity. At the time of writing the specification has yet to be implemented in a browser, but that shouldn’t stop us from exploring the potential use cases.

The luminosity media feature has garnered interest from the web community; it will allow developers to make CSS adjustments based on changes in the ambient lighting in which the device is used. The user’s device must be equipped with a light sensor to trigger this new media feature.

The most obvious use case for the luminosity media feature is to adapt a design depending on whether the user is reading the page during the day where ambient lighting is brighter or during the night where ambient lighting is darker. We already see this behaviour in a few native apps.

Digg Reader on iOS can change its theme depending on the brightness of the environmental lighting

The thing about designing for the web is that we don’t have the same prior knowledge of the destination device that designers of native apps do. The light sensor’s sensitivity on an iOS device might be different from that of an Android device, and the sensitivity of one Android device’s light sensor might differ from another’s, so we need to be careful with the degree of change we make, as it could be jarring on devices that have an overly keen light sensor.

The code looks something like this:

@media (luminosity: normal) {
    body {
        background: #f5f5f5;
        color: #262626;
    }
}
@media (luminosity: dim) {
    body {
        background: #e9e4e3;
    }
}
@media (luminosity: washed) {
    body {
        background: #ffffff;
    }
}

A normal luminosity level represents the screen being viewed in ideal lighting conditions. I would recommend working from this level as your default. Rather than wrapping those styles in a media query targeted at normal luminosity levels, it would probably be best to leave them unwrapped so browsers and devices that aren’t capable of seeing the luminosity media feature can see the page in its ideal condition. You can then use the dim value to make adjustments for darker environments and washed to adjust styles for brighter environments, leaving the default styles accessible to all devices whether they are equipped with a light sensor or not.
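To illustrate, the earlier snippet could be restructured so the normal styles sit outside any media query (colour values borrowed from that snippet):

```css
/* Default styles for normal lighting — visible to every browser,
   light sensor or not */
body {
    background: #f5f5f5;
    color: #262626;
}

/* Darker environments: soften the background slightly */
@media (luminosity: dim) {
    body {
        background: #e9e4e3;
    }
}

/* Brighter environments: push the background lighter */
@media (luminosity: washed) {
    body {
        background: #ffffff;
    }
}
```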

I mentioned the potential differences in light sensor hardware sensitivity earlier — I think this is a key reason not to go over the top with the changes we make within luminosity media queries. Can we definitively say that a cloud passing in front of the sun will not trigger the dim luminosity media feature? The last thing the user wants is a harsh change between light and dark designs that is too easily triggered.

Potential stylesheet changes from dark to light environments (left to right). Subtle changes in contrast are key for avoiding a jarring user experience when environmental changes occur

As with other facets of web design, we should strive to stay out of the user’s way when they are trying to enjoy our design. The luminosity media feature has the potential to be the most annoying media query at our disposal; it could be responsible for ushering in a new era of glow-in-the-dark websites — let’s just be careful with it when it arrives.

In Flux

I count myself lucky to work in such an exciting industry. When friends or family outside of web design circles ask me about my job I tell them that it is exciting. I often tell them that “No two days are the same”. In a recent moment of clarity I realised that there are very few parameters in our work that stay the same. The web as a medium changes every day, in fact it is changing all the time.

The television industry broadcasts to a specified aspect ratio. This aspect ratio fits the dimensions of TV sets and the broadcast is either squashed or stretched to fit the image depending on the size of the set. The viewer has limited control over what they see and how they see it. Using their remote control, they might have an option to change the aspect ratio from 16:9 to 4:3 which distorts the image and breaks the broadcaster’s intended design.

The newspaper industry generally operates to either broadsheet or tabloid sizes on a paper canvas. The reader has slightly more control than the TV viewer: they can fold the paper and take it with them, or perhaps fold it to make it easier to hold and read. However, this breaks the publisher’s intended design. The graphics and the text are broken by a line through the layout created by the reader’s fold.

The web industry is more complex. The web is constantly in a state of flux. There are no absolutes. You could almost say that no two users’ computers are the same. Think about the parameters that affect how our work is observed: different screen widths and heights are an obvious example, but look deeper at the additional parameters that can affect screen size — in Chrome some users prefer to browse with the bookmarks bar in view, some without. Some people might have additional toolbars that reduce the viewing area further, meaning their experience will be slightly different to someone else’s. Other factors that affect how someone sees our design include different colour profiles between browsers, different colour profiles between monitors, manually adjusted colours and contrasts, browser extensions that block advertisements, browser extensions that block design and show only the content, and browser extensions that add functionality and manipulate designs, like Skype’s phone number extension — just to name a few.

Under the hood of the browser a slight difference in the version number could introduce any number of tweaks that affect a user’s experience of a page — perhaps the JavaScript engine has been improved, meaning user A’s computer presents your design better than user B’s. Maybe the font hinting has been tweaked between revisions, Flash might be disabled or enabled, experimental features might be active or inactive, and diving deeper into version numbers — a browser version number might be the same between two computers but they might have different versions of the Java™ or QuickTime plugins. I find it hard to believe that there is a wealth of identical environments viewing our designs; the notion of no two being the same doesn’t seem so far-fetched.

The mind-bending truth is that this is changing all the time at micro and macro levels. A plugin version number can change, a browser can update automatically, the web as a platform can change and is constantly changing. The only absolute is change.

We need to be ready to respond to change. Screen size is just one parameter, but look closer and you’ll find a host of differences that are not present in other media — this is what excites me about the web. We can be true to the nature of the web, adhere to the core values that change slower than the rest like fluidity, typography, engaging content and accept change.

Building for Content Choreography using Flexbox

On July 14th 2011 Trent Walton published an article introducing the concept of Content Choreography. The article opened my mind and made me question some of the limitations we face in how we build responsive experiences. At the time content reordering and reflow hadn’t been widely explored beyond JavaScript-based solutions and I had been experimenting a lot with flexbox to reorder and arrange items horizontally for Typecast’s UI. When Trent spoke about content stacking, I started to think what was achievable for reordering content along the y-axis in a single column layout.

A tale of two syntaxes

The flexible box model landscape has been through a series of fundamental changes since its introduction in 2009. The original syntax for declaring the property in CSS is still recognised in a lot of today’s popular browsers. The following 3 years brought two significant changes to the syntax mainly around the language used to specify flexbox properties.

The flexbox specification has been finalised since I wrote about using flexbox to tackle content choreography concerns in May 2012, so it’s a good time to revisit the approach. Although the new flexbox syntax has been agreed, it has not been widely implemented. Since the spec was settled we have witnessed the release of new mobile operating systems like iOS 6, which boasted numerous updates to Safari, one of which refrained from implementing the new syntax and reverted to the old 2009 syntax. At the time of writing it is unclear whether iOS 7 will feature the updated syntax. Android’s default browser still uses the old syntax, although Chrome on Android uses the new syntax like its desktop counterpart. That leaves us in some sort of syntax-purgatory between the final specification and the 2009 syntax.

Luckily the changes between old and new flexbox won’t affect how we use it to reorder blocks of content, just how we write the code. Some repetition is required, much like writing vendor prefixes.

The bottom line is that flexbox is safe to use on the web today for content choreography. According to caniuse.com’s global browser usage statistics, 76.52% of users would be able to view a flexbox-based solution to content choreography at the time of writing. 23.48% might seem like a considerable amount of people, but when we break that figure down further it’s not quite so big: 23.48% takes the desktop versions of Internet Explorer into consideration, and we don’t intend to serve this solution to that demographic.

Content choreography using flexbox is most reliable at the first major breakpoint (usually in a single column). It’s also easiest at this level because we are (largely) dealing with moving element blocks around vertically on the y-axis. When we approach content choreography from this angle we can assume most desktop browsers won’t see our reordered page. Looking again at the browser statistics for flexbox support among mobile browsers only 14.4% of users won’t see choreographed content.

I don’t usually make decisions based on browser statistics, especially project-specific decisions, but for gauging a general sense of browser support for something previously thought of as an edge-case CSS property, I think it’s worth pointing out that it is as safe to use in this context as something trivial like the text-shadow property.

A note about screenreaders

Before we dive into code specifics, we need to talk about accessibility. Screen readers will not read an altered source order changed by flexbox. Instead they will read the original document order which makes for a jarring and confusing experience for users requiring a service like VoiceOver.

Some feel that for this reason it is better to make source order changes in JavaScript rather than CSS although others counter that this hinders innovation and screen readers simply must follow our lead in pushing the web forward as a platform.

I am firmly in favour of a CSS-based approach, as this is, after all, fundamentally a layout change. Using JavaScript instead of CSS to achieve a layout goal that can otherwise be achieved in CSS feels like a Frankenstein approach. I feel it’s dragging us back to the dark ages instead of helping our platform mature. If we choose to remain stagnant on issues like this, it would be like encouraging the use of JavaScript for rollover states in 2013. The fact is we have flexbox at our disposal, specced for use in CSS; avoiding it is only going to encourage the makers of screen reading software to do the same.

A solution for past, present and future

Now that we have covered the fundamentals, there are a few simple considerations to account for when building with flexbox to achieve content choreography. I have found the following mindset helps:

  • Start designing and building for mobile first (no-brainer)
  • Your DOM order is your definitive order. Build for this order first before addressing content choreography concerns. If you are unsure about your definitive source order, it is the DOM order shown when your layout has enough space to show everything you want to show in a sensible hierarchy. This requires foresight — one of the most difficult skills in a responsive designer’s arsenal.
  • The DOM order will act as the fallback order on devices with limited screen space and where flexbox doesn’t work (Opera Mini, Opera Mobile < 12.0 etc). Use content choreography only if your layout needs it in your first breakpoint.
  • Address the past, present and future devices by using both flexbox syntaxes.

All things considered, the revised flexbox code for a definitive specification looks something like this1:


.container {
    display: -moz-box;
    display: -webkit-box;
    display: -ms-flexbox;
    display: -webkit-flex;
    display: flex;
    -moz-box-orient: vertical;
    -webkit-box-orient: vertical;
    -ms-flex-direction: column;
    -webkit-flex-direction: column;
    flex-direction: column;
}

.nav {
    -webkit-box-ordinal-group: 1;
    -moz-box-ordinal-group: 1;
    -ms-flex-order: 1;
    -webkit-order: 1;
    order: 1;
}

...

I have updated the original demo so it considers the new syntax.

Flexbox is no longer the volatile experimental property it once was. Now that we have a final specification in place and widespread support on mobile devices, flexbox is a robust solution for content choreography exactly where we want it to be — where space is confined and some layout adjustment is required.


  1. Inspiration heavily borrowed from Chris Coyier’s excellent Using Flexbox article 

Beyond the Canvas

When we design we generally do so in two dimensions — length and width. They are the physical constraints of what our technology is currently capable of. Our dimensional constraints are then realised on the devices used to experience our design.

Beyond the two dimensional screen exists the third dimension (and many other theorised dimensions) — the physical space in which our designs exist beyond the canvas. Here, all sorts of physical parameters affect how a person uses our design.

Consider the physical space around the user — perhaps they are lying on their side on the sofa or in bed, holding a mobile device with one hand. Can the design be enjoyed when a user is physically restricted from using two hands? Luke Wroblewski further elaborates on this idea in his Testing One Thumb, One Eyeball article detailing the test procedure for Polar.

If people can get things done in time sensitive, limited dexterity situations, they’ll be even more efficient when we have their full attention and two-hands focused on our designs.

In a separate article Luke details the reach of one thumb with the diagram below to show the considerations for positioning navigation. This is relevant to anything you want people to reach easily, for example if you had a single purpose web app, you might want to position the primary action in the safe zone just like Instagram does with their primary action (take picture).

Image from Luke Wroblewski’s Responsive Navigation: Optimizing for Touch Across Devices article
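As a rough sketch of that idea, a single primary action could be anchored within comfortable thumb reach on small screens (the class name and breakpoint here are illustrative, not a definitive recipe):

```css
/* Pin the primary action along the bottom edge of the viewport,
   inside the thumb's safe zone for one-handed use */
@media (max-width: 30em) {
    .primary-action {
        position: fixed;
        bottom: 0;
        left: 0;
        width: 100%;
        padding: 1em;
    }
}
```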

Beyond the canvas, we should take into account where the thumb might hover while the device is in use. Some people rest their thumb along the ridge of the hardware while others hover it over a portion of the screen, poised to press something. This emphasises the importance of testing with real devices. Real hardware occupies space in three dimensions, whereas on-screen emulators confine themselves to the two dimensional canvas, free from physical distractions — not really representative of the physical world we live in beyond the canvas.

Book Review: Combining Typefaces

The craft of combining typefaces can feel like a daunting prospect. It’s easy to shy away from the adventurous cross-pollination of different typefaces and stick to families and superfamilies to achieve typographic harmony.

Thankfully the latest addition to Five Simple Steps’ helpful Pocket Guides series, written by Tim Brown, seeks to offer reassurance and open-mindedness when it comes to making typographic decisions. Tim approaches the subject from a web designer’s perspective:

We need to think about compositions not as layouts, but as coordinated chunks of typeset elements that do specific jobs and exist in many states simultaneously shifting dynamically among those states.

The book itself is the living realisation of Tim’s own words. The methodology described within its pages is broken down into digestible chunks for quick reference and assurance. There are recurring themes of pausing, stepping back and patience. Combining typefaces is an intricate craft that becomes more rewarding with practice and knowledge of the faces you work with. Tim expresses the importance of absorbing the type, its purpose, its features, and the relationships between spaces and rhythms at a micro and a macro level. The more deeply you know the characteristics of different typefaces and the content those typefaces are going to represent, the easier it is to find harmony between both entities.

Here’s the bottom line: absorb the text and the author’s or client’s intentions with vigor, because it is integral to your success. If the visual decisions you make aren’t meaningfully connected to the ideas they represent, then your typeface combinations don’t matter.

It’s a purposely open-ended book, because with design there are few definitives. The skill of combining typefaces is nourished by learning from, documenting and critiquing work. A few years ago I naively believed there were certain formulas for this practice. After reading Tim’s words I am excited about looking at typography through a more meaningful lens. Combining Typefaces is a valuable addition to any web designer’s desk — one that I’ll be keeping within reach.

Thoughts on Toggling a Responsive Design On and Off

Earlier today Roger Johansson posted an article about potential scenarios for disabling a responsive design, along with a demo. As noted in the article, this isn’t new thinking, as Bruce Lawson, Chris Coyier and various others have spoken about it before; the purpose here was to introduce the demo along with scenarios where it would be useful.

I can’t quite settle on a stance on this approach, though I’m leaning towards thinking it’s a good idea. Some view the inclusion of an off switch as an admission of failure in the implementation of a responsive design — I’m not so sure it is. Chris Armstrong put it nicely on Twitter:

RWD involves making assumptions on the user’s behalf, why not allow them to be overridden?

I wish I had a toggle for the bad responsive designs, the ones that “don’t get it” (i.e. they hide content and bastardise the experience rather than adapting it appropriately). Even if we have examined and tested our assumptions thoroughly, there is always a chance that we’ll get something wrong, and I think users deserve a chance to undo something we impose on them (with good intent).

Roger’s article discusses the language around the toggle switch:

The trickiest part is probably what to call the switch. Phrases like “View desktop layout”, “Go to desktop version” and “View full site” seem like saying that what you’re currently viewing is missing something. So instead I called it “View fixed width layout” (and “View flexible width layout” to toggle back). I don’t really know if it helps end users understand the difference, but I think it’s a more accurate description than using “desktop” and “mobile”. Someone’s browser window being narrower than 960 pixels, or 1200 pixels, or wherever you choose to draw the line, does not necessarily mean they’re using a “mobile” device.

After years of m-dot websites, users have been conditioned to feel that “View desktop site” means the page is hiding something from them. That’s why I think adjusting the language on the switch won’t work; it may recreate the feeling of deception that exists with many m-dot sites. Terms like “fluid” and “fixed” will also mean nothing to the average user. The language would need to feel familiar and create an expectation — although in this situation, the user’s expectation is that the button will reveal a “full version” containing the stuff the page is hiding.

I think we could do something visually rather than give it confusing and misguided terms. I’m not saying this suggestion is the definitive approach, it’s more of a rough idea. The image below gives you an idea of what an icon might look like before and after it is toggled.

The left icon shows what it might look like in a standard responsive layout with the option to toggle a fixed width layout, the right shows how it would look from the fixed width layout if the user wanted to switch back to a responsive layout.

Again — this is just a rough idea using a couple of repurposed icons. If you do decide to implement an off switch, I suggest trying a visual reference rather than potentially confusing wording.
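For the curious, here is a minimal sketch of how such an off switch might work under the hood. This is not Roger’s implementation — the element ids, the storage key and the helper names are all my own assumptions — but the general idea of disabling a stylesheet’s media queries and remembering the choice is a common approach:

```javascript
// Hypothetical sketch of a layout toggle. Assumes a <link id="responsive-css">
// element carrying the media-query styles and a <button id="layout-toggle">.
// None of these names come from Roger's demo.

const STORAGE_KEY = 'layout-preference'; // assumed key name

// Pure helper: given the current preference, return the other one.
function nextLayout(current) {
  return current === 'fixed' ? 'responsive' : 'fixed';
}

// Map a preference to the stylesheet's `media` attribute: 'all' applies
// the media queries normally; 'not all' never matches, so the page falls
// back to the fixed-width base styles.
function mediaFor(preference) {
  return preference === 'fixed' ? 'not all' : 'all';
}

// DOM wiring (browser only).
if (typeof document !== 'undefined') {
  const link = document.getElementById('responsive-css');
  const button = document.getElementById('layout-toggle');
  const saved = localStorage.getItem(STORAGE_KEY) || 'responsive';
  link.media = mediaFor(saved); // honour a previously saved choice

  button.addEventListener('click', () => {
    const current = localStorage.getItem(STORAGE_KEY) || 'responsive';
    const next = nextLayout(current);
    localStorage.setItem(STORAGE_KEY, next);
    link.media = mediaFor(next);
  });
}
```

The `media="not all"` trick keeps the stylesheet in the document but prevents it from ever matching, which avoids re-downloading anything when the user toggles back.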

Under the Microscope

When designing for small screens keep in mind that everything you do is under the microscope — design patterns, interactions, usability, speed — everything. These wonderful little devices that we cup in our hands offer a small, personal and intimate window in which to enjoy the web.

It is more important than ever that we get the little details right. A small screen is limited in what it can show at any particular moment — and that’s what we are dealing with here: lots of tiny moments. By contrast, a large screen offers plenty of room for design problems to hide or go mostly unnoticed. Small displays don’t allow that. They are unforgiving of poor design decisions, which is why a lot of designers are quick to scrutinise and attack a poorly implemented responsive design. This type of scrutiny is more often than not about the design and technical aspects, whereas users scrutinise through a different lens, one generally concerned with more important aspects: content and value.

I suggest looking at a confined layout as a series of moments. Ask difficult questions of it: “Does this moment achieve what I want it to achieve?”, “Does this particular moment offer any value?”, “Does this moment facilitate or distract from the overall message/objective/goal?”, “Does this moment make sense independently or does it require the bigger picture to give it context?”

Our design decisions are magnified in this personal space. It’s a rewarding space that can offer intimacy and undivided attention — small screens deserve good experiences, and a responsive approach[1] can allow that good experience to permeate any device or dynamic that it is faced with.


  1. Aside: responsive design doesn’t just mean making things work on small displays.