
The Future of Responsive Design



When you think of the term responsive design, what comes to mind? Is it mobile devices versus desktop devices? Designing in a way that works across screen sizes? Mobile touch versus using a mouse? Components changing shape based on where they are laid out on the page? That might be how we used to think about responsive design, but it’s definitely not all there is to it. The way we can, and should, think about responsive design now is in terms of the user’s context: how we can be the most responsive to the user’s needs and experience.

Screen size is a small part of that context, but so are these elements when the user is accessing your application:

  • The user’s location

  • The light level and noise level

  • The time (including timezone) where they currently are.

  • How they are holding the device, and how they best access their device (personal settings on that device).

The future of responsive design is about the user’s context in space and time, their device’s context in space and time, and their preferences on that device in space and time. It’s all of these things. The web browsers we access and develop for today give us the power to leverage these inputs in our designs. We can now access the location, light level or light preference, orientation, and battery level of a device, and make design choices around these elements.

Location detection

Location detection (AKA geolocation) is one of the most common browser-based detection mechanisms we see on the web today. When you need to shop for furniture and are searching for the closest West Elm store, the West Elm website wants to help you out: it can use your exact location to find the nearest store:


For the best user experience here, this is the default behavior. West Elm first tries to help you find the nearest store with the least amount of effort from you. Effort in this case means typing data into a field or searching on a map. Instead, West Elm wants the browser to do the work for you by sending your location to a server so that it can figure out the result.

Beyond the store locator

While this is the most common use case, there are still plenty of other reasons why location detection could affect user experience and the UI of a product. Think of location as proximity to anything. In the store locator, it’s proximity to the store, but it could also be used to sort out proximity to an event.

If you’re building a scheduling app with locations involved, you can tell the user how far away they are from their destination (a 5-minute walk, for example). You could change the UI to remind the user that they are almost there if they’re headed to a party. If the user is consistently moving toward a location and make a wrong turn, the app can alert them of this. Facebook uses location information to provide people who are in the vicinity of a traumatic event to be able to mark themselves as “safe.” If you’re a media company, you could use location to share relevant news to a user about what’s going on in their city. I’m sure you can come up with other uses for using location information in your mobile apps.

When developing native mobile applications, Java and Swift provide interfaces for detecting location as well. You can see this being used in a lot of different types of apps: from fitness apps that track distance over time to calculate pace (like MapMyFitness), to location-based filters in social sharing apps (like Snapchat), to maps that help you navigate to a location (like Google Maps).

There are so many reasons that location detection could be useful for designing a better user experience, in a variety of different types of products, both native and web-based. So, let’s dive into how to actually do this with an example on the web.

Using the HTML5 geolocation API

If we just want to get the user’s country, we can do an IP lookup using a service like ipinfo.io. Using the IP address, we can access the user’s city, region, country, area code, zip code, and more.
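As a rough sketch only, an IP-based lookup from the browser might look like the following. It assumes ipinfo.io’s JSON endpoint; the token value is a placeholder, and the exact fields returned depend on the service and plan you use:

// Rough country/city lookup by IP address (no permission prompt needed).
// Assumes the ipinfo.io JSON endpoint; YOUR_TOKEN is a placeholder.
fetch('https://ipinfo.io/json?token=YOUR_TOKEN')
  .then(function (response) { return response.json(); })
  .then(function (data) {
    console.log('city:', data.city);
    console.log('region:', data.region);
    console.log('country:', data.country);
  })
  .catch(function (error) {
    console.error('IP lookup failed', error);
  });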

If we want to get the user’s actual location, we need to use JavaScript, as Adeyinka Adegbenro shows in her "How to detect the location of your website’s visitor using JavaScript" Medium article:


if ("geolocation" in navigator) {
  // check if geolocation is supported/enabled on current browser
  navigator.geolocation.getCurrentPosition(
    function success(position) {
      // for when getting location is a success
      console.log('latitude', position.coords.latitude,
                  'longitude', position.coords.longitude);
    },
    function error(error_message) {
      // for when getting location results in an error
      console.error('An error has occurred while retrieving location',
                    error_message)
    }
  );
} else {
  // geolocation is not supported
  // get your location some other way
  console.log('geolocation is not enabled on this browser')
}
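getCurrentPosition gives a one-off reading. If you need to respond as the user moves (the “almost there” reminder idea from earlier), the same API also offers watchPosition, which fires a callback on every position update. A minimal sketch:

// Continuously track the user's position; watchPosition returns an id
// that can later be passed to clearWatch to stop tracking.
var watchId = navigator.geolocation.watchPosition(
  function (position) {
    console.log('updated position:',
                position.coords.latitude,
                position.coords.longitude);
  },
  function (error) {
    console.error('watchPosition error', error);
  },
  { enableHighAccuracy: true }
);

// Later, when tracking is no longer needed:
// navigator.geolocation.clearWatch(watchId);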

For security reasons, the browser or device must ask the user for permission to access their location:


If permission is granted, we can use the calculated latitude and longitude, and work with the code to create a more tangible location (like this, from the Medium article):


function success(position) {
  // for when getting location is a success
  console.log('latitude', position.coords.latitude,
              'longitude', position.coords.longitude);
  getAddress(position.coords.latitude, position.coords.longitude)
}
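The getAddress helper itself is defined in the Medium article. Purely as an illustration of what such a helper could do, the sketch below reverse-geocodes the coordinates with OpenStreetMap’s Nominatim service; the endpoint and response shape here are assumptions about that service, not the article’s own code:

// Hypothetical reverse-geocoding helper; the article uses its own service,
// this sketch assumes OpenStreetMap's Nominatim reverse endpoint instead.
function getAddress(latitude, longitude) {
  var url = 'https://nominatim.openstreetmap.org/reverse?format=json' +
            '&lat=' + latitude + '&lon=' + longitude;
  fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (data) {
      console.log('address:', data.display_name);
    })
    .catch(function (error) {
      console.error('Reverse geocoding failed', error);
    });
}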

With this data, we can implement visualizations on maps, and we can work with distance data between points. It’s really up to your creativity to see where you can use it.
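For example, the straight-line distance between two coordinate pairs (the basis of a “you’re a 5-minute walk away” message) can be estimated with the haversine formula. This is a generic sketch, not tied to any particular library:

// Great-circle distance between two points, in kilometers (haversine formula).
function distanceKm(lat1, lon1, lat2, lon2) {
  var toRad = function (deg) { return deg * Math.PI / 180; };
  var R = 6371; // mean Earth radius in km
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// e.g. distanceKm(userLat, userLon, storeLat, storeLon) → distance in km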

Time

Time can also be thought of as a “pillar of proximity.” Like location, you should consider time when it comes to your users and their experience with your application. Is it time until an event? Until a discount sale ends? Or a new product sale starts? Time is integrated into so many of our decisions, and it should be integrated to fit our users’ needs as closely as possible.
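As a small illustration of the “time until” idea (the event date below is a made-up placeholder), the remaining minutes can be derived directly from two Date objects:

// Minutes remaining until a (hypothetical) event start time.
var eventStart = new Date('2025-06-01T18:00:00Z'); // placeholder date
var minutesLeft = Math.round((eventStart - new Date()) / 60000);

if (minutesLeft > 0) {
  console.log('Starts in ' + minutesLeft + ' minutes');
} else {
  console.log('Already started');
}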

I love this watch UI from Gleb Kuznetsov, who imagines a watch that not only gives users the general time of day, but also works closely with their own schedule and gives them a clear display of time as it relates to their day. It has a “meeting in X minutes” ticker, and the interface changes from night to day accordingly:


Using locale string

Packages like Moment.js help you to parse time and relate it to a user’s timezone and needs. Web browsers also natively provide us a lot of different options for parsing date strings, one of which is toLocaleString.

The toLocaleString() method returns a string representing the object according to the user’s locale-specific conventions. As with localization of currencies and prices, toLocaleString allows a number or date to be formatted the way that user expects numbers or times to be represented. Consider this JavaScript code from the MDN web docs:


var event = new Date(Date.UTC(2012, 11, 20, 3, 0, 0));

// British English uses day-month-year order and 24-hour time without AM/PM
console.log(event.toLocaleString('en-GB', { timeZone: 'UTC' }));
// expected output: 20/12/2012, 03:00:00

// Korean uses year-month-day order and 12-hour time with AM/PM
console.log(event.toLocaleString('ko-KR', { timeZone: 'UTC' }));
// expected output: 2012. 12. 20. 오전 3:00:00
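In practice you usually don’t hard-code the locale: the browser can report the user’s own language and timezone, and toLocaleString will follow them. A minimal sketch:

// Format a timestamp for whatever locale and timezone the user's browser reports.
var userLocale = navigator.language || 'en-US';
var userTimeZone = Intl.DateTimeFormat().resolvedOptions().timeZone;

var event = new Date(Date.UTC(2012, 11, 20, 3, 0, 0));
console.log(event.toLocaleString(userLocale, { timeZone: userTimeZone }));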

Light levels

Some applications, like Flux, use time-based dimensions to influence the UI. Flux, for example, neutralizes the blue glow from your screen at night to help you go to sleep.

  Stitched screenshots of Flux from Wikipedia (https://en.wikipedia.org/wiki/F.lux#/) - Creative Commons license

Apple ships a similar feature on iOS called Night Shift, which essentially does the same thing: it strips out blue light to help prevent insomnia in users who use their device late at night.

It’s true that in different environments, readability is affected by light levels. Like Flux, we can alter the UI of an application or website and tailor it to the light level surrounding our user, or to preferences set by our user.

So where does this come in handy? Mostly in legibility.

The idea here is to match the light to the user’s environment. In high-light scenarios, screens are hard to read due to the decreased contrast and vibrancy of the screen compared to the brightness surrounding it. In low-light scenarios, a too-bright or too-high-contrast view is difficult to read because of how strong the glare is compared to the environment. A darker UI is easier to read in that case, whereas in high-light scenarios we would likely want our UI to have stronger contrast and larger text. It could look like this:


Mobile devices do this automatically. When a phone detects a high-light scenario, it will adjust the brightness (make the screen brighter), and vice versa in a low-light scenario. This is a default setting on most mobile devices. When trying to do this on the web, however, there is a little more work to be done.

Ambient light queries

In the past, one way developers were able to detect light with JavaScript was through ambient light queries. This was possible for a time in certain browsers, but security concerns arose, and the capability was deprecated in Firefox.

You can check the capabilities of the various browsers on caniuse.com, like this check of which versions of which browsers support ambient light sensor:


If you are using Edge, or a previous version of Firefox, you can see a working demo of this without a flag; otherwise, you can enable this feature behind a flag in Chrome. See the Ambient Light Events Demo by Tomomi Imura on codepen.io.
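For reference, where it is supported (typically behind a flag, as the caniuse table shows), the Generic Sensor API exposes an AmbientLightSensor. The sketch below assumes that API is available and that the page is allowed to use it:

// Read ambient light in lux where the AmbientLightSensor API is available
// (often behind a browser flag); feature-check before using it.
if ('AmbientLightSensor' in window) {
  var sensor = new AmbientLightSensor();
  sensor.addEventListener('reading', function () {
    console.log('illuminance (lux):', sensor.illuminance);
  });
  sensor.addEventListener('error', function (event) {
    console.error('Sensor error:', event.error.name);
  });
  sensor.start();
} else {
  console.log('AmbientLightSensor is not supported in this browser');
}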

Light and dark appearance settings

There is a new method, however, released in the latest macOS, which gives control of light levels to the user while still allowing developers to customize styles based on that choice. The Safari browser pairs this with CSS media queries to give developers more control over how they style their web applications when dark mode is enabled. This new standard is specified in the W3C Media Queries Level 5 draft.


Now if we wanted to style light and dark modes, we could use the new media queries:


/* Light mode */
@media (prefers-color-scheme: light) {
 body {
   background: white;
   color: black;
 }
}

/* Dark mode */
@media (prefers-color-scheme: dark) {
 body {
   background: black;
   color: white;
 }
}

This would result in a white background body with black text in light mode, and a black background body with white text in dark mode. It lets us be just that much more responsive to our user’s preferences when we’re building UIs for them.
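The same preference is also readable from JavaScript, which helps when parts of the UI (a map theme, a chart palette) aren’t driven by CSS alone. A minimal sketch; the 'dark-theme' class is a hypothetical hook for your own styles:

// Check the user's color-scheme preference from JavaScript and react to changes.
var darkQuery = window.matchMedia('(prefers-color-scheme: dark)');

function applyScheme(isDark) {
  document.body.classList.toggle('dark-theme', isDark); // hypothetical class
}

applyScheme(darkQuery.matches);
darkQuery.addEventListener('change', function (event) {
  applyScheme(event.matches);
});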

Device orientation

Device orientation is another device (pun intended!) that we can use to augment and improve user experience. When designing UI, we want to make sure that our application is responsive to all screen sizes, which includes various orientations. With the ability to style things based on viewport height (vh) and viewport width (vw) on the web, we can account for the wider and shorter UI of rotated devices. We can also use CSS to detect orientation:


@media (orientation: portrait) {
  /*...*/
}

@media (orientation: landscape) {
  /*...*/
}

We can use these CSS properties to style things differently based on orientation. An example comes from a sports app, which might show an overview of results in vertical mode but more detailed information in a horizontal format.

Not only can we create UIs tailored to orientation, but we can also use device orientation to get creative with gestures and small touches. I once built a cocktail recipe app in which you shook your device (preferably a phone, though it also worked on tablets) to get a random recipe.
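One way a shake gesture like that can be detected is with the related devicemotion event. This is a rough sketch only: the threshold is an arbitrary guess, and showRandomRecipe is a hypothetical function standing in for the cocktail app’s own logic:

// Naive shake detection using the devicemotion event: fire when the
// acceleration magnitude exceeds an arbitrary threshold.
var SHAKE_THRESHOLD = 20; // m/s^2, tuned by experimentation
var lastShake = 0;

window.addEventListener('devicemotion', function (event) {
  var acc = event.acceleration;
  if (!acc) { return; }
  var magnitude = Math.sqrt(acc.x * acc.x + acc.y * acc.y + acc.z * acc.z);
  var now = Date.now();
  if (magnitude > SHAKE_THRESHOLD && now - lastShake > 1000) {
    lastShake = now;
    showRandomRecipe(); // hypothetical function from the cocktail app example
  }
});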


I’ve also seen creative uses of this for 404 pages and loading screens.

Accessing device orientation

You can access device orientation to a pretty accurate degree by using JavaScript. A simple way to determine orientation, however, is to check the viewport width and height, as David Walsh shows in his "Detect Orientation Change on Mobile Devices" blog post:


if (window.innerHeight > window.innerWidth) {
  console.log('vertical')
}

You can also add events to watch for changes in screen orientation:


// Listen for orientation changes
window.addEventListener("orientationchange", function() {
  // Announce the new orientation type (for example, "portrait-primary")
  alert(screen.orientation.type);
}, false);

When working with orientation in a more fine-tuned way, we use a device’s alpha, beta, and gamma rotation angles. The alpha angle represents rotation around the z-axis, beta around the x-axis, and gamma around the y-axis. It looks like this:

We can access these like so, as shown in "Detecting device orientation" in the MDN web docs:


function handleOrientation(event) {
  var x = event.beta;  // in degrees, in the range [-180, 180]
  var y = event.gamma; // in degrees, in the range [-90, 90]
  console.log('beta:', x, 'gamma:', y);
}

// Register the handler so it runs whenever the device's orientation changes
window.addEventListener('deviceorientation', handleOrientation, true);

Using device orientation is definitely an interesting idea for future UIs, and while we most often see it in games now, the creative uses in everyday applications are growing.

Battery detection

Finally, we now have the ability to access the battery level within a browser and have an opportunity to influence the user experience here too. I’ve heard rumors of this being used for evil (ride-share companies increasing prices for customers with low battery life), but you can (and should) use it for good. For example, you can turn off animations or other power-intensive operations when you detect low battery levels on your user’s device. Maybe you can even provide a simplified version of your UI, without the high-resolution images and unnecessary data that must be downloaded.

Battery status API

Battery status is currently supported in Chrome 38+, Chrome for Android, and Firefox 31+. It provides an event listener for updating battery status based on the host device.

The example from the W3C is as follows:


window.onload = function () {
 function updateBatteryStatus(battery) {
   document.querySelector('#charging').textContent = battery.charging ? 'charging' : 'not charging';
   document.querySelector('#level').textContent = battery.level;
   document.querySelector('#dischargingTime').textContent = battery.dischargingTime / 60;
 }

 navigator.getBattery().then(function(battery) {
   // Update the battery status initially when the promise resolves ...
   updateBatteryStatus(battery);

   // .. and for any subsequent updates.
   battery.onchargingchange = function () {
     updateBatteryStatus(battery);
   };

   battery.onlevelchange = function () {
     updateBatteryStatus(battery);
   };

   battery.ondischargingtimechange = function () {
     updateBatteryStatus(battery);
   };
 });
};

Here, we can see not only the battery level, but also whether the battery is charging or not, and update the battery status accordingly. Guille Paz created a great basic battery level demo and made it available on GitHub.
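Building on that, a minimal sketch of the “turn off the expensive stuff on low battery” idea mentioned earlier might look like this; the 'low-power' class is a hypothetical hook for your own CSS:

// Degrade gracefully when the battery is low and not charging:
// add a class that your CSS can use to disable animations, heavy images, etc.
navigator.getBattery().then(function (battery) {
  function applyPowerSavings() {
    var lowPower = battery.level < 0.2 && !battery.charging;
    document.body.classList.toggle('low-power', lowPower); // hypothetical class
  }

  applyPowerSavings();
  battery.onlevelchange = applyPowerSavings;
  battery.onchargingchange = applyPowerSavings;
});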

Conclusion

With all of these new browser APIs and native application capabilities, it’s an exciting time to be an application developer and to think about our users’ experience in a more holistic way. Responsive design no longer needs to be limited to just the size of the screen; it can be expanded to a whole bevy of elements surrounding our users to help provide the best applications possible for their needs and situations.


Source: IBM Developer

https://developer.ibm.com/articles/responsive-design-future/
