When I was eight years old, my father gave me a brand-new computer: a Macintosh 128K, which at that time was the envy of most of my friends and relatives. Its screen wasn’t a wall of cryptic characters waiting for keyboard commands; instead, it featured a beautiful interface that simulated a physical workspace: a desktop, a trash can, a clipboard, and folders that could be created and opened to store and organize documents. Everything in that computer was beautiful and, at the same time, functional.
Without even knowing it, I was smack in the middle of one of the most significant milestones in computer interfaces. The once complex, difficult-to-use interface – which to some extent had been built exclusively for scientists – evolved into one with intuitive, interactive graphics. And in addition to the familiar keyboard, a ground-breaking device called a ‘mouse’ introduced an exciting new way to interact with the machine. Did I mention that up until this time in my world of the 1980s, computers were only interacted with through keystrokes on a keyboard? (Punch cards were no longer in use.) So now, with a graphical user interface that resembled a shared workspace, machines could still be used by scientists and technical people, but also by writers, students, architects, accountants, designers, artists, moms, dads… and kids like me.
Macintosh 128K, one of the first computers to feature a GUI – © Apple Computer Inc.
As computers expanded their domain to other devices – watches, smart TVs, streaming devices, game consoles, mobile phones, car screens, washing machines, and IoT in general – the computer interface has evolved to support the many ways humans interact with different form factors: styluses, trackpads, touch gestures, accessible screens for persons with disabilities, voice controls, TV remote controls, and so on.
The revolution expanded at lightning speed, which is why I’ll call this the second UI milestone (sure, we could argue that voice controls, OTT apps or VR devices could be separate milestones, but for this piece, I’ll put all of them in the same bag: Multi-device UI).
What do those UIs have in common? They all run on different hardware specs, and depending on the device’s capabilities, your experience could either be fluid and fast, or like watching paint dry…
Let’s clarify the above with two basic examples. You can’t wait to watch the final season of Game of Thrones, but it’s possible you won’t be on time and in front of your TV for every single episode. You decide to cut the cord and purchase a streaming device to enhance your TV experience, then install the HBO app on that device so that you have the choice of watching episodes live or on demand. The market offers you different options, each with its own set of features to help you achieve your mission: Roku (multiple options), Apple TV (two options), Amazon Fire (three options), and even Android TV-powered TV sets in different models. You decide to purchase an Android TV because you once interacted with your neighbor’s. As soon as you turn it on, you discover that the UI isn’t as fast and fancy, and it’s slightly different from your neighbor’s. Well, I’m not saying that the Android TV UI itself offers a bad experience, but it turns out that you invested in a first-generation Android TV and only later find out that the manufacturer of your neighbor’s Android TV offers a better, more advanced set of hardware and features… ugh.
Welcome to something called “device fragmentation.”
With all that, the developers and designers who built the applications for your device spent a good deal of time mitigating that device fragmentation, putting in extra effort to ensure your HBO app would work well across the entire spectrum of hardware – from low-end, entry-level devices to the most advanced, feature-rich ones. In practice, this means delivering the expected user experience everywhere, but removing or disabling fancy animations and sophisticated UI elements when the streaming service detects a lower-end device. We as developers need to put in extra work, time, resources and effort to make these interfaces behave in some related and recognizable manner across all the devices. It’s frankly a pain in the ass.
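For the curious, that kind of mitigation often boils down to picking a UI profile based on device capabilities. Here’s a minimal TypeScript sketch; the type names, fields, and memory threshold are all illustrative, not taken from any real streaming SDK:

```typescript
// Hypothetical device capability report -- fields are illustrative.
interface DeviceSpecs {
  memoryMB: number;
  supportsHardwareDecode: boolean;
}

// The UI features we might enable or strip on a given device.
interface UiProfile {
  animations: boolean;
  imageQuality: "low" | "high";
}

// Pick a profile: low-end devices lose the fancy animations and get
// lighter artwork, so the core experience stays recognizable everywhere.
function pickUiProfile(specs: DeviceSpecs): UiProfile {
  if (specs.memoryMB < 1024 || !specs.supportsHardwareDecode) {
    return { animations: false, imageQuality: "low" };
  }
  return { animations: true, imageQuality: "high" };
}
```

Under this sketch, a first-generation set-top box with 512 MB of RAM would get the stripped-down profile, while a newer model keeps the full experience.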
Another clear example is when we developers create web experiences. Depending on browser capabilities (e.g., Internet Explorer, Edge, Chrome, Firefox), we need to develop strategies to ensure the experience isn’t degraded by the type of web browser you use. Again, it requires additional development effort, and if you’re using an old browser or, worse, have a slow machine, you may still be dissatisfied. So even though our efforts are focused on making the UI work on all devices, not every user gets the experience they expected from what they’ve seen before.
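One common strategy here is feature detection: instead of guessing from the browser’s name, check whether the API you need actually exists before relying on it. A tiny, generic sketch (the helper name is mine, not a standard library function):

```typescript
// Return true if `host` exposes a callable member named `name`.
// In a browser you'd pass `window` as the host, e.g.
//   hasFeature(window, "IntersectionObserver")
// and fall back to a simpler behavior when it returns false.
function hasFeature(host: object, name: string): boolean {
  return typeof (host as Record<string, unknown>)[name] === "function";
}
```

The design point: detecting the feature itself, rather than sniffing the user-agent string, keeps the code working as browsers add or drop capabilities.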
The next UI Milestone
So, what’s this all about? Why should you care? Well, I’d like to introduce you to what we call Virtual UI.
This next UI milestone will put an end to device fragmentation: every single user will have the same experience no matter what hardware they own. Developers, UX designers and, most importantly, consumers will sleep well at night as they turn off their TVs and tuck themselves in after an evening of joyous content discovery and consumption.
As the average user’s internet speed continues to increase, faster services are appearing that allow real-time round trips between local devices (your Roku or laptop) and remote servers (the machines providing you with the latest episode of Game of Thrones). The difference this time is that the entire UI software won’t have to be installed on the device, meaning it won’t be reliant on the hardware specs of that not-so-new model you still own. The whole interface will be served as streaming data: you interact with what you see on the screen, that input travels to the server, and a refreshed UI comes back. All in less than the blink of a three-eyed crow.
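To make that round trip concrete, here’s a deliberately tiny sketch. Everything in it is illustrative: a real service would stream rendered video frames, and these type and function names are mine. The division of labor is the point, though, because all UI state and logic live on the server while the client only forwards input and displays whatever comes back:

```typescript
// What the client sends up: raw input, nothing more.
type RemoteKeyEvent = { key: "up" | "down" };

// What the server streams back: a description of the refreshed UI.
// (A real service would send rendered frames; a number stands in here.)
type UiFrame = { focusedItem: number };

// Server side: owns the UI logic, so it behaves identically no matter
// which client device the input came from.
function serverHandleInput(state: UiFrame, event: RemoteKeyEvent): UiFrame {
  if (event.key === "down") return { focusedItem: state.focusedItem + 1 };
  return { focusedItem: Math.max(0, state.focusedItem - 1) };
}
```

Because the client never runs this logic, a first-generation box and a brand-new one show the exact same interface; the only local requirements are decoding the stream and sending input.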
So you might be thinking, “Beautiful, that sounds great! When will it happen?” The exciting news (well, at least for us designers and developers) is that it already is happening. Take a look at Google’s recent announcement of its cloud gaming service, Stadia, or check out the Nvidia Shield game-streaming service.
As a super-geek bonus, developers like me won’t need to worry about development-language fragmentation. (Yes, that’s a thing too.) We won’t need to learn the different programming languages that currently depend on the device or platform running your app, e.g. a PC (C#), a Mac (Swift) or a Roku (BrightScript). You just build the software with a single code base, and it gets streamed to your users. It’s that simple, and it’s coming soon to a streaming device near you. Kind of like winter.
Google Stadia, a cloud gaming service capable of streaming video games in 4K resolution at 60 frames per second.
At Zemoga, we are working closely with our engineering and UX teams, as well as with our platform partners, to bring the latest and most advanced OTT and mobile video solutions to multiple devices. We continue to lead the charge in providing the most engaging and functional interfaces for consumers. It’s what we do. What do you think the next UX/technology milestone will be?
Get in touch today to talk about it.