Friday, August 1, 2014

The Evolution of Touchscreen Technology

By Andre Infante
In the tech space, time moves quickly; a little more than seven years ago, smartphones as we know them did not exist — now, they’re the most profitable tech industry on Earth (and so prevalent that it’s actually a problem). A consequence of this is that it’s easy to lose sight of just how revolutionary and important the technologies we use really are.
Touchscreens and multitouch interfaces are now a permanent part of the fundamental language of human-computer interaction. All future UIs will carry echoes of touch interfaces with them, in the same way that the keyboard and the mouse permanently altered the language of the interfaces that came after them. To that end, today we’ll be taking a moment to talk about how touchscreens and the interfaces they enable came to exist, and where they’re going from here.
First, though, take a moment and watch this video of the original 2007 iPhone keynote:

Listen to the sound the audience makes when they witness slide-to-unlock and swipe-to-scroll for the first time. Those people were completely blown away. They had never seen anything like it before. Steve Jobs might as well have reached through the screen and pulled a BLT out of the ether, as far as they were concerned. These basic touch interactions that we take for granted were totally new to them, and had obvious value. So how did we get there? What had to happen to get to that particular day in 2007?

History

Surprisingly enough, the first touchscreen device was capacitive (like modern phones, rather than the resistive technology of the 1980s and 1990s) and dates back to around 1966. The device was a radar screen used by the Royal Radar Establishment for air traffic control, and was invented by E. A. Johnson for that purpose. The touchscreen was bulky, slow, imprecise, and very expensive, but (to its credit) remained in use until the 1990s. The technology proved to be largely impractical, however, and not much progress was made for almost a decade.
The technology used in this kind of monotouch capacitive screen is actually pretty simple. You take a sheet of conductive, transparent material, run a small current through it (creating a static field), and measure the current at each of the four corners. When an object like a finger touches the screen, the gap between it and the charged plate forms a capacitor. By measuring the change in capacitance at each corner of the plate, you can figure out where the touch event is occurring and report it back to the central computer. This kind of capacitive touchscreen works, but it isn't very accurate and can't log more than one touch event at a time.
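To make that concrete, here's a minimal Python sketch of the position calculation, assuming a simple linear relationship between the corner currents and the touch location; the function and variable names are illustrative, not taken from any real controller.

# A minimal sketch of how a single-touch (surface) capacitive controller
# might estimate a touch position from the current measured at the four
# corners of the conductive sheet. The linear model is an assumption
# made for illustration, not a real controller's firmware.

def estimate_touch(i_ul, i_ur, i_ll, i_lr, width, height):
    """Estimate (x, y) of a touch from the four corner currents.

    A finger near a corner draws more current through that corner, so
    the ratios of the corner currents give the position along each axis.
    """
    total = i_ul + i_ur + i_ll + i_lr
    if total == 0:
        return None  # no touch detected

    # Fraction of current flowing through the right-hand corners -> x,
    # fraction through the top corners -> y (origin at bottom-left).
    x = (i_ur + i_lr) / total * width
    y = (i_ul + i_ur) / total * height
    return (x, y)

# Example: currents skewed toward the upper-right corner.
print(estimate_touch(0.2, 0.4, 0.1, 0.3, 800, 600))  # lands right of centre, upper half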
[Image: radar display]
The next major event in touchscreen technology was the invention of the resistive touchscreen in 1977, an innovation made by a company called Elographics. Resistive touchscreens work by using two sheets of flexible, transparent material with conductive lines etched onto each, running in opposing directions. The computer rapidly alternates between feeding current to the horizontal lines and testing for current on the vertical lines, and vice-versa. When an object is pressed against the screen, the lines on the two sheets make contact, and the resulting voltages tell you which vertical and horizontal lines have been activated. The intersection of those lines gives you the precise location of the touch event. Resistive screens have very high accuracy and aren't impacted by dust or water, but pay for those advantages with more cumbersome operation: they need significantly more pressure than capacitive screens (making swipe interactions with fingers impractical) and can't register multiple touch events.
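Here's a rough Python sketch of the scanning loop described above; the drive_row and read_column helpers are hypothetical stand-ins for the hardware access a real controller would perform.

# A sketch of the matrix-scanning approach the paragraph describes:
# drive the horizontal lines one at a time and check which vertical
# lines pick up voltage. The hardware helpers below are stubs assumed
# purely for illustration.

NUM_ROWS = 32
NUM_COLS = 24

def drive_row(row):
    """Apply a voltage to one horizontal line (stubbed for illustration)."""
    pass

def read_column(col):
    """Return True if vertical line 'col' is carrying voltage (stubbed)."""
    return False

def scan_for_touch():
    """Return the (row, col) of the first contact point found, or None."""
    for row in range(NUM_ROWS):
        drive_row(row)
        for col in range(NUM_COLS):
            if read_column(col):
                # The sheets only touch where the screen is pressed, so a
                # live column while this row is driven locates the touch.
                return (row, col)
    return None

print(scan_for_touch())  # None with the stubbed hardware helpers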
These touchscreens did, however, prove to be both good and cheap enough to be useful, and were used for various fixed-terminal applications, including industrial machine controllers, ATMs, and checkout devices. Touchscreens didn't really hit their stride until the 1990s, though, when mobile devices first began to hit the market. The Newton, the first PDA, released in 1993 by Apple, was a then-revolutionary device that combined a calculator, a calendar, an address book, and a note-taking app. It used a resistive touchscreen to make selections and input text (via early handwriting recognition), and did not support wireless communication.
[Image: Apple Newton PDA]
The PDA market continued to evolve through the early 2000s, eventually merging with cell phones to become the first smartphones. Examples included the early Treos and BlackBerry devices. However, these devices were stylus-dependent, and usually attempted to imitate the structure of desktop software, which became cumbersome on a tiny, stylus-operated touchscreen. These devices (a bit like Google Glass today) were exclusively the domain of power-nerds and businesspeople who actually needed the ability to read their email on the go.
That changed in 2007 with the introduction of the iPhone, whose unveiling you just watched. The iPhone introduced an accurate, inexpensive, multi-touch screen. The multi-touch screens used by the iPhone rely on a carefully etched matrix of capacitance-sensing wires: rather than relying on changes to the capacitance of the screen as a whole, this scheme can detect which individual intersections in the matrix are registering a change in capacitance. This allows for dramatically greater precision, and for registering multiple touch events that are sufficiently far apart (permitting gestures like 'pinch to zoom' and better virtual keyboards). To learn more about how different kinds of touchscreens operate, check out our article on the subject.
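As a rough illustration of the idea, the following Python sketch picks touch points out of a grid of capacitance readings by looking for local peaks above a threshold; the threshold value and the example readings are made-up assumptions, not a description of the iPhone's actual controller.

# Pull multiple touch points out of the capacitance readings at each
# wire crossing: a touch shows up as a local peak above a threshold.

THRESHOLD = 0.5  # minimum capacitance change to count as a touch (assumed)

def find_touches(grid):
    """Return (row, col) cells that are above THRESHOLD and larger than
    all of their neighbours, i.e. local peaks."""
    touches = []
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            value = grid[r][c]
            if value < THRESHOLD:
                continue
            neighbours = [
                grid[nr][nc]
                for nr in range(max(0, r - 1), min(rows, r + 2))
                for nc in range(max(0, c - 1), min(cols, c + 2))
                if (nr, nc) != (r, c)
            ]
            if all(value >= n for n in neighbours):
                touches.append((r, c))
    return touches

# Two well-separated fingers show up as two distinct peaks.
readings = [
    [0.1, 0.2, 0.1, 0.0, 0.0],
    [0.2, 0.9, 0.2, 0.0, 0.1],
    [0.1, 0.2, 0.1, 0.2, 0.8],
    [0.0, 0.0, 0.1, 0.1, 0.2],
]
print(find_touches(readings))  # [(1, 1), (2, 4)]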
The big innovation that the iPhone brought with it, though, was the idea of physicalist software. Virtual objects in iOS obey physical intuitions: you can slide and flick them around, and they have mass and friction. It's as though you're dealing with a universe of two-dimensional objects that you can manipulate simply by touching them. This allows for dramatically more intuitive user interfaces, because everyone arrives with a pre-learned intuition for how to interact with physical things. This is probably the most important idea in human-computer interaction since the idea of windows, and it's been spreading: virtually all modern laptops support multi-touch gestures, and many of them have touchscreens.
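As a toy illustration of that physical feel, here's a short Python sketch of momentum scrolling with friction after a flick; the constants are arbitrary assumptions, since real toolkits tune these curves carefully.

# After a flick, the content keeps scrolling with the finger's last
# velocity, and a friction term gradually brings it to rest.

FRICTION = 0.95    # fraction of velocity kept each frame (assumed)
STOP_SPEED = 0.5   # below this speed (px/frame) the content stops (assumed)

def flick_scroll(position, velocity):
    """Simulate a flick: return the content position frame by frame."""
    frames = [position]
    while abs(velocity) > STOP_SPEED:
        position += velocity
        velocity *= FRICTION   # friction slows the content down
        frames.append(position)
    return frames

# A hard flick coasts much further than a gentle one.
print(flick_scroll(0, 40)[-1])  # long coast
print(flick_scroll(0, 5)[-1])   # short coast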
Since the launch of the iPhone, a number of other mobile operating systems (notably Android and Windows Phone) have successfully reproduced the core good ideas of iOS, and, in many respects, exceeded them. However, the iPhone does get credit for defining the form factor and the design language that all future devices would work within.
[Image: Android mascot eating an apple]

What’s Next

Multi-touch screens will probably continue to get better in terms of resolution and the number of simultaneous touch events that can be registered, but the real future is in software, at least for now. Google's new Material Design initiative is an effort to drastically restrict the kinds of UI interactions that are allowed on its various platforms, creating a standardized, intuitive language for interacting with software. The idea is to pretend that all user interfaces are made of sheets of magic paper, which can shrink or grow and be moved around, but can't flip or perform other actions that wouldn't be possible within the form factor of the device. Objects that the user is trying to remove must be dragged offscreen. When an element is moved, there is always something underneath it. All objects have mass and friction and move in a predictable fashion.

In a lot of ways, material design is a further refinement of the ideas introduced in iOS, ensuring that all interactions with the software take place using the same language and styles; that users never have to deal with contradictory or unintuitive interaction paradigms. The idea is to enable users to very easily learn the rules for interacting with software, and be able to trust that new software will work in the ways that they expect it to.
On a larger note, human-computer interfaces are approaching the next big challenge, which amounts to taking the 'screen' out of touchscreen: the development of immersive interfaces designed to work with VR and AR platforms like the Oculus Rift (read our review) and future versions of Google Glass. Making touch interactions spatial, without the required gestures becoming tiring ("gorilla arm"), is a genuinely hard problem, and one that we haven't solved yet. We're seeing the first hints of what those interfaces might look like in devices like the Kinect and the Leap Motion (read our review), but those devices are limited because the content they're displaying is still stuck to a screen. Making three-dimensional gestures to interact with two-dimensional content is useful, but it doesn't have the same kind of intuitive ease that it will when our 3D gestures are interacting with 3D objects that seem to physically share space with us. When our interfaces can do that, that's when we'll have the iPhone moment for AR and VR, and that's when we can really start to work out the design paradigms of the future in earnest.

The design of these future user interfaces will benefit from the work done on touch: virtual objects will probably have mass and friction, and enforce rigid hierarchies of depth. However, these sorts of interfaces have their own unique challenges: how do you input text? How do you prevent arm fatigue? How do you avoid blocking the user’s view with extraneous information? How do you grab an object you can’t feel?
These issues are still being figured out, and the hardware needed to facilitate these kinds of interfaces is still under development. Still, it'll be here soon: certainly within ten years, and probably within five. Seven years from now, we may look back on this article the same way we look back on the iPhone keynote today, and wonder how we could have been so amazed by such obvious ideas.
Image Credits: "SterretjiRadar" by Ruper Ganzer; "sin-gular" by Windell Oskay; "Android eating Apple" by Aidan. Source: www.makeuseof.com

