In the past five years or so, there hasn’t been much evolution when it comes to mobile technology. In 2007, Apple introduced the iPhone and we were all mesmerized by the new touchscreen technology. The idea of swiping your finger across the screen of a mobile device to scroll through a webpage was a great one. What we liked even more was the ability to use two fingers in a pinching motion to zoom in and out of a particular area of a webpage. In the years that followed, there were a few minor hardware improvements and a multitude of software tweaks.
When Apple officially introduced Siri, we all thought, “This is great! What will Apple think of next?” Of course, nobody really used Siri for what it was originally intended for. It was treated more as a novelty item and less as the next-generation interface for your mobile device, as Apple had originally envisioned it. Furthermore, Siri was actually intended to learn your habits and make suggestions that would complement your day. A true assistant, if you will.
Apple was really on to something with this integration of voice recognition and mobile computing power. Unfortunately, it seems the public didn’t see it that way, or rather, they didn’t quite make the connection. Even though it was a great new way to interact with and control your device, there was still a learning curve attached to it.
Apple wasn’t the only company working on a digital personal assistant. Google, too, attempted to integrate voice recognition into its mobile Android OS. Much like Apple’s Siri, Google’s voice-recognition product, Google Now, is a great interface that almost blurs the line between human and device in how we interact. Unfortunately, as great as Google Now may be, it also takes some getting used to.
In my opinion, both Siri and Google Now are exciting new technologies. (I use the term “new” in a relative manner; obviously, they’ve been around for some time now.) As great as both products are, they still seem to be missing something. To me, it just doesn’t feel one hundred percent natural yet. I have to press a certain button or navigate to where the voice app is located in order to fire it up and have it interpret what I’m dictating.
When we speak with family or a best friend, we simply acknowledge them with eye contact and begin speaking. It doesn’t get much more natural than that, right? So how can these reputable companies develop a digital assistant (the keyword here is “assistant”) that takes so much effort to work with? Imagine how wonderful it would be to work with your digital assistant as you would with a live person. Let’s take it a step further and imagine interacting in this manner not only with the assistant on a mobile device but with all the electronic devices around you. Imagine that!
We’re on the right track here. Soon we’ll have truly autonomous assistants that serve us in exactly the way we need them to. I envision a world where we won’t need a keyboard, mouse, or even a touchscreen to work with technology. All we’ll need is what God already engineered and developed: our senses!
Happy reading my friends!