Showing posts with label Awesome. Show all posts

Friday, 6 March 2015

The Future Of Voice-Activated AI Sounds Awesome

Editor’s Note: Tim Tuttle is the chief executive officer of Expect Labs.
For decades, science fiction movies have imagined a future in which humans talk to machines as naturally as they speak to family and friends. In reality, however, using voice to interact with machines has been maddeningly frustrating, with Siri often mistaking “open up my email” for “look up some kale,” for example. This is changing. Increasingly, the experience of speaking to your mobile device elicits genuine surprise when Siri or Google Now understands your request and seamlessly executes it. Put simply, voice recognition in machines is getting very good, and it is going to get so good that it will completely change the way humans interact with their computing devices. The next few years in voice and speech recognition are going to be exciting. Here are some things to look forward to.
Voice recognition gets freakishly good. It used to be that voice recognition always fell short of our expectations, but there have been some recent major technology breakthroughs that have cracked the code on speech recognition. In the past 18 months, commercial speech recognition technologies have seen a dramatic 30 percent improvement. To put that into perspective, that’s a bigger gain in performance than we’ve seen in the past 15 years combined. These improvements are in part being driven by deep learning approaches combined with massive data sets.
Deep learning is a tool used to create systems with very good accuracy on tasks such as image analysis, speech recognition and language analysis, among other things. Most of the companies viewed as leaders in this space do not yet have their platforms available for use by customers; DeepMind and Vicarious fall into this category. There are a few companies that offer APIs that rely on deep learning. AlchemyAPI is one example of a company that uses deep learning for image and language analysis.
As more voice usage data becomes available, speech recognition accuracy will keep improving. This is what is known as the “virtuous cycle of AI”: the more people use voice interfaces, the more data is gathered, and the more data is gathered, the better the algorithms work, delivering dramatic improvements in accuracy.
Siri, Cortana and Google Now won’t be the only intelligent voice assistants. As computing devices of all shapes and sizes increasingly surround us, we’ll come to rely more on natural interfaces such as voice, touch and gesture. In the past, developing an intelligent voice interface was a complex undertaking, feasible only if you had the development team of a major corporation like Apple, Google or Microsoft. Today, however, thanks to the emergence of a small but growing number of cloud-based APIs like MindMeld, it is possible for developers to build an intelligent voice interface for any app or website without an advanced degree in natural language processing.
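To make the idea concrete, here is a purely illustrative sketch, not any real vendor’s API, of the kind of intent-matching layer a developer might build on top of a cloud speech-to-text service once it returns a transcript. The intent names and keywords are hypothetical.

```python
# Toy intent matcher: maps a speech-to-text transcript to an action.
# Everything here (intents, keywords) is a hypothetical example, not
# MindMeld's, Wit.ai's, or api.ai's actual interface.

INTENTS = {
    "check_email": ["email", "inbox", "mail"],
    "play_music":  ["play", "music", "song"],
    "get_weather": ["weather", "forecast", "temperature"],
}

def match_intent(transcript: str) -> str:
    """Return the intent whose keywords overlap the transcript most."""
    words = set(transcript.lower().split())
    scores = {
        intent: len(words & set(keywords))
        for intent, keywords in INTENTS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(match_intent("open up my email"))  # -> check_email
print(match_intent("look up some kale"))  # -> unknown
```

Real cloud services handle the hard parts (acoustic modeling, language understanding) server-side; the point of this sketch is only that the application-side glue can be this simple.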
There aren’t many companies doing this, since it is one of the most complex areas of artificial intelligence research. On the consumer side, Google, Apple, Microsoft, Baidu and Amazon are investing heavily to make web-wide voice search better. For other companies that do not have millions to invest in voice search technology, it’s possible to leverage a cloud-based service to create intelligent voice functionality. Companies that offer a cloud-based API to voice-enable applications include my company, Expect Labs, as well as Wit.ai and api.ai. The Siri founders are also working on Viv, but they have not yet launched a product, so it is unclear how relevant it will be to the emerging generation of voice applications.
Computers will start listening to us non-stop…like the Star Trek computer. Machines already see better than humans, recognize objects better, and can listen and hear better. Eventually they will also understand meaning better. What does a world where computers listen constantly look like? It will certainly change the way we interact with our devices. A conference room, automobile or wearable device that can listen to our conversations and understand what we need will eventually become the norm. This new world will emerge because we will all expect to have information at our fingertips at any time, no matter where we are. It may seem odd now, but it won’t be long before intelligent voice interfaces are built into all kinds of apps. Right now, companies invested in the connected home (e.g., Samsung and Comcast) are leading the way, but we are also seeing other technology companies testing the waters with devices like Amazon’s Echo and Jibo.
Researchers will get closer to developing generalized intelligence. As AI systems get closer to understanding the full breadth of human knowledge, they will become much better at answering all kinds of questions. Eventually, machine learning techniques will be used to help computer scientists develop a universal intelligent assistant that understands a large fraction of all of human knowledge. Human knowledge, while vast, is not infinite. In fact, researchers estimate that a corpus of 100 to 500 billion concepts or “entities” would likely begin to approach the full extent of all useful human knowledge. With deep learning techniques getting better and better at extracting patterns from massive, internet-scale data sets, many AI researchers see the steps toward a form of generalized intelligence coming into focus.
Beyond 2015? Artificial intelligence gets smarter but it won’t destroy human civilization…yet. There’s been a lot of fear-mongering of late about artificial intelligence. While any sci-fi moviegoer can envision numerous dangerous AI outcomes – automatically setting off nuclear warheads, stopping to reboot while in auto-driving mode, or destroying us all based on an ill-fated conclusion that humanity is the root of all problems – we are far from this dystopian reality. Today’s AI systems are so far from becoming self-aware that it is not even a useful exercise to speculate about when we might have to pull the plug. We will likely benefit from decades of incremental AI advances before any of us need to seriously confront the existential threat foretold by Hollywood movies.
How can we prevent our domination by robot overlords? Assimilation is inevitable. Resistance is futile. Seriously, we are a long way from even being able to constructively speculate about this. Over the next 15 years, computing systems are going to get very good at many specific human-like tasks, such as understanding images, video and language, and answering questions. There has not yet been any evidence that this will lead to higher-level intelligence that could rival the human brain. Some theorists speculate it might be possible, but at this point it is merely speculation. If higher-level intelligence does emerge from machine systems over the coming decades, we will certainly need a serious debate over the best way to prevent any chance of a robot apocalypse.

Monday, 2 March 2015

Physical Keyboards Are Awesome

Ron Miller wrote an article this morning concerning a new Microsoft keyboard announced recently at Mobile World Congress. Miller is not impressed:
Microsoft has a little problem and it’s time we all admitted it. We have to gather the family in the living room, sit down Microsoft in the comfy chair and have a little heart to heart. Everybody can see it, except Microsoft — and it could be time for an intervention.
It’s the keyboard thing, Microsoft. Enough already. Design your software to take advantage of a touch screen. Let the keyboard go, dude.
The article continues in a similar vein for some time. I must protest, because physical keyboards still matter for anyone who types for a living.
If you visit the main TechCrunch office, you’ll note an open office filled with nerds, about half wearing headphones. Everyone is typing on Apple laptops: on their own, plugged into one monitor or several, or essentially ‘docked,’ with the user employing a standalone keyboard and touchpad to interact with the machine.
Missing from that mix? Anyone using a fucking iPad for work. Yes, there are some professions where an iPad or other tablet — this is where I cry — can be a great mixed-use device; the examples of doctors and salespeople are usually invoked at this point. But for people who spend quite a lot of time typing, I think that the following things are true:
Typing is cool.
Typing quickly is cooler.
Typing quickly, and accurately, is coolest.
Ergo, physical keyboards. This post is brought to you by a large, mechanical keyboard that makes more noise than an oil-well fire. But it feels amazing, and I have memorized its layout sufficiently to type at a happy clip. I’ve been using iOS for years longer than I’ve had this keyboard, and I’m probably half as fast on mobile, at best. I flip between this hulking ruin and the actual best keyboard of all time, the built-in set of keys that my current MacBook Air came with.
Between those I type away quite contentedly across two operating systems and a host of cloud services. The keys themselves are the gateways to your interaction with the Internet.
Certainly mobile is an increasingly important category of our digital lives. For some of us, it is the primary interface for the Internet. But for us working stiffs who have to shit out words in one way or another, whether in memos, posts, reports or email, a full physical keyboard is the way to go. And that means physical keyboards will have a place in my life for a long time — and I suspect yours as well.

 

© 2013 Tech Support. All rights reserved. Designed by Templateism
