Posted by: crudbasher | January 24, 2013

Why Siri (Currently) Sucks

OK, I admit the title is a bit provocative. Even so, plenty of people will agree. First, a little backstory.

Siri didn’t start as an Apple product. It began as a third-party app that grew out of a project funded by the US Department of Defense, and was later spun off into a company to make an app. The original Siri had two major differences from the current version: 1. it was text only, and 2. it could tap into many (non-Apple) sources of information.

When Apple bought the company, it folded Siri into its closed ecosystem and essentially “Nerfed” it. “Nerf” is a term from video games: to Nerf is to reduce the power or effectiveness of a game element, usually to make the game more balanced (the opposite of Nerf is to Buff). So why did Apple do this? Simple: Apple is a hardware company, not a software company. It wants a good, “magical” user experience; the quality of the information matters less to it. A good case in point is Apple Maps: it was pretty, but the information had some problems. Outside data sources are largely rejected. For example, you can’t use Siri with third-party apps, while on Android you can.

What we are dealing with is a question of philosophy, not technology, which I have maintained for a while is Apple’s Achilles heel. Since Apple insists on building its whole ecosystem itself, it suffers from a lack of resources. Yes, I know it’s hard to envision a company with over 100 billion dollars in the bank having a resource problem, but it’s a brain problem. Simply put, Apple can’t out-innovate the whole rest of the open Internet, especially when there are certain things it won’t do, such as use outside information.

I think this closed system will become more of a problem in the future once we start getting real virtual assistants. Siri is simply a prototype; future systems will be much more capable. For example, Google and others (and Apple too, I wager) are working on having these systems maintain context. Right now, when you ask Siri a question, it answers and then forgets it; your next question is treated completely separately. Future systems will maintain a conversational record. Here’s an example. Let’s say your friend Tom is coming to visit and is arriving on a plane later in the day. You can ask your assistant whether the flight is on time. Later that day you can simply ask whether Tom is still on time, and based on the context of what you asked about earlier, the assistant will know to check the flight status.
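To make the idea concrete, here is a minimal sketch of what “maintaining a conversational record” might look like. This is purely illustrative: the class, the flight number, and the memory structure are all made up, and a real assistant would resolve references with far more sophistication.

```python
from datetime import datetime

class Assistant:
    """Toy assistant that remembers earlier topics so follow-ups resolve."""

    def __init__(self):
        # topic (a person's name) -> details remembered from earlier turns
        self.context = {}

    def ask_flight_status(self, person, flight):
        # Record what the user asked about so later questions can refer back.
        self.context[person] = {"flight": flight, "asked_at": datetime.now()}
        return f"Checking flight {flight} for {person}..."

    def ask_followup(self, person):
        # "Is Tom still on time?" -- resolved from stored context, not re-asked.
        if person in self.context:
            flight = self.context[person]["flight"]
            return f"Rechecking flight {flight} for {person}..."
        return "I don't have any earlier context for that."

assistant = Assistant()
assistant.ask_flight_status("Tom", "UA 123")
print(assistant.ask_followup("Tom"))
# -> Rechecking flight UA 123 for Tom...
```

The point is simply that a follow-up question carries almost no information on its own; it only makes sense against a stored record of the conversation, which is exactly what Siri currently throws away between questions.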

Google is working on this technology and calls it Google Now. Google actually wants it to suggest things to you based on what it knows about you: if you have shopped at a particular store, it can watch for sales there and bring them to your attention.
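In its simplest form, that kind of suggestion is just a filter of current offers against your own history. The sketch below assumes a plain list of past stores and a dictionary of current sales; the store names and deals are invented for illustration, not anything Google actually exposes.

```python
# Hypothetical data: stores the user has visited, and currently known sales.
shopping_history = ["REI", "Trader Joe's", "REI"]
current_sales = {"REI": "20% off camping gear", "Best Buy": "TV sale"}

def suggest_sales(history, sales):
    """Suggest only the sales at stores the user has actually shopped at."""
    visited = set(history)
    return {store: deal for store, deal in sales.items() if store in visited}

print(suggest_sales(shopping_history, current_sales))
# -> {'REI': '20% off camping gear'}
```

The hard part, of course, is not the filter but gathering the history and the sale data in the first place, which is exactly where an open ecosystem with many data sources has the advantage.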

I am confident that Siri will get better soon, although I’m not sure Apple can completely overcome its philosophical limitations. I think Google’s system is going to be very good, and I saw that another company just purchased a startup to put the same capability into its tablets.

The end result will be a system that can teach things to children all day long. When they ask “why” the system will answer in as much detail as they want.

It’s a brave new world coming and most people don’t even realize what is about to happen. Tomorrow I will try to illustrate this.




