The iPhone Hack -- Security Done Wrong or Security Done Right?

  2008-01-13

  22:06, by Hagai Bar-El   , 552 words
Categories: Security Engineering, Counter-media

A while ago the iPhone was hacked to make it usable on networks other than AT&T's.

Since that moment, many opinions have been voiced on how Apple could have done its security better and how the hack could have been prevented. Moreover, some of the industry's security experts sat down at their desks to work out a stronger mechanism that could save the gigantic firm from such embarrassments in the future.

An obvious question comes up: couldn't Apple, with its $167 billion market cap, afford to pay some good security designers to protect its assets on the iPhone?

With all that money spent on the iPhone development, couldn't Apple spare a dime for more robust security? Obviously, complete security is hard to come by, and incurs per-device costs that are usually unacceptable for consumer electronics. It seems, however, that in this particular case Apple fell short of utilizing the state of the art, even to the level that does not require expensive measures to be deployed per-device.

Truthfully, I never delved into the details of that hack, so I cannot say exactly how easy it was. However, I am aware of the state of the art in protective measures, and I know that for modest per-device costs, hardware-based security could have made this attack significantly less scalable. I have designed such mechanisms, and so have others; it is doable.

So why? Why couldn't Apple do what others have done? It is certainly not that they could not afford this or that technology or consultant. The more likely answer is: because they did not want to. It has been said before that security cannot be detached from economics. Security is a set of technological tools that are deployed, at a cost, to mitigate some perceived threats, all according to some financial model. Misunderstanding the financial model will necessarily lead to wrong security decisions.

Now let's see how this relates to the Apple iPhone case.

When someone buys an iPhone, Apple earns a profit on the device; call this amount a. When an iPhone user gets services from AT&T, Apple gets a hefty payment from AT&T; call this amount b. If the user was not subscribed to AT&T before, but is now, b is even greater. Apple wants iPhone users to migrate to AT&T, because this brings the highest revenue: the profit from the iPhone plus the payments from AT&T (a+b). If you are a customer of AT&T — good. If you are not, but you switched to AT&T just to use it with your shiny iPhone — great! What if you are not a customer of AT&T and will just not migrate, not even to be able to use an iPhone? In this case Apple has two options: either lose you as an iPhone purchaser and profit zero, or let you use the iPhone anyway and profit a, which is still far greater than zero.
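The payoff logic above can be sketched in a few lines of code. The amounts here are hypothetical placeholders; only the ordering a + b > a > 0 matters to the argument:

```python
# Sketch of the per-customer payoff model described above.
# The dollar figures are made up; the point is the ordering a + b > a > 0.
a = 200   # assumed profit on the device itself
b = 300   # assumed payment from AT&T for a subscribed user

def apple_profit(buys_iphone: bool, uses_att: bool) -> int:
    """Apple's revenue from one customer under the model above."""
    if not buys_iphone:
        return 0
    return a + b if uses_att else a

# Customer migrates to AT&T: best case for Apple (a + b).
assert apple_profit(buys_iphone=True, uses_att=True) == a + b
# Customer hacks the phone onto another network: Apple still profits a.
assert apple_profit(buys_iphone=True, uses_att=False) == a
# Security so tight the customer walks away: Apple profits nothing.
assert apple_profit(buys_iphone=False, uses_att=False) == 0
```

Under this model, security strong enough to block the second case entirely would push those customers into the third case, which is strictly worse for Apple.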

Apple probably did a good job of setting the security level of its iPhone just right. At least at first glance it seems to accomplish its goal: on one hand, to cause as many customers as possible to migrate to AT&T, and on the other hand, to allow the geeks who do not want (or cannot) use AT&T's services to use (and thus buy) the device anyway.
