Cars will soon be (almost) fully automated. News about experiments with cars that drive themselves, in different scenarios and situations, makes it seem obvious that soon enough the role of the driver will be similar to that of a pilot in a passenger jet. Many people feel some itch of discomfort at this thought; the itch of “we are not there yet”. Let us see if and why we “are not there yet”, and what we can do about it.
Continue reading "Car Automation. Me? Worried?"
No one in the automotive security industry could have missed the recently published news article titled “Beware of Hackers Controlling Your Automobile”, published here, and a similar essay titled “Car hackers can kill brakes, engine, and more”, which can be found here. In short, they describe how researchers succeeded in taking over a running car, messing with its brakes, lights, data systems, and whatnot.
As alarming and serious as this is, it should not come as a surprise.
On January 15th, TechWorld published an article called Encryption programs open to kernel hack. Essentially, it warns that the key to encrypted volumes, that is, to software-encrypted virtual drives, is delivered by the encryption application to the kernel of the operating system, and may thus be captured by a malicious kernel.
“According to a paper […] such OTFE (on-the-fly-encryption) programs typically pass the password and file path information in the clear to a device driver through a Windows programming function called ‘DeviceIoControl’.”
And they consider it a threat:
“Dubbed the Mount IOCTL (input output control) Attack by Roellgen, an attacker would need to substitute a modified version of the DeviceIoControl function that is part of the kernel with one able to log I/O control codes in order to find the one used by an encryption driver. Once found, the plaintext passphrase used to encrypt and decrypt a mounted volume would be vulnerable.”
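The interception described above can be modeled with a toy sketch (Python, purely illustrative: the function names and the IOCTL code below are made up and stand in for the real Windows API). The point is simply that any secret passed in the clear through a hookable entry point belongs to whoever controls that entry point:

```python
# Toy model of the "Mount IOCTL" interception. Hypothetical sketch, not the
# real Windows kernel interface: a "driver" receives the passphrase in the
# clear, and a substituted wrapper logs every call before forwarding it.

captured = []  # what a malicious hook would collect

def device_io_control(ioctl_code, buffer):
    """Stand-in for the clean kernel entry point: just accepts the request."""
    return {"status": "mounted", "code": ioctl_code}

def hooked_device_io_control(ioctl_code, buffer):
    """A modified version that logs arguments before forwarding them."""
    captured.append((ioctl_code, bytes(buffer)))  # plaintext passphrase leaks here
    return device_io_control(ioctl_code, buffer)

# The OTFE application believes it is talking to the clean entry point.
passphrase = b"correct horse battery staple"
hooked_device_io_control(0x222000, passphrase)  # 0x222000: a made-up IOCTL code

assert captured[0][1] == passphrase  # the hook now holds the volume passphrase
```

Nothing in the application can tell the two entry points apart, which is exactly why a malicious kernel is outside the threat model such tools can defend against.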
Continue reading "Right, the kernel can access your encrypted volume keys. So what?"
Such “findings” often occur when the security model of a security system is ignored.
Full-Disk Encryption (FDE) has been suffering a series of class attacks lately.
As if the latest research (which showed that RAM contents can be recovered after power-down) was not enough, it seems that Firewire ports can form an even easier attack vector into FDE-locked laptops.
From TechWorld: Windows hacked in seconds via Firewire
The attack takes advantage of the fact that Firewire can read and write a system’s memory directly, a capability intended to speed up data transfer.
Continue reading "Firewire threat to FDE"
The tool mentioned seems to only bypass the Win32 unlock screen, but given such free access to RAM, exploit code that digs out FDE keys is a matter of very little extra work.
This is nothing new. The concept was presented a couple of years ago, but I haven’t seen most FDE enthusiasts disable their Firewire ports yet.
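To illustrate how little extra work: published tools locate keys in memory images by validating AES key-schedule structure; the sketch below (my own simplification, not any published tool) uses a cruder stand-in, flagging high-entropy windows in a dump as key candidates:

```python
import math

def entropy(window: bytes) -> float:
    """Shannon entropy of a byte window, in bits per byte."""
    counts = {}
    for b in window:
        counts[b] = counts.get(b, 0) + 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def key_candidates(image: bytes, size: int = 16, threshold: float = 3.5):
    """Slide over the dump in key-sized steps; flag windows whose byte
    distribution looks random. Crypto keys stand out against typical RAM
    content (real tools refine this by checking key-schedule structure)."""
    return [(off, image[off:off + size])
            for off in range(0, len(image) - size + 1, size)
            if entropy(image[off:off + size]) >= threshold]

# A fake 4 KB "memory dump": all zeroes, plus one planted 16-byte "key"
# (distinct byte values, i.e. maximal entropy for a window of this size).
image = bytearray(4096)
key = bytes(range(0x40, 0x50))
image[1024:1040] = key

hits = key_candidates(bytes(image))
assert hits == [(1024, key)]
```

Against a real dump this naive filter would also flag compressed and encrypted data, but it shows why a raw memory read is, for practical purposes, a key read.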
I was interviewed (by e-mail) for a project that preferred to remain undisclosed, on the future of secure content distribution. Enclosed are the (slightly modified) questions and answers.
Continue reading "An Interview on Secure Content Distribution"
A while ago the iPhone was hacked to make it usable on networks other than AT&T’s.
Since that moment, many opinions have been voiced on how Apple could have done its security better and how the hack could have been prevented. Moreover, some of the industry’s security experts went to their desks to work out a stronger mechanism that could save the gigantic firm from such embarrassments in the future.
An obvious question comes up: couldn’t Apple, with its $167 billion market cap, afford to pay some good security designers to protect its assets on the iPhone?
Most vendors selling security software that deals with removable devices or with flash storage media such as Disk-On-Key (DoK) provide the functionality of file wiping (often called shredding) from the removable medium. This feature allows the user to erase sensitive files that are no longer needed, in a way that (presumably) prevents them from ever being recovered, even if forensics gear is involved.
I find file wiping to be a useful function. Software that permanently destroys files has been available on PCs since the early 80s and has always been handy. File encryption utilities also use file wiping to remove the original plaintext file after encrypting it.
The one concern I have is about the reliability of these tools when they run against particular files that are stored on flash memory, such as USB DoK or SD cards.
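For reference, the overwrite-based wiping these tools perform looks roughly like the sketch below (a minimal illustration, not any vendor's actual implementation); the comments mark exactly where the flash-specific doubt arises:

```python
import os
import tempfile

def wipe_file(path: str, passes: int = 3) -> None:
    """Overwrite a file in place with random bytes, then delete it.
    On magnetic disks this classic approach overwrites the original sectors;
    on flash media (USB DoK, SD cards), the wear-leveling layer may redirect
    each overwrite to fresh physical cells, leaving the original data intact
    and recoverable by forensics gear that reads the raw chips."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # pushes past the OS cache, but NOT past the
                                  # drive's flash translation layer
    os.remove(path)

# Usage: create a throwaway file holding "sensitive" data, then wipe it.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"sensitive data")
wipe_file(path)
assert not os.path.exists(path)
```

The logical file is gone either way; the question is only whether the physical cells that once held the plaintext were actually overwritten.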
Here is a question that was raised in a discussion forum, along with my response to it. I figured it is interesting enough to post it here.
Why not just deploy an Enterprise Rights Management solution instead of using various encryption tools to prevent data leaks?
The “encryption tools” function according to simple, well understood, and more-or-less enforceable security models. Their assumptions are well understood and, most importantly, match the environments they run on. They solve a simple problem, and solve it effectively.
Rights management solutions have complex security models, and run in environments that do not always satisfy the assumptions. They aim at providing complex functionality, but they often (always?) fail to deliver due to their over-complexity and unrealistic assumptions.
If your security needs can be met by the simple functional model of the “encryption tools”, then you will prefer to enjoy the assurance and the reasonable robustness they provide, which is the most desirable feature after all.
It is already obvious that security is hard to do right. Bruce Schneier has written a good essay called Why Cryptography Is Harder Than It Looks. This essay refers to cryptography, but touches on the subject as a whole. It is still not always clear, however, where the hard core of security analysis work lies, and where exactly it differs from QA and from other system engineering domains.
I would like to take a shot at explaining the fundamental difference between assuring functionality and assuring security, and pinpoint the toughest part of security analysis.
One of the biggest hurdles of DRM is that content can somehow be leaked by a few skilled individuals and then find itself on the peer-to-peer networks again. The only way to mitigate this threat is by embedding a watermark in the plain content data, to be used either by playback devices to recognize pirated content or for identifying the source of content leaked to the network.
That’s nice, but for this we need a watermarking scheme that can be detected by a non-secret mechanism (called Public Watermarking), and for this mechanism to be such that it is impossible, or at least very difficult, to peel the mark off. Unfortunately, these two requirements are known to be contradictory. The scheme being public implies that anyone can form an oracle that will tell him as soon as the mark has been rendered useless. Once such an oracle is available, there is a simple iterative process to follow, by which changes are introduced to and removed from the original content until the result is another piece of content that on one hand is not too different from the original, and on the other hand does not contain a usable mark.
This is not to say that watermarking for DRM is doomed to failure; it is just to say that a breakthrough is needed to make it happen.
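The iterative oracle process can be demonstrated on a toy spread-spectrum mark (all parameters below are hypothetical; real schemes are far more elaborate). The attacker never sees the mark itself, only the public detector's yes/no answers, yet recovers enough of it to strip it with modest distortion:

```python
import random

# Toy additive watermark: content is a vector, the mark is a secret +/-1
# pattern scaled by ALPHA, and the detector thresholds the correlation.
N, ALPHA, THRESHOLD = 256, 0.5, 0.25
rng = random.Random(42)
wm = [rng.choice((-1.0, 1.0)) for _ in range(N)]            # the secret mark
original = [rng.gauss(0.0, 1.0) for _ in range(N)]
marked = [x + ALPHA * w for x, w in zip(original, wm)]      # embedding

def detected(content):
    """The PUBLIC detector -- exactly the oracle the text warns about."""
    return sum(c * w for c, w in zip(content, wm)) / N > THRESHOLD

def oracle_attack(content, probe=128.0, step=0.05):
    # Step 1: learn the sign of each mark coefficient with one oracle call
    # each -- a large nudge on coordinate j keeps or kills detection
    # depending on the hidden sign wm[j].
    est = []
    for j in range(N):
        trial = list(content)
        trial[j] += probe
        est.append(1.0 if detected(trial) else -1.0)
    # Step 2: subtract growing multiples of the estimate until the oracle
    # reports the mark is gone.
    scale, work = 0.0, list(content)
    while detected(work):
        scale += step
        work = [c - scale * e for c, e in zip(content, est)]
    return work, scale

cleaned, scale = oracle_attack(marked)
assert detected(marked) and not detected(cleaned)
assert scale < 2 * ALPHA  # removal cost stays near the embedding strength
```

The per-coordinate distortion of the cleaned copy is comparable to that of the embedding itself, so the result is "not too different from the original" in precisely the sense the argument requires.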