Shodan is a search engine for computers. It allows users to search for hosts on the Internet not by the text they serve but by their technical properties, as reflected in their responses to network probes. The crawler Shodan uses to build its index does not read the text that websites emit when visited; instead, it reads the information that the machine itself provides when probed.
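The kind of probing described above can be illustrated with a simple "banner grab": connect to a port and read whatever the service volunteers about itself, without requesting any page content. The sketch below is self-contained: it starts a throwaway local server that emits a made-up SSH-style banner as a stand-in for a real Internet host, then connects to it and reads the banner. The banner text and the helper names are illustrative assumptions, not anything Shodan-specific.

```python
import socket
import threading

# Made-up banner; stands in for what a real daemon would announce.
BANNER = b"SSH-2.0-ExampleServer_1.0\r\n"

def fake_service(server_sock):
    """Accept one connection and announce a banner, as many real daemons do."""
    conn, _ = server_sock.accept()
    conn.sendall(BANNER)
    conn.close()

def grab_banner(host, port, timeout=2.0):
    """Connect and read whatever the service volunteers -- no request is sent."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(1024).decode(errors="replace").strip()

# Throwaway local server so the demo needs no Internet access.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=fake_service, args=(server,), daemon=True).start()

banner = grab_banner("127.0.0.1", port)
print(banner)  # the service identified itself without serving any web page
```

Note that the client never asks for anything: the service identifies itself unprompted, which is precisely the information a crawler can index.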
Like many technologies, Shodan is dual-use: it has both legitimate and malicious applications. The tool can be used for research, but it can be, and indeed has been, used for malicious purposes. Shodan will readily map and report Internet-accessible webcams, traffic lights, and other IoT devices, including poorly protected ones, such as those using default passwords, or no passwords at all, for log-in.
So is Shodan bad? Not at all. As argued below, the forces it unleashes are exactly the ones that make us all more secure.
Does Shodan help hackers? It sure does, just like car makers provide getaway cars for bank robbers and knife makers supply street gangs with weapons. The fact that a technology can also be used by bad guys does not make it bad overall. But the argument for Shodan goes well beyond the dual-use argument.
Shodan is just a messenger. It does not create flaws; it only points to them. Unarguably, detecting weak nodes is an important part of an attack, but this capability was already available to the hacker community anyway. Hackers have been searching for and spotting weak hosts on the Internet since long before Shodan. Automated crawlers existed before, too; they just did not serve the public, and so they never drew public attention.
Shodan helps attackers only a little; now let us see what it gives the good guys in return.
The Shodan search engine brings the vulnerability of misconfigured hosts to public awareness. It does not make them vulnerable; it brings their existing vulnerability into the public spotlight. Most independent security researchers appear to believe that once a vulnerability exists, we are all better off having it publicized rather than kept secret. This belief is the motivation behind the full-disclosure and responsible-disclosure paradigms.
The rationale behind the disclosure paradigm is as follows:
Once a flaw or vulnerability is known to a researcher, even a single one, it must be assumed that it is already known to the bad guys as well. Every vulnerability is thus a ticking bomb, regardless of how many people know about it.
Consequently, disclosing a vulnerability to the public does not materially worsen the situation compared to leaving it undisclosed.
On the other hand, there are substantial benefits to disclosing a vulnerability to the public:
The public that relies on the vulnerable system should be made aware of its vulnerable state, so it can take remedial action, penalize the vendor financially, and so on. Anyone relying on a system that puts them at risk has the right to know it.
Publishing a flaw in a commercial product is often the only effective way of pushing the vendor to actually fix it.
History is full of examples of vendors ignoring security issues in their products for months or years, until those flaws are made public. For most vendors, who do not sell their products with any security warranty whatsoever, a security flaw is a public-relations issue. If a flaw is uncovered but almost nobody knows about it, it is simply not economical to fix it, ever.
The situation with IoT and other Internet-facing hosts is similar. Once a traffic light accepts connections from everyone and does not require strong authentication, its exposure is a given: those who seek to exploit traffic lights will find it with or without the help of Shodan. Yet listing the faulty traffic light is very meaningful, not because the bad guys can see it, but because the good guys can. When the good guys see the listed traffic light, they know where they stand: they know how they are vulnerable, and they know who is responsible. The careless engineer, the one who put us at risk and the only one who can eliminate that risk, loses his anonymity in the IoT haystack and, just as importantly, loses the all-too-common excuse that "hackers will never target it anyway".
Services such as Shodan show us what is going on in the systems we all rely on. They also dissolve the haystack excuse used by those who could eliminate the risk but didn't. Haystacks are not a deterrent for focused evildoers; they are just blinders for the rest of us.