As you have probably heard by now, SnapChat had a security breach that allowed unauthorized people to get a list of usernames and matching phone numbers. Not the worst breach in the world, but not good either. You can find a surprising amount of information about people based on their phone numbers. What interests me about this story, however, are two things: the notion that this counts as a security breach, and the role of "gray hats" in security.
First, I don't really think of the kind of thing that happened to SnapChat as a security breach. Briefly, SnapChat has a feature that allows users to look up the phone numbers of other users. Since SnapChat does not enforce a limit on the number of requests one user can make in a given time, it was possible to send literally millions of requests in a short period of time, and thus match something close to every available SnapChat phone number to its user account. This is being discussed as a security breach, which in a way it is, but to me it is more reflective of bad software design.
Not rate-limiting your public API (and, frankly, your private APIs) is galactically stupid. You leave yourself open to denial of service attacks, to resource depletion, to inadvertently exposing enough information to allow correlations you never thought of, and to a few users getting a stranglehold on your service, limiting your profitability. So, yes, it does have security implications. But even if the service in question holds no sensitive information, an unthrottled API still puts your business at risk. This should be a fireable offense, and it represents the downside of the "build fast, iterate later" mindset.
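And a basic throttle is not hard to build. Here is a minimal sketch of a token-bucket rate limiter in Python; all of the names and numbers are hypothetical illustrations, not anything from SnapChat's actual code, which I obviously have not seen:

```python
import time

class TokenBucket:
    """Token-bucket limiter: allows bursts up to `capacity` requests,
    refilling at `rate` tokens per second thereafter."""

    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per client identifier (API key, account, or source IP).
buckets = {}

def check_request(client_id, capacity=10, rate=1.0):
    """Return True if this client's request should be served."""
    bucket = buckets.setdefault(client_id, TokenBucket(capacity, rate))
    return bucket.allow()
```

With something like this in front of the lookup endpoint, a client that tries to fire off millions of requests simply gets refused after its burst allowance runs out, and an enumeration of the entire phone-number space stops being practical.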
I am actually a fan of agile methods. I do believe in short release cycles, staying close to the business, and iterating rapidly. Those things are not valuable in and of themselves, though. They are valuable only insofar as they help us create better software. When you treat them as good in and of themselves, when you give them reverence better reserved for religions, you get, well, SnapChat. I am quite literally flummoxed that anyone would release an API that doesn't have a throttle. I would expect the rawest programmers in the world to think of that, and I would certainly expect a business to employ at least one senior developer with the architectural skills of a drunk octopus. I don't know what went into the creation of SnapChat, obviously, but I would wager that they wanted to get to market fast. I am sure they told themselves they could fix issues "in a later iteration" and patted themselves on the back for being so agile.
That kind of attitude can easily lead to "happy path" only work, work that is focused solely on expected behavior and doesn't take into account the errors, stupidity, maliciousness, and pure pig-headed stubbornness of the usual user base of any application used by more than one person for more than one minute. That attitude in turn leads to software that cannot stand up to much public scrutiny, a kind of alpha or beta test passed off as a functioning application. A whole generation of developers has been brainwashed into believing that "minimum viable product" is code for "one piece of functionality works as we expect, if it's used only as we expect". And that leads to companies like SnapChat making basic mistakes, probably without even realizing how they made them.
The second issue I have is with a description I read of both the company that published the exploit and the people who published the exposed numbers. I have heard both described as "gray hats", meaning a security firm or group that uses questionable methods to bring attention to security flaws that companies would rather ignore. We can dispense with the second group first: they are what I prefer to call "asshole hats". They did something for no real reason: those numbers did not have to be exposed (yes, I know they obscured the last two digits, but that is hardly an insurmountable problem; they also coyly suggested they might release the full numbers under the right conditions). There isn't any value in doing something like that, and it is more than a little depressing to hear people act as if there is.
The group that released the exploit, Gibson Security, is a more interesting question. They did what is apparently accepted gray hat practice: they found the exploit, they told SnapChat, they waited a month, and then they released the exploit to the public. Releasing the exploit publicly is done, groups like Gibson Security say, in the hope that the publicity will force reluctant companies to fix the flaws. Except that releasing the exploit puts it where it can be used to damage real people. Gibson could have gone to the press with a demonstration; the idea that a security flaw in an app as popular as SnapChat would not have been covered is laughable. They could have gone to Google and Apple with the same demonstration and gotten their assistance in having the app fixed (SnapChat would certainly have paid attention to Apple telling them to fix the issue or have their app removed). They had options, but they apparently chose the one most likely to put people at risk. There doesn't seem to be much gray about that to me.