
Disclosure: A Case for Bug Bounties

February 4, 2014

Like it or not, vulnerabilities are a fact of life in information security, and so are the people seeking them out. Yes, independent testing may be illegal without express permission, but that doesn’t stop these code pillagers from sniffing out vulnerabilities and weaknesses in your web applications. And while many of them are attackers with malicious intentions, others are whitehat good guys concerned about the security of their own data, and that of everyone else using your application – which is just one reason disclosure has become such a hot topic.

With that said, it is undeniable that illicit security testing on web applications is becoming more widely accepted. Despite the obvious legal and safety pitfalls, bug-chasers have become a regular part of the security landscape – and that means we can all benefit from finding a productive middle ground that respects organizations and protects well-intentioned researchers, while thwarting attackers.

A Reason to Be Spooked

So, if there’s a place for benevolent testers, why are they hesitant to let the organization know when they find a vulnerability? Because the potential legal consequences are too intimidating. One example: Joshua Rogers, a 16-year-old Australian who found a SQL injection vulnerability in his local Transport Department’s website that gave him access to the personal information (credit card numbers, names, birth dates, etc.) of more than 600,000 citizens. Rogers shared the information with a media outlet – only after receiving no response from the website’s staff – and was consequently reported to the local authorities.

Then there’s Andrew Auernheimer, who stumbled upon an AT&T security flaw that allowed him to obtain data on iPad users. Auernheimer did not sell or use the information maliciously, saying he believed AT&T needed to “be held accountable for their insecure infrastructure as a public utility.” Nevertheless, federal prosecutors charged him with identity theft and conspiracy, and Auernheimer was sentenced to 41 months in federal prison. Some media outlets anticipate that his prosecution will deter whistleblowers from reporting vulnerabilities, ultimately undermining the very protection of citizens the justice system is meant to provide.

Disclosure Models

These examples make it clear why so many independent testers are hesitant to contact organizations directly with their findings. Without a protected way to report bugs, many feel safer making a public and anonymous disclosure. Yet that merely shifts the risk to the organization by publicizing its vulnerability before it has time to issue a fix; it also hangs out a neon sign inviting malicious parties to exploit it.

This conundrum has given rise to a variety of implicit disclosure models. Proponents of Non-Disclosure believe vulnerability information should always be kept private or shared only under a non-disclosure agreement. Advocates of Full Disclosure believe in publishing vulnerabilities without restriction, often through mailing lists, academic papers or industry conferences. Benefits of this model include helping users understand their own systems better and empowering customers to pressure vendors for fixes.

Coordinated Disclosure, on the other hand, holds that any vulnerability should be concealed until the software vendor grants permission for disclosure. Even then, the information is shared with a limited audience. Customers benefit from high-quality updates without risking attacks during the update development period.

Then we have the Responsible Disclosure model, where all stakeholders agree to terms that allow the organization to create a solution before the vulnerability details go public. Often these agreements take the form of Bug Bounty programs – an option that’s gaining traction among both researchers and organizations.

Bug Bounty Programs

If you’re not familiar with these programs, they foster responsible disclosure by granting permission to find vulnerabilities and usually offering some type of reward for disclosing them in a discreet and ethical manner. By compensating – rather than prosecuting – the tester, businesses provide an incentive to find unknown vulnerabilities and report them privately for resolution before an attacker learns of them. It’s a win-win: the bug-finder receives money, swag and/or recognition, and the organization strengthens its security.

Not everyone approves of “vulnerability commercialization.” Yet this kind of crowd-sourced security is being practiced by some of the biggest names in the industry, according to Bugcrowd.com. Microsoft, IBM, Amazon Web Services, Google, Facebook, Salesforce and Yahoo have all operated some type of bounty program. The practice has proven so effective that a bug bounty industry has sprung up, with companies like iDefense and TippingPoint managing programs for organizations.

In the end, part of building a web application is knowing that your app will be tested no matter what. Hackers – good or bad – are inevitable; providing the white-hat testers with a safe and responsible path to disclosure can be the most effective route to patching a vulnerability before it can be maliciously exploited. Is there a place for prosecution? Absolutely, when you’re dealing with a malicious party. But given the number of well-intentioned independent testers out there, meeting them halfway in a spirit of collaboration can reward the right players in the game and keep the wrong players off the field.

Related: Pwn2Own Hacking Contest Targets Microsoft EMET Protections

Related: Researcher Discloses Critical Vulnerabilities in Oracle Forms and Reports


Chris Hinkley is a Senior Security Engineer at FireHost, where he maintains and configures network security devices and develops policies and procedures to secure customer servers and websites. Hinkley has been with FireHost since the company’s inception. In his various roles within the organization, he has serviced hundreds of customer servers, both Windows and Linux, and overseen the security of hosting environments to meet PCI, HIPAA and other compliance guidelines.
