Credit Where Credit Is Due

For security-conscious software buyers and SaaS subscribers, “caveat emptor” doesn’t work. The principle hardly holds up under imperfect information, and it has no chance in closed-source security, a world with virtually no information at all.

Buyers simply can’t tell the difference between a product diligently secured and one merely marketed as such. When buyers can’t tell, sellers don’t have the incentive to invest in diligence. The results are predictable.

And that’s a shame, because legions of talented technologists aren’t doing what they could to make the world a more secure place.

Responsibility

Engineers and craftsmen in other fields have a rich history of taking personal and professional responsibility for protecting consumers from the obscured dangers of their craft.

In Canada, engineers wear the Iron Ring on the pinky of their working hand — where they touch the drafting table — to remind themselves of their ethical obligations. In civil and other engineering domains, licensure is a prerequisite for signing off on weighty work, like the design of a bridge. And in construction, bonding, a personal form of strict liability, is common.

Software engineering has no such traditions or requirements. And users are poorer for it.

Incentives

In the boomtowns of technology’s wild west, where competition is fierce and money’s to be made, capitalist Darwinism places the incentives squarely on shipping a valuable product quickly. When your customers can’t look at your product and tell whether you were diligent with security, why put scarce resources into it?

To make matters worse, the marketplace doesn’t have much in the way of quality discovery mechanisms. Other than too-little-too-late headlines and bulletins dispatched from good guys hanging out in the bad parts of the Internet, there are no signs you’ve got a lemon. You don’t know until a product you depend on has been exploited.

Consumer Reports and its ilk take on collective responsibility for this diligence in other markets, and they’re trying to do the same in computer security. But the costs are exorbitant: source code is closed, computation and storage are services rather than inspectable items, and the weaknesses of an organization matter as much as those of its product.

Even where the costs are justified, performing due diligence on a software or SaaS purchase is hard to do well. Researchers are asked to make confident statements about low-probability, high-impact events, an unenviable task.

Regulating our way to safety is a tough sell for good reason. Strict requirements fall short because meeting every criterion on a 210-point checklist doesn’t guarantee that a door the list never mentions isn’t left wide open. Even worse, a set of wisely chosen rules today might awkwardly bound tomorrow’s progress. Less prescriptive approaches leave us wanting, too. A licensure requirement could choke the supply of badly needed talent to a booming industry. And strict liability doesn’t cover all the bases: startup-style technology development is financially high-risk, high-reward, and moral hazard suggests companies should be expected to play fast and loose even with strict liability looming.

Giving Technologists Agency

But, what consumers and their usual protectors can’t see isn’t invisible, nor is it beyond good management; plenty of people understand it just fine. Each technology’s creators are intimately familiar with what they’ve built. And often, their competence and their conscience tell them precisely what ought to be done.

On most days, in most organizations, duty doesn’t compel the rank and file to pick a fight over getting something marginal fixed: “Looks like we’re not salting passwords” has to spar with “Our biggest client wants X fixed yesterday.” Motivating action over complacency is even harder: “Hey, I feel like we should be monitoring for intrusion” and “I just don’t feel comfortable putting this authentication system in production without a serious penetration test” are elective costs in the eyes of the CFO, with no upside to sales or marketing.

Pushback from upstairs isn’t Pointy Haired Boss syndrome. It’s pure economic rationality running amok, forcing untold numbers of poor decisions across every organization in the land.

So, if technology creators are the only ones who can see what clothes the emperor is wearing, what are they to do? You can only pick so many fights before suffering retribution, the penance for being a squeaky wheel.

Make an Outward Sign of Inward Commitment

More organizations would do the right thing if only they could get credit for it — credit in the form of market share. Getting that credit doesn’t have to be so hard.

A soundly designed system for self-nomination to a high standard, combined with (possibly anonymous) reporting, could do the trick.

Organizations could self-nominate to an “Iron Ring” standard via a registry, saying, “Our company secures our product. We empower our technologists to decide what security is ethically required of the organization, and we encourage them to be transparent about it without fear of retribution. Employees may, alongside this nomination, anonymously report any customer-relevant security concerns we haven’t remedied after 90 days.”

Not every company would self-nominate. But that’s the point. It gives those who are actually willing to meet a particular requirement the opportunity to prove it to customers.

Self-nominations would be periodically certified by designated agents in the company (e.g., the CEO or CTO) whose credentials and identities would be the outward-facing signs of competence and collateral. This is not without precedent: it’s patterned on Sarbanes-Oxley’s CEO/CFO certification of financial statements, albeit with social and professional liability instead of SARBOX’s legal force. Nominations could be simple, as above, or more precise. A nomination could warrant, for example, “I am not aware of any unresolved vulnerability that puts customer interests at risk.”
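To make the mechanics concrete, here is a minimal sketch of what one registry record might look like. Everything in it (the Nomination and Report types, the field names, the publishable check) is a hypothetical rendering of the scheme above, not a specification.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Grace period before an unresolved concern surfaces publicly,
# per the hypothetical 90-day remediation window described above.
REMEDIATION_WINDOW = timedelta(days=90)

@dataclass
class Report:
    """An employee-filed security concern; the filer may stay anonymous."""
    filed_on: date
    summary: str
    resolved: bool = False

    def publishable(self, today: date) -> bool:
        # A concern attaches to the public nomination only if the
        # company hasn't remedied it within the remediation window.
        return not self.resolved and (today - self.filed_on) > REMEDIATION_WINDOW

@dataclass
class Nomination:
    """A company's self-nomination, certified by a named agent."""
    company: str
    certifying_agent: str  # e.g., the CEO or CTO, identity on the line
    warrant: str           # the statement being certified
    certified_on: date
    reports: list[Report] = field(default_factory=list)

    def outstanding_concerns(self, today: date) -> list[Report]:
        """Unremedied reports old enough to appear alongside the nomination."""
        return [r for r in self.reports if r.publishable(today)]
```

In this sketch the registry is nothing more than a list of such records; the leverage comes from a named agent’s identity riding on the warrant, and from unremedied reports surfacing automatically once the window lapses.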

With each self-nominated company’s reputation on the line and a credible system for holding them to their word, engineers so inclined can find a real home for ethically driven decision making instead of being awash in a sea of questionable “business decisions”.

Reality

Of course, a system like this isn’t without problems. What about one crotchety employee who simply can’t be satisfied? Is a company to just grin and bear it while their nomination is littered with unreasonable reporting?

Not every CTO or engineer is even qualified to make such a statement on behalf of their company. Not that it would stop them.

What dollar-wielding customer is going to care about a self-nomination to yet another standard?

And couldn’t a group of greedy equity-bearing engineers collude to silently take shortcuts?

Questioning could go on like this for a while.

But, I believe the core problems are solvable. And, more importantly, I know that the current incentives leave us right here at the status quo, so I think it’s high time we take a shot at changing them.

Using incentives as a lever, a small effort like this could move mountains. A relatively simple system could be all it takes to direct customer dollars toward organizations trying to do the right thing. That simple logic powers a hell of an innovation economy, and it could just as well power the same improvements in security.

Disclosure: A few friends of mine were kind enough to read drafts of this and provide comments. I owe thanks to Michael Specter, Morgan Edwards and Ian Drummond for their advice.