Dan Geer elicited some very interesting questions from his webcast, "Security of information when economics matters."...
In this Q&A, Dan answers those queries and puts to rest concerns about how money and security influence each other.
Can you reiterate what malware is?
It is short for "malicious software," and it's a collective noun. There are, as always, lots of possible places to look for a reference definition, but this will do.
Why is keeping honest people honest an economic problem, but making dishonest people honest isn't? Are you only talking about internal dangers?
Keeping honest people honest is a job that is about incentives and keeping the gentle nudges all pointed in the right direction, with a little bit of watching to see how it turns out. This means that you try to calibrate your effort with your result, and you don't have to prevent the unpreventable. Keeping someone from turning an airliner into a cruise missile is the opposite extreme -- to a large degree the persons involved are not subject to behavior modification through incentives and the cost-effectiveness of various protections is not the first thing you consider.
There is no bright line here. We expect parents to start with the innocence of the baby and raise a good and productive person, but if things get completely out of hand, the parents are no longer where we put our focus of control once police and the courts are forced to get involved. As to whether this is external versus internal, my own view is that if you take care of the internal, you have largely taken care of the external. Ignoring straight vandalism (I'm just going to trash it), the first measure of success of the outside attacker is to gain the credentials of the insider, hence an adequate internal security regime moots the question of external attack and does so as a side effect.
Is TCO only calculated by figuring the anticipation costs and the failure costs? Or are there other factors involved?
The sum of anticipation costs and failure costs is the total picture, assuming you are willing to classify whatever you are doing as one or the other or, in some cases, a shade of both. These terms are (proudly) borrowed from the National Center for Manufacturing Sciences' report on The Cost of Information Assurance. As far as I can tell, this division has rapidly become commonplace terminology, so I did not go out of my way to either defend or even describe it. However, the time before an event is when you anticipate, and the time after an event is when you mop up, so it seems hard to argue with that. Perhaps "pre-event" (when you do not actually know what is going to happen but can guess) and "post-event" (when you know what happened and have a recovery job to do) would be more immediately transparent terms, though I'm myself happy with anticipation and recovery as they describe function rather than timeline.
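The split above can be made concrete with a minimal sketch. All of the line items and dollar figures below are hypothetical illustrations, not benchmarks from the report:

```python
# Total cost of ownership for security, split into the two NCMS-style
# categories: anticipation (pre-event) and failure (post-event).
# Every figure here is a made-up example for illustration only.
anticipation_costs = {
    "firewalls_and_antivirus": 120_000,
    "patch_management": 80_000,
    "security_training": 40_000,
}
failure_costs = {
    "incident_response": 60_000,
    "downtime_and_recovery": 150_000,
}

# TCO is simply the sum of both categories.
tco = sum(anticipation_costs.values()) + sum(failure_costs.values())
print(tco)  # 450000
```

The point of the two-bucket bookkeeping is that any security dollar lands in exactly one bucket (or is apportioned between them), so the total is always well defined even when individual items are hard to price.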
What is the role of authentication in the future? What does authentication have to do with this topic?
You cannot have accountability without identity, and identity is a near-synonym for authentication. As such, my general theme is that ever more complex access controls eventually become diseconomic by virtue of their complexity, the demands on them for real-time action and the management nightmare of being precise and specific despite constant organizational change. At that point, you need to bolster authentication with accountability. However, while permission granting can be independent of identity -- such as to say that you have to be in Room 12345 to operate some particular machinery or you have to have the role of ABCDE Specialist to be able to call up some control panel -- accountability comes down to "What did Dan do, and how do we know it was him?"
We need, sooner rather than later, the head-on collision between a privacy defined by anonymity and a security defined by accountability. Any delay in getting this collision resolved is a delay in getting effective security in a world of terror. There are two absolute requirements for effective terror: (1) money and (2) communications. It is also flatly the case that if you have an enemy that can strike in a location-independent fashion without identifying himself in the process, then your strategy must center on pre-emption. Pre-emption, to be effective, requires intelligence, and intelligence requires surveillance. Thus it should be of zero surprise that the question of how badly we want to defeat terror should come down to our decisions about the surveillance of (1) money and (2) communications.
What does this have to do, if anything, with all the new compliance rules I face?
Compliance cannot be done without secure information management. Not to undertake here a theoretic analysis of regulation, but it seems to me that regulation accretes in reaction to events (Enron). In addition, regulation has its greatest bulk and precision when that which it seeks to deliver is the most difficult to define (medical privacy); its penalties are more proportional to broad effects (credit availability) than to deep factors (risk tolerance); and, in contradistinction to judicial practice, regulators tend to demand that the regulatee prove a negative (be guilty until proven innocent). Regulators create risk for regulated parties in the hope that the regulation sufficiently decreases risk for the class of passive beneficiaries and that net societal risk is thereby reduced. Sadly, this tacitly ignores the cost of the regulatory apparatus itself and blithely discounts the cost of regulatee compliance (spending $X to prevent the loss of $Y where X is greater than Y). Thus is created a market opportunity for those who can achieve compliance while being cost-effective. In other words, secure information management is the answer to compliance, but only for products that offer some degree of platform-like adaptability -- some degree of getting at the raw material on which a compliance case depends even when the nature of compliance can and does change over time.
Can you say something about ROI? I get asked about it all the time...
Return on investment requires that you be able to measure both the return and the investment. As always, it is easier to put money on the bottom line through the cost side of the ledger. Therefore, the ROI argument is quite likely to be a balancing of measured costs -- costs that are hard dollar only some of the time. To begin with, commit to a cost-effectiveness approach rather than a cost-benefit approach. Cost-benefit requires that all parties agree on how much a benefit is worth (a day in the sun, a human life, being seen as an industry leader). This is generally hard and always involves judgments that are, at best, matters of taste and style. Cost-effectiveness, by contrast, requires that parties agree they are going to spend $X and then see how much value they can get for that $X. This is "What is the most you can get for $X?" rather than "Which would you rather have, benefit Y or $X in your pocket?" and only cost-effectiveness is sure to be tractable in all settings.
If you accept the argument thus far, then it is simply a matter of logic that you go on to see the value returned for the $X expended, compared across alternatives, to pick the one or ones that yield the better value. There are figures out in various places, e.g., Meta #2856, as to how much you should spend on security as an industry-sensitive percentage of total IT budget, but the point is to pick a target level of effort and then to maximize the value derived, which does require metrics of that effectiveness. There is little doubt, for example, that any reasonable valuation on corporate data or reputation is sufficient to make -- on paper -- rational decision making economically favorable to investment. The issue for you is to have those measures and to make, by adopting cost effectiveness over cost benefit, the problem small enough to be tractable to measure.
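The cost-effectiveness decision rule above reduces to a very small computation: fix the budget, discard unaffordable alternatives, and take the one with the best measured value. The alternatives, costs, and value scores below are hypothetical placeholders for whatever metrics your own shop uses:

```python
# Cost-effectiveness framing: the budget X is fixed in advance; the only
# question is which alternative returns the most measured value for it.
# All names and numbers are hypothetical examples.
budget = 100_000
alternatives = [
    {"name": "harden endpoints", "cost": 90_000, "value": 7.5},
    {"name": "network monitoring", "cost": 100_000, "value": 8.2},
    {"name": "expanded staff training", "cost": 60_000, "value": 5.0},
    {"name": "full data-center overhaul", "cost": 500_000, "value": 9.9},
]

# Drop anything over budget, then maximize value for the fixed spend.
affordable = [a for a in alternatives if a["cost"] <= budget]
best = max(affordable, key=lambda a: a["value"])
print(best["name"])  # network monitoring
```

Note what the sketch does not require: no one has to agree on what a unit of "value" is worth in dollars, only on a relative ranking of outcomes for the same fixed spend, which is the tractability argument in a nutshell.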
How do you decide what information is worth?
- Replacement value. "If starting from scratch today, what would it cost to recover your data to the level it is as we speak?"
- Black market price. "How much is someone willing to pay to get my data?"
- Future value. "If I lost this data how much future revenue or other value would not then appear?"
Each of these is a lower bound on what your information is worth, but they are solid lower bounds. If the decision you have to make tilts one way or the other at a level of value that exceeds those three measures, then further precision on the value of the information would not change your decision, and you can make that decision supported by what you do know without having a requirement for perfect knowledge. Each of these has a methodology familiar to some part of planning, finance or accounting and thus uses skills your firm already has to make up for skills it may not have yet. Perhaps you have a fourth or a fifth way to bound information value in your particular firm -- that's swell; go use that. Note that even if the way you measure this or anything is not particularly good, if the inequalities (A > B) let you make your decision rationally, you are doing the right thing. If you can measure something over time, then do that. If you believe your measurement, then act on its numeric value. If you do not believe your measurement but you can apply it consistently time after time, then trust and act on the trend data it produces.
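Because each estimate is a lower bound, the largest of the three is also a lower bound, and the decision rule in the text becomes a single comparison. The dollar figures below are hypothetical:

```python
# Three independent lower bounds on the value of the information,
# per the text: replacement value, black-market price, future value.
# All figures are hypothetical illustrations.
replacement_value = 250_000   # cost to rebuild the data from scratch today
black_market_price = 40_000   # what someone would pay to get the data
future_value = 400_000        # future revenue that vanishes if it is lost

# The max of several lower bounds is itself a lower bound.
lower_bound = max(replacement_value, black_market_price, future_value)

# If the decision threshold sits below that bound, the decision is made;
# further precision on the information's value cannot change the answer.
cost_of_protection = 300_000  # hypothetical spend under consideration
if lower_bound > cost_of_protection:
    decision = "protect"
else:
    decision = "need more precision"
print(lower_bound, decision)  # 400000 protect
```

If instead the threshold exceeded all three bounds, the inequality would be silent and you would need either a fourth bound or better measurement, which is exactly the "perfect knowledge is not required" point the answer makes.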
Are you saying that I should throw out my existing antivirus and firewalls?
Not at all, but I am saying that you should not expect them to do things beyond their design limits. If you expect your firewalls to block remote procedure calls (SOAP, say) whenever the actions they request are not acceptable, then you are asking your firewalls to fully model the execution environment of the applications behind the firewall, which is just plain foolish to ask. If you ask your antivirus to become fully automatic and real-time, you are creating a new vulnerability (that of the auto-update system becoming a source of attack) in exchange for the one you already have. In other words, the existing technologies of this sort that you have are likely to be operating as well as they can, given what they were designed to do. Asking them to do more than they were designed to do is asking for a lucky break and, as they say, hope is not a strategy. Instead, adopt a defense-in-depth paradigm and turn your focus to data protections. This permits AV and FW to do what they do well without insisting they do things for which they were not designed, while at the same time it expects the data protection regime to backstop them in true "defense in depth" style.