A few months ago I was attending an IT leadership conference. One of the conference topics was information security...
processes and technology. After several speaker sessions, the conference organizer arranged the participants into a few groups for a facilitated discussion on threats to information security. In our group, the facilitator started off by asking us what we thought was the best news an IT leader could hear about information security. Others in my group gave answers to the effect that the best news would be if we had just passed our audits or gone some time without an incident or something similar. I timidly raised my hand and said that the best news I could ever receive about information security was that someone else got breached. Why is this good news? First of all, it was not me. Second, as long as it was someone else, the likelihood of my getting funding for investments in information security was greater after a well-publicized data breach.
When it comes to information security, the threats, responses and technologies keep changing and I expect they always will.
Through all this churn in the threats to information security and the processes and tools to combat them, one area remains a constant and consistent risk -- human behavior. I have not yet figured out a way to guarantee that humans act the right way, but there are some things we can do to reduce our exposure.
Three information security processes to mitigate human error
Here is my list of what makes the biggest difference:
1. Classify and restrict access to data. Some types of data are more sensitive than others. It can be overwhelming to think of ways to protect all of our enterprise data. But, if our sensitive data is a subset or sliver of that data, things get simpler. I like to define some criteria we can use to segregate our data into different classifications. Clearly financial data is sensitive. So is the personally identifiable information of our clients. But, a whole bunch of data might not be worth fully protecting.
Once we have classified our data, we should determine who really needs to access the sensitive data -- and the fewer the people who can access that data, the better. There really does need to be a "need to know" criterion baked into your information security processes. Now, restricting access to data can be a pain, as we need to define data roles and permissions. We then need to regularly update the data classifications, the roles and permissions, and who holds those roles and permissions. All of this pain is well worth the effort.
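To make the classify-then-restrict idea concrete, here is a minimal sketch of a "need to know" check. The classification levels, role names and mapping are hypothetical illustrations, not any particular product's API; a real system would load roles and ceilings from a permissions store that gets reviewed regularly.

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = 1
    INTERNAL = 2
    SENSITIVE = 3  # e.g., financial data or clients' personally identifiable information

# Hypothetical mapping of each role to the highest classification it may see.
# Keeping this list short is the point: fewer people with SENSITIVE, the better.
ROLE_MAX_CLASSIFICATION = {
    "analyst": Classification.INTERNAL,
    "accounts_receivable": Classification.SENSITIVE,
}

def can_access(role: str, data_class: Classification) -> bool:
    """Allow access only when the role's ceiling covers the data's classification."""
    ceiling = ROLE_MAX_CLASSIFICATION.get(role)
    return ceiling is not None and ceiling.value >= data_class.value

print(can_access("analyst", Classification.SENSITIVE))             # False
print(can_access("accounts_receivable", Classification.SENSITIVE)) # True
```

Denying by default (an unknown role gets nothing) is the design choice that keeps the "need to know" rule intact as people and roles change.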
2. In addition to restricting access to sensitive data, we should implement a process and tools for logging access to data. This likely implies that we have to rid ourselves of generic user accounts -- particularly admin accounts. As part of our regular access reviews, we should review who is accessing what data and make sure that such access is proper. We can review the access logs to refine our access rules. For some systems, there is no way around generic admin accounts -- thus defeating the goal of knowing who accessed what data. For this, look at key management systems in which an admin checks out a key to use the generic admin account. In general, we want to know who is accessing our sensitive data.
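The value of eliminating generic accounts shows up in the access log itself. This sketch (the function and account names are hypothetical) records a named user against every read of a sensitive record -- exactly the attribution a shared "admin" login destroys:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("data_access")

def read_record(user: str, record_id: str) -> dict:
    """Fetch a sensitive record, logging which named user accessed it.

    If 'user' were a shared account like 'admin', this log entry would be
    useless for the access reviews described above -- which is why a key
    checkout system (the checkout ties the generic account back to a
    person) is the fallback when shared accounts cannot be removed.
    """
    log.info("%s accessed record %s at %s",
             user, record_id, datetime.now(timezone.utc).isoformat())
    return {"id": record_id}  # placeholder for the real data lookup

read_record("nnickolaisen", "client-1042")
```

These per-user entries are what make the regular access reviews possible: scan the log, confirm each access was proper, and refine the access rules where it was not.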
3. Review the ways that we internally pass around sensitive client or financial data. Too often, we do this in an ad hoc, thoughtless way. Someone asks a question about a client account and, in response, we send an email that provides account details that should never be sent via email. I have found it helpful to spend some time with the client services and accounts receivable teams to simply observe how they exchange information with each other and with other departments. Then, if this is an issue, implement a more secure approach for this information exchange.
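One lightweight safeguard for that email scenario is to mask account details before they leave in a message. This is a sketch of the idea only -- the pattern below is an assumed stand-in for whatever account-number format an organization actually uses:

```python
import re

# Assumed format: account numbers are unbroken runs of 8-16 digits.
ACCOUNT_PATTERN = re.compile(r"\b\d{8,16}\b")

def redact(message: str) -> str:
    """Replace account-number-like digit runs, keeping only the last four digits."""
    return ACCOUNT_PATTERN.sub(lambda m: "****" + m.group()[-4:], message)

print(redact("Client account 123456789012 is past due."))
# Client account ****9012 is past due.
```

A filter like this does not replace a secure exchange channel; it just reduces the damage when someone answers a quick question over email anyway.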
In my perfect world, those inventing and improving the security tools figure it all out and come up with something comprehensive that battles all the known and future threats to information security. But, as long as people are still falling for the Nigerian prince email scam, a portion of my security posture should assume that I need information security processes in place to compensate for human behavior.
About the author:
Niel Nickolaisen is CTO at O.C. Tanner Co., a human resources consulting company based in Salt Lake City that designs and implements employee recognition programs. A frequent writer and speaker on transforming IT and IT leadership, Nickolaisen holds an M.S. in engineering from MIT, as well as an MBA degree and a B.S. in physics from Utah State University. You can contact Nickolaisen at email@example.com.
For more on how to deal with threats to information security, check out Nickolaisen's column on forming a hybrid security defense and this report from the MIT CIO Symposium on the new cybersecurity criminal.