If hackers want to get into corporate systems, they target the weakest link in the security chain: employees. After all, employees are easy targets because they click on arbitrary links, concoct weak passwords, reuse passwords and even leave their devices in public places, according to Jason Hong, associate professor at Carnegie Mellon University's Human-Computer Interaction Institute in the School of Computer Science.
But Hong is trying to change that, and he's going about it in a novel way that combines technology with the art of influence. He refers to this area of his research as social cybersecurity, or the application of psychology to cybersecurity. The goal? To use social techniques, including peer pressure and what he calls social proof, to get us to buy into the desirability of adopting sound security practices. It's not a stretch to say that Hong and his team are the Mad Men of cybersecurity.
SearchCIO recently caught up with Hong to talk about the social cybersecurity technique called social proof, how this technique may be used in the enterprise and what a social cybersecurity team looks like.
Editor's note: This conversation has been edited for clarity and brevity.
What is social cybersecurity?
Jason Hong: The original name we had for the project was social security, but you can easily see why we didn't choose that one.
The basic idea behind this is that we're looking at how to change people's awareness, knowledge and motivation to be secure. The work is grounded in a discipline known as social psychology. This discipline looks at how people influence one another.
I'll give you a really concrete example. We use a technique known as social proof, which means people tend to do what everyone around them is doing. A common fraternity prank is to have a few fraternity members point up at the sky; the number of passers-by who also look up is actually very high. It's a simple mechanism that most people aren't aware of, but it's very common.
Another example is, let's say you just got off a plane, which way do you go? Do you turn left or right? The simple heuristic is to follow where everyone else is going, and you will probably be going in the right direction. That's another really easy example of social proof.
Can you give an example of how you apply this technique to cybersecurity?
Hong: We did this simple study with Facebook where Facebook was about to post a message on people's status updates that said, 'Extra security settings exist; click here to learn more.' This is part of the company's annual campaign to improve cybersecurity. We had a Ph.D. student doing an internship there. He modified the message to say, '108 of your friends use extra security settings,' or, '10% of your friends use extra security settings,' or, 'Some of your friends use extra security settings.' It's a simple change, but it did improve the click-through rates and adoption of some of these security settings.
And so the basic idea here is that it's a very subtle, but powerful, way of influencing people's awareness, knowledge and motivation to be secure.
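The study's message variants suggest a simple selection rule: prefer the most concrete social-proof framing the data supports. As a hypothetical sketch (the function name, thresholds and fallback order are assumptions for illustration, not details from the Facebook study), it might look like:

```python
def social_proof_message(adopter_count: int, friend_count: int) -> str:
    """Pick a security prompt, preferring the most concrete
    social-proof framing available (thresholds are assumed)."""
    if adopter_count == 0:
        # No social proof available; fall back to the generic prompt.
        return "Extra security settings exist; click here to learn more."
    if adopter_count >= 100:
        # A large absolute number is the most concrete framing.
        return f"{adopter_count} of your friends use extra security settings."
    pct = round(100 * adopter_count / friend_count) if friend_count else 0
    if pct >= 10:
        # A percentage still conveys that adoption is widespread.
        return f"{pct}% of your friends use extra security settings."
    # Vague framing as a last resort, so some social proof is shown.
    return "Some of your friends use extra security settings."
```

The ordering reflects the general finding in social-proof research that more specific, countable evidence of peers' behavior tends to be more persuasive than vague phrasing.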
How will the enterprise benefit from social cybersecurity?
Hong: These [social-proof techniques] could be used, for example, if you want to get people to adopt two-factor authentication or to make sure they really do update their software correctly. Or to just have a better awareness of how many people around them have strong passwords.
Here's another way of thinking about social cybersecurity: I don't know what kinds of cybersecurity practices you have -- I don't know whether you have good passwords or not. I don't know whether you use two-factor authentication or not. And so on. And same with me: You don't know how good my password security is and so on.
This is actually one of the big problems we've seen with respect to cybersecurity: It's pretty invisible. And because it's invisible, it's harder for people to adopt better practices over time, so we can't really learn from each other.
One of the things we're looking at is how do we make some of these cybersecurity practices more visible -- in a safe way -- so that we have better adoption of best practices. We're going to be working with some of the groups here at Carnegie Mellon and also, hopefully, with some companies to try to figure out how to get people to adopt best practices [by having] better awareness of what everyone else around them is doing.
Do psychologists sit next to computer scientists in your lab to develop these techniques?
Hong: My department at Carnegie Mellon University is known as the Human-Computer Interaction Institute, and we have designers and psychologists and computer scientists sitting next to each other. So, most computer science focuses on the computer itself: better networking or better automation or better graphics and so on. In my field of human-computer interaction, we're looking at people and computers together and also looking at how we can improve the system in terms of usability, desirability, utility and so on.
The most common version of it is better user experience and user interface design, but we also have a lot of research like the study I just described, where we have psychologists working with computer scientists to improve cybersecurity.