CIOs have been hearing about the business benefits of robotic process automation, or RPA. These software programs, which mimic how humans interact with their computers to complete specific, repetitive tasks, can lower operational costs, increase accuracy and improve compliance.
Experts and researchers in the field argue that employees, too, benefit from RPA; they're freed from mundane chores to focus on more interesting, higher-level work.
All of which, in the end, can mean big returns on investment. In fact, McKinsey & Co. estimated that the ROI for RPA deployments can be 30% to 200% in the first year.
RPA, however, isn't without risks. Like any technology, it works best when IT and business leaders collaborate to identify the best use cases for automation and to implement the necessary infrastructure and security products to support it.
Here, Saurabh Gupta, chief strategy officer with HfS Research Ltd., and Martin Sokalski, principal in Emerging Technology Risk services at professional services firm KPMG, discuss the unique RPA security requirements when deploying this emerging technology.
What are the top RPA security risks you're seeing?
Martin Sokalski: From a security perspective, I can see a few key risks: fraud and inappropriate access to sensitive data, being able to impact critical system availability and continuity, and abuse of administration privileges and access. Data privacy can be a bucket as well.
Saurabh Gupta: [RPA] security hasn't been a big issue [in the sense that] we haven't seen big breaches. RPA doesn't hold any data on its own. It's only working as a top layer, and sometimes you don't have to integrate it at all. And when breaches happen, they tend to happen in the underlying layer where the data is, not on the top layer. But privacy depends on how you design your solution.
So, is RPA as a technology immune from being hacked?
Gupta: So, we've not seen breaches, and maybe one of the reasons for that is we haven't seen major RPA deployments. Most of them have been fairly small deployments, but it is a real concern.
Sokalski: There are security risks that, at base, stem from human intent. With bots repeating the same action thousands of times, if a rogue administrator or a hacker modifies the session the bot is performing, it can have a pretty profound impact, because it doesn't require a lot of active involvement from the hacker; it only takes a one-time hack into the environment.
Do RPA deployments introduce operational risks?
Gupta: There's a potential scenario that when these transactional activities are automated, no one will go back and improve them. So, what happens when a process changes? How will you reconfigure them? If you deployed RPA for doing A to B to C and you introduce B.1, then you have to go back and reconfigure and test and do everything again.
The second problem is that if someone does introduce another component to the process, there's a danger no one goes back to properly configure it, and the process can be corrupted. The lack of ongoing governance is going to be a major issue around RPA, and enterprises haven't solved for it yet.
Sokalski: The risk will vary for traditional RPA bots -- the ones that do what you tell them to do -- and the bots with AI and cognitive learning. For traditional RPA bots -- the dumb bots -- one of the biggest challenges is change management. If you tell the bot to do something, to, say, walk through the door, it will walk through the door a thousand times. But if you move that door a little to the right, then it will walk into the wall and do that a thousand times. So, you have to manage that change, because if you make a change somewhere in the process or in the system, that will have a profound impact on the bot's ability to perform and do its function.
AI bots, though, need to be protected from themselves. You need to think about how much they can learn themselves; how do you know what they learn and how can you verify that. So, are you going to have a supervised AI model, where you invest resources into training and monitoring so you know that it only learns what it needs to learn? Or is it going to be unsupervised? We suggest you start by thinking about your risk appetite and your risk profile and the kinds of things that can go wrong. Then you can start strategizing.
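Sokalski's "moved door" problem can be sketched in a few lines of code. This is a minimal, hypothetical illustration (not taken from any RPA product): the bot validates that the screen element it expects is still present before acting, and halts for human escalation after repeated failures rather than blindly repeating a broken step thousands of times. All names here (`click_element`, the selector strings) are invented for illustration.

```python
class MissingElementError(Exception):
    """Raised when the UI element a bot step expects is no longer found."""


def click_element(screen: dict, selector: str) -> str:
    """Hypothetical action: 'click' a UI element identified by a selector."""
    if selector not in screen:
        raise MissingElementError(f"selector not found: {selector}")
    return f"clicked {screen[selector]}"


def run_step(screen: dict, selector: str, attempts: int, max_failures: int = 3):
    """Repeat a bot step, but stop and escalate once it clearly breaks."""
    results, failures = [], 0
    for _ in range(attempts):
        try:
            results.append(click_element(screen, selector))
        except MissingElementError:
            failures += 1
            if failures >= max_failures:
                # Escalate to a human instead of "walking into the wall"
                # for the remaining thousands of iterations.
                return results, "halted: selector broken, escalate to operator"
    return results, "completed"


# The "door" is where the bot expects it:
ok_screen = {"#submit": "Submit button"}
# The "door" moved: the old selector no longer matches anything.
moved_screen = {"#submit-v2": "Submit button"}
```

The design point is simply fail-fast plus escalation: the bot never needs to understand *why* the screen changed, it only needs to detect that its assumptions no longer hold and hand control back to a person.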
What are some of the differences between mitigating RPA security risks and mitigating security risks posed by humans doing the task?
Sokalski: When we just had human processes, we developed security mechanisms and protocols to give and restrict access to make sure a human being doesn't do something that he or she shouldn't do. In the world of automation, we're talking about the need for slightly different models. The mechanisms for authenticating bots and restricting bots are different. So are the rules around segregation of duties. There's a different model that has to be applied here for creating restrictions and monitoring access. But we've seen clients who haven't thought through all that, so the bots' access mirrors administrators' access and then they let the bots run wild. And, in those cases, they found they had violations where bots had access to systems they're not supposed to access.
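The alternative to letting a bot mirror an administrator's access, as Sokalski warns against, is a deny-by-default allow-list per bot identity. The sketch below is a hypothetical illustration of that idea; the bot names and permission strings are invented, and a real deployment would enforce this in the identity and access management layer rather than in application code.

```python
# Each bot service account gets its own narrowly scoped allow-list,
# rather than inheriting broad administrator access.
BOT_POLICIES = {
    "invoice-bot": {"erp:read-invoice", "erp:post-payment"},
    "hr-report-bot": {"hris:read-headcount"},
}


def authorize(bot_id: str, permission: str) -> bool:
    """Deny by default: a bot may only do what its policy explicitly allows."""
    return permission in BOT_POLICIES.get(bot_id, set())


def audit(bot_id: str, permission: str) -> str:
    """Log every decision so access violations are visible, not silent."""
    decision = "ALLOW" if authorize(bot_id, permission) else "DENY"
    return f"{decision} {bot_id} -> {permission}"
```

Segregation of duties falls out of the same structure: the invoice bot can post payments but cannot read HR data, and an unknown or newly created bot can do nothing until someone deliberately grants it a policy.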
Are there any other unique RPA issues?
Sokalski: We're seeing a lot of functions outside of IT developing code and developing bots. So, you might have folks in finance building bots, and they might give the bots access to do A, B, C and D. But the capabilities that exist outside of IT around developing automation, developing code and developing bots might not include security capabilities. So, how a bot is designed or what it's designed to do might not be aligned with best practices.
How should organizations be managing these RPA security risks?
Gupta: Given that these are relatively new solutions, do your due diligence. One of the key criteria has to be, 'Does it provide security by design or is it after the fact?' Most of the security in legacy IT is after the fact, but emerging IT solutions -- the SaaS products -- they need to include enterprise security when the design is evolving. You can see [if that's the case] if you talk to other clients and you do your due diligence: going beyond the sales pitch, getting in with the product engineers and having conversations with the developers. All [RPA] vendors will say it's secure, but some vendors put in a lot of investment into this, while others do not.
Sokalski: The quality and maturity of IT might not be there in terms of how to determine access. For example, maybe they haven't defined the key risk indicators that need to be monitored when bots are running. You typically want to define what the bot will be accessing and what it will be doing, and you want to assign it a risk level. If it's going to access personally identifiable information or regulated data or compliance data, you might want to consider that bot to be higher risk, apply more stringent controls around it and make sure you can monitor and trace transactions. You want the ability to monitor key activities and to see if failures occur. So, if the bot is doing something it's not supposed to be doing, or if it's failing or failing at a certain rate, you have full transparency and visibility.
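The monitoring Sokalski describes can be sketched as a simple key-risk-indicator check: each bot is assigned a risk tier (for example, bots touching personally identifiable information rated "high"), and a tighter failure-rate threshold applies to higher-risk bots. The tier names and threshold values below are assumptions for illustration, not figures from KPMG.

```python
# Higher-risk bots get tighter failure-rate thresholds (illustrative values).
FAILURE_THRESHOLDS = {"high": 0.01, "medium": 0.05, "low": 0.10}


def failure_rate(outcomes: list) -> float:
    """outcomes: list of booleans, True = transaction succeeded."""
    if not outcomes:
        return 0.0
    return outcomes.count(False) / len(outcomes)


def evaluate_bot(bot_id: str, risk_tier: str, outcomes: list) -> str:
    """Compare a bot's observed failure rate against its tier's threshold."""
    rate = failure_rate(outcomes)
    threshold = FAILURE_THRESHOLDS[risk_tier]
    if rate > threshold:
        # Surface the breach for alerting/escalation tooling.
        return f"ALERT {bot_id}: failure rate {rate:.2%} exceeds {threshold:.0%}"
    return f"OK {bot_id}: failure rate {rate:.2%}"
```

A 2% failure rate would be acceptable for a low-risk reporting bot but would trip an alert for a bot that posts payments or reads regulated data, which is exactly the "full transparency and visibility" point: the same telemetry, judged against risk-weighted thresholds.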
What is key to successful RPA deployments?
Gupta: A best practice that's distinguishing successful [RPA deployments] is the presence of a center of excellence. We've started to see a lot of automation centers of excellence, composed mostly of IT and business people, that have centralized command and control of these deployments.
Related Q&A from Mary K. Pratt