
Implementing machine learning to keep Facebook user data safe

Facebook Director of Security Aanchal Gupta shares how the social media giant is implementing machine learning in security to ensure user data is safe on its platform.

Machine learning is increasingly used to improve an organization's security posture, but machine learning models still struggle to recognize context. Facebook, therefore, is not just implementing machine learning, but also deciphering ways to advance the technology and better understand that context, according to Aanchal Gupta, director of security at Facebook.

In part three of this Q&A, Gupta delineates how Facebook is implementing machine learning to flag inappropriate content and accounts, including the detection of unrecognized logins.

Editor's note: This interview has been edited for clarity and length.

How is Facebook implementing machine learning in security?

Aanchal Gupta: We recently used machine learning to take down more than half a million accounts related to a financial scam. To give you another example, we also have something called login alerts, [where we send you alerts about unrecognized logins]. How we do that is by looking at your prior login attempts and doing pattern matching on them. If we know, 'This user always comes from California, she usually logs in at 9 p.m. PST, and these are the IP addresses she logs in from,' and then we see the same user logging in from a different country at a very odd time, say the initial login was from California and two hours later it's from China, we know she couldn't have flown that quickly from California to China. That's where machine learning helps us identify these anomalies, and we are able to notify users, saying, 'It seems like there are inappropriate login attempts against your account.'
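To make the idea concrete, the check Gupta describes can be sketched as an "impossible travel" test: compare each new login against the previous one and flag it when the implied travel speed is physically implausible. The Python below is a minimal, hypothetical illustration, not Facebook's implementation; the Login record, threshold and coordinates are all assumptions made for the example.

# A minimal "impossible travel" check: flag a login if the user could not
# plausibly have moved between the two locations in the elapsed time.
# All names and thresholds here are illustrative, not Facebook's.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    timestamp: datetime
    lat: float
    lon: float
    ip: str

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_impossible_travel(prev: Login, curr: Login, max_speed_kmh: float = 900.0) -> bool:
    # 900 km/h is roughly the cruising speed of a commercial jet; anything
    # faster than that between two logins is physically implausible.
    hours = (curr.timestamp - prev.timestamp).total_seconds() / 3600
    if hours <= 0:
        return True
    distance = haversine_km(prev.lat, prev.lon, curr.lat, curr.lon)
    return distance / hours > max_speed_kmh

# Example: a 9 p.m. California login followed two hours later by one from China.
prev = Login(datetime(2019, 6, 1, 21, 0), 37.77, -122.42, "203.0.113.10")   # San Francisco
curr = Login(datetime(2019, 6, 1, 23, 0), 31.23, 121.47, "198.51.100.7")    # Shanghai
print(is_impossible_travel(prev, curr))  # True -> trigger a login alert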


How does implementing machine learning in security help identify the content that you are trying to flag?

Gupta: To keep our platforms safe for our users, we definitely have to use machine learning to identify inappropriate content and then quickly remove it so that our users have a very safe platform to use and connect with other users.

Gupta: The way machine learning works is basically pattern recognition, and you have to train the machine learning model to do that pattern recognition. To take a very rudimentary, baseline example: say you are trying to identify the use of bad language, so you give your machine learning model a list of bad words. That's how you start to train the model. As the algorithm works, it will start to identify those patterns and say, 'OK, yes, these are the bad words we have identified.'
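A hedged sketch of that seed-list idea: a short list of "bad words" provides weak labels for a handful of example posts, and a simple classifier (scikit-learn here, chosen only for illustration) learns the pattern so it can score wording the list never mentioned. The word list, posts and model choice are all hypothetical, not Facebook's.

# A rudimentary sketch of the seed-list approach described above: a small
# list of "bad words" weakly labels some example posts, and a simple
# classifier then learns the pattern. Illustrative only; real content
# moderation models are far more sophisticated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

BAD_WORDS = {"scam", "idiot", "garbage"}   # hypothetical seed list

posts = [
    "what a scam, total garbage",
    "you are an idiot",
    "lovely photos from the trip",
    "congrats on the new job",
    "this garbage scam again",
    "happy birthday, have a great day",
]
# Weak labels: 1 if any seed word appears in the post, else 0 (crude substring match).
labels = [int(any(w in p for w in BAD_WORDS)) for p in posts]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts)
model = LogisticRegression().fit(X, labels)

# The trained model scores new posts, including wording not in the seed list.
new_posts = ["another obvious scam", "see you at dinner tonight"]
scores = model.predict_proba(vectorizer.transform(new_posts))[:, 1]
print(list(zip(new_posts, scores.round(2))))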

Then we send those to our team for manual review, to be doubly sure, because sometimes it's hard for machine learning to get the context. We are investing more in how we can advance to the level where machine learning can also start to understand the context.
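That human-in-the-loop step can be sketched as a simple routing rule: content the model flags with very high confidence is actioned automatically, while borderline scores go to reviewers who can supply the context the model lacks. The thresholds and function below are hypothetical and for illustration only.

# A minimal sketch of routing model output to manual review.
# Thresholds and the routing function are assumptions, not Facebook's pipeline.
AUTO_REMOVE_THRESHOLD = 0.95
REVIEW_THRESHOLD = 0.60

def route_flagged_content(post_id: str, score: float) -> str:
    # score is the model's probability that the content violates policy.
    if score >= AUTO_REMOVE_THRESHOLD:
        return f"remove {post_id} automatically"
    if score >= REVIEW_THRESHOLD:
        return f"send {post_id} to manual review"   # reviewers supply the missing context
    return f"leave {post_id} up"

print(route_flagged_content("post-123", 0.97))
print(route_flagged_content("post-456", 0.72))
print(route_flagged_content("post-789", 0.10))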

In part one of the interview, Gupta talked about how Facebook is working to scale security; in part two she discussed how Facebook is building diverse security teams.
