Google’s AI security strategy

Google has a new plan to help organisations protect their AI systems from an emerging wave of cyber risks by applying basic security controls.

When a new tech trend starts to catch on, businesses and customers often don’t think about security and privacy until after the fact.

One example is social media, where people were so excited to meet new people on new platforms that they paid little attention to how user data was gathered, shared, or kept safe.

Google worries that the same will happen with AI systems as companies rush to build them and fold them into their work processes.

What they’re saying: Phil Venables, CISO at Google Cloud, told Axios, “We want people to remember that many of the risks of AI can be managed by some of these basic elements.”

“Even if people are looking for more advanced methods, they should remember that they also need to get the basics right.”


Google’s Secure AI Framework asks companies to do six things:

Check which existing security controls, such as data encryption, can be quickly extended to new AI systems (a small illustrative sketch follows this list);

Add AI-specific threats to the threat intelligence work that is already being done;

Automate the company’s cyber defences so they can react quickly to any anomalous activity targeting AI systems;

Regularly review the security measures in place around AI models;

Continuously run penetration tests on these AI systems and make changes based on the findings;

And finally, build a team of people who understand AI risks and can help determine where those risks fit into an organisation’s overall plan for reducing business risk.
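
To make the first item concrete, here is a minimal, hypothetical sketch of reusing an existing control, encryption at rest, for an AI asset such as a fine-tuning dataset. It assumes the widely used Python cryptography package and an invented file name; it illustrates the general idea, not any specific part of Google’s framework.

```python
# Minimal sketch: reuse an existing control (encryption at rest) for AI assets.
# Assumptions: the `cryptography` package is installed, and "finetune_data.jsonl"
# is a hypothetical training file. Illustrative only; not Google's framework.
from cryptography.fernet import Fernet


def encrypt_file(path: str, key: bytes) -> str:
    """Encrypt a file's contents and write them to <path>.enc."""
    f = Fernet(key)
    with open(path, "rb") as fh:
        ciphertext = f.encrypt(fh.read())
    out_path = path + ".enc"
    with open(out_path, "wb") as fh:
        fh.write(ciphertext)
    return out_path


def decrypt_file(enc_path: str, key: bytes) -> bytes:
    """Decrypt a previously encrypted file and return the plaintext bytes."""
    f = Fernet(key)
    with open(enc_path, "rb") as fh:
        return f.decrypt(fh.read())


if __name__ == "__main__":
    # In practice the key would come from the key-management service the
    # organisation already uses to protect its other data, not be generated ad hoc.
    key = Fernet.generate_key()
    encrypted = encrypt_file("finetune_data.jsonl", key)
    print(f"Training data encrypted at rest: {encrypted}")
```

The point of the sketch is that nothing here is AI-specific: the same key management and encryption workflow already applied to other corporate data can cover AI training data and model artifacts.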

Between the lines: Venables said that many of these security practices are ones organisations already use in other parts of their business. “We realised pretty quickly that most of the ways you think about securing the use and development of AI are similar to how you think about securing access to data,” he said.

The catch is that Google is still working with its customers and governments to determine how to apply these ideas and get them adopted.

In a blog post, the company says it is also expanding its bug bounty programme to accept reports of flaws related to AI safety and security.

The next step: Venables said that Google would ask its business partners and government groups for feedback on its framework.

Venables said, “We think we know a lot about these things from our history, but we’re not so proud as to think that people can’t tell us how to improve.”
