The AI increase is amplifying dangers throughout enterprise information estates and cloud environments, based on cybersecurity knowledgeable Liat Hayun.
In an interview with TechRepublic, Hayun, VP of product administration and analysis of cloud safety at Tenable, suggested organisations to prioritise understanding their danger publicity and tolerance, whereas prioritising tackling key issues like cloud misconfigurations and defending delicate information.
She famous that whereas enterprises stay cautious, AI’s accessibility is accentuating sure dangers. Nevertheless, she defined that CISOs right this moment are evolving into enterprise enablers — and AI might finally function a strong software for bolstering safety.
How AI is affecting cybersecurity, information storage
TechRepublic: What’s altering within the cybersecurity atmosphere on account of AI?
Liat: To begin with, AI has change into way more accessible to organisations. If you happen to look again 10 years in the past, the one organisations creating AI needed to have this specialised information science group that had PhDs in information science and statistics to have the ability to create machine studying and AI algorithms. AI has change into a lot simpler for organisations to create; it’s nearly similar to introducing a brand new programming language or new library into their atmosphere. So many extra organisations — not simply giant organisations like Tenable and others — but additionally any start-ups can now leverage AI and introduce that into their merchandise.
SEE: Gartner Tells Australian IT Leaders To Undertake AI At Their Personal Tempo
The second factor: AI requires plenty of information. So many extra organisations want to gather and retailer increased volumes of information, which additionally typically has increased ranges of sensitivity. Earlier than, my streaming service would have solely saved only a few particulars on me. Now, possibly my geography issues, as a result of they’ll create extra particular suggestions primarily based on that, or my age and my gender, and so forth. As a result of they’ll now use this information for his or her enterprise functions — to generate extra enterprise — they’re now way more motivated to retailer that information in increased volumes and with rising ranges of sensitivity.
TechRepublic: Is that feeding into rising utilization of the cloud?
Liat: If you wish to retailer plenty of information, it’s a lot simpler to try this within the cloud. Each time you determine to retailer a brand new sort of information, it will increase the quantity of information you’re storing. You don’t must go inside your information heart and order new volumes of information to put in. You simply click on, and bam, you’ve a brand new information retailer location. So the cloud has made it a lot simpler to retailer information.
These three parts kind a type of circle that feeds itself. As a result of if it’s simpler to retailer information, you possibly can improve extra AI capabilities, and you then’re motivated to retailer much more information, and so forth. In order that’s what occurred on the earth in the previous couple of years — since LLMs have change into a way more accessible, frequent functionality for organisations — introducing challenges throughout all these three verticals.
Understanding the safety dangers of AI
TechRepublic: Are you seeing particular cybersecurity dangers rise with AI?
Liat: Using AI in organisations, not like the usage of AI by particular person individuals the world over, remains to be in its early phases. Organisations need to ensure that they’re introducing it in a method that, I’d say, doesn’t create any pointless danger or any excessive danger. So when it comes to statistics, we nonetheless solely have just a few examples, and they don’t seem to be essentially a superb illustration as a result of they’re extra experimental.
One instance of a danger is AI being educated on delicate information. That’s one thing we’re seeing. It’s not as a result of organisations will not be being cautious; it’s as a result of it’s very tough to separate delicate information from non-sensitive information and nonetheless have an efficient AI mechanism that’s educated on the proper information set.
The second factor we’re seeing is what we name information poisoning. So, even if in case you have an AI agent that’s being educated on non-sensitive information, if that non-sensitive information is publicly uncovered, as an adversary, as an attacker, I can insert my very own information into that publicly uncovered, publicly accessible information storage and have your AI say issues that you just didn’t intend it to say. It’s not this all-knowing entity. It is aware of what it’s seen.
TechRepublic: How ought to organisations weigh the safety dangers of AI?
Liat: First, I’d ask how organisations can perceive the extent of publicity they’ve, which incorporates the cloud, AI, and information … and every thing associated to how they use third-party distributors, and the way they leverage completely different software program of their organisation, and so forth.
SEE: Australia Proposes Obligatory Guardrails for AI
The second half is, how do you establish the crucial exposures? So if we all know it’s a publicly accessible asset with a high-severity vulnerability to it, that’s one thing that you just in all probability need to handle first. Nevertheless it’s additionally a mix of the influence, proper? If in case you have two points which are very related, and one can compromise delicate information and one can’t, you need to handle that first [issue] first.
You additionally must know which steps to take to handle these exposures with minimal enterprise influence.
TechRepublic: What are some huge cloud safety dangers you warn in opposition to?
Liat: There are three issues we normally advise our clients.
The primary one is on misconfigurations. Simply due to the complexity of the infrastructure, complexity of the cloud, and all of the applied sciences it supplies, even when you’re in a single cloud atmosphere — however particularly when you’re going multi-cloud — the probabilities of one thing changing into a difficulty simply because it wasn’t configured accurately remains to be very excessive. In order that’s undoubtedly one factor I’d concentrate on, particularly when introducing new applied sciences like AI.
The second is over-privileged entry. Many individuals suppose their organisation is tremendous safe. But when your home is a fort, and also you’re giving your keys out to everybody round you, that’s nonetheless a difficulty. So extreme entry to delicate information, to crucial infrastructure, is one other space of focus. Even when every thing is configured completely and also you don’t have any hackers in your atmosphere, it introduces extra danger.
The side individuals take into consideration probably the most is to establish malicious or suspicious exercise as early because it occurs. That is the place AI may be taken benefit of; as a result of if we leverage AI instruments inside our safety instruments inside our infrastructure, we are able to use the truth that they’ll have a look at plenty of information, they usually can do that basically quick, to have the ability to additionally establish suspicious or malicious behaviors in an atmosphere. So we are able to handle these behaviors, these actions, as early as doable earlier than something crucial is compromised.
Implementing AI ‘too good of a chance to overlook out on’
TechRepublic: How are CISOs approaching the dangers you’re seeing with AI?
Liat: I’ve been within the cybersecurity business for 15 years now. What I really like seeing is most safety consultants, most CISOs, are not like what they was like a decade in the past. Versus being a gatekeeper, versus saying, “No, we are able to’t use this as a result of it’s dangerous,” they’re asking themselves, “How can we use this and make it much less dangerous?” Which is an superior development to see. They’re changing into extra of an enabler.
TechRepublic: Are you seeing the great aspect of AI, in addition to the dangers?
Liat: Organisations have to suppose extra about how they’re going to introduce AI, quite than pondering “AI is simply too dangerous proper now”. You’ll be able to’t try this.
Organisations that don’t introduce AI within the subsequent couple of years will simply keep behind. It’s an incredible software that may profit so many enterprise use circumstances, internally for collaboration and evaluation and insights, and externally, for the instruments we are able to present our clients. There’s simply too good of a chance to overlook out on. If I may also help organisations obtain that mindset the place they are saying, “OK, we are able to use AI, however we simply have to take these dangers under consideration,” I’ve finished my job.”