Weaponizing generative AI | InfoWorld



Worsening that scenario is the fact that developers increasingly are saving time by using AI to write bug reports. Such “low-quality, spammy, and LLM [large language model]-hallucinated security reports,” as Python’s Seth Larson calls them, overload project maintainers with time-wasting garbage, making it harder to maintain the security of the project. AI is also responsible for introducing bugs into software, as Symbiotic Security CEO Jerome Robert details. “GenAI platforms, such as [GitHub] Copilot, learn from code posted to sites like GitHub and have the potential to pick up some bad habits along the way” because “security is a secondary objective (if at all).” GenAI, in other words, is highly impressionable and will regurgitate the same bugs (or racist commentary) that it picks up from its source material.

What, me worry?

None of this matters so long as we’re just using generative AI to wow people on X with yet another demo of “I can’t believe AI can create a video I’d never pay to watch.” But as genAI is increasingly used to build all the software we use… well, security matters. A lot.

Unfortunately, it doesn’t yet matter to OpenAI and the other companies building large language models. According to the newly released AI Safety Index, which grades Meta, OpenAI, Anthropic, and others on risk and safety, commercial LLMs are, as a group, on track to flunk out of their freshman year in AI college. The best-performing company, Anthropic, earned a C. As Stuart Russell, one of the report’s authors and a UC Berkeley professor, opines, “Although there is a lot of activity at AI companies that goes under the heading of ‘safety,’ it is not yet very effective.” Further, he says, “None of the current activity provides any kind of quantitative guarantee of safety; nor does it seem possible to provide such guarantees given the current approach to AI via giant black boxes trained on unimaginably vast quantities of data.” Not overly encouraging, right?
