Inicio Information Technology Autonomous and credentialed: AI brokers are the following cloud threat

Autonomous and credentialed: AI brokers are the following cloud threat

0
Autonomous and credentialed: AI brokers are the following cloud threat



In April, Anthropic’s CISO made an eye-opening prediction: inside the subsequent 12 months, AI-powered digital workers with company credentials will start working throughout the enterprise. These brokers received’t simply help workflows — they’ll grow to be a part of the workforce.

The enterprise case is clear: AI brokers promise scalable automation, diminished overhead, and tireless productiveness. Salesforce is already making this a actuality, lately introducing AI “digital teammates.” AI agent deployments are anticipated to develop 327% in the course of the subsequent two years, however from the vantage level of cybersecurity, this evolution introduces a risky mixture of innovation and threat. We’re not simply giving software program system entry — we’re giving id, autonomy, and decision-making capabilities. That adjustments how organizations strategy safety solely.

Autonomous, credentialed, and susceptible

Let’s be clear: These AI brokers should not instruments within the conventional sense. In contrast to standard automation or service accounts, these brokers act as authenticated customers working beneath company credentials, making choices, interacting with programs and information, and in some instances, executing delicate duties. Which means they are going to have the identical entry and arguably pose the identical dangers as a human worker.

However not like people, AI brokers don’t perceive context, intent, or penalties the way in which we do. They are often tricked, manipulated, or coerced by methods like prompt injection or adversarial inputs. We’ve lengthy accepted that people are the weakest hyperlink in safety—phishing and social-engineering schemes prey on our psychology—however AI brokers introduce a fair softer goal: They take issues at face worth, don’t name the assistance desk, and function at machine pace. As soon as compromised, they may function a persistent, high-bandwidth assault floor buried deep inside a corporation’s atmosphere.

Rethinking safety within the AI age

Conventional safety instruments have been designed round human conduct: logins, passwords, and entry/privilege ranges. AI workers break these assumptions. Non-human identities, which already far outnumber human customers, have gotten the dominant drive in cloud environments.

As cloud investments proceed to skyrocket, citing AI as the highest driver, and extra AI brokers are deployed within the cloud, organizations should flip in direction of a brand new age of AI safety instruments that may correctly safe all that AI has to supply, particularly questions round:

  • What degree of autonomy and authority will AI brokers have contained in the enterprise?
  • How do you monitor privilege exercise and detect deviations?
  • Can these brokers be exploited or jailbroken through immediate injection or adversarial inputs?
  • What information are these brokers being skilled on?

The following insider risk

AI introduces new, unproven elements to your software stack – infrastructure, fashions, datasets, instruments and plugins. And now, AI innovation is accelerating even sooner with the introduction of brokers. In contrast to LLMs, brokers cause, act autonomously, and coordinate with different brokers. AI brokers can have steady entry, received’t sleep or take holidays, and might be deployed at scale throughout a number of departments. That is bringing new complexity to organizations’ environments and introduces new safety dangers. One compromised agent may doubtlessly do extra injury in minutes than a malicious insider may accomplish in months.

AI workers could quickly rival, or exceed, insiders as probably the most harmful risk vector. OWASP lately printed its Agentic AI Threats and Mitigation highlighting rising threats reminiscent of immediate injection, instrument misuse, id spoofing and extra. Much more so, current analysis from Unit 42 discovered immediate injection stays one of the crucial potent and versatile assault vectors, able to leaking information, misusing instruments, or subverting agent conduct.

We’ve spent years constructing defenses across the human aspect. Now we should flip that very same, and even fiercer, rigor towards the machines performing in our identify.

Taking motion

Palo Alto Networks lately launched Prisma AI Runtime Security (AIRS) designed to assist organizations uncover, assess, and shield each AI app, mannequin, dataset, and agent of their atmosphere. With Prisma AIRS, organizations obtain a complete platform that gives:

  • AI Mannequin Scanning – Safely undertake AI fashions by scanning them for vulnerabilities. Safe your AI ecosystem towards dangers, reminiscent of mannequin tampering, malicious scripts, and deserialization assaults.
  • AI-Security Posture Management – Achieve perception into safety posture dangers related along with your AI ecosystem, reminiscent of extreme permissions, delicate information publicity, platform misconfigurations, entry misconfigurations, and extra.
  • AI Purple Teaming – Uncover potential publicity and lurking dangers earlier than dangerous actors do. Carry out automated penetration exams in your AI apps and fashions utilizing our Purple Teaming agent that stress exams your AI deployments, studying and adapting like an actual attacker.
  • Runtime Safety – Defend LLM-powered AI apps, fashions, and information towards runtime threats, reminiscent of immediate injection, malicious code, poisonous content material, delicate information leak, useful resource overload, hallucination, and extra.
  • AI Agent Safety – Safe brokers (together with these constructed on no-code/low-code platforms) towards new agentic threats, reminiscent of id impersonation, reminiscence manipulation, and power misuse.

As AI reshapes how enterprises function and the way assaults unfold, Prisma AIRS strikes simply as quick. Enterprises can confidently embrace the way forward for AI with Prisma AIRS.

Learn right here how Palo Alto Networks Prisma AIRS, the world’s most complete AI safety platform helps organizations safe all AI apps, brokers, fashions and information.

DEJA UNA RESPUESTA

Por favor ingrese su comentario!
Por favor ingrese su nombre aquí