Identity Security and the AI Adoption Maturity Curve
Opal supports your business with a single system of record to decide who or what may access which systems. It's the only way your business can move fast without breaking things, even when digital employees sit alongside human ones.
Over the past three years, enterprise AI has evolved from pilot projects to systems that form the fabric of daily work. What began as employees using LLMs to speed up writing and research is now reshaping software delivery, and it’s on the cusp of changing how companies execute entire business processes. Across hundreds of conversations, one pattern keeps repeating – AI adoption follows a predictable, three-phase maturity curve:
Phase 1 starts with organizations’ adoption of LLMs for knowledge work.
Phase 2 consists of enterprise-wide deployments of AI coding agents that interact with code, composable pipelines, and cloud.
Phase 3 entails the rollout of AI agents that act as “digital employees” to automate tasks end‑to‑end.
At every step, identity becomes the control plane. The question isn’t only “what can the AI do?”—it’s “who or what should be allowed to do it, where, for how long, and under which approvals?” That is the problem Opal exists to solve. Allow us to explain how the Opal Security platform secures and accelerates each phase of this adoption curve.

Phase 1: LLMs for Knowledge Work (2023–2024)
Phase 1 began when companies started rolling out enterprise contracts for tools like ChatGPT, Claude, and Gemini. They derived immediate value: faster summarization, better first drafts, accelerated analysis. But the moment those assistants connected to corporate systems (e.g., email, document repositories, chat, issue trackers, data platforms), their security posture changed irreversibly. Data now flows through a defined path, one that is only as safe as the permissions behind it. The winning pattern in this phase has been to treat LLM access like any other privileged capability: explicitly granted, scoped to a purpose, time‑bound, and fully logged. In practice, that means deciding which teams may use which providers, which enterprise connectors are allowed, and which data domains are off‑limits; it also means that access should expire by default and be tied to a clear business justification.
This is where Opal’s approach is most visible. Opal integrates with enterprise-grade LLM providers and brings their identity and access management (IAM) controls into a single coherent system. Company policy might dictate that a legal team may use a particular model for contract review yet be blocked from connecting to customer PII, or that a finance group gets read‑only access to internal knowledge bases for quarter‑close analysis but is barred from linking to production data warehouses. Access is issued just‑in‑time (JIT) for a defined task and duration, and all of it (who requested, who approved, what scopes were granted, what systems were touched) is preserved as one auditable trail. The result is not only safer usage; it’s the ability for security and compliance teams to answer, in seconds: “who could do what, and when?”

Phase 2: AI Coding Agents to Accelerate Engineering Velocity (2024–2025)
Phase 2 shifts the center of gravity to engineering teams. Coding agents such as Cursor, Claude Code, OpenAI Codex, and emerging tools like Cognition Devin are no longer just autocomplete. They scaffold features, open pull requests, comment on reviews, invoke CI/CD jobs, and sometimes touch cloud resources. Here the risks are different: if an agent holds broad write access across repositories, a single mistake can create a supply‑chain issue; if it can trigger deployments without additional checks, a minor code change can propagate into production; if it manages static credentials, the blast radius of a leak is large.
Opal’s job in this phase is to turn the “agent” from a vague concept into a first‑class identity with owners, purpose, and least‑privilege scopes across repos, pipelines, and cloud. An engineering assistant might read many repositories but write only to feature branches; merges into protected branches would require a human approval captured by Opal’s workflow. A CI job that deploys to staging might acquire ephemeral credentials for two hours, tied to a specific ticket, and then automatically lose them once the issue is verified as resolved and the ticket is closed. Secrets are replaced with brokered, scoped tokens that expire, reducing the lifetime of any mistake to minutes instead of months. Most importantly, every commit, pipeline run, and permission elevation is attributable: you can trace a change from request to approval to execution with a clear chain of custody.
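The brokered, ephemeral token pattern described above can be sketched as follows. The `TokenBroker` class, its method names, and the scope strings are all hypothetical, not a real Opal or vendor interface:

```python
import secrets
from datetime import datetime, timedelta, timezone

class TokenBroker:
    """Issues short-lived, scoped tokens in place of static secrets."""

    def __init__(self):
        self._tokens = {}

    def issue(self, agent: str, scopes: list, ticket: str,
              ttl: timedelta = timedelta(hours=2)) -> str:
        # Each token is tied to an agent identity, a ticket, and an expiry.
        token = secrets.token_urlsafe(32)
        self._tokens[token] = {
            "agent": agent,
            "scopes": frozenset(scopes),
            "ticket": ticket,
            "expires": datetime.now(timezone.utc) + ttl,
        }
        return token

    def authorize(self, token: str, scope: str) -> bool:
        # Expired or unknown tokens fail closed.
        meta = self._tokens.get(token)
        if meta is None or datetime.now(timezone.utc) >= meta["expires"]:
            return False
        return scope in meta["scopes"]

    def revoke_for_ticket(self, ticket: str) -> None:
        # Closing the ticket revokes every token issued for it.
        self._tokens = {t: m for t, m in self._tokens.items()
                        if m["ticket"] != ticket}

broker = TokenBroker()
token = broker.issue("ci-deployer", ["deploy:staging"], ticket="ENG-123")
```

Because the token dies with the ticket (or the TTL, whichever comes first), a leaked credential is worth minutes rather than months.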
Phase 3: AI Agents as “Digital Employees” for Full Automation (2026–)
Phase 3 will be marked by the moment at which AI stops assisting individuals and starts executing business processes via AI agents. The earliest wave, deterministic agents operating under rule-driven and human-in-the-loop controls, will increasingly see production deployment in 2026, handling bounded tasks like generating purchase orders under a limit, adjudicating low-risk access requests, or triaging routine support tickets. Non-deterministic agents that can fully automate multi-system workflows as digital employees will follow, planning and acting across operations, finance, and security environments. The upside is enormous, but a new class of risk arises: actions now carry dollar amounts, compliance implications, and reputational consequences.
In this world, identity governance must be as nuanced as the work itself. Every agent needs a record that names its owner, purpose, risk tier, allowed systems, and expiration. Guardrails should scale with impact: routine tasks might be auto‑approved within a sandbox; medium‑risk actions could require a single approver and enforce budget caps; high‑risk operations should demand dual control and separation of duties. Access must be contextual and ephemeral, meaning the grant must be made “just in time” for a single task and then revoked. Auditability can’t stop at “which or whose API key did it use?” It has to tell the full story: which agent, owned by whom, executed which action against which system, under which policy, for what reason, and at what cost.
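A minimal agent record along these lines might look like the following sketch. Every field and value here is illustrative rather than a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AgentRecord:
    """Illustrative identity record every digital employee would carry."""
    agent_id: str
    owner: str                  # a named human; 'ownerless' is not allowed
    purpose: str
    risk_tier: str              # "low" | "medium" | "high"
    allowed_systems: frozenset  # explicit allowlist of systems
    expires: date               # the record itself has an expiration

    def can_touch(self, system: str, today: date) -> bool:
        # Access requires both an allowlisted system and an unexpired record.
        return system in self.allowed_systems and today < self.expires

# Example: a purchase-order agent scoped to the ERP only.
po_bot = AgentRecord(
    agent_id="po-bot-01",
    owner="alice@example.com",
    purpose="Generate purchase orders under $5k",
    risk_tier="medium",
    allowed_systems=frozenset({"erp"}),
    expires=date(2027, 1, 1),
)
```

Everything an incident responder needs (who owns it, what it may do, when it stops) lives in one record instead of being reconstructed after the fact.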
Opal provides that operating model. Deterministic workflows encode the preconditions that must be true before an action is even requestable. Non‑deterministic workflows inherit risk‑based policies that determine when a human must step in and when a system‑issued budget or scope boundary should block the operation. The same policy can cover SaaS applications, data platforms, cloud accounts, and internal tools so that one request orchestrates coordinated, scoped grants across everything the agent needs and nothing it doesn’t.
What makes this approach sustainable across all three phases is its consistency. Start with inventory and ownership so there is no such thing as an unowned user or agent. Define scopes that make sense to the business (like data domains, actions, environments) so approvals are meaningful rather than rubber stamps. Grant access with context: who is asking, what is the purpose, where will it run, how long is it needed, and what is the risk tier? Replace standing credentials with ephemeral ones so time becomes your ally. Then monitor and review: detect drift, re-certify high‑impact access, and enforce separation of duties where it matters most. The same model that keeps LLM usage sane in 2024 is the one that will keep digital employees safe in 2026.

Avoiding Common Pitfalls
The pitfalls are predictable and avoidable. Turning an LLM on for everyone invites shadow super‑apps that silently connect to sensitive systems. Letting agents store and use static keys guarantees that they will eventually leak. Granting repo‑wide write permissions and unbounded pipeline rights expands the blast radius far beyond any single change. Allowing “ownerless” agents turns incident response into archaeology. Approvals without context create the illusion of control while offering none. Each of these anti‑patterns is solved by returning to identity: explicit ownership, scoped access, expirations by default, and a paper trail that can withstand audits and post‑mortems.
If you want to measure progress, watch for a handful of signals that cut across tooling and teams. Over time, most access (human or agent) should be just‑in‑time and time‑bound rather than permanent. The median approval for standard, low‑risk requests should compress to minutes because the safe path is automated. The number of systems governed by policy should grow as you pull more SaaS, data, and cloud into a single control plane. Recertifications should actually complete, not linger. Static secrets should disappear in favor of ephemeral tokens. And the incidents you do see should be smaller in scope because least privilege limits what could go wrong.
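These signals are straightforward to compute from a grant log. A rough sketch, assuming each grant record carries a `jit` flag, a `risk` tier, and an `approval_minutes` field (all hypothetical field names):

```python
from statistics import median

def adoption_signals(grants: list) -> dict:
    """Summarize two of the progress signals from a list of grant records."""
    jit_share = sum(1 for g in grants if g["jit"]) / len(grants)
    low_risk = [g["approval_minutes"] for g in grants if g["risk"] == "low"]
    return {
        # Share of access that is just-in-time rather than standing.
        "jit_share": jit_share,
        # Median approval latency for low-risk requests, in minutes.
        "median_low_risk_approval_min": median(low_risk) if low_risk else None,
    }

signals = adoption_signals([
    {"jit": True,  "risk": "low",  "approval_minutes": 3},
    {"jit": True,  "risk": "low",  "approval_minutes": 5},
    {"jit": False, "risk": "high", "approval_minutes": 120},
])
```

Trending `jit_share` toward 1.0 and the low-risk median toward minutes is a concrete way to track the shift described above.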
| Risk Tier | Approvals | Budget Limits | Separation of Duties | Credential Type | Example Actions |
|---|---|---|---|---|---|
| Low | Auto-approved within sandbox | Capped per task | Not required | Ephemeral, scoped | Marketing draft, ops triage |
| Medium | Single approver (owner only) | Team/monthly caps | Recommended | Ephemeral, scoped | CRM update, staging deploy |
| High | Dual control (owner + controller) | PO/transaction thresholds | Enforced SoD | Ephemeral only, time-boxed | PO creation, prod access change |
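The tiers above can be encoded as plain policy data so every request is routed through the same controls. The structure below is an illustrative sketch, not a real policy format:

```python
# Risk-tier policy table encoded as data (illustrative values only).
POLICY = {
    "low":    {"approvers": 0, "separation_of_duties": False},
    "medium": {"approvers": 1, "separation_of_duties": False},
    "high":   {"approvers": 2, "separation_of_duties": True},
}

def route(request: dict) -> dict:
    """Return the controls a request must clear before execution."""
    tier = POLICY[request["risk_tier"]]
    controls = {
        "credential": "ephemeral",          # never standing, at any tier
        "approvers_needed": tier["approvers"],
    }
    if tier["separation_of_duties"]:
        # High tier: the requester can never be one of the approvers.
        controls["requester_may_not_approve"] = True
    return controls
```

Keeping the tiers as data rather than scattered if-statements means one table governs SaaS, data, cloud, and internal tools alike.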
A Practical Onramp
Getting started rarely requires a big‑bang program. In Phase 1, connect your LLM providers to Opal, map departments to allowlisted providers and data domains, and turn on centralized logging. In Phase 2, register coding agents as identities, narrow their repository and pipeline scopes, and require JIT elevation for merges and deploys. In Phase 3, stand up an agent catalog, assign owners and risk tiers, encode separation of duties and budget caps, and schedule periodic reviews. Each step builds on the last—none is wasted effort.
AI is marching from prompt to production, and it’s not stopping. As it progresses, identity remains the most durable way to stay in control. Opal supports your business with a single system of record to decide who or what may access which systems, at what level, for how long, with which approvals—and to prove it after the fact. That’s how you move fast without breaking things, in 2024 and in 2026 when digital employees sit alongside human ones.
If you’re ready to establish an identity platform to tame your AI control plane across LLMs, coding agents, and production automation, connect your providers and systems to Opal and turn on the guardrails that let you scale with confidence.
We have officially released our LLM and agent connectors to secure agent identities in the enterprise. Reach out to us to share your specific agent workflows and corresponding security needs.
Opal supports your business with a single system of record to decide who or what may access which systems. It's the only way your business can move fast without breaking things, even when digital employees sit alongside human ones.
Over the past three years, enterprise AI has evolved from pilot projects to systems that form the fabric of daily work. What began as employees using LLMs to speed up writing and research is now reshaping software delivery, and it’s on the cusp of changing how companies execute entire business processes. Across hundreds of conversations, one pattern keeps repeating – AI adoption follows a predictable, three-phase maturity curve:
Phase 1 starts with organizations’ adoption of LLMs for knowledge work.
Phase 2 consists of enterprise-wide deployments of AI coding agents that interact with code, composable pipelines, and cloud.
Phase 3 entails the rollout of AI agents that act as “digital employees” to automate tasks end‑to‑end.
At every step, identity becomes the control plane. The question isn’t only “what can the AI do?”—it’s “who or what should be allowed to do it, where, for how long, and under which approvals?” That is the problem Opal exists to solve. Allow us to explain how the Opal Security platform secures and accelerates each phase of this adoption curve.

Phase 1: LLMs for Knowledge Work (2023–2024)
Phase 1 began when companies started rolling out enterprise contracts for tools like ChatGPT, Claude, and Gemini. They derived immediate value: faster summarization, better first drafts, accelerated analysis. But the moment those assistants connected to corporate systems (i.e. email, document repositories, chat, issue trackers, data platforms), their security posture changed irreversibly. Data now flows through a defined path, one that was only as safe as the permissions behind it. The winning pattern in this phase has been to treat LLM access like any other privileged capability: explicitly granted, scoped to a purpose, time‑bound, and fully logged. In practice, that means deciding which teams may use which providers, which enterprise connectors are allowed, and which data domains are off‑limits; it also means that access should expire by default and be linked to a clear business justification.
This is where Opal’s approach is most visible. Opal integrates with every enterprise-grade LLM provider and brings their identity and access controls (IAM) into a single coherent system. Company policy dictates that a legal team is allowed to use a particular model for contract review, yet blocked from connecting to customer PII; a finance group is granted read‑only access to internal knowledge bases for quarter‑close analysis but barred from linking to production data warehouses. Access is thus issued just‑in‑time (JIT) for a defined task and duration, and all of it (i.e. who requested, who approved, what scopes were granted, what systems were touched) is preserved as one auditable trail. The result is not only safer usage; it’s the ability for security and compliance teams to answer, in seconds: “who could do what, and when?”

Phase 2: AI Coding Agents to Accelerate Engineering Velocity (2024–2025)
Phase 2 shifts the center of gravity to engineering teams. Coding agents such as Cursor, Claude Code, OpenAI Codex, and emerging tools like Cognition Devin are no longer just autocomplete. They scaffold features, open pull requests, comment on reviews, invoke CI/CD jobs, and sometimes touch cloud resources. Here the risks are different: if an agent holds broad write access across repositories, a single mistake can create a supply‑chain issue; if it can trigger deployments without additional checks, a minor code change can propagate into production; if it manages static credentials, the blast radius of a leak is large.
Opal’s job in this phase is to turn the “agent” from a vague concept into a first‑class identity with owners, purpose, and least‑privilege scopes across repos, pipelines, and cloud. An engineering assistant might read many repositories but write only to feature branches; merges into protected branches would require a human approval captured by Opal’s workflow. A CI job that deploys to staging might acquire ephemeral credentials for two hours, tied to a specific ticket, and then automatically lose them once the issue is verified as resolved and the ticket is closed. Secrets are replaced with brokered, scoped tokens that expire, reducing the lifetime of any mistake to minutes instead of months. Most importantly, every commit, pipeline run, and permission elevation is attributable: you can trace a change from request to approval to execution with a clear chain of custody.
Phase 3: AI Agents as “Digital Employees” for Full Automation (2026–)
Phase 3 will be marked by the moment at which AI stops assisting individuals and starts executing business processes via AI agents. The earliest wave – deterministic agents operating under rule-driven and human-in-the-loop controls – will increasingly see production deployment in 2026, handling bounded tasks like generating purchase orders under a limit, adjudicating low-risk access requests, or triaging routine support tickets. Non-deterministic agents that can fully automate multi-system workflows as digital employees will follow, planning and acting across operations, finance, and security environments.The upside is enormous, but a new risk arises. Actions now carry dollar amounts, compliance implications, and reputational consequences.
In this world, identity governance must be as nuanced as the work itself. Every agent needs a record that names its owner, purpose, risk tier, allowed systems, and expiration. Guardrails should scale with impact: routine tasks might be auto‑approved within a sandbox; medium‑risk actions could require a single approver and enforce budget caps; high‑risk operations should demand dual control and separation of duties. Access must be contextual and ephemeral, meaning the grant must be made “just in time” for a single task and then revoked. Auditability can’t stop at “which or whose API key did it use?” It has to tell the full story: which agent, owned by whom, executed which action against which system, under which policy, for what reason, and at what cost.
Opal provides that operating model. Deterministic workflows encode the preconditions that must be true before an action is even requestable. Non‑deterministic workflows inherit risk‑based policies that determine when a human must step in and when a system‑issued budget or scope boundary should block the operation. The same policy can cover SaaS applications, data platforms, cloud accounts, and internal tools so that one request orchestrates coordinated, scoped grants across everything the agent needs and nothing it doesn’t.
What makes this approach sustainable across all three phases is its consistency. Start with inventory and ownership so there is no such thing as an unowned user or agent. Define scopes that make sense to the business (like data domains, actions, environments) so approvals are meaningful rather than rubber stamps. Grant access with context: who is asking, what is the purpose, where will it run, how long is it needed, and what is the risk tier? Replace standing credentials with ephemeral ones so time becomes your ally. Then monitor and review: detect drift, re-certify high‑impact access, and enforce separation of duties where it matters most. The same model that keeps LLM usage sane in 2024 is the one that will keep digital employees safe in 2026.

Avoiding Common Pitfalls
The pitfalls are predictable and avoidable. Turning an LLM on for everyone invites shadow super‑apps that silently connect to sensitive systems. Letting agents store and use static keys guarantees that they will eventually leak. Granting repo‑wide write permissions and unbounded pipeline rights expands the blast radius far beyond any single change. Allowing “ownerless” agents turns incident response into archaeology. Approvals without context create the illusion of control while offering none. Each of these anti‑patterns is solved by returning to identity: explicit ownership, scoped access, expirations by default, and a paper trail that can withstand audits and post‑mortems.
If you want to measure progress, watch for a handful of signals that cut across tooling and teams. Over time, most access (human or agent) should be just‑in‑time and time‑bound rather than permanent. The median approval for standard, low‑risk requests should compress to minutes because the safe path is automated. The number of systems governed by policy should grow as you pull more SaaS, data, and cloud into a single control plane. Recertifications should actually complete, not linger. Static secrets should disappear in favor of ephemeral tokens. And the incidents you do see should be smaller in scope because least privilege limits what could go wrong.
Risk Tier | Approvals | Budget Limits | Separation of Duties | Credential Type | Example Actions |
|---|---|---|---|---|---|
Low | Auto-approved within Sandbox | Capped per-task | Not required | Ephemeral, scoped | Marketing draft, ops triage |
Medium | Single approver (owner only) | Team/monthly caps | Recommended | Ephemeral, scoped | CRM update, staging & deploy |
High | Dual control (owner + controller) | PO/transaction thresholds | Enforced SoD | Ephemeral only, time-boxed | PO creation, prod access change |
A Practical Onramp
Getting started rarely requires a big‑bang program. In Phase 1, connect your LLM providers to Opal, map departments to allowlisted providers and data domains, and turn on centralized logging. In Phase 2, register coding agents as identities, narrow their repository and pipeline scopes, and require JIT elevation for merges and deploys. In Phase 3, stand up an agent catalog, assign owners and risk tiers, encode separation of duties and budget caps, and schedule periodic reviews. Each step builds on the last—none is wasted effort.
AI is marching from prompt to production, and it’s not stopping. As it progresses, identity remains the most durable way to stay in control. Opal supports your business with a single system of record to decide who or what may access which systems, at what level, for how long, with which approvals—and to prove it after the fact. That’s how you move fast without breaking things, in 2024 and in 2026 when digital employees sit alongside human ones.
If you’re ready to establish an identity platform to tame your AI control plane across LLMs, coding agents, and production automation, connect your providers and systems to Opal and turn on the guardrails that let you scale with confidence.
Please note that we have officially released our LLM and agent connectors to secure agent identities in the enterprise:
Consider reaching out to us to share your specific agent workflows and corresponding security needs.
Opal supports your business with a single system of record to decide who or what may access which systems. It's the only way your business can move fast without breaking things, even when digital employees sit alongside human ones.
Over the past three years, enterprise AI has evolved from pilot projects to systems that form the fabric of daily work. What began as employees using LLMs to speed up writing and research is now reshaping software delivery, and it’s on the cusp of changing how companies execute entire business processes. Across hundreds of conversations, one pattern keeps repeating – AI adoption follows a predictable, three-phase maturity curve:
Phase 1 starts with organizations’ adoption of LLMs for knowledge work.
Phase 2 consists of enterprise-wide deployments of AI coding agents that interact with code, composable pipelines, and cloud.
Phase 3 entails the rollout of AI agents that act as “digital employees” to automate tasks end‑to‑end.
At every step, identity becomes the control plane. The question isn’t only “what can the AI do?”—it’s “who or what should be allowed to do it, where, for how long, and under which approvals?” That is the problem Opal exists to solve. Allow us to explain how the Opal Security platform secures and accelerates each phase of this adoption curve.

Phase 1: LLMs for Knowledge Work (2023–2024)
Phase 1 began when companies started rolling out enterprise contracts for tools like ChatGPT, Claude, and Gemini. They derived immediate value: faster summarization, better first drafts, accelerated analysis. But the moment those assistants connected to corporate systems (i.e. email, document repositories, chat, issue trackers, data platforms), their security posture changed irreversibly. Data now flows through a defined path, one that was only as safe as the permissions behind it. The winning pattern in this phase has been to treat LLM access like any other privileged capability: explicitly granted, scoped to a purpose, time‑bound, and fully logged. In practice, that means deciding which teams may use which providers, which enterprise connectors are allowed, and which data domains are off‑limits; it also means that access should expire by default and be linked to a clear business justification.
This is where Opal’s approach is most visible. Opal integrates with every enterprise-grade LLM provider and brings their identity and access controls (IAM) into a single coherent system. Company policy dictates that a legal team is allowed to use a particular model for contract review, yet blocked from connecting to customer PII; a finance group is granted read‑only access to internal knowledge bases for quarter‑close analysis but barred from linking to production data warehouses. Access is thus issued just‑in‑time (JIT) for a defined task and duration, and all of it (i.e. who requested, who approved, what scopes were granted, what systems were touched) is preserved as one auditable trail. The result is not only safer usage; it’s the ability for security and compliance teams to answer, in seconds: “who could do what, and when?”

Phase 2: AI Coding Agents to Accelerate Engineering Velocity (2024–2025)
Phase 2 shifts the center of gravity to engineering teams. Coding agents such as Cursor, Claude Code, OpenAI Codex, and emerging tools like Cognition Devin are no longer just autocomplete. They scaffold features, open pull requests, comment on reviews, invoke CI/CD jobs, and sometimes touch cloud resources. Here the risks are different: if an agent holds broad write access across repositories, a single mistake can create a supply‑chain issue; if it can trigger deployments without additional checks, a minor code change can propagate into production; if it manages static credentials, the blast radius of a leak is large.
Opal’s job in this phase is to turn the “agent” from a vague concept into a first‑class identity with owners, purpose, and least‑privilege scopes across repos, pipelines, and cloud. An engineering assistant might read many repositories but write only to feature branches; merges into protected branches would require a human approval captured by Opal’s workflow. A CI job that deploys to staging might acquire ephemeral credentials for two hours, tied to a specific ticket, and then automatically lose them once the issue is verified as resolved and the ticket is closed. Secrets are replaced with brokered, scoped tokens that expire, reducing the lifetime of any mistake to minutes instead of months. Most importantly, every commit, pipeline run, and permission elevation is attributable: you can trace a change from request to approval to execution with a clear chain of custody.
Phase 3: AI Agents as “Digital Employees” for Full Automation (2026–)
Phase 3 will be marked by the moment at which AI stops assisting individuals and starts executing business processes via AI agents. The earliest wave – deterministic agents operating under rule-driven and human-in-the-loop controls – will increasingly see production deployment in 2026, handling bounded tasks like generating purchase orders under a limit, adjudicating low-risk access requests, or triaging routine support tickets. Non-deterministic agents that can fully automate multi-system workflows as digital employees will follow, planning and acting across operations, finance, and security environments.The upside is enormous, but a new risk arises. Actions now carry dollar amounts, compliance implications, and reputational consequences.
In this world, identity governance must be as nuanced as the work itself. Every agent needs a record that names its owner, purpose, risk tier, allowed systems, and expiration. Guardrails should scale with impact: routine tasks might be auto‑approved within a sandbox; medium‑risk actions could require a single approver and enforce budget caps; high‑risk operations should demand dual control and separation of duties. Access must be contextual and ephemeral, meaning the grant must be made “just in time” for a single task and then revoked. Auditability can’t stop at “which or whose API key did it use?” It has to tell the full story: which agent, owned by whom, executed which action against which system, under which policy, for what reason, and at what cost.
Opal provides that operating model. Deterministic workflows encode the preconditions that must be true before an action is even requestable. Non‑deterministic workflows inherit risk‑based policies that determine when a human must step in and when a system‑issued budget or scope boundary should block the operation. The same policy can cover SaaS applications, data platforms, cloud accounts, and internal tools so that one request orchestrates coordinated, scoped grants across everything the agent needs and nothing it doesn’t.
What makes this approach sustainable across all three phases is its consistency. Start with inventory and ownership so there is no such thing as an unowned user or agent. Define scopes that make sense to the business (like data domains, actions, environments) so approvals are meaningful rather than rubber stamps. Grant access with context: who is asking, what is the purpose, where will it run, how long is it needed, and what is the risk tier? Replace standing credentials with ephemeral ones so time becomes your ally. Then monitor and review: detect drift, re-certify high‑impact access, and enforce separation of duties where it matters most. The same model that keeps LLM usage sane in 2024 is the one that will keep digital employees safe in 2026.

Avoiding Common Pitfalls
The pitfalls are predictable and avoidable. Turning an LLM on for everyone invites shadow super‑apps that silently connect to sensitive systems. Letting agents store and use static keys guarantees that they will eventually leak. Granting repo‑wide write permissions and unbounded pipeline rights expands the blast radius far beyond any single change. Allowing “ownerless” agents turns incident response into archaeology. Approvals without context create the illusion of control while offering none. Each of these anti‑patterns is solved by returning to identity: explicit ownership, scoped access, expirations by default, and a paper trail that can withstand audits and post‑mortems.
If you want to measure progress, watch for a handful of signals that cut across tooling and teams. Over time, most access (human or agent) should be just‑in‑time and time‑bound rather than permanent. The median approval for standard, low‑risk requests should compress to minutes because the safe path is automated. The number of systems governed by policy should grow as you pull more SaaS, data, and cloud into a single control plane. Recertifications should actually complete, not linger. Static secrets should disappear in favor of ephemeral tokens. And the incidents you do see should be smaller in scope because least privilege limits what could go wrong.
Risk Tier | Approvals | Budget Limits | Separation of Duties | Credential Type | Example Actions |
|---|---|---|---|---|---|
Low | Auto-approved within Sandbox | Capped per-task | Not required | Ephemeral, scoped | Marketing draft, ops triage |
Medium | Single approver (owner only) | Team/monthly caps | Recommended | Ephemeral, scoped | CRM update, staging & deploy |
High | Dual control (owner + controller) | PO/transaction thresholds | Enforced SoD | Ephemeral only, time-boxed | PO creation, prod access change |
A Practical Onramp
Getting started rarely requires a big‑bang program. In Phase 1, connect your LLM providers to Opal, map departments to allowlisted providers and data domains, and turn on centralized logging. In Phase 2, register coding agents as identities, narrow their repository and pipeline scopes, and require JIT elevation for merges and deploys. In Phase 3, stand up an agent catalog, assign owners and risk tiers, encode separation of duties and budget caps, and schedule periodic reviews. Each step builds on the last—none is wasted effort.
AI is marching from prompt to production, and it’s not stopping. As it progresses, identity remains the most durable way to stay in control. Opal supports your business with a single system of record to decide who or what may access which systems, at what level, for how long, with which approvals—and to prove it after the fact. That’s how you move fast without breaking things, in 2024 and in 2026 when digital employees sit alongside human ones.
If you’re ready to establish an identity platform to tame your AI control plane across LLMs, coding agents, and production automation, connect your providers and systems to Opal and turn on the guardrails that let you scale with confidence.
We have officially released our LLM and agent connectors to secure agent identities in the enterprise. Reach out to share your specific agent workflows and the security needs that come with them.