AI Business Strategy - AI News
https://www.artificialintelligence-news.com/categories/inside-ai/ai-business-strategy/

OpenAI Agents SDK improves governance with sandbox execution
https://www.artificialintelligence-news.com/news/openai-agents-sdk-improves-governance-sandbox-execution/
Thu, 16 Apr 2026 11:20:00 +0000

OpenAI is introducing sandbox execution that allows enterprise governance teams to deploy automated workflows with controlled risk.

Teams taking systems from prototype to production have faced difficult architectural compromises regarding where their operations occurred. Using model-agnostic frameworks offered initial flexibility but failed to fully utilise the capabilities of frontier models. Model-provider SDKs remained closer to the underlying model, but often lacked enough visibility into the control harness.

To complicate matters further, managed agent APIs simplified the deployment process but severely constrained where the systems could run and how they accessed sensitive corporate data. To resolve this, OpenAI is introducing new capabilities to the Agents SDK, offering developers standardised infrastructure featuring a model-native harness and native sandbox execution.

The updated infrastructure aligns execution with the natural operating pattern of the underlying models, improving reliability when tasks require coordination across diverse systems. Oscar Health provides an example of this efficiency when handling unstructured data.

The healthcare provider tested the new infrastructure to automate a clinical records workflow that older approaches could not handle reliably. The engineering team required the automated system to extract correct metadata while correctly understanding the boundaries of patient encounters within complex medical files. By automating this process, the provider could parse patient histories faster, expediting care coordination and improving the overall member experience.

Rachael Burns, Staff Engineer & AI Tech Lead at Oscar Health, said: “The updated Agents SDK made it production-viable for us to automate a critical clinical records workflow that previous approaches couldn’t handle reliably enough.

“For us, the difference was not just extracting the right metadata, but correctly understanding the boundaries of each encounter in long, complex records. As a result, we can more quickly understand what’s happening for each patient in a given visit, helping members with their care needs and improving their experience with us.”

OpenAI optimises AI workflows with a model-native harness

To deploy these systems, engineers must manage vector database synchronisation, control hallucination risks, and optimise expensive compute cycles. Without standard frameworks, internal teams often resort to building brittle custom connectors to manage these workflows.

The new model-native harness helps alleviate this friction by introducing configurable memory, sandbox-aware orchestration, and Codex-like filesystem tools. Developers can integrate standardised primitives such as tool use via MCP, custom instructions via AGENTS.md, and file edits using the apply patch tool.

Progressive disclosure via skills and code execution using the shell tool also enables the system to perform complex tasks sequentially. This standardisation allows engineering teams to spend less time updating core infrastructure and focus on building domain-specific logic that directly benefits the business.

Integrating an autonomous program into a legacy tech stack requires precise routing. When an autonomous process accesses unstructured data, it relies heavily on retrieval systems to pull relevant context.

To manage the integration of diverse architectures and limit operational scope, the SDK introduces a Manifest abstraction. This abstraction standardises how developers describe the workspace, allowing them to mount local files and define output directories.

Teams can connect these environments directly to major enterprise storage providers, including AWS S3, Azure Blob Storage, Google Cloud Storage, and Cloudflare R2. Establishing a predictable workspace gives the model exact parameters on where to locate inputs, write outputs, and maintain organisation during extended operational runs.
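
OpenAI has not published the Manifest's field names in this article, but the general idea of a declared workspace can be sketched roughly as follows. This is a hypothetical, stdlib-only illustration — the class, field names, and path-guard logic are assumptions, not the SDK's actual API:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class WorkspaceManifest:
    """Hypothetical manifest: declares where the agent reads inputs,
    writes outputs, and which remote store backs the workspace."""
    mounts: dict[str, str] = field(default_factory=dict)  # local path -> mount point
    output_dir: str = "/workspace/out"
    storage_uri: str | None = None  # e.g. "s3://bucket/prefix"

    def resolve_output(self, filename: str) -> str:
        # Keep the model inside its declared output directory.
        if ".." in Path(filename).parts:
            raise ValueError(f"path escapes workspace: {filename}")
        return f"{self.output_dir}/{filename}"

manifest = WorkspaceManifest(
    mounts={"./clinical_records": "/workspace/in"},
    storage_uri="s3://example-bucket/agent-runs",
)
print(manifest.resolve_output("summary.json"))  # /workspace/out/summary.json
```

The point of the sketch is the constraint, not the syntax: by making input and output locations explicit and validated, the workspace itself becomes the predictable boundary the article describes.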

This predictability prevents the system from querying unfiltered data lakes, restricting it to specific, validated context windows. Data governance teams can subsequently track the provenance of every automated decision with greater accuracy from local prototype phases through to production deployment.

Enhancing security with native sandbox execution

The SDK natively supports sandbox execution, offering an out-of-the-box layer so programs can run within controlled computer environments containing the necessary files and dependencies. Engineering teams no longer need to piece this execution layer together manually. They can deploy their own custom sandboxes or utilise built-in support for providers like Blaxel, Cloudflare, Daytona, E2B, Modal, Runloop, and Vercel.

Risk mitigation remains the primary concern for any enterprise deploying autonomous code execution. Security teams must assume that any system reading external data or executing generated code will face prompt-injection attacks and exfiltration attempts.

OpenAI approaches this security requirement by separating the control harness from the compute layer. This separation isolates credentials, keeping them entirely out of the environments where the model-generated code executes. By isolating the execution layer, an injected malicious command cannot access the central control plane or steal primary API keys, protecting the wider corporate network from lateral movement attacks.

This separation also addresses compute cost issues regarding system failures. Long-running tasks often fail midway due to network timeouts, container crashes, or API limits. If a complex agent takes twenty steps to compile a financial report and fails at step nineteen, re-running the entire sequence burns expensive computing resources.

Under the new architecture, losing the sandbox container no longer means losing the entire operational run. Because system state remains externalised, the SDK offers built-in snapshotting and rehydration: the infrastructure can restore state in a fresh container and resume exactly from the last checkpoint if the original environment expires or fails. Avoiding restarts of expensive, long-running processes translates directly into reduced cloud compute spend.
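
The checkpoint-and-rehydrate pattern can be shown with a toy, stdlib-only sketch. The class and method names here are illustrative, not the SDK's actual API; the external "store" stands in for whatever durable state service backs a real deployment:

```python
import json

class CheckpointedRun:
    """Toy snapshot-and-rehydrate loop: state lives in an external store,
    so a crashed container can be replaced and the run resumed."""
    def __init__(self, steps, store):
        self.steps = steps   # ordered list of step functions: state -> state
        self.store = store   # external key-value store shared across containers

    def rehydrate(self):
        raw = self.store.get("checkpoint")
        if raw is None:
            return 0, {}
        data = json.loads(raw)
        return data["step"], data["state"]

    def run(self):
        step, state = self.rehydrate()     # resume from last checkpoint
        while step < len(self.steps):
            state = self.steps[step](state)
            step += 1
            self.store["checkpoint"] = json.dumps({"step": step, "state": state})
        return state

crash_once = [True]
def flaky_report(state):
    if crash_once[0]:                      # simulate a container crash
        crash_once[0] = False
        raise RuntimeError("container lost")
    return {**state, "report": "done"}

store = {}
steps = [lambda s: {**s, "a": 1}, lambda s: {**s, "b": 2}, flaky_report]
try:
    CheckpointedRun(steps, store).run()
except RuntimeError:
    pass                                   # original sandbox expired mid-run

result = CheckpointedRun(steps, store).run()  # fresh container, same store
print(result)  # {'a': 1, 'b': 2, 'report': 'done'} -- earlier steps not re-run
```

Only the failed step re-executes on the second run; the first two completed steps are recovered from the checkpoint, which is exactly the compute saving the article describes.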

Scaling these operations requires dynamic resource allocation. The separated architecture allows runs to invoke single or multiple sandboxes based on current load, route specific subagents into isolated environments, and parallelise tasks across numerous containers for faster execution times.
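
As a rough illustration of that fan-out pattern, the sketch below dispatches subagent tasks concurrently, with a stand-in function in place of a real sandbox provider (the task shapes and names are invented for the example):

```python
from concurrent.futures import ThreadPoolExecutor

def run_in_sandbox(task):
    """Stand-in for dispatching one subagent task to an isolated sandbox."""
    name, payload = task
    return name, payload.upper()           # pretend the sandbox produced output

tasks = [("parse", "records"), ("summarise", "notes"), ("validate", "totals")]
with ThreadPoolExecutor(max_workers=3) as pool:  # one worker per sandbox
    results = dict(pool.map(run_in_sandbox, tasks))
print(results)  # {'parse': 'RECORDS', 'summarise': 'NOTES', 'validate': 'TOTALS'}
```

In a real deployment each worker would provision an isolated environment from a sandbox provider rather than call a local function, but the orchestration shape is the same.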

These new capabilities are generally available to all customers via the API, utilising standard pricing based on tokens and tool use without demanding custom procurement contracts. The new harness and sandbox capabilities are launching first for Python developers, with TypeScript support slated for a future release.

OpenAI plans to bring additional capabilities, including code mode and subagents, to both the Python and TypeScript libraries. The vendor intends to expand the broader ecosystem over time by supporting additional sandbox providers and offering more methods for developers to plug the SDK directly into their existing internal systems.

See also: Commvault launches a ‘Ctrl-Z’ for cloud AI workloads

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post OpenAI Agents SDK improves governance with sandbox execution appeared first on AI News.

Commvault launches a ‘Ctrl-Z’ for cloud AI workloads
https://www.artificialintelligence-news.com/news/commvault-launches-ctrl-z-for-cloud-ai-workloads/
Wed, 15 Apr 2026 16:28:19 +0000

Enterprise cloud environments now have access to an undo feature for AI agents following the deployment of Commvault AI Protect.

Autonomous software now roams across infrastructure, potentially deleting files, reading databases, spinning up server clusters, and even rewriting access policies. Commvault has identified this governance gap, and the data protection vendor has launched AI Protect, a system designed to discover, monitor, and forcefully roll back the actions of autonomous models operating inside AWS, Microsoft Azure, and Google Cloud.

Traditional governance relies entirely on static rules. You grant a human user specific permissions and that user performs a predictable, linear task. If something goes wrong, there’s clear responsibility. AI agents, however, exhibit emergent behaviour.

When given a complex prompt, an agent will string together approved permissions in potentially unapproved ways to solve the problem. If an agent decides the most efficient way to optimise cloud storage costs is to delete an entire production database, it will execute that command in milliseconds.

A human engineer might pause before executing a destructive command, questioning the logic. An AI agent simply follows its internal reasoning loop, issuing thousands of API requests a second and vastly outpacing the reaction times of human security operations centres.

Pranay Ahlawat, Chief Technology and AI Officer at Commvault, said: “In agentic environments, agents mutate state across data, systems, and configurations in ways that compound fast and are hard to trace. When something goes wrong, teams need to recover not just data, but the full stack – applications, agent configurations, and dependencies – back to a known good state.”

A new breed of governance tools for cloud AI agents

AI Protect is an example of emerging tools that continuously scan the enterprise cloud footprint to identify active agents. Shadow AI remains a massive difficulty for enterprise IT departments. Developers routinely spin up experimental agents using corporate credentials without notifying security teams and connect language models to internal data lakes to test new workflows.

Commvault forces these hidden actors into the light. Once identified, the software monitors the agent’s specific API calls and data interactions across AWS, Azure, and GCP. It logs every database read, every storage modification, and every configuration change.

The rollback feature provides the safety net. If a model hallucinates or misinterprets a command, administrators can revert the environment to its exact state before the machine initiated the destructive sequence.

However, cloud infrastructure is highly stateful and deeply interconnected. Reversing a complex chain of automated actions requires precise, ledger-based tracking. You cannot just restore a single database table if the machine also modified networking rules, triggered downstream serverless functions, and altered identity access management policies during its run.

Commvault bridges traditional backup architecture with continuous cloud monitoring to achieve this. By mapping the blast radius of the agent’s session, the software isolates the damage. It untangles the specific changes made by the AI from the legitimate changes made by human users during the same timeframe. This prevents a mass rollback from deleting valid customer transactions or wiping out hours of legitimate engineering work.
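
The selective-rollback idea can be illustrated with a toy change ledger. This is a sketch of the general technique — record who changed what, and what the prior value was — not Commvault's actual implementation:

```python
class ChangeLedger:
    """Toy ledger-based rollback: every mutation records the actor and the
    prior value, so one actor's changes can be reversed without touching
    concurrent changes made by others."""
    def __init__(self, state):
        self.state = state
        self.entries = []                  # (actor, key, previous_value)

    def write(self, actor, key, value):
        self.entries.append((actor, key, self.state.get(key)))
        self.state[key] = value

    def rollback_actor(self, actor):
        # Undo the actor's writes newest-first; keep everyone else's.
        for who, key, previous in reversed(self.entries):
            if who != actor:
                continue
            if previous is None:
                self.state.pop(key, None)  # key did not exist before
            else:
                self.state[key] = previous
        self.entries = [e for e in self.entries if e[0] != actor]

ledger = ChangeLedger({"orders_table": "v1", "firewall": "default"})
ledger.write("agent-7", "orders_table", "DROPPED")   # agent deletes a table
ledger.write("human:ana", "invoices", "v1")          # legitimate human change
ledger.write("agent-7", "firewall", "open-all")      # agent widens access

ledger.rollback_actor("agent-7")
print(ledger.state)  # {'orders_table': 'v1', 'firewall': 'default', 'invoices': 'v1'}
```

The agent's session is reversed in full, while the human edit made during the same timeframe survives — the "blast radius" isolation the article describes.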

Machines will continue to execute tasks faster than human operators can monitor them. The priority now is implementing safeguards that guarantee autonomous actions can be instantly and accurately reversed.

See also: Citizen developers now have their own Wingman


The post Commvault launches a ‘Ctrl-Z’ for cloud AI workloads appeared first on AI News.

SAP brings agentic AI to human capital management
https://www.artificialintelligence-news.com/news/sap-brings-agentic-ai-human-capital-management/
Tue, 14 Apr 2026 12:55:09 +0000

According to SAP, integrating agentic AI into core human capital management (HCM) modules helps target operational bloat and reduce costs.

SAP’s SuccessFactors 1H 2026 release aims to anticipate administrative bottlenecks before they stall daily operations by embedding a network of AI agents across recruiting, payroll, workforce administration, and talent development. Behind the user interface, these agents must monitor system states, identify anomalies, and prompt human operators with context-aware solutions.

Data synchronisation failures between distributed enterprise systems routinely require dedicated IT support teams to diagnose. When employee master data fails to replicate due to a missing attribute, downstream systems like access management and financial compensation halt.

The agentic approach uses analytical models to cross-reference peer data, identify the missing variable based on organisational patterns, and prompt the administrator with the required correction. This automated troubleshooting dramatically reduces the mean time to resolution for internal support tickets.
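
That peer-based inference can be sketched in a few lines. The function below is illustrative only — SAP's actual models are far richer — and it deliberately returns a suggestion for a human administrator to confirm rather than writing the value itself:

```python
from collections import Counter

def suggest_missing_attribute(record, peers, attribute):
    """Toy sketch: when a record is missing an attribute, propose the most
    common value among same-department peers for a human to confirm."""
    if record.get(attribute) is not None:
        return None                        # nothing to fix
    peer_values = [
        p[attribute] for p in peers
        if p.get("department") == record.get("department")
        and p.get(attribute) is not None
    ]
    if not peer_values:
        return None                        # no organisational pattern to lean on
    value, count = Counter(peer_values).most_common(1)[0]
    return {"attribute": attribute, "suggested": value,
            "support": count / len(peer_values)}

peers = [
    {"department": "payroll", "cost_centre": "CC-110"},
    {"department": "payroll", "cost_centre": "CC-110"},
    {"department": "payroll", "cost_centre": "CC-200"},
    {"department": "legal", "cost_centre": "CC-900"},
]
suggestion = suggest_missing_attribute(
    {"name": "J. Doe", "department": "payroll"}, peers, "cost_centre")
print(suggestion)  # suggests 'CC-110', with a support score for the admin
```

Keeping the human in the loop at the final step is what distinguishes this pattern from letting the model write directly into master data.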

Implementing this level of autonomous monitoring demands rigorous engineering discipline. Integrating modern semantic search mechanisms with highly structured legacy relational databases requires extensive middleware configuration.

Running large language models in the background to continuously scan millions of employee records for inconsistencies consumes massive compute resources. CIOs must carefully balance the cloud infrastructure costs of continuous algorithmic monitoring against the operational savings generated by reduced IT ticket volumes.

To mitigate the risk of algorithmic hallucinations altering core financial data, engineering teams are forced to build strict guardrails. These retrieve-and-generate architectures must be firmly anchored to the company’s verified data lakes, ensuring the AI only acts upon validated corporate policies rather than generalised internet training data.

The SAP release attempts to streamline this knowledge retrieval by introducing intelligent question-and-answer capabilities within its learning module. This functionality delivers instant, context-aware responses drawn directly from an organisation’s learning content, allowing employees to bypass manual documentation searches entirely. The integration also introduces a growing workforce knowledge network that pulls trusted external employment guidance into daily workflows to support confident decision-making.

How SAP is using agentic AI to consolidate the HCM ecosystem

The updated architecture focuses on unified experiences that adapt to operational needs. For example, the delay between a candidate signing an offer letter and that new employee reaching full productivity is a drag on profit margins.

Native integration combining SmartRecruiters solutions, SAP SuccessFactors Employee Central, and SAP SuccessFactors Onboarding streamlines the data flow from initial candidate interaction through to the new hire phase.

A candidate’s technical assessments, background checks, and negotiated terms pass automatically into the core human resources repository. Enterprises accelerate the onboarding timeline by eliminating the manual re-entry of personnel data—allowing new technical hires to begin contributing to active commercial projects faster.

Technical leadership teams understand that out-of-the-box software rarely matches internal enterprise processes perfectly. Customisation is necessary, but hardcoded extensions routinely break during cloud upgrade cycles, creating vast maintenance backlogs.

To manage this tension, the software introduces a new extensibility wizard. This tool provides guided, step-by-step support for building custom extensions directly on the SAP Business Technology Platform within the SuccessFactors environment.

By containing custom development within a governed platform environment, technology officers can adapt the interface to unique business requirements while preserving strict governance and ensuring future update compatibility.

Algorithmic auditing and margin protection

The 1H 2026 release incorporates pay transparency insights directly into the People Intelligence package within SAP Business Data Cloud, helping organisations comply with strict regulatory regimes such as the EU's pay transparency directive, which requires detailed and auditable justifications for wage discrepancies.

Manual compilation of compensation data across multiple geographic regions and currency zones is highly error-prone. Using the People Intelligence package, organisations can analyse compensation patterns and potential pay gaps across demographics.

Automating this analysis provides a data-driven defence against compliance audits and aligns internal pay practices with evolving regulatory expectations, protecting the enterprise from both litigation costs and brand damage.

Preparing for future demands requires trusted and consistent skills data that leadership can rely on across talent deployment and workforce planning. Unstructured data, where one department labels a capability using differing terminology from another, breaks automated resource allocation models.

The update strengthens the SAP talent intelligence hub by introducing enhanced skills governance to provide administrators with a centralised interface for managing skill definitions, applying corporate standards, and ensuring data aligns across internal applications and external partner ecosystems. 

Standardising this data improves overall system quality and allows resource managers to make deployment decisions without relying on fragmented spreadsheets or guesswork. This inventory prevents organisations from having to outsource to expensive external contractors for capabilities they already possess internally.

By bringing together data, AI, and connected experiences, SAP’s latest enhancements show how agentic AI can help organisations reduce daily friction. For professionals looking to explore these types of enterprise AI integrations and connect directly with the company, SAP is a key sponsor of this year’s AI & Big Data Expo North America.

See also: IBM: How robust AI governance protects enterprise margins


The post SAP brings agentic AI to human capital management appeared first on AI News.

Hyundai expands into robotics and physical AI systems
https://www.artificialintelligence-news.com/news/hyundai-expands-into-robotics-and-physical-ai-systems/
Tue, 14 Apr 2026 10:00:00 +0000

Hyundai Motor Group is starting to look like a company building machines that act in the real world. The change centres on physical AI, where AI is placed into robots and systems that move and respond in physical spaces. Current efforts are mainly focused on factory and industrial settings.

Hyundai’s move into physical AI systems

In an interview with Semafor, chairman Chung Eui-sun said robotics and AI will play a central role in Hyundai’s next phase of growth, pushing the company beyond vehicles and into physical systems. The group plans to invest $26 billion in the US by 2028, according to United Press International, building on roughly $20.5 billion invested over the past 40 years.

A large part of that spending is tied to robotics and AI-driven systems that Hyundai is combining into a single approach. Chung described robotics and physical AI as important to Hyundai’s long-term direction, adding that the company is developing robots to work with people, not replace them.

From automation to collaboration

Hyundai is working on systems where robots and humans share tasks in the same space. This includes humanoid robots developed by Boston Dynamics, in which Hyundai acquired a controlling stake in 2021. The machines are being prepared for manufacturing use, with deployment planned around 2028. The company expects to scale production to up to 30,000 units per year by 2030, with the goal of improving work on the factory floor. Robots may handle repetitive or physically demanding tasks, while humans focus on oversight and coordination.

Chung said this kind of setup could help improve efficiency and product quality as customer expectations change.

Current deployments remain focused on industrial settings, though Hyundai is exploring other uses. Potential areas include logistics and mobility services that combine vehicles with AI systems. These may affect deliveries and shared services.

Manufacturing as the first use case for physical AI

While these uses are still developing, manufacturing is the main testing ground: factories are where Hyundai is putting these ideas into practice. The company is already working on software-driven manufacturing systems in its US operations, combining data and robotics to manage production.

Physical AI builds on this by adding machines that adjust their actions based on real-time data. Chung said changes in regulations and customer demand are pushing the company to rethink how it operates in regions. Hyundai’s response is a mix of global expansion and local production, with AI and robotics helping standardise processes.

Energy and infrastructure

The company continues to invest in hydrogen through its HTWO brand, which covers production, storage and use. Chung pointed to rising demand linked to AI infrastructure and data centres as one reason hydrogen is gaining attention. He described hydrogen and electric vehicles as complementary options. The idea is to offer different energy choices depending on how systems are used. As AI moves into physical environments, energy becomes a more visible constraint.

What physical AI means for end users

Most people will not interact with a humanoid robot in the near term. But they will feel the effects of these systems in other ways. Products may be built faster and services tied to mobility or infrastructure may become more responsive.

Hyundai sells more than 7 million vehicles each year in over 200 countries, supported by 16 global production facilities, according to the same UPI report.

A gradual transition

Hyundai is still a major carmaker, with brands like Hyundai, Kia, and Genesis forming the base of its operations. What is changing is how those vehicles – and the systems around them – are designed and managed.

Physical AI represents a change from products to systems. It places AI in the environments where work and daily life take place. That change is still in progress, and many of the systems Hyundai is developing will take years to scale. The company is building toward a future where machines work with people in the real world.

(Photo by @named_aashutosh)

See also: Asylon and Thrive Logic bring physical AI to enterprise perimeter security


The post Hyundai expands into robotics and physical AI systems appeared first on AI News.

Strengthening enterprise governance for rising edge AI workloads
https://www.artificialintelligence-news.com/news/strengthening-enterprise-governance-for-rising-edge-ai-workloads/
Mon, 13 Apr 2026 13:02:01 +0000

Models like Google Gemma 4 are increasing enterprise AI governance challenges for CISOs as they scramble to secure edge workloads.

Security chiefs have built massive digital walls around the cloud, deploying advanced cloud access security brokers and routing every piece of traffic heading to external large language models through monitored corporate gateways. The logic was sound to boards and executive committees—keep the sensitive data inside the network, police the outgoing requests, and intellectual property remains entirely safe from external leaks.

Google just obliterated that perimeter with the release of Gemma 4. Unlike massive parameter models confined to hyperscale data centres, this family of open-weight models targets local hardware. It runs directly on edge devices, executes multi-step planning, and can drive autonomous workflows entirely on a local machine.

On-device inference has become a glaring blind spot for enterprise security operations. Security analysts cannot inspect network traffic if the traffic never hits the network in the first place. Engineers can ingest highly classified corporate data, process it through a local Gemma 4 agent, and generate output without triggering a single cloud firewall alarm.

Collapse of API-centric defences

Most corporate IT frameworks treat machine learning tools like standard third-party software vendors. You vet the provider, sign a massive enterprise data processing agreement, and funnel employee traffic through a sanctioned digital gateway. This standard playbook falls apart the moment an engineer downloads an Apache 2.0 licensed model like Gemma 4 and turns their laptop into an autonomous compute node.

Google paired this new model rollout with the Google AI Edge Gallery and a highly optimised LiteRT-LM library. These tools drastically accelerate local execution speeds while providing highly structured outputs required for complex agentic behaviours. An autonomous agent can now sit quietly on a local machine, iterate through thousands of logic steps, and execute code locally at impressive speed.

European data sovereignty laws and strict global financial regulations mandate complete auditability for automated decision-making. When a local agent hallucinates, makes a catastrophic error, or inadvertently leaks internal code across a shared corporate Slack channel, investigators require detailed logs. If the model operates entirely offline on local silicon, those logs simply do not exist inside the centralised IT security dashboard.

Financial institutions stand to lose the most from this architectural adjustment. Banks have spent millions implementing strict API logging to satisfy regulators investigating generative machine learning usage. If algorithmic trading strategies or proprietary risk assessment protocols are parsed by an unmonitored local agent, the bank violates multiple compliance frameworks simultaneously.

Healthcare networks face a similar reality. Patient data processed through an offline medical assistant running Gemma 4 might feel secure because it never leaves the physical laptop. The reality is that unlogged processing of health data violates the core tenets of modern medical auditing. Security leaders must prove how data was handled, what system processed it, and who authorised the execution.

The intent-control dilemma

Industry researchers often refer to this phase of technological adoption as the governance trap. Management teams panic when they lose visibility: they attempt to rein in developer behaviour by throwing more bureaucratic process at the problem, mandating sluggish architecture review boards and forcing engineers to fill out extensive deployment forms before installing any new repository.

Bureaucracy rarely stops a motivated developer facing an aggressive product deadline; it just forces the entire behaviour further underground. This creates a shadow IT environment powered by autonomous software.

Real governance for local systems requires a different architectural approach. Instead of trying to block the model itself, security leaders must focus intensely on intent and system access. An agent running locally via Gemma 4 still requires specific system permissions to read local files, access corporate databases, or execute shell commands on the host machine.

Access management becomes the new digital firewall. Rather than policing the language model, identity platforms must tightly restrict what the host machine can physically touch. If a local Gemma 4 agent attempts to query a restricted internal database, the access control layer must flag the anomaly immediately.
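That access-control layer can be sketched as a host-level allow-list that vets every resource an agent touches and records the attempt. This is an illustrative sketch only: the agent name, resource labels, and policy shape below are assumptions, not any particular vendor's API.

```python
# Illustrative sketch: a host-level allow-list governing what a local agent
# may touch. Agent IDs, resource names, and policy structure are hypothetical.

AGENT_POLICY = {
    "local-gemma-agent": {"read:project_files", "read:public_wiki"},
}

def authorise(agent_id: str, action: str, audit_log: list) -> bool:
    """Permit the action only if it is on the agent's allow-list; log everything."""
    allowed = action in AGENT_POLICY.get(agent_id, set())
    audit_log.append({"agent": agent_id, "action": action, "allowed": allowed})
    return allowed

log = []
assert authorise("local-gemma-agent", "read:project_files", log)
# A query against a restricted internal database is denied and flagged.
assert not authorise("local-gemma-agent", "query:restricted_db", log)
```

The key design point is that the audit trail is written regardless of outcome, so the centralised security dashboard sees the denied query as well as the permitted reads.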

Enterprise governance in the edge AI era

We are watching the definition of enterprise infrastructure expand in real time. A corporate laptop is no longer just a dumb terminal used to access cloud services over a VPN; it’s an active compute node capable of running sophisticated autonomous planning software.

The cost of this new autonomy is deep operational complexity. CTOs and CISOs face a requirement to deploy endpoint detection tools specifically tuned for local machine learning inference. They desperately need systems that can differentiate between a human developer compiling standard code and an autonomous agent rapidly iterating through local file structures to solve a complex prompt.

The cybersecurity market will inevitably catch up to this new reality. Endpoint detection and response vendors are already prototyping quiet agents that monitor local GPU utilisation and flag unauthorised inference workloads. However, those tools remain in their infancy today.

Most corporate security policies written in 2023 assumed all generative tools lived comfortably in the cloud. Revising them requires an uncomfortable admission from the executive board that the IT department no longer dictates exactly where compute happens.

Google designed Gemma 4 to put state-of-the-art agentic skills directly into the hands of anyone with a modern processor. The open-source community will adopt it with aggressive speed. 

Enterprises now face a very short window to figure out how to police code they do not host, running on hardware they cannot constantly monitor. It leaves every security chief staring at their network dashboard with one question: What exactly is running on endpoints right now?

See also: Companies expand AI adoption while keeping control

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Strengthening enterprise governance for rising edge AI workloads appeared first on AI News.

]]>
Companies expand AI adoption while keeping control https://www.artificialintelligence-news.com/news/companies-expand-ai-adoption-while-keeping-control/ Mon, 13 Apr 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112964 Many companies are taking a slower, more controlled approach to autonomous systems as AI adoption grows. Rather than deploying systems that act on their own, they are focusing on tools that assist human decision-making and keep control over outputs. This approach is especially clear in sectors where errors carry real financial or legal risk. One […]

The post Companies expand AI adoption while keeping control appeared first on AI News.

]]>
Many companies are taking a slower, more controlled approach to autonomous systems as AI adoption grows. Rather than deploying systems that act on their own, they are focusing on tools that assist human decision-making and keep control over outputs. This approach is especially clear in sectors where errors carry real financial or legal risk.

One example comes from S&P Global Market Intelligence, which builds AI tools into its Capital IQ Pro platform. The system is used by analysts to review company filings, earnings calls, and market data. Its AI features are designed to stay grounded in source material.

According to S&P Global Market Intelligence, its AI tools extract insights from structured and unstructured data, including transcripts and reports, while working with verified source data.

AI adoption ahead of autonomy

The current wave of AI tools in business is often described as a step toward autonomous agents. Systems may eventually plan tasks and act without direct human input. But most companies are not there yet. AI adoption is already widespread, with a majority of organisations using AI in at least one part of their business, according to research from McKinsey & Company. Many organisations have yet to scale AI in the enterprise, showing a disconnect between initial use and broader deployment.

Instead, AI helps with tasks like summarising documents or answering queries, but it does not act independently.

S&P Global Market Intelligence’s tools let users query large datasets through a chat interface, but the results are tied to verified financial content. In many cases, users can refer back to the underlying documents, lowering the risk of errors or unsupported outputs.

In its research, the company outlines AI governance as a process in which systems are designed and monitored, with attention to fairness and accountability.

AI in high-risk sectors

In finance, small errors can have large consequences. That shapes how AI is built and used. Tools like Capital IQ Pro are designed to support analysts, not replace them. The system may help surface insights or highlight trends, but final decisions still rest with human users.

The gap between adoption and value is becoming clearer. Many organisations report a gap between AI deployment and measurable business outcomes, according to findings from McKinsey & Company.

While autonomous systems may be able to handle certain tasks, companies often need clear accountability. When decisions affect investments, compliance, or reporting, there must be a way to explain how those decisions were made.

Research from S&P Global notes that organisations are increasingly focused on building governance frameworks to manage AI risks, including data quality issues and model bias.

Toward future systems

The difference between today’s controlled AI tools and future autonomous systems remains wide. Interest in more autonomous and agent-driven systems is also growing, even as most organisations remain in early stages of deployment. Systems that can explain their outputs, show their sources, and operate in defined limits are more likely to be trusted.

Autonomous agents may one day handle tasks like financial analysis or supply chain planning with minimal input. But without clear control mechanisms, their use will remain limited.

The themes will feature at AI & Big Data Expo North America 2026 on May 18 – 19. S&P Global Market Intelligence is listed as a bronze sponsor of the event. The agenda features topics like AI governance and the use of AI in regulated industries.

Balancing ability and control

The push toward autonomous AI is unlikely to slow down. Advances in large language models and agent-based systems continue to expand what AI can do.

Enterprise users are asking how to keep those systems under control. S&P Global Market Intelligence’s approach reflects that concern. By keeping AI grounded in verified data and placing humans at the centre of decision-making, it prioritises trust over autonomy.

As systems grow more capable, the ability to govern and control them could become just as important as the tasks they perform.

(Photo by Hitesh Choudhary)

See also: Why companies like Apple are building AI agents with limits


The post Companies expand AI adoption while keeping control appeared first on AI News.

]]>
IBM: How robust AI governance protects enterprise margins https://www.artificialintelligence-news.com/news/ibm-how-robust-ai-governance-protects-enterprise-margins/ Fri, 10 Apr 2026 13:57:15 +0000 https://www.artificialintelligence-news.com/?p=112947 To protect enterprise margins, business leaders must invest in robust AI governance to securely manage AI infrastructure. When evaluating enterprise software adoption, a recurring pattern dictates how technology matures across industries. As Rob Thomas, SVP and CCO at IBM, recently outlined, software typically graduates from a standalone product to a platform, and then from a […]

The post IBM: How robust AI governance protects enterprise margins appeared first on AI News.

]]>
To protect enterprise margins, business leaders must invest in robust AI governance to securely manage AI infrastructure.

When evaluating enterprise software adoption, a recurring pattern dictates how technology matures across industries. As Rob Thomas, SVP and CCO at IBM, recently outlined, software typically graduates from a standalone product to a platform, and then from a platform to foundational infrastructure, altering the governing rules entirely.

At the initial product stage, exerting tight corporate control often feels highly advantageous. Closed development environments iterate quickly and tightly manage the end-user experience. They capture and concentrate financial value within a single corporate entity, an approach that functions adequately during early product development cycles.

However, IBM’s analysis highlights that expectations change entirely when a technology solidifies into a foundational layer. Once other institutional frameworks, external markets, and broad operational systems rely on the software, the prevailing standards adapt to a new reality. At infrastructure scale, embracing openness ceases to be an ideological stance and becomes a highly practical necessity.

AI is currently crossing this threshold within the enterprise architecture stack. Models are increasingly embedded directly into the ways organisations secure their networks, author source code, execute automated decisions, and generate commercial value. AI functions less as an experimental utility and more as core operational infrastructure.

The recent limited preview of Anthropic’s Claude Mythos model brings this reality into sharper focus for enterprise executives managing risk. Anthropic reports that this specific model can discover and exploit software vulnerabilities at a level matching few human experts.

In response to this power, Anthropic launched Project Glasswing, a gated initiative designed to place these advanced capabilities directly into the hands of network defenders first. From IBM’s perspective, this development forces technology officers to confront immediate structural vulnerabilities. If autonomous models possess the capability to write exploits and shape the overall security environment, Thomas notes that concentrating the understanding of these systems within a small number of technology vendors invites severe operational exposure.

With models achieving infrastructure status, IBM argues the primary issue is no longer exclusively what these machine learning applications can execute. The priority becomes how these systems are constructed, governed, inspected, and actively improved over extended periods.

As underlying frameworks grow in complexity and corporate importance, maintaining closed development pipelines becomes exceedingly difficult to defend. No single vendor can successfully anticipate every operational requirement, adversarial attack vector, or system failure mode.

Implementing opaque AI structures introduces heavy friction across existing network architecture. Connecting closed proprietary models with established enterprise vector databases or highly sensitive internal data lakes frequently creates massive troubleshooting bottlenecks. When anomalous outputs occur or hallucination rates spike, teams lack the internal visibility required to diagnose whether the error originated in the retrieval-augmented generation pipeline or the base model weights.

Integrating legacy on-premises architecture with highly gated cloud models also introduces severe latency into daily operations. When enterprise data governance protocols strictly prohibit sending sensitive customer information to external servers, technology teams are left attempting to strip and anonymise datasets before processing. This constant data sanitisation creates enormous operational drag. 

Furthermore, the spiralling compute costs associated with continuous API calls to locked models erode the exact profit margins these autonomous systems are supposed to enhance. The opacity prevents network engineers from accurately sizing hardware deployments, forcing companies into expensive over-provisioning agreements to maintain baseline functionality.

Why open-source AI is essential for operational resilience

Restricting access to powerful applications is an understandable human instinct that closely resembles caution. Yet, as Thomas points out, at massive infrastructure scale, security typically improves through rigorous external scrutiny rather than through strict concealment.

This represents the enduring lesson of open-source software development. Open-source code does not eliminate enterprise risk. Instead, IBM maintains it actively changes how organisations manage that risk. An open foundation allows a wider base of researchers, corporate developers, and security defenders to examine the architecture, surface underlying weaknesses, test foundational assumptions, and harden the software under real-world conditions.

Within cybersecurity operations, broad visibility is rarely the enemy of operational resilience. In fact, visibility frequently serves as a strict prerequisite for achieving that resilience. Technologies deemed highly important tend to remain safer when larger populations can challenge them, inspect their logic, and contribute to their continuous improvement.

Thomas addresses one of the oldest misconceptions regarding open-source technology: the belief that it inevitably commoditises corporate innovation. In practical application, open infrastructure typically pushes market competition higher up the technology stack. Open systems transfer financial value rather than destroying it.

As common digital foundations mature, the commercial value relocates toward complex implementation, system orchestration, continuous reliability, trust mechanics, and specific domain expertise. IBM’s position asserts that the long-term commercial winners are not those who own the base technological layer, but rather the organisations that understand how to apply it most effectively.

We have witnessed this identical pattern play out across previous generations of enterprise tooling, cloud infrastructure, and operating systems. Open foundations historically expanded developer participation, accelerated iterative improvement, and birthed entirely new, larger markets built on top of those base layers. Enterprise leaders increasingly view open-source as highly important for infrastructure modernisation and emerging AI capabilities. IBM predicts that AI is highly likely to follow this exact historical trajectory.

Looking across the broader vendor ecosystem, leading hyperscalers are adjusting their business postures to accommodate this reality. Rather than engaging in a pure arms race to build the largest proprietary black boxes, highly profitable integrators are focusing heavily on orchestration tooling that allows enterprises to swap out underlying open-source models based on specific workload demands. Highlighting its ongoing leadership in this space, IBM is a key sponsor of this year’s AI & Big Data Expo North America, where these evolving strategies for open enterprise infrastructure will be a primary focus.

This approach completely sidesteps restrictive vendor lock-in and allows companies to route less demanding internal queries to smaller and highly efficient open models, preserving expensive compute resources for complex customer-facing autonomous logic. By decoupling the application layer from the specific foundation model, technology officers can maintain operational agility and protect their bottom line.

The future of enterprise AI demands transparent governance

Another pragmatic reason for embracing open models revolves around product development influence. IBM emphasises that narrow access to underlying code naturally leads to narrow operational perspectives. In contrast, who gets to participate directly shapes what applications are eventually built. 

Providing broad access enables governments, diverse institutions, startups, and varied researchers to actively influence how the technology evolves and where it is commercially applied. This inclusive approach drives functional innovation while simultaneously building structural adaptability and necessary public legitimacy.

As Thomas argues, once autonomous AI assumes the role of core enterprise infrastructure, relying on opacity can no longer serve as the organising principle for system safety. The most reliable blueprint for secure software has paired open foundations with broad external scrutiny, active code maintenance, and serious internal governance.

As AI permanently enters its infrastructure phase, IBM contends that identical logic increasingly applies directly to the foundation models themselves. The stronger the corporate reliance on a technology, the stronger the corresponding case for demanding openness.

If these autonomous workflows are truly becoming foundational to global commerce, then transparency ceases to be a subject of casual debate. According to IBM, it is an absolute, non-negotiable design requirement for any modern enterprise architecture.

See also: Why companies like Apple are building AI agents with limits


The post IBM: How robust AI governance protects enterprise margins appeared first on AI News.

]]>
Why companies like Apple are building AI agents with limits https://www.artificialintelligence-news.com/news/why-companies-like-apple-are-building-ai-agents-with-limits/ Fri, 10 Apr 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112925 Next-generation AI assistants being developed in the Apple ecosystem and by chipmakers like Qualcomm, but early reports suggest they are being designed with limits in place. Tom’s Guide has described early versions of these assistants as capable of navigating apps, carrying out bookings, and managing tasks in services. For instance a private beta agentic system […]

The post Why companies like Apple are building AI agents with limits appeared first on AI News.

]]>
Next-generation AI assistants are being developed in the Apple ecosystem and by chipmakers like Qualcomm, but early reports suggest they are being designed with limits in place.

Tom’s Guide has described early versions of these assistants as capable of navigating apps, carrying out bookings, and managing tasks in services. For instance, a private beta agentic system completed tasks like booking services or posting content in apps. In one test, it moved through an app workflow and reached a payment screen before asking the user for confirmation.

AI agents are being built with approval checkpoints. Sensitive actions, especially those tied to payments or account changes, require user confirmation before they are completed. The “human-in-the-loop” model lets the system prepare an action, but leaves approval to the user. Research linked to Apple’s AI work has explored ways to ensure systems pause before taking actions users did not explicitly request.
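The approval checkpoint described above can be sketched as a simple gate: non-sensitive actions run directly, while anything sensitive is prepared but held until the user confirms. This is a minimal illustration of the human-in-the-loop pattern, not Apple's or Qualcomm's actual implementation; the action names and the "sensitive" set are assumptions.

```python
# Minimal human-in-the-loop sketch: the agent prepares any action, but actions
# marked sensitive wait for explicit user confirmation before executing.
# Action names and the SENSITIVE_ACTIONS set are illustrative assumptions.

SENSITIVE_ACTIONS = {"payment", "account_change"}

def execute(action: str, payload: dict, confirm) -> str:
    """Run non-sensitive actions directly; gate sensitive ones on confirm()."""
    if action in SENSITIVE_ACTIONS and not confirm(action, payload):
        return "held_for_approval"
    return "executed"

# A booking proceeds on its own; a payment pauses until the user approves it.
assert execute("draft_booking", {"venue": "cafe"}, confirm=lambda a, p: False) == "executed"
assert execute("payment", {"amount": 42}, confirm=lambda a, p: False) == "held_for_approval"
assert execute("payment", {"amount": 42}, confirm=lambda a, p: True) == "executed"
```

Passing `confirm` as a callback keeps the approval UI (a dialogue, a push notification) decoupled from the agent logic, which mirrors how banking apps already separate transfer preparation from user authorisation.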

Banking apps already require confirmation for transfers. The same idea is now being applied to AI-driven actions in multiple services.

Limits and control

Another control layer comes from restricting what the AI can access. Rather than giving the system full access to apps and data, businesses are establishing limits, such as which apps the AI can interact with and when actions can be triggered.

In practice, this means the AI may be able to draft a purchase or prepare a booking, but not finalise it without approval. It also means the system cannot move freely in all services unless it has been granted permission.

According to Tom’s Guide, keeping processing on the device serves privacy: if data remains local, there is no need to send sensitive information to external servers.

In areas like payments, AI systems are expected to work with partners that already have strict rules in place. In one reported example, payment providers’ services are being integrated to provide secure authentication before transactions are completed, though such safeguards are still under development. The existing systems act as an additional layer of oversight. They can set transaction limits or require extra verification.

Much of the discussion around AI governance has focused on enterprise use. That includes areas like cybersecurity and large-scale automation. The consumer side introduces a different challenge and companies must design controls that work for everyday users. That means clear approval steps and built-in privacy protections.

Autonomy with boundaries

As AI gains the ability to carry out actions, the risks grow: errors can lead to financial loss or data exposure.

By placing controls at multiple points, including approval and infrastructure, companies are trying to manage those risks.

The approach may shape how agentic AI develops in the near term. Rather than aiming for full independence, companies appear focused on controlled environments where the risks can be managed.

(Photo by Junseong Lee)

See also: Agentic AI’s governance challenges under the EU AI Act in 2026


The post Why companies like Apple are building AI agents with limits appeared first on AI News.

]]>
Anthropic keeps new AI model private after it finds thousands of external vulnerabilities https://www.artificialintelligence-news.com/news/anthropic-keeps-new-ai-model-private-after-it-finds-thousands-of-external-vulnerabilities/ Thu, 09 Apr 2026 12:00:00 +0000 https://www.artificialintelligence-news.com/?p=112913 Anthropic’s most capable AI model has already found thousands of AI cybersecurity vulnerabilities across every major operating system and web browser. The company’s response was not to release it, but to quietly hand it to the organisations responsible for keeping the internet running. That model is Claude Mythos Preview, and the initiative is called Project Glasswing. […]

The post Anthropic keeps new AI model private after it finds thousands of external vulnerabilities appeared first on AI News.

]]>
Anthropic’s most capable AI model has already found thousands of cybersecurity vulnerabilities across every major operating system and web browser. The company’s response was not to release it, but to quietly hand it to the organisations responsible for keeping the internet running.

That model is Claude Mythos Preview, and the initiative is called Project Glasswing.

The launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. 

Beyond that core group, Anthropic has extended access to over 40 additional organisations that build or maintain critical software infrastructure. Anthropic is committing up to US$100 million in usage credits for Mythos Preview across the effort, along with US$4 million in direct donations to open-source security organisations. 

A model that outgrew its own benchmarks

Mythos Preview was not specifically trained for cybersecurity work. Anthropic said the capabilities “emerged as a downstream consequence of general improvements in code, reasoning, and autonomy”, and that the same improvements making the model better at patching vulnerabilities also make it better at exploiting them. 

That last part matters. Mythos Preview has improved to the extent that it largely saturates existing security benchmarks, forcing Anthropic to shift its focus to novel real-world tasks: zero-day vulnerabilities, flaws previously unknown to the software’s developers.

Among the findings: a 27-year-old bug in OpenBSD, an operating system known for its strong security posture. In another case, the model fully autonomously identified and exploited a 17-year-old remote code execution vulnerability in FreeBSD–CVE-2026-4747–that allows an unauthenticated user anywhere on the internet to obtain complete control of a server running NFS. No human was involved in the discovery or exploitation after the initial prompt to find the bug. 

Nicholas Carlini from Anthropic’s research team described the model’s ability to chain together vulnerabilities: “This model can create exploits out of three, four, or sometimes five vulnerabilities that in sequence give you some kind of very sophisticated end outcome. I’ve found more bugs in the last couple of weeks than I found in the rest of my life combined.” 

Why is it not being released?

“We do not plan to make Claude Mythos Preview generally available due to its cybersecurity capabilities,” Newton Cheng, Frontier Red Team Cyber Lead at Anthropic, said. “Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout–for economies, public safety, and national security–could be severe.” 

This is not hypothetical. Anthropic had previously disclosed what it described as the first documented case of a cyberattack largely executed by AI–a Chinese state-sponsored group that used AI agents to autonomously infiltrate roughly 30 global targets, with AI handling the majority of tactical operations independently. 

The company has also privately briefed senior US government officials on Mythos Preview’s full capabilities. The intelligence community is now actively weighing how the model could reshape both offensive and defensive hacking operations. 

The open-source problem

One dimension of Project Glasswing that goes beyond the headline coalition: open-source software. Jim Zemlin, CEO of the Linux Foundation, put it plainly: “In the past, security expertise has been a luxury reserved for organisations with large security teams. Open-source maintainers, whose software underpins much of the world’s critical infrastructure, have historically been left to figure out security on their own.”

Anthropic has donated US$2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, and US$1.5 million to the Apache Software Foundation–giving maintainers of critical open-source codebases access to AI cybersecurity vulnerability scanning at a scale that was previously out of reach.

What comes next

Anthropic says its eventual goal is to deploy Mythos-class models at scale, but only when new safeguards are in place. The company plans to launch new safeguards with an upcoming Claude Opus model first, allowing it to refine them with a model that does not pose the same level of risk as Mythos Preview. 

The competitive picture is already shifting around it. When OpenAI released GPT-5.3-Codex in February, the company called it the first model it had classified as high-capability for cybersecurity tasks under its Preparedness Framework. Anthropic’s move with Glasswing signals that the frontier labs see controlled deployment–not open release–as the emerging standard for models at this capability level.

Whether that standard holds as these capabilities spread further is, at this point, an open question that no single initiative can answer.

See Also: Anthropic’s refusal to arm AI is exactly why the UK wants it


The post Anthropic keeps new AI model private after it finds thousands of external vulnerabilities appeared first on AI News.

]]>
Microsoft open-source toolkit secures AI agents at runtime https://www.artificialintelligence-news.com/news/microsoft-open-source-toolkit-secures-ai-agents-at-runtime/ Wed, 08 Apr 2026 10:23:53 +0000 https://www.artificialintelligence-news.com/?p=112906 A new open-source toolkit from Microsoft focuses on runtime security to force strict governance onto enterprise AI agents. The release tackles a growing anxiety: autonomous language models are now executing code and hitting corporate networks way faster than traditional policy controls can keep up. AI integration used to mean conversational interfaces and advisory copilots. Those […]

The post Microsoft open-source toolkit secures AI agents at runtime appeared first on AI News.

]]>
A new open-source toolkit from Microsoft focuses on runtime security to force strict governance onto enterprise AI agents. The release tackles a growing anxiety: autonomous language models are now executing code and hitting corporate networks far faster than traditional policy controls can keep up.

AI integration used to mean conversational interfaces and advisory copilots. Those systems had read-only access to specific datasets, keeping humans strictly in the execution loop. Organisations are currently deploying agentic frameworks that take independent action, wiring these models directly into internal application programming interfaces, cloud storage repositories, and continuous integration pipelines.

When an autonomous agent can read an email, decide to write a script, and push that script to a server, stricter governance is vital. Static code analysis and pre-deployment vulnerability scanning simply cannot handle the non-deterministic nature of large language models. One prompt injection attack (or even a basic hallucination) could drive an agent to overwrite a database or exfiltrate customer records.

Microsoft’s new toolkit focuses on runtime security instead, providing a way to monitor, evaluate, and block actions at the moment the model tries to execute them, rather than relying on prior training or static parameter checks.

Intercepting the tool-calling layer in real time

Looking at the mechanics of agentic tool calling shows how this works. When an enterprise AI agent has to step outside its core neural network to do something like query an inventory system, it generates a command to hit an external tool.

Microsoft’s framework drops a policy enforcement engine right between the language model and the broader corporate network. Every time the agent tries to trigger an outside function, the toolkit grabs the request and checks the intended action against a central set of governance rules. If the action breaks policy (e.g. an agent authorised only to read inventory data tries to fire off a purchase order) the toolkit blocks the API call and logs the event so a human can review it.
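The interception pattern can be sketched as follows. To be clear, this is a minimal illustration of the idea, not Microsoft's actual toolkit API: the policy format, agent names, and tool registry are all assumptions.

```python
# Hedged sketch of runtime tool-call interception: a policy engine sits between
# the language model and external tools, vetting each call before it executes.
# The policy structure and tool names are illustrative, not Microsoft's API.

import json
import logging

logging.basicConfig(level=logging.INFO)

POLICY = {"inventory_agent": {"read_inventory"}}  # agent -> permitted tools

def call_tool(agent: str, tool: str, args: dict, registry: dict) -> dict:
    """Execute the requested tool only if policy allows; otherwise block and log."""
    if tool not in POLICY.get(agent, set()):
        logging.warning("BLOCKED %s -> %s %s", agent, tool, json.dumps(args))
        return {"status": "blocked", "tool": tool}
    return {"status": "ok", "result": registry[tool](**args)}

registry = {"read_inventory": lambda sku: {"sku": sku, "count": 7}}

# A read is permitted; an attempted purchase order is intercepted and logged.
ok = call_tool("inventory_agent", "read_inventory", {"sku": "A1"}, registry)
blocked = call_tool("inventory_agent", "create_purchase_order", {"sku": "A1"}, registry)
assert ok["status"] == "ok" and blocked["status"] == "blocked"
```

Because the check happens at the call site rather than inside the prompt, the policy can be updated centrally without touching any individual agent's instructions.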

Security teams get a verifiable, auditable trail of every single autonomous decision. Developers also win here; they can build complex multi-agent systems without having to hardcode security protocols into every individual model prompt. Security policies get decoupled from the core application logic entirely and are managed at the infrastructure level.

Most legacy systems were never built to talk to non-deterministic software. An old mainframe database or a customised enterprise resource planning suite doesn’t have native defenses against a machine learning model shooting over malformed requests. Microsoft’s toolkit steps in as a protective translation layer. Even if an underlying language model gets compromised by external inputs, the system’s perimeter holds.

Security leaders might wonder why Microsoft decided to release this runtime toolkit under an open-source license. It comes down to how modern software supply chains actually work.

Developers are currently rushing to build autonomous workflows using a massive mix of open-source libraries, frameworks, and third-party models. If Microsoft locked this runtime security feature to its proprietary platforms, development teams would probably just bypass it for faster, unvetted workarounds to hit their deadlines.

Pushing the toolkit out openly means security and governance controls can fit into any technology stack. It doesn’t matter if an organisation runs local open-weight models, leans on competitors like Anthropic, or deploys hybrid architectures.

Setting up an open standard for AI agent security also lets the wider cybersecurity community chip in. Security vendors can stack commercial dashboards and incident response integrations on top of this open foundation, which speeds up the maturity of the whole ecosystem. Businesses avoid vendor lock-in while still getting a universally scrutinised security baseline.

The next phase of enterprise AI governance

Enterprise governance doesn’t just stop at security; it hits financial and operational oversight too. Autonomous agents run in a continuous loop of reasoning and execution, burning API tokens at every step. Startups and enterprises are already seeing token costs explode when they deploy agentic systems.

Without runtime governance, an agent tasked with looking up a market trend might decide to hit an expensive proprietary database thousands of times before it finishes. Left alone, a badly configured agent caught in a recursive loop can rack up massive cloud computing bills in a few hours.

The runtime toolkit gives teams a way to slap hard limits on token consumption and API call frequency. By setting boundaries on exactly how many actions an agent can take within a specific timeframe, forecasting computing costs gets much easier. It also stops runaway processes from eating up system resources.
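The budget-and-rate-limit idea above is simple enough to sketch. The class name, limits, and error messages below are illustrative assumptions, not the toolkit’s real API — the point is just that a hard ceiling on tokens and a sliding window on actions together stop a runaway loop:

```python
import time

class AgentBudget:
    """Hard limits on token spend and action frequency for one agent (illustrative)."""

    def __init__(self, max_tokens: int, max_actions: int, window_seconds: float):
        self.max_tokens = max_tokens
        self.max_actions = max_actions
        self.window = window_seconds
        self.tokens_used = 0
        self.action_times: list[float] = []

    def charge(self, tokens: int) -> None:
        """Record one action and its token cost; raise once either limit is hit."""
        now = time.monotonic()
        # Keep only actions inside the sliding time window.
        self.action_times = [t for t in self.action_times if now - t < self.window]
        if len(self.action_times) >= self.max_actions:
            raise RuntimeError("action-rate limit reached: halting agent")
        if self.tokens_used + tokens > self.max_tokens:
            raise RuntimeError("token budget exhausted: halting agent")
        self.action_times.append(now)
        self.tokens_used += tokens
```

An agent caught in a recursive loop would exhaust one of the two limits within seconds rather than running up a cloud bill for hours, and the fixed ceiling is what makes cost forecasting tractable.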

A runtime governance layer hands over the quantitative metrics and control mechanisms needed to meet compliance mandates. The days of just trusting model providers to filter out bad outputs are ending. System safety now falls on the infrastructure that actually executes the models’ decisions.

Getting a mature governance program off the ground is going to demand tight collaboration between development operations, legal, and security teams. Language models are only scaling up in capability, and the organisations putting strict runtime controls in place today will be the ones equipped to handle the autonomous workflows of tomorrow.

See also: As AI agents take on more tasks, governance becomes a priority


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Microsoft open-source toolkit secures AI agents at runtime appeared first on AI News.

As AI agents take on more tasks, governance becomes a priority https://www.artificialintelligence-news.com/news/as-ai-agents-take-on-more-tasks-governance-becomes-a-priority/ Mon, 06 Apr 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112888 AI systems are starting to move beyond simple responses. In many organisations, AI agents are now being tested to plan tasks, make decisions, and carry out actions with limited human input. It is no longer just about whether a model gives the right answer. It is about what happens when that model is allowed to […]

AI systems are starting to move beyond simple responses. In many organisations, AI agents are now being tested to plan tasks, make decisions, and carry out actions with limited human input. It is no longer just about whether a model gives the right answer. It is about what happens when that model is allowed to act.

Autonomous systems need clear boundaries. They need rules that define what they can access, what they are allowed to do, and how their actions are tracked. Without those controls, even well-trained systems can create problems that are hard to detect or reverse.

One company working on this problem is Deloitte. The firm has been developing governance frameworks and advisory approaches to help organisations manage AI systems.

From tools to AI agents

Most AI systems in use today still depend on human prompts. They generate text, analyse data, or make predictions, but a person usually decides what happens next. Agentic AI changes that pattern. These systems can break down a goal into steps, choose actions, and interact with other systems to complete tasks.

That added independence brings new challenges. When a system acts on its own, it may take paths that were not fully expected or use data in ways that were not intended.

Deloitte’s work focuses on helping organisations prepare for these risks. Rather than treating AI as a standalone tool, the firm looks at how it fits into business processes, including how decisions are made and how data flows through systems.

Building governance into the lifecycle

Governance should not be added after deployment. It needs to be built into the full lifecycle of an AI system.

This starts at the design stage. Organisations need to define what a system is allowed to do and where its limits are. This may include setting rules around data use and outlining how the system should respond in uncertain situations.

The next stage is deployment. At this point, governance focuses on access and control, including who can use the system and what it can connect to. Once the system is live, monitoring becomes the main concern. Autonomous systems can change over time as they interact with new data. Without regular checks, they may drift away from their original purpose.

The role of transparency and accountability

As AI systems take on more responsibility, it becomes more difficult to trace how decisions are made. This creates a demand for stronger transparency. Deloitte’s work highlights the importance of keeping track of how systems operate. This includes logging actions and documenting decisions. These records help organisations determine what happened if something goes wrong. If an autonomous system takes an action, there needs to be clarity about who is responsible.

Research from Deloitte shows that adoption of AI agents is moving faster than the controls needed to manage them. Around 23% of companies already use them, and that figure is expected to reach 74% within two years. Only 21% report having strong safeguards in place to oversee how they behave.

Real-time oversight for AI agents

Once an autonomous system is active, the focus shifts to how it behaves in real-world conditions. Static rules are not always enough, and systems need to be observed as they operate.

Deloitte’s approach includes real-time monitoring, allowing organisations to track what an AI system is doing as it performs tasks. If the system behaves in an unexpected way, teams can step in quickly. This may involve pausing certain actions or adjusting permissions. Real-time oversight also helps with compliance. In regulated industries, companies need to show that systems follow rules and standards.

In practice, these controls are starting to appear in operational settings. Deloitte describes scenarios where AI systems monitor equipment performance across sites. Sensor data can signal early signs of failure, which can trigger maintenance workflows and update internal systems. Governance frameworks define what actions the system can take, when human approval is required, and how decisions are recorded. The process runs across multiple systems, but from a user’s point of view, it appears as a single action.

Governance is part of discussions at AI & Big Data Expo North America 2026, taking place on May 18–19 in Santa Clara, California. Deloitte is listed as a Diamond Sponsor for the event, placing it among the firms contributing to conversations around how autonomous systems are deployed and controlled in practice.

The challenge is not just building smarter systems, but ensuring they behave in ways organisations can understand, manage, and trust over time.

(Photo by Roman)

See also: Autonomous AI systems depend on data governance

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post As AI agents take on more tasks, governance becomes a priority appeared first on AI News.

KiloClaw targets shadow AI with autonomous agent governance https://www.artificialintelligence-news.com/news/kiloclaw-targets-shadow-ai-autonomous-agent-governance/ Thu, 02 Apr 2026 16:30:53 +0000 https://www.artificialintelligence-news.com/?p=112876 With the launch of KiloClaw, enterprises now have a tool to enforce governance over autonomous agents and manage shadow AI. While businesses spent the last year securing large language models and formalising vendor agreements, developers and knowledge workers started moving on their own. Employees are bypassing official procurement, deploying autonomous agents on personal infrastructure to […]

With the launch of KiloClaw, enterprises now have a tool to enforce governance over autonomous agents and manage shadow AI.

While businesses spent the last year securing large language models and formalising vendor agreements, developers and knowledge workers started moving on their own. Employees are bypassing official procurement, deploying autonomous agents on personal infrastructure to automate their daily workflows.

This practice, known as ‘Bring Your Own AI’ or BYOAI, exposes proprietary enterprise data to unregulated external environments. To address this vulnerability, software provider Kilo launched KiloClaw for Organizations, an enterprise-grade platform built to rein in decentralised agent deployments and restore architectural oversight.

Kilo targets the lack of visibility surrounding agent deployment. When engineers set up autonomous agents to parse error logs, or financial analysts deploy local scripts to reconcile spreadsheets, they prioritise immediate efficiency over security protocols. These agents routinely gain access to corporate Slack channels, Jira boards, and private code repositories through personal API keys.

Since these connections happen outside official IT purview, they create blind spots for data exfiltration and intellectual property leaks. KiloClaw provides a centralised control plane for security teams to identify, monitor, and restrict these autonomous actors without blocking their productivity gains.

The unseen infrastructure of Bring-Your-Own-Agent

The current shift mirrors the Bring Your Own Device (BYOD) era of the early 2010s, when employees used personal smartphones for corporate email and forced IT departments to adopt mobile device management.

The AI equivalent carries higher stakes. A compromised phone might expose a static inbox, but an unmonitored autonomous agent has active execution privileges. It reads, writes, modifies, and deletes data across integrated platforms at speeds humans cannot replicate.

These autonomous scripts also frequently rely on external computational power. An employee might run an agent locally while the agent sends corporate data to third-party inference servers to process queries. If those providers use the ingested data to train future models, the enterprise loses control of its intellectual property.

KiloClaw, for its part, establishes a secure boundary around these processes. Instead of ignoring external deployments, the platform pulls them into a registry where compliance officers can audit behaviour and data flows.

Identity and access management for autonomous AI agents

Governing autonomous systems requires a different technical architecture than managing a human workforce. Traditional Identity and Access Management (IAM) systems are built for human credentials or static application-to-application communication.

Autonomous agents, however, are dynamic. Agents chain tasks together sequentially, formulating new requests based on the output of previous actions. An agent might request access to an enterprise resource planning database halfway through a task, and standard security software struggles to determine if this is hostile behaviour or a legitimate operation.

KiloClaw treats agents as distinct entities requiring restrictive, time-bound permission scopes. Instead of developers plugging permanent, high-level API keys into experimental models, KiloClaw issues short-lived, narrowly defined access tokens.

If an agent designed to summarise weekly marketing emails attempts to download a customer database, the platform detects the scope violation and revokes access. This containment limits the blast radius within the corporate network if an open-source model behaves unpredictably.
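The short-lived, narrowly scoped tokens described above can be sketched as a small issuer that checks both expiry and granted scope on every request. KiloClaw’s real interface isn’t documented in the article, so the class, scope strings, and TTL handling below are all assumptions about the general design:

```python
import secrets
import time

class TokenIssuer:
    """Issues short-lived, narrowly scoped access tokens for agents (illustrative)."""

    def __init__(self):
        # token -> (granted scopes, absolute expiry time)
        self._tokens: dict[str, tuple[set[str], float]] = {}

    def issue(self, scopes: set[str], ttl_seconds: float) -> str:
        token = secrets.token_hex(16)
        self._tokens[token] = (scopes, time.monotonic() + ttl_seconds)
        return token

    def authorise(self, token: str, scope: str) -> bool:
        """A request passes only if the token is still live and the scope was granted."""
        entry = self._tokens.get(token)
        if entry is None:
            return False
        scopes, expires = entry
        if time.monotonic() > expires:
            del self._tokens[token]  # revoke expired tokens on sight
            return False
        return scope in scopes
```

In the article’s scenario, a token issued to an email-summarising agent with only an email-reading scope would fail authorisation for a customer-database scope, containing the blast radius exactly as described.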

How tools like KiloClaw balance velocity and compliance

Mandating a blanket ban on custom-built automation tools rarely works; it drives the behaviour underground, encouraging engineers to obfuscate traffic and hide workflows. Platforms like KiloClaw aim to construct a sanctioned environment where employees can safely register their tools.

For this governance framework to work, IT leaders need to prioritise integration. KiloClaw connects directly into the continuous integration and deployment pipelines that software teams already utilise. By automating security checks and permission provisioning, security teams remove the friction that causes employees to bypass rules.

Enterprises can establish baseline templates detailing what data external models can process, allowing workers to deploy agents within pre-approved boundaries. This maintains compliance without sacrificing workflow automation.

The development of shadow AI governance tools points to a new phase of algorithmic regulation. Early corporate reactions to generative models focused on acceptable use policies for text-based chatbots. Now, the focus is shifting toward orchestration, containment, and system-to-system accountability. Regulators globally are also examining how companies monitor automated systems, pushing verifiable oversight toward legal obligation.

As digital agents multiply within corporate networks, the concept of an ‘Agent Firewall’ is becoming a standard IT budget item. Platforms that map the relationships between human intent, machine execution, and corporate data will form the foundation of future security operations.

KiloClaw’s entry into the organisational governance space highlights a shifting reality for the C-suite: the immediate threat includes well-meaning employees handing network keys to unregulated machines. Establishing structural authority over these non-human actors is necessary to safely harness their potential.

See also: Autonomous AI systems depend on data governance


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post KiloClaw targets shadow AI with autonomous agent governance appeared first on AI News.
