World of Work - AI News https://www.artificialintelligence-news.com/categories/ai-and-us/world-of-work/ Artificial Intelligence News Wed, 15 Apr 2026 16:32:52 +0000 en-GB hourly 1 https://wordpress.org/?v=6.9.4 https://www.artificialintelligence-news.com/wp-content/uploads/2020/09/cropped-ai-icon-32x32.png World of Work - AI News https://www.artificialintelligence-news.com/categories/ai-and-us/world-of-work/ 32 32 Commvault launches a ‘Ctrl-Z’ for cloud AI workloads https://www.artificialintelligence-news.com/news/commvault-launches-ctrl-z-for-cloud-ai-workloads/ Wed, 15 Apr 2026 16:28:19 +0000 https://www.artificialintelligence-news.com/?p=113020 Enterprise cloud environments now have access to an undo feature for AI agents following the deployment of Commvault AI Protect. Autonomous software now roams across infrastructure, potentially deleting files, reading databases, spinning up server clusters, and even rewriting access policies. Commvault identified this governance issue and the data protection vendor has launched AI Protect, a […]

The post Commvault launches a ‘Ctrl-Z’ for cloud AI workloads appeared first on AI News.

]]>
Enterprise cloud environments now have access to an undo feature for AI agents following the deployment of Commvault AI Protect.

Autonomous software now roams across infrastructure, potentially deleting files, reading databases, spinning up server clusters, and even rewriting access policies. Commvault identified this governance issue and the data protection vendor has launched AI Protect, a system designed to discover, monitor, and forcefully roll back the actions of autonomous models operating inside AWS, Microsoft Azure, and Google Cloud.

Traditional governance relies entirely on static rules. You grant a human user specific permissions and that user performs a predictable, linear task. If something goes wrong, there’s clear responsibility. AI agents, however, exhibit emergent behaviour.

When given a complex prompt, an agent will string together approved permissions in potentially unapproved ways to solve the problem. If an agent decides the most efficient way to optimise cloud storage costs is to delete an entire production database, it will execute that command in milliseconds.

A human engineer might pause before executing a destructive command, questioning the logic. An AI agent simply follows its internal reasoning loop. It loops thousands of API requests a second, vastly outpacing the reaction times of human security operations centres.

Pranay Ahlawat, Chief Technology and AI Officer at Commvault, said: “In agentic environments, agents mutate state across data, systems, and configurations in ways that compound fast and are hard to trace. When something goes wrong, teams need to recover not just data, but the full stack – applications, agent configurations, and dependencies – back to a known good state.”

A new breed of governance tools for cloud AI agents

AI Protect is an example of emerging tools that continuously scan the enterprise cloud footprint to identify active agents. Shadow AI remains a massive difficulty for enterprise IT departments. Developers routinely spin up experimental agents using corporate credentials without notifying security teams and connect language models to internal data lakes to test new workflows.

Commvault forces these hidden actors into the light. Once identified, the software monitors the agent’s specific API calls and data interactions across AWS, Azure, and GCP. It logs every database read, every storage modification, and every configuration change.

The rollback feature provides the safety net. If a model hallucinates or misinterprets a command, administrators can revert the environment to its exact state before the machine initiated the destructive sequence.

However, cloud infrastructure is highly stateful and deeply interconnected. Reversing a complex chain of automated actions requires precise, ledger-based tracking. You cannot just restore a single database table if the machine also modified networking rules, triggered downstream serverless functions, and altered identity access management policies during its run.

Commvault bridges traditional backup architecture with continuous cloud monitoring to achieve this. By mapping the blast radius of the agent’s session, the software isolates the damage. It untangles the specific changes made by the AI from the legitimate changes made by human users during the same timeframe. This prevents a mass rollback from deleting valid customer transactions or wiping out hours of legitimate engineering work.

Machines will continue to execute tasks faster than human operators can monitor them. The priority now is implementing safeguards that guarantee autonomous actions can be instantly and accurately reversed.

See also: Citizen developers now have their own Wingman

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Commvault launches a ‘Ctrl-Z’ for cloud AI workloads appeared first on AI News.

]]>
SAP brings agentic AI to human capital management https://www.artificialintelligence-news.com/news/sap-brings-agentic-ai-human-capital-management/ Tue, 14 Apr 2026 12:55:09 +0000 https://www.artificialintelligence-news.com/?p=112997 According to SAP, integrating agentic AI into core human capital management (HCM) modules helps target operational bloat and reduce costs. SAP’s SuccessFactors 1H 2026 release aims to anticipate administrative bottlenecks before they stall daily operations by embedding a network of AI agents across recruiting, payroll, workforce administration, and talent development. Behind the user interface, these […]

The post SAP brings agentic AI to human capital management appeared first on AI News.

]]>
According to SAP, integrating agentic AI into core human capital management (HCM) modules helps target operational bloat and reduce costs.

SAP’s SuccessFactors 1H 2026 release aims to anticipate administrative bottlenecks before they stall daily operations by embedding a network of AI agents across recruiting, payroll, workforce administration, and talent development. Behind the user interface, these agents must monitor system states, identify anomalies, and prompt human operators with context-aware solutions.

Data synchronisation failures between distributed enterprise systems routinely require dedicated IT support teams to diagnose. When employee master data fails to replicate due to a missing attribute, downstream systems like access management and financial compensation halt.

The agentic approach uses analytical models to cross-reference peer data, identify the missing variable based on organisational patterns, and prompt the administrator with the required correction. This automated troubleshooting dramatically reduces the mean time to resolution for internal support tickets.

Implementing this level of autonomous monitoring requires severe engineering discipline. Integrating modern semantic search mechanisms with highly structured legacy relational databases requires extensive middleware configuration.

Running large language models in the background to continuously scan millions of employee records for inconsistencies consumes massive compute resources. CIOs must carefully balance the cloud infrastructure costs of continuous algorithmic monitoring against the operational savings generated by reduced IT ticket volumes.

To mitigate the risk of algorithmic hallucinations altering core financial data, engineering teams are forced to build strict guardrails. These retrieve-and-generate architectures must be firmly anchored to the company’s verified data lakes, ensuring the AI only acts upon validated corporate policies rather than generalised internet training data.

The SAP release attempts to streamline this knowledge retrieval by introducing intelligent question-and-answer capabilities within its learning module. This functionality delivers instant, context-aware responses drawn directly from an organisation’s learning content, allowing employees to bypass manual documentation searches entirely. The integration also introduces a growing workforce knowledge network that pulls trusted external employment guidance into daily workflows to support confident decision-making.

How SAP is using agentic AI to consolidate the HCM ecosystem

The updated architecture focuses on unified experiences that adapt to operational needs. For example, the delay between a signed offer letter to new talent and the employee achieving full productivity is a drag on profit margins.

Native integration combining SmartRecruiters solutions, SAP SuccessFactors Employee Central, and SAP SuccessFactors Onboarding streamlines the data flow from initial candidate interaction through to the new hire phase.

A candidate’s technical assessments, background checks, and negotiated terms pass automatically into the core human resources repository. Enterprises accelerate the onboarding timeline by eliminating the manual re-entry of personnel data—allowing new technical hires to begin contributing to active commercial projects faster.

Technical leadership teams understand that out-of-the-box software rarely matches internal enterprise processes perfectly. Customisation is necessary, but hardcoded extensions routinely break during cloud upgrade cycles, creating vast maintenance backlogs.

To manage this tension, the software introduces a new extensibility wizard. This tool provides guided, step-by-step support for building custom extensions directly on the SAP Business Technology Platform within the SuccessFactors environment.

By containing custom development within a governed platform environment, technology officers can adapt the interface to unique business requirements while preserving strict governance and ensuring future update compatibility.

Algorithmic auditing and margin protection

The 1H 2026 release incorporates pay transparency insights directly into the People Intelligence package within SAP Business Data Cloud to help with compliance with strict regulatory environments like the EU’s directives on pay transparency (which requires organisations to provide detailed and auditable justifications for wage discrepancies.)

Manual compilation of compensation data across multiple geographic regions and currency zones is highly error-prone. Using the People Intelligence package, organisations can analyse compensation patterns and potential pay gaps across demographics.

Automating this analysis provides a data-driven defence against compliance audits and aligns internal pay practices with evolving regulatory expectations, protecting the enterprise from both litigation costs and brand damage.

Preparing for future demands requires trusted and consistent skills data that leadership can rely on across talent deployment and workforce planning. Unstructured data, where one department labels a capability using differing terminology from another, breaks automated resource allocation models.

The update strengthens the SAP talent intelligence hub by introducing enhanced skills governance to provide administrators with a centralised interface for managing skill definitions, applying corporate standards, and ensuring data aligns across internal applications and external partner ecosystems. 

Standardising this data improves overall system quality and allows resource managers to make deployment decisions without relying on fragmented spreadsheets or guesswork. This inventory prevents organisations from having to outsource to expensive external contractors for capabilities they already possess internally.

By bringing together data, AI, and connected experiences, SAP’s latest enhancements show how agentic AI can help organisations reduce daily friction. For professionals looking to explore these types of enterprise AI integrations and connect directly with the company, SAP is a key sponsor of this year’s AI & Big Data Expo North America.

See also: IBM: How robust AI governance protects enterprise margins

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post SAP brings agentic AI to human capital management appeared first on AI News.

]]>
Strengthening enterprise governance for rising edge AI workloads https://www.artificialintelligence-news.com/news/strengthening-enterprise-governance-for-rising-edge-ai-workloads/ Mon, 13 Apr 2026 13:02:01 +0000 https://www.artificialintelligence-news.com/?p=112976 Models like Google Gemma 4 are increasing enterprise AI governance challenges for CISOs as they scramble to secure edge workloads. Security chiefs have built massive digital walls around the cloud; deploying advanced cloud access security brokers and routing every piece of traffic heading to external large language models through monitored corporate gateways. The logic was […]

The post Strengthening enterprise governance for rising edge AI workloads appeared first on AI News.

]]>
Models like Google Gemma 4 are increasing enterprise AI governance challenges for CISOs as they scramble to secure edge workloads.

Security chiefs have built massive digital walls around the cloud; deploying advanced cloud access security brokers and routing every piece of traffic heading to external large language models through monitored corporate gateways. The logic was sound to boards and executive committees—keep the sensitive data inside the network, police the outgoing requests, and intellectual property remains entirely safe from external leaks.

Google just obliterated that perimeter with the release of Gemma 4. Unlike massive parameter models confined to hyperscale data centres, this family of open weights targets local hardware. It runs directly on edge devices, executes multi-step planning, and can operate autonomous workflows right on a local device.

On-device inference has become a glaring blind spot for enterprise security operations. Security analysts cannot inspect network traffic if the traffic never hits the network in the first place. Engineers can ingest highly classified corporate data, process it through a local Gemma 4 agent, and generate output without triggering a single cloud firewall alarm.

Collapse of API-centric defences

Most corporate IT frameworks treat machine learning tools like standard third-party software vendors. You vet the provider, sign a massive enterprise data processing agreement, and funnel employee traffic through a sanctioned digital gateway. This standard playbook falls apart the moment an engineer downloads an Apache 2.0 licensed model like Gemma 4 and turns their laptop into an autonomous compute node.

Google paired this new model rollout with the Google AI Edge Gallery and a highly optimised LiteRT-LM library. These tools drastically accelerate local execution speeds while providing highly structured outputs required for complex agentic behaviours. An autonomous agent can now sit quietly on a local machine, iterate through thousands of logic steps, and execute code locally at impressive speed.

European data sovereignty laws and strict global financial regulations mandate complete auditability for automated decision-making. When a local agent hallucinates, makes a catastrophic error, or inadvertently leaks internal code across a shared corporate Slack channel, investigators require detailed logs. If the model operates entirely offline on local silicon, those logs simply do not exist inside the centralised IT security dashboard.

Financial institutions stand to lose the most from this architectural adjustment. Banks have spent millions implementing strict API logging to satisfy regulators investigating generative machine learning usage. If algorithmic trading strategies or proprietary risk assessment protocols are parsed by an unmonitored local agent, the bank violates multiple compliance frameworks simultaneously.

Healthcare networks face a similar reality. Patient data processed through an offline medical assistant running Gemma 4 might feel secure because it never leaves the physical laptop. The reality is that unlogged processing of health data violates the core tenets of modern medical auditing. Security leaders must prove how data was handled, what system processed it, and who authorised the execution.

The intent-control dilemma

Industry researchers often refer to this current phase of technological adoption as the governance trap. Management teams panic when they lose visibility. They attempt to rein in developer behaviour by throwing more bureaucratic processes at the problem, mandate sluggish architecture review boards, and force engineers to fill out extensive deployment forms before installing any new repository.

Bureaucracy rarely stops a motivated developer facing an aggressive product deadline; it just forces the entire behaviour further underground. This creates a shadow IT environment powered by autonomous software.

Real governance for local systems requires a different architectural approach. Instead of trying to block the model itself, security leaders must focus intensely on intent and system access. An agent running locally via Gemma 4 still requires specific system permissions to read local files, access corporate databases, or execute shell commands on the host machine.

Access management becomes the new digital firewall. Rather than policing the language model, identity platforms must tightly restrict what the host machine can physically touch. If a local Gemma 4 agent attempts to query a restricted internal database, the access control layer must flag the anomaly immediately.

Enterprise governance in the edge AI era

We are watching the definition of enterprise infrastructure expand in real-time. A corporate laptop is no longer just a dumb terminal used to access cloud services over a VPN; it’s an active compute node capable of running sophisticated autonomous planning software.

The cost of this new autonomy is deep operational complexity. CTOs and CISOs face a requirement to deploy endpoint detection tools specifically tuned for local machine learning inference. They desperately need systems that can differentiate between a human developer compiling standard code, and an autonomous agent rapidly iterating through local file structures to solve a complex prompt.

The cybersecurity market will inevitably catch up to this new reality. Endpoint detection and response vendors are already prototyping quiet agents that monitor local GPU utilisation and flag unauthorised inference workloads. However, those tools remain in their infancy today.

Most corporate security policies written in 2023 assumed all generative tools lived comfortably in the cloud. Revising them requires an uncomfortable admission from the executive board that the IT department no longer dictates exactly where compute happens.

Google designed Gemma 4 to put state-of-the-art agentic skills directly into the hands of anyone with a modern processor. The open-source community will adopt it with aggressive speed. 

Enterprises now face a very short window to figure out how to police code they do not host, running on hardware they cannot constantly monitor. It leaves every security chief staring at their network dashboard with one question: What exactly is running on endpoints right now?

See also: Companies expand AI adoption while keeping control

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Strengthening enterprise governance for rising edge AI workloads appeared first on AI News.

]]>
IBM: How robust AI governance protects enterprise margins https://www.artificialintelligence-news.com/news/ibm-how-robust-ai-governance-protects-enterprise-margins/ Fri, 10 Apr 2026 13:57:15 +0000 https://www.artificialintelligence-news.com/?p=112947 To protect enterprise margins, business leaders must invest in robust AI governance to securely manage AI infrastructure. When evaluating enterprise software adoption, a recurring pattern dictates how technology matures across industries. As Rob Thomas, SVP and CCO at IBM, recently outlined, software typically graduates from a standalone product to a platform, and then from a […]

The post IBM: How robust AI governance protects enterprise margins appeared first on AI News.

]]>
To protect enterprise margins, business leaders must invest in robust AI governance to securely manage AI infrastructure.

When evaluating enterprise software adoption, a recurring pattern dictates how technology matures across industries. As Rob Thomas, SVP and CCO at IBM, recently outlined, software typically graduates from a standalone product to a platform, and then from a platform to foundational infrastructure, altering the governing rules entirely.

At the initial product stage, exerting tight corporate control often feels highly advantageous. Closed development environments iterate quickly and tightly manage the end-user experience. They capture and concentrate financial value within a single corporate entity, an approach that functions adequately during early product development cycles.

However, IBM’s analysis highlights that expectations change entirely when a technology solidifies into a foundational layer. Once other institutional frameworks, external markets, and broad operational systems rely on the software, the prevailing standards adapt to a new reality. At infrastructure scale, embracing openness ceases to be an ideological stance and becomes a highly practical necessity.

AI is currently crossing this threshold within the enterprise architecture stack. Models are increasingly embedded directly into the ways organisations secure their networks, author source code, execute automated decisions, and generate commercial value. AI functions less as an experimental utility and more as core operational infrastructure.

The recent limited preview of Anthropic’s Claude Mythos model brings this reality into sharper focus for enterprise executives managing risk. Anthropic reports that this specific model can discover and exploit software vulnerabilities at a level matching few human experts.

In response to this power, Anthropic launched Project Glasswing, a gated initiative designed to place these advanced capabilities directly into the hands of network defenders first. From IBM’s perspective, this development forces technology officers to confront immediate structural vulnerabilities. If autonomous models possess the capability to write exploits and shape the overall security environment, Thomas notes that concentrating the understanding of these systems within a small number of technology vendors invites severe operational exposure.

With models achieving infrastructure status, IBM argues the primary issue is no longer exclusively what these machine learning applications can execute. The priority becomes how these systems are constructed, governed, inspected, and actively improved over extended periods.

As underlying frameworks grow in complexity and corporate importance, maintaining closed development pipelines becomes exceedingly difficult to defend. No single vendor can successfully anticipate every operational requirement, adversarial attack vector, or system failure mode.

Implementing opaque AI structures introduces heavy friction across existing network architecture. Connecting closed proprietary models with established enterprise vector databases or highly sensitive internal data lakes frequently creates massive troubleshooting bottlenecks. When anomalous outputs occur or hallucination rates spike, teams lack the internal visibility required to diagnose whether the error originated in the retrieval-augmented generation pipeline or the base model weights.

Integrating legacy on-premises architecture with highly gated cloud models also introduces severe latency into daily operations. When enterprise data governance protocols strictly prohibit sending sensitive customer information to external servers, technology teams are left attempting to strip and anonymise datasets before processing. This constant data sanitisation creates enormous operational drag. 

Furthermore, the spiralling compute costs associated with continuous API calls to locked models erode the exact profit margins these autonomous systems are supposed to enhance. The opacity prevents network engineers from accurately sizing hardware deployments, forcing companies into expensive over-provisioning agreements to maintain baseline functionality.

Why open-source AI is essential for operational resilience

Restricting access to powerful applications is an understandable human instinct that closely resembles caution. Yet, as Thomas points out, at massive infrastructure scale, security typically improves through rigorous external scrutiny rather than through strict concealment.

This represents the enduring lesson of open-source software development. Open-source code does not eliminate enterprise risk. Instead, IBM maintains it actively changes how organisations manage that risk. An open foundation allows a wider base of researchers, corporate developers, and security defenders to examine the architecture, surface underlying weaknesses, test foundational assumptions, and harden the software under real-world conditions.

Within cybersecurity operations, broad visibility is rarely the enemy of operational resilience. In fact, visibility frequently serves as a strict prerequisite for achieving that resilience. Technologies deemed highly important tend to remain safer when larger populations can challenge them, inspect their logic, and contribute to their continuous improvement.

Thomas addresses one of the oldest misconceptions regarding open-source technology: the belief that it inevitably commoditises corporate innovation. In practical application, open infrastructure typically pushes market competition higher up the technology stack. Open systems transfer financial value rather than destroying it.

As common digital foundations mature, the commercial value relocates toward complex implementation, system orchestration, continuous reliability, trust mechanics, and specific domain expertise. IBM’s position asserts that the long-term commercial winners are not those who own the base technological layer, but rather the organisations that understand how to apply it most effectively.

We have witnessed this identical pattern play out across previous generations of enterprise tooling, cloud infrastructure, and operating systems. Open foundations historically expanded developer participation, accelerated iterative improvement, and birthed entirely new, larger markets built on top of those base layers. Enterprise leaders increasingly view open-source as highly important for infrastructure modernisation and emerging AI capabilities. IBM predicts that AI is highly likely to follow this exact historical trajectory.

Looking across the broader vendor ecosystem, leading hyperscalers are adjusting their business postures to accommodate this reality. Rather than engaging in a pure arms race to build the largest proprietary black boxes, highly profitable integrators are focusing heavily on orchestration tooling that allows enterprises to swap out underlying open-source models based on specific workload demands. Highlighting its ongoing leadership in this space, IBM is a key sponsor of this year’s AI & Big Data Expo North America, where these evolving strategies for open enterprise infrastructure will be a primary focus.

This approach completely sidesteps restrictive vendor lock-in and allows companies to route less demanding internal queries to smaller and highly efficient open models, preserving expensive compute resources for complex customer-facing autonomous logic. By decoupling the application layer from the specific foundation model, technology officers can maintain operational agility and protect their bottom line.

The future of enterprise AI demands transparent governance

Another pragmatic reason for embracing open models revolves around product development influence. IBM emphasises that narrow access to underlying code naturally leads to narrow operational perspectives. In contrast, who gets to participate directly shapes what applications are eventually built. 

Providing broad access enables governments, diverse institutions, startups, and varied researchers to actively influence how the technology evolves and where it is commercially applied. This inclusive approach drives functional innovation while simultaneously building structural adaptability and necessary public legitimacy.

As Thomas argues, once autonomous AI assumes the role of core enterprise infrastructure, relying on opacity can no longer serve as the organising principle for system safety. The most reliable blueprint for secure software has paired open foundations with broad external scrutiny, active code maintenance, and serious internal governance.

As AI permanently enters its infrastructure phase, IBM contends that identical logic increasingly applies directly to the foundation models themselves. The stronger the corporate reliance on a technology, the stronger the corresponding case for demanding openness.

If these autonomous workflows are truly becoming foundational to global commerce, then transparency ceases to be a subject of casual debate. According to IBM, it is an absolute, non-negotiable design requirement for any modern enterprise architecture.

See also: Why companies like Apple are building AI agents with limits

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post IBM: How robust AI governance protects enterprise margins appeared first on AI News.

]]>
KPMG: Inside the AI agent playbook driving enterprise margin gains https://www.artificialintelligence-news.com/news/kpmg-inside-ai-agent-playbook-enterprise-margin-gains/ Wed, 01 Apr 2026 15:24:01 +0000 https://www.artificialintelligence-news.com/?p=112839 Global AI investment is accelerating, yet KPMG data shows the gap between enterprise AI spend and measurable business value is widening fast. The headline figure from KPMG’s first quarterly Global AI Pulse survey is blunt: despite global organisations planning to spend a weighted average of $186 million on AI over the next 12 months, only […]

The post KPMG: Inside the AI agent playbook driving enterprise margin gains appeared first on AI News.

]]>
Global AI investment is accelerating, yet KPMG data shows the gap between enterprise AI spend and measurable business value is widening fast.

The headline figure from KPMG’s first quarterly Global AI Pulse survey is blunt: despite global organisations planning to spend a weighted average of $186 million on AI over the next 12 months, only 11 percent have reached the stage of deploying and scaling AI agents in ways that produce enterprise-wide business outcomes.

However, the central finding is not that AI is failing; 64 percent of respondents say AI is already delivering meaningful business outcomes. The problem is that “meaningful” is doing a lot of heavy lifting in that sentence, and the distance between incremental productivity gains and the kind of compounding operational efficiency that moves the needle on margin is, for most organisations, still substantial.

The architecture of a performance gap

KPMG’s report distinguishes between what it labels “AI leaders” (i.e. organisations that are scaling or actively operating agentic AI) and everyone else. The gap in outcomes between these two cohorts is striking.

Headshot of Steve Chase, Global Head of AI and Digital Innovation at KPMG International.

Steve Chase, Global Head of AI and Digital Innovation at KPMG International, said: “The first Global AI Pulse results reinforce that spending more on AI is not the same as creating value. Leading organisations are moving beyond enablement, deploying AI agents to reimagine processes and reshape how decisions and work flow across the enterprise.”

Among AI leaders, 82 percent report that AI is already delivering meaningful business value. Among their peers, that figure drops to 62 percent. That 20-percentage-point spread might look modest in isolation, but it compounds quickly when you consider what it reflects: not just better tooling, but fundamentally different deployment philosophies.

The organisations in that 11 percent are deploying agents that coordinate work across functions, route decisions without human intermediation at every step, surface enterprise-wide insights from operational data in near real-time, and flag anomalies before they escalate into incidents.

In IT and engineering functions, 75 percent of AI leaders are using agents to accelerate code development versus 64 percent of their peers. In operations, where supply-chain orchestration is the primary use case, the split is 64 percent versus 55 percent. These are not marginal differences in tool adoption rates; they reflect different levels of process re-architecture.

Most enterprises that have deployed AI have done so by layering models onto existing workflows (e.g. a co-pilot here, a summarisation tool there…) without redesigning the process those tools sit inside. That produces incremental gains.

The organisations closing the performance gap have inverted this approach: they are redesigning the process first, then deploying agents to operate within the redesigned structure. The difference in return on AI spend between these two approaches, over a three-to-five-year horizon, is likely to be the defining competitive variable in several industries.

What $186 million actually buys—and what it does not

The investment figures in the KPMG data deserve scrutiny. A weighted global average of $186 million per organisation sounds substantial, but the regional variance tells a more interesting story.

ASPAC leads at $245 million, the Americas at $178 million, and EMEA at $157 million. Within ASPAC, organisations including those in China and Hong Kong are investing at $235 million on average; within the Americas, US organisations are at $207 million.

These figures represent planned spend across model licensing, compute infrastructure, professional services, integration, and the governance and risk management apparatus needed to operate AI responsibly at scale.

The question is not whether $186 million is too much or too little; it is what proportion of that figure is being allocated to the operational infrastructure required to derive value from the models themselves. The survey data suggests that most organisations are still underweighting this latter category.

Compute and licensing costs are visible and relatively easy to budget for. The friction costs – the engineering hours spent integrating AI outputs with legacy ERP systems, the latency introduced by retrieval-augmented generation pipelines built on top of poorly structured data, and the compliance overhead of maintaining audit trails for AI-assisted decisions in regulated industries – tend to surface late in deployment cycles and often exceed initial estimates.

Vector database integration is a useful example. Many agentic workflows depend on the ability to retrieve relevant context from large, unstructured document repositories in real time. Building and maintaining the infrastructure for this – selecting between providers such as Pinecone, Weaviate, or Qdrant, embedding and indexing proprietary data, and managing refresh cycles as underlying data changes – adds meaningful engineering complexity and ongoing operational cost that rarely appears in initial AI investment proposals. 

When that infrastructure is absent or poorly maintained, agent performance degrades in ways that are often difficult to diagnose, as the model’s behaviour is correct relative to the context it receives, but that context is stale or incomplete.

Governance as an operational variable, not a compliance exercise

Perhaps the most practically useful finding in the KPMG survey is the relationship between AI maturity and risk confidence.

Among organisations still in the experimentation phase, just 20 percent feel confident in their ability to manage AI-related risks. Among AI leaders, that figure rises to 49 percent. 75 percent of global leaders cite data security, privacy, and risk as ongoing concerns regardless of maturity level—but maturity changes how those concerns are operationalised.

This is an important distinction for boards and risk functions that tend to frame AI governance as a constraint on deployment. The KPMG data suggests the opposite dynamic: governance frameworks do not slow AI adoption among mature organisations; they enable it. The confidence to move faster – to deploy agents into higher-stakes workflows, to expand agentic coordination across functions – correlates directly with the maturity of the governance infrastructure surrounding those agents.

In practice, this means that organisations treating governance as a retrospective compliance layer are doubly disadvantaged. They are slower to deploy, because every new use case triggers a fresh governance review, and they are more exposed to operational risk, because the absence of embedded governance mechanisms means that edge cases and failure modes are discovered in production rather than in testing.

Organisations that have embedded governance into the deployment pipeline itself (e.g. model cards, automated output monitoring, explainability tooling, and human-in-the-loop escalation paths for low-confidence decisions) are the ones operating with the confidence that allows them to scale.

“Ultimately, there is no agentic future without trust and no trust without governance that keeps pace,” explains Steve Chase, Global Head of AI and Digital Innovation at KPMG International. “The survey makes clear that sustained investment in people, training and change management is what allows organisations to scale AI responsibly and capture value.”

Regional divergence and what it signals for global deployment

For multinationals managing AI programmes across regions, the KPMG data flags material differences in deployment velocity and organisational posture that will affect global rollout planning.

ASPAC is advancing most aggressively on agent scaling; 49 percent of organisations there are scaling AI agents, compared with 46 percent in the Americas and 42 percent in EMEA. ASPAC also leads on the more complex capability of orchestrating multi-agent systems, at 33 percent.

The barrier profiles also differ in ways that carry real operational implications. In both ASPAC and EMEA, 24 percent of organisations cite a lack of leadership trust and buy-in as a primary barrier to AI agent deployment. In the Americas, that figure drops to 17 percent.

Agentic systems, by definition, make or initiate decisions without per-instance human approval. In organisational cultures where decision accountability is tightly concentrated at the senior level, this can generate institutional resistance that no amount of technical capability resolves. The fix is governance design; specifically, defining in advance what categories of decision an agent is authorised to make autonomously, what triggers escalation, and who carries accountability for agent-initiated outcomes.

The expectation gap around human-AI collaboration is also worth noting for anyone designing agent-assisted workflows at a global scale.

East Asian respondents anticipate AI agents leading projects at a rate of 42 percent. Australian respondents prefer human-directed AI at 34 percent. North American respondents lean toward peer-to-peer human-AI collaboration at 31 percent. These differences will affect how agent-assisted processes need to be designed in different regional deployments of the same underlying system, adding localisation complexity that is easy to underestimate in centralised platform planning.

One data point in the KPMG survey that deserves particular attention from CFOs and boards: 74 percent of respondents say AI will remain a top investment priority even in the event of a recession. This is either a sign of genuine conviction about AI’s role in cost structure and competitive positioning, or it reflects a collective commitment that has not yet been tested against actual budget pressure. Probably both, in different proportions across different organisations.

What it does indicate is that the window for organisations still in the experimentation phase is not indefinite. If the 11 percent of AI leaders continue to compound their advantage (and the KPMG data suggests the mechanisms for doing so are in place) the question for the remaining 89 percent is not whether to accelerate AI deployment, but how to do so without compounding the integration debt and governance deficits that are already constraining their returns.

See also: Hershey applies AI across its supply chain operations

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post KPMG: Inside the AI agent playbook driving enterprise margin gains appeared first on AI News.

]]>
SAP and ANYbotics drive industrial adoption of physical AI https://www.artificialintelligence-news.com/news/sap-and-anybotics-drive-industrial-adoption-physical-ai/ Tue, 31 Mar 2026 15:20:53 +0000 https://www.artificialintelligence-news.com/?p=112821 Heavy industry relies on people to inspect hazardous, dirty facilities. It’s expensive, and putting humans in these zones carries obvious safety risks. Swiss robot maker ANYbotics and software company SAP are trying to change that. ANYbotics’ four-legged autonomous robots will be connected straight into SAP’s backend enterprise resource planning software. Instead of treating a robot […]

The post SAP and ANYbotics drive industrial adoption of physical AI appeared first on AI News.

]]>
Heavy industry relies on people to inspect hazardous, dirty facilities. It’s expensive, and putting humans in these zones carries obvious safety risks. Swiss robot maker ANYbotics and software company SAP are trying to change that.

ANYbotics’ four-legged autonomous robots will be connected straight into SAP’s backend enterprise resource planning software. Instead of treating a robot as a standalone asset, this turns it into a mobile data-gathering node within an industrial IoT network.

This initiative shows that hardware innovation can now effectively connect with established business workflows. Underscoring that broader trend, SAP is sponsoring this year’s AI & Big Data Expo North America at the San Jose McEnery Convention Center, CA, an event that is fittingly co-located with the IoT Tech Expo and Intelligent Automation & Physical AI Summit.

When equipment breaks at a chemical plant or offshore rig, it costs a fortune. People do routine inspections to catch these issues early, but humans get tired and plants are massive. Robots, on the other hand, can walk the floor constantly, carrying thermal, acoustic, and visual sensors. Hook those sensors into SAP, and a hot pump instantly generates a maintenance request without waiting for a human to report it.

Cutting out the reporting lag

Usually, finding a problem and logging a work order are two disconnected steps. A worker might hear a weird noise in a compressor, write it down, and type it into a computer hours later. By the time the replacement part gets approved, the machine might be wrecked.

Connecting ANYbotics to SAP eliminates that delay. The robot’s onboard AI processes what it sees and hears instantly. If it hears an irregular motor frequency, it doesn’t just flash a warning on a separate screen, it uses APIs to tell the SAP asset management module directly. The system immediately checks for spare parts, figures out the cost of potential downtime, and schedules an engineer.

This automates the flow of information from the floor to management. It also means machinery gets judged on hard, consistent numbers instead of a human inspector’s subjective opinion.

Putting robots in heavy industry isn’t like installing software in an office—companies have to deal with unreliable infrastructure. Factories usually have awful internet connectivity due to thick concrete, metal scaffolding, and electromagnetic interference.

To make this work, the setup relies on edge computing. It takes too much bandwidth to constantly stream high-def thermal video and lidar data to the cloud. So, the robots crunch most of that data locally. Onboard processors figure out the difference between a machine running normally and one that’s dangerously overheating. They only send the crucial details (i.e. the specific fault and its location) back to SAP.

To handle the network issues, many early adopters build private 5G networks. This gives them the coverage they need across huge facilities where regular Wi-Fi fails. It also locks down access, keeping the robot’s data safe from interception.

Of course, security is a major issue. A walking robot packed with cameras is effectively a roaming vulnerability. Companies must use zero-trust network protocols to constantly verify the robot’s identity and limit what SAP modules it can touch. If the robot gets hacked, the system has to cut its connection instantly to stop the attackers from moving laterally into the corporate network.

These robots generate a massive amount of unstructured data as they walk around. Turning raw audio and thermal images into the neat tables SAP requires is difficult.

If companies don’t manage this right, maintenance teams will drown in alerts. A robot that is too sensitive might spit out hundreds of useless warnings a day, making the SAP dashboard completely ignored. IT teams have to set strict rules before turning the system on. They need exact thresholds for what triggers a real maintenance ticket and what just needs to be watched.

The setup usually uses middleware to translate the robot’s telemetry into SAP’s language. This software acts as a filter, throwing out the noise so only actual problems reach the ERP system. The data lake storing all this information also needs to be organised for future machine learning projects. Fixing broken machines is the short-term goal; the long-term payoff is using years of robot data to predict failures before they happen.

Ensuring a successful physical AI deployment

Dropping robots into a factory naturally makes people nervous. The project’s success often comes down to how human resources handles it. Workers usually look at the robots and assume layoffs are next.

Management has to be clear about why the robots are there. The goal is to get people out of dangerous areas like high-voltage zones or toxic chemical sectors to reduce injuries. The robot collects the data, and the human engineer shifts to analysing that data and doing the actual repairs.

This requires retraining. Workers who used to walk the perimeter now have to read SAP dashboards, manage automated tickets, and work with the robots. They have to trust the sensors, and management has to make sure operators know they can take manual control if something unexpected happens.

Companies need to take the rollout slowly. Because syncing physical robots with enterprise software is complicated, large-scale rollouts should start as small, targeted pilots.

The first test should be in one specific area with known hazards but rock-solid internet. This lets IT watch the data flow between the hardware and SAP in a controlled space. At this stage, the main job is making sure the data matches reality. If the robot sees one thing and SAP records another, it has to be audited and fixed daily.

Once the data pipeline actually works, the company can add more robots and connect other systems, like automated parts ordering. IT chiefs have to keep checking if their private networks can handle more robots, while security teams update their defenses against new threats.

If companies treat these autonomous inspectors as an extension of their corporate data architecture, they get a massive amount of information about their physical assets. But pulling it off means getting the network infrastructure, the data rules, and the human element exactly right.

See also: The rise of invisible IoT in enterprise operations

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post SAP and ANYbotics drive industrial adoption of physical AI appeared first on AI News.

]]>
Secure governance accelerates financial AI revenue growth https://www.artificialintelligence-news.com/news/secure-governance-accelerates-financial-ai-revenue-growth/ Mon, 30 Mar 2026 15:54:58 +0000 https://www.artificialintelligence-news.com/?p=112817 Financial institutions are learning to deploy compliant AI solutions for greater revenue growth and market advantage. For the better part of ten years, financial institutions viewed AI primarily as a mechanism for pure efficiency gains. During that era, quantitative teams programmed systems designed to discover ledger discrepancies or eliminate milliseconds from automated trading execution times. […]

The post Secure governance accelerates financial AI revenue growth appeared first on AI News.

]]>
Financial institutions are learning to deploy compliant AI solutions for greater revenue growth and market advantage.

For the better part of ten years, financial institutions viewed AI primarily as a mechanism for pure efficiency gains. During that era, quantitative teams programmed systems designed to discover ledger discrepancies or eliminate milliseconds from automated trading execution times. As long as the quarterly balance sheets reflected positive gains, stakeholders outside the core engineering groups rarely scrutinised the actual maths driving these returns.

The arrival of generative applications and highly complex neural networks completely dismantled that widespread state of comfortable ignorance. Today, it’s not acceptable for banking executives to approve new technology rollouts based simply on promises of accurate predictive capabilities.

Across Europe and North America, lawmakers are aggressively drafting legislation aimed at punishing institutions that utilise opaque algorithmic decision-making processes. Consequently, the dialogue within corporate boardrooms has narrowed intensely to focus on safe AI deployment, ethics, model oversight, and legislation specific to the financial industry.

Institutions that choose to ignore this impending regulatory reality actively place their operational licenses in jeopardy. However, treating this transition purely as a compliance exercise ignores the immense commercial upside. Mastering these requirements creates a highly efficient operational pipeline where good governance functions as a massive accelerant for product delivery rather than an administrative handbrake.

Commercial lending and the price of opacity

The mechanics of retail and commercial lending perfectly illustrate the tangible business impact of proper algorithmic oversight.

Consider a scenario where a multinational bank introduces a deep learning framework to process commercial loan applications. This automated system evaluates credit scores, market sector volatility, and historical cash flows to generate an approval decision in a matter of milliseconds. The resulting competitive edge is immediate and obvious, as the institution reduces administrative overhead while clients secure necessary liquidity exactly when they require it.

However, the inherent danger of this velocity resides entirely within the training data. If the deployed model unknowingly utilises proxy variables that discriminate against a specific demographic or geographic area, the ensuing legal consequences are swift and punishing.

Modern regulators demand total explainability and categorically refuse to accept the complexity of neural networks as an excuse for discriminatory outcomes. When an external auditor investigates why a regional logistics enterprise was denied funding, the bank must possess the capability to trace that exact denial directly back to the specific mathematical weights and historical data points that caused the rejection.

Investing capital into ethics and oversight infrastructure is essentially how modern banks purchase speed-to-market. Constructing an ethically-sound and thoroughly vetted pipeline enables an institution to release new digital products without constantly looking over its shoulder out of fear. Guaranteeing fairness from the absolute beginning prevents nightmarish scenarios that involve delayed product rollouts and retrospective compliance audits. This level of operational confidence translates directly into sustained revenue generation while entirely avoiding massive regulatory penalties.

Engineering unbroken information provenance

Achieving this high standard of safety is impossible without adopting a brutal and uncompromising approach toward internal data maturity. Any algorithm merely reflects the information it consumes. 

Unfortunately, legacy banking institutions are infamous for maintaining highly fractured information architectures. It remains incredibly common to discover customer details resting on thirty-year-old mainframe systems, transaction histories floating in public cloud environments, and risk profiles gathering dust within entirely separate databases. Attempting to navigate this disjointed landscape makes achieving regulatory compliance physically impossible.

To rectify this, data officers must enforce the widespread adoption of comprehensive metadata management across the entire enterprise. Implementing strict data lineage tracking represents the only viable path forward. For example, if a live production model suddenly exhibits bias against minority-owned businesses, engineering teams require the exact capability to surgically isolate the specific dataset responsible for poisoning the results.

Constructing this underlying infrastructure mandates that every single byte of ingested training data becomes cryptographically signed and tightly version-controlled. Modern enterprise platforms must maintain an unbroken chain of custody for every input, stretching all the way from a customer’s initial interaction to the final algorithmic ruling.

Beyond data storage, integration issues arise when connecting advanced vector databases to these legacy systems. Vector embeddings require massive compute resources to process unstructured financial documents. If these databases are not perfectly synchronised with real-time transactional feeds, the AI risks generating severe hallucinations, presenting outdated or entirely fabricated financial advice as absolute fact.

Furthermore, as we’re currently all too aware, economic environments change at a rapid pace. A model trained on interest rates from three years ago will fail spectacularly in today’s market. Technology teams refer to this specific phenomenon as concept drift.

To combat this, developers must wire continuous monitoring systems directly into their live production algorithms. These specialised tools observe the model’s output in real-time, actively comparing results against baseline expectations. If the system begins to drift outside approved ethical parameters, the monitoring software automatically suspends the automated decision-making process.

Exceptional predictive accuracy means absolutely nothing without real-time observability; without it, a highly-tuned model becomes a corporate liability waiting to explode.

Defending the mathematical perimeter

Of course, implementing governance over financial algorithms introduces an entirely new category of operational headaches for CISOs. Traditional cybersecurity disciplines focus primarily on building protective walls around endpoints and corporate networks. Securing advanced AI, however, requires actively defending the actual mathematical integrity of the deployed models. This represents a complex discipline that most internal security operations centres barely understand.

Adversarial attacks present a very real and present danger to modern financial institutions. In a scenario known as a data poisoning attack, malicious actors subtly manipulate the external data feeds that a bank relies upon to train its internal fraud detection models. By doing so, they essentially teach the algorithm to turn a blind eye to specific and highly-lucrative types of illicit financial transfers.

Consider also the threat of prompt injection, where attackers utilise natural language inputs to trick generative customer service bots into freely handing over sensitive account details. Model inversion represents another nightmare scenario for executives, occurring when outsiders repeatedly query a public-facing algorithm until they successfully reverse-engineer the highly confidential financial data buried deep within its training weights.

To counter these evolving threats, security teams are forced to bury zero-trust architectures deep within the machine learning operations pipeline. Absolute device trust becomes non-negotiable. Only fully-authenticated data scientists, working exclusively on locked-down corporate endpoints, should ever possess the administrative permissions required to tweak model weights or introduce new data to the system.

Before any algorithm touches live financial data, it must successfully survive rigorous adversarial testing. Internal red teams must intentionally attempt to break the algorithm’s ethical guardrails using sophisticated simulation techniques. Surviving these simulated corporate attacks serves as a mandatory prerequisite for any public deployment.

Eradicating the engineering and compliance divide

The highest barrier to creating safe AI is rarely the underlying software itself; rather, it is the entrenched corporate culture.

For decades, a very thick wall separated software engineering departments from legal compliance teams. Developers were heavily incentivised to chase speed and rapid feature delivery. Conversely, compliance officers chased institutional safety and maximum risk mitigation. These groups typically operated from entirely different floors, used different software applications, and followed entirely different performance incentives.

That division has to come down. Data scientists can no longer construct models in an isolated engineering vacuum and then carelessly toss them over the fence to the legal team for a quick blessing. Legal constraints, ethical guidelines, and strict compliance rules must dictate the exact architecture of the algorithm starting on day one. Leaders need to actively force this internal collaboration by establishing cross-functional ethics boards. Banks should pack these specific committees with lead developers, corporate counsel, risk officers, and external ethicists.

When a particular business unit pitches a new automated wealth management application, this ethics board dissects the entire project. They must look past the projected profitability margins to deeply interrogate the societal impact and regulatory viability of the proposed tool.

By retraining software developers to view compliance as a core design requirement rather than annoying red tape, a bank actively builds a lasting culture of responsible innovation.

Managing vendor ecosystems and retaining control

The enterprise technology market recognises the urgency surrounding compliance and is aggressively pumping out algorithmic governance solutions.

The major cloud service providers now bake sophisticated compliance dashboards directly into their AI platforms. These tech giants offer banks automated audit trails, reporting templates designed to satisfy global regulators, and built-in bias-detection algorithms.

Simultaneously, a smaller ecosystem of independent startups offers highly specialised governance services. These agile firms focus entirely on testing model explainability or spotting complex concept drift exactly as it happens.

Purchasing these vendor solutions is highly tempting. Buying off-the-shelf software offers operational convenience and allows the enterprise to deploy governed algorithms without writing heavy auditing infrastructure from scratch. Startups are rapidly building application programming interfaces that plug directly into legacy banking systems, providing instant, third-party validation of internal models.

Despite these advantages, relying entirely on outsourced governance introduces a risk of vendor lock-in. If a bank ties its entire compliance architecture to one hyperscale cloud provider, migrating those specific models later to satisfy a new local data sovereignty law becomes an expensive and multi-year nightmare. 

A hard line must be drawn regarding open standards and system interoperability. The specific tools tracking data lineage and auditing model behaviour have to be completely portable across different environments. The bank must retain absolute control over its compliance posture, regardless of whose physical servers actually hold the algorithm.

Vendor contracts require ironclad provisions guaranteeing data portability and safe model extraction. A financial institution must always own its core intellectual property and internal governance frameworks. 

By fixing internal data maturity, securing the development pipeline against adversarial threats, and forcing legal and engineering teams to actually speak to one another, leaders can safely deploy modern algorithms. Treating strict compliance as the absolute foundation of engineering guarantees that AI drives secure and sustainable growth.

See also: Ocorian: Family offices turn to AI for financial data insights

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Secure governance accelerates financial AI revenue growth appeared first on AI News.

]]>
JPMorgan begins tracking how employees use AI at work https://www.artificialintelligence-news.com/news/jpmorgan-begins-tracking-how-employees-use-ai-at-work/ Mon, 30 Mar 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112785 Banking house JPMorgan Chase is asking its roughly 65,000 engineers and technologists to use AI tools as part of their regular workflow. Business Insider reported that managers are tracking how often staff use these tools. That use may also influence performance reviews. The report states employees are encouraged to use tools like ChatGPT and Claude […]

The post JPMorgan begins tracking how employees use AI at work appeared first on AI News.

]]>
Banking house JPMorgan Chase is asking its roughly 65,000 engineers and technologists to use AI tools as part of their regular workflow. Business Insider reported that managers are tracking how often staff use these tools. That use may also influence performance reviews.

The report states employees are encouraged to use tools like ChatGPT and Claude Code when writing code, reviewing documents, or handling routine tasks. Internal systems then classify workers based on their level of use. Some are labelled “light users,” while others fall into a “heavy user” category.

JPMorgan has been using in fraud detection and risk analysis. What stands out here is not the technology itself, but how it is being woven into day-to-day expectations for staff.

According to internal materials cited by Business Insider, managers are paying close attention to how employees use AI tools.

JPMorgan shows AI adoption in banks

Many companies have spent the past two years rolling out AI tools in departments. In most cases, adoption has been uneven. Some teams experiment heavily, while others stick to existing workflows.

JPMorgan is treating AI as a standard part of the job. That creates a more uniform level of adoption in teams. In the past, performance reviews focused on output and accuracy. Now, they may also include how effectively employees use AI tools to reach those results.

That raises a practical question for large organisations. If AI can reduce the time needed for certain tasks, should employees be expected to produce more work in the same amount of time?

Keeping pace with internal change

By tracking use, the bank may be trying to avoid a familiar problem in enterprise software rollouts. Tools are deployed, but adoption is slow, limiting their impact. Making AI part of performance reviews creates a stronger incentive to engage with the technology. It also suggests that AI literacy is becoming a baseline skill, similar to how spreadsheets or code tools became standard over time.

New challenges include employees feeling pressure to use AI even in cases where it does not clearly improve the outcome. There is also the matter of how to measure “good” use, as opposed to simply frequent use.

JPMorgan’s AI risks and efficiency gains

Banks operate in a regulated environment, where introducing AI into more workflows increases the need for oversight.

Tools like ChatGPT and Claude Code can help summarise information or generate drafts, but they can also produce incorrect or incomplete results. That means employees still need to verify outputs before using them in decision-making or client-facing work.

JPMorgan has developed internal controls for AI systems in areas like trading and risk. Expanding use in a broader group of employees may require similar safeguards, creating a situation for the bank in which it wants to improve efficiency, but also needs to ensure that heavier AI use does not introduce new risks.

Other financial institutions are likely watching closely. If tying AI use to performance leads to measurable gains in productivity, similar models may spread in the sector.

The bank’s approach may reshape how companies hire and train employees, and skills like prompt writing and output checks could become part of standard job requirements. JPMorgan’ approach suggests that this change is already underway, at least in banking.

(Photo by IKECHUKWU JULIUS UGWU)

See also: RPA matters, but AI changes how automation works

Want to experience the full spectrum of enterprise technology innovation? Join TechEx in Amsterdam, California, and London. Covering AI, Big Data, Cyber Security, IoT, Digital Transformation, Intelligent Automation, Edge Computing, and Data Centres, TechEx brings together global leaders to share real-world use cases and in-depth insights. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post JPMorgan begins tracking how employees use AI at work appeared first on AI News.

]]>
AI agents enter banking roles at Bank of America https://www.artificialintelligence-news.com/news/ai-agents-enter-banking-roles-at-bank-of-america/ Wed, 25 Mar 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112768 AI agents are starting to take on a more direct role in how financial advice is delivered, as large banks move into systems that support client interactions. Bank of America is now deploying an internal AI-powered advisory platform to a subset of financial advisers, rolled out to around 1,000 financial advisers, according to Banking Dive. […]

The post AI agents enter banking roles at Bank of America appeared first on AI News.

]]>
AI agents are starting to take on a more direct role in how financial advice is delivered, as large banks move into systems that support client interactions.

Bank of America is now deploying an internal AI-powered advisory platform to a subset of financial advisers, rolled out to around 1,000 financial advisers, according to Banking Dive. The move is one of the clearer early examples of how AI is being used in core banking roles, where systems support decision-making in real time.

The platform is based on Salesforce’s Agentforce, which enables the creation of AI agents to handle tasks. It is designed to help advisers handle client queries and prepare recommendations. It can also help manage daily workflows. According to Banking Dive, the system is part of a wider push among major banks to test how AI agents can work alongside human staff.

Bank of America has been expanding its use of AI in its business. It’s said its virtual assistant Erica handles work equivalent to about 11,000 employees, while 18,000 software developers use AI coding tools that have improved productivity by around 20%.

AI agents move to financial decision-making

The approach differs from earlier deployments of AI in banking, which focused mainly on chatbots or internal productivity tools. In those cases, AI was used to answer simple questions or automate routine tasks. The newer systems are built to handle more complex work, including analysing client data.

Firms like JPMorgan, Wells Fargo, and Goldman Sachs are also testing AI tools aimed at improving productivity and helping staff in client-facing roles, though these efforts vary and are not always focused on advisor-specific AI agent systems. While each bank is taking a different approach, the common goal is to increase output without expanding headcount.

Banks report gains in how quickly advisers can access information or prepare for meetings, based on industry reporting and early deployment feedback. Yet there are ongoing concerns about accuracy and oversight, especially when AI systems are used to suggest financial decisions.

Some analysts remain cautious about how quickly AI is changing banking. Wells Fargo analyst Mike Mayo wrote that recent developments have yet to produce major new products, describing the current phase as “a little boring from a product standpoint”.

Human oversight

Bank of America’s rollout stands out because of its scale. Financial advisers sit at the centre of the bank’s relationship with clients, particularly in wealth management. Introducing AI into that role suggests a growing level of trust in the technology. It also shows a willingness to let it influence how advice is formed and delivered.

When dealing with complex financial decisions or high-value clients, industry executives acknowledge AI is unlikely to completely replace expert roles, particularly in complex financial workflows where context and judgement matter.

This hybrid model is becoming more common in the sector. Firms are treating AI as a part of the workforce, with staff expected to work alongside systems day-to-day.

Progress’s limits

There are also practical challenges. AI systems depend on clean, structured data, which is not always easy to achieve in large organisations. Integration with existing tools can take time, and staff may need training to use new systems effectively.

Regulation adds another layer of complexity. Financial institutions must ensure that AI-driven recommendations meet compliance standards and explain decisions if questioned by regulators. This requirement may limit the amount of autonomy provided to AI systems, particularly in areas like lending or investment advice.

Some estimates imply that up to one-third of banking jobs, or parts of those roles, could eventually be handled by AI. The introduction of AI agents into advisory roles raises questions about how the job itself may change. If systems can handle more of the analytical work, advisers may spend more time on client relationships and less on preparation. Over time, this could shift the skills required for the role.

Reliance on AI introduces new risks. Errors in data or model output could affect recommendations, and over-reliance on automated systems may reduce critical review by human staff. The issues are still being studied as deployments expand.

Bank of America’s rollout offers a view into how an AI transition may play out. It shows a large institution testing how far AI can be integrated into everyday work. As more banks follow a similar path, the focus is likely to shift to how AI can be managed once it becomes part of core operations.

See also: Visa prepares payment systems for AI agent-initiated transactions

Want to learn more about AI and big data from industry leaders? Check outAI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events, click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AI agents enter banking roles at Bank of America appeared first on AI News.

]]>
Palantir AI to support UK finance operations https://www.artificialintelligence-news.com/news/palantir-ai-to-support-uk-finance-operations/ Mon, 23 Mar 2026 13:14:23 +0000 https://www.artificialintelligence-news.com/?p=112756 UK authorities believe improving efficiency across national finance operations requires applying AI platforms from vendors like Palantir. The country’s financial regulator, the FCA, has initiated a project leveraging AI to identify illicit activities. The FCA is currently testing the Foundry platform from Miami-based software vendor Palantir. This three-month pilot costs upwards of £30,000 per week […]

The post Palantir AI to support UK finance operations appeared first on AI News.

]]>
UK authorities believe improving efficiency across national finance operations requires applying AI platforms from vendors like Palantir. The country’s financial regulator, the FCA, has initiated a project leveraging AI to identify illicit activities.

The FCA is currently testing the Foundry platform from Miami-based software vendor Palantir. This three-month pilot costs upwards of £30,000 per week and focuses on mining the regulator’s internal data lake. The objective centres on detecting money laundering, insider trading, and fraud across the 42,000 financial services businesses under the FCA’s supervision.

Navigating unstructured data lakes

Traditional oversight methods struggle with the sheer volume of information generated by modern markets. AI platforms excel at parsing unstructured intelligence, which regulators gather during investigations into harmful activities like human trafficking and the narcotics trade.

The information fed into these systems spans from highly-confidential internal files and reports on problematic companies to consumer ombudsman complaints. Machine learning tools digest audio recordings from phone calls, social media activity, and email archives.

Uncovering patterns within such a vast array of inputs helps direct enforcement resources exactly where they are needed most. Industry experts note a historical under-exploitation of the intelligence housed within regulatory bodies, making advanced analytics a valuable tool for tackling financial crimes.

When validating AI models, there is often a debate about the merits of synthetic information versus live environments. While standard guidelines encourage using artificial datasets for preliminary testing, the UK’s finance regulatory authority determined that evaluating AI software like Palantir’s required actual operational inputs.

Expanding into national security operations

This public sector adoption extends well beyond financial compliance. In September 2025, the UK government established an AI partnership with Palantir aimed at accelerating military decision-making and targeting capabilities. Palantir plans to invest up to £1.5 billion to establish London as its European defence headquarters, an initiative expected to generate up to 350 jobs.

As businesses evaluate these platforms, the defence sector provides a high-stakes testing environment for data fusion. Military planners utilise these tools to consolidate open-source and classified intelligence, rapidly generating options to neutralise enemy targets. This forms an element of the Digital Targeting Web, which relies on a diverse supplier ecosystem.

Palantir and the military will collaborate on identifying opportunities worth up to £750 million over a five-year period. To foster broader ecosystem growth, the defence agreement includes provisions for mentoring local startups, assisting smaller British technology firms with expanding into US markets on a pro-bono basis.

Deploying private AI like Palantir’s in UK finance operations

CDOs deploying AI solutions often struggle when balancing processing capabilities with privacy mandates. During an enforcement action, regulators frequently compel companies to surrender extensive records.

Such datasets regularly include the personal bank details, telephone numbers, and complete communication logs of individuals tangentially related to a case. Establishing exact boundaries regarding how a software provider interacts with this intelligence is vital. Before selecting Palantir from a two-vendor shortlist, the FCA claims to have run a competitive procurement process and established strict data protection controls.

To mitigate risks associated with information exposure, the FCA structured its agreement with Palantir so the vendor acts strictly as a data processor. Under this arrangement, the software provider operates solely upon instruction. The regulatory agency maintains exclusive possession of encryption keys for the most classified files, and all hosting and storage remain securely within the UK.

Similar data sovereignty principles apply to the defence partnership, ensuring military intelligence remains freely available across the Ministry of Defence while entirely under national control.

The financial contract explicitly forbids the vendor from copying the ingested intelligence to train its own commercial products. Once the pilot concludes, the vendor must destroy the information. Any intellectual property generated during the analysis phase automatically belongs to the regulator.

Setting limitations on data retention and processing rights ensures internal security standards remain intact while achieving efficiency gains from deploying private AI from vendors like Palantir to improve the UK’s finance operations.

See also: Visa prepares payment systems for AI agent-initiated transactions

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Palantir AI to support UK finance operations appeared first on AI News.

]]>
How multi-agent AI economics influence business automation https://www.artificialintelligence-news.com/news/how-multi-agent-ai-economics-business-automation/ Thu, 12 Mar 2026 15:01:20 +0000 https://www.artificialintelligence-news.com/?p=112642 Managing the economics of multi-agent AI now dictates the financial viability of modern business automation workflows. Organisations progressing past standard chat interfaces into multi-agent applications face two primary constraints. The first issue is the thinking tax; complex autonomous agents need to reason at each stage, making the reliance on massive architectures for every subtask too […]

The post How multi-agent AI economics influence business automation appeared first on AI News.

]]>
Managing the economics of multi-agent AI now dictates the financial viability of modern business automation workflows.

Organisations progressing past standard chat interfaces into multi-agent applications face two primary constraints. The first issue is the thinking tax; complex autonomous agents need to reason at each stage, making the reliance on massive architectures for every subtask too expensive and slow for practical enterprise use.

Context explosion acts as the second hurdle; these advanced workflows produce up to 1,500 percent more tokens than standard formats because every interaction demands the resending of full system histories, intermediate reasoning, and tool outputs. Across extended tasks, this token volume drives up expenses and causes goal drift, a scenario where agents diverge from their initial objectives.

Evaluating architectures for multi-agent AI

To address these governance and efficiency hurdles, hardware and software developers are releasing highly optimised tools aimed directly at enterprise infrastructure.

NVIDIA recently introduced Nemotron 3 Super, an open architecture featuring 120 billion parameters (of which 12 billion remain active) that is specifically-engineered to execute complex agentic AI systems.

Available immediately, NVIDIA’s framework blends advanced reasoning features to help autonomous agents finish tasks efficiently and accurately for improved business automation. The system relies on a hybrid mixture-of-experts architecture combining three major innovations to deliver up to five times higher throughput and twice the accuracy of the preceding Nemotron Super model. During inference, only 12 billion of the 120 billion parameters are active.

Mamba layers provide four times the memory and compute efficiency, while standard transformer layers manage the complex reasoning requirements. A latent technique boosts accuracy by engaging four expert specialists for the cost of one during token generation. The system also anticipates multiple future words at the same time, accelerating inference speeds threefold.

Operating on the Blackwell platform, the architecture utilises NVFP4 precision. This setup reduces memory needs and makes inference up to four times faster than FP8 configurations on Hopper systems, all without sacrificing accuracy.

Translating automation capability into business outcomes

The system offers a one-million-token context window, allowing agents to keep the entire workflow state in memory and directly addressing the risk of goal drift. A software development agent can load an entire codebase into context simultaneously, enabling end-to-end code generation and debugging without requiring document segmentation.

Within financial analysis, the system can load thousands of pages of reports into memory, improving efficiency by removing the need to re-reason across lengthy conversations. High-accuracy tool calling ensures autonomous agents reliably navigate massive function libraries, preventing execution errors in high-stakes environments such as autonomous security orchestration within cybersecurity.

Industry leaders – including Amdocs, Palantir, Cadence, Dassault Systèmes, and Siemens – are deploying and customising the model to automate workflows across telecom, cybersecurity, semiconductor design, and manufacturing.

Software development platforms like CodeRabbit, Factory, and Greptile are integrating it alongside proprietary models to achieve higher accuracy at lower costs. Life sciences firms like Edison Scientific and Lila Sciences will use it to power agents for deep literature search, data science, and molecular understanding.

The architecture also powers the AI-Q agent to the top position on DeepResearch Bench and DeepResearch Bench II leaderboards, highlighting its capacity for multistep research across large document sets while maintaining reasoning coherence.

Finally, the model claimed the top spot on Artificial Analysis for efficiency and openness, featuring leading accuracy among models of its size.

Implementation and infrastructure alignment

Built to handle complex subtasks inside multi-agent systems, deployment flexibility remains a priority for leaders driving business automation.

NVIDIA released the model with open weights under a permissive license, letting developers deploy and customise it across workstations, data centres, or cloud environments. It is packaged as an NVIDIA NIM microservice to aid this broad deployment from on-premises systems to the cloud.

The architecture was trained on synthetic data generated by frontier reasoning models. NVIDIA published the complete methodology, encompassing over 10 trillion tokens of pre- and post-training datasets, 15 training environments for reinforcement learning, and evaluation recipes. Researchers can further fine-tune the model or build their own using the NeMo platform.

Any exec planning a digitisation rollout must address context explosion and the thinking tax upfront to prevent goal drift and cost overruns in agentic workflows. Establishing comprehensive architectural oversight ensures these sophisticated agents remain aligned with corporate directives, yielding sustainable efficiency gains and advancing business automation across the organisation.

See also: Ai2: Building physical AI with virtual simulation data

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post How multi-agent AI economics influence business automation appeared first on AI News.

]]>
ABB: Physical AI simulation boosts ROI for factory automation https://www.artificialintelligence-news.com/news/abb-physical-ai-simulation-secures-factory-automation-roi/ Tue, 10 Mar 2026 17:22:41 +0000 https://www.artificialintelligence-news.com/?p=112561 A new ABB and NVIDIA partnership shows physical AI simulation is driving real ROI in factory automation and solving production hurdles. Manufacturers have often found it difficult to make intelligent robotics work reliably outside testing environments. The core issue is the gap between digital training models and actual factory floors, where lighting, material physics, and […]

The post ABB: Physical AI simulation boosts ROI for factory automation appeared first on AI News.

]]>
A new ABB and NVIDIA partnership shows physical AI simulation is driving real ROI in factory automation and solving production hurdles.

Manufacturers have often found it difficult to make intelligent robotics work reliably outside testing environments. The core issue is the gap between digital training models and actual factory floors, where lighting, material physics, and part variations refuse to behave as they do on a screen.

Historically, this friction has previously forced engineering teams to fall back on physical prototypes, delaying product launches and driving up costs.

Overcoming the digital to physical AI simulation divide

The partnership between ABB Robotics and NVIDIA attempts to close this gap by bringing industrial-grade physical AI to manufacturing facilities. Slated for release in the second half of 2026, RobotStudio HyperReality is already drawing interest from a global customer base.

By embedding NVIDIA Omniverse libraries within its existing RobotStudio software, ABB provides a platform for physically accurate digital testing. On an operational level, this integration allows engineers to cut deployment costs by up to 40 percent and accelerate time to market by as much as 50 percent.

Realising these efficiency gains demands a workflow where production leaders design, test, and validate complete automation cells before installing any hardware. To do this, the system exports a fully parameterised station – encompassing the robots, sensors, lighting, kinematics, and parts – as a USD file straight into the Omniverse environment.

Inside this digital space, a virtual controller runs the identical firmware found on the physical machine, enabling a 99 percent behavioural match between the digital and physical realms.

Rather than manually programming movements, computer vision models learn using synthetic images generated inside the software. When combined with Absolute Accuracy technology, this method cuts positioning errors down from 8-15 mm to approximately 0.5 mm, providing high precision for industrial applications.

Marc Segura, President of ABB Robotics, said: “Combining RobotStudio with the physically accurate simulation power of NVIDIA Omniverse libraries, we have closed technology’s long-standing ‘sim-to-real’ gap—a huge milestone to deploying physical AI with industrial-grade precision, for real-world customer applications.”

Validating factory automation before deployment

Early adopters are already validating these capabilities on active production lines. 

Foxconn, for example, is testing the software for consumer device assembly—an area where frequent product changes and delicate metal components complicate traditional automation. By generating synthetic data to train their systems virtually, Foxconn achieves high accuracy on the factory floor while anticipating a reduction in setup time and the elimination of costly physical testing.

Similarly, Workr – a California-based automation provider – integrates its WorkrCore platform with ABB hardware trained via Omniverse. At the NVIDIA GTC 2026 event in San Jose, Workr intends to showcase systems capable of onboarding new parts in minutes without requiring specialised programming skills.

Deepu Talla, VP of Robotics and Edge AI at NVIDIA, commented: “The industrial sector needs high-fidelity simulation to bridge the gap between virtual training and real-world deployment of AI-driven robotics at scale.

“Integrating NVIDIA Omniverse libraries into RobotStudio brings advanced simulation and accelerated computing to ABB’s virtual controller technology, accelerating how thousands of manufacturers bring complex products to market.” 

The hardware ecosystem is also expanding to edge computing. ABB is evaluating the integration of NVIDIA’s Jetson edge platform into its Omnicore controllers, a step that would facilitate real-time inference across existing robotic fleets.

Adopting this type of digital-first simulation for physical AI can reduce setup and commissioning times by up to 80 percent. As AI moves from software applications to hardware operations, preparing data pipelines and upskilling engineering teams to work with synthetic data will dictate which manufacturers maintain a competitive edge.

See also: Agentic AI in finance speeds up operational automation

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post ABB: Physical AI simulation boosts ROI for factory automation appeared first on AI News.

]]>