Featured News - AI News https://www.artificialintelligence-news.com/categories/featured-news/ Thu, 16 Apr 2026 11:20:02 +0000

OpenAI Agents SDK improves governance with sandbox execution https://www.artificialintelligence-news.com/news/openai-agents-sdk-improves-governance-sandbox-execution/ Thu, 16 Apr 2026 11:20:00 +0000

The post OpenAI Agents SDK improves governance with sandbox execution appeared first on AI News.

OpenAI is introducing sandbox execution that allows enterprise governance teams to deploy automated workflows with controlled risk.

Teams taking systems from prototype to production have faced difficult architectural compromises regarding where their operations occurred. Using model-agnostic frameworks offered initial flexibility but failed to fully utilise the capabilities of frontier models. Model-provider SDKs remained closer to the underlying model, but often lacked enough visibility into the control harness.

To complicate matters further, managed agent APIs simplified the deployment process but severely constrained where the systems could run and how they accessed sensitive corporate data. To resolve this, OpenAI is introducing new capabilities to the Agents SDK, offering developers standardised infrastructure featuring a model-native harness and native sandbox execution.

The updated infrastructure aligns execution with the natural operating pattern of the underlying models, improving reliability when tasks require coordination across diverse systems. Oscar Health provides an example of this efficiency regarding unstructured data.

The healthcare provider tested the new infrastructure to automate a clinical records workflow that older approaches could not handle reliably. The engineering team required the automated system to extract correct metadata while correctly understanding the boundaries of patient encounters within complex medical files. By automating this process, the provider could parse patient histories faster, expediting care coordination and improving the overall member experience.

Rachael Burns, Staff Engineer & AI Tech Lead at Oscar Health, said: “The updated Agents SDK made it production-viable for us to automate a critical clinical records workflow that previous approaches couldn’t handle reliably enough.

“For us, the difference was not just extracting the right metadata, but correctly understanding the boundaries of each encounter in long, complex records. As a result, we can more quickly understand what’s happening for each patient in a given visit, helping members with their care needs and improving their experience with us.”

OpenAI optimises AI workflows with a model-native harness

To deploy these systems, engineers must manage vector database synchronisation, control hallucination risks, and optimise expensive compute cycles. Without standard frameworks, internal teams often resort to building brittle custom connectors to manage these workflows.

The new model-native harness helps alleviate this friction by introducing configurable memory, sandbox-aware orchestration, and Codex-like filesystem tools. Developers can integrate standardised primitives such as tool use via MCP, custom instructions via AGENTS.md, and file edits using the apply patch tool.

Progressive disclosure via skills and code execution using the shell tool also enables the system to perform complex tasks sequentially. This standardisation allows engineering teams to spend less time updating core infrastructure and focus on building domain-specific logic that directly benefits the business.
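The primitive set described above can be pictured as a small registry that the harness dispatches against. The sketch below is purely illustrative: the `tool` decorator, the `dispatch` function, and the registry itself are invented for this example and are not the Agents SDK's actual API.

```python
# Illustrative harness-style tool registry (names are assumptions, not the
# real SDK API): the model requests a tool by name and the harness routes
# the call to a registered implementation.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a callable as a harness tool under `name`."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("shell")
def run_shell(command: str) -> str:
    # In a real harness this would execute inside the sandbox, not the host.
    return f"[sandboxed] {command}"

@tool("apply_patch")
def apply_patch(patch: str) -> str:
    # Stand-in for the apply patch tool mentioned above.
    return f"applied {len(patch.splitlines())} patch line(s)"

def dispatch(tool_name: str, argument: str) -> str:
    """Route a model-issued tool call to its registered implementation."""
    if tool_name not in TOOLS:
        raise KeyError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](argument)
```

The point of the registry shape is that new primitives (skills, MCP servers) slot in without touching the dispatch loop.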

Integrating an autonomous program into a legacy tech stack requires precise routing. When an autonomous process accesses unstructured data, it relies heavily on retrieval systems to pull relevant context.

To manage the integration of diverse architectures and limit operational scope, the SDK introduces a Manifest abstraction. This abstraction standardises how developers describe the workspace, allowing them to mount local files and define output directories.

Teams can connect these environments directly to major enterprise storage providers, including AWS S3, Azure Blob Storage, Google Cloud Storage, and Cloudflare R2. Establishing a predictable workspace gives the model exact parameters on where to locate inputs, write outputs, and maintain organisation during extended operational runs.

This predictability prevents the system from querying unfiltered data lakes, restricting it to specific, validated context windows. Data governance teams can subsequently track the provenance of every automated decision with greater accuracy from local prototype phases through to production deployment.
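The workspace contract this describes can be illustrated with a minimal manifest-like object. The field names and methods below are assumptions made for illustration, not the SDK's real schema: mounts are the only readable inputs, and a single output directory is the only writable location.

```python
# Illustrative sketch of a manifest-style workspace description (hypothetical
# field names): reads are limited to declared mounts, writes to one directory.
from dataclasses import dataclass

@dataclass(frozen=True)
class Manifest:
    mounts: tuple[str, ...]   # read-only input prefixes, e.g. an S3 prefix
    output_dir: str           # the single writable location

    def allows_read(self, path: str) -> bool:
        return any(path.startswith(m) for m in self.mounts)

    def allows_write(self, path: str) -> bool:
        return path.startswith(self.output_dir)

manifest = Manifest(
    mounts=("s3://records/encounters/", "/workspace/reference/"),
    output_dir="/workspace/out/",
)
```

Checking every file operation against such a contract is what keeps the system out of unfiltered data lakes and makes provenance tracking tractable.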

Enhancing security with native sandbox execution

The SDK natively supports sandbox execution, offering an out-of-the-box layer so programs can run within controlled computer environments containing the necessary files and dependencies. Engineering teams no longer need to piece this execution layer together manually. They can deploy their own custom sandboxes or utilise built-in support for providers like Blaxel, Cloudflare, Daytona, E2B, Modal, Runloop, and Vercel.

Risk mitigation remains the primary concern for any enterprise deploying autonomous code execution. Security teams must assume that any system reading external data or executing generated code will face prompt-injection attacks and exfiltration attempts.

OpenAI approaches this security requirement by separating the control harness from the compute layer. This separation isolates credentials, keeping them entirely out of the environments where the model-generated code executes. By isolating the execution layer, an injected malicious command cannot access the central control plane or steal primary API keys, protecting the wider corporate network from lateral movement attacks.
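The credential-isolation idea can be sketched as a control plane that refuses to let secret material cross into the sandbox payload. This is a hypothetical design sketch of the separation described above, not OpenAI's actual implementation.

```python
# Hypothetical sketch: the control plane holds all credentials and only ever
# hands the sandbox a task payload with secrets verifiably absent.
import json

CONTROL_PLANE_SECRETS = {"OPENAI_API_KEY": "sk-example", "DB_PASSWORD": "hunter2"}

def build_sandbox_payload(task: dict) -> str:
    """Serialise a task for the sandbox, refusing to leak any secret value."""
    payload = json.dumps(task)
    for secret in CONTROL_PLANE_SECRETS.values():
        if secret in payload:
            raise ValueError("secret material must not enter the sandbox")
    return payload
```

Because the sandbox never sees a key, an injected command that fully compromises the execution environment still has nothing worth exfiltrating.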

This separation also addresses compute cost issues regarding system failures. Long-running tasks often fail midway due to network timeouts, container crashes, or API limits. If a complex agent takes twenty steps to compile a financial report and fails at step nineteen, re-running the entire sequence burns expensive computing resources.

If the environment crashes under the new architecture, losing the sandbox container does not mean losing the entire operational run. Because the system state remains externalised, the SDK utilises built-in snapshotting and rehydration. The infrastructure can restore the state within a fresh container and resume exactly from the last checkpoint if the original environment expires or fails. Preventing the need to restart expensive, long-running processes translates directly to reduced cloud compute spend.

Scaling these operations requires dynamic resource allocation. The separated architecture allows runs to invoke single or multiple sandboxes based on current load, route specific subagents into isolated environments, and parallelise tasks across numerous containers for faster execution times.
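The fan-out pattern might look like the following, with worker threads standing in for isolated sandbox containers. The provider integrations named above are real; this dispatch logic is invented purely for illustration.

```python
# Illustrative fan-out: subtasks run in parallel "sandboxes" (modelled here
# as worker threads), and results come back in submission order.
from concurrent.futures import ThreadPoolExecutor

def run_in_sandbox(subtask: str) -> str:
    # Stand-in for dispatching one subagent into its own container.
    return f"{subtask}: done"

def parallel_run(subtasks: list[str], max_sandboxes: int = 4) -> list[str]:
    with ThreadPoolExecutor(max_workers=max_sandboxes) as pool:
        return list(pool.map(run_in_sandbox, subtasks))
```

Capping `max_sandboxes` is the lever for trading execution speed against compute spend under current load.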

These new capabilities are generally available to all customers via the API, utilising standard pricing based on tokens and tool use without demanding custom procurement contracts. The new harness and sandbox capabilities are launching first for Python developers, with TypeScript support slated for a future release.

OpenAI plans to bring additional capabilities, including code mode and subagents, to both the Python and TypeScript libraries. The vendor intends to expand the broader ecosystem over time by supporting additional sandbox providers and offering more methods for developers to plug the SDK directly into their existing internal systems.

See also: Commvault launches a ‘Ctrl-Z’ for cloud AI workloads

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

SAP brings agentic AI to human capital management https://www.artificialintelligence-news.com/news/sap-brings-agentic-ai-human-capital-management/ Tue, 14 Apr 2026 12:55:09 +0000

The post SAP brings agentic AI to human capital management appeared first on AI News.

According to SAP, integrating agentic AI into core human capital management (HCM) modules helps target operational bloat and reduce costs.

SAP’s SuccessFactors 1H 2026 release aims to anticipate administrative bottlenecks before they stall daily operations by embedding a network of AI agents across recruiting, payroll, workforce administration, and talent development. Behind the user interface, these agents must monitor system states, identify anomalies, and prompt human operators with context-aware solutions.

Data synchronisation failures between distributed enterprise systems routinely require dedicated IT support teams to diagnose. When employee master data fails to replicate due to a missing attribute, downstream systems like access management and financial compensation halt.

The agentic approach uses analytical models to cross-reference peer data, identify the missing variable based on organisational patterns, and prompt the administrator with the required correction. This automated troubleshooting dramatically reduces the mean time to resolution for internal support tickets.

Implementing this level of autonomous monitoring requires serious engineering discipline. Integrating modern semantic search mechanisms with highly structured legacy relational databases demands extensive middleware configuration.

Running large language models in the background to continuously scan millions of employee records for inconsistencies consumes massive compute resources. CIOs must carefully balance the cloud infrastructure costs of continuous algorithmic monitoring against the operational savings generated by reduced IT ticket volumes.

To mitigate the risk of algorithmic hallucinations altering core financial data, engineering teams are forced to build strict guardrails. These retrieve-and-generate architectures must be firmly anchored to the company’s verified data lakes, ensuring the AI only acts upon validated corporate policies rather than generalised internet training data.
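Such a guardrail can be reduced to a simple rule: generate only from documents in the verified store, and escalate otherwise. The following is a toy sketch of that pattern, not SAP's implementation; the policy store and its contents are invented.

```python
# Toy retrieve-and-generate guardrail: answers come only from the verified
# policy store; anything outside it is escalated rather than hallucinated.
VERIFIED_POLICIES = {
    "leave": "Employees accrue 2 days of leave per month.",
    "expenses": "Claims above 500 EUR require director approval.",
}

def answer(topic: str) -> str:
    doc = VERIFIED_POLICIES.get(topic)
    if doc is None:
        return "No verified policy found; escalating to a human."
    # A real system would pass `doc` to the model as its sole context
    # instead of returning it directly.
    return doc
```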

The SAP release attempts to streamline this knowledge retrieval by introducing intelligent question-and-answer capabilities within its learning module. This functionality delivers instant, context-aware responses drawn directly from an organisation’s learning content, allowing employees to bypass manual documentation searches entirely. The integration also introduces a growing workforce knowledge network that pulls trusted external employment guidance into daily workflows to support confident decision-making.

How SAP is using agentic AI to consolidate the HCM ecosystem

The updated architecture focuses on unified experiences that adapt to operational needs. For example, the delay between a candidate signing an offer letter and that employee reaching full productivity is a drag on profit margins.

Native integration combining SmartRecruiters solutions, SAP SuccessFactors Employee Central, and SAP SuccessFactors Onboarding streamlines the data flow from initial candidate interaction through to the new hire phase.

A candidate’s technical assessments, background checks, and negotiated terms pass automatically into the core human resources repository. Enterprises accelerate the onboarding timeline by eliminating the manual re-entry of personnel data—allowing new technical hires to begin contributing to active commercial projects faster.

Technical leadership teams understand that out-of-the-box software rarely matches internal enterprise processes perfectly. Customisation is necessary, but hardcoded extensions routinely break during cloud upgrade cycles, creating vast maintenance backlogs.

To manage this tension, the software introduces a new extensibility wizard. This tool provides guided, step-by-step support for building custom extensions directly on the SAP Business Technology Platform within the SuccessFactors environment.

By containing custom development within a governed platform environment, technology officers can adapt the interface to unique business requirements while preserving strict governance and ensuring future update compatibility.

Algorithmic auditing and margin protection

The 1H 2026 release incorporates pay transparency insights directly into the People Intelligence package within SAP Business Data Cloud, supporting compliance with strict regulatory regimes such as the EU's pay transparency directive, which requires organisations to provide detailed, auditable justifications for wage discrepancies.

Manual compilation of compensation data across multiple geographic regions and currency zones is highly error-prone. Using the People Intelligence package, organisations can analyse compensation patterns and potential pay gaps across demographics.

Automating this analysis provides a data-driven defence against compliance audits and aligns internal pay practices with evolving regulatory expectations, protecting the enterprise from both litigation costs and brand damage.

Preparing for future demands requires trusted and consistent skills data that leadership can rely on across talent deployment and workforce planning. Unstructured data, where one department labels a capability using differing terminology from another, breaks automated resource allocation models.

The update strengthens the SAP talent intelligence hub by introducing enhanced skills governance to provide administrators with a centralised interface for managing skill definitions, applying corporate standards, and ensuring data aligns across internal applications and external partner ecosystems. 

Standardising this data improves overall system quality and allows resource managers to make deployment decisions without relying on fragmented spreadsheets or guesswork. This standardised skills inventory also prevents organisations from outsourcing to expensive external contractors for capabilities they already possess internally.

By bringing together data, AI, and connected experiences, SAP’s latest enhancements show how agentic AI can help organisations reduce daily friction. For professionals looking to explore these types of enterprise AI integrations and connect directly with the company, SAP is a key sponsor of this year’s AI & Big Data Expo North America.

See also: IBM: How robust AI governance protects enterprise margins


Hyundai expands into robotics and physical AI systems https://www.artificialintelligence-news.com/news/hyundai-expands-into-robotics-and-physical-ai-systems/ Tue, 14 Apr 2026 10:00:00 +0000

The post Hyundai expands into robotics and physical AI systems appeared first on AI News.

Hyundai Motor Group is starting to look like a company building machines that act in the real world. The change centres on physical AI: where AI is placed into robots and systems that move and respond in physical spaces. Current efforts are mainly focused on factory and industrial settings.

Hyundai’s move into physical AI systems

In an interview with Semafor, chairman Chung Eui-sun said robotics and AI will play a central role in Hyundai’s next phase of growth, pushing the company beyond vehicles and into physical systems. The group plans to invest $26 billion in the US by 2028, according to United Press International, building on roughly $20.5 billion invested over the past 40 years.

A large part of that spending is tied to robotics and AI-driven systems that Hyundai is combining into a single approach. Chung described robotics and physical AI as important to Hyundai's long-term direction, adding that the company is developing robots to work with people, not replace them.

From automation to collaboration

Hyundai is working on systems where robots and humans share tasks in the same space. This includes humanoid robots developed by Boston Dynamics, in which Hyundai acquired a controlling stake in 2021. The machines are being prepared for manufacturing use, with deployment planned around 2028. The company expects to scale production to up to 30,000 units per year by 2030, with the goal of improving work on the factory floor. Robots may handle repetitive or physically demanding tasks, while humans focus on oversight and coordination.

Chung said this kind of setup could help improve efficiency and product quality as customer expectations change.

Current deployments remain focused on industrial settings, though Hyundai is exploring other uses. Potential areas include logistics and mobility services that combine vehicles with AI systems. These may affect deliveries and shared services.

Manufacturing as the first use case for physical AI

While these uses are still developing, manufacturing remains the main testing ground, and factories are where Hyundai is putting these ideas into practice. The company is already working on software-driven manufacturing systems in its US operations, combining data and robotics to manage production.

Physical AI builds on this by adding machines that adjust their actions based on real-time data. Chung said changes in regulations and customer demand are pushing the company to rethink how it operates in regions. Hyundai’s response is a mix of global expansion and local production, with AI and robotics helping standardise processes.

Energy and infrastructure

The company continues to invest in hydrogen through its HTWO brand, which covers production, storage and use. Chung pointed to rising demand linked to AI infrastructure and data centres as one reason hydrogen is gaining attention. He described hydrogen and electric vehicles as complementary options. The idea is to offer different energy choices depending on how systems are used. As AI moves into physical environments, energy becomes a more visible constraint.

What physical AI means for end users

Most people will not interact with a humanoid robot in the near term. But they will feel the effects of these systems in other ways. Products may be built faster and services tied to mobility or infrastructure may become more responsive.

Hyundai sells more than 7 million vehicles each year in over 200 countries, supported by 16 global production facilities, according to the same UPI report.

A gradual transition

Hyundai is still a major carmaker, with brands like Hyundai, Kia, and Genesis forming the base of its operations. What is changing is how those vehicles – and the systems around them – are designed and managed.

Physical AI represents a change from products to systems. It places AI in the environments where work and daily life take place. That change is still in progress, and many of the systems Hyundai is developing will take years to scale. The company is building toward a future where machines work with people in the real world.

(Photo by @named_aashutosh)

See also: Asylon and Thrive Logic bring physical AI to enterprise perimeter security


IBM: How robust AI governance protects enterprise margins https://www.artificialintelligence-news.com/news/ibm-how-robust-ai-governance-protects-enterprise-margins/ Fri, 10 Apr 2026 13:57:15 +0000

The post IBM: How robust AI governance protects enterprise margins appeared first on AI News.

To protect enterprise margins, business leaders must invest in robust AI governance to securely manage AI infrastructure.

When evaluating enterprise software adoption, a recurring pattern dictates how technology matures across industries. As Rob Thomas, SVP and CCO at IBM, recently outlined, software typically graduates from a standalone product to a platform, and then from a platform to foundational infrastructure, altering the governing rules entirely.

At the initial product stage, exerting tight corporate control often feels highly advantageous. Closed development environments iterate quickly and tightly manage the end-user experience. They capture and concentrate financial value within a single corporate entity, an approach that functions adequately during early product development cycles.

However, IBM’s analysis highlights that expectations change entirely when a technology solidifies into a foundational layer. Once other institutional frameworks, external markets, and broad operational systems rely on the software, the prevailing standards adapt to a new reality. At infrastructure scale, embracing openness ceases to be an ideological stance and becomes a highly practical necessity.

AI is currently crossing this threshold within the enterprise architecture stack. Models are increasingly embedded directly into the ways organisations secure their networks, author source code, execute automated decisions, and generate commercial value. AI functions less as an experimental utility and more as core operational infrastructure.

The recent limited preview of Anthropic's Claude Mythos model brings this reality into sharper focus for enterprise executives managing risk. Anthropic reports that this specific model can discover and exploit software vulnerabilities at a level that few human experts can match.

In response to this power, Anthropic launched Project Glasswing, a gated initiative designed to place these advanced capabilities directly into the hands of network defenders first. From IBM’s perspective, this development forces technology officers to confront immediate structural vulnerabilities. If autonomous models possess the capability to write exploits and shape the overall security environment, Thomas notes that concentrating the understanding of these systems within a small number of technology vendors invites severe operational exposure.

With models achieving infrastructure status, IBM argues the primary issue is no longer exclusively what these machine learning applications can execute. The priority becomes how these systems are constructed, governed, inspected, and actively improved over extended periods.

As underlying frameworks grow in complexity and corporate importance, maintaining closed development pipelines becomes exceedingly difficult to defend. No single vendor can successfully anticipate every operational requirement, adversarial attack vector, or system failure mode.

Implementing opaque AI structures introduces heavy friction across existing network architecture. Connecting closed proprietary models with established enterprise vector databases or highly sensitive internal data lakes frequently creates massive troubleshooting bottlenecks. When anomalous outputs occur or hallucination rates spike, teams lack the internal visibility required to diagnose whether the error originated in the retrieval-augmented generation pipeline or the base model weights.

Integrating legacy on-premises architecture with highly gated cloud models also introduces severe latency into daily operations. When enterprise data governance protocols strictly prohibit sending sensitive customer information to external servers, technology teams are left attempting to strip and anonymise datasets before processing. This constant data sanitisation creates enormous operational drag. 

Furthermore, the spiralling compute costs associated with continuous API calls to locked models erode the exact profit margins these autonomous systems are supposed to enhance. The opacity prevents network engineers from accurately sizing hardware deployments, forcing companies into expensive over-provisioning agreements to maintain baseline functionality.

Why open-source AI is essential for operational resilience

Restricting access to powerful applications is an understandable human instinct that closely resembles caution. Yet, as Thomas points out, at massive infrastructure scale, security typically improves through rigorous external scrutiny rather than through strict concealment.

This represents the enduring lesson of open-source software development. Open-source code does not eliminate enterprise risk. Instead, IBM maintains it actively changes how organisations manage that risk. An open foundation allows a wider base of researchers, corporate developers, and security defenders to examine the architecture, surface underlying weaknesses, test foundational assumptions, and harden the software under real-world conditions.

Within cybersecurity operations, broad visibility is rarely the enemy of operational resilience. In fact, visibility frequently serves as a strict prerequisite for achieving that resilience. Technologies deemed highly important tend to remain safer when larger populations can challenge them, inspect their logic, and contribute to their continuous improvement.

Thomas addresses one of the oldest misconceptions regarding open-source technology: the belief that it inevitably commoditises corporate innovation. In practical application, open infrastructure typically pushes market competition higher up the technology stack. Open systems transfer financial value rather than destroying it.

As common digital foundations mature, the commercial value relocates toward complex implementation, system orchestration, continuous reliability, trust mechanics, and specific domain expertise. IBM’s position asserts that the long-term commercial winners are not those who own the base technological layer, but rather the organisations that understand how to apply it most effectively.

We have witnessed this identical pattern play out across previous generations of enterprise tooling, cloud infrastructure, and operating systems. Open foundations historically expanded developer participation, accelerated iterative improvement, and birthed entirely new, larger markets built on top of those base layers. Enterprise leaders increasingly view open-source as highly important for infrastructure modernisation and emerging AI capabilities. IBM predicts that AI is highly likely to follow this exact historical trajectory.

Looking across the broader vendor ecosystem, leading hyperscalers are adjusting their business postures to accommodate this reality. Rather than engaging in a pure arms race to build the largest proprietary black boxes, highly profitable integrators are focusing heavily on orchestration tooling that allows enterprises to swap out underlying open-source models based on specific workload demands. Highlighting its ongoing leadership in this space, IBM is a key sponsor of this year’s AI & Big Data Expo North America, where these evolving strategies for open enterprise infrastructure will be a primary focus.

This approach completely sidesteps restrictive vendor lock-in and allows companies to route less demanding internal queries to smaller and highly efficient open models, preserving expensive compute resources for complex customer-facing autonomous logic. By decoupling the application layer from the specific foundation model, technology officers can maintain operational agility and protect their bottom line.
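A minimal version of that routing logic might look like the following sketch, where the model names and the word-count threshold are invented for illustration.

```python
# Illustrative workload router: cheap internal queries go to a small open
# model, while customer-facing or long queries go to a frontier model.
# Model names and the 50-word threshold are invented for this example.
def route(query: str, customer_facing: bool,
          small_model: str = "small-open-8b",
          frontier_model: str = "frontier-xl") -> str:
    if customer_facing or len(query.split()) > 50:
        return frontier_model
    return small_model
```

In practice the routing signal would be richer than query length, but the decoupling is the point: swapping either model requires no change to the application layer.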

The future of enterprise AI demands transparent governance

Another pragmatic reason for embracing open models revolves around product development influence. IBM emphasises that narrow access to underlying code naturally leads to narrow operational perspectives; who gets to participate directly shapes what applications are eventually built.

Providing broad access enables governments, diverse institutions, startups, and varied researchers to actively influence how the technology evolves and where it is commercially applied. This inclusive approach drives functional innovation while simultaneously building structural adaptability and necessary public legitimacy.

As Thomas argues, once autonomous AI assumes the role of core enterprise infrastructure, relying on opacity can no longer serve as the organising principle for system safety. The most reliable blueprint for secure software has paired open foundations with broad external scrutiny, active code maintenance, and serious internal governance.

As AI permanently enters its infrastructure phase, IBM contends that identical logic increasingly applies directly to the foundation models themselves. The stronger the corporate reliance on a technology, the stronger the corresponding case for demanding openness.

If these autonomous workflows are truly becoming foundational to global commerce, then transparency ceases to be a subject of casual debate. According to IBM, it is an absolute, non-negotiable design requirement for any modern enterprise architecture.

See also: Why companies like Apple are building AI agents with limits


The post IBM: How robust AI governance protects enterprise margins appeared first on AI News.

]]>
Experian uncovers fraud paradox in financial services’ AI adoption https://www.artificialintelligence-news.com/news/experian-ai-fraud-detection-financial-services-2026/ Thu, 02 Apr 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112849 The same technology that financial institutions are deploying is being weaponised against them. That is the core tension running through Experian’s 2026 Future of Fraud Forecast, and it’s a tension the company is in a position to name because it sits on both sides of it. According to FTC data cited in the forecast, consumers lost […]

The post Experian uncovers fraud paradox in financial services’ AI adoption appeared first on AI News.

]]>
The same technology that financial institutions are deploying is being weaponised against them. That is the core tension running through Experian’s 2026 Future of Fraud Forecast, and it’s a tension the company is in a position to name because it sits on both sides of it.

According to FTC data cited in the forecast, consumers lost more than US$12.5 billion to fraud in 2024. As per Experian’s own data accompanying the report, nearly 60% of companies reported an increase in fraud losses from 2024 to 2025. Experian’s fraud prevention solutions helped clients avoid an estimated US$19 billion in fraud losses globally in 2025, a figure that underscores the scale of the problem and how much defence now depends on AI matching the speed and autonomy of attacks.

The agentic AI issue

The most pressing finding in Experian’s forecast is what the company calls machine-to-machine mayhem, the point at which agentic AI systems, designed to transact autonomously on behalf of users, become indistinguishable from the bots fraudsters deploy for the same purpose.

According to Experian’s forecast, as organisations strive to integrate AI agents capable of independent decision-making, fraudsters are exploiting those same systems to run high-volume digital fraud at a scale and speed no human operation could sustain. The core challenge, as per the report, is that machine-to-machine interactions carry no clear ownership of liability; when an AI agent initiates a transaction that turns out to be fraudulent, the question of who is responsible has no settled answer.

Kathleen Peters, chief innovation officer for Fraud and Identity at Experian North America, framed the problem: “Technology is accelerating the evolution of fraud, making it more sophisticated and harder to detect. By combining differentiated data with advanced analytics and cutting-edge technology, businesses can strengthen fraud defences, safeguard consumers, and deliver secure, seamless experiences.”

Experian predicts that this will reach a tipping point in 2026, forcing substantive industry conversations around liability and the governance of agentic AI in commerce. Some organisations are already making preemptive moves. Amazon, for instance, has stated it blocks third-party AI agents from browsing and transacting on its platform, citing security and privacy concerns.

Four other threats the forecast identifies

Beyond the agentic AI issue, Experian’s forecast identifies four additional trends that financial institutions need to consider in 2026.

Deepfake candidates infiltrating remote workforces: Generative AI tools can now produce tailored CVs and real-time deepfake video capable of passing job interviews. According to the forecast, employers will onboard individuals who are not who they claim to be, granting bad actors access to internal systems. The FBI and Department of Justice issued multiple warnings in 2025 about documented instances of North Korean operatives using this approach to gain employment at US companies.

Website cloning overwhelms fraud teams: AI tools have made it easier to create replicas of legitimate sites, and harder to eliminate them permanently. As per the forecast, even after takedown requests are actioned, spoofed domains continue to resurface, forcing fraud teams into reactive patterns.

Emotionally intelligent scam bots: Generative AI means bots can conduct complex romance fraud and relative-in-need scams without human operators. According to Experian’s forecast, such bots respond convincingly, build trust over extended periods, and are becoming increasingly difficult to distinguish from genuine human interaction.

Smart home vulnerabilities: Devices including virtual assistants, smart locks, and connected appliances create new entry points for fraudsters. Experian forecasts that bad actors will exploit these devices to access personal data and monitor household activity as the connected home becomes a greater part of everyday financial behaviour.

Financial institutions’ responses

According to Experian’s Perceptions of AI Report, drawing on responses from more than 200 decision-makers at leading financial institutions, 84% identify AI as a critical or high priority for their business strategy over the next two years. A further 89% say AI will play an important role in the lending lifecycle.

The governance dimension, however, is where institutions struggle. According to the same report, 73% of respondents are concerned about the regulatory environment around AI, and 65% identify AI-ready data as one of their biggest deployment challenges. Data quality was rated the single most important factor in choosing an AI vendor, which places Experian’s data-first positioning at the intersection of what financial institutions say they need most.

On the compliance side, Experian’s AI-powered Assistant for Model Risk Management addresses one of the most resource-intensive requirements facing institutions deploying AI. According to a 2025 Experian study of more than 500 global financial institutions, 67% struggle to meet their country’s regulatory requirements, 79% report more frequent supervisory communications from regulators than a year ago, and 60% still use manual compliance processes. In Experian’s announcement, the company states that more than 70% of larger institutions report model documentation compliance involves over 50 people, a figure that signals the scale of the automation opportunity.

Vijay Mehta, EVP of Global Solutions and Analytics at Experian Software Solutions, described the challenge the product addresses: “The AI-enabled speed of data analytics and model development is driving unprecedented business opportunities for financial institutions, but it comes with a challenge: global regulations that require time-consuming documentation. Experian Assistant for Model Risk Management helps solve this labour and resource-intensive requirement with end-to-end model documentation automation.”

The data quality foundation

Running underneath Experian’s fraud and compliance products is the same structural argument that appeared in both IBM’s and Salesforce’s AI narratives this week: AI is only as reliable as the data it runs on. As per Experian’s Perceptions of AI Report, 65% of financial institution decision-makers consider AI-ready data one of their biggest challenges, and data quality is the most critical factor influencing trust in AI vendors.

That is not a coincidence of messaging. It reflects a constraint facing financial services institutions as they move AI from pilots into production credit decisioning, fraud detection, and regulatory reporting; functions where explainability and auditability are not optional.

Experian’s CDAO Paul Heywood is among the confirmed speakers at the AI & Big Data Expo, part of TechEx North America, taking place 18 – 19 May 2026 at the San Jose McEnery Convention Centre, California. Experian is a Platinum Sponsor at TechEx Global.

See also: Hershey applies AI in its supply chain operations

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Experian uncovers fraud paradox in financial services’ AI adoption appeared first on AI News.

]]>
KPMG: Inside the AI agent playbook driving enterprise margin gains https://www.artificialintelligence-news.com/news/kpmg-inside-ai-agent-playbook-enterprise-margin-gains/ Wed, 01 Apr 2026 15:24:01 +0000 https://www.artificialintelligence-news.com/?p=112839 Global AI investment is accelerating, yet KPMG data shows the gap between enterprise AI spend and measurable business value is widening fast. The headline figure from KPMG’s first quarterly Global AI Pulse survey is blunt: despite global organisations planning to spend a weighted average of $186 million on AI over the next 12 months, only […]

The post KPMG: Inside the AI agent playbook driving enterprise margin gains appeared first on AI News.

]]>
Global AI investment is accelerating, yet KPMG data shows the gap between enterprise AI spend and measurable business value is widening fast.

The headline figure from KPMG’s first quarterly Global AI Pulse survey is blunt: despite global organisations planning to spend a weighted average of $186 million on AI over the next 12 months, only 11 percent have reached the stage of deploying and scaling AI agents in ways that produce enterprise-wide business outcomes.

However, the central finding is not that AI is failing; 64 percent of respondents say AI is already delivering meaningful business outcomes. The problem is that “meaningful” is doing a lot of heavy lifting in that sentence, and the distance between incremental productivity gains and the kind of compounding operational efficiency that moves the needle on margin is, for most organisations, still substantial.

The architecture of a performance gap

KPMG’s report distinguishes between what it labels “AI leaders” (i.e. organisations that are scaling or actively operating agentic AI) and everyone else. The gap in outcomes between these two cohorts is striking.

Headshot of Steve Chase, Global Head of AI and Digital Innovation at KPMG International.

Steve Chase, Global Head of AI and Digital Innovation at KPMG International, said: “The first Global AI Pulse results reinforce that spending more on AI is not the same as creating value. Leading organisations are moving beyond enablement, deploying AI agents to reimagine processes and reshape how decisions and work flow across the enterprise.”

Among AI leaders, 82 percent report that AI is already delivering meaningful business value. Among their peers, that figure drops to 62 percent. That 20-percentage-point spread might look modest in isolation, but it compounds quickly when you consider what it reflects: not just better tooling, but fundamentally different deployment philosophies.

The organisations in that 11 percent are deploying agents that coordinate work across functions, route decisions without human intermediation at every step, surface enterprise-wide insights from operational data in near real-time, and flag anomalies before they escalate into incidents.

In IT and engineering functions, 75 percent of AI leaders are using agents to accelerate code development versus 64 percent of their peers. In operations, where supply-chain orchestration is the primary use case, the split is 64 percent versus 55 percent. These are not marginal differences in tool adoption rates; they reflect different levels of process re-architecture.

Most enterprises that have deployed AI have done so by layering models onto existing workflows (e.g. a co-pilot here, a summarisation tool there…) without redesigning the process those tools sit inside. That produces incremental gains.

The organisations closing the performance gap have inverted this approach: they are redesigning the process first, then deploying agents to operate within the redesigned structure. The difference in return on AI spend between these two approaches, over a three-to-five-year horizon, is likely to be the defining competitive variable in several industries.

What $186 million actually buys—and what it does not

The investment figures in the KPMG data deserve scrutiny. A weighted global average of $186 million per organisation sounds substantial, but the regional variance tells a more interesting story.

ASPAC leads at $245 million, the Americas at $178 million, and EMEA at $157 million. Within ASPAC, organisations including those in China and Hong Kong are investing at $235 million on average; within the Americas, US organisations are at $207 million.

These figures represent planned spend across model licensing, compute infrastructure, professional services, integration, and the governance and risk management apparatus needed to operate AI responsibly at scale.

The question is not whether $186 million is too much or too little; it is what proportion of that figure is being allocated to the operational infrastructure required to derive value from the models themselves. The survey data suggests that most organisations are still underweighting this latter category.

Compute and licensing costs are visible and relatively easy to budget for. The friction costs – the engineering hours spent integrating AI outputs with legacy ERP systems, the latency introduced by retrieval-augmented generation pipelines built on top of poorly structured data, and the compliance overhead of maintaining audit trails for AI-assisted decisions in regulated industries – tend to surface late in deployment cycles and often exceed initial estimates.

Vector database integration is a useful example. Many agentic workflows depend on the ability to retrieve relevant context from large, unstructured document repositories in real time. Building and maintaining the infrastructure for this – selecting between providers such as Pinecone, Weaviate, or Qdrant, embedding and indexing proprietary data, and managing refresh cycles as underlying data changes – adds meaningful engineering complexity and ongoing operational cost that rarely appears in initial AI investment proposals. 

When that infrastructure is absent or poorly maintained, agent performance degrades in ways that are often difficult to diagnose, as the model’s behaviour is correct relative to the context it receives, but that context is stale or incomplete.
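The retrieval step and the staleness failure mode described above can be sketched with a toy in-memory index. This is an illustrative sketch only: real deployments use managed stores such as Pinecone, Weaviate, or Qdrant with learned embeddings, whereas the three-dimensional vectors, document IDs, and policy text here are invented for demonstration.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

class VectorIndex:
    def __init__(self):
        self.docs = {}  # doc_id -> (embedding, text, version)

    def upsert(self, doc_id, embedding, text, version):
        # Refresh cycles matter: re-indexing replaces the stale entry.
        self.docs[doc_id] = (embedding, text, version)

    def query(self, embedding, top_k=1):
        ranked = sorted(
            self.docs.items(),
            key=lambda item: cosine(embedding, item[1][0]),
            reverse=True,
        )
        return [(doc_id, text, ver) for doc_id, (_, text, ver) in ranked[:top_k]]

index = VectorIndex()
index.upsert("policy-42", [0.9, 0.1, 0.0], "Refunds allowed within 30 days", version=1)

# An agent retrieves context for a refund question.
hits = index.query([1.0, 0.0, 0.0])

# If the source document changes but the index is not refreshed, the agent
# keeps receiving version-1 text: correct relative to its context, but stale.
index.upsert("policy-42", [0.9, 0.1, 0.0], "Refunds allowed within 14 days", version=2)
fresh = index.query([1.0, 0.0, 0.0])
```

The point of the sketch is the operational burden, not the search itself: until the `upsert` re-index runs, the agent's answer is faithful to stale context, which is exactly the hard-to-diagnose degradation the paragraph above describes.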

Governance as an operational variable, not a compliance exercise

Perhaps the most practically useful finding in the KPMG survey is the relationship between AI maturity and risk confidence.

Among organisations still in the experimentation phase, just 20 percent feel confident in their ability to manage AI-related risks. Among AI leaders, that figure rises to 49 percent. Meanwhile, 75 percent of global leaders cite data security, privacy, and risk as ongoing concerns regardless of maturity level, but maturity changes how those concerns are operationalised.

This is an important distinction for boards and risk functions that tend to frame AI governance as a constraint on deployment. The KPMG data suggests the opposite dynamic: governance frameworks do not slow AI adoption among mature organisations; they enable it. The confidence to move faster – to deploy agents into higher-stakes workflows, to expand agentic coordination across functions – correlates directly with the maturity of the governance infrastructure surrounding those agents.

In practice, this means that organisations treating governance as a retrospective compliance layer are doubly disadvantaged. They are slower to deploy, because every new use case triggers a fresh governance review, and they are more exposed to operational risk, because the absence of embedded governance mechanisms means that edge cases and failure modes are discovered in production rather than in testing.

Organisations that have embedded governance into the deployment pipeline itself (e.g. model cards, automated output monitoring, explainability tooling, and human-in-the-loop escalation paths for low-confidence decisions) are the ones operating with the confidence that allows them to scale.
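A human-in-the-loop escalation path of the kind just described can be sketched in a few lines. The confidence threshold, the decision fields, and the review queue below are illustrative assumptions, not any vendor's API; a production system would persist the audit log and route escalations to a real review workflow.

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85  # below this, a human reviews before execution

@dataclass
class AgentDecision:
    action: str
    confidence: float

review_queue = []   # decisions awaiting human sign-off
audit_log = []      # every routing outcome, retained for audit

def route(decision: AgentDecision) -> str:
    """Auto-execute high-confidence decisions; escalate the rest."""
    if decision.confidence >= CONFIDENCE_FLOOR:
        outcome = "auto-executed"
    else:
        review_queue.append(decision)
        outcome = "escalated"
    # Governance embedded in the pipeline: every decision is logged,
    # whether it executed automatically or went to a human.
    audit_log.append((decision.action, decision.confidence, outcome))
    return outcome

r1 = route(AgentDecision("approve_invoice", 0.97))
r2 = route(AgentDecision("approve_invoice", 0.61))
```

Because the monitoring and escalation live inside the deployment path rather than in a retrospective review, edge cases surface in the review queue before they become production incidents, which is the dynamic the survey data associates with higher risk confidence.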

“Ultimately, there is no agentic future without trust and no trust without governance that keeps pace,” explains Chase. “The survey makes clear that sustained investment in people, training and change management is what allows organisations to scale AI responsibly and capture value.”

Regional divergence and what it signals for global deployment

For multinationals managing AI programmes across regions, the KPMG data flags material differences in deployment velocity and organisational posture that will affect global rollout planning.

ASPAC is advancing most aggressively on agent scaling; 49 percent of organisations there are scaling AI agents, compared with 46 percent in the Americas and 42 percent in EMEA. ASPAC also leads on the more complex capability of orchestrating multi-agent systems, at 33 percent.

The barrier profiles also differ in ways that carry real operational implications. In both ASPAC and EMEA, 24 percent of organisations cite a lack of leadership trust and buy-in as a primary barrier to AI agent deployment. In the Americas, that figure drops to 17 percent.

Agentic systems, by definition, make or initiate decisions without per-instance human approval. In organisational cultures where decision accountability is tightly concentrated at the senior level, this can generate institutional resistance that no amount of technical capability resolves. The fix is governance design; specifically, defining in advance what categories of decision an agent is authorised to make autonomously, what triggers escalation, and who carries accountability for agent-initiated outcomes.
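That governance design — decision categories, escalation triggers, and named accountability — can be expressed as a simple policy table. The categories, monetary limits, and owner roles below are hypothetical examples, not drawn from the KPMG report.

```python
# Policy: category -> (autonomous_allowed, escalation_trigger, accountable_owner)
POLICY = {
    "reorder_stock":  (True,  lambda amount: amount > 50_000, "ops-director"),
    "issue_refund":   (True,  lambda amount: amount > 500,    "cs-manager"),
    "amend_contract": (False, lambda amount: True,            "legal-counsel"),
}

def authorise(category: str, amount: float) -> tuple:
    """Decide whether an agent may act autonomously for this decision."""
    autonomous, trigger, owner = POLICY[category]
    if not autonomous or trigger(amount):
        return ("escalate", owner)   # a human owner decides and is accountable
    return ("proceed", owner)        # the agent acts; the owner remains accountable

a = authorise("reorder_stock", 12_000)   # within limits: agent proceeds
b = authorise("issue_refund", 2_000)     # over the refund limit: escalate
c = authorise("amend_contract", 0)       # never autonomous: always escalate
```

Defining this table before deployment, rather than after an incident, is what converts diffuse institutional resistance into a concrete, auditable allocation of accountability.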

The expectation gap around human-AI collaboration is also worth noting for anyone designing agent-assisted workflows at a global scale.

East Asian respondents anticipate AI agents leading projects at a rate of 42 percent. Australian respondents prefer human-directed AI at 34 percent. North American respondents lean toward peer-to-peer human-AI collaboration at 31 percent. These differences will affect how agent-assisted processes need to be designed in different regional deployments of the same underlying system, adding localisation complexity that is easy to underestimate in centralised platform planning.

One data point in the KPMG survey that deserves particular attention from CFOs and boards: 74 percent of respondents say AI will remain a top investment priority even in the event of a recession. This is either a sign of genuine conviction about AI’s role in cost structure and competitive positioning, or it reflects a collective commitment that has not yet been tested against actual budget pressure. Probably both, in different proportions across different organisations.

What it does indicate is that the window for organisations still in the experimentation phase is not indefinite. If the 11 percent of AI leaders continue to compound their advantage (and the KPMG data suggests the mechanisms for doing so are in place) the question for the remaining 89 percent is not whether to accelerate AI deployment, but how to do so without compounding the integration debt and governance deficits that are already constraining their returns.

See also: Hershey applies AI across its supply chain operations

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post KPMG: Inside the AI agent playbook driving enterprise margin gains appeared first on AI News.

]]>
DeepL’s Borderless Business report reveals 83% of enterprises are still behind on language AI https://www.artificialintelligence-news.com/news/deepls-borderless-business-report-reveals-83-of-enterprises-are-still-behind-on-language-ai/ Wed, 01 Apr 2026 12:00:00 +0000 https://www.artificialintelligence-news.com/?p=112828 AI is everywhere in the enterprise. The translation workflow often is not. That is the core finding of DeepL’s 2026 Language AI report, “Borderless Business: Transforming Translation in the Age of AI,” published on March 10. Despite broad AI investment across business functions, the report reveals that language and multilingual operations–workflows that touch sales, legal, customer support, and […]

The post DeepL’s Borderless Business report reveals 83% of enterprises are still behind on language AI appeared first on AI News.

]]>
AI is everywhere in the enterprise. The translation workflow often is not. That is the core finding of DeepL’s 2026 Language AI report, “Borderless Business: Transforming Translation in the Age of AI,” published on March 10. Despite broad AI investment across business functions, the report reveals that language and multilingual operations–workflows that touch sales, legal, customer support, and global expansion–remain the most underautomated part of the enterprise technology stack.

The automation gap hiding in plain sight

According to DeepL’s Borderless Business report, 35% of international businesses still handle translation entirely through manual processes, while a further 33% rely on traditional automation paired with systematic human review. Only 17% have implemented next-generation AI tools–large language models or agentic AI–for multilingual operations. 

That means, as per the report’s findings, 83% of enterprises have not transitioned to modern language AI capabilities despite investing in AI across other parts of the business. The report, which draws on survey data from business leaders across the United States, United Kingdom, France, Germany, and Japan, also found that enterprise content volume has grown 50% since 2023, yet 68% of companies still rely on workflows built for a different era.

Jarek Kutylowski, CEO and founder of DeepL, put it plainly: “AI is everywhere, but efficiency is not. Most companies have deployed AI in some form, yet few achieve real productivity at scale because core workflows remain designed around people, not systems.”

Why language AI is becoming infrastructure

The angle that makes this more than a translation story is where language AI is now being deployed. According to DeepL’s research, global expansion is the top driver of language AI investment at 33%, followed by sales and marketing at 26%, customer support at 23%, and legal and finance at 22%. These are mission-critical business functions, not peripheral content tasks.

DeepL’s broader research from December 2025, surveying 5,000 senior business leaders across the same five markets, found that 54% of global executives say real-time voice translation will be essential in 2026, up from 32% today. As per that research, the UK and France are leading early adoption at 48% and 33% respectively, while Japan sits at 11%, a gap that points to significant variance in enterprise readiness across global markets.

The company now serves over 200,000 business customers across 228 markets, and at the AI & Big Data Expo in London in February 2026, Scott Ivell, vice president of product marketing at DeepL, told SiliconANGLE that the company has 2,000 customers globally deploying AI agents, used for report analysis, sales targeting, and legal document review.

The sovereign AI dimension

What separates DeepL’s positioning from general-purpose AI competitors is where it sits on the enterprise trust spectrum. As enterprises in regulated industries–financial services, healthcare, legal, government–accelerate AI adoption, data sovereignty is increasingly the deciding factor in platform selection.

DeepL is ISO 27001, SOC 2 Type 2, and GDPR certified, and offers Bring Your Own Key encryption for enterprise customers, giving organisations the ability to withdraw data access in seconds, a control level that most large language model providers do not offer. As per DeepL’s own security documentation, this means data can effectively be placed beyond anyone’s reach, including DeepL itself, at the customer’s discretion.
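The Bring Your Own Key pattern behind that control can be illustrated with a toy envelope-encryption sketch: data is encrypted with a data key, and the data key is wrapped with a customer-held key, so withdrawing the customer key leaves the provider holding only unreadable material. This is purely conceptual — XOR stands in for real AES key wrapping, and none of it reflects DeepL's actual implementation; do not use it for real encryption.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real cipher; works only for equal-length inputs.
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"sensitive record"
data_key = secrets.token_bytes(len(plaintext))
customer_key = secrets.token_bytes(len(data_key))

ciphertext = xor(plaintext, data_key)       # the provider stores this
wrapped_key = xor(data_key, customer_key)   # and this, but not the keys themselves

# Normal operation: the customer key unwraps the data key, making data readable.
recovered = xor(ciphertext, xor(wrapped_key, customer_key))

# Revocation: the customer withdraws their key. The provider retains only
# ciphertext and wrapped_key, neither of which reveals the plaintext.
customer_key = None
```

The design choice that matters is where the customer key lives: because the provider never holds it, revocation is instantaneous and does not depend on the provider deleting anything.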

Sebastian Enderlein, CTO at DeepL, has framed 2026 as a year of execution rather than experimentation: “I believe 2026 will be the year AI stops experimenting and starts executing, at a scale we haven’t yet seen. After a cycle of pilots and proofs of concept, businesses are now ready to scale, and they’re betting big on agentic AI to do it.”

DeepL Agent and the broader pivot

DeepL’s product direction in 2026 reflects the same shift visible across enterprise AI broadly, from single-function tools to autonomous workflow execution. DeepL Agent, launched in general availability in November 2025, is designed to navigate business systems, execute multi-step workflows, and operate across CRM, email, calendars, and project management tools without requiring complex integrations.

According to DeepL’s announcement, the agent operates with enterprise-grade security and data sovereignty built in by default, a deliberate positioning choice that targets the segment of enterprises that cannot send sensitive documents to OpenAI or Microsoft’s public cloud endpoints.

DeepL’s chief scientist, Stefan Miedzianowski, has described the current moment as a transition on the technology adoption curve: “2026 will undoubtedly be the year of the agent. 2025 was the year when public awareness caught up with the science showing what agents can do, but enterprise adoption at scale will happen now. We are moving from the innovators to the early majority.”

As per the Borderless Business report, 71% of business leaders say transforming workflows with AI is a priority for 2026, with expected returns across customer experience, employee productivity, and time to market. The gap between that ambition and the 17% who have actually modernised their language operations is the market DeepL is squarely targeting.

DeepL is a Platinum Sponsor at TechEx Global, appearing at the AI & Big Data Expo and co-located events at Olympia London, February 3 & 4, 2027.

See also: Automating complex finance workflows with multimodal AI

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post DeepL’s Borderless Business report reveals 83% of enterprises are still behind on language AI appeared first on AI News.

]]>
Hershey applies AI across its supply chain operations https://www.artificialintelligence-news.com/news/hershey-applies-ai-across-its-supply-chain-operations/ Wed, 01 Apr 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112824 Artificial intelligence is moving beyond software and further into the physical side of business. Companies in food production and logistics are starting to use data systems to support day-to-day decisions, not long-term planning. That change is visible in The Hershey Company’s latest strategy update. At its Investor Day, the company said it plans to use […]

The post Hershey applies AI across its supply chain operations appeared first on AI News.

]]>
Artificial intelligence is moving beyond software and further into the physical side of business. Companies in food production and logistics are starting to use data systems to support day-to-day decisions, not long-term planning.

That change is visible in The Hershey Company’s latest strategy update. At its Investor Day, the company said it plans to use AI in its operations, from sourcing analytics to plant automation and fulfilment, with a focus on how the business runs behind the scenes.

Hershey said it plans to apply AI to sourcing and fulfilment. This includes using data to guide how ingredients are bought and how products are distributed. In its Investor Day material, the company said it aims to build “a faster, smarter and more resilient supply chain powered by automation and AI-enabled decision making”.

Supply chains in food and snack markets are under steady pressure: costs can change quickly; demand shifts by season, market, and product category; and retailers still expect goods to arrive on time and in the right mix.

Hershey said its digital planning tools are meant to connect different parts of the business. The company said those systems are designed to reduce waste and improve inventory levels. It also said digital operational planning can connect data in the supply chain and help raise service levels.

From reporting to action

Part of Hershey’s update is its use of the phrase “AI-enabled decision-making.” The company said its approach will link sourcing and delivery more closely and plans to use automated fulfilment systems for custom assortments and to improve speed to market.

This is a useful way to read the strategy: the hard task is turning data into decisions that help operations move faster or with fewer mistakes.

This is where AI is starting to play a bigger role, according to Hershey. The value comes from how operations are connected.

AI in the supply chain and plant operations

The changes also extend into manufacturing. Hershey said it will increase plant automation to improve manufacturing efficiency and use AI in more parts of its operating model. What is changing is how AI fits into those systems. Instead of sitting apart from production, it is being positioned as part of the process used to guide planning and support execution.

That may help companies improve planning and respond more quickly when conditions change. In a business where input costs and consumer demand can change often, even small gains in timing can matter.

Food and snack companies deal with constant swings in input costs and demand. Ingredients like cocoa and sugar are affected by weather, trade flows, and supply issues. Companies still have to keep factories running and products moving through retail channels.

Hershey’s plan to use sourcing analytics is one example of how AI may be applied in that setting. By analysing supplier data and market trends, the company may improve how it buys raw materials and manages risk. The company also said it wants to better connect workers in its operations. That suggests the strategy is not only about automation. It is also about coordination in the business.

Hershey said it plans to “incorporate AI in every stage of its operations,” including sourcing analytics and worker connectivity, as well as automated fulfilment and plant automation.

That makes the company a useful case study for a wider change in enterprise AI. Firms are moving away from narrow pilots and toward broader use in business functions. In that model, AI is treated as a part of supply and delivery systems.

CEO Kirk Tanner framed the plan around growth and execution, saying, “The strategy is clear. The team is ready. The next chapter of growth and leading performance starts now”.

Where this may lead

This kind of change is likely to spread as more companies look for ways to connect data with operational decisions. Hershey’s strategy shows how AI is starting to take a larger role in industries built on physical goods. The technology may sit in the background, but its role in daily operations is becoming harder to ignore.

(Photo by Janne Simoes)

See also: JPMorgan begins tracking how employees use AI at work

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Hershey applies AI across its supply chain operations appeared first on AI News.

]]>
Mastercard’s AI payment demo points to agent-led commerce https://www.artificialintelligence-news.com/news/mastercard-ai-payment-demo-points-to-agent-led-commerce/ Mon, 23 Feb 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112338 A recent demonstration from Mastercard suggests that payment systems may be heading toward a future where software agents, not people, complete purchases. During the India AI Impact Summit 2026, Mastercard showed what it described as its first fully authenticated “agentic commerce” transaction. In the demo, as reported by Times of India, an AI agent searched […]

The post Mastercard’s AI payment demo points to agent-led commerce appeared first on AI News.

]]>
A recent demonstration from Mastercard suggests that payment systems may be heading toward a future where software agents, not people, complete purchases. During the India AI Impact Summit 2026, Mastercard showed what it described as its first fully authenticated “agentic commerce” transaction.

In the demo, as reported by Times of India, an AI agent searched for a product, assessed the website, and completed the purchase using stored payment credentials, without the user opening an app or entering card details. The company said the transaction took place inside a secure payment framework designed to verify both the user and the AI acting on their behalf.

The demonstration was controlled, not a public rollout. Mastercard executives told reporters that broader deployment would depend on regulatory approval and ecosystem readiness. Still, the test highlights a change that many enterprises may need to prepare for: the possibility that customers – or corporate systems – will increasingly rely on AI agents to initiate and complete transactions.

Assisted checkout to delegated spending

Digital payments have usually focused on reducing friction for human users through tokenisation, saved credentials, and one-click checkout. Agentic commerce goes further. Instead of helping a user complete a purchase, the system allows software to handle the process from start to finish once permission rules are in place.

The model relies on several building blocks already used in modern payments: identity verification, tokenised card data, and risk monitoring. What changes is who performs the action. If AI agents can act within defined limits, like spending caps or merchant restrictions, checkout may change from a user interaction to a background workflow.

For enterprises, the issue is that if software can spend money automatically, procurement rules, approval chains, and audit trails need to account for machine decisions, not human ones. Finance teams may need clearer policies on when an AI agent can commit funds, how liability is assigned if something goes wrong, and how fraud detection should treat automated transactions.
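The article does not describe any vendor's actual controls, but the kind of policy a finance team might enforce can be sketched as a simple pre-authorisation check. All names here are hypothetical, purely to illustrate spending caps, merchant allowlists, and human escalation:

```python
from dataclasses import dataclass

@dataclass
class AgentPolicy:
    """Hypothetical per-agent spending policy an enterprise might enforce."""
    spending_cap: float          # maximum value of any single transaction
    allowed_merchants: set       # merchant allowlist
    requires_human_above: float  # escalate to a person past this amount

def authorise(policy: AgentPolicy, merchant: str, amount: float) -> str:
    """Return 'approve', 'escalate', or 'deny' for an agent-initiated purchase."""
    if merchant not in policy.allowed_merchants:
        return "deny"
    if amount > policy.spending_cap:
        return "deny"
    if amount > policy.requires_human_above:
        return "escalate"  # route into the human approval chain, leaving an audit trail
    return "approve"

policy = AgentPolicy(spending_cap=500.0,
                     allowed_merchants={"office-supplies-co", "cloud-host"},
                     requires_human_above=100.0)

print(authorise(policy, "office-supplies-co", 50.0))   # approve
print(authorise(policy, "office-supplies-co", 250.0))  # escalate
print(authorise(policy, "unknown-vendor", 20.0))       # deny
```

Even a toy check like this makes the article's point concrete: every decision the agent takes can be logged against an explicit rule, which is what audit trails for machine decisions would require.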

Payment networks position for machine customers

Mastercard is not alone in exploring this direction. Across the payments sector, providers are testing ways to embed transactions into AI-driven tools and digital assistants. The goal is to ensure that when autonomous software begins purchasing goods or services, payment networks remain part of the trust and verification layer.

In public statements tied to the summit demo, Mastercard framed the effort as building infrastructure that allows AI agents to transact safely on behalf of users. That framing points to a broader industry race: not to build smarter AI shopping tools, but to control the authentication systems that make those tools safe enough for financial use.

For banks and fintech firms, the change could affect how customer identity is managed. Traditional authentication often assumes a person is present, entering a password or approving a prompt. Agentic commerce assumes the opposite: the user may not be involved at the moment of purchase. That means identity systems must verify both the account owner’s prior consent and the agent’s authority at the time of transaction.

Merchants may need API-ready storefronts

If AI agents begin acting as buyers, merchant systems may also need to adapt. Online stores built mainly for human browsing may struggle if automated agents become a meaningful share of customers.

To support machine-driven purchases, product catalogues, pricing data, and checkout processes may need to be accessible through structured APIs, not only visual web pages. Inventory accuracy, transparent pricing, and clear return policies become more important when decisions are made by software trained to compare options instantly.

This could also influence competition. If agents optimise for price and delivery speed, merchants with inconsistent data or hidden fees may be filtered out before a human even sees them.
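As a rough sketch of what "API-ready" could mean in practice (the field names and ranking logic below are illustrative assumptions, not any published schema), a structured catalogue entry and the kind of comparison an agent might run over it could look like this:

```python
import json

# Illustrative, non-standard schema: machine-readable product entries exposing
# the fields an agent needs to compare offers without scraping web pages.
catalogue = [
    {"sku": "PZ-001", "name": "Large pepperoni pizza", "price": 14.99,
     "currency": "GBP", "in_stock": True, "delivery_days": 0, "fees": 2.50},
    {"sku": "PZ-002", "name": "Large margherita pizza", "price": 12.49,
     "currency": "GBP", "in_stock": True, "delivery_days": 0, "fees": 0.00},
]

def best_offer(entries):
    """Rank as a price-optimising agent might: total cost first, then speed.
    Entries with missing or hidden data are filtered out before ranking -
    which is how merchants with inconsistent data could drop out of view."""
    usable = [e for e in entries
              if e["in_stock"] and all(k in e for k in ("price", "fees"))]
    return min(usable, key=lambda e: (e["price"] + e["fees"], e["delivery_days"]))

print(json.dumps(best_offer(catalogue)["sku"]))  # prints "PZ-002"
```

The filtering step is the competitive point the article makes: an entry without transparent pricing never reaches the comparison at all.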

Security risks move, not disappear

While agentic commerce promises convenience, it also introduces new risks. A compromised AI assistant with payment authority could execute purchases at scale before detection. Fraud models that look for unusual user behaviour may need updating to distinguish between legitimate automated spending and malicious activity.

Regulators are likely to take a cautious approach. Mastercard’s own comments that the system still awaits approvals suggest that compliance frameworks for AI-initiated payments are still taking shape.

In enterprises deploying AI internally, similar concerns apply. Automated purchasing agents integrated into enterprise resource planning systems could streamline routine procurement, but they also expand the attack surface. Access controls and spending thresholds will matter more when software can execute financial actions without real-time human confirmation.

Where commerce may head

Mastercard’s demonstration does not mean agent-led payments will reach consumers immediately. Yet it offers a glimpse of how commerce may change as AI systems move from advisory roles into operational ones.

If the model matures, the most visible change may be that checkout disappears as a distinct step. Instead of visiting a site and paying, users or companies may set rules, and their software will handle the rest.

For enterprises, the important takeaway is less about Mastercard’s AI technology and more about the direction of travel. As AI agents gain the authority to act, payment systems, identity frameworks, and digital storefronts may need to treat software not as a tool, but as a participant in the transaction.

(Photo by Cova Software)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Mastercard’s AI payment demo points to agent-led commerce appeared first on AI News.

]]>
Hitachi bets on industrial expertise to win the physical AI race https://www.artificialintelligence-news.com/news/hitachi-physical-ai-industrial-expertise/ Mon, 23 Feb 2026 07:00:00 +0000 https://www.artificialintelligence-news.com/?p=112339 Physical AI – the branch of artificial intelligence that controls robots and industrial machinery in the real world – has a hierarchy problem. At the top, OpenAI and Google are scaling multimodal foundation models. In the middle, Nvidia is building the platforms and tools for physical AI development. And then there is a third camp: […]

The post Hitachi bets on industrial expertise to win the physical AI race appeared first on AI News.

]]>
Physical AI – the branch of artificial intelligence that controls robots and industrial machinery in the real world – has a hierarchy problem. At the top, OpenAI and Google are scaling multimodal foundation models. In the middle, Nvidia is building the platforms and tools for physical AI development.

And then there is a third camp: industrial manufacturers like Hitachi and Germany’s Siemens, which are making the quieter but arguably more grounded argument that you cannot train machines to navigate the physical world without first understanding it.

That argument is now moving from boardroom strategy to factory floor deployment, as Hitachi revealed in a recent interview with Nikkei Asia.

Why physical AI needs a better model

Kosuke Yanai, deputy director of Hitachi’s Centre for Technology Innovation-Artificial Intelligence, is direct about what separates viable physical AI from the theoretical kind. “Physical AI cannot be implemented in society without a systematic understanding that begins with foundational knowledge of physics and industrial equipment,” he told Nikkei.

Hitachi’s pitch is that it already holds much of that foundational knowledge – accumulated over decades of building railways, power infrastructure, and industrial control systems. The company has thermal fluid simulation technology that models the behaviour of gases and liquids, and signal-processing tools for monitoring equipment condition – what Yanai describes as the engineering foundation underpinning Hitachi’s ‘extensive knowledge of product design and control logic construction.’

Daikin and JR East

While Hitachi’s overarching physical AI architecture – the Integrated World Infrastructure Model (IWIM), which it describes as a mixture-of-experts system integrating multiple specialised models and data sets – remains in the concept verification stage, two real-world deployments signal that the underlying approach is already producing results.

In collaboration with Daikin Industries, Hitachi has deployed an AI system that diagnoses malfunctions in commercial air-conditioner manufacturing equipment. The system, trained on equipment maintenance records, procedure manuals, and design drawings, can now identify which component is likely failing when an anomaly is detected – the kind of operational intuition that previously existed only in the heads of experienced engineers.

With East Japan Railway (JR East), Hitachi has built an AI that identifies the root cause of malfunctions in the control devices running the Tokyo metropolitan area’s railway traffic management system, and then assists operators in formulating a response plan. In a network where delays ripple across millions of daily journeys, the ability to accelerate fault diagnosis carries real operational weight.

The R&D pipeline: Cutting development time

Hitachi’s physical AI push is also showing up in its research output. In December 2025, the company published findings from two projects presented at ASE 2025, a top-tier software engineering conference, that address a persistent bottleneck in industrial AI: the time and effort required to write and adapt control software.

In the automotive sector, Hitachi and its subsidiary Astemo developed a system that uses retrieval-augmented generation to automatically produce integration test scripts for vehicle electronic control units (ECUs) – pulling from hardware-specific API information and frontline engineering knowledge. In a pilot involving multi-core ECU testing, the technology reduced integration testing man-hours by 43% compared to manual execution.
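Hitachi has not published implementation details, but the general retrieval-augmented pattern it describes — fetch the hardware-specific API notes most relevant to a requirement, then condition generation on them — can be sketched in miniature. Everything below is hypothetical: a toy keyword retriever stands in for real vector search, and the assembled prompt stands in for the call to a script-generating model:

```python
def retrieve(query_terms, documents, k=2):
    """Toy keyword retriever standing in for a real vector search:
    score each document by word overlap with the query, keep the top k."""
    scored = sorted(
        documents,
        key=lambda d: len(set(d["text"].lower().split()) & query_terms),
        reverse=True)
    return scored[:k]

def build_prompt(requirement, docs):
    """Assemble the augmented prompt: retrieved API notes plus the requirement.
    In a real pipeline, this prompt would go to an LLM that drafts the test script."""
    context = "\n".join(f"- {d['text']}" for d in docs)
    return f"API notes:\n{context}\n\nWrite an integration test for: {requirement}"

# Hypothetical knowledge base of hardware-specific engineering notes.
knowledge_base = [
    {"text": "call ecu_init on each core before any can_send traffic"},
    {"text": "can_send returns a status code where 0 means success"},
    {"text": "the hmi logging api is unrelated to powertrain tests"},
]

requirement = "verify can_send succeeds after ecu_init on core 0"
docs = retrieve(set(requirement.lower().split()), knowledge_base)
prompt = build_prompt(requirement, docs)
print(prompt)
```

The design point the retrieval step illustrates is why this approach suits industrial settings: the model is grounded in frontline documentation rather than asked to recall hardware specifics from training data alone.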

In logistics, the company developed variability management technology that modularises robot control software into reusable components structured around a robot operating system (ROS). By mapping out the environmental variables and operational requirements of different warehouse settings in advance, the system lets operators adapt robotic picking-and-placing workflows to new products or layouts without rewriting software from scratch.

Safety as a structural requirement

One thread that runs through all of Hitachi’s physical AI work is its emphasis on safety guardrails – not as a compliance checkbox, but as an engineering constraint baked into system design. Yanai told Nikkei that the company is integrating its control and reliability technology from social infrastructure development to prevent AI outputs from deviating from human-approved operating parameters.

This includes input validation to screen out data that models should not be trained on, output verification to ensure machine actions do not endanger people or property, and real-time monitoring of the AI model itself for operational anomalies.

It is an important distinction. Physical AI systems fail in the real world, not in a sandbox. The stakes for an AI controlling railway signalling or factory robotics are categorically different from those governing a chatbot.

Infrastructure to match ambition

On the infrastructure side, Hitachi Vantara – the group’s data and digital infrastructure arm – is positioning itself as an early adopter of Nvidia’s RTX PRO Servers, built on the RTX PRO 6000 Blackwell Server Edition GPU and designed to accelerate agentic and physical AI workloads. The hardware is being paired with Hitachi’s iQ platform and used to build digital twins – virtual replicas of physical systems – that can simulate everything from grid fluctuations to robotic motion at scale.

The IWIM concept, meanwhile, is designed to connect Nvidia’s open-source Cosmos physical AI development platform with specialised Japanese-language LLMs and visual language models via the model context protocol (MCP) – essentially a framework to stitch together the models, simulation tools, and industrial datasets that physical AI systems require.

The broader race in physical AI is far from settled. But Hitachi’s position – that domain expertise and operational data are as important as model architecture – is increasingly hard to dismiss, particularly as deployments with partners like Daikin and JR East begin to demonstrate what that expertise is actually worth in practice.

Sources: Nikkei Asia (Feb 21, 2026); Hitachi R&D (Dec 24, 2025); Hitachi Vantara Blog (Aug 27, 2025)

See also: Alibaba enters physical AI race with open-source robot model RynnBrain

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Hitachi bets on industrial expertise to win the physical AI race appeared first on AI News.

]]>
Coca-Cola turns to AI marketing as price-led growth slows https://www.artificialintelligence-news.com/news/coca-cola-turns-to-ai-marketing-as-price-led-growth-slows/ Fri, 20 Feb 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112311 Shifting from price hikes to persuasion, Coca-Cola’s latest strategy signals how AI is moving deeper into the core of corporate marketing. Recent coverage of the company’s leadership discussions shows that Coca-Cola is entering what executives describe as a new phase focused on influence not pricing power. According to Mi-3, the company is changing its focus […]

The post Coca-Cola turns to AI marketing as price-led growth slows appeared first on AI News.

]]>
Shifting from price hikes to persuasion, Coca-Cola’s latest strategy signals how AI is moving deeper into the core of corporate marketing.

Recent coverage of the company’s leadership discussions shows that Coca-Cola is entering what executives describe as a new phase focused on influence rather than pricing power. According to Mi-3, the company is changing its focus from “price to persuasion,” with digital platforms, AI, and in-store execution becoming increasingly important in building demand. This reflects a change in consumer brand behaviour as inflation pressures ease and companies seek new strategies to maintain revenue growth.

That means expanding the role of AI in Coca-Cola’s marketing production and decision-making. The company has already experimented with generative AI in creative campaigns and continues testing how automation can help with content creation, campaign planning, and distribution.

Industry analysis from The Current points out that Coca-Cola has been embedding AI into marketing workflows and scaling its use in creative production and campaign execution. These efforts include using AI tools to generate images, assist with storytelling, and adjust campaigns across channels.

Testing AI in the marketing pipeline

The week’s reporting suggests the company is now testing AI-driven systems that can help automate parts of the advertising process, including drafting scripts or preparing social media content. While these initiatives remain in testing rather than full rollout, they illustrate how large brands are moving toward more automated marketing pipelines. Instead of relying only on agencies or long creative cycles, companies are exploring ways to shorten the path from concept to campaign.

During the past two years, many consumer goods firms have relied on price increases to offset rising costs. As inflation slows in several markets, analysts say that strategy has limits. Growth increasingly depends on persuading consumers to buy more often or choose higher-margin products. AI offers a way to refine that persuasion at scale, using data to shape messages, target audiences, and adjust campaigns in near real time.

Coca-Cola’s approach fits a wider trend in marketing technology. Generative AI tools have quickly moved from experimental use to regular deployment in large enterprises. According to McKinsey’s 2024 global AI survey, about one-third of organisations already use generative AI in at least one business function, with marketing and sales among the most common areas of adoption. Analysts expect that share to keep rising as companies test automation in creative work and customer engagement.

AI moves upstream in enterprise strategy

What stands out in Coca-Cola’s case is how the corporation frames AI not only as a cost-saving tool, but also as part of a broader operating shift. By focusing on persuasion, the company signals that AI’s value lies in shaping demand, not merely improving efficiency. That includes using AI to analyse consumer behaviour, tailor messaging to different markets, and support local teams with adaptable content.

The strategy also reflects a growing tension in the marketing sector. Automation can speed up production and test more campaign ideas, but it also raises questions about creative quality, brand consistency, and the role of human teams. Companies experimenting with AI-generated content must still ensure that messaging aligns with their brand identity and cultural context. For global brands like Coca-Cola, that challenge becomes more complex because campaigns frequently need to work in many regions.

Another factor shaping this transition is the rapid growth of digital advertising channels. As spending shifts toward social platforms, streaming services, and online retail media, the volume of content required has expanded. AI tools offer a way to produce many versions of ads, test different approaches, and adjust messaging based on performance data. This makes automation appealing not only for cost reasons, but also for speed and flexibility.

Coca-Cola’s move reflects a broader pattern: AI adoption is moving upstream in business processes. Early deployments frequently centred on analytics or internal automation. Companies are now applying AI in customer-facing functions like marketing strategy, creative development, and campaign management. That change suggests that AI is becoming part of how companies compete for market share, not how they reduce expenses.

The firm has not indicated that AI will replace creative teams or agencies. Instead, the current direction indicates a hybrid model in which automation handles repetitive or data-heavy tasks while human teams guide brand voice and campaign concepts. Many marketing leaders believe that this blended approach will define the next phase of AI adoption.

Coca-Cola’s emphasis on persuasion over pricing may impact how other consumer brands approach growth in a post-inflation environment. If AI can assist businesses in more precisely shaping demand, it may minimise reliance on price increases or mass-market campaigns.

(Photo by James Yarema)

See also: PepsiCo is using AI to rethink how factories are designed and updated

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Coca-Cola turns to AI marketing as price-led growth slows appeared first on AI News.

]]>
Retailers like Kroger and Lowe’s test AI agents without handing control to Google https://www.artificialintelligence-news.com/news/kroger-and-lowe-test-ai-agents-without-handing-control-to-google/ Mon, 12 Jan 2026 12:00:00 +0000 https://www.artificialintelligence-news.com/?p=111562 Retailers are starting to confront a problem that sits behind much of the hype around AI shopping: as customers turn to chatbots and automated assistants to decide what to buy, retailers risk losing control over how their products are shown, sold, and bundled. That concern is pushing some large chains to build or support their […]

The post Retailers like Kroger and Lowe’s test AI agents without handing control to Google appeared first on AI News.

]]>
Retailers are starting to confront a problem that sits behind much of the hype around AI shopping: as customers turn to chatbots and automated assistants to decide what to buy, retailers risk losing control over how their products are shown, sold, and bundled.

That concern is pushing some large chains to build or support their own AI-powered shopping tools, rather than relying only on third-party platforms. The goal is not to chase novelty, but to stay close to customers as buying decisions shift toward automation.

Several retailers, including Lowe’s, Kroger, and Papa Johns, are experimenting with AI agents that can help shoppers search for items, get support, or place orders. Many of these efforts are backed by tools from Google, which is offering retailers a way to deploy agents inside their own apps and websites instead of sending customers elsewhere.

Keeping control as shopping shifts toward automation

For grocers like Kroger, the concern is not whether AI will influence shopping, but how quickly it might do so. The company is testing an AI shopping agent that can compare items, handle purchases, and adjust suggestions based on customer habits and needs.

“Things are moving at a pace that if you’re not already deep into [AI agents], you’re probably creating a competitive barrier or disadvantage,” said Yael Cosset, Kroger’s chief digital officer and executive vice president.

The agent, which sits inside Kroger’s mobile app, can take into account factors such as time limits or meal plans, while also drawing on data the retailer already has, including price sensitivity and brand preferences. The intent is to keep those decisions within Kroger’s own systems rather than handing them off to external platforms.

That approach reflects a wider tension in retail. Making products available directly inside large AI chatbots can widen reach, but it can also weaken customer loyalty, reduce add-on sales, and cut into advertising revenue. Once a third party controls the interface, retailers have less say in how choices are framed.

This is one reason some retailers are cautious about selling directly through tools built by companies like OpenAI or Microsoft. Both have rolled out features that allow users to complete purchases inside their chatbots, and last year Walmart said it would work with OpenAI to let customers buy items through ChatGPT.

For retailers, the appeal of running their own agents is control. “There’s a market shift across the spectrum of retailers who are investing in their own capabilities rather than just relying on third-parties,” said Lauren Wiener, a global leader of marketing and customer growth at Boston Consulting Group.

Why retailers are spreading risk across vendors

Still, building and maintaining these systems is not simple. The underlying models change quickly, and tools that work today may need reworking weeks later. That reality is shaping how retailers think about vendors.

At Lowe’s, Google’s shopping agent sits behind the retailer’s own virtual assistant, Mylow. When customers use Mylow online, the company says conversion rates more than double. But Lowe’s does not rely on a single provider.

“The tech we build can become outdated in two weeks,” said Seemantini Godbole, Lowe’s chief digital and information officer. That pace is one reason Lowe’s works with several vendors, including OpenAI, rather than betting on one system.

Kroger is taking a similar approach. Alongside Google, it works with companies such as Instacart to support its agent strategy. “[AI agents] are not just top of mind, it’s a priority for us,” Cosset said. “It’s going at a remarkable pace.”

Testing AI agents without overcommitting

For others, the challenge is not keeping up with the technology, but deciding how much to build at all. Papa Johns does not create its own AI models or agents. Instead, it is testing Google’s food ordering agent to handle tasks like estimating how many pizzas a group might need based on a photo uploaded by a customer.

Customers will be able to use the agent by phone, through the company’s website, or in its app. “I don’t want to be an AI expert in terms of building the agents,” said Kevin Vasconi, Papa Johns’ chief digital and technology officer. “I want to be an AI expert in terms of, ‘How do I use the agents?’”

That focus on use rather than ownership reflects a practical view of where AI fits today. While agent-based shopping is gaining attention, it is not yet the main way people buy everyday goods.

“I don’t think [AI agents] are going to totally change the industry,” Vasconi said. “People still call our stores on the phone to order pizza in this day and age.”

Analysts see Google’s tools less as a finished answer and more as a way to lower the barrier for retailers that do not want to start from scratch. “The real challenge here is application of the technologies,” said Ed Anderson, a tech analyst at Gartner. “These announcements take a step forward so that retailers don’t have to start from ground zero.”

For now, retailers are testing, mixing vendors, and holding back from firm commitments. Kroger, Lowe’s, and Papa Johns have not shared detailed results from their trials. That caution suggests many are still trying to understand how much control they are willing to give up—and how much they can afford to keep—as shopping slowly shifts toward automation.

(Photo by Heidi Fin)

See also: Grab brings robotics in-house to manage delivery costs

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Retailers like Kroger and Lowe’s test AI agents without handing control to Google appeared first on AI News.

]]>