Latest Artificial Intelligence News & Insights | AI News

The US-China AI gap closes amid responsible AI concerns
The assumption that the US holds a durable lead in AI model performance is not well-supported by the data, and that is just one of the uncomfortable findings in Stanford University’s 2026 AI Index Report, published this week.

The report, produced by Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), is a 423-page annual assessment of where artificial intelligence stands. It covers research output, model performance, investment flows, public sentiment, and responsible AI. The headline findings are striking.

But the more consequential insights sit in the sections most coverage has skipped, particularly on AI safety, where the gap between what models can do and how rigorously they are evaluated for harm has not closed but widened.

Three findings in particular deserve more attention than they are getting.

The US-China model performance gap has effectively closed

The framing that the US leads China in AI development needs updating. According to the report, US and Chinese models have traded the top performance position multiple times since early 2025. In February 2025, DeepSeek-R1 briefly matched the top US model. As of March 2026, Anthropic’s top model leads by just 2.7%.

The US still produces more top-tier AI models – 50 models in 2025 to China’s 30 – and retains higher-impact patents. But China now leads in publication volume, citation share, and patent grants. China’s share of the top 100 most-cited AI papers grew from 33 in 2021 to 41 in 2024. South Korea, notably, leads the world in AI patents per capita.

The practical implication is that the assumption of a durable US technological lead in AI model performance is not well-supported by the data. The gap that existed two years ago has closed to a margin that shifts with each major model release.

There is a further structural vulnerability the report identifies. The US hosts 5,427 data centres – more than ten times any other country – but a single company, TSMC, fabricates almost every leading AI chip inside them. The entire global AI hardware supply chain runs through one foundry in Taiwan, though a TSMC expansion in the US began operations in 2025.

AI safety benchmarking is not keeping pace, and the numbers show it

Almost every frontier model developer reports results on capability benchmarks. The same is not true for responsible AI benchmarks, and the 2026 Index documents the gap with some precision.

The report’s benchmark table for safety and responsible AI shows that most entries are simply empty. Only Claude Opus 4.5 reports results on more than two of the responsible AI benchmarks tracked. Only GPT-5.2 reports StrongREJECT. Across benchmarks measuring fairness, security and human agency, the majority of frontier models report nothing.

Capability benchmarks are reported consistently across frontier models. Responsible AI benchmarks – covering safety, fairness, and factuality – are largely absent. Source: Stanford HAI 2026 AI Index Report

This does not mean frontier labs are doing no internal safety work. The report acknowledges that red-teaming and alignment testing happen, but that “these efforts are rarely disclosed using a common, externally comparable set of benchmarks.” The effect is that external comparison in AI safety dimensions is effectively impossible for most models.

Documented AI incidents rose to 362 in 2025, up from 233 in 2024, according to the AI Incident Database. The OECD’s AI Incidents and Hazards Monitor, which uses a broader automated pipeline, recorded a peak of 435 monthly incidents in January 2026, with a six-month moving average of 326.
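The OECD’s six-month moving average is just the arithmetic mean of the most recent six monthly counts. A minimal Python illustration, using invented monthly figures rather than the monitor’s underlying series:

```python
# Hypothetical monthly incident counts ending January 2026 (not OECD data),
# chosen only to show how a six-month moving average smooths a noisy peak.
monthly_incidents = [260, 290, 310, 300, 361, 435]

six_month_avg = sum(monthly_incidents) / len(monthly_incidents)
print(round(six_month_avg))  # 326 for this invented series
```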

Documented AI incidents rose to 362 in 2025, up from 233 the previous year and under 100 annually before 2022. Source: AI Incident Database (AIID), via Stanford HAI 2026 AI Index Report

The governance response at the organisational level is struggling to keep pace. According to a survey conducted by the AI Index and McKinsey, the share of organisations rating their AI incident response as “excellent” dropped from 28% in 2024 to 18% in 2025. Those reporting “good” responses also fell, from 39% to 24%. Meanwhile, the share experiencing three to five incidents rose from 30% to 50%.

The report also identifies a structural problem in responsible AI improvement itself: gains in one dimension tend to reduce performance in another. Improving safety can degrade accuracy, or improving privacy can reduce fairness, for example. There is no established framework for managing such trade-offs, and in several dimensions, including fairness and explainability, the standardised data needed to track progress over time does not yet exist.

Public anxiety rises with adoption, and the expert-public gap widens

Globally, 59% of people surveyed say AI’s benefits outweigh its drawbacks, up from 55% in 2024. At the same time, 52% say AI products and services make them nervous, an increase of two percentage points in one year. Both figures are moving upward simultaneously, which reflects a public that is using AI more while becoming more uncertain about where it leads.

The expert-public divide on AI’s employment effects is particularly sharp. According to the report, 73% of AI experts expect AI to have a positive impact on how people do their jobs, compared with just 23% of the general public – a 50-point gap. On the economy, the gap is 48 points (69% of experts are positive versus 21% of the public). On medical care, experts are considerably more optimistic at 84%, against 44% of the public.

Those gaps matter because public trust shapes regulatory outcomes, and regulatory outcomes shape how AI is deployed. On that dimension, the report flags something striking: the US reported the lowest level of trust in its own government to regulate AI responsibly of any country surveyed, at 31%. The global average was 54%. Southeast Asian countries were the most trusting, with Singapore at 81% and Indonesia at 76%.

Globally, the EU is trusted more than the US or China to regulate AI effectively. Among 25 countries in Pew Research Center’s 2025 survey, a median of 53% trusted the EU to regulate AI, compared to 37% for the US and 27% for China.

The report closes its public opinion chapter by noting that Southeast Asian countries remain among the world’s most optimistic about AI. In China, Malaysia, Thailand, Indonesia, and Singapore, more than 80% of respondents say AI will profoundly change their lives in the next three to five years. Malaysia posted the largest increase in this view from 2024 to 2025.

See also: IBM: How robust AI governance protects enterprise margins


Hyundai expands into robotics and physical AI systems
Hyundai Motor Group is starting to look like a company building machines that act in the real world. The change centres on physical AI: AI embedded in robots and systems that move and respond in physical spaces. Current efforts are mainly focused on factory and industrial settings.

Hyundai’s move into physical AI systems

In an interview with Semafor, chairman Chung Eui-sun said robotics and AI will play a central role in Hyundai’s next phase of growth, pushing the company beyond vehicles and into physical systems. The group plans to invest $26 billion in the US by 2028, according to United Press International, building on roughly $20.5 billion invested over the past 40 years.

A large part of that spending is tied to robotics and AI-driven systems that Hyundai is combining into a single approach. Chung described robotics and physical AI as important to Hyundai’s long-term direction, adding that the company is developing robots to work with people, not replace them.

From automation to collaboration

Hyundai is working on systems where robots and humans share tasks in the same space. This includes humanoid robots developed by Boston Dynamics, in which Hyundai acquired a controlling stake in 2021. Machines are being prepared for manufacturing use, with deployment planned around 2028. The company expects to scale production to up to 30,000 units per year by 2030, with the goal of improving work on the factory floor. Robots may handle repetitive or physically demanding tasks, while humans focus on oversight and coordination.

Chung said this kind of setup could help improve efficiency and product quality as customer expectations change.

Current deployments remain focused on industrial settings, though Hyundai is exploring other uses. Potential areas include logistics and mobility services that combine vehicles with AI systems. These may affect deliveries and shared services.

Manufacturing as the first use case for physical AI

While these uses are still developing, manufacturing remains the main testing ground: factories are where Hyundai is putting these ideas into practice. The company is already working on software-driven manufacturing systems in its US operations, combining data and robotics to manage production.

Physical AI builds on this by adding machines that adjust their actions based on real-time data. Chung said changes in regulations and customer demand are pushing the company to rethink how it operates across regions. Hyundai’s response is a mix of global expansion and local production, with AI and robotics helping standardise processes.

Energy and infrastructure

The company continues to invest in hydrogen through its HTWO brand, which covers production, storage and use. Chung pointed to rising demand linked to AI infrastructure and data centres as one reason hydrogen is gaining attention. He described hydrogen and electric vehicles as complementary options. The idea is to offer different energy choices depending on how systems are used. As AI moves into physical environments, energy becomes a more visible constraint.

What physical AI means for end users

Most people will not interact with a humanoid robot in the near term. But they will feel the effects of these systems in other ways. Products may be built faster and services tied to mobility or infrastructure may become more responsive.

Hyundai sells more than 7 million vehicles each year in over 200 countries, supported by 16 global production facilities, according to the same UPI report.

A gradual transition

Hyundai is still a major carmaker, with brands like Hyundai, Kia, and Genesis forming the base of its operations. What is changing is how those vehicles – and the systems around them – are designed and managed.

Physical AI represents a change from products to systems. It places AI in the environments where work and daily life take place. That change is still in progress, and many of the systems Hyundai is developing will take years to scale. The company is building toward a future where machines work with people in the real world.

(Photo by @named_aashutosh)

See also: Asylon and Thrive Logic bring physical AI to enterprise perimeter security


Meta has a competitive AI model but loses its open-source identity
The open-source AI movement has never lacked for options. Mistral, Falcon, and a growing field of open-weight models have been available to developers for years. But when Meta threw its weight behind Llama, something shifted. A company with three billion users, vast compute resources, and the credibility of a tech giant was now building openly, and the developer community responded.

By early 2026, the Llama ecosystem had reached 1.2 billion downloads, averaging about 1 million per day. That is the context for what happened on April 8, 2026. Meta launched Muse Spark, its first major new Meta AI model in a year, and the first product from its newly formed Meta Superintelligence Labs.

It is capable in ways Llama 4 never was, benchmarks well against the current frontier, and is completely proprietary. No free download. No open weights. No building on it unless Meta decides you can.

The company spent US$14.3 billion, brought in Alexandr Wang from Scale AI to lead its AI rebuild, then spent nine months tearing down its entire AI stack and starting over. Muse Spark is what came out the other side. The developer community that made Llama what it was is now being asked to wait for a future open-source version that may or may not arrive on any predictable timeline.

What is Muse Spark?

Muse Spark is a natively multimodal reasoning model with tool-use, visual chain of thought, and multi-agent orchestration built in. It now powers Meta AI, which reaches over three billion users in Meta’s apps. Meta rebuilt its technology infrastructure from scratch, letting the company create a model that is as capable as its older midsize Llama 4 variant for an order of magnitude less compute.

That efficiency number is worth noting. At the scale Meta operates, compute costs compound fast, and running a frontier-class Meta AI model at a fraction of the cost of its predecessors changes the economics of deploying it in billions of interactions daily.

On benchmarks, the picture is genuinely mixed. Muse Spark scores 52 on the Artificial Intelligence Index v4.0, placing it fourth overall behind Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6. Meta has not claimed to have built the best model in the world, which is itself a departure from the over-claiming that damaged Llama 4’s credibility.

Where Muse Spark leads is health. On HealthBench Hard – open-ended health queries – it scores 42.8, substantially ahead of Gemini 3.1 Pro at 20.6, GPT-5.4 at 40.1, and Grok 4.2 at 20.3. Health is a stated priority for Meta; the company says it worked with over 1,000 physicians to curate training data for the model.

Muse Spark also offers three modes of interaction: Instant mode for quick answers, Thinking mode for multi-step reasoning tasks, and Contemplating mode, which orchestrates multiple agents’ reasoning in parallel to compete with the most demanding reasoning modes from Gemini Deep Think and GPT Pro.
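Meta has not published how Contemplating mode works internally, but orchestrating multiple agents’ reasoning in parallel describes a well-known pattern that can be sketched. In the hypothetical Python sketch below, call_agent and judge_best are illustrative stand-ins, not Meta APIs:

```python
# A sketch of parallel multi-agent reasoning, assuming nothing about Meta's
# implementation: fan a question out to several reasoning strategies at once,
# then let a judge pick the strongest candidate answer.
from concurrent.futures import ThreadPoolExecutor

def call_agent(strategy: str, question: str) -> str:
    # Stand-in for a model call that reasons with a given strategy.
    return f"[{strategy}] answer to: {question}"

def judge_best(candidates: list) -> str:
    # Stand-in for a scoring model; here, trivially prefer the longest answer.
    return max(candidates, key=len)

def contemplate(question: str) -> str:
    strategies = ["deductive", "analogical", "adversarial"]
    with ThreadPoolExecutor(max_workers=len(strategies)) as pool:
        candidates = list(pool.map(lambda s: call_agent(s, question), strategies))
    return judge_best(candidates)

print(contemplate("How should the rollout be sequenced?"))
```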

The open-source retreat

This is the part of the Muse Spark story that the benchmark tables do not capture. Unlike Meta’s previous models, which were released as open-weight models – meaning anyone could download and run them on their own equipment – Muse Spark is entirely proprietary. The company said it will offer the model in a private preview to select partners through an API, making Muse Spark even more proprietary than the paid models offered by Meta’s rivals.

Wang addressed the change directly, stating: “Nine months ago, we rebuilt our AI stack from scratch. New infrastructure, new architecture, new data pipelines. This is step one. Bigger models are already in development with plans to open-source future versions.”

The developer community’s response has been sceptical. Some see this as a necessary pivot after Llama 4 failed to gain expected traction. Others view it as Meta closing the gates once it has something worth protecting. That is the community now being asked to wait while competitors without that open-source legacy continue shipping freely available weights.

Distribution over benchmarks

Meanwhile, Meta is not waiting for the developer community to come around. Muse Spark will debut in the coming weeks inside Facebook, Instagram, WhatsApp, and Messenger, as well as in Meta’s Ray-Ban AI glasses. That rollout path is arguably more consequential than any benchmark result. OpenAI and Anthropic sell to developers and enterprises. Meta deploys directly to over three billion people already inside its apps daily.

Meta’s push into health does raise privacy questions worth watching. Muse Spark users will need to log in with an existing Meta account to use it, and while Meta does not explicitly say personal account information will be used by the AI, the company has generally trained on public user data and has positioned Muse Spark as a personal superintelligence product.

Meta stock rose more than 9% on the day of the launch, a signal that investors read the Muse Spark release as proof that the US$14.3 billion bet on Wang and the nine-month rebuild produced something real. Whether the promised open-source versions actually materialise is a question the developer community will press every quarter. The answer will define how this chapter of Meta’s AI story is remembered.

See also: The Meta-Manus review: What enterprise AI buyers need to know about cross-border compliance risk


Boomi calls it “data activation” and says it’s the missing step in every AI deployment
The failure mode for enterprise AI in 2026 is not what most people expected. It is not that the models are wrong, or that agents cannot reason, or that the technology is overhyped. The failure mode is that the data feeding those systems is fragmented, inconsistently labelled, and spread across dozens of applications that were never designed to share context. 

Boomi calls this the agentic AI data activation problem, and after tracking 75,000 AI agents running in production across its customer base, the company says solving it comes before everything else. That figure comes from February, when Boomi reported its strongest momentum to date: more than 30,000 customers globally, 75,000 AI agents in production, and a customer base that includes over a quarter of the Fortune 500. 

Yet the consistent pattern across those deployments, according to Steve Lucas, chairman and CEO of Boomi, is that AI value only materialises once the data problem is resolved. “AI only delivers value when data is properly activated, trusted and governed first,” Lucas said when the company announced its latest platform capabilities on March 9.

The fragmentation problem

Enterprise data is not missing; it exists in abundance, distributed across ERP systems, CRMs, data lakes, SaaS platforms, and legacy applications that have accumulated over decades. What is missing is the shared context that allows an AI agent to treat data from one system as reliably compatible with data from another. 

An agent drawing customer records from a CRM and pricing data from an ERP may be working from conflicting definitions of what a customer or a product actually is. The outputs it produces are only as coherent as the data standards beneath them.

Boomi’s answer is Meta Hub, a central system of record announced in its March 9 platform update, designed to standardise business definitions across the enterprise and extend that context to every AI agent operating within it. The goal is to ensure agents reason from a consistent understanding of business logic rather than generating outputs based on fragmented interpretations pulled from disconnected systems.
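The problem Meta Hub targets is easy to make concrete. The sketch below is not Boomi’s product, just an illustration of the shared-definition idea: records from a CRM and an ERP, which disagree about what a customer is, are normalised into one canonical shape before any agent reasons over them. All field names are invented:

```python
# A sketch of canonical business definitions (field names are hypothetical):
# both source systems map into one agreed-upon notion of "customer".
from dataclasses import dataclass

@dataclass
class CanonicalCustomer:
    customer_id: str
    legal_name: str
    is_active: bool

def from_crm(rec: dict) -> CanonicalCustomer:
    # The CRM calls any contact an "account"; canonically, a customer
    # is only active once a deal has closed.
    return CanonicalCustomer(rec["AccountId"], rec["Name"], rec["Stage"] == "ClosedWon")

def from_erp(rec: dict) -> CanonicalCustomer:
    # The ERP keys customers by billing account and flags dormancy instead.
    return CanonicalCustomer(rec["BILL_TO"], rec["LegalName"], not rec["Dormant"])
```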

The same release introduced real-time SAP data extraction via change data capture, addressing one of the most common integration bottlenecks in large enterprises, where SAP data is often inaccessible due to slow, manual export processes that render it effectively unavailable to AI workflows in real-time. 
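Change data capture inverts the export model: rather than pulling periodic snapshots, the integration subscribes to row-level change events as rows are committed, so downstream AI workflows see updates within seconds. A minimal sketch of a CDC consumer, with an invented event shape and table name:

```python
# A sketch of a CDC consumer (the event format and table name are assumptions,
# not SAP's or Boomi's actual schema): each event carries the operation type
# and the new row image, and only relevant changes are forwarded.
def handle_change(event: dict) -> None:
    if event["table"] == "EQUIPMENT" and event["op"] in ("INSERT", "UPDATE"):
        push_to_ai_workflow(event["after"])  # forward the latest row image only

def push_to_ai_workflow(row: dict) -> None:
    print("refreshing agent context with", row)

# In production this loop would read from a change stream; here, one sample event.
for event in [{"table": "EQUIPMENT", "op": "UPDATE", "after": {"id": "10001", "status": "repair"}}]:
    handle_change(event)
```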

New governance capabilities for Snowflake Cortex agents within Boomi’s Agent Control Tower added audit trails and session logs, addressing a concern that has moved steadily up enterprise priority lists: AI agents operating as a black box, taking actions with no visible reasoning chain.

What the analysts’ recognition signals

Two independent assessments in March gave Boomi external validation of its positioning. On March 16, Gartner named Boomi a Leader in its 2026 Magic Quadrant for Integration Platform as a Service – the twelfth consecutive time – and positioned it highest for Ability to Execute.

On March 31, the IDC MarketScape for Worldwide API Management named Boomi a Leader, specifically noting its AI-centric strategy that treats APIs as both the fuel and the control plane for AI workloads. The Gartner framing is pointed. 

The report stated that AI-ready integration is a strategic capability that aligns architecture, integration, and governance to enable AI agents to effectively access enterprise data and operate within business processes. That framing validates the problem Boomi is addressing and signals that iPaaS platforms are now being evaluated on AI readiness rather than traditional integration capabilities alone.

The broader pattern

By now the pattern is familiar: the shift from pilot to production in enterprise AI stalls in a predictable place. Organisations have models. They have agents. What many do not have is the data infrastructure that makes those agents reliable enough to trust with real business processes.

Data activation – moving data from static storage into live, governed, context-rich flows that agents can actually reason from – is one articulation of what that missing layer needs to look like. Whether that framing becomes the industry standard or gets absorbed into a broader category is a question 2026 will start to answer.

What is not in question is that the enterprises finding ROI from agentic AI are the ones that sorted the data layer first.

Boomi will be exhibiting at the AI & Big Data Expo at TechEx North America, taking place 18–19 May 2026 at the San Jose McEnery Convention Center.

(Photo by Boomi)

See also: Autonomous AI systems depend on data governance


Anthropic’s refusal to arm AI is exactly why the UK wants it
The Anthropic UK expansion story is less about diplomatic courtship and more about what happens when a government punishes a company for having principles. In late February, US Defence Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a stark ultimatum: remove guardrails preventing Claude from being used for fully autonomous weapons and domestic mass surveillance, or face consequences. 

Amodei didn’t budge. He wrote that Anthropic could not “in good conscience” grant the Pentagon’s request, arguing that some uses of AI “can undermine rather than defend democratic values.” Washington’s response was swift. 

Trump directed every federal agency to immediately cease all use of Anthropic’s technology, and the Pentagon designated the company a supply chain risk, a label ordinarily reserved for adversarial foreign entities like Huawei. The US$200 million Pentagon contract was pulled. 

Defence tech companies instructed employees to stop using Claude and switch to alternatives. London, watching all of this unfold, saw something different.

The UK’s pitch

Staff at the UK’s Department for Science, Innovation and Technology (DSIT) have drawn up proposals for the US$380 billion company, ranging from a dual stock listing on the London Stock Exchange to an office expansion in the capital, according to multiple people with knowledge of the plans. Prime Minister Keir Starmer’s office has backed the effort, which will be put to Amodei when he visits in late May. 

Anthropic already has around 200 employees in Britain and appointed former prime minister Rishi Sunak as a senior adviser last year. The infrastructure for a meaningful UK presence is already there. What the British government is now offering is an explicit signal that Anthropic’s approach to AI–built on embedded ethical constraints–is an asset, not an obstacle.

A dual listing in London, if it materialised, would give Anthropic access to European institutional investors at a moment when its domestic regulatory standing remains under active legal challenge. The Pentagon’s appeal of the court-ordered injunction blocking the supply chain designation is still before the Ninth Circuit, and the outcome remains uncertain.

Ethics as a competitive advantage

The dispute has been framed largely as a legal and political fight. But its implications for global AI governance run deeper. Anthropic’s lawyers argued in court filings that Claude was not developed to be used for lethal autonomous weapons without human oversight, nor deployed to spy on US citizens, and that using the tools in these ways would represent an abuse of its technology. 

US District Judge Rita Lin, who granted a preliminary injunction blocking the blacklist in March, found the government’s actions “troubling” and concluded they likely violated the law. That judicial finding matters in the UK context. Britain is positioning itself as a regulatory environment sitting between Washington’s current posture, which demands unrestricted military access, and Brussels, where the EU AI Act imposes its own constraints. 

The UK government presents itself as offering a less constrained environment for AI companies than either the US or the European Union. Crucially, that pitch doesn’t ask Anthropic to abandon the guardrails it went to court to defend.

The courtship also sits alongside broader UK efforts to build domestic AI capability, including a recently announced £40 million state-backed research lab, after officials acknowledged the absence of a homegrown competitor to the leading US frontier labs.

Competition in London

The UK’s play for Anthropic is not happening in a vacuum. OpenAI has already committed to making London its biggest research hub outside the US. Google has anchored itself in King’s Cross since acquiring DeepMind in 2014. The race to secure frontier AI in London is already competitive, and Anthropic’s current circumstances make it the most consequential target yet.

Anthropic has been expanding internationally regardless of its domestic legal battles, including opening a Sydney office as its fourth Asia-Pacific location. The global growth strategy is already in motion. What remains to be seen is how much of it London gets to claim.

The company Washington blacklisted for having an AI ethics policy is now being actively courted by another G7 government that wants exactly that. The late May meetings with Amodei will be telling.

See also: Anthropic selected to build government AI assistant pilot


Experian uncovers fraud paradox in financial services’ AI adoption
The same technology that financial institutions are deploying is being weaponised against them. That is the core tension running through Experian’s 2026 Future of Fraud Forecast, and it’s a tension the company is in a position to name because it sits on both sides of it.

According to FTC data cited in the forecast, consumers lost more than US$12.5 billion to fraud in 2024. As per Experian’s own data accompanying the report, nearly 60% of companies reported an increase in fraud losses from 2024 to 2025. Experian’s fraud prevention solutions helped clients avoid an estimated US$19 billion in fraud losses globally in 2025, a figure that underscores the scale of the problem and how much defence now depends on AI matching the speed and autonomy of attacks.

The agentic AI issue

The most pressing finding in Experian’s forecast is what the company calls machine-to-machine mayhem, the point at which agentic AI systems, designed to transact autonomously on behalf of users, become indistinguishable from the bots fraudsters deploy for the same purpose.

According to Experian’s forecast, as organisations strive to integrate AI agents capable of independent decision-making, fraudsters are exploiting those same systems to run high-volume digital fraud at a scale and speed no human operation could sustain. The core challenge, as per the report, is that machine-to-machine interactions carry no clear ownership of liability; when an AI agent initiates a transaction that turns out to be fraudulent, the question of who is responsible has no settled answer.

Kathleen Peters, chief innovation officer for Fraud and Identity at Experian North America, framed the problem: “Technology is accelerating the evolution of fraud, making it more sophisticated and harder to detect. By combining differentiated data with advanced analytics and cutting-edge technology, businesses can strengthen fraud defences, safeguard consumers, and deliver secure, seamless experiences.”

Experian predicts that this will reach a tipping point in 2026, forcing substantive industry conversations around liability and the governance of agentic AI in commerce. Some organisations are already making preemptive moves. Amazon, for instance, has stated it blocks third-party AI agents from browsing and transacting on its platform, citing security and privacy concerns.

Four other threats the forecast identifies

Beyond the agentic AI issue, Experian’s forecast identifies four additional trends that financial institutions need to consider in 2026.

Deepfake candidates infiltrating remote workforces: Generative AI tools can now produce tailored CVs and real-time deepfake video capable of passing job interviews. According to the forecast, employers will onboard individuals who are not who they claim to be, granting bad actors access to internal systems. The FBI and Department of Justice issued multiple warnings in 2025 about documented instances of North Korean operatives using this approach to gain employment at US companies.

Website cloning overwhelms fraud teams: AI tools have made it easier to create replicas of legitimate sites, and harder to eliminate them permanently. As per the forecast, even after takedown requests are actioned, spoofed domains continue to resurface, forcing fraud teams into reactive patterns.

Emotionally intelligent scam bots: Generative AI means bots can conduct complex romance fraud and relative-in-need scams without human operators. According to Experian’s forecast, such bots respond convincingly, build trust over extended periods, and are becoming increasingly difficult to distinguish from genuine human interaction.

Smart home vulnerabilities: Devices including virtual assistants, smart locks, and connected appliances create new entry points for fraudsters. Experian forecasts that bad actors will exploit these devices to access personal data and monitor household activity as the connected home becomes a greater part of everyday financial behaviour.

Financial institutions’ responses

According to Experian’s Perceptions of AI Report, drawing on responses from more than 200 decision-makers at leading financial institutions, 84% identify AI as a critical or high priority for their business strategy over the next two years. A further 89% say AI will play an important role in the lending lifecycle.

The governance dimension, however, is where institutions struggle. According to the same report, 73% of respondents are concerned about the regulatory environment around AI, and 65% identify AI-ready data as one of their biggest deployment challenges. Data quality was rated the single most important factor in choosing an AI vendor, which puts Experian’s data-first positioning squarely in line with what financial institutions say they need most.

On the compliance side, Experian’s AI-powered Assistant for Model Risk Management addresses one of the most resource-intensive requirements facing institutions deploying AI. According to a 2025 Experian study of more than 500 global financial institutions, 67% struggle to meet their country’s regulatory requirements, 79% report more frequent supervisory communications from regulators than a year ago, and 60% still use manual compliance processes. In Experian’s announcement, the company states that more than 70% of larger institutions report model documentation compliance involves over 50 people, a figure that signals the scale of the automation opportunity.

Vijay Mehta, EVP of Global Solutions and Analytics at Experian Software Solutions, described the challenge the product addresses: “The AI-enabled speed of data analytics and model development is driving unprecedented business opportunities for financial institutions, but it comes with a challenge: global regulations that require time-consuming documentation. Experian Assistant for Model Risk Management helps solve this labour and resource-intensive requirement with end-to-end model documentation automation.”

The data quality foundation

Running underneath Experian’s fraud and compliance products is the same structural argument that ran through both IBM’s and Salesforce’s AI narratives this week: AI is only as reliable as the data it runs on. As per Experian’s Perceptions of AI Report, 65% of financial institution decision-makers consider AI-ready data one of their biggest challenges, and data quality is the most critical factor influencing trust in AI vendors.

That is not a coincidence of messaging. It reflects a constraint facing financial services institutions as they move AI from pilots into production credit decisioning, fraud detection, and regulatory reporting – functions where explainability and auditability are not optional.

Experian’s CDAO Paul Heywood is among the confirmed speakers at the AI & Big Data Expo, part of TechEx North America, taking place 18 – 19 May 2026 at the San Jose McEnery Convention Center, California. Experian is a Platinum Sponsor at TechEx Global.

See also: Hershey applies AI in its supply chain operations


DeepL’s Borderless Business report reveals 83% of enterprises are still behind on language AI
AI is everywhere in the enterprise. The translation workflow often is not. That is the core finding of DeepL’s 2026 Language AI report, “Borderless Business: Transforming Translation in the Age of AI,” published on March 10. Despite broad AI investment across business functions, the report reveals that language and multilingual operations – workflows that touch sales, legal, customer support, and global expansion – remain the most underautomated part of the enterprise technology stack.

The automation gap hiding in plain sight

According to DeepL’s Borderless Business report, 35% of international businesses still handle translation entirely through manual processes, while a further 33% rely on traditional automation paired with systematic human review. Only 17% have implemented next-generation AI tools – large language models or agentic AI – for multilingual operations.

That means, as per the report’s findings, 83% of enterprises have not transitioned to modern language AI capabilities despite investing in AI across other parts of the business. The report, which draws on survey data from business leaders across the United States, United Kingdom, France, Germany, and Japan, also found that enterprise content volume has grown 50% since 2023, yet 68% of companies still rely on workflows built for a different era.

Jarek Kutylowski, CEO and founder of DeepL, put it plainly: “AI is everywhere, but efficiency is not. Most companies have deployed AI in some form, yet few achieve real productivity at scale because core workflows remain designed around people, not systems.”

Why language AI is becoming infrastructure

The angle that makes this more than a translation story is where language AI is now being deployed. According to DeepL’s research, global expansion is the top driver of language AI investment at 33%, followed by sales and marketing at 26%, customer support at 23%, and legal and finance at 22%. These are mission-critical business functions, not peripheral content tasks.

DeepL’s broader research from December 2025, surveying 5,000 senior business leaders across the same five markets, found that 54% of global executives say real-time voice translation will be essential in 2026, up from 32% today. As per that research, the UK and France are leading early adoption at 48% and 33% respectively, while Japan sits at 11%, a gap that points to significant variance in enterprise readiness across global markets.

The company now serves over 200,000 business customers across 228 markets, and at the AI & Big Data Expo in London in February 2026, Scott Ivell, vice president of product marketing at DeepL, told SiliconANGLE that the company has 2,000 customers globally deploying AI agents, used for report analysis, sales targeting, and legal document review.

The sovereign AI dimension

What separates DeepL’s positioning from general-purpose AI competitors is where it sits on the enterprise trust spectrum. As enterprises in regulated industries – financial services, healthcare, legal, government – accelerate AI adoption, data sovereignty is increasingly the deciding factor in platform selection.

DeepL is ISO 27001, SOC 2 Type 2, and GDPR certified, and offers Bring Your Own Key encryption for enterprise customers, giving organisations the ability to withdraw data access in seconds, a control level that most large language model providers do not offer. As per DeepL’s own security documentation, this means data can effectively be placed beyond anyone’s reach, including DeepL itself, at the customer’s discretion.
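DeepL has not published its implementation, but the standard mechanism behind claims like this is envelope encryption: the vendor encrypts data with an internal key, and that key is itself wrapped under a key the customer controls. A minimal sketch, with the cryptography library standing in for whatever DeepL actually uses:

```python
# A sketch of BYOK-style envelope encryption (an assumption about the general
# technique, not DeepL's implementation). Revoking the customer key leaves
# the service holding only unreadable ciphertext.
from cryptography.fernet import Fernet

customer_key = Fernet.generate_key()  # held by the customer, not the vendor
dek = Fernet.generate_key()           # data-encryption key used by the service

ciphertext = Fernet(dek).encrypt(b"confidential contract text")
wrapped_dek = Fernet(customer_key).encrypt(dek)  # only the wrapped DEK is stored

# Reading data requires unwrapping the DEK with the customer's key...
recovered_dek = Fernet(customer_key).decrypt(wrapped_dek)
assert Fernet(recovered_dek).decrypt(ciphertext) == b"confidential contract text"
# ...so withdrawing that key places the data beyond anyone's reach, vendor included.
```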

Sebastian Enderlein, CTO at DeepL, has framed 2026 as a year of execution rather than experimentation: “I believe 2026 will be the year AI stops experimenting and starts executing, at a scale we haven’t yet seen. After a cycle of pilots and proofs of concept, businesses are now ready to scale, and they’re betting big on agentic AI to do it.”

DeepL Agent and the broader pivot

DeepL’s product direction in 2026 reflects the same shift visible across enterprise AI broadly, from single-function tools to autonomous workflow execution. DeepL Agent, launched in general availability in November 2025, is designed to navigate business systems, execute multi-step workflows, and operate across CRM, email, calendars, and project management tools without requiring complex integrations.

According to DeepL’s announcement, the agent operates with enterprise-grade security and data sovereignty built in by default, a deliberate positioning choice that targets the segment of enterprises that cannot send sensitive documents to OpenAI or Microsoft’s public cloud endpoints.

DeepL’s chief scientist, Stefan Miedzianowski, has described the current moment as a transition on the technology adoption curve: “2026 will undoubtedly be the year of the agent. 2025 was the year when public awareness caught up with the science showing what agents can do, but enterprise adoption at scale will happen now. We are moving from the innovators to the early majority.”

As per the Borderless Business report, 71% of business leaders say transforming workflows with AI is a priority for 2026, with expected returns across customer experience, employee productivity, and time to market. The gap between that ambition and the 17% who have actually modernised their language operations is the market DeepL is squarely targeting.

DeepL is a Platinum Sponsor at TechEx Global, appearing at the AI & Big Data Expo and co-located events at Olympia London, February 3 & 4, 2027.

See also: Automating complex finance workflows with multimodal AI


SAP and ANYbotics drive industrial adoption of physical AI
Heavy industry relies on people to inspect hazardous, dirty facilities. It’s expensive, and putting humans in these zones carries obvious safety risks. Swiss robot maker ANYbotics and software company SAP are trying to change that.

ANYbotics’ four-legged autonomous robots will be connected straight into SAP’s backend enterprise resource planning software. Instead of treating a robot as a standalone asset, this turns it into a mobile data-gathering node within an industrial IoT network.

This initiative shows that hardware innovation can now effectively connect with established business workflows. Underscoring that broader trend, SAP is sponsoring this year’s AI & Big Data Expo North America at the San Jose McEnery Convention Center, CA, an event that is fittingly co-located with the IoT Tech Expo and Intelligent Automation & Physical AI Summit.

When equipment breaks at a chemical plant or offshore rig, it costs a fortune. People do routine inspections to catch these issues early, but humans get tired and plants are massive. Robots, on the other hand, can walk the floor constantly, carrying thermal, acoustic, and visual sensors. Hook those sensors into SAP, and a hot pump instantly generates a maintenance request without waiting for a human to report it.

Cutting out the reporting lag

Usually, finding a problem and logging a work order are two disconnected steps. A worker might hear a weird noise in a compressor, write it down, and type it into a computer hours later. By the time the replacement part gets approved, the machine might be wrecked.

Connecting ANYbotics to SAP eliminates that delay. The robot’s onboard AI processes what it sees and hears instantly. If it hears an irregular motor frequency, it doesn’t just flash a warning on a separate screen; it uses APIs to tell the SAP asset management module directly. The system immediately checks for spare parts, figures out the cost of potential downtime, and schedules an engineer.
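As a rough sketch of that flow (an illustration, not ANYbotics’ or SAP’s actual integration), a threshold breach on an onboard reading can translate directly into a maintenance notification via an HTTP API. The endpoint, payload fields, and threshold below are all assumptions:

```python
# Hypothetical robot-to-ERP hook: an anomalous motor reading becomes a
# maintenance notification without a human re-typing it hours later.
import requests

VIBRATION_LIMIT_HZ = 120.0  # assumed alarm threshold for this motor class

def report_fault(asset_id: str, frequency_hz: float, location: str) -> None:
    """Raise an ERP maintenance notification for an out-of-tolerance reading."""
    if frequency_hz <= VIBRATION_LIMIT_HZ:
        return  # within tolerance: nothing to log
    resp = requests.post(
        "https://erp.example.com/api/maintenance-notifications",  # hypothetical endpoint
        json={
            "assetId": asset_id,
            "symptom": "irregular motor frequency",
            "readingHz": frequency_hz,
            "location": location,
            "priority": "high",
        },
        timeout=10,
    )
    resp.raise_for_status()

# report_fault("PUMP-0042", 137.5, "compressor hall B")  # would open the ticket
```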

This automates the flow of information from the floor to management. It also means machinery gets judged on hard, consistent numbers instead of a human inspector’s subjective opinion.

Putting robots in heavy industry isn’t like installing software in an office—companies have to deal with unreliable infrastructure. Factories usually have awful internet connectivity due to thick concrete, metal scaffolding, and electromagnetic interference.

To make this work, the setup relies on edge computing. It takes too much bandwidth to constantly stream high-def thermal video and lidar data to the cloud. So, the robots crunch most of that data locally. Onboard processors figure out the difference between a machine running normally and one that’s dangerously overheating. They only send the crucial details (i.e. the specific fault and its location) back to SAP.
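The filtering logic itself can be small. A sketch of the edge-side decision, assuming a per-asset temperature limit: frames are scored on-device, and only a compact fault record (never the raw imagery) is transmitted:

```python
# Edge filtering sketch (the limit and field names are assumptions): megabytes
# of thermal imagery reduce to a few hundred bytes, or to nothing at all.
import json
from typing import Optional

OVERHEAT_LIMIT_C = 85.0  # assumed limit; a real system would look this up per asset

def summarise_frame(peak_temp_c: float, asset_id: str, grid_ref: str) -> Optional[str]:
    """Return a compact fault record, or None if the frame is unremarkable."""
    if peak_temp_c < OVERHEAT_LIMIT_C:
        return None  # normal operation: discard the frame on-device
    return json.dumps({
        "assetId": asset_id,
        "fault": "overheating",
        "peakTempC": peak_temp_c,
        "location": grid_ref,
    })

print(summarise_frame(91.2, "PUMP-0042", "row 7, bay C"))
```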

To handle the network issues, many early adopters build private 5G networks. This gives them the coverage they need across huge facilities where regular Wi-Fi fails. It also locks down access, keeping the robot’s data safe from interception.

Of course, security is a major issue. A walking robot packed with cameras is effectively a roaming vulnerability. Companies must use zero-trust network protocols to constantly verify the robot’s identity and limit what SAP modules it can touch. If the robot gets hacked, the system has to cut its connection instantly to stop the attackers from moving laterally into the corporate network.

These robots generate a massive amount of unstructured data as they walk around. Turning raw audio and thermal images into the neat tables SAP requires is difficult.

If companies don’t manage this right, maintenance teams will drown in alerts. A robot that is too sensitive might spit out hundreds of useless warnings a day, until the SAP dashboard is ignored entirely. IT teams have to set strict rules before turning the system on. They need exact thresholds for what triggers a real maintenance ticket and what just needs to be watched.

The setup usually uses middleware to translate the robot’s telemetry into SAP’s language. This software acts as a filter, throwing out the noise so only actual problems reach the ERP system. The data lake storing all this information also needs to be organised for future machine learning projects. Fixing broken machines is the short-term goal; the long-term payoff is using years of robot data to predict failures before they happen.
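One common shape for that middleware filter is a persistence threshold plus a deduplication window: a fault must repeat before it becomes a ticket, and an asset ticketed recently stays quiet. A minimal sketch, with assumed thresholds rather than vendor defaults:

```python
# Alert-suppression sketch (thresholds are illustrative assumptions): only
# sustained, not-recently-ticketed faults are allowed through to the ERP.
import time

TICKET_AFTER_N_HITS = 3   # require a persistent signal, not a single blip
DEDUP_WINDOW_S = 3600     # at most one ticket per asset/fault per hour

_hits: dict = {}
_last_ticket: dict = {}

def should_open_ticket(asset_id: str, fault: str) -> bool:
    key = (asset_id, fault)
    _hits[key] = _hits.get(key, 0) + 1
    if _hits[key] < TICKET_AFTER_N_HITS:
        return False  # watch-only: not persistent enough yet
    if time.time() - _last_ticket.get(key, 0.0) < DEDUP_WINDOW_S:
        return False  # already ticketed within the window
    _last_ticket[key] = time.time()
    _hits[key] = 0
    return True
```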

Ensuring a successful physical AI deployment

Dropping robots into a factory naturally makes people nervous. The project’s success often comes down to how human resources handles it. Workers usually look at the robots and assume layoffs are next.

Management has to be clear about why the robots are there. The goal is to get people out of dangerous areas like high-voltage zones or toxic chemical sectors to reduce injuries. The robot collects the data, and the human engineer shifts to analysing that data and doing the actual repairs.

This requires retraining. Workers who used to walk the perimeter now have to read SAP dashboards, manage automated tickets, and work with the robots. They have to trust the sensors, and management has to make sure operators know they can take manual control if something unexpected happens.

Companies need to take the rollout slowly. Because syncing physical robots with enterprise software is complicated, large-scale rollouts should start as small, targeted pilots.

The first test should be in one specific area with known hazards but rock-solid internet. This lets IT watch the data flow between the hardware and SAP in a controlled space. At this stage, the main job is making sure the data matches reality. If the robot sees one thing and SAP records another, it has to be audited and fixed daily.

Once the data pipeline actually works, the company can add more robots and connect other systems, like automated parts ordering. IT chiefs have to keep checking if their private networks can handle more robots, while security teams update their defences against new threats.

If companies treat these autonomous inspectors as an extension of their corporate data architecture, they get a massive amount of information about their physical assets. But pulling it off means getting the network infrastructure, the data rules, and the human element exactly right.

See also: The rise of invisible IoT in enterprise operations


JPMorgan begins tracking how employees use AI at work
Banking house JPMorgan Chase is asking its roughly 65,000 engineers and technologists to use AI tools as part of their regular workflow. Business Insider reported that managers are tracking how often staff use these tools. That use may also influence performance reviews.

The report states employees are encouraged to use tools like ChatGPT and Claude Code when writing code, reviewing documents, or handling routine tasks. Internal systems then classify workers based on their level of use. Some are labelled “light users,” while others fall into a “heavy user” category.

JPMorgan has been using AI in fraud detection and risk analysis. What stands out here is not the technology itself, but how it is being woven into day-to-day expectations for staff.

According to internal materials cited by Business Insider, managers are paying close attention to how employees use AI tools.

JPMorgan shows AI adoption in banks

Many companies have spent the past two years rolling out AI tools across departments. In most cases, adoption has been uneven. Some teams experiment heavily, while others stick to existing workflows.

JPMorgan is treating AI as a standard part of the job. That creates a more uniform level of adoption across teams. In the past, performance reviews focused on output and accuracy. Now, they may also include how effectively employees use AI tools to reach those results.

That raises a practical question for large organisations. If AI can reduce the time needed for certain tasks, should employees be expected to produce more work in the same amount of time?

Keeping pace with internal change

By tracking use, the bank may be trying to avoid a familiar problem in enterprise software rollouts. Tools are deployed, but adoption is slow, limiting their impact. Making AI part of performance reviews creates a stronger incentive to engage with the technology. It also suggests that AI literacy is becoming a baseline skill, similar to how spreadsheets or code tools became standard over time.

New challenges include employees feeling pressure to use AI even in cases where it does not clearly improve the outcome. There is also the matter of how to measure “good” use, as opposed to simply frequent use.

JPMorgan’s AI risks and efficiency gains

Banks operate in a regulated environment, where introducing AI into more workflows increases the need for oversight.

Tools like ChatGPT and Claude Code can help summarise information or generate drafts, but they can also produce incorrect or incomplete results. That means employees still need to verify outputs before using them in decision-making or client-facing work.

JPMorgan has developed internal controls for AI systems in areas like trading and risk. Expanding use to a broader group of employees may require similar safeguards, leaving the bank trying to improve efficiency while ensuring that heavier AI use does not introduce new risks.

Other financial institutions are likely watching closely. If tying AI use to performance leads to measurable gains in productivity, similar models may spread across the sector.

The bank’s approach may reshape how companies hire and train employees, and skills like prompt writing and output verification could become part of standard job requirements. JPMorgan’s mandate suggests that this change is already underway, at least in banking.

(Photo by IKECHUKWU JULIUS UGWU)

See also: RPA matters, but AI changes how automation works

Want to experience the full spectrum of enterprise technology innovation? Join TechEx in Amsterdam, California, and London. Covering AI, Big Data, Cyber Security, IoT, Digital Transformation, Intelligent Automation, Edge Computing, and Data Centres, TechEx brings together global leaders to share real-world use cases and in-depth insights. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post JPMorgan begins tracking how employees use AI at work appeared first on AI News.

AI agents enter banking roles at Bank of America https://www.artificialintelligence-news.com/news/ai-agents-enter-banking-roles-at-bank-of-america/ Wed, 25 Mar 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112768 AI agents are starting to take on a more direct role in how financial advice is delivered, as large banks move into systems that support client interactions. Bank of America is now deploying an internal AI-powered advisory platform to a subset of financial advisers, rolled out to around 1,000 financial advisers, according to Banking Dive. […]

The post AI agents enter banking roles at Bank of America appeared first on AI News.

AI agents are starting to take on a more direct role in how financial advice is delivered, as large banks adopt systems that support client interactions.

Bank of America is deploying an internal AI-powered advisory platform to around 1,000 of its financial advisers, according to Banking Dive. The move is one of the clearer early examples of AI being used in core banking roles, with systems supporting decision-making in real time.

The platform is based on Salesforce’s Agentforce, which enables the creation of AI agents for defined tasks. It is designed to help advisers handle client queries, prepare recommendations, and manage daily workflows. According to Banking Dive, the system is part of a wider push among major banks to test how AI agents can work alongside human staff.

Bank of America has been expanding its use of AI across its business. The bank says its virtual assistant Erica handles work equivalent to about 11,000 employees, while 18,000 software developers use AI coding tools that have improved productivity by around 20%.

AI agents move into financial decision-making

The approach differs from earlier deployments of AI in banking, which focused mainly on chatbots or internal productivity tools. In those cases, AI was used to answer simple questions or automate routine tasks. The newer systems are built to handle more complex work, including analysing client data.

Firms like JPMorgan, Wells Fargo, and Goldman Sachs are also testing AI tools aimed at improving productivity and helping staff in client-facing roles, though these efforts vary and are not always focused on advisor-specific AI agent systems. While each bank is taking a different approach, the common goal is to increase output without expanding headcount.

Banks report gains in how quickly advisers can access information or prepare for meetings, based on industry reporting and early deployment feedback. Yet there are ongoing concerns about accuracy and oversight, especially when AI systems are used to suggest financial decisions.

Some analysts remain cautious about how quickly AI is changing banking. Wells Fargo analyst Mike Mayo wrote that recent developments have yet to produce major new products, describing the current phase as “a little boring from a product standpoint”.

Human oversight

Bank of America’s rollout stands out because of its scale. Financial advisers sit at the centre of the bank’s relationship with clients, particularly in wealth management. Introducing AI into that role suggests a growing level of trust in the technology. It also shows a willingness to let it influence how advice is formed and delivered.

Industry executives acknowledge that AI is unlikely to completely replace expert roles when dealing with complex financial decisions or high-value clients, particularly in workflows where context and judgement matter.

This hybrid model is becoming more common in the sector. Firms are treating AI as a part of the workforce, with staff expected to work alongside systems day-to-day.

The limits of progress

There are also practical challenges. AI systems depend on clean, structured data, which is not always easy to achieve in large organisations. Integration with existing tools can take time, and staff may need training to use new systems effectively.

Regulation adds another layer of complexity. Financial institutions must ensure that AI-driven recommendations meet compliance standards and explain decisions if questioned by regulators. This requirement may limit the amount of autonomy provided to AI systems, particularly in areas like lending or investment advice.

Some estimates imply that up to one-third of banking jobs, or parts of those roles, could eventually be handled by AI. The introduction of AI agents into advisory roles raises questions about how the job itself may change. If systems can handle more of the analytical work, advisers may spend more time on client relationships and less on preparation. Over time, this could shift the skills required for the role.

Reliance on AI introduces new risks. Errors in data or model output could affect recommendations, and over-reliance on automated systems may reduce critical review by human staff. These issues are still being studied as deployments expand.

Bank of America’s rollout offers a view into how an AI transition may play out. It shows a large institution testing how far AI can be integrated into everyday work. As more banks follow a similar path, the focus is likely to shift to how AI can be managed once it becomes part of core operations.

See also: Visa prepares payment systems for AI agent-initiated transactions

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post AI agents enter banking roles at Bank of America appeared first on AI News.

Visa prepares payment systems for AI agent-initiated transactions https://www.artificialintelligence-news.com/news/visa-prepares-payment-systems-for-ai-agent-initiated-transactions/ Thu, 19 Mar 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112743 Payments rely on a simple model: a person decides to buy something, and a bank or card network processes the transaction. That model is starting to change as Visa tests how AI agents can initiate payments. New work in the banking sector suggests that, in some cases, software agents may soon take on that role. […]

The post Visa prepares payment systems for AI agent-initiated transactions appeared first on AI News.

Payments rely on a simple model: a person decides to buy something, and a bank or card network processes the transaction. That model is starting to change as Visa tests how AI agents can initiate payments. New work in the banking sector suggests that, in some cases, software agents may soon take on that role.

A recent example comes from Visa, which is rolling out its “Agentic Ready” programme in Europe to test how financial systems handle AI-initiated transactions. The effort involves collaboration with banks, including Commerzbank and DZ Bank. The aim is to prepare existing payment infrastructure for a scenario where software agents can search for products and make decisions, then complete purchases on behalf of users.

According to information published by Visa and reported by The Paypers, the programme focuses on enabling secure transactions where AI systems act as the initiating party. Instead of a customer confirming a purchase, an AI agent could carry out the task after being given a goal or set of rules.

How transactions begin

Payment systems are built around human identity and intent. A card transaction today depends on verifying that a person has authorised a purchase. If AI agents begin to initiate transactions, banks will need new ways to confirm identity and intent at the system level. That includes deciding how an agent proves it is acting on behalf of a user, and how much autonomy it should have.
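One way an agent could prove it is acting on behalf of a user is a signed, scoped mandate: the user authorises an agent to spend up to a limit before an expiry time. The sketch below is purely illustrative, built on an HMAC signature from the Python standard library; it is not Visa's protocol, which the programme has not published in this detail.

    # Illustrative signed "mandate". Not Visa's actual protocol; the
    # payload fields and key handling are invented for this sketch.
    import hashlib
    import hmac
    import json
    import time

    USER_SECRET = b"demo-key-shared-with-issuer"  # stand-in for real key material

    def issue_mandate(agent_id, max_amount, ttl_seconds):
        payload = {"agent": agent_id, "max_amount": max_amount,
                   "expires": time.time() + ttl_seconds}
        body = json.dumps(payload, sort_keys=True).encode()
        sig = hmac.new(USER_SECRET, body, hashlib.sha256).hexdigest()
        return payload, sig

    def verify_mandate(payload, sig, amount):
        body = json.dumps(payload, sort_keys=True).encode()
        expected = hmac.new(USER_SECRET, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False, "bad signature"
        if time.time() > payload["expires"]:
            return False, "mandate expired"
        if amount > payload["max_amount"]:
            return False, "amount exceeds mandate"
        return True, "ok"

    mandate, sig = issue_mandate("shopping-agent-7", max_amount=50.0, ttl_seconds=3600)
    print(verify_mandate(mandate, sig, amount=19.99))  # (True, 'ok')
    print(verify_mandate(mandate, sig, amount=120.0))  # (False, 'amount exceeds mandate')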

In Visa’s model, software agents could handle routine or repeat purchases with limited human input, based on user-defined rules. A system could, for example, monitor supply levels and compare prices, then complete a transaction when certain conditions are met. Reporting from Die Welt and Investing.com says the company sees this as similar in scale to the early shift toward online payments, when banks had to adapt to a new type of transaction flow.
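Such a rule-driven repeat purchase reduces to a condition check over user-defined limits. In the sketch below, the rule fields, price feed, and payment step are hypothetical stand-ins for whatever authorisation flow the network ultimately defines.

    # Sketch of a rule-driven reorder. Rule fields and the payment step
    # are hypothetical; nothing here reflects a published Visa API.
    def should_reorder(stock_level, best_price, rules):
        return (stock_level <= rules["reorder_below"]
                and best_price <= rules["max_unit_price"])

    rules = {"reorder_below": 5, "max_unit_price": 3.50, "quantity": 20}
    offers = {"supplier-a": 3.20, "supplier-b": 3.65}

    supplier, price = min(offers.items(), key=lambda kv: kv[1])
    if should_reorder(stock_level=4, best_price=price, rules=rules):
        # A real agent would now initiate an authorised payment; here we
        # simply report the decision.
        print(f"Order {rules['quantity']} units from {supplier} at {price}")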

Control and compliance

Banks involved in early trials are testing how these ideas work in practice. Commerzbank and DZ Bank are exploring how AI agents can be integrated into existing systems without breaking compliance rules. This includes checks related to fraud, audit trails, and customer consent. These areas are tightly regulated, which means any change to how transactions are initiated must still meet oversight standards.

A RepRisk report found that banks are already dealing with more frequent and costly issues linked to AI. The report states that these incidents can lead to multi-million-dollar losses.

Visa’s work is focused on infrastructure, not consumer-facing tools. It is working out how payment networks should behave when the “customer” is a piece of software: how agents are authenticated, how transactions are approved, and how disputes are handled if something goes wrong.

AI and enterprise purchasing

In large organisations, procurement often involves multiple approval steps. AI agents could compress that process by handling routine purchases within set limits. This could reduce manual work, but it also means companies need clear rules about what agents are allowed to do. Without those rules, the risk of errors or misuse increases.
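Those rules could be as simple as a spend policy that auto-approves small routine purchases and escalates everything else to a human. The categories and limits below are invented for illustration.

    # Hypothetical spend-policy check for an enterprise purchasing agent.
    def route_purchase(amount, category, policy):
        if category not in policy["allowed_categories"]:
            return "blocked: category not permitted for agents"
        if amount <= policy["auto_approve_limit"]:
            return "auto-approved"
        return "escalated to human approver"

    policy = {"allowed_categories": {"office-supplies", "cloud-credits"},
              "auto_approve_limit": 500.0}
    print(route_purchase(120.0, "office-supplies", policy))  # auto-approved
    print(route_purchase(900.0, "cloud-credits", policy))    # escalated to human approver
    print(route_purchase(50.0, "travel", policy))            # blocked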

Large institutions are investing in AI to automate back-office work and reduce costs. Some are also reorganising teams to focus more on data and AI strategy. Regulators are paying closer attention to how AI is used in decision-making, especially in areas like credit and fraud detection.

Taken together, these developments suggest that payments could become one of the first areas where AI agents could act with greater autonomy. Banks will still need to set rules, monitor activity, and handle exceptions. But the day-to-day act of initiating a transaction may, in some cases, require less direct human input.

Visa’s current phase is focused on testing and system design. As AI systems take on more responsibility, financial infrastructure will need to adapt to a new type of user, one that does not hold a card but can still make a purchase.

(Photo by CardMapr.nl)

See also: Goldman Sachs sees AI investment change to data centres

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Visa prepares payment systems for AI agent-initiated transactions appeared first on AI News.

NVIDIA wants enterprise AI agents safer to deploy https://www.artificialintelligence-news.com/news/nvidia-agent-toolkit-enterprise-ai-agents/ Thu, 19 Mar 2026 08:00:00 +0000 https://www.artificialintelligence-news.com/?p=112746 The NVIDIA Agent Toolkit is Jensen Huang’s answer to the question enterprises keep asking: how do we put AI agents to work without losing control of our data and our liability? Announced at GTC 2026 in San Jose on March 16, the NVIDIA Agent Toolkit is an open-source software stack designed to help enterprises and […]

The post NVIDIA wants enterprise AI agents safer to deploy appeared first on AI News.

The NVIDIA Agent Toolkit is Jensen Huang’s answer to the question enterprises keep asking: how do we put AI agents to work without losing control of our data and our liability?

Announced at GTC 2026 in San Jose on March 16, the NVIDIA Agent Toolkit is an open-source software stack designed to help enterprises and developers build autonomous AI agents.

What’s stalling broader deployment is trust. Agents that can take action inside enterprise systems need guardrails, and until now, those have been hard to standardise at scale.

OpenShell and the safety problem

The centrepiece of the toolkit is NVIDIA OpenShell, an open-source runtime that enforces policy-based security and privacy guardrails for autonomous agents. In NVIDIA’s terminology, individual agents are called “claws,” and OpenShell is what keeps them in check.
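NVIDIA has not detailed OpenShell’s policy format here, but the general shape of policy-based guardrails is deny-by-default authorisation: an agent may only perform actions its policy explicitly grants. The sketch below illustrates that idea only; it is not OpenShell’s actual schema or API.

    # Generic deny-by-default guardrail sketch. Agent names, action
    # strings, and the policy shape are invented; not OpenShell's API.
    POLICY = {
        "report-writer": {"read:crm", "read:wiki"},
        "ticket-triage": {"read:tickets", "write:tickets"},
    }

    def authorise(agent: str, action: str) -> bool:
        """Allow an action only if the agent's policy explicitly grants it."""
        return action in POLICY.get(agent, set())

    print(authorise("report-writer", "read:crm"))       # True
    print(authorise("report-writer", "write:tickets"))  # False: denied by default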

Huang framed the stakes at GTC: “Claude Code and OpenClaw have sparked the agent inflexion point – extending AI beyond generation and reasoning into action. Employees will be supercharged by teams of frontier and custom-built agents they deploy and manage.”

NVIDIA is working with Cisco, CrowdStrike, Google, Microsoft Security, and TrendAI to build OpenShell compatibility into their respective security tools.

Research and cost

Also inside the toolkit is NVIDIA AI-Q, an agentic search blueprint built with LangChain. It uses a hybrid architecture – frontier models handle orchestration while NVIDIA’s open Nemotron models do the research-heavy lifting. According to NVIDIA, this approach can cut query costs by more than 50% while still producing accuracy that tops the DeepResearch Bench and DeepResearch Bench II leaderboards.
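A back-of-envelope model shows why the split can save that much: orchestration is a small fraction of a research query’s tokens, so routing the research-heavy portion to a cheaper open model dominates the cost. The per-token prices and token counts below are hypothetical, chosen only to illustrate the arithmetic.

    # Back-of-envelope cost model for the hybrid split described above.
    # Prices and token counts are hypothetical, for illustration only.
    FRONTIER_PER_1K = 0.015  # $ per 1K tokens, orchestration-class model
    OPEN_PER_1K = 0.002      # $ per 1K tokens, open research-class model

    def query_cost(orchestration_tokens, research_tokens, hybrid=True):
        research_rate = OPEN_PER_1K if hybrid else FRONTIER_PER_1K
        return (orchestration_tokens / 1000 * FRONTIER_PER_1K
                + research_tokens / 1000 * research_rate)

    frontier_only = query_cost(5_000, 80_000, hybrid=False)
    hybrid_cost = query_cost(5_000, 80_000, hybrid=True)
    print(f"frontier-only: ${frontier_only:.2f}, hybrid: ${hybrid_cost:.2f}, "
          f"saving: {1 - hybrid_cost / frontier_only:.0%}")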

That figure will matter to enterprise buyers who’ve been burned by consumption-based AI pricing that looked manageable in pilots and became a budget problem at scale.

Who’s on board?

The partner list includes Adobe, Atlassian, SAP, Salesforce, ServiceNow, Siemens, Cisco, CrowdStrike, Red Hat, Box, Cadence, Cohesity, Dassault Systèmes, IQVIA, and Synopsys.

Salesforce is building a reference architecture in which employees use Slack as the orchestration layer for Agentforce agents, pulling data from both on-premises and cloud environments, powered by NVIDIA infrastructure. Atlassian is integrating the Agent Toolkit into its Rovo AI strategy in Jira and Confluence. ServiceNow’s “Autonomous Workforce of AI Specialists” is built on the toolkit with NVIDIA AI-Q.

And Siemens launched the Fuse EDA AI Agent, which uses NVIDIA Nemotron to autonomously orchestrate workflows in its electronic design automation portfolio, from design conception through manufacturing sign-off. IQVIA’s deployment numbers offer a real-world data point: the company has already deployed more than 150 agents in internal teams and client environments, including 19 of the top 20 pharma companies.

The bigger shift

NVIDIA is positioning itself as the software infrastructure layer for enterprise agentic deployment. The Agent Toolkit, OpenShell, the Nemotron models, and AI-Q are components of a stack that NVIDIA wants sitting underneath enterprise software.

The toolkit is available now on build.nvidia.com, with support across AWS, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure.

See also: AI Expo 2026 Day 1: Governance and data readiness enable the agentic enterprise

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post NVIDIA wants enterprise AI agents safer to deploy appeared first on AI News.
