Human-AI Relationships - AI News
https://www.artificialintelligence-news.com/categories/ai-and-us/human-ai-relationships/

SAP brings agentic AI to human capital management
https://www.artificialintelligence-news.com/news/sap-brings-agentic-ai-human-capital-management/ (Tue, 14 Apr 2026)

According to SAP, integrating agentic AI into core human capital management (HCM) modules helps target operational bloat and reduce costs.

SAP’s SuccessFactors 1H 2026 release aims to anticipate administrative bottlenecks before they stall daily operations by embedding a network of AI agents across recruiting, payroll, workforce administration, and talent development. Behind the user interface, these agents monitor system states, identify anomalies, and prompt human operators with context-aware solutions.

Data synchronisation failures between distributed enterprise systems routinely require dedicated IT support teams to diagnose. When employee master data fails to replicate due to a missing attribute, downstream systems like access management and financial compensation halt.

The agentic approach uses analytical models to cross-reference peer data, identify the missing variable based on organisational patterns, and prompt the administrator with the required correction. This automated troubleshooting dramatically reduces the mean time to resolution for internal support tickets.
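The kind of peer-based correction described here can be sketched in a few lines of Python. The record fields, org-unit logic, and function name below are illustrative assumptions, not SAP's actual implementation:

```python
# Illustrative sketch: infer a missing master-data attribute from peers in
# the same org unit and suggest it to the administrator. Field names and
# inference logic are assumptions, not SAP's actual implementation.
from collections import Counter

def suggest_missing_value(record, peers, field):
    """Suggest the most common value of `field` among peers sharing the
    record's org unit; return None when there is no basis to infer."""
    if record.get(field) is not None:
        return record[field]  # nothing to repair
    candidates = [p[field] for p in peers
                  if p.get("org_unit") == record.get("org_unit")
                  and p.get(field) is not None]
    if not candidates:
        return None  # escalate to a human instead of guessing
    value, _count = Counter(candidates).most_common(1)[0]
    return value

employees = [
    {"id": 1, "org_unit": "R&D", "cost_centre": "CC-100"},
    {"id": 2, "org_unit": "R&D", "cost_centre": "CC-100"},
    {"id": 3, "org_unit": "Sales", "cost_centre": "CC-200"},
]
broken = {"id": 4, "org_unit": "R&D", "cost_centre": None}
print(suggest_missing_value(broken, employees, "cost_centre"))  # CC-100
```

In a real deployment the suggestion would be surfaced to the administrator for confirmation rather than written back automatically.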

Implementing this level of autonomous monitoring requires serious engineering discipline. Integrating modern semantic search mechanisms with highly structured legacy relational databases demands extensive middleware configuration.

Running large language models in the background to continuously scan millions of employee records for inconsistencies consumes massive compute resources. CIOs must carefully balance the cloud infrastructure costs of continuous algorithmic monitoring against the operational savings generated by reduced IT ticket volumes.

To mitigate the risk of algorithmic hallucinations altering core financial data, engineering teams are forced to build strict guardrails. These retrieve-and-generate architectures must be firmly anchored to the company’s verified data lakes, ensuring the AI only acts upon validated corporate policies rather than generalised internet training data.
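A minimal sketch of such a guardrail, assuming a simple keyword-overlap retriever rather than any real SAP component, might restrict retrieval to documents flagged as verified policy:

```python
# Illustrative guardrail: restrict retrieval to documents flagged as
# verified corporate policy, so generation cannot draw on unvetted text.
# The document store and the naive term-overlap scoring are stand-ins.
def retrieve(query_terms, documents, require_verified=True):
    """Rank documents by term overlap, keeping only verified ones."""
    candidates = [d for d in documents
                  if d["verified"] or not require_verified]
    scored = [(len(set(query_terms) & set(d["text"].lower().split())), d)
              for d in candidates]
    scored = [(s, d) for s, d in scored if s > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d["id"] for _, d in scored]

docs = [
    {"id": "policy-7", "verified": True,
     "text": "overtime pay policy for weekend shifts"},
    {"id": "forum-99", "verified": False,
     "text": "someone said overtime pay is doubled"},
]
print(retrieve(["overtime", "pay"], docs))  # ['policy-7']
```

Production systems would use embedding-based retrieval, but the gating principle, filter to verified sources before ranking, is the same.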

The SAP release attempts to streamline this knowledge retrieval by introducing intelligent question-and-answer capabilities within its learning module. This functionality delivers instant, context-aware responses drawn directly from an organisation’s learning content, allowing employees to bypass manual documentation searches entirely. The integration also introduces a growing workforce knowledge network that pulls trusted external employment guidance into daily workflows to support confident decision-making.

How SAP is using agentic AI to consolidate the HCM ecosystem

The updated architecture focuses on unified experiences that adapt to operational needs. For example, the delay between a new hire signing an offer letter and reaching full productivity is a drag on profit margins.

Native integration combining SmartRecruiters solutions, SAP SuccessFactors Employee Central, and SAP SuccessFactors Onboarding streamlines the data flow from initial candidate interaction through to the new hire phase.

A candidate’s technical assessments, background checks, and negotiated terms pass automatically into the core human resources repository. Enterprises accelerate the onboarding timeline by eliminating the manual re-entry of personnel data—allowing new technical hires to begin contributing to active commercial projects faster.

Technical leadership teams understand that out-of-the-box software rarely matches internal enterprise processes perfectly. Customisation is necessary, but hardcoded extensions routinely break during cloud upgrade cycles, creating vast maintenance backlogs.

To manage this tension, the software introduces a new extensibility wizard. This tool provides guided, step-by-step support for building custom extensions directly on the SAP Business Technology Platform within the SuccessFactors environment.

By containing custom development within a governed platform environment, technology officers can adapt the interface to unique business requirements while preserving strict governance and ensuring future update compatibility.

Algorithmic auditing and margin protection

The 1H 2026 release incorporates pay transparency insights directly into the People Intelligence package within SAP Business Data Cloud, helping organisations comply with strict regulatory regimes such as the EU’s pay transparency directive, which requires detailed, auditable justifications for wage discrepancies.

Manual compilation of compensation data across multiple geographic regions and currency zones is highly error-prone. Using the People Intelligence package, organisations can analyse compensation patterns and potential pay gaps across demographics.

Automating this analysis provides a data-driven defence against compliance audits and aligns internal pay practices with evolving regulatory expectations, protecting the enterprise from both litigation costs and brand damage.

Preparing for future demands requires trusted and consistent skills data that leadership can rely on across talent deployment and workforce planning. Inconsistent skills data, where one department labels a capability with different terminology from another, breaks automated resource allocation models.

The update strengthens the SAP talent intelligence hub by introducing enhanced skills governance to provide administrators with a centralised interface for managing skill definitions, applying corporate standards, and ensuring data aligns across internal applications and external partner ecosystems. 

Standardising this data improves overall system quality and allows resource managers to make deployment decisions without relying on fragmented spreadsheets or guesswork. This inventory prevents organisations from having to outsource to expensive external contractors for capabilities they already possess internally.

By bringing together data, AI, and connected experiences, SAP’s latest enhancements show how agentic AI can help organisations reduce daily friction. For professionals looking to explore these types of enterprise AI integrations and connect directly with the company, SAP is a key sponsor of this year’s AI & Big Data Expo North America.

See also: IBM: How robust AI governance protects enterprise margins

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

Anthropic keeps new AI model private after it finds thousands of external vulnerabilities
https://www.artificialintelligence-news.com/news/anthropic-keeps-new-ai-model-private-after-it-finds-thousands-of-external-vulnerabilities/ (Thu, 09 Apr 2026)

Anthropic’s most capable AI model has already found thousands of AI cybersecurity vulnerabilities across every major operating system and web browser. The company’s response was not to release it, but to quietly hand it to the organisations responsible for keeping the internet running.

That model is Claude Mythos Preview, and the initiative is called Project Glasswing.

The launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. 

Beyond that core group, Anthropic has extended access to over 40 additional organisations that build or maintain critical software infrastructure. Anthropic is committing up to US$100 million in usage credits for Mythos Preview across the effort, along with US$4 million in direct donations to open-source security organisations. 

A model that outgrew its own benchmarks

Mythos Preview was not specifically trained for cybersecurity work. Anthropic said the capabilities “emerged as a downstream consequence of general improvements in code, reasoning, and autonomy”, and that the same improvements making the model better at patching vulnerabilities also make it better at exploiting them. 

That last part matters. Mythos Preview has improved to the extent that it mostly saturates existing security benchmarks, forcing Anthropic to shift its focus to novel real-world tasks – specifically, zero-day vulnerabilities, flaws that were previously unknown to the software’s developers.

Among the findings: a 27-year-old bug in OpenBSD, an operating system known for its strong security posture. In another case, the model fully autonomously identified and exploited a 17-year-old remote code execution vulnerability in FreeBSD (CVE-2026-4747) that allows an unauthenticated user anywhere on the internet to obtain complete control of a server running NFS. No human was involved in the discovery or exploitation after the initial prompt to find the bug.

Nicholas Carlini from Anthropic’s research team described the model’s ability to chain together vulnerabilities: “This model can create exploits out of three, four, or sometimes five vulnerabilities that in sequence give you some kind of very sophisticated end outcome. I’ve found more bugs in the last couple of weeks than I found in the rest of my life combined.” 

Why is it not being released?

“We do not plan to make Claude Mythos Preview generally available due to its cybersecurity capabilities,” Newton Cheng, Frontier Red Team Cyber Lead at Anthropic, said. “Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout – for economies, public safety, and national security – could be severe.”

This is not hypothetical. Anthropic had previously disclosed what it described as the first documented case of a cyberattack largely executed by AI: a Chinese state-sponsored group that used AI agents to autonomously infiltrate roughly 30 global targets, with AI handling the majority of tactical operations independently.

The company has also privately briefed senior US government officials on Mythos Preview’s full capabilities. The intelligence community is now actively weighing how the model could reshape both offensive and defensive hacking operations. 

The open-source problem

One dimension of Project Glasswing that goes beyond the headline coalition: open-source software. Jim Zemlin, CEO of the Linux Foundation, put it plainly: “In the past, security expertise has been a luxury reserved for organisations with large security teams. Open-source maintainers, whose software underpins much of the world’s critical infrastructure, have historically been left to figure out security on their own.”

Anthropic has donated US$2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, and US$1.5 million to the Apache Software Foundation, giving maintainers of critical open-source codebases access to AI cybersecurity vulnerability scanning at a scale that was previously out of reach.

What comes next

Anthropic says its eventual goal is to deploy Mythos-class models at scale, but only when new safeguards are in place. The company plans to launch new safeguards with an upcoming Claude Opus model first, allowing it to refine them with a model that does not pose the same level of risk as Mythos Preview. 

The competitive picture is already shifting around it. When OpenAI released GPT-5.3-Codex in February, the company called it the first model it had classified as high-capability for cybersecurity tasks under its Preparedness Framework. Anthropic’s move with Glasswing signals that the frontier labs see controlled deployment, not open release, as the emerging standard for models at this capability level.

Whether that standard holds as these capabilities spread further is, at this point, an open question that no single initiative can answer.

See also: Anthropic’s refusal to arm AI is exactly why the UK wants it

KPMG: Inside the AI agent playbook driving enterprise margin gains
https://www.artificialintelligence-news.com/news/kpmg-inside-ai-agent-playbook-enterprise-margin-gains/ (Wed, 01 Apr 2026)

Global AI investment is accelerating, yet KPMG data shows the gap between enterprise AI spend and measurable business value is widening fast.

The headline figure from KPMG’s first quarterly Global AI Pulse survey is blunt: despite global organisations planning to spend a weighted average of $186 million on AI over the next 12 months, only 11 percent have reached the stage of deploying and scaling AI agents in ways that produce enterprise-wide business outcomes.

However, the central finding is not that AI is failing; 64 percent of respondents say AI is already delivering meaningful business outcomes. The problem is that “meaningful” is doing a lot of heavy lifting in that sentence, and the distance between incremental productivity gains and the kind of compounding operational efficiency that moves the needle on margin is, for most organisations, still substantial.

The architecture of a performance gap

KPMG’s report distinguishes between what it labels “AI leaders” (i.e. organisations that are scaling or actively operating agentic AI) and everyone else. The gap in outcomes between these two cohorts is striking.


Steve Chase, Global Head of AI and Digital Innovation at KPMG International, said: “The first Global AI Pulse results reinforce that spending more on AI is not the same as creating value. Leading organisations are moving beyond enablement, deploying AI agents to reimagine processes and reshape how decisions and work flow across the enterprise.”

Among AI leaders, 82 percent report that AI is already delivering meaningful business value. Among their peers, that figure drops to 62 percent. That 20-percentage-point spread might look modest in isolation, but it compounds quickly when you consider what it reflects: not just better tooling, but fundamentally different deployment philosophies.

The organisations in that 11 percent are deploying agents that coordinate work across functions, route decisions without human intermediation at every step, surface enterprise-wide insights from operational data in near real-time, and flag anomalies before they escalate into incidents.

In IT and engineering functions, 75 percent of AI leaders are using agents to accelerate code development versus 64 percent of their peers. In operations, where supply-chain orchestration is the primary use case, the split is 64 percent versus 55 percent. These are not marginal differences in tool adoption rates; they reflect different levels of process re-architecture.

Most enterprises that have deployed AI have done so by layering models onto existing workflows (e.g. a co-pilot here, a summarisation tool there…) without redesigning the process those tools sit inside. That produces incremental gains.

The organisations closing the performance gap have inverted this approach: they are redesigning the process first, then deploying agents to operate within the redesigned structure. The difference in return on AI spend between these two approaches, over a three-to-five-year horizon, is likely to be the defining competitive variable in several industries.

What $186 million actually buys—and what it does not

The investment figures in the KPMG data deserve scrutiny. A weighted global average of $186 million per organisation sounds substantial, but the regional variance tells a more interesting story.

ASPAC leads at $245 million, the Americas at $178 million, and EMEA at $157 million. Within ASPAC, organisations in China and Hong Kong are investing $235 million on average; within the Americas, US organisations average $207 million.

These figures represent planned spend across model licensing, compute infrastructure, professional services, integration, and the governance and risk management apparatus needed to operate AI responsibly at scale.

The question is not whether $186 million is too much or too little; it is what proportion of that figure is being allocated to the operational infrastructure required to derive value from the models themselves. The survey data suggests that most organisations are still underweighting this latter category.

Compute and licensing costs are visible and relatively easy to budget for. The friction costs – the engineering hours spent integrating AI outputs with legacy ERP systems, the latency introduced by retrieval-augmented generation pipelines built on top of poorly structured data, and the compliance overhead of maintaining audit trails for AI-assisted decisions in regulated industries – tend to surface late in deployment cycles and often exceed initial estimates.

Vector database integration is a useful example. Many agentic workflows depend on the ability to retrieve relevant context from large, unstructured document repositories in real time. Building and maintaining the infrastructure for this – selecting between providers such as Pinecone, Weaviate, or Qdrant, embedding and indexing proprietary data, and managing refresh cycles as underlying data changes – adds meaningful engineering complexity and ongoing operational cost that rarely appears in initial AI investment proposals. 

When that infrastructure is absent or poorly maintained, agent performance degrades in ways that are often difficult to diagnose, as the model’s behaviour is correct relative to the context it receives, but that context is stale or incomplete.
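A staleness check of the kind implied here can be sketched simply: a retrieved chunk is flagged when its source record changed after it was embedded, or when the embedding has outlived its refresh window. The timestamps, field names, and seven-day window below are illustrative assumptions:

```python
# Minimal staleness check for retrieved context. A chunk is flagged when
# its source changed after embedding, or the embedding is older than the
# allowed refresh window. All values here are illustrative.
from datetime import datetime, timedelta

def is_stale(chunk, now, max_age=timedelta(days=7)):
    """Flag chunks whose source changed after embedding, or whose
    embedding is older than the refresh window."""
    if chunk["source_updated_at"] > chunk["embedded_at"]:
        return True
    return now - chunk["embedded_at"] > max_age

now = datetime(2026, 4, 1)
fresh = {"embedded_at": datetime(2026, 3, 30),
         "source_updated_at": datetime(2026, 3, 29)}
edited = {"embedded_at": datetime(2026, 3, 30),
          "source_updated_at": datetime(2026, 3, 31)}
aged = {"embedded_at": datetime(2026, 3, 10),
        "source_updated_at": datetime(2026, 3, 9)}
print(is_stale(fresh, now), is_stale(edited, now), is_stale(aged, now))
# False True True
```

Routing flagged chunks to a re-indexing queue, rather than serving them to the agent, is one way to make the failure mode visible instead of silent.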

Governance as an operational variable, not a compliance exercise

Perhaps the most practically useful finding in the KPMG survey is the relationship between AI maturity and risk confidence.

Among organisations still in the experimentation phase, just 20 percent feel confident in their ability to manage AI-related risks. Among AI leaders, that figure rises to 49 percent. Some 75 percent of global leaders cite data security, privacy, and risk as ongoing concerns regardless of maturity level—but maturity changes how those concerns are operationalised.

This is an important distinction for boards and risk functions that tend to frame AI governance as a constraint on deployment. The KPMG data suggests the opposite dynamic: governance frameworks do not slow AI adoption among mature organisations; they enable it. The confidence to move faster – to deploy agents into higher-stakes workflows, to expand agentic coordination across functions – correlates directly with the maturity of the governance infrastructure surrounding those agents.

In practice, this means that organisations treating governance as a retrospective compliance layer are doubly disadvantaged. They are slower to deploy, because every new use case triggers a fresh governance review, and they are more exposed to operational risk, because the absence of embedded governance mechanisms means that edge cases and failure modes are discovered in production rather than in testing.

Organisations that have embedded governance into the deployment pipeline itself (e.g. model cards, automated output monitoring, explainability tooling, and human-in-the-loop escalation paths for low-confidence decisions) are the ones operating with the confidence that allows them to scale.

“Ultimately, there is no agentic future without trust and no trust without governance that keeps pace,” explains Chase. “The survey makes clear that sustained investment in people, training and change management is what allows organisations to scale AI responsibly and capture value.”

Regional divergence and what it signals for global deployment

For multinationals managing AI programmes across regions, the KPMG data flags material differences in deployment velocity and organisational posture that will affect global rollout planning.

ASPAC is advancing most aggressively on agent scaling; 49 percent of organisations there are scaling AI agents, compared with 46 percent in the Americas and 42 percent in EMEA. ASPAC also leads on the more complex capability of orchestrating multi-agent systems, at 33 percent.

The barrier profiles also differ in ways that carry real operational implications. In both ASPAC and EMEA, 24 percent of organisations cite a lack of leadership trust and buy-in as a primary barrier to AI agent deployment. In the Americas, that figure drops to 17 percent.

Agentic systems, by definition, make or initiate decisions without per-instance human approval. In organisational cultures where decision accountability is tightly concentrated at the senior level, this can generate institutional resistance that no amount of technical capability resolves. The fix is governance design; specifically, defining in advance what categories of decision an agent is authorised to make autonomously, what triggers escalation, and who carries accountability for agent-initiated outcomes.
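That governance design can be sketched as a routing rule: decision categories an agent may execute autonomously are enumerated in advance, and anything outside them, or below a confidence threshold, escalates to a named human owner. The categories, owners, and threshold below are illustrative assumptions, not KPMG recommendations:

```python
# Sketch of pre-defined agent authority: allowed categories are enumerated
# up front; everything else, or any low-confidence decision, escalates to
# an accountable human owner. Names and threshold are illustrative.
AUTO_ALLOWED = {"ticket_routing", "document_tagging"}
ESCALATION_OWNER = {"salary_change": "hr_director",
                    "ticket_routing": "service_desk_lead"}

def route_decision(decision, threshold=0.85):
    """Return ('execute', None) for high-confidence decisions in allowed
    categories; otherwise ('escalate', accountable_owner)."""
    category = decision["category"]
    if category in AUTO_ALLOWED and decision["confidence"] >= threshold:
        return ("execute", None)
    return ("escalate", ESCALATION_OWNER.get(category, "risk_office"))

print(route_decision({"category": "ticket_routing", "confidence": 0.93}))
print(route_decision({"category": "salary_change", "confidence": 0.99}))
# ('execute', None) then ('escalate', 'hr_director')
```

The point is that accountability is resolved at design time, not after an agent-initiated outcome goes wrong.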

The expectation gap around human-AI collaboration is also worth noting for anyone designing agent-assisted workflows at a global scale.

East Asian respondents anticipate AI agents leading projects at a rate of 42 percent. Australian respondents prefer human-directed AI at 34 percent. North American respondents lean toward peer-to-peer human-AI collaboration at 31 percent. These differences will affect how agent-assisted processes need to be designed in different regional deployments of the same underlying system, adding localisation complexity that is easy to underestimate in centralised platform planning.

One data point in the KPMG survey that deserves particular attention from CFOs and boards: 74 percent of respondents say AI will remain a top investment priority even in the event of a recession. This is either a sign of genuine conviction about AI’s role in cost structure and competitive positioning, or it reflects a collective commitment that has not yet been tested against actual budget pressure. Probably both, in different proportions across different organisations.

What it does indicate is that the window for organisations still in the experimentation phase is not indefinite. If the 11 percent of AI leaders continue to compound their advantage (and the KPMG data suggests the mechanisms for doing so are in place) the question for the remaining 89 percent is not whether to accelerate AI deployment, but how to do so without compounding the integration debt and governance deficits that are already constraining their returns.

See also: Hershey applies AI across its supply chain operations

SAP and ANYbotics drive industrial adoption of physical AI
https://www.artificialintelligence-news.com/news/sap-and-anybotics-drive-industrial-adoption-physical-ai/ (Tue, 31 Mar 2026)

Heavy industry relies on people to inspect hazardous, dirty facilities. It’s expensive, and putting humans in these zones carries obvious safety risks. Swiss robot maker ANYbotics and software company SAP are trying to change that.

ANYbotics’ four-legged autonomous robots will be connected straight into SAP’s backend enterprise resource planning software. Instead of treating a robot as a standalone asset, this turns it into a mobile data-gathering node within an industrial IoT network.

This initiative shows that hardware innovation can now effectively connect with established business workflows. Underscoring that broader trend, SAP is sponsoring this year’s AI & Big Data Expo North America at the San Jose McEnery Convention Center, CA, an event that is fittingly co-located with the IoT Tech Expo and Intelligent Automation & Physical AI Summit.

When equipment breaks at a chemical plant or offshore rig, it costs a fortune. People do routine inspections to catch these issues early, but humans get tired and plants are massive. Robots, on the other hand, can walk the floor constantly, carrying thermal, acoustic, and visual sensors. Hook those sensors into SAP, and a hot pump instantly generates a maintenance request without waiting for a human to report it.

Cutting out the reporting lag

Usually, finding a problem and logging a work order are two disconnected steps. A worker might hear a weird noise in a compressor, write it down, and type it into a computer hours later. By the time the replacement part gets approved, the machine might be wrecked.

Connecting ANYbotics to SAP eliminates that delay. The robot’s onboard AI processes what it sees and hears instantly. If it hears an irregular motor frequency, it doesn’t just flash a warning on a separate screen; it uses APIs to tell the SAP asset management module directly. The system immediately checks for spare parts, figures out the cost of potential downtime, and schedules an engineer.
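The shape of that robot-to-ERP handoff can be sketched as a mapping from an onboard fault summary to a notification payload. The field names, priority rule, and payload shape below are hypothetical, not actual SAP or ANYbotics API definitions:

```python
# Hypothetical robot-to-ERP handoff: an onboard fault summary is mapped to
# a maintenance-notification payload before being sent over an API. Field
# names and the priority rule are illustrative, not real SAP definitions.
def build_maintenance_notification(fault):
    """Map a robot fault report to a work-order-style payload, with
    priority driven by the measured severity score (0.0 to 1.0)."""
    return {
        "equipment_id": fault["asset"],
        "description": f"{fault['type']} detected at {fault['location']}",
        "priority": "high" if fault["severity"] >= 0.8 else "medium",
        "source": "inspection-robot",
    }

fault = {"asset": "PUMP-3101", "type": "abnormal vibration",
         "location": "Hall B", "severity": 0.91}
print(build_maintenance_notification(fault)["priority"])  # high
```

Keeping the mapping explicit in middleware, rather than inside the robot firmware, makes it easier to adjust thresholds without redeploying the fleet.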

This automates the flow of information from the floor to management. It also means machinery gets judged on hard, consistent numbers instead of a human inspector’s subjective opinion.

Putting robots in heavy industry isn’t like installing software in an office—companies have to deal with unreliable infrastructure. Factories usually have awful internet connectivity due to thick concrete, metal scaffolding, and electromagnetic interference.

To make this work, the setup relies on edge computing. It takes too much bandwidth to constantly stream high-def thermal video and lidar data to the cloud. So, the robots crunch most of that data locally. Onboard processors figure out the difference between a machine running normally and one that’s dangerously overheating. They only send the crucial details (i.e. the specific fault and its location) back to SAP.

To handle the network issues, many early adopters build private 5G networks. This gives them the coverage they need across huge facilities where regular Wi-Fi fails. It also locks down access, keeping the robot’s data safe from interception.

Of course, security is a major issue. A walking robot packed with cameras is effectively a roaming vulnerability. Companies must use zero-trust network protocols to constantly verify the robot’s identity and limit what SAP modules it can touch. If the robot gets hacked, the system has to cut its connection instantly to stop the attackers from moving laterally into the corporate network.
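A toy illustration of that zero-trust stance: every request from the robot is re-verified against a token and an allow-list of ERP modules, and any failed check cuts the session immediately. The module names and token scheme are illustrative, not a real protocol:

```python
# Toy zero-trust session: identity is re-verified on every call against a
# token and a module allow-list; one failure severs the connection so an
# attacker cannot move laterally. Names and tokens are illustrative.
ALLOWED_MODULES = {"asset_management", "inventory_lookup"}

class RobotSession:
    def __init__(self, robot_id, token):
        self.robot_id = robot_id
        self.token = token
        self.active = True

    def request(self, module, presented_token):
        """Grant only verified, in-scope requests; deny and kill the
        session on any mismatch."""
        if not self.active:
            return "denied"
        if presented_token != self.token or module not in ALLOWED_MODULES:
            self.active = False  # cut the connection instantly
            return "denied"
        return "granted"

s = RobotSession("anymal-07", "tok-123")
print(s.request("asset_management", "tok-123"))  # granted
print(s.request("finance_core", "tok-123"))      # denied, session cut
print(s.request("asset_management", "tok-123"))  # denied
```

Real deployments would use mutual TLS and short-lived credentials, but the principle of never trusting a prior success is the same.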

These robots generate a massive amount of unstructured data as they walk around. Turning raw audio and thermal images into the neat tables SAP requires is difficult.

If companies don’t manage this right, maintenance teams will drown in alerts. A robot that is too sensitive might spit out hundreds of useless warnings a day, making the SAP dashboard completely ignored. IT teams have to set strict rules before turning the system on. They need exact thresholds for what triggers a real maintenance ticket and what just needs to be watched.

The setup usually uses middleware to translate the robot’s telemetry into SAP’s language. This software acts as a filter, throwing out the noise so only actual problems reach the ERP system. The data lake storing all this information also needs to be organised for future machine learning projects. Fixing broken machines is the short-term goal; the long-term payoff is using years of robot data to predict failures before they happen.
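The filtering rules described above can be sketched as per-metric thresholds: only readings that cross the "ticket" line become ERP events, while a middle band is merely watched. The metric names and threshold values are illustrative assumptions:

```python
# Sketch of the middleware alert filter: each telemetry reading is compared
# against per-metric thresholds, and only "ticket" readings reach the ERP
# system. Metric names and threshold values are illustrative.
THRESHOLDS = {
    "bearing_temp_c": {"watch": 70.0, "ticket": 85.0},
    "vibration_mm_s": {"watch": 4.5, "ticket": 7.1},
}

def classify(metric, value):
    """Return 'ticket', 'watch', or 'ignore' for a single reading."""
    t = THRESHOLDS[metric]
    if value >= t["ticket"]:
        return "ticket"
    if value >= t["watch"]:
        return "watch"
    return "ignore"

readings = [("bearing_temp_c", 66.0), ("bearing_temp_c", 90.0),
            ("vibration_mm_s", 5.0)]
print([classify(m, v) for m, v in readings])
# ['ignore', 'ticket', 'watch']
```

Tuning these bands before go-live is what keeps the SAP dashboard from drowning in the hundreds of daily warnings the article cautions against.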

Ensuring a successful physical AI deployment

Dropping robots into a factory naturally makes people nervous. The project’s success often comes down to how human resources handles it. Workers usually look at the robots and assume layoffs are next.

Management has to be clear about why the robots are there. The goal is to get people out of dangerous areas like high-voltage zones or toxic chemical sectors to reduce injuries. The robot collects the data, and the human engineer shifts to analysing that data and doing the actual repairs.

This requires retraining. Workers who used to walk the perimeter now have to read SAP dashboards, manage automated tickets, and work with the robots. They have to trust the sensors, and management has to make sure operators know they can take manual control if something unexpected happens.

Companies need to take the rollout slowly. Because syncing physical robots with enterprise software is complicated, large-scale rollouts should start as small, targeted pilots.

The first test should be in one specific area with known hazards but rock-solid connectivity. This lets IT watch the data flow between the hardware and SAP in a controlled space. At this stage, the main job is making sure the data matches reality: if the robot sees one thing and SAP records another, the discrepancy has to be audited and fixed daily.
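That daily match-the-data audit might look like the reconciliation pass below; the record shapes, asset IDs, and tolerance are invented for the sketch.

```python
# Daily reconciliation sketch: compare what the robot observed against what
# SAP recorded, and report every disagreement for manual audit.
# Record shapes and asset IDs are hypothetical.

def reconcile(robot_records: dict, sap_records: dict, tolerance: float = 0.5):
    """Yield (asset_id, robot_value, sap_value) for every disagreement."""
    for asset_id, robot_value in robot_records.items():
        sap_value = sap_records.get(asset_id)
        if sap_value is None or abs(robot_value - sap_value) > tolerance:
            yield asset_id, robot_value, sap_value

robot = {"PUMP-01": 71.2, "PUMP-02": 88.9, "VALVE-07": 40.1}
sap   = {"PUMP-01": 71.0, "PUMP-02": 79.3}               # VALVE-07 never replicated
mismatches = list(reconcile(robot, sap))
print(mismatches)   # [('PUMP-02', 88.9, 79.3), ('VALVE-07', 40.1, None)]
```

A mismatch with `None` on the SAP side is the replication failure case; a numeric gap beyond tolerance is the sensor-drift case. Both go to a human before the rollout expands.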

Once the data pipeline actually works, the company can add more robots and connect other systems, like automated parts ordering. IT chiefs have to keep checking whether their private networks can handle the additional robots, while security teams update their defences against new threats.

If companies treat these autonomous inspectors as an extension of their corporate data architecture, they get a massive amount of information about their physical assets. But pulling it off means getting the network infrastructure, the data rules, and the human element exactly right.

See also: The rise of invisible IoT in enterprise operations

Banner for AI & Big Data Expo by TechEx events.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post SAP and ANYbotics drive industrial adoption of physical AI appeared first on AI News.

]]>
OpenAI’s Frontier puts AI agents in a fight SaaS can’t afford to lose https://www.artificialintelligence-news.com/news/openai-frontier-enterprise-ai-agents-saas/ Mon, 16 Mar 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=112675 When OpenAI launched Frontier in February, the announcement was described as a platform for enterprise AI agents. What it actually signalled was a challenge to the revenue architecture underpinning the software industry. Frontier is designed to act as a semantic layer in an organisation’s existing systems, connecting data warehouses, CRM platforms, ticketing tools, and internal […]

The post OpenAI’s Frontier puts AI agents in a fight SaaS can’t afford to lose appeared first on AI News.

]]>
When OpenAI launched Frontier in February, the announcement was described as a platform for enterprise AI agents. What it actually signalled was a challenge to the revenue architecture underpinning the software industry.

Frontier is designed to act as a semantic layer in an organisation’s existing systems, connecting data warehouses, CRM platforms, ticketing tools, and internal applications so AI agents can operate with the same business context a human employee would have. OpenAI describes these agents as “AI coworkers” that can be on-boarded, assigned identities, granted permissions, and reviewed for performance.

Early customers include Uber, State Farm, Intuit, and Thermo Fisher Scientific. OpenAI CFO Sarah Friar has stated that enterprise customers currently account for roughly 40% of the company’s revenue; she aims to push that figure closer to 50% by year-end, with Frontier as the vehicle.

Frontier in enterprise workflows

The case for Frontier is that agents deployed in isolation add complexity rather than remove it. Each new agent is a point of integration, requiring its own data connections and governance controls, and the result is fragmentation. OpenAI’s answer is a shared business context. Rather than each agent building its own understanding of how an organisation works, Frontier provides a centralised layer that all agents can reference.
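Frontier’s actual interfaces are not public, so the following is a purely hypothetical illustration of the shared-context idea: many agents resolving business entities through one registry instead of each keeping its own copy.

```python
# Hypothetical sketch of a "shared business context": instead of each agent
# holding its own picture of how the business is wired, all agents resolve
# entities through one registry. Nothing here reflects Frontier's actual API.

class BusinessContext:
    """Single source of truth that every agent references."""
    def __init__(self):
        self._entities = {}

    def register(self, name: str, record: dict):
        self._entities[name] = record

    def lookup(self, name: str) -> dict:
        return self._entities[name]

class Agent:
    def __init__(self, role: str, context: BusinessContext):
        self.role = role
        self.context = context        # shared reference, not a private copy

    def describe(self, entity: str) -> str:
        record = self.context.lookup(entity)
        return f"{self.role}: {entity} is owned by {record['owner']}"

ctx = BusinessContext()
ctx.register("Q3-pipeline", {"owner": "sales-ops", "system": "CRM"})

sales_agent = Agent("sales", ctx)
finance_agent = Agent("finance", ctx)
# Both agents answer from the same context; updating it once updates both.
print(sales_agent.describe("Q3-pipeline"))
print(finance_agent.describe("Q3-pipeline"))
```

The anti-pattern Frontier targets is the alternative: each agent with its own `_entities` dict, drifting out of sync — "silos on silos", in Simo’s phrase.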

Fidji Simo, OpenAI’s CEO of Applications, speaking at the launch briefing, referred to her time running Instacart. “We spent months integrating each of the ones that we selected. We didn’t even get what we actually wanted, because each tool was good for one use case, but they weren’t integrated or talking to one another, so we were just reinforcing silos on silos.”

The results OpenAI cites from early deployments include a global investment firm using Frontier agents in its sales process freeing up more than 90% of salesperson time previously spent on administrative tasks. A technology customer reported saving 1,500 hours a month in product development. At a major manufacturer, agents compressed a production optimisation process from six weeks to a single day.

Frontier manages agents built by OpenAI, by in-house enterprise teams, and by third-party providers. Openness is both a design principle and a piece of positioning: it makes Frontier harder to dismiss on grounds of vendor lock-in and expands the surface area it can govern.

The seat-licence problem

A deep concern for incumbents is structural. The per-seat licence model that has made SaaS enormously profitable assumes that software use maps to headcount. If an AI agent handles the workflow that previously required a human employee logging into Salesforce, the justification for that seat licence weakens. Fortune described fear in the market of models like Frontier making SaaS software “invisible” and consequently less valuable.

Salesforce’s stock has declined more than 27% this year, which analysts have attributed more to agentic AI disruption fears than to any weakness in its underlying financials. Revenue reached $11.2 billion in the quarter, Agentforce’s annual recurring revenue hit $800 million, and the company closed 29,000 Agentforce deals. The stock nonetheless fell after guidance came in below Wall Street’s expectations.

The incumbents are not standing still. Salesforce has introduced what it calls the Agentic Enterprise License Agreement, a fixed-price, all-you-can-eat model for Agentforce that attempts to make consumption more predictable for enterprise buyers.

ServiceNow has moved to consumption-based pricing for some of its AI agent offerings, and in January signed a multiyear agreement with OpenAI to embed frontier model capabilities directly into its platform. Microsoft has introduced consumption-based pricing alongside its per-user model for Copilot Studio.
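The arithmetic pressuring the seat model can be made concrete with a back-of-the-envelope sketch; every figure below is invented purely for illustration.

```python
# Back-of-envelope comparison of per-seat licensing vs consumption pricing
# as agents take over workflows. Every number is invented for illustration.

def per_seat_cost(seats: int, price_per_seat: float) -> float:
    return seats * price_per_seat

def consumption_cost(agent_runs: int, price_per_run: float) -> float:
    return agent_runs * price_per_run

# Before agents: 500 users each need a seat at a notional $150/month.
before = per_seat_cost(seats=500, price_per_seat=150.0)          # 75,000 / month

# After agents: 100 humans keep seats; agents handle the other 400 users'
# workflows as roughly 40,000 metered runs at a notional $0.25 each.
after = per_seat_cost(100, 150.0) + consumption_cost(40_000, 0.25)
# after = 15,000 + 10,000 = 25,000 / month

print(before, after)   # vendor revenue drops unless per-run pricing rises
```

Under these invented numbers the vendor’s monthly revenue falls by two thirds even though the same work gets done, which is exactly why the incumbents are repricing rather than waiting.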

The pricing pivot signals that companies understand the seat-licence model cannot survive agentic AI unchanged. The question is whether repricing is enough or whether the architecture itself needs to change.

Two ideas of where the intelligence layer should sit

Should AI agents live inside systems of record, or above them? Salesforce and ServiceNow are betting on the embedded model, arguing that agents are most effective when they sit closest to the data, and that CIOs will trust governance and compliance controls more readily from vendors already managing their workflows.

Marc Benioff, CEO of Salesforce, has described Agentforce as the “operating system for the agentic enterprise.” ServiceNow positions its AI Control Tower as a centralised governance layer for all agents, regardless of origin.

OpenAI, and to a similar degree, Anthropic with Claude Cowork, is betting on the overlay model. Frontier sits above existing systems, using open standards to connect them. The pitch is that enterprises should not have to re-platform to get production-grade agents running in their operations.

Both arguments have merit, and enterprises evaluating these platforms will find genuine trade-offs. The embedded approach offers tighter data control and faster time to value in a known ecosystem. The overlay approach offers flexibility and avoids the problem of agents that can only see one vendor’s data.

What the incumbents have that OpenAI does not is decades of institutional trust and existing contracts. What the AI leader has is an advantage in model capability and an argument that it can run the intelligence layer across the whole enterprise.

Frontier is currently available to a limited set of customers, with broader availability expected over the coming months. Pricing has not been disclosed, with OpenAI directing interested organisations to its enterprise sales team.

Many large enterprises run Salesforce, ServiceNow, and Microsoft infrastructure simultaneously. The immediate question is whether Frontier becomes an orchestration layer that connects systems, or a platform that displaces them.

OpenAI’s chief revenue officer, Denise Dresser, said: “What’s really missing still for most companies is just a simple way to free the power of agents as teammates that can operate inside the business without the need to rework everything underneath.”

Every platform in this space claims to close the gap. SaaS incumbents have a head start on trust and data. Whether that proves sufficient is the central question for enterprise software through to the end of 2026.

(Photo by Austin Distel)

See also: OpenAI’s enterprise push: The hidden story behind AI’s sales race


The post OpenAI’s Frontier puts AI agents in a fight SaaS can’t afford to lose appeared first on AI News.

]]>
BMW puts humanoid robots to work in Germany–and Europe’s factories are watching https://www.artificialintelligence-news.com/news/bmw-humanoid-robots-manufacturing-europe-leipzig/ Fri, 13 Mar 2026 09:00:00 +0000 https://www.artificialintelligence-news.com/?p=112665 Europe’s factory floors have a new kind of colleague. BMW Group has deployed humanoid robots in manufacturing in Germany for the first time, launching a pilot project at its Leipzig plant with AEON–a wheeled humanoid built by Hexagon Robotics.  It is the first automotive deployment of AEON anywhere in the world, and it marks something of a […]

The post BMW puts humanoid robots to work in Germany–and Europe’s factories are watching appeared first on AI News.

]]>
Europe’s factory floors have a new kind of colleague. BMW Group has deployed humanoid robots in manufacturing in Germany for the first time, launching a pilot project at its Leipzig plant with AEON–a wheeled humanoid built by Hexagon Robotics. 

It is the first automotive deployment of AEON anywhere in the world, and it marks something of a line in the sand for European industry: physical AI is no longer a North American or East Asian story.

The announcement, made on March 9, 2026, comes backed by hard data from a prior US trial. In 2025, BMW ran a ten-month pilot at its Spartanburg, South Carolina, plant using Figure AI’s Figure 02 robot. The humanoid supported production of over 30,000 BMW X3s, working 10-hour shifts and moving a total of over 90,000 components. 

Leipzig is now the direct heir to those lessons.

A robot built for work, not demos

AEON, developed by Hexagon’s Zurich-based robotics division, is a deliberately industrial machine. Arnaud Robert, President of Hexagon Robotics, made the philosophy plain at a Munich event earlier this month: “We’re not in the dancing business–we’re in the working business.” That ethos is visible in every design decision.

Rather than walking on two legs, AEON moves on wheels–a choice made after extensive testing of locomotion systems, with Hexagon concluding that on factory-grade flat floors, wheels are significantly more efficient in both speed and energy use. It stands 1.65 metres tall, weighs 60 kilograms, reaches 2.5 metres per second, and can autonomously swap its own battery in 23 seconds–enabling around-the-clock operation without human intervention.

Its 22 integrated sensors–peripheral cameras, time-of-flight, infrared, SLAM cameras, and microphones–give it full 360-degree real-time spatial awareness, including the ability to perform quality inspection tasks that conventional stationary robots cannot. 

Its human-like torso allows a wide variety of grippers, hand elements, and scanning tools to be flexibly docked, which is precisely what BMW needs for multifunctional deployment across different production environments.

Phased rollout, deliberate strategy

AEON’s first test deployment at Leipzig took place in December 2025. A further test run is planned for April 2026, ahead of a full pilot phase launching in summer 2026, where two AEON units will work simultaneously across two use cases–focusing on high-voltage battery assembly and component manufacturing for exterior parts.

Leipzig was not an arbitrary choice. It is BMW’s most technologically comprehensive German plant, combining battery production, injection moulding, press shop, body shop, and final assembly under one roof, meaning a successful deployment there effectively validates physical AI across the full production spectrum.

To anchor this work institutionally, BMW has established a Centre of Competence for Physical AI in Production, consolidating expertise across the group and creating a defined evaluation path for technology partners–from lab testing through to full pilot phases. 

As Felix Haeckel, Team Lead for the centre, put it: “We are pooling our expertise to make knowledge on AI and robotics widely usable within the company.”

The infrastructure underneath

What makes BMW’s approach notable is that AEON is not landing on a blank factory floor. BMW has systematically dismantled data silos across its production network, replacing them with a uniform data platform that ensures all information is consistent, standardised, and accessible at all times–the architecture that allows AI agents to operate autonomously and learn continuously. 

The humanoid robot is, in effect, the physical layer of a system that has been years in the making. AEON runs on NVIDIA Jetson Orin onboard computers and was trained largely through simulation using NVIDIA’s Isaac platform–a method that allowed Hexagon to develop core locomotion capabilities in weeks rather than months.

The project also involves Microsoft Azure for scalable model development and Maxon’s actuators for locomotion.

Why this matters beyond Leipzig

The broader signal here is one that the enterprise AI world is already tracking closely. Deloitte’s State of AI in the Enterprise 2026 report, surveying over 3,200 senior leaders across 24 countries, found that 58% of companies are already using physical AI in some capacity, a figure set to reach 80% within two years, with Asia Pacific leading early implementation.

BMW’s Leipzig pilot is a proof point in that trajectory: that humanoid robots in manufacturing have moved past the lab and the press release, and are being stress-tested against the unforgiving standards of real industrial production. As Milan Nedeljković, BMW’s Board Member for Production, put it: “The symbiosis of engineering expertise and artificial intelligence opens up completely new possibilities in production.”

The question now is not whether humanoid robots belong on the factory floor. It is how fast the rest of the European industry follows.

See also: Ai2: Building physical AI with virtual simulation data


The post BMW puts humanoid robots to work in Germany–and Europe’s factories are watching appeared first on AI News.

]]>
FIFA is rebuilding world football operations on AI. The World Cup is just the first test https://www.artificialintelligence-news.com/news/fifa-ai-world-cup-2026-lenovo/ Thu, 12 Mar 2026 08:00:00 +0000 https://www.artificialintelligence-news.com/?p=112611 When Romy Gai, FIFA’s chief business officer, described the operational challenge of running a 48-team World Cup across Canada, Mexico and the United States, he was not talking about technology. He was talking about complexity. Previous World Cups relied on local organising committees to absorb much of the logistical load. For 2026, FIFA is running operations directly. […]

The post FIFA is rebuilding world football operations on AI. The World Cup is just the first test appeared first on AI News.

]]>
When Romy Gai, FIFA’s chief business officer, described the operational challenge of running a 48-team World Cup across Canada, Mexico and the United States, he was not talking about technology. He was talking about complexity.

Previous World Cups relied on local organising committees to absorb much of the logistical load. For 2026, FIFA is running operations directly. Six billion people are expected to watch. There are 104 matches, up from 64 in Qatar. There are 48 teams instead of 32, 180-plus broadcasters, and no single national infrastructure to lean on. The scale is genuinely new.

The AI strategy FIFA unveiled at Lenovo Tech World in Hong Kong this week is best understood against that backdrop. Football AI Pro, AI-enabled 3D player avatars, and a next-generation Referee View are the headline announcements. But the product decisions themselves reflect something more structural: an organisation that has decided AI is not an enhancement to how it runs football’s biggest event, but how the event gets run.

What Football AI Pro actually does

Football AI Pro is a generative AI knowledge assistant that will be made available to all 48 teams competing at the 2026 World Cup. It is built on FIFA’s Football Language Model and trained on hundreds of millions of FIFA-owned data points. It generates pre- and post-match analysis in text, video, graphs and 3D visualisations, supports prompts in multiple languages, and will not be used during live play.

The democratisation argument behind it is straightforward. At the highest level of the game, access to sophisticated match analysis depends heavily on a team’s financial resources. A tier-one footballing nation has a dedicated analytics department. A team competing at its first World Cup does not. Football AI Pro is designed to give every team the same analytical baseline.

That ambition is real, but it is also worth understanding as an enterprise AI deployment challenge. Delivering consistent, tournament-wide intelligence across 48 teams in three countries, in multiple languages, against a match schedule that runs for weeks, is not a small infrastructure problem. It is the kind of workload that requires exactly the hybrid AI architecture Lenovo has been building its enterprise positioning around.

The referee camera is about transparency, not television

The updated Referee View is being framed in broadcast terms, and it will look good on screen. AI-powered stabilisation smooths footage captured from the referee’s body camera in real time, reducing the motion blur that made the original version hard to watch during fast play.
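Real broadcast stabilisation pipelines are far more sophisticated, but the core idea — estimate per-frame camera motion, smooth the trajectory, and shift each frame by the difference — can be shown with a toy exponential smoother. All values below are invented.

```python
# Toy stabilisation sketch: smooth a jittery sequence of estimated camera
# offsets with an exponential moving average, then compensate each frame by
# the difference between raw and smoothed motion. Values are invented.

def smooth(offsets, alpha=0.2):
    """Exponentially smoothed camera trajectory (smaller alpha = steadier)."""
    smoothed, level = [], offsets[0]
    for x in offsets:
        level = alpha * x + (1 - alpha) * level
        smoothed.append(level)
    return smoothed

raw = [0.0, 3.0, -2.0, 4.0, -3.0]          # jittery per-frame offsets (pixels)
stable = smooth(raw)
corrections = [s - r for r, s in zip(raw, stable)]   # shift applied per frame
print([round(c, 2) for c in corrections])
```

The trade-off a real system tunes is the same one `alpha` controls here: too little smoothing leaves the motion blur, too much makes the camera lag behind genuine pans.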

The more significant purpose is transparency. VAR has been one of the most contested technologies in football, partly because the decision-making process is difficult for fans to follow and partly because the imagery used to communicate those decisions has often been unclear. Better referee footage, delivered in real time, changes both of those problems.

The first version of Referee View was trialled at the FIFA Club World Cup last year. The updated version for 2026 is a meaningful technical step forward, but the real test is whether it shifts audience perception of officiating decisions. If it does, it becomes a governance technology as much as a broadcast one.

3D avatars and the offside problem

The AI-enabled 3D player avatar system addresses a specific and persistent pain point: semi-automated offside technology. The existing system works, but the imagery it produces to explain offside decisions has not always been convincing. The lines are hard to read, the angles are counterintuitive, and fans routinely dispute calls that the technology correctly identified.

The new system scans players to create precise 3D models, with each scan taking approximately one second. During matches, those models are used to track players more accurately through fast or obstructed movements. 

When an offside decision is referred to VAR, the 3D model produces imagery that is both more accurate and easier to understand. It was tested at the FIFA Intercontinental Cup last year, where Flamengo and Pyramids FC players were scanned ahead of their match.

The underlying logic is the same as the referee camera: better data, communicated more clearly, reduces the legitimacy gap between the decision and the audience’s acceptance of it.
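Stripped to one axis, the comparison the avatars feed into the offside call is simple geometry: is the attacker’s forward-most tracked point beyond the second-last defender at the moment the ball is played? The coordinates below are invented, and real systems track full 3D limb positions rather than a single point per player.

```python
# Stripped-down offside check on one axis (distance towards the goal line).
# The attacker is judged against the second-last defender at the pass moment.
# Coordinates are invented; real systems use full 3D limb tracking.

def is_offside(attacker_x: float, defender_xs: list[float]) -> bool:
    """True if the attacker is nearer the goal line (higher x) than the
    second-last defender (the last is usually the goalkeeper)."""
    second_last = sorted(defender_xs, reverse=True)[1]
    return attacker_x > second_last

defenders = [88.0, 95.5, 70.2, 99.1]       # 99.1 is the goalkeeper
print(is_offside(96.0, defenders))         # True: past the 95.5 defender
print(is_offside(94.0, defenders))         # False: behind the line is onside
```

Note the strict `>`: level with the second-last defender is onside, which is why millimetre-accurate limb tracking — and imagery that communicates it — matters so much to the call’s legitimacy.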

The intelligent command centre

The least-discussed element of the FIFA-Lenovo partnership is arguably the most operationally significant. FIFA has built what Gai described as an intelligent command centre that connects real-time data across departments, matches, venues and broadcasters in a single operational view.

In a tournament running across three countries with over 180 broadcasters and six billion expected viewers, operational coordination is the constraint that everything else depends on. The command centre is effectively the enterprise AI backbone behind the public-facing Football AI announcements.

Gai’s point about removing local organising committees is worth sitting with. It means FIFA is taking on operational responsibility for functions that were previously distributed across national bodies with local knowledge and local relationships. AI is not just supporting that decision; it is what makes the decision viable.

The Football Language Model and what comes after 2026

Football AI Pro is built on FIFA’s Football Language Model, a domain-specific model trained on FIFA’s own data. That is a significant asset. A general-purpose language model can answer questions about football. A model trained on hundreds of millions of FIFA-owned data points can generate validated, tournament-specific intelligence that a general model cannot replicate.

The implications extend beyond 2026. FIFA has stated that Football AI Pro will eventually be made available to fans, not just teams. The 211 member federations that make up world football’s governing structure are also in scope. If the model performs at the World Cup, it becomes the foundation for a much longer democratisation project, one that extends analytical capability to national associations and competitions that currently have almost none.

That is the larger enterprise AI story behind the announcements this week. The World Cup is the proof of concept. What FIFA builds on top of it is the actual deployment.

See also: How physical AI integration accelerates vehicle innovation


The post FIFA is rebuilding world football operations on AI. The World Cup is just the first test appeared first on AI News.

]]>
Physical AI is having its moment–and everyone wants a piece of it https://www.artificialintelligence-news.com/news/physical-ai-global-race-robots-manufacturing-2026/ Wed, 04 Mar 2026 12:00:00 +0000 https://www.artificialintelligence-news.com/?p=112502 There is a particular kind of momentum in the technology industry that announces itself not through a single breakthrough, but through the simultaneous convergence of many. Physical AI is having that moment right now–and paying attention to where it is coming from, and why, tells you more than any single product launch can. The term […]

The post Physical AI is having its moment–and everyone wants a piece of it appeared first on AI News.

]]>
There is a particular kind of momentum in the technology industry that announces itself not through a single breakthrough, but through the simultaneous convergence of many. Physical AI is having that moment right now–and paying attention to where it is coming from, and why, tells you more than any single product launch can.

The term itself–physical AI–is simple enough. It describes AI systems that don’t just process data or generate content, but perceive, reason, and act in the real world–robots, autonomous vehicles, machines that adapt. Nvidia CEO Jensen Huang called it “the ChatGPT moment for robotics” at CES in January–a deliberate framing, and a useful one. 

The ChatGPT comparison isn’t about hype. It signals that a technology once confined to research environments is being adopted for mainstream commercial deployment. That crossing is exactly what we are watching unfold from factory floors in Silicon Valley to stages in Shanghai.

The West is building the stack

On the Western side, the physical AI push is fundamentally a platform race. The companies investing most aggressively aren’t primarily robotics companies–they’re infrastructure companies that see robotics as the next surface on which AI gets monetised.

Nvidia has released new Cosmos and GR00T open models for robot learning and reasoning, alongside the Blackwell-powered Jetson T4000 module, which delivers 4x greater energy efficiency for robotics computing. Arm has carved out an entirely new Physical AI business unit focused on semiconductor design for robotics and intelligent vehicles. 

Siemens and Nvidia announced plans to build what they’re calling an Industrial AI Operating System, with ambitions to create the world’s first fully AI-driven adaptive manufacturing site. Then there’s Google, which last week brought its robotics software unit Intrinsic fully in-house–out of Alphabet’s “Other Bets” and into Google’s core. 

The move positions Google to offer manufacturers a vertically integrated stack: AI models from DeepMind, deployment software from Intrinsic, and cloud infrastructure from Google Cloud. The Android analogy being floated internally is instructive. Android didn’t win smartphones by building the best phone. It won by becoming the layer everything else ran on. 

That is precisely what Google is attempting with physical AI.

The enterprise implications are significant. A Deloitte survey of more than 3,200 global business leaders found that 58% are already using physical AI in some capacity, with adoption planned to reach 80% over the next two years. The demand is there. The question has shifted from whether to adopt to how fast and on whose platform.

The East is building the machines

China’s physical AI story is different in character–and arguably more visceral. At this year’s Spring Festival Gala, humanoid robots from multiple Chinese startups performed kung fu routines, aerial flips, and choreographed dances before hundreds of millions of viewers–a sharp contrast from the stumbling prototypes that drew scepticism just a year prior. 

It was a spectacle, yes. It was also a statement. China accounted for over 80% of global humanoid robot installations in 2025 and over half of the world’s industrial robots. That dominance is underpinned by structural advantages that go beyond software. China controls roughly 70% of the global lidar sensor market, leads in harmonic reducer production–the gears critical to robot movement–and has driven hardware costs down through the same economies of scale that propelled its EV industry. 

Alibaba has entered the race with RynnBrain, an open-source AI model designed to help robots comprehend the physical world and identify objects–positioning itself alongside NVIDIA’s Cosmos and Google DeepMind’s Gemini Robotics in the foundation model layer. With over 140 domestic humanoid manufacturers and more than 330 humanoid models already unveiled, China’s push into embodied AI is no longer experimental–it’s commercial.

Why it matters beyond the headlines

The convergence of Western platform strategies and Eastern manufacturing scale is creating something genuinely new: a global physical AI ecosystem that is advancing on multiple fronts simultaneously, with different competitive advantages colliding.

What makes this moment distinct from prior robotics waves is the removal of the expertise bottleneck. Historically, deploying industrial robots required specialised engineering teams, months of custom programming, and a high tolerance for downtime. The platforms being built now–by Google, Nvidia, Siemens, and their Chinese equivalents–are explicitly designed to lower that barrier. 

Companies like Vention, which raised US$110 million in January, claim their physical AI platforms can reduce automation project timelines from months to days. When that claim becomes routine, the economics of manufacturing change structurally.

There is also a geopolitical dimension that sits quietly beneath the product announcements. Every foundation model for robotics, every platform layer, every semiconductor architecture being developed right now carries with it questions of supply chain dependency, data sovereignty, and long-term infrastructure control. 

The country–or company–that governs the software layer of physical AI will have unusual leverage over industrial operations globally for years to come.

Physical AI is not a trend. It is the next significant reconfiguration of how the world makes things, moves things, and operates at scale. The conversations happening now–from semiconductor boardrooms to factory floors in Shenzhen and Silicon Valley–are not preliminary. They are the thing itself, already underway.

(Photo by Hyundai Motor Group)

See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance


The post Physical AI is having its moment–and everyone wants a piece of it appeared first on AI News.

]]>
Google makes its industrial robotics AI play official–and this time, it means business https://www.artificialintelligence-news.com/news/google-industrial-robotics-ai-physical-ai-intrinsic/ Wed, 04 Mar 2026 08:00:00 +0000 https://www.artificialintelligence-news.com/?p=112499 When Google folds a moonshot into its core operations, it’s not cleaning house. It’s placing a bet. On February 25, Alphabet-owned Intrinsic–which builds AI models and software designed to make industrial robotics more accessible–officially joined Google.  The company will remain a distinct group within Google, working closely with Google DeepMind and tapping into Gemini AI models and […]

The post Google makes its industrial robotics AI play official–and this time, it means business appeared first on AI News.

]]>
When Google folds a moonshot into its core operations, it’s not cleaning house. It’s placing a bet. On February 25, Alphabet-owned Intrinsic–which builds AI models and software designed to make industrial robotics more accessible–officially joined Google. 

The company will remain a distinct group within Google, working closely with Google DeepMind and tapping into Gemini AI models and Google Cloud. No purchase price was disclosed.

On the surface, this looks like a routine internal reshuffle. It isn’t.

From moonshot to mandate

Intrinsic graduated into an independent Alphabet-owned company in 2021 after five years of development within Alphabet’s X, the moonshot research division–the same factory that produced Waymo and Wing. Its mission from the start: make industrial robotics AI accessible to manufacturers who don’t have armies of specialist engineers.

While hardware like robotic arms has become cheaper, programming them remains incredibly complex, often requiring hundreds of hours of manual coding by specialised engineers, with the details varying from robot to robot. Intrinsic's answer is Flowstate, a web-based platform that allows users to build robotic applications without writing thousands of lines of code. 

The platform is designed to be hardware-, software-, and AI-model-agnostic. Think of it less as a product and more as an operating layer–one that Google CEO Sundar Pichai has reportedly compared directly to Android. “He said this is the Android of robotics,” Intrinsic CEO Wendy Tan White said, noting that Pichai worked on Chrome and Android before becoming CEO. 

Why now, why Google?

The timing isn’t arbitrary. The sequence of hiring Boston Dynamics’ CTO, releasing a standalone robotics SDK, and now absorbing Intrinsic represents a deliberate consolidation of robotics capability inside Google’s core. Taken together, these moves position Google to offer manufacturers something no competitor has assembled quite as cleanly: AI models from DeepMind, deployment software from Intrinsic, and cloud infrastructure from Google Cloud–all under one roof.

Last month, Google also teamed up with Boston Dynamics to integrate Gemini into Atlas humanoid robots built for manufacturing environments, while Google DeepMind hired the former CTO of Boston Dynamics in November. 

The industrial robotics AI market Google is chasing is not small. McKinsey projects that the market for general-purpose robots could reach US$370 billion by 2040. 

What it means for the enterprise

For enterprise decision-makers, the more interesting signal here isn't the technology–it's the accessibility shift. Google plans to integrate Intrinsic's robotics development platform and vision models with its broader AI ecosystem, combining advanced reasoning, perception, and learning capabilities with industrial-grade robotics software. The goal is to let machines interpret sensor data better, adapt to dynamic environments, and execute complex tasks. 

Intrinsic has also expanded through acquisitions: in 2022 it acquired Open Source Robotics Corp., the for-profit arm of the foundation behind the Robot Operating System (ROS). And its commercial pipeline is already in motion: in October 2025, Intrinsic formed a strategic partnership with Foxconn focused on developing general-purpose intelligent robots for full factory automation within electronics manufacturing. 

White framed the integration in terms enterprise leaders will find hard to ignore: production economics, operational transformation, and what she described as truly advanced manufacturing–all within reach once Google's infrastructure is fully behind it.

That’s a significant claim. But with Gemini, DeepMind, and Google Cloud now aligned behind it, the infrastructure to back it up is, for the first time, actually there.

See also: Physical AI adoption boosts customer service ROI

Physical AI adoption boosts customer service ROI https://www.artificialintelligence-news.com/news/physical-ai-adoption-boosts-customer-service-roi/ Tue, 03 Mar 2026 11:32:47 +0000 https://www.artificialintelligence-news.com/?p=112483

The post Physical AI adoption boosts customer service ROI appeared first on AI News.

The adoption of physical AI drives ROI in frontline customer service by merging digital intelligence with human-like physical interaction.

As businesses navigate shrinking labour pools, they are finding that simply automating routine workflows is no longer enough. A new partnership between KDDI and AVITA demonstrates how companies can address complex operational gaps through humanoid deployment.

While traditional industrial robots excel at repetitive, single-function tasks, they lack the versatility required to manage unexpected anomalies like equipment failures. Customer-facing roles demand nonverbal communication, including synchronised nodding, natural eye contact, and reassuring facial expressions. 

By integrating AVITA’s avatar creation expertise with KDDI’s communications infrastructure, the two organisations are building domestically developed humanoids capable of operating smoothly in real-world commercial environments.

Blending hardware with advanced data infrastructure

Deploying humanoids into active commercial spaces requires high-capacity and low-latency network infrastructure to transmit visual data and control commands in real time. KDDI provides this operational backbone, facilitating remote control capabilities alongside intensive cloud-based data processing. The resulting visual and motion data collected during customer interactions feeds back into the system to train the AI, improving the precision and autonomy of the humanoid’s behaviour.

To support the demanding computational requirements of physical AI adoption, the companies plan to utilise GPUs hosted at the Osaka Sakai Data Center, which commenced operations in January 2026. They are also exploring integration with an on-premises service for Google’s Gemini high-performance generative AI model. This alignment with major enterprise platforms ensures that data processing remains secure and capable of handling complex dialogue requirements.

The hardware itself departs from standard utilitarian machinery. Based on a concept model designed by Hiroshi Ishiguro, the humanoid features a compact skeletal structure approximating a typical Japanese physique.

Silicone skin and specialised mechanical systems enable warm, approachable facial expressions that sync directly with spoken dialogue. Embedded camera sensors track objects in motion to create natural eye contact, while quiet pneumatic actuation allows for fluid and continuous movement with natural “micro-variations”. This design specifically addresses the historical difficulty of deploying automation in operations requiring hospitality and reassurance.
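One way to read the "micro-variations" idea is to overlay a smooth, low-amplitude perturbation on each actuator's held target, so a static pose never looks frozen. The sketch below illustrates that general technique only; the amplitudes, frequencies, and function names are assumptions for the example, not details of the KDDI–AVITA hardware.

```python
import math

def target_with_micro_variation(base: float, t: float,
                                amplitude: float = 0.01,
                                freqs=(0.7, 1.3)) -> float:
    """Perturb a held actuator target with low-amplitude sine components.

    base      -- the nominal joint position (radians)
    t         -- time in seconds
    amplitude -- peak deviation; small enough to read as 'breathing', not motion
    freqs     -- incommensurate frequencies so the pattern doesn't visibly repeat
    """
    wobble = sum(math.sin(2 * math.pi * f * t + i) for i, f in enumerate(freqs))
    return base + amplitude * wobble / len(freqs)

# A 'static' gaze target now drifts imperceptibly instead of locking rigidly
samples = [round(target_with_micro_variation(0.5, t / 10), 4) for t in range(3)]
print(samples)
```

The perturbation stays within the stated amplitude, so the effect reads as life-like idling rather than visible movement.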

Preparing for commercial adoption of physical AI

This initiative builds upon earlier joint projects between KDDI and AVITA, which introduced a “next-generation remote customer service platform” using digital avatars for remote assistance at retail locations like Lawson and au Style shops.

Transitioning from digital and language-driven communication to physical units capable of free movement represents a logical progression for enterprises looking to scale their customer service capabilities. The partners intend to begin trials in actual commercial facilities starting in Autumn 2026. Deployment at customer touchpoints such as au Style shops will also be considered.

Integrating physical AI demands environments capable of sustaining continuous, high-volume data streams without latency interruptions. As visual and motion data becomes central to machine learning models, governance frameworks must adapt to manage customer data usage within physical spaces.

Organisations facing demographic workforce pressures should evaluate current bottlenecks to identify where non-verbal, empathetic engagement is necessary. Setting up high-speed network foundations and piloting digital AI avatar programmes today allows enterprises to prepare for the adoption of physical humanoids as the hardware further matures.

See also: Santander and Mastercard run Europe’s first AI-executed payment pilot

Deploying agentic finance AI for immediate business ROI https://www.artificialintelligence-news.com/news/deploying-agentic-finance-ai-for-immediate-business-roi/ Tue, 24 Feb 2026 13:26:20 +0000 https://www.artificialintelligence-news.com/?p=112381

The post Deploying agentic finance AI for immediate business ROI appeared first on AI News.

Agentic finance AI improves business efficiency and ROI only when deployed with strict governance and clear return on investment targets.

A recent FT Longitude survey of 200 finance leaders across the US, UK, France, and Germany showed 61 percent have deployed AI agents merely as experiments. Meanwhile, one in four executives admit they do not fully grasp what these agents look like in practice.

Advancing agentic finance AI beyond experiments

Finance departments need governed systems that combine language processing with business logic to deliver actual value.

Providers of Invoice Lifecycle Management platforms are introducing new agents designed to accelerate invoice processing and push accounts payable toward greater autonomy. Recent market solutions use generative AI, deep learning, and natural language processing to manage the entire workflow, from initial data ingestion through to final reconciliation.

These digital teammates handle task execution, allowing human employees to focus on higher-level business planning rather than replacing them entirely.

Within these ecosystems, specialised business agents provide contextual and real-time guidance regarding the next best actions for handling invoices. Data agents allow staff to query system information using natural language, easily finding answers about awaiting approvals in specific regions or identifying suppliers offering early payment discounts.
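As a rough illustration of what a data agent does, a natural-language question can be routed into a structured filter over invoice records. The keyword matching, field names, and sample data below are invented for the sketch; a production system would use a language model rather than string matching.

```python
# Hypothetical invoice records a data agent might query
INVOICES = [
    {"id": "INV-001", "region": "EMEA", "status": "awaiting_approval", "early_pay_discount": False},
    {"id": "INV-002", "region": "APAC", "status": "paid", "early_pay_discount": True},
    {"id": "INV-003", "region": "EMEA", "status": "awaiting_approval", "early_pay_discount": True},
]

def answer(question: str) -> list:
    """Naive keyword routing standing in for the NLP layer of a data agent."""
    q = question.lower()
    results = INVOICES
    if "awaiting approval" in q:
        results = [r for r in results if r["status"] == "awaiting_approval"]
    if "discount" in q:
        results = [r for r in results if r["early_pay_discount"]]
    for region in ("emea", "apac"):
        if region in q:
            results = [r for r in results if r["region"].lower() == region]
    return [r["id"] for r in results]

print(answer("Which invoices are awaiting approval in EMEA?"))        # ['INV-001', 'INV-003']
print(answer("Which suppliers offer an early payment discount?"))     # ['INV-002', 'INV-003']
```

The point of the sketch is the shape of the interaction: free-form questions in, structured answers out, with the system data never leaving its governed store.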

Governing autonomous finance workflows

Finance teams will only hand over tasks to agentic AI if they retain control. Finance departments require verifiable audit trails and explainable logic for every action, avoiding networks of disconnected bots.

Industry leaders note that autonomy without trust isn’t acceptable, especially in sensitive industries like finance. Platforms must ensure every AI decision is explainable, auditable, and governed through existing finance controls. This approach helps safely delegate workloads to algorithms while remaining fully compliant and protected.

To enable this trust, every action performed by an AI agent routes through a central policy engine. Before executing any task, the system passes the proposed action through specific autonomy gates that enforce the customer’s business rules, risk thresholds, and compliance requirements. This architecture ensures algorithms manage the bulk of the workload while finance personnel retain total visibility and a complete audit trail.
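The gating pattern described above can be sketched in a few lines: every proposed action passes through rule, risk, and compliance checks, and every decision is appended to an audit trail. The class names, thresholds, and log format here are illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProposedAction:
    agent: str     # which AI agent proposed the action
    task: str      # e.g. "approve_invoice"
    amount: float  # monetary value involved

@dataclass
class PolicyEngine:
    # Illustrative autonomy gates: a compliance rule and a risk threshold
    max_autonomous_amount: float = 10_000.0
    blocked_tasks: frozenset = frozenset({"change_bank_details"})
    audit_log: list = field(default_factory=list)

    def evaluate(self, action: ProposedAction) -> str:
        """Route a proposed action through the gates; every decision is logged."""
        if action.task in self.blocked_tasks:
            decision = "escalate"        # compliance gate: humans only
        elif action.amount > self.max_autonomous_amount:
            decision = "needs_approval"  # risk-threshold gate
        else:
            decision = "execute"         # within the agent's autonomy
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": action.agent,
            "task": action.task,
            "amount": action.amount,
            "decision": decision,
        })
        return decision

engine = PolicyEngine()
print(engine.evaluate(ProposedAction("ap-agent", "approve_invoice", 2_500.0)))   # execute
print(engine.evaluate(ProposedAction("ap-agent", "approve_invoice", 50_000.0)))  # needs_approval
print(len(engine.audit_log))  # 2 entries: every decision leaves an audit trail
```

Centralising the gates in one engine, rather than scattering checks across individual bots, is what makes the audit trail complete.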

Building automated procurement operations

Future agentic finance AI capabilities will automate issue resolution and connect data across systems for faster decision-making.

Capabilities arriving in 2026 include supplier agents designed to manage invoice disputes and payment queries. These agents will automatically telephone suppliers to explain discrepancies, summarise the conversation, and outline the next steps to achieve faster resolutions. Professional agents, meanwhile, will assist clerks with real-time processing questions in natural language, cutting manual effort and delays.

AI must operate as an integral business component rather than a bonus feature, requiring intelligent, secure, and ethical application to drive cost efficiencies and enhance operations. By centralising control and ensuring every automated decision from agentic AI passes through established compliance checks, organisations can safely elevate their finance operations to fully autonomous execution.

See also: Mastercard’s AI payment demo points to agent-led commerce

How Amul is using AI dairy farming to put 36M farmers first https://www.artificialintelligence-news.com/news/amul-ai-dairy-farming-platform-india/ Mon, 23 Feb 2026 09:00:00 +0000 https://www.artificialintelligence-news.com/?p=112344

The post How Amul is using AI dairy farming to put 36M farmers first appeared first on AI News.

AI dairy farming has found its most ambitious deployment yet – not in a Silicon Valley lab nor a European agri-tech campus, but in the villages of Gujarat, India, where 36 lakh (3.6 million) women milk producers are now being served by an AI assistant named Sarlaben.

Amul, the world’s largest dairy cooperative, has launched what it calls Amul AI: a platform built on five decades of cooperative data, designed to give every farmer in its network round-the-clock, personalised guidance in their own language.

Amul AI was launched just ahead of India's AI Impact Summit 2026 and is backed by the Ministry of Electronics and Information Technology (MeitY) together with the EkStep Foundation. It is a test case for whether AI – the kind being debated in boardrooms and policy forums globally – can actually reach the last mile.

Meet Sarlaben: The AI dairy farming assistant

Sarlaben draws from one of India’s most comprehensive agricultural data repositories. It’s accessible via the Amul Farmer mobile app – already downloaded by over 10 lakh (one million) users on Android and iOS – as well as through voice calls for farmers using feature phones or landlines.

The system is integrated with Amul’s Automatic Milk Collection System (AMCS) and the Pashudhan application, allowing it to offer personalised, cattle-specific guidance.

What makes Amul AI substantially different from most agricultural chatbots is the scale of its training data. The platform was built on a digital backbone managing over 200 crore (two billion) milk procurement transactions annually, veterinary treatment records from more than 1,200 doctors covering nearly 3 crore (30 million) cattle, approximately 70 lakh (seven million) artificial inseminations conducted each year, ISRO satellite imagery for fodder production mapping, and a cattle census conducted every five years.

Every animal in the system carries a unique ID, with individual records of feed intake, disease history and milking status. “Amul AI is about taking dependable, verified information directly to the farmer – instantly and in a language they are comfortable with,” said Jayen Mehta, Managing Director of the Gujarat Cooperative Milk Marketing Federation (GCMMF), which markets the Amul brand.

He said that by using decades of structured data and integrating it with Amul's operational systems, the platform will help farmers make timely decisions that improve animal productivity and income.
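The per-animal records described above (a unique ID keyed to feed intake, disease history, and milking status) can be modelled as a simple registry. The field names and schema below are assumptions for illustration, not Amul's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class CattleRecord:
    animal_id: str                                        # unique ID carried by every animal
    feed_intake_kg: list = field(default_factory=list)    # daily feed intake history
    disease_history: list = field(default_factory=list)   # (date, diagnosis) pairs
    milking: bool = True                                  # current milking status

registry: dict[str, CattleRecord] = {}

def register(animal_id: str) -> CattleRecord:
    """Create (or fetch) the record keyed by the animal's unique ID."""
    if animal_id not in registry:
        registry[animal_id] = CattleRecord(animal_id)
    return registry[animal_id]

rec = register("GJ-0001")
rec.feed_intake_kg.append(18.5)
rec.disease_history.append(("2026-02-01", "mastitis"))
print(registry["GJ-0001"].milking)  # True
```

Keying everything to one ID per animal is what lets an assistant like Sarlaben answer cattle-specific questions rather than giving generic advice.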

India’s productivity paradox

India is the world's largest producer of milk, generating 347.87 million tonnes in 2024-25 according to the Department of Animal Husbandry and Dairying – more than three times the US's 102.70 million tonnes. And yet despite leading in volume, India's per-animal milk yield remains among the lowest globally.

The reasons are structural. India’s dairy sector is characterised by small herd sizes, low-quality feed, limited access to veterinary care in rural areas, and widespread lack of awareness about modern breeding and husbandry practices. Amul’s network spans more than 18,600 villages in Gujarat, where farmers supply over 350 lakh litres (35 million litres) of milk daily.

But information asymmetry has long been a bottleneck: a farmer facing a sick animal at midnight in a remote village has few places to turn. That is the gap Amul AI is designed to close.

Available initially in Gujarati – the primary language of the cooperative's farmer base – the platform is built on the government's Bhashini multilingual framework and could, in principle, be extended to 20 Indian languages, covering Amul's presence in 20,000 villages across 20 states.

The cooperative model

The technology story here is inseparable from the institutional one. Amul’s cooperative structure – built over five decades under the original White Revolution – created the data infrastructure that makes Amul AI possible.

Most private agri-tech startups are working backwards: collecting data first, building products second. Amul already had the data. What was needed was a way to make it actionable at the farmer level.

Experts tracking the dairy-tech space see this as significant. Sreeshankar Nair, Founder of Brainwired, a dairy-tech startup, identifies three specific challenges that Amul AI could meaningfully address: farmer awareness, access to quality veterinary guidance, and connectivity to grazing and feed resources.

“If AI can integrate local dialects of Indian languages, India can have White Revolution 2.0,” Nair said, pointing to the transformative potential of vernacular AI in a sector where not every farmer speaks the same dialect.

Saswata Narayan Biswas, Director of the Institute of Rural Management, Anand (IRMA) – the institution closely associated with Amul’s founding ethos – frames it as an AI embedded in a cooperative framework. It becomes “not a technology upgrade, but an instrument of inclusive rural transformation.”

For Biswas, the specific abilities Amul AI brings – predictive disease detection, oestrus tracking, optimised feed formulation, localised weather risk advisories – are abilities Amul had been building for years. AI accelerates and democratises them.

Scale and the test ahead

The launch has drawn backing from the highest levels of government. Gujarat Chief Minister Bhupendra Patel launched the platform and confirmed it will be showcased at the AI Impact Summit 2026. The cooperative has acknowledged MeitY and the EkStep Foundation – an open digital infrastructure nonprofit – as partners in building the AI layer.

Farmers not affiliated with Amul can also access general dairying and animal husbandry information through the app. At its current scale, Amul AI already covers more cattle – nearly 3 crore (30 million) – than most national veterinary databases anywhere in the world.

The harder question, as with most AI deployments at a population scale, is whether the tool will serve those who need it most. The farmers most likely to benefit first – those already comfortable with smartphones, already plugged into Amul’s digital system – may not be the ones with the greatest information deficit.

The rollout of Bhashini-enabled dialect support, the adoption rate among feature-phone users relying on voice calls, and whether AI-driven advisories translate into measurable yield improvements will be the metrics that determine whether this is genuinely White Revolution 2.0.

Amul has built an AI system grounded in half a century of real cooperative transactions, real animals, and real farmers. Such an infrastructure is, arguably, the most credible foundation for AI dairy farming at scale. Whether it fulfils its promise will depend on execution – and on whether Sarlaben's voice can reach the last few miles, the ones that have always been the hardest to cross.

See also: Hitachi bets on industrial expertise to win the physical AI race
