Manufacturing & Engineering AI - AI News
https://www.artificialintelligence-news.com/categories/ai-in-action/manufacturing-engineering-ai/

Cadence expands AI and robotics partnerships with Nvidia, Google Cloud
https://www.artificialintelligence-news.com/news/cadence-expands-ai-and-robotics-partnerships-with-nvidia-google-cloud/
Thu, 16 Apr 2026 10:00:00 +0000

Cadence Design Systems announced two AI-related collaborations at its CadenceLIVE event this week, expanding its work with Nvidia and introducing new integrations with Google Cloud. The Nvidia partnership focuses on combining AI with physics-based simulation and accelerated computing for robotic systems and system-level design.

The companies said the approach targets modelling and deployment in semiconductors and large-scale AI infrastructure, including robotic systems that Nvidia describes as physical AI.

Cadence is integrating its multi-physics simulation and system design tools with Nvidia’s CUDA-X libraries, AI models, and Omniverse-based simulation environment. The tools model thermal and mechanical interactions so engineers can assess how systems behave under real-world operating conditions. They also extend beyond chip design to cover infrastructure components like networking and power systems. The combined platform lets engineers simulate system behaviour before physical deployment. The companies said system performance depends on how compute, networking and power systems operate together.

The collaboration also includes robotics development. Cadence’s physics engines, which model how real-world materials interact, are being linked with Nvidia’s AI models used to train AI-driven robotic systems in simulated environments.

“We’re working with you in the board on robotic systems,” said Nvidia CEO Jensen Huang during the event.

Training robots in simulation reduces the need for real-world data collection. The companies said these datasets must be generated with physics-based models rather than gathered from physical systems. Simulation-generated datasets are used to train models, with outcomes dependent on the accuracy of the underlying physics models.
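
As a toy illustration of that pipeline (not Cadence's or Nvidia's actual tooling; the physics model and values here are invented), a simple closed-form physics model can generate a synthetic dataset, and any inaccuracy in that model flows directly into the training data:

```python
# Sketch of simulation-generated training data, under the assumption of a
# trivial physics model (free fall, no drag). Real robotics pipelines use
# far richer multi-physics engines; the principle is the same.
def simulate_drop(height_m: float, g: float = 9.81, dt: float = 0.01):
    """Return (time, height) samples for an object dropped from height_m."""
    t, h, samples = 0.0, height_m, []
    while h > 0:
        samples.append((round(t, 2), h))
        t += dt
        h = height_m - 0.5 * g * t * t  # closed-form kinematics
    return samples

# Each trajectory becomes training data; a wrong g or a missing drag term
# would silently bias every example the model learns from.
dataset = [(h0, simulate_drop(h0)) for h0 in (0.5, 1.0, 2.0)]
```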

“The more accurate (generated training data) is, the better the model will be,” said Cadence CEO Anirudh Devgan.

Nvidia said industrial robotics companies are using its Isaac simulation frameworks and Omniverse-based digital twin tools to test robotic systems before deployment. Companies including ABB Robotics, FANUC, YASKAWA, and KUKA are integrating these simulation tools into virtual commissioning workflows to test production systems in software prior to physical rollout.

Nvidia said these systems are used to model complex robot operations and entire production lines using physically accurate digital environments.

Chip design automation in the cloud

Separately, Cadence introduced a new AI agent designed to automate later-stage chip design tasks, focusing on physical layout: translating circuit designs into silicon implementations. The release builds on an earlier agent introduced this year for front-end chip design, where circuits are defined in code-like descriptions; that earlier system handles circuit definition, while the new agent turns those designs into physical layouts on silicon.

The system will be available through Google Cloud. Cadence said the integration combines its electronic design automation tools with Google’s Gemini models for automated design and verification workflows. The cloud deployment allows teams to run those workloads without relying on on-premise compute infrastructure.

Cadence’s ChipStack AI Super Agent platform uses model-based reasoning with native design tools to coordinate work across multiple design stages, interpreting design requirements and executing tasks automatically at each stage.

Cadence reported productivity gains of up to 10 times in early deployments in design and verification tasks. The company did not disclose specific customer implementations.

“We help build AI systems, and then those AI systems can help improve the design process,” Devgan said.

The companies said simulation tools are used to validate systems in virtual environments before physical deployment. Digital twin models allow engineers to test design trade-offs, evaluate performance scenarios, and optimise configurations in software.

They added that the cost and complexity of large-scale data centre infrastructure limit the use of trial-and-error deployment methods.

Quantum models announcement

In a separate announcement, Nvidia introduced a family of open-source quantum AI models called NVIDIA Ising. The models are named after the Ising model, a mathematical framework used to represent interactions in physical systems.
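
For context, the textbook form of the Ising model assigns an energy to a configuration of coupled binary spins. The announcement does not detail how Nvidia's models build on it, but the framework itself is:

```latex
% Classical Ising Hamiltonian: J_{ij} are pairwise interaction strengths,
% h_i are external fields, and each spin s_i takes a value of +1 or -1.
H(s) = -\sum_{\langle i,j \rangle} J_{ij}\, s_i s_j \, - \, \sum_i h_i s_i,
\qquad s_i \in \{-1, +1\}
```

Low-energy configurations of this Hamiltonian encode how neighbouring elements prefer to align, which is what makes it a generic template for representing interactions in physical systems.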

The models are designed to support quantum processor calibration and quantum error correction. Nvidia said the models deliver up to 2.5 times faster performance and three times higher accuracy in decoding processes used for error correction.

“AI is essential to making quantum computing practical,” Huang said. “With Ising, AI becomes the control plane – the operating system of quantum machines – transforming fragile qubits to scalable and reliable quantum-GPU systems.”

(Photo by Homa Appliances)

See also: Hyundai expands into robotics and physical AI systems

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

KPMG: Inside the AI agent playbook driving enterprise margin gains
https://www.artificialintelligence-news.com/news/kpmg-inside-ai-agent-playbook-enterprise-margin-gains/
Wed, 01 Apr 2026 15:24:01 +0000

Global AI investment is accelerating, yet KPMG data shows the gap between enterprise AI spend and measurable business value is widening fast.

The headline figure from KPMG’s first quarterly Global AI Pulse survey is blunt: despite global organisations planning to spend a weighted average of $186 million on AI over the next 12 months, only 11 percent have reached the stage of deploying and scaling AI agents in ways that produce enterprise-wide business outcomes.

However, the central finding is not that AI is failing; 64 percent of respondents say AI is already delivering meaningful business outcomes. The problem is that “meaningful” is doing a lot of heavy lifting in that sentence, and the distance between incremental productivity gains and the kind of compounding operational efficiency that moves the needle on margin is, for most organisations, still substantial.

The architecture of a performance gap

KPMG’s report distinguishes between what it labels “AI leaders” (i.e. organisations that are scaling or actively operating agentic AI) and everyone else. The gap in outcomes between these two cohorts is striking.

Steve Chase, Global Head of AI and Digital Innovation at KPMG International, said: “The first Global AI Pulse results reinforce that spending more on AI is not the same as creating value. Leading organisations are moving beyond enablement, deploying AI agents to reimagine processes and reshape how decisions and work flow across the enterprise.”

Among AI leaders, 82 percent report that AI is already delivering meaningful business value. Among their peers, that figure drops to 62 percent. That 20-percentage-point spread might look modest in isolation, but it compounds quickly when you consider what it reflects: not just better tooling, but fundamentally different deployment philosophies.

The organisations in that 11 percent are deploying agents that coordinate work across functions, route decisions without human intermediation at every step, surface enterprise-wide insights from operational data in near real-time, and flag anomalies before they escalate into incidents.

In IT and engineering functions, 75 percent of AI leaders are using agents to accelerate code development versus 64 percent of their peers. In operations, where supply-chain orchestration is the primary use case, the split is 64 percent versus 55 percent. These are not marginal differences in tool adoption rates; they reflect different levels of process re-architecture.

Most enterprises that have deployed AI have done so by layering models onto existing workflows (e.g. a co-pilot here, a summarisation tool there…) without redesigning the process those tools sit inside. That produces incremental gains.

The organisations closing the performance gap have inverted this approach: they are redesigning the process first, then deploying agents to operate within the redesigned structure. The difference in return on AI spend between these two approaches, over a three-to-five-year horizon, is likely to be the defining competitive variable in several industries.

What $186 million actually buys—and what it does not

The investment figures in the KPMG data deserve scrutiny. A weighted global average of $186 million per organisation sounds substantial, but the regional variance tells a more interesting story.

ASPAC leads at $245 million, followed by the Americas at $178 million and EMEA at $157 million. Within ASPAC, organisations in China and Hong Kong are investing $235 million on average; within the Americas, US organisations average $207 million.

These figures represent planned spend across model licensing, compute infrastructure, professional services, integration, and the governance and risk management apparatus needed to operate AI responsibly at scale.

The question is not whether $186 million is too much or too little; it is what proportion of that figure is being allocated to the operational infrastructure required to derive value from the models themselves. The survey data suggests that most organisations are still underweighting this latter category.

Compute and licensing costs are visible and relatively easy to budget for. The friction costs – the engineering hours spent integrating AI outputs with legacy ERP systems, the latency introduced by retrieval-augmented generation pipelines built on top of poorly structured data, and the compliance overhead of maintaining audit trails for AI-assisted decisions in regulated industries – tend to surface late in deployment cycles and often exceed initial estimates.

Vector database integration is a useful example. Many agentic workflows depend on the ability to retrieve relevant context from large, unstructured document repositories in real time. Building and maintaining the infrastructure for this – selecting between providers such as Pinecone, Weaviate, or Qdrant, embedding and indexing proprietary data, and managing refresh cycles as underlying data changes – adds meaningful engineering complexity and ongoing operational cost that rarely appears in initial AI investment proposals. 
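
A provider-agnostic sketch of that retrieval layer (the embedding function here is a deliberate toy; a production system would use a real embedding model and one of the managed stores named above):

```python
import zlib
import numpy as np

# Toy embedding: hash tokens into a fixed-size vector. Stands in for a real
# embedding model purely to make the index mechanics concrete.
def embed(text: str, dim: int = 256) -> np.ndarray:
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[zlib.crc32(token.encode()) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class VectorIndex:
    """Minimal in-memory stand-in for a vector database."""

    def __init__(self):
        self._ids = []
        self._vecs = []

    def upsert(self, doc_id: str, text: str) -> None:
        # Re-embedding on upsert is the refresh cycle the article mentions:
        # if source documents change but vectors are not refreshed, agents
        # retrieve stale context and answers quietly degrade.
        vec = embed(text)
        if doc_id in self._ids:
            self._vecs[self._ids.index(doc_id)] = vec
        else:
            self._ids.append(doc_id)
            self._vecs.append(vec)

    def query(self, text: str, top_k: int = 1):
        q = embed(text)
        scores = [float(np.dot(q, v)) for v in self._vecs]
        order = np.argsort(scores)[::-1][:top_k]
        return [self._ids[i] for i in order]

index = VectorIndex()
index.upsert("runbook-17", "pump overheating maintenance runbook")
index.upsert("netcfg-02", "network switch configuration notes")
hits = index.query("pump overheating")
```

The upsert path is where the refresh-cycle cost lives: every change to a source document means re-embedding and re-indexing, or retrieval quietly serves stale context.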

When that infrastructure is absent or poorly maintained, agent performance degrades in ways that are often difficult to diagnose, as the model’s behaviour is correct relative to the context it receives, but that context is stale or incomplete.

Governance as an operational variable, not a compliance exercise

Perhaps the most practically useful finding in the KPMG survey is the relationship between AI maturity and risk confidence.

Among organisations still in the experimentation phase, just 20 percent feel confident in their ability to manage AI-related risks. Among AI leaders, that figure rises to 49 percent. Meanwhile, 75 percent of global leaders cite data security, privacy, and risk as ongoing concerns regardless of maturity level, but maturity changes how those concerns are operationalised.

This is an important distinction for boards and risk functions that tend to frame AI governance as a constraint on deployment. The KPMG data suggests the opposite dynamic: governance frameworks do not slow AI adoption among mature organisations; they enable it. The confidence to move faster – to deploy agents into higher-stakes workflows, to expand agentic coordination across functions – correlates directly with the maturity of the governance infrastructure surrounding those agents.

In practice, this means that organisations treating governance as a retrospective compliance layer are doubly disadvantaged. They are slower to deploy, because every new use case triggers a fresh governance review, and they are more exposed to operational risk, because the absence of embedded governance mechanisms means that edge cases and failure modes are discovered in production rather than in testing.

Organisations that have embedded governance into the deployment pipeline itself (e.g. model cards, automated output monitoring, explainability tooling, and human-in-the-loop escalation paths for low-confidence decisions) are the ones operating with the confidence that allows them to scale.
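
One concrete form of a human-in-the-loop escalation path can be sketched as follows (the threshold and action names are illustrative, not drawn from the survey):

```python
def route_decision(action: str, confidence: float, threshold: float = 0.8):
    """Execute high-confidence agent decisions automatically; escalate the
    rest to a human reviewer. In practice the threshold would be set per
    decision category and tuned against monitored outcomes."""
    if confidence >= threshold:
        return ("execute", action)
    return ("escalate_to_human", action)

routed = [route_decision("approve_refund", c) for c in (0.93, 0.41)]
```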

“Ultimately, there is no agentic future without trust and no trust without governance that keeps pace,” Chase explains. “The survey makes clear that sustained investment in people, training and change management is what allows organisations to scale AI responsibly and capture value.”

Regional divergence and what it signals for global deployment

For multinationals managing AI programmes across regions, the KPMG data flags material differences in deployment velocity and organisational posture that will affect global rollout planning.

ASPAC is advancing most aggressively on agent scaling; 49 percent of organisations there are scaling AI agents, compared with 46 percent in the Americas and 42 percent in EMEA. ASPAC also leads on the more complex capability of orchestrating multi-agent systems, at 33 percent.

The barrier profiles also differ in ways that carry real operational implications. In both ASPAC and EMEA, 24 percent of organisations cite a lack of leadership trust and buy-in as a primary barrier to AI agent deployment. In the Americas, that figure drops to 17 percent.

Agentic systems, by definition, make or initiate decisions without per-instance human approval. In organisational cultures where decision accountability is tightly concentrated at the senior level, this can generate institutional resistance that no amount of technical capability resolves. The fix is governance design; specifically, defining in advance what categories of decision an agent is authorised to make autonomously, what triggers escalation, and who carries accountability for agent-initiated outcomes.
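
Such a governance design might take the shape of an explicit authority table. Everything below (categories, limits, owners) is invented for illustration:

```python
# Hypothetical decision-authority policy: which decision categories an agent
# may take autonomously, what triggers escalation, and who is accountable.
POLICY = {
    "reorder_stock":  {"autonomous": True,  "escalate_if": "order_value > 50000", "owner": "supply_chain_lead"},
    "issue_refund":   {"autonomous": True,  "escalate_if": "amount > 500",        "owner": "cs_manager"},
    "change_pricing": {"autonomous": False, "escalate_if": "always",              "owner": "cfo"},
}

def is_autonomous(category: str) -> bool:
    """An unlisted category defaults to not-autonomous: deny by default."""
    entry = POLICY.get(category)
    return bool(entry and entry["autonomous"])
```

Deny-by-default matters here: an agent encountering a decision category nobody thought to classify should escalate, not act.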

The expectation gap around human-AI collaboration is also worth noting for anyone designing agent-assisted workflows at a global scale.

East Asian respondents anticipate AI agents leading projects at a rate of 42 percent. Australian respondents prefer human-directed AI at 34 percent. North American respondents lean toward peer-to-peer human-AI collaboration at 31 percent. These differences will affect how agent-assisted processes need to be designed in different regional deployments of the same underlying system, adding localisation complexity that is easy to underestimate in centralised platform planning.

One data point in the KPMG survey that deserves particular attention from CFOs and boards: 74 percent of respondents say AI will remain a top investment priority even in the event of a recession. This is either a sign of genuine conviction about AI’s role in cost structure and competitive positioning, or it reflects a collective commitment that has not yet been tested against actual budget pressure. Probably both, in different proportions across different organisations.

What it does indicate is that the window for organisations still in the experimentation phase is not indefinite. If the 11 percent of AI leaders continue to compound their advantage (and the KPMG data suggests the mechanisms for doing so are in place) the question for the remaining 89 percent is not whether to accelerate AI deployment, but how to do so without compounding the integration debt and governance deficits that are already constraining their returns.

See also: Hershey applies AI across its supply chain operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

Hershey applies AI across its supply chain operations
https://www.artificialintelligence-news.com/news/hershey-applies-ai-across-its-supply-chain-operations/
Wed, 01 Apr 2026 10:00:00 +0000

Artificial intelligence is moving beyond software and further into the physical side of business. Companies in food production and logistics are starting to use data systems to support day-to-day decisions, not long-term planning.

That change is visible in The Hershey Company’s latest strategy update. At its Investor Day, the company said it plans to use AI in its operations, from sourcing analytics to plant automation and fulfilment, with a focus on how the business runs behind the scenes.

Hershey said it plans to apply AI to sourcing and fulfilment. This includes using data to guide how ingredients are bought and how products are distributed. In its Investor Day material, the company said it aims to build “a faster, smarter and more resilient supply chain powered by automation and AI-enabled decision making”.

Supply chains in food and snack markets are under steady pressure: costs can change quickly; demand can shift by season, market, or product category; and retailers still expect goods to arrive on time and in the right mix.

Hershey said its digital planning tools are meant to connect different parts of the business. The company said those systems are designed to reduce waste and improve inventory levels. It also said digital operational planning can connect data in the supply chain and help raise service levels.

From reporting to action

Part of Hershey’s update is its use of the phrase “AI-enabled decision-making.” The company said its approach will link sourcing and delivery more closely and plans to use automated fulfilment systems for custom assortments and to improve speed to market.

This framing is a useful way to read the strategy: the hard task is turning data into decisions that help operations move faster and with fewer mistakes.

This is where AI is starting to play a bigger role, according to Hershey. The value comes from how operations are connected.

AI in the supply chain and plant operations

The changes also extend into manufacturing. Hershey said it will increase plant automation to improve manufacturing efficiency and use AI in more parts of its operating model. What is changing is how AI fits into those systems. Instead of sitting apart from production, it is being positioned as part of the process used to guide planning and support execution.

That may help companies improve planning and respond more quickly when conditions change. In a business where input costs and consumer demand can change often, even small gains in timing can matter.

Food and snack companies deal with constant swings in input costs and demand. Ingredients like cocoa and sugar are affected by weather, trade flows, and supply issues. Companies still have to keep factories running and products moving through retail channels.

Hershey’s plan to use sourcing analytics is one example of how AI may be applied in that setting. By analysing supplier data and market trends, the company may improve how it buys raw materials and manages risk. The company also said it wants to better connect workers in its operations. That suggests the strategy is not only about automation. It is also about coordination in the business.

Hershey said it plans to “incorporate AI in every stage of its operations,” including sourcing analytics and worker connectivity, as well as automated fulfilment and plant automation.

That makes the company a useful case study for a wider change in enterprise AI. Firms are moving away from narrow pilots and toward broader use in business functions. In that model, AI is treated as a part of supply and delivery systems.

CEO Kirk Tanner framed the plan around growth and execution, saying, “The strategy is clear. The team is ready. The next chapter of growth and leading performance starts now”.

Where this may lead

This kind of change is likely to spread as more companies look for ways to connect data with operational decisions. Hershey’s strategy shows how AI is starting to take a larger role in industries built on physical goods. The technology may sit in the background, but its role in daily operations is becoming harder to ignore.

(Photo by Janne Simoes)

See also: JPMorgan begins tracking how employees use AI at work

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

SAP and ANYbotics drive industrial adoption of physical AI
https://www.artificialintelligence-news.com/news/sap-and-anybotics-drive-industrial-adoption-physical-ai/
Tue, 31 Mar 2026 15:20:53 +0000

Heavy industry relies on people to inspect hazardous, dirty facilities. It’s expensive, and putting humans in these zones carries obvious safety risks. Swiss robot maker ANYbotics and software company SAP are trying to change that.

ANYbotics’ four-legged autonomous robots will be connected straight into SAP’s backend enterprise resource planning software. Instead of treating a robot as a standalone asset, this turns it into a mobile data-gathering node within an industrial IoT network.

This initiative shows that hardware innovation can now effectively connect with established business workflows. Underscoring that broader trend, SAP is sponsoring this year’s AI & Big Data Expo North America at the San Jose McEnery Convention Center, CA, an event that is fittingly co-located with the IoT Tech Expo and Intelligent Automation & Physical AI Summit.

When equipment breaks at a chemical plant or offshore rig, it costs a fortune. People do routine inspections to catch these issues early, but humans get tired and plants are massive. Robots, on the other hand, can walk the floor constantly, carrying thermal, acoustic, and visual sensors. Hook those sensors into SAP, and a hot pump instantly generates a maintenance request without waiting for a human to report it.

Cutting out the reporting lag

Usually, finding a problem and logging a work order are two disconnected steps. A worker might hear a weird noise in a compressor, write it down, and type it into a computer hours later. By the time the replacement part gets approved, the machine might be wrecked.

Connecting ANYbotics to SAP eliminates that delay. The robot’s onboard AI processes what it sees and hears instantly. If it hears an irregular motor frequency, it doesn’t just flash a warning on a separate screen; it uses APIs to tell the SAP asset management module directly. The system immediately checks for spare parts, figures out the cost of potential downtime, and schedules an engineer.
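
The flow might look like the sketch below. The schema and field names are invented for illustration and are not SAP's actual API; a real integration would use SAP's published interfaces.

```python
from dataclasses import dataclass

@dataclass
class AnomalyEvent:
    asset_id: str
    fault: str       # e.g. "irregular motor frequency"
    location: str
    severity: str

def to_work_order(event: AnomalyEvent) -> dict:
    # Translate the robot's onboard detection into the maintenance-request
    # payload an ERP integration would receive. Illustrative schema only.
    return {
        "notification_type": "maintenance_request",
        "equipment": event.asset_id,
        "description": f"{event.fault} at {event.location}",
        "priority": "high" if event.severity == "critical" else "medium",
    }

event = AnomalyEvent("PUMP-0042", "irregular motor frequency", "Hall B / Line 3", "critical")
payload = to_work_order(event)
```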

This automates the flow of information from the floor to management. It also means machinery gets judged on hard, consistent numbers instead of a human inspector’s subjective opinion.

Putting robots in heavy industry isn’t like installing software in an office—companies have to deal with unreliable infrastructure. Factories usually have awful internet connectivity due to thick concrete, metal scaffolding, and electromagnetic interference.

To make this work, the setup relies on edge computing. It takes too much bandwidth to constantly stream high-def thermal video and lidar data to the cloud. So, the robots crunch most of that data locally. Onboard processors figure out the difference between a machine running normally and one that’s dangerously overheating. They only send the crucial details (i.e. the specific fault and its location) back to SAP.
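
In code, that local reduction is a simple pattern (the limit and record shape here are illustrative):

```python
# Edge-side summarisation: crunch a high-rate sensor stream locally and
# transmit only a compact fault record -- or nothing at all.
def summarise_thermal(readings, asset, limit_c=85.0):
    peak = max(readings)
    if peak <= limit_c:
        return None                     # normal operation: send nothing
    return {"asset": asset, "peak_c": peak, "over_limit": True}

raw = [71.2, 73.5, 96.8, 74.0] * 1000   # stand-in for a raw telemetry stream
msg = summarise_thermal(raw, "PUMP-0042")
```

Thousands of samples collapse into one small record upstream, which is what keeps bandwidth needs within reach of a factory network.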

To handle the network issues, many early adopters build private 5G networks. This gives them the coverage they need across huge facilities where regular Wi-Fi fails. It also locks down access, keeping the robot’s data safe from interception.

Of course, security is a major issue. A walking robot packed with cameras is effectively a roaming vulnerability. Companies must use zero-trust network protocols to constantly verify the robot’s identity and limit what SAP modules it can touch. If the robot gets hacked, the system has to cut its connection instantly to stop the attackers from moving laterally into the corporate network.

These robots generate a massive amount of unstructured data as they walk around. Turning raw audio and thermal images into the neat tables SAP requires is difficult.

If companies don’t manage this right, maintenance teams will drown in alerts. A robot that is too sensitive might spit out hundreds of useless warnings a day, and the SAP dashboard ends up ignored. IT teams have to set strict rules before turning the system on: exact thresholds for what triggers a real maintenance ticket and what just needs to be watched.

The setup usually uses middleware to translate the robot’s telemetry into SAP’s language. This software acts as a filter, throwing out the noise so only actual problems reach the ERP system. The data lake storing all this information also needs to be organised for future machine learning projects. Fixing broken machines is the short-term goal; the long-term payoff is using years of robot data to predict failures before they happen.
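
A middleware filter of the kind described might reduce to a small triage function; the thresholds are invented for illustration:

```python
def triage(alert: dict) -> str:
    """Map a raw robot alert to an action so only real problems reach the
    ERP dashboard: 'ticket' creates a work order, 'watch' logs for trending,
    'drop' discards noise. Thresholds here are illustrative."""
    temp = alert.get("temp_c", 0.0)
    repeats = alert.get("repeats", 1)
    if temp >= 95.0 or (temp >= 85.0 and repeats >= 3):
        return "ticket"
    if temp >= 85.0:
        return "watch"
    return "drop"
```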

Ensuring a successful physical AI deployment

Dropping robots into a factory naturally makes people nervous. The project’s success often comes down to how human resources handles it. Workers usually look at the robots and assume layoffs are next.

Management has to be clear about why the robots are there. The goal is to get people out of dangerous areas like high-voltage zones or toxic chemical sectors to reduce injuries. The robot collects the data, and the human engineer shifts to analysing that data and doing the actual repairs.

This requires retraining. Workers who used to walk the perimeter now have to read SAP dashboards, manage automated tickets, and work with the robots. They have to trust the sensors, and management has to make sure operators know they can take manual control if something unexpected happens.

Companies need to take the rollout slowly. Because syncing physical robots with enterprise software is complicated, large-scale rollouts should start as small, targeted pilots.

The first test should be in one specific area with known hazards but rock-solid internet. This lets IT watch the data flow between the hardware and SAP in a controlled space. At this stage, the main job is making sure the data matches reality. If the robot sees one thing and SAP records another, it has to be audited and fixed daily.

Once the data pipeline actually works, the company can add more robots and connect other systems, like automated parts ordering. IT chiefs have to keep checking whether their private networks can handle more robots, while security teams update their defences against new threats.

If companies treat these autonomous inspectors as an extension of their corporate data architecture, they get a massive amount of information about their physical assets. But pulling it off means getting the network infrastructure, the data rules, and the human element exactly right.

See also: The rise of invisible IoT in enterprise operations

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

BMW puts humanoid robots to work in Germany–and Europe’s factories are watching
https://www.artificialintelligence-news.com/news/bmw-humanoid-robots-manufacturing-europe-leipzig/
Fri, 13 Mar 2026 09:00:00 +0000

The post BMW puts humanoid robots to work in Germany–and Europe’s factories are watching appeared first on AI News.

]]>
Europe’s factory floors have a new kind of colleague. BMW Group has deployed humanoid robots in manufacturing in Germany for the first time, launching a pilot project at its Leipzig plant with AEON–a wheeled humanoid built by Hexagon Robotics. 

It is the first automotive deployment of AEON anywhere in the world, and it marks something of a line in the sand for European industry: physical AI is no longer a North American or East Asian story.

The announcement, made on March 9, 2026, comes backed by hard data from a prior US trial. In 2025, BMW ran a ten-month pilot at its Spartanburg, South Carolina, plant using Figure AI’s Figure 02 robot. The humanoid supported production of over 30,000 BMW X3s, working 10-hour shifts and moving a total of over 90,000 components. 

Leipzig is now the direct heir to those lessons.

A robot built for work, not demos

AEON, developed by Hexagon’s Zurich-based robotics division, is a deliberately industrial machine. Arnaud Robert, President of Hexagon Robotics, made the philosophy plain at a Munich event earlier this month: “We’re not in the dancing business–we’re in the working business.” That ethos is visible in every design decision.

Rather than walking on two legs, AEON moves on wheels–a choice made after extensive testing of locomotion systems, with Hexagon concluding that on factory-grade flat floors, wheels are significantly more efficient in both speed and energy use. It stands 1.65 metres tall, weighs 60 kilograms, reaches 2.5 metres per second, and can autonomously swap its own battery in 23 seconds–enabling around-the-clock operation without human intervention.

Its 22 integrated sensors–peripheral cameras, time-of-flight, infrared, SLAM cameras, and microphones–give it full 360-degree real-time spatial awareness, including the ability to perform quality inspection tasks that conventional stationary robots cannot. 

Its human-like torso allows a wide variety of grippers, hand elements, and scanning tools to be flexibly docked, which is precisely what BMW needs for multifunctional deployment across different production environments.

Phased rollout, deliberate strategy

AEON’s first test deployment at Leipzig took place in December 2025. A further test run is planned for April 2026, ahead of a full pilot phase launching in summer 2026, where two AEON units will work simultaneously across two use cases–focusing on high-voltage battery assembly and component manufacturing for exterior parts.

Leipzig was not an arbitrary choice. It is BMW’s most technologically comprehensive German plant, combining battery production, injection moulding, press shop, body shop, and final assembly under one roof, meaning a successful deployment there effectively validates physical AI across the full production spectrum.

To anchor this work institutionally, BMW has established a Centre of Competence for Physical AI in Production, consolidating expertise across the group and creating a defined evaluation path for technology partners–from lab testing through to full pilot phases. 

As Felix Haeckel, Team Lead for the centre, put it: “We are pooling our expertise to make knowledge on AI and robotics widely usable within the company.”

The infrastructure underneath

What makes BMW’s approach notable is that AEON is not landing on a blank factory floor. BMW has systematically dismantled data silos across its production network, replacing them with a uniform data platform that ensures all information is consistent, standardised, and accessible at all times–the architecture that allows AI agents to operate autonomously and learn continuously. 

The humanoid robot is, in effect, the physical layer of a system that has been years in the making. AEON runs on NVIDIA Jetson Orin onboard computers and was trained largely through simulation using NVIDIA’s Isaac platform–a method that allowed Hexagon to develop core locomotion capabilities in weeks rather than months.

The project also involves Microsoft Azure for scalable model development and Maxon’s actuators for locomotion.

Why this matters beyond Leipzig

The broader signal here is one that the enterprise AI world is already tracking closely. Deloitte’s State of AI in the Enterprise 2026 report, surveying over 3,200 senior leaders across 24 countries, found that 58% of companies are already using physical AI in some capacity, a figure set to reach 80% within two years, with Asia Pacific leading in early implementation.

BMW’s Leipzig pilot is a proof point in that trajectory: that humanoid robots in manufacturing have moved past the lab and the press release, and are being stress-tested against the unforgiving standards of real industrial production. As Milan Nedeljković, BMW’s Board Member for Production, put it: “The symbiosis of engineering expertise and artificial intelligence opens up completely new possibilities in production.”

The question now is not whether humanoid robots belong on the factory floor. It is how fast the rest of European industry follows.

See also: Ai2: Building physical AI with virtual simulation data


The post BMW puts humanoid robots to work in Germany–and Europe’s factories are watching appeared first on AI News.

]]>
How multi-agent AI economics influence business automation https://www.artificialintelligence-news.com/news/how-multi-agent-ai-economics-business-automation/ Thu, 12 Mar 2026 15:01:20 +0000 https://www.artificialintelligence-news.com/?p=112642 Managing the economics of multi-agent AI now dictates the financial viability of modern business automation workflows. Organisations progressing past standard chat interfaces into multi-agent applications face two primary constraints. The first issue is the thinking tax; complex autonomous agents need to reason at each stage, making the reliance on massive architectures for every subtask too […]

The post How multi-agent AI economics influence business automation appeared first on AI News.

]]>
Managing the economics of multi-agent AI now dictates the financial viability of modern business automation workflows.

Organisations progressing past standard chat interfaces into multi-agent applications face two primary constraints. The first is the thinking tax: complex autonomous agents need to reason at each stage, making reliance on massive architectures for every subtask too expensive and slow for practical enterprise use.

Context explosion is the second hurdle: these advanced workflows produce up to 1,500 percent more tokens than standard chat formats because every interaction demands the resending of full system histories, intermediate reasoning, and tool outputs. Across extended tasks, this token volume drives up expenses and causes goal drift, a scenario where agents diverge from their initial objectives.
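To see why resending full histories inflates token counts so quickly, consider a toy model in which each agent step appends a fixed number of tokens and the entire history is retransmitted every turn. The figures are arbitrary, chosen purely to illustrate the growth pattern:

```python
# Toy model: cumulative tokens processed when each turn resends the full
# history, versus sending only the new message. Numbers are illustrative.
def cumulative_tokens(turn_sizes, resend_history=True):
    total, history = 0, 0
    for size in turn_sizes:
        history += size
        total += history if resend_history else size
    return total

turns = [500] * 20  # 20 agent steps of ~500 tokens each
with_history = cumulative_tokens(turns, resend_history=True)    # 105,000
without_history = cumulative_tokens(turns, resend_history=False)  # 10,000
print(with_history, without_history)
```

Because the retransmitted history grows with every step, cost grows roughly quadratically with the number of turns, which is why long agentic workflows become disproportionately expensive.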

Evaluating architectures for multi-agent AI

To address these governance and efficiency hurdles, hardware and software developers are releasing highly optimised tools aimed directly at enterprise infrastructure.

NVIDIA recently introduced Nemotron 3 Super, an open architecture featuring 120 billion parameters, specifically engineered to execute complex agentic AI systems.

Available immediately, NVIDIA’s framework blends advanced reasoning features to help autonomous agents finish tasks efficiently and accurately for improved business automation. The system relies on a hybrid mixture-of-experts architecture combining three major innovations to deliver up to five times higher throughput and twice the accuracy of the preceding Nemotron Super model; during inference, only 12 billion of the 120 billion parameters are active.

Mamba layers provide four times the memory and compute efficiency, while standard transformer layers manage the complex reasoning requirements. A latent technique boosts accuracy by engaging four expert specialists for the cost of one during token generation. The system also predicts multiple future tokens simultaneously, accelerating inference speeds threefold.

Operating on the Blackwell platform, the architecture utilises NVFP4 precision. This setup reduces memory needs and makes inference up to four times faster than FP8 configurations on Hopper systems, all without sacrificing accuracy.
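The figures quoted above lend themselves to some back-of-envelope arithmetic: per-token compute scales with the 12 billion active parameters rather than the full 120 billion, while weight memory scales with total parameters and bits per weight. The sketch below is illustrative only, ignoring activations, KV cache, and runtime overheads:

```python
# Back-of-envelope sketch, not vendor figures: shows why active-parameter
# count drives compute while total parameters and precision drive weight memory.
TOTAL_PARAMS = 120e9   # total parameters in the mixture-of-experts model
ACTIVE_PARAMS = 12e9   # parameters active per generated token

compute_fraction = ACTIVE_PARAMS / TOTAL_PARAMS  # fraction of a dense pass
weights_fp8_gb = TOTAL_PARAMS * 1.0 / 1e9        # 8 bits = 1 byte per weight
weights_fp4_gb = TOTAL_PARAMS * 0.5 / 1e9        # 4 bits = 0.5 byte per weight

print(compute_fraction, weights_fp8_gb, weights_fp4_gb)  # 0.1 120.0 60.0
```

Halving bits per weight halves the memory footprint, which is consistent with the article's claim that FP4-class precision reduces memory needs relative to FP8.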

Translating automation capability into business outcomes

The system offers a one-million-token context window, allowing agents to keep the entire workflow state in memory and directly addressing the risk of goal drift. A software development agent can load an entire codebase into context simultaneously, enabling end-to-end code generation and debugging without requiring document segmentation.

Within financial analysis, the system can load thousands of pages of reports into memory, improving efficiency by removing the need to re-reason across lengthy conversations. High-accuracy tool calling ensures autonomous agents reliably navigate massive function libraries, preventing execution errors in high-stakes environments such as autonomous security orchestration within cybersecurity.

Industry leaders – including Amdocs, Palantir, Cadence, Dassault Systèmes, and Siemens – are deploying and customising the model to automate workflows across telecom, cybersecurity, semiconductor design, and manufacturing.

Software development platforms like CodeRabbit, Factory, and Greptile are integrating it alongside proprietary models to achieve higher accuracy at lower costs. Life sciences firms like Edison Scientific and Lila Sciences will use it to power agents for deep literature search, data science, and molecular understanding.

The architecture also powers the AI-Q agent to the top position on DeepResearch Bench and DeepResearch Bench II leaderboards, highlighting its capacity for multistep research across large document sets while maintaining reasoning coherence.

Finally, the model claimed the top spot on Artificial Analysis for efficiency and openness, featuring leading accuracy among models of its size.

Implementation and infrastructure alignment

The model is built to handle complex subtasks inside multi-agent systems, and deployment flexibility remains a priority for leaders driving business automation.

NVIDIA released the model with open weights under a permissive license, letting developers deploy and customise it across workstations, data centres, or cloud environments. It is packaged as an NVIDIA NIM microservice to aid this broad deployment from on-premises systems to the cloud.

The architecture was trained on synthetic data generated by frontier reasoning models. NVIDIA published the complete methodology, encompassing over 10 trillion tokens of pre- and post-training datasets, 15 training environments for reinforcement learning, and evaluation recipes. Researchers can further fine-tune the model or build their own using the NeMo platform.

Any executive planning a digitisation rollout must address context explosion and the thinking tax upfront to prevent goal drift and cost overruns in agentic workflows. Establishing comprehensive architectural oversight ensures these sophisticated agents remain aligned with corporate directives, yielding sustainable efficiency gains and advancing business automation across the organisation.

See also: Ai2: Building physical AI with virtual simulation data


The post How multi-agent AI economics influence business automation appeared first on AI News.

]]>
New partnership to offer smart robots for dangerous environments https://www.artificialintelligence-news.com/news/new-partnership-to-offer-ai-for-robotics-for-work-in-dangerous-environments/ Wed, 11 Mar 2026 11:42:00 +0000 https://www.artificialintelligence-news.com/?p=112598 ADLINK Technology has signed a strategic alliance and joint development agreement with Under Control Robotics, the company behind the robotics startup Noble Machines. The two firms will combine ADLINK’s edge AI platforms with Noble Machines’ autonomy software to create a new generation of physical AI, general-purpose robots for modern manufactories and engineering plants. The work […]

The post New partnership to offer smart robots for dangerous environments appeared first on AI News.

]]>
ADLINK Technology has signed a strategic alliance and joint development agreement with Under Control Robotics, the company behind the robotics startup Noble Machines. The two firms will combine ADLINK’s edge AI platforms with Noble Machines’ autonomy software to create a new generation of physical AI, general-purpose robots for modern factories and engineering plants. The work focuses on bipedal, bimanual machines (in other words, human-like robots) designed to operate in demanding industrial settings.

The partnership will integrate ADLINK’s DLAP edge AI platform with Noble Machines’ autonomy and whole-body control software. The system is intended to provide reasoning, sensing, and motion control for robots handling heavy loads. Initial target sectors include manufacturing, mining, construction, energy, petrochemicals, and public utilities: industries that currently report labour shortages and often involve risky environments for human workers.

ADLINK’s hardware is built on the NVIDIA Jetson Thor platform. In a press release, the companies state DLAP offers multi-voltage feeds and high-bandwidth sensor interfaces, quoting “up to eight” GMSL camera connections, four Ethernet ports, and 5G or Wi-Fi modules. Systems can operate across a wide temperature range and comply with IEC 60068 standards for shock and vibration.

ADLINK’s hardware will combine with Noble Machines’ autonomy software, which manages perception, reasoning, and coordinated whole-body motion in robots. Robots operating in adverse conditions ideally need to replicate the mobility and manipulation abilities of human workers, so they can replace at-risk humans without significant retooling or altering existing working environments.

Ethan Chen, general manager of ADLINK’s Edge Computing Platforms business unit, said the agreement will extend the company’s edge computing hardware into emerging general-purpose robotic systems, moving from support for the current DLAP platform to a jointly-developed computing platform based on Jetson Thor.

Wei Ding, chief executive of Under Control Robotics, said ADLINK’s experience in industrial hardware complements Noble Machines’ software, specifically its whole-body control systems. The collaboration addresses hardware durability and supply chain integration issues that can affect industrial robot deployment. The two partners will initially pursue possible deployments in the construction and energy industries, where tasks commonly expose workers to dust, heat, heavy loads, and vibration. Such tasks are typically difficult to mechanise because they require on-the-spot decision-making, mobility, and manual handling.

By combining one another’s specialisations, the companies may be able to offer a turnkey solution for customers unwilling to invest in what would otherwise be experimental technology and hardware deployments. The AI element would supply the real-time reactions and decision-making that humans working in difficult conditions would otherwise provide; conventional software, by contrast, would need every possible edge case hard-coded into its control systems.

The success of any systems emerging from the partnership will hinge on whether highly costly robots can react correctly in unforeseen situations without endangering themselves or human co-workers, or disrupting wider workflows on site.

(Image source: “Robot” by 1lenore is licensed under CC BY 2.0.)


The post New partnership to offer smart robots for dangerous environments appeared first on AI News.

]]>
ABB: Physical AI simulation boosts ROI for factory automation https://www.artificialintelligence-news.com/news/abb-physical-ai-simulation-secures-factory-automation-roi/ Tue, 10 Mar 2026 17:22:41 +0000 https://www.artificialintelligence-news.com/?p=112561 A new ABB and NVIDIA partnership shows physical AI simulation is driving real ROI in factory automation and solving production hurdles. Manufacturers have often found it difficult to make intelligent robotics work reliably outside testing environments. The core issue is the gap between digital training models and actual factory floors, where lighting, material physics, and […]

The post ABB: Physical AI simulation boosts ROI for factory automation appeared first on AI News.

]]>
A new ABB and NVIDIA partnership shows physical AI simulation is driving real ROI in factory automation and solving production hurdles.

Manufacturers have often found it difficult to make intelligent robotics work reliably outside testing environments. The core issue is the gap between digital training models and actual factory floors, where lighting, material physics, and part variations refuse to behave as they do on a screen.

Historically, this friction has forced engineering teams to fall back on physical prototypes, delaying product launches and driving up costs.

Overcoming the digital to physical AI simulation divide

The partnership between ABB Robotics and NVIDIA attempts to close this gap by bringing industrial-grade physical AI to manufacturing facilities. Slated for release in the second half of 2026, RobotStudio HyperReality is already drawing interest from a global customer base.

By embedding NVIDIA Omniverse libraries within its existing RobotStudio software, ABB provides a platform for physically accurate digital testing. On an operational level, this integration allows engineers to cut deployment costs by up to 40 percent and accelerate time to market by as much as 50 percent.

Realising these efficiency gains demands a workflow where production leaders design, test, and validate complete automation cells before installing any hardware. To do this, the system exports a fully parameterised station – encompassing the robots, sensors, lighting, kinematics, and parts – as a USD file straight into the Omniverse environment.

Inside this digital space, a virtual controller runs the identical firmware found on the physical machine, enabling a 99 percent behavioural match between the digital and physical realms.
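The article does not specify how the 99 percent behavioural match is measured; one plausible way to quantify it is the fraction of sampled controller outputs where the virtual trajectory stays within a tolerance of the physical one. The sketch below is a hypothetical illustration of that metric, not ABB's actual methodology:

```python
# Hypothetical sim-to-real comparison: score the fraction of timesteps where
# the virtual controller's trajectory stays within tolerance of the physical
# robot's. Tolerance and sample values are invented for illustration.
def behavioural_match(virtual, physical, tol=0.5):
    """Fraction of timesteps where |virtual - physical| <= tol (e.g. in mm)."""
    assert len(virtual) == len(physical)
    hits = sum(1 for v, p in zip(virtual, physical) if abs(v - p) <= tol)
    return hits / len(virtual)

virtual = [10.0, 20.1, 30.2, 40.0, 50.4]
physical = [10.1, 20.0, 30.0, 40.2, 51.2]  # final sample drifts past tolerance
print(round(behavioural_match(virtual, physical), 2))  # 0.8
```

Because the virtual controller runs the identical firmware, any residual mismatch under such a metric should come from physics modelling rather than control logic.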

Rather than manually programming movements, computer vision models learn using synthetic images generated inside the software. When combined with Absolute Accuracy technology, this method cuts positioning errors down from 8-15 mm to approximately 0.5 mm, providing high precision for industrial applications.

Marc Segura, President of ABB Robotics, said: “Combining RobotStudio with the physically accurate simulation power of NVIDIA Omniverse libraries, we have closed technology’s long-standing ‘sim-to-real’ gap—a huge milestone to deploying physical AI with industrial-grade precision, for real-world customer applications.”

Validating factory automation before deployment

Early adopters are already validating these capabilities on active production lines. 

Foxconn, for example, is testing the software for consumer device assembly—an area where frequent product changes and delicate metal components complicate traditional automation. By generating synthetic data to train their systems virtually, Foxconn achieves high accuracy on the factory floor while anticipating a reduction in setup time and the elimination of costly physical testing.

Similarly, Workr – a California-based automation provider – integrates its WorkrCore platform with ABB hardware trained via Omniverse. At the NVIDIA GTC 2026 event in San Jose, Workr intends to showcase systems capable of onboarding new parts in minutes without requiring specialised programming skills.

Deepu Talla, VP of Robotics and Edge AI at NVIDIA, commented: “The industrial sector needs high-fidelity simulation to bridge the gap between virtual training and real-world deployment of AI-driven robotics at scale.

“Integrating NVIDIA Omniverse libraries into RobotStudio brings advanced simulation and accelerated computing to ABB’s virtual controller technology, accelerating how thousands of manufacturers bring complex products to market.” 

The hardware ecosystem is also expanding to edge computing. ABB is evaluating the integration of NVIDIA’s Jetson edge platform into its Omnicore controllers, a step that would facilitate real-time inference across existing robotic fleets.

Adopting this type of digital-first simulation for physical AI can reduce setup and commissioning times by up to 80 percent. As AI moves from software applications to hardware operations, preparing data pipelines and upskilling engineering teams to work with synthetic data will dictate which manufacturers maintain a competitive edge.

See also: Agentic AI in finance speeds up operational automation


The post ABB: Physical AI simulation boosts ROI for factory automation appeared first on AI News.

]]>
Google makes its industrial robotics AI play official–and this time, it means business https://www.artificialintelligence-news.com/news/google-industrial-robotics-ai-physical-ai-intrinsic/ Wed, 04 Mar 2026 08:00:00 +0000 https://www.artificialintelligence-news.com/?p=112499 When Google folds a moonshot into its core operations, it’s not cleaning house. It’s placing a bet. On February 25, Alphabet-owned Intrinsic–which builds AI models and software designed to make industrial robotics more accessible–officially joined Google.  The company will remain a distinct group within Google, working closely with Google DeepMind and tapping into Gemini AI models and […]

The post Google makes its industrial robotics AI play official–and this time, it means business appeared first on AI News.

]]>
When Google folds a moonshot into its core operations, it’s not cleaning house. It’s placing a bet. On February 25, Alphabet-owned Intrinsic–which builds AI models and software designed to make industrial robotics more accessible–officially joined Google. 

The company will remain a distinct group within Google, working closely with Google DeepMind and tapping into Gemini AI models and Google Cloud. No purchase price was disclosed.

On the surface, this looks like a routine internal reshuffle. It isn’t.

From moonshot to mandate

Intrinsic graduated into an independent Alphabet-owned company in 2021 after five years of development within Alphabet’s X, the moonshot research division–the same factory that produced Waymo and Wing. Its mission from the start: make industrial robotics AI accessible to manufacturers who don’t have armies of specialist engineers.

While hardware like robotic arms has become cheaper, programming them remains incredibly complex, often requiring hundreds of hours of manual coding by specialised engineers, with code that varies by robot. Intrinsic’s answer is Flowstate, a web-based platform that allows users to build robotic applications without having to write thousands of lines of code.

The platform is designed to be hardware-, software-, and AI-model-agnostic. Think of it less as a product and more as an operating layer–one that Google CEO Sundar Pichai has reportedly compared directly to Android. “He said this is the Android of robotics,” Intrinsic CEO Wendy Tan White said, noting that Pichai worked on Chrome and Android before becoming CEO. 

Why now, why Google?

The timing isn’t arbitrary. The sequence of hiring Boston Dynamics’ CTO, releasing a standalone robotics SDK, and now absorbing Intrinsic represents a deliberate consolidation of robotics capability inside Google’s core. Taken together, these moves position Google to offer manufacturers something no competitor has assembled quite as cleanly: AI models from DeepMind, deployment software from Intrinsic, and cloud infrastructure from Google Cloud–all under one roof.

Last month, Google also teamed up with Boston Dynamics to integrate Gemini into Atlas humanoid robots built for manufacturing environments, while Google DeepMind hired the former CTO of Boston Dynamics in November. 

The industrial robotics AI market Google is chasing is not small. McKinsey projects that the market for general-purpose robots could reach US$370 billion by 2040. 

What it means for the enterprise

For enterprise decision-makers, the more interesting signal here isn’t the technology–it’s the accessibility shift. Google plans to integrate Intrinsic’s robotics development platform and vision models with its broader AI ecosystem, combining advanced reasoning, perception and learning capabilities with industrial-grade robotics software to allow machines to interpret sensor data better, adapt to dynamic environments and execute complex tasks. 

Intrinsic has also expanded through acquisitions–acquiring the Open Source Robotics Corp. in 2022, the for-profit arm of the foundation behind the Robot Operating System (ROS). And its commercial pipeline is already in motion: in October 2025, Intrinsic formed a strategic partnership with Foxconn focused on developing general-purpose intelligent robots for full factory automation within electronics manufacturing. 

White framed the integration in terms enterprise leaders will find hard to ignore: production economics, operational transformation, and what she described as truly advanced manufacturing — all within reach once Google’s infrastructure is fully behind it.

That’s a significant claim. But with Gemini, DeepMind, and Google Cloud now aligned behind it, the infrastructure to back it up is, for the first time, actually there.

See also: Physical AI adoption boosts customer service ROI


The post Google makes its industrial robotics AI play official–and this time, it means business appeared first on AI News.

]]>
Hitachi bets on industrial expertise to win the physical AI race https://www.artificialintelligence-news.com/news/hitachi-physical-ai-industrial-expertise/ Mon, 23 Feb 2026 07:00:00 +0000 https://www.artificialintelligence-news.com/?p=112339 Physical AI – the branch of artificial intelligence that controls robots and industrial machinery in the real world – has a hierarchy problem. At the top, OpenAI and Google are scaling multimodal foundation models. In the middle, Nvidia is building the platforms and tools for physical AI development. And then there is a third camp: […]

The post Hitachi bets on industrial expertise to win the physical AI race appeared first on AI News.

]]>
Physical AI – the branch of artificial intelligence that controls robots and industrial machinery in the real world – has a hierarchy problem. At the top, OpenAI and Google are scaling multimodal foundation models. In the middle, Nvidia is building the platforms and tools for physical AI development.

And then there is a third camp: industrial manufacturers like Hitachi and Germany’s Siemens, which are making the quieter but arguably more grounded argument that you cannot train machines to navigate the physical world without first understanding it.

That argument is now moving from boardroom strategy to factory floor deployment, as Hitachi revealed in a recent interview with Nikkei Asia.

Why Physical AI needs a better model

Kosuke Yanai, deputy director of Hitachi’s Centre for Technology Innovation-Artificial Intelligence, is direct about what separates viable physical AI from the theoretical kind. “Physical AI cannot be implemented in society without a systematic understanding that begins with foundational knowledge of physics and industrial equipment,” he told Nikkei.

Hitachi’s pitch is that it already holds much of that foundational knowledge – accumulated over decades of building railways, power infrastructure, and industrial control systems. The company has thermal fluid simulation technology that models the behaviour of gases and liquids, and signal-processing tools for monitoring equipment condition – what Yanai describes as the engineering foundation underpinning Hitachi’s ‘extensive knowledge of product design and control logic construction.’

Daikin and JR East

While Hitachi’s overarching physical AI architecture – the Integrated World Infrastructure Model (IWIM), which it describes as a mixture-of-experts system integrating multiple specialised models and data sets – remains in the concept verification stage, two real-world deployments signal that the underlying approach is already producing results.

In collaboration with Daikin Industries, Hitachi has deployed an AI system that diagnoses malfunctions in commercial air-conditioner manufacturing equipment. The system, trained on equipment maintenance records, procedure manuals, and design drawings, can now identify which component is likely failing when an anomaly is detected – the kind of operational intuition that previously existed only in the heads of experienced engineers.

With East Japan Railway (JR East), Hitachi has built an AI that identifies the root cause of malfunctions in the control devices running the Tokyo metropolitan area’s railway traffic management system, and then assists operators in formulating a response plan. In a network where delays ripple across millions of daily journeys, the ability to accelerate fault diagnosis carries real operational weight.

The R&D pipeline: Cutting development time

Hitachi’s physical AI push is also showing up in its research output. In December 2025, the company published findings from two projects presented at ASE 2025, a top-tier software engineering conference, that address a persistent bottleneck in industrial AI: the time and effort required to write and adapt control software.

In the automotive sector, Hitachi and its subsidiary Astemo developed a system that uses retrieval-augmented generation to automatically produce integration test scripts for vehicle electronic control units (ECUs) – pulling from hardware-specific API information and frontline engineering knowledge. In a pilot involving multi-core ECU testing, the technology reduced integration testing man-hours by 43% compared to manual execution.
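Hitachi has not published implementation details, but the retrieval step of such a system can be sketched in miniature. The snippet below is a hypothetical illustration — the API docs, function names, and keyword-overlap ranking are all stand-ins; a production system would use vector embeddings and an LLM call to generate the actual test script.

```python
# Minimal sketch of the retrieval step in retrieval-augmented generation
# for test scripts. All document snippets and names are hypothetical; a
# real system would use embeddings and an LLM, not keyword overlap.

API_DOCS = {
    "can_send": "can_send(channel, msg_id, payload) transmits a CAN frame.",
    "adc_read": "adc_read(pin) returns the raw ADC value for a sensor pin.",
    "core_sync": "core_sync(barrier_id) blocks until all cores reach the barrier.",
}

def retrieve(query: str, docs: dict, k: int = 2) -> list:
    """Rank docs by word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(task: str) -> str:
    """Assemble an LLM prompt from the task plus retrieved API context."""
    context = "\n".join(retrieve(task, API_DOCS))
    return f"Relevant ECU APIs:\n{context}\n\nWrite a test script that: {task}"

prompt = build_prompt("transmits a CAN frame from each core after a sync barrier")
print(prompt)
```

Grounding generation in hardware-specific documentation, as the pilot did, means the model only sees APIs that actually exist on the target ECU rather than inventing calls.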

In logistics, the company developed variability management technology that modularises robot control software into reusable components structured around a robot operating system (ROS). By mapping out the environmental variables and operational requirements of different warehouse settings in advance, the system lets operators adapt robotic picking-and-placing workflows to new products or layouts without rewriting software from scratch.
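The variability-management idea — site parameters select reusable modules instead of triggering a rewrite — can be sketched as follows. The component names and profile fields here are hypothetical illustrations, not Hitachi's actual ROS packages.

```python
# Sketch of variability management: warehouse-specific parameters select
# reusable control components instead of rewriting the workflow.
# Component names and profile fields are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class WarehouseProfile:
    product_weight_kg: float
    shelf_height_m: float

def select_gripper(profile: WarehouseProfile) -> str:
    # Heavier products need a mechanical gripper; light ones can use suction.
    return "mechanical_gripper" if profile.product_weight_kg > 2.0 else "suction_gripper"

def select_motion_planner(profile: WarehouseProfile) -> str:
    # Tall shelving needs full 3D planning; low shelving can use a faster planar planner.
    return "rrt_3d" if profile.shelf_height_m > 1.5 else "planar"

def build_pipeline(profile: WarehouseProfile) -> list:
    """Compose a pick-and-place pipeline from reusable modules."""
    return ["vision_detect", select_gripper(profile), select_motion_planner(profile), "place"]

# A new site only needs a new profile, not new control code.
print(build_pipeline(WarehouseProfile(product_weight_kg=0.4, shelf_height_m=2.0)))
# → ['vision_detect', 'suction_gripper', 'rrt_3d', 'place']
```

Mapping the environmental variables out in advance is what makes the composition step mechanical rather than bespoke engineering.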

Safety as a structural requirement

One thread that runs through all of Hitachi’s physical AI work is its emphasis on safety guardrails – not as a compliance checkbox, but as an engineering constraint baked into system design. Yanai told Nikkei that the company is integrating its control and reliability technology from social infrastructure development to prevent AI outputs from deviating from human-approved operating parameters.

This includes input validation to screen out data that models should not be trained on, output verification to ensure machine actions do not endanger people or property, and real-time monitoring of the AI model itself for operational anomalies.
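The three layers described above — input validation, output verification, and runtime monitoring — can be sketched as a wrapper around a control model. This is a toy illustration under assumed names and limits, not Hitachi's architecture.

```python
# Sketch of the three guardrail layers: validate inputs, verify outputs
# against human-approved limits, and log anomalies in real time.
# The limits and the toy model are hypothetical illustrations.

SPEED_LIMIT = 80.0  # maximum actuator command the plant allows

def validate_input(reading: float) -> bool:
    """Input validation: reject physically implausible sensor values."""
    return 0.0 <= reading <= 200.0

def verify_output(command: float) -> float:
    """Output verification: clamp commands to human-approved limits."""
    return min(max(command, 0.0), SPEED_LIMIT)

def monitored_step(model, reading: float, log: list):
    """Run one control step with all three guardrail layers applied."""
    if not validate_input(reading):
        log.append(("rejected_input", reading))   # screened out before the model
        return None
    raw = model(reading)
    safe = verify_output(raw)
    if safe != raw:
        log.append(("clamped_output", raw))       # real-time anomaly record
    return safe

log = []
toy_model = lambda r: r * 1.5                     # stand-in for a learned controller
assert monitored_step(toy_model, 500.0, log) is None   # implausible input rejected
assert monitored_step(toy_model, 70.0, log) == 80.0    # 105.0 clamped to the limit
```

The point of the structure is that the model never gets the final word: its output passes through deterministic checks derived from the plant's operating envelope.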

It is a meaningful distinction. Physical AI systems fail in the real world, not in a sandbox. The stakes for an AI controlling railway signalling or factory robotics are categorically different from those governing a chatbot.

Infrastructure to match ambition

On the infrastructure side, Hitachi Vantara – the group’s data and digital infrastructure arm – is positioning itself as an early adopter of Nvidia’s RTX PRO Servers, built on the RTX PRO 6000 Blackwell Server Edition GPU, designed to accelerate agentic and physical AI workloads. The hardware is being paired with Hitachi’s iQ platform and used to build digital twins – virtual replicas of physical systems – that can simulate everything from grid fluctuations to robotic motion at scale.

The IWIM concept, meanwhile, is designed to connect Nvidia’s open-source Cosmos physical AI development platform with specialised Japanese-language LLMs and visual language models via the model context protocol (MCP) – essentially a framework to stitch together the models, simulation tools, and industrial datasets that physical AI systems require.

The broader race in physical AI is far from settled. But Hitachi’s position – that domain expertise and operational data are as important as model architecture – is increasingly hard to dismiss, particularly as deployments with partners like Daikin and JR East begin to demonstrate what that expertise is actually worth in practice.

Sources: Nikkei Asia (Feb 21, 2026); Hitachi R&D (Dec 24, 2025); Hitachi Vantara Blog (Aug 27, 2025)

See also: Alibaba enters physical AI race with open-source robot model RynnBrain

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Hitachi bets on industrial expertise to win the physical AI race appeared first on AI News.

PepsiCo is using AI to rethink how factories are designed and updated https://www.artificialintelligence-news.com/news/pepsico-is-using-ai-to-rethink-how-factories-are-designed-and-updated/ Fri, 30 Jan 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=111925 For many large companies, the most useful form of AI right now has little to do with writing emails or answering questions. At PepsiCo, AI is being tested in places where mistakes are costly and changes are hard to undo — factory layouts, production lines, and physical operations. That shift is visible in how PepsiCo […]

The post PepsiCo is using AI to rethink how factories are designed and updated appeared first on AI News.

For many large companies, the most useful form of AI right now has little to do with writing emails or answering questions. At PepsiCo, AI is being tested in places where mistakes are costly and changes are hard to undo — factory layouts, production lines, and physical operations.

That shift is visible in how PepsiCo is using AI and digital twins to model and adjust its manufacturing facilities before making changes in the real world. Rather than experimenting with chat interfaces or office tools, the company is applying AI to one of its core problems: how to configure factories faster, with less risk, and fewer disruptions.

Digital twins are virtual models of physical systems. In manufacturing, they can simulate equipment placement, material flow, and production speed. When combined with AI, these models can test thousands of scenarios that would be impractical — or expensive — to try on a live production line.
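The economics of that scenario-testing can be illustrated with a toy example: score thousands of candidate line configurations against a simplified throughput model and keep the best. The model, parameters, and ranges below are hypothetical stand-ins, not PepsiCo's actual twin.

```python
# Toy digital-twin search: evaluate thousands of line configurations in a
# simplified throughput model instead of trialling them physically.
# The throughput model and parameter ranges are hypothetical illustrations.

import random

def simulate_throughput(buffer_size: int, line_speed: float) -> float:
    """Hypothetical twin: larger buffers absorb stoppages, but speed
    beyond what the buffers can cover increases jam losses."""
    jam_loss = max(0.0, line_speed - 1.2 * buffer_size)
    return line_speed * 60 - jam_loss * 45

random.seed(0)
candidates = [
    (random.randint(1, 20), random.uniform(5.0, 30.0))
    for _ in range(5000)  # thousands of virtual trials in milliseconds
]
best = max(candidates, key=lambda c: simulate_throughput(*c))
print(f"best buffer={best[0]}, speed={best[1]:.1f}")
```

Each virtual trial costs microseconds; the equivalent physical trial would mean stopping a live line, which is exactly the expense the article describes avoiding.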

PepsiCo has been working with partners to apply AI-driven digital twins to parts of its manufacturing network, with early pilots focused on improving how facilities are designed and adjusted over time.

The goal is not automation for its own sake. It is cycle time. Instead of taking weeks or months to validate changes through physical trials, teams can test configurations virtually, identify problems earlier, and move faster when updates are needed.

From planning bottleneck to operational shortcut

In large consumer goods companies, factory changes tend to move slowly. Even small adjustments — a new line layout, different packaging flow, or equipment upgrade — can require long planning cycles, approvals, and staged testing. Each delay has knock-on effects on supply chains and product availability.

Digital twins offer a way around that bottleneck. By simulating production environments, teams can see how changes might affect throughput, safety, or downtime before touching the actual facility.

PepsiCo’s early pilots showed faster validation times and signs of throughput improvement at initial sites, though the company has not published detailed metrics yet. What matters more than the numbers is the pattern: AI is being used to compress decision cycles in physical operations, not to replace workers or remove human judgment.

This kind of use case fits a broader trend. Enterprises that move beyond pilot projects often focus on narrow, well-defined problems where AI can reduce friction in existing workflows. Manufacturing, logistics, and healthcare operations are showing more traction than open-ended knowledge work.

Why PepsiCo treats AI as operations engineering, not office productivity

PepsiCo’s approach also highlights a quieter shift in how AI programs are being justified inside large firms. The value is tied to operational outcomes — time saved, fewer disruptions, better planning — rather than general claims about productivity.

That distinction matters. Many enterprise AI efforts stall because they struggle to connect usage with measurable impact. Tools get deployed, but workflows stay the same.

Digital twins change that dynamic because they sit directly inside planning and engineering processes. If a simulated change cuts weeks off a factory upgrade, the benefit is visible. If it reduces downtime risk, operations teams can measure that over time.

This focus on process change, rather than tools, mirrors what is happening in other sectors. In healthcare, for example, Amazon is testing an AI assistant inside its One Medical app that uses patient history to reduce repetitive intake and support care interactions, according to comments from CEO Andy Jassy reported this week. The assistant is embedded in the care workflow, not offered as a standalone feature.

Both cases point to the same lesson: AI adoption moves faster when it fits into how work already gets done, instead of asking teams to invent new habits.

Why this matters for other enterprises

PepsiCo’s digital-twin work is unlikely to be unique for long. Large manufacturers across food, chemicals, and industrial goods face similar planning constraints and cost pressures. Many already use simulation software. AI adds speed and scale to those models.

What is more interesting is what this says about the next phase of enterprise AI adoption.

First, the centre of gravity is shifting away from broad, generic tools toward focused systems tied to specific decisions. Second, success depends less on model quality and more on data quality, process ownership, and governance. A digital twin is only as useful as the operational data feeding it.

Third, this kind of AI work tends to stay out of the spotlight. It does not generate flashy demos, but it can reshape how companies plan capital spending and manage risk.

That also explains why many firms remain cautious. Building and maintaining accurate digital twins takes time, cross-team coordination, and deep knowledge of physical systems. The payoff comes from repeated use, not one-off wins.

PepsiCo’s manufacturing AI work is a quiet signal worth watching

In AI coverage, it is easy to focus on new models, agents, or interfaces. Stories like PepsiCo’s point in a different direction. They show AI being treated as infrastructure — something that sits underneath daily decisions and gradually changes how work flows through an organisation.

For enterprise leaders, the takeaway is not to copy the technology stack. It is to look for places where planning delays, validation cycles, or operational risk slow the business down. Those friction points are where AI has the best chance of sticking.

PepsiCo’s digital-twin pilots suggest that the factory floor may be one of the most practical testing grounds for AI today — not because it is trendy, but because the impact is easier to see when time and mistakes have a clear cost.

(Photo by NIKHIL)

See also: Deloitte sounds alarm as AI agent deployment outruns safety frameworks


The post PepsiCo is using AI to rethink how factories are designed and updated appeared first on AI News.

Bosch’s €2.9 billion AI investment and shifting manufacturing priorities https://www.artificialintelligence-news.com/news/bosch-e2-9-billion-ai-investment-and-shifting-manufacturing-priorities/ Thu, 08 Jan 2026 10:00:00 +0000 https://www.artificialintelligence-news.com/?p=111519 Factories are producing more data than they can process, and companies like Bosch are using AI to close the gap. Cameras watch production lines, sensors track machines, and software records each step of processes. However, much of that information can’t create faster decisions or lead to fewer breakdowns. For large manufacturing firms, the missed opportunity […]

The post Bosch’s €2.9 billion AI investment and shifting manufacturing priorities appeared first on AI News.

Factories are producing more data than they can process, and companies like Bosch are using AI to close the gap. Cameras watch production lines, sensors track machines, and software records each step of the process. However, much of that information never translates into faster decisions or fewer breakdowns. For large manufacturing firms, that missed opportunity is pushing AI from small trials into core operations.

The shift helps explain why Bosch plans to invest about €2.9 billion in artificial intelligence by 2027, according to The Wall Street Journal. The spending is aimed at manufacturing, supply chain management, and perception systems, areas where the company sees AI as a way to improve how physical systems behave in real conditions.

How Bosch uses AI to catch manufacturing problems earlier

In manufacturing, delays and defects frequently start small. A minor variation in materials or machine settings can ripple through a production line. Bosch has been applying AI models to camera feeds and sensor data to detect quality issues earlier.

Instead of catching defects after products are finished, systems can flag problems while items are still on the line. That gives workers time to change operations before waste increases. For high-volume manufacturing, earlier detection can reduce scrap and limit the need for rework.

Equipment maintenance is another area under pressure. Many factories still rely on fixed schedules or manual inspections, which can miss early warning signs of errors or failure. AI models trained on vibration and temperature data can help predict when a machine is likely to fail.

This allows maintenance teams to plan repairs instead of reacting to breakdowns. The aim is to reduce unplanned downtime without replacing equipment too early. Over time, this approach can extend the working life of machines while keeping production more stable.
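The kind of model described above — learning a machine's normal vibration signature and flagging drift from it — can be sketched with a simple statistical baseline. The thresholds and synthetic signal are hypothetical; production systems typically use learned models over many sensor channels.

```python
# Sketch of predictive maintenance from vibration data: flag readings that
# drift beyond the machine's recent baseline. The z-score threshold and
# synthetic signal are hypothetical illustrations.

import statistics

def anomaly_scores(readings: list, window: int = 20, z_limit: float = 3.0) -> list:
    """Return indices whose reading sits more than z_limit standard
    deviations from the trailing-window mean."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1e-9  # guard against a flat baseline
        if abs(readings[i] - mu) / sigma > z_limit:
            flagged.append(i)
    return flagged

# Healthy vibration around 1.0 mm/s, then a bearing fault spikes the signal.
signal = [1.0 + 0.01 * (i % 5) for i in range(40)] + [2.5]
print(anomaly_scores(signal))  # → [40]
```

The alert fires while the fault is still incipient, which is what lets maintenance be scheduled rather than reactive.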

Making supply chains more adaptable

Supply chains are also part of the investment focus. Disruptions that became visible during the pandemic have not fully disappeared, and manufacturers are still dealing with shifting demand and transport delays.

AI systems can help forecast needs, track parts across sites, and adjust plans when conditions change. Even small improvements in planning accuracy can have a broad effect when applied across hundreds of factories and suppliers.

Bosch is also funding perception systems, which help machines understand their surroundings. These systems combine input from cameras, radar, and other sensors with AI models that can recognise objects, judge distance, or spot changes in the environment. They are used in areas like factory automation, driver assistance, and robotics, where machines must respond quickly and safely. In these environments, AI is reacting to real-world conditions as they happen.

Why edge computing matters on the factory floor

Much of this work takes place at the edge. In factories and vehicles, sending data to a distant cloud system and waiting for a response can add delay or create risk if connections fail. Running AI models locally allows systems to respond in real time and keep operating even when networks are unreliable.
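The edge-first pattern described here — decide locally, ship telemetry when the link allows — can be sketched as follows. The toy model and uploader are hypothetical stand-ins for an on-device network and a cloud endpoint.

```python
# Sketch of edge-first operation: inference runs locally so the line keeps
# moving when the network is down; telemetry is queued and uploaded later.
# The model and uploader here are hypothetical stand-ins.

from collections import deque

class EdgeNode:
    def __init__(self, model, uploader):
        self.model = model            # runs on-device, no network needed
        self.uploader = uploader      # cloud call that may fail
        self.backlog = deque()        # telemetry awaiting upload

    def process(self, sensor_frame: float) -> str:
        decision = self.model(sensor_frame)   # local, real-time decision
        self.backlog.append((sensor_frame, decision))
        self.flush()                          # best-effort, never blocks control
        return decision

    def flush(self) -> None:
        while self.backlog:
            try:
                self.uploader(self.backlog[0])
            except ConnectionError:
                return                # keep data on-site until the link returns
            self.backlog.popleft()

def offline_uploader(item):
    raise ConnectionError("link down")

node = EdgeNode(model=lambda x: "stop" if x > 0.8 else "pass", uploader=offline_uploader)
assert node.process(0.9) == "stop"    # control decision made despite the outage
assert len(node.backlog) == 1         # telemetry retained for later upload
```

The control path never depends on the network; the cloud path is strictly best-effort, which is the property that makes the pattern safe for factories and vehicles.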

It also limits how much sensitive data leaves a site. For industrial companies, that can matter as much as speed, especially when production processes are closely guarded.

Cloud systems still play a role, though mostly behind the scenes. Training models, managing updates, and analysing trends across locations often happen in central environments.

Many manufacturers are moving toward a hybrid setup, using cloud systems for coordination and learning, and edge systems for action. The pattern is becoming common in industrial firms, not just Bosch.

Scaling AI beyond small trials

The scale of the investment matters: small AI tests can show promise, but rolling them out across all operations takes funding, skilled staff, and long-term commitment.

Bosch executives have described AI as a way to support workers, not replace them, and as a tool to handle complexity that humans cannot manage on their own. That view reflects a broader shift in industry, where AI is treated less as an experiment and more as basic infrastructure.

What Bosch’s manufacturing AI strategy shows in practice

Rising energy costs, labour shortages, and tighter margins leave less room for inefficiency. Automation alone no longer solves those problems. Companies are looking for systems that can adjust to changing conditions without constant manual input.

Bosch’s €2.9 billion commitment sits in that wider shift. Other large manufacturers are making similar moves, often without public fanfare, by upgrading factories and retraining staff. What stands out is the focus on operational use rather than customer-facing features.

Taken together, these efforts show how end-user companies are applying AI today. The work is less about bold claims and more about reducing waste, improving uptime, and making complex systems easier to manage. For industrial firms, that practical focus may define how AI delivers value over time.

(Photo by P. L.)

See also: Agentic AI scaling requires new memory architecture


The post Bosch’s €2.9 billion AI investment and shifting manufacturing priorities appeared first on AI News.
