Physical AI - AI News
https://www.artificialintelligence-news.com/categories/ai-and-us/physical-ai/
Wed, 15 Apr 2026 11:10:35 +0000

Drones get smarter for large farm holdings
https://www.artificialintelligence-news.com/news/agricultural-drones-get-smarter-for-large-farm-holdings/
Wed, 15 Apr 2026 11:03:00 +0000

Singapore-based DroneDash Technologies and GEODNET have formed a joint venture to be called GEODASH Aerosystems, to build an agricultural spraying drone for large industrial farms. The companies say the near-production drone technology is designed to remove the need to map a field to be treated before each flight, and the need to rebuild flight plans when conditions on the ground have changed.

The aircraft will be capable of perceiving its surroundings during flight, adjusting its behaviour in response to the visuals it captures, and undertaking crop spraying.

Current agricultural spraying drones were adapted from general-purpose models developed outside the industry, which meant that on farms, human operators had to survey and map each field, generate a flight plan for each spraying operation, and repeat the mapping process when canopy conditions altered. The technology is designed to be cost-effective on very large estates – especially palm oil plantations, where crops are planted in rows – where this necessary preparation and adjustment time can limit how much land a team can cover.

GEODASH says its platform is built to remove the need for such preparation stages. The drone will combine DroneDash’s AI vision system with GEODNET’s positioning correction tech to achieve accuracy down to one centimetre. The drones can interpret rows, trees, terrain, and zones of operation while in the air. They are capable of adjusting their altitude and spray rates as conditions vary.
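As a rough illustration of the kind of control loop implied here – every name, threshold, and unit below is a hypothetical sketch, not DroneDash's actual system – altitude and spray rate might be derived from perceived canopy and terrain like this:

```python
from dataclasses import dataclass

@dataclass
class Perception:
    canopy_density: float    # 0.0 (bare ground) to 1.0 (full canopy)
    terrain_height_m: float  # ground elevation under the drone

def plan_step(p: Perception, base_altitude_m: float = 3.0,
              base_rate_l_min: float = 2.0) -> tuple:
    """Return (target altitude, spray rate) for the next control step.

    Hypothetical rule: hold a fixed clearance above the terrain, scale
    spray rate with canopy density, and shut off over near-bare ground.
    """
    altitude = p.terrain_height_m + base_altitude_m
    rate = base_rate_l_min * p.canopy_density if p.canopy_density > 0.1 else 0.0
    return altitude, rate
```

The point of such a loop is that the flight plan is computed continuously from what the drone perceives, rather than fixed in advance by a survey.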

The dividing line in smart robotics is whether machines can act in changing environments. Structured spaces – assembly lines, warehouses, etc. – present simpler operating parameters. However, in the case of agriculture, real-time decisions need to be made autonomously. Agricultural land, particularly plantation terrain with mixed-age crops and changing plant growth, means drones have to recognise all relevant physical features and alter flight paths or treatment patterns according to unpredictable conditions.

In this sense, the perfect agricultural machine would need to combine the abilities of perception and localisation, and be able to adapt its operations to environmental conditions. Deterministic systems are less suited to these types of use case, as every random edge-case can't be hard-coded.

GEODASH Aerosystems’ proposed solution isn’t a fully unsupervised machine that can make its own decisions anywhere on a farm property, but it will be capable of operating without pre-existing maps inside geo-fenced boundaries. It will also be able to log each decision in case of the need for adjustment by operators to get the best results.

The nature of agriculture (and the natural world more generally) is that replanting, pruning, soil erosion or a host of other changes can make static maps increasingly less accurate over time. A platform that can be redeployed quickly after environmental changes could be more useful than one that’s only as accurate as its last survey data.

The companies say each flight will feed data to DroneDash’s AI Smart Farming backend, providing metrics on canopy density, stresses and anomalies, plant health scores, spray-effectiveness checks, and terrain profiles. Each drone will therefore serve a dual purpose: as a spray applicator, and as what’s effectively an aerial sensor platform. Farm operators could use the gathered data on an ongoing basis, perhaps to adjust dosages, change treatment timings, flag the need for fertilisation or pest control, and inform replanting schedules.

GEODASH is aiming its technology initially at palm oil plantations in Southeast Asia, row-cropping operators in the US, and large estates in South America. The companies say they ran pilot deployments and validation projects throughout 2025 and into early 2026. Commercial deployment by GEODASH Aerosystems is planned for the third quarter of 2026.

“Agriculture does not need bigger drones – it needs smarter ones,” said Paul Yam, CEO, DroneDash Technologies and GEODASH Aerosystems.

(Image source: “Agriculture drone new technology” by Shreesha Sharma is licensed under CC BY-SA 4.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0)

 

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and co-located with other leading technology events. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Drones get smarter for large farm holdings appeared first on AI News.

Hyundai expands into robotics and physical AI systems
https://www.artificialintelligence-news.com/news/hyundai-expands-into-robotics-and-physical-ai-systems/
Tue, 14 Apr 2026 10:00:00 +0000

Hyundai Motor Group is starting to look like a company building machines that act in the real world. The change centres on physical AI: where AI is placed into robots and systems that move and respond in physical spaces. Current efforts are mainly focused on factory and industrial settings.

Hyundai’s move into physical AI systems

In an interview with Semafor, chairman Chung Eui-sun said robotics and AI will play a central role in Hyundai’s next phase of growth, pushing the company beyond vehicles and into physical systems. The group plans to invest $26 billion in the US by 2028, according to United Press International, building on roughly $20.5 billion invested over the past 40 years.

A large part of that spending is tied to robotics and AI-driven systems that Hyundai is combining into a single approach. Chung described robotics and physical AI as important to Hyundai’s long-term direction, adding that the company is developing robots to work with people, not replace them.

From automation to collaboration

Hyundai is working on systems where robots and humans share tasks in the same space. This includes humanoid robots developed by Boston Dynamics, in which Hyundai acquired a controlling stake in 2021. Machines are being prepared for manufacturing use, with deployment planned around 2028. The company expects to scale production to up to 30,000 units per year by 2030, with the goal of improving work on the factory floor. Robots may handle repetitive or physically demanding tasks, while humans focus on oversight and coordination.

Chung said this kind of setup could help improve efficiency and product quality as customer expectations change.

Current deployments remain focused on industrial settings, though Hyundai is exploring other uses. Potential areas include logistics and mobility services that combine vehicles with AI systems. These may affect deliveries and shared services.

Manufacturing as the first use case for physical AI

While these uses are still developing, manufacturing remains the main testing ground. Factories remain the place where Hyundai is putting these ideas into practice. The company is already working on software-driven manufacturing systems in its US operations, combining data and robotics to manage production.

Physical AI builds on this by adding machines that adjust their actions based on real-time data. Chung said changes in regulations and customer demand are pushing the company to rethink how it operates in regions. Hyundai’s response is a mix of global expansion and local production, with AI and robotics helping standardise processes.

Energy and infrastructure

The company continues to invest in hydrogen through its HTWO brand, which covers production, storage and use. Chung pointed to rising demand linked to AI infrastructure and data centres as one reason hydrogen is gaining attention. He described hydrogen and electric vehicles as complementary options. The idea is to offer different energy choices depending on how systems are used. As AI moves into physical environments, energy becomes a more visible constraint.

What physical AI means for end users

Most people will not interact with a humanoid robot in the near term. But they will feel the effects of these systems in other ways. Products may be built faster and services tied to mobility or infrastructure may become more responsive.

Hyundai sells more than 7 million vehicles each year in over 200 countries, supported by 16 global production facilities, according to the same UPI report.

A gradual transition

Hyundai is still a major carmaker, with brands like Hyundai, Kia, and Genesis forming the base of its operations. What is changing is how those vehicles – and the systems around them – are designed and managed.

Physical AI represents a change from products to systems. It places AI in the environments where work and daily life take place. That change is still in progress, and many of the systems Hyundai is developing will take years to scale. The company is building toward a future where machines work with people in the real world.

(Photo by @named_ aashutosh)

See also: Asylon and Thrive Logic bring physical AI to enterprise perimeter security


The post Hyundai expands into robotics and physical AI systems appeared first on AI News.

Asylon and Thrive Logic bring physical AI to enterprise perimeter security
https://www.artificialintelligence-news.com/news/physical-ai-security-at-the-enterprise-perimeter-takes-a-step-closer/
Tue, 07 Apr 2026 14:40:42 +0000

Exciting times are ahead in the world of enterprise perimeter security with a new partnership between Thrive Logic, an AI agent-driven security and operational intelligence platform, and Asylon, a security robotics company. Together, the companies are to introduce physical AI into the network edge security arena, combining “autonomous perimeter patrols with agentic AI analytics and automated incident workflows.” The goal is to reduce response friction and let security leaders report with confidence in high-security exterior zones.

Physical AI understands real-world situations and is capable of responding actively via a continuous, mobile security presence. This contrasts with systems that merely record events as they take place, for action to be taken later.

Using Asylon’s robotic patrols and Thrive Logic’s AI agent, the integration will monitor perimeter areas and analyse any incidents that may occur. Security teams might therefore relax a little and let AI detect issues in real time. In this arena, it could soon be ‘AI – 1, Bad Actors – 0.’

24/7 robotic patrol oversight

With pressure rising on security leaders in perimeter-intensive environments (labour volatility and unreliable patrol execution are two examples that spring to mind), Asylon’s Robotic Security Operations Centre (RSOC) helps combat these challenges with audit-ready security outcomes. Alongside Thrive Logic’s integration, robotic patrols won’t just collect video streams, but will produce alerts and step-by-step response processes. Security teams can therefore respond more effectively, proving humans and AI can work in harmony.

How it works

Video captured by Asylon’s robotic patrols is securely sent to Thrive Logic’s platform. From here, the Thrive Logic AI agent continues to track connected streams, triggering alerts to relevant staff and stakeholders, and generating automated incident workflows aligned to standard operating procedures (SOPs) if or when these are required.

The system allows enterprise security organisations to reduce operational friction and improve response consistency. It will generate audit-ready, time-stamped incident records for all sites where the technology operates.
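To make "audit-ready, time-stamped" concrete, a minimal sketch of such a record might pair each detection with a UTC timestamp and the SOP steps it triggered (the schema here is invented for illustration; Thrive Logic's actual format is not described in this article):

```python
import json
from datetime import datetime, timezone

def make_incident_record(site, event, severity, sop_actions):
    """Build an audit-ready, time-stamped incident record.

    Hypothetical schema: the point is only that every detection carries
    a UTC timestamp and the SOP steps that were triggered in response.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "site": site,
        "event": event,
        "severity": severity,
        "sop_actions": sop_actions,
    }
    return json.dumps(record, sort_keys=True)
```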

Damon Henry, CEO of Asylon Robotics, said: “Security leaders don’t need more dashboards – they need reliable coverage, consistent response, and defensible reporting. Robotic systems that extend perimeter presence, paired with AI that turns what’s observed into clear actions and documented outcomes. By integrating Asylon’s RSOC-managed robotic patrols with Thrive Logic’s agentic AI analytics and incident workflow automation, we’re giving enterprise teams a practical, scalable way to reduce response friction and elevate operational maturity across sites.”

Nate Green, CEO of Thrive Logic, also emphasised the importance of physical AI. “Physical AI is where security becomes truly operational – persistent real-world visibility paired with intelligence that drives action,” he said. “Asylon’s robotic patrols create a high-value mobile layer across large perimeters. When connected to Thrive Logic’s AI agent and workflow automation, that visibility becomes actionable alerts, guided response, and audit-ready documentation.”

You may have to wait your turn to experience the Asylon-Thrive Logic Physical AI integration as it’s currently only available for enterprise security teams managing high-activity exterior environments, but the companies are hoping for greater availability to all business sizes in the near future.

(Image by ikrzeus style from Pixabay)


The post Asylon and Thrive Logic bring physical AI to enterprise perimeter security appeared first on AI News.

SAP and ANYbotics drive industrial adoption of physical AI
https://www.artificialintelligence-news.com/news/sap-and-anybotics-drive-industrial-adoption-physical-ai/
Tue, 31 Mar 2026 15:20:53 +0000

Heavy industry relies on people to inspect hazardous, dirty facilities. It’s expensive, and putting humans in these zones carries obvious safety risks. Swiss robot maker ANYbotics and software company SAP are trying to change that.

ANYbotics’ four-legged autonomous robots will be connected straight into SAP’s backend enterprise resource planning software. Instead of treating a robot as a standalone asset, this turns it into a mobile data-gathering node within an industrial IoT network.

This initiative shows that hardware innovation can now effectively connect with established business workflows. Underscoring that broader trend, SAP is sponsoring this year’s AI & Big Data Expo North America at the San Jose McEnery Convention Center, CA, an event that is fittingly co-located with the IoT Tech Expo and Intelligent Automation & Physical AI Summit.

When equipment breaks at a chemical plant or offshore rig, it costs a fortune. People do routine inspections to catch these issues early, but humans get tired and plants are massive. Robots, on the other hand, can walk the floor constantly, carrying thermal, acoustic, and visual sensors. Hook those sensors into SAP, and a hot pump instantly generates a maintenance request without waiting for a human to report it.

Cutting out the reporting lag

Usually, finding a problem and logging a work order are two disconnected steps. A worker might hear a weird noise in a compressor, write it down, and type it into a computer hours later. By the time the replacement part gets approved, the machine might be wrecked.

Connecting ANYbotics to SAP eliminates that delay. The robot’s onboard AI processes what it sees and hears instantly. If it hears an irregular motor frequency, it doesn’t just flash a warning on a separate screen; it uses APIs to tell the SAP asset management module directly. The system immediately checks for spare parts, figures out the cost of potential downtime, and schedules an engineer.

This automates the flow of information from the floor to management. It also means machinery gets judged on hard, consistent numbers instead of a human inspector’s subjective opinion.
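A minimal sketch of that hand-off – with hypothetical metric names, thresholds, and payload fields, not SAP's actual API – could look like:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    asset_id: str
    metric: str    # e.g. "bearing_temp_c" or "vibration_hz"
    value: float

# Hypothetical alarm thresholds; a real deployment would load these from
# the asset-management system rather than hard-coding them.
THRESHOLDS = {"bearing_temp_c": 85.0, "vibration_hz": 120.0}

def maybe_create_work_order(r: Reading) -> Optional[dict]:
    """Turn an out-of-range sensor reading into a work-order payload.

    Stands in for the API call described above; the payload fields are
    illustrative, not SAP's actual schema.
    """
    limit = THRESHOLDS.get(r.metric)
    if limit is None or r.value <= limit:
        return None  # in range: no ticket, just keep watching
    return {
        "asset": r.asset_id,
        "fault": f"{r.metric} at {r.value} exceeds limit {limit}",
        "priority": "high",
    }
```

The key property is that the decision is made on hard, consistent numbers the moment the reading arrives, not hours later from a human's notes.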

Putting robots in heavy industry isn’t like installing software in an office—companies have to deal with unreliable infrastructure. Factories usually have awful internet connectivity due to thick concrete, metal scaffolding, and electromagnetic interference.

To make this work, the setup relies on edge computing. It takes too much bandwidth to constantly stream high-def thermal video and lidar data to the cloud. So, the robots crunch most of that data locally. Onboard processors figure out the difference between a machine running normally and one that’s dangerously overheating. They only send the crucial details (i.e. the specific fault and its location) back to SAP.

To handle the network issues, many early adopters build private 5G networks. This gives them the coverage they need across huge facilities where regular Wi-Fi fails. It also locks down access, keeping the robot’s data safe from interception.

Of course, security is a major issue. A walking robot packed with cameras is effectively a roaming vulnerability. Companies must use zero-trust network protocols to constantly verify the robot’s identity and limit what SAP modules it can touch. If the robot gets hacked, the system has to cut its connection instantly to stop the attackers from moving laterally into the corporate network.

These robots generate a massive amount of unstructured data as they walk around. Turning raw audio and thermal images into the neat tables SAP requires is difficult.

If companies don’t manage this right, maintenance teams will drown in alerts. A robot that is too sensitive might spit out hundreds of useless warnings a day, making the SAP dashboard completely ignored. IT teams have to set strict rules before turning the system on. They need exact thresholds for what triggers a real maintenance ticket and what just needs to be watched.

The setup usually uses middleware to translate the robot’s telemetry into SAP’s language. This software acts as a filter, throwing out the noise so only actual problems reach the ERP system. The data lake storing all this information also needs to be organised for future machine learning projects. Fixing broken machines is the short-term goal; the long-term payoff is using years of robot data to predict failures before they happen.
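The filtering role of that middleware can be sketched as follows; the threshold values and deduplication rule are assumptions for illustration, not any vendor's actual product:

```python
from collections import deque

class AlertFilter:
    """Hypothetical middleware filter between robot telemetry and the ERP.

    Forwards an alert only if it crosses its threshold AND the same
    (asset, metric) pair has not already been forwarded recently, so the
    dashboard is not flooded with duplicates of one ongoing fault.
    """
    def __init__(self, thresholds, dedup_window=10):
        self.thresholds = thresholds
        self.recent = deque(maxlen=dedup_window)  # recently forwarded keys

    def forward(self, asset, metric, value):
        limit = self.thresholds.get(metric)
        if limit is None or value <= limit:
            return False          # below threshold: watch, don't ticket
        key = (asset, metric)
        if key in self.recent:
            return False          # duplicate of an alert already forwarded
        self.recent.append(key)
        return True               # genuine new fault: raise a ticket
```

Tuning those thresholds and the deduplication window is exactly the "strict rules before turning the system on" work the article describes.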

Ensuring a successful physical AI deployment

Dropping robots into a factory naturally makes people nervous. The project’s success often comes down to how human resources handles it. Workers usually look at the robots and assume layoffs are next.

Management has to be clear about why the robots are there. The goal is to get people out of dangerous areas like high-voltage zones or toxic chemical sectors to reduce injuries. The robot collects the data, and the human engineer shifts to analysing that data and doing the actual repairs.

This requires retraining. Workers who used to walk the perimeter now have to read SAP dashboards, manage automated tickets, and work with the robots. They have to trust the sensors, and management has to make sure operators know they can take manual control if something unexpected happens.

Companies need to take the rollout slowly. Because syncing physical robots with enterprise software is complicated, large-scale rollouts should start as small, targeted pilots.

The first test should be in one specific area with known hazards but rock-solid internet. This lets IT watch the data flow between the hardware and SAP in a controlled space. At this stage, the main job is making sure the data matches reality. If the robot sees one thing and SAP records another, it has to be audited and fixed daily.

Once the data pipeline actually works, the company can add more robots and connect other systems, like automated parts ordering. IT chiefs have to keep checking if their private networks can handle more robots, while security teams update their defences against new threats.

If companies treat these autonomous inspectors as an extension of their corporate data architecture, they get a massive amount of information about their physical assets. But pulling it off means getting the network infrastructure, the data rules, and the human element exactly right.

See also: The rise of invisible IoT in enterprise operations



The post SAP and ANYbotics drive industrial adoption of physical AI appeared first on AI News.

BMW puts humanoid robots to work in Germany–and Europe’s factories are watching
https://www.artificialintelligence-news.com/news/bmw-humanoid-robots-manufacturing-europe-leipzig/
Fri, 13 Mar 2026 09:00:00 +0000

Europe’s factory floors have a new kind of colleague. BMW Group has deployed humanoid robots in manufacturing in Germany for the first time, launching a pilot project at its Leipzig plant with AEON–a wheeled humanoid built by Hexagon Robotics. 

It is the first automotive deployment of AEON anywhere in the world, and it marks something of a line in the sand for European industry: physical AI is no longer a North American or East Asian story.

The announcement, made on March 9, 2026, comes backed by hard data from a prior US trial. In 2025, BMW ran a ten-month pilot at its Spartanburg, South Carolina, plant using Figure AI’s Figure 02 robot. The humanoid supported production of over 30,000 BMW X3s, working 10-hour shifts and moving a total of over 90,000 components. 

Leipzig is now the direct heir to those lessons.

A robot built for work, not demos

AEON, developed by Hexagon’s Zurich-based robotics division, is a deliberately industrial machine. Arnaud Robert, President of Hexagon Robotics, made the philosophy plain at a Munich event earlier this month: “We’re not in the dancing business–we’re in the working business.” That ethos is visible in every design decision.

Rather than walking on two legs, AEON moves on wheels–a choice made after extensive testing of locomotion systems, with Hexagon concluding that on factory-grade flat floors, wheels are significantly more efficient in both speed and energy use. It stands 1.65 metres tall, weighs 60 kilograms, reaches 2.5 metres per second, and can autonomously swap its own battery in 23 seconds–enabling around-the-clock operation without human intervention.

Its 22 integrated sensors–peripheral cameras, time-of-flight, infrared, SLAM cameras, and microphones–give it full 360-degree real-time spatial awareness, including the ability to perform quality inspection tasks that conventional stationary robots cannot. 

Its human-like torso allows a wide variety of grippers, hand elements, and scanning tools to be flexibly docked, which is precisely what BMW needs for multifunctional deployment across different production environments.

Phased rollout, deliberate strategy

AEON’s first test deployment at Leipzig took place in December 2025. A further test run is planned for April 2026, ahead of a full pilot phase launching in summer 2026, where two AEON units will work simultaneously across two use cases–focusing on high-voltage battery assembly and component manufacturing for exterior parts.

Leipzig was not an arbitrary choice. It is BMW’s most technologically comprehensive German plant, combining battery production, injection moulding, press shop, body shop, and final assembly under one roof, meaning a successful deployment there effectively validates physical AI across the full production spectrum.

To anchor this work institutionally, BMW has established a Centre of Competence for Physical AI in Production, consolidating expertise across the group and creating a defined evaluation path for technology partners–from lab testing through to full pilot phases. 

As Felix Haeckel, Team Lead for the centre, put it: “We are pooling our expertise to make knowledge on AI and robotics widely usable within the company.”

The infrastructure underneath

What makes BMW’s approach notable is that AEON is not landing on a blank factory floor. BMW has systematically dismantled data silos across its production network, replacing them with a uniform data platform that ensures all information is consistent, standardised, and accessible at all times–the architecture that allows AI agents to operate autonomously and learn continuously. 

The humanoid robot is, in effect, the physical layer of a system that has been years in the making. AEON runs on NVIDIA Jetson Orin onboard computers and was trained largely through simulation using NVIDIA’s Isaac platform–a method that allowed Hexagon to develop core locomotion capabilities in weeks rather than months.

The project also involves Microsoft Azure for scalable model development and Maxon’s actuators for locomotion.

Why this matters beyond Leipzig

The broader signal here is one that the enterprise AI world is already tracking closely. Deloitte’s State of AI in the Enterprise 2026 report, surveying over 3,200 senior leaders across 24 countries, found that 58% of companies are already using physical AI in some capacity, a figure set to reach 80% within two years, with Asia Pacific leading in early implementation.

BMW’s Leipzig pilot is a proof point in that trajectory: that humanoid robots in manufacturing have moved past the lab and the press release, and are being stress-tested against the unforgiving standards of real industrial production. As Milan Nedeljković, BMW’s Board Member for Production, put it: “The symbiosis of engineering expertise and artificial intelligence opens up completely new possibilities in production.”

The question now is not whether humanoid robots belong on the factory floor. It is how fast the rest of European industry follows.

See also: Ai2: Building physical AI with virtual simulation data



The post BMW puts humanoid robots to work in Germany–and Europe’s factories are watching appeared first on AI News.

Ai2: Building physical AI with virtual simulation data
https://www.artificialintelligence-news.com/news/ai2-building-physical-ai-with-virtual-simulation-data/
Wed, 11 Mar 2026 16:50:56 +0000

Virtual simulation data is driving the development of physical AI across corporate environments, led by initiatives like Ai2’s MolmoBot.

Instructing hardware to interact with the real world has historically relied on highly expensive and manually-collected demonstrations. Technology providers building generalist manipulation agents typically frame extensive real-world training as the basis for these systems.

For some context, projects like DROID include 76,000 teleoperated trajectories gathered across 13 institutions, representing roughly 350 hours of human effort. Google DeepMind’s RT-1 required 130,000 episodes collected over 17 months by human operators. This reliance on proprietary, manual data collection inflates research budgets and concentrates capabilities within a small group of well-resourced industrial laboratories.

“Our mission is to build AI that advances science and expands what humanity can discover,” said Ali Farhadi, CEO of Ai2. “Robotics can become a foundational scientific instrument, helping researchers move faster and explore new questions. To get there, we need systems that generalise in the real world and tools the global research community can build on together. Demonstrating transfer from simulation to reality is a meaningful step in that direction.”

Researchers from the Allen Institute for AI (Ai2) offer a different economic model with MolmoBot, an open robotic manipulation model suite trained entirely on synthetic information. By generating trajectories procedurally within a system called MolmoSpaces, the team bypasses the need for human teleoperation.

The accompanying dataset, MolmoBot-Data, contains 1.8 million expert manipulation trajectories. This collection was produced by combining the MuJoCo physics engine with aggressive domain randomisation, varying objects, viewpoints, lighting, and dynamics.
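The generation recipe described here, procedurally sampling objects, viewpoints, lighting, and dynamics for every trajectory, can be illustrated with a minimal sketch. The parameter names and ranges below are hypothetical and invented for illustration; they are not taken from Ai2's actual MolmoSpaces pipeline:

```python
import random
from dataclasses import dataclass

@dataclass
class EpisodeConfig:
    """Scene parameters sampled fresh for every simulated trajectory."""
    object_id: str
    camera_yaw_deg: float
    light_intensity: float
    friction: float
    mass_scale: float

def sample_episode_config(object_pool: list, rng: random.Random) -> EpisodeConfig:
    """Draw one randomised scene configuration (illustrative ranges only)."""
    return EpisodeConfig(
        object_id=rng.choice(object_pool),
        camera_yaw_deg=rng.uniform(-45.0, 45.0),
        light_intensity=rng.uniform(0.3, 1.5),
        friction=rng.uniform(0.5, 1.2),
        mass_scale=rng.uniform(0.8, 1.25),
    )

rng = random.Random(0)
configs = [sample_episode_config(["mug", "box", "bottle"], rng) for _ in range(1000)]
# Because every parameter is drawn independently per episode, the dataset's
# visual and physical diversity grows with scale - the property the
# domain-randomisation approach relies on.
```

The real pipeline would feed each sampled configuration into a physics engine such as MuJoCo to roll out an expert trajectory; the sketch only shows the sampling step.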

“Most approaches try to close the sim-to-real gap by adding more real-world data,” said Ranjay Krishna, Director of the PRIOR team at Ai2. “We took the opposite bet: that the gap shrinks when you dramatically expand the diversity of simulated environments, objects, and camera conditions. Our latest advancement shifts the constraint in robotics from collecting manual demonstrations to designing better virtual worlds, and that’s a problem we can solve.”

Generating virtual simulation data for physical AI

Using 100 Nvidia A100 GPUs, the pipeline created roughly 1,024 episodes per GPU-hour, equating to over 130 hours of robot experience for every hour of wall-clock time.

Compared to real-world data collection, this represents nearly four times the data throughput, directly impacting project return on investment by accelerating deployment cycles.
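As a back-of-envelope check on those figures, reading the 130-hour claim as fleet-wide across all 100 GPUs (our assumption, since the article does not specify), the implied average episode length comes out to under five seconds of simulated experience:

```python
# Figures quoted in the article; the fleet-wide reading of the
# 130-hour claim is an assumption made for this calculation.
GPUS = 100
EPISODES_PER_GPU_HOUR = 1024
EXPERIENCE_HOURS_PER_WALL_HOUR = 130

episodes_per_wall_hour = GPUS * EPISODES_PER_GPU_HOUR        # 102,400 episodes/hour
experience_seconds = EXPERIENCE_HOURS_PER_WALL_HOUR * 3600   # 468,000 simulated seconds
implied_episode_seconds = experience_seconds / episodes_per_wall_hour

print(episodes_per_wall_hour)             # 102400
print(round(implied_episode_seconds, 2))  # 4.57
```

Short episodes of a few seconds are plausible for tabletop manipulation primitives, which is consistent with the throughput the team reports.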

The MolmoBot suite includes three distinct policy classes evaluated on two platforms: the Rainbow Robotics RB-Y1 mobile manipulator and the Franka FR3 tabletop arm. The primary model, built on a Molmo2 vision-language backbone, processes multiple timesteps of RGB observations and language instructions to predict actions.

Hardware flexibility with Ai2’s MolmoBot

For edge computing environments where resources are constrained, the researchers provide MolmoBot-SPOC, a lightweight transformer policy with fewer parameters. MolmoBot-Pi0 uses a PaliGemma backbone to match the architecture of Physical Intelligence’s π0 model, permitting direct performance comparisons.

During physical testing, these policies demonstrated zero-shot transfer to real-world tasks involving unseen objects and environments without any fine-tuning.

In tabletop pick-and-place evaluations, the primary MolmoBot model achieved a success rate of 79.2 percent. This outperformed π0.5, a model trained on extensive real-world demonstration data, which achieved a 39.2 percent success rate. For mobile manipulation, the policies successfully executed tasks such as approaching, grasping, and pulling doors through their full range of motion.

Providing these varied architectures allows organisations to integrate capable physical AI systems without being locked into a single proprietary vendor ecosystem or extensive data collection infrastructure.

The open release of the entire MolmoBot stack – including the training data, generation pipelines, and model architectures – permits internal auditing and adaptation. Anyone exploring physical AI can leverage these open tools for the simulation and building of capable systems while controlling costs.

“For AI to truly advance science, progress cannot depend on closed data or isolated systems,” added Farhadi. “It requires shared infrastructure that researchers everywhere can build on, test, and improve together. This is how we believe physical AI will move forward.”

See also: New partnership to offer smart robots for dangerous environments

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

New partnership to offer smart robots for dangerous environments https://www.artificialintelligence-news.com/news/new-partnership-to-offer-ai-for-robotics-for-work-in-dangerous-environments/ Wed, 11 Mar 2026 11:42:00 +0000 https://www.artificialintelligence-news.com/?p=112598 ADLINK Technology has signed a strategic alliance and joint development agreement with Under Control Robotics, the company behind the robotics startup Noble Machines. The two firms will combine ADLINK’s edge AI platforms with Noble Machines’ autonomy software to create a new generation of physical AI, general-purpose robots for modern manufactories and engineering plants. The work […]

ADLINK Technology has signed a strategic alliance and joint development agreement with Under Control Robotics, the company behind the robotics startup Noble Machines. The two firms will combine ADLINK’s edge AI platforms with Noble Machines’ autonomy software to create a new generation of physical AI: general-purpose robots for modern manufacturing and engineering plants. The work focuses on bipedal, bi-manual machines – in other words, humanoid robots – designed to operate in demanding industrial settings.

The partnership will integrate ADLINK’s DLAP edge AI platform with Noble Machines’ autonomy and whole-body control software. The system is intended to provide reasoning, sensing, and motion control for robots handling heavy loads. Initial target sectors include manufacturing, mining, construction, energy, petrochemicals, and public utilities – industries that currently report labour shortages and often involve risky environments for human workers.

ADLINK’s hardware is built on the NVIDIA Jetson Thor platform. In a press release, the companies state DLAP offers multi-voltage feeds and high-bandwidth sensor interfaces, quoting “up to eight” GMSL camera connections, four Ethernet ports, and 5G or Wi-Fi modules. Systems can operate inside a wide temperature range and comply with IEC 60068 standards for shock and vibration.

ADLINK’s hardware will combine with Noble Machines’ autonomy software, which manages perception, reasoning, and coordinated whole-body motion in robots. Robots operating in adverse conditions ideally need to replicate the mobility and manipulation abilities of human workers, so they can replace at-risk humans without significant retooling or altering existing working environments.

Ethan Chen, general manager of ADLINK’s Edge Computing Platforms business unit, said the agreement will extend the company’s edge computing hardware into emerging general-purpose robotic systems, moving from support for the current DLAP platform to a jointly-developed computing platform based on Jetson Thor.

Wei Ding, chief executive of Under Control Robotics, said ADLINK’s experience in industrial hardware complements Noble Machines’ software, specifically its whole-body control systems. The collaboration addresses hardware durability and supply chain integration issues that can affect industrial robot deployment. The two partners will pursue possible deployments in the construction and energy industries initially, where it’s common for certain tasks to involve workers tolerating dust, heat, heavy loads, and vibration. Typically, such tasks are difficult to mechanise because they require on-the-spot decision-making, mobility, and manual handling.

By combining their specialisations, the companies may be able to offer a turnkey solution for customers unwilling to invest in what would otherwise be experimental technology and hardware deployments. The AI element would supply the real-time decision-making that humans working in difficult conditions would otherwise provide; conventional software, by contrast, would need every possible edge case hard-coded into its control systems.
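The distinction between hard-coded control and learned policies can be sketched with a toy example. Everything below is invented for illustration (the states, thresholds, and actions are hypothetical), and the "learned" policy is a hand-written stand-in for a trained network's forward pass:

```python
# Hard-coded control: every situation must appear in the rule table,
# so an unforeseen state has no tailored response.
RULES = {
    "obstacle_ahead": "stop",
    "load_unbalanced": "recentre",
    "path_clear": "advance",
}

def rule_based_controller(state: str) -> str:
    # Unlisted states can only fall through to a blanket default.
    return RULES.get(state, "halt_and_alert")

# Learned control (sketch): a policy maps continuous observations to
# actions and can interpolate between situations seen in training.
def learned_policy(observation: list) -> str:
    proximity, tilt = observation  # stand-in for a neural network forward pass
    if proximity < 0.5:
        return "stop"
    return "recentre" if tilt > 0.2 else "advance"

print(rule_based_controller("dust_cloud"))  # halt_and_alert (unseen case)
print(learned_policy([0.9, 0.05]))          # advance
```

The point of the sketch is the input space, not the branching: the rule table keys on a finite set of named situations, while the policy operates on continuous sensor values and so has a defined response for states nobody enumerated in advance.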

The success of any systems emerging from the partnership will hinge on whether highly costly robots can react correctly in unforeseen situations without endangering themselves or human co-workers, or negatively affecting wider workflows on site.

(Image source: “Robot” by 1lenore is licensed under CC BY 2.0.)

How physical AI integration accelerates vehicle innovation https://www.artificialintelligence-news.com/news/how-physical-ai-integration-accelerates-vehicle-innovation/ Wed, 11 Mar 2026 09:52:15 +0000 https://www.artificialintelligence-news.com/?p=112592 The integration of physical AI into vehicles remains a primary objective for automakers looking to accelerate innovation. A technical collaboration between Qualcomm and Wayve offers a framework for how hardware and software providers can consolidate their efforts to supply production-ready advanced driver assistance systems to manufacturers worldwide. The partnership combines Wayve’s AI driving layer with […]

The integration of physical AI into vehicles remains a primary objective for automakers looking to accelerate innovation.

A technical collaboration between Qualcomm and Wayve offers a framework for how hardware and software providers can consolidate their efforts to supply production-ready advanced driver assistance systems to manufacturers worldwide.

The partnership combines Wayve’s AI driving layer with Qualcomm’s Snapdragon Ride system-on-chips and active safety software. This aims to simplify implementation while meeting baseline requirements around reliability, safety, and time-to-market.

Simplifying physical AI integration for modern vehicles

Building an autonomous driving stack often involves piecing together fragmented components from various vendors. This fragmented approach increases development costs, complexity, and project risk.

Pre-integrating the core processor, safety protocols, and the neural intelligence layer allows vehicle manufacturers to implement reliable capabilities faster while demanding less engineering effort. The unified system is engineered to support global deployment and long-term platform strategies over the lifespan of a vehicle.

Unlike traditional rule-based autonomy that relies heavily on detailed mapping, Wayve utilises a unified foundation model trained on diverse global data. This data-driven software learns driving behaviour directly from real-world exposure. This allows the system to adapt across different regions and road types without requiring location-specific engineering.

When embedded within a commercial vehicle, this form of physical AI needs massive yet energy-efficient processing power. Qualcomm provides that compute infrastructure through a safety-certified architecture featuring redundancy, real-time monitoring, and secure system isolation.

By establishing an open architecture that scales from mainstream models to premium systems, automotive brands can ensure consistent high performance. The design helps provide flexibility, supporting software portability and reuse across various platforms and model years.

Anshuman Saxena, VP and GM of ADAS and Robotics at Qualcomm, said: “ADAS is where scale, safety, and real‑world impact matter most for automakers today. Snapdragon Ride is built to support the widest range of long‑term platform strategies, enabling automakers to standardise across programs and regions while retaining flexibility.

“Together with Wayve, we’re empowering automakers with more choice for how advanced driving systems are developed, deployed, and scaled, while also helping them reduce development cycles, effort and risk.”

The alliance also secures future optionality for enterprise investments. Both companies plan to explore applying these system-on-chips in future Level 4 robotaxi deployments.

Balancing standardisation with brand identity

A common concern among leaders adopting pre-integrated vendor platforms, especially in an often brand loyalty-heavy industry like automotive, is the potential loss of differentiation. Building on an open physical AI framework allows vehicle manufacturers to standardise underlying hardware and software across regions while retaining the ability to differentiate brand experiences and model tiers.

Alex Kendall, Co-founder and CEO of Wayve, commented: “Wayve AI Driver is designed as a flexible, vehicle-agnostic software that serves as the intelligence layer for autonomy for any vehicle, anywhere. Our collaboration with Qualcomm Technologies provides global automakers building on Snapdragon Ride with a streamlined path to deploy market-leading, end-to-end AI automated driving capability alongside Qualcomm’s Active Safety stack.

“By combining our embodied AI driving intelligence with Qualcomm Technologies’ compute performance, platform maturity, and global scale, we are expanding choice and delivering immediate value to automakers across ADAS and automated driving systems, with natural progression from hands-off to eyes-off operation.”

As autonomous technology matures, leaders must evaluate vendor alignments that lower implementation hurdles. Pre-integrated systems offer a practical route to delivering complex physical AI, controlling operational costs, and securing a competitive edge in the global vehicle landscape.

See also: ABB: Physical AI simulation boosts ROI for factory automation

ABB: Physical AI simulation boosts ROI for factory automation https://www.artificialintelligence-news.com/news/abb-physical-ai-simulation-secures-factory-automation-roi/ Tue, 10 Mar 2026 17:22:41 +0000 https://www.artificialintelligence-news.com/?p=112561 A new ABB and NVIDIA partnership shows physical AI simulation is driving real ROI in factory automation and solving production hurdles. Manufacturers have often found it difficult to make intelligent robotics work reliably outside testing environments. The core issue is the gap between digital training models and actual factory floors, where lighting, material physics, and […]

A new ABB and NVIDIA partnership shows physical AI simulation is driving real ROI in factory automation and solving production hurdles.

Manufacturers have often found it difficult to make intelligent robotics work reliably outside testing environments. The core issue is the gap between digital training models and actual factory floors, where lighting, material physics, and part variations refuse to behave as they do on a screen.

Historically, this friction has forced engineering teams to fall back on physical prototypes, delaying product launches and driving up costs.

Overcoming the digital-to-physical AI simulation divide

The partnership between ABB Robotics and NVIDIA attempts to close this gap by bringing industrial-grade physical AI to manufacturing facilities. Slated for release in the second half of 2026, RobotStudio HyperReality is already drawing interest from a global customer base.

By embedding NVIDIA Omniverse libraries within its existing RobotStudio software, ABB provides a platform for physically accurate digital testing. On an operational level, this integration allows engineers to cut deployment costs by up to 40 percent and accelerate time to market by as much as 50 percent.

Realising these efficiency gains demands a workflow where production leaders design, test, and validate complete automation cells before installing any hardware. To do this, the system exports a fully parameterised station – encompassing the robots, sensors, lighting, kinematics, and parts – as a USD file straight into the Omniverse environment.

Inside this digital space, a virtual controller runs the identical firmware found on the physical machine, enabling a 99 percent behavioural match between the digital and physical realms.

Rather than manually programming movements, computer vision models learn using synthetic images generated inside the software. When combined with Absolute Accuracy technology, this method cuts positioning errors down from 8-15 mm to approximately 0.5 mm, providing high precision for industrial applications.
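Taken at face value, the quoted figures imply a substantial precision gain; a quick check of the arithmetic:

```python
# Error figures as quoted in the article.
before_mm = (8.0, 15.0)   # typical positioning error range before correction
after_mm = 0.5            # approximate error with Absolute Accuracy applied

improvement = tuple(round(b / after_mm) for b in before_mm)
print(improvement)  # (16, 30): a 16x-30x reduction in positioning error
```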

Marc Segura, President of ABB Robotics, said: “Combining RobotStudio with the physically accurate simulation power of NVIDIA Omniverse libraries, we have closed technology’s long-standing ‘sim-to-real’ gap—a huge milestone to deploying physical AI with industrial-grade precision, for real-world customer applications.”

Validating factory automation before deployment

Early adopters are already validating these capabilities on active production lines. 

Foxconn, for example, is testing the software for consumer device assembly—an area where frequent product changes and delicate metal components complicate traditional automation. By generating synthetic data to train their systems virtually, Foxconn achieves high accuracy on the factory floor while anticipating a reduction in setup time and the elimination of costly physical testing.

Similarly, Workr – a California-based automation provider – integrates its WorkrCore platform with ABB hardware trained via Omniverse. At the NVIDIA GTC 2026 event in San Jose, Workr intends to showcase systems capable of onboarding new parts in minutes without requiring specialised programming skills.

Deepu Talla, VP of Robotics and Edge AI at NVIDIA, commented: “The industrial sector needs high-fidelity simulation to bridge the gap between virtual training and real-world deployment of AI-driven robotics at scale.

“Integrating NVIDIA Omniverse libraries into RobotStudio brings advanced simulation and accelerated computing to ABB’s virtual controller technology, accelerating how thousands of manufacturers bring complex products to market.” 

The hardware ecosystem is also expanding to edge computing. ABB is evaluating the integration of NVIDIA’s Jetson edge platform into its Omnicore controllers, a step that would facilitate real-time inference across existing robotic fleets.

Adopting this type of digital-first simulation for physical AI can reduce setup and commissioning times by up to 80 percent. As AI moves from software applications to hardware operations, preparing data pipelines and upskilling engineering teams to work with synthetic data will dictate which manufacturers maintain a competitive edge.

See also: Agentic AI in finance speeds up operational automation

Physical AI is having its moment–and everyone wants a piece of it https://www.artificialintelligence-news.com/news/physical-ai-global-race-robots-manufacturing-2026/ Wed, 04 Mar 2026 12:00:00 +0000 https://www.artificialintelligence-news.com/?p=112502 There is a particular kind of momentum in the technology industry that announces itself not through a single breakthrough, but through the simultaneous convergence of many. Physical AI is having that moment right now–and paying attention to where it is coming from, and why, tells you more than any single product launch can. The term […]

There is a particular kind of momentum in the technology industry that announces itself not through a single breakthrough, but through the simultaneous convergence of many. Physical AI is having that moment right now–and paying attention to where it is coming from, and why, tells you more than any single product launch can.

The term itself–physical AI–is simple enough. It describes AI systems that don’t just process data or generate content, but perceive, reason, and act in the real world–robots, autonomous vehicles, machines that adapt. Nvidia CEO Jensen Huang called it “the ChatGPT moment for robotics” at CES in January–a deliberate framing, and a useful one. 

The ChatGPT comparison isn’t about hype. It signals that a technology once confined to research environments is being adopted for mainstream commercial deployment. That crossing is exactly what we are watching unfold from factory floors in Silicon Valley to stages in Shanghai.

The West is building the stack

On the Western side, the physical AI push is fundamentally a platform race. The companies investing most aggressively aren’t primarily robotics companies–they’re infrastructure companies that see robotics as the next surface on which AI gets monetised.

Nvidia has released new Cosmos and GR00T open models for robot learning and reasoning, alongside the Blackwell-powered Jetson T4000 module, which delivers 4x greater energy efficiency for robotics computing. Arm has carved out an entirely new Physical AI business unit focused on semiconductor design for robotics and intelligent vehicles.

Siemens and Nvidia announced plans to build what they’re calling an Industrial AI Operating System, with ambitions to create the world’s first fully AI-driven adaptive manufacturing site. Then there’s Google, which last week brought its robotics software unit Intrinsic fully in-house–out of Alphabet’s “Other Bets” and into Google’s core. 

The move positions Google to offer manufacturers a vertically integrated stack: AI models from DeepMind, deployment software from Intrinsic, and cloud infrastructure from Google Cloud. The Android analogy being floated internally is instructive. Android didn’t win smartphones by building the best phone. It won by becoming the layer everything else ran on. 

That is precisely what Google is attempting with physical AI.

The enterprise implications are significant. A Deloitte survey of more than 3,200 global business leaders found that 58% are already using physical AI in some capacity, a figure that rises to 80% when those planning adoption within the next two years are included. The demand is there. The question has shifted from whether to adopt to how fast and on whose platform.

The East is building the machines

China’s physical AI story is different in character–and arguably more visceral. At this year’s Spring Festival Gala, humanoid robots from multiple Chinese startups performed kung fu routines, aerial flips, and choreographed dances before hundreds of millions of viewers–a sharp contrast to the stumbling prototypes that drew scepticism just a year prior.

It was a spectacle, yes. It was also a statement. China accounted for over 80% of global humanoid robot installations in 2025 and over half of the world’s industrial robots. That dominance is underpinned by structural advantages that go beyond software. China controls roughly 70% of the global lidar sensor market, leads in harmonic reducer production–the gears critical to robot movement–and has driven hardware costs down through the same economies of scale that propelled its EV industry. 

Alibaba has entered the race with RynnBrain, an open-source AI model designed to help robots comprehend the physical world and identify objects–positioning itself alongside NVIDIA’s Cosmos and Google DeepMind’s Gemini Robotics in the foundation model layer. With over 140 domestic humanoid manufacturers and more than 330 humanoid models already unveiled, China’s push into embodied AI is no longer experimental–it’s commercial.

Why it matters beyond the headlines

The convergence of Western platform strategies and Eastern manufacturing scale is creating something genuinely new: a global physical AI ecosystem that is advancing on multiple fronts simultaneously, with different competitive advantages colliding.

What makes this moment distinct from prior robotics waves is the removal of the expertise bottleneck. Historically, deploying industrial robots required specialised engineering teams, months of custom programming, and a high tolerance for downtime. The platforms being built now–by Google, Nvidia, Siemens, and their Chinese equivalents–are explicitly designed to lower that barrier. 

Companies like Vention, which raised US$110 million in January, claim their physical AI platforms can reduce automation project timelines from months to days. When that claim becomes routine, the economics of manufacturing change structurally.

There is also a geopolitical dimension that sits quietly beneath the product announcements. Every foundation model for robotics, every platform layer, every semiconductor architecture being developed right now carries with it questions of supply chain dependency, data sovereignty, and long-term infrastructure control. 

The country–or company–that governs the software layer of physical AI will have unusual leverage over industrial operations globally for years to come.

Physical AI is not a trend. It is the next significant reconfiguration of how the world makes things, moves things, and operates at scale. The conversations happening now–from semiconductor boardrooms to factory floors in Shenzhen and Silicon Valley–are not preliminary. They are the thing itself, already underway.

(Photo by Hyundai Motor Group)

See also: Goldman Sachs and Deutsche Bank test agentic AI for trade surveillance

Google makes its industrial robotics AI play official–and this time, it means business https://www.artificialintelligence-news.com/news/google-industrial-robotics-ai-physical-ai-intrinsic/ Wed, 04 Mar 2026 08:00:00 +0000 https://www.artificialintelligence-news.com/?p=112499 When Google folds a moonshot into its core operations, it’s not cleaning house. It’s placing a bet. On February 25, Alphabet-owned Intrinsic–which builds AI models and software designed to make industrial robotics more accessible–officially joined Google.  The company will remain a distinct group within Google, working closely with Google DeepMind and tapping into Gemini AI models and […]

When Google folds a moonshot into its core operations, it’s not cleaning house. It’s placing a bet. On February 25, Alphabet-owned Intrinsic–which builds AI models and software designed to make industrial robotics more accessible–officially joined Google. 

The company will remain a distinct group within Google, working closely with Google DeepMind and tapping into Gemini AI models and Google Cloud. No purchase price was disclosed.

On the surface, this looks like a routine internal reshuffle. It isn’t.

From moonshot to mandate

Intrinsic graduated into an independent Alphabet-owned company in 2021 after five years of development within Alphabet’s X, the moonshot research division–the same factory that produced Waymo and Wing. Its mission from the start: make industrial robotics AI accessible to manufacturers who don’t have armies of specialist engineers.

While hardware like robotic arms has become cheaper, programming them remains incredibly complex, often requiring hundreds of hours of manual coding by specialised engineers–code that can vary from robot to robot. Intrinsic’s answer to that is Flowstate–a web-based platform that allows users to build robotic applications without having to write thousands of lines of code.

The platform is designed to be hardware-, software-, and AI-model-agnostic. Think of it less as a product and more as an operating layer–one that Google CEO Sundar Pichai has reportedly compared directly to Android. “He said this is the Android of robotics,” Intrinsic CEO Wendy Tan White said, noting that Pichai worked on Chrome and Android before becoming CEO. 

Why now, why Google?

The timing isn’t arbitrary. The sequence of hiring Boston Dynamics’ CTO, releasing a standalone robotics SDK, and now absorbing Intrinsic represents a deliberate consolidation of robotics capability inside Google’s core. Taken together, these moves position Google to offer manufacturers something no competitor has assembled quite as cleanly: AI models from DeepMind, deployment software from Intrinsic, and cloud infrastructure from Google Cloud–all under one roof.

Last month, Google also teamed up with Boston Dynamics to integrate Gemini into Atlas humanoid robots built for manufacturing environments, while Google DeepMind hired the former CTO of Boston Dynamics in November. 

The industrial robotics AI market Google is chasing is not small. McKinsey projects that the market for general-purpose robots could reach US$370 billion by 2040. 

What it means for the enterprise

For enterprise decision-makers, the more interesting signal here isn’t the technology–it’s the accessibility shift. Google plans to integrate Intrinsic’s robotics development platform and vision models with its broader AI ecosystem, combining advanced reasoning, perception and learning capabilities with industrial-grade robotics software to allow machines to interpret sensor data better, adapt to dynamic environments and execute complex tasks. 

Intrinsic has also expanded through acquisitions–acquiring the Open Source Robotics Corp. in 2022, the for-profit arm of the foundation behind the Robot Operating System (ROS). And its commercial pipeline is already in motion: in October 2025, Intrinsic formed a strategic partnership with Foxconn focused on developing general-purpose intelligent robots for full factory automation within electronics manufacturing. 

White framed the integration in terms enterprise leaders will find hard to ignore: production economics, operational transformation, and what she described as truly advanced manufacturing–all within reach once Google’s infrastructure is fully behind it.

That’s a significant claim. But with Gemini, DeepMind, and Google Cloud now aligned behind it, the infrastructure to back it up is, for the first time, actually there.

See also: Physical AI adoption boosts customer service ROI

Banner for the AI & Big Data Expo event series.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Google makes its industrial robotics AI play official–and this time, it means business appeared first on AI News.

Physical AI adoption boosts customer service ROI
https://www.artificialintelligence-news.com/news/physical-ai-adoption-boosts-customer-service-roi/
Tue, 03 Mar 2026 11:32:47 +0000
The adoption of physical AI drives ROI in frontline customer service by merging digital intelligence with human-like physical interaction.

As businesses navigate shrinking labour pools, they are finding that simply automating routine workflows is no longer enough. A new partnership between KDDI and AVITA demonstrates how companies can address complex operational gaps through humanoid deployment.

While traditional industrial robots excel at repetitive, single-function tasks, they lack the versatility required to manage unexpected anomalies such as equipment failures. Customer-facing roles raise the bar further: they demand nonverbal communication, including synchronised nodding, natural eye contact, and reassuring facial expressions.

By integrating AVITA’s avatar creation expertise with KDDI’s communications infrastructure, the two organisations are building domestically developed humanoids capable of operating smoothly in real-world commercial environments.

Blending hardware with advanced data infrastructure

Deploying humanoids into active commercial spaces requires high-capacity, low-latency network infrastructure to transmit visual data and control commands in real time. KDDI provides this operational backbone, enabling remote operation alongside intensive cloud-based data processing. The visual and motion data collected during customer interactions feeds back into the system to train the AI, improving the precision and autonomy of the humanoid’s behaviour.

To support the demanding computational requirements of physical AI adoption, the companies plan to utilise GPUs hosted at the Osaka Sakai Data Center, which commenced operations in January 2026. They are also exploring integration with an on-premises service for Google’s high-performance Gemini generative AI model. This alignment with major enterprise platforms ensures that data processing remains secure and capable of handling complex dialogue requirements.

The hardware itself departs from standard utilitarian machinery. Based on a concept model designed by Hiroshi Ishiguro, the humanoid features a compact skeletal structure approximating a typical Japanese physique.

Silicone skin and specialised mechanical systems enable warm, approachable facial expressions that sync directly with spoken dialogue. Embedded camera sensors track objects in motion to create natural eye contact, while quiet pneumatic actuation allows for fluid and continuous movement with natural “micro-variations”. This design specifically addresses the historical difficulty of deploying automation in operations requiring hospitality and reassurance.

Preparing for commercial adoption of physical AI

This initiative builds upon earlier joint projects between KDDI and AVITA, which introduced a “next-generation remote customer service platform” using digital avatars for remote assistance at retail locations like Lawson and au Style shops.

Transitioning from digital, language-driven communication to physical units capable of free movement is a logical progression for enterprises looking to scale their customer service capabilities. The partners intend to begin trials in commercial facilities in Autumn 2026, and deployment at customer touchpoints such as au Style shops will also be considered.

Integrating physical AI demands environments capable of sustaining continuous, high-volume data streams without latency interruptions. As visual and motion data becomes central to machine learning models, governance frameworks must adapt to manage customer data usage within physical spaces.

Organisations facing demographic workforce pressures should evaluate current bottlenecks to identify where non-verbal, empathetic engagement is necessary. Setting up high-speed network foundations and piloting digital AI avatar programmes today allows enterprises to prepare for the adoption of physical humanoids as the hardware further matures.

See also: Santander and Mastercard run Europe’s first AI-executed payment pilot


The post Physical AI adoption boosts customer service ROI appeared first on AI News.
