On a rail manufacturing line in Maryland, a doglike quadruped robot moves slowly alongside newly assembled train cars. Equipped with cameras and lidar, the machine scans for defects and anomalies, capturing images and automatically feeding them into maintenance systems. If the robot detects a problem, it issues a work order and engineers receive alerts on connected tablets. 

Only a few years ago, the same robot, a Boston Dynamics Spot, was shaking hands with conference attendees as a curiosity. Now it is an essential part of a production workflow. 

“In the past, companies were testing things like edge computing, computer vision and private 5G almost in silos,” said James Weaver, vice president of product marketing at Ericsson. “Over the last 12 to 18 months, we’ve started to see implementations go into production that offer a real return on investment.” 

These systems — now commonly referred to as physical AI — combine AI models with real-world sensor data, robotics and industrial connectivity. The result is a new generation of autonomous systems capable of perceiving, analyzing and acting in physical environments. 

Where the thinking happens 

Physical AI systems distribute their intelligence across robots, edge platforms and central systems, depending on how quickly data must be processed. 

“In some cases, the processing happens directly on the robot or device because it has to be that close to the data,” said Weaver. “In others, it happens at the edge or on premises. It really depends on how fast the system needs to react.” 
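The placement decision Weaver describes can be sketched as a simple check against a reaction-time budget. The tier names, latency thresholds and function name below are illustrative assumptions for this article, not any vendor's API.

```python
# Illustrative sketch: choosing where inference runs based on how fast
# the system must react. Tiers and millisecond budgets are assumptions.

def choose_compute_tier(reaction_budget_ms: float) -> str:
    """Pick a processing location from the required reaction time."""
    if reaction_budget_ms < 10:       # e.g. motion control, collision avoidance
        return "on-device"
    if reaction_budget_ms < 100:      # e.g. visual inspection on the line
        return "edge/on-premises"
    return "cloud"                    # e.g. fleet analytics, model retraining

print(choose_compute_tier(5))      # on-device
print(choose_compute_tier(50))     # edge/on-premises
print(choose_compute_tier(2000))   # cloud
```

In practice the decision also weighs bandwidth, privacy and cost, but latency is the axis Weaver singles out.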

As robotics systems scale from individual machines to coordinated fleets, connectivity becomes increasingly important. In a recent blog post, semiconductor company Qualcomm argued that robotics is shifting from isolated automation to physical AI systems in which robots share data, update common world models and coordinate tasks across entire facilities. Future wireless architectures such as 6G could support this model by enabling deterministic, low-latency communication between machines, edge infrastructure and cloud systems. 

Ericsson 5G Factory in China. Credit: Ericsson

Surviving the messy factory 

At the Toyota Research Institute (TRI), turning experimental breakthroughs into systems that can survive the realities of factory environments is the central challenge. Robotics teams treat this transition as a staged process, moving technologies from early research through multiple validation steps before they can be trusted in production. 

Hrishikesh Gopal Tawade, lead robotics engineer at TRI, said his team uses the nine-stage Technology Readiness Level scale — a framework originally developed by NASA to measure how close a technology is to real-world deployment. 

“A lot of long-horizon research starts at level one,” Tawade said. “Our job is to take technologies once they reach around level five and move them through real-world testing until they are hardened enough to become fully operational production systems.” 
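Tawade's handoff point can be expressed as a gate on the nine-level TRL scale. The level descriptions below are abbreviated paraphrases of NASA's published definitions, and the threshold of five comes from the quote above; the code itself is purely illustrative.

```python
# Illustrative sketch of the nine-level Technology Readiness Level (TRL)
# scale. Descriptions abbreviate NASA's published definitions.
TRL = {
    1: "basic principles observed",
    2: "technology concept formulated",
    3: "experimental proof of concept",
    4: "validated in a laboratory environment",
    5: "validated in a relevant environment",
    6: "demonstrated in a relevant environment",
    7: "prototype demonstrated in an operational environment",
    8: "system complete and qualified",
    9: "proven in actual operations",
}

HANDOFF_LEVEL = 5  # roughly where Tawade's team picks a technology up

def ready_for_hardening(level: int) -> bool:
    """True once a technology is mature enough for real-world testing."""
    return level >= HANDOFF_LEVEL

print(ready_for_hardening(3))  # False
print(ready_for_hardening(5))  # True
```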

Beyond rigid automation 

For decades, industrial automation has relied on highly controlled environments — in many cases, the environment itself is modified specifically to make the robot's task easier. Production lines are carefully designed so that robots repeatedly perform the same sequence of movements on identical components. But that approach limits flexibility when production needs change. 

“If demand changes or you introduce a new product, you often have to reconfigure the entire production line,” Tawade said. “You end up writing software for every individual task.” 

AI-driven robotics is beginning to change that. Instead of programming each action explicitly, engineers are experimenting with large behavioral models that allow robots to learn new skills from demonstrations and adapt to more variable environments. 

Deploying these systems on real factory floors introduces another challenge. Laboratory environments are highly controlled, Tawade said, but once systems reach factory floors, “your production violates your assumptions.” 

Tawade recalled a recent example involving a vision-based inspection system his team was developing to detect defects in stamped car panels. During testing the model performed well, but once deployed on the production line it began producing large numbers of false positives. 

“We discovered that grease marks on some of the stamped parts were being interpreted as splits,” he said. The team asked Toyota’s production engineers to prevent those marks where possible and collected new data to retrain the model to ignore them. 
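The retraining loop Tawade describes is a standard hard-negative correction: images the model wrongly flagged are relabeled as non-defects and folded back into the training set. The data structures and names below are assumptions for illustration, not TRI's actual pipeline.

```python
# Illustrative sketch of closing the loop on a false-positive source:
# confirmed false positives (grease marks read as splits) are relabeled
# as non-defects and appended to the training data before retraining.
# Record shapes and labels here are assumptions, not TRI's pipeline.

def add_hard_negatives(train_set: list[dict], false_positives: list[dict]) -> list[dict]:
    """Relabel confirmed false positives and merge them for retraining."""
    corrected = [{**img, "label": "no_defect"} for img in false_positives]
    return train_set + corrected

train = [{"image": "panel_001.png", "label": "split"}]
fps = [{"image": "panel_107.png", "label": "split"}]  # actually a grease mark

augmented = add_hard_negatives(train, fps)
print(len(augmented))          # 2
print(augmented[-1]["label"])  # no_defect
```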

Lighting changes. Hardware evolves. Sensors drift out of calibration. Parts vary between suppliers. Edge cases that never appeared during testing begin to surface once systems run continuously in factories. 

Hrishikesh Gopal Tawade, lead robotics engineer at TRI

In production environments, failures also carry financial consequences. “In the lab, a failure might not matter,” Tawade said. “In production, there’s a dollar sign attached to it.” 

The case for humanoid robots 

While most industrial automation still relies on specialized machines designed for highly controlled environments, researchers are also exploring general-purpose humanoid robots that can operate in spaces built for human workers. 

“The only real advantage of humanoids is that they can fit into spaces where humans already work,” said Tawade. “Factories, warehouses and logistics systems were designed around people.” 

Even so, Tawade cautioned that humanoids are unlikely to replace traditional industrial automation any time soon. Most factory tasks remain highly structured and repetitive, making them better suited to conventional robotic arms or specialized machines. 

“On a typical factory floor today, maybe 80% of the work is still fixed and repeatable,” he said. “The remaining 20% is more variable. That’s where you might start to see more flexible robots play a role.” 

Many of those opportunities lie in logistics and material handling, where robots must deal with unpredictable environments such as unloading trucks with irregularly stacked boxes. Researchers are experimenting with what Tawade described as “vision-language-action” systems that allow robots to interpret scenes and perform tasks without being programmed for every individual movement. 

Boston Dynamics robot. Credit: Boston Dynamics

But widespread deployment will take time. “Right now, a lot of what you see is still demo-based,” Tawade said. “Moving from that to reliable production systems is a very different challenge.” 

Big tech bets on physical AI 

Major technology vendors are positioning themselves around physical AI systems that combine simulation, AI models and real-world sensor data to optimize industrial operations. 

NVIDIA has been expanding partnerships across the industrial ecosystem as part of that strategy, including a collaboration with Siemens that aims to deliver AI-driven manufacturing platforms built around digital twins and high-performance computing infrastructure. 

Companies are already testing this approach. PepsiCo announced plans in January 2026 to use Siemens software and NVIDIA’s Omniverse simulation platform to create detailed digital twins of manufacturing plants and supply chain operations. By modeling facilities virtually before implementing changes on the factory floor, companies can evaluate new layouts, optimize production flows and identify potential issues before physical modifications are made. 

NVIDIA highlighted further developments in industrial and robotics AI at GTC 2026, and the push toward physically grounded AI systems is accelerating. 

Real engineering starts here 

The pieces are converging. AI models, edge computing and industrial connectivity, once developed in isolation, are now being integrated into production systems at scale. 

Turning those research advances into reliable industrial systems remains a complex engineering challenge. Systems must operate safely, adapt to changing conditions and recover gracefully when failures occur — requirements that rarely appear in controlled demonstrations. 

The grease marks on a stamped car panel, the retraining that followed — this is what real engineering looks like. “At some point, the research has to become something that runs every day in the real world,” said Tawade. “That’s where the real engineering begins.” 

 

