{"id":8338,"date":"2025-12-08T12:37:42","date_gmt":"2025-12-08T12:37:42","guid":{"rendered":"https:\/\/aa.roboticaiamagazine.com\/?page_id=8338"},"modified":"2025-12-08T12:37:43","modified_gmt":"2025-12-08T12:37:43","slug":"noticias-en-ingles","status":"publish","type":"page","link":"https:\/\/aa.roboticaiamagazine.com\/index.php\/noticias-en-ingles\/","title":{"rendered":"Noticias en Ingl\u00e9s"},"content":{"rendered":"<div class=\"feedzy-cd310cd3e97b3a827481b6df7f2cb693 feedzy-rss\"><div class=\"rss_header\"><h2><a href=\"\" class=\"rss_title\" rel=\"noopener\"><\/a> <span class=\"rss_description\"> <\/span><\/h2><\/div><ul><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/boston-dynamics-spot-interaction\" target=\"_blank\" rel=\" noopener\" title=\"Studying Human Attitudes Towards Robots Through Experience\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/families-watch-a-colorful-robotic-dog-demo-at-a-robotics-and-ai-institute-exhibit.jpg?id=65453180&#038;width=980\" title=\"Studying Human Attitudes Towards Robots Through Experience\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/boston-dynamics-spot-interaction\" target=\"_blank\" rel=\" noopener\">Studying Human Attitudes Towards Robots Through Experience<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Dawn Wendell<\/a> on April 5, 2026 at 1:00 pm <\/small><p>Building the next generation of robots for successful integration into our homes, offices, and factories is more than just solving the hardware and software problems \u2013 we also need to understand how they will be perceived and how they can work effectively with people in those spaces.  
In summer 2025, RAI Institute set up a free popup robot experience in the CambridgeSide mall, designed to let people experience state-of-the-art robotics firsthand. While news stories about robots and AI are common, with some being overly critical and some overly optimistic, most people have not encountered robots in the flesh (or metal), as it were. With no direct experience, their opinions are largely shaped by pop culture and social media, both of which focus on sensational stories rather than accurate information about how the robots might be used effectively and where the technology still falls short. Our goal with the popup was twofold: first, to give people an opportunity to see robots that they would otherwise not have a chance to experience, and second, to better understand how the public feels about interacting with these robots. Designing a Robot Experience for the General Public. Some earlier versions of legged robots, built by the RAI Institute\u2019s Executive Director, Marc Raibert. RAI Institute. The ANYmal by ANYrobotics (left) and a previous model of the RAI Institute\u2019s UMV (right). RAI Institute. The pop-up space had two areas: a museum area where people could see historical and modern robots, including some RAI Institute builds like the UMV, and an interactive experience called \u201cDrive-a-Spot\u201d. This area was a driving arena where anyone who came by could take the controls of a Spot quadruped, one of the most recognizable commercially available robots today. The guest robot drivers used a custom controller built on an adaptive video game controller that was designed so that anyone of any age could use it. It featured basic controls: move forward, back, left, right, adjust height, sit, stand, and tilt. The buttons were large so that tiny or elderly hands could use the controller, and the people who drove Spot ranged in age from two to over 90.  
The guest robot drivers used a custom controller built on an adaptive video game controller that was designed so that anyone of any age could use it. RAI Institute. The demo area was designed to be a bit challenging for the Spot robot to maneuver in \u2013 it contained tight passages, low obstacles to step over, a barrier to crouch under, and taller objects the robot had to avoid. Much to the surprise of many of our guests, Spot is able to autonomously adjust itself to traverse and avoid those obstacles while being driven with the joystick. RAI Institute. The driving arena\u2019s theme rotated every few weeks across four scenarios: a factory, a home, a hospital, and an outdoor\/disaster environment. These were chosen to contrast settings where robots are broadly accepted (industrial, emergency response) with settings where public ambivalence is well documented (domestic, healthcare). The visitors who chose to drive the Spot robot could also participate in a short survey before and after their driving experience. The survey focused on two core dimensions. Comfort: how comfortable would you feel if you encountered a robot in a factory, home, hospital, office, or outdoor\/disaster scenario? Suitability: how well would this robot work in each of those contexts? The survey also recorded emotional reactions immediately after driving, likelihood to recommend the experience, and open-ended responses about what participants found memorable or surprising. The researchers were careful to separate the environment participants drove through from the scenarios they were asked to evaluate in the survey. This distinction is important for interpreting the results given below. Did Interacting with the Robot Change People\u2019s Feelings about Robots? Out of approximately 10,000 guests who visited the Robot Lab, 10 percent drove the Spot and opted in to our surveys. 
Of those surveyed, more than 65% of people had seen images or videos of Spot robots online, but most had never seen one of the robots in person. Increased Comfort Through Experience. Across all five contexts presented in the survey (factory, home, hospital, office, and outdoor\/disaster scenarios), comfort scores increased significantly after the driving session. The effects were small to moderate in magnitude, but they were consistent and statistically robust after correcting for multiple comparisons, across all participants from children to older adults. The largest gain appeared in the outdoor\/disaster context, which started with low comfort despite high perceived suitability. People already thought Spot would be useful in search-and-rescue scenarios; they just weren\u2019t comfortable with it performing that work. This discomfort may stem from media portrayals of quadruped robots in military contexts. A few minutes of hands-on control appears to partially dissolve that apprehension. Participants who drove through the factory-themed arena showed no significant increase in comfort, but this scenario already had the highest rating of any context at baseline, leaving little room for improvement. No matter their previous experience, most people were neutral about having a Spot robot in their home before their driving experience. However, after the experience of controlling the Spot robot, people had a statistically significant increase in their comfort with having a Spot in their home and also felt that a Spot robot was more suitable for work in any environment, not just the one they had driven it in. Better Understanding of Where Robots Can Fit into Daily Life. Perceived suitability for Spot to operate in each context also increased. However, the pattern in the data is different. The largest gains weren\u2019t in the high-baseline industrial and outdoor contexts. 
They were in home, office, and hospital \u2013 the very environments where people started out most skeptical. Participants who drove the Spot robot in a home-themed environment didn\u2019t just consider homes more suitable for robots; they also rated hospitals and offices as more suitable. This result suggests that hands-on control alters something more fundamental than just context-specific familiarity. It may change a person\u2019s underlying understanding of a robot\u2019s capabilities and, consequently, where they believe robots are appropriate. Results by Demographic. The hands-on experience seems to be similarly effective across genders, although it does not completely eliminate existing disparities. For example, men reported higher baseline comfort than women across all five contexts. However, all genders improved at similar rates after interaction. The gap didn\u2019t significantly widen or close in most contexts, though it did narrow in factory and office settings. Age effects were more context dependent. Children (aged 8\u201317) rated factory environments as less comfortable and less suitable before the study. However, this could be because most children do not have experience with factory settings or industrial environments. After interaction, this gap largely persisted. By contrast, children showed stronger gains in office comfort than older adults and entered the study rating home contexts more favorably than adults did. Participants ranged from age 8 to over age 75. RAI Institute. Participants who had previously driven Spot (mainly robotics professionals) began with higher comfort across the board. But after the hands-on session, people with no prior exposure caught up to experienced drivers. This level of familiarity would be difficult to replicate with images and videos alone. Post-Interaction Results. Post-interaction emotional data was overwhelmingly positive. 
\u201cExcitement\u201d was reported by 74% of participants, \u201chappiness\u201d by 50%, and only 12% reported \u201cnervousness.\u201d Over 55% rated the experience as \u201cbrilliant\u201d and 62% said they were very likely to recommend it to a friend. The open-ended responses added a lot more color. The most commonly mentioned moments were locomotion and terrain adaptation (22%), including the way Spot navigated steps, tight spaces, and uneven ground, and expressive tilt movements (22%), which people found surprisingly dog-like or dance-like. A smaller set of responses (3%) described anthropomorphic reactions: worrying about \u201churting\u201d the robot or finding its behavior \u201csilly\u201d in a way that prompted a genuine emotional response. When asked what tasks they\u2019d want a robot to perform, responses shifted meaningfully. Before driving, answers clustered around domestic assistance and heavy or hazardous labor. After driving, domestic help remained prominent, but entertainment and play jumped from 7.5% to 19.4%. Companionship also appeared at 5%. References to hazardous or industrial tasks declined as people who had operated the robot began imagining it as a companion and playmate, not just a labor-replacement tool. Key Takeaways from the Robot Lab. In the not-so-distant future, robots will become more common in public and private spaces. But whether that integration into daily life will be accepted by the general public remains to be seen. The standard approach to building acceptance has been passive exposure such as videos, exhibits, and articles. This study suggests that giving people agency and letting them actually operate a robot is a qualitatively different intervention. Short, well-designed, hands-on encounters can raise comfort in precisely the social domains where ambivalence is highest and where future robotics deployment will likely take place. 
This hands-on experience shouldn\u2019t be limited to tech conferences and museums, as it may be more valuable than mere entertainment. Fun for all ages! RAI Institute. We consider the popup a success, but as with all experiments, we also learned a lot along the way. Among our takeaways: in addition to the increased comfort with robots, we found that the guests in our space really enjoyed talking to the robotics experts who staffed the location. For many people, the opportunity to talk to a roboticist was as unique as the opportunity to drive a robot, and in the future, we are excited to continue to share our technical work as well as the experiences of our humans in addition to our humanoids. Does building a space where folks can experience robots firsthand have the potential to create meaningful, long-term attitude shifts? That remains an open question. But the effect\u2019s direction and consistency across different situations, ages, and genders are hard to ignore. Pop-Up Encounters with Spot: Shaping Public Perceptions of Robots Through Hands-On Experience, by Hae Won Park, Georgia Van de Zande, Xiajie Zhang, Dawn Wendell, and Jessica Hodgins from the RAI Institute and the MIT Media Lab, was presented last month at the 2026 ACM\/IEEE International Conference on Human-Robot Interaction in Edinburgh, Scotland.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/video-humanoid-dancing\" target=\"_blank\" rel=\" noopener\" title=\"Video Friday: Digit Learns to Dance\u2014Virtually Overnight\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/bipedal-teal-robot-practices-side-to-side-dance-move-with-arm-movement.gif?id=65460048&#038;width=980\" title=\"Video Friday: Digit Learns to Dance\u2014Virtually Overnight\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a 
href=\"https:\/\/spectrum.ieee.org\/video-humanoid-dancing\" target=\"_blank\" rel=\" noopener\">Video Friday: Digit Learns to Dance\u2014Virtually Overnight<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Evan Ackerman<\/a> on April 3, 2026 at 4:30 pm <\/small><p>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ICRA 2026: 1\u20135 June 2026, VIENNA. RSS 2026: 13\u201317 July 2026, SYDNEY. Summer School on Multi-Robot Systems: 29 July\u20134 August 2026, PRAGUE. Enjoy today\u2019s videos! Getting Digit to dance takes more than putting on some fancy shoes\u2014our AI Team can teach Digit new whole-body control capabilities overnight. Using raw motion data from mocap, animation, and teleop methods, Digit gets new skills through sim-to-real reinforcement training. [ Agility ] We\u2019ve created GEN-1, our latest milestone in scaling robot learning. We believe it to be the first general-purpose AI model that crosses a new performance threshold: mastery of simple physical tasks. It improves average success rates to 99% on tasks where previous models achieve 64%, completes tasks roughly 3x faster than state of the art, and requires only 1 hour of robot data for each of these results. GEN-1 unlocks commercial viability across a broad range of applications\u2014and while it cannot solve all tasks today, it is a significant step towards our mission of creating generalist intelligence for the physical world. [ Generalist ] Unitree open-sources UnifoLM-WBT-Dataset\u2014a high-quality real-world humanoid robot whole-body teleoperation (WBT) dataset for open environments. Publicly available since March 5, 2026, the dataset will continue to receive high-frequency rolling updates. 
It aims to establish the most comprehensive real-world humanoid robot dataset in terms of scenario coverage, task complexity, and manipulation diversity. [ Hugging Face ] Autonomous mobile robots operating in human-shared indoor environments often require paths that reflect human spatial intentions, such as avoiding interference with pedestrian flow or maintaining comfortable clearance. This paper presents MRReP, a Mixed Reality-based interface that enables users to draw a Hand-drawn Reference Path (HRP) directly on the physical floor using hand gestures. [ MRReP ] Thanks, Masato! Eye contact, even momentarily between strangers, plays a pivotal role in fostering human connection, promoting happiness, and enhancing belonging. Through autonomous navigation and adaptive mirror control, Mirrorbot facilitates serendipitous, nonverbal interactions by dynamically transitioning reflections from self-focused to mutual recognition, sparking eye contact, shared awareness, and playful engagement. [ ARL ] via [ Cornell University ] Experience PAL Robotics\u2019 new teleoperation system for TIAGo Pro, the AI-ready mobile manipulator designed for advanced research. This real-time VR teleoperation setup allows precise control of TIAGo Pro\u2019s dual arms in Cartesian space, ideal for remote manipulation, AI data collection, and robot learning. [ PAL Robotics ] Utter brilliance from Robust AI. 
No notes. [ Robust AI ] Come along with our Senior Test Engineer, Nick L., as he takes us on a tour of the Home Test Labs inside the iRobot HQ. [ iRobot ] By automating the final \u201cmagic 5%\u201d of production\u2014the precise trimming of swim goggles\u2019 silicone gaskets based on individual face scans\u2014UR cobots allow THEMAGIC5 to deliver affordable, custom-fit goggles, enabling the company to scale from a Kickstarter sensation to selling over 400,000 goggles worldwide. [ Universal Robots ] Sanctuary AI has once again demonstrated its industry-leading approach to training dexterous manipulation policies for its advanced hydraulic hands. In this video, their proprietary hydraulic hand autonomously manipulates a lettered cube, continuously reorienting it to match a specified goal (displayed in the bottom-left corner of the video). [ Sanctuary AI ] China\u2019s Yuxing 3-06 commercial experimental satellite, the first of its kind to be equipped with a flexible robotic arm, has recently completed an in-orbit refueling test and verification of key technologies. The test paves the way for Yuxing 3-06, dubbed a \u201cspace refueling station,\u201d to refuel other satellites in orbit, manage space debris, and provide other in-orbit services. [ Sanyuan Aerospace ] via [ Space News ] This is a demonstration of natural walking, whole-body teleoperation, and motion tracking with our custom-built humanoid robot. The control policies are trained using large-scale parallel reinforcement learning (RL). By deploying robust policies learned in a physics simulator onto the real hardware, we achieve dynamic and stable whole-body motions. [ Tokyo Robotics ] Faced with aging railway infrastructure, a shrinking workforce, and rising construction costs, Japan Railway West asked construction innovator Serendix to replace an old wooden building at its Hatsushima railway station using its 3D printing technology. 
An ABB robot enabled the company to assemble the new building in a single night, ready for the first train service the next day. [ ABB ] Humanoid, SAP, and Martur Fompak team up to test humanoid robots in automotive manufacturing logistics. This joint proof of concept explores how robots can streamline operations, improve efficiency, and shape the future of smart factories. [ Humanoid ] This MIT Robotics Seminar is from Dario Floreano at EPFL, on \u201cAvian Inspired Drones.\u201d [ MIT ] This MIT Robotics Seminar is from Ken Goldberg at UC Berkeley: \u201cGood Old-Fashioned Engineering Can Close the 100,000 Year \u2018Data Gap\u2019 in Robotics.\u201d [ MIT ]<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/humanoid-robots-gill-pratt-darpa\" target=\"_blank\" rel=\" noopener\" title=\"Gill Pratt Says Humanoid Robots\u2019 Moment Is Finally Here\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/a-smiling-bespectacled-bearded-man-kneels-posed-behind-a-robotic-torso.jpg?id=65446567&#038;width=980\" title=\"Gill Pratt Says Humanoid Robots\u2019 Moment Is Finally Here\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/humanoid-robots-gill-pratt-darpa\" target=\"_blank\" rel=\" noopener\">Gill Pratt Says Humanoid Robots\u2019 Moment Is Finally Here<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Evan Ackerman<\/a> on April 2, 2026 at 3:00 pm <\/small><p>In 2012, the U.S. Defense Advanced Research Projects Agency announced the DARPA Robotics Challenge (DRC). 
The multiyear, multimillion-dollar competition for disaster robotics resulted in Boston Dynamics\u2019 Atlas, some absolutely incredible moments from one of the very first generations of useful humanoid robots, and a blooper video that will live on forever. Gill Pratt, the architect of the competition, had a very clear understanding of what the DRC was going to do for robotics. \u201cThe reason [for the DARPA Robotics Challenge] is actually to push the field forward and make this capability a reality,\u201d Pratt told IEEE Spectrum in 2012. At the time, he pointed out that before the DARPA Grand Challenge in 2004 and the DARPA Urban Challenge in 2007, driverless cars for complex environments essentially did not exist. He saw the DRC doing the same thing for robotics. It\u2019s been about a decade since the conclusion of the DARPA Robotics Challenge, and many in the industry believe humanoid robots are about to have the transformative moment that Pratt predicted. But as is common in robotics, things tend to be far more difficult than it seems they should be. Spectrum checked in with Pratt, now the CEO of the Toyota Research Institute (TRI), to find out what\u2019s holding humanoid robotics back, what he thinks these robots should be doing (or not doing), and how to navigate the humanoid hype bubble. What do you think about this robotics moment that we\u2019re in? Gill Pratt: What has changed is actually not about humanoids. Many people have been building research robots in the humanoid form for a long time. What\u2019s different now isn\u2019t the body, but the brain. We have always had this disparity in the robotics field where the mechanisms we were building were incredibly capable, but we didn\u2019t really have the means for making the utility of the robot match that potential. 
Now we actually do, and that\u2019s because of the AI revolution that has happened over the last few years. It\u2019s very tempting to look back 10 years and directly credit the DRC with a lot of what is now happening with commercial humanoids. Is there any reason not to do that? Gill Pratt poses with an early version of NASA\u2019s Valkyrie DRC robot. Gill Pratt. Pratt: No, but I want to be humble about it. The DRC was focused on half autonomy and half teleoperation in real time. There was remote supervision, and then semiautonomy to amplify that supervision to handle tasks in real time while the remote person was telling the robot what to do. That was all before the breakthroughs that have happened in AI recently. What has changed now is that we have a way to essentially teach robots what to do, and make them competent in a way that doesn\u2019t require writing code; you can just demonstrate the task to the robot instead. With a sufficient amount of that data and new AI methods, robots can be far more performant than ever before. But that data is a bottleneck, right? How do we know what it should consist of, and what a sufficient amount is to get a robot to do something reliably? Pratt: This mirrors exactly the debate going on in large language models [LLMs]. You have certain people who believe that if you take LLMs\u2014which are autoregressive predictors that guess what the next word should be based on past words\u2014and patch them up with a variety of methods to solve their hallucinations, we\u2019ll eventually get to a point where we can trust the AI system. And then there are other people, and I think Yann LeCun is the most well-known of them, who say that\u2019s nonsense, and we need something else. His view, and I agree, is that we need world models. We need some way for the AI system to imagine, try things out, and truly reason. And I know that we\u2019re applying words like \u2018reason\u2019 to what are essentially pattern-matching systems. 
Saying that there\u2019s \u2018reasoning\u2019 is just a sticker we put on whatever we\u2019ve built; it\u2019s not true reasoning. Data Bottlenecks in Robot Learning. This is an example of \u201csystem one\u201d versus \u201csystem two\u201d thinking, right? Pratt: Yes. System one is the fast, reflexive thinking we have, which is the kind of pattern matching that current LLMs do. System two is the slow reasoning that involves imagination and world models. That\u2019s what we have not done yet. Progress on system one has been extraordinary, but we still don\u2019t have system two. These attempts to patch system one to make it system two are like trying to squeeze a balloon filled with water; you squeeze it on one side and the water bulges out on the other side. You keep getting surprised that you fix one thing and something else breaks, and the performance overall doesn\u2019t really get that much better. How have you been approaching this problem at TRI? Pratt: Two years ago, we came up with diffusion policy, and then we came up with what I call large behavior models (LBMs). That involves having one model trained on many tasks, and showing that as you add each task, it actually helps with the other tasks and cuts down on the amount of training data needed to reach a given level of performance. These have been incredible system-one advances. The breakthrough happened when we realized that diffusion could be applied to robot behavior. We discovered that operating in the behavior space, from vision in, to action out, worked incredibly well. That kicked off the whole field, and since then, I think every robotics demonstration that we\u2019ve seen is using some form of diffusion policy to do what it\u2019s doing. But again, this is system-one pattern matching: \u2018If I see the world like this, I act on the world like that.\u2019 The robot\u2019s not imagining, thinking, and planning the way traditional robotics with hand coding used to do. 
It\u2019s just reacting. System one\u2019s pattern matching often breaks down in the real world, though, as we\u2019ve seen with autonomous driving\u2019s struggles. Pratt: Ten years ago, when TRI first started, almost everybody was saying that automated driving was right around the corner. Ten years later, I do think we are now there, and the remaining questions are business ones: How much does the hardware cost, the insurance, the support, does it economically make sense? We haven\u2019t necessarily solved automated driving, but our solutions are good enough, because we use humans for backup. When an automated vehicle gets stuck at a double-parked car, it calls home and asks a person for a system-two decision. I think other robots could do that also. Most of the time they do their work on their own, and every once in a while, they raise their hand for help. If we\u2019ve just barely managed to get autonomous cars right, why are we devoting so much attention to the legged humanoid form factor? Pratt: We\u2019ve built the world with physical affordances for our bodies. If the robot is to do well in that world, it should have something that takes advantage of those affordances. It\u2019s also easier for imitation learning to work because we have the same form. And legs are good for certain environments; you can step over obstacles to balance faster than you can roll to a new point of support with wheels. Having said all that, legs are not always the most practical thing. It\u2019s very weird to see so much focus on legged robots in factories, which are flat environments perfectly suited for wheels. Managing the Humanoid Robotics Hype. Do you think that the amount of money being poured into legged humanoids is a good thing for robotics? Pratt: It has both advantages and dangers. It\u2019s wonderful seeing so many resources flowing into the robotics field, and I do think that something special has occurred. 
Things are not the way they were before, and there are so many possibilities when you think about people teaching robots how to do things. Gill Pratt admires a robot on the roof of the Ghibli Museum in Tokyo. Gill Pratt. What kinds of things should humans be teaching robots to do? Pratt: For 10 years at TRI, we\u2019ve been thinking about society and aging. It\u2019s not just about physical disability; it\u2019s about loneliness and loss of purpose, which are far more prevalent (and far worse) problems. And so the question is, what can we do technologically to help people feel that they\u2019re younger? At TRI, we\u2019re exploring \u201ccare-receiving robots\u201d\u2014robots that receive teaching from a human. We have evolved to be creatures that love giving and love helping. When you program a machine by demonstration, and that machine goes on to help someone else, you feel a sense of purpose. We think robots can be bidirectional things to improve quality of life psychologically, not only physically. When you started TRI 10 years ago, I asked you what you would be focusing on, and your answer really stuck with me: You said elder care, because \u201cwe don\u2019t have a choice.\u201d Pratt: Yes. The statistics in Japan and the U.S. are only getting worse, and we don\u2019t have a choice. It\u2019s important to remember that an aging society has a huge impact on young people. This is because of the dependency ratio, which is how many young people in the workforce are supporting both people who are too young to work and people who are too old to work. Those numbers keep getting worse and worse. How do we solve this? Pratt: We\u2019ve had some incredible breakthroughs with system one, but it doesn\u2019t mean the robots are going to be doing all that much, unless somebody makes a system-two breakthrough also. 
Or unless we have a system where humans provide some level of system-two supervisory control. That kind of human supervisory control takes us right back to the DRC, doesn\u2019t it? Pratt: [Laughs] That\u2019s exactly right! Look, I\u2019m not going to tell you not to praise the DRC\u2026 There was someone who called it the \u201cWoodstock of Robots,\u201d which just warmed my heart\u2014that was so cool! So, 10 years later, how do you feel about the amount of hype in humanoid robotics right now? Pratt: We are approaching what (I hope!) is a peak of inflated expectations for humanoids. And that\u2019s because nobody\u2019s thinking deeply enough about the system-one versus system-two thing. Right now, our physical AI systems are just pattern matching. They\u2019re incredibly capable, and it\u2019s astonishing how good these things are\u2014we are so proud of it. And we do believe that aggregating learning from many tasks through large behavior models will be incredibly effective. But it\u2019s still not system two. There\u2019s a lot of overpromising going on, and it\u2019s very sad because it\u2019s setting us up for a fall. What I\u2019m worried about is the trough of disillusionment that will follow. How do we avoid that crash in robotics when the humanoid hype bubble bursts? Pratt: For now, we need damping. In control systems, you stabilize an unstable system by adding damping. The press and the academic world can add lead compensation by reminding everyone that what we\u2019re seeing in humanoids now isn\u2019t really reasoning. We should also remember that the automated driving field went through a bubble burst also, and just a few companies survived that by keeping the hype down and being persistent. 
I think we should do that here, too.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/robotics-in-nuclear-industry\" target=\"_blank\" rel=\" noopener\" title=\"Wi-Fi That Can Withstand a Nuclear Reactor\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/close-up-of-a-receiver-chip.jpg?id=65428613&#038;width=980\" title=\"Wi-Fi That Can Withstand a Nuclear Reactor\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/robotics-in-nuclear-industry\" target=\"_blank\" rel=\" noopener\">Wi-Fi That Can Withstand a Nuclear Reactor<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Katherine Bourzac<\/a> on April 2, 2026 at 2:00 pm <\/small><p>Researchers have made a Wi-Fi receiver that\u2019s tough enough to work inside a nuclear reactor. They hope the receiver might be part of a wireless communications system for robotics used to decommission reactors.Yasuto Narukiyo, a graduate student at the Institute of Science Tokyo, presented the wireless receiver at the IEEE International Solid-State Circuits Conference (ISSCC), in San Francisco in February. The receiver endured a total radiation dose of 500 kilograys, orders of magnitude higher than the doses typically tolerated by electronics in outer space.After the 2011 nuclear disaster at the Fukushima Daiichi plant, engineers began using robots to help characterize and clean up the site. Most of these require local area network (LAN) cables that can get tangled, says Narukiyo. 
His team, which includes his advisor Atsushi Shirane and Masaya Miyahara of Japan\u2019s High Energy Accelerator Research Organization (KEK), is aiming to develop a wireless system for controlling robots in this harsh environment. Even under less dramatic circumstances, nuclear plants don\u2019t last forever, and they need to be safely dismantled and decontaminated so the sites can be reused, a process called decommissioning. The process is lengthy and risks exposing people to radiation, which is why engineers hope robots can come to the rescue. The need for such robots is only growing. According to a 2024 study, of 204 reactors that have been closed, only 11 plants with a capacity over 100 megawatts have been fully decommissioned, and 200 more reactors will reach the end of their lifetimes in the next 20 years. While electronics for space exploration are typically required to endure radiation doses of 100 to 300 grays over three years, a robot operating in a nuclear reactor needs to endure more than 500 kGy over the course of six months, says Narukiyo\u2014at least 1,000 times the dosage. A robotic arm made by KUKA was able to withstand a dose of just 164.55 Gy before failing. For comparison, the lens of the eye absorbs just 60 milligrays during a CT scan of the brain. Radiation Hardening: To \u201charden\u201d the 2.4-gigahertz Wi-Fi receiver against intense levels of radiation, Narukiyo and his team changed its mix of components, minimized the total number of transistors, and tinkered with the geometry of the transistors that were left. The transistors, silicon MOSFETs (metal-oxide semiconductor field-effect transistors), contain an oxide layer that\u2019s particularly vulnerable to radiation damage. Blasts of gamma rays can trap positive charges in the oxide, degrading the device\u2019s performance and causing errors. They also changed the design of the transistors themselves. The device\u2019s gate controls the flow of current through the transistor. 
The smaller it is, the more its performance will be degraded by a dose of radiation. So they made the gates longer and wider. [Image caption: Researchers tested the Wi-Fi receiver by placing it on top of a radiation source. Credit: Yasuto Narukiyo, Sena Kato, et al.] Second, they considered the differences in how radiation affects PMOS transistors, in which current is carried primarily by positive charges, and NMOS, where electrons flow. PMOS transistors are more vulnerable to radiation damage because positive charge gets trapped in both the oxide and at the interface between the oxide and the rest of the semiconductor. These add up and shift the transistor towards the off state, says Narukiyo. To compensate, the new receiver design minimizes the use of PMOS, replacing these transistors with other elements such as inductors that don\u2019t have an oxide layer. NMOS transistors are more resilient, says Narukiyo, because positive charges trapped in the oxide are to some extent canceled out by negative charges that get trapped at the interface. Narukiyo and his team measured the performance of the receiver before exposure to radiation, and again after blasting it with a total dose of 300 kGy and then 500 kGy. Before being irradiated, it showed comparable performance to typical Wi-Fi receivers. After reaching the highest radiation dose, the gain of the receiver had decreased by about 1.5 decibels. Narukiyo says the receiver is hardened enough, and now he hopes to improve its performance. He\u2019s also working on a transmitter, which would allow for two-way communications. This is more challenging due to the need to produce high levels of current to generate the Wi-Fi signal. He says an earlier version he tried was broken by a 300 kGy dose. 
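The dose figures quoted in this article can be sanity-checked with simple arithmetic. This is a back-of-the-envelope sketch, not part of the researchers' work; it assumes the common convention that a receiver's gain change in dB corresponds to 20·log10 of the voltage ratio, which the article does not specify.

```python
import math

# Figures quoted in the article:
SPACE_DOSE_GY = 300.0     # upper end of typical space-electronics tolerance over ~3 years (Gy)
REACTOR_DOSE_GY = 500e3   # >500 kGy over six months inside a reactor (Gy)

# Total-dose ratio, consistent with the "at least 1,000 times" claim.
ratio = REACTOR_DOSE_GY / SPACE_DOSE_GY
print(f"total-dose ratio: {ratio:.0f}x")      # ~1667x

# The ~1.5 dB gain drop after 500 kGy, as a linear voltage-gain factor
# (assuming the 20*log10 voltage convention).
linear_factor = 10 ** (-1.5 / 20)
print(f"fraction of gain retained: {linear_factor:.2f}")   # ~0.84
```

So even at the highest dose, the receiver kept roughly 84 percent of its original gain under that assumption, which squares with the article's description of modest degradation.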
The group is exploring using other semiconductors, such as diamond, to toughen the transmitter.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/neurobot-living-robot-nervous-system\" target=\"_blank\" rel=\" noopener\" title=\"Scientists Build Living Robots With Nervous Systems\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/close-up-of-a-neuro-robot-that-has-been-stained-to-highlight-multi-ciliated-cells-around-its-periphery.jpg?id=65444408&#038;width=980\" title=\"Scientists Build Living Robots With Nervous Systems\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/neurobot-living-robot-nervous-system\" target=\"_blank\" rel=\" noopener\">Scientists Build Living Robots With Nervous Systems<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Elie Dolgin<\/a> on April 2, 2026 at 1:00 pm <\/small><p>Engineers have long tried to mimic life. 
They\u2019ve built machine learning algorithms modeled after the human brain, designed machines that walk like dogs or fly like insects, and taught robots to adapt, however clumsily, to the world around them. Now they are skipping imitation altogether. Instead of taking inspiration from biology, they are building robots out of it: fashioning tiny, free-swimming assemblages of living cells that organize into self-directed systems, complete with neurons that wire themselves into functional circuits. The result, reported last month in Advanced Science, is what the researchers call a \u201cneurobot.\u201d These living machines could help scientists better understand how simple neural networks give rise to complex behaviors, a foundational step toward building cyborg systems that integrate biological tissue with engineered control. And with further refinement, they could be put to use in applications ranging from precision tissue repair to environmental cleanup. \u201cMy general reaction is, \u2018Wow, this is amazing!\u2019 \u201d says Kate Adamala, a synthetic biologist at the University of Minnesota Twin Cities, who was not involved in the research. \u201cThis truly puts the engineering component into bioengineering.\u201d Toward Internal Control: Neurobots mark the latest advance in a series of increasingly sophisticated biological machines developed by Tufts University biologist Michael Levin and his collaborators. First described in 2020, these clusters of living cells, when removed from their normal developmental context and cultured in simple saline conditions, spontaneously self-organize in such a manner that they move and act in novel ways. 
Under the microscope, they look like irregular, translucent blobs of tissue, but their coordinated motion reveals an emergent order that is unlike anything found in the natural world. \u201cThese things don\u2019t occur naturally,\u201d says Carlos Gershenson, a computer scientist at Binghamton University, State University of New York, who studies artificial life and complex systems but was not involved in the neurobot research. \u201cThey\u2019re made with natural cells, but we\u2019re the ones arranging them.\u201d The earliest examples of this technology, called xenobots, were built from frog-derived tissues and mainly from a single type of structural cell. Despite the simplicity of their construction, however, they could propel themselves through water using beating hair-like projections called cilia. They survived for days without added nutrients. And they could repair minor damage, all without any scaffolding materials or genetic manipulation. Some could even self-replicate by spontaneously sweeping up loose stem cells. Still, for all the novelty of these biological machines, their behavior was essentially mechanical. Their movements were driven by anatomy and physics, not by anything resembling internal control. They could sense chemical cues, change direction accordingly, and even retain traces of past experiences, as detailed in a preprint posted 17 March on bioRxiv. But many other simple organisms\u2014fungi, protists, and bacteria included\u2014can do much the same. To achieve more flexible, coordinated behavior, they would need a way to integrate information across the body and dynamically direct their actions. Neurobots begin to provide that missing layer of control. [Image caption: Small tufts of hairlike cilia, combined with the neurobot\u2019s nervous system, allow it to move on its own.] 
[Image credit: Haleh Fotowat] Linking Neural Activity to Action: Like earlier xenobots, neurobots are still built from frog cells, but they are now endowed with neurons that mature from partially differentiated stem cells. These nerve cells develop alongside structural tissues, forming branching connections throughout the autonomous beings. This means they can relay electrochemical signals from cell to cell. And unlike other laboratory models of the nervous system\u2014brain organoids, say, or lab-on-a-chip technologies\u2014neurobots move. They swim, explore, and respond to their surroundings in ways that tie electrical signaling to observable movement, producing patterns of physical activity distinct from their non-neural counterparts. Neurobots spend less time idling and more time exploring. They also trace looping and spiraling paths rather than repeating simple trajectories. And they respond differently to neuroactive drugs. If the organizing principles that enable these internally guided motions and reflexes can now be deciphered, they could then be harnessed to produce more predictable biological functions, says Haleh Fotowat, a neuroengineer from Harvard\u2019s Wyss Institute for Biologically Inspired Engineering, who collaborated with Levin\u2019s team on the study. \u201cWe\u2019re still very early in terms of understanding the system and its capabilities.\u201d But once the scientists understand how the neurobots self-organize, she says, \u201cthen we can begin to engineer on top of that.\u201d Beyond the practical, neurobots also raise deeper epistemological questions about the nature of biological organization, notes Levin. \u201cWhere does form and function come from in the first place?\u201d he asks. 
\u201cWhen it\u2019s not evolved and it\u2019s not engineered, where do these patterns come from?\u201d \u201cThis is a model system for asking those kinds of questions,\u201d Levin says\u2014in frog and human constructs alike. From Discovery to Deployment: Among the many variations on the biobot theme are \u201canthrobots,\u201d built from clusters of human lung cells instead of frog tissue. Levin\u2019s team now plans to add human neural cells to their anthrobots, extending the neurobot framework into a fully human context. Then, through further conditioning and guided learning, these living machines\u2014like dogs trained to sniff for bombs\u2014may become capable of adapting their behavior in predictable ways. \u201cThe hope would be that you could teach them or train them to do what you want them to do,\u201d says Josh Bongard, a computer scientist and roboticist at the University of Vermont. Bongard was not involved in the neurobot study but is a frequent collaborator of Levin\u2019s. Together, they cofounded the nonprofit Institute for Computationally Designed Organisms and a commercial startup, Fauna Systems, to advance biobot-related technologies. According to Fauna CEO Naimish Patel, the company is initially targeting environmental sensing applications, aiming to deploy xenobots in settings such as aquaculture, wastewater monitoring, and pollutant detection, where the technology\u2019s ability to integrate multiple signals could provide an early readout of ecosystem health. If the xenobots encounter a mixture of stressors\u2014say, elevated heavy metals, shifts in pH, and traces of agricultural runoff\u2014their collective changes in movement or activity could provide a sensitive, real-time signal that something in the environment is amiss. 
Precedent for this idea comes from Poland, where many cities already use freshwater mussels as living sentinels of water quality, wired with sensors that register when the animals clamp their shells shut in response to pollutants. Xenobots could extend this concept further, Patel says, potentially offering greater sensitivity and specificity by integrating multiple environmental cues into a single, measurable behavioral response. And neurobots could eventually push this fusion of sensing and computation into ever more sophisticated territory, he adds. But the technical hurdles remain substantial\u2014and the practical opportunities with simpler, non-neural versions are already compelling\u2014so the first-gen xenobots, for the time being, remain the focus of Fauna\u2019s initial product-development efforts, Patel says. \u201cRight now, we\u2019re looking for the intersection between unmet commercial need and emerging capability.\u201d<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/roadrunner-bipedal-robot\" target=\"_blank\" rel=\" noopener\" title=\"Video Friday: Beep! Beep! Roadrunner Bipedal Bot Breaks the Mold\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/two-wheeled-balancing-robot-leans-while-rolling-in-an-indoor-testing-lab.png?id=65415603&#038;width=980\" title=\"Video Friday: Beep! Beep! Roadrunner Bipedal Bot Breaks the Mold\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/roadrunner-bipedal-robot\" target=\"_blank\" rel=\" noopener\">Video Friday: Beep! Beep! 
Roadrunner Bipedal Bot Breaks the Mold<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Evan Ackerman<\/a> on March 27, 2026 at 4:30 pm <\/small><p>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion. ICRA 2026: 1\u20135 June 2026, VIENNA. RSS 2026: 13\u201317 July 2026, SYDNEY. Summer School on Multi-Robot Systems: 29 July\u20134 August 2026, PRAGUE. Enjoy today\u2019s videos! \u201cRoadrunner\u201d is a new bipedal wheeled robot prototype designed for multimodal locomotion. It weighs around 15 kg (33 lb) and can seamlessly switch between its side-by-side and in-line wheel modes and stepping configurations depending on what is required for navigating its environment. The robot\u2019s legs are entirely symmetric, allowing it to point its knees forward or backward, which can be used to avoid obstacles or manage specific movements. A single control policy was trained to handle both side-by-side and in-line driving. Several behaviors, including standing up from various ground configurations and balancing on one wheel, were successfully deployed zero-shot on the hardware. [ Robotics and AI Institute ] Incredibly (INCREDIBLY!) NASA says that this is actually happening. NASA\u2019s SkyFall mission will build on the success of the Ingenuity Mars helicopter, which achieved the first powered, controlled flight on another planet. Using a daring midair deployment, SkyFall will deliver a team of next-gen Mars helicopters to scout human landing sites and map subsurface water ice. [ NASA ] NASA\u2019s MoonFall mission will blaze a path for future Artemis missions by sending four highly mobile drones to survey the lunar surface around the Moon\u2019s South Pole ahead of astronauts\u2019 arrival there. 
MoonFall is built on the legacy of NASA\u2019s Ingenuity Mars Helicopter. The drones will be launched together and released during descent to the surface. They will land and operate independently over the course of a lunar day (14 Earth days) and will be able to explore hard-to-reach areas, including permanently shadowed regions (PSRs), surveying terrain with high-definition optical cameras and other potential instruments. For what it\u2019s worth, Moon landings have a success rate well under 50%. So let\u2019s send some robots there to land over and over! [ NASA ] In Science Robotics, researchers from the Tangible Media group led by Professor Hiroshi Ishii, together with colleagues from Politecnico di Bari, present Electrofluidic Fiber Muscles: a new class of artificial muscle fibers for robots and wearables. Unlike the rigid servo motors used in most robots, these fiber-shaped muscles are soft and flexible. They combine electrohydrodynamic (EHD) fiber pumps\u2014slender tubes that move liquid using electric fields to generate pressure silently, with no moving parts\u2014with fluid-filled fiber actuators. These artificial muscles could enable more agile untethered robots, as well as wearable assistive systems with compact actuation integrated directly into textiles. [ MIT Media Lab ] In this study, we developed MEVIUS2, an open-source quadruped robot. It is comparable in size to the Boston Dynamics Spot, equipped with two lidars and a C1 camera, and can freely climb stairs and steep slopes! All hardware, software, and learning environments are released as open source. [ MEVIUS2 ] Thanks, Kento! What goes into preparing for a live performance? 
Arun highlights the reliability testing that goes into trying a new behavior for Spot. [ Boston Dynamics ] In this work, a multirobot planning and control framework is presented and demonstrated with a team of 40 indoor robots, including both ground and aerial robots. That soundtrack, though. [ GitHub ] Thanks, Keisuke! Quadrupedal robots can navigate cluttered environments like their animal counterparts, but their floating-base configuration makes them vulnerable to real-world uncertainties. Controllers that rely only on proprioception (body sensing) must physically collide with obstacles to detect them. Those that add exteroception (vision) need precisely modeled terrain maps that are hard to maintain in the wild. DreamWaQ++ bridges this gap by fusing both modalities through a resilient multimodal reinforcement learning framework. The result: a single controller that handles rough terrains, steep slopes, and high-rise stairs\u2014while gracefully recovering from sensor failures and situations it has never seen before. That cliff behavior is slightly uncanny. [ DreamWaQ++ ] I take issue with this from iRobot: While the pyramid exploration that iRobot did was very cool, they did it with a custom-made robot designed for a very specific environment. Cleaning your floors is way, way harder. Here\u2019s a bit more detail on the pyramids thing: [ iRobot ] More robots in the circus, please! [ Daniel Simu ] MIT engineers have designed a wristband that lets wearers control a robotic hand with their own movements. By moving their hands and fingers, users can direct a robot to perform specific tasks, or they can manipulate objects in a virtual environment with high-dexterity control. [ MIT ] At Nvidia GTC 2026, we showcased how AI is moving into the physical world. 
Visitors interacted with robots using voice commands, watching them interpret intent and act in real time\u2014powered by our KinetIQ AI brain. [ Humanoid ] Props to Sony for its continued support and updates for Aibo! [ Aibo ] This robot looks like it could be a little curvier than normal? [ LimX Dynamics ] Developed by Zhejiang Humanoid Robot Innovation Center Co., Ltd., the Naviai Robot is an intelligent cooking device. It can autonomously process ingredients, perform cooking tasks with high accuracy, adjust smart kitchen equipment in real time, and complete postcooking cleaning. Equipped with multimodal perception technology, it adapts to daily kitchen environments and ensures safe and stable operation. That 7x is doing some heavy lifting. [ Zhejiang Lab ] This CMU RI Seminar is by Hadas Kress-Gazit from Cornell, on \u201cFormal Methods for Robotics in the Age of Big Data.\u201d Formal methods\u2014mathematical techniques for describing systems, capturing requirements, and providing guarantees\u2014have been used to synthesize robot control from high-level specification, and to verify robot behavior. Given the recent advances in robot learning and data-driven models, what role can, and should, formal methods play in advancing robotics? 
In this talk I will give a few examples for what we can do with formal methods, discuss their promise and challenges, and describe the synergies I see with data-driven approaches. [ Carnegie Mellon University Robotics Institute ]<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/honda-p2-robot-ieee-milestone\" target=\"_blank\" rel=\" noopener\" title=\"30 Years Ago, Robots Learned to Walk Without Falling\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/collage-of-hondas-p2-humanoid-robot-from-1996-against-a-background-of-figures-related-to-its-technical-features.jpg?id=65402169&#038;width=980\" title=\"30 Years Ago, Robots Learned to Walk Without Falling\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/honda-p2-robot-ieee-milestone\" target=\"_blank\" rel=\" noopener\">30 Years Ago, Robots Learned to Walk Without Falling<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Joanna Goodrich<\/a> on March 25, 2026 at 6:00 pm <\/small><p>When you hear the term humanoid robot, you may think of C-3PO, the human-cyborg-relations android from Star Wars. C-3PO was designed to assist humans in communicating with robots and alien species. The droid, which first appeared on screen in 1977, joined the characters on their adventures, walking, talking, and interacting with the environment like a human. It was ahead of its time. Before the release of Star Wars, a few androids did exist and could move and interact with their environment, but none could do so without losing its balance. It wasn\u2019t until 1996 that the first autonomous robot capable of walking without falling was developed in Japan. 
Honda\u2019s Prototype 2 (P2) was nearly 183 centimeters tall and weighed 210 kilograms. It could control its posture to maintain balance, and it could move multiple joints simultaneously. In recognition of that decades-old feat, P2 has been honored as an IEEE Milestone. The dedication ceremony is scheduled for 28 April at the Honda Collection Hall, located on the grounds of the Mobility Resort Motegi, in Japan. The machine is on display in the hall\u2019s robotics exhibit, which showcases the evolution of Honda\u2019s humanoid technology. In support of the Milestone nomination, members of the IEEE Nagoya (Japan) Section wrote: \u201cThis milestone demonstrated the feasibility of humanlike locomotion in machines, setting a new standard in robotics.\u201d The Milestone proposal is available on the Engineering Technology and History Wiki. Developing a domestic android: In 1986 Honda researchers Kazuo Hirai, Masato Hirose, Yuji Haikawa, and Toru Takenaka set out to develop what they called a \u201cdomestic robot\u201d to collaborate with humans. It would be able to climb stairs, remove impediments in its path, and tighten a nut with a wrench, according to their research paper on the project. \u201cWe believe that a robot working within a household is the type of robot that consumers may find useful,\u201d the authors wrote. But to create a machine that would do household chores, it had to be able to move around obstacles such as furniture, stairs, and doorways. It needed to autonomously walk and read its environment like a human, according to the researchers. But no robot could do that at the time. The closest technologists got was the WABOT-1. Built in 1973 at Waseda University, in Tokyo, the WABOT had eyes and ears, could speak Japanese, and used tactile sensors embedded on its hands as it gripped and moved objects. Although the WABOT could walk, albeit unsteadily, it couldn\u2019t maneuver around obstacles or maintain its balance. 
It was powered by an external battery and computer. To build an android, the Honda team began by analyzing how people move, using themselves as models. That led to specifications for the robot that gave it humanlike dimensions, including the location of the leg joints and how far the legs could rotate. Once they began building the machine, though, the engineers found it difficult to satisfy every specification. Adjustments were made to the number of joints in the robot\u2019s hips, knees, and ankles, according to the research paper. Humans have four hip, two knee, and three ankle joints; P2\u2019s predecessor had three hip, one knee, and two ankle joints. The arms were treated similarly. A human\u2019s four shoulder and three elbow joints became three shoulder joints and one elbow joint in the robot. The researchers installed existing Honda motors and hydraulics in the hips, knees, and ankles to enable the robot to walk. Each joint was operated by a DC motor with a harmonic-drive reduction gear system, which was compact and offered high torque capacity. To test their ideas, the engineers built what they called E0. The robot, which was just a pair of connected legs, successfully walked. It took about 15 seconds to take each step, however, and it moved using static walking in a straight line, according to a post about the project on Honda\u2019s website. (Static walking is when the body\u2019s center of mass is always within the foot\u2019s sole. Humans walk with their center of mass below their navel.) The researchers created several algorithms to enable the robot to walk like a human, according to the Honda website. 
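The static-walking criterion described above reduces to a geometric test: a stance is statically stable when the center of mass, projected onto the ground, falls inside the support polygon (here, a single foot sole). This is a toy illustration of that test only, not Honda's controller; the foot outline and center-of-mass coordinates are invented for the sketch.

```python
# Toy static-stability check: is the ground projection of the center of mass
# inside the foot's support polygon? Coordinates are hypothetical, in meters.

def point_in_polygon(x, y, poly):
    """Ray-casting test: True if point (x, y) lies inside polygon [(x, y), ...]."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Toggle on each edge crossed by a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A 30 cm x 12 cm rectangular foot sole, origin at one heel corner.
foot_sole = [(0.0, 0.0), (0.30, 0.0), (0.30, 0.12), (0.0, 0.12)]

print(point_in_polygon(0.15, 0.06, foot_sole))  # CoM over mid-foot: statically stable -> True
print(point_in_polygon(0.40, 0.06, foot_sole))  # CoM ahead of the toes: tips over -> False
```

Dynamic walking, by contrast, deliberately lets this test fail between steps and recovers balance through motion, which is why it required the sensing and control systems described next.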
These algorithms allowed the robot to use a locomotion mechanism, dynamic walking, whereby the robot stays upright by constantly moving and adjusting its balance, rather than keeping its center of mass over its feet, according to a video on the YouTube channel Everything About Robotics Explained. \u201cP2 was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.\u201d \u2014IEEE Nagoya Section. The Honda team installed rubber brushes on the bottom of the machine\u2019s feet to reduce vibrations from the landing impacts (the force experienced when its feet touch the ground)\u2014which had made the robot lose its balance. Between 1987 and 1991, three more prototypes (E1, E2, and E3) were built, each testing a new algorithm. E3 was a success. With the dynamic walking mechanism complete, the researchers continued their quest to make the robot stable. The team added 6-axis sensors to detect the force at which the ground pushed back against the robot\u2019s feet and the movements of each foot and ankle, allowing the robot to adjust its gait in real time for stability. The team also developed a posture-stabilizing control system to help the robot stay upright. A local controller directed how the electric motor actuators needed to move so the robot could follow the leg joint angles when walking, according to the research paper. During the next three years, the team tested the systems and built three more prototypes (E4, E5, and E6), which had boxlike torsos atop the legs. In 1993 the team was finally ready to build an android with arms and a head that looked more like C-3PO, dubbed Prototype 1 (P1). Because the machine was meant to help people at home, the researchers determined its height and limb proportions based on the typical measurements of doorways and stairs. 
The arm length was based on the ability of the robot to pick up an object when squatting. When they finished building P1, it was 191.5 cm tall, weighed 175 kg, and used an external power source and computer. It could turn a switch on and off, grab a doorknob, and carry a 70 kg object. P1 was not launched publicly but instead used to conduct research on how to further improve the design. The engineers looked at how to install an internal power source and computer, for example, as well as how to coordinate the movement of the arms and legs, according to Honda. For P2, four video cameras were installed in its head\u2014two for vision processing and the other two for remote operation. The head was 60 cm wide and connected to the torso, which was 75.6 cm deep. A computer with four microSPARC II processors running a real-time operating system was added into the robot\u2019s torso. The processors were used to control the arms, legs, joints, and vision-processing cameras. Also within the body were DC servo amplifiers, a 20-kg nickel-zinc battery, and a wireless Ethernet modem, according to the research paper. The battery lasted for about 15 minutes; the machine also could be charged by an external power supply. The hardware was enclosed in white-and-gray casing. P2, which was launched publicly in 1996, could walk freely, climb up and down stairs, push carts, and perform some actions wirelessly. [Image credit: King Rose Archives] The following year, Honda\u2019s engineers released the smaller and lighter P3. It was 160 cm tall and weighed 130 kg. In 2000 the popular ASIMO robot was introduced. Although shorter than its predecessors at 130 cm, it could walk, run, climb stairs, and recognize voices and faces. The most recent version was released in 2011. 
Honda has retired the robot. Honda P2\u2019s influence: Thanks to P2, today\u2019s androids are not just ideas in a laboratory. Robots have been deployed to work in factories and, increasingly, at home. The machines are even being used for entertainment. During this year\u2019s Spring Festival gala in Beijing, machines developed by Chinese startups Unitree Robotics, Galbot, Noetix, and MagicLab performed synchronized dances, martial arts, and backflips alongside human performers. \u201cP2\u2019s development shifted the focus of robotics from industrial applications to human-centric designs,\u201d the Milestone sponsors explained in the wiki entry. \u201cIt inspired subsequent advancements in humanoid robots and influenced research in fields like biomechanics and artificial intelligence. \u201cIt was not just a technical achievement; it was a catalyst that propelled the field of humanoid robotics forward, demonstrating the potential for robots to interact with and assist humans in meaningful ways.\u201d To learn more about robots, check out IEEE Spectrum\u2019s guide. Recognition as an IEEE Milestone: A plaque recognizing Honda\u2019s P2 robot as an IEEE Milestone is to be installed at the Honda Collection Hall. The plaque is to read: In 1996 Prototype 2 (P2), a self-contained autonomous bipedal humanoid robot capable of stable dynamic walking and stair-climbing, was introduced by Honda. Its legged robotics incorporated real-time posture control, dynamic balance, gait generation, and multijoint coordination. Honda\u2019s mechatronics and control algorithms set technical benchmarks in mobility, autonomy, and human-robot interaction. 
P2 inspired new research in humanoid robot development, leading to increasingly sophisticated successors. Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/autonomous-drone-warfare\" target=\"_blank\" rel=\" noopener\" title=\"The Coming Drone-War Inflection in Ukraine\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/person-holding-a-large-drone-outdoors-under-a-sunny-partly-cloudy-sky.jpg?id=65327386&#038;width=980\" title=\"The Coming Drone-War Inflection in Ukraine\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/autonomous-drone-warfare\" target=\"_blank\" rel=\" noopener\">The Coming Drone-War Inflection in Ukraine<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Tereza Pultarova<\/a> on March 24, 2026 at 1:00 pm <\/small><p>WHEN KYIV-BORN ENGINEER Yaroslav Azhnyuk thinks about the future, his mind conjures up dystopian images. He talks about \u201cswarms of autonomous drones carrying other autonomous drones to protect them against autonomous drones, which are trying to intercept them, controlled by AI agents overseen by a human general somewhere.\u201d He also imagines flotillas of autonomous submarines, each carrying hundreds of drones, suddenly emerging off the coast of California or Great Britain and discharging their cargoes en masse to the sky. \u201cHow do you protect from that?\u201d he asks as we speak in late December 2025; me at my quiet home office in London, he in Kyiv, which is bracing for another wave of missile attacks. Azhnyuk is not an alarmist. 
He cofounded and was formerly CEO of Petcube, a California-based company that uses smart cameras and an app to let pet owners keep an eye on their beloved creatures left alone at home. A self-described \u201cliberal guy who didn\u2019t even receive military training,\u201d Azhnyuk changed his mind about developing military tech in the months following the Russian invasion of Ukraine in February 2022. By 2023, he had relinquished his CEO role at Petcube to do what many Ukrainian technologists have done\u2014to help defend his country against a mightier aggressor.

It took a while for him to figure out what, exactly, he should be doing. He didn\u2019t join the military, but through friends on the front line, he witnessed how, out of desperation, Ukrainian troops turned to off-the-shelf consumer drones to make up for their country\u2019s lack of artillery.

Ukrainian troops first began using drones for battlefield surveillance, but within a few months they figured out how to strap explosives onto them and turn them into effective, low-cost killing machines. Little did they know they were fomenting a revolution in warfare.

The Ukrainian robotics company The Fourth Law produces an autonomy module [above] that uses optics and AI to guide a drone to its target. Yaroslav Azhnyuk [top, in light shirt], founder and CEO of The Fourth Law, describes a developmental drone with autonomous capabilities to Ukrainian President Volodymyr Zelenskyy and German Chancellor Olaf Scholz. Top: THE PRESIDENTIAL OFFICE OF UKRAINE; Bottom: THE FOURTH LAW

That revolution was on display last month, as the U.S. and Israel went to war with Iran. It soon became clear that attack drones were being extensively used by both sides. Iran, for example, is relying heavily on the Shahed drones that the country invented and that are now also being manufactured in Russia and launched by the thousands every month against Ukraine.

A thorough analysis of the Middle East conflict will take some time to emerge. 
And so to understand the direction of this new way of war, look to Ukraine, where its next phase\u2014autonomy\u2014is already starting to come into view. Outnumbered by the Russians and facing increasingly sophisticated jamming and spoofing aimed at causing the drones to veer off course or fall out of the sky, Ukrainian technologists realized as early as 2023 that what could really win the war was autonomy. Autonomous operation means a drone isn\u2019t being flown by a remote pilot, and therefore there\u2019s no communications link to that pilot that can be severed or spoofed, rendering the drone useless.

By late 2023, Azhnyuk set out to help make that vision a reality. He founded two companies, The Fourth Law and Odd Systems, the first to develop AI algorithms to help drones overcome jamming during final approach, the second to build thermal cameras to help those drones better sense their surroundings. \u201cI moved from making devices that throw treats to dogs to making devices that throw explosives on Russian occupants,\u201d Azhnyuk quips.

Since then, The Fourth Law has dispatched \u201cmore than thousands\u201d of autonomy modules (it declines to give a more specific figure) to troops in eastern Ukraine; the modules can be retrofitted on existing drones to take over navigation during the final approach to the target. Azhnyuk says the autonomy modules, worth around US $50, increase the drone-strike success rate to up to four times that of purely operator-controlled drones.

And that is just the beginning. Azhnyuk is one of thousands of developers, including some who relocated from Western countries, who are applying their skills and other resources to advancing the drone technology that is the defining characteristic of the war in Ukraine. This eclectic group of startups and founders includes Eric Schmidt, the former Google CEO, whose company Swift Beat is churning out autonomous drones and modules for Ukrainian forces. 
The frenetic pace of tech development is helping a scrappy, innovative underdog hold at bay a much larger and better-equipped foe.

All of this development is careening toward AI-based systems that enable drones to navigate by recognizing features in the terrain, lock on to and chase targets without an operator\u2019s guidance, and eventually exchange information with each other through mesh networks, forming self-organizing robotic kamikaze swarms. Such an attack swarm would be commanded by a single operator from a safe distance.

According to some reports, autonomous swarming technology is also being developed for sea drones. Ukraine has had some notable successes with sea drones, which have reportedly destroyed or damaged around a dozen Russian vessels.

The Skynode X system, from Auterion, provides a degree of autonomy to a drone. AUTERION

For Ukraine, swarming can solve a major problem that puts the nation at a disadvantage against Russia\u2014the lack of personnel. Autonomy is \u201cthe single most impactful defense technology of this century,\u201d says Azhnyuk. \u201cThe moment this happens, you shift from a manpower challenge to a production challenge, which is much more manageable,\u201d he adds.

The autonomous warfare future envisioned by Azhnyuk and others is not yet a reality. But Marc Lange, a German defense analyst and business strategist, believes that \u201can inflection point\u201d is already in view. Beyond it, \u201cthings will be so dramatically different,\u201d he says.

\u201cUkraine pretty rapidly realized that if the operator-to-drone ratio can be shifted from one-to-one to one-to-many, that creates great economies of scale and an amazing cost exchange ratio,\u201d Lange adds. 
\u201cThe moment one operator can launch 100, 50, or even just 20 drones at once, this completely changes the economics of the war.\u201d

Drones With a View

For a while, jammers that sever the radio links between drones and operators or that spoof GPS receivers were able to provide fairly reliable defense against human-controlled first-person-view attack drones (FPVs). But as autonomous navigation progressed, those electronic shields have gradually become less effective. Defenders must now contend with unjammable drones\u2014ones that are attached to hair-thin optical fibers or that are capable of finding their way to their targets without external guidance. In this emerging struggle, the defenders\u2019 track records aren\u2019t very encouraging: The typical countermeasure is to try to shoot down the attacking drone with a service weapon. It\u2019s rarely successful.

A truck outfitted with signal-jamming gear drives under antidrone nets near Oleksandriya, in eastern Ukraine, on 2 October 2025. ED JONES\/AFP\/GETTY IMAGES

\u201cThe attackers gain an immense advantage from unmanned systems,\u201d says Lange. \u201cYou can have a drone pop up from anywhere and it can wreak havoc. But from autonomy, they gain even more.\u201d

The self-navigating drones rely on image-recognition algorithms that have been around for over a decade, says Lange. And the mass deployments of drones on Ukrainian battlefields are enabling both Russian and Ukrainian technologists to create huge datasets that improve the training and precision of those AI algorithms.

A Ukrainian land robot, the Ravlyk, can be outfitted with a machine gun.

While uncrewed aerial vehicles (UAVs) have received the most attention, the Ukrainian military is also deploying dozens of different kinds of drones on land and sea. Ukraine, struggling with the shortage of infantry personnel, began working on replacing a portion of human soldiers with wheeled ground robots in 2024. 
As of early 2026, thousands of ground robots are crawling across the gray zone along the front line in eastern Ukraine. Most are used to deliver supplies to the front line or to help evacuate the wounded, but some \u201ckiller\u201d ground robots fitted with turrets and remotely controlled machine guns have also been tested.

In mid-February, Ukrainian authorities released a video of a Ukrainian ground robot using its thermal camera to detect a Russian soldier in the dark of night and then kill the invader with a round from a heavy machine gun. So far these robots are mostly controlled by a human operator, but the makers of these uncrewed ground vehicles say their systems are capable of basic autonomous operations, such as returning to base when radio connection is lost. The goal is to enable them to swarm so that one operator controls not one, but a whole herd of mesh-connected killer robots.

But Bryan Clark, senior fellow and director of the Center for Defense Concepts and Technology at the Hudson Institute, questions how quickly ground robots\u2019 abilities can progress. \u201cGround environments are very difficult to navigate in because of the terrain you have to address,\u201d he says. \u201cThe line of sight for the sensors on the ground vehicles is really constrained because of terrain, whereas an air vehicle can see everything around it.\u201d

To achieve autonomy, maritime drones, too, will require navigational approaches beyond AI-based image recognition, possibly based on star positions or electronic signals from radios and cell towers that are within reach, says Clark. Such technologies are still being developed or are in a relatively early operational stage.

How the Shaheds Got Better

Russia is not lagging behind. In fact, some analysts believe its autonomous systems may be slightly ahead of Ukraine\u2019s. For a good example of the Russian military\u2019s rapid evolution, they say, consider the long-range Iranian-designed Shahed drones. 
Since 2022, Russia has been using them to attack Ukrainian cities and other targets hundreds of kilometers from the front line. \u201cAt the beginning, Shaheds just had a frame, a motor, and an inertial navigation system,\u201d Oleksii Solntsev, CEO of Ukrainian defense tech startup MaXon Systems, tells me. \u201cThey used to be imprecise and pretty stupid. But they are becoming more and more autonomous.\u201d Solntsev founded MaXon Systems in late 2024 to help protect Ukrainian civilians from the growing threat of Shahed raids.

A Russian Geran-2 drone, based on the Iranian Shahed-136, flies over Kyiv during an attack on 27 December 2025. SERGEI SUPINSKY\/AFP\/GETTY IMAGES

First produced in Iran in the 2010s, Shaheds can carry 90-kilogram warheads up to 650 km (50-kilogram warheads can go twice as far). They cost around $35,000 per unit, compared to a couple of million dollars, at least, for a ballistic missile. The low cost allows Russia to manufacture Shaheds in high quantities, unleashing entire fleets onto Ukrainian cities and infrastructure almost every night.

The early Shaheds were able to reach a preprogrammed location based on satellite-navigation coordinates. Even one of these early models could frequently overcome the jamming of satellite-navigation signals with the help of an onboard inertial navigation unit. This was essentially a dead-reckoning system of accelerometers and gyroscopes that estimates the drone\u2019s position from continual measurements of its motions.

In the Donetsk Region, on 15 August 2025, a Ukrainian soldier hunts for Shaheds and other drones with a thermal-imaging system attached to a ZU-23 23-millimeter antiaircraft gun. KOSTYANTYN LIBEROV\/LIBKOS\/GETTY IMAGES

Ukrainian defense forces learned to down Shaheds with heavy machine guns, but as Russia continued to innovate, the daily onslaughts started to become increasingly effective.

Today\u2019s Shaheds fly faster and higher, and therefore are more difficult to detect and take down. 
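The inertial dead reckoning described above can be sketched in a few lines. This is an illustrative toy (planar motion, perfect sensors, and a hypothetical `dead_reckon` helper of my own), not the navigation code of any actual drone; real inertial units must also handle gravity compensation, sensor bias, and the drift that accumulates from double integration.

```python
import math

def dead_reckon(samples, dt):
    """Planar dead reckoning: integrate a gyroscope's yaw rate to track
    heading, rotate body-frame acceleration into the world frame, then
    integrate twice to estimate position. Toy model: no gravity, no
    sensor bias, no drift correction."""
    x = y = vx = vy = 0.0
    heading = 0.0  # radians, 0 = +x axis
    for accel_body, yaw_rate in samples:
        heading += yaw_rate * dt             # gyroscope -> heading
        ax = accel_body * math.cos(heading)  # body accel -> world frame
        ay = accel_body * math.sin(heading)
        vx += ax * dt                        # acceleration -> velocity
        vy += ay * dt
        x += vx * dt                         # velocity -> position
        y += vy * dt
    return x, y

# 10 s of constant 1 m/s^2 forward thrust, no turning, sampled at 10 Hz:
# the estimate lands near the true 50 m (discretization gives 50.5 m).
print(dead_reckon([(1.0, 0.0)] * 100, 0.1))
```

Because each step's output feeds the next, small gyro or accelerometer errors compound quickly, which is why inertial-only guidance was "imprecise and pretty stupid" until cameras and other fixes were layered on top.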
Between January 2024 and August 2025, the number of Shaheds and Shahed-type attack drones launched by Russia into Ukraine per month increased more than tenfold, from 334 to more than 4,000. In 2025, Ukraine found AI-capable Nvidia chipsets in the wreckage of Shaheds, as well as thermal-vision modules capable of locking onto targets at night.

\u201cNow, they are interconnected, which allows them to exchange information with each other,\u201d Solntsev says. \u201cThey also have cameras that allow them to autonomously navigate to objects. Soon they will be able to tell each other to avoid a jammed region or an area where one of them got intercepted.\u201d

These Russian-manufactured Shaheds, which Russian forces call Geran-2s, are thought to be more capable than the garden-variety Shahed-136s that Iran has lately been launching against targets throughout the Middle East. Even the relatively primitive Shahed-136s have done considerable damage, according to press accounts.

Those Shahed successes may accrue, at least in part, from the fact that the United States and Israel lack Ukraine\u2019s long experience with fending them off. In just two days in early March, upward of a thousand drones, mostly Shaheds, were launched against U.S. and Israeli targets, with hundreds of them reportedly finding their marks.

One attack, caught on video, shows a Shahed destroying a radar dome at the U.S. Navy base in Manama, Bahrain. U.S. forces were understood to be attempting to fend off the drones by striking launch platforms, dispatching fighter aircraft to shoot them down, and using some extremely costly air-defense interceptors, including ones meant to down ballistic missiles. On 4 March, CNN reported that in a congressional briefing the day before, top U.S. defense officials, including Secretary of Defense Pete Hegseth, acknowledged that U.S. air defenses weren\u2019t keeping up with the onslaught of Shahed drones. 
Russian V2U attack drones are outfitted with Nvidia processors and run computer-vision software and AI algorithms to enable the drones to navigate autonomously. GUR OF THE MINISTRY OF DEFENSE OF UKRAINE

Russia is also starting to field a newer generation of attack drones. One of these, the V2U, has been used to strike targets in the Sumy region of northeastern Ukraine. The V2U drones are outfitted with Nvidia Jetson Orin processors and run computer-vision software and AI algorithms that allow the drones to navigate even where satellite navigation is jammed.

The sale of Nvidia chips to Russia is banned under U.S. sanctions against the country. However, press reports suggest that the chips are getting to Russia via intermediaries in India.

Antidrone Systems Step Up

MaXon Systems is one of several companies working to fend off the nightly drone onslaught. Within one year, the company developed and battle-tested a Shahed interception system that hints at the sci-fi future envisioned by Azhnyuk. For a system to be capable of reliably defending against autonomous weaponry, it, too, needs to be autonomous.

MaXon\u2019s solution consists of ground turrets scanning the sky with infrared sensors, with additional input from a network of radars that detect approaching Shahed drones at distances of, typically, 12 to 16 km. The turrets fire autonomous fixed-wing interceptor drones, fitted with explosive warheads, toward the approaching Shaheds at speeds of nearly 300 km\/h. To boost the chances of successful interception, MaXon is also fielding an airborne anti-Shahed fortification system consisting of helium-filled aerostats hovering above the city that dispatch the interceptors from a higher altitude.

\u201cWe are trying to increase the level of automation of the system compared to existing solutions,\u201d says Solntsev. 
\u201cWe need automatic detection, automatic takeoff, and automatic mid-track guidance so that we can guide the interceptor before it can itself lock onto the target.\u201d

An interceptor drone, part of the U.S. MEROPS defensive system, is tested in Poland on 18 November 2025. WOJTEK RADWANSKI\/AFP\/GETTY IMAGES

In November 2025, the Ukrainian military announced it had been conducting successful trials of the Merops Shahed drone interceptor system developed by the U.S. startup Project Eagle, another of former Google CEO Eric Schmidt\u2019s Ukraine defense ventures. Like the MaXon gear, the system can operate largely autonomously and has so far downed over 1,000 Shaheds.

What Works in the Lab Doesn\u2019t Necessarily Fly on the Battlefield

Despite the progress on both sides, analysts say that the kind of robotic warfare imagined by Azhnyuk won\u2019t be a reality for years.

\u201cThe software for drone collaboration is there,\u201d says Kate Bondar, a former policy advisor for the Ukrainian government and currently a research fellow at the U.S. Center for Strategic and International Studies. \u201cDrones can fly in labs, but in real life, [the forces] are afraid to deploy them because the risk of a mistake is too high,\u201d she adds.

Ukrainian soldiers watch a GOR reconnaissance drone take to the sky near Pokrovsk in the Donetsk region, on 10 March 2025. ANDRIY DUBCHAK\/FRONTLINER\/GETTY IMAGES

In Bondar\u2019s view, powerful AI-equipped drones won\u2019t be deployed in large numbers given the current prices for high-end processors and other advanced components. And, she adds, the more autonomous the system needs to be, the more expensive are the processors and sensors it must have. \u201cFor these cheap attack drones that fly only once, you don\u2019t install a high-resolution camera that [has] the resolution for AI to see properly,\u201d she says. \u201c[You install] the cheapest camera. You don\u2019t want expensive chips that can run AI algorithms either. 
Until we can achieve this balance of technological sophistication, when a system can conduct a mission but at the lowest price possible, it won\u2019t be deployed en masse.\u201d

While existing AI systems are doing a good job recognizing and following large objects like Shaheds or tanks, experts question their ability to reliably distinguish and pursue smaller and more nimble or inconspicuous targets. \u201cWhen we\u2019re getting into more specific questions, like can it distinguish a Russian soldier from a Ukrainian soldier or at least a soldier from a civilian? The answer is no,\u201d says Bondar. \u201cAlso, it\u2019s one thing to track a tank, and it\u2019s another to track infantrymen riding buggies and motorcycles that are moving very fast. That\u2019s really challenging for AI to track and strike precisely.\u201d

Clark, at the Hudson Institute, says that although the AI algorithms used to guide the Russian and Ukrainian drones are \u201cpretty good,\u201d they rely on information provided by sensors that \u201caren\u2019t good enough.\u201d \u201cYou need multiphenomenology sensors that are able to look at infrared and visual and, in some cases, different parts of the infrared spectrum to be able to figure out if something is a decoy or real target,\u201d he says.

German defense analyst Lange agrees that right now, battlefield AI image-recognition systems are too easily fooled. \u201cIf you compress reality into a 2D image, a lot of things can be easily camouflaged\u2014like what Russia did recently, when they started drawing birds on the back of their drones,\u201d he says.

Autonomy Remains Elusive on the Ground and at Sea, Too

To make Ukraine\u2019s emerging uncrewed ground vehicles (UGVs) equally self-sufficient will be an even greater task, in Clark\u2019s view. Still, Bondar expects major advances to materialize within the next several years, even if humans are still going to be part of the decision-making loop. 
A mobile electronic-warfare system built by PiranhaTech is demonstrated near Kyiv on 21 October 2025. DANYLO ANTONIUK\/ANADOLU\/GETTY IMAGES

\u201cI think in two or three years, we will have pretty good full autonomy, at least in good weather conditions,\u201d she says, referring to aerial drones in particular. \u201cHumans will still be in the loop for some years, simply because there are so many unpredictable situations when you need an intervention. We won\u2019t be able to fully rely on the machine for at least another 10 or 15 years.\u201d

Ukrainian defenders are apprehensive about that autonomous future. The boom of drone innovation has come hand in hand with the development of sophisticated jamming and radio-frequency detection systems. But a lot of that innovation will become obsolete once the pendulum swings away from human control. Ukrainians got their first taste of dealing with unjammable drones in mid-2024, when Russia began rolling out fiber-optic tethered drones. Now they have to brace for a threat on a much larger scale.

An experimental drone is demonstrated at the Brave1 defense-tech incubator in Kyiv. DANYLO DUBCHAK\/FRONTLINER\/GETTY IMAGES

\u201cToday, we have a situation where we have lots of signals on the battlefield, but in the near future, in maybe two to five years, UAVs are not going to be sending any signals,\u201d says Oleksandr Barabash, CTO of Falcons, a Ukrainian startup that has developed a smart radio-frequency detection system capable of revealing precise locations of enemy radio sources such as drones, control stations, and jammers.

Last September, Falcons secured funding from the U.S.-based dual-use tech fund Green Flag Ventures to scale production of its technology and work toward NATO certification. But Barabash admits that its system, like all technologies fielded in Ukrainian war zones, has an expiration date. 
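Pinpointing an emitter from its radio signals, as the detection systems described above aim to do, classically reduces to direction finding plus triangulation. The sketch below is a generic textbook version under simplifying assumptions (a 2D plane, two stations, exact bearings); the `triangulate` helper is hypothetical and implies nothing about how Falcons or any other company actually does it.

```python
import math

def triangulate(p1, b1, p2, b2):
    """Fix an RF emitter's position from two direction-finding stations.
    Station i at point p_i measures bearing b_i (radians from the +x
    axis) toward the emitter, which lies where the two bearing rays
    cross."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # 2D cross product of the ray directions; zero means parallel bearings.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("bearings are parallel: no unique fix")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom  # distance along ray 1
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two stations 10 km apart both sight an emitter at (5 km, 5 km);
# intersecting the bearing rays recovers that position.
fix = triangulate((0.0, 0.0), math.atan2(5.0, 5.0),
                  (10.0, 0.0), math.atan2(5.0, -5.0))
```

With more than two stations the rays will not meet exactly and a least-squares fix is used instead; time-difference-of-arrival methods work similarly, from signal delays rather than bearings.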
Instead of radio-frequency detectors, Barabash thinks, the next R&amp;D push needs to focus on passive radar systems capable of identifying small and fast-moving targets based on signals from sources like TV towers or radio transmitters that propagate through the environment and are reflected by those moving targets. Passive radars have a significant advantage in the war zone, according to Barabash. Since they don\u2019t emit their own signal, they can\u2019t be that easily discovered by the enemy.

\u201cActive radar is emitting signals, so if you are using active radars, you are target No. 1 on the front line,\u201d Barabash says.

Bondar, on the other hand, thinks that the increased onboard compute power needed for AI-controlled drones will, by itself, generate enough electromagnetic radiation to prevent autonomous drones from ever operating completely undetectably.

\u201cYou can have full autonomy, but you will still have systems onboard that emit electromagnetic radiation or heat that can be detected,\u201d says Bondar. \u201cBatteries emit electromagnetic radiation, motors emit heat, and [that heat can be] visible in infrared from far away. You just need to have the right sensors to be able to identify it in advance.\u201d She adds that the takeaway is \u201chow capable contemporary detection systems have become and how technically challenging it is to design drones that can reliably operate in the Ukrainian battlefield environment.\u201d

There Will Be Nowhere to Hide from Autonomous Drones

When autonomous drones become a standard weapon of war, their threat will extend far beyond the battlefields of Ukraine. Autonomous turrets and drone-interceptor fortifications might soon dot the perimeter of European cities, particularly in the eastern part of the continent.

A fixed-wing drone is tested in Ukraine in April 2025. ANDREW KRAVCHENKO\/BLOOMBERG\/GETTY IMAGES

Nefarious actors from all over the world have closely watched Ukraine and taken notes, warns Lange. 
Today, FPV drones are being used by Islamic terrorists in Africa and Mexican drug cartels to fight against local authorities.When autonomous killing machines become widely available, it\u2019s likely that no city will be safe. \u201cWe might see nets above city centers, protecting civilian streets,\u201d Lange says. \u201cIn every case, the West needs to start performing similar kinetic-defense development that we see in Ukraine. Very rapid iteration and testing cycles to find solutions.\u201dAzhnyuk is concerned that the historic defenders of Europe\u2014the United States and the European countries themselves\u2014are falling behind. \u201cWe are in danger,\u201d he says. While Russia and Ukraine made major strides in their drones and countermeasures over the past year, \u201cEurope and the United States have progressed, in the best-case scenario, from the winter-of-2022 technology to the summer-of-2022 technology.\u201cThe gap is getting wider,\u201d he warns. \u201cI think the next few years are very dangerous for the security of Europe.\u201d This article appears in the April 2026 print issue as \u201cRise of the AUTONOMOUS Attack Drones.\u201d<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/tennis-playing-robot\" target=\"_blank\" rel=\" noopener\" title=\"Video Friday: Humanoid Learns Tennis Skills Playing Humans\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/robot-playing-tennis-holding-racket-on-green-court-inset-shows-human-opponent-hitting-ball.png?id=65325604&#038;width=980\" title=\"Video Friday: Humanoid Learns Tennis Skills Playing Humans\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/tennis-playing-robot\" target=\"_blank\" rel=\" noopener\">Video Friday: Humanoid Learns Tennis Skills Playing 
Humans<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Evan Ackerman<\/a> on March 21, 2026 at 4:30 pm <\/small><p>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1\u20135 June 2026, VIENNA
Summer School on Multi-Robot Systems: 29 July\u20134 August 2026, PRAGUE

Enjoy today\u2019s videos!

Human athletes demonstrate versatile and highly dynamic tennis skills to successfully conduct competitive rallies with a high-speed tennis ball. However, reproducing such behaviors on humanoid robots is difficult, partially due to the lack of perfect humanoid action data or human kinematic motion data in tennis scenarios as reference. In this work, we propose LATENT, a system that Learns Athletic humanoid TEnnis skills from imperfect human motioN daTa.

[ LATENT ]

A beautifully designed robot inspired by Strandbeests.

[ Cranfield University ]

We believe we\u2019re the first robotics company to demonstrate a robot peeling an apple with dual dexterous humanlike hands. This breakthrough closes a key gap in robotics, achieving bimanual, contact-rich manipulation and moving far beyond the limits of simple grippers.

Today\u2019s AI models (VLMs) are excellent at perception but struggle with action. Controlling high-degree-of-freedom hands for tasks like this is incredibly complex, and precise finger-level teleoperation is nearly impossible for humans. Our first step was a shared-autonomy system: rather than controlling every finger, the operator triggers prelearned skills like a \u201crotate apple or tennis ball\u201d primitive via a keyboard press or pedal. This makes scalable data collection and RL training possible.

How does the AI manage this? 
We created \u201cMoDE-VLA\u201d (Mixture of Dexterous Experts). It fuses vision, language, force, and touch data by using a team of specialist \u201cexperts,\u201d making control in high-dimensional spaces stable and effective. The combination of these two innovations allows for seamless, contact-rich manipulation. The human provides high-level guidance, and the robot executes the complex in-hand coordination required.

[ Sharpa ]

Thanks, Alex!

It was great to see our name amongst the other \u201cAI Native\u201d companies during the NVIDIA GTC keynote. NVIDIA Isaac Lab helps us train reinforcement learning policies that enable the UMV to drive, jump, flip, and hop like a pro.

[ Robotics and AI Institute ]

This Finger-Tip Changer technology was jointly researched and developed through a collaboration between Tesollo and RoCogMan LaB at Hanyang University ERICA. The project integrates Tesollo\u2019s practical robotic hand development experience with the lab\u2019s expertise in robotic manipulation and gripper design.

I don\u2019t know why more robots don\u2019t do this. Also, those pointy fingertips are terrifying.

[ RoCogMan LaB ]

Here\u2019s an upcoming ICRA paper from the Fluent Robotics Lab at the University of Michigan featuring an operational PR2! With functional batteries!!!

[ Fluent Robotics Lab ]

This video showcases the field tests and interaction capabilities of KAIST Humanoid v0.7, developed at the DRCD Lab featuring in-house actuators. 
The control policy was trained through deep reinforcement learning leveraging human demonstrations.

[ KAIST DRCD Lab ]

This needs to come in adult size.

[ Deep Robotics ]

I did not know this, but apparently shoeboxes are really annoying to manipulate because if you grab them by the lid, they just open, so specialized hardware is required.

[ Nomagic ]

Thanks, Gilmarie!

This paper presents a method to recover quadrotor Unmanned Air Vehicles (UAVs) from a throw, when no control parameters are known before the throw.

[ MAVLab ]

Uh-oh, robots can see glass doors now. We\u2019re in trouble.

[ LimX Dynamics ]

This drone hugs trees &lt;3

[ Stanford BDML ]

Electronic waste is one of the fastest-growing environmental problems in the world. As robotics and electronic systems become more widespread, their environmental footprint continues to increase. In this research, scientists developed a fully biodegradable soft robotic system that integrates electronic devices, sensors, and actuators yet completely decomposes after use.

[ Nature ]

We developed a distributed algorithm that enables multiple aerial robots to flock together safely in complex environments, without explicit communication or prior knowledge of the surroundings, using only onboard sensors and computation. Our approach ensures collision avoidance, maintains proximity between robots, and handles uncertainties (tracking errors and sensor noise). Tested in simulations and real-world experiments with up to four drones in a dense forest, it proved robust and reliable.

[ RBL ]

The University of Pennsylvania\u2019s 2025 President\u2019s Sustainability Prize winner Piotr Lazarek has developed a system that uses satellite data to pinpoint inefficiencies in farmers\u2019 fields, conducts real-time soil analysis with autonomous drones to understand why they occur, and generates precise fertilizer application maps. 
His startup Nirby aims to increase productivity in farm areas that are underperforming and reduce fertilizer in high-performing ones.[ University of Pennsylvania ]The production version of Atlas is a departure from the typical humanoid form factor, favoring industrial utility over human likeness. Intended for purposeful work in an industrial setting, Atlas has a form factor that signals its role as a machine rather than a companion or friendly assistant. Join two lead hardware engineers and our head of industrial design for a technical discussion of how key product requirements, ranging from passive thermal management to a modular architecture, dictated a bold new vision for a humanoid.[ Boston Dynamics ]Dr. Christian Hubicki gives a talk exploring the common themes of modern robotics research and his time on the reality competition show, \u201cSurvivor.\u201d[ Optimal Robotics Lab ]<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/content.knowledgehub.wiley.com\/engineering-challenges-and-component-strategies-in-humanoid-robotics-from-prototype-to-production\/\" target=\"_blank\" rel=\" noopener\" title=\"Overcoming Core Engineering Barriers in Humanoid Robotics Development\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/assets.rbl.ms\/65106483\/origin.png\" title=\"Overcoming Core Engineering Barriers in Humanoid Robotics Development\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/content.knowledgehub.wiley.com\/engineering-challenges-and-component-strategies-in-humanoid-robotics-from-prototype-to-production\/\" target=\"_blank\" rel=\" noopener\">Overcoming Core Engineering Barriers in Humanoid Robotics Development<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/content.knowledgehub.wiley.com\" target=\"_blank\" title=\"content.knowledgehub.wiley.com\">Murata 
Manufacturing Co.<\/a> on March 19, 2026 at 10:00 am <\/small><p>A technical examination of the sensing, motion control, power, and thermal challenges facing humanoid robotics engineers \u2014 with component-level design strategies for real-world deployment.What Attendees will LearnWhy motion control remains the hardest unsolved problem \u2014 Explore the modelling complexity, real-time feedback requirements, and sensor fusion demands of maintaining stable bipedal locomotion across dynamic environments.How sensing architectures enable perception and safety \u2014 Understand the role of inertial measurement units, force\/torque feedback, and tactile sensing in achieving reliable human-robot interaction and collision avoidance.What power and thermal constraints mean for system design \u2014 Examine the trade-offs in battery chemistry selection (LFP vs. NCA), DC\/DC converter topologies, and thermal protection strategies that determine operational endurance.How the industry is transitioning from prototype to mass production \u2014 Learn about the shift toward modular architectures, cost-driven component selection, and supply chain readiness projected for the late 2020s.Download this free whitepaper now!<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/legged-modular-robot\" target=\"_blank\" rel=\" noopener\" title=\"Video Friday: These Robots Were Born to Run\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/rolling-cannon-distant-cityscape-trees-and-water.gif?id=65282014&#038;width=980\" title=\"Video Friday: These Robots Were Born to Run\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/legged-modular-robot\" target=\"_blank\" rel=\" noopener\">Video Friday: These Robots Were Born to Run<\/a><\/span><div 
class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Evan Ackerman<\/a> on March 13, 2026 at 4:00 pm <\/small><p>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.ICRA 2026: 1\u20135 June 2026, VIENNAEnjoy today\u2019s videos! All legged robots deployed \u201cin the wild\u201d to date were given a body plan that was predefined by human designers and could not be redefined in situ. The manual and permanent nature of this process has resulted in very few species of agile terrestrial robots beyond familiar four-limbed forms. Here, we introduce highly athletic modular building blocks and show how they enable the automatic design and rapid assembly of novel agile robots that can \u201chit the ground running\u201d in unstructured outdoor environments.[ Northwestern University Center for Robotics and Biosystems ] [ Paper ] via [ Gizmodo ] If you were going to develop the ideal urban delivery robot more or less from scratch, it would be this.[ RIVR ]Don\u2019t get me wrong, there are some clever things going on here, but I\u2019m still having a lot of trouble seeing where the unique, sustainable value is for a humanoid robot performing these sorts of tasks.[ Figure ]One of those things that you don\u2019t really think about as a human, but which is actually pretty important.[ Paper ] via [ ETH Zurich ]We propose TRIP-Bag (Teleoperation, Recording, Intelligence in a Portable Bag), a portable, puppeteer-style teleoperation system fully contained within a commercial suitcase, as a practical solution for collecting high-fidelity manipulation data across varied settings.[ KIMLAB ]We propose an open-vocabulary semantic exploration system that enables robots to maintain consistent maps and efficiently locate (unseen) 
objects in semi-static real-world environments using LLM-guided reasoning.[ TUM ]That\u2019s it, folks. We have no need for real pandas anymore\u2014if we ever did in the first place. Be honest, what has a panda done for you lately?[ MagicLab ]RoboGuard is a general-purpose guardrail for ensuring the safety of LLM-enabled robots. RoboGuard is configured offline with high-level safety rules and a robot description, reasons about how these safety rules are best applied in the robot\u2019s context, then synthesizes a plan that maximally follows user preferences while ensuring safety.[ RoboGuard ]In this demonstration, a small team responds to a (simulated) radiation contamination leak at a real nuclear reactor facility. The team deploys their reconfigurable robot to accompany them through the facility. As the station is suddenly plunged into darkness, the robot\u2019s camera is hot-swapped to thermal so that it can continue on. Upon reaching the approximate location of the contamination, the team installs a Compton gamma-ray camera and a pan-tilt illuminating device. The robot autonomously steps forward, locates the radiation source, and points it out with the illuminator.[ Paper ]On March 6, 2025, the Robomechanics Lab at CMU was flooded with 4 feet of black water (i.e., mixed with sewage). We lost most of the robots in the lab, and as a tribute, my students put together this \u201cIn Memoriam\u201d video. 
It includes some previously unreleased robots and video clips.[ Carnegie Mellon University Robomechanics Lab ]There haven\u2019t been a lot of successful education robots, but here\u2019s one of them.[ Sphero ]The opening keynote from the 2025 Silicon Valley Humanoids Summit: \u201cInsights Into Disney\u2019s Robotic Character Platform,\u201d by Moritz Baecher, Director, Zurich Lab, Disney Research.[ Humanoids Summit ]<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/video-friday-robot-hand-artificial-muscles\" target=\"_blank\" rel=\" noopener\" title=\"Video Friday: A Robot Hand With Artificial Muscles and Tendons\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/robotic-hand-grasping-a-red-bull-can-against-a-dark-background.png?id=65162441&#038;width=980\" title=\"Video Friday: A Robot Hand With Artificial Muscles and Tendons\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/video-friday-robot-hand-artificial-muscles\" target=\"_blank\" rel=\" noopener\">Video Friday: A Robot Hand With Artificial Muscles and Tendons<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Evan Ackerman<\/a> on March 6, 2026 at 4:00 pm <\/small><p>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.ICRA 2026: 1\u20135 June 2026, VIENNAEnjoy today\u2019s videos! The functional replication and actuation of complex structures inspired by nature is a longstanding goal for humanity. 
Creating such complex structures combining soft and rigid features and actuating them with artificial muscles would further our understanding of natural kinematic structures. We printed a biomimetic hand in a single print process composed of a rigid skeleton, soft joint capsules, tendons, and printed touch sensors.[ Paper ] via [ SRL ]Two Boston Dynamics product managers talk about their favorite classic BD robots, and then I talk about mine.And this is Boston Dynamics\u2019 LittleDog, doing legged locomotion research 16 or so years ago in what I\u2019m pretty sure is Katie Byl\u2019s lab at UCSB.[ Boston Dynamics ]This is our latest work on the trajectory planning method for floating-based articulated robots, enabling the global path for searching in complex and cluttered environments.[ DRAGON Lab ]Thanks, Moju!OmniPlanner is a unified solution for exploration and inspection-path planning (as well as target reach) across aerial, ground, and underwater robots. It has been verified through extensive simulations and a multitude of field tests, including in underground mines, ballast water tanks, forests, university buildings, and submarine bunkers.[ NTNU ]Thanks, Kostas!In the ARISE project, the FZI Research Center for Information Technology and its international partners ETH Zurich, University of Zurich, University of Bern, and University of Basel took a major step toward future lunar missions by testing cooperative autonomous multirobot teams under outdoor conditions.[ FZI ]Welcome to the future, where there are no other humans.[ Zhejiang Humanoid ]This is our latest work on robotic fish, and it\u2019s also the first underwater robot from DRAGON Lab. 
[ DRAGON Lab ]Thanks, Moju!Watch this one simple trick to make humanoid robots cheaper and safer![ Zhejiang Humanoid ]Gugusse and the Automaton is an 1897 French film by Georges M\u00e9li\u00e8s featuring a humanoid robot in a depiction that\u2019s nearly as realistic as some of the humanoid promo videos we\u2019ve seen lately.[ Library of Congress ] via [ Gizmodo ]At Agility, we create automated solutions for the hardest work. We\u2019re incredibly proud of how far we\u2019ve come, and can\u2019t wait to show you what\u2019s next.[ Agility ]Kamel Saidi, robotics program manager at the National Institute of Standards and Technology (NIST), on how performance standards can pave the way for humanoid adoption.[ Humanoids Summit ]Anca Dragan is no stranger to Waymo. She worked with us for six years while also at UC Berkeley and now at Google DeepMind. Her focus on making AI safer helped Waymo as it launched commercially. In this final episode of our season, Anca describes how her work enables AI agents to work fluently with people, based on human goals and values.[ Waymo Podcast ]This UPenn GRASP SFI Seminar is by Junyao Shi: \u201cUnlocking Generalist Robots with Human Data and Foundation Models.\u201dBuilding general-purpose robots remains fundamentally constrained by data scarcity and labor-intensive engineering. Unlike vision and language, robotics lacks large, diverse datasets that span tasks, environments, and embodiments, thus limiting both scalability and generalization. 
This talk explores how human data and foundation models trained at scale can help overcome these bottlenecks.[ UPenn ]<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/military-drones-self-driving-cars\" target=\"_blank\" rel=\" noopener\" title=\"What Military Drones Can Teach Self-Driving Cars\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/silhouette-from-the-back-of-an-adults-head-as-they-look-at-two-monitors-one-screen-displays-a-drone-and-the-other-shows-self-d.jpg?id=65098234&#038;width=980\" title=\"What Military Drones Can Teach Self-Driving Cars\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/military-drones-self-driving-cars\" target=\"_blank\" rel=\" noopener\">What Military Drones Can Teach Self-Driving Cars<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Missy Cummings<\/a> on March 2, 2026 at 12:00 pm <\/small><p>Self-driving cars often struggle with situations that are commonplace for human drivers. When confronted with construction zones, school buses, power outages, or misbehaving pedestrians, these vehicles often behave unpredictably, leading to crashes or freezing events, causing significant disruption to local traffic and possibly blocking first responders from doing their jobs. Because self-driving cars cannot successfully handle such routine problems, self-driving companies use human babysitters to remotely supervise them and intervene when necessary.This idea\u2014humans supervising autonomous vehicles from a distance\u2014is not new. The U.S. military has been doing it since the 1980s with unmanned aerial vehicles (UAVs). 
In those early years, the military experienced numerous accidents due to poorly designed control stations, lack of training, and communication delays.As a Navy fighter pilot in the 1990s, I was one of the first researchers to examine how to improve the UAV remote supervision interfaces. The thousands of hours I and others have spent working on and observing these systems generated a deep body of knowledge about how to safely manage remote operations. With recent revelations that U.S. commercial self-driving car remote operations are handled by operators in the Philippines, it is clear that self-driving companies have not learned the hard-earned military lessons that would promote safer use of self-driving cars today.While stationed in the Western Pacific during the Gulf War, I spent a significant amount of time in air operations centers, learning how military strikes were planned, implemented and then replanned when the original plan inevitably fell apart. After obtaining my PhD, I leveraged this experience to begin research on the remote control of UAVs for all three branches of the U.S. military. Sitting shoulder-to-shoulder in tiny trailers with operators flying UAVs in local exercises or from 4000 miles away, my job was to learn about the pain points for the remote operators as well as identify possible improvements as they executed supervisory control over UAVs that might be flying halfway around the world.Supervisory control refers to situations where humans monitor and support autonomous systems, stepping in when needed. For self-driving cars, this oversight can take several forms. The first is teleoperation, where a human remotely controls the car\u2019s speed and steering from afar. Operators sit at a console with a steering wheel and pedals, similar to a racing simulator. Because this method relies on real-time control, it is extremely sensitive to communication delays.The second form of supervisory control is remote assistance. 
Instead of driving the car in real time, a human gives higher-level guidance. For example, an operator might click a path on a map (called laying \u201cbreadcrumbs\u201d) to show the car where to go, or interpret information the AI cannot understand, such as hand signals from a construction worker. This method tolerates more delay than teleoperation but is still time-sensitive.
Five Lessons From Military Drone Operations
Over 35 years of UAV operations, the military consistently encountered five major challenges, which provide valuable lessons for self-driving cars.
Latency
Latency\u2014delays in sending and receiving information due to distance or poor network quality\u2014is the single most important challenge for remote vehicle control. Humans also have their own built-in delay: neuromuscular lag. Even under perfect conditions, people cannot reliably respond to new information in less than 200\u2013500 milliseconds. In remote operations, where communication lag already exists, this makes real-time control even more difficult.In early drone operations, U.S. Air Force pilots in Las Vegas (the primary U.S. UAV operations center) attempted to take off and land drones in the Middle East using teleoperation. With at least a two-second delay between command and response, the accident rate was 16 times that of fighter jets conducting the same missions. The military switched to local line-of-sight operators and eventually to fully automated takeoffs and landings. When I interviewed the pilots of these UAVs, they all stressed how difficult it was to control the aircraft with significant time lag.Self-driving car companies typically rely on cellphone networks to deliver commands. These networks are unreliable in cities and prone to delays. This is one reason many companies prefer remote assistance instead of full teleoperation. But even remote assistance can go wrong. 
In one incident, a Waymo operator instructed a car to turn left when a traffic light appeared yellow in the remote video feed\u2014but the network latency meant that the light had already turned red in the real world. After moving its remote operations center from the U.S. to the Philippines, Waymo\u2019s latency increased even further. It is imperative that control not be so remote, both to resolve the latency issue and to increase oversight of security vulnerabilities.
Workstation Design
Poor interface design has caused many drone accidents. The military learned the hard way that confusing controls, difficult-to-read displays, and unclear autonomy modes can have disastrous consequences. Depending on the specific UAV platform, the FAA attributed between 20% and 100% of Army and Air Force UAV crashes caused by human error through 2004 to poor interface design.
UAV crashes (1986-2004) caused by human factors problems, including poor interface and procedure design. The two design categories do not sum to 100% because both factors could be present in an accident.
Platform | Human Factors | Interface Design | Procedure Design
Army Hunter | 47% | 20% | 20%
Army Shadow | 21% | 80% | 40%
Air Force Predator | 67% | 38% | 75%
Air Force Global Hawk | 33% | 100% | 0%
Many UAV crashes have been caused by poorly designed human control systems. In one case, buttons were placed on the controllers such that it was relatively easy to accidentally shut off the engine instead of firing a missile, and this design led to accidents in which remote operators inadvertently did exactly that. The self-driving industry reveals hints of comparable issues. Some autonomous shuttles use off-the-shelf gaming controllers, which\u2014while inexpensive\u2014were never designed for vehicle control. The off-label use of such controllers can lead to mode confusion, which was a factor in a recent shuttle crash. 
Significant human-in-the-loop testing is needed to avoid such problems, not only prior to system deployment, but also after major software upgrades.
Operator Workload
Drone missions typically include long periods of surveillance and information gathering, occasionally ending with a missile strike. These missions can sometimes last for days; for example, while the military waits for the person of interest to emerge from a building. As a result, the remote operators experience extreme swings in workload: sometimes overwhelming intensity, sometimes crushing boredom. Both conditions can lead to errors.When operators teleoperate drones, workload is high and fatigue can quickly set in. But when onboard autonomy handles most of the work, operators can become bored, complacent, and less alert. This pattern is well documented in UAV research.Self-driving car operators are likely experiencing similar issues for tasks ranging from interpreting confusing signs to helping cars escape dead ends. In simple scenarios, operators may be bored; in emergencies\u2014like driving into a flood zone or responding during a citywide power outage\u2014they can become quickly overwhelmed.The military has tried for years to have one person supervise many drones at once, because it is far more cost effective. However, cognitive switching costs (regaining awareness of a situation after switching control between drones) result in workload spikes and high stress. That, coupled with increasingly complex interfaces and communication delays, has made this extremely difficult.Self-driving car companies likely face the same roadblocks. They will need to model operator workloads and be able to reliably predict what staffing should be and how many vehicles a single person can effectively supervise, especially during emergency operations. 
If every self-driving car turns out to need a dedicated human to pay close attention, such operations would no longer be cost-effective.
Training
Early drone programs lacked formal training requirements, with training programs designed by pilots, for pilots. Unfortunately, supervising a drone is more akin to air traffic control than actually flying an aircraft, so the military often placed drone operators in critical roles with inadequate preparation. This caused many accidents. Only years later did the military conduct a proper analysis of the knowledge, skills, and abilities needed to conduct safe remote operations, and change its training programs accordingly.Self-driving companies do not publicly share their training standards, and no regulations currently govern the qualifications for remote operators. On-road safety depends heavily on these operators, yet very little is known about how they are selected or taught. Commercial aviation dispatchers, whose role is very similar to that of self-driving remote operators, are required to have formal training overseen by the FAA, and we should hold commercial self-driving companies to similar standards.
Contingency Planning
Aviation has strong protocols for emergencies, including predefined procedures for lost communication, backup ground control stations, and highly reliable onboard behaviors when autonomy fails. In the military, drones may fly themselves to safe areas or land autonomously if contact is lost. Systems are designed with cybersecurity threats\u2014like GPS spoofing\u2014in mind.Self-driving cars appear far less prepared. The 2025 San Francisco power outage left Waymo vehicles frozen in traffic lanes, blocking first responders and creating hazards. These vehicles are supposed to perform \u201cminimum-risk maneuvers\u201d such as pulling to the side\u2014but many of them didn\u2019t. 
This suggests gaps in contingency planning and basic fail-safe design.The history of military drone operations offers crucial lessons for the self-driving car industry. Decades of experience show that remote supervision demands extremely low latency, carefully designed control stations, manageable operator workload, rigorous, well-designed training programs, and strong contingency planning.Self-driving companies appear to be repeating many of the early mistakes made in drone programs. Remote operations are treated as a support feature rather than a mission-critical safety system. But as long as AI struggles with uncertainty, which will be the case for the foreseeable future, remote human supervision will remain essential. The military learned these lessons through painful trial and error, yet the self-driving community appears to be ignoring them. The self-driving industry has the chance\u2014and the responsibility\u2014to learn from our mistakes in combat settings before it harms road users everywhere.A full paper on this topic will be presented at the 2026 IEEE International Conference on Human-Machine Systems (ICHMS) meeting in Singapore in July.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/quadruped-farming-robots\" target=\"_blank\" rel=\" noopener\" title=\"Video Friday: Robot Dogs Haul Produce From the Field\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/quadruped-robots-with-crates-on-their-backs-carry-produce-on-a-path-amidst-lush-leafy-green-crops.png?id=65095903&#038;width=980\" title=\"Video Friday: Robot Dogs Haul Produce From the Field\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/quadruped-farming-robots\" target=\"_blank\" rel=\" noopener\">Video Friday: Robot Dogs Haul Produce From the 
Field<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Evan Ackerman<\/a> on February 27, 2026 at 6:00 pm <\/small><p>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.ICRA 2026: 1\u20135 June 2026, VIENNAEnjoy today\u2019s videos! Our robots Lynx M20 help transport harvested crops in mountainous farmland\u2014tackling the rural \u201clast mile\u201d logistics challenge.[ Deep Robotics ]Once again, I would point out that now that we are reaching peak humanoid robots doing humanoid things, we are inevitably about to see humanoid robots doing nonhumanoid things.[ Unitree ]In a study, a team of researchers from the Max Planck Institute for Intelligent Systems, the University of Michigan, and Cornell University show that groups of magnetic microrobots can generate fluidic forces strong enough to rotate objects in different directions without touching them. These microrobot swarms can turn gear systems, rotate objects much larger than the robots themselves, assemble structures on their own, and even pull in or push away many small objects.[ Science ] via [ Max Planck Institute ]Bipedal\u2014or two-legged\u2014autonomous robots can be quite agile. This makes them useful for performing tasks on uneven terrain, such as carrying equipment through outdoor environments or performing maintenance on an oceangoing ship. However, unstable or unpredictable conditions also increase the possibility of a robot wipeout. Until now, there\u2019s been a significant lack of research into how a robot recovers when its direction shifts\u2014for example, a robot losing balance when a truck makes a quick turn. 
The team aims to fix this research gap.[ Georgia Tech ]Robotics is about controlling energy, motion, and uncertainty in the real world.[ Carnegie Mellon University ]Delicious dinner cooked by our robot Robody. We\u2019ve asked our investors to speak about why they\u2019re along for the ride.[ Devanthro ]Tilt-rotor aerial robots enable omnidirectional maneuvering through thrust vectoring, but introduce significant control challenges due to the strong coupling between joint and rotor dynamics. This work investigates reinforcement learning for omnidirectional aerial motion control on overactuated tiltable quadrotors that prioritizes robustness and agility.[ Dragon Lab ]At the [Carnegie Mellon University] Robotic Innovation Center\u2019s 75,000-gallon water tank, members of the TartanAUV student group worked to further develop their autonomous underwater vehicle (AUV) called Osprey. The team, which takes part in the annual RoboSub competition sponsored by the U.S. Office of Naval Research, is composed primarily of undergraduate engineering and robotics students.[ Carnegie Mellon University ]Sure seems like the only person who would want a robot dog is a person who does not in fact want a dog.Compact size, industrial capability. Maximum torque of 90 N\u00b7m, over 4 hours of no-load runtime, IP54 rainproof design. With a 15-kg payload, range exceeds 13 km. Open secondary development, empowering industry applications.[ Unitree ]If your robot video includes tasty baked goods, it will be included in Video Friday.[ QB Robotics ]Astorino is a 6-axis educational robot created for practical and affordable teaching of robotics in schools and beyond. It has been created with 3D printing, so it allows for experimentation and the possible addition of parts. 
With its design and programming, it replicates the actions of industrial robots, giving students the necessary skills for future work.[ Astorino by Kawasaki ]We need more autonomous driving datasets that accurately reflect how sucky driving can be a lot of the time.[ ASRL ]This Carnegie Mellon University Robotics Institute Seminar is by CMU\u2019s own Victoria Webster-Wood, on \u201cRobots as Models for Biology and Biology as Materials for Robots.\u201dIn the last century, it was common to envision robots as shining metal structures with rigid and halting motion. This imagery is in contrast to the fluid and organic motion of living organisms that inhabit our natural world. The adaptability, complex control, and advanced learning capabilities observed in animals are not yet fully understood, and therefore have not been fully captured by current robotic systems. Furthermore, many of the mechanical properties and control capabilities seen in animals have yet to be achieved in robotic platforms. 
In this talk, I will share an interdisciplinary research vision for robots as models for neuroscience and biology as materials for robots.[ CMU RI ]<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/perseverance-mars-rover-autonomous-driving\" target=\"_blank\" rel=\" noopener\" title=\"Perseverance Smashes Autonomous Driving Record on Mars\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/a-self-portrait-captured-by-nasa-s-perseverance-rover-while-traversing-mars-rocky-surface.jpg?id=65007226&#038;width=980\" title=\"Perseverance Smashes Autonomous Driving Record on Mars\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/perseverance-mars-rover-autonomous-driving\" target=\"_blank\" rel=\" noopener\">Perseverance Smashes Autonomous Driving Record on Mars<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Michelle Hampson<\/a> on February 25, 2026 at 3:00 pm <\/small><p>This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.In past missions to Mars, like with the Curiosity and Opportunity rovers, the robots relied mostly on human instructions from millions of miles away in order to safely navigate the Martian landscape. The Perseverance rover, on the other hand, has zipped across the alien, boulder-ridden land almost completely autonomously, smashing previous records for autonomous driving on Mars. Whereas the Curiosity rover completed about 6.2 percent of its travels autonomously, Perseverance had completed about 90 percent of its travels autonomously, as of its 1,312th Martian day since landing (28 October 2024). 
Perseverance was able to accomplish such a feat\u2014using remarkably little computing power\u2014thanks to its specially designed autonomous driving algorithm, Enhanced Autonomous Navigation, or ENav. The full details on ENav\u2019s inner workings and how well it has performed on Mars are described in a study published in IEEE Transactions on Field Robotics in November 2025. There are some advantages, but also some serious challenges, when it comes to autonomous navigation on Mars. On the plus side, almost nothing on the planet moves. Rocks and gravel slopes\u2014while formidable obstacles\u2014remain stationary, offering rovers consistency and predictability in their calculations and pathfinding. On the other hand, Mars is in large part uncharted terrain. \u201cThis enormous uncertainty is the major challenge,\u201d says Masahiro \u201cHiro\u201d Ono, supervisor of the Robotic Surface Mobility Group at NASA\u2019s Jet Propulsion Laboratory, who helped develop ENav.
Creating a Highly Autonomous Rover
While some images from the space-borne Mars Reconnaissance Orbiter exist, these are usually not high enough resolution for ground-based navigation by a rover. In December, NASA engineers performed the first test of a navigation technique that uses a model based on Anthropic\u2019s AI to analyze MRO images and generate waypoints\u2014the coordinates used to guide the rover\u2019s path\u2014for more complete automation.
RELATED: NASA Let AI Drive the Perseverance Rover
But in the majority of today\u2019s navigation, Perseverance must rely on images the rover itself takes, analyze these to assess thousands of different paths, and choose the right route that won\u2019t end in its own demise. The kicker? 
It must do so with the equivalent computing capacity of an iMac G3, an Apple computer sold in the late 1990s. The rover\u2019s processor must undergo radiation hardening, a process that makes it resilient to the extreme levels of solar radiation and cosmic rays experienced on Mars. Although other radiation-hardened CPUs with more computing power were available at the time of Perseverance\u2019s development, the one used had proven reliable in the harsh conditions of outer space. By reusing hardware from previous missions\u2014the same CPU was used in Curiosity\u2014NASA can reduce costs while minimizing risk. Given its limited computing resources, the ENav algorithm was strategically designed to do the heaviest computing only when driving on challenging terrains. It works by analyzing images of its surroundings and assessing about 1,700 possible paths forward, typically within 6 meters of the rover\u2019s current position. Weighing factors such as travel time and terrain roughness, it ranks the possible paths. Finally, it runs a computationally heavy collision-checking algorithm, called ACE (approximate clearance estimation), on only a handful of top-ranked potential paths.   As of October 2024, Perseverance has driven more than 30 kilometers (18.65 miles) and collected 24 samples of rock and regolith. Source:  JPL-Caltech\/ASU\/MSSS\/NASAExploring the Red Planet with ENavPerseverance landed on Mars on 18 February 2021. In their study, Ono and his colleagues describe how the rover was initially deployed with strong human navigation oversight during its first 64 Martian days on the Red Planet, but then went on to predominantly use ENav to travel to one of the major exploration targets: the delta formed by an ancient river that once flowed into Jezero Crater billions of years ago. 
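The tiered scheme the study describes\u2014rank many candidate paths with cheap heuristics, then run the expensive clearance check on only the best few\u2014can be sketched roughly as follows. This is an illustrative sketch only, not NASA\u2019s actual ENav code; the function names, scoring, and `top_k` cutoff are hypothetical placeholders.

```python
# Illustrative sketch of ENav-style tiered path evaluation.
# terrain_cost and ace_is_safe are hypothetical stand-ins for the
# cheap ranking heuristics (travel time, terrain roughness) and the
# computationally heavy ACE collision check, respectively.

def choose_path(candidate_paths, terrain_cost, ace_is_safe, top_k=5):
    """Rank all candidates with a cheap cost function, then run the
    expensive safety check only on the top_k best-ranked paths."""
    ranked = sorted(candidate_paths, key=terrain_cost)
    for path in ranked[:top_k]:
        if ace_is_safe(path):  # heavy step, run at most top_k times
            return path
    return None  # no safe path found quickly: stop and replan
```

In this structure, all of the roughly 1,700 candidate paths pass through the cheap ranking each planning cycle, while the costly clearance estimate runs at most `top_k` times\u2014which is how a late-1990s-class processor can keep up with driving.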
Scientists believe it could be a prime spot for finding evidence of past alien life, if life ever existed on Mars.After a brief exploration of an area southwest of its landing site, Perseverance jetted counterclockwise around sand dunes toward the ancient river delta at a crisp pace, averaging 201 meters per Martian day. (It\u2019s too cold for the rover to travel at night.) Over the course of just 24 Martian days of driving, the rover traveled about 5 kilometers into the foothill of the delta. 95 percent of all driving that month was performed using the autonomous driving mode, resulting in an unprecedented amount of autonomous driving on Mars.Past rovers, such as Curiosity, had to stop and \u201cthink\u201d about their paths before moving forward. \u201cThat was the main speed bump for Curiosity, why it was so slow to drive autonomously,\u201d Ono explains. In contrast, Perseverance is able to think and drive at the same time. \u201cSometimes [Perseverance] has to stop and think, particularly when it cannot figure out a safe path quickly. But most of the time, particularly on easy terrains, it can keep driving without stopping,\u201d Ono says. \u201cThat made its autonomous driving an order of magnitude faster.\u201dOpportunity held the previous record for autonomous driving on Mars, traveling 109 meters in a single Martian day. But on 3 April 2023, Perseverance set a new record by driving 331.74 meters autonomously (and 347.69 meters in total) in a single Martian day. Ono says that fine-tuning the ENav algorithm took a lot of work, but he is happy with its performance. He also emphasizes that efforts to continue advancing autonomous navigation are critical if humans want to continue exploring even deeper into space, where Earthly communication with rovers and other spacecraft will become increasingly difficult.\u201cThe automation of the space systems is unstoppable direction that we have to go if we want to explore deeper in space,\u201d Ono says. 
\u201cThis is the direction that we must go to push the boundaries and frontiers of space exploration.\u201dThis article was updated on 27 February to clarify NASA\u2019s reasoning for selecting the CPU used in the Perseverance rover.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/robot-martial-arts\" target=\"_blank\" rel=\" noopener\" title=\"Video Friday: Humanoid Robots Celebrate Spring\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/five-humanoid-robots-in-red-vests-perform-synchronized-movements-on-a-shiny-stage.png?id=64966934&#038;width=980\" title=\"Video Friday: Humanoid Robots Celebrate Spring\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/robot-martial-arts\" target=\"_blank\" rel=\" noopener\">Video Friday: Humanoid Robots Celebrate Spring<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Evan Ackerman<\/a> on February 20, 2026 at 6:00 pm <\/small><p>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.ICRA 2026: 1\u20135 June 2026, VIENNAEnjoy today\u2019s videos! So humanoid robots are nearing peak human performance. I would point out, though, that this is likely very far from peak robot performance, which has yet to be effectively exploited, because it requires more than just copying humans.[ Unitree ]\u201cThe Street Dance of China\u201d: Turning lightness into gravity, and rhythm into impact.This is a head-on collision between metal and beats. 
This Chinese New Year, watch PNDbotics Adam bring the heat with a difference.[ PNDbotics ]You had me at robot pandas.[ MagicLab ]NASA\u2019s Perseverance rover can now precisely determine its own location on Mars without waiting for human help from Earth. This is possible thanks to a new technology called Mars global localization. This technology rapidly compares panoramic images from the rover\u2019s navigation cameras with onboard orbital terrain maps. It\u2019s done with an algorithm that runs on the rover\u2019s helicopter base station processor, which was originally used to communicate with the Ingenuity Mars helicopter. In a few minutes, the algorithm can pinpoint Perseverance\u2019s position to within about 10 inches (25 centimeters). The technology will help the rover drive farther autonomously and keep exploring. [ NASA Jet Propulsion Laboratory ]Legs? Where we\u2019re going, we don\u2019t need legs![ Paper ]This is a bit of a tangent from robotics, but it gets a pass because of the cute jumping spider footage.[ Berkeley Lab ]Corvus One for Cold Chain is engineered to live and operate in freezer environments permanently, down to \u201320 \u00b0F, while maintaining full-flight and barcode-scanning performance.I am sure there is an excellent reason for putting a cold-storage facility in the Mojave Desert.[ Corvus Robotics ]The video documents the current progress made in the picking rate of the Shiva robot when picking strawberries. It first shows the previous status, then the further development, and finally the field test.[ DFKI ]Data powers an organization\u2019s digital transformation, and ST Engineering MRAS is leveraging Spot to get a full view of critical equipment and a facility. 
Working autonomously, Spot collects information about machine health\u2014and now, thanks to an integration of the Leica BLK ARC for reality capture, detailed and accurate point-cloud data for their digital twin.[ Boston Dynamics ]The title of this video is \u201cGet out and have fun!\u201d Is that mostly what humanoid robots are good for right now, pretty much...?[ Engine AI ]Astorino is a modern six-axis robot based on 3D-printing technology. Programmable in AS language, the robot facilitates the preparation of classes with ready-made teaching materials, is easy both to use and to repair, and gives the opportunity to learn and make mistakes without fear of breaking it.[ Kawasaki ]Can I get this in my living room?[ Yaskawa ]What does it mean to build a humanoid robot in seven months, and the next one in just five? This documentary takes you behind the scenes at Humanoid, a U.K.-based AI and robotics company building reliable, safe, and  helpful humanoid robots. You\u2019ll hear directly from our engineering, hardware, product, and other teams as they share their perspectives on the journey of turning physical AI into reality.[ Humanoid ]This IROS 2025 keynote is from Tim Chung\u2014now at Microsoft\u2014on catalyzing the future of human, robot, and AI agent teams in the physical world.The convergence of technologies\u2014from foundation AI models to diverse sensors and actuators to ubiquitous connectivity\u2014is transforming the nature of interactions in the physical and digital world. People have accelerated their collaborative connections and productivity through digital and immersive technologies, no longer limited by geography or language or access. 
Humans have also leveraged and interacted with AI in many different forms, with the advent of hyperscale AI models (that is, large language models) forever changing (and at an ever-astonishing pace) the nature of human\u2013AI teams, realized in this era of the AI \u201ccopilot.\u201d Similarly, robotics and automation technologies now afford greater opportunities to work with and\/or near humans, allowing for increasingly collaborative physical robots to dramatically impact real-world activities. It is the compounding effect of enabling all three capabilities, each complementary to one another in valuable ways, and we envision the triad formed by human\u2013robot\u2013AI teams as revolutionizing the future of society, the economy, and technology.[ IROS 2025 ]This GRASP SFI talk is by Chris Paxton at Agility Robotics: \u201cHow Close Are We to Generalist Humanoid Robots?\u201dWith billions of dollars of funding pouring into robotics, general-purpose humanoid robots seem closer than ever. And certainly it feels like the pace of robotics is faster than ever, with multiple companies beginning large-scale deployments of humanoid robots. In this talk, I\u2019ll go over the challenges still facing scaling robot learning, looking at insights from a year of discussions with researchers all over the world.[ University of Pennsylvania GRASP Laboratory ]This week\u2019s Carnegie Mellon University Robotics Institute Seminar is from Jitendra Malik at University of California, Berkeley: \u201cRobot Learning, With Inspiration From Child Development.\u201dFor intelligent robots to become ubiquitous, we need to \u201csolve\u201d locomotion, navigation, and manipulation at sufficient reliability in widely varying environments. In locomotion, we now have demonstrations of humanoid walking in a variety of challenging environments. 
In navigation, we pursued the task of \u201cGo to Any Thing\u201d: A robot, on entering  a newly rented Airbnb, should be able to find objects such as TV sets or potted plants. RL in simulation and sim-to-real have been workhorse technologies for us, assisted by a few technical innovations. I will sketch promising directions for future work.[ Carnegie Mellon University Robotics Institute ]<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/olympics-curling-robot-ai\" target=\"_blank\" rel=\" noopener\" title=\"Tech Is Taking Over Olympic Curling\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/curling-players-sweeping-a-red-stone-on-ice-motion-blur-emphasizes-speed-and-action.jpg?id=64953312&#038;width=980\" title=\"Tech Is Taking Over Olympic Curling\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/olympics-curling-robot-ai\" target=\"_blank\" rel=\" noopener\">Tech Is Taking Over Olympic Curling<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Elie Dolgin<\/a> on February 18, 2026 at 3:00 pm <\/small><p>At this year\u2019s Winter Olympics in Italy, the controversy began with a fingertip.A disputed double-touch\u2014whether a curler had brushed a moving stone twice\u2014sparked protests, profanity-laced exchanges, and heated debate about sportsmanship. 
In a game that prides itself on mutual trust and the idea of competition as a shared test of skill, even the suggestion of impropriety can ripple far beyond a single end.But if a double-touch can shake the sport, what happens when the controversy isn\u2019t about a fingertip but an algorithm?That\u2019s the question shadowing the rise of analytics driven by machine learning and a new breed of AI-powered robots that can throw stones, read the ice, and calculate strategy with machine precision.RELATED: Milan-Cortina Winter Olympics Debut Next-Generation Sport SmartsSome of these robots, such as \u201cCurly,\u201d have already toppled elite human opponents in head-to-head competitions. Others, engineered either to replicate the biomechanics of human shot delivery or to fire stones consistently with repeatable speed and rotation, are transforming the sport by dissecting technique and strategy with a level of rigor no coach with a stopwatch could match.  Seen here in action, the two-part robot system named Curly made its debut in 2018 ahead of that year\u2019s Paralympic Winter Games in Pyeongchang.TUBerlinTV\/YouTube\u201cThe amount of innovation I\u2019m seeing is just tremendous,\u201d says Glenn Paulley, a retired computer scientist who now runs Throwing Rocks Consulting Services, where he coaches curlers and advises teams on analytics.Fueled by investments from governments and sporting bodies around the world, the pursuit of a competitive edge has escalated into a data-driven push for marginal gains ahead of each Olympic cycle. \u201cThey\u2019re trying like crazy to elevate their national team programs,\u201d Paulley says, \u201cand they\u2019re doing it in every way possible.\u201d By the time medals are handed out in Cortina d\u2019Ampezzo this weekend, the imprint of this full-throttle tech offensive could be etched into every sheet of ice.Yet, as algorithms begin suggesting shots, the contours of fair play blur. 
Regulators and coaches alike are grappling with where to draw the line. And as top curlers lean more into AI and robotic systems, some fear the loss of something fundamental: the quiet, hard-earned feel for ice that separates veterans from novices.\u201cIt\u2019s a big debate!\u201d says Emily Zacharias, a former elite curler from Manitoba who captured gold representing Canada at the 2020 World Junior Curling Championships.Three decades ago, Garry Kasparov sat across from IBM\u2019s Deep Blue and discovered that even the most cerebral of games could be unsettled by silicon. Curling, long called \u201cchess on ice,\u201d may now be entering its own version of that reckoning.Can New Tech Comply With the \u201cSpirit of Curling\u201d?Curling has been at this kind of crossroads before. A decade back, the sweeping-fabric controversy known as \u201cBroomgate\u201d triggered accusations of technological doping, a dispute that tore at the heart of the sport\u2019s ethos of trust and bonhomie.The World Curling Federation responded by clamping down on brush materials, but AI now poses a broader challenge. It is not just a better broom but a decision engine, capable of shifting authority from a player\u2019s judgment in the \u201chouse\u201d to a model running in the cloud.  
The six-legged \u201chexapod\u201d curling robot is displayed at the World Robot Conference 2022 in Beijing, where that year\u2019s Olympic Games were also held.Anna Ratkoglo\/Sputnik\/APIt\u2019s a prospect that unsettles some athletes and ethicists, who worry about what gets lost as optimization tightens its grip on a sport long governed by the so-called Spirit of Curling, an unwritten code of integrity, fairness, and respect.\u201cWe\u2019re at a point now where just about everything that we used to hold up as uniquely human is now being eroded by technology\u2014and we feel a loss,\u201d says Jason Millar, who runs the Canadian Robotics and AI Ethical Design Lab at the University of Ottawa.\u201cThe AI doesn\u2019t care,\u201d he adds. \u201cThere\u2019s no \u2018spirit\u2019 there.\u201dBuilding Rock-Solid Curling RobotsThe Curly robot first made waves in 2018 when, ahead of that year\u2019s Paralympic Winter Games in Pyeongchang, engineers at Korea University, in Seoul, unveiled the AI-powered device\u2014or, rather, two coordinated devices, a pair of \u201cskip\u201d and \u201cthrower\u201d units, designed to read the ice and deliver stones.Driven by a physics-based simulator and an adaptive deep-reinforcement-learning framework, the robot didn\u2019t simply replay preprogrammed shots. It learned from its own misses, updated its aim based on the distance gaps between intended and actual stone positions, and factored in the cumulative wear of pebbled ice as a match unfolded.That capacity was put to the test in a series of mini-games against top-ranked Korean athletes. As reported in the journal Science Robotics, Curly started slow, dropping the opening match as it calibrated to the live ice. 
But it then went on to win the next three contests, demonstrating what its creators called \u201chuman-level performance\u201d under real-world conditions.The next Winter Olympics\u2014the Beijing 2022 Games\u2014then brought a more agile machine: a \u201chexapod\u201d curling robot built to walk, align, and throw like a human curler.  With six legs, the hexapod robot can act more like a human curler when launching the stone, putting a new spin on curling-robot tech.FlyingDumplings\/YouTubeWith its six-legged gait for stable traction and flexibility on the ice, the robot could pivot at the \u201chack,\u201d the rubber foothold curlers use to launch their delivery. From there, the hexapod set its angle, kicked off, and glided on a skateboard-like undercarriage before releasing the stone, imparting competition-level spin.Equipped with lidar and cameras, the robot scanned the sheet to map stone positions and fed those data into software that calculated collision paths and solved for the precise release parameters needed to execute a chosen strategy.Curling Bots Leave Broom for ImprovementFor all the technical prowess of Curly and the hexapod, one stubborn constraint remains: No robot can sweep\u2014at least not yet.There are no Roomba-like machines flanking the stone, frantically brushing to extend its travel or hold its line. Once released, the robot\u2019s shot is fate, untouched by the vigorous, broom-flailing choreography that so often determines whether a stone bites the button or drifts wide.\u201cThese robots are leaving out a huge chunk of potential that humans are bringing to the game,\u201d says Steven Passmore, a human-movement scientist at the University of Manitoba in Winnipeg who, together with Zacharias, coauthored a comprehensive review of the scientific literature on curling.At the time of their data cutoff, in 2021, they found nearly two dozen published studies about robotics, AI, and emerging tech in the sport. 
But as Zacharias points out, the most sophisticated tools shaping elite play often never appear in academic journals, developed behind closed doors and closely guarded as competitive secrets.For her part, Zacharias\u2014who competed at four Canadian women\u2019s curling championships between 2021 and 2024\u2014says she never once practiced against a robot. But she has trained with a rock launcher, a mechanized delivery system that fires stones at precisely calibrated speeds and rotations, over and over.By standardizing the throw, the device allows athletes to isolate how different sweeping techniques, brush-head fabrics, or ice temperatures alter a stone\u2019s path, explains Paulley. \u201cIt means you can run repeated experiments in order to test the impact of different variables,\u201d he says. \u201cAnd in curling, there are a lot of variables.\u201dCutting-Edge Tech Helps Athletes TrainIn Japan, all these technologies and more are being explored in a government-backed initiative called Curling of the Future.The program brings together university engineers, sporting agencies, and elite athletes to prototype delivery robots and sweep-assist machines, along with AI strategy engines, instrumented \u201csmart stones,\u201d and rock-launcher systems for controlled training. \u201cThe core objective is elite performance: improving decision-making and the quality of training so that Japan can strengthen its competitiveness in international competition,\u201d says Yoshinari Takegawa, an information scientist at the Future University Hakodate who is co-leading the project.  Dylan Rusnak, a kinesiology student at Red Deer Polytechnic, contributed to the project by developing a VR system for curling. Rusnak wears a Meta Quest headset [left] while demonstrating the system, which shows athletes immersive views of the rink [right]. Red Deer PolytechnicThe technology push isn\u2019t confined to Olympic play either. 
At the Paralympics next month, the Canadian national wheelchair curling squad will be coming primed with training sessions inside a full virtual replica of the Cortina Curling Olympic Stadium, courtesy of a VR system developed by mechanical engineer Jennifer Dornstauder and her students at Red Deer Polytechnic in Alberta. The setup drops athletes into an immersive curling rink via a Meta Quest headset, where they can look down and see virtual renderings of their legs, wheelchair, throwing stick, stones, and the ice surface beneath them.According to Mick Lizmore, head coach of Canada\u2019s National Wheelchair Curling Program, his team has used the VR to help visualize the venue where they will be competing and for group tactical training, even when they can\u2019t meet together in person. Beyond sharpening elite preparation, Dornstauder says, the same tool should help expand access to wheelchair curling for people with disabilities who face mobility challenges or limited ice availability.\u201cVR is just this amazing tool that is almost designed for getting around these barriers,\u201d she says.Will Tech Change Curling?Many of the technologies entering curling are, in many ways, benign\u2014tools for analysis, accessibility, and incremental refinement rather than wholesale disruption. A rock launcher standardizes practice. A VR headset extends rehearsal beyond the rink. A strategy engine offers probabilities, not ultimatums.Taken together, however, they reveal how thoroughly digital systems are seeping into every layer of the sport.AI-powered sparring machines tuned to mimic a rival team\u2019s tendencies, and thus capable of playing out fully simulated preparatory matches, remain a fantasy. National curling programs operate on tight budgets, limiting how far and how fast innovation can go. And even well-funded federations must balance software and robotics against coaching, travel, and ice time.   
Rock launchers provide a consistent throw to help athletes practice sweeping.Sean Maw\/University of SaskatchewanYet as money continues to flow into high-performance curling, those possibilities draw closer.\u201cIt\u2019s probably just a matter of time,\u201d says Sean Maw, a sports engineer at the University of Saskatchewan who has built rock launchers and studies the complexities of curling. For now, the stones still leave human hands\u2014hands capable of brilliance, instinct, and the occasional double-touch\u2014and the final call still rests with the skip in the house. But the algorithms are edging closer to the button.<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/video-friday-robot-collective\" target=\"_blank\" rel=\" noopener\" title=\"Video Friday: Robot Collective Stays Alive Even When Parts Die\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/robot-collective-crawls-under-a-bridge-of-rocks-with-glowing-lights-video-speed-increased-10x.gif?id=64423332&#038;width=980\" title=\"Video Friday: Robot Collective Stays Alive Even When Parts Die\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/video-friday-robot-collective\" target=\"_blank\" rel=\" noopener\">Video Friday: Robot Collective Stays Alive Even When Parts Die<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Evan Ackerman<\/a> on February 13, 2026 at 4:30 pm <\/small><p>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. 
Please send us your events for inclusion.ICRA 2026: 1\u20135 June 2026, VIENNAEnjoy today\u2019s videos! No system is immune to failure. The compromise between reducing failures and improving adaptability is a recurring problem in robotics. Modular robots exemplify this trade-off, because the number of modules dictates both the possible functions and the odds of failure. We reverse this trend, improving reliability with an increased number of modules by exploiting redundant resources and sharing them locally.[ Science ] via [ RRL ]Now that the Atlas enterprise platform is getting to work, the research version gets one last run in the sun. Our engineers made one final push to test the limits of full-body control and mobility, with help from the RAI Institute.[ RAI ] via [ Boston Dynamics ]Announcing Isaac 0: the laundry-folding robot we\u2019re shipping to homes, starting in February 2026 in the Bay Area.[ Weave Robotics ]In a paper published in Science, researchers at the Max Planck Institute for Intelligent Systems, the Humboldt University of Berlin, and the University of Stuttgart have discovered that the secret to the elephant\u2019s amazing sense of touch is in its unusual whiskers. The interdisciplinary team analyzed elephant-trunk whiskers using advanced microscopy methods that revealed a form of material intelligence more sophisticated than the well-studied whiskers of rats and mice. This research has the potential to inspire new physically intelligent robotic sensing approaches that resemble the unusual whiskers that cover the elephant trunk.[ MPI ]Got an interest in autonomous mobile robots, ROS2, and a mere US $150 lying around? 
Try this.[ Maker's Pet ]Thanks, Ilia!We\u2019re giving humanoid robots swords now.[ Robotera ]A system developed by researchers at the University of Waterloo lets people collaborate with groups of robots to create works of art inspired by music.[ Waterloo ]FastUMI Pro is a multimodal, model-agnostic data acquisition system designed to power a truly end-to-end closed loop for embodied intelligence, transforming real-world data into genuine robotic capability.[ Lumos Robotics ]We usually take fingernails for granted, but they\u2019re vital for fine-motor control and feeling textures. Our students have been doing some great work looking into the mechanics behind this.[ Paper ]This is a 550-lb. all-electric coaxial unmanned rotorcraft developed by Texas A&amp;M University\u2019s Advanced Vertical Flight Laboratory and Harmony Aeronautics as a technology demonstrator for our quiet-rotor technology. The payload capacity is 200 lb. (gross weight = 750 lb). The noise level measured was around 74 dBA in hover mode at 50 feet, making this probably the quietest rotorcraft at this scale.[ Harmony Aeronautics ]Harvard scientists have created an advanced 3D-printing method for developing soft robotics. This technique, called rotational multimaterial 3D printing, enables the fabrication of complex shapes and tubular structures with dissolvable internal channels. This innovation could someday accelerate the production of components for surgical robotics and assistive devices, advancing medical technology.[ Harvard ]The Lynx M20 wheel-legged robot steps onto the ice and snow, taking on challenges inspired by four winter sports scenarios. Who says robots can\u2019t enjoy winter sports?[ Deep Robotics ]NGL right now I find this more satisfying to watch than a humanoid doing just about anything.[ Fanuc ]At Mentee Robotics, we design and build humanoid robots from the ground up with one goal: reliable, scalable deployment in real-world industrial environments. 
Our robots are powered by deep vertical integration across hardware, embedded software, and AI, all developed in-house to close the Sim2Real gap and enable continuous, around-the-clock operation.[ Mentee Robotics ]You don\u2019t need to watch this whole video, but the idea of little submarines that hitch rides on bigger boats and recharge themselves is kind of cool.[ Lockheed Martin ]Learn about the work of Dr. Roland Siegwart, Dr. Anibal Ollero, Dr. Dario Floreano, and Dr. Margarita Chli on flying robots and some of the challenges they are still trying to tackle in this video created based on their presentations at ICRA@40, the 40th-anniversary celebration of the IEEE International Conference on Robotics and Automation.[ ICRA@40 ]<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/autonomous-warehouse-robots\" target=\"_blank\" rel=\" noopener\" title=\"Video Friday: Autonomous Robots Learn By Doing in This Factory\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/robotic-arms-on-mobile-bases-sort-crates-on-a-conveyor-belt-in-a-warehouse.png?id=63907821&#038;width=980\" title=\"Video Friday: Autonomous Robots Learn By Doing in This Factory\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/autonomous-warehouse-robots\" target=\"_blank\" rel=\" noopener\">Video Friday: Autonomous Robots Learn By Doing in This Factory<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Evan Ackerman<\/a> on February 6, 2026 at 5:00 pm <\/small><p>Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. 
Please send us your events for inclusion.ICRA 2026: 1\u20135 June 2026, VIENNAEnjoy today\u2019s videos! To train the next generation of autonomous robots, scientists at Toyota Research Institute are working with Toyota Manufacturing to deploy them on the factory floor.[ Toyota Research Institute ]Thanks, Erin!This is just one story (of many) about how we tried, failed, and learned how to improve our \u202adrone delivery system.Okay, but like you didn\u2019t show the really cool bit...?[ Zipline ]We\u2019re introducing KinetIQ, an AI framework developed by Humanoid, for end-to-end orchestration of humanoid robot fleets. KinetIQ coordinates wheeled and bipedal robots within a single system, managing both fleet-level operations and individual robot behavior across multiple environments. The framework operates across four cognitive layers, from task allocation and workflow optimization to task execution based on Vision-Language-Action models and whole-body control taught by reinforcement learning, and is shown here running across our wheeled industrial robots and bipedal R&amp;D platform.[ Humanoid ]What if a robot gets damaged during operation? Can it still perform its mission without immediate repair? Inspired by the self-embodied resilience strategies of stick insects, we developed a decentralized adaptive resilient neural control system (DARCON). This system allows legged robots to autonomously adapt to limb loss, ensuring mission success despite mechanical failure. This innovative approach leads to a future of truly resilient, self-recovering robotics.[ VISTEC ]Thanks, Poramate!This animation shows Perseverance\u2019s point of view during a drive of 807 feet (246 meters) along the rim of Jezero Crater on 10 December 2025, the 1,709th Martian day, or sol, of the mission. 
Captured over 2 hours and 35 minutes, 53 navigation-camera (Navcam) image pairs were combined with rover data on orientation, wheel speed, and steering angle, as well as data from Perseverance\u2019s inertial measurement unit, and placed into a 3D virtual environment. The result is this reconstruction with virtual frames inserted about every 4 inches (0.1 meters) of drive progress. [ NASA Jet Propulsion Lab ] \u221247.4 \u00b0C, 130,000 steps, 89.75\u00b0E, 47.21\u00b0N\u2026 On the extremely cold snowfields of Altay, the birthplace of human skiing, Unitree\u2019s humanoid robot G1 left behind a unique set of marks. [ Unitree ] Representing and understanding 3D environments in a structured manner is crucial for autonomous agents to navigate and reason about their surroundings. In this work, we propose an enhanced hierarchical 3D scene graph that integrates open-vocabulary features across multiple abstraction levels and supports object-relational reasoning. Our approach leverages a vision-language model (VLM) to infer semantic relationships. Notably, we introduce a task-reasoning module that combines large language models and a VLM to interpret the scene graph\u2019s semantic and relational information, enabling agents to reason about tasks and interact with their environment more intelligently. We validate our method by deploying it on a quadruped robot in multiple environments and tasks, highlighting its ability to reason about them. [ Norwegian University of Science &amp; Technology, Autonomous Robots Lab ] Thanks, Kostas! We present HoLoArm, a quadrotor with compliant arms inspired by the nodus structure of dragonfly wings. 
This design provides natural flexibility and resilience while preserving flight stability, which is further reinforced by the integration of a reinforcement-learning control policy that enhances both recovery and hovering performance. [ HO Lab via IEEE Robotics and Automation Letters ] In this work, we present SkyDreamer, to the best of our knowledge, the first end-to-end vision-based autonomous-drone racing policy that maps directly from pixel-level representations to motor commands. [ MAVLab ] This video showcases AI Worker, equipped with five-finger hands, performing dexterous object manipulation across diverse environments. Through teleoperation, the robot demonstrates precise, humanlike hand control in a variety of manipulation tasks. [ Robotis ] Autonomous following, 45-degree slope climbing, and reliable payload transport in extreme winter conditions, built to support operations where environments push the limits. [ DEEP Robotics ] Living architectures, from plants to beehives, adapt continuously to their environments through self-organization. In this work, we introduce the concept of architectural swarms: systems that integrate swarm robotics into modular architectural fa\u00e7ades. The Swarm Garden exemplifies how architectural swarms can transform the built environment, enabling \u201cliving-like\u201d architecture for functional and creative applications. [ SSR Lab via Science Robotics ] Here are a couple of IROS 2025 keynotes, featuring Bram Vanderborght and Kyu-Jin Cho.  
[ IROS 2025 ]<\/p><\/div><\/li><li  style=\"padding: 15px 0 25px\" class=\"rss_item\"><div class=\"rss_image\" style=\"height:150px;width:150px;\"><a href=\"https:\/\/spectrum.ieee.org\/poetry-for-engineers-ode\" target=\"_blank\" rel=\" noopener\" title=\"Ode to Very Small Devices\" style=\"height:150px;width:150px;\"><img decoding=\"async\" src=\"https:\/\/spectrum.ieee.org\/media-library\/anthropomorphized-miniature-gadgets-standing-on-the-heads-of-two-hex-bolts.jpg?id=63525887&#038;width=980\" title=\"Ode to Very Small Devices\" style=\"height:150px;width:150px;\"><\/a><\/div><span class=\"title\"><a href=\"https:\/\/spectrum.ieee.org\/poetry-for-engineers-ode\" target=\"_blank\" rel=\" noopener\">Ode to Very Small Devices<\/a><\/span><div class=\"rss_content\" style=\"\"><small>by <a href=\"\/\/spectrum.ieee.org\" target=\"_blank\" title=\"spectrum.ieee.org\">Paul Jones<\/a> on January 30, 2026 at 7:02 pm <\/small><p>As fairies for the Irish or leeks for Welsh, it\u2019s the secret lives of small hidden machines, their junctures, and networks that inspire me: Mystic hidden functionaries that make our made world live, brave little servo motors, whose couplers, whose eccentric fire-filled sensors are encased in bakelite with brass screws, who stare with red eyes, who gauge moisture, who notice tiny motions and respond, whose cooling fans call out in white-noise registers like older folk singers\u2013I can almost hear their earlier songs, their strong voices now yelps, their thumps, their throbs, their hum, their chant\u2013, they click, they whir, they are sent spinning inside like teen girls giggling over boy bands. Most of all: ones waiting silently, concealing the surprise of their purpose, tasks not yet known, their true natures found only in connections. Those that listen, those that speak, those that control cool and heat, those that open doors, those that lock all the things that we\u2019ve forgot, those that hide, those that disclose those embedded 
in our clothes those in our ears, those in our hearts those that bring together, those a part of divisions, those like birds, like parrots that complete our words, those like fish, those that entrap, those that free, those that freely flap in fierce winds, those that replace what we have lost, those that see at night, in fog, in brightness, in fear, those that show what we hold dear, those that tempt, those that repel, those that buy and those that sell, those that keep us alive, those that don\u2019t, won\u2019t, couldn\u2019t and cannot. Parts of one mind, not mine, blunt orchestra of information, bundles of feelers reaching out to touch us, teach us, guide us to form better futures better understood. May your sounds, your chimes, your silence calm us. May your tender tendrils touch what we seek. Small parts becoming one being intertwined, a world in itself, remind us to be kind.<\/p><\/div><\/li><\/ul> <\/div><style type=\"text\/css\" media=\"all\">.feedzy-rss .rss_item .rss_image{float:left;position:relative;border:none;text-decoration:none;max-width:100%}.feedzy-rss .rss_item .rss_image span{display:inline-block;position:absolute;width:100%;height:100%;background-position:50%;background-size:cover}.feedzy-rss .rss_item .rss_image{margin:.3em 1em 0 0;content-visibility:auto}.feedzy-rss ul{list-style:none}.feedzy-rss ul 
li{display:inline-block}<\/style>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"iawp_total_views":6,"footnotes":""},"class_list":["post-8338","page","type-page","status-publish"],"_links":{"self":[{"href":"https:\/\/aa.roboticaiamagazine.com\/index.php\/wp-json\/wp\/v2\/pages\/8338","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aa.roboticaiamagazine.com\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/aa.roboticaiamagazine.com\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/aa.roboticaiamagazine.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aa.roboticaiamagazine.com\/index.php\/wp-json\/wp\/v2\/comments?post=8338"}],"version-history":[{"count":1,"href":"https:\/\/aa.roboticaiamagazine.com\/index.php\/wp-json\/wp\/v2\/pages\/8338\/revisions"}],"predecessor-version":[{"id":8339,"href":"https:\/\/aa.roboticaiamagazine.com\/index.php\/wp-json\/wp\/v2\/pages\/8338\/revisions\/8339"}],"wp:attachment":[{"href":"https:\/\/aa.roboticaiamagazine.com\/index.php\/wp-json\/wp\/v2\/media?parent=8338"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}