A Robot for the Worst Job in the Warehouse - IEEE Spectrum



Boston Dynamics’ Stretch can move 800 heavy boxes per hour

Stretch can autonomously transfer boxes onto a roller conveyor fast enough to keep up with an experienced human worker.

As COVID-19 stresses global supply chains, the logistics industry is looking to automation to help keep workers safe and boost their efficiency. But there are many warehouse operations that don’t lend themselves to traditional automation—namely, tasks where the inputs and outputs of a process aren’t always well defined and can’t be completely controlled. A new generation of robots with the intelligence and flexibility to handle the kind of variation that people take in stride is entering warehouse environments. A prime example is Stretch, a new robot from Boston Dynamics that can move heavy boxes where they need to go just as fast as an experienced warehouse worker.

Stretch’s design is something of a departure from the humanoid and quadrupedal robots that Boston Dynamics is best known for, such as Atlas and Spot. With its single massive arm, a gripper packed with sensors and an array of suction cups, and an omnidirectional mobile base, Stretch can transfer boxes that weigh as much as 50 pounds (23 kilograms) from the back of a truck to a conveyor belt at a rate of 800 boxes per hour. An experienced human worker can move boxes at a similar rate, but not all day long, whereas Stretch can go for 16 hours before recharging. And this kind of work is punishing on the human body, especially when heavy boxes have to be moved from near a trailer’s ceiling or floor.

“Truck unloading is one of the hardest jobs in a warehouse, and that's one of the reasons we're starting there with Stretch,” says Kevin Blankespoor, senior vice president of warehouse robotics at Boston Dynamics. Blankespoor explains that Stretch isn’t meant to replace people entirely; the idea is that multiple Stretch robots could make a human worker an order of magnitude more efficient. “Typically, you’ll have two people unloading each truck. Where we want to get with Stretch is to have one person unloading four or five trucks at the same time, using Stretches as tools.”

All Stretch needs is to be shown the back of a trailer packed with boxes, and it’ll autonomously go to work, placing each box on a conveyor belt one by one until the trailer is empty. People are still there to make sure that everything goes smoothly, and they can step in if Stretch runs into something that it can’t handle, but their full-time job becomes robot supervision instead of lifting heavy boxes all day.

“No one wants to do receiving.” —Matt Beane, UCSB

Achieving this level of reliable autonomy with Stretch has taken Boston Dynamics years of work, building on decades of experience developing robots that are strong, fast, and agile. Besides the challenge of building a high-performance robotic arm, the company also had to solve some problems that people find trivial but are difficult for robots, like looking at a wall of closely packed brown boxes and being able to tell where one stops and another begins.

Safety is also a focus, says Blankespoor, explaining that Stretch follows the standards for mobile industrial robots set by the American National Standards Institute and the Robotic Industries Association. That the robot operates inside a truck or trailer also helps to keep Stretch safely isolated from people working nearby, and at least for now, the trailer opening is fenced off while the robot is inside.

Stretch is optimized for moving boxes, a task that’s required throughout a warehouse. Boston Dynamics hopes that over the longer term the robot will be flexible enough to put its box-moving expertise to use wherever it’s needed. In addition to unloading trucks, Stretch has the potential to unload boxes from pallets, put boxes on shelves, build orders out of multiple boxes from different places in a warehouse, and ultimately load boxes onto trucks, a much more difficult problem than unloading due to the planning and precision required.

“Where we want to get with Stretch is to have one person unloading four or five trucks at the same time.” —Kevin Blankespoor, Boston Dynamics

In the short term, unloading a trailer (part of a warehouse job called “receiving”) is the best place for a robot like Stretch, agrees Matt Beane, who studies work involving robotics and AI at the University of California, Santa Barbara. “No one wants to do receiving,” he says. “It’s dangerous, tiring, and monotonous.”

But Beane, who for the last two years has led a team of field researchers in a nationwide study of automation in warehousing, points out that there may be important nuances to the job that a robot such as Stretch will probably miss, like interacting with the people who are working other parts of the receiving process. “There's subtle, high-bandwidth information being exchanged about boxes that humans down the line use as key inputs to do their job effectively, and I will be singularly impressed if Stretch can match that.”

Boston Dynamics spent much of 2021 turning Stretch from a prototype, built largely from pieces designed for Atlas and Spot, into a production-ready system that will begin shipping to a select group of customers in 2022, with broader sales expected in 2023. For Blankespoor, that milestone will represent just the beginning. He feels that such robots are poised to have an enormous impact on the logistics industry. “Despite the success of automation in manufacturing, warehouses are still almost entirely manually operated—we’re just starting to see a new generation of robots that can handle the variation you see in a warehouse, and that’s what we’re excited about with Stretch.”


The Future of Life Institute’s contest counters today’s dystopian doomscapes

Eliza Strickland is a senior editor at IEEE Spectrum, where she covers AI, biomedical engineering, and other topics. She holds a master's degree in journalism from Columbia University.

One of the biggest challenges in a world-building competition that asked teams to imagine a positive future with superintelligent AI: Make it plausible.

The Future of Life Institute, a nonprofit that focuses on existential threats to humanity, organized the contest and is offering a hefty prize purse of up to US $140,000, to be divided among multiple winners. Last week FLI announced the 20 finalists from 144 entries, and the group will declare the winners on 15 June.

“We’re not trying to push utopia. We’re just trying to show futures that are not dystopian, so people have something to work toward.” —Anna Yelizarova, Future of Life Institute

The contest aims to counter the common dystopian narrative of artificial intelligence that becomes smarter than humans, escapes our control, and makes the world go to hell in one way or another. The philosopher Nick Bostrom famously imagined a factory AI turning all the world’s matter into paper clips to fulfill its objective, and many respected voices in the field, such as computer scientist Stuart Russell, have argued that it’s essential to begin work on AI safety now, before superintelligence is achieved. Add in the sci-fi novels, TV shows, and movies that tell dark tales of AI taking over—the Blade Runners, the Westworlds, the Terminators, the Matrices (both original recipe and Resurrections)—and it’s no wonder the public feels wary of the technology.

Anna Yelizarova, who’s managing the contest and other projects at FLI, says she feels bombarded by images of dystopia in the media, and says it makes her wonder “what kind of effect that has on our worldview as a society.” She sees the contest partly as a way to provide hopeful visions of the future. “We’re not trying to push utopia,” she says, noting that the worlds built for the contest are not perfect places with zero conflicts or struggles. “We’re just trying to show futures that are not dystopian, so people have something to work toward,” she says.

The contest asked a lot from the teams who entered: They had to provide a timeline of events from now until 2045 that includes the invention of artificial general intelligence (AGI), two “day in the life” short stories, answers to a list of questions, and a media piece reflecting their imagined world.

Yelizarova says that another motivation for the contest was to see what sorts of ideas people would come up with. Imagining a hopeful future with AGI is inherently more difficult than imagining a dystopian one, she notes, because it requires coming up with solutions to some of the biggest challenges facing humanity. For example, how to ensure that world governments work together to deploy AGI responsibly and don’t treat its development as an arms race? And how to create AGI agents whose goals are aligned with those of humans? “If people are suggesting new institutions or new ways of tackling problems,” Yelizarova says, “those can become actual policy efforts we can pursue in the real world.”

“For a truly positive transformative relationship with AI, it needs to help us—to help humanity—become better.... And the idea that such a world might be possible is a future that I want to fight for.” —Rebecca Rapple, finalist in the Future of Life Institute’s world-building contest

It’s worth diving into the worlds created by the 20 finalists and browsing through the positive possible futures. IEEE Spectrum corresponded with two finalists who have very different visions.

The first, a solo effort by Rebecca Rapple of Portland, Ore., imagines a world in which an AGI agent named TAI has a direct connection with nearly every human on earth via brain-computer interfaces. The world’s main currency is one of TAI’s devising, called Contribucks, which are earned via positive social contributions and which lose value the longer they’re stored. People routinely plug into a virtual experience called Communitas, which Rapple’s entry describes as “a TAI-facilitated ecstatic group experience where sentience communes, sharing in each other’s experiences directly through TAI.” While TAI is not directly under humans’ control, she has stated that “she loves every soul” and people both trust her and think she’s helping them to live better lives.

Rapple, who describes herself as a pragmatic optimist, says that crafting her world was an uplifting process. “The assumption at the core of my world is that for a truly positive transformative relationship with AI, it needs to help us—to help humanity—become better,” she tells Spectrum. “Better to ourselves, our neighbors, our planet. And the idea that such a world might be possible is a future that I want to fight for.”

The second team Spectrum corresponded with is a trio from Nairobi, Kenya: Conrad Whitaker, Dexter Findley, and Tracey Kamande. In the world imagined by this team, AGI emerged from a “new non–von Neumann computing paradigm” in which memory is fully integrated into processing. As an AGI agent describes it in one of the team's short stories, AGI has resulted “from the digital replication of human brain structure, with all its separate biological components, neural networks and self-referential loops. Nurtured in a naturalistic setting with constant positive human interaction, just like a biological human infant.”

In this world there are over 1,000 AGIs, or digital humans, by the year 2045; the machine learning and neural networks that we know as AI today are widely used for optimization problems, but aren’t considered true, general-purpose intelligence. In the imagined scenario, many people live in AGI-organized “digital nations” that they can join regardless of their physical locations, and which bring many health and social benefits.

In an email, the Kenyan team says they aimed to paint a picture of a future that is “strong on freedoms and rights for both humans and AGIs—going so far as imagining that a caring and respectful environment that encouraged unbridled creativity and discourse (conjecture and criticism) was critical to bringing an ‘artificial person’ to maturity in the first place.” They imagine that such AGI agents wouldn’t see themselves as separate from humans as they would be “humanlike” in both their experience of knowledge and their sense of self, and that the AGI agents would therefore have a humanlike capacity for moral knowledge.

Meaning that these AGI agents would see the problem with turning all humans on earth into paper clips.

The system employs predictive analytics and AI.

Kathy Pretz is editor in chief for The Institute, which covers all aspects of IEEE, its members, and the technology they're involved in. She has a bachelor's degree in applied communication from Rider University, in Lawrenceville, N.J., and holds a master's degree in corporate and public communication from Monmouth University, in West Long Branch, N.J.

MoviTHERM’s iEFD system’s online dashboard shows a diagram of the interconnected sensors, instruments, Flir cameras, and other devices that are monitoring a facility.

Fires at recycling sorting facilities, ignited by combustible materials in the waste stream, can cause millions of dollars in damage, injuring workers and first responders and contaminating the air.

Detecting the blazes early is key to preventing them from getting out of control.

Startup MoviTHERM aims to do that. The company sells cloud-based fire-detection monitoring systems to recycling facilities. Using thermal imaging and heat and smoke sensors, the system alerts users—on and off the site—when a fire is about to break out.

To date, MoviTHERM’s system operates in five recycling facilities. The company’s founder, IEEE Member Markus Tarin, says the product also can be used in coal stockpiling operations, industrial laundries, scrapyards, and warehouses.

In 1999 Tarin launched a consulting business in Irvine, Calif., to do product design and testing, mostly for medical clients. Then thermal-camera manufacturer Flir hired him to write software automating non-contact temperature-measurement processes, giving Flir’s customers the ability to act quickly on temperature changes spotted by their cameras.

“That was the beginning of me going into the thermal-imaging world and applying my knowledge,” he says. “I saw a lot of need out there because there weren’t very many companies doing this sort of thing.”

Tarin started MoviTHERM in 2008 to be a distributor and systems integrator for Flir thermal imagers. The company, now with 11 employees and Tarin at the helm, is still in that business. However, Tarin became frustrated by the fact that the software he developed was not scalable; rather, it was tailored to each customer’s specific needs. And he began to discover around 2015 that interest in such custom software had fallen off.

“I could no longer easily sell a customized solution,” he says, “because it was perceived as too risky and too expensive.”

“We are trying to prevent catastrophic losses and environmental damage.”

In 2016 he began developing the MoviTHERM Series Intelligent I/O module (MIO). He targeted it at fire detection in recycling facilities, he says, because “we had a lot of customers reaching out for solutions in that field.”

The Ethernet-connected programmable device includes eight digital alarm switches and eight channels of 4- to 20-milliampere outputs that can be used with up to seven Flir cameras. Once the module is connected to a camera, it starts monitoring. MIO can be expanded by adding more modules, Tarin says.
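Current-loop outputs like these conventionally map a configured measurement span linearly onto 4 to 20 milliamperes, with 4 mA representing the bottom of the range. The sketch below illustrates that standard scaling only; it is not MoviTHERM's firmware, and the temperature range values are hypothetical.

```python
def temperature_to_loop_current(temp_c, range_min_c=0.0, range_max_c=300.0):
    """Map a temperature reading onto a standard 4-20 mA current loop.

    Readings below the configured span clamp to 4 mA (the loop minimum);
    readings above it clamp to 20 mA. The range limits here are
    hypothetical -- a real system uses the camera's configured span.
    """
    span = range_max_c - range_min_c
    fraction = (temp_c - range_min_c) / span
    fraction = min(max(fraction, 0.0), 1.0)  # clamp to [0, 1]
    return 4.0 + 16.0 * fraction  # 4 mA zero, 16 mA of usable span

# A reading at mid-range lands at the middle of the loop span:
print(temperature_to_loop_current(150.0))  # → 12.0
```

One virtue of this encoding is that a reading of 0 mA is unambiguous: it signals a broken wire or dead sensor rather than a legitimate minimum measurement.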

MoviTHERM sells the MIO in several versions for different Flir camera models. Each variant supports from one to seven cameras, and prices range from US $895 to $5,995.

MIO allows customers to “just click and connect multiple Flir cameras and set up the alarms without programming any software,” Tarin says. “The intelligent module sits on the network along with the cameras and sounds an alarm if a camera detects a hot spot,” he says. MIO won the 2016 Innovators Award for industry-best product from Vision Systems Design magazine.

Tarin says Flir was “so fascinated by MIO that it began distributing the module worldwide.”

“It’s the only product the company is distributing that’s not a Flir product,” he says.

But, he says, the MIO series lacks the ability to send alerts to customers via voice, text, or email. It can announce an alarm only through its built-in digital outputs, via a connected device such as a siren or a flashing light.

To add those features and more, Tarin late last year introduced MoviTHERM’s subscription-based iEFD early fire detection system, which can monitor and record a facility’s temperatures throughout the day. The system uses interconnected sensors, instruments, and other tools connected to cloud-based industrial software applications. It can check its own cameras and sensors to make sure they are working. Users can monitor and analyze data via an online dashboard.

If a camera detects a hot spot that could potentially develop into a fire, the system can send an alert to the phones of workers near the area to warn them, potentially giving them time to remove an item before it ignites, Tarin says.
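The alerting step described above amounts to comparing each thermal frame's peak reading against a threshold and pushing a notification when it is exceeded. Here is a minimal illustrative sketch of that logic, not MoviTHERM's implementation; the threshold value and the `notify` callback are hypothetical stand-ins for the system's configured limits and messaging channels.

```python
ALERT_THRESHOLD_C = 120.0  # hypothetical hot-spot threshold

def check_frame_for_hot_spots(frame, notify):
    """Scan one thermal frame (a 2D grid of Celsius readings) and
    call notify() with a warning if the peak exceeds the threshold.

    Returns True if an alert was raised. `notify` stands in for
    whatever channel (text, email, voice) carries the warning.
    """
    peak = max(max(row) for row in frame)
    if peak >= ALERT_THRESHOLD_C:
        notify(f"Hot spot detected: peak {peak:.1f} C")
        return True
    return False

# Usage: a frame with one hot pixel triggers the alert.
alerts = []
frame = [[22.0, 23.5], [21.8, 135.2]]
check_frame_for_hot_spots(frame, alerts.append)
print(alerts)  # → ['Hot spot detected: peak 135.2 C']
```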

The system also includes an interactive real-time map view of the facility that can be emailed to firefighters and first responders. The map can include information about the best way to access a facility and the location of utilities at the site, such as water, gas, and electricity.

“Firefighters often aren’t familiar with the facility, so they lose valuable time by driving around, figuring out how to enter the facility, and where to go,” Tarin says. “The map shows the best entry point to the facility and where the fire hydrants, water valves, electrical cabinets, gas lines, and so on are located. It also shows them where the fire is.

“We recently demonstrated this map to a fire marshal for a recycling facility, and he was blown away by it.”

By stopping fires, Tarin says, his system helps prevent toxic emissions from entering the atmosphere.

“Once you put the fire out, you have more or less an environmental disaster on your hands because you’re flushing all the hazardous stuff with fire suppressant, which itself might also be hazardous to the environment,” he says. “We are trying to prevent catastrophic losses and environmental damage.”

Register for this webinar to enhance your modeling and design processes for microfluidic organ-on-a-chip devices using COMSOL Multiphysics

If you want to enhance your modeling and design processes for microfluidic organ-on-a-chip devices, tune into this webinar.

You will learn methods for simulating the performance and behavior of microfluidic organ-on-a-chip devices and microphysiological systems in COMSOL Multiphysics. Additionally, you will see how to couple multiple physical effects in your model, including chemical transport, particle tracing, and fluid–structure interaction. You will also learn how to distill simulation output to find key design parameters and obtain a high-level description of system performance and behavior.

There will also be a live demonstration of how to set up a model of a microfluidic lung-on-a-chip device with two-way coupled fluid–structure interaction. The webinar will conclude with a Q&A session. Register now for this free webinar!