SnotBot Drone Swoops Over Blowholes to Track Whale Health - IEEE Spectrum




The SnotBot drone passes over a blue whale at the moment of exhalation.

It’s a beautiful morning on the waters of Alaska’s Peril Strait—clear, calm, silent, and just a little cool. A small but seaworthy research vessel glides through gentle swells. Suddenly, in the distance, a humpback whale the size of a school bus explodes out of the water. Enormous bursts of air and water jet out of its blowholes like a fire hose, the noise echoing between the banks.

“Blow at eleven o’clock!” cries the lookout, and the small boat swarms with activity. A crew member wearing a helmet and cut-proof gloves raises a large quadcopter drone over his head, as if offering it to the sun, which glints off the half dozen plastic petri dishes velcroed to the drone.

Further back in the boat, the drone pilot calls, “Starting engines in 3, 2, 1! Takeoff in 3, 2, 1!” The drone’s engines buzz as it zooms 20 meters into the air and then darts off toward where the whale just dipped below the water’s surface. With luck, the whale will spout again nearby, and the drone will be there when it does.

The drone is a modified DJI Inspire 2. About the size of a toaster oven, it’s generally sold to photographers, cinematographers, and well-heeled hobbyists, but this particular drone is on a serious mission: to monitor the health of whales, the ocean, and by extension, the planet. The petri dishes it carries collect the exhaled breath condensate of a whale—a.k.a. snot—which holds valuable information about the creature’s health, diet, and other qualities. Hence the drone’s name: the Parley SnotBot.

The flyer comes standard with a forward-facing camera for navigation, collision-avoidance detectors, ultrasonic and barometric sensors to track altitude, and a GPS locator. With the addition of a high-definition video camera on a stabilized gimbal that can be directed independently, it can stream 1080p video live while simultaneously storing the video on a microSD card as well as high-resolution images on a 1-terabyte solid-state drive. Given that both cameras run during the entire 26 minutes of a typical flight, that’s a lot of data. More on what we are doing with that data later, but first, a bit of SnotBot history.

Petri dishes aboard SnotBot collect whale exhalate for later analysis.

Iain Kerr was one of the early pioneers in using drones as a platform to collect and analyze whale exhalation. He’s the CEO of Ocean Alliance, in Gloucester, Mass., a group dedicated to protecting whales and the world’s oceans. Whale biologists know that whale snot contains an enormous amount of biological information, including DNA, hormones, and microorganisms. Scientists can use that information to determine a whale’s health, sex, and pregnancy status, and details about its genetics and microbiome. The traditional and most often used technique for collecting that kind of information is to zoom past a surfacing whale in a boat and shoot it with a specially designed crossbow to capture a small core sample of skin and blubber. The process is stressful for both researchers and whales.

Researchers had demonstrated that whale snot can be a viable replacement for blubber samples, but collection involved reaching out over whales using long, awkward poles—difficult, to say the least. The development of small but powerful commercial drones inspired Kerr to launch an exploratory research project in 2015 to go after whale snot with drones. He received the first U.S. National Oceanic and Atmospheric Administration (NOAA) research permit for collecting whale snot in U.S. waters. Since then, there have been dozens of SnotBot missions around the world, in the waters off Alaska, Gabon, Mexico, and other places where whales like to congregate, and the idea has spread to other teams around the globe.

The SnotBot design continues to evolve. The earliest versions tried to capture snot by trailing gauzy cloth below the drone. The hanging cloth turned out to be difficult to work with, however, and the material itself interfered with some of the lab tests, so the researchers scrapped that method. The developers didn’t consider using petri dishes at first, because they assumed that if the drone flew directly into a whale’s spout, the rotor wash would interfere with collection. Eventually, though, they tried the petri dishes and were happy to discover that the rotors’ downdraft improved rather than hindered collection.

For each mission, the collection goals have been slightly different, and the team tweaks the design of the craft accordingly. On one mission, the focus might be to survey an area, getting samples from as many whales as possible. The next mission might be a “focal follow,” in which the team tracks one whale over a period of hours or days, taking multiple samples so that they can understand things like how a whale’s hormone levels change throughout the day, either from natural processes or as a response to environmental factors.

Collecting and analyzing snot is certainly an important way to assess whale health, but the SnotBot team suspected that the drone could do more. In early 2017, staffers from Parley for the Oceans, a nonprofit environmental group that was working with Ocean Alliance on the SnotBot project, contacted one of us (Willke) to find out just how much more.

Willke is a machine-learning and artificial-intelligence researcher who leads Intel’s Brain-Inspired Computing Lab, in Hillsboro, Ore. He immediately saw ways of expanding the information gathered by SnotBot. Willke enlisted two researchers in his lab—coauthor Keller and Javier Turek—and the three of us got to work on enhancing SnotBot’s mission.

The quadcopters used in the SnotBot project carry high-quality cameras with advanced auto-stabilization features. The drone pilot relies on the high-definition video being streamed back to the boat to fly the aircraft and collect the snot. We knew that these same video streams could simultaneously feed into a computer on the boat and be processed in real time. Could that information help assess whale health?

Working with Ocean Alliance scientists, we first came up with a tool that analyzes a photo of a whale’s tail flukes and, using a database of whale photographs collected by the Alaska Whale Foundation, identifies individual whales by the shape of the fluke and its black and white patterns. Identifying each whale allows researchers to correlate snot samples over time.

Such identification can also help whale biologists cope with tricky regulatory issues. For example, there are at least two breeding populations of humpback whales that migrate to Alaska. Most come from Hawaii, but a smaller group comes from Mexico. The Mexican population is under greater stress at the moment, and so NOAA requests that researchers focus on the healthier, Hawaiian whales and leave the Mexican whales alone as much as possible. However, both populations are exactly the same species and thus indistinguishable from each other as a group. The ability to recognize individual whales allows researchers to determine whether a whale had been previously spotted in Mexico or Hawaii, so that they can act appropriately to comply with the regulation.

We also developed software that analyzes the shape of a whale from an overhead shot, taken about 25 meters directly above the whale. Since a skinny whale is often a sick one or one that hasn’t been getting enough to eat, even that simple metric can be a powerful indicator of well-being.

The biggest challenge in developing these tools was what’s called data starvation—there just wasn’t enough data. A standard deep-learning algorithm would look at a huge set of images and then figure out and extract the key distinguishing features of a whale. In the case of the fluke-ID tool, there were only a few pictures of each whale in the catalog, and these were often too low quality to be useful. For overhead health monitoring, there were likewise too few photos or videos of whales shot with the right camera, from the right angle, under the right conditions.

To address these problems, our team turned to classic computer-vision techniques to extract what we considered the most useful data. For example, we used edge-detection algorithms to find and measure the trailing edge of a fluke, then obtained the grayscale values of all the pixels in a line extending from the center notch of the fluke to the outer tips. We trained a small but effective neural network on this data alone. If more data had been available, a deep-learning approach would have worked better than our approach did, but we had to work with the limited data we had.
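As a rough illustration of that pipeline (not the project’s actual code), the Python sketch below samples grayscale values along the line from a fluke’s center notch to one tip. It assumes OpenCV and NumPy are available; the landmark coordinates in the usage example are hypothetical.

```python
# Illustrative sketch of the classic computer-vision step described above.
# Not the SnotBot team's code; landmark coordinates are placeholders.
import cv2
import numpy as np

def fluke_profile(image_path, notch_xy, tip_xy, n_samples=128):
    """Sample grayscale values along the line from the fluke's notch to a tip."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)

    # An edge map like this is how the trailing edge could be located;
    # it is computed here only to show where that step would fit.
    edges = cv2.Canny(gray, threshold1=50, threshold2=150)

    # Evenly spaced sample points between the two landmarks.
    xs = np.linspace(notch_xy[0], tip_xy[0], n_samples).round().astype(int)
    ys = np.linspace(notch_xy[1], tip_xy[1], n_samples).round().astype(int)
    profile = gray[ys, xs].astype(float)

    # Normalize so profiles are comparable across photos with different exposure.
    return (profile - profile.mean()) / (profile.std() + 1e-6)

# Hypothetical usage with made-up landmark pixel coordinates:
# features = fluke_profile("fluke.jpg", notch_xy=(640, 420), tip_xy=(1100, 380))
```

A feature vector like this is small enough to train a modest neural network on, even with only a handful of photos per whale.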

The latest model of SnotBot flies into action, with custom mounting points for petri dishes and its new paint scheme, designed to camouflage it against a cloud-studded sky.

New discoveries in whale biology have already come from our tools. Besides the ability to distinguish between the Mexican and Hawaiian whale populations, researchers have discovered they can identify whales from their calls, even when the calls were recorded many years previously.

That latter discovery came during the summer of 2017, when we joined Fred Sharpe, an Alaska Whale Foundation researcher and founding board member, to study teams of whales that worked together to feed. While we were observing a small group of humpback whales, the boat’s underwater microphone picked up a whale feeding call. Sharpe thought it sounded familiar, and so he consulted his database of whale vocalizations. He found a similar call from a whale called Trumpeter that he had recorded some 20 years earlier. But was it really the same whale? There was no way to know for sure from the call alone.

Then a whale surfaced briefly and dove again, letting us capture an image of its flukes. Our software found a match: The flukes indeed belonged to Trumpeter. That told the researchers that adult whale feeding calls likely remain stable for decades, maybe even for life. This insight gave researchers another tool for identifying whales in the wild and improving our understanding of vocal signatures in humpback whales.

Meanwhile, whale-ID tools are getting better all the time. The original SnotBot algorithm that we developed for whale identification has been essentially supplanted by more capable services. One new algorithm relies on the curvature of the trailing edge of the fluke for identification.

SnotBot’s real contribution, it turns out, is in health monitoring. Our shape-analysis tool has been evolving and, in combination with the spray samples, is giving researchers a comprehensive picture of an individual whale’s health. We call this tool Morphometer. We recently teamed up with Kelly Cates, a Ph.D. candidate in marine biology at the University of Alaska Fairbanks, and Fredrik Christiansen, an assistant professor and whale expert at the Aarhus Institute of Advanced Studies, in Denmark, to make the technology more powerful and also easier to use.

Here’s how it works. Researchers who make measurements and assessments of baleen whales—the type of whales that filter-feed—have typically used a technique developed by Christiansen in 2016. (So far the effort has involved humpback and southern right whales, but the process could work for any kind of baleen whale.) The researchers start with photographic prints or images on a computer and hand-measure the body widths of whales in the images at intervals of 5 percent of the overall length from the snout to the notch of the tail flukes. They then feed this set of measurements to software that calculates an estimate of the whale’s volume. From the relationship between body length and volume, they can determine whether an individual whale is relatively fatter or thinner than population norms. That comparison takes into account the significant but normal changes in girth that occur as whales build up energy reserves during the feeding season and then draw on those stores for migration and breeding.
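For readers curious how a set of interval widths becomes a volume estimate, here is a simplified Python sketch. It assumes elliptical cross-sections with a fixed height-to-width ratio; that ratio and the sample widths are illustrative placeholders, not the researchers’ published formula or data.

```python
# Simplified sketch: turn interval width measurements into a body-volume
# estimate by treating the whale as a stack of elliptical cross-sections.
# The height-to-width ratio and the widths below are illustrative only.
import math

def estimate_volume(length_m, widths_m, height_to_width=0.8):
    """Integrate elliptical cross-sections along the body axis.

    widths_m: body widths at evenly spaced stations from the snout
    to the fluke notch (for example, every 5 percent of body length).
    """
    dx = length_m / (len(widths_m) - 1)   # spacing between stations
    volume = 0.0
    for w0, w1 in zip(widths_m[:-1], widths_m[1:]):
        # Ellipse area: pi * (width / 2) * (height / 2)
        a0 = math.pi * (w0 / 2) * (height_to_width * w0 / 2)
        a1 = math.pi * (w1 / 2) * (height_to_width * w1 / 2)
        volume += dx * (a0 + a1) / 2      # trapezoidal rule between stations
    return volume

# Made-up measurements for a roughly 12-meter humpback:
widths = [0.5, 1.6, 2.4, 2.9, 3.1, 3.0, 2.8, 2.5, 2.1, 1.7, 1.4,
          1.1, 0.9, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
print(f"Estimated volume: {estimate_volume(12.0, widths):.1f} cubic meters")
```

The trapezoidal sum between stations is just one reasonable way of adding up the slices.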

Morphometer also uses photos, but it measures the whale’s width continuously at the highest resolution possible given the quality of the photo, yielding hundreds of width measurements for each animal, instead of only the small number of measurements that are feasible for human researchers. The result is thus much more accurate. It also processes the data much faster than a human could, allowing biologists to focus on biology rather than doing tedious measurements by hand.

To improve Morphometer, we trained a deep-learning system on images of humpback and southern right whales in all sorts of different weather, water, and lighting conditions to allow it to understand exactly which pixels in an image belong to a whale. Once a whale has been singled out, the system identifies the head and tail and then measures the whale’s length and width at each pixel point along the outline of its body. Our software tracks the altitude from which the drone photographed the whale and combines that data with camera specifications entered by the drone operator, allowing the system to automatically convert the measurements from pixels to meters.
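The pixel-to-meters step relies on the standard relationship among flight altitude, lens focal length, and sensor size, often called the ground sample distance. Here is a minimal sketch of that conversion; the camera parameters are placeholder assumptions, not the actual specifications of the SnotBot camera.

```python
# Minimal sketch of the pixel-to-meters conversion, using the standard
# ground-sample-distance relationship. Camera parameters are placeholder
# assumptions, not the SnotBot camera's actual specifications.

def ground_sample_distance(altitude_m, focal_length_mm, sensor_width_mm, image_width_px):
    """Meters of sea surface covered by one image pixel."""
    return (altitude_m * sensor_width_mm) / (focal_length_mm * image_width_px)

def pixels_to_meters(length_px, altitude_m, focal_length_mm=15.0,
                     sensor_width_mm=17.3, image_width_px=5280):
    return length_px * ground_sample_distance(
        altitude_m, focal_length_mm, sensor_width_mm, image_width_px)

# Example: a whale spanning 2,200 pixels, photographed from 25 meters up.
print(f"Estimated body length: {pixels_to_meters(2200, 25.0):.1f} m")
```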

Morphometer compares this whale with others of its body type, displaying the result as an image of the subject whale superimposed on a whale-shape color-coded diagram with zones indicating the average measurements of similar whales. It’s immediately obvious if the whale is normal size, underweight, or larger than average, as would be the case with pregnant females [see illustration, “Measuring Up”].

Measuring up: The Morphometer image-analysis tool (below left) compares one whale with others of its body type, displaying the result as an image of the subject whale superimposed on a diagram representing an average whale. The diagram is color coded—red at the center, blending to white at the size of an average whale and to blue for larger whales. With the individual whale superimposed on this color map, any red that shows around the whale means the whale is underweight; the more underweight, the darker the red. Above, fluke-recognition software makes a match. Images: Intel
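Conceptually, the comparison can be as simple as a per-station deviation score along the body, which is what the short sketch below computes; the population statistics here are invented for illustration, and Morphometer’s actual model is more detailed.

```python
# Hedged sketch of the comparison idea: per-station deviation of a whale's
# width profile from population statistics. All numbers are invented.
import numpy as np

def body_condition_profile(widths, pop_mean, pop_std):
    """Per-station z-scores; negative values mean thinner than average."""
    widths, pop_mean, pop_std = map(np.asarray, (widths, pop_mean, pop_std))
    return (widths - pop_mean) / pop_std

measured = np.array([1.5, 2.6, 2.9, 2.7, 2.2, 1.4])   # this whale, in meters
pop_mean = np.array([1.6, 2.8, 3.2, 3.0, 2.4, 1.5])   # average for its class
pop_std  = np.array([0.2, 0.3, 0.3, 0.3, 0.25, 0.2])  # spread for its class
z = body_condition_profile(measured, pop_mean, pop_std)
print("Stations below the class average:", np.where(z < 0)[0].tolist())
```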

For our early prototype, we input parameters for a “normal” body shape based on age, sex, and other factors. But now Morphometer is in the process of figuring out “normal” for itself by processing large numbers of whale images. Whale researchers who use their own drones to collect whale photos have been sending us their images. Eventually, we envision setting up a collaborative website that would allow images and morphometry models to be shared among researchers. We also plan to adapt Morphometer to analyze videos of whales, automatically extracting the frames or clips in which the whale’s position and visibility are the best.

To help researchers gain a more complete picture, we’re building statistical models of various whale populations, which we will compare to models derived from human-estimated measurements. Then we’ll take new photos of whales whose age and gender are known, and see whether the software correctly classifies them and gives appropriate indications of health; we’ll have whale biologists verify the results.

Once this model is working reliably, we expect to be able to say how a given whale’s size compares with those of its peers of the same gender, in the same region, at the same time of year. We’ll also be able to identify historical trends—for example, this whale is not skinnier than average compared with last year, but it is much skinnier than whales in its class a few decades ago, assuming comparison data exists. If, in addition, we have snot from the same whale, we can create a more complete profile of the whale, in the same way your credit card company can tell a lot about you by integrating your personal data with the averages and variances in the general population.

So far, SnotBot has told us a lot about the health of individual whales. Soon, researchers will start using this data to monitor the health of oceans. Whales are known as “apex predators,” meaning they are at the top of the food chain. Humpback whales in particular are generalist foragers and have wide-ranging migration patterns, which make them an excellent early-warning system for environmental threats to the ocean as a whole.

This is where SnotBot can really make a difference. We all depend on the oceans for our survival. Besides the vast amount of food they produce, we depend on them for the air we breathe: Most of the oxygen in the atmosphere comes from marine organisms such as phytoplankton and algae.

Lately, reduced ocean productivity associated with a North Pacific warm-water anomaly, or “blob,” has led to fewer births and more reports of skinny whales, and that should worry us. If conditions are bad for whales, they’re also bad for humans. Thanks to Project SnotBot, we’ll be able to find out—accurately, efficiently, and at a reasonable cost—just how the health and numbers of whales in our oceans are trending. With that information, we hope, we will be able to spur society to take steps to protect the oceans before it’s too late.

The whale images in this article were obtained under National Marine Fisheries Service permits 18636-01 and 19703.

This article appears in the December 2019 print issue as “SnotBot: A Whale of a Deep-Learning Project.”

Bryn Keller is a deep-learning research scientist in Intel’s Brain-Inspired Computing Lab. Senior principal engineer Ted Willke is the lab’s director.

Tesla fails to show anything uniquely impressive with its new humanoid robot prototype

Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.

Elon Musk unveiled the Optimus humanoid robot at Tesla's AI Day 2022.

At the end of Tesla’s 2021 AI Day last August, Elon Musk introduced a concept for “Tesla Bot,” an electromechanically actuated, autonomous bipedal “general purpose” humanoid robot. Musk suggested that a prototype of Tesla Bot (also called “Optimus”) would be complete within the next year. After a lot of hype, a prototype of Tesla Bot was indeed unveiled last night at Tesla’s 2022 AI Day. And as it turns out, the hype was just that—hype.

While there’s absolutely nothing wrong with the humanoid robot that Musk very briefly demonstrated on stage, there’s nothing uniquely right, either. We were hoping for (if not necessarily expecting) more from Tesla. And while the robot isn’t exactly a disappointment, there’s very little to suggest that it disrupts robotics the way that SpaceX did for rockets or Tesla did for electric cars.

You can watch the entire 3+ hour livestream archived on YouTube here (which also includes car stuff and whatnot), but we’re just going to focus on the most interesting bits about Tesla Bot/Optimus.

Before revealing the robot, Musk attempted to set reasonable expectations for the prototype. Tesla

These quotes are all from Musk.

It’s far, far too late for Musk to be attempting to set reasonable expectations for this robot (or Tesla’s robotics program in general). Most roboticists know better than to use humans when setting expectations for humanoid robots, because disappointment is inevitable. And trying to save it at the literal last minute by saying, “compared to not having a robot at all, our robot will be very impressive,” while true, is not going to fix things.

Yeah, I’m not touching that.

Right before the robot was brought on stage, one of the engineers made clear that this was going to be the first time that the robot would be walking untethered and unsupported. If true, that’s bonkers, because why the heck would you wait until this moment to give that a try? I’m not particularly impressed, just confused.

For some context on what you’re about to see, a brief callback to a year ago last August, when I predicted what was in store for 2022:

I’m reminded of the 2015 DARPA Robotics Challenge, because many of the humanoid platforms looked similar to the way Tesla Bot looks. I guess there’s only so much you can do with a mostly naked electromechanical humanoid in terms of form factor, but at first glance there’s nothing particularly innovative or futuristic about Tesla’s design. If anything, the robot’s movement is not quite up to DRC standards, since it looks like it would have trouble with any kind of accidental contact or even a bit of non-level floor (and Musk suggested as much).

On stage, the robot did very little. It walked successfully, but not very dynamically. The “moves” it made may well have been entirely scripted, so we don’t know to what extent the robot can balance on its own. I’m glad it didn’t fall on its face, but if it had, I wouldn’t have been surprised or judged it too harshly.

Tesla showed videos of the robot watering plants, carrying a box, and picking up a metal bar at a factory. Tesla

After the very brief live demo, Musk showed some video clips of the prototype robot doing other things (starting at 19:30 in the livestream). These clips included the robot walking while carrying a box of unspecified weight and placing it on a table, and grasping a watering can. The watering can was somewhat impressive, because gripping that narrow handle looks tricky.

“The robot can actually do a lot more than we’ve just showed you. We just didn’t want it to fall on its face.” —Elon Musk

However, despite the added footage from the robot’s sensors we have no idea how this was actually done; whether it was autonomous or not; or how many tries it took to get right. There’s also a clip of a robot picking an object and attempting to place it in a bin, but the video cuts right before the placement is successful. This makes me think that we’re seeing carefully curated best-case scenarios for performance.

This looks a bit more like the concept that Tesla showed last year, although obviously it’s less functional than the other prototype we saw. It’s tempting to project the capabilities of the first robot onto the second robot, but it would be premature to do so.

Just like last year, Musk is implying that the robot will be able to operate tools and do useful things because it has the necessary degrees of freedom. But of course the hardware is only the first step towards operating tools and doing useful things, and the software is, I would argue, much harder and far more time consuming, and Tesla seems to have barely started work on that side of things.

I generally agree with Musk here, in that historically, humanoid robots were not designed for manufacturability. This is changing, however, and I think that other companies likely have a head start over Tesla in manufacturability now. But it’s entirely possible that Tesla will be able to rapidly catch up if they’re able to leverage all that car building expertise into robot building somehow. It’s not a given that it’ll work that way, but it’s a good idea, potentially a big advantage.

As for the production volume and cost, I have no idea what “expected” means. This line got some applause, but as far as I’m concerned, these numbers are basically meaningless at the moment.

I’m not exactly sure who Musk is throwing shade at, but there are only a couple of companies who’d probably qualify with “very impressive humanoid robot demonstrations.” And those companies do, in fact, have robots that broadly have the kind of intelligence that allows them to navigate at least some of the world by themselves, much better than we have seen from Optimus at this point. If Musk is saying that those robots are insufficiently autonomous or world-aware, then okay, but so far Tesla has not done better, and doing better will be a lot of work.

While the actual achievements here have been mercilessly overshadowed by the hype surrounding them, this is truly an astonishing amount of work to be done in such a short time, and Tesla’s robotics team should be proud of what they’ve accomplished. And while there will inevitably be comparisons to other companies with humanoid robots, it’s critical to remember the context here: Tesla has made this happen in something like eight months. It’s nuts.

I can see the appeal of Tesla for someone who wants to start a robotics career, since you’d get to work on a rapidly evolving hardware platform backed by what I can only assume is virtually unlimited resources.

Maybe just, like, get your robot to reliably and affordably do A Single Useful Thing, first?

Three versions of the Optimus design: Concept, Development Platform, and Latest Generation. Tesla

Musk takes a break after this, and we get some actual specific information from a series of Tesla robotics team members about the latest generation Optimus.

We’ll come back to the hands, but that battery really stands out for being able to power the robot for an entire day(ish). Again, we have to point out that until Tesla actually demonstrates this, it’s not all that meaningful, but Tesla does know a heck of a lot about power systems and batteries and I’m guessing that they’ll be able to deliver on this.

Tesla is using simulations to design the robot’s structure so that it can suffer minimal damage after a fall. Tesla

I appreciate that Tesla is thinking very early about how to structure the robot so that it can fall down safely and get up again with only superficial damage. They don’t seem to be taking advantage of any kind of protective motion for fall mitigation, though, which is an active area of research elsewhere. And what is not mentioned in this context is the safety of others. I’m glad the robot won’t get damaged all that much when it falls, but can Tesla say the same for whoever might be standing next to it?

Optimus will use six different actuators: three rotary and three linear units. Tesla

Tesla’s custom actuators seem very reasonable. Not special, particularly, but Tesla has to make its own actuators if it needs a lot of them, which it supposedly will. I’d expect these to be totally decent considering the level of mechanical expertise Tesla has, but as far as I can tell nothing here is crazy small or cheap or efficient or powerful or anything like that. And it’s very hard to tell from these slides and from the presentation just how well the actuators are going to work, especially for dynamic motions. The robot’s software has a lot of catching up to do first.

Optimus will feature a bio-inspired hand design with cable-driven actuators. Tesla

Each hand has six cable-driven actuators for fingers and thumb (with springs to provide the opening force), which Tesla chose for simplicity and to minimize part count. This is perhaps a little surprising, since cable drives typically aren’t as durable and can be more finicky to keep calibrated. The five-finger hand is necessary, Tesla says, because Optimus will be working with human tools in human environments. And that’s certainly one perspective, although it’s a big tradeoff in complexity. The hand is designed to carry a 9-kilogram bag.

Tesla is using software components developed for its vehicles and porting them to the robot’s environment. Tesla

Software! The following quote comes from Milan Kovac, who’s on the autonomy team.

I still fundamentally disagree with the implied “humanoid robots are just cars with legs” thing, but it’s impressive that they were able to port much at all—I was highly skeptical of that last year, but I’m more optimistic now, and being able to generalize between platforms (on some level) could be huge for both Tesla and for autonomous systems more generally. I’d like more details on what was easy, and what was not.

Tesla showed how sensing used in its vehicles can help the Optimus robot navigate. Tesla

What we’re seeing above, though, is one of the reasons I was skeptical. That occupancy grid (where the robot’s sensors are detecting potential obstacles) on the bottom is very car-ish, in that the priority is to make absolutely sure that the robot stays very far away from anything it could conceivably run into.

By itself, this won’t transfer well to a humanoid robot that needs to directly interact with objects to do useful tasks. I’m sure there are lots of ways to adapt the Tesla car’s obstacle avoidance system, but that’s the question: how hard is that transfer, and is it better than using a solution developed specifically for mobile manipulators?
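To make that concern concrete, here is a toy occupancy-grid sketch, not Tesla’s system, in which detected cells are inflated by a safety margin: the sort of keep-away behavior that suits a car but works against a robot that has to reach out and touch things. The grid size and margin are arbitrary illustration values.

```python
# Toy occupancy-grid sketch (not Tesla's system): detected obstacle cells
# are inflated by a safety margin so a planner keeps well clear of them.
import numpy as np

def inflate_obstacles(grid, margin_cells):
    """Mark every cell within `margin_cells` (Chebyshev distance) of an obstacle."""
    inflated = grid.copy()
    for r, c in np.argwhere(grid == 1):
        r0, r1 = max(0, r - margin_cells), min(grid.shape[0], r + margin_cells + 1)
        c0, c1 = max(0, c - margin_cells), min(grid.shape[1], c + margin_cells + 1)
        inflated[r0:r1, c0:c1] = 1
    return inflated

# One detected obstacle cell becomes a 5x5 blocked region with a 2-cell margin.
grid = np.zeros((8, 8), dtype=int)
grid[3, 4] = 1
print(inflate_obstacles(grid, margin_cells=2))
```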

Tesla explained the challenges of dynamic walking in humanoid robots, and its approach to motion planning. Tesla

The next part of the presentation focused on some motion planning and state estimation stuff that was very basic, as far as I could make out. There’s nothing wrong with the basics, but it’s slightly weird that Tesla spent so much time on this. I guess it’s important context for most of the people watching, but they sort of talked about it like they’d discovered how to do all of this stuff themselves, which I hope they didn’t, because again, very, very basic stuff that other humanoid robots have been doing for a very long time.

Tesla adopted a traditional approach to motion control, based on a model of the robot and state estimation. Tesla

One more quote from Milan Kovac:

Ignoring that last bit about changing the entire economy, and possibly also ignoring the time frame because “next few months or years” is not particularly meaningful, the push to make Tesla Bot useful is another substantial advantage that Tesla has. Unlike most companies working on humanoid robots, Tesla is potentially its own biggest customer, at least initially, and having these in-house practical tasks for the robot to train on could really help accelerate development.

“Optimus is designed to be an extremely capable robot, but made in very high volume, ultimately millions of units. And it is expected to cost much less than a car—much less than $20,000 would be my guess.” —Musk

However, I’m having trouble imagining what Tesla Bot would actually do in a factory that would be uniquely useful and not done better by a non-humanoid robot. I’m very interested to see what Tesla comes up with here, and whether they can make it happen in months (or years). I suspect that it’s going to be much more difficult than they are suggesting that it will be, especially as they get to 90% of where they want to be and start trying to crack that last 10% that’s necessary for something reliable.

This was the end of the formal presentation about Optimus, but there was a Q&A at the end with Musk where he gave some additional information about the robot side of things. He also gave some additional non-information, which is worth including just in case you haven’t yet had enough eye rolling for one day.

Musk expects Optimus to cost less than a car, “much less than $20,000 would be my guess,” he said. Tesla

This is a variation on the minimum-viable product idea, although it seems to be more from the perspective of making a generalist robot, which is somewhat at odds with something minimally viable. It’s good that Musk views the hardware as something in flux, and that he’s framed everything within a plan for volume production. This isn’t the only way to do it—you can first build a useful robot and then figure out how to make it cheaper, but Tesla’s approach could get them to production faster. If, that is, they are able to confirm that the robot is in fact useful. I’m still not convinced that it will be, at least not on a time scale that will satisfy Musk.

While Musk seems to be mostly joking here, the whole “it’s going to be your friend” idea is really not a good perspective to bring to a robot like this, in my opinion. Or probably any robot at all, honestly.

Less robot-like and more friendly than a human pretending to be a robot trying to be a human? Good luck with that.

I think it’s more likely that in the short-to-medium term, Tesla will struggle to find situations where Optimus is uniquely useful in an efficient and cost-effective way.

Uh. Maybe as a research platform?

Despite my skepticism on the time frame here, five years is a long time for any robot, and ten years is basically forever. I’m also really interested to see these things happen, although Musk’s definitions of “incredible” and “mind-blowing” may be much different than mine. But we’ll see, won’t we?

Tesla’s AI Day serves as a recruitment event for the company. “There’s still a lot of work to be done to refine Optimus and improve it, and that’s really why we’re holding this event—to convince some of the most talented people in the world to join Tesla,” Musk said. Tesla

I think Elon Musk now has a somewhat better idea of what he’s doing with Tesla Bot. The excessive hype is still there, but now that they’ve actually built something, Musk seems to have a much better idea of how hard it actually is.

Things are only going to get more difficult from here.

Most of what we saw in the presentation was hardware. And hardware is important and a necessary first step, but software is arguably a much more significant challenge when it comes to making robotics useful in the real world. Understanding and interacting with the environment, reasoning and decision-making, the ability to learn and be taught new tasks—these are all necessary pieces to the puzzle of a useful robot that Tesla is trying to put together, but they’re all also extremely difficult, cutting edge problems, despite the enormous amount of work that the research community has put into them.

And so far, we (still) have very little indication that Tesla is going to be any better at tackling this stuff than anyone else. There doesn’t appear to be anything all that special or exciting from Tesla that provides any unique foundation for Musk’s vision in a way that’s likely to allow them to outpace other companies working on similar things. I’ll reiterate what I said a year ago: the hard part is not building a robot, it’s getting that robot to do useful stuff.

“I think Optimus is going to be incredible in five years. In 10 years, mind-blowing. I’m really interested to see that happen, and I hope you are too.” —Musk

I could, of course, be wrong. Tesla likely has more resources to throw at this problem than almost anyone else. Maybe the automotive software will translate much better and faster than I think it will. There could be a whole bunch of simple but valuable use cases in Tesla’s own factories that will provide critical stepping stones for Optimus. Tesla’s battery and manufacturing expertise could have an outsized influence on the affordability, reliability, and success of the robot. Their basic approach to planning and control could become a reliable foundation that will help the system mature faster. And the team is obviously very talented and willing to work extremely hard, which could be the difference between modest success and slow failure.

Honestly, I would love to be wrong. We’re just starting to see some realistic possibilities with commercial legged and humanoid robots. There are lots of problems to solve, but also lots of potential, and Tesla finding success would be a huge confidence boost in commercial humanoids broadly. We can also hope that all of the resources that Tesla is putting towards Optimus will either directly or indirectly assist other folks working on humanoid robots, if Tesla is willing to share some of what they learn. But as of today, this is all just hoping, and it’s on Tesla to actually make it happen.

Assistive technologies are often designed without involving the people these technologies are supposed to help. That needs to change.

Harry Goldstein is Acting Editor in Chief of IEEE Spectrum. 

Before we redesigned our website a couple of years ago, we took pains to have some users show us how they navigate our content or complete specific tasks like leaving a comment or listening to a podcast. We queried them about what they liked or didn’t like about how our content is presented. And we took on board their experiences and designed a site and a magazine based on that feedback.

So when I read this month’s cover story by Britt Young about using a variety of high- and low-tech prosthetic hands, I was surprised to learn that much bionic-hand development is conducted without taking the lived experience of people who use artificial hands into account.

I shouldn’t have been. While user-centered design is a long-standing practice in Web development, it doesn’t seem to have spread deeply into other product-development practices. A quick search on the IEEE Xplore Digital Library tallied fewer than 2,000 papers (out of 5.7 million) on “user-centered design.” Five papers bubbled up when searching “user-centered design” and “prosthesis.”

Young, who is working on a book about the prosthetics industry, was in the first cohort of toddlers fitted with a myoelectric prosthetic hand, which users control by tensing and relaxing their muscles against sensors inside the device’s socket. Designed by people Young characterizes as “well-intentioned engineers,” these technologically dazzling hands try to recreate in all its complex glory what Aristotle called “the instrument of instruments.”

“It’s more important that we get to live the lives we want, with access to the tools we need, than it is to make us look like everyone else.”

While high-tech solutions appeal to engineers, Young makes the case that low-tech solutions like the split hook are often more effective for users. “Bionic hands seek to make disabled people ‘whole,’ to have us participate in a world that is culturally two-handed. But it’s more important that we get to live the lives we want, with access to the tools we need, than it is to make us look like everyone else.”

As Senior Editor Stephen Cass pointed out to me, one of the rallying cries of the disabled community is “nothing about us, without us.” It is a response to a long and often cruel history of able-bodied people making decisions for people with disabilities. Even the best intentions don’t make up for doing things for disabled people instead of with them, as we see in Young’s article.

Assistive and other technologies can indeed have huge positive impacts on the lives of people with disabilities. IEEE Spectrum has covered many of these developments over the decades, but generally speaking it has involved able-bodied journalists writing about assistive technology, often with the perspective of disabled people relegated to a quote or two, if it was included at all.

We are fortunate now to have the chance to break that pattern, thanks to a grant from the IEEE Foundation and the Jon C. Taenzer Memorial Fund. With the grant, Spectrum is launching a multiyear fellowship program for disabled writers. The goal is to develop writers with disabilities as technology journalists and provide practical support for their reporting. These writers will investigate not just assistive technologies but also, through a disability lens, other technologies with ambitions for mass adoption. Will these technologies be built with inclusion in mind, or will disabled people be a literal afterthought? Our first step will be to involve people with disabilities in the design of the program, and we hope to begin publishing articles by fellows early next year.

This article appears in the October 2022 print issue.

Autonomous systems sit at the intersection of AI, IoT, cloud architectures, and agile software development practices. Such systems are becoming prominent in applications like unmanned drones, self-driving cars, automated warehouses, and smart-city management. Yet little attention has been paid to securing autonomous systems as systems composed of multiple automated components; so far, only patchwork efforts have addressed individual components.

Cloud services are starting to adopt a Zero Trust approach for securing the chain of trust that might traverse multiple systems. It has become imperative to extend a Zero Trust architecture to systems of autonomous systems to protect not only drones, but also industrial equipment, supply chain automation, and smart cities.