Oral-History:Reid Simmons

From ETHW

About Reid Simmons

Reid Simmons was born in Akron, Ohio, but spent his childhood in Buffalo, New York. He studied computer science as an undergraduate at the University of Buffalo, spent a year working for an Ann Arbor computer design and graphics company, and earned his M.S. and Ph.D. in artificial intelligence from MIT. In 1988 he came to Carnegie Mellon as a post-doc to work on NASA’s Mars rover prototype, and he subsequently joined the faculty. His research focuses on autonomous and self-reliant robots; most recently his work has centered on human-robot social interaction, multi-robot coordination, and formal verification of autonomous systems. Professor Simmons has published over 150 papers and articles in the fields of robotics and artificial intelligence, and was awarded the Newell Research Award in 2004 and the 1999 NASA Software of the Year Award as part of the Remote Agent team.

In this interview, Simmons discusses his introduction to robotics during his graduate years and his start in the industry and at Carnegie Mellon. He outlines his research work and describes the difficulties in the field of robotics and artificial intelligence. He reviews his involvement with the NASA groups at Ames and JPL, his work and collaborations on robotics projects and the work of his previous students, and his research in human-robot social interaction. Additionally, he reflects on the development and future direction of the field of robotics, and provides advice for those who wish to pursue it.

About the Interview

REID SIMMONS: An Interview Conducted by Peter Asaro with Selma Šabanovic, IEEE History Center, 23 November 2010.

Interview #678 for Indiana University and IEEE History Center, The Institute of Electrical and Electronics Engineers Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to Indiana University and to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of the IEEE History Center.

Requests for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center, 445 Hoes Lane, Piscataway, NJ 08854 USA, or ieee-history@ieee.org. Requests should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user. Inquiries concerning the original video recording should be sent to Professor Selma Šabanovic, selmas@indiana.edu.

It is recommended that this oral history be cited as follows:

Reid Simmons, an oral history conducted in 2010 by Peter Asaro with Selma Šabanovic, Indiana University, Bloomington, Indiana, for Indiana University and the IEEE.

Interview

INTERVIEWEE: Reid Simmons
INTERVIEWER: Peter Asaro with Selma Šabanovic
DATE: 23 November 2010
PLACE: Pittsburgh, PA

Early Life and Education

Q:

We’ll just start by asking where you were born and where you grew up?

Reid Simmons:

Okay. I was born in Akron, Ohio, lived there for two years and then moved to Buffalo, New York, where I spent most of my formative years. I was an undergraduate at the University of Buffalo, so hometown, and then went on to spend a year working for a company in Ann Arbor doing computer design and graphics, and then moved to MIT to do my graduate work in artificial intelligence.

Introduction to Robotics and NASA at Carnegie Mellon

Q:

And when did you first encounter robotics?

Reid Simmons:

When I first encountered robotics – so Rod Brooks was at MIT at the time and he was very active doing robotics, so a number of his grad students were contemporaries of mine, and I saw what they were doing. They had robots running around the lab, but I was kind of a good old-fashioned AI guy and didn’t actually do anything in robotics during my graduate work. When I graduated, I intended to go into industrial research, and that’s where I spent most of my time looking for jobs. Then towards the very end of my job search, I got a call out of the blue from Tom Mitchell, who along with Red Whittaker and Takeo Kanade had landed a fairly large NASA contract to build a prototype Mars rover. Red was going to do the mechanism and control, Takeo was going to be doing the perception, and Tom was going to be doing the AI. So he was looking for someone who, as a post-doc, could come in and do AI for robots, and it was such an intriguing idea, something I’d never really considered, but it was such an intriguing idea that I ended up taking the job. Figured I’d be in Pittsburgh two years for the post-doc, and 22 years later I’m still here. It’s funny actually. A couple of years ago, one of my students came up to me with a copy of one of my papers that I’d written when I was a graduate student and he said, “Is this you or is it a different Reid Simmons?” because he couldn’t believe, given what I do now, that I had worked in this very different area 20 years ago. So I basically completely changed focus of attention once I got to Carnegie Mellon.

Q:

In your graduate work, what was your thesis and what was this paper?

Reid Simmons:

The thesis work was on combining what’s called causal reasoning, first-principles reasoning, and rule-based systems, and I did the work in the domain of geological interpretation. So basically trying to understand, given what the earth looks like now, what the forces were that caused it to be that way. So where were the earthquakes and the faults, and there were volcanoes that split the rocks like this, and sedimentary rocks, so it must have been originally ocean bottom that then rose up, and things like that. And so it would reason about patterns and first principles in order to come up with these interpretations.

Q:

And what year was the lunar project, you remember?

Reid Simmons:

It started in ’88 – that’s when I got here – and went on for, I think, four years. The NASA work was just great. We did lots of work with NASA over the years and thoroughly enjoyed all the work with them. They just had the best problems, and there were really good people at the NASA centers that I could collaborate with. That was a really good time.

Q:

How had you gotten to know Tom Mitchell and who invited you over for the first time?

Reid Simmons:

So Tom and my advisor, Randy Davis, were good friends, and so I guess Tom just put out a question to people he knew, whether they knew of anyone who was graduating. I don’t think I had met him at conferences beforehand; the connection was made basically through my advisor.

Q:

And did any of your work on causal reasoning come in handy on the lunar project?

Reid Simmons:

Not on that project, but subsequently some of that work has been applied. A lot of the work I did as a graduate student involved planning, and so I’ve done a lot of work since then in planning – not so much in the rover project, but subsequent to that – and so things have come back. But even there, things have changed dramatically. As a result of some of the graduate students that I worked with, we all got very interested in probabilistic reasoning and probabilistic planning, and so that’s basically kind of taken over the way that I look at how to deal with robotics problems. There’s a lot of uncertainty in robotics, and reasoning about uncertainty in a probabilistic way is something that we find very important. So that’s what we’ve been doing a lot of since then.

When I first got here, robotics in those days was basically kind of one-off mechanisms. I remember very distinctly that in those times, when Red’s group built a new robot, before they started programming it, the first thing they would do is build their own operating system – a real-time operating system – for the robot, because there really wasn’t anything out there that was suitable: lightweight, real time, something that could do what they wanted. So they would do a lot of that on their own, and every project was different, and so they would do it again. So I felt that there was a need for tools that would help make putting robot systems together easier, in particular at the higher levels. A lot of work had been done in controls and real-time systems, but not much at what we call the task-level control.
So for the first seven or eight years that I was here, my focus was on designing robot architectures, software architectures, and that was completely different than anything I had done as a graduate student. But it was more out of necessity than anything else that we embarked on that, and since then I’ve been kind of going back to my AI roots in terms of planning and reasoning. But that was something that was important to do.

Q:

What were the big challenges in designing a robot operating system and architecture?

Reid Simmons:

The big challenge was that there was this gap between the kind of real-time control that you needed to make the robots operate in the world and the kind of unbounded computational nature of the AI parts of the system. You wanted the robot to react quickly to contingencies, but you still wanted it to plan, and that planning could take a large amount of time. So bridging that gap is where we spent most of our efforts.

Q:

And what were some of the things that you worked on during the first rover project because you mentioned NASA has all these interesting problems?

Reid Simmons:

So designing the robot architecture was a large part of it. We did work in gait planning, so where the rover should put its feet. It was a legged rover, so where should it put its feet, and that was basically a combination of using perception to understand where good places were for the rover to move and planning paths. That led to a long, probably decade-long, set of work that we did for NASA in terms of path planning for rovers – navigation planning, I should say, navigation planning for rovers – which culminated when one of the people who worked with us in the early ’90s went on to NASA and took the ideas that we had developed here at Carnegie Mellon and ported them to the Mars rovers, Spirit and Opportunity. So basically they’ve been running our algorithms for seven years now, which is really cool. It was interesting when they first launched. We all said that if the rovers worked well, it’s a great win for NASA, and if they don’t work well, Carnegie Mellon is going to get blamed. But they worked flawlessly, and it was really great to see, so that was really exciting because we actually had some impact on an actual mission.

Q:

And who were the people that went from here to there?

Reid Simmons:

This was Mark Maimone and he’s still at JPL and he’s working on the next generation rover, so…

Q:

And he was in your group, a student?

Reid Simmons:

He was not a student of mine. We hired him as a post-doc to do some work and then when the post-doc ended, he went on to JPL to work with NASA.

Reflections on Graduate Students

Q:

How many PhD students have you trained while you’ve been here?

Reid Simmons:

I think I’ve graduated 12 or 13. I don’t have the exact count.

<crew talk>

Q:

So your PhD students, where have they gone on to and what are they doing? <inaudible>

Reid Simmons:

Basically, all over. My first PhD student is still here. He graduated in 1995. Actually, funny story about that. My wife was pregnant with our third child at the time, and we worked really hard to make sure that his defense would not coincide with the delivery, but she ended up being three weeks early, and she happened to go into labor that day. So I was in the labor room with her, on the phone with his defense, because his external member had come in, everything had been set up, and it was just easier to do it that way. The nurses couldn’t <laughs> believe that I was doing this. It worked out fine. The defense ended and then she went into the delivery room, and everything worked out fine, but I always remember exactly when his defense was because it was the day my son was born. So anyway, that was 1995. So he’s still here. He’s a professor, as I am, in robotics.

Q:

What was his name?

Reid Simmons:

Sanjiv Singh. And I’ve got one other – no, two other students in academia. One is Sven Koenig, who’s now at the University of Southern California, and Chris Urmson, who I co-advised with Red Whittaker, did the Urban Challenge and is on leave at Google, but we all expect him to come back. The rest of them are scattered around. A large number of them are actually at NASA, both at NASA Ames and at JPL. It was really good synergy for NASA and us. They would provide us with funding and we’d provide them with well-trained students. So I think four of my students are still at NASA now.

Q:

And within JPL and Ames, are they all in the same research group or?

Reid Simmons:

It’s kind of spread out.

Research Collaborations for NASA

Q:

And did you work with the same NASA groups every time or were there different groups that you’ve worked with at NASA?

Reid Simmons:

There were a number of different groups. So there’s two groups at Ames that I was involved with and three groups at JPL.

Q:

And who were the groups, who were they, people?

Reid Simmons:

So there was a group at Ames that was doing robot work: Terry Fong, Mike Sims, Dan Clancy. There was a group at Ames doing program verification: Charles Pecheur and Klaus Havelund. At JPL, I worked with Steve Chien, Rich Doyle, someone whose last name was Aljabri – I can’t remember his first name – and Issa Nesnas. No, I guess I never really worked with Mark Maimone. So that was kind of the group. I’m sure I’m forgetting names. It was spread out over different groups. I mean, they all kind of worked together and everything, but…

Q:

And after your post-doc have you continued to work with Red Whittaker in collaborations?

Reid Simmons:

Some, not a lot but some.

Q:

What were some of your collaborations?

Reid Simmons:

There was some massive work. There was a lunar rover project that we were involved in together – not a lot. I have been working a lot with Sanjiv Singh, though, who as I say was my first student, and we’ve been involved in a number of activities over the years, the most recent one being multi-robot coordinated assembly. So one of the things about me is that a lot of people will basically have the same path for their whole career – they just like pushing one big idea as far as they can push it – and I don’t have the patience to do that, so I’ve been kind of bouncing around. Every five or six years I start up a new effort and the old ones kind of die out. So the architecture effort, that was kind of the first thing I did, and that’s died out. I don’t do much of that anymore. The rover navigation stuff was big for a while; that’s died out, mostly because NASA is not funding any work in that area.

<crew talk>

Q:

Okay. So you did the architecture, the navigation…

Reid Simmons:

Oh, and then there was a period of doing indoor robot navigation, and most recently we’ve been doing multi-robot assembly – large-scale assembly, so coordination of multiple robots – and I guess my current passion is human-robot interaction, particularly human-robot social interaction. So we’ve gotten a number of projects in this area, both in conversational interaction – how robots can engage people and talk to them in a socially acceptable way, mostly dealing with nonverbal communication – and navigational interaction, so how you move through space in a socially acceptable way: things like how do you get on and off elevators, passing people in the hallways, things like that. So every number of years a new thing comes up.

Human-Robot Interaction

Q:

How did you get interested in the human robot interaction?

Reid Simmons:

Good question. This is a good story. <laughs> In the mid-’90s, I had a student, Sven Koenig, who was very interested in probabilistic reasoning, and he and I put together a system that did navigation using Markov models, and it was different from what most people were doing. Nowadays it’s very common, because it turned out to be a very useful thing to do – Sebastian Thrun came up with a much better way of doing it than we had done it, and it’s very popular now – but back in those days it was not really very well accepted. Illah Nourbakhsh had done a little bit of work using probabilistic planning, but not in any real big way. So what we wanted to do was demonstrate that this thing was more reliable than competing technologies, and so we did an experiment where we would have the robot wandering the halls for hours a day, as long as its batteries would last, every day, and eventually it accumulated something like 300 miles of driving indoors. While we were doing that, we noticed how people would react to the robot. I mean, this was 15 years ago; the idea of seeing robots actually driving around you – it’s one thing to go into a lab and see it in a protected setting, but to see them actually driving around you was very unusual in those days. And we noticed that people would react to the robot in very different ways than they would react to people, and realized that a large part of that was probably because the robot wasn’t driving around the environment in ways that they expected. So it was very unpredictable.

So in particular, things like: the robot would be driving towards someone, someone would be walking. The robot had this algorithm that when its pathway was blocked, it would go to the largest open space, and so more often than not that was to the person’s left. So the robot would start moving over to the left, but the person, who was assuming that the robot would do the socially acceptable thing, would also be moving over to his right, and they would come at each other, and then the person would move out of the way. In the meantime, the robot saw its pathway was blocked, so it would go the other way, and so there was this dance between them until finally, invariably, the person gave up and the robot would win and go on its way. Eventually people would just get in the habit, as the robot would come, of moving to the side, letting the robot pass, and then continuing, and we realized that this was kind of socially unacceptable, and that if robots were going to actually be accepted in society, they would have to abide by people’s rules. So that got us into this human-robot social interaction, and it’s something that I think is going to turn out to be more and more important as robots become more common in society.

Q:

What were some of the first studies that you did with robots and social interaction, what kinds of robots were you using?

Reid Simmons:

We started off by using Xavier, which was a robot that was built in the early ’90s to study navigation. It had a 24-inch base, so it was a pretty big, kind of cylindrical robot, and the two studies that we did then were – well, three actually. One was having the robot stand in line. That turned out to be a really cool project and led to some nice results. The second was having the robot pass on the right, which turned out to be fairly straightforward because of the way that the system was architected; we just needed to add a little bias for the robot to go one way or the other. And the third, which was actually never published, was having the robot move on and off elevators in a socially acceptable way, and those all culminated in 2002…

Q:

Is it two?

Reid Simmons:

...the mobile robot…

Q:

Oh, okay, okay, not the AAAI one.

Reid Simmons:

Yeah, the AAAI.

Q:

Was it two or three? Never mind, you’ll find it. <laughs>

Reid Simmons:

Okay. I’m pretty sure the first one was 2002.

Q:

Okay. You probably know better than me. <laughs>

AAAI Mobile Robot Challenge: Grace

Reid Simmons:

Where we had the robot – we entered Grace, which was a robot that we had designed specifically for this competition, in the AAAI Mobile Robot Challenge. The challenge basically was to have a robot attend the conference: the idea was you drop it off at the entrance of the convention center, it would have to find its way to the registration booth and register; once it got there, it would get a map of the environment and a location where it was supposed to give a talk, and then it would navigate there and give its talk. We were shocked at the publicity that came out of that. I mean, there were hundreds and hundreds of people who came to the convention center to watch the robot perform. Basically, the robot needed bodyguards, because it couldn’t navigate through really thick crowds. So we would have to push people away so that the robot would have any chance of getting anywhere. It was really crazy.

So we did rather well there and participated in a couple more after that, and that kind of set the stage for a lot of the work that we did subsequently. That one involved getting on and off elevators and standing in line. We didn’t actually use the socially acceptable passing in corridors, because we expected that the robot would be in these large open spaces. And that was a multi-institutional effort we were involved with: Northwestern – I’m going to forget someone – Metrica, the Naval Research Lab, and Swarthmore. I think those were the five that were involved in that project, and a large part of what we did was integrate all this software that had been developed over the years by different researchers into kind of one system, and that turned out to be quite an effort, but well worth it in the end.

Well, since then, there’s been a large focus on conversational interaction. So we developed this Roboceptionist project, where basically it’s a stationary robot with a graphical face on a monitor and a pan-tilt head, so it can move its head around, and it has a cartoonish but three-dimensional face rendered in graphics. This is a joint effort with the School of Drama. They developed a character back story and continuing episodes in the life of the robot, and we can program that in, and you can talk to the robot about its life. You can ask it about its parents or its siblings or its love life, or what it thinks about its job or its boss, who happens to be me – and the robot, for whatever reason, thinks that I’m an evil guy. That’s a continuing story line: no matter what the character is, I’m the evil guy. <laughs> The Carnegie Mellon School of Drama is world class, and the writers that we get for this, mostly in the graduate writing program, are just really, really good, and so there have been a lot of really interesting story lines that come out of that. It’s a very different type of human-robot interaction study, because most of the studies have been, and still are, controlled: you go into a lab and you sit down and you talk to the robot or you interact with the robot. This is what they call “in the wild.” The same with Xavier. The robot is just there, and it’s an uncontrolled experiment. So we get reactions that differ wildly from what you would get in a laboratory. I mean, it would be hard to imagine people going into a lab, when they know an experiment is taking place, and swearing at the robot, but we get that all the time – or propositioning the robot. We get that all the time.

So there are all sorts of interesting things that we’re discovering about the way people interact with the robot. For instance, one of the students, Rachel Kirby, did a study where she implemented an emotional model on the robot, so the robot could express emotions and moods, and it turns out that just showing different facial expressions made a difference. If the robot looked sad, people would interact with it differently than when the robot looked happy, and maybe that’s not surprising in retrospect, but the fact is that people who had absolutely no idea of what was going on – one day they’re walking in the hallway and the robot looks happy, and they go over and they interact with it; the next day, they’re walking in the hallway, the robot looks sad, and they just avoid it. That’s what happens. So those are some of the things that have been very interesting discoveries.

Q:

What are some of the technical challenges that have come up with the social robot?

Reid Simmons:

So, in my mind, the main technical challenge is that people are infinitely variable. The robots, even with machine learning, basically adhere to rules. There are certain rules about how to interact, how to behave in certain situations, and people just present an infinite variety of ways that they interact. So the robot can do the right thing for a little while, but then invariably the interaction breaks down, because the person asks a question the robot isn’t capable of answering, or they try something like sarcasm or humor that the robot just doesn’t understand, or something happens like they turn their back on the robot for a second and the robot doesn’t realize it and continues to talk to them, and they get pissed about that and walk away. So all those things happen with high regularity, but none of them happens regularly enough that we can actually capture it and say, okay, here’s a new rule for the robot. We do that, but still, for a large number – a quarter – of the things that people say to the robot, the robot doesn’t respond in a reasonable way. It’s even more so, I think, when you’re interacting with robots spatially. The way people move through space is predictable for us as humans, but it’s really hard to explain, to rationalize, so that you can write it down in a set of rules that a robot program would understand.

Q:

And who have you been collaborating with in this direction of your research?

Reid Simmons:

Mostly people at Carnegie Mellon. There’s a drama professor, Anne Mundell, and a writer, Michael Chemers. I’ve been involved with Illah Nourbakhsh. Manuela Veloso has gotten very interested in this area now; also Aaron Steinfeld and Jodi Forlizzi. I haven’t had a project with Sara Kiesler, but have interacted with her.

Roboceptionist

Q:

How did you decide to build a roboceptionist?

Reid Simmons:

People had been kind of complaining for a while that we’re the Robotics Institute, and most of the time you come into the Robotics Institute – this was in the ’90s – you come into the Robotics Institute and you never see a robot. So there was a push to develop a robot presence that would be there all the time, and we were concerned about having a mobile robot, because of the danger and also the fact that it would not be available full time. It’d only be available for a couple of hours a day, as long as its batteries lasted – but mostly it was the danger of having a robot wandering around completely unsupervised for 40 hours a week. So we came up with this idea of a robot receptionist. Randy Pausch actually suggested to me that a great way of keeping people engaged would be to have the robot involved in a soap opera, so that it would have stories to tell and people would be engaged with that and keep coming back to hear its stories, and that basically was the genesis of the interaction with the theater department. The other impetus was that there had been a number of technologies that had been put out in the environment. So HCII, the Human-Computer Interaction Institute, had this really cool piece of technology. It was a bubble machine where <laughs>, do you remember, okay…

Q:

I’ve heard of it. <laughs>

Reid Simmons:

Okay. It was a bubble machine, and basically what it was was tubes of water, and these air bubbles would come out periodically from the tubes and form patterns, and you could program the patterns and it would do the patterns that you wanted, and it was really cool. I mean, in the early days when it first came out, there were lines of people waiting to use it. Everyone liked to play with it, and then over time people stopped using it, and the mechanism got old and started leaking, and they took it down, but I think by the time they took it down, it was hardly used at all. So the question was how could we develop a technology that would maintain people’s interest over very long periods of time, and the idea was to have a robot with a character and a personality and, as Randy Pausch suggested, story lines that would capture people’s imagination and make them interested in coming back and hearing more about the robot’s life. So that’s what we did, and it turned out to be a fairly successful experiment. The first year that the Roboceptionist was there, we had a huge number of people interacting with it, which by the second and third year had kind of calmed down, but this has been going on for seven years now and it’s been pretty even over that length of time. That doesn’t mean that the same people come back all the time, but there are enough repeat visitors that it keeps the interest level up on a daily basis.

Other Robot Projects

Q:

And what were some of the other robots that you worked on?

Reid Simmons:

So, let’s see, we had Xavier, which basically did robot navigation. There were a couple of Mars rovers – Ambler, and Ratler was the lunar rover; I did both the Ambler and Ratler work with Eric Krotkov, and Martial Hebert was also involved in those projects. We have a robot called Bullwinkle, which is a mobile manipulator, and it coordinates with this robotic crane that we borrowed from NIST ten years ago and haven’t gotten around to returning yet. It’s huge. I don’t think they’ll ever want it back, because it would basically need an 18-wheeler to cart it back. We have Grace and the Roboceptionist. The latest robot that we’re just starting to use is called COMPANION. It was developed by Rachel Kirby, who is co-advised with Jodi Forlizzi at the HCI Institute, and it’s an omnidirectional humanoid robot: it’s got a fiberglass shell that’s shaped roughly like a human, and it’s got a graphical face like the Roboceptionist does, but the face, rather than being on a big screen, is embedded in a three-dimensional head, so it looks much more organic. It’s going to be used for social navigation. One of the things that Rachel found in her thesis was that non-holonomic robots – robots that basically have to turn their whole bodies in order to change orientation – turn out to be not very socially acceptable. As the robot turns, people get confused about what the robot is going to do. People, as they’re moving down the hallway, tend to keep their bodies aligned with the direction of travel, and they just sidestep if they want to move away from someone. So we have developed a robot that can actually sidestep like that, and we think that it’ll prove much more socially acceptable.

Q:

And how does the robot sidestep, what’s different about the design compared to the others?

Reid Simmons:

Well, instead of just moving in the direction that its body is facing, it can move side to side as well. So it can keep its body in this orientation and just move to the side.

Q:

We were just talking to Ralph yesterday with the ball robots, so is it on something like that or do the wheels just kind of turn?

Reid Simmons:

Yeah. It’s this thing – they’re called mecanum wheels, or Swedish wheels – and basically the wheels have rollers that are set at a 45-degree angle, so that as the wheel turns, these rollers turn as well and they can just slip. So depending on the direction in which you push: if you push in this direction, they go forward; if you push in this direction, it just slips to the side. So it’s not like Ballbot, but…

NASA Projects

Q:

Could you tell us a little bit more about the different NASA projects that you worked on?

Reid Simmons:

There were a lot of them. So the first one was Ambler, which was a very large eight-legged – six-legged, I think, six-legged – walking robot, a very unique design. It was designed by John Bares as his PhD thesis, and John joined the faculty here and has been very successful since. I had some projects with architectures, some projects with navigation both for lunar rovers and for Mars rovers, and actually had a project – I forgot about this one – a project to develop an autonomous spacecraft. So this was basically AI in space. This was actually a mission; it flew. I was part of the team that worked on this project. It was to autonomously control the spacecraft. Normally, they’re controlled from the ground, which tells them when to turn on their motors, how long to burn, everything. Here, this was all decided on board using AI planning technologies. It was a very, very fast-paced, very intense project, a very large project, probably the largest project that I’ve ever worked on, and it was really cool because it flew. We went down to Cape Kennedy and saw the launch, and then six months later it actually did its thing. It was quite cool.

Q:

What was the spacecraft and what was the mission?

Reid Simmons:

The mission was to visit some asteroids. It was called Deep Space 1. So it was a technology demonstration mission that demonstrated a number of different technologies. The only other one that I remember was an ion propulsion system, which has, I believe, since been used in other missions as well. As far as I know, the AI in space hasn’t been used again, but it was a great development effort.

Q:

When was this?

Reid Simmons:

What?

Q:

When was this?

Reid Simmons:

Jeez, I don’t remember, ’98 somewhere around there.

<crew talk>

Reid Simmons:

And then we’ve had work on multi-robot assembly most recently.

Q:

And as far as you know, is that the first spacecraft that was completely controlled by AI?

Reid Simmons:

Yeah, yeah, definitely.

Q:

And there haven’t been any since?

Reid Simmons:

Not that I know of. No. I mean, there have been spacecraft that have had AI components in them, but as far as being completely controlled by an AI system, no. I don’t think so.

Q:

So can you tell us about how you designed that system and how it operated, the architecture?

Reid Simmons:

I don’t know. It’s not that…

<crew talk>

Reid Simmons:

Yeah, it’s not interesting.

Q:

It’s not interesting? Is Amelia another robot then?

Reid Simmons:

Yeah. So Amelia is the base that became Grace. She was called Amelia for a while, but there really isn’t much to talk about. It was mostly that we used the same base, put a face on her, and it became Grace, and she’s been known as Grace since then.

Q:

I see NASA Peer Award.

<crew talk>

Reid Simmons:

There are a couple of them. Oh, I remember what this was. Another project I forgot to talk about – that I actually was involved in for a number of years – was a robot architecture project. So NASA wanted to put together a common architecture that they could use for a lot of their robot projects, and I was involved in that for a number of years. We basically took the navigation work that we had done and integrated it into this architecture, and it turned out to be an interesting effort. The architecture was fairly successful. They ended up open-sourcing it and people can use it. It’s called CLARAty, and it was run by Issa Nesnas. So it was rather successful in NASA; they gave out awards to people for that.

Q:

What about your work on mixed autonomy and you mentioned the Crane and Bullwinkle, could you tell us a little bit about that?

Reid Simmons:

Yes. So one of the things that we had noticed in doing some of our multi-robot coordination work – actually, this is another area that I should mention that I’ve been very interested in: fault detection and error recovery. One of the things that robots do very poorly is noticing when they’re in trouble. I mean, this came from – I was involved for many years in the AAAI robot competitions, and you would go there and you would watch, and you’d see the robots do things over and over again, basically get stuck in loops. Let’s say they have to collect orange balls, and so they’d see an orange ball over there and they would turn towards it to go, and there would be a rock in front of them, so they couldn’t go over the rock, so they’d turn away from the rock, and then the cameras would detect an orange ball over there, and so they’d turn towards the <laughs> orange ball and there would be a rock in front of it, <laughs> and they would do this forever, literally forever. Sometimes, we just had to stop the robots because it was clear they were just never going to get out of this loop. Similarly, the Mars rovers would get themselves into trouble where, not even noticing they were in trouble, the more they would try to move according to their plan, the deeper and deeper they would literally dig themselves in, until eventually it would end in catastrophe more often than not.
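The stuck-in-a-loop failure he describes can be caught with a simple progress monitor. This is a toy sketch of the general idea, not anything from those competitions or projects – the window size, repeat threshold, and grid discretization are all made-up parameters:

```python
from collections import deque

def make_loop_detector(window=20, min_repeats=3):
    """Toy progress monitor: flags a robot that keeps revisiting the same
    discretized pose, like the ball-chasing robots that turned back and
    forth between two targets forever. Real systems use much richer state."""
    history = deque(maxlen=window)  # only the most recent observations count

    def observe(pose):
        # Discretize the pose so nearby positions count as "the same place".
        cell = (round(pose[0], 1), round(pose[1], 1))
        history.append(cell)
        # If this cell has shown up too often recently, we are likely stuck.
        return history.count(cell) >= min_repeats

    return observe

stuck = make_loop_detector()
# Oscillating between two spots (rock on one side, ball on the other):
alarms = [stuck(p) for p in [(0.0, 0.0), (1.0, 0.0)] * 5]
print(any(alarms))  # the detector eventually fires
```

The point is just that noticing the failure at all – not any particular recovery – is the step the robots he watched were missing.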

So I was very much interested in trying to detect these types of behavior changes and be able to react to them in reasonable ways. And one of the things we realized is that – again, where I was talking before about humans being infinitely variable – the environment is not as variable as people, but it’s still pretty variable. Different things can be thrown at you, and we realized that to get robots to be 100 percent autonomous, doing it all by themselves, was just not going to happen anytime soon. So we were very much interested in looking at how people and robots could work together to help this out, and what we were interested in is having the robots be able to detect when they were in trouble and be able to determine for themselves when they could get out of the trouble themselves and when they needed to ask people for help. So a lot of my recent work, with students like Brennan Sellner and Laura Hiatt, has been in terms of recovering from failure: either how to detect that the system was failing, or how to recover once the system was failing, and then in some cases how to ask humans for help when the robots are failing.
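The decide-for-yourself-or-ask-a-human choice he describes can be framed as a small expected-cost comparison. This is a hypothetical sketch of that framing – the cost numbers and the function itself are illustrative inventions, not the actual sliding-autonomy machinery from his projects:

```python
def choose_recovery(p_self_recover, cost_robot_retry=1.0, cost_human_help=5.0):
    """Sketch of a sliding-autonomy decision: recover alone when the expected
    cost of retrying until success is lower than the fixed cost of
    interrupting a person. All numbers are hypothetical stand-ins."""
    if p_self_recover <= 0.0:
        # No chance of self-recovery: asking for help is the only option.
        return "ask_human"
    # With success probability p per attempt, the expected number of
    # attempts is 1/p, so the expected retry cost is cost/p.
    expected_retry_cost = cost_robot_retry / p_self_recover
    return "self_recover" if expected_retry_cost < cost_human_help else "ask_human"

print(choose_recovery(0.9))  # confident the retry will work: recover alone
print(choose_recovery(0.1))  # likely to fail repeatedly: ask for help
```

The interesting research questions sit underneath this comparison – estimating the recovery probability at all, and knowing that the system is failing in the first place.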

Q:

And what kind of tasks are you doing this in?

Reid Simmons:

Mostly assembly tasks. So putting together large structures like space structures.

Q:

And this is also – is it a NASA project or?

Reid Simmons:

It has been supported by NASA, yeah. We’ve since been looking for other areas of support, since NASA support has kind of dried up, but there are other manufacturing companies that are interested in this type of behavior.

Q:

Has most of your research been supported by NASA or have you had other funding sources?

Reid Simmons:

I would say for the first ten years – I’ve been here 22 years, so the first half of the time I’ve been here – it was almost exclusively NASA, but since then I’ve kind of branched out. I’ve had a little bit of DARPA support, not much, ONR support, NSF support, industrial support. We just got an Air Force contract. But, yeah, NASA had been my big support up until a few years ago.

Q:

You mentioned that NASA support was running out. Is there a particular reason, are they not doing so much funding of robotics anymore or?

Reid Simmons:

Let’s not go there.

Q:

Okay. <laughs>

Reid Simmons:

It’s political.

Other Collaborations

Q:

Okay. <laughs> What sort of industrial partners have you had?

Reid Simmons:

Good question. General Motors is – I’m trying to remember who has actually given industrial support. Well, I guess not a lot of industrial support. I’m sure there are others, but General Motors is the only one that comes to mind right now.

Q:

Have you had other outside collaborations apart from NASA and CMU?

Reid Simmons:

No, not really.

Q:

What was the Atacama Desert Trek field experiment?

Reid Simmons:

Oh, this was another NASA thing. This actually was something that was led by Red Whittaker where they sent a robot to the Atacama Desert, which is the driest desert in the world and basically had it drive around for weeks at a time. So it was basically trying to show that robots could operate in very harsh environments.

Advice for Young People

Q:

So I don’t know if you have something at 11, but it’s getting there. So we have kind of an ending question but is there anything you’d like to add or something that we missed <laughs> before we go?

Reid Simmons:

No, I think you’ve covered a lot of things, yeah.

Q:

Okay. So our final question was would you have some advice to give to people who are interested in getting into robotics?

Reid Simmons:

Yeah, let’s see <laughs> so when I got this offer to come to Carnegie Mellon, as I said, I had not had any experience with robots before. So I asked Rod Brooks, who was at MIT and very well renowned for his work in robotics – I knew him, but not well – and basically asked him for his advice, and his advice basically was: forget about this planning stuff, the action is all in perception. I thanked him for his advice, and I wasn’t going to go that far afield, but it still is good advice, I think. A very large part of what makes robotics hard is getting a good understanding of what’s happening in the world. I mean, as I said before, people are very variable, but one of the things that makes human-robot social interaction hard is that robots can’t pick up on the type of cues that people give out – expressions and nods and prosody and all that. I mean, people are working on all that, but it’s just not there yet. So it’s still a very large open area. So that’s kind of passing on Rod’s advice from 20 years ago.

My own advice, though, is that I think it’s important to do something in robotics that you’re passionate about. There’s an awful lot of work where people will kind of flock after the next fad and decide what to work on based on what’s hot at the time, and – well, at least I don’t think that’s a very satisfying way of choosing research topics. I think you want to choose research topics that interest you, that you’re passionate about, that you believe will have an impact on society and the field in general, and then go with that, and if you’re right and you do good work, then eventually the funding will come and people will notice. The social robots work is one example. I mean, I wasn’t the first person by any means to get involved in it, but when we started there were relatively few people doing work in HRI, period, and even fewer doing work in social interaction, and it’s become much, much more prevalent now, ten years later. I got into it because I saw that it was an important thing that the robots were missing and figured that unless we had this type of interaction, robots wouldn’t be well accepted in society. But it’s important, as I said, to choose topics based on what you think is important and what you’re really interested in, rather than just following the latest fad.

Reflection on Robotics

Q:

So besides HRI, what do you think are some of the other important directions in robotics in the next few years?

Reid Simmons:

Mobile manipulation, learning, and better perception – particularly multimodal perception, since we get a lot of feedback from all our senses and we have to put it together very well. Those are all going to be very important.

Q:

And given your background, what do you see as the relationship between AI and robotics over that time period?

Reid Simmons:

So as robots have become more and more – so, there are two things. One is that as robots have become more and more capable mechanically, and computing power has gotten to the point where you can embed very, very powerful computers on board, it’s made AI feasible to do. If robots really couldn’t do much, planning didn’t really make any sense, because the robots didn’t survive for more than a few seconds or a few minutes, whatever – it wasn’t very important. But now that robots are very capable and you can think about having robots that are available all the time, the AI becomes much more important. The other thing, which I think is more fundamental, is that thinking about AI and robots has pushed the field of AI in a different direction.

So when I was starting out, when I was getting my graduate degree, AI was all symbolic reasoning, and there were a few people who were doing nonsymbolic reasoning, probabilistic reasoning, but they were very few and far between. Since then, once you start taking the world seriously and realize that there’s so much uncertainty in the world, it has pushed AI much more into nonsymbolic representations and reasoning: probabilistic reasoning, nondeterministic reasoning. Machine learning has become much more nonsymbolic than it used to be. Control learning, things like that, have become very important. So putting AI and robotics together has opened up a huge new set of problems for AI to solve and has led people into very different types of solution techniques than they had been involved in in the past.

Q:

Great. Thank you, thank you.

Reid Simmons:

Okay. Sure.