Update of the Shadow Robot Company's robot hand

Shownotes

Shadow's robot hand is in high demand, among others at Google DeepMind, but the British company wants to scale its hand and has other markets in view as well. In the podcast, Rich Walker explains what role AI plays in that and why he is happy with hardware.

Thanks to our partner, Hannover Messe

The application form for the Robotics Award is available here

Come join us at our event -> Register

Questions or ideas about robotics in industry? helmut@robotikpodcast.de or robert@robotikpodcast.de

Transcript

00:00:00: Hi, Robert here.

00:00:01: This podcast is presented to you by Hannover Messe.

00:00:04: Many thanks to our colleagues in Hannover, and now let's get started.

00:00:09: Robotik in der Industrie, the podcast with Helmut Schmidt and Robert Weber.

00:00:18: Hi, Robert here.

00:00:22: This is the last episode before our summer break.

00:00:27: This time we borrowed from our colleagues at the Industrial AI Podcast,

00:00:31: who had Rich on their program a few weeks ago.

00:00:36: He went down so well that we said, OK, let's bring Rich Walker from the

00:00:40: Shadow Robot Company onto the Robotik in der Industrie podcast as well.

00:00:46: So Helmut and I are heading off on vacation now to relax a bit.

00:00:51: Helmut is staying around a little longer;

00:00:52: he doesn't leave for vacation until early September.

00:00:55: I'm off now, and when we're back there will be something very

00:01:01: special, something new from the Robotik in der Industrie podcast.

00:01:05: We're switching sides.

00:01:07: What that means, you'll find out in the

00:01:09: first episode after the break.

00:01:11: We wish you all the best and, above all, stay healthy.

00:01:14: And now, let's get started.

00:01:17: My guest today is Rich Walker.

00:01:19: He works for the Shadow Robot Company.

00:01:21: And Rich, this is your second time as a guest on our podcast.

00:01:25: Welcome back.

00:01:26: Thank you, it's always a pleasure to be here.

00:01:28: Let me ask: your cat, who also played a

00:01:30: crucial part in the podcast last time, is the cat there too?

00:01:33: He's at home today and I'm in the

00:01:35: office, so hopefully we'll be spared any

00:01:37: feline intervention.

00:01:38: Okay, how are you?

00:01:41: Very well, very well.

00:01:42: We've just finally released the new robot hand,

00:01:43: the new robot hand that we've been

00:01:46: working on in secret for five years.

00:01:47: So it's a very exciting,

00:01:49: somewhat expensive reveal, shall we say.

00:01:51: And we're very pleased that we can

00:01:52: finally talk to people about it.

00:01:53: Exactly.

00:01:54: I talk about AI and robotics a lot.

00:01:57: Nvidia makes its money with hardware,

00:02:00: and so do you.

00:02:01: So is hardware still worth it?

00:02:04: Oh, absolutely.

00:02:05: I think what we've seen over the last five

00:02:07: years is that the action in AI has

00:02:08: totally shifted, with much, much better compute.

00:02:13: Nvidia made a very big bet on GPUs and fabs for deep

00:02:17: learning. That has obviously really, really

00:02:19: worked out for them, and for anyone who bought Nvidia shares

00:02:21: five years ago, well done.

00:02:24: For the rest of us...

00:02:26: Do you own shares?

00:02:27: Not in Nvidia.

00:02:28: Unfortunately, there we go.

00:02:30: I back my own horse.

00:02:34: But for us on the hardware side, it's a deep, deep-tech corner,

00:02:38: and I think what we're seeing is that you don't get new

00:02:41: hardware without a lot of work, a lot of development.

00:02:43: But you have to have new hardware to make everything possible.

00:02:47: How much software and AI is in your new hand?

00:02:52: Remarkably little.

00:02:54: We've had a lovely relationship with Google DeepMind,

00:02:57: developing this robot.

00:02:59: And they said: we're the AI people,

00:03:01: you're the robot hardware people.

00:03:03: Here's a little list.

00:03:04: Let's not get too stuck on the list.

00:03:07: And that was actually very, very rewarding,

00:03:09: because it meant we didn't have to deal with the

00:03:11: challenges they were taking on.

00:03:12: But we could also tell them: we have

00:03:15: a couple of ideas on the hardware side.

00:03:17: Would they solve the problem you have?

00:03:19: And they'd say: yes, that will work.

00:03:21: And that will work.

00:03:22: Let's do both.

00:03:24: What's new now?

00:03:25: What has changed in AI and robotics at the moment?

00:03:29: The change we've seen is these

00:03:32: algorithms coming out of video games and into the real world.

00:03:37: We've seen many, many examples where

00:03:39: really powerful AI systems can

00:03:41: deal with complex tasks.

00:03:43: But those are almost all virtual tasks.

00:03:47: Even where they could have been done in the real world,

00:03:49: they stayed virtual.

00:03:50: So if you look at AlphaFold,

00:03:52: the protein folding:

00:03:54: it doesn't replace the experiments.

00:03:57: The protein work is still done by chemists.

00:04:01: And what we're trying to do, in another

00:04:03: corner of the AI space, is to say,

00:04:05: OK, we take these powerful AI tools and

00:04:08: we use them on real robots to solve the problems

00:04:12: that robotics has been stuck on for a long time.

00:04:15: For example?

00:04:17: So, how do we pick up objects and interact with them well?

00:04:20: That's a really, really big challenge in robotics.

00:04:22: If you look at all the very

00:04:24: successful companies doing object

00:04:25: handling with robots, look at what they do:

00:04:28: they're very, very limited in the types of objects

00:04:30: they handle or the environments they work in.

00:04:33: Even the world-class experts at

00:04:36: handling objects at scale, like Amazon,

00:04:38: are still very, very limited in their domain.

00:04:41: Amazon expects the object to come down the conveyor,

00:04:44: like a carton, and knows the object will be in a warehouse.

00:04:49: But if you say instead: here's a random pile of

00:04:51: stuff in your kid's bedroom, how do I tidy it up?

00:04:53: Yes. - All these systems simply

00:04:55: throw their hands in the air and say, no,

00:04:57: you can't do that yet.

00:04:58: You mentioned all these AI algorithms.

00:05:02: How do you now bring both worlds together,

00:05:07: your hardware world and your AI algorithms?

00:05:11: So, a lot of that happens through a tool called Reinforcement Learning.

00:05:15: Reinforcement Learning is really powerful.

00:05:17: It's a method that allows you to take advantage of the fact that people have got

00:05:22: really good at simulating physics.

00:05:24: I think Festo already presented a hand with a Rubik's Cube, I think, wasn't it?

00:05:29: Yeah, exactly.

00:05:30: I think quite a lot of interesting challenges where people have used

00:05:33: Reinforcement Learning to solve them.

00:05:34: But typically the problem you have is that,

00:05:37: okay, you can do that in the simulator and it works really, really well.

00:05:40: But then when you try and run those experiments in the real world,

00:05:44: what happens is you break your robot.

00:05:46: And trying to keep the robots running for long enough to get these experiments done

00:05:50: has been essentially the big showstopper here.

00:05:53: Yeah.
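
A minimal sketch of the trial-and-error loop Rich describes, using the generic Gymnasium API; the environment name "HandManipulate-v1" is a placeholder for some simulated hand task, not Shadow's or DeepMind's actual setup.

```python
# Minimal reinforcement-learning-style loop: try movements, observe reward.
# "HandManipulate-v1" is a placeholder environment name, assumed here.
import gymnasium as gym

env = gym.make("HandManipulate-v1")
obs, info = env.reset(seed=0)

for step in range(10_000):
    # Early reinforcement learning is essentially random movements ...
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    # ... and the reward signal identifies which movements were good.
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```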

00:05:53: And what is now your approach?

00:05:57: Well, we spent thousands of hours beating the robots with sticks, so you didn't have to.

00:06:02: That's the honest, simplest way I can explain it.

00:06:05: For the new hardware, we said, okay,

00:06:07: what Reinforcement Learning does is it tries random movements and works out,

00:06:12: which ones are good quality movements.

00:06:13: So we have to be able to survive that.

00:06:15: And then once we can survive that,

00:06:17: what grasping and manipulation does is it collides the robot hand with objects.

00:06:22: But if I'm going to pick something up,

00:06:24: my fingers are colliding with the object by definition.

00:06:27: They have to be to touch it.

00:06:28: And traditionally in robotics, we avoid collisions.

00:06:30: We say, plan your path so you don't collide.

00:06:34: But if you want to learn to handle objects well,

00:06:36: you have to accept that collisions are going to be your bread and butter.

00:06:39: Exactly.

00:06:40: We built a robot platform that was designed to survive

00:06:44: quite hostile abuse.

00:06:45: We hit it with pistons, we hit it with baseball bats.

00:06:48: We hammered it into itself and over thousands of hours of testing.

00:06:53: We ended up with a platform that was reliable enough to perform these

00:06:57: experiments that really make you wince when you see a robot running on them.

00:07:01: So you're running the hand together with the robot, collecting data?

00:07:06: Am I wrong?

00:07:07: Yes, exactly.

00:07:08: The learning algorithm runs partly in simulation,

00:07:11: but then translates across to the real hardware.

00:07:13: When running on the real hardware,

00:07:14: we have as much data as we can possibly get coming out of the robot.

00:07:19: So this is something that people don't appreciate about classical robots.

00:07:22: They don't actually generate very much information.

00:07:25: You might have some motor current data and you might have some joint position

00:07:28: sensors.

00:07:29: And if you're really lucky, you'll have a force-torque sensor at the wrist.

00:07:32: And that's probably it.

00:07:34: Whereas if you're trying to learn to understand acting in the real world,

00:07:38: you want as much data as possible.

00:07:41: So we've packed this robot with sensing.

00:07:43: You have really, really deep insights into what's

00:07:45: happening around the robot and with the robot, what it's working on.

00:07:49: And how much data do you collect?

00:07:51: And what kind of algorithms now do you use then with the data?

00:07:56: So we have, I think it's something like a hundred and fifty-five sensors

00:08:00: for each finger of the robot.

00:08:01: Each finger,

00:08:02: but you still have three fingers, right?

00:08:05: Three fingers on this, three fingers.

00:08:06: And then we have.

00:08:06: That's new, right?

00:08:07: Yes. Our previous hands were always very humanlike.

00:08:10: So there'd be four fingers and a thumb, and the fingers would have that kind

00:08:14: of characteristic up-and-down pattern that your fingers come in.

00:08:18: The thumb comes up to oppose those.

00:08:20: This new hand is actually, it's only three fingers,

00:08:23: but each one of those fingers is probably as agile as your thumb is.

00:08:28: And they're arranged in a triangle, so they can do quite a lot of manipulation,

00:08:31: quite a lot of dexterity.

00:08:32: We could put more fingers there,

00:08:34: but for the moment three was enough to do interesting things and see what could

00:08:38: be done and how we could do it.

00:08:39: And now you have this one hundred.

00:08:42: You mentioned one hundred fifty sensors, right?

00:08:44: Yeah. In one finger, right?

00:08:45: In one finger.

00:08:46: OK, now you collect the data and what is the next step then?

00:08:51: What do you do with the data?

00:08:53: So for somebody doing research in robotics and AI,

00:08:56: what they're typically doing with that data is

00:09:00: capturing it while they're doing an experiment and

00:09:03: feeding it into their learning system.

00:09:05: So that might be going in as real time data that's used to generate control.

00:09:09: You feed it into neural networks.

00:09:11: Neural network generates commands back to the robot or it might be something

00:09:15: where you're using a demonstration and then at the end of the process,

00:09:18: you're saying, OK, capture this data and learn from this data how to control the robot.

00:09:23: And there are lots of good learning tools out there in the

00:09:27: ecosystem now, PyTorch and so on, that people just feed this data into.
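
As an illustration of that data path, a minimal PyTorch sketch mapping one snapshot of sensor readings to joint commands; the sensor count follows the figures above, while the network shape and the number of joint outputs are illustrative assumptions, not the hand's real interface.

```python
import torch
import torch.nn as nn

# Illustrative sizes only: three fingers at ~155 sensors each in,
# a guessed number of joint targets out.
N_SENSORS, N_JOINTS = 3 * 155, 12

# A small policy network: sensor snapshot in, joint commands out.
policy = nn.Sequential(
    nn.Linear(N_SENSORS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_JOINTS), nn.Tanh(),  # commands normalised to [-1, 1]
)

sensor_snapshot = torch.randn(1, N_SENSORS)  # stand-in for real telemetry
joint_commands = policy(sensor_snapshot)     # would be streamed back to the robot
```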

00:09:31: So, but this is not the reinforcement approach, right?

00:09:34: This is how the hand behaves, right?

00:09:37: Well, the reinforcement approach is the next step, right?

00:09:40: The simulation stage, typically.

00:09:42: And then also on the real robot.

00:09:44: But what the reinforcement system has to learn to do is to use both the images

00:09:49: of the robot in the world and the data from the robot to control its behavior.

00:09:55: And of course, something that's very interesting is when your simulation

00:09:59: and your real robot have different properties.

00:10:01: Sometimes it's called a sim to real gap because it's easier to do things in

00:10:05: simulation than reality, but on other occasions, we actually have more data

00:10:10: available in reality, because we can detect things that are actually

00:10:13: very, very hard to model in simulation and you're getting real live data from

00:10:17: the world, from the interaction.

00:10:19: How much real live data does your hand need to see, or not to see?

00:10:25: What do you call it?

00:10:27: Well, it's interesting if you apply,

00:10:29: we can go back to classical control approaches with this hardware.

00:10:32: So typically internally, when we organise a demo, we're using position

00:10:36: control or joint control, because we implement that on the robot for you.

00:10:42: We're not trying to do the high level learning systems,

00:10:45: but the people who are, they have all this extra data.

00:10:48: So they have things like fingertip contact data, video streams from the fingers.

00:10:53: There are IMUs, so you can measure vibration systems.

00:10:56: And then there are kind of the low level information you expect,

00:10:58: like the motor temperature and the motor current and things.

00:11:01: Lots and lots of data, so you can build,

00:11:04: you've got the ability to build a wealth of insight;

00:11:07: whether your algorithm ends up using it or not is up to you.

00:11:11: OK, OK.

00:11:12: Can you, because we already talked about your new approach,

00:11:16: could you please share some technical key facts?

00:11:19: Is it an update? Is it a total new design?

00:11:23: Rich, some updates?

00:11:25: We went right the way back to a clean slate for this, because we said to ourselves,

00:11:29: you know, we built anthropomorphic hands in the past, we've done lots and lots of those.

00:11:32: But actually maybe we don't care about anthropomorphism, we care about dexterity

00:11:37: and reliability, and that turned out to be a different kind of lens to look at the

00:11:42: problem for. So we were able to start and say,

00:11:45: all right, what would a reliable robot look like?

00:11:48: And where we ended up is this idea that you have a finger

00:11:50: and the finger has five motors in it and those five motors drive four joints.

00:11:55: That's a technique called N plus one actuation.

00:11:58: Sounds a bit weird. Why have the extra motor?

00:12:00: Turns out the extra motor means that you're never

00:12:03: going through a period of zero force on any joint.

00:12:06: So you never have backlash, which is really, really cute.

00:12:10: The movements become, the small, subtle movements become much easier to do and much more

00:12:15: flexible. So we have this modular finger design, we designed some new sensor control

00:12:20: architectures, some new buses, and we kind of built all that into this package. It's

00:12:25: bigger than our previous hardware. One finger weighs a little over a kilo,

00:12:29: okay, so it's really quite chunky. But it's very modular. So you can swap a finger in

00:12:33: and out without needing to change anything in your experimental setup.
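
The backlash argument can be made concrete with a toy calculation: with one more tendon than joints, there is a null-space direction in which all tendons can be co-tensioned without changing any joint torque, so no tendon ever has to pass through zero force. A minimal numerical sketch with made-up moment arms (not Shadow's actual controller):

```python
import numpy as np

# Toy N+1 actuation example: 4 joints driven by 5 tendons.
# A maps tendon tensions to joint torques; moment arms are invented,
# chosen so every row sums to zero (so A @ ones == 0).
A = np.array([
    [1.0, -1.0,  0.0,  0.0,  0.0],
    [0.5,  0.5, -1.0,  0.0,  0.0],
    [0.2,  0.2,  0.2, -0.6,  0.0],
    [0.1,  0.1,  0.1,  0.1, -0.4],
])
ones = np.ones(5)                       # co-tensioning direction: no net torque

tau = np.array([0.2, -0.1, 0.05, 0.3])  # desired joint torques (Nm)

f = np.linalg.pinv(A) @ tau             # minimum-norm tension solution
f_min = 1.0                             # keep every tendon at >= 1 N
f += max(0.0, f_min - f.min()) * ones   # shift along the null space

assert np.allclose(A @ f, tau) and f.min() >= f_min - 1e-9
print("tendon tensions:", np.round(f, 2))  # all positive: no slack, no backlash
```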

00:12:37: And how fast is the hand to open and to close?

00:12:42: It's embarrassingly fast. We have some video where we're just like, wait, what

00:12:45: happened? It's like, you can't quite see it moving. If you're running it in

00:12:48: torque control, it moves very, very fast. The joint control setting limits it to 180

00:12:53: degrees per second on any of the joints, which is an open and close in a

00:12:59: second without too many problems. Yeah.

00:13:01: And how many newtons can it handle, or how much force does it provide at the fingertip?

00:13:08: So we were looking at this and saying to ourselves, do we want to build a reliable

00:13:12: robot or a strong robot? And okay, we could build a very strong, very reliable

00:13:17: robot, but how big would that get? And there's a certain limit which is going,

00:13:20: okay, this is starting to get silly. So the spec that we came up with was to be

00:13:24: able to handle, I'm going to call them typical objects, a couple of kilos in the

00:13:28: fingers. So three finger robot will hold one or two kilos without any problems.

00:13:32: And across the working range, anywhere in the working range, it should be able to

00:13:37: exert eight newtons at the fingertip.
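
Those two numbers are roughly consistent; a back-of-the-envelope check (our arithmetic, ignoring friction and grasp geometry, which in practice demand some margin):

```latex
W = mg \approx 2\,\mathrm{kg} \times 9.81\,\mathrm{m/s^2} \approx 19.6\,\mathrm{N},
\qquad
\frac{W}{3\ \text{fingertips}} \approx 6.5\,\mathrm{N} < 8\,\mathrm{N}.
```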

00:13:39: Okay. But what is now your unique approach compared to other companies?

00:13:45: I think part of the, this is one of those projects where you go, well, actually,

00:13:51: there's a whole stack of things going on that reinforce each other in the design.

00:13:54: So we stuck to traditional actuation in terms of you just using top quality

00:13:59: electric motors because we knew that that would be a really good

00:14:02: approach. From Switzerland, it's always from Switzerland.

00:14:07: Absolutely. If you're buying motors and gearboxes and you're not buying from

00:14:10: Switzerland, please let me know.

00:14:11: Yes. But then in other places, we were able to go back and say, okay, what's the

00:14:18: right approach for doing this? And then often there wasn't a right approach.

00:14:22: So we said, what are the possible approaches? And then we build a test

00:14:25: program and work out what gave interesting information, what gave good data, what

00:14:29: gave good performance and go back to the researchers at Google DeepMind and say,

00:14:34: hey, we see that there are two or three ways we can do this. Which do you like?

00:14:39: And they would often say, we like all of them.

00:14:41: Okay. And then we'd have to go back and say, okay, how do we put all of them in

00:14:44: there? And that's why there are two completely different tactile sensing

00:14:48: technologies in the hand.

00:14:49: So I can order different tactile sensors?

00:14:52: We have different sensors in different regions.

00:14:55: So if you look at your fingers, you'll see in the middle and proximal

00:14:59: phalanges, you've got this kind of fleshy pad that comes into contact with

00:15:02: things when you grasp. And there we cover those with lots and lots of

00:15:07: three-axis hall effect sensors with magnets. So really nice sensing technology

00:15:11: gives you a really good understanding of contact and deformation and force.

00:15:16: In a compact form, it can be made over the surface of the finger.

00:15:20: But then at the fingertip, we have more space available.

00:15:23: So there we have a new sensor we developed, which is based on a stereo

00:15:28: camera pair pointing at a pattern of markers on the inside of the finger,

00:15:32: which is squishy. And as you squish it, you can see these markers move around.

00:15:37: And we found that it's so sensitive that we've yet to have anybody

00:15:41: manage to touch the fingertip without us being able to detect the contact.

00:15:45: But it's robust enough to go up to people pushing as hard as possible,

00:15:49: 70, 80, 90 newtons of load on the fingertip

00:15:51: without it being unduly saturated.

00:15:57: They're very, very responsive.
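
A minimal sketch of the marker-tracking idea, assuming two grayscale frames from a single fingertip camera; OpenCV's stock feature tracker stands in for the real sensor's calibrated stereo pipeline, and the contact threshold is an arbitrary illustration.

```python
import cv2
import numpy as np

def contact_from_markers(prev_frame, next_frame, threshold_px=0.5):
    """Guess whether the fingertip is in contact by how far its internal
    markers moved between two grayscale camera frames. Illustrative only:
    the real sensor fuses a calibrated stereo pair, not single-view flow."""
    # Find the marker dots in the reference frame.
    markers = cv2.goodFeaturesToTrack(prev_frame, maxCorners=200,
                                      qualityLevel=0.01, minDistance=5)
    if markers is None:
        return False, 0.0
    # Track them into the next frame with sparse optical flow.
    moved, status, _err = cv2.calcOpticalFlowPyrLK(prev_frame, next_frame,
                                                   markers, None)
    ok = status.ravel() == 1
    disp = np.linalg.norm((moved - markers)[ok].reshape(-1, 2), axis=1)
    mean_disp = float(disp.mean()) if disp.size else 0.0
    return mean_disp > threshold_px, mean_disp
```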

00:15:58: How do you achieve this technical accuracy

00:16:00: of your hand?

00:16:02: So in that case, it was by looking at our options and trying them and trying them.

00:16:07: And sometimes we said, actually, we're going slightly too far here.

00:16:10: So we had a previous iteration of that sensor, which was so sensitive.

00:16:14: It was clear that it was going to be really hard to process the data from it.

00:16:18: And then we backed away a little bit.

00:16:20: So we just have a high frequency video stream running down the hand to analyze.

00:16:25: But then in other areas, very obviously,

00:16:27: after a couple of iterations, it was clear that there was an approach that was

00:16:32: excellent and we didn't need to go beyond that.

00:16:36: We could see clearly that was the right approach.

00:16:38: It gave us the data that we needed.

00:16:39: It gave us the reliability that we needed.

00:16:42: And we knew how to build it, most importantly.

00:16:45: You mentioned DeepMind as your partner, as your customer, maybe.

00:16:49: That's fine.

00:16:51: That's a research company, a huge one.

00:16:54: But Rich, how do you scale now your robotic hand?

00:16:59: Well, the nice thing about a deep research organization is that they

00:17:05: often buy a lot of hardware.

00:17:07: Yes. Yes.

00:17:07: So that's nice from a purely technical point of view

00:17:12: and a commercial point of view.

00:17:13: But what we're very interested in now is to say, well, actually, what we've done

00:17:16: is we've gone away and quietly over five years, we built a new category of robot

00:17:20: hardware. What can be done with this new category of robot hardware elsewhere?

00:17:25: What can you do when you don't mind if your robot hits things?

00:17:28: What can you do when you've got the very, very high levels of dexterity and

00:17:31: agility that a human hand has with reliability built into it?

00:17:36: And what can you do with all this data that we're generating?

00:17:39: So we're now going out and we're talking to the community and saying,

00:17:42: how can we take this technology and how can we apply it into different places,

00:17:46: different areas, different classical robotics challenges?

00:17:50: Where so far no one's really used robots because they weren't quite good enough.

00:17:53: Does this technology unlock good enough for you?

00:17:56: So how do you see the development of the humanoid systems?

00:18:00: Is that the new trend you want to follow?

00:18:02: There are lots of people doing humanoids.

00:18:05: Personally, we started out doing bipeds 25, 30 years ago.

00:18:09: And we stopped because we realized that bipeds were too far away.

00:18:12: Now the world has changed.

00:18:14: And, you know, I think somebody pointed out to me that to build a

00:18:17: humanoid from scratch takes 60 engineers a year.

00:18:20: And it's almost quantifiable at that level.

00:18:23: So we have some new technology.

00:18:26: Is it potentially interesting for humanoid?

00:18:27: I'm sure it is.

00:18:28: Is it interesting for anyone else in the robotics community?

00:18:31: Yes, I think so as well.

00:18:33: But you mentioned 150 sensors.

00:18:36: That sounds very, very expensive for, I would say, common robotics applications.

00:18:42: It's true that we have built, we've done what Shadow always likes to do,

00:18:46: which is to build a thing absolutely at the top of the mountain.

00:18:49: Yeah.

00:18:49: That's kind of where we always like to start.

00:18:51: Could, are there simpler things that could be built?

00:18:54: Oh, I'm sure there are.

00:18:55: Are there ways to adapt this technology to be more general purpose?

00:18:58: I'm sure there are.

00:19:00: What we found very interesting was when we ran the hardware through our pricing model,

00:19:04: it actually came out cheaper than our old hand.

00:19:06: OK.

00:19:07: OK.

00:19:08: It's just a surprise.

00:19:09: How do you do that?

00:19:10: It's just, it's to do with the fact that the engineering was designed

00:19:13: from the ground up for repair.

00:19:15: OK.

00:19:15: And that turns out that repair and manufacturing in the first place

00:19:19: are remarkably similar places.

00:19:21: So something that's very fast to repair is something that's easy to strip down

00:19:24: and put back together again.

00:19:26: And that means it's pretty easy to put together in the first place.

00:19:29: So do you talk to the gripper providers like OnRobot,

00:19:34: Schunk or Schmalz, or are they competitors and you don't want to talk to them?

00:19:39: Historically, we haven't because we've just been operating in somewhere

00:19:43: that's a very, very distinct location.

00:19:45: But now we are talking to more people who say, well, you know,

00:19:48: we have these products that we currently use.

00:19:50: How does your new system fit into that landscape?

00:19:53: And we're very interested to understand, you know, where we can contribute

00:19:57: and where some of the technologies that we've developed may be able to make these

00:20:00: like other types of hardware more versatile and more useful.

00:20:04: I want to come back to NVIDIA from the beginning, because the CEO says

00:20:08: without AI in the design phase they won't get any further.

00:20:14: What about you and your hand?

00:20:15: It's interesting that we didn't really use any AI in the design process.

00:20:22: But you collected data during the testing, right?

00:20:24: Well, because what we were able to do was to essentially

00:20:27: take some very bright people and lock them in a room and say,

00:20:30: don't come out until you've got something that works.

00:20:32: That if you can afford that approach, it's a great one to take.

00:20:36: In terms of future designs and future systems,

00:20:39: I think one of the nice things is that we now know that we can use

00:20:42: some of the tools from the AI toolbox to challenge our designs

00:20:46: and to give us kind of things to explore for the next steps.

00:20:49: Yeah. But you are using simulation tools, right?

00:20:53: You will use AI based simulation tools.

00:20:55: We use the simulation tools, but weirdly,

00:20:58: they tend not to be AI-based.

00:21:00: Something like MuJoCo or NVIDIA's Isaac tool is a great simulator.

00:21:05: They're very powerful and they basically work of classical physics

00:21:09: and just really good modeling of the classical physics,

00:21:11: which is why they're interesting for AI,

00:21:14: because they allow you to verify what the AI produces by testing it

00:21:18: in a real simulator, a real physics model.

00:21:20: What is now possible with your hand?

00:21:24: What are the plans of DeepMind?

00:21:26: What they want to do, they want to do reinforcement learning with the hand?

00:21:29: What else is possible with the hand?

00:21:32: I would love to know what they want to do.

00:21:35: Honestly, we've been through this really great relationship

00:21:38: where we built stuff for them and they went away and they came back

00:21:40: and they said, "That won't work."

00:21:42: Okay.

00:21:42: We said, "Why not?"

00:21:44: And they said, "Because."

00:21:45: Okay.

00:21:45: And we're like, "Okay, fine. That's fair enough."

00:21:47: They very much, they know what they want to do

00:21:49: and they have a strong focus on their particular innovations.

00:21:53: And we're quite happy to sit here and say,

00:21:56: "Okay, we're the hardware guys here.

00:21:57: We don't need to actually understand the rigors of your learning system's

00:22:01: algorithms, because what we need to understand is the rigors of your lab,

00:22:04: where you beat the robot up and we make it survive that damage."

00:22:10: I want to come back to when you said you want to also tackle

00:22:13: different robotics applications.

00:22:15: How do I integrate the hand into my robotics system?

00:22:19: Is it an API or is it a ROS-based operating system?

00:22:24: What is your idea?

00:22:25: At the moment, it's all in ROS-1.

00:22:27: Okay.

00:22:28: We have a nice open framework where you have joint position control,

00:22:32: joint torque control, tendon force control available.

00:22:35: You have all the sensor data published in a nice big ROS topic,

00:22:38: lots of ability using dynamic reconfigure to change the configuration of the system.

00:22:43: But it is an open API in ROS, in fact, it's an open API in ROS-1.
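
In practice that means ordinary rospy code; a minimal sketch with placeholder topic names (the hand's real topic layout and message types may differ):

```python
#!/usr/bin/env python
# Minimal ROS-1 sketch: read sensor data, send a joint position command.
# Topic names here are placeholders, not Shadow's published interface.
import rospy
from sensor_msgs.msg import JointState
from std_msgs.msg import Float64

def on_joints(msg):
    rospy.loginfo("joint %s at %.3f rad", msg.name[0], msg.position[0])

rospy.init_node("hand_demo")
rospy.Subscriber("/hand/joint_states", JointState, on_joints)
cmd = rospy.Publisher("/hand/finger_1/joint_1/position_cmd",
                      Float64, queue_size=1)

rate = rospy.Rate(50)             # a 50 Hz command loop
while not rospy.is_shutdown():
    cmd.publish(Float64(0.5))     # hold one joint at 0.5 rad
    rate.sleep()
```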

00:22:47: Okay, fine.

00:22:49: Throughout the entire project, we've had "move to ROS-2"

00:22:52: sort of in the middle of the to-do list.

00:22:55: And it's now the end of the project and it's still in the middle of the to-do list.

00:22:57: Okay.

00:22:58: It's not been that critical to do it.

00:23:02: You're a little bit late to the party, right?

00:23:04: Well, we have reliable stable running systems

00:23:06: that would run for thousands of hours on ROS-1.

00:23:09: If we change it, we have to re-qualify.

00:23:11: So that's part of the reason for sticking there.

00:23:14: Okay.

00:23:15: So what are your next plans with your company?

00:23:18: Well, we're very excited to take this hardware out and say to people,

00:23:22: "What can be done with this? Where can it go? What can we do?"

00:23:24: Where can I buy it?

00:23:26: From us, from our website.

00:23:28: Drop us an email.

00:23:29: We're in production now.

00:23:31: That's a really nice thing about this.

00:23:32: We've actually transferred the production across to our Madrid team.

00:23:36: So you can, if you feel the urge, just drive down to Madrid and collect what you want.

00:23:40: Oh, that's a European...

00:23:41: Yes.

00:23:42: We hedge our bets there.

00:23:45: We hedge our bets there.

00:23:46: And you produce in Spain or where do you produce it?

00:23:48: Yeah, we produce it in Spain.

00:23:50: Obviously, subcontract manufacturing happens all over the planet,

00:23:52: but the final development, assembly and test is done in Spain at the Madrid office.

00:23:59: Rich, we wish you all the best with your new hand, with a new approach,

00:24:04: with your new key customer, DeepMind.

00:24:06: We learned that they order a lot of your robots.

00:24:10: More than most people do.

00:24:11: More than most people.

00:24:12: It was a pleasure. Thank you very much and all the best.

00:24:15: Thank you, Robert. Pleasure.

00:24:17: Robotik in der Industrie.

00:24:22: The podcast with Helmut Schmidt and Robert Weber.

00:24:26: www.bergevlog.com
