Week 49: Robotik-Häppchen

Show notes

Our news bite for your commute to work. Every Monday we bring you the robotics news, co-researched, written, and spoken with and by an AI. If you would also like to be featured in the robotics news, send us an email.


00:00:00: Robotik in der Industrie, the podcast with Helmut Schmidt and Robert Weber.

00:00:09: Good morning, dear worries! Of course you have no worries, because this is the Robotik-Häppchen

00:00:17: at 6 a.m. on a Monday morning.

00:00:20: My name is Robert Weber, and we are looking for a new partner for the Robotik-Häppchen.

00:00:26: So if you feel like becoming a partner, please get in touch with Helmut or

00:00:30: me. And now, let's get started.

00:00:33: Hello, dear listeners, this is the Robotik-Häppchen, and we are going on

00:00:40: vacation.

00:00:41: We will be back with news in mid-January.

00:00:43: But there are still regular robotics episodes to come; we have an annual review that

00:00:49: you can look forward to.

00:00:50: And for next year we are also still looking for a new partner for our Häppchen.

00:00:54: Today we have one more bite for you from the Industrial AI Podcast, on the topic of safety

00:01:00: and AI.

00:01:01: A very cool approach.

00:01:02: That's it from the AI. Have a good time.

00:01:05: And now, here we go.

00:01:08: Hi there, welcome to a new episode of the Industrial AI Podcast.

00:01:13: My name is Peter Seeberg and I'm your host.

00:01:17: Today we'll be talking to Stefan Milz.

00:01:20: He's the founder, managing director and also the head of R&D at Spleenlab.

00:01:26: And Stefan and I are going to be talking about safe AI and also, towards the end, their first

00:01:34: endeavor with XLSTM.

00:01:36: Hi Stefan.

00:01:37: Hi Peter, thanks for having me.

00:01:39: You're welcome.

00:01:40: To start, please introduce yourself to our listeners.

00:01:43: I keep it short, so I'm Stefan Milz.

00:01:46: I have a technical background.

00:01:48: I'm the CEO of Spleenlab and also the founder, with a technical background in physics.

00:01:53: So I have a PhD from the Technical University of Munich.

00:01:57: And then I worked several years in the automotive industry in highly automated driving, first for

00:02:03: Continental,

00:02:05: where they were working on the first NVIDIA series production project, zFAS, and then for Valeo

00:02:11: on highly automated driving.

00:02:13: During that time, I had the idea to introduce something new, on-chip AI with a focus on functional

00:02:21: safety, and so I founded Spleenlab.

00:02:24: And this is the reason why we are sitting now here.

00:02:26: That's right.

00:02:27: On-chip AI, I guess we're going to be talking about that.

00:02:30: So, to start, what does Spleenlab stand for?

00:02:35: You mean the name itself?

00:02:37: Yes, exactly.

00:02:38: Okay, that's a funny story.

00:02:39: I believe so.

00:02:40: Yeah, I thought about it this morning when I looked it up.

00:02:43: It's a very funny story.

00:02:45: And originally I started the company in 2015 as a pure side project, right?

00:02:52: And Spleen is a translation of my surname, Milz.

00:02:56: So it only makes sense if you understand the German language.

00:03:00: If you know German, right?

00:03:03: And back then we needed a name quickly, right?

00:03:08: And then when we really started with Spleenlab, we wanted to have a different name and then

00:03:13: we thought, okay, this is so unique,

00:03:15: you won't find it anywhere else.

00:03:17: Let's keep it, because there is no clash with other companies.

00:03:22: And then we left it at that.

00:03:23: So literally my surname.

00:03:25: Right, your surname.

00:03:30: Also, we have not excluded that for the future, right?

00:03:41: As safe AI is a really important topic also in medical, but it's not a focus at the moment.

00:03:46: Oh, got it.

00:03:47: Maybe we come to that.

00:03:48: I mean, I've been saying time and again that everything is growing together.

00:03:52: And that's an example of it.

00:03:54: So when we talk about, and that's what we are, the Industrial AI Podcast,

00:03:59: we do talk about industrial topics.

00:04:01: Today we maybe even talk a little bit outside of that.

00:04:04: You already mentioned autonomy here.

00:04:07: We're going to be talking about your VISIONAIRY approach.

00:04:11: And at the same time, I would do the same thing:

00:04:13: I would talk about how a physician, a medical doctor, may be using very similar technology.

00:04:20: So we will be talking about safe AI.

00:04:24: Let's start with a quick definition of what is safety maybe as opposed to security.

00:04:31: Exactly.

00:04:32: So this is also a funny side note: in Germany, or in German, we have safety as Sicherheit

00:04:42: and security also as Sicherheit.

00:04:45: So this question is quite good.

00:04:47: Right.

00:04:48: And when we talk about safe in the context of what we are doing at Spleenlab, then it's

00:04:52: meant to be functional safety.

00:04:54: So functional safety means: if we automate processes with heavy machines or with sensitive

00:05:01: processes, and all the time a human is in the loop, or there is a probability a human could

00:05:08: get hurt, then we talk about functional safety.

00:05:12: And this is what we are working on.

00:05:15: There are real regulations in that field.

00:05:18: They have been there for many, many years.

00:05:21: So in ADAS we have ISO 26262, and now for the software we have SOTIF.

00:05:28: But there are similar things in aerospace, like DO-178.

00:05:33: These are real process definitions that define how to develop a software or a system that

00:05:40: needs to be safe in the context that it interacts with a human and no one gets hurt.

00:05:46: What I recall from when I was more strongly involved is the saying that safety is making sure that

00:05:57: a machine does not hurt a human, and security is more about a human maybe doing something

00:06:04: bad to a machine, or to other humans, I guess.

00:06:08: Yeah, exactly.

00:06:09: And security, in the context of software, really means what we classically know as

00:06:14: cyber security.

00:06:16: So this is the typical differentiation at the moment.

00:06:20: So what does Spleenlab do then?

00:06:24: Is safe AI your central thing, or is it one of many things?

00:06:28: What is your top approach?

00:06:31: So safe AI is a very important thing we are working on in the context of AI.

00:06:37: We all know that it's a really unsolved topic and there are many buzzwords out there like

00:06:42: trustworthy AI, debugging AI, whatever.

00:06:47: But this is a really important pillar because we develop software for autonomous machines

00:06:54: like autonomous drones.

00:06:56: For example, we work together with DroneUp, a big company in the US that does delivery,

00:07:04: last-mile delivery of packages for Walmart.

00:07:07: We do precision landing and their goal is to remove an operator in that automated process.

00:07:13: So they need to fulfill specific requirements regarding safety when it automatically lands

00:07:18: or automatically drops a package over some area.

00:07:23: And here our product comes into the game.

00:07:27: So we have a redundant approach to deploy robotics software always with a part of AI.

00:07:35: Sometimes 20% AI, sometimes 40% AI, but at the end it's a robotics application that fulfills

00:07:42: a partially critical task and needs to fulfill some requirements regarding safety.

00:07:49: So that's why I would say it's a very important pillar, what we have here, because our business

00:07:54: at Spleenlab can only scale if we solve it.

00:07:58: Our approach is to do it iteratively.

00:08:01: So we don't want to solve the full autonomy yet.

00:08:05: So we say okay let's break it down in some partially automated tasks and take the big challenge

00:08:12: iteratively.

00:08:13: And this is our big vision, and we're working on more; we're working on full safe AI algorithms.

00:08:19: We work on the full AI deployment which needs to be safe.

00:08:22: So there comes everything into the game.

00:08:24: So the most important thing is we're working on an embedded solution, but with a very, very

00:08:29: strong focus on the current safety regulations.

00:08:32: Right, we'll get into a little bit more detail just in a moment.

00:08:37: Maybe first tell us where, as in what markets, your customers are.

00:08:44: You just mentioned Walmart, you could almost say delivery logistics.

00:08:49: Maybe also in industry, but not necessarily the majority.

00:08:54: So where are your current solutions, your current customers?

00:08:57: Yeah, this is a very good question.

00:08:59: So this also aligns with our vision.

00:09:01: So we really deploy across industries.

00:09:05: So it could be drones as one vertical, industrial drones.

00:09:09: It's also logistics, last-mile logistics as well as long-haul logistics with trucking.

00:09:16: We are also in the field of automated agriculture as we talk about automated tractors.

00:09:22: Also we are working in ADAS on the streets.

00:09:25: So we see ourselves here as complementary, but there are also applications in the defense pillar.

00:09:32: So this is our vision.

00:09:33: So we can say that we are something like Mobileye in the drone industry.

00:09:38: We want to become that off-road.

00:09:40: And we are also complementary in the ADAS street domain.

00:09:45: So these are our markets.

00:09:48: And everywhere here we have very well-known paying customers.

00:09:53: Which maybe we come to in a moment.

00:09:55: So let's get into maybe a little bit more detail of your offer, which I understand,

00:10:01: or one of your offers, is called VISIONAIRY, VISIONAIRY Safe AI.

00:10:10: Exactly.

00:10:11: So the big challenge, when it comes to autonomous robots, is that every robot has a different

00:10:17: design.

00:10:18: And every robot has a different sensor set.

00:10:21: And our approach is to keep it as generic as possible, to support as many sensors as

00:10:27: possible.

00:10:28: So we have solutions on industrial RGB cameras, but also on IR and radar.

00:10:35: And then we have different solutions in the different verticals like agriculture, drones,

00:10:43: ADAS.

00:10:44: And we deliver to many, let's say, niche markets.

00:10:47: And our offering is to deploy all this software stack to different chips.

00:10:53: So there's NVIDIA, Qualcomm, TI.

00:10:56: So there's a very heterogeneous target market of robotics, but we serve them all with our

00:11:01: solutions.

00:11:02: And when it comes to the offering, then we can tell, okay, we can provide you an RGB-based

00:11:08: SLAM.

00:11:09: We can provide you an RGB-based visual navigation or a LiDAR-based sensor fusion.

00:11:15: This is exactly the offering.

00:11:17: We offer a spatial AI software stack for autonomy: VISIONAIRY Sensor Fusion, VISIONAIRY Object

00:11:24: Detection, VISIONAIRY Free Space and VISIONAIRY SLAM, which then ends in real functionalities.

00:11:30: Let's imagine a drone does collision avoidance, or a drone does GPS-denied operation.

00:11:37: The parking car has the capability to predict free space on a very low-cost sensor, like a

00:11:44: low-cost camera.

00:11:45: Or we do sensor fusion for L3 autonomous trucking.

00:11:51: So this is our offer.

00:11:52: And where we're really strong is that we are focused heavily on the embedded part.

00:11:59: So we have a high-performance team.

00:12:02: And that's why we can serve many different chips, like what I told before.

00:12:07: And I guess this is a little bit the strong USP, that we are very fast in the integration

00:12:12: part.

00:12:13: Okay, yeah, we come to that a bit later.

00:12:16: Now, I understand that working with artificial intelligence, specifically neural networks,

00:12:23: is difficult here because of their statistical properties. And I read about it again this morning, in a huge

00:12:30: discussion on LinkedIn:

00:12:32: is your machine learning model,

00:12:35: your large language model, whatever element of the AI world we choose,

00:12:41: is it really statistical,

00:12:43: or is it maybe deterministic?

00:12:44: I understand that, because it at least typically has this statistical property,

00:12:51: the neural-network-based approach has not yet been approved for safety requirements.

00:13:02: How do you deal with that?

00:13:04: That's a very good question.

00:13:06: This is exactly why we started Spleenlab.

00:13:09: To explain this a little more: you introduced it quite right.

00:13:16: So classic algorithms that are in current automated driving systems, for example, or, for example,

00:13:26: in approved industrial robots, are highly deterministic.

00:13:31: And the deterministic nature of their system design helps to get approval towards what we call a

00:13:38: safety integrity level.

00:13:41: And this is also the pure idea behind functional safety: that you literally design a system

00:13:48: such that, for any possible situation that could happen,

00:13:55: you can deterministically describe what it does.

00:13:58: Very quickly, deterministic, same set of inputs always gives the same output.

00:14:04: Is that the right definition here?

00:14:06: Exactly.

00:14:07: You describe what happens, right?

00:14:09: And with a specific probability, but you know in which end state, let's say, you will land,

00:14:17: and then something needs to be triggered.

00:14:19: A safety function needs to be triggered, for example.

00:14:23: And this is the nature of how the current regulations are designed, like what I told you before,

00:14:28: right?

00:14:29: ISO 26262, or in industry, IEC 61508.

00:14:37: They really want some deterministic tree, where all probable states are defined and

00:14:43: everything that could happen with the system is described in the safety mitigation.
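
[Editor's note: to make the contrast concrete, here is a minimal Python sketch of the kind of exhaustive state-to-mitigation mapping such regulations expect. The states, thresholds, and reactions are hypothetical illustrations, not Spleenlab's implementation.]

```python
from enum import Enum, auto

class SystemState(Enum):
    NOMINAL = auto()
    DEGRADED = auto()   # e.g. one sensor dropped out
    CRITICAL = auto()   # e.g. obstacle inside the safety zone

# Exhaustive mapping: every state has exactly one defined reaction,
# so the same input always yields the same mitigation.
MITIGATION = {
    SystemState.NOMINAL: "continue_mission",
    SystemState.DEGRADED: "reduce_speed",
    SystemState.CRITICAL: "trigger_emergency_stop",
}

def classify(sensor_ok: bool, obstacle_distance_m: float) -> SystemState:
    """Deterministic classification; thresholds are hypothetical."""
    if obstacle_distance_m < 2.0:
        return SystemState.CRITICAL
    if not sensor_ok:
        return SystemState.DEGRADED
    return SystemState.NOMINAL

# Same inputs always give the same output and the same mitigation.
assert MITIGATION[classify(True, 1.0)] == "trigger_emergency_stop"
```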

00:14:48: And if it comes now to a neural network, you typically have so many parameters.

00:14:53: You all know these large language models, right?

00:14:56: Even with the 70 billion parameters of a medium-sized large language model, it is almost impossible to

00:15:04: tell in terms of safety mitigation what could happen if you try out all permutations.

00:15:10: So at the end, it's a statistical model.

00:15:12: You're fully right.

00:15:14: And then you need to design something that we call safety by design.

00:15:18: You need to design watchdogs.

00:15:20: What's typically done is that you have different hardware, where a deterministic algorithm runs

00:15:26: in parallel to the AI and assesses the AI at the end of the day.

00:15:33: So these are typical safety patterns.

00:15:36: And only if both are really in the same area with their output is the output validated.
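
[Editor's note: a minimal sketch of the watchdog pattern described above, assuming a hypothetical distance-estimation task. The deterministic reference path, the dummy network, and the tolerance are all illustrative, not Spleenlab's code.]

```python
import numpy as np

def deterministic_distance(scan: np.ndarray) -> float:
    """Classical reference path: nearest return in a range scan."""
    return float(np.min(scan))

def nn_distance(scan: np.ndarray) -> float:
    """Stand-in for a neural network's distance prediction."""
    return float(np.min(scan) * 1.02)  # dummy model output

def validated_distance(scan: np.ndarray, tolerance_m: float = 0.5):
    """Accept the NN output only if the deterministic path agrees."""
    nn_out = nn_distance(scan)
    reference = deterministic_distance(scan)
    if abs(nn_out - reference) <= tolerance_m:
        return nn_out   # outputs agree: validated
    return None         # disagreement: reject, fall back to a safe state

scan = np.array([4.2, 3.1, 7.8])
assert validated_distance(scan) is not None
```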

00:15:45: And what we do is very similar.

00:15:47: So we do safety by design, but we design software functions that assess the output of a neural

00:15:54: network.

00:15:55: And you may have seen this depth estimation post I have done, also with the use of XLSTM.

00:16:02: It's a good example to explain how we do it.

00:16:07: So we include temporal consistency.

00:16:10: We designed, and also patented,

00:16:13: an assessment function that deterministically describes the confidence of the output,

00:16:20: this depth map, which comes out of a neural network.

00:16:24: But this confidence does not come out of the neural network itself.

00:16:28: It's a parallel path, and it's done with temporal consistency.

00:16:33: So let me explain it a little more.

00:16:35: So right.

00:16:36: It sounds like you made a step going from probabilistic towards deterministic.

00:16:43: Yeah, it's kind of a mixed thing.

00:16:46: Because we take over the approach of self-supervised learning, where you have a loss function during

00:16:50: training which needs no human input.

00:16:53: You transfer this idea to the runtime, what we call inference.

00:16:58: And then you use this deterministic loss function as deterministic safety watchdog.

00:17:04: And this works out quite well.

00:17:06: We have shown it for depth estimation and free space, for example, for a partially automated

00:17:10: driving function.

00:17:11: And we are very sure to achieve an SIL B, what we call safety integrity level B.

00:17:17: And this is a very important milestone.
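
[Editor's note: the patented assessment function is not public. The following is only a toy sketch of the general idea, a deterministic temporal-consistency check running in parallel to a depth network, with hypothetical shapes and thresholds.]

```python
import numpy as np

def temporal_consistency_error(depth_prev: np.ndarray,
                               depth_curr: np.ndarray) -> float:
    """Mean absolute change between consecutive depth maps.
    A real system would first warp depth_prev into the current frame
    using ego-motion; this sketch assumes near-static motion."""
    return float(np.mean(np.abs(depth_curr - depth_prev)))

def assess(depth_prev, depth_curr, max_error_m: float = 0.25) -> dict:
    """Runtime watchdog: the confidence comes from a deterministic,
    parallel consistency check, not from the network itself."""
    error = temporal_consistency_error(depth_prev, depth_curr)
    return {"consistent": error <= max_error_m, "error_m": error}

d_prev = np.full((4, 4), 10.0)   # depth map at t-1 (meters)
d_curr = d_prev + 0.1            # depth map at t
assert assess(d_prev, d_curr)["consistent"]
```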

00:17:20: Right, because that was then, of course, going to be my question.

00:17:25: So you're very close to receiving, and you have good hopes that you will get, an approval.

00:17:31: That is, an approval under one of those specifications?

00:17:37: So basically, a neural-network-based system will get a safety approval?

00:17:43: Exactly.

00:17:44: But really in a specific context, right?

00:17:47: I understand that.

00:17:49: And with your patented technology, you can use that technology in different other environments as well,

00:17:55: I assume. Exactly. That's a big thing then, actually, because of the way we started discussing this,

00:18:02: and I think you're using the use case of autonomous systems in this case, but it sounds like you have kind

00:18:11: of solved what I guess has been an issue, a problem, for a long, long time: that, as you said,

00:18:20: the regulations have always been written on the basis of whatever, 50 years of deterministic

00:18:26: programming, and it sounds like you have found a solution towards getting approvals even

00:18:33: on the basis of probabilistic neural networks. Yeah, but I wouldn't call it that; it's not,

00:18:40: from my point of view, it's not rocket science. It's really understanding what we are doing there.

00:18:47: I know it's a big thing, but this is maybe something you asked in the beginning, what we stand

00:18:52: for. And I believe we're very strong in embedded engineering, very strong at the algorithm

00:18:57: design, but we also have knowledge about the current safety regulations. And this triangle is

00:19:03: very important if you design a system. And there's another important thing we faced, right,

00:19:08: when it comes to AI on chip, you need to somehow execute the AI. And this execution of the AI,

00:19:17: for example, could be an ONNX model or something; this execution, this software piece, is also not

00:19:25: certified yet. And this is also something we are working on, because if we solve the overall

00:19:30: thing, then we also need to have a safe execution on the chip. I will come back to this, I will come

00:19:35: back to the SLAM you mentioned, and I will come back to XLSTM in 10, 15 minutes, if that's okay with you.

00:19:42: Before that, I do want to look into one or two other use cases. We talked, you know,

00:19:51: specifically about autonomy here; I understand there is this in-flight use case.

00:19:57: And maybe then let's move to SLAM, because SLAM is a technology, or an approach, that I know

00:20:04: typically from robotics, but you've been using it, or one of your customers has been using it, for an

00:20:11: in-flight use case. Can you talk about that a bit? Exactly. So yeah, when it comes to drone

00:20:16: operations, automated missions, for example for delivery, but also surveillance drones,

00:20:23: right, we have a big customer, Quantum Systems, or rather a big partner. For the in-flight use case,

00:20:30: this SLAM solution is something we call visual odometry, or, if you also map everything,

00:20:37: it's a SLAM. And at the moment, the system only relies on GNSS signals and the IMU. And this makes

00:20:46: the system dependent on outer conditions, on an outer signal, or let's say on an active signal,

00:20:53: and to make the operation more safe, and then we are back in the safety case, right, we need one

00:20:58: more input. And that's where we come into the game. We provide SLAM as a functionality that can

00:21:05: provide a geo-referenced position. And all together this makes the overall flight of an automated drone

00:21:11: much more stable, much more precise in its mission planning, because it's a very, very stable

00:21:17: geolocalization. And SLAM is helping the vehicle, maybe the drone in this case, to know where it is,

00:21:27: or to first map exactly an environment of what is around it and of where it itself is in it,

00:21:35: is that right? That's an exact definition. So typically, in the robotics field, we differentiate between perception,

00:21:41: what is around me, and localization, where I am. And SLAM directly points

00:21:49: to the second one. But as you said, fully correct, you use signals, like a pure camera signal,

00:21:57: and you try to create a 3D understanding of the surroundings from the video stream. So, let's say, a

00:22:05: temporal approach. And based on finding good features, you design a 3D map. This map could be sparse

00:22:15: or could be very dense; there are different AI approaches out there. And based on this, you can

00:22:20: estimate where you are. And then you have a kind of understanding of where you are in your map. And if

00:22:27: you match the measured map you get out of your SLAM algorithm with a global map, then you

00:22:35: can get a position of your robot in global coordinates. So this is at the moment a very

00:22:42: important approach, because it enhances safety and it also improves the accuracy.
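
[Editor's note: a minimal sketch of the feature-matching front-end of such a visual odometry/SLAM pipeline, using OpenCV's ORB features. Parameters are illustrative; a production system adds outlier rejection, pose estimation, and mapping.]

```python
import cv2
import numpy as np

def match_features(img_prev: np.ndarray, img_curr: np.ndarray):
    """Detect ORB features in two consecutive grayscale frames and
    match them, the first step of a feature-based VO/SLAM pipeline."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)
    if des1 is None or des2 is None:
        return [], kp1, kp2
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return matches, kp1, kp2

# From the matched 2D points one would estimate the relative camera
# motion (e.g. cv2.findEssentialMat + cv2.recoverPose), triangulate a
# sparse 3D map, and match that local map against a geo-referenced map
# to obtain a global position without any GNSS signal.
```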

00:22:48: One customer maybe you can share with us, I understand, is Deutsche Bahn, the German railway.

00:22:54: What are they using this approach for? At the moment, they are going into surveillance, right?

00:23:01: They want to see something on their rails, right, to see if there could be a tree or something. And

00:23:09: therefore, they need an operating system that is very precise in its fully automated

00:23:16: perception approach. Therefore, in short, they use drones following the rails,

00:23:21: and they need to be precise. And there are many, for example, many, how could we call it,

00:23:27: signals that are toxic for a pure GNSS approach. And this makes it more safe and more precise.

00:23:35: Okay, yeah, I actually live close to a railway track. And every day I actually pass it a

00:23:41: couple of times. And I've always been on the lookout for one of the drones of Deutsche

00:23:47: Bahn. So I'm looking forward to seeing them one of these days. You already mentioned

00:23:54: XLSTM. So you had a first endeavor, I think your colleague, maybe yourself, with XLSTM, and how to apply it

00:24:02: in an industrial setting. I was going to ask you about it also for the SLAM, because you have been using the word

00:24:08: robot a couple of times, maybe in combination. But first: why did you get

00:24:13: into XLSTM, and what did you get out of your deep dive? Yeah, I guess it's, again, a great question.

00:24:22: So also, if you take into account what we discussed before, as I told you, it's all about

00:24:28: temporal consistency, how we treat the safety, right? And we have seen that transformers, and for us

00:24:36: especially vision transformers, outperform CNNs, and the results have been really great in recent years.

00:24:44: But a little drawback is how the transformers are designed. So they split all the input into

00:24:50: embeddings, and you either focus spatially on some image, where you have some image patches

00:24:57: you order and put into the transformer, or you have some temporal signal like a video stream.

00:25:04: But here it's only a fixed context, right? And it's very small. And the XLSTM has the capability to

00:25:12: have a much wider look into the past, which makes it, by design, very

00:25:18: relevant for the approaches I told you before. So processing of video signals over time with

00:25:24: the scope of functional safety. And this was our main idea to use it, because we think these

00:25:29: temporal consistency approaches match very well with the nature of XLSTM. And that's why we

00:25:35: moved into it. And we see lots of challenges are still out there. But what we found is that, yeah,

00:25:41: at first glimpse, it's on par, or at least in the same range as, vision transformers in

00:25:48: our first trainings, for example for the depth estimation. But we also want to check it out

00:25:52: for free space and others. So there's still work to do. But the main reason is really the

00:25:57: temporal consistency, because, as I explained, this is the nature of our safety approach.

00:26:03: And this matches very well with the design of XLSTM.
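
[Editor's note: a toy numerical contrast between a fixed context window and a recurrent running state. This is not the actual XLSTM formulation, only an illustration of the "wider look into the past".]

```python
import numpy as np

rng = np.random.default_rng(0)
stream = rng.normal(size=500)   # stand-in for per-frame feature values

# Transformer-style fixed context: only the last W frames are visible.
W = 16
windowed = [float(np.mean(stream[max(0, t - W + 1):t + 1]))
            for t in range(len(stream))]

# Recurrent (LSTM/XLSTM-style) state: a running summary of the whole
# past, updated in O(1) per frame, so the effective look-back is
# unbounded rather than clipped at W.
state, decay, recurrent = 0.0, 0.99, []
for x in stream:
    state = decay * state + (1.0 - decay) * x
    recurrent.append(state)

# recurrent[-1] still reflects frames far outside the last 16, which is
# what makes long-horizon temporal-consistency checks cheap.
```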

00:26:07: Okay, sounds good. And in addition, I understand XLSTM may be more suited for your embedded

00:26:14: deployment as well, and actually faster in inference time and also training time.

00:26:21: Yeah, it's more suited. It's more suited. There's a very, very strong advantage, and that's the

00:26:27: processing of dynamically sized inputs. So if we come back to the SLAM, for example: imagine

00:26:33: you're in an environment with a low number of features, right? Features you can

00:26:38: match with the next frame. Or you're in an environment with many, many features.

00:26:43: Then you have a different size of input into a specific neural network based on some backbone.

00:26:50: And here it could suit very well, because it's by design capable of processing dynamic input sizes.
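
[Editor's note: a toy sketch of the dynamic-input-size point, padding frames to a fixed feature count versus folding a variable number of features into a constant-size recurrent state. All shapes and constants are illustrative.]

```python
import numpy as np

# Consecutive frames with varying numbers of matched features.
rng = np.random.default_rng(1)
frames = [rng.normal(size=(n, 8)) for n in (12, 5, 30, 9)]

# Fixed-input-size model: every frame must be padded or cropped to N
# features, which is wasteful or lossy when the count varies a lot.
N = 16
padded = [np.pad(f, ((0, max(0, N - len(f))), (0, 0)))[:N] for f in frames]

# Recurrence-style processing: fold each frame into a fixed-size state
# no matter how many features it contains.
state = np.zeros(8)
for f in frames:
    state = 0.9 * state + 0.1 * f.mean(axis=0)
```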

00:26:57: Again, when it comes to different resolutions and all these things, there it really makes

00:27:02: sense, because on a transformer you only have a fixed size of inputs; you do workarounds, and

00:27:07: this works quite well, but here it's very, very well suited. And on top, that's what the

00:27:12: paper proposed: it could be more efficient in the onboard execution, because transformers are

00:27:19: expensive, expensive in the sense of the cost we need to execute them. And this is at the moment shown in

00:27:27: the papers. At the moment, it's not yet true on the device, because there is still work to do to

00:27:32: make the XLSTM more efficient in execution, I mean the implementation, right? Because there's

00:27:39: lots of work done on the transformer side, but per its nature it's more efficient. And then it could

00:27:45: run faster, and then we have more safety; that makes it very well suited. Okay, one final question

00:27:52: towards us being the Industrial AI Podcast. As we talked about SLAM as you've been using it,

00:28:00: and you've been using the term robotics a couple of times, maybe in a different sense, but sure,

00:28:05: I mean, talking about robotics in an industrial environment: could your approach be used in such an

00:28:14: industrial robotics environment as well? Yeah, it could fully be used. At the moment,

00:28:22: we don't have so many customers in that field, but it's for sure an idea; we want to go into

00:28:28: warehouses as well. We already had projects there, together with drones, to be honest, but really on fixed

00:28:34: robots it could also be used. So our approach is agnostic. So we are agnostic towards the sensor.

00:28:40: We are agnostic towards the deployment on the chip. So yes, it could be used. And the good thing:

00:28:46: the safety is also aligned with industrial safety standards, which, as I told you before, are very similar

00:28:53: to automotive safety or medical safety, or even aerospace safety. And that's the fun fact:

00:28:58: these mitigation strategies are very similar. Yeah, you explained this, that your patented

00:29:05: approach allows for moving from a probabilistic to a deterministic approach. Now, as you mentioned,

00:29:14: I think I've seen some videos somewhere of drones in warehouses.

00:29:22: What would it be? If at all, you say you're agnostic, which means in the end you don't really care,

00:29:29: "care" being a big word, but assume you're open for customers, also for our listeners, customers in a typical

00:29:35: industrial setting. What would you say? Because we have heard lately so many

00:29:42: times about the humanoids, and even humanoids in an industrial environment. And we've had this

00:29:47: discussion a couple of times. So I have a fixed robot, or I have a robot which is like a human,

00:29:53: or a robot which maybe I put on wheels and doesn't look really human anymore. But then

00:29:58: there is the option of the drone. Do you have any view on where you see

00:30:05: the market moving? Are drones going to be the thing, and because we have drones,

00:30:11: we don't need humanoids or fixed robots? It's a really tough question. So what I have seen

00:30:18: is that drones are always underestimated, simply for the reason that they could fall down.

00:30:23: And please trust me, this is really the thing. Everyone thinks a drone is very easy, but it is

00:30:29: not, to be honest. And in the industrial setting, you rely purely on SLAM, right, SLAM in the

00:30:36: sense of localization, because there is no geo-reference, no GNSS signal. So yeah, I

00:30:44: believe there's still some work to do with drones in the industrial environment. Some drones are

00:30:50: wired drones, and they are combined with a ground robot. And this was also something we did in the

00:30:56: past. And I believe some teaming in that sense makes sense. But you also said the humanoid

00:31:04: approach is a thing. And I don't have a clue, but I feel it turns to the

00:31:12: humanoids, because nothing else needs to be changed. I have no clue how expensive they will be.

00:31:18: But this is my opinion on that. Yeah, I've heard that reasoning for why, but it's a different

00:31:25: discussion. I'm not sure. I can understand that as a transition approach, you know, for

00:31:31: whatever the next couple of years, so that we can use existing factories. But I'm a stronger

00:31:37: believer in, you know, if we have a structurally new way of doing things, we can build

00:31:44: new, you know, factories. And in the new factories, it doesn't matter

00:31:48: what we're going to have. Maybe we're going to have drones, could be, and we're going to design

00:31:52: that factory around drones. So then yes, in the meantime, we can have humanoids, because, you

00:31:58: know, factories have so far been built for machines plus humans. A different discussion. Tell us about

00:32:05: your team, where you are based, maybe whether you're looking for new colleagues. If so, what should they bring?

00:32:09: Yeah, oh, this is something I need to emphasize, right? The team is the greatest part of Spleenlab.

00:32:16: And we're really talking about our software engineers. We are currently about 45 people at

00:32:23: Spleenlab, located in Jena. Jena is a tech city in central Germany. It comes with a very,

00:32:30: very traditional university and a very strong, let's say, university landscape around it. And this

00:32:38: is where we founded Spleenlab, because the core team settled down there. And then it was obvious

00:32:46: to start there, and it was the right decision, because it's a good location. There's a strong

00:32:52: university, and there are other big companies like Zeiss and Jenoptik, but also NVIDIA-certified camera

00:32:59: producers like Allied Vision. So there's also a good ecosystem. And the most important thing is

00:33:04: that we have a loyal, strong team. And yeah, it's an international team, a mix of robotics

00:33:11: engineers, AI engineers, and pure software engineers. But this is mainly the mix.

00:33:18: And we are always looking for new people. We have no specific roles open, because we believe in

00:33:25: bringing people in in a way that they can really put their most important strength into the company, where

00:33:35: it's worth the most for us and adds the most value. So it's most efficient. And this worked out

00:33:42: very well most of the time. And if someone is really interested, then they can always apply. We have

00:33:47: some open roles, but the main aim is to attract people who feel that they're

00:33:53: good in this space. I understand. Thank you very much for that. So if any of you, dear listeners,

00:33:58: feel you would maybe like to work together with Stefan and his team in this great safe AI

00:34:06: environment, you can best contact him on LinkedIn: Stefan Milz, M-I-L-Z. And that's the German word

00:34:15: for what we call the spleen, right? Exactly. Yeah, I think you're

00:34:24: right, it's one of the ways that people are going to know exactly who you are. So otherwise,

00:34:30: dear listeners, if you have any other question or comment, as always, please send a short

00:34:35: email to peter@aipod.de. I'm very happy that you stayed with us so far. Looking forward to having

00:34:42: you with us again. Stefan, thank you very much for your time. One side note, maybe. Please. We are

00:34:49: really looking for some salespeople. This is where we are, oh, okay, not too strong. So if

00:34:54: someone is in the sales field, either in ADAS or agriculture or whatever field, or really has the

00:35:02: ambition to work with us. So this is something where we are not well staffed. And I think

00:35:09: it's a great opportunity. Thank you, Stefan. Noted. Take care and bye-bye. Thanks a lot for having me.

00:35:15: It was a great discussion. Really good questions. Thanks a lot. Really good answers from your side.

00:35:20: Thanks. Thank you, Stefan.

00:35:23: Robotik in der Industrie, the podcast with Helmut Schmidt and Robert Weber.

00:35:31: (upbeat music)

00:35:34: (gentle music fades out)
