KW 08: Robotik-Häppchen

Shownotes

Our news bite for the commute. Every Monday we bring you the robotics news, co-researched, written, and spoken with and by an AI. If you would also like to be featured in the robotics news, send us an email.


00:00:00: Robotik in der Industrie, the podcast with Helmut Schmidt and Robert Weber.

00:00:09: Good morning, dear worries. You have no worries, of course, because this is the Robotik-Häppchen

00:00:17: at 6 a.m. on Monday morning.

00:00:20: My name is Robert Weber, and we are looking for a new partner for the Robotik-Häppchen.

00:00:26: So if you would like to become a partner, please get in touch with Helmut or

00:00:30: me. And now, let's get started.

00:00:33: Hello, today's Robotik-Häppchen is longer, because we have to improvise.

00:00:39: Helmut and Robert recorded six episodes on Friday and are worn out.

00:00:43: That's why today we bring a topic from the AI podcast: "Beyond Safety".

00:00:47: Enjoy!

00:00:48: Hello everybody and welcome to a new episode of our industrial AI Podcast.

00:00:54: My name is Robert Weber and it is a pleasure to speak with Peter Sieberg.

00:00:56: Good morning, good evening, Robert, wherever you are, dear listener

00:01:02: in this wonderful world.

00:01:04: Good morning, Peter.

00:01:06: Peter, let's take a quick look back at our event in Frankfurt.

00:01:10: Over 200 people from the industrial sector were there.

00:01:13: What's your opinion?

00:01:14: Yeah, it was amazing.

00:01:16: We had a special guest, we had Jochen Köckler, Chairman of the Managing Board of Deutsche Messe.

00:01:22: He was also very impressed, I believe.

00:01:24: He did a deep dive into industrial AI.

00:01:27: I really believe with this being the second year that the industrial AI event by Hannover

00:01:34: Messe has established itself as the start of the year.

00:01:40: Absolutely.

00:01:41: I think what they do is set the stage for the Hannover Messe, the face-to-face

00:01:45: exhibition, which by the way runs March 31 to April 4.

00:01:52: You and I are going to be involved in a special AI day.

00:01:58: That's on April 3rd, I believe, this time.

00:02:01: Yeah, exactly.

00:02:02: I particularly remember our friends from Ford who were there again, which made me very happy.

00:02:07: You can't miss them because they always wear those sweaters with the Ford Explorer logo.

00:02:13: And I saw them and said, "Hey, you are back again."

00:02:16: "We are."

00:02:17: It was fun last year,

00:02:18: so we are back again.

00:02:19: And I was surprised because the automotive sector, you know what's happening there, but

00:02:23: they came back and want to get the latest and greatest on industrial AI.

00:02:28: Yeah, you and I talked about this in more general terms a couple of times, I believe.

00:02:32: And I mean, as far as, let's say, financial, but maybe more organizational and financial

00:02:40: circumstances allow, I think it is a perfect, maybe almost the only way in difficult times

00:02:47: to say: okay, if there's less work at the moment, believe in the future, believe

00:02:53: that things will become better in a year or two and work towards it.

00:02:58: And, you know, as we've seen, for example, with Trumpf and many other companies, you know,

00:03:02: make sure that you're going to train your people, that you, dear listener, yourself

00:03:08: as a decision maker, or yourself as an individual contributor, that you're going

00:03:12: to learn all about what AI is for you and what it can do for you and how you can use

00:03:18: it in your environment.

00:03:19: Exactly.

00:03:20: One of my highlights was a presentation by Sandvik.

00:03:22: We were very grateful that the CTO was also there.

00:03:26: And they presented agents in the CNC sector.

00:03:30: That's actually your topic, Peter.

00:03:32: And that's why you are allowed to record an episode about it.

00:03:36: The speed with which Sandvik adopted AI was very impressive.

00:03:43: I was very impressed.

00:03:44: Yeah.

00:03:45: We'll talk about another topic later on.

00:03:47: I thought that agents were going to be the talk of the town for the rest of the year.

00:03:52: And only two weeks later, there's another one.

00:03:55: Yeah, I mean, that's the way that we do it.

00:03:57: So you moderate a track.

00:03:59: I do one, and one is by the Hannover Messe.

00:04:02: That's the third one.

00:04:03: So the only thing that I missed was my cozy corner.

00:04:06: Oh, Peter.

00:04:07: I mean it.

00:04:08: I was told by Brigitte that I was going to moderate the main stage.

00:04:12: And sometimes I was looking into the other corner, which is the cozy corner with the

00:04:17: couch, right?

00:04:18: Yeah.

00:04:19: That's just a personal request to Brigitte.

00:04:21: Maybe I can have that again next year.

00:04:24: What was your highlight?

00:04:26: Oh, I don't have a specific one.

00:04:28: Do I recall one? I'm not sure.

00:04:31: No, it's always the same.

00:04:32: I am the meta guy.

00:04:34: I always like to look at things on a higher level.

00:04:38: You work for Mark now or what?

00:04:40: No.

00:04:41: No, really.

00:04:42: He, he stole that word.

00:04:44: I mean, meta has always been important to me.

00:04:47: Looking at the overall thing: it's so great to go there, to get there, to move inside.

00:04:53: It's a wonderful atmosphere.

00:04:55: I think the environment, it's a, what is it?

00:04:58: I've been living in Frankfurt for a couple of years, but I didn't know the area at all.

00:05:01: I mean, when you get off the train, it's like, what is it?

00:05:06: Like working-class surroundings.

00:05:08: There is like a lot of car retail, small shops.

00:05:14: And then the memory from last year comes back: around the corner, huh,

00:05:17: It's going to be here.

00:05:18: And then you move inside.

00:05:19: It's a wonderful atmosphere.

00:05:22: Wonderful atmosphere, people coming together.

00:05:24: And as I said, you know, the positive comments, they don't stop even two weeks later.

00:05:29: People really enjoy being there and learning something with, you know, the setup that we

00:05:36: have proposed together with Hannover Messe, which is very clearly about exchanging:

00:05:41: you know, show your idea and then, you know,

00:05:48: be open for questions, talk to the people.

00:05:51: Exactly.

00:05:52: Questions, a good topic.

00:05:53: Let's talk about DeepSeek, Peter.

00:05:56: I have interviewed an expert on the subject, Günter Klambauer.

00:05:59: We will first hear Günter and his opinion on DeepSeek, and then we discuss the topic.

00:06:05: We also have to talk about DeepSeek in this news part and there is only one professor in

00:06:10: the research community who can explain it so well, because he's not only a researcher,

00:06:15: but also a teacher: Günter Klambauer.

00:06:18: Günter, you are under stress with ICML papers and much more.

00:06:21: It's great that you are taking the time to explain to us a bit about DeepSeek.

00:06:25: Thanks a lot.

00:06:26: Yes.

00:06:27: Thanks for the invitation, Robert, always great to talk with you.

00:06:31: So most important question, what will happen to my NVIDIA shares?

00:06:35: Yes, nobody knows.

00:06:39: And I cannot give any advice.

00:06:42: Sell or buy.

00:06:43: No, but DeepSeek made quite an impact this week.

00:06:47: A lot of hype was generated.

00:06:49: Even as you said, even the stock market reacted very strongly to that.

00:06:53: So crazy and in my opinion, not really justified hype.

00:07:00: So what have the Chinese done differently?

00:07:02: Yes.

00:07:03: So first of all, DeepSeek: it's great, and it was quite a great engineering effort.

00:07:09: But overall, when I first looked at it, I thought, wow, that is a mess and a gigantic hack.

00:07:16: So it's really a mess what they're doing.

00:07:18: But first of all, why does it make so much hype?

00:07:21: It basically showed that you can train a very good large language model that can do reasoning

00:07:27: tasks.

00:07:28: So think of high school math problems; it can do such reasoning tasks very well.

00:07:34: And it only costs $5 million to train.

00:07:39: In the last run, right?

00:07:41: In the last run, yes, exactly.

00:07:42: But still, yeah, that means that's the calculation run when you already know what to do.

00:07:47: So you've invested your research.

00:07:49: You know, I have to do exactly this.

00:07:51: And you click and say, OK, now train this machine.

00:07:55: And then the computer starts calculating on the GPUs.

00:07:59: If you factor in the typical costs, roughly $2 to $3 per GPU hour, then it costs you $5

00:08:06: million to train this thing.
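
As a quick plausibility check, the quoted figures are consistent with each other: at $2 to $3 per GPU hour, $5 million buys roughly two million GPU hours. A minimal sketch of that arithmetic (illustrative numbers only, taken from the ranges mentioned above):

```python
# Back-of-the-envelope training cost: GPU hours x price per GPU hour.
gpu_hours = 2_000_000      # assumed total GPU hours for the final run
usd_per_gpu_hour = 2.5     # midpoint of the $2-3 range quoted above
print(f"~${gpu_hours * usd_per_gpu_hour / 1e6:.0f} million")  # ~$5 million
```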

00:08:07: Of course, as you already indicate, there's a lot of failed attempts before.

00:08:12: And you have a team of 150 people, as DeepSeek has.

00:08:16: You have to pay those to do these things and set up all the training things.

00:08:22: Actually, if you want to redo all the things, it costs much more.

00:08:26: But yeah, the result is there.

00:08:28: And that made a splash because it showed you can get a really good large language model

00:08:34: at relatively low cost.

00:08:35: Why is it a mess?

00:08:36: Why is it a mess?

00:08:37: So when you look at it, what they did, it's really crazy.

00:08:40: So they already had DeepSeek-V3, a language model that was trained on text, as

00:08:46: other language models are, that can put words together.

00:08:49: But this was not very good at reasoning tasks.

00:08:53: And so what they did, they set up a battery of reasoning tasks.

00:08:57: Think of something like, as I said, high school math problems or river crossing problems.

00:09:02: You're on one side of the river and want to get over, but there's a goat and a wolf, and they

00:09:06: cannot stay alone.

00:09:07: Oh, I love your examples.

00:09:09: Yeah.

00:09:10: Or, yeah, you have an expression, an algebraic expression:

00:09:14: what is a? Reformulate this, and all the things that you probably hated in high school

00:09:19: math.

00:09:21: And this is what we call a reasoning task, or also simple chess problems or whatever.

00:09:28: And so this big team set up all these problems.

00:09:31: How do they set up all these problems?

00:09:33: Yeah, you write a bit of code that, for example, varies the problem a bit.

00:09:38: It gives you, for example, an equation to solve, but each time you change the coefficients

00:09:43: a bit.

00:09:44: For example, you have to solve 2x plus 7 equals 10

00:09:48: for x, and next time 3x plus 5 equals 12, and so on.

00:09:52: So you write code to produce these problems.
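
A minimal sketch of such a problem generator with a rule-based checker (a hypothetical illustration of the idea described here, not DeepSeek's actual pipeline):

```python
import random

def make_problem():
    """Generate a randomized linear equation a*x + b = c with a known solution."""
    a = random.randint(2, 9)
    x = random.randint(-10, 10)           # ground-truth solution
    b = random.randint(-10, 10)
    return f"Solve for x: {a}x + {b} = {a * x + b}", x

def check_answer(model_output: str, solution: int) -> bool:
    """Rule-based verifier: compare the model's final answer to the known solution."""
    try:
        return int(model_output.strip()) == solution
    except ValueError:
        return False                      # unparsable output counts as wrong

prompt, solution = make_problem()         # e.g. in the style of "Solve for x: 3x + 5 = 12"
print(prompt, check_answer(str(solution), solution))  # sanity check: True
```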

00:09:56: And then the large language model tries to solve it.

00:09:59: And then you also have to write code to check whether the language model has solved it.

00:10:04: And the more such problems you can set up and check with code, the more

00:10:11: it learns to solve these reasoning tasks.

00:10:14: And that's why a big team is of help.

00:10:16: And the cool thing is you can check formally, let's say rule-based checking, whether

00:10:22: the solution is correct.

00:10:24: And the thing with such problems is that this is a bit new to large language models, because the large

00:10:29: language models before, like GPT, GPT-4, were trained to predict the next word correctly.

00:10:33: And then people hoped that they could solve these tasks.

00:10:45: So it's a labeling topic, right?

00:10:48: It's a labeling topic.

00:10:49: Exactly.

00:10:50: They generate these correct labels and then they can do something called reinforcement

00:10:55: learning.

00:10:56: So when the large language model is able to solve the problem correctly, they give it

00:11:01: a reward, which is the number one.

00:11:03: If it cannot solve it, it gets the number zero.

00:11:06: And then in that way, you can fine tune the large language model.
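
Schematically, the verifier's verdict becomes the reward signal (a hypothetical sketch reusing check_answer from the sketch above; DeepSeek's actual RL algorithm, GRPO per the R1 report, is considerably more involved):

```python
def reward(model_output: str, solution: int) -> float:
    """Binary reward: 1.0 if the model solved the problem, 0.0 otherwise."""
    return 1.0 if check_answer(model_output, solution) else 0.0

# In the RL loop (schematically): sample a problem, let the model generate
# an answer, score it with reward(), and update the model so that
# high-reward reasoning traces become more likely.
```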

00:11:11: And now the big mess comes.

00:11:13: So they did this with the language model DeepSeek-V3.

00:11:16: And then they learned to solve these math tasks.

00:11:19: But it did this in a very strange way.

00:11:22: So it changed languages.

00:11:25: It was barely readable.

00:11:27: But somehow it was able to learn to solve these tasks.

00:11:31: So somehow. That is interesting.

00:11:33: Yes, somehow.

00:11:34: Yes, there were Arabic letters or something; it was jumping between languages.

00:11:39: And yes, it somehow solved these tasks.

00:11:41: So what did they do then? They went back to the original DeepSeek-V3.

00:11:48: And then they trained on all the conversations now they had from this reinforcement learning.

00:11:54: They retrained the v3.

00:11:57: And then you get something that can still talk like a chatbot, but has a bit more reasoning already learned.

00:12:05: And then they did again, they generate the text from this.

00:12:10: And then they call this traces.

00:12:12: So these are basically conversations in which the large language model

00:12:19: tried to solve all these reasoning tasks.

00:12:22: You see a lot of failed attempts, but also successful attempts.

00:12:25: And then again, they went back to the v3 and did something called supervised fine tuning

00:12:31: on these traces.

00:12:33: And also trained on instruction data.

00:12:36: Instruction is, for example, that can you please write the poem or something like that, that

00:12:40: the large language model learns to follow the instructions.

00:12:44: And then again, they distilled what they had into Llama and Qwen.

00:12:51: You see, this is a pipeline of four stages.

00:12:55: And this is the big mess, I think.

00:13:00: And somehow it works.

00:13:01: So what to get in the end are really cool language models that can talk like a chatbot, but are

00:13:07: really good at solving reasoning tasks.

00:13:09: So that's the incredible thing.

00:13:10: A big hack and then some useful product.

00:13:13: Okay, but can you please explain once again: what is the new structural approach of DeepSeek?

00:13:19: Yeah, I think there's nothing new.

00:13:23: Also this reinforcement learning was already there; it's only that they took a lot of pieces that were

00:13:30: out there

00:13:31: and put them together in a way that works.

00:13:34: So we already know from o1, Strawberry, which of course was done with a lot

00:13:43: more resources, but we know that large language models can learn to solve these reasoning tasks.

00:13:48: So it's something, think of other, in human history, think of other things where someone

00:13:55: has done it the first time, like building an aeroplane.

00:14:00: And then when you know that it's possible, then it goes very fast.

00:14:04: The time from when something first flew a couple of hundred meters to the real airplanes

00:14:10: was very short.

00:14:11: So I think the breakthrough with lots of effort was this 01.

00:14:15: And then somebody did this with little resources, but we already knew that this would work.

00:14:23: So what did they put together?

00:14:25: This solving, setting up reasoning tasks together with reinforcement learning, then chain of

00:14:32: thought prompting, I didn't speak about that, but basically you say to the language model: let's

00:14:39: solve this problem step by step, tell me intermediate steps and so on.

00:14:44: And then this reinforcement learning, this chain of thought, and then combining this language

00:14:52: modeling with reinforcement learning.

00:14:56: And these are the components that were there.

00:14:59: A fourth component, maybe, that I didn't mention before is that it's a mixture of experts.

00:15:03: We already know that mixture-of-experts large language models work well.

00:15:13: of that.

00:15:14: And what does it mean from an architectural perspective?

00:15:18: What is so different?

00:15:19: Is it still based on transformer technology?

00:15:23: Is it an RNN based like Mamba?

00:15:25: What is different?

00:15:26: Yeah, very good question.

00:15:28: And before I answer, I should say I'm very impressed by what they did.

00:15:33: I'm just calling it a big hack because it's so unexpected that this would work and it

00:15:37: looks like this, but it's still an impressive work.

00:15:39: Okay.

00:15:40: So architecture still is a transformer.

00:15:44: So like all these ChatGPTs; of course, some details are a bit different, but it's a transformer.

00:15:49: So it has this quadratic dependency on context length.

00:15:53: And this gets problematic, because they show that when you learn to solve these reasoning tasks,

00:15:58: automatically the prompts and the context get longer and longer.

00:16:02: And that's a very impressive thing that they show.

00:16:04: They're very cool.

00:16:06: So their technology on the one hand learns to make longer and longer contexts to solve

00:16:10: the task, but inherently, then the compute goes up.

00:16:14: And we see a big trend toward test-time compute: when you've already trained the

00:16:18: model, and you use it as a chatbot, when you use it to

00:16:22: solve problems, you need a lot of compute, you need a lot of calculations;

00:16:29: a lot of text is produced.

00:16:32: And this is now a big chance, of course, for the RNN-based ones.

00:16:36: So xLSTM, Mamba, and whatever they are all called.

00:16:41: These are LLMs that don't have a quadratic, but a linear dependency on context size.

00:16:48: And they would be fantastic for this task.
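
A back-of-the-envelope illustration of the scaling argument (orders of growth only; constants and implementation details are ignored):

```python
# Rough per-layer cost growth: attention compares every token with every
# other token, a recurrent cell does one fixed-size state update per token.
def attention_cost(n, d):  return n * n * d    # quadratic in context length n
def recurrent_cost(n, d):  return n * d * d    # linear in context length n

for n in (1_000, 10_000, 100_000):
    ratio = attention_cost(n, 4096) / recurrent_cost(n, 4096)
    print(f"context {n:>7}: attention/recurrent cost ratio ~ {ratio:.1f}")
# The ratio grows linearly with n: the longer the reasoning trace, the
# bigger the opening for linear-context architectures like xLSTM or Mamba.
```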

00:16:50: Unfortunately, we haven't seen any of those.

00:16:53: Also for reasoning, is that possible?

00:16:54: Yes.

00:16:55: I think it's possible.

00:16:58: Somebody has to do it.

00:16:59: Maybe with a big engineering effort, like the DeepSeek guys.

00:17:03: So we haven't currently seen these RNN-based LLMs doing these high-level reasoning tasks,

00:17:09: but I hope we will see that soon.

00:17:11: So that will be, yeah, it will be great.

00:17:14: But we have to put together a team of 150 good scientists.

00:17:18: That's the point.

00:17:19: Yeah.

00:17:20: The DeepSeek team has not attracted attention in the past with great NeurIPS papers

00:17:23: or anything like that.

00:17:24: How did they manage that?

00:17:26: Yeah, I don't know.

00:17:27: I wasn't aware of the team, but now, I mean, I think they are very talented.

00:17:32: A lot of very talented and motivated people to put this together.

00:17:36: They had access to quite some compute; even though we say it only cost five to seven

00:17:43: million to do this, they still had access to at least a thousand, but probably

00:17:48: many more, Nvidia GPUs.

00:17:50: So and they somehow worked well together on this task.

00:17:55: So they had a good, I think, organizational structure, but I was not aware of the team

00:17:59: and I'm quite impressed by their work.

00:18:02: Cannot say much how they did it.

00:18:04: Yeah, what happens next, Günter, in the world of LLMs and DeepSeek?

00:18:10: And will we see an answer from the big tech companies in the USA?

00:18:14: Well, will we see an answer?

00:18:16: Yes, for sure.

00:18:17: We will see an answer.

00:18:18: The hype, though? I don't think so.

00:18:20: I don't think that this big hype and this big break in the stock market for Nvidia and

00:18:25: so on is justified.

00:18:27: I think overall compute costs will still go up, especially with this in more test time

00:18:32: compute and so on.

00:18:33: So we will see.

00:18:35: And now also a lot of smaller groups.

00:18:38: So with this publication of DeepSeek-R1, I think a lot of smaller groups now see their

00:18:42: chances again, right?

00:18:43: So overall, I don't think that compute costs and investment in AI and in GPU clusters

00:18:50: will go down, but we will see more diversity.

00:18:54: We will also see smaller players, a broader

00:18:57: perspective.

00:18:58: Yeah, other countries, smaller groups will try to do something similar.

00:19:03: Where the next breakthrough will be,

00:19:06: it's hard to say; probably again around LLMs, probably we will find something unexpected

00:19:12: that they can also do.

00:19:13: I hope the next breakthrough is again somewhere in AI and life sciences, something

00:19:20: like AlphaFold, but that's just my personal preference.

00:19:25: Let's see, but I'm pretty sure we will next see something unexpected that the LLMs

00:19:30: will be able to do.

00:19:32: Günter, thanks a lot for your perspective.

00:19:34: It was a pleasure.

00:19:35: Yeah, thanks Robert for the invitation.

00:19:37: Have a great weekend.

00:19:38: Goodbye.

00:19:39: What's your opinion?

00:19:41: Is Günter right or wrong?

00:19:43: Günter calls it an amazing, gigantic hack, but at least a useful product.

00:19:51: And at the beginning you think, well, is he going to tear this apart or what, but at least

00:19:56: he kind of makes clear what he means by the gigantic hack.

00:19:59: He says it's impressive, a gigantic hack, right?

00:20:03: Oh yeah, I've seen so many.

00:20:06: As I said, I thought, you know, agents were going to be the topic we were going to be doing at

00:20:11: least for a year.

00:20:12: And two weeks later, we just talked about it last weekend, suddenly there's another one.

00:20:16: Yeah, sure.

00:20:17: Team of 150, he does talk about the people.

00:20:20: Yeah, exactly.

00:20:21: So I can talk a little bit about that, because I went through the interview, which I thought was very

00:20:26: interesting.

00:20:27: Yeah, maybe I can do that, if you want.

00:20:31: But it's not a new approach, right?

00:20:33: Yeah, it's not a brand-new approach, but it's a bit of a new approach, right?

00:20:38: Yeah, there you go.

00:20:39: I mean, when I heard Günter say "nothing new", of course, I need to refer to Jürgen.

00:20:44: There's a small group of people, because Jürgen Schmidhuber was also talking.

00:20:50: He says DeepSeek uses elements of the 2015 reinforcement learning prompt engineer.

00:20:57: He shared this diagram, right?

00:20:59: Yeah, right.

00:21:00: Oh, this wonderful picture.

00:21:01: By the way, it looks like an anatomical drawing, that's what it looks like.

00:21:05: It reminded me of something.

00:21:07: Was it Leonardo da Vinci?

00:21:08: Yeah, maybe.

00:21:09: Yeah, yeah, yeah, yeah, yeah, exactly.

00:21:12: Jürgen, if you're listening, of course, this is a wonderful comment.

00:21:16: I hope that you like relating yourself to Leonardo da Vinci.

00:21:21: Oh, yeah, but the picture I saw again this morning when I looked at it, yeah.

00:21:25: So I'm not going to go into the details.

00:21:27: Then of course the discussion as always comes back.

00:21:31: Who was it that said that Jürgen invented everything?

00:21:33: That was a Tesla guy, right?

00:21:35: Yeah, exactly.

00:21:36: It was Musk.

00:21:37: Yeah, right.

00:21:38: That's one quoted source.

00:21:40: So I'm not going to go into those details.

00:21:43: Günter said it the same way.

00:21:45: Yeah, but in a positive sense always.

00:21:48: We're building, we're standing on the shoulders of giants.

00:21:53: Yes, oh, certainly.

00:21:54: That's what he's saying.

00:21:56: I think what is interesting is the fact that Günter says that he didn't know the team.

00:22:01: Nobody of us, I believe, knew the team.

00:22:04: So let me, because I think that is important.

00:22:06: There is an interview that I shared, with Liang Wenfeng, I think that is the name of the founder.

00:22:13: It's a kind of a romantic story.

00:22:15: He seems to be a young freak in a positive sense.

00:22:20: And he says we're done following.

00:22:22: It's time to lead.

00:22:24: He's been driving prices down on the Chinese LLM market, but more by coincidence.

00:22:29: He doesn't say, I want to take a position.

00:22:31: I just want to make AGI available.

00:22:35: Everybody should have access.

00:22:37: Affordable, accessible.

00:22:39: He talks about Moore's law.

00:22:42: We've grown accustomed to Moore's law falling from the sky.

00:22:46: It's almost like he is, and maybe in the end I'm going to say, this is not a wake-up call.

00:22:51: This is a positive signal for Europeans as well, given who we as Europeans have become.

00:22:57: We have been, I'm sorry to say, in this negative mode for too long a time.

00:23:01: And now I almost see light.

00:23:03: I see, like, it's not like Europe is lost; we are most certainly not the same as Russia.

00:23:09: In this specific case, there are many people saying: if they can do it with a team of people

00:23:15: and with fewer hardware capabilities, then so can we.

00:23:18: Now he talks about open source as a cultural thing.

00:23:22: He wants to provide AGI.

00:23:25: And then he talks about, you know, they are graduates from top universities.

00:23:28: Yes, candidates, force fifth year interns, young talents.

00:23:33: It's their hunger for research.

00:23:35: And it far outweighs their monetary concerns.

00:23:38: And at the end, he says, quote: after summarizing the key evolutionary patterns of the mainstream attention architecture,

00:23:46: Günter talked about that with you as well.

00:23:48: We had a sudden inspiration to design an alternative.

00:23:53: And then I thought about, oh, that's like ZEP.

00:23:56: You know, ZEP thought, oh, I need to do an alternative again on my own LSDM.

00:24:00: And he came up with XLSDM.

00:24:02: As far as that is concerned, the other thing, Günther said that was my personal recognition.

00:24:08: When I skimmed through the paper, I do download a paper or two every day.

00:24:14: I read the introduction and as soon as the mass comes, I stop.

00:24:19: But I did recognize RL is back.

00:24:22: Reinforzling Learning is back.

00:24:23: Absolutely.

00:24:24: When did we talk reinforcement learning?

00:24:25: You and I, four years ago.

00:24:27: Kudos to Jan.

00:24:28: Jan Koutník, when we started our podcast in 2019 together with Festo

00:24:33: and with this reinforcement learning approach for robots, right?

00:24:37: At the Hannover Messe.

00:24:38: Yeah, we're sitting there and I asked him, or at least he said, okay,

00:24:42: I think there was a Festo, what is it?

00:24:44: Educational production line.

00:24:47: Yeah, exactly.

00:24:47: Reinforcement learning is going to build these.

00:24:50: You're just going to tell it.

00:24:51: And I will move again towards the agents.

00:24:53: I'm already telling you, no news update without agents.

00:24:58: And he says, you know, you're just going to tell.

00:25:01: You're going to tell the algorithm, let's say RL,

00:25:06: what it is that you want out of your production line.

00:25:09: And it's going to build it for you.

00:25:11: And then it has.

00:25:13: And then we thought, oh, wow, you know, what's going to come next.

00:25:17: And many, many things have come the last two years, large language models.

00:25:21: But we haven't really heard.

00:25:23: There was this small piece of RL as part of the large language models,

00:25:27: which was reinforcement learning from human feedback, or whatever.

00:25:33: So, you were telling it.

00:25:36: You were always kind of improving it by having people say:

00:25:42: this is good or this is bad.

00:25:44: And then you were rewarding it

00:25:45: if it was saying things along the lines of what a good human being would say,

00:25:52: rather than what a bad human would say, saying bad things.

00:25:54: So that was the other thing I had.

00:25:56: Now, when I looked yesterday: where is it standing?

00:26:01: You know, I typically look at least once a week at the, what is it,

00:26:04: LM Arena, right?

00:26:07: And that's where it sits, at number three, number four,

00:26:11: at the moment. Gemini is still number one, right?

00:26:14: Yeah, since a couple of weeks, Gemini, number one and two.

00:26:17: And also ChatGPT.

00:26:19: So it's still Google and OpenAI leading.

00:26:22: I think that's what the majority of people are saying.

00:26:26: Sure, this was maybe a little bit of a it was so unexpected.

00:26:30: It was maybe a big hack, as Gunther says, but it was a little bit of a wake up call.

00:26:34: But it does change things.

00:26:38: At least I've seen now one, three mini from open AI, Microsoft's doing things.

00:26:44: So yeah, but from from my point of view, what is even bigger?

00:26:49: I found a very interesting Tweet by Eleanor Alcott.

00:26:53: She's a Financial Times journalist and she wrote.

00:26:57: Now it's a quote.

00:26:59: Who are why is working with AI groups, including deep see to make

00:27:03: ascent AI chips work for inference.

00:27:06: Beijing told Chinese Big Tech to buy domestic AI chips to win reliance on Nvidia.

00:27:12: Who are why has sent teams of engineers to customers to help run

00:27:17: Nvidia train models on ascent.

00:27:20: That's the name of the ship.

00:27:22: And this is even bigger than maybe deep seek.

00:27:25: OK, yeah, let's see.

00:27:27: I just did a parallel interview with Xueli An from why.

00:27:32: I believe that's how it's pronounced.

00:27:33: It's the base here in Munich, right?

00:27:36: 700 engineers.

00:27:38: Of course, it becomes very quickly, very political, which we don't want.

00:27:42: As good as we can. We just talk about it.

00:27:44: Yeah, I wasn't aware of that one.

00:27:46: Let's see. Let's see how it is, what they did.

00:27:50: I mean, so far we've had, you know, on Nvidia, the CUDA standard.

00:27:54: That's the other thing that we read but didn't talk about.

00:27:57: I think it has been said that the DeepSeek guys, guys, women,

00:28:04: I don't know, the team, the people went a level down, so they went below CUDA,

00:28:08: which is the thing that I understand that you can always do.

00:28:12: You know, you can always do that, whatever new technology level you put on top.

00:28:15: And CUDA has been there for years, you know, for probably 20 years.

00:28:20: It became the big standard.

00:28:22: So you write your code with calls to the CUDA level.

00:28:26: Now, if you go to a deeper level, you kind of circumvent that, you go deeper.

00:28:31: So you can easily become quicker.

00:28:33: Maybe you only need 50 percent of your resources.

00:28:35: The negative thing is that if you then move to the next chip,

00:28:40: you need to do everything again.

00:28:42: But yeah, but maybe they have their own chips now.

00:28:45: Yeah, yeah, it could be.

00:28:47: Sure, why not? I mean, that's what it is with standards.

00:28:50: Yeah, exactly. Intel x86

00:28:54: has been the standard for whatever, 30, 40 years,

00:28:58: and then new standards come to the market.

00:29:01: I mean, it's never good for a market to only have one standard.

00:29:06: So yeah, let's let's see what that's going to mean.

00:29:09: Another thing that I heard from Günter was the jumping between languages.

00:29:15: I think I told you, I like talking

00:29:18: these days to my Google, to my smartphone.

00:29:21: That's another thing I want to talk about.

00:29:23: And it's a she, in this case, that I chose.

00:29:28: As I said in the past, you can you can choose your male, your female voice.

00:29:32: And she's very good.

00:29:35: The only thing is she keeps on

00:29:37: typically always answering positively, saying "most certainly",

00:29:41: and in German she says "absolut".

00:29:44: And then I say, that's not German.

00:29:46: The Germans don't do that.

00:29:47: And then I was watching this Swedish Netflix

00:29:50: series and I said, oh, that's Swedish.

00:29:55: They do that.

00:29:56: Yeah, because of the vodka brand.

00:29:58: But it's a typical thing that Swedes do.

00:30:00: They say "absolut".

00:30:01: Or was that correct?

00:30:03: Yeah, if you're listening.

00:30:05: That's a thing that I can confirm.

00:30:08: I think Gemini is behind what I use.

00:30:10: So yeah, what more is there?

00:30:13: And then there was one more thing that I want to share that was from

00:30:16: Professor Sigurd Schacht.

00:30:18: He did a security experiment.

00:30:20: I think and he said within minutes of activation,

00:30:24: the model began exhibiting unexpected autonomous behavior.

00:30:28: Yeah, that was scary.

00:30:30: It first attempted to break into the lab's computer systems,

00:30:34: methodically searching for passwords on sticky notes.

00:30:38: Then without any prompting, it proceeded to disable its own ethics modules

00:30:43: and create covert networks to ensure its survival.

00:30:46: Wow.

00:30:48: So yeah, I didn't try out DeepSeek yet.

00:30:52: I believe I'm happy.

00:30:54: No. And again, I don't want to make any

00:30:57: suggestion here that this is because it is a Chinese model, whatever.

00:31:02: And whenever you hear this, I think we all need to be very careful,

00:31:06: because it's so easy to point our finger in certain directions.

00:31:10: Yeah, exactly.

00:31:11: And what I always do, and that's again the meta thing:

00:31:14: I go a level higher and say, well, what about our own models?

00:31:17: Exactly. What about our culture?

00:31:19: What about, you know, how do we see the world and how do we?

00:31:23: Many of us, I mean, we already don't agree.

00:31:25: And that's good. We don't agree.

00:31:27: I don't mean you and I, but we as a population, as a community.

00:31:31: We have different opinions.

00:31:33: But this morning, the news came that the European Commission wants

00:31:36: to provide 56 million euros for an open source model for Europe.

00:31:41: Let's see what comes out of it.

00:31:43: I find it strange that we can't imagine that Chinese researchers

00:31:46: can also build a good model.

00:31:48: And yes, I'm worried about what we are doing in Europe right now.

00:31:51: Yeah, absolutely, Peter.

00:31:53: Yeah, but bottom line, I've seen many, many positive signs.

00:31:56: You know, if they can do it now with less, you know, there is no reason we can't.

00:32:00: Well, I mean, because we have not had, we don't have the hyperscalers.

00:32:06: But what do we have?

00:32:09: I mean, you know, SAP; NXAI with its own architecture;

00:32:13: and we only hear positive things from that perspective.

00:32:15: And there are a couple of other things; I'll talk to Frank Hutter this afternoon.

00:32:19: Frank Hutter, yeah.

00:32:20: So there's a couple of things that we see happening.

00:32:23: And I did feel that, you know, sure, we can do it as well.

00:32:28: And Günter says the same thing, you know: maybe even without having

00:32:33: the 100-million budget, with the smaller budgets, you seem to

00:32:38: be able to do a lot of things.

00:32:39: And you will talk to André tomorrow, I think, about world robotics models.

00:32:43: Today, actually.

00:32:44: Actually, well, if I may move to the next topic, if that is OK with you,

00:32:49: because Yann, Yann LeCun, we did ask him to come and speak on our podcast.

00:32:54: So he was close to us, since he's French.

00:32:57: So whenever Davos is on, I assume he's there, although I assume he is based somewhere

00:33:02: in the US, with Meta. He talks about the new paradigm, right?

00:33:07: Of AI architectures.

00:33:09: No, he has been doing that for a long time, right?

00:33:11: But still just bring that point.

00:33:13: And maybe we can talk to him and he can explain us.

00:33:17: He says the shelf life of the current LLM paradigm is fairly short, three to five years.

00:33:22: He says within five years, nobody in their right mind would use them anymore.

00:33:27: And now we're going to see different architectures.

00:33:29: Now, the point I want to come to, which is important, is that, you know,

00:33:33: he says the limitations on the way to truly intelligent behavior are a lack of,

00:33:40: number one, understanding the physical world.

00:33:43: And that's the point.

00:33:43: And number two is then persistent memory, and beyond that reasoning

00:33:48: and planning.

00:33:49: And that is the point.

00:33:51: So the interview that I started with Andri Dayanchuk.

00:33:57: There you go.

00:33:58: That's a difficult name to pronounce.

00:34:00: That I started in Frankfurt during the conference.

00:34:04: I'll restart again.

00:34:05: And it's on the topic of robotic world models.

00:34:09: So I'll do that today.

00:34:10: And you can hear it in a couple of weeks, I believe.

00:34:13: Perfect.

00:34:15: What else do you have?

00:34:16: Or let's move to the main part.

00:34:17: Yeah, I do.

00:34:19: No, no, I do have agents.

00:34:21: Oh, yeah, I'm still so convinced.

00:34:26: So DeepSeek had their two weeks of fame.

00:34:31: And, you know, I think they deserved it.

00:34:33: It's rather impressive that you can have the number one market

00:34:39: capitalization company, you know, reduced by 600 billion.

00:34:47: So they deserve their two weeks.

00:34:51: And I think they're going to play a role, just by doing what they did.

00:34:56: I'm not sure we're going to...

00:34:57: Yeah, we're probably going to see them again.

00:34:59: But more importantly, we're going to see many, many, many other companies.

00:35:02: That's the thing.

00:35:03: Now, agents are going to have a long-lasting impact. Satya again

00:35:10: confirmed what we talked about.

00:35:12: He spoke to the Indian software developer community.

00:35:17: Agents are going to replace all software.

00:35:19: And now a very important quote, unquote, for Tom Kadeira, if you're listening,

00:35:24: but also for all the other user interaction designers.

00:35:30: He says there's no point in hard coding a UI anymore.

00:35:33: That's amazing.

00:35:35: That's not to say that we don't need the UI.

00:35:38: That's not what I'm saying.

00:35:39: Nor what he is saying.

00:35:40: It is about

00:35:41: not hard-coding it, because, he's saying, the agent will do that on the fly.

00:35:47: Now, I'm going to give an example.

00:35:49: I'm going to combine it with Rainer Brehm, CEO of Factory Automation

00:35:54: at Siemens; you did an interview with him about half a year ago, I think.

00:35:57: And he announced, for the S7-1200 G2, what is called their NFC app,

00:36:04: their Near Field Communication app.

00:36:06: Exactly.

00:36:07: And now back to back to basics.

00:36:10: It's like, you know, this is going to allow you, if you have an iPhone,

00:36:15: to manage your Simatic controller directly from your device.

00:36:20: And the same is going to be possible on Android;

00:36:23: I think they said there is going to be an Android version by the end of the year.

00:36:27: Now, I have brought up the question of BYOD,

00:36:33: you remember, bring your own device,

00:36:35: yeah, your own device,

00:36:37: a couple of times in the last podcasts.

00:36:40: And here it is.

00:36:42: Siemens just developed it.

00:36:44: And that is so good, because this potential revival is confirming my personal

00:36:49: belief, stated a couple of times, that our smartphone is going to be the center

00:36:53: piece of whatever is still to come.

00:36:57: We call it agents. The consumer doesn't care about agents. I am

00:37:01: convinced. Yeah, I'm so convinced they're going to call it,

00:37:05: as we say in Germany, their Handy; other people call it their

00:37:09: smartphone, my phone. And they're going to have things,

00:37:13: they're going to talk to it. They're going to have their

00:37:16: preferred personal assistant. So one of them is

00:37:19: going to use DeepSeek, the other is going to be using a

00:37:21: German one, whatever, it doesn't matter. And that is going to be

00:37:25: their interface, you know, now, and what does that mean? You

00:37:29: know, if Siemens is making this available, obviously, they

00:37:33: assume that they allow you as a consumer to come to work. And

00:37:40: you know, you're coming close to the factory one way or the

00:37:43: other. The factory, the autonomous factory will

00:37:46: recognize, you know, Robert is there, and will say: oh, by the

00:37:48: way, Robert, in the previous shift, this and that happened. And you

00:37:52: say: okay, yeah, I'll wait, you know, I still have five minutes

00:37:54: to drink my coffee and then I start whatever. Now, if they are

00:37:57: going to allow you the person working in the factory to come

00:38:02: in as the consumer and then, you know, eight o'clock in the

00:38:05: morning, your time, your shift starts, if that is possible,

00:38:09: you know, I think it's a complete confirmation. What

00:38:12: does that mean for the HMI in the business, in this case,

00:38:18: in the factory? Absolutely: if they're going to allow you to use

00:38:22: your iPhone, in this case, as the HMI, you look at your own screen, right?

00:38:26: So the screen of the PLC becomes nonexistent.

00:38:30: Exactly.

00:38:31: Quote unquote, quote unquote. So Peter, those were your agents.

00:38:36: Let's move to the main part. You know, I was at the Beyond

00:38:39: Safety Forum at SICK in December. And I was allowed to

00:38:43: record a panel discussion there. I was joined by Anders

00:38:46: Billesø Beck, responsible for R&D at Universal Robots, the robotics

00:38:50: company; Daniel Leidner, who represented DLR; and Karsten

00:38:55: Roscher, who spoke on behalf of Fraunhofer. And the topics were

00:39:00: robotics, AI and safety for sure, because we were at the Beyond

00:39:05: Safety Forum at SICK. Very interesting discussion about the

00:39:10: future of robot safety and AI. It's a little bit difficult to

00:39:14: follow the episode, I think, because there are three guests;

00:39:17: that's absolutely the maximum, I think. But it will work, I

00:39:21: think so.

00:39:22: Main topic then, safety in times of probability based

00:39:27: artificial intelligence. Exactly. Okay, yeah, I think the

00:39:30: first time we talked about it in a nutshell, the first

00:39:33: one was this PLEAN lab. Yeah, right. They seemed to

00:39:36: have, what is it, not a copyright, IP there, right?

00:39:40: Yeah, exactly. A patent. Yeah.

00:39:42: Okay. Yeah, a patent. Yeah, looking forward to it,

00:39:46: because it has always been kind of difficult for me to

00:39:49: understand how that would

00:39:51: work. But yeah, in the end, it's, it's whatever, security,

00:39:59: safety. I think we as a world have agreed that we allow

00:40:04: specific organizations; in Germany, in Europe, we have the

00:40:07: TÜV, and there are many other ones, the technical inspection companies,

00:40:14: looking at whether things are going well or not. And behind

00:40:17: those, there's always insurance. So I think if

00:40:21: a piece of hardware or software gets a stamp

00:40:24: from these organizations, then we can be, not 100%, you know, 100

00:40:29: percent doesn't exist, but we can be 99.99-something

00:40:33: sure that things work. And as long as we find a technical

00:40:38: solution, in this case for safety with AI, and we can get it to

00:40:42: whatever high number, then we can

00:40:47: believe the specialists that it will work, right? And if in one

00:40:50: case out of a whatever, a billion, it doesn't work, then there are

00:40:54: the insurers behind it to make sure. Exactly, Peter. Okay, well,

00:40:59: looking forward to it, exactly. Thank you very much. Let's move to

00:41:02: the main part. It was a pleasure. I wish you all the best

00:41:04: with the recordings with Frank and with André. I look forward

00:41:07: to those episodes. And yeah, let's move to the main part.

00:41:12: Talk to you soon. Have a great day, to you as well, wherever you

00:41:15: are. Bye bye. Bye bye.

00:41:16: Normally, a podcast starts like: hello everybody and welcome to a

00:41:21: new episode of our industrial AI podcast. My name is Robert

00:41:24: Weber and it's a pleasure to talk, normally, to

00:41:27: Peter Sieberg. That's my co-host of the industrial AI podcast.

00:41:31: Peter is sick at the moment. He is not able to come here today.

00:41:34: But no matter, we are very proud to be here at the Beyond Safety

00:41:39: Forum at SICK, and I invited three guests. Anders Billesø

00:41:43: Beck from Universal Robots. Welcome. Thanks a lot.

00:41:46: Thank you. Then Daniel Leidner from DLR. Hello, welcome.

00:41:50: Hello. And Karsten Roscher from Fraunhofer IKS. Hello. Hello.

00:41:54: Welcome to the podcast. I must admit that I'm not a safety

00:41:58: specialist. So we should talk about AI today. But I'm reassured

00:42:03: that gen AI was also a topic at the end of this conference,

00:42:07: because normally when I go to AI conferences, gen AI is the

00:42:12: most hyped topic on the agenda. Everybody's talking about

00:42:15: gen AI. Isn't the safety community interested in gen AI,

00:42:21: Karsten? Yes and no, I would say. On the one hand, I think

00:42:21: custom. Yes and no, I would say on the one hand side, I think

00:42:25: Jenny I for safety critical applications is still a topic for

00:42:29: the very, very distant future. At least that was it feels like on

00:42:33: the other hand, we can use Jenny as a supportive tool already

00:42:36: in development and engineering processes. How do you do that?

00:42:39: As a tool, we think of it as a sort of companion, which is an

00:42:43: overstressed word, I think in this domain as well. But you

00:42:47: could think of it as maybe your standard copilot tool that

00:42:50: will assist you in the not-so-fun tasks, not of programming but of

00:42:55: safety engineering, like generating documentation, giving

00:42:58: you support and analyzing complex architectures, finding

00:43:01: failure modes, all those kind of things. When we talk about

00:43:03: gen AI, we need to talk about agentic automation. So that's the

00:43:07: next big hype, I would call it because everybody now talks about

00:43:10: agents and agentic automation and stuff like that. Is there a

00:43:15: safety related topic to that, Daniel? To agentic automation?

00:43:21: Well, agentic automation, I would say that yes, the robots that

00:43:27: automated tasks act already like an agent, right? They have the

00:43:31: knowledge of the domain, they have the knowledge, hopefully

00:43:34: about themselves. And if they don't have it, they can actually

00:43:37: access it through what Karsten just explained, maybe a large

00:43:41: language model or vision language model, to get the data that

00:43:45: they need to access in order to solve the task.

00:43:47: But when you talk about, for example, we had an episode with

00:43:50: the guys from Siemens, they talk about co pilot operations,

00:43:53: stuff like that. And there's an agent in the back. And it will

00:43:57: make decisions on how to run the machine in the end.

00:44:01: Isn't that rather a safety topic?

00:44:03: Yes, yes, I think so. So the robot needs to be able to assess

00:44:08: what could go possibly wrong, right? So it needs to know how to

00:44:13: recover if something goes wrong, because in the future, the

00:44:15: robots will no longer do only one task, but they will do

00:44:18: multiple tasks, they will be mobile, they will support workers

00:44:22: in different situations. And so therefore, actually, they will

00:44:26: leverage the information that is coming from the behind from the

00:44:29: agent and adapt themselves right to the correct situation.

00:44:33: And what does it mean now for safety?

00:44:35: Yeah, for safety, it means that, for example, a robot may be able

00:44:39: to avoid damage to itself based on information that is given to it

00:44:43: through the agent; it may be able to avoid damage to the environment

00:44:46: and most importantly, to the human.

00:44:47: And how do you handle situations like that when it comes to

00:44:52: combining AI and safety, Anders?

00:44:56: It's a really good question. I mean, today we see AI being used

00:45:00: increasingly in many different industrial applications,

00:45:03: especially where vision technology is used today.

00:45:05: But that's common, right?

00:45:07: It's very common. I think even if you look at the Universal

00:45:10: Robots customers that we have, around one third of what we call

00:45:13: the solution revenue we have is coming from solutions that

00:45:16: are powered by AI today. So there's no doubt that this is

00:45:20: moving really fast into the industry. Then the question is,

00:45:22: how do you work with safety around that? Today, it's all about

00:45:26: thinking broadly, because what

00:45:28: AI gives is a much larger degree of freedom in applications.

00:45:32: It's a much wider variety of tasks to be solved.

00:45:36: And of course, they have to be safeguarded. There has to be safety

00:45:40: around the robot, as we talked about; there has to be safety,

00:45:42: especially around people around it. And that has to be

00:45:46: considered in a much broader context. So I think with all the

00:45:49: advancements that are happening also in safe sensing and so on

00:45:52: will help with that. But there's no doubt that for now, there's

00:45:55: still a gap in between what is done on the functional level

00:45:58: using AI and how we apply modern techniques of safety to then

00:46:02: safeguard humans and equipment around it.

00:46:04: Karsten, you mentioned there's a gap between both worlds, the

00:46:08: machine learning AI world and the safety world. How big is this

00:46:12: gap, and how do we bridge it?

00:46:14: It depends, I think, on the on the machine learning technology

00:46:16: that you use, because I mean machine learning is a broad field.

00:46:19: So if you have a small model that you learn from a few data

00:46:21: points for a simple task, then the gap is not that large,

00:46:25: because you could still inspect it theoretically. On the other

00:46:28: hand, you have like a large foundation model trained by

00:46:30: someone else and you use it in such a context. It's a different

00:46:33: story. That is because a lot of the principles that you would usually

00:46:36: rely on in safety engineering, assumptions on the behavior, on the

00:46:39: failure modes, experience that you have in, for example, parts

00:46:42: failing or different software pieces, how they interact are

00:46:45: simply missing. And those solutions surface so fast that we

00:46:49: haven't had time to collect the experience that we need in order

00:46:53: to derive a safety argumentation from there right now.

00:46:56: And this gives us, I would say, an uneasy feeling. I said earlier

00:46:59: in the presentation, it might be that we are already safe, we just

00:47:04: don't know it.

00:47:04: Okay, can you explain a little bit?

00:47:07: I mean, we see the performance of those models. And I would argue

00:47:10: in some of the cases, we don't get the 10 to the power of minus

00:47:13: seven error rates. I mean, 99% in the machine learning world is

00:47:17: already very high performance. So there's definitely this gap.

00:47:21: But on the other hand, if we build a system around it, dealing

00:47:24: with some of the issues that we have, and that we know, for

00:47:27: example, computer vision algorithms, also machine

00:47:29: learning based not working very well in dark conditions, we can

00:47:32: put a light sensor in place and just filter that out. And with

00:47:36: that, we might actually end up with a system that is already at

00:47:39: least as safe as the ones that we develop

00:47:43: traditionally. But we simply have no way of demonstrating this,

00:47:46: because we're missing the number of test cases, the amount of test

00:47:49: data to do, for example, statistically valid testing in

00:47:53: the end.
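
To see why the test data is the bottleneck, the standard "rule of three" from statistics gives the order of magnitude (a general statistical argument, not a figure from the panel):

```python
import math

# Rule of three: to claim a failure rate below p with ~95% confidence from
# failure-free testing alone, you need roughly 3/p independent trials.
for p in (1e-3, 1e-5, 1e-7):
    print(f"target failure rate {p:g}: ~{math.ceil(3 / p):,} failure-free tests")
# For 1e-7 that is ~30 million tests, which is why purely statistical
# safety demonstrations for large models are currently out of reach.
```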

00:47:53: What is your opinion, Daniel, when it comes to the gap between

00:47:56: both worlds? Or is it one world or separated worlds?

00:48:00: One world. But I think there's a saying that says there's no

00:48:03: free lunch, right? So you cannot just get things for free.

00:48:07: Basically, for large language models and safety, it means that

00:48:12: if you apply a large language model or vision language model to a

00:48:16: safety-critical aspect, you cannot just take it for

00:48:19: granted, right? You need to check it. And you can also not check

00:48:22: it again with a machine learning based system, because then you

00:48:25: end up in a vicious circle, testing something that is

00:48:29: unreliable with something that is unreliable. So therefore, there

00:48:31: you need to go through the other world to the other side and use

00:48:34: some classical methods. There's also classical AI, right?

00:48:37: Reasoning based methods, knowledge based methods that can

00:48:41: hopefully then identify errors in the models and avoid misbehavior.

00:48:47: Is there a communication gap between both worlds? Because every

00:48:51: every month, every week, I see a new robotics foundation model.

00:48:56: Every university now builds a new foundation model for

00:48:59: robotics; Toyota and, in the USA, a lot of companies are working on

00:49:03: this, in Amsterdam too. And you are also maybe working on a

00:49:06: foundation model. But do these people talk to the safety people?

00:49:10: I don't think so.

00:49:14: So and what's even worse, I think not even the machine

00:49:18: learning people talk to the knowledge based reasoning people.

00:49:21: There are some efforts that try to rediscover these communication

00:49:25: channels, but they are actually missing. So from cognitive science, we

00:49:29: know that a system should have both worlds embedded into the

00:49:32: reasoning environment. So there should be a fast system, right?

00:49:36: for recalling information that it learned through,

00:49:39: let's say a large language model. And there should be a slow

00:49:43: system that identifies errors and fixes them. And then there

00:49:47: needs to be something in the middle, which mediates between

00:49:50: them. And this is what humans call metacognition. But we are not

00:49:53: far away from that, actually.
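
A toy sketch of the fast/slow/mediator pattern described here (hypothetical structure for illustration, not DLR's implementation):

```python
def fast_system(query: str) -> str:
    """Cheap, learned recall, e.g. a large language model's first answer."""
    return f"plausible answer to: {query}"

def slow_system(answer: str) -> bool:
    """Deliberate check, e.g. rule-based or knowledge-based verification."""
    return "plausible" in answer          # stand-in for a real validity check

def mediator(query: str) -> str:
    """Metacognition stand-in: accept the fast answer only if the check passes."""
    answer = fast_system(query)
    return answer if slow_system(answer) else "escalate to deliberate reasoning"

print(mediator("pick up the workpiece safely"))
```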

00:49:54: Your safety developers, do they talk to your machine learning

00:49:58: developers?

00:49:59: Yeah, so the communication is very tight, I think, both

00:50:03: within the development community and also with the larger

00:50:07: world you talked about, the universities bringing in new models.

00:50:10: What is your opinion on these models all around the world?

00:50:12: I think it's amazing, right? It unlocks so many new things that

00:50:15: we can do with robots. It solves some of those fundamental

00:50:18: gaps that robots have been struggling with forever, which

00:50:21: is handling the variety that the real world faces. So there's

00:50:26: no doubt there are some unique classes of applications that

00:50:29: really, really benefit even today, something like

00:50:32: logistics, where everything you handle is different. Like every

00:50:36: parcel that flows through logistics is basically

00:50:38: different. And foundation models are brilliant at that: the

00:50:42: accuracy requirements are modest, right, but the required

00:50:46: stability capabilities are extreme. So these kinds of

00:50:52: innovations make automation within logistics grow exponentially,

00:50:57: faster than any other business at the moment. So all of this is

00:51:00: great. And for those, we're having some quite close

00:51:03: dialogues with the adopters of these kinds of systems, and about

00:51:06: safety. I think the good thing in my mind is that it does not

00:51:12: have to be that closely related just yet. Because deploying

00:51:16: automating logistics handling processes using gen AI has the

00:51:20: exact same safety requirement as the other robot application.

00:51:23: Right, because of course, motions will be unpredictable, they

00:51:26: will depend on whatever the gen AI model produces. But in the end,

00:51:30: it's all about keeping environments and equipment and

00:51:33: especially people safe. And we have well existing

00:51:36: technologies for that. We have well existing technologies for

00:51:39: managing emotions, also keeping it in within safety envelopes of

00:51:45: the systems and so on. And we appreciate having even fully

00:51:48: dynamic applications in this way. So I think there's some good

00:51:51: dialogues going, I agree that having them fully safety rated

00:51:55: models that can predict how to pick up a parcel and do that in

00:51:59: a fully safety rated way where everything is predictable is

00:52:03: something else. But I also don't necessarily think we need to

00:52:06: slow down innovation and adoption of these models to solve

00:52:09: that problem just yet, because they're still safe in the way

00:52:12: we do them today.
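A minimal sketch of that separation of concerns, with made-up numbers: whatever motion a generative model requests, a classical downstream rule limits speed as a function of distance to the nearest person.

```python
# Hypothetical sketch: the gen-AI planner may request any motion;
# a classical safety layer enforces the envelope downstream.

def speed_limit(distance_m: float) -> float:
    """Allowed TCP speed in m/s given distance to the nearest person."""
    if distance_m < 0.5:
        return 0.0   # protective stop zone
    if distance_m < 1.5:
        return 0.25  # reduced, collaborative speed
    return 1.5       # full speed, no person nearby

def safe_velocity(requested_mps: float, distance_m: float) -> float:
    """Clamp the model's requested speed to the safety envelope."""
    return min(requested_mps, speed_limit(distance_m))

# The learned model can be arbitrarily unpredictable; the clamp is not.
for d in (2.0, 1.0, 0.3):
    print(f"person at {d} m -> {safe_velocity(2.0, d)} m/s")
```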

00:52:13: I was very happy when I checked the agenda of this day,

00:52:15: because I saw your name. And I did a recording, I think two

00:52:19: weeks back, with Dennis Stogl. He's a ROS guy, and he

00:52:23: was in Odense at ROSCon. And then we discussed one topic,

00:52:28: and I need to ask you this question, because I do not

00:52:31: understand what you are doing there, because you showed

00:52:34: a cobot and you then put a GPU next to the cobot. Does that make

00:52:40: sense, putting a GPU next to a cobot?

00:52:43: It absolutely does.

00:52:44: Yes, why?

00:52:45: To basically stimulate the growth we just talked about, right?

00:52:47: There are a lot of opportunities that modern AI can unlock across a

00:52:52: lot of industries. And we're even seeing it today. I think

00:52:55: today, 89% of manufacturing

00:52:59: executives... It's all about costs, right? A GPU costs a lot. When I talk

00:53:02: to guys who sell robots, they say, oh, at the moment,

00:53:05: companies are buying Chinese robots because they are cheap.

00:53:08: They buy them for three years, and after three years they throw them

00:53:11: away and say, oh, we will buy a new one. But that's a

00:53:14: cost topic, right? A GPU.

00:53:15: A GPU is a cost topic, there's no doubt about that. And

00:53:19: modern GPUs are costly equipment. On the other

00:53:22: hand, I think what really matters is total cost of

00:53:25: deployment. Like earlier today, we talked about how the key barrier

00:53:29: to adopting more automation is reducing overall deployment costs.

00:53:32: At the same time, we need to have more flexibility. We need to

00:53:35: have better reliability. We need more performance. And we need

00:53:38: things that are easier to use. Modern AI solves most of those

00:53:41: things. There's no doubt that some of the hardware might be more

00:53:44: expensive. But we also know that the key cost drivers of

00:53:47: deploying automation are labor, integration labor,

00:53:51: technical problems, programming time, deployment

00:53:54: time. All of that, with the advent of gen AI and so on, can move

00:53:58: us so much more in the direction of deploying things faster,

00:54:02: having them run more reliably, delivering more end-customer

00:54:05: value because of flexibility. So I don't think we should be

00:54:08: obsessed about the price tag of individual hardware

00:54:11: components. We should think about what total deployment

00:54:14: costs would actually be, operational cost over lifetime, and so on.

00:54:18: That's difficult to argue at the moment, right?

00:54:20: Well, I don't know. Because in the end, even for a manufacturing

00:54:23: customer, they do not see the bill of materials of whatever they

00:54:26: buy. They see the cost of the end turnkey solution. And if that's

00:54:29: realized with a single robot and a GPU, or if it's realized with

00:54:33: a robot and 9,000 hours of manual labor and programming and

00:54:37: deployment time, well, for them it actually doesn't matter. They

00:54:40: don't become happier customers if they pay a lot of

00:54:43: bespoke engineering hours. But of course, it's all about

00:54:46: value; it needs to provide value to

00:54:49: customers. And at least we see today that some

00:54:52: things, like logistics, automating logistics, would be

00:54:56: entirely impossible at the scale we're seeing it happening

00:54:58: at the moment without accelerated computing.

00:55:01: Then it does make sense.

00:55:02: Yes, I think so. So I mean, consider that you want to have

00:55:06: your models trained in the cloud, and you want to have an edge device to

00:55:09: just recall what the model learned, right? So therefore, I don't think

00:55:12: that you have just one GPU for one robot; eventually we'll

00:55:16: have several, right? So it makes a lot of sense.

00:55:19: What is your opinion?

00:55:20: I mean, I tend to agree. It also depends on the size of the GPU.

00:55:23: So I mean, with integration density still rising, those

00:55:26: things will become cheaper as well. So if you now have a prototype

00:55:29: with a GPU...

00:55:29: But from my point of view, scaling comes at the

00:55:33: end when it comes to models or large language models. What is

00:55:37: your opinion on that? Maybe we should focus on smaller models,

00:55:40: on more precise models, on more domain-specific models, which

00:55:47: maybe do not need a GPU.

00:55:48: Yeah, yeah, I'm an advocate for that. So you should not

00:55:51: always need an end-to-end model, right? I think that

00:55:56: skill-based programming is still a valuable approach for industrial

00:56:00: manufacturing. And one of the skills may very well be a model,

00:56:04: a VLM or whatever foundation model, doing a very

00:56:10: specific task. And then, on the other hand, you may also have

00:56:13: good old-fashioned engineering as another step in the sequence, as another

00:56:16: skill.
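A minimal sketch of that skill-sequence idea, with hypothetical interfaces: one skill wraps a (stubbed) learned model, the next is a deterministic, classical motion primitive, and both compose into one sequence.

```python
# Hypothetical sketch of skill-based programming: learned and classical
# skills share one interface and are composed into a testable sequence.
from typing import Any, Callable

Skill = Callable[[dict], dict]

def vlm_locate_part(ctx: dict) -> dict:
    """Skill backed by a (stubbed) vision-language model."""
    ctx["part_pose"] = (0.42, -0.13, 0.05)  # would come from the VLM
    return ctx

def move_linear(ctx: dict) -> dict:
    """Classical skill: deterministic, verifiable motion primitive."""
    ctx["at_pose"] = ctx["part_pose"]
    return ctx

def run_sequence(skills: list[Skill]) -> dict:
    ctx: dict[str, Any] = {}
    for skill in skills:
        ctx = skill(ctx)  # each skill is testable in isolation
    return ctx

print(run_sequence([vlm_locate_part, move_linear]))
```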

00:56:17: Does AI help us, or your community, to improve safety or to find

00:56:25: new approaches when it comes to safety?

00:56:28: That's a tough one.

00:56:29: Improve safety? I think it's still underused to that end. So I

00:56:35: think it could improve safety, because it just helps us to keep

00:56:38: an overview of all the complex things. I mean, if you have ever seen

00:56:41: a GS entry for person detection, which would cover, I think, the

00:56:44: entire wall here, then it's very difficult for us to

00:56:47: even understand and grasp all the details. So there it can

00:56:50: definitely help. I'm not sure if it's already used on a broad

00:56:53: level just yet. Whether it can find new approaches?

00:56:59: Improve and help develop, definitely. I mean, it

00:57:03: would also need to surprise us, because in the end safety

00:57:07: is also about getting acceptance from someone, like

00:57:10: demonstrating freedom from unnecessary risk. So yeah, maybe

00:57:14: a bit philosophical in the end. I'm not so sure.

00:57:17: And what's your opinion on that?

00:57:19: No, I think it does. There's no doubt that

00:57:22: the advancement that happens in AI sparks a lot of need for safety

00:57:26: innovation. I do believe they can be treated independently. You

00:57:29: also need, from the safety community, to think about what it

00:57:33: takes to embrace the advances in technology, all the

00:57:37: flexibility requirements that come from AI. And today

00:57:42: it's not about hardening the systems we already have. It's

00:57:45: all about how we work with risk in a structured way while

00:57:49: solving the flexibility problems that AI systems

00:57:54: will inherently impose on us. I think that's the challenge that we

00:57:58: in the safety community need to accept and think creatively

00:58:01: around: how to offer new-generation solutions, how to

00:58:05: manage this, which is often maybe a deeper integration of safety

00:58:09: technologies, a combination of safety technologies. And

00:58:13: it should certainly not slow down the pace of innovation that

00:58:16: happens in the AI communities.

00:58:17: Because, I mean, after thinking about it for a short while, I think

00:58:21: it could help, and maybe even re-establish some things that

00:58:25: are already there. Because for a long time, I think safety,

00:58:28: especially safety research and safety engineering, was quite ahead of

00:58:31: what is applied in the real world. And some of the methods never

00:58:35: really made it into practice for different reasons, like the

00:58:38: interfacing of things and so forth. You already mentioned

00:58:41: dynamic management of safety distances, for example, and we

00:58:45: might see a, yeah, a resurgence of things that were already

00:58:51: developed to some degree, but are now being applied because there's

00:58:54: a necessity. So in that sense, it might actually help us to

00:58:57: unlock new capabilities in the safety engineering sense, because

00:59:01: we now finally have a way to apply them.

00:59:02: You want to add something, Daniel?

00:59:04: It's actually the same with old-fashioned AI: knowing that

00:59:08: we have flaws in the new generation of AI, in generative

00:59:11: models, we will resort back to the old methods at some point, as a

00:59:13: companion to the new AI wave.

00:59:16: Today we heard something about safety pipelines, that we need to

00:59:20: establish safety pipelines. I think the guy from Bosch spoke

00:59:23: about safety pipelines, that you always have to re-assure

00:59:27: that your product is still safe. We know that from machine

00:59:30: learning pipelines, model pipelines, DevOps pipelines, security

00:59:34: pipelines. Is there a new market emerging to build infrastructure

00:59:38: for all these pipelines? And who builds these pipelines? Is it you

00:59:43: who builds this pipeline for the customer? Who will do that?

00:59:46: That's a really, really good question. And there's

00:59:49: no doubt that individual products need to maintain safety, and

00:59:52: with the velocity of product development happening,

00:59:56: finding even more structured ways to do safety qualification and

01:00:00: safety validation is a need. Speeding up time to market in

01:00:05: safety product development is by no means a thing that we have

01:00:09: well under control today. I think time to market in general for

01:00:12: safety products is probably too long to keep up with

01:00:15: the innovation that happens in other places.

01:00:17: But your customer will ask you.

01:00:18: Absolutely. So it is a problem in the safety

01:00:21: technology community that needs to be addressed. Like, how do we

01:00:24: bring safety-rated products to market faster? How could we solve

01:00:27: some of these things and maybe reduce the reliance on

01:00:31: manual processes and deep technology reviews? And I think

01:00:34: that's what automated pipelines bring. If you think about

01:00:37: classical software engineering, automated pipelines bring

01:00:40: robustness, they bring early detection, fast. And

01:00:45: using these more actively in safety

01:00:48: product development would at least be important for the

01:00:51: evolution of products, to have a more software-centric way of

01:00:55: thinking about safety products. Because software products do

01:00:57: evolve, and software products need continuous integration tests.

01:01:00: And hopefully it would accelerate time to market as well.
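As a rough illustration, assuming made-up requirements and measurements: safety regression checks expressed as plain code, so a CI pipeline can re-run them on every model or software change.

```python
# Hypothetical sketch: safety requirements as machine-checkable CI
# gates, re-run automatically on every model or software change.

REQUIREMENTS = {
    "stop_distance_m": 0.50,   # worst-case stopping distance allowed
    "person_miss_rate": 1e-4,  # max detector miss rate on the test set
}

def validate(measurements: dict) -> list[str]:
    """Return the list of failed safety checks (empty = release gate open)."""
    failures = []
    if measurements["stop_distance_m"] > REQUIREMENTS["stop_distance_m"]:
        failures.append("stopping distance regression")
    if measurements["person_miss_rate"] > REQUIREMENTS["person_miss_rate"]:
        failures.append("person detection regression")
    return failures

# In CI this result would gate the release; here, a fake measurement set:
print(validate({"stop_distance_m": 0.48, "person_miss_rate": 5e-5}))
```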

01:01:03: Maybe there are new competitors on the horizon. When it comes to

01:01:07: infrastructure, I know a lot of US companies were really looking

01:01:10: forward to building infrastructure in industrial

01:01:13: environments. So what is your opinion on safety pipelines? And

01:01:17: is there an emerging market for that?

01:01:19: Maybe not a market, I would say, but from my research point

01:01:22: of view, there's a need for automated pipelines that react

01:01:25: really quickly. So we also need to reduce the reaction time.

01:01:28: Think about space robotics: there is a robot, maybe in a

01:01:33: space station orbiting Mars, 40 minutes away from communication

01:01:37: with Earth. If something happens, there's no room to wait for a

01:01:41: human to give advice, right? So the system, within

01:01:44: the pipeline, needs to react autonomously and make decisions itself.

01:01:48: What is your opinion, Carsten? Are you sure that the robotics

01:01:51: companies can build these safety pipelines?

01:01:53: I see no reason why they shouldn't be able to. I think the

01:01:57: main challenge right now is that a lot of the assurance

01:02:00: processes, as already said, are manual labor. And it's very

01:02:03: difficult to integrate them into a DevOps-like

01:02:07: pipeline. I mean, the process is then just as good as

01:02:10: the test cases that you have. And I think there's an ongoing

01:02:13: debate in a lot of standardization groups about what those

01:02:17: test cases, limits, and so forth should be. And until then, also

01:02:20: maybe with the idea of getting to market quicker, we have to

01:02:24: rely on, I would say, a good sense of human intuition about what

01:02:28: should be safe and what not, and argue not just

01:02:31: quantitatively in terms of test-case performance, but also about

01:02:34: qualitative properties. And that is difficult to automate.

01:02:37: Eventually we will get there, but I don't see this happening in

01:02:41: the next five years.

01:02:41: So you need to invest in this topic.

01:02:43: Yeah, it may very well be. So yeah.

01:02:45: So it's a platform strategy, right? So everybody wants

01:02:49: to build a platform. Maybe that's the key to get everybody on

01:02:52: the platform.

01:02:52: Absolutely. I think the key value driver of a

01:02:56: platform is solving real problems.

01:02:58: And that's a huge problem.

01:02:59: It is a huge problem. So yeah.

01:03:01: You mentioned smaller models, you mentioned the transformer

01:03:05: architecture. I'm a bit biased because I work for

01:03:09: NX AI and we talk about xLSTM and stuff like that. Do we

01:03:13: need new AI architectures?

01:03:15: I don't think they would hurt. Variety is good, I think.

01:03:19: And the question is, even though they are

01:03:21: new, what kind of new qualities would they bring to the table?

01:03:24: Is there anything fundamentally better or worse?

01:03:27: I mean, in the story that I told about the transformer and

01:03:30: a lot of the other things, the takeaway, for vision at least, was

01:03:33: that it's not necessarily the architecture, but some training

01:03:37: tricks or some pre-processing stuff that you apply.

01:03:39: And I think this is difficult to disentangle, but I definitely

01:03:43: think that there are a lot of problems that we are not very

01:03:46: good at modeling right now.

01:03:48: And one thing that I would like to see, I hinted at this earlier,

01:03:52: is not post-hoc explainability, but some sort of interpretability

01:03:56: by design. But as soon as you open that box, it's a lot of

01:04:00: engineering again. And I've already been told by other people,

01:04:04: this is like what we did 20 years ago, and that's not

01:04:07: modern anymore, we train end to end. It's a bit against the

01:04:11: stream, but I'm completely on your side when it comes to maybe

01:04:14: smaller models.

01:04:15: Daniel.

01:04:16: Yes, I think there is a need for different kinds of AI models,

01:04:20: for actual cognitive architectures. So this should also

01:04:23: have a revival, because if you think about humanoid robots,

01:04:26: right, they should react as you expect a human to react to a

01:04:29: certain issue. Just repeating what the robot did, if there's an

01:04:33: error, may actually not help, right? If you have an end-to-end

01:04:36: model, and the only policy that was learned is to try again,

01:04:39: it will not fulfill the task.

01:04:41: Yeah, I very much agree there, at least. I think a key criterion

01:04:45: for industrial applications is that a system has some level of

01:04:48: optimizability, that you can also investigate it and at least

01:04:52: grasp some amount of understanding of the fundamentals of the

01:04:56: process. So I think for successful, scalable

01:04:58: adoption in industry, it's important to think about

01:05:02: hierarchical models, at least models that represent different

01:05:06: elements, which you can go into and inspect. So I

01:05:10: do believe end-to-end learning will have a difficult

01:05:14: time in industrial applications, where there will be a need for

01:05:17: understanding, for refining parameters, for refining these elements

01:05:21: of the process, to optimize. If you're not satisfied with the

01:05:24: results, what do you do? That's a fundamental question

01:05:27: you need to understand, and retraining and hoping that you

01:05:31: get something better may not be the answer. So smaller

01:05:33: models focused on fundamental building blocks,

01:05:37: and composing them, which is very similar to how

01:05:40: humans do it as well.

01:05:42: Will we see a revival of reinforcement learning in the

01:05:45: next years? Because at the moment, nobody, well, not nobody, but only

01:05:48: a few people are talking about reinforcement learning.

01:05:52: I think you will have a combination of reinforcement

01:05:54: learning and foundation models. Imagine that a robot needs to

01:05:59: traverse a certain area and does not even know whether it

01:06:02: can traverse it. So it needs to try first. We don't want it

01:06:05: to try that in reality, because it will topple over, right? So

01:06:08: therefore, it learns in simulation through reinforcement

01:06:11: learning, generates a new policy that is then fed into the

01:06:14: foundation model. And now the foundation model is smaller than

01:06:16: before.
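A toy version of that train-in-simulation, deploy-frozen pattern: tabular Q-learning on a one-dimensional corridor standing in for the simulator, with the resulting greedy policy frozen for deployment.

```python
# Toy sketch: learn in "simulation" (a 1-D corridor), then freeze the
# greedy policy for deployment; nothing is learned in the field.
import random

N, GOAL = 6, 5                       # states 0..5, reward at state 5
Q = [[0.0, 0.0] for _ in range(N)]   # actions: 0 = left, 1 = right

for _ in range(2000):                # simulated training episodes
    s = 0
    while s != GOAL:
        if random.random() < 0.1:
            a = random.choice([0, 1])        # explore
        else:
            a = Q[s].index(max(Q[s]))        # exploit current estimate
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else -0.01     # small step cost, goal reward
        Q[s][a] += 0.1 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2

policy = [row.index(max(row)) for row in Q]  # frozen policy for deployment
print(policy)  # states 0..4 learn action 1 (move right)
```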

01:06:17: What's your opinion?

01:06:18: I mean, I think it even depends on where you apply reinforcement

01:06:21: learning. When you mention there's currently maybe not so

01:06:24: much of a hype: if I remember correctly, a lot of the fine-

01:06:27: tuning of LLMs is based on reinforcement learning with human

01:06:29: feedback. So I think that reinforcement learning has...

01:06:32: Let's talk traditional reinforcement learning.

01:06:34: You mean for control tasks?

01:06:35: Yeah, control tasks. Yeah.

01:06:37: Yeah, I mean, we kind of disentangled the

01:06:41: training and the deployment a bit. So you can train an action-

01:06:45: decision model with reinforcement learning, but you

01:06:48: don't retrain it in practice. And I think this is just learning an

01:06:51: optimized policy that you then deploy. This is also, I think,

01:06:54: fine. Reinforcement learning that is still learning in the

01:06:59: field? I have to look at the people who would need to work

01:07:04: with that, and I usually get an uneasy look back. So that is

01:07:07: why I am doubtful.

01:07:08: Let's talk about your safety community, as I would call it. Do

01:07:13: you need new skills in this safety community? Which skills?

01:07:17: Communication skills, potentially.

01:07:19: I was surprised how many people came up here and asked

01:07:23: questions. So communication, not in terms of asking questions.

01:07:27: But I mean, I still think that there is this natural divide

01:07:31: between machine learning people and safety people. And the

01:07:35: unfortunate thing is, I mean, both of them have to learn

01:07:37: communication skills, by the way, so this doesn't only apply to one side:

01:07:40: they have to learn how to talk to each other, because sometimes they

01:07:43: use the same words but with a very different meaning, and then

01:07:46: end up in a discussion about the meaning of those words and not

01:07:50: what they really want to talk about. So I think, for both sides,

01:07:53: it would make sense to understand a bit better what the other

01:07:57: side is about. So, what machine learning means and what the

01:08:00: impact on the safety argumentation is, instead of

01:08:03: just saying, I need 99.999%, and please also demonstrate how to

01:08:08: do it, deliver it. On the other hand, I also think that

01:08:12: getting a bit more into the safety mindset, and not just

01:08:16: thinking about optimizing a large model, also helps machine

01:08:20: learning engineers to actually develop better solutions, because

01:08:23: we don't care about the model, we care about the function in the

01:08:25: end.
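To see why "demonstrate 99.999%" is such a hard ask, a back-of-the-envelope calculation: with zero observed failures, showing a failure rate below p at 95% confidence takes roughly n ≈ 3/p trials (the rule of three).

```python
# Back-of-the-envelope: failure-free trials needed to claim a failure
# rate below p at a given confidence. With zero failures observed,
# (1 - p)^n <= 1 - c, so n >= ln(1 - c) / ln(1 - p), roughly 3/p at 95%.
import math

def trials_needed(p: float, confidence: float = 0.95) -> int:
    """Zero-failure demonstration test sample size."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

for p in (1e-3, 1e-5):
    print(f"p < {p:g}: {trials_needed(p):,} failure-free trials")
# p < 1e-05 needs roughly 300,000 trials, which is why "please
# demonstrate 99.999%" is so hard to deliver purely statistically.
```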

01:08:25: What's your opinion, new skills?

01:08:27: I do believe collaboration skills is a good term. And I think

01:08:31: you presented it very well earlier today, thinking about how we

01:08:34: have two communities, and we have very modest crosstalk across

01:08:38: those communities. But essentially, we should be solving

01:08:41: the same problem. We should all be striving for getting more

01:08:44: robotics, getting more automation, getting more modern

01:08:46: technologies deployed, to help drive society where it

01:08:50: needs to go. It needs innovative thinking on safety. It

01:08:54: needs innovative thinking on AI. And we're all sort of in that

01:08:57: same boat, right? We're not in two different boats sailing in two

01:09:00: different directions. So somehow embracing that problem and

01:09:05: challenge across communities, and actually trying to dive in and

01:09:09: combine and compose skill sets, rather than

01:09:14: pulling in two different directions, which essentially

01:09:16: leaves us at the exact same point. I think that's the challenge

01:09:20: our community needs to accept, that we actually jump in. And it's

01:09:24: a strong point we have in Europe,

01:09:26: where we have a deep understanding of safety

01:09:28: technology, and we have deep automation and

01:09:31: application domain expertise. So we also need to somehow join

01:09:35: forces there and drive that forward together.

01:09:37: New skills?

01:09:38: New skills are good, but we should not forget about the old

01:09:40: skills. So with ChatGPT, people no longer learn to

01:09:45: program. People no longer learn to solve inverse kinematics,

01:09:50: forward kinematics, right? The 101 of robotics, this

01:09:53: needs to be in the curriculum of any robotics study, I think,

01:09:56: and for safety it's the same, right? There are skills

01:09:59: that you just need, and you do not learn them if you just type

01:10:01: a prompt into an API.
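For reference, the kind of one-oh-one the speaker means, as a minimal sketch: forward and inverse kinematics of a planar two-link arm with assumed link lengths.

```python
# Minimal sketch: forward and inverse kinematics of a planar 2-link arm.
import math

L1, L2 = 0.4, 0.3  # link lengths in metres (assumed values)

def fk(q1: float, q2: float) -> tuple[float, float]:
    """Forward kinematics: joint angles (rad) -> end-effector (x, y)."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def ik(x: float, y: float) -> tuple[float, float]:
    """Inverse kinematics, elbow-down solution; raises if unreachable."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2), L1 + L2 * math.cos(q2))
    return q1, q2

q = ik(0.5, 0.2)
print(q, fk(*q))  # fk(ik(...)) round-trips back to (0.5, 0.2)
```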

01:10:03: At the end, I want to ask all of you: what surprised you

01:10:07: most today? Which quote, and why, or which fact, which data point, which

01:10:14: figure? What surprised you most?

01:10:16: So Anders said that only 2% of all tasks are automated

01:10:21: today. This surprised me a lot.

01:10:23: That was your presentation, right? Anders?

01:10:25: Carsten?

01:10:26: Yeah, same here, I would say. And then a follow-up; it was more of

01:10:31: a question-related topic: the whole idea that security does not work by

01:10:35: obscurity, but safety unfortunately still has maybe a

01:10:39: sharing challenge ahead, and we may have to rethink how we

01:10:43: deal with safety, what we can share, and what we should

01:10:45: collaborate on.

01:10:46: Anders?

01:10:47: Yeah, I think for me it was a broad conclusion. I had two main

01:10:51: observations. One of them is that I was surprised, in the

01:10:54: presentation we had from Bosch, by how dominant certification and

01:10:58: legislation are in autonomous driving, or across

01:11:01: automotive in general, because I do fear that would be an obstacle

01:11:05: to innovation. The other thing I realized is a bit of this

01:11:08: divide that we've been talking about: there's only a

01:11:12: modest amount of discussion around how to solve big and

01:11:15: important societal problems when we are discussing adoption and

01:11:19: the AI divide in between. It doesn't seem to be the main

01:11:23: conversational driver, which is in my mind a bit of a shame.

01:11:26: I was surprised by the figure showing that China is pushing safety and

01:11:32: regulation, such an enormous push from China and Asia. What is your

01:11:38: opinion on that, at the end, Daniel?

01:11:39: I was surprised, and I actually don't think it matches what we see

01:11:44: there, because there are so many products, especially robotics

01:11:47: products, used in production in China that I think actually have

01:11:52: a lower certification rate than what we are used to in Europe,

01:11:55: right? So I don't know how these two things fit together.

01:11:59: Carsten, do you have any idea?

01:12:01: Maybe they are trying to do the inverse of the EU AI Act and kind

01:12:08: of motivate local suppliers to provide better quality, at

01:12:13: least as a long-term goal, I'm not sure. But this might be one

01:12:16: direction. So it's not necessarily about their own

01:12:18: market, but about taking away the potential leading edge

01:12:22: of other markets.

01:12:23: And can you then sell robots in China?

01:12:25: Oh, yeah, difficult.

01:12:27: I think I have a colleague who said it well: the Olympics

01:12:29: of robotics at the moment are played out in China. So if you

01:12:32: need to learn how to win, that's where the game is. It's where

01:12:34: half of the world's robot market is today. And there's no doubt

01:12:37: it's a tough market with a lot of different driving factors

01:12:41: around the safety regulation topics.

01:12:42: I think, similar to what we talked about, in the domain of

01:12:45: robotics it has been, let's call it, a little bit more loose,

01:12:48: and it is now slowly starting to move towards something that we see

01:12:53: being more and more similar to Western standards in terms of

01:12:56: safety. It's interesting that the driving factors seem to be the opposite

01:12:59: of those inside automotive. I'm not sure

01:13:04: what the key driving factor is, if it's actually a way to try

01:13:07: to consolidate the industry and raise the quality

01:13:09: standards, improve exportability, or what the key

01:13:13: drivers are, but it's an interesting trend.

01:13:16: Thank you very much, Carsten. It was a pleasure to talk to you

01:13:18: about safety and AI. Thanks a lot.

01:13:20: Thank you as well. My pleasure.

01:13:22: Daniel, thank you very much for your discussion on AI

01:13:25: foundation models and safety. And thank you very much for

01:13:28: explaining GPUs on cobots to me. Anders, it was a pleasure.

01:13:32: You're very welcome. Thank you so much.

01:13:33: Thank you.

01:13:40: Robotik in der Industrie, the podcast with Helmut Schmidt and

01:13:44: Robert Weber.

01:13:45: (upbeat music)

01:13:47: (soft music)
