Hey you! Prior to transcribing, please look at our style guide: https://wiki.c3subtitles.de/en:styleguide. If you have questions, you can either ask us personally or reach us at https://webirc.hackint.org/#irc://hackint.org/#subtitles or https://rocket.events.ccc.de/channel/subtitles . Please don't forget to mark your progress in the progress bar on the talk's website. Thank you very much for your commitment! ====================================================================== Hello. Well, you probably all know OpenWrt, and nbd is one of its main developers. Felix Fietkau is here to tell you about the last 10 years of OpenWrt. Yes, thank you. So the OpenWrt project was started a bit over 10 years ago, and I'd like to take this opportunity to reflect back on where the whole project came from and how much it has changed and evolved over time, based on the progress that we've made. So first, I'd like to tell you a bit about what happened in the early days. As many of you know, and as the name still implies, it started out pretty much as a firmware for the old WRT54G. The effort to create this firmware started when it was discovered that this device, built on a Broadcom chip, was using Linux as its base operating system. And initially it was a pretty huge GPL violation, because Linksys didn't bother to release any source code for the device.
But that, of course, didn't stop interested people from trying to hack it, from looking into the inner workings of the firmware, discovering its limitations, and then starting to create a replacement for that firmware. In the early days this was done based on the uClibc buildroot, which was a pretty small and flexible environment for quickly bringing up code for a new target, and uClibc had the advantage of being much smaller than glibc, which I think the initial device used. So it started out by basically taking the Linksys tarball once it was released, which only happened because many people, some of them members of the Linux kernel community, actively fought for the source code being released. There was a lot of compliance engineering involved in that effort as well, and some pretty prominent kernel people also took part in it. Back then, embedded Linux was somewhat popular in some areas, but it wasn't really all that widespread on routers yet. There was still the VxWorks operating system as an alternative, which had no GPL requirements and was much smaller than Linux, but also much less flexible. So when OpenWrt started out with the Linksys tarball, it still resembled a lot of the structure of the Linksys system when running, even though it was much more minimalistic than the original firmware, because we always had the intention of creating something new from scratch. The first step, once the code was running based on the buildroot and the Linksys tarball, was reworking it to get rid of this tarball dependency, which weighed in at a few hundred megabytes, I think, or maybe one hundred and fifty. So the first step was to get rid of that and basically only pull in the required parts, which were much, much smaller than the entire tarball.
And then the focus of the initial effort was also to update the Linux 2.4 kernel. This was in 2003 and 2004, long before we had a working 2.6 kernel. When I got involved in the project, I basically started with the first of our several build system rewrites. I put a lot of effort into creating something where you could package software for the system with minimal effort. And this then also led to the first actual release that we did. We came up with the idea of naming our releases after vodka-based cocktails, and our first release was called White Russian. The version number is also interesting, because back then we still believed in eventually coming up with a 1.0 release, and we felt that we were getting pretty close to it, so we called it 0.9. With that release done, we again had time to come up with more interesting developments to work on. And actually, after the White Russian release, we made lots of big changes. We shifted away from the Broadcom platform, which the Linksys device was based on. In the White Russian release we had actually supported many devices running on the same platform, but it was still very much based on the code of the Broadcom SDK. So we decided it was going to be multi-platform, and we rewrote the build system again, and I was again part of that effort. We made major changes to ensure that it's not platform-specific, it's not device-specific, and we're able to create more features and support more different platforms. And we actually also came up with our own configuration system, because the old system still very much inherited parts of the SDK structure, which includes the infamous NVRAM support: basically just a key-value store dumped into one partition, which has some device-specific quirks and is rather limited in configuration. And we didn't feel like bringing that legacy baggage to other platforms as well.
So we created our own configuration system, and we decided pretty much from the beginning that we want to have all configuration handled through a single system, at least for everything that's relevant on the system. Instead of going the traditional Linux way of just having everything configured in its own config file format and trying to come up with various clever hacks for automated modifications to those, we use our own central configuration storage, which was designed to fit basically 90 percent of the requirements of a system, but not to make it too complex or complicated. So we never tried to really cover everything with the single config system, because this would have added too much complexity to the whole system. And actually, complexity is one of our main enemies these days, so I think this trade-off was a good choice. With the transition away from the old White Russian code base and away from the traditional NVRAM configuration interface, we also had to basically rewrite our web interface from scratch. So it was lucky that some people from the Freifunk project, who also needed a new web interface for this new code base, decided to create something that not only fit their needs, but to make it a part of the OpenWrt system as well. So given the scope of these changes and how much we did in that short time, we decided — well, actually we decided before many of the changes were made — to call the release Kamikaze, because I think this is a fitting name, given how radically the project changed during that time. And again, here you see, it has a nice recipe and it's easy to mix, and you can have fun with the cocktail as well as the system.
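The central configuration storage described here is what current OpenWrt ships as UCI. As a hedged sketch of the idea — the interface name and addresses below are illustrative, not from the talk — a single typed key-value file replaces many per-daemon config formats, and a small CLI makes it scriptable:

```sh
# /etc/config/network holds typed, named sections instead of
# daemon-specific file formats, e.g.:
#
#   config interface 'lan'
#       option proto   'static'
#       option ipaddr  '192.168.1.1'
#       option netmask '255.255.255.0'

# The same store is editable through the uci CLI, so scripts and the
# web UI never need to parse anything daemon-specific:
uci set network.lan.ipaddr='192.168.2.1'   # stage a change
uci commit network                         # write it back to /etc/config/network
/etc/init.d/network reload                 # apply it
```

Because every tool goes through the same store, "what changed" is always answerable in one place, which is what makes the reload handling described later in the talk tractable.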
But then, after Kamikaze was released — and actually we did several releases with that name, because at that point in time we didn't have the time to come up with more fitting names for the individual releases, because we were so busy hacking on incremental changes — we decided that it was time to focus some more on stabilization. Of course, at the same time we also had the usual churn of adding more targets, more packages, and updating our compiler. But the radical aspect of the changes that we made in some ways actually backfired on us. So we again had a very fitting release code name. And then, after we did the Backfire release, it was again time to change or revisit some of the design decisions that we had made — for instance, the Linux 2.4 support that we still had because we needed the Broadcom wireless drivers. It was getting stale and it was getting really annoying to maintain in the system. At that time we had many targets running Linux 2.6, and we had one single target, the Broadcom target, still running Linux 2.4. In the meantime, a lot of development happened on replacing that binary driver with the open-source b43 driver. It's still not at the point where it can replace the binary driver, but in the meantime we also managed to get a binary driver as a replacement up and running on the Linux 2.6 target, though that one's not updated as frequently. So we decided, since we're going to do some more changes again, this time to focus mostly on the userspace rework, because we still had a system that was running a lot of shell code, and much of it was getting very slow and hard to maintain. It's easy to create some simple scripts with shell, but once you start making a bigger system with lots of configuration options, it gets cumbersome, especially if you have all these side effects of things that need to be restarted.
If the configuration changes, things need to be restarted. The network setup especially was getting very complex, because we started supporting many different configuration types with complex topologies of bridges, VLANs, and routing setups; we had complex firewall things, and it was just too cumbersome to keep all that as shell scripts. We also then started working on IPv6, because it's the protocol of the future and may well be for some time. And this is, again, reflected nicely in the release name, because a lot of our attitude about how we did things previously changed with that release. We had the initial versions of the userspace rework in the Attitude Adjustment release, but we're only now bringing it to its full potential with the development that's going on right now. So I'd like to take some short time to introduce some of the components that we've built for the embedded system, which deviate a lot from what you find in a traditional Linux distribution. Our system bus service is called ubus, with the name based on D-Bus. But unlike D-Bus, it's small. I think with D-Bus, the library itself weighs in at way over one hundred and fifty kilobytes or something like that, and you need lots of other things to get it running as well. We wanted to have something that's pretty much equally high-level, but more flexible for our requirements on embedded systems. And we now have something where I think the bus service itself is about 20 kilobytes as a compiled binary, and the library is maybe 13 or 15 kilobytes. It's still very high-level, and in some ways even more high-level than D-Bus. A while ago, I discovered a place in the D-Bus manual where it describes a capability of the system and says: if you need to use this, you're signing up for some pain. And that's what the official documentation says. So that wasn't really what we were interested in. We then built a second component to replace the old network scripts that we had.
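On a current OpenWrt system, the ubus described here can be explored from the shell. The commands below follow stock OpenWrt conventions (`network.interface.lan` is a real object on default builds), but the exact output varies by version, so treat the reply as an abridged illustration:

```sh
# Enumerate the objects currently registered on the bus.
ubus list

# Call a method on an object; both arguments and results are JSON.
ubus call network.interface.lan status
# Abridged, typical reply:
# { "up": true, "proto": "static", "device": "br-lan", ... }
```

Any daemon (or even a shell script) can publish its own object with a few calls, which is why the talk describes adding a new API to a piece of software as a ten-minute job.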
We now have a single network daemon that sets up everything we supported with the earlier scripts — the bridges and VLANs and interfaces with multiple addresses and all that stuff. Now we have one daemon that handles all of it. And unlike the shell scripts, it can handle the case where you just make a change to the configuration and tell it to reload, and it will do the minimum amount of work necessary to bring up the new configuration, without making it too hard on the implementor to handle that for every single protocol that we support. Because we sometimes also have complex configurations, with tunnels for IPv6, and we have PPP, and we have interfaces that come and go, and all that. This needs to be handled in a way where you just change the configuration and have the backend handle all these changes. So this is the piece of software that does all that. Another thing that we reimplemented: we have our own kind of systemd, but again much, much smaller and much more tailored to the embedded use case. It's called procd. Aside from being PID 1 and handling the tasks related to that, the only thing that it really does is keep track of the processes that were started by init scripts and make sure that they are restarted when they need to be. If the configuration changed and the init script is run again, this process-monitoring daemon can decide whether to actually restart the daemon, or, if maybe nothing changed on the system, it can just leave the existing instance running. And this fits into a pattern that we now keep repeating in many of the things that we do, which is that we merge the code paths for loading the configuration and for reloading it. That's a big problem with many of the existing pieces of software that we found: if they have support for reloading the configuration at all, it's usually only added as an afterthought.
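The restart-or-keep decision works because a procd-style init script declares the service's command line and the config files it depends on. This sketch follows procd's conventions as shipped in OpenWrt, but the service name and paths are hypothetical:

```sh
#!/bin/sh /etc/rc.common
# Hypothetical procd init script for a daemon called "exampled".
USE_PROCD=1
START=50

start_service() {
    procd_open_instance
    procd_set_param command /usr/sbin/exampled --foreground
    # Declare the UCI config this instance depends on: on "reload",
    # procd restarts the instance only if this file actually changed.
    procd_set_param file /etc/config/exampled
    procd_set_param respawn   # restart the daemon if it crashes
    procd_close_instance
}
```

With this, `/etc/init.d/exampled reload` becomes a no-op when nothing relevant changed, so the load path and the reload path are literally the same code.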
And it means that this code path is usually not as well tested as other code paths. If somebody adds new features, it's easy to forget to add them to the reload path after adding them to the load path. This is something that we're basically avoiding by design, to make sure that if people add something — and we do have many people in the community who know scripting fairly well and can tweak existing source code, but don't always have a good overview of how the entire system is built — if such people come and add features, they can do it in a way where, if they tested it and it works, then more complex things like configuration reload are handled properly as well. So they don't have to worry about too many side effects of the changes that they make. We are also working on configuration validation in the backend, because with the traditional firmware we had one web interface that supported automated updates to the configuration, and it usually did all the work of keeping track of what needs to be restarted, or what needs to be done to apply the configuration, by itself — which is way too complex. If you want to have managed networks, where you have a central piece of software with a database of the configuration of all the devices, and you want to support something like that too — or you have things like a TR-069 client, which is used by ISPs to automatically configure the routers — you want to make sure that pieces of software like this don't really have to worry about the intricacies of what needs to be done after the configuration changed. So we're doing all that in a single place, and that just makes it easier for everybody to deal with.
And actually, the web interface itself is also seeing some very big changes right now, because with the bus service you can basically add a new API to any piece of software in 10 minutes — or, if it's implemented in C, with very little effort. And you can use that API from anything running on the system. So it makes sense to decide that you may want to have things running in the browser that talk to components on the system. It's always annoying if, whenever you want to do something like that, you have to create a new API — you have to create, for instance, a CGI script to do the actual talking with the system services. So we decided we can give the browser direct access to things behind ubus running on the system. Of course, there are security concerns with that as well. But we decided we can handle all that simply by whitelisting: the web browser can connect, can get a token, and the router side can decide, OK, this browser may talk to these and these and these things running on the system, and nothing else. And this just cuts down the number of unnecessary abstraction layers that you need in order to get code running. So that just makes the code smaller, more high-level, and easier to use as well, which I think is a pretty good trade-off. We also focused a lot on our IPv6 integration efforts. We initially had things like radvd running on the system, which is used by normal Linux distributions, and we had some other components from regular Linux distributions as well. But we noticed that not only are these things very big, they're often not very standards-compliant either, at least in the corner cases. And they're really hard to integrate with the rest of the system, because all these components are typically designed to have their own kind of APIs, their own config files. And there's not much thought put into making these things integrate well with another system that may have a slightly different design.
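In current OpenWrt, this browser-to-ubus path is exposed as a JSON-RPC endpoint on the web server. The sketch below follows the shape of OpenWrt's ubus-over-HTTP interface (an all-zero session ID for login, then a granted token in later calls); the address, credentials, and object names are illustrative:

```sh
# 1. Log in via the session object to obtain a token.
#    The all-zero ID is the anonymous session used only for login.
curl -s http://192.168.1.1/ubus -d '{
  "jsonrpc": "2.0", "id": 1, "method": "call",
  "params": ["00000000000000000000000000000000",
             "session", "login",
             { "username": "root", "password": "secret" }]
}'
# The reply carries a session token; router-side ACLs decide which
# objects and methods that token is whitelisted for.

# 2. Use the token to call a whitelisted object directly -- no CGI glue.
curl -s http://192.168.1.1/ubus -d '{
  "jsonrpc": "2.0", "id": 2, "method": "call",
  "params": ["<token-from-step-1>",
             "network.interface.lan", "status", {}]
}'
```

The browser-side JavaScript does exactly what these `curl` calls do, which is how the UI logic can move to the client without inventing a new API per feature.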
So we basically wrote new implementations from scratch. We have our own DHCPv6 and router advertisement client. We decided to handle all that in userspace, instead of letting the kernel code do its thing, just to make sure that we can track all the prefixes that we get. We can make sure that the kernel does not do weird things that mess up the routing tables, which we also manage in a central piece of software. And we want to make sure that if you connect a router and you want to run it as a regular IPv6 router, and you get a prefix, then that prefix is automatically redistributed to other parts of the network — if you want that, and if you specify it in the configuration. So you don't have to know too much in advance about what the setup is going to be. You can just say, OK, if I'm getting a prefix, just redistribute it to LAN — or you have multiple networks on different interfaces and you just want to set up some routing between them. These are things that, with a typical Linux distribution, are always a bit of a hassle: you first have to figure out what config files to use and what services to run. We want to make sure that most of these things are really handled well by default. And we also handled the whole HTTP side with our own server. We didn't find any good small HTTP servers that fit into the space constraints of a typical router, because we still run on devices that have four megabytes of flash and thirty-two megabytes of RAM. And if we have to waste something like four or five hundred kilobytes just for a simple HTTP server, maybe just because it needs a particular SSL library, then that's really a waste of precious space. And the more code you have, the harder it gets to review all of it. So with the security implications of these pieces of software, it's good to have less code, to make sure that it can actually all be validated at some point.
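The "get a prefix, redistribute it to LAN" behavior is driven from the same central configuration in current OpenWrt releases. As a hedged sketch using the stock option names (`dhcpv6` protocol, `ip6assign` for downstream assignment; interface names illustrative):

```sh
# Hypothetical setup: request a delegated prefix upstream and hand
# sub-prefixes to LAN automatically.
uci set network.wan6=interface
uci set network.wan6.proto='dhcpv6'    # userspace DHCPv6 client requests a prefix
uci set network.wan6.reqprefix='auto'  # take whatever delegation the ISP offers
uci set network.lan.ip6assign='60'     # carve a subnet out of any delegated
                                       # prefix and announce it on LAN
uci commit network
/etc/init.d/network reload             # the network daemon applies only what changed
```

No routing daemon or radvd config file needs to be touched by hand; the prefix tracking described above decides what to announce where.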
And as I mentioned earlier, we're also working on a new kind of web interface. We previously had our web interface written in Lua, and we had a very complex templating setup, so the router side was actually generating HTML code on the fly, with normal forms and things like that — pretty much Web 1.0 stuff. So we decided we want to migrate this incrementally. With access to much of the system through the ubus API, through JSON-RPC, we can actually put a lot of the complex logic that typically comprises the user interface completely on the client side, and get the nice additional benefit of allowing people with no experience with embedded devices to come up with their own UIs — simply by eventually creating a documented, limited set of APIs that you can use to do pretty much everything interesting with the router. Then you have normal web developers who can just look at this API and be mostly familiar with it, because JSON-RPC is fairly common in some areas of web development, and they can create all this by themselves. Because if you look at the landscape of available developers, there aren't that many who know embedded systems well and know web development well and still want to do web development. The way that we set up the system actually makes things much faster, because there is a lot of work involved in templating HTML, and even mobile devices are getting much faster JavaScript engines. So we might as well use all that processing power to create better-looking and better-working UIs, add more modularity to all of this, and free up resources on the router side, because the space constraints are still pretty tough in some places. And again, we came up with a fitting name, I think, for the next release that we're working on, because in many areas we're breaking new ground in terms of what a typical router design looks like.
And we're breaking a lot of barriers — existing limitations of what routers do — and freeing ourselves from much of the structure of the legacy devices, where we already, I think, did a pretty good job of getting rid of existing structures and existing design patterns. But it's time to come up with our own coherent set of software. So I think the release name reflects that nicely. And of course, all of these names are legitimate cocktails, and you can enjoy them as well once you're done with your other work. In the third part of the talk, I'd like to talk a bit more about what effects the OpenWrt project had on the rest of the industry, and how the rest of the industry influenced the way that we work as well. When the WRT54G was still somewhat popular, but already losing popularity compared to other devices — especially with the focus shifting away from the Broadcom platforms — we actually had some talks with some representatives of Linksys. I think they decided to send us some marketing people, and they basically asked us the question: why is our product popular? They had noticed that there was a lot of popularity attached to the WRT54G, but they did not understand where it came from. And they desperately wanted to build on that success and create another product just like it, one that would have the same success. But the talks broke down pretty quickly, because they didn't understand that it's actually a diverse set of communities doing different things with these devices, and that the whole open source aspect is very important as well. And then they eventually came up with a device that they called the successor of the WRT54G — well, not officially, but they told us that this was the product they came up with based on the input that they got.
And we were pretty much let down by it, because when they initially released it, it was full of proprietary code for which we had no replacement at that point in time, and they didn't do much in terms of efforts to open up that code. So it took quite a while for the community to slowly adopt this device, and progress was too slow for it to gain much popularity. In addition to that, they used some chips that were rather quirky, and that didn't help adoption either. Since those talks, we've also seen some strange adoption of OpenWrt by a few ODMs. They basically took the system — and we only figured this out by looking into the tarballs of routers and seeing where they came from and what company is mentioned in the actual source code. The router manufacturers typically don't develop the pieces of hardware themselves. They just give their requirements to ODMs, along with some ideas of what the branding is going to look like — some nice graphics and some nice design rules — but they don't do the actual development themselves. So it's up to the ODMs to create the working products. And as we can see in all the source releases, they really don't have a lot of experience with that. So we've seen some ODMs adopt OpenWrt, but instead of reusing the components that we made, they actually took most of the properly working and stable parts and decided, for some reason, to replace those parts. And some of the other parts that they kept were actually the ones that we consider to be more fragile. So in the end, they tried to recreate their own system, using OpenWrt only as a build system. And the resulting product looked really horrible in terms of code structure. It really showed that
they did not understand what they were working with. But they also had no intention of contacting us or building a working relationship with us, even though we could have told them many things that would have saved them a lot of trouble in reimplementing the code. They decided that not talking is better, because they always want to build their own competitive advantages — which usually, in the end, turn out to be disadvantages. So we decided at some point that we need to go higher up the food chain, because the ODMs don't understand the technology well, and in many cases they have to rely on the software they get from the chip manufacturers, because otherwise they're going to have a lot of trouble getting support for the things that they build. And the chip manufacturers, luckily, are much fewer in number than the ODMs. So if we can get our software injected there, it easily trickles down to the rest of the market. We actually had — or have — working relationships with a few of them, each with its own kinds of difficulties, depending on which vendor you're talking about. We had some work done in collaboration with Qualcomm Atheros; Lantiq also decided that they want to abandon their own SDK and use OpenWrt instead; and we're now starting to get a bit more contact with MediaTek as well. So I want to talk a bit about what the issues are in collaborating with those chip vendors. I think one of the most obvious ones is a complete difference in motivation. Given that we've been doing this for a few years, and we've decided we want to continue for a while longer, one of our main focuses is code quality and the long-term health of the project. I think this is a large part of what made the project successful. But this is something that you don't see going on very much inside big companies.
One thing I see over and over again is that there's always a very, very strong focus on getting the next product out as fast as possible and making it as cheap as possible. So cost reduction and time to market are the primary concerns. And it's very hard to go to a company and tell them: we have this thing, and we're focused on code quality, and this is going to make your long-term business very successful — when all they're focused on right now is just barely getting the next product out the door, planning to figure out another plan for the future afterwards. But the main issue with their approach is that once they get the product out the door, they immediately face the pressure to get the next product out the door. So there's never any time left to really focus on the long-term overall health. Another very big issue with these companies is that there's always a lot of red tape and bureaucracy going on — and the bigger the company, the more of it we get. It's always very hard to convince them to do things in a particular way when they have lots of reasons not to do what we're saying, which they call company policy. This pretty much fits in with another big aspect here, which is licensing and intellectual property issues. In many cases they want to have their competitive advantages, and they realize that hardware isn't the only thing that can differentiate them on the market. They cannot always rely on competing on price to get their product sold, because then they reduce their profit a lot as well. So they decide: if we're going to do this hardware, then we might as well create some software advantages that will make sure customers only buy our product and not our competitors'. There's only one problem with that: they aren't particularly good at creating software for their hardware.
And so this is also a big reason why they eventually decided to talk to us: they realized that ODMs were actually bypassing their SDKs and building something based on OpenWrt, even with the limitations attached — namely, that they'd have more trouble getting support. So if their customers are doing things like that, there must be a pretty good reason for them to do it. And I think they often still don't understand the full scope of what's going on there. A large part of this is, at least with some of the vendors that I've talked to, that engineering resources and projects are controlled by marketing, and marketing has the least understanding of what's going on on the technology side. But even aside from that, there are also development process issues inside the companies. You have marketing allocating resources for various short-term projects, but then they can't even get comparatively simple things right — like branching. In one of the companies that I did some consulting work with, I noticed that they were doing all of their development either in release branches or in customer branches. Then sometimes, when they had a bit of time, they occasionally merged chunks of it into the main branch. But it meant that the main branch was never actually usable for anybody, and the merge conflicts kept getting bigger and bigger. So often they did merges between individual release branches, and merges between individual customer branches, and there was nobody actually keeping track of a mainline development branch. To make sure that features added in one branch were actually kept in multiple branches, people usually had to do the same work twice, or even more times, and nobody was really paying attention to where this code went. And in some cases, we've even seen that they made multiple mainline branches based on different business units, where I think 90 percent of the code was the same.
But for some business units, they decided that they wanted to have a different directory layout. So they moved files around and changed the file names. And whenever they had to do merges, they always had to manually mangle the changes between the different branches to make sure that the layout somehow fit. I don't know why they went to all this effort for pretty much no gain at all. And this was one of the companies that only had dysfunctional mainline branches — all of the development was going on inside the customer branches, but they still needed multiple mainline branches that never really worked. But even aside from the chipset manufacturers, there's also other interesting related project activity going on. One of the most important ones is obviously upstream integration of our patches. We have several people working on target support for a particular system-on-chip, and most of the targets that we're working on are actually not integrated into upstream Linux yet. So it makes sense for the same people who do the development inside the OpenWrt project to also add mainline support for these targets. We also tend to do the same with at least the packages that have accumulated many patches. For instance, I maintain the ath9k driver inside of OpenWrt, and I typically do the development simultaneously for OpenWrt and for upstream. We want to make sure that in OpenWrt we get the changes that we make into the tree as fast as possible, because we have a lot of active users who regularly check out our development releases. But we also want to make sure that the big changes we make don't go stale in our repository. So we're working on many things simultaneously in the upstream projects as well.
And we have other interesting related projects. One of the biggest ones, or at least the one that has been with us for the longest period of time, is the Freifunk project, which was an early adopter back in the WRT54G days. They're building community mesh network projects, and of course they're using our system a lot, because what they're building is OpenWrt plus a few extra things. So we've always had a very good relationship with that project, and we get a lot of feedback from them, because they test many things that other, simpler use cases usually don't get to test. Which brings me to another interesting project. I'm sure many of you have heard of the bufferbloat problem, where packets accumulate delay over multiple hops in the Internet, because it's always cheaper to add more memory than to actually fix the queueing or add more bandwidth. The bufferbloat project maintains CeroWrt, a reference router stack that fixes the bufferbloat issues. It's built upon OpenWrt, with strong collaboration, and it's actually used to prototype many things that later on make it into the OpenWrt project. It's also doing a lot of research on IPv6 routing, and on how to put it into an environment where you have multiple routers and you don't have this one big layer-2 domain for everything. And since I mentioned IPv6, it's also worthwhile to mention that there are two, I think, competing IETF projects that deal with IPv6 routing at home, HIPnet and Homenet, and it's nice to see that both of them are based on OpenWrt. And yeah, this is pretty much what I have for today, and I hope you've brought some good questions for me. Hello, I'm an engineer.
I know that everybody has to make a living. So what is your suggestion to companies who want to have some benefit in their software, some extra features, so that customers will pay a higher price for it? Many companies don't have a chance otherwise, especially here in Europe. So what should they do to keep some benefit, some extra features in the software, while following your way of open source? What should they do? Very good question, thanks. One of the suggestions that I keep making is to split the development into two parts, so that you have one part that takes care of the individual products that need to be finished quickly, but you also have a separate team, which can be a smaller number of people with more technical expertise, that takes care of the long-term health of the project. Whenever you create a new product, you actually start from the latest state of the long-term branch, and then do everything you need to do to get the product out and finished quickly. Then you can have the long-term team look at the changes that the individual customer projects made, and make sure that it's all cleaned up, re-architected, and fits into the overall long-term health perspective. I have another question. You said that you do the development for some drivers in parallel, for the mainline kernel and for OpenWrt. What are the challenges with the drivers that are not in the upstream kernel? These are typically either things that are too specialized for the mainline kernel, or things that are simply not finished yet. There's typically always a reason: either the mainline people will not like it, because sometimes it's a hack, sometimes it's something that basically only we need and other people don't.
And sometimes it's just because something is not finished, but it happens to work well for us. Thank you. One question about acceleration: lately I've seen some Linux-based devices that have some sort of acceleration or offloading engine for IP routing or firewalls. Is it possible to support this in the same way in OpenWrt? Is it planned, or is it even possible? It's planned. It's something that I plan on researching extensively next year. I want to see if it's possible to integrate this properly with netfilter, in a way that could possibly be applied upstream. But I still have to research more about the side effects of that, because such changes will be pretty intrusive, and I need to make sure that they don't hurt other normal use cases inside mainline Linux. Does OpenWrt sign binaries? I was wondering what plans there might be for that. No, we don't sign binaries yet. We have to do a lot of work on our release build infrastructure, which will happen next year. But I think one of the reasons why this is not that important is because, at least as I see it, the OpenWrt binaries are mostly something you can use for testing. If you're going to use it a lot, you want to build your own anyway, because OpenWrt is mostly targeted at other developers, and the whole end-user use case is mostly an afterthought, because I think the developer aspect is the one that's more missing on the market. And a related question: is the source code signed? We're using git, and you can use the git hashes. OK, thanks. Well, when I say we're using git, we're not using it as our main repository storage, but we usually point people that want to download it at the git repositories. We just happen to use SVN internally, and it's automatically synced. What about using OpenWrt on non-router devices? I saw previous efforts, I even closely followed the effort to kind of assimilate Android.
I really love the idea of having a general-purpose build system for devices which may need an initial network connection and may not offer a display, like NAS devices and stuff like that. Would you prefer that to be developed in forks of OpenWrt, keeping OpenWrt itself mostly targeting network devices, or is there any effort going on to expand the supported device types? It depends on what kind of development it is. If you have lots of packages that you need for your particular use case, then we want to have this as a separate package feed instead of being developed inside the OpenWrt repositories, because we are actually trying to offload much of the package maintenance to other feeds and other communities. With the limited number of people that we have, it's just too much to do all the work ourselves. But we make sure that the package feed list is actually maintained inside our repository, so you don't have to go around digging to get all the packages. This is a part of the work we want offloaded. But if it's something where you need to make changes to the core to make things work, then we of course prefer that to be done in the OpenWrt code base. So we are aware of the different kinds of use cases that people have for our code base, and we want to make sure that we don't have unnecessary forks of the main code base. Hi there, I have two questions. My first question is: I know that, with another project, the developers and users have sometimes been at odds over things like project purity. Have there been any efforts to collaborate with that project? I'm actually also working with those guys, and I'm working on making sure that they will not have to maintain their hackish code base for much longer.
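As an aside, the package feed mechanism mentioned in the answer above can be sketched concretely. The fragment below follows the style of OpenWrt's feeds script and its feed list; the feed names and repository URLs are illustrative examples, since the exact ones vary between releases:

```shell
# Illustrative feed list, in the style of OpenWrt's feeds.conf.default.
# Feed names and URLs are examples only; real ones differ per release:
#
#   src-git packages https://git.openwrt.org/feed/packages.git
#   src-git luci     https://git.openwrt.org/project/luci.git

# From within an OpenWrt buildroot checkout, fetch the feed indexes and
# make the feed packages selectable in menuconfig:
./scripts/feeds update -a
./scripts/feeds install -a
```

Instead of `install -a`, a single package can be pulled in by name, which is how the externally maintained packages stay out of the core repository while remaining one command away for users.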
And my next question also relates to other projects. I know that inside OpenWrt, for example, there has been a driver for a wireless card that has existed solely in the OpenWrt kernel patches for a very long time. Are there many efforts to upstream patches like that? Typically those efforts are made by the people that added the package in the first place. If it's a patch that somebody sent a long time ago, and that somebody isn't really active in the OpenWrt community, we typically don't go around looking for these kinds of things to mainline them, because that would just be too much work for us at this point. But when we create something new, we try to create it in a way where we can easily upstream it, even while it's being developed. Hello, have you tried to base your build system on something like Scratchbox, or to use QEMU emulation? We haven't really tried it, because we don't think we really need it. We also support some architectures where there is no decent QEMU support, and we have many devices that don't really work well with QEMU. So we want to make sure that we don't add additional restrictions, or give you a different user experience if you're using a target that isn't supported by QEMU. In some ways, the build system already pretty much does what we need for these devices. I've designed most parts to be as easy as possible to change in various ways, and we don't feel like abandoning our build system in favor of another one, because we don't see the advantages at this point. Are there plans to rename the project? You don't support the WRT54G anymore, so maybe it's time to rename. Well, the "WRT" could stand for other things, like wireless router technologies or whatever. So we don't feel like changing the name at this point. I just wanted to say thanks for doing so much work on this project.
This is actually how I got into Linux. You're welcome, thanks so much. We seem to be out of questions, so thanks, Felix. Thank you all for listening.