
 
Hey you!
Before you start transcribing this talk, please have another look at our style guide: https://wiki.c3subtitles.de/en:styleguide. If you have any questions, you can ask us directly or reach us at https://webirc.hackint.org/#irc://hackint.org/#subtitles or https://rocket.events.ccc.de/channel/subtitles or https://chat.rc3.world/channel/subtitles .
Please don't forget to mark your progress in the progress bar on the talk's page.
Thank you very much for your commitment!
======================================================================
 
 
[Music]
You're going to get a small glimpse of what infrastructure is being built up here every year, so please give it up for all of the people of our various infrastructure crews. We'll start with Leon and the 32C3 NOC review.

All right, good morning everyone. This is the NOC talk, so I'll start with the bad news: we had no sauna this year. The issues were quite interesting this year, because we had some problems acquiring backbone hardware, which was quite serious. Usually we had a sponsor who lent us a ton of equipment that we could use, so we had everything from one source, and that was really easy: we just sent a wish list, we got everything we wanted, and we could build a nice network with many features that we didn't really need but that just made it fun to have. This year, however, the sponsor had already agreed to do the same thing again, but then they had some internal difficulties, so they couldn't do it, and we learned that basically one week prior to buildup. I got a phone call, and we were basically left without any backbone equipment. So it was back to the roots: what had we done in the years before? We called many people and we got many offers from amazing folks who helped us out. We decided that basically everyone in the NOC would bring his FRITZ!Box and it would be fine. The temperatures were fine in all rooms, the FRITZ!Boxes don't have that much energy to dissipate. So this happened. Now, what we seriously did was some emergency planning. We got help from many amazing companies. Especially we have to thank teamix, a company in Nuremberg; they've been incredibly helpful, arranging stuff from their storage and talking to people about whether we could use their equipment, and we spent many hours on the phone late in the evening with a man from teamix, so that was really helpful, thanks for that. Other companies helped as well, especially Huawei, via a company called Ten ICT, who lent us some routers; SecureLink, which is another Netherlands company, helped us with equipment; and ECIX lent us a big router and switch which we used in the data center. So what we ended up with in the end was this equipment. It's basically a mixture of many vendors, which made the network very interesting to build, but it worked out pretty nicely in the end. We even got the Force10 router, the E600, which we had first used at 23C3, from a friend of ours, and the good thing about that is we don't have to give it back, so that was really useful. Yeah, do you want to continue with the backbone?

Yeah, sure. Hello, is this thing okay? I think it's working now, thank you. With all the equipment we got in, we went in and did, basically within one week, a complete redesign of the backbone. We knew of course which patch rooms we needed to go to and how many ports we needed in each patch room, but the whole VPLS routed setup we had before — oh yeah, maybe you can see this in the first two rows, or in the back rows if you have really sharp eyes, but it's also later on the slides. So yeah, we went in and redesigned the whole backbone. Basically, this year we just did a simple layer-2 backbone, no VPLS, nothing fancy, we just needed to get it running. The routing was mostly done on the Huaweis in the larger patch rooms, and, as in the years since we've been here in the CCH, we've been in the data center of IPHH, where we were connected to upstream ISPs and had a four times 10 gig WDM link back to the CCH here. And then we had a couple of upstream ISPs too. For the first time we had 10 gig from Deutsche Telekom here in the CCH, so for the first time since we've been here we actually had redundancy on the uplink: if the uplink to IPHH had failed, we would still have had 10 gig to Deutsche Telekom for internet traffic here in the CCH, which was pretty cool to have. At IPHH we were connecting to the ECIX internet exchange point and to Kaya Global, KPN and LWLcom, our three other upstreams. At ECIX we were mostly doing peering with the route server; later on there came peering sessions to Kabel Deutschland, and we had two peering requests from one ISP which didn't actually set up the BGP session, so we're still waiting for it. They still have a couple of hours before we shut down our equipment, so maybe we'll get it in the end.
[Applause]
Yeah, and with all the actually quite big ISPs we had as upstreams, we had some problems — or not problems, some challenges — balancing between those uplinks. We were moving traffic around a bit, moving some ASes from one uplink to the other and stuff like that, to balance it out and to find good connectivity for all of you, basically. In the end I think we managed to do this: we had quite few complaints, and the DDoSes were coming in pretty fast too, so I guess we were pretty successful with that.
[Applause]
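The balancing described above — shifting individual ASes between uplinks until the load evens out — can be illustrated with a toy greedy assignment. This is only a sketch of the idea in Python; the AS numbers and traffic figures are invented, and this is not the congress NOC's actual tooling.

```python
# Toy illustration of balancing traffic across uplinks by moving whole ASes.
# The AS numbers and traffic volumes below are invented for the example;
# this is not the congress NOC's actual tooling.

def balance(traffic_per_as, uplinks):
    """Greedily assign each AS to the currently least-loaded uplink."""
    load = {u: 0.0 for u in uplinks}
    assignment = {}
    # Place the heaviest ASes first so they spread across the uplinks.
    for asn, mbps in sorted(traffic_per_as.items(), key=lambda kv: -kv[1]):
        target = min(load, key=load.get)
        assignment[asn] = target
        load[target] += mbps
    return assignment, load

if __name__ == "__main__":
    demo_traffic = {"AS64500": 4200, "AS64501": 3100, "AS64502": 2500,
                    "AS64503": 1800, "AS64504": 900}   # Mbit/s, hypothetical
    assignment, load = balance(demo_traffic, ["uplink-IPHH", "uplink-Telekom"])
    print(assignment)
    print(load)
```

In practice such moves are of course made with BGP policy rather than a script; the sketch only shows why moving a few heavy ASes is usually enough to even out the links.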
So, after all those years — we used the E600 for many years in Berlin, then we didn't use it in the CCH, and we used it at camp again — this year we finally found its limits. This was always that one box that always stood up to the challenges of the congress, but this year I think we filled its TCAM table, which couldn't hold more than 16,000 IPv6 neighbor entries. Basically you have about three entries per device, so that's not that many devices; the box was just too small to hold all the v6 traffic. But we could mitigate that by moving v6 routing to another device we still had lying around, so that worked. Another issue was that, for the first time, we actually had a DDoS: someone tried to attack our network from the outside. Basically they filled two of our 10 gig uplinks with DNS amplification traffic, so that was one evening of some issues for us. It was going against one of the colo servers, so we don't really know if they were trying to attack the event or some specific server they didn't like, but we could mitigate it with the help of our upstream NOCs, so that worked.
[Music]
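A rough back-of-the-envelope check of the limit just described, using only the figures quoted in the talk (a 16,000-entry table, roughly three neighbor entries per device, and the Wi-Fi peak of about 8,150 clients mentioned later). The per-device breakdown in the comment is an assumption, not datasheet data.

```python
# Rough capacity estimate for an IPv6 neighbor table, using the figures
# quoted in the talk (~3 entries per device, 16,000-entry hardware table).
# Both numbers are the speakers' rough values, not datasheet data.

TABLE_ENTRIES = 16_000
ENTRIES_PER_DEVICE = 3   # assumed: e.g. link-local + global + privacy address

def max_devices(table=TABLE_ENTRIES, per_device=ENTRIES_PER_DEVICE):
    return table // per_device

def table_usage(devices, per_device=ENTRIES_PER_DEVICE, table=TABLE_ENTRIES):
    return devices * per_device / table

print(max_devices())                 # ~5,333 devices fit before the table overflows
print(f"{table_usage(8_150):.0%}")   # the quoted Wi-Fi peak alone would already exceed 100%
```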
What we tried to do with KPN this time, which is one of our uplinks, was to run a 100 gigabit connection to them. They arranged hardware especially to do that test with us, and we arranged hardware to do it, but in the end it didn't work. We suspected a broken optic, so FlexOptix sent us another one, which didn't work either, so we are currently suspecting a broken line card in their router or something. But yeah, in the end we were stuck with your grandmother's 10 gigabit ethernet again. That's fine, we don't need all the bandwidth, it just would have been nice to test the technology. Marcus?

Yeah, and then we had some problems with v6 routing on the Huaweis. Basically they were working quite fine in most locations, but in one of the patch rooms where we had a lot of VLANs terminated, it seems like they were running out of CAM, which is basically the memory where the addresses are stored, and we noticed it was mostly for IPv6 only. We had strange packet loss on some VLANs and not on others, so it was quite a hard problem to debug and to eventually figure out. When we figured out that it had to be CAM-related, we moved most of the routing of that one patch room back to the E600 — making sure that it was still below sixteen thousand neighbors, of course — and then suddenly the latency went back to normal on the IPv6 networks and everything was working quite fine. We didn't really find out what happened there, but it had to be something like that. And we had basically the same problem with the Wi-Fi network, where we had, well, we have the numbers, something like 8,000 users in there. We were routing that at the beginning on the E600, and there we were running into the CAM issues too, so we moved it to an MX80 — moving traffic around a little bit locally — and then it was fine. And then in between there were some people trying to be funny and sending some stuff to our monitoring servers, which took down some of the monitoring for some time, but eventually we found that out too and blocked the IP addresses, and then we could see that the whole network was working anyway, so that was fine too.

Yeah, and then we have some random notes, so to speak. We noticed that the Facebook traffic was really, really low; it didn't get into the top 20, top 200 of the ASes. We were joking that the most Facebook traffic we've seen was that one visitor walking around in a Facebook shirt. And on a quite interesting note, there was more v6 traffic from Facebook than v4 traffic, so that's good to see, somehow. For the colo, we saw some requirements this year like "oh yeah, we need ten times 10 gig" — no, it wasn't ten times, but some people actually wanted to have multiple 10 gig links for their servers. I mean, we are quite happy to provide connectivity and colo and do anything we can to provide bandwidth there, but there are some limits, so we think, at least for this congress, 10 gig was quite enough for one server in the colo. Especially since — I mean, people couldn't know that we had this problem with the backbone and everything else — but still, please keep your requirements aligned with reality, that's all we're saying. We learned some interesting new facts that people couldn't tell us when we asked them: we have OM2 multi-mode fiber in this building, and with the hectic arrangement of hardware before the buildup we had a situation where we had to run 40 gigabit ethernet links between some sites. Well, we would only have needed 10, but we only had equipment for 40 gigabits, so we had to figure out whether those links would work over this fiber that's not designed for this kind of use. The stuff is specified to run for 100 meters over OM3 fiber, which is better fiber than we have, and we were able to get it working for 190 meters on the bad fiber we had, so that was nice to learn. Yeah, another interesting fact that's barely visible on the slides due to a formatting screw-up: the maximum outgoing traffic was 21.4 gigabits per second — people always ask for that number — which is about four gigabits more than last year, so it's steadily increasing. Here's a pretty graph that just shows the load on our backbone links. We only noticed an hour ago that it would be fun to show: this is where we announced the end of the colo, or the shutdown of the colo, so basically all of you walked to the colo and probably got your servers out, so most of our traffic is gone now.

Wi-Fi. Wi-Fi? We have Wi-Fi. No, but really, we have Wi-Fi. What we were using this time, the same as last year, were Aruba controllers in a high-availability setup with 10 gig links towards the core. We had deployed 145 access points all over the building. We saw 20,000 unique clients and had a peak of 8,150 clients, so it's not over 9,000 — yeah, our Wi-Fi team didn't deliver on that, apparently. And, sorry, on a really serious note: we saw 40 percent of clients on the unencrypted network, which we really couldn't explain, because we were actually trying to push people onto the encrypted Wi-Fi. We had the 32C3 SSID and the 32C3-open SSID, and somehow 40 percent were ending up on the unencrypted open network, although we provided an Android app to configure the encrypted network, we had profiles for Apple and Windows, and basically we did everything we could just to get people to use the encrypted Wi-Fi — and still 40 percent decided, I don't know, YOLO.
[Laughter]
Well, maybe next year we'll see a decrease in this number; that would be quite fun actually. The obligatory username statistics, because as you know you can choose any name you want: number seven is an improvement from last year, when it was at number 11. Device statistics: this time Linux has won. And traffic statistics: we saw an average of three gigabits per second coming from the Wi-Fi — that's the sum of received and transmitted data — with a peak of 4.5 gigabits. Saal 1 and 2 were the busiest areas, obviously; they had a peak of 1.75 gigabits per second. In this deployment we see a much higher average bandwidth usage per user than in a normal rollout of this size, so it means you guys are much more active on the internet, for some reason. And we can see this in this pretty graph too: that's the airtime usage on 5 GHz in Saal 1. Basically everything is in use, we don't have any spare frequencies anymore, and we're trying to push as much traffic as is physically possible through those frequencies that we have.

So this year our wireless people decided to try to build a probe system, so they can actually monitor the quality of the Wi-Fi they've built. Before, we could only monitor it by "do we have complaints from the users", but this time we have actual measurements, so, data. They built five OpenWrt devices with wpa_supplicant installed and then ran automated tests on them: ICMP ping checks, can we get a DHCP lease, let's try to download this file via HTTP and see how long it takes, things like that. What you can see is: the download speed was limited to 15 megabits per second, and during night time, when the rooms were empty, you could actually get that speed, but once people came in, the speed would drop. So that's interesting data to have while designing the network for the next event or something.

We had a bit of problems in the Wi-Fi too, of course — or challenges. We had some performance issues with the E600 on the first day, which — because it was running out of CAM, apparently — was adding a lot of latency to the wireless network and basically to most of the network traffic going through it. When we moved the traffic away from the E600, the throughput basically doubled after day one, so we figured out then that we had reached the limits of the E600, as we told you before already. And we were hitting the physical limits in Saal 1, 2 and 3 most of the time, with the channel utilization you just saw, so we either need more spectrum or a more efficient usage of the spectrum that we already have. In this regard our Wi-Fi team is asking if there are people who have knowledge of this area, of running a Wi-Fi network for — how many — 12,000, 15,000 people, with the bandwidth we're having; they'd like to talk to you about how this can be done. If you have any ideas or whatever, get in contact with us. I mean, of course you can still talk to us, but we don't really need the experience of someone who has, like, a FRITZ!Box at home; we have that experience ourselves. We really need the experience of engineers and people who are running these high-capacity networks with lots and lots of users and bandwidth. And the last problem we had was the false radar detection. We had this last year already and talked a little bit about it: basically, when there's some noise coming in on one frequency, the Wi-Fi just shuts down, because it thinks there's radar from the airport traffic control or whatever coming in, and then it needs to shut down. This is a known bug in the Arubas or something like that, I guess, and it's still
not fixed; we're still seeing these problems, but apparently it gets better in the newer devices, which we didn't have, or didn't have that many of. Right, 50? Okay, so 50 percent of the people were able to continue using the Wi-Fi when those events came in, which is not so bad. Another topic is the people we need to thank.
[Laughter]
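The probe system described above — OpenWrt boxes with wpa_supplicant running periodic ICMP and timed HTTP download tests — could look roughly like the following minimal sketch. The target address, test URL and interval are placeholders, and association/DHCP are assumed to be handled elsewhere on the box; this is not the Wi-Fi team's actual code.

```python
# Minimal sketch of a Wi-Fi quality probe of the kind described in the talk:
# periodically ping a target and time an HTTP download, then log the results.
# URL, targets and interval are placeholders; association/DHCP are assumed to
# be handled elsewhere (e.g. by wpa_supplicant/udhcpc on the OpenWrt box).
import subprocess
import time
import urllib.request

PING_TARGET = "192.0.2.1"                    # placeholder gateway address
TEST_URL = "http://example.org/testfile.bin" # placeholder test file
INTERVAL = 60                                # seconds between measurements

def ping_ok(host: str) -> bool:
    """Send a single ICMP echo request and report success."""
    return subprocess.call(["ping", "-c", "1", "-W", "2", host],
                           stdout=subprocess.DEVNULL) == 0

def download_mbit_per_s(url: str) -> float:
    """Download the test file and return the achieved rate in Mbit/s."""
    start = time.time()
    with urllib.request.urlopen(url, timeout=30) as resp:
        size = len(resp.read())
    return size * 8 / (time.time() - start) / 1e6

while True:
    line = f"{time.strftime('%H:%M:%S')} ping={'ok' if ping_ok(PING_TARGET) else 'FAIL'}"
    try:
        line += f" http={download_mbit_per_s(TEST_URL):.1f} Mbit/s"
    except OSError as err:
        line += f" http=FAIL ({err})"
    print(line, flush=True)
    time.sleep(INTERVAL)
```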
We couldn't get a better picture, sorry. We would really like to thank the kind folks from the NOC helpdesk, who are always our firewall: they just take all your requests and complaints and try to help you, and they forward the serious stuff to us, so that we actually have time to work on the network. Thanks for that. And on a last note, the traditional thanks to the sponsors, because these are the companies who actually make this event possible. Especially in a situation like this year, when we were left without equipment, it was good to have so many companies that you can still rely on, who would jump in or would still provide equipment in their respective areas. So thanks for that again.
[Applause]
All right, I think we're done. Are there any questions? — Hey, thanks very much for the great work. Just two quick questions on the Wi-Fi in Saal 1: how many access points did you have in Saal 1 in total? And I was just wondering, were you using only a single channel per access point, or can the access points have several channels at the same time? — Thanks. Do you want to take that one? Here's our Wi-Fi man. — So, we were using 18 access points in Saal 1, and those were using pretty much all of the 20 MHz wide 5 GHz channels, so they were on different channels, yes. Each access point has one 5 GHz radio, so it will use up one 20 MHz channel. — I guess that's also the controller solution, right? I mean, they see each other and they distribute the channels — or did we assign them? — Exactly. — Okay, I'm being told there's another question from the Signal Angel, is that correct? — Yeah, the most interesting question is: how did you get peering with Deutsche Telekom? — It was actually transit, not just peering. Well, we asked some folks. — Okay, so it's not reproducible. — Well, we hope to reproduce it next year, but we'll see. — Yeah, I have another one, which is —
[Music]
Why is the wiki down? — I'd like to stress this again this year: we do not run events.ccc.de. — And I guess the last question from the internet is: what funny thing did some users do against the monitoring system? — Uh, I don't remember exactly; there were some weird packets that put heavy load on the machine or something. A SYN flood, apparently. — Okay, a SYN flood, easy, yeah. — Hi, a few years ago Nadia Heninger and I did a study where we found thousands and thousands of devices all over the internet that had weak RSA encryption keys, for things like SSH and TLS, because of bad random number generators, and we could compromise all of them for that reason. And I'm wondering whether you guys can help us with this, because a very large fraction of them were FRITZ!Boxes.
[Applause]
We're returning all our FRITZ!Boxes, so we don't have to do anything with FRITZ!Boxes anymore. — Do you think next year you could discourage use of the unencrypted Wi-Fi more, by giving the SSID a less innocuous name? Like, instead of "open", call it "unencrypted" or "insecure" or something? — Yeah, we can try that. We do have some success discouraging people from using the 2.4 GHz networks by giving them the "legacy" or even "slow" name or something, so that works. Yeah, we can probably do better than that. — Wireless question, specifically Hall 1: to get the network working, do you run the access points at full power, or do you have to reduce it to make the cells smaller? — Yeah, we have to reduce the transmit power so we get cells that overlap as little as possible. — Thank you. — I was wondering, next year, would it be possible to get better Wi-Fi coverage in the bathrooms? — You know, the funny thing is, this was on our list for this year, so I'm not quite sure what failed there, but, well, we're sorry. We are aware, and we're sorry. — Yeah, did you try to turn off the unencrypted network? — Well, we didn't try that. We know what would happen, but we don't really want to. We would like to discourage users, but we still want to provide connectivity for people who — I know, it's a hacker event, right? So if you want to, you should be able to, well, have your data sniffed or whatever. — Okay, one more question from the Signal Angel. — Yeah, how many abuse messages did you get? — I'm sorry, I don't have this number; this time around Kai, our abuse handling person, is coming up. — Hello, hello. Scroll, scroll. 248 mails, of which about 95 percent were automated, about port scans, because the network is still just used for stupid port scans most of the time. And we had, I think, roughly 10 calls, of which one was more or less serious and the other ones were really unimportant and easy to fix. I got one of those calls: someone had sent a spam email, which is of course not a good thing to do, but yeah. Okay, we're done.

Okay, okay. All right, who's next? Blank stares. Yeah, that looks like a VOC review. So, have any people left the room, do we have new free seats? I see two over here, one over here — because we have people standing in the back, so they can find a seat.
[Laughter]
All right, yeah, just keep coming, keep finding the free seats, people, keep your hands up. All right, then please give it up for the VOC review.

All right, hi. Is this on? Is it? Oh yeah, now it is, okay. So, some of you might know, last year we were more or less a virus operation center, with half our team falling victim to the congress flu. We tried to do better this year, and, well, it more or less worked: we have a few sore throats, but no casualties this year, so I think we did all right. The setup was more or less similar to last year. In each of the halls we had two to three cameras — three in the first two halls and two in Hall 6 and Hall G — being mixed with hardware video mixers and lots of equipment in between, which was mostly made by Blackmagic Design: video mixers, signal conversion like converting HDMI to SDI, scaling and stuff, and backup recording to SSDs and so on. We got requests to talk more about what the actual signal flow looks like, so I included one of the pictures, from Hall 2. I'll use the cursor, probably. This is where the speaker signal gets inserted; it goes to a switcher, which also outputs to the projector and can also loop back to the video mixer, which is connected to the cameras. I can't read that — oh yeah, and there's a scaler for feeding the output from the screen back to the video mixer as well, so we can take the info-beamer, which is the thing that plays during the breaks, and put it on the stream as well. One thing we do that might not be obvious if you haven't done video stuff before is that we embed the audio signal later in the chain, so the video mixing itself happens without the audio. We have a central place — I have a slide, the next slide is about that — where all the audio from all the halls gets collected, mixed and redistributed, and we only embed the audio after the mixing, which makes a lot of things easier but also causes problems from time to time, because it's not local: if there's a problem in the hall you have to call up to the central audio — "central audiology", we call it — and talk to them, and that caused some problems, but we were able to fix most of them at least in the releases, if not in the streams.

Yeah, so then we split the signal. One path goes to a backup recording, which is just recording everything onto SSDs — we hope we don't have to use it, but it has saved us a couple of times this year already, and we had that in previous years as well — and then the signals go to two separate boxes, one of which does the recording and the other one does the streams. And this is the control panel for the — oh, that's not the control panel for the mixer. That's Hall 2, which is slightly more complex than Hall G or Hall 6; Hall 1 is mostly similar. Oh, and we also have a GoPro input, which is C1 at the front, for when somebody has to or wants to show something like hardware and so on. So yeah, I think I covered everything, more or less. All right. We also have — that's the only picture I got — like I said, a big audio mixer which collects all the audio signals and all the translation signals as well, so we can balance the levels after the audio mixing done by the CCH itself, so we can react more quickly and do some stuff which wouldn't be possible otherwise, like ducking in the translation streams and so on. We also did the Sendezentrum this year again — we already had that last year — and we decided to do some, well, experimental stuff, for research. We wanted to have a software video mixer which can do HD for the smaller conferences we do over the course of the year, and one of our team was developing such a video mixer, which is called Voctomix. It is based on GStreamer, and we decided, well, we have to try it some time. We actually tested it before, of course, but this was really the first big conference where it was in use, and it worked surprisingly well. We had some minor hiccups, but I think there was no lasting damage, as in: all the recordings got through. So if you need an HD-capable software video mixer, it's open source, you can go there, and please send patches and so on.

So, about the recordings: we had about 133 hours' worth of talks in all the halls together. That doesn't include the Sendezentrum, because that was tracked as another project in our internal system; this is only Halls 1, 2, G and 6. We do releases in 8 formats: HD releases and SD releases, each of those in MP4, which is H.264 and AAC, and WebM, which in our case is VP8 and Vorbis, plus audio-only releases. And because — well, most of you, if you go to media.ccc.de and try to watch the talk in your browser, you might notice that you get, at random, either the translation or the original audio signal. This actually depends on the browser you're using and the phase of the moon and whatnot; actually the only browser which handles multi-language, or multiple audio tracks, in video files on the web correctly is IE 10, so that's pretty sad. So we also demux: we create additional releases which only contain single audio tracks, to cater to broken browsers. If you have issues like that — getting the wrong audio or something — just download the file and try it in VLC or something. In the end this amounted to more than a thousand hours of encoded material as actual files ending up on various servers; in CPU time that's probably about three to five thousand hours of encoding time. All right, the releasing status: this year we were allowed to record almost all talks but one, which was the play in Hall 1. When I made the slides we had already released 130 of those 152 talks; this number is probably higher by now. We'll try to get as many talks as we can out before we have to leave and the network gets cut down, so we hope that most of them will come out today, and the rest will hopefully come out soon, once our hardware has traveled to its home again and we've gone through the rest of the releasing process. So the talks which are not released today will hopefully come in the next one or two weeks, tops. You can go to media.ccc.de, or we also publish the talks on YouTube.

All right, streaming, on the other hand, was pretty similar to last year. We finally felt comfortable dropping RTMP, so it was only HLS and WebM, with the majority of the viewers actually going to WebM, because the only browsers which natively support HLS are Apple devices, so that was about 10 to 20 percent, I think. This year we were also able to offer HTTPS on all relays — for the website itself, but also for the streams. People have come to us for the past few years and requested that, so we built it this year, and that was mainly possible thanks to Let's Encrypt, because we have lots of servers, and either getting huge amounts of wildcard certificates or going through the process of getting like 50 different certificates by copy-pasting CSRs somewhere just isn't fun. So Let's Encrypt made that easy for us, which was quite nice. This is how the CDN looked. The servers you see at the bottom — that's 17, if I didn't miscount — are the ones which were serving streams to the users. This is our source, which is in a colocation space here in the CCH, and we have a distribution layer which just handles the load of actually pushing the data to the edge relays, because one edge relay gets fed with approximately 300 megabits, just all the streams, so we can't do 17 times 300 from just one server. That's why we have this distribution layer. Two of those servers were actually in the United States, so people coming from the United States were pushed to a local server. We also had a server inside this building and pushed people from the 32C3 network to that local server, to ensure that you have less round-trip time.

More or less, peak Fefe is still a thing, so you can see peak Fefe over there — that was like five and a half thousand viewers in Hall 1 — and we also have a little peak Jeopardy over here. So we ended up serving almost 20 gigabits of streams at the peak, and, well, less on average, something like 10 gigabits on average during a normal day, which was quite a bit more than last year. One of the reasons is that we actually enforced HD by default; last year we had HD as an upgrade path, so to say, and that seems to have worked. We pushed more than 100 terabytes of stream data to users altogether. And as it turns out, streaming to Deutsche Telekom isn't really fun: we had to actually push Telekom customers to a specific subset of servers which had less-than-shitty peering to Telekom. Actually, most of the Telekom traffic was served from the CCH itself via the NOC link, so thanks for that, it helped a lot. There are a few other teams which are, well, not exactly the video team but related, so I collected some statistics from them as well. First, the subtitling team, which this year again did live subtitling in Halls 1 and 2. They subtitled 82 talks, which was all of them, and the number of characters they typed was 2,014,021. They had — I didn't get these numbers in time for the slides, so I'll just say them — about 60 angels who did that, so that was great as well. And speaking about angels, we had video angels manning the cameras and the video mixers and so on; that was about 250, so well done, thank you. I tried to call the translation team, but they just didn't answer the phone, so I pulled these statistics out of our recording system. They are not complete, but they give you a rough idea of how the translations were distributed: we had three German talks which weren't translated and nine English talks, but comparing that with 105 English talks which were translated to German and 19 German talks which were translated to English, that's a pretty good ratio, I think. And also, I think it's the first time we actually released a video with three audio tracks: originally German, translated to English, and also translated to Schwiizerdütsch — that was yesterday's Hacker Jeopardy, if I'm not mistaken, in case you want to watch that. All right, if you want to know more details about how the post-processing works and so on, you can go to our wiki. The setup described there is not exactly what we do at congress, but many of the parts are still the same, so there's some documentation there, and there's also a talk from FrOSCon linked which explains some of it. So, see you at Easterhegg, and have a nice day.
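The "demux" releases mentioned above — single-audio-track copies of a multi-track recording for browsers that can't switch tracks — can be produced with a plain stream-copy pass. Below is a minimal sketch driving ffmpeg from Python; the file names and track labels are placeholders, and this is not the VOC's actual release pipeline (that is documented in their wiki), just the basic idea.

```python
# Illustration of producing single-audio-track "demux" releases from a
# recording that carries several audio tracks (e.g. original + translation).
# File names and track labels are placeholders; this is not the VOC's actual
# release pipeline, just the basic ffmpeg stream-copy idea.
import subprocess

SOURCE = "talk-multitrack.mp4"           # hypothetical multi-track master
TRACKS = {0: "orig", 1: "deu", 2: "eng"}  # audio stream index -> label

for index, label in TRACKS.items():
    out = f"talk-{label}.mp4"
    subprocess.run([
        "ffmpeg", "-y",
        "-i", SOURCE,
        "-map", "0:v:0",         # keep the single video stream
        "-map", f"0:a:{index}",  # keep exactly one audio track
        "-c", "copy",            # no re-encode, just remux
        out,
    ], check=True)
    print("wrote", out)
```

Because the streams are only remuxed, not re-encoded, this step is cheap compared with the thousands of CPU hours the actual HD/SD encodes take.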
[Music]
All right, so we're going to start switching speakers already and might do one or two very quick questions, because we are running low on time. So if you have a question — I mean, Signal Angel, go ahead, you already have a mic. — The first question from the internet is still buffering... What were the audio problems during the opening event?
[Music]
He would be able to explain more about that, but the audio problems during the opening — can you? — Yeah, some magic happened, and then it didn't do the magic anymore. So, yeah. Any other questions here? — Yeah, did you think about broadcasting the streams on DVB-T? — Yeah, we thought about that. We tried that last congress in preparation for camp, and we ended up deciding not to do it this year inside the building, because it's not really all that useful, since the propagation of radio inside a building is not all that good. And, well, the idea during 31C3 was to evaluate the technical problems, and we managed that, so we didn't do it again. We were thinking about doing broadcast, like putting an antenna on top of the building and broadcasting to Hamburg, but that didn't happen because of license — well, radio license issues. But we might try that next year. — Okay. All right, I think we're running out of time, and to give the other teams a chance to speak, we're going to move to the GSM review.

Right, hello, hello, hello, yes, okay, good, thanks. So, GSM: we had a test network again this year, but there's a GSM spectrum situation in Germany at the moment. The regulatory body, the Bundesnetzagentur, in 2015 auctioned away the test spectrum, the DECT guard band that lies between the frequencies used for GSM and the frequencies used for DECT handsets. Because this spectrum is in between the two ranges, it has traditionally been held free, because, you know, they might sort of bleed into each other. Anyway, now it's gone, it's been auctioned off, so we couldn't apply for a test license — well, we could apply, but we wouldn't get one — and it looked really bad, it looked like we would get no spectrum at all. So Harald Welte, one of the Osmocom developers, wrote a blog post about this, saying that it seemed like we would actually have no network because of this situation. And a few days later he was able to write another very good blog post saying that we would have a network, because somebody from Vodafone got in touch and said, well, actually this isn't being used just yet, so it's all right for you to use it one more time. They gave us permission, we applied at the Bundesnetzagentur, and we were assigned one more test license, so that was great. We had seven — well, some of them are still up — we had seven BTSs operating in the building; that's a little bit less than we've had in previous years. They were each on their own ARFCN, their own frequency, in the 1800 MHz band: one in each of the rooms 1, 2 and G, the larger rooms; one in the GSM room upstairs; one outside here in the foyer and one two stories up; and one over in the hack center. We did some new things this year: we implemented half-rate, the half-rate audio codec, which means we were able to accommodate twice as many calls as previously, that's good. There is also now a working open source implementation of this, which was indeed finished during congress. And we also operated GPRS for the whole event, on one timeslot on each BTS, so one eighth of the frequency capacity was reserved for packet data, and this was used and it worked. The thing is, if you do this, you have to determine in advance how much frequency spectrum you're going to allocate to packet data versus phone calls, and that's sort of fixed, it's not so easy to do dynamically. So we started small — let's see what happens. We had some subscribers — I'll show a graph in a bit — lots of SIM cards, and no stupid bots this year, that's good. Peak activity: 66 calls within a minute was the maximum load on the network, and 38 SMS messages sent within one minute. The total number of SMS messages was 15,939, so well done, and it seems that about 450 or so of them didn't actually arrive at their destination — maybe the phone was switched off for too long, or there was no subscriber with that particular number, who knows. There was some GPRS traffic as well: no gigabytes, right, but megabytes at least, let's start small. 116 or so megabytes received by the network, so sent by devices, and 475 megabytes transmitted by the network, so received by the devices. And some pretty graphs; this is a snapshot from a little while ago. The three on top I already went through: the number of active subscribers, the total number of sent messages, and the GPRS traffic, and on the bottom you see the channel loads, the activity of each of the seven BTSs we're using. Finally, what about next year? We don't really know; it doesn't look that great. It's pretty certain that Vodafone is going to be using the spectrum they've acquired next year, so: sad panda, sad face. If you have any way to help us out with some spectrum in downtown Hamburg between Christmas and New Year's in 2016, we would love to talk to you. This is really a unique and important opportunity to exercise the open source software and hardware that we have for cell phone technology. Maybe you caught Harald's talk on the new 3G work that is slowly getting off the ground; by next year I would be super excited to see that implemented. So if you can help, please get in touch. I mentioned SIM cards: as it turns out, we found half a box of SIM cards left over, so if you're interested in those, come find me after the talk. That's it for me; I guess we can take some questions.

Yeah, so we're going to start switching speakers already and in the meantime take a few questions. If you have a question in the room, run to one of the microphones; if the Signal Angel has one, he waves his hand, like he does right now. — Yeah, the question is: were there any special services provided via GPRS, and were there congress WAP pages, if you still remember those? — There were no special services implemented by us, and there was no WAP content provided, but if you're interested in doing that, please get in touch and we'll make sure to set something up. Oh, there's one more thing I should mention about the SIM cards: the SIM cards from 31C3 a year ago, the ones from camp this summer, and the ones from this event are all Java cards, and if you're interested in acquiring the keys on these cards, get in touch with us, we can give them to you — if you want to implement, for example, some SIM toolkit applets. There's also some source code that was released just before congress, if you're interested in working on that. — Okay, perfect. Maybe one final, very quick question. — Have you tried to combine the text messages from the voice-over-IP with the SMS? Because that didn't work, I think, but it could be interesting. — Have we tried to combine text messages over packet, you mean? — Yeah, there is voice over IP, for calls, but you can also send text messages with that protocol. — You mean, is it possible to send SMS — text messages? Okay, no, we haven't tried that; there is no SIP-to-SMS gateway. That's also an interesting project, if anyone wants to work on it; it should be pretty easy, the POC infrastructure is easy to integrate with. — Okay, thank you. — Okay, perfect, you are the last question: one-sentence question, one-sentence answer. — My question is: what kind of hardware was involved in this entire network? I'm mostly interested in the BTS. — The hardware we used are sysmoBTSs. And what was the second part of your question? — I was mostly interested in the BTS. — All right, yeah, so the BTS is a sysmoBTS, and it has an ethernet connection to a server where we're running the OpenBSC open source software to control all these seven BTSs. — Thank you. — Okay, perfect, thank you.

And we technically have two minutes left for the Silk Road, but we might run one or two minutes longer, so, yeah, please. — Yeah, hi, we are part of the team that set up this year's pneumatic tube network. We also attended camp — some people thought we weren't there, but we actually were. Our village got relocated, moved to another spot, so we weren't able to build the network we wanted; part of the reason was that there were security routes that we weren't allowed to cross. We did try to cross them: it turns out that the clay soil in Mildenberg is actually pretty hard to dig trenches into. We also have some statistics from camp that we couldn't show at the last infrastructure review: all in all, about 118 capsules were sent from our central node. Unfortunately, as most of you know, there was a little downpour at camp on one of the last days, so our measuring hardware, which we would have loved to use at 32C3, got somewhat corroded there, and we weren't able to use it. So this year's network looks like this. It's a bit larger than at last year's congress, but this year we were also able to handle all the traffic from a single central node. Most of you have seen the router upstairs; it's able to route all the traffic to the central node and back to all the nodes again. Some new developments we did for camp: we actually wanted to collect statistics from all the nodes, and since we planned a large network with nodes that are far apart, we figured we needed some kind of two-wire bus that could use cheap cable. So this is our two-wire bus — you can check out the code on GitHub, and the hardware schematics — and this is also the stuff we use to control the routers. Yeah, and because of that central approach we needed some communications; that's why some other guys set up field phones to every station, and they were heavily used, especially in the mornings when the smaller hackers were there, that was quite fun. And, yeah, that's our router. It was really fun: it had, well, one tube coming from the hack center and three other tubes, one going to the park, and the other one — yeah, we'll come to that later. It was really fun: because of that central approach we could provide suction as a service, and that got rid of the noise problem. We had no one, really, who could provide an electrical switch, so we chose another approach. And we had that third tube at the router, that was really fun; it was good for trolling podcasts, because it was on the stage at the Sendezentrum. That was almost everything. We would love to have more people who help us: if you want to help us get everything set up, come here on day minus three or so and just help. If you have nice ideas, contact us via the mailing list. And —
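The statistics bus mentioned for camp — a central node polling the stations over a cheap two-wire serial bus — could look roughly like the sketch below, assuming an RS-485-style adapter exposed as a serial port and a trivial line-based framing. The framing, station addresses and port name are invented for illustration; the team's real protocol and firmware are in their published code and schematics.

```python
# Rough sketch of a central node polling pneumatic-tube stations over a shared
# two-wire (RS-485-style) serial bus. The framing ("?<addr>" -> "<addr> <count>"),
# station addresses and serial port are invented for illustration; the real
# protocol lives in the team's published firmware and schematics.
import serial  # pyserial

BUS = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=0.5)
STATIONS = [1, 2, 3, 4]   # hypothetical station addresses

def poll(address: int):
    """Ask one station for its capsule counter; return None on timeout."""
    BUS.reset_input_buffer()
    BUS.write(f"?{address}\n".encode())
    reply = BUS.readline().decode(errors="replace").strip()
    if reply.startswith(f"{address} "):
        return int(reply.split()[1])
    return None

for station in STATIONS:
    count = poll(station)
    print(f"station {station}: {'no reply' if count is None else count} capsules")
```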
[Applause]
[Music]