RIPE 86

Archives

Plenary session
23 May 2023

11 a.m.

DIMITRY KOHMANYUK: Good morning, everyone, we are still filling up this huge room, so we should have decreased it, or maybe increased the screen size.

All right, so this session has three blocks, so to speak, we have one extra large talk, three regular talks and three lightning talks. We have three presenters.

BENNO OVEREINDER: Thanks. Should we wait for a minute, or would you rather have me start?

This is so important. The introduction is so important. Otherwise everyone is lost.

So, some introductions, actually. I should also have updated the presenters. Originally, Marten should be on stage. Marten works at NLnet Labs; unfortunately, he is excused from being here due to an unfortunate, unexpected event. I'm not the expert that Marten is, so instead of a very in‑depth discussion of how things went after the last RIPE meeting in Belgrade, where he gave a presentation at the Open Source Software Working Group, and I think also at Cooperation, about a new piece of legislation, the Cyber Resilience Act, I'll give a somewhat more high‑level presentation to keep everybody on par. Bastian will say a bit more about the status and the follow‑up steps, and Robert will present another initiative, another regulation, which is product liability. So we go a little broader rather than purely in depth.

All the credit to Marten, all the failures and mistakes are mine.

The CRA ‑‑ who is familiar with the CRA here? One third. So, with the CRA, the European Commission tries to regulate products with digital elements. And what are digital elements? Actually, that's everything with hardware and software. And it's about bad security: they want to reduce the number of security holes, etc. That's the intention of the CRA.

And you are probably familiar with the CE marking. They actually want software to have a kind of CE stamp: it has been audited, it has been approved, and it can be put on the European market as a software product.

But there are a number of issues here. We're not against security, we're not against security regulation, but we are for good regulation, and half a year ago we definitely identified room for improvement here.

How to go about this?
So, we specifically look at open source software here, not software in general on the European market, but open source. I will come to that later, but, well, our own interest: at NLnet Labs and ISC, we make open source software, and there are some specifics that are very interesting for us and have a serious impact on our organisations and on how we produce and maintain software.

And I think a lot of the Internet is running on open source software. So it's not only us feeling an impact, but also the users and other people here developing great open source software for routing, for DNS, etc. So it has a real impact on producers of open source and users of open source.
We talk about software and hardware, and there are a number of exceptions. Not covered is open source. Okay, and that's interesting; we'll come to that later. Services ‑‑ running, for example, a DNS server ‑‑ fall under NIS 2; some of those discussions have already gone through the RIPE community, so I won't go into that. And there are also outright exclusions: for example, medical equipment already has its own regulations, so the CRA won't apply to that.

Back to the non‑commercial projects, including open source. That sounds good. Again, we want to produce secure software, including secure open source software, but the exemption only holds if the open source is not part of a commercial activity, and that's the whole crux here: it's very difficult to define what commercial activity is, and we'll come back to that later. So what is the CRA, how does it work?
Through the lifetime of a product ‑‑ the design and development, the implementation period, the maintenance ‑‑ at some point there is what is called a conformity assessment: it's just checking whether you meet the requirements or not. There is a long list; I removed the bullet points here for lack of time.

But, there are a number of requirements you have to meet before you are allowed to put your product on the market.

So who does this assessment? Well, 90% of the products are in the default category. There are some examples of how it works. They do self‑assessment, and that's fine; it's also something the open source developers actually do, because they want to put open source, secure open source, on the market.

But for the critical classes 1 and 2, there are different ways of doing this assessment: either the application of standards or a third‑party assessment. The application of standards we'll present a little bit later; the third‑party assessment means external parties doing an assessment of your software. And again, speaking for myself and for ISC: we develop DNS software and other software close to the routers ‑‑ really infrastructure, industrial software. So our guess, our analysis, is that if we have commercial activities, then we are in the critical class 2 category, and we need third‑party assessments.

Again, there is a lot of legal talk here, so I want to introduce at least some terms. The New Legislative Framework is kind of what the EC uses internally as a guide book. It talks about manufacturers, notified bodies, etc., etc. So I want to identify a number of these definitions and how they relate to the third parties we talked about.

In this New Legislative Framework, which the EC uses for creating new laws, acts and directives, the third parties are called notified bodies: institutes ‑‑ we have the German one, and DEKRA, and there are many, many more. But as far as we know, there is no third‑party assessment organisation for software. So this is something completely new. That has to be defined, and when this becomes real law, European law, these kinds of institutes have to be created, with all the expertise. So this is something we have to think about.

Then we have the notifying authorities ‑‑ in the Netherlands it's the (Dutch) ‑‑ and they also have to accommodate this new law.

So where are we now? In '23, about halfway. Bastian will go more in depth on where we are in the process, but what's important is the standards, because the requirements are a little high‑level. The standards organisations will give the exact rules we have to apply, and those still have to be defined. That will start after '24, and then within 24 months, I think, there should be harmonised standards. That's very optimistic: there is nothing yet, and in two years we are expected to have standards we must comply with. So that's also a concern for us. And the standards are not about security; they are about a kind of security assurance, about the feeling that it's secure, not so much about how we try to build software that is actually secure. That's a little bit the tension in the Cyber Resilience Act: what it tries to achieve, and what it does to us, is more a feeling that everything is secure than that it's really secure.

For open source: well, with hardware products ‑‑ you know them, washing machines, telephones ‑‑ they have the CE mark, and it's clear who owns what. For open source, take the simplified example of the Linux kernel: it's not clear at all. Who is the owner of the Linux kernel? Is it Linus, or is it kernel.org, where I can find the source of the Linux kernel? There are also many, many developers contributing from different companies. So this whole CE marking for products, which they want to apply to open source software, doesn't fit well with a large group of people collaborating. There are many, many things not very clear here.

So how does it affect FOSS? Going back to commercial activity: are we exempt? FOSS is exempt if it's outside the course of a commercial activity. So how do we define commercial activity?
Well, open source is free, so it's not a product on the market. But what some of us do ‑‑ ISC and NLnet Labs and other companies too ‑‑ is provide support, support contracts, and that is, in that sense, technically a commercial activity. So under the current definition, we would be in scope. And there are all kinds of risks here: this expansive interpretation of commercial activity narrows down the organisations, or the open source projects, that are exempt, and that makes it dangerous. I will skip all the implications here ‑‑ you can look at the slides ‑‑ I have to speed up.

And there is also the Blue Guide, which is part of the new legislative framework; commercial activity, and standalone software as a product, are not well defined there ‑‑ software appears more as part of a larger product. So the definitions are not aligned yet. All kinds of problems here as well.

I think it's good to go to Bastian for where we are now. Kind of summarising the current situation: things are under discussion. In the past half year there has been a lot of outreach to Members of the European Parliament and to other organisations to make our point that, with the current interpretation of commercial activity, open source would be in the CRA, with all kinds of risks. Innovation would lose, because if you, as an individual, get some kickbacks, some money, some funding to do open source development, it can be considered a commercial activity and then you are regulated; and for small software developers, companies or individuals, that can be a reason to stop contributing, stopping innovation, etc. So that's how we identify the risk.

Bastian, please take it away.

BASTIAN GOSLINGS: Thank you. Hello. Hi there, my name is Bastian Goslings from the RIPE NCC. I joined the Public Policy and Internet Governance team just a year ago, and this is one of the files I have been involved in. In a relatively quick overview, I will try to share what has happened since the presentation on the CRA in Belgrade at the last RIPE meeting and since the input we provided, and give you an overview of the current status, what's happening, and what's next.

So, thank you, Benno, for your great introduction. When we saw the initial proposal from September last year, it seemed like good intentions for improving cybersecurity within the European Union; of course, harmonising and creating legal clarity sounds like a good thing. The risk‑based approach seems very sensible; security by design, yeah, we like that. And there is also a part that you didn't necessarily go into, Benno: the regulation also requires that manufacturers and other distributors and importers that bring products onto the European market provide consumers with clear information about those products.

So, all in all, it sounded like a good thing.

But still, after reading it, it raised a number of questions, and the European Commission gave external stakeholders the opportunity to provide input and reflect on this proposal, so that's what we did as the RIPE NCC too. On the 23rd of January, we submitted a response; there is a short URL there where you can read it if you have not already done so. This response was not only on behalf of the RIPE NCC ‑‑ how we read the proposal, what we thought of the scope and the intentions, etc., and of course how it could potentially affect our services: the way we read it, Atlas, for instance, is in scope, not considered a critical product, but still, that would have implications. And last but not least, we also went to the community, using the respective mailing lists, to see what you all think of this, and that's actually where the open source component came in and was flagged.

So we brought that to the Commission as well.

This is another screenshot of the response we provided.
And I want to go into the community concerns; when it comes to the RIPE NCC part, please read that. With regard to the community concerns, it was all about the open source component and the exemption there, which in itself seemed like a good thing, but it just raised a lot more questions, as Benno just illustrated. Like: how does it work in practice?

As was mentioned, the exemption only applies when software is developed or supplied outside the course of a commercial activity. How is that then defined? You go to the Blue Guide and that doesn't give you a clear answer. The terminology being used here is, as was mentioned, part of a framework, but how does it actually apply to software, and more specifically in this case to open source software and everything that surrounds its development and publication?
The way we read it, and what we also heard from the community, is that this could potentially have a huge impact on smaller entities, developers as well as individuals working on this. If you then fall, for instance, into a category of critical products, that has a significant impact in terms of compliance and the costs that come along with it. What does that mean ‑‑ are people going to stop developing? Are they going to stop placing products on the European Union market? What does that mean in terms of innovation, everything that the European Commission is advocating for? And another thing that came back from the community is that the emphasis should not necessarily be on the type of licence associated with the product but on what it is actually used for. If a bank takes part of an open source repository and uses it for a certain service, you could consider that much more critical in terms of potential impact if something goes wrong than if you, as a private individual, hobby around with it on your private website.

That was with regard to providing the European Commission with input. Recently, we also sent a message to the European Parliament, especially members of the ITRE Committee, the leading committee working on this ‑‑ again, to raise our concerns, to flag the concerns with regard to this so‑called open source exemption, and also to refer to what we consider improved text, which is actually something the Council, the Member States, came up with.

That brings me to this slide. Some of you may know how this works: initially, the European Commission comes with a legislative proposal. There is work being done before that, right ‑‑ stakeholders are consulted, an impact assessment is done, and a need is defined for new legislation to be formulated ‑‑ but once the Commission comes with its proposal, it is published and that's it as far as they are concerned. People can provide input and reflect on it, but the proposal itself is not going to change.

Then we have the two other entities I already referred to: the Member States together in the Council and, on the other hand, the European Parliament. They also look at this proposal, reflect on it, and come with their own proposed changes, so‑called amendments. That is another process being worked on, and it is actually almost finalised. When it comes to open source, the Council has already reached a compromise text, and we are now in the midst of the European Parliament, especially the ITRE Committee, discussing all the amendments from the different parties that have been tabled with regard to the CRA, in this case specifically with regard to open source.
So, as I mentioned, the Council has already reached its compromise text; the slide is there if you would like to read it. In the European Parliament the pressure is on: they are really working hard to finalise this. For them, tabling new amendments at this stage is no longer possible, but if you do feel the need to flag concerns that others have already raised ‑‑ just in terms of quantity, keeping this on the agenda and having it go in the right direction ‑‑ it could still make sense to reach out to your local or regional MEPs, or to MEPs from certain parties: either ones whose position you feel sympathetic towards and want to reinforce, or others where you think this is going the wrong way and you want to provide them with arguments to take another stance on this.

As I mentioned, I included the text. This is the compromise text from the Council, and the proposed text from the latest draft report; there are numerous other amendments. Summarising, it seems to be going in a better direction than the original proposal itself, but we're not there yet. I'm looking towards Benno: do you think it makes sense to take questions on the CRA now, or shall we go to Rob first?
ROBERT CAROLINS: I am going to do a bit of a context shift here, but clearly someone is driving this as well.
In a little less than five minutes, I am going to introduce you to a topic that's sort of the legal equivalent of teaching you everything you need to know about lattice cryptography in order to get ready for quantum computing attacks ‑‑ but in less than five minutes. As a group of engineers, I feel you will be up for it. Very, very simple thing: you can read this slide at your leisure later on.

To introduce this context shift, I want to talk to you about the Product Liability Directive, the PLD. As Benno said, the CRA is not just about creating software, it's about creating an environment of security assurance: the cost and process of chasing that little CE label ‑‑ who is required to do it, how much will it cost, how difficult will it be? What I want to talk about now, the Product Liability Directive, is about who gets stuck with the bill ‑‑ civil liability, lawsuit liability ‑‑ when something goes wrong and a person is injured or killed.
So, to do that, I'm going to introduce a hypothetical. I'm going to do what lawyers do: I am going to tell you a pretend story about seven persons, two pieces of software, one car and one victim. Read this later; I'll just tell you the story.

Here is the picture. We have a company called Firefly Limited that writes a piece of software I have described as OpenSesame; it's a cryptographic unit designed to authenticate users. They live in Fredonia. There is a company called Bravo Bits Limited that writes a piece of software called Bravo Drive that translates user inputs to control surfaces ‑‑ it's fly‑by‑wire‑type software for cars ‑‑ and they incorporate OpenSesame in Bravo Drive. They live in England. Third person: Einstein Motors Inc. lives in California; they take the Bravo software and put it into the car. Exotic Imports Limited lives in Ireland; they import the car. Someone named Johnson lives in Ireland; he buys the car. Someone named Victim lives in Ireland and gets hit by the car. Why hit by the car? Because someone named Denis Dastardly, who lives in Rotania, hacks into the car because of a coding mistake made by Firefly. That's the picture. The question is: what happens next?
We get a forensic expert report that says the reason the zero day existed was a coding error in OpenSesame, the first small piece of software: something misplaced, resulting in a zero‑day exploit. That doesn't sound at all familiar; that's never happened at all in the history of security software!
Everybody else in this chain is very careful in terms of who they select, how they build, how they do quality assurance. Everybody is very, very careful. And the hacker, well, gets hit by a bus; he is broke. In my law school exams ‑‑ I teach technology and cybersecurity law and regulation, folks, and I have been doing that for 23 years ‑‑ I always have the hacker get run over by a bus or killed; in this case killed in a paragliding accident, I changed it.

Here come the lawsuits. Two more slides, very dense, but only one change. This is the only thing you'll see for the rest of my little talk.

The law as it exists today: if you think of lawyers and lawsuits as an attack vector, your exposure to a lawsuit ‑‑ think of that as your attack surface ‑‑ there are two main paths of attack that someone representing the victim will take. They'll sue you for negligence, or they'll sue you for having supplied a defective product that resulted in death, personal injury or damage to personal property. So, different types of attack vector. As you can see from this slide, basically nobody today gets tagged with negligence liability for the story that I just told ‑‑ with the possible exception of Firefly, but I have to tell you it's pretty attenuated given the way the law works today, because reasons; I don't have time to go through this block by block. And in terms of strict liability for defective products, the law in Europe as it exists today says software is not a product. So, will the car manufacturer definitely become liable for this problem? Yes. Why? Because it turned out their car was not safe. I thought you said they were careful. Yes, they were very, very careful, but nonetheless somebody was able to gain remote access, pilot the car off the road and kill a person. That's not the level of safety we expect a car to have. They are the least‑cost avoider; they are the type of party that society, at least in Europe and the United States, has decided should bear the cost of this kind of damage.

What is the Product Liability Directive going to do that matters to people in this room? These are slides I first presented at ETSI Security Week four weeks ago, predicting to the people in that room what is happening this year. And I said then ‑‑ and I was right, and I'll say it again today ‑‑ that if Europe decides to change the law, the difference becomes this.

And if you look down at the lower right‑hand quadrant of this particular slide ‑‑ and I have got a big red light here telling me to shut up, so I have only got 30 seconds before someone gives me the hook ‑‑ you will see that although we didn't use to think of software as a product, once this directive is in place and transposed into member state law, we will. Which means that all of those people upstream are suddenly potentially liable in the same way that the car manufacturer is, including the supplier of that component with the mistyped semicolon. Were they careful? It doesn't matter; you can be as careful as humanly possible. Was the software defective? That will be the only point of inquiry. What does this mean in practice? Nobody is going to know for sure for at least three, four or five years.

Thank you, on that cheerful note.

(Applause)

MOIN RAHMAN: We are running out of time, so we will not take questions in person here; you are present and the speakers are present, so you can engage during a hallway break. We have an online question from Michael Richardson, which is: I am shocked that ‑‑ are in the default category; both would seem to pose significant risk to the rest of the ecosystem. Will the categories get better refined?

BENNO OVEREINDER: So, I tried to trick you into answering it. I don't have an answer for that question. To redefine this 90%/10% split ‑‑ why are hard disks and microphones not critical, in class 1 or class 2? I don't know who actually defines that, so I have to defer to Marten here. Just concluding: this is all new to us. It's not a familiar place to be for software engineers, nor network engineers ‑‑ this policy and governance space ‑‑ but more and more regulations are being made by the European Community, and I think it's a call to you all to think about the impact on your business, your organisation or the products you use, and to think about what you can do. Regulations are not bad, but we need good regulations, and I think it's a shared interest of us all, and a responsibility, to monitor the activities and to give feedback so that we get good regulations. But there will be regulations, and I think that's very important for you all to understand. Thank you.

MOIN RAHMAN: Thanks for the presentation, now our next presentation.

BART VAN DE VELDE: So, it's a real pleasure to be here today and talk to all of you in the RIPE community. It's my first time here, and I am really happy to see so many people showing up.

I'm going to try to talk to you a little bit about the new encrypted protocol stack that we're seeing being heavily used by application content providers across all these networks that you operate today.

We collected a set of data points, snapshots across the world over the past couple of years, which I will share with you, and then I'll talk about some potential ways of dealing with that kind of traffic.

But before we get started, I just wanted to make a call out in memory of Mark Gallagher, a dear colleague and friend of ours, with whom I actually started this work a couple of years ago. He sadly passed away in 2021, so we always keep him in mind when we talk about this topic.

So, as I said, I'll talk about some data points of traffic analysis that we see, talk about the implications that we think are out there, and also discuss a couple of aspects that you could use to deal with certain sets of use cases that we have been seeing from several operators across the world.

So, let's talk about how the Internet really looks today. 2020 was really the start of our investigation. We got a very large, fully anonymised dataset from an operator ‑‑ just a packets‑in, packets‑out kind of view ‑‑ on which we did some analysis, and these are the top‑level conclusions we reached.

Essentially, all of the Internet is encrypted: over 90% of the traffic payloads are encrypted. There is a huge, very important concentration towards a very discrete set of large logos that you see here, with over 70% of traffic going to the cloud. So the notion of a very distributed Internet with content sitting everywhere is kind of a thing of the past, and you essentially have elephant destinations sitting out there where all the traffic gets siphoned back and forth.

Then, very importantly, a large share of the flows ‑‑ around 50% ‑‑ is effectively DNS traffic. So DNS has become the total content load‑balancing mechanism for traffic to arrive on your mobile phone or on your PC.

And then a very important thing we picked up on, and we'll talk about this in more detail: at that point in time we saw 20% of traffic being run over QUIC. That's with a C, not with a K, right. We started digging into that and tracking it. QUIC is a protocol, fully standardised in the IETF, that runs over UDP, so it's kind of a change from the TCP/IP stack that has traditionally been used. And in this chart we show a few snapshots of how the volume of traffic is morphing more and more towards a QUIC delivery mechanism: in 2020 we did a snapshot, that's the 20%; in 2021 we saw it rising; and today, as we take these snapshots at various operators in Europe and North America, we see the traffic hovering around the 40 to 45% range in terms of volume.
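
As a rough, hedged illustration of how such a snapshot could be taken: the sketch below estimates the QUIC share of web traffic by counting UDP/443 bytes against TCP/443 bytes on a link. The interface name and the UDP/443‑equals‑QUIC heuristic are assumptions for illustration, not details from the talk.

    # Minimal sketch: approximate the QUIC vs TCP share of port-443 traffic.
    # Assumes the scapy package is installed and "eth0" is the capture interface.
    from scapy.all import IP, TCP, UDP, sniff

    counts = {"quic": 0, "tcp": 0}

    def tally(pkt):
        if IP not in pkt:
            return
        if UDP in pkt and 443 in (pkt[UDP].sport, pkt[UDP].dport):
            counts["quic"] += len(pkt)   # heuristic: UDP/443 is almost always QUIC
        elif TCP in pkt and 443 in (pkt[TCP].sport, pkt[TCP].dport):
            counts["tcp"] += len(pkt)

    sniff(iface="eth0", prn=tally, store=False, timeout=60)  # one-minute sample

    total = counts["quic"] + counts["tcp"]
    if total:
        print(f"QUIC share of port-443 bytes: {counts['quic'] / total:.1%}")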

So, undeniably, this is a piece of technology that is being extensively used on the Internet and it cannot be ignored if you are thinking about, you know, how to deal with your network environment.

So, a bit more digging into the data.

The big five logos generate around 48% of traffic ‑‑ the usual applications that you all have installed on your phones. QUIC, in this 2022 snapshot, was roughly 40%. And for the other destinations, you slowly started seeing QUIC adoption happening as well.

Now, QUIC is less chatty: even though it's been designed to have very efficient interactions between clients and servers or applications, in the applications that we see it's essentially being used for very heavy bulk transport of video‑related visual traffic, right, high‑bandwidth traffic. So you still see a lot of chattiness, or more flows, in the TCP camp.

In case you had any doubt, we do see consistent patterns of this utilisation across the world. There are ranges ‑‑ this is not a scientific study, obviously ‑‑ but I think it's good enough to see that the trend is out there: growing adoption of QUIC, growing consumption of QUIC‑based traffic across the globe, with the US snapshot, LATAM and the EU.

Now, what's important to note ‑‑ and this is from a source from Geoff Huston, who is, I think, speaking this afternoon ‑‑ there's a bit of a different philosophy behind the two protocol stacks. TCP was essentially designed to create network fairness: in case of congestion, applications back off and you get a nice distribution of available bandwidth over the various users of the network capacity. If you look at QUIC behaviour, on the other hand, it's very, very heavily targeted towards my app's performance. If I am an application developer and I want to get my content to end users as fast as possible at any cost, I will typically gear towards using a QUIC stack versus a TCP/IP stack. And there are various tidbits of information: for instance, Uber did a study on their app's reactivity ‑‑ or reaction time ‑‑ on heavily congested mobile networks, and they decided to flip their application completely to QUIC just because it is better when things are tough out there on the network, and they want to get the user experience above par compared to others on the network.

So, you know, we have to think about, you know, when we design IP networks, how we deal with this.

So, I kind of highlighted this already, but I think it's very, you know, important to emphasise.

These large logos that generate around 50% of traffic on your average Internet link have, for all intents and purposes, switched fully to QUIC: 80% of their traffic is generated on QUIC. The biggest holdout today is Netflix, who has a super‑optimised TCP/IP stack for their content delivery.

Now, this is not all; it's not just QUIC. When we talk about a fully encrypted protocol stack for application delivery, there are actually three main pieces to the puzzle here, right.

You have QUIC, which runs over UDP and serves HTTP/3, but you also have encrypted DNS. DNS over HTTPS is being used extensively by the applications I just cited to essentially draw all DNS control back into the application provider space and manage the content delivery a hundred percent from the application's point of view, not leaving it to anybody else to deal with.
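
To make the DNS‑over‑HTTPS point concrete, here is a minimal sketch of an application resolving a name over DoH itself, so the on‑path resolver never sees the query; the Cloudflare endpoint and JSON API are one public example chosen for illustration, not something the speaker named.

    # Minimal sketch: resolve a name over DNS-over-HTTPS from application code.
    # Assumes the requests package; endpoint is Cloudflare's public DoH JSON API.
    import requests

    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    resp.raise_for_status()
    for answer in resp.json().get("Answer", []):
        # The network operator only sees HTTPS to the DoH server, not this query.
        print(answer["name"], answer["data"])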

So this creates another level of obfuscation, and visibility goes away in your networks. And then there's further plugging of holes, if you wish, or plugging of visibility points on the wire, which are being encrypted away with encrypted SNI, Encrypted ClientHello and various other efforts that are ongoing. This really makes traditional deep packet inspection ‑‑ where you look on the wire and see some of these control elements flow by in the clear ‑‑ very hard to keep up with, and you need to look at other methods to manage traffic if you want to effectuate a level of quality of experience or fairness on your networks.

So, this is an interesting snapshot. Again, you see Facebook pretty much moved their full application to QUIC overnight. They were on TCP for a long time, then they invoked QUIC, and essentially an upgrade on the mobile phones generated 86%‑plus QUIC traffic. And YouTube, who historically spearheaded QUIC development, is obviously very high up there in the percentages.
So, this is kind of a slide for the people that are sceptical about this and say: hey, this is just a fashion thing that's going to go away; there will be a different fashion next year. All these logos that I have up here are actively working with, or on, applications that run on this protocol stack, so this is not something we think is a fashion du jour at all.

And I think you'll see it appear much more in the enterprise application space than we have seen in the past; I talked mostly about consumer traffic here, but we see a trend in the enterprise space as well.

So, I have a few suggestions on how to potentially deal with this, based on a set of customer input we gathered over the past couple of years, interacting with customers who essentially say: hey, I have certain delivery contracts with end customers, and holes are appearing in those guarantees that I give; what can I do about it?
For instance: managing video streams versus video downloads with the same application space behind them, which I can't differentiate; accelerating delivery of certain aspects of Snap was one of the requests we got; accounting for encrypted traffic in terms of source and destination when they effectively sit behind CDNs and you can't see the SNI. More generically, we see a lot more questions coming up on how to manage these flows and mitigate the impact of congestion in, for instance, radio networks or other access networks, to guarantee and create a level of fairness between the users they are serving.

So, one thing we thought about is that you can profile, in the time domain, various applications or categories of applications. If you look over time at how a video stream behaves versus a QUIC video stream, they are different patterns on the network; or live video streaming, such as a Zoom call or YouTube TV, versus downloading of content.

So, the time domain is really a critical aspect to take into account when you want to look at encrypted traffic. You can essentially build models which will help you categorise your observations into certain sets of application categories, which gives you a better level of granularity than you would have if you just look at the bits and have no way to model against behaviour over time.
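
As a toy illustration of this time‑domain profiling, the sketch below turns one flow's packet samples into a per‑second throughput series and uses its burstiness to separate buffered streaming from a steady bulk transfer; the threshold and the synthetic flow are made up for illustration.

    # Minimal sketch: categorise a flow by the shape of its throughput over time.
    # Input: (timestamp_seconds, payload_bytes) samples for a single flow.
    from collections import defaultdict
    from statistics import mean, pstdev

    def throughput_series(samples, bucket=1.0):
        if not samples:
            return []
        buckets = defaultdict(int)
        for ts, nbytes in samples:
            buckets[int(ts / bucket)] += nbytes
        return [buckets[i] for i in range(min(buckets), max(buckets) + 1)]

    def categorise(samples):
        series = throughput_series(samples)
        if len(series) < 5:
            return "too-short"
        burstiness = pstdev(series) / (mean(series) or 1)
        # Buffered video fetches in periodic bursts (high variance); a bulk
        # download saturates steadily (low variance). Threshold is illustrative.
        return "buffered-streaming" if burstiness > 1.0 else "bulk-transfer"

    # Synthetic flow: a burst of packets during every fifth second.
    flow = [(t / 10, 1200 if int(t / 10) % 5 == 0 else 0) for t in range(600)]
    print(categorise(flow))  # bursty pattern -> "buffered-streaming"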

So that gives you a bucket of categories.

The next thing that we think is critical is to understand how applications are running into congestion, how your network is behaving under congestion. You can look at the behaviour in the time domain ‑‑ how a flow is backing off, how it's speeding up ‑‑ and from that infer that you have a flow that is under congestion, and therefore you could potentially take action. Obviously, if you want to create some level of fairness or quality of experience on networks: when everything is running smoothly, everybody gets their fair share, there is no congestion, and there is no need to interfere, right.
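
A minimal, hedged sketch of that inference: flag a flow as congested when its recent throughput falls well below its own earlier sustained rate, i.e. when a back‑off pattern appears. The window and ratio are illustrative parameters, not values from the talk.

    # Minimal sketch: infer congestion from a back-off in the throughput series
    # (per-second byte counts for one flow, as in the previous sketch).
    def looks_congested(series, window=5, backoff_ratio=0.6):
        if len(series) < 2 * window:
            return False
        earlier = sum(series[-2 * window:-window]) / window
        recent = sum(series[-window:]) / window
        # A sustained drop against the flow's own recent history suggests the
        # sender is backing off under loss or queueing delay.
        return earlier > 0 and recent < backoff_ratio * earlier

    print(looks_congested([5000] * 10 + [1500] * 5))  # True: rate fell ~70%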

And then, as a third piece: once you have that information about the link that you're looking at ‑‑ what kinds of application categories are flowing on it ‑‑ and you can detect whether certain flows are under congestion or not, you can start looking at some of those flows and apply a set of traffic management rules, should you wish to do so.

For instance, elephant flow management; I'll talk about a few use cases in a second.

So, we think a very important part of the toolbox of the future is going to look like a chain like this: you can only look at IP header information ‑‑ the protocol, the source IP ‑‑ so gather as much data as you can, look at the volume being generated by that flow, and look in the time domain at how those flows are actually behaving on the wire you're looking at. Then do the categorisation and infer what's actually being dragged across those links, decide whether there is a level of congestion happening in the downstream networks you are looking at, and decide on a policy action to control traffic ‑‑ create fairness, shape, drop, etc. ‑‑ so that you optimise your delivery mechanism towards end users.

So, a couple of examples of where you could apply this technology. I think, first of all, there is visibility, and visibility is a critical aspect of network planning. This kind of toolbox could simply, passively generate a set of data about your networks that is extremely useful. You can start figuring out what's going on and use it for planning purposes, etc.

And then, secondly, we have had questions about doing custom policy enforcement. What we mean by that is essentially: if you have flows going from the same client to the same destination, but they are actually different aspects of an application, or simply different applications, you could start differentiating between, for instance, downloading of videos versus streaming, and then apply a different level of policy without affecting the end user experience, but rather make sure that everything gets pulled through. This is especially interesting in networks dealing with scarce bandwidth, so typically in mobile networks we see a lot of activity around applying this kind of thing.

Another example is in the time domain; we call it time domain shaping, where you essentially optimise the user experience under congestion if you have massive YouTube videos running on your network. Because of the nature of those kinds of video streams, which do very heavy buffering towards the end point, once you have detected congestion on the network you can invoke, for that part of the traffic that is using a lot of bandwidth ‑‑ the YouTube video ‑‑ a level of shaping: not doing any dropping, but essentially pacing the speed of delivery of the running videos towards the end user. He won't notice it, but at the same time you will be freeing up time slots, for instance on the radio network, to deliver other content to the same or other users sitting on the same cell.
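
The "pacing, not dropping" idea can be pictured as a scheduler that delays, rather than discards, data above a target rate. This is a minimal single‑flow sketch, not the speaker's product; the rate and the no‑op transmit function are placeholders.

    # Minimal sketch: pace (delay, never drop) one flow to a target rate.
    import time

    class Pacer:
        def __init__(self, rate_bytes_per_s):
            self.rate = rate_bytes_per_s
            self.next_send = time.monotonic()

        def send(self, payload, tx):
            now = time.monotonic()
            if now < self.next_send:
                time.sleep(self.next_send - now)   # hold the data, don't drop it
            tx(payload)
            self.next_send = max(now, self.next_send) + len(payload) / self.rate

    pacer = Pacer(rate_bytes_per_s=125_000)        # pace to roughly 1 Mbit/s
    for chunk in [b"x" * 1500] * 5:
        pacer.send(chunk, tx=lambda p: None)       # tx would be the socket write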

So this is a very handy technique to start improving the level of utilisation you have on the network, maintaining the level of quality that you want to give to the end users on those cells.

And we have seen variations of this in the peering space as well, where you have broadband ‑‑ in this case I called it a broadband service ISP ‑‑ which reaches end users over a wholesale operator; they are interconnected with a committed information rate, an SLA, that has certain bandwidth settings. If you have congestion, as an ISP going into those networks you essentially have only two choices: you just burst above the SLA and pay more, or ‑‑ this is the bad scenario ‑‑ you start dropping packets, you get more retransmissions, and effectively you are helping neither yourself nor the customers you are serving, because you are indiscriminately dropping packets; you can't actually see with normal techniques what's going on in the network.

So, applying some of these techniques to such a situation, you can start picking out heavy‑bandwidth applications and start pacing them, slowing them down without actually affecting the delivery of the application, letting other traffic go through in the time span in which you're shaping traffic. That's a level of smart congestion alleviation which will in return give you lower interconnect fees, fewer SLA violations if you see it negatively, and you will end up getting more bang for the buck on the interconnect.

And another use case we have seen is the ability to look at a set of flows on the wire, figure out that you have a set of realtime collaboration applications going, and make sure you carve out enough bandwidth for those: you limit the amount of traffic consumed by big flows that would affect that realtime collaboration, and essentially generate a nicely conditioned bandwidth envelope for the realtime collaboration apps versus the other traffic you have running on the same links.

So, in short, we think that time‑domain observation, IP header inspection and building models that understand from that dataset what is going on in the network, with a scalable flow management system behind it, is something that can help generate fair use on networks, so you don't end up with a few applications consuming all of the capacity, reducing the quality of experience for some and leaving it for others.

So, this protocol stack is one that we think is here to stay. I think Geoff, on his blog, recently said it's even going to become bigger than TCP very soon. So we all need to think about how we can work on this kind of traffic management in a way that benefits the end users on networks.

With that, thank you, I'll take questions, if there are some.

(Applause)

AUDIENCE SPEAKER: Maria, developer of BIRD. I have a question about your comparison between QUIC and TCP. Are you saying QUIC is taking over the domain of TCP? Do you think it's going to eat TCP completely, or are there some parts of the Internet ‑‑ like, well, let's say, SMTP or even BGP ‑‑ that will stay on TCP?

BART VAN DE VELDE: So I left my crystal ball at home, let me start with that. It is very clear that it continues to grow. Where this will stop is anybody's best guess, I think. To me, the biggest attractiveness of the set‑up is that if you use a QUIC‑DoH stack, let's call it, as an application development entity, you pull everything into your user space: you define what scheduling mechanism you are going to use, how you're going to load‑balance, what artifacts you're going to distribute across the globe, etc., etc. And it becomes really, really compelling. Does that mean everything else is going to go away? Other protocols, probably not, but it's definitely going to go beyond 45%, in my opinion.

AUDIENCE SPEAKER: This was a great talk, thank you so much; this was one of the ones I really came here to see. My name is Trey Darley, I work in the cyber resilience space for Accenture Security and I am also on the Board of Directors of FIRST.org, so this is wearing both hats, but I'm not speaking on behalf of anybody. It's a question. I have heard a lot about QUIC ‑‑ there is a bandwidth and traffic management aspect to this, but it's being driven, I think, predominantly by content delivery needs and end‑to‑end privacy concerns. And a lot of the criticism I have heard, the what‑if‑ism, the what‑about‑ism, has been mainly two things. As a company or institution: this is going to break all of my bumps‑in‑the‑wire for security controls, and on the hosts it's going to disrupt everything I do from a security perspective. And then, as a parent: I'm not going to be able to stop little Johnny from going and looking at, I don't know, elephant safari slaughter porn or something ‑‑ I'm sure it exists, don't Google that, because I think I just made it up. So my question, I guess, to you and also to other people in the room from the big vendors who are pushing this protocol shift: many of you work for companies that also have security needs and have security offerings, so if this is going to displace a lot of the security controls on the hosts and on the wire, what are your companies anticipating to replace this with? And also, having kids ‑‑ that's a let's‑take‑this‑offline kind of question, but I suppose there are people in the room who can come find me in the coffee break and educate me more about these two things if you have an answer. Thank you.

BART VAN DE VELDE: So, I think the question of who is securing what is essentially a question of where you sit in the food chain and what that then means in terms of security. If you look at this stack, it's essentially a TLS 1.3 environment, right. So people will argue that if you're at the end point of that encrypted tunnel, you can do security, right. The fact is that in‑the‑middle, opaque, don't‑have‑to‑ask‑anybody security capabilities just got like X times harder, right. That is a fact of life with these protocol stacks. Now, does that mean there is no security any more? No. I think it means that whoever holds the keys of the encryption environments can actually invoke security, right.

AUDIENCE SPEAKER: It sounds like you are saying that we have to move ‑‑

MOIN RAHMAN: I have to cut it right now. If you have any questions, please raise them online, because we are running short of time. We have three lightning talks, so I'm sorry, but we have to close the queue. Sorry.

BART VAN DE VELDE: I'll be around. We can talk.

MOIN RAHMAN: So thank you, Bart. Our next speaker is Elisantile Gaci. They'll be presenting on enabling distributed configuration control for WiFiMon probes.

ELISANTILE GACI: Hi, everybody. What a huge stage. You look great from here. I am Elisantile Gaci, and I am the WiFiMon service owner in the GN5‑1 project. Today I am going to show you a service that will help you see, understand and feel your wi‑fi.
.
But what is WiFiMon? WiFiMon is an open source wi‑fi network monitoring and performance verification system, which is vendor‑independent and transparent to the wi‑fi network users, meaning that measurements are automatically triggered upon visiting the monitored website and, more importantly, there is no need for end users to install any applications. It is independent of the wi‑fi network technology and it captures user experience.

It also provides metrics like download and upload throughput, as well as link quality and signal strength. But how does WiFiMon really work?
.
WiFiMon relies on two monitoring data sources: crowdsourced measurements from end users roaming the network, and hardware probe measurements from fixed network locations.

Who is it for? NRENs have shown interest in using these services, and some of them are using it, like SNET and RASH; lately SWITCH has also shown interest. WiFiMon can also be used in network laboratories, like the NETMODE laboratory at the National Technical University of Athens.

It can also be used at conference venues like this one; we have run pilots there that were considered successful. So why not use it here?
.
Let's see a brief history of WiFiMon. In 2015, the idea of WiFiMon was born: monitoring the performance of a wi‑fi network as experienced by end users, but without requiring their intervention. It was presented to the community at TNC in Porto in 2015.

It was complemented by hardware probes that monitor wi‑fi performance per network access point.

As I said, WiFiMon was subsequently challenged by monitoring the conference venues at prominent meetings. WiFiMon pilots at the TNC 19 and GÉANT Symposium 2020 conference venues delivered promising results and led to the declaration of WiFiMon as an official service. To offer better performance, WiFiMon was integrated with NMaaS, also automating the WiFiMon installation.

We have taken some additional steps. WiFiMon has become more user‑friendly through a redesign of its user interface and also support for eduroam. The WiFiMon team is currently trying to simplify probe configuration and control, which is what I'm going to present today.

So let's see.
We have an old approach and a new, novel approach. The old approach is the picture on the left: we can see the administrator, the yellow one, and multiple types of probes. The administrator had to go to each of them and configure it ‑‑ for example, plug in a monitor ‑‑ and hardware configuration was required from the administrator. On the right side, we have the new, novel approach: one administrator connects to the probes from a central point, and everything happens more flexibly.

Administrators can configure and reconfigure WiFiMon hardware probes from the WiFiMon user interface. They can provide data like the device identification number, location information, etc., and also configuration files, using new Jinja templates. The administrator goes to the new user interface that we have redesigned, or redeveloped; it receives input from the users, finds where the placeholder brackets are, and substitutes them with values.
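
The template mechanism can be sketched with the jinja2 library: the placeholders in a probe configuration template are substituted with the values the administrator enters in the UI. The template text, field names and values below are illustrative, not WiFiMon's actual configuration format.

    # Minimal sketch: render a probe configuration from a Jinja template.
    # Assumes the jinja2 package; template and variables are illustrative.
    from jinja2 import Template

    template = Template(
        "probe_id: {{ probe_id }}\n"
        "location: {{ location }}\n"
        "analysis_server: {{ server }}\n"
    )
    config = template.render(probe_id="whp-042", location="conference venue",
                             server="wifimon.example.net")
    print(config)  # this text would be written to the probe's configuration file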

What technology or infrastructure have we used? The solution is based on the Salt infrastructure management tool. We have the WiFiMon Analysis Server, which is the device that manages everything and is called the master, the Salt master. And we have the WiFiMon hardware probes, which are the devices being managed and are called minions, Salt minions.

What are the advantages of using this technology or solution? Salt operates at the application layer. Using Salt, you can reach a probe behind NAT without knowing IP addresses, just using names. So public IP addresses are not required: we need an IP address, but not a public one. The second advantage is that Salt includes the ZeroMQ message broker: we have a queue that receives messages and needs to distribute them to the clients.

So, in the picture from before, we had three probes, and if you want to configure them from a central point, everything happens in parallel. If we instead used an Ansible playbook, the administrator would go to each of them: to the first one, make the configuration, then to the second one, make the configuration, and so on. By using this message broker, all configuration happens in parallel, so whether you have three probes or 100 probes, it requires the same time. And the third advantage is that the configuration files generated from the templates are transferred from the WiFiMon Analysis Server to the probes; we use this tool to copy the files from the Salt master to the Salt minions.
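
The parallel fan‑out can be pictured with Salt's Python client running on the master (the WiFiMon Analysis Server): one call targets all minions (probes) at once over the message bus. The target pattern and state name below are assumptions for illustration.

    # Minimal sketch: push a configuration state to all probes in parallel.
    # Runs on the Salt master; assumes Salt is installed and minion keys accepted.
    import salt.client

    local = salt.client.LocalClient()
    # A single command fans out over ZeroMQ to every matching minion at once,
    # so 3 probes or 100 probes take roughly the same wall-clock time.
    results = local.cmd("wifimon-probe-*", "state.apply", ["wifimon_probe"])
    for minion, outcome in results.items():
        print(minion, "ok" if outcome else "failed")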

Some of our future work: automation of probe installation is one point; machine learning for performance prediction; and the last one is more visualisation options.

I want to thank you for your attention. If you have questions, we are here, together with my colleague Nickos, who is online. And if you are interested, we are also here to help you use this nice service.

Thank you.

(Applause)
MOIN RAHMAN: I don't think there are any questions from the audience, but if you want to have any questions or discussion, the speaker is here during the whole week. So thank you.

Our next presentation is about Bogons Observatory.

LEFTERIS MANASSAKIS: Thank you very much for the introduction. I want to thank the Programme Committee for giving me the opportunity to speak in front of you today. Today, I will speak to you about bogons, a topic that I came across almost 20 years ago and was fascinated by. Two years ago we founded Code BGP and started building the platform, which is a BGP monitoring service, and a few months ago I had the idea that we could use our platform to study bogons. But if you want to study bogons, the first thing you need to do is identify what a bogon is.

So, Martians are private and reserved addresses defined by RFCs. Traditional bogons include Martians and prefixes that have not been allocated to a regional registry by the Internet Assigned Numbers Authority, IANA. And the full bogons contain the traditional bogon prefixes, but also include the IP space allocated to the RIRs but not yet assigned by them to local registries, for both IPv4 and IPv6.

This is a list of IPv4 Martians, which is heavily inspired by the BGP Filter Guide of NLNOG, and this is a list of IPv6 Martians, again inspired by the same website.
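
The prefix side of these definitions is easy to express with the standard library: a prefix is a Martian if it falls inside any RFC‑reserved block. The list below is a small subset for illustration; the full lists are on the slides and in the NLNOG BGP Filter Guide.

    # Minimal sketch: test whether a prefix falls inside a Martian block.
    # The Martian list here is a small illustrative subset, not the full one.
    import ipaddress

    MARTIANS = [ipaddress.ip_network(n) for n in (
        "0.0.0.0/8", "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8",
        "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16", "224.0.0.0/4",
        "::/128", "::1/128", "fc00::/7", "fe80::/10",
    )]

    def is_martian(prefix):
        net = ipaddress.ip_network(prefix)
        return any(net.version == m.version and net.subnet_of(m)
                   for m in MARTIANS)

    print(is_martian("10.1.2.0/24"), is_martian("193.0.0.0/21"))  # True False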

Now, for bogon autonomous systems, the idea is the same. An AS should be termed a bogon if any of the following conditions is true: it is reserved for special use by an RFC ‑ for example, a private AS; it is not part of a block assigned to a regional registry by IANA; or it is not assigned to a local registry by any regional registry, like RIPE, for example.
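
The ASN side, as a hedged sketch: an ASN is a bogon if it sits in an RFC‑reserved range (AS 0, AS_TRANS, documentation, private use, last ASNs) or is absent from the registry‑assigned set, and a route is a bogon if any ASN in its AS path is. The assigned set would come from the registry data described a little further on.

    # Minimal sketch: bogon tests for ASNs and for a route's AS path.
    # Reserved ranges: AS 0, AS_TRANS, documentation, private use, last ASNs.
    RESERVED_ASN_RANGES = [
        (0, 0), (23456, 23456), (64496, 64511), (64512, 65534),
        (65535, 65535), (65536, 65551),
        (4200000000, 4294967294), (4294967295, 4294967295),
    ]

    def is_bogon_asn(asn, assigned_asns):
        if any(lo <= asn <= hi for lo, hi in RESERVED_ASN_RANGES):
            return True
        return asn not in assigned_asns          # unassigned by any registry

    def is_bogon_route(as_path, assigned_asns):
        # One bogon ASN anywhere in the path makes the whole route a bogon.
        return any(is_bogon_asn(a, assigned_asns) for a in as_path)

    print(is_bogon_route([3333, 65000, 12345], assigned_asns={3333, 12345}))  # True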

This is a list of reserved and unallocated autonomous systems. And why do we care about bogons? They are usually the result of configuration mistakes. However, they are also commonly found as the source of various types of Internet misconduct, such as DDoS attacks, hijacks or route leaks, and other types of nefarious activity. We also have anecdotal evidence of events caused by bogons that resulted in outages.

And in order to study them, we use the BGP monitoring service we have developed, where we use BIRD 2 for our routing daemon. We use route reflection and iBGP because we want to know, for each update, where we got it from ‑‑ from which router in the world ‑‑ and by using a route reflector the next hop is not altered, so you keep that information. We gather all the data in route collectors, and in order to forward all of the iBGP data, we use the BGP ADD‑PATH capability. These are the numbers for our peerings and the geolocation distribution of our routers.

And this is a graphical representation of what I just described in words. The monitors are connected to the route collectors, the data is propagated to the platform, and you are able to view bogons in realtime, basically: you can see in realtime, using the platform, who is announcing bogons, who is propagating bogons, who is accepting bogons.

So the Code BGP platform is configured to monitor bogon prefixes, as I defined them, both IPv4 and IPv6, and bogon ASNs anywhere in the AS path. This results in routes that can even have RPKI‑valid prefixes; but because there might be a bogon AS in the AS path, for us this is a bogon route.

And if you go to the UI of the platform, you can see that more than 100,000 routes are currently bogons, and this is one of the reasons why there are so many.

The methodology for this study ‑‑ and this is a funny story ‑‑ I was in a taxi during DKNOG in Copenhagen with my friend, and I was pitching this idea to him, and I asked him: how can I identify bogon ASes? And he said, that's very easy ‑‑ typical Max, you know, way of talking ‑‑ this is very easy, I will show you. So: the RIPE NCC publishes a CSV file which contains the prefixes and ASes that have been assigned to local registries, based on data gathered from all five regional registries. A script checks every hour: it downloads this file, identifies the entries that are either available or reserved, and creates two lists, the bogon prefixes and the bogon ASes, and these two lists are used to update the BIRD filters of the BGP monitoring service. I should say the list is huge ‑‑ there are almost 300,000 prefixes and millions of ASes ‑‑ and BIRD does not even break a sweat. The bogon ASes and prefixes are then forwarded to the platform. This is the URL for the file, and this is how the file looks for the reserved and available autonomous systems, and also for the prefixes.
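
The hourly job can be sketched as below. The URL is an assumption pointing at the NRO extended delegated statistics that the RIPE NCC publishes (the talk's slide shows the actual one), and note the file is pipe‑separated rather than comma‑separated.

    # Minimal sketch of the hourly job: fetch the delegated-stats file and pull
    # out "available"/"reserved" entries as bogon lists. URL is an assumption;
    # line format: registry|cc|type|start|value|date|status|...
    import urllib.request

    URL = ("https://ftp.ripe.net/pub/stats/ripencc/"
           "nro-stats/latest/nro-delegated-stats")

    bogon_asn_ranges, bogon_v4_blocks = [], []
    with urllib.request.urlopen(URL) as fh:
        for raw in fh:
            fields = raw.decode("utf-8", "replace").strip().split("|")
            if len(fields) < 7 or fields[6] not in ("available", "reserved"):
                continue
            rtype, start, value = fields[2], fields[3], fields[4]
            if rtype == "asn":
                bogon_asn_ranges.append((int(start), int(start) + int(value) - 1))
            elif rtype == "ipv4":
                bogon_v4_blocks.append((start, int(value)))  # start + address count
    print(len(bogon_asn_ranges), "ASN ranges;", len(bogon_v4_blocks), "IPv4 blocks")

These lists would then be rendered into BIRD filter syntax, as the talk describes.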

We have basically open sourced this methodology. We have a shell script implementing the methodology at this URL, a BIRD template, a Python script you can use to extract bogons from RIPE RIS and RouteViews MRT dumps without having to have a router, and a README with ten steps. You can use this URL and play with the bogons yourselves. Or you can get access to our platform following these steps and see bogons in realtime. Basically, we can use the bogons instance to make sure we don't announce or propagate bogon prefixes, make sure we don't use or propagate bogon ASes, and figure out who does, and let them know so they fix their announcements and filters.

These are some screenshots from the UI of the platform, where you can use, for example, AS path filtering on the routes: it supports filtering for your own AS, or any other AS, to see which routes are flagged as bogons and contain your AS. And you can also use the functionality in the UI to go directly to RIPEstat and other sources of information, like Cloudflare Radar, to see more information about a route, a prefix, an AS.

Same for the prefixes. The prefixes table, I must say, contains, as I said before, the prefixes that are present in bogon routes; the prefixes may not be bogons themselves, they are just present in bogon routes ‑‑ just as a clarification.

As next steps, we will conduct a measurement study ‑‑ we have already put together a team ‑‑ a measurement study of the bogon phenomenon that can hopefully result in a presentation. We will try to correlate bogon data with attacks and other types of security‑related events.

We will seek funding to develop methodology and automation that will periodically inform people about their misconfigured BGP filters.

The goal, of course, is an Internet with fewer bogons.

This is a slide with the references for this talk. Thank you very much for your attention, and I would love to hear your questions.

(Applause)

DIMITRY KOHMANYUK: Thank you.

AUDIENCE SPEAKER: This one works. Hi. We had a discussion yesterday, right? I have a problem with the full bogons. There is a recommendation to filter on full bogons. When there were only traditional bogons, this was easy: the list was actually quite static and you didn't need any automation; you would just update your routers every half year. With full bogons, this list fluctuates so heavily that you need to update your routers automatically every 24 hours, and for very distant routers in our networks this is almost impossible to automate.
There is also the question of who has the authority for maintaining this list of full bogons. Okay, the RIPE NCC publishes the CSV file, but I haven't read any best current practice or any documentation saying that this is the authoritative list of what a full bogon is. Team Cymru does a great job offering tools to acquire the list, but it doesn't always match the CSV file that the RIRs produce. So, I actually have a proposal, and it is a very simple one:
Why don't we have the RIRs publish an RPKI ROA for AS 0 for all the prefixes that are unallocated? That way, I don't need to implement automated code for all my routers. The RIRs are responsible anyway for defining what an unallocated prefix is. They only need to create a ROA for AS 0; I am already doing RPKI validation, so I will block the full bogons.

LEFTERIS MANASSAKIS: As I told you yesterday, it sounds like a great idea to me, and we should take it further.

DIMITRY KOHMANYUK: I am sorry, I guess we have five questions ‑‑ five potential questions ‑‑ and one more lightning talk speaker. So maybe I'll take just the first one ‑‑ otherwise I'll let ‑‑

AUDIENCE SPEAKER: I have a very quick question. Andrei Robachevsky, Internet Society. Why didn't you use the full bogons list from Team Cymru instead of developing your own?

LEFTERIS MANASSAKIS: So, originally the idea was to use this list of prefixes. It has two issues for me. The first is that it contains only prefixes and not ASes, so it's not the full list that I wanted. And also, when I asked Team Cymru ‑‑ who are very good friends of mine, and I really like their work ‑‑ how they produce this list, I didn't manage to get an answer. So I could not use it for research, because I cannot support it. That's why I didn't use it.

DIMITRY KOHMANYUK: I think it's bad timing on our part because of the longer presentations, but I appreciate your research ‑‑ I know you have a product behind this. We still have one more speaker, though, that I'd like to hear. It's ‑‑
(Applause).

PETER THOMASSEN: Hi. Who thinks DNSSEC is easy? So, some people have misconceptions. I'm not telling you which ones ‑‑ I mean, which people. Whatever.

As you probably know, for DNSSEC you need to do two things: you need to validate signatures, but you also need to secure the delegations, because if they are not secured, there is nothing to validate. According to APNIC, the global rate of validation is 31%. I don't know whether that's of queries or of resolvers, but whatever, it's a decent number.

What do you think the fraction of secure delegations is? 2%? Who bids more? Okay, anyway, it's 7%, according to the source at the bottom. And the validation rate actually depends very much on the country: it's 31% overall, in Germany it's 70%, and, I don't know, the nerds are probably in the same ballpark. I was surprised by Russia, and Saudi Arabia is even at 99%.

Why am I telling you this? Because if the delegations aren't secure, there is nothing to validate, and there are only 7% secure delegations. At deSEC, we think that should be ramped up, and we get support from RIPE for doing that, which is great, and which is why we got invited to speak here.

I'll tell you a bit about our project. As you saw in your initial reaction, DNSSEC is too hard, and most people who try to use it know that, and that's why we try to make it easier. Our approach is to run a managed DNS hosting service ‑‑ think of it as Let's Encrypt, but much smaller, with much less funding, and not for TLS certificates but for DNSSEC. You can just create a zone with us and configure things ‑‑ I mean your records, the ones you want to put in the zone ‑‑ and you don't have to deal with the DNSSEC stuff. We try to automate that as much as possible. It's fully automatic.

And we support some modern stuff like DANE. We had a dynamic DNS playground before we started doing serious hosting. We released the service in April 2020. Since then it has grown quite a lot, and we are a RIPE member and an IETF contributor, specifically on DNSSEC automation stuff ‑‑ how do you get the DS records to the parent, for example? That's a big problem, and there is some work ongoing there.

And we have some costs ‑‑ server operations, the Anycast network, development, all of that ‑‑ and that's why we thank our supporters today, specifically RIPE.

This is our Anycast network. And much more interesting is our user interface and the REST API. The slide space here is limited, so I decided to make several points by showing mobile screenshots. So, the graphical user interface is mobile friendly. It's straightforward and reactive: while you type, things get validated. There is field‑level validation. For example, if you look at the right‑hand side, this is how you enter an RRset of type MX, for mail exchange records, and it's actually a hybrid between a text line and a multiple‑field form. So, for example, when you receive an MX record in an e‑mail and you are supposed to paste it somewhere, that often doesn't work when there are multiple fields. But you can still paste it here and the software takes care of it. So it feels like a text editor with lines, but it still has field validation. The page loads zero external resources, so you don't leak anything, and we don't have cookies, which is unique.

Then we have the REST API. It's documented, it has helpful validation, and it supports advanced things like atomic transactions with multiple record updates at the same time. It can do paging and token scoping and stuff like that. It also has MFA authentication.

So, about the RIPE community project fund: the question is, how do we use it? Primarily we use it for the server operations, which we need for the primary signer, and we have backup machines. Then there is the Anycast network, which we don't run ourselves, because we don't have the corresponding ops team, and not necessarily all of the knowledge to do it, so that's quite expensive. And we also try to push development, stuff like DoQ support and API improvements, to make token authorisation more granular and to let domain owners inspect their own stuff.

That's what we use it for, and we're very grateful that RIPE is supporting us in the current round.

This is it. Questions?

DIMITRY KOHMANYUK: Thank you. It's a great project. Please, folks, we have a minute or two, so please go to ripe86.ripe.net and rate every talk you have seen, or at least plan to see. That feedback goes directly to the presenters, so it's not just something that we collect and never use; every comment you write ‑‑ and you can write comments ‑‑ will be passed on by the Programme Committee. You can also apply to be a member of the Programme Committee; you still have an hour and a half, till 3:30 to be exact, so you can e‑mail pc [at] ripe [dot] net. I don't see anyone stepping to the mic. Any remote questions?
Okay, good, so we are done then. Enjoy your lunch, and don't forget we have another Plenary after this.

(Applause)