RIPE 86

Archives

25 May 2023

DNS Working Group

At 9 a.m.

WILLEM TOOROP: Is it 9 o'clock? One minute. Welcome to the DNS Working Group; this is the first time I will be presenting as the new co‑chair. We have a very densely packed programme. The first three presentations will all be surrounding, or related to, the DNS4EU project: the first one about DNS4EU itself, after that a Connectbyname presentation from Philip Homburg, then DNS multi‑signer models from Matthijs Mekking, the RIPE NCC update and, finally, a live demo of one of the more notable hackathon projects from the weekend preceding this RIPE conference.

Because we are so densely packed, without further ado I present to you Andronikos Kyriakou with the DNS4EU project.

ANDRONIKOS KYRIAKOU: I think we can start. Good morning, everyone. I am the technical consulting lead at Whalebone, and today I would like to open the discussion about DNS4EU: how we are planning to operate the service and actually protect 100 million users all around Europe. As you might know, we could be talking about this project for hours and hours, so during this session I will try to summarise the key points and our plans, but I will be more than happy to discuss more later during the day or in the evening.

To begin with, for anyone who might not be familiar with the project, here is an extract from the written call for proposals: the primary goal of the project is to build a European DNS resolution service which will improve resilience, protect privacy and foster cross‑disciplinary collaboration.

Furthermore, DNS4EU will offer protection not only against global but also local country specific threats.

As is highlighted in the last sentence, this is a key policy action. It was announced back in 2020, and its origin can be traced back to the fact that the vast majority of the resolvers available today are operated by non‑European entities, and more and more users tend to rely on those. So DNS4EU is here to address such consolidation and offer a resilient platform for secure and compliant DNS resolution.

The European Commission issued a call for proposals in the first month of 2022 and outlined that the horizon of the project should be three years. At Whalebone we were already operating such services, offering DNS security to millions of users, so our main focus was to build a strong consortium of members from different disciplines and different countries. The final shape of the consortium consists of 14 members originating from 10 countries around Europe.

The members are a mix of private companies, computer emergency response teams, NOGs, and national research and education networks.

In more detail, the members, apart from Whalebone, are cz.nic, the technical university in Prague, a Belgian law firm, deSEC, which is a non‑profit organisation, ABI Lab, the financial CERT in Italy, CERT Polska, and the National Cyber Security Directorate in Romania. We have a number of associated partners, such as the Ministry of Electronic Governance from Bulgaria, the national cybersecurity centre from Portugal, an anti‑virus company from Finland, and CESNET from the Czech Republic. Each member is adding its own expertise and contributing to the goals of the project.

What, however, are some of the goals and requirements?

Of course, the main focus is to ensure a reliable and efficient DNS resolution service. All the DNS4EU resolvers and the back‑end of the service will be located in data centres around Europe and will be bound to strict privacy standards. Compliance with regulations such as the GDPR and also national legislation is to be expected. At the very core, the DNS resolvers of DNS4EU will be performing DNSSEC validation and will support all the latest standards, such as DNS over HTTPS and DNS over TLS and eventually DNS over QUIC. Having said that, we are aiming to conform with all the best practices; we have experience already, but we are very much looking forward to the efforts of the DNS resolver task force and the upcoming presentation.

Diving a bit deeper, in this slide we can find a high‑level architecture of the project. The main idea is that, since DNS was built as a decentralised technology, this is the same approach we would like to follow.

In order to achieve that, we have adopted a distributed architecture combining Cloud‑based infrastructure but also offering on‑premises DNS resolvers to ISPs, Telcos or even government offices.

We expect this combination will provide the lowest possible latency, and this distributed nature will also increase resilience.

Basically, this is how we expect the system to be robust: it's not only about having one big central resolver, but about having multiple individual instances.

To take this discussion a step further, we have also identified several deployment options, which we will see in the next slides. The four key pillars of the project are threat intelligence, DNS for operators, DNS for governments and DNS for end users. In the first case, the cornerstone of the project is the threat intelligence, which will be generated from the project and applied to all the deployment scenarios we will see. This threat intelligence will be a product of research, and it will be based on the DNS4EU anonymised traffic as well as continuous information exchange.

Then, we will also focus on working with operators and offering them dedicated on‑premises resolvers ‑‑ in this case, the operators will be able to deploy the service either on their own IP addresses or utilising the DNS4EU Anycast IP address.

Furthermore, on a national level, we have identified that governments and institutions are going to play a major role in this project, so, following similar experiences from other countrywide protective DNS resolution services, we expect to offer a ready‑made solution covering their unique requirements.

Last but not least, the end users will be offered a public DNS service and an optional protective security/content filtering layer.

Let's take the individual layers one by one; I will start with the threat intelligence. This is a good starting point, as it is the common denominator for the service: all the other components will be utilising it.

At Whalebone we have been working with DNS security for the past few years, and we would highlight two key aspects: the accuracy and the coverage of emerging threats, but also the number of false positives. On a foundational level, we will cooperate with existing threat intelligence providers and feeds, and we will incorporate those in order to protect against global campaigns. Of course, the important differentiator of the project is the second part, which will be the regional threat intelligence exchange.

Our plan is basically to focus on individual Member States, target emerging threats such as phishing or other malicious campaigns, and block them almost immediately. In order to achieve that, we will cooperate with local CERTs and utilise existing or new information sharing channels. At the same time, we will use anonymised data from DNS4EU to improve the accuracy of the service.

When it comes to initiating these channels and establishing the threat intelligence information sharing, we are aiming to facilitate cooperation between CERTs but also commercial entities, such as banks or parcel delivery companies. What we see in the case of phishing, for example, is that the organisations being targeted are aware of it from the very, very first minutes, right? So what we aim to do is exchange this information as soon as possible, have this communication channel and information flow, and apply it immediately on the DNS4EU level.

Furthermore, moving to the next pillar: on the road to protecting 100 million users, we identified from the very beginning that Telcos would play a very important role in this. What we see as a major pain point for operators nowadays is the loss of control and visibility of the DNS traffic. Given that operators use DNS not only for its original purpose but also for signalling, this results in a loss of optimisation opportunities. Taking it from the other perspective, users are looking for a fast and transparent DNS service. With DNS4EU we aim to support both these angles: for the operators, we aim to offer an on‑premises DNS service which will be operated from their data centres, support all the latest standards, and come with a large number of integration options, so that we can cover their existing operations, monitoring and support.

For the users, one of the major benefits will be the low latency, another the privacy‑preserving nature of the service, but also the premium threat intelligence, which can optionally be applied.

The next pillar, DNS for governments, is based on a pattern that can be identified in recent years around the world: we see a number of public organisations and institutions which, although inherently complex, still remain under‑protected. In several cases, such as in the UK, Australia and Canada, this has been addressed, quite successfully if I may say, by introducing country‑wide protective DNS security solutions. The thing is that these solutions were built as turnkey, single‑purpose projects; in that regard, DNS4EU is aiming to offer a ready‑made product that can support individual countries or even particular organisations within those countries. We see that the Telcos could play a major role here, but we would like to be quite flexible in the architecture of this approach, which means that in each country we could have a combination of countrywide DNS resolvers, perhaps for all the organisations, and, where there is a very particular use case, we could go deeper and deploy dedicated DNS resolvers on the premises of individual organisations. We expect this multi‑tenant approach will allow for flexible and basically complete separation of roles.

Last but not least, we also plan to make a service available for end users. The architecture behind this option will utilise the DNS4EU Cloud infrastructure, which the consortium will be running, and several options will be made available to the end users: one would be a pure DNS resolution service, another a protective DNS service along with DNS4EU threat intelligence, and we will also make available an option for protective DNS plus content filtering.

What is quite interesting in this case is that we are inviting Telco operators to take part in it: they would be able to host their own local point of presence and, in this way, advertise the DNS4EU IP addresses as close to the users as possible.

When it comes to the timeline of the project, here is a high level overview, so let's walk through the plan.

This year, in 2023, we aim to finalise the technology and the security design and kick off the implementation of the back‑end of the service. We have already started discussing the options with Telcos and governments, and we expect to do so even more intensively in the next months; later this year we will be kicking off the research activities in order to have the first results.

In 2024 we will be investing our efforts in setting up the information sharing channels, either opening new ones or building on existing ones. At the same time, we expect all the legislative and security requirements to be met during this year.

When it comes to 2025, we would like to focus on the end users and start attracting them to the service, while scaling the deployment where needed.

In 2026 and beyond, we expect to invest in the continuous improvement of the project; basically, at that point we will be iterating on the existing infrastructure.

To conclude and summarise the key points: DNS4EU aims to provide a resilient, privacy‑focused DNS resolution service for all European citizens as well as, and this is very important, to generate highly actionable threat intelligence. The service will be offered not only to the public, to the end users, but also to governments, institutions and operators.

As has been highlighted during the presentation, but I think it's worth noting once more, one key aspect of the project is the collaboration and the stakeholder involvement. So I'm very much looking forward to your questions and the discussion, either now or later today. Thank you very much.

(Applause)

MORITZ MULLER: We have one question online; he asks: Will blocking/filtering be universal across Telcos, governments and users, or customised for each group or customer?

ANDRONIKOS KYRIAKOU: This will depend on the individual case. We will have some local threat intelligence available for the governments, and when it comes to the Telcos there will be different options and the user will be able to select which one to choose.

Maxim: In my life, I saw two major Internet filtering systems emerge, the first in Russia, the second in Ukraine. And from my point of view, the sole purpose of such systems is Internet censorship. Maybe you have some safeguards in your system not to go this way, some protections or not to...

ANDRONIKOS KYRIAKOU: That's quite an important topic; it's a question we have been challenged with quite a lot of times, and we are aiming to implement safeguards. Basically, the European Commission will not have any access to the configuration of the system; the system will be run by the consortium, which consists of members from multiple independent countries, right? So we don't see that this will be the case, that we are building a system for censorship.



AUDIENCE SPEAKER: Michele: I didn't hear anything about exchanges. Are you looking at placing DNS nodes at internet exchanges to make them available universally?

ANDRONIKOS KYRIAKOU: At this point in time, we don't have that under consideration, but again, we haven't finalised the set‑up, right, so if that is of interest I will be happy to discuss it.

AUDIENCE SPEAKER: You should consider it.

ANDRONIKOS KYRIAKOU: Thank you.

PETER HESSLER: Internet user. A common problem I have seen with a lot of so‑called adult filtering systems in many regions of the world is the removal of anything that is not for so‑called straight people: removing a lot of information about puberty and how the body grows. What sort of protections do you have to prevent legitimate information about human rights from being filtered as adult content?

ANDRONIKOS KYRIAKOU: I would start this discussion from the fact that this adult filtering will be completely optional; we will not impose it on the end users. So far, at Whalebone and with the rest of the consortium members, we have been building this database in order to avoid such situations; we are refining the use cases, and we will make it possible for users to actually report such situations, so we will not be blocking legitimate content. Of course it might happen, but that is the case with all such systems, and we will try to have the tools in place to avoid it to the best extent possible.

WILLEM TOOROP: We will have to cut the line here.

MORITZ MULLER: Online: Is there an official website for the DNS4EU project?

ANDRONIKOS KYRIAKOU: Yes, right now the Whalebone site... has the latest information; I will make sure to put a link into the slides. And eventually we will build a more dedicated website.

Michele: I see "co‑founded by the European" ‑‑ co‑funded by the European Union. I know the EU has the call to finance the initial set‑up, but how are you looking at financing this in the long run?

ANDRONIKOS KYRIAKOU: One of the requirements of the service was to be commercially viable, right? That's why the four pillars of the service are built around this premise. First of all, when it comes to the service for the operators, this will be a commercial service: the operators will be getting the service and all the benefits of it, and the same will apply for the DNS for governments.

PETER KOCH: The question might be a bit premature, but it isn't about filters. Under NIS 2 this is going to be critical infrastructure. Now, you mentioned in your presentation that this involves independent organisations, I think you said. Can we expect that it will be covered by a single regulator in one EU country, or what are your thoughts about that? The background of the question is that this is an interesting experiment, because the Commission is now sitting on both sides of the game here.

ANDRONIKOS KYRIAKOU: Yes, and I see we are running a bit out of time, but to the best of my knowledge the consortium will be operating from the Czech Republic, as we are led by Whalebone, and we will fall under the regulations in the Czech Republic.

PETER KOCH: Thank you.

Sebastian: What could be the role of the ccTLD manager or registry? Thank you.

ANDRONIKOS KYRIAKOU: You mean of cz.nic or in general?

AUDIENCE SPEAKER: This one is a member of your consortium; what about the others from Europe?

ANDRONIKOS KYRIAKOU: Right now, we don't have a particular role for the ccTLD managers, but we are really open to discussing that.

(Applause).

WILLEM TOOROP: So, during the last RIPE meeting there was the establishment of the best common practice task force for resolvers, and Shane is going to tell you how that has all been progressing.

SHANE KERR: That's correct. If you were at the BCOP meeting on Monday evening, this is going to look very familiar to you; this is basically the same presentation. The idea, again, is to give an update about the status of the task force.

So, I work for NS1, a DNS company recently bought by IBM. I was one of the co‑chairs of the DNS Working Group when this work was discussed, and that's how I got roped into helping out with this effort. What is this task force? This is the mission statement from the web page; I just copied and pasted it. Basically, the idea is to look at what the current best practices are for running a DNS resolver, mostly focused on public resolvers, but we also made a decision to support recommendations for any resolver; that's basically the idea.

So, what is the status of this work? We started the task force in between RIPE meetings: the RIPE Chair agreed to found the task force at the last RIPE meeting. We had a bit of a slow start trying to figure out which people would be involved with the work; that's been sorted out, we have had a few online meetings, and we finally met in person, although only four of us are here at this RIPE meeting. So what have we actually done?

Basically, we know what we are going to do: we have an outline, a long list of topics which I will go over a little bit later, and then we have great hopes and dreams for doing the rest of the work, turning this list of topics into actual structured recommendations.

So, that's kind of what ‑‑ who we are and where we are at. I will give a short bit of background to explain how we got here.

Now, given the information presented in the last slide, I don't need to talk too much about the fact that there are public resolvers, or about the observations the European Commission made about that and the concerns they had. So why is RIPE doing this thing, when obviously there are very serious and funded people taking coordinated and well‑organised action on this?

As you can imagine, the community wasn't super thrilled with the idea of a resolver being controlled by people who maybe weren't familiar with the RIPE community and didn't have previous history with it. There's also, in the Internet community at large and especially in this region, a history of distrust of governments in general, a kind of libertarian bent to the way things run; there is this feeling that we are not sure we want this to happen in a way we have no control or input on. There wasn't a lot of appetite for the RIPE community putting forward a proposal for running a European Commission funded public resolver, or recommending that a subgroup of RIPE community members do that. So, instead, Joao Damas talked to a lot of people, and his proposal was: let's make a set of recommendations. We will make a document which says, for anyone who is going to be running such a service, these are the ways we think you should do it. That's kind of where we are at.

What philosophy and approaches did we decide on for this? The first thing is that there's a tonne of existing documentation: this includes not only things like the IETF RFCs and best common practice documents, it also includes quite reasonable lists that other people have put together ‑‑ security organisations, both incident response teams and collections of security folks, and all kinds of stuff. Our goal is to curate and collect that and basically provide pointers, with an explanation of what we think you should get out of those other documents. That's probably going to be most of this document: when you are thinking about how to set your packet sizes to minimise fragmentation, here is a great document explaining why you want to do that, and so on.

Another philosophy I have tried to push on the group is to say it's okay to have opinions. In previous task forces we have tried very, very hard to take the neutral and open way of the RIPE community and just reflect that in the document. However, there are a few things where we think it's okay to be a little more assertive, and on my next slide I give a couple of examples of that. For example, we are going to have some documentation and talk about centralisation: one of the motivations for some of the people in this task force is that they are concerned about the increasing centralisation of services and power on the Internet. So we think it's okay to push for distributed models, self‑hosting and that kind of stuff. Another example of an opinionated thing we are going to say is that Open Source is usually better than proprietary software. There's nothing wrong with proprietary software, in many contexts it's great, but for something like this we think it's better to have Open Source, and there are a lot of reasons we can go into. These are examples of the kind of things where we are going to say: hey, this is the right way to do things. This is not something the entire community is going to agree on, but that's okay; a little bit of controversy sparks interest, so that's good.

We have a whole bunch of topics; I just pulled this from our online documentation, and there's a link in the slides which you can download and follow. Currently we are using GitHub for our work. My thinking was that it would be a good way to make sure that not only the resulting document but also the discussion around the text that goes into it could be historically archived; also, I'm a programmer, so I like GitHub, and I didn't want to throw everything into a Google document. I am not sure that approach is working out; it turns out people who aren't programmers aren't super happy with it.

This is the list of topics. Each of these is a very high‑level area, and there's going to be a whole lot of text under each one. You can go to the link and see where we are at with all that. Could we do less than this list? Of course, but I think we need to consider the possibility that each of these areas is going to be very important for somebody running a public resolver.

There is some question as to whether we expect this document to be used for auditing at a later time ‑‑ could you ask, are you meeting all these requirements? ‑‑ in which case all of this will have to be something that can be measured and tested. This has some overlap with KINDNS and the MANRS project: can you check it? We are not focusing on that now, but it is something slightly under consideration. Here is the link; you can see a fuller list, and you can click through and see where we are at with that.

What are we going to be doing next? We may revisit the group of people that's in the task force. It's actually quite a large group for a task force ‑‑ I think the idea was to get a wider set of backgrounds and specialties ‑‑ but there's probably a perfect size for a task force, depending on what you are doing, and it's probably a bit too big now. So for people who don't feel they are personally super invested in this effort, we may want to ask whether we should find someone else who has time. We need to flesh out the text. There was a hopefully humorous suggestion that we use ChatGPT; if you are interested in helping with that effort, see me after the meeting.

We want to publish a draft RIPE document. The original timeline for the task force was to have it published a month before this meeting, which, considering we didn't start until after the last RIPE meeting, was basically a fantasy. But our next step is to publish a draft RIPE document and elicit feedback, especially from the RIPE community. We are going to do that before the OARC meeting in September, so we can talk to researchers and other people there. After that, at some point it's going to become a RIPE document, and then the task force's work will be done. That doesn't mean the work is done at that point: we will hopefully have a set of reasonable recommendations, and it will be up to the DNS Working Group or individual members to decide what to do next. I am hoping there will be a conversation with the DNS4EU folks as well as other operators ‑‑ Quad9 is based in Switzerland, so they are within our region ‑‑ and there's no reason we can't talk to OpenDNS and Google and Cloudflare and say: hey, could you look at the transparency section, could you look at the recommendations for technology, and so on, and maybe adopt them.

That's it.

(Applause)

MORITZ MULLER: First a question online from Brett: Some of this work seems very similar to the KINDNS work at ICANN; will you be working with them?

SHANE KERR: I don't currently have any plans for a liaison effort. Of course we are going to steal as much as we can from their recommendations if it makes sense. I don't know too much about that work, but I have heard that various operators are nervous about it because it comes from ICANN, which is probably unfortunate, but I think that's the reality of the situation, and I would hate for us to inherit trust issues and concerns by being too closely aligned.



BENNO OVEREINDER: A little bit similar to the question before: you mentioned the DNS Working Group could do review, but, more proactively, how can the DNS Working Group help you right now? Feedback and review of a document is part of it, but maybe also writing the document; you need input already from the community. I was also thinking of collaboration with other organisations like FIRST ‑‑ they also have a DNS track, or working groups I think they are called ‑‑ or maybe the Global Cyber Alliance; there are people here in the room. I don't know if they are interested, or maybe they have experience or want to proactively contribute; is there room for the task force ‑‑

SHANE KERR: FIRST, definitely; they have some public documentation which is really interesting for us, and we are going to try to point to that and filter out what we think is useful there. We haven't considered more radical styles of creating a RIPE document. Maybe that's reasonable. The thinking was to do this in traditional task force style: a group of people with grey hair would get into a room and think great thoughts, out would pop a document, the community would say "this is terrible, I have grave concerns", and we would end up with something good at the end. Maybe, given the wide range and reach of topics, that doesn't make sense. It's on GitHub, so in principle anyone can make a pull request and suggest text, but that doesn't happen by itself; maybe reaching out and asking people to contribute is a better way to do that. I know that was a very political answer, sort of saying "that's an interesting suggestion and we will consider it", but that's actually my answer.

WILLEM TOOROP: Any more questions for Shane? If not, thank you.

(Applause)

The next speaker will be the only online speaker in this session, which is Geoff Huston all the way from the other side of the world, I think.

GEOFF HUSTON: Yes.

WILLEM TOOROP: We can hear you, Geoff. You can go ahead.

GEOFF HUSTON: Thank you very much. I am going to speak again on this topic of DNS4EU, looking at it from a measurement perspective, because sometimes you get this ideal opportunity to look at something before and after its introduction, to actually see and measure whether the promises are realised or not, and this whole initiative is an ideal opportunity to measure the landscape. So the kinds of questions are pretty obvious: the first is, where are we now in the EU in terms of who uses what resolvers? That's what I'd like to report on today; I won't take much time.

And then some comments about how it would be easy to measure the impact of DNS4EU, and how it could be hard or more challenging. It really depends on the deployment choices for DNS4EU; those choices are going to answer that kind of question, and I would encourage the folk in DNS4EU to think hard about the sorts of choices that make measurement easy in this area.

Now, there are many ways to measure the DNS. What we do at APNIC, myself and Joao Damas, is use an outside‑looking‑in methodology: we run a large‑scale online ad campaign, and we seed end users with DNS labels and get them to fetch them. These labels have a number of elements in them: they are one‑off use, there's no caching involved, and they are going to be passed to our authoritative DNS server; in essence, there's enough encoding in the DNS label to match the query against the user, against the ad that was impressed. So when a certain DNS resolver asks us a question, we have some idea of which user was originally seeded with the URL that contained that question. We can then match IP addresses all the way through and find which users use which resolvers.
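As a rough illustration of that mechanism, here is a minimal sketch in Rust; the field layout and measurement domain are hypothetical, not APNIC's actual encoding. The essential property is simply that every ad impression gets a name that has never been queried before:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Minimal sketch: each ad impression gets a unique, never-cached DNS name,
// so a query arriving at the authoritative server can be matched back to
// the user who was seeded with it. Layout and domain are hypothetical.
fn unique_label(experiment_id: u32, user_token: u64) -> String {
    let millis = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before Unix epoch")
        .as_millis();
    // encode user token, timestamp and experiment ID into the query name
    format!("u{user_token:x}-t{millis:x}-e{experiment_id}.example-measurement.net")
}
```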

Interestingly, if you think about the relationship between a user and the resolver, I think there are four buckets between users and the resolvers they use. The first of these is: you are using an open resolver, whether it's Cloudflare, Google, Quad9 or OpenDNS; they publish a list of IP addresses that their server engines use when making queries to authorities. It's not the front‑end address ‑‑ you never see 8.8.8.8 querying at the back ‑‑ but you need to know what server addresses it uses, so you can tell it's an open DNS resolver. Secondly, and most prevalent, the resolver could be in the same origin AS as the user: it could be the user's ISP. There's a smaller class where the resolver geolocates to the same country but not necessarily the same AS; it happens that some ISPs put their resolvers in a different network than their users, so that could be the case, or some other mechanism.
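A hedged sketch of that four‑bucket classification, in Rust; the inputs (origin ASNs, geolocated country codes, a set of published open‑resolver ASNs) are assumptions about what the measurement pipeline has available, not APNIC's actual code:

```rust
use std::collections::HashSet;

// The four buckets described above; names are illustrative only.
enum ResolverBucket {
    KnownOpenResolver, // back-end address belongs to a published open resolver
    SameAs,            // same origin AS as the user: the ISP resolver
    SameCountry,       // same geolocated country, different AS
    OtherCountry,      // geolocates elsewhere (or the geolocation is wrong)
}

fn classify(
    resolver_asn: u32,
    user_asn: u32,
    resolver_cc: &str,
    user_cc: &str,
    open_resolver_asns: &HashSet<u32>,
) -> ResolverBucket {
    if open_resolver_asns.contains(&resolver_asn) {
        ResolverBucket::KnownOpenResolver
    } else if resolver_asn == user_asn {
        ResolverBucket::SameAs
    } else if resolver_cc == user_cc {
        ResolverBucket::SameCountry
    } else {
        ResolverBucket::OtherCountry
    }
}
```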

And curiously, sometimes they just geolocate to a different country; sometimes the geolocation tool is wrong, and sometimes the resolver really is in a different country. This happens.

The next thing is query behaviour. You might think you have given the user one DNS label to query, the user's stub resolver has handed that into the DNS cloud, a recursive resolver is picked, and it makes one query to our authoritative. Nice idea, but, on the whole, on average we see about 1.5 to 1.6 queries per unique label. So, almost immediately, your query starts to fan out and multiple resolvers take on that query. Interestingly, the one who answers first is important: the one which gets in the first query is the one with the first answer, and that's the one the user is going to believe.

The second point is that, even with one query put in, all those other recursive resolvers got to see it; so that's the second class: a single query actually pulls in multiple recursive resolvers, which get to look over the user's shoulder and see what you are doing.

Last but not least is the situation where the name doesn't resolve properly or takes too long; the DNS, being an awfully persistent protocol, simply asks more recursive resolvers, and when you deliberately create either non‑answering names or SERVFAIL, you tend to flush out a whole lot more resolvers which may get to see the user.

So these are the results we get applying that methodology, just looking at the first query. This is from February at the start of last year, so around 14 months or so, until today, and there's a fascinating story inside there: Google, which is in orange, peaked a little over a year ago and is now declining; its use has come down for this first query. The same‑country share is growing ever so slightly, and the same‑AS share, the ISP resolver, has climbed considerably, almost as the direct opposite of Google.

So that's the first query.

If we look at all queries ‑‑ in other words, sending back SERVFAIL and seeing who is in your backup lists ‑‑ Google is very commonly used as a backup, as the resolver you use when nothing is answering: let's just ask Google and see if there really is an answer. So the hit rate is higher, but it's the same kind of pattern: it was bigger last year than this year. This year, it's smaller.

Let's now look at just the two major open resolvers in use in Europe, Google and Cloudflare. In Google's case, usage peaked at around 27% of users in around March of last year, and ever since then its market share within Europe has been declining, to around 9 to 9.5% these days. Cloudflare peaked a little later, around mid last year, but yet again it's been declining, and in recent weeks the decline is even steeper; the visible use of Cloudflare is really quite low.

So, what we are seeing is that the ISP resolver is very, very strongly dominant, at around 80‑odd percent. Google's DNS has certainly 80% of the open resolver market, but that market itself is less than 10%, and it has a strong enterprise signature: usage is higher on weekdays than weekends, so what you are seeing is that the folk who don't wish to use the ISP defaults tend to be on the enterprise side, where use is more intense on weekdays than weekends. The big lesson is that, even today, users don't stray away from defaults, and these days the ISP resolver is predominant across the European market.

Now, you kind of go: is that Europe, or is this just what the world does? Interestingly, Australia and New Zealand have the lowest use of open resolvers, at around 7.8%, and eastern Asia, which is mostly China, is also quite low. Then there's Europe, and then North and South America, between 15 and 20%. Interestingly, across Africa the use of open resolvers is far higher, predominantly, I think, because Google's public DNS offering is incredibly well used across Africa ‑‑ it may well be reliability or speed, there may be a whole bunch of factors ‑‑ but Google's predominant global presence is actually sitting inside Africa, not inside Europe or America or Oceania. Is Europe different? Not really.

How big could DNS4EU be if it is really aiming at replacing some of these offshore open resolvers, the Googles and the Cloudflares, with a local offering from a European perspective? With less than 10% of users using them, most of those enterprise, and the rest directing their queries to the local ISP, it would seem that the potential for change in Europe is actually not all that great to start with. This is not a significant user base any more; Google and Cloudflare do not have a strong European presence any more.

So let's now get into how we can measure it. Suppose DNS4EU used the same kind of mechanisms as the existing open DNS resolver services ‑‑ and I'm not talking about the front‑end address, all 9s, 10s, whatever, that doesn't matter ‑‑ it's the addresses they use at the back end. If they use dedicated addresses, or a dedicated autonomous system, for each of the instances of their server engines, then it is quite easy to see when people use it; as long as there are distinct service addresses, that's going to work and we can measure it.
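In code, that attribution step is just a prefix match against whatever back‑end address set the operator publishes. A sketch, assuming the `ipnet` crate for CIDR handling; the published prefix list itself is hypothetical:

```rust
use std::net::IpAddr;
use ipnet::IpNet; // assumed dependency for CIDR containment tests

// If DNS4EU publishes the back-end prefixes its resolver engines query
// from, attribution at the authoritative side is a simple containment test.
fn is_dns4eu_backend(src: IpAddr, published_prefixes: &[IpNet]) -> bool {
    published_prefixes.iter().any(|net| net.contains(&src))
}
```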

If, on the other hand, DNS4EU takes a software‑based, or software plus platform, path and embeds it deeply into the ISP's recursive resolver service, becoming the replacement DNS engine using the ISP's IP addresses to perform its DNS queries, it's going to be invisible. It would be so deeply embedded in the ISP infrastructure that no one externally could see whether it's being used or not, and that would certainly make it difficult to measure the success or otherwise of this initiative over the coming years.

So, that's kind of what I said: if it's going to use a constant service address in Anycast and dedicated back‑end addresses, then we can measure this much the same as all the other open DNS resolvers. With deep embedding, it becomes very, very difficult to find out that it's being used from an external standpoint, because it is the ISP that is sending you the DNS query; you can't readily tell which engine is being used as a result of that query. And I think that brings us close to the timetable.

I will happily answer any questions if there are any, otherwise I will hand it on to the next speaker, thank you.

(Applause)

MORITZ MULLER: We have a question online from Markus; he asks: Maybe I missed it, but how can the strong decline in usage of Google and Cloudflare from last year to now be explained?

GEOFF HUSTON: That's a much harder question. I just told you it declined; I didn't say why, because I don't know. You'd need to go and ask a few hundred million EU users why they moved. I really don't understand that movement, no, sorry. Maybe others in the room have a better idea.

PETER HESSLER: As you know, many of the modern browsers, Chrome and Firefox specifically, have certain settings that will override and ignore the system settings, and the web browsers will choose their own preferred DNS servers. Are you able to measure whether the DNS resolvers you are seeing in your experiments are chosen by the web browser, or whether they are system‑configured DNS resolvers?

GEOFF HUSTON: Let me quickly flip to this slide and make the observation that Cloudflare did team up with Firefox in their DNS over HTTPS work, so if Cloudflare were rising, then Firefox might well be implicated; but that's the opposite of what we see for Europe ‑‑ and don't forget this is only a European count, not the entire Internet ‑‑ and that seems to point to the fact that the partnership between the browser and some of these open resolver engines is not exactly getting a whole heap of users. Some of this stuff, I think, is actually bigger in the slideware than it is in practice. We were also looking at the use of Apple's Private Relay, because that will affect the use of DNS and because Apple publish the addresses they use for Private Relay; its use is really, really small. So again, a lot of noise, but the substance is a lot, lot lower. I suspect, in this case, the wrapping up in the browser, Peter, doesn't happen as much as folk would or would not like to see. It isn't that common, based on this kind of evidence.

PETER HESSLER: Okay, thank you.

WILLEM TOOROP: Thank you, Geoff.

(Applause)

Next speaker is Philip Homburg who will be telling you all about Connectbyname.

PHILIP HOMBURG: So, surprisingly, I'm not going to talk about DNS4EU, although apparently that's the topic here; I am going to say even less about DNS4EU than Geoff did before, and in this presentation that is literally true. I want to talk about software that originates DNS queries and, in particular, I want to look at setting up a TCP connection or a TLS connection: what we do there and what we can do in the future.

In the good old days it was really simple: you called gethostbyname, you got a list of addresses ‑‑ only IPv4 addresses, there was no IPv6 ‑‑ you looped over that list trying to connect, and you were done. Sometimes there were applications that would only try to connect to the first address, and that was frowned upon.

Then we got IPv6, and it got more complex, but fortunately we got getaddrinfo, which abstracts a lot of that away. That, however, surfaced another issue that was there before: if you try to connect to an address that is not reachable, it may take a long time for connect to time out, which causes unhappy people; Happy Eyeballs was invented to make them happy again, but it's hard to implement in applications, so basically web browsers do it, not much else. But this is not the IPv6 session ‑‑ that will be next ‑‑ so I will not talk a lot about that part of the problem; I will focus more on the DNS aspects. In DNS we have stuff like DANE, which few applications do because it's hard to do in an application, and we will get the HTTPS/SVCB records, which require a lot of work in the application. So how do applications do all that?

I guess SRV could have been moved into getaddrinfo, but nobody did that, so it is mostly ignored. So the thing that came up is: if we want to make life better for applications, can we have a library call? Somebody coined it "connectbyname": you give it a host name and a service, and it gives you back a socket. The first version is the typical Unix way of doing it: a socket file descriptor is an integer, and you get minus one on errors. But you can make it a little bit more complex and say it should get an explicit context with some configuration options, separating the configuration from the call and stuff like that, and still have a relatively simple call.
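To make the call shape concrete, here is a minimal blocking sketch in Rust; it is purely illustrative of the signature, not the prototype's actual C API, and it has none of the Happy Eyeballs, TLS or DANE machinery discussed next:

```rust
use std::io;
use std::net::{TcpStream, ToSocketAddrs};

// Hypothetical blocking connectbyname: resolve host plus service and
// return the first address that accepts a TCP connection.
fn connect_by_name(host: &str, service: &str) -> io::Result<TcpStream> {
    // toy service lookup: numeric ports only
    let port: u16 = service
        .parse()
        .map_err(|_| io::Error::new(io::ErrorKind::InvalidInput, "bad service"))?;
    let mut last_err = io::Error::new(io::ErrorKind::NotFound, "no address found");
    for addr in (host, port).to_socket_addrs()? {
        match TcpStream::connect(addr) {
            Ok(stream) => return Ok(stream), // first success wins
            Err(e) => last_err = e,          // remember the failure, try the next
        }
    }
    Err(last_err)
}
```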

If we stop here, there are two things that we didn't do. One is that many applications these days want some sort of event‑based system; you don't want a blocking call any more. The second is that 20 years ago giving back a TCP socket was maybe a good idea, but these days applications need TLS; if we stop here, the application still has to do all the TLS stuff, and we did not gain that much.

If you play with that for a while ‑‑ and unfortunately in the time I have here I can't go over all the options ‑‑ it gets a bit more complex, but basically you have a bunch of callback functions at the top that you pass to the event system, and I came up with some configuration options. I focus here only on DNS; if you want to do Happy Eyeballs you would have parameters for interfaces and stuff like that.

But for DNS, you may want to say: okay, I want a DNS transport that has, for example, authenticated encryption; I may want to connect to a very specific resolver because I trust that one. And for my connection to the resolver ‑‑ because people invented DNS over HTTPS ‑‑ that imports all kinds of things like ALPN and other odd details we may want to configure; but of course, in this case, the defaults should be sensible so you can leave all of this out.
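As a sketch of what such a context might carry ‑‑ the field names here are invented for illustration, not the prototype's API ‑‑ the DNS side of the configuration could look like this:

```rust
// Illustrative DNS options for a connectbyname context, mirroring the
// ones mentioned above; all names are hypothetical.
#[derive(Default)]
struct DnsConfig {
    require_authenticated_encryption: bool, // e.g. only DoT/DoH with a verified cert
    resolver: Option<String>,               // pin one specific, trusted resolver
    doh_alpn: Vec<String>,                  // DNS-over-HTTPS details such as ALPN
}
```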

You initialise the configuration stuff, you call connectbyname, which just tells you that it has started, and then, if you use libevent somewhere in your application, you call the event‑based dispatch to get it going; the rest works with callbacks.

So that's all fine. Here I have a bit of background on the project: we got funding from the NLnet Foundation and we have built a prototype ‑‑ there's a URL at the bottom if you want to play with it. It is asynchronous, it does Happy Eyeballs and DANE, and it's on top of getDNS. One thing I forgot to mention on the previous slide: if you use libevent, there is a bit of buffering code that can transparently give your application a TLS connection, so you can roll in DANE now.

What we found, if we zoom into the DNS, is that with all of the DNS transports an application has to talk to DNS over an increasingly large number of transports, and every application probably uses a different library, so you get a huge mess of all the different options that applications are trying to support, and it also pulls a huge amount of code into every application.

So, we said okay, the existing way to deal with that is to have a local proxy; the application can just do UDP on port 53 and the proxy will handle all of it. But now the application has completely lost any control over what happens upstream. So we said: if we define a new DNS option, then we can manage that. In the interests of time, I am not going to say a lot about this; there's a draft for it, and contact me if you have feedback on the option.

Now for a bit of reflection: the connectbyname call is so simple that other people must have thought about it; there must be people who have done interesting work in this area, and the most interesting one that I know of is what Apple is doing. I am not a programmer for Apple devices, so I don't have first‑hand experience; this is just from the presentation from last year, linked at the top of the slide. Basically, Apple has a language called Swift, which hardly anybody outside Apple is using, and a nice bit of code; it's very good for concepts, but it's very limited to a closed ecosystem. What I find interesting is that if you have a more modern language, you can have the callbacks essentially inline: a block of code is right there that will be executed in the callback, which is not something you can do that nicely in C.

And then they have this pattern: they give some parameters first, then install the callback later, and then have a start call, which is different from the typical pattern you see in event‑based code.

Apple has a lot of experience dealing with mobile devices, so they are explicit about what happens if a network interface comes and goes; but the thing I was interested in is these kinds of procedures and how you add them if you have a language that supports it. Which brings me to the final part of this rather rushed presentation: at NLnet Labs we have a lot of experience with Rust, because the RPKI tools are in Rust, and we are trying to move all of the new DNS work we are doing to Rust as well. So we have basically stopped the C implementation of Connectbyname, leaving it as a prototype, and we are now building library code. As a very simple proof of concept, I wrote up how you do these things in Rust: at the top is a function that sends an HTTP request and gets the result, and if you put async in front of it, Rust will take care of the hard work (a rough sketch of that shape follows below). If you want to do the same kind of callbacks Apple does, you can also do that in Rust with some effort. That's the end of my presentation.
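The actual slide code is not in the transcript, but the async shape described looks roughly like this sketch, assuming the commonly used tokio and reqwest crates:

```rust
// Sketch only: name resolution, connection set-up and TLS all happen
// behind the .await points; the function body stays one line of logic.
async fn fetch(url: &str) -> Result<String, reqwest::Error> {
    reqwest::get(url).await?.text().await
}

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let body = fetch("https://example.com/").await?;
    println!("fetched {} bytes", body.len());
    Ok(())
}
```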

WILLEM TOOROP: There is no room for questions so I suggest we take the questions to the hallway after the session.

(Applause)

Matthijs Mekking from ISC will be presenting on multi‑signer solutions.

MATTHIJS MEKKING: I am going to tell a little tale. Two years ago I was invited on ‑‑ and my input there was: this works in BIND, great, yeah, we are done. And it works, but also it doesn't; there are two things that don't really work that way, and we will get to those.

First of all, this is not a "DNSSEC is hard" story. Yes, I am describing some weird scenarios where things go wrong, but in the majority of cases, if you are doing DNSSEC signing with a policy, it just works. Multi‑signer is a new set‑up; you find new things, and it's not a common set‑up right now.

Multi‑signer: who doesn't know what it is? A few folks; I will go over it, then. You use multiple DNS providers because you want high reliability: maybe your zone is very important and you don't want single points of failure, so you have another provider ‑‑ two or more, even. Some of these providers may have their own signing solutions or do some trickery and can't rely on regular zone transfers, so you have two signers that both sign independently; that's why we have the multi‑signer model. It can also help with a smooth provider transition, where you want to move from one provider to another without going unsigned. All of this is described in the RFCs listed here; basically, the RFC describes two kinds of models. Forget about model 1 right now; model 2 is the more common one, where each provider has its own unique set of keys, and it is also the one that works best with BIND ‑‑ the other one is not possible in BIND right now.
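As a minimal sketch of what model 2 implies for the DNSKEY RRset ‑‑ strings stand in for key RDATA here, and this is an illustration, not MUSIC's or BIND's actual code ‑‑ each signer publishes the union of everyone's public keys while signing only with its own private keys:

```rust
use std::collections::BTreeSet;

// Model 2 in a nutshell: every signer publishes the union of all
// providers' public DNSKEYs, so validators can follow any signer's
// chain, but each signer signs the zone only with its own keys.
fn published_dnskey_rrset(
    our_keys: &BTreeSet<String>,
    their_keys: &BTreeSet<String>,
) -> BTreeSet<String> {
    our_keys.union(their_keys).cloned().collect()
}
```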

Our documentation currently says you have to use "auto‑dnssec allow". When you have multiple providers you have to coordinate key rollovers, but you still need to create key files for the other providers' keys, and "auto‑dnssec" is marked as deprecated, so we want to make this work with dnssec‑policy as well.

From the multi‑signer project, these are a couple of capabilities that a server has to support, and with an update BIND is able to do that. But this is not the end of the story, as I said at the beginning, because there's a draft in the IETF which describes all kinds of use cases you have to follow: for example, when a provider is joining a multi‑signer group or leaving it, or what happens when there's a key rollover ‑‑ what coordination steps you have to follow. It's all described in these draft documents.

Funnily enough, there's also a proof of concept implementation in this multi‑signer project. It's called MUSIC ‑‑ I love to talk about music and take pictures, but in this case it's a multi‑signer controller. You can use it to test whether your servers follow these use case scenarios correctly, and there are actually two scenarios implemented right now: where a signer joins a group, and where it leaves. The key rollover scenarios have yet to be implemented, but I thought I would give it a try anyway with BIND. So, inspired by the tool, I set up three zones: "pop", which is a simple zone; "punk", which does inline signing ‑‑ I will get into what exactly that means in a minute; and "pirate metal", which is a bump‑in‑the‑wire scenario.

The goal is to add a signer to that signer group, and we have MUSIC, the multi‑signer controller, doing queries to these signers, doing updates to the signers, updating the DNSKEY set, etc.

So let's dance. Let's look at that first zone, the "pop" example. Basically, the signer in this case maintains one zone, adds the DNSSEC records there, and does regular signing.

These are the steps listed in the draft from the IETF. Don't read this; it's a long list of steps you have to follow, but basically it's synchronising the DNSKEY sets and waiting out TTLs so these things can reach caches, etc.

If you translate that to the MUSIC tool: it has a client tool where you can say "step FSM", which means: try to move to the next step. I did this on the pop example and everything worked fine: I saw updates coming in, I saw the CDS sets being synchronised, and I saw it hold on the step where it had to wait for the TTL. So all is good, it works ‑‑ but there were some quirks.

You can look at the logs here ‑‑ it's an excerpt of the log ‑‑ and it actually says that, after updating the DNSKEY set with the keys from the other provider, it tries to find the private key material, because that is how you could trigger key rollovers in BIND: you sort of trigger "I need to sign this zone with these new keys". But since the keys are from the other provider, we don't have the key files. It still works, though, because we have our own keys, and they are of the same algorithm, so we do have keys we are able to sign the zone with; it ignores the ones whose key files are absent and, yeah, it works.

But it sort of feels like that is not intentional, so we want to put a fix in place that changes how we determine which keys need to do the signing: we won't be looking at the DNSKEY RRset but at the key files, because you need those anyway if you are going to sign a zone.

Second issue: the tool was done, but some time later our internal key manager sees in the state file: hey, this key needs to have a DS, so a CDS needs to be published; I am going to put in that record ‑‑ for our key, not for the other provider's, because we don't know about the state of that one. You can see this can be problematic for CDS scanners: one sees a CDS is missing, and the parent might think: I need to remove a DS record. And once the MUSIC tool sees that the DS records in the parent are in sync with the CDS records, it removes them all from all the signers. A work‑around could be to keep publishing the ones that are in sync with the parent, but we also added a fix so that you can actually disable CDS and CDNSKEY automatic publication.

Now for the fun part: inline signing. If you enable this in BIND, you maintain an unsigned version of the zone and a signed version of the zone, and whenever the unsigned version is updated you sync it and re‑sign everything, etc.

I tried to step through this and I couldn't continue, because it wouldn't sync the DNSKEY records; so let's see what's going on in the logs.

We see that there's a dynamic update trying to add the DNSKEY of the other provider, and BIND tries to read the key files again. But our DNSKEYs are in the signed version of the zone, and for the keys from the other provider we don't have the key files; so it's looking at the DNSKEY set in the unsigned version of the zone, it can't find the key files for it, etc. What's happening here is that all the DNSSEC signing methods conflict with each other ‑‑ sort of a bug we discovered by trying this new set‑up.

We have a DNSKEY in the unsigned zone and, as I said, we don't have the key files.

So: don't try to sign the unsigned version of the zone. With that fix in place, I still got the same error, and that is because with inline signing we never sync the DNSSEC records from the unsigned version to the signed version ‑‑ the unsigned version should not have DNSSEC records, right? We are maintaining those. This all changes with multi‑signer, because now we have multiple providers trying to synchronise these records, so we need another fix: we need to allow syncing of DNSKEY records, but we have to be careful that the other providers aren't able to remove our own DNSKEY records.

Four issues found already with the multi‑signer set‑up; this is great.

After these fixes were put in place, I could actually make the inline signer work, adding it to the multi‑signer group.

Let's go crazy: the bump in the wire. We have a hidden primary that the MUSIC tool cannot reach, and it transfers the unsigned version of the zone to the signer; there, the server maintains another unsigned version of the zone and then does the signing with inline signing. And we are going to add a new signer which has its own hidden primary, so there are a lot of servers involved, and NSes....

(Fire alarm)



MORITZ MULLER: I hope ‑‑ I think we will extend into the break.

MATTHIJS MEKKING: I am not sure which word triggered the alarm, whether it was DNSSEC or pirate metal. Let's go crazy.

The final scenario that I want to talk about is the bump in the wire. This was more a case of: let's see what else we can find. The first thing ‑‑ I think this was where I left off ‑‑ is that the MUSIC tool wants to talk to the signers and wants to update the signers, but hidden primaries are not available to the MUSIC tool.

We run the scenario again: the singer wants to join the band. I couldn't move past the step where we wanted to publish the CDS records, so let's see what's going on. We are adding the CDS record into the hidden primary, with forwarding, and it's rejecting it, because BIND has a check to see whether publishing the CDS record makes sense; basically, that means you also have to have a DNSKEY of the same algorithm there, otherwise you are probably doing something weird. That's sensible logic on a signer, but it fails in some scenarios, like this one.

Remember, this hidden primary doesn't do the signing; it's a bump in the wire. So I don't really have a good fix for this; I think if you are running this set‑up, the only work‑around is to actually add your own DNSKEY records to the primary zone.

And with that work‑around, everything actually works with this set‑up.

The only other thing I tested is where you want to leave the signer group. This is the list of steps in the IETF document; when I ran it, I had no issues with the first two zones, but when the singer wants to leave the pirate metal band, there are issues.

It's the NS records, and I didn't really get that far, because the document says a provider needs to know which NS records are its own and which belong to the other provider. I ran into an issue where I couldn't remove the NS record; it was still being published. I don't know what's going on there, to be honest.

I need to talk to the authors of the document about why this is necessary, and also about whether there is something we need to implement in our software to identify which records belong to which provider, because I don't think that's a very common property in DNS software.

All right. We are almost done.

Basically, my conclusion is that supporting a multi‑signer environment is more complex than meets the eye. I am not going to say that in two years everything will be fine and working, but at least I think we have made it a much nicer experience. We found some issues with the CDS publication; fixes are scheduled for next month's releases, but no promises.

Also, within the multi‑signer project, things are pushing towards the model where each provider has its own unique key set, with centralised control; that is this music thing, where there's a central piece that tells the signers what to do: please publish these records, please remove these records, and so on.

The draft also mentions a decentralised method, which is more like the providers talking to each other.

One more slide: if you are going to play with this, I recommend this configuration, roughly as sketched below. You have a dnssec‑policy clause; you set key lifetimes to unlimited so you won't trigger automatic key rollovers; you set CDNSKEY to no, which means you won't be publishing CDS records yourself, you leave that up to the controller; and you have an update policy that grants the controller the right to update records, and specifically only the records it is allowed to touch.
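
In BIND 9 named.conf terms, that recommendation might look something like this (a sketch only; the zone name, key name and algorithm are made up, and you should check the BIND ARM for the exact syntax of your version):

    // dnssec-policy with no automatic rollovers and no CDS/CDNSKEY publication
    dnssec-policy "multi-signer" {
        keys {
            csk lifetime unlimited algorithm ecdsap256sha256;
        };
        cdnskey no;   // per the talk: leave CDS/CDNSKEY to the controller
    };

    zone "pirate-metal.example" {
        type primary;
        file "pirate-metal.example.db";
        dnssec-policy "multi-signer";
        update-policy {
            // grant the controller's TSIG key updates for only the
            // record types it manages, nothing else
            grant controller. name pirate-metal.example. DNSKEY CDS CDNSKEY;
        };
    };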

I have some next steps here: I want to test key rollovers once that's possible with music, and take the NS record question up with the authors of the document. And that's what I have. Thank you.

(Applause)

WILLEM TOOROP: Are there any questions? No. Thank you.

(Applause)

Next speaker is Anand, with a DNS update from the RIPE NCC.

ANAND BUDDHDEV: Hi good morning, I am Anand Buddhdev from the RIPE NCC and I will give you a very quick update on things that we have been doing and things that we are going to be working on.

I will start by talking about our authoritative DNS cluster. This is the cluster that serves ripe.net as well as Reverse‑DNS zones and several smaller TLDs. One of the plans we had last year was to deploy a fourth core site, which we had to delay because of unavailability of hardware. I am happy to say that we expect to receive hardware quite soon, in July I think, and we plan to deploy a fourth core site somewhere in Asia, so hopefully we will have some nice news to report by the next meeting.

We have also been deploying hosted instances of authoritative DNS, which is where we get ISPs and internet exchanges to host a server for us, and we now have eleven active instances; these are handling about 23,000 queries per second, which is 16% of the overall query rate of this cluster. Then Zonemaster: we use Zonemaster, which is software by the Swedish registry in collaboration with AFNIC, the French registry, to do pre‑delegation checks for reverse delegations. We have updated it to the latest released version, and we are going to swap the native user interface for something integrated into RIPEstat very soon, so that RIPEstat becomes the one‑stop shop for all your resources.

We are also in the process of updating our servers from CentOS 7 to Oracle Linux 9, as CentOS 7 is coming to the end of its life. And we contribute to Day in the Life of the Internet, a project where lots of DNS providers collect PCAP data and submit it to DNS‑OARC, where it is available to researchers for doing all kinds of analysis. We contribute to this every year, and we did so earlier this year in April.

I'd like to talk quickly about one incident that we had recently. We have a secondary DNS server, and this one was serving older versions of two of our Reverse‑DNS zones. The unfortunate thing here was that even though the zone was being served, the DNSSEC signatures in the zone had expired, and this was causing DNSSEC validation failures, so this is actually worse than returning ServFail. I am going to go into the details now.

So, first, a very quick slide about expiry timers. In a zone's start of authority (SOA) record there are various values or timers, and one of these is the zone expiry timer; ours is set at 10 days, and you can see that highlighted in red in the first part of the slide.
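
For those reading along without the slide, the SOA timers look like this in zone‑file form (values illustrative, with the 10‑day expiry the talk refers to):

    ; illustrative SOA record; the fourth timer is the zone expiry
    example.net.  3600  IN  SOA  ns1.example.net. hostmaster.example.net. (
                      2023052501  ; serial
                      3600        ; refresh
                      600         ; retry
                      864000      ; expire: 10 days
                      300 )       ; negative-caching TTL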

The second part of this slide shows you a DNSSEC signature, with the signature inception time in blue and the expiry in red; this DNSSEC signature lifetime is 14 days, and we will see why this is important. Here we have a very simple DNS infrastructure: a primary DNS server providing a zone to a secondary DNS server over zone transfer. Let's suppose that something goes wrong, maybe the transport fails or the TSIG doesn't work for some reason, and the primary DNS server is no longer able to provide a zone transfer to the secondary. Then, on the secondary server, the zone will expire after ten days. This is how traditional DNS works: the secondary DNS server will respond with ServFail when the zone has expired, and this allows DNS resolvers to try other name servers and continue resolving names.

Now, these days, this simple infrastructure is not so common; we have entire tiers of transfer servers: there's a primary, there's an intermediate transfer server, and there may be a second one, because if you operate large infrastructure you want lots of resilience, redundancy and availability. So, if you have the scenario where, again, the primary is unable to provide a zone transfer to its first downstream, then, after ten days, the first intermediary will expire the zone. However, the publication server at the end is still serving the zone, and it will take another ten days for the second transfer intermediary to expire the zone after the first intermediary starts to ServFail. When the second intermediary finally ServFails, then, after 30 whole days, the publication server in this chain will expire the zone. So it will keep serving a zone with expired DNSSEC signatures, because the signature lifetime is only 14 days; for 16 days the zone will be served with expired signatures.
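
Put as a timeline, using the 10‑day zone expiry and 14‑day signature lifetime from the slides, and counting from the last successful transfer out of the primary:

    day  0   last successful transfer; fresh RRSIGs valid for 14 days
    day 10   first intermediary expires the zone, starts answering ServFail
    day 14   the RRSIGs in the copy still served downstream expire
    day 20   second intermediary expires the zone, starts answering ServFail
    day 30   publication server finally expires the zone
             => from day 14 to day 30, expired signatures are served (16 days)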

Now, there is a solution to this: back in 2014, Mark Andrews wrote an RFC for an EDNS option called EXPIRE, documented in RFC 7314, and this allows a primary server to take the zone expiry timer and respond with that timer to queries coming in. So, if a query comes in for the SOA record of the zone, or for a zone transfer, then along with that response the primary server will add an expiry value in the EXPIRE option of the DNS response.

And the idea is that the DNS client is supposed to use this expiry value and not the one from the SOA record; the client is also supposed to pass this value on to further downstreams, so that the actual expiry gets carried all the way to the end of the chain.

This only works, though, if all the servers support the EXPIRE option; if one of them doesn't, then this whole thing fails. So, here I'm showing you an example query; in the query I have included the +expire option, and you can see that the server is responding with an expiry value in the response, highlighted in red. Notice that it's not exactly ten days; there's a small gap, because the server I queried last fetched the zone from its upstream not exactly ten days ago, but a little earlier than that.
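
For anyone who wants to try this themselves, dig can request the option directly. A query and an abbreviated, illustrative response might look like this (server and zone names are made up):

    $ dig +expire soa example.net @ns.example.net

    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 1232
    ; EXPIRE: 863123

Here 863123 seconds, rather than the full 864000, reflects that the queried server last refreshed the zone a little under 15 minutes before the query.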

So, EXPIRE support in software: BIND 9 has had this for a long time. I tried to find out when it was added but I couldn't; at least 9.10 had it, so it has probably been around for quite a while, and dig can also request the expiry time. Knot DNS has had it since 3.2, and NSD does not have it yet, but we have filed a feature request and I hope they add the option soon.

The lessons we learned: we want to review our zone and signature expiry timer values, and I encourage other folks who are doing DNSSEC to look at this too; we want more monitoring of our secondary DNS servers; we want to work with our secondary DNS providers to encourage support for the EXPIRE option; and finally, we want the EXPIRE option available throughout our K‑root and AuthDNS Anycast clusters, so we don't have this issue in the future.

With that, I end my presentation and open the floor to questions.

(Applause)

AUDIENCE SPEAKER: SIDN. Interesting, I didn't know this and I probably have a very stupid question, if you want to expire within ten days and you have that chain that you just showed in the slide, why not lower the ten days to three days or so?

ANAND BUDDHDEV: Sure, we could do that. The problem is that we don't always know how many XFR servers there are in a chain, and trying to match the zone expiry timer to the signature expiry across that chain is a little bit hit‑and‑miss, so that's why I think the EXPIRE option is really the best solution here.

AUDIENCE SPEAKER: Makes sense.

AUDIENCE SPEAKER: I want to thank you very much for this, because this is the type of presentation that helps the community build a stronger and better‑functioning DNS. When you hold up the problems that you have yourself and make them visible, that actually helps build community, so thank you very much.

(Applause)

ANAND BUDDHDEV: Thank you.

MORITZ MULLER: One more question online. This one came in before Marco's, but ‑‑ anyway.

Robert: Is it documented somewhere why you decided to choose Oracle Linux 9 over Rocky Linux 9 or AlmaLinux 9, which are the most common Linux distributions with healthy communities?

ANAND BUDDHDEV: As far as we are concerned, Rocky, Alma and Oracle Linux are all equally well supported Linux distributions. Some people don't like Oracle as a company, but their Linux distribution, Oracle Linux 9, is quite rock solid, supportable and dependable, so we don't see a problem using it.

WILLEM TOOROP: Preceding this conference there was a Port 53 DNS hackathon, organised by Netnod and the RIPE NCC, and these were the people that participated in it. From that we have six projects, and I will go over the projects quickly, describing them; in this slide deck, which you can also download, and also in the introduction slide deck, are the links to the separate project pages.

So the first project was DNSSEC bootstrapping, which is a draft from Peter Thomassen, and they worked on implementing bootstrapping of DNSSEC from zones that already have DNSSEC, instead of having to go through the registry procedure.

The sustainability team looked into how the DNS could use less energy in its deployment.

DNS today and DNS home appliances did something similar, basically; they had a DNS resolver on a small device with the local router, and showed that it consumed very little memory and caused a lot fewer queries.

And then there were three hackathon projects that won a prize; they all got first prize, there was no distinction between first, second and third. The first one is DNS and the application layer, also known as dopper. They won the prize for the most courageous project, for addressing the topic of all the modern DNS standards being used at the application layer; the same thing Philip Homburg is addressing in his, or rather our, Connectbyname.

Digalicious was a project that received the prize for best teamwork; they worked on a specific language to use on RIPE Atlas, on the probes themselves, so that you already have some results from the probe and don't have to do all the result processing yourself, which also saves space and thus energy.

And the final project, which won the prize for the project most relevant to operators, is the DNS out‑of‑protocol signalling project, also known as DNS oops. It's relevant for reasons which Stefan Ubbink is going to explain to you, and he is giving a live demo.

STEFAN UBBINK: Thank you. Yes, DNS oops: we use this at SIDN for our Anycast platform, and together with NLnet Labs we created a first draft for the IETF to get signalling out of the name server so that something can act upon it.

And the goal of the project is to be notified when something goes wrong, so Anand could use this in the future to see that a zone is expiring and do something about it; that might be useful for others as well. During the hackathon we created a small set‑up with a master and multiple secondaries, and all the secondaries handled the update, which I will demonstrate.

So, here we have a zone, and in the top screen you currently see Knot DNS; oops is the Knot server, with serial 58. If I replace the serial with a newer version, we get a server reload, you see the serial already updated, and now one server will disable BGP and another one will enable BGP, and now we have the NSD oops server answering our questions.

So, since we have three name servers, we do it again, and it should update quite soon to the BIND version, BIND notify oops, at serial 60. So this was the small demo, and it worked: no demo effect.

We used notably different software: NSD, Knot and BIND have all been used. We used ExaBGP and BIRD for the configuration of the routing daemons, and nsnotifyd as part of the set‑up, to establish all of this functionality.
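
As a rough illustration of the kind of glue this involves (this is a sketch, not the project's actual code: the zone, addresses, prefix and the ExaBGP pipe path are all assumptions), a hook script run on each zone update could compare the local serial with the primary's and tell ExaBGP to announce or withdraw the service prefix:

    #!/bin/sh
    # Illustrative out-of-protocol signalling glue (hypothetical).
    ZONE=example.nl
    PRIMARY=192.0.2.1
    PREFIX=198.51.100.53/32

    # the serial is the third rdata field of the SOA record
    local_serial=$(dig +short @127.0.0.1 "$ZONE" SOA | awk '{print $3}')
    primary_serial=$(dig +short @"$PRIMARY" "$ZONE" SOA | awk '{print $3}')

    if [ "$local_serial" = "$primary_serial" ]; then
        # zone is current: announce the service address via ExaBGP's pipe
        echo "announce route $PREFIX next-hop self" > /run/exabgp/exabgp.in
    else
        # zone is stale: withdraw, so resolvers land on a healthy instance
        echo "withdraw route $PREFIX next-hop self" > /run/exabgp/exabgp.in
    fi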

Are there any questions?

WILLEM TOOROP: I think we don't have time for questions. Time for some drinks and a bit of a break, and there are some DNS hackathon stickers available for the people who want them. Thank you very much.

(Applause)

WILLEM TOOROP: Yes, so this is the end of the session and ten minutes for coffee.

LIVE CAPTIONING BY AOIFE DOWNES, RPR
DUBLIN, IRELAND