From Business Innovation and Network Re-invention to Future Internet

SDN Switzerland MeetUp – 05.07.2019 – Swisscom Tower Zurich

This time, apart from the always inventive networking approaches, we also had talks on digital innovation, business transformation with FlashStack, the 2STiC programme initiative, DevOps approaches, and DC automation from design to deployment in a highly volatile ICT ecosystem.

Roman Vogel (PureStorage) opened the workshop by explaining why today's business issues revolve around transformation and data-centric architectures. He introduced how Software-Defined Data Centres (SDDC) help to address them with "FlashStack" – the old has to be merged with the new and silos have to be given up, but the right moment for this process never explicitly arrives – so let's go ahead.

Business Transformation with a SDDC

The mechanism, therefore, is evolution towards a data-centric architecture: a high degree of virtualisation, where the whole stack – network, compute and storage – can be automated! He explained that FlashStack combines Cisco UCS for compute, Cisco fabric (Nexus and MDS FC switches) and Pure Storage. The result is a private cloud built on security, where policy-driven service layers are customisable on demand. He described the anatomy of such a system: diskless, stateless servers, fast converged client networks with fast parallel protocols, and NVMe (Non-Volatile Memory Express) over fabric. On top sit any protocols for block or object storage, and for analytics an object store speaking NFS or S3.

The current solutions with FlashStack eliminate silos, simplify operations, scale with agility (in the compute and/or storage part), and provide transformative value, e.g. decreased complexity on the compute, network and storage layers.

Kamila Součková (ETH Zurich) is researching P4 (Programming Protocol-independent Packet Processors) and presented "P4 in the wild: Line-rate packet forwarding of the SCION future Internet architecture".

She introduced SCION as a design providing route control, failure isolation and explicit trust information for end-to-end communication. With SCION there is a next generation of networks: SCION offers scalability through packet-carried forwarding state and a hierarchical design; network access control, where the end host selects paths (the ISPs decide path availability); multi-tenancy via isolation domains (core and non-core ASes, so failures stay contained); and built-in DoS protection. Natively, SCION is a new concept for the control and data plane: it replaces IP + BGP and gives end-host-controlled multipath "for free".

P4 in the wild: Line-rate packet forwarding of the SCION future Internet architecture

Building on this generic concept, Kamila presented her work on a SCION border router in P4 with NetFPGA, guided by four main goals: (#1) a ready-to-deploy concept forwarding data at 40 Gbps or more, usable under real conditions/traffic and integrating with the existing SCION infrastructure regarding control plane, monitoring and metrics; (#2) SCION packaged as a library – modular, portable, and including high-performance P4 code for parsing, verification and data forwarding; (#3) guidelines (as recommendations) for high-speed P4 by checking the critical path of the design; and (#4) optimising the SCION protocol for hardware, or asking: "How can we adjust SCION to enable a more efficient implementation in HW?"

Many challenges in #1 to #4 were discussed; here is an extract of the highlighted points. FPGA-based hardware is a good basis for evolving P4 code, but not recommended for experimental P4 projects. Deployment proceeded iteratively – reuse, reduce, recycle. Further, SCION does not fix the packet fields, so a dedicated parser has to be built, which can then also serve external monitoring of SCION. The most important insight of the parser work: do not parse the whole path, just save it – but this requires modifying the NetFPGA design towards sub-parsers, which in turn costs a lot of FPGA area and RAM. CAM tables limit high-speed P4, and meeting the timing requirements is a frequent issue, so the critical path of the implemented design has to be checked. Ultimately, special end hosts with P4-enabled SmartNICs are desirable, so that processing happens on the host.

The project she described is mostly done; currently, parsing and validation remain challenging and timing starts to fail. The FPGA deployment is still not production-ready, as power requirements have to be optimised compared to IP, and, last but not least, it has to get faster – 1 Tbps is planned using multiple FPGA-enabled NICs.

Victor Reijs (SIDN) reported on Future Internet activities at SIDN. He pointed to the 2STiC programme: Security, Stability and Transparency of inter-network Communication, a collaboration with NLnet Labs, SIDN Labs, SURF, TU Delft, UTwente and UvA. They work on realistic/practical use cases, build demonstrators and testbeds covering multi-domain, governance and trust aspects, and act as a think tank providing publications, guidelines, evaluation expertise for new technologies, and experience around Future Internet technologies.

Future Internet activities at the SIDN

Victor referred to Kamila's presentation and mentioned that SCION work is currently done at SIDN: a 2STiC testbed/prototype in P4 is in place, where they assess the maturity of P4 implementations for Future Internet technologies like SCION, as well as RINA and Content-Centric Networks. The P4 nodes of the testbed are Barefoot switches and servers with Netronome SmartNICs. In-band Network Telemetry was a use case for the 2STiC testbed; data-collector design, gathering data from control and data plane, and topology design are part of the initial setup.

In the future they see P4 as a tool: they will investigate the technologies mentioned above and strengthen community work, partly by providing the testbed for use cases and demonstrators. In the context of porting protocols to hardware, some crucial questions remain: "How open are these opportunities? How hardware-dependent is the P4 code?" Victor also argued that NDAs play an important role – NDAs for hardware can make cooperation more cumbersome.

Serge Monney (IBM) illustrated how next-generation IT technical support for cloud and storage can facilitate monitoring and troubleshooting of large-scale virtualised multi-tenant clouds and their DC environments. With the evolution of virtualisation, DC complexity has increased, so support needs to become proactive: pure anomaly detection and root-cause analysis no longer suffice. Further, many thresholds and KPIs have to be managed successfully – at IBM they collect up to 800 metrics, and a high workload may be normal or may stem from an issue. Semi-automated approaches are thus no longer practical or sustainable and don't live up to next-generation cloud technical support. The conclusion: apply novel ML-based approaches for timely detection of IT issues and events in (virtualised) cloud/storage fabrics, together with causal troubleshooting.

The next-gen IT tech support for Cloud and Storage (Fabrics)

So the question comes up: "what do we compare to what?" – and is it true that "performance analysis is more art than science"? Serge introduced the building blocks of performance analysis: DC (source) => monitoring => thresholds => alerting => collecting raw data => correlating time series => ranking similar time-series metrics => support (target), focusing on specific areas.

Serge pointed out that the nucleus is to find metrics with a causal relationship to the alert, which means time-series cross-correlation. From the cross-correlation picture, image recognition can be done with an autoencoder – a neural network whose output layer reproduces the input signal as closely as possible. If the reconstruction matches less than about 60% of the data, an error is concluded, since the model is not able to reconstruct the signal from the most important metrics.
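A minimal sketch of this ranking step (with made-up metric names and toy series, not IBM's actual pipeline): candidate metrics are scored by their Pearson cross-correlation with the alerting series and sorted by absolute correlation.

```python
import math

def cross_correlation(a, b):
    """Normalised (Pearson) correlation between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb) if sa and sb else 0.0

def rank_metrics(alert_series, metrics):
    """Rank candidate metrics by |correlation| with the alerting series."""
    scored = [(name, cross_correlation(alert_series, series))
              for name, series in metrics.items()]
    return sorted(scored, key=lambda t: abs(t[1]), reverse=True)

alert = [1, 2, 3, 8, 9, 10]
candidates = {
    "disk_latency": [2, 3, 4, 9, 10, 11],  # moves with the alert
    "fan_speed":    [5, 5, 5, 5, 5, 5],    # flat, unrelated
}
print(rank_metrics(alert, candidates))
```

The top-ranked metrics are then the candidates for the causal analysis Serge described.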

Questions from the audience followed: with how many metrics was the system tested – 2, 3? And what about fingerprinting real issues, to use them later against other data? Serge explained that once a definition exists – whether 1, 4, 10, 200 or more metrics are involved – you can fingerprint and apply it per type of machine or industry: collect a lot of performance data, tag it (machine type and workload) and then create a model.

Hanieh Rajabi (SWITCH) gave us a view of how to build an automatable data centre, from design to deployment. First she introduced SWITCHengines and SWITCH's cloud offerings, pointing out that IaaS, PaaS, SCALE-UP and community work are currently the main pillars of service delivery. With SWITCHengines she showed how difficult automation is – we know how to swim, but not how to automate. She mentioned issues like having less time for provisioning, configuring, updating and maintaining services, and how important it is to detect and resolve problems as quickly as possible. She linked this to the goals of infrastructure as code and underlined the benefits of automation: solutions to problems are proven through implementation, testing and measuring, which simplifies network operations for whole IT teams, not only for network engineers.

Hanieh introduced the Clos DC topology at SWITCH under the motto "scale the data centre like you scale the Internet". She explained a spine-leaf architecture using a standard L3 routing protocol (BGP), an L2 data plane built on VXLAN tunnels and an L2 control plane by means of the EVPN protocol. As background: a Clos topology comprises spine and leaf layers; servers connect to leaf (ToR) switches and each leaf connects to every spine, with no direct leaf-to-leaf or spine-to-spine links. She pointed out that in the SWITCH Clos topology the switches come from different vendors, and that the design is highly scalable – switches can be added the way nodes are added to the Internet.

Building an automatable data centre network from design to deployment

Further, all switches speak the standard routing protocol BGP with each other; the reasons for BGP are "one protocol for everything" and robust implementations (see RFC 7938). Routing at the host determines how servers attach to the fabric: either through L2 bonding (active/passive possible), or through L3 routing with a BGP policy where servers accept only a default route and announce only their loopback address. The main reasons for using eBGP rather than iBGP: robust, fully featured implementations (iBGP is limited in multipath support), multiple implementations to choose from, and, last but not least, it is simpler to understand.
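As an illustration of the RFC 7938 style of numbering (the ASN values and helper names here are invented, not SWITCH's actual assignment): the spine layer shares one private ASN, each leaf gets its own, and eBGP sessions exist only leaf-to-spine.

```python
def assign_asns(num_spines, num_leaves, base=64512):
    """RFC 7938-style numbering sketch: one private ASN for the whole
    spine layer, a unique private ASN per leaf (ToR) switch."""
    spines = {f"spine{i}": base for i in range(num_spines)}
    leaves = {f"leaf{i}": base + 1 + i for i in range(num_leaves)}
    return spines, leaves

def ebgp_sessions(spines, leaves):
    """Every leaf peers eBGP with every spine; no leaf-leaf, no spine-spine."""
    return [(leaf, spine) for leaf in leaves for spine in spines]

spines, leaves = assign_asns(num_spines=2, num_leaves=4)
print(len(ebgp_sessions(spines, leaves)))  # 4 leaves x 2 spines = 8 sessions
```

Because every leaf has its own ASN, standard eBGP loop prevention (AS-path) does the right thing without any iBGP full mesh or route reflectors.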

Automation processes/procedures shall simplify network operations, complex configurations and device management while providing the business agility to adapt to a steadily changing environment. Automation can be approached in different ways: most devices can import a configuration file via TFTP; the configuration file can be generated dynamically (templates, declaratively defined infrastructure); and a test environment allows the correctness of configurations to be evaluated. The software and tools used are ONIE (Open Network Install Environment) to bootstrap the switches, Jinja2 as the full-featured template engine, and Ansible for configuration automation, e.g. a single playbook with different roles. User-facing cloud network monitoring (availability, latency) and the measurement of Service Level Indicators/Objectives (SLI/SLO) for users from three different locations are covered with Nagios and the Site24x7 monitoring platform.
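To illustrate the template-driven generation step – using Python's stdlib `string.Template` as a simplified stand-in for Jinja2, with invented variable names and addressing – per-device data is substituted into one shared template, so every switch receives a structurally identical configuration:

```python
from string import Template

# Simplified stand-in for a Jinja2 template: per-device variables are
# filled into one shared template, keeping all switch configs consistent.
CONFIG_TEMPLATE = Template("""\
hostname $hostname
interface $uplink
  description uplink-to-$spine
  ip address $address
router bgp $asn
  neighbor $spine_ip remote-as $spine_asn
""")

devices = [
    {"hostname": "leaf1", "uplink": "swp1", "spine": "spine1",
     "address": "10.0.1.1/31", "asn": 64513,
     "spine_ip": "10.0.1.0", "spine_asn": 64512},
]

for dev in devices:
    config = CONFIG_TEMPLATE.substitute(dev)
    print(config)
```

In the real setup the rendered files would then be pushed by Ansible roles rather than printed.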

As a wrap-up, let's pay tribute to the women of our community, networkers who work hard on all the hacks and technology details. It was great to see the interest increasing each time, and we hope this trend continues. As the SDN topic keeps shaping and reshaping, we too adapt the topics to match recent trends, so we can offer our community the most interesting and up-to-date research and implementation solutions. Thanks to all who came, and see you at the 13th SDN event. Don't wait for the announcement – you can already start sending us ideas and topics for presentations. Till next time, enjoy the summer.

Authors: Kurt Baumann (SWITCH), Irena Trajkovska (Cisco)

11th SDN Workshop

We are happy to invite you to the SDN Switzerland MeetUp

in December 2018

The scope of the SDN Workshop includes, but is not restricted to:
– Service development over SDN-enabled architectures
– SDN and Security
– Container networking (e.g. Docker and SDN)
– SDN/NFV, testbeds and industry solutions
– Survey on SDN: strategy, future directions, community, etc.
– SDN: From WAN to Cloud solutions
– SDN and automation
– DPDK and OVS make SDN/NFV deployment easier?!

If you would like to be part of the program, please send us your presentation proposal (not more than 100 words) via the SDN MeetUp page or the mailing list, no later than the 24th of November 2018, EoB.

Best regards,

Irena Trajkovska (Cisco), Kurt Baumann (SWITCH)

Breaking SDN myths and facing automation challenges

The 11th SDN workshop took place on December 7th 2018 at HSR Hochschule für Technik in Rapperswil (Zurich). It covered, as always, a variety of topics, joined by interesting discussions and networking with our guests. Ahead of the upcoming event on July 5th 2019, we wanted to refresh your impressions and the knowledge shared among our community through a (not so brief :)) blog of the event. We hope you find some useful insights into the trends and developments in the networking world, as presented by our industry and academic collaborators. And of course, we look forward to you joining us on the SDN meetup journey.


Ivan Pepelnjak ("Real-life SDN: 7 years later") revisited the current SDN principles and commonly accepted definitions. He named Google, the Internet Exchange Points (IXPs) and a scale-out IDS – an academic infrastructure – as examples that relied on the standard SDN definition. It was very interesting to follow his analysis of the overall development and, in Ivan's view, the most relevant pieces to focus on within the entire "SDN madness". What some, like Cisco, got right was keeping control-plane functions also within the data plane; with the Cisco ACI solution, the APIC is just a controller of policies. Does the standard SDN definition include failure-detection scenarios? That is the crucial question one needs to ask when speaking about SDN. Ivan then discussed how "software-defined" became such a prevalent term, reminding us that the old-time Cisco AGS was already running software and that, in that sense, SDN is packet forwarding done in software. And "build your own white-box switch" on x86 (like Arista did it with Cumulus) wouldn't be a bad idea if HW and SW licences were sold separately. Ivan pointed out: "Isn't it great to have switches and servers with a common OS? Yes, but you also inherit common bugs!" A success story was discussed: out of 1 million entries, the 50 thousand hot prefixes of Spotify traffic were filtered and pushed into hardware, with default routing set up for the rest; this extensible SDN Internet router used an IXP and sFlow. Going further, Ivan stressed that the SDN buzz brought common benefits such as NETCONF and REST APIs, laying the foundation for the automation era. Here ACI and OpenStack got it right by automating VLANs, yet automatic response to network events remains a hard problem, bound to each and every network. So yes – you can buy it, but you need to know how to automate it, and you also need to know how to fix it, for "you can't buy something that will implement your ideas". Indeed, a very insightful talk by Ivan.


Roque Gagliano from Cisco ("Model Driven SDN – The new YANG universe") followed up on the SDN vision, pointing out that even though SDN was about separating data and control planes, we still do BGP. The challenge is to achieve programmability that allows APIs to be reused, and this is where YANG has played a big role. YANG is used to describe whatever is on the wire, and as a standardised modelling language it is very formal and easy to understand. NETCONF was designed to provide the pipeline and the primitives, while RESTCONF is REST-based NETCONF with well-defined primitives. The goal is to stop working with plugins and instead use different models; you can still program in Java and then use NETCONF. Roque explained that the rise of streaming telemetry and serialisation standards (e.g. Apache Thrift or Google's Protocol Buffers) raises the question of how to program the interface: customers want to program a Cisco box like a Juniper box, and not only the pipeline but also the content. "Can I do a transaction, and roll a transaction back?" Roque explained that network orchestration and automation sit at the organisational boundary: the foundation is a model-driven network based on standard, robust APIs, to offer the same services regardless of the changes you make in the network. From a customer perspective, the most common way to start is in the middle, with a self-learning L2 or L3 VPN, because that is a dynamic and agile way to publish services onto the network. For the last two years, more than two vendors have relied on NETCONF (Juniper and Cisco among them); OpenConfig came later as an alternative to the standard models. Other examples using YANG models involve the OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA) and NFV MANO, with Telefónica as its major user. The hardest automation problem has to do with legacy – you won't trash the boxes because of automation.
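A rough sketch of what a NETCONF `<edit-config>` payload looks like on the wire, built with Python's stdlib `xml.etree` (the `system/hostname` leaf below is illustrative; real payloads follow the device's published YANG model):

```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def edit_config_rpc(hostname):
    """Build a NETCONF <edit-config> RPC targeting the candidate datastore.
    The config body uses a hypothetical 'system/hostname' leaf; a real
    client would derive the structure from the device's YANG modules."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}candidate")
    config = ET.SubElement(edit, f"{{{NC}}}config")
    system = ET.SubElement(config, "system")
    ET.SubElement(system, "hostname").text = hostname
    return ET.tostring(rpc, encoding="unicode")

print(edit_config_rpc("leaf1"))
```

The point of the model-driven approach is that this XML structure is not hand-crafted in practice: it is generated from the YANG model, which is exactly what tools like NSO or the Ansible NETCONF modules automate.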
He mentioned ngena, the "star alliance" of service providers: somebody else owns the service used by the business, and parts of it reside in different services. The disaggregated transport network development at NTT Communications combines optical vendors in a single optical network; they ask each individual vendor to talk NETCONF.

Roque later spoke about Cisco NSO, his main focus project, which has been integrated with several vendors thanks to the easy setup using YANG models and no additional coding. He discussed streaming telemetry in NSO and the tests done with Telegraf and Prometheus. NSO builds on model-driven orchestrated assurance and provides confidence that the application intent is configured correctly and is also SLA-compliant. Roque then discussed upcoming initiatives: focusing on data stores and enhancements, going agile with YANG, an adapter-free world, etc. Finally, he wrapped up by explaining that there are many tools out there: a use-case-based YANG catalog as a global YANG repository, git support for YANG models, the Ansible NETCONF modules (a very convenient solution where one just needs to create the JSON payload), and so on. All in all, it was interesting to see Cisco also sailing in open-source waters.


Sean Murphy from the ICCLab ("EdgeConnect: Enabling an edge based docker engine to access an OVN based OpenStack Networks") showed a demo implementation of extending OVN to the enterprise edge, with the goal of creating an edge networking testbed. OVN is a high-level implementation of logical switches, ports etc. on top of OVS, which is where the physical implementation happens; OVN is packaged with OVS, and the source code is coupled. The setup uses Kolla-Ansible for OpenStack deployment, adding Kolla containers dedicated to OVN: first a vanilla Ansible is installed, then the Kolla containers for OVN are pulled. This also works without some of the OpenStack networking services (L3, DHCP, metadata etc.). Sean explained that there is a Docker engine plugin enabling the containers to connect to OVN appropriately: the OVN northbound DB connects to Neutron, and Neutron tells it which switches (networks) to configure. He pointed out open issues with HA support for OVN, as fully consistent HA operation of OVN is not ready yet. In terms of security, only control-path security exists in OVN; the data plane is not covered. SSL certificates are used by creating a PKI, with the possibility of also integrating an external PKI. He explained that the controller is conservative about whom to trust; the Kubernetes folks, for example, use an approach where the switches present certificates to the controller to decide where to connect, but not the other way around. Sean then explained that OVN-Docker integration is either underlay or overlay. In the underlay mode, Docker runs in an OpenStack VM with two interfaces – one for the Docker container, one for the VM. In the overlay mode, Docker is tightly coupled with OVN and itself creates an OVN logical switch, giving the container full control.

Sean explained that neither solution suited their context: they wanted a standard controller running at the edge, but without full control of OVN. So they used the Docker libnetwork plugin mechanism and wrote a service that provides a set of endpoints returning the responses defined by the network-driver API. Keystone and Neutron integration is required for OVN; EdgeConnect uses the Keystone credentials in the authentication process. Sean's demo stressed the work on securing OpenStack/OVN by adding SSL: first the edge device uploads its self-signed certificate, then comes the connection request from the edge device, and the ID of the chassis appears after authentication. He pointed out that further work is required on wiring OVN to OpenStack port IDs, because they are not yet plugged together in the Docker plugin. As a future directive, Sean said the code will be open-sourced by the end of the year, with a target use case in enterprise networking involving terabits of data. For more details and updates on Sean's great work, check out his blog.
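The libnetwork remote-driver contract such a service implements boils down to JSON over HTTP: the Docker engine POSTs to well-known endpoints and the plugin answers with JSON. A toy dispatcher (the endpoint names come from the libnetwork remote-driver protocol, but the handler bodies here are stubs, not EdgeConnect's code) might look like:

```python
import json

def handle(endpoint, payload):
    """Minimal sketch of a libnetwork remote network driver. A real
    plugin like EdgeConnect would translate these calls into OVN
    northbound operations (creating logical switches/ports)."""
    if endpoint == "/Plugin.Activate":
        return {"Implements": ["NetworkDriver"]}
    if endpoint == "/NetworkDriver.GetCapabilities":
        return {"Scope": "local"}
    if endpoint == "/NetworkDriver.CreateNetwork":
        # Here: create the corresponding OVN logical switch.
        return {}
    return {"Err": f"unknown endpoint {endpoint}"}

print(json.dumps(handle("/Plugin.Activate", {})))
```

Docker discovers the plugin via a socket/spec file and then drives network and endpoint lifecycle entirely through these POSTs, which is what lets the driver logic live outside the engine.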


Thomas Zasowski, Swisscom ("SDN/NFV in a telco provider environment") presented the Swisscom NFV enterprise connect solution and how the increasing traffic demands of mobile devices have driven 5G acceleration, tackling use cases such as real-time data, locating goods in a factory, virtualised production machines and quality control. These challenges led Swisscom to virtualise the functions required for the specific use cases and run them on its telco cloud, bringing those services closer to the customer. "So what does it mean if we have these challenges?" You need to deal with 5G radio equipment and perform network slicing, which raises the question of how to build a low-latency slice including some mobile functionality on premises. The challenge also includes the logical level and services on top of the infrastructure, and APIs allowing access to various functions in the network. As a discussion point, Thomas asked: if you have distributed systems, where should you run your VNFs? He discussed NFV through the metaphor of self-organising ants: if one dies, another takes over. He joked that PaaS is free like an elephant – despite the ants there are still elephant tasks, things you don't want to do with microservices.

Also important are functionalities such as self-healing, self-configuration and protection – all fully automated operations. The issue Thomas pointed to in automation is the lack of standards. There is a multi-vendor strategy for each domain, which makes the OSS side more complex on top of an organically grown mass of systems. All of these are vendor-specific, with changes dependent on preferences, which eventually creates a mess due to the lack of standards. He argued that they work with open source and follow community solutions, in order to base their work on whichever standard gains the highest traction. For example, ONAP as E2E service orchestration might become the standard for network automation, because it brings tooling for the whole end-to-end service design. They used ONAP with a vBNG image from Metaswitch – not only for 5G but also for fixed access. They worked with Huawei, and then Nokia and Ericsson joined, in creating a cross-organisation CI/CD pipeline for VNFs around a simple common joint repository. Thomas finished by stating that disaggregation of functions is not about cost but about gaining flexibility. The talk was followed by interesting discussions: automating self-protecting functions in the context of self-defending systems; VNFs supplied by different providers and the complexity arising from the different integration tests and verifications; the design of specifications; dynamic scale-in and scale-out of VNFs; and how this works in terms of service modelling, interaction and how the software talks to the different systems. For the last one, Thomas explained that it is very use-case-geared and can be configured via a dedicated dashboard – and so he concluded an engaging presentation.


Douglas Copas, Oracle ("High Performance Virtual Networking with Oracle Cloud Infrastructure") spoke about the Oracle approach to SDN. As a consumer of the SDN architecture, his intro focused on what it is like to operate SDN, including the myths of horizontal scaling, before moving on to performance. He noted that block storage is important in the competition – "you don't compete on price if you are not the decision maker". Performance does matter: rather than scaling vertically, it is more efficient to think about scaling horizontally when you are dealing with microservices. One approach to managing SDN in a cloud environment is to run some kind of software on the host device, where the performance of de-/encapsulating VXLAN is the key factor; from an operations perspective this is hard, because it couples the SDN agent to the kernel version on the host. He mentioned the example of Cavium (bought by Marvell for $6B) – hardware that takes the SDN agent off the host OS and runs it in a custom Linux. Douglas pointed out that Oracle does not use a hyper-converged architecture: the block storage lives on isolated hardware reached over the network, via iSCSI over L3, so when you connect block storage this way, performance is bound by CPU or network. The network is not oversubscribed – every VM gets its full share, which is key because of the block storage: they saturate the block storage. The SDN from Oracle runs at line speed. Asked whether the VM states what it needs or whether this is determined heuristically, Douglas answered that it depends on price: a cheaper VM uses fewer resources, so it scales with the CPU. Douglas promised to follow up on the Oracle collaboration and solutions with Cavium at one of the next workshops. We can't wait.
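To make the encapsulation cost concrete, here is the 8-byte VXLAN header from RFC 7348, built in Python (a sketch for illustration; the point is that software has to prepend and strip this, plus the outer UDP/IP/Ethernet headers, on every single packet, which is why doing it in a host agent is CPU-bound):

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header (RFC 7348): I-flag set in the
    first 32-bit word, 24-bit VNI left-shifted into the second word."""
    flags = 0x08 << 24          # I flag: VNI field is valid
    return struct.pack("!II", flags, vni << 8)

hdr = vxlan_header(vni=5000)
print(hdr.hex())  # 0800000000138800
```

Offloading exactly this per-packet work to a NIC or an appliance (the Cavium approach mentioned above) is what decouples the SDN agent from the host kernel.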


In addition to the interesting sessions, we had lunch discussions on topics such as verification and the troubleshooting of false negatives in network verification, training deep-learning platforms, and hyper-converged solutions as a way to get a unified, complete toolset with everything included – unless you are willing to develop it yourself.

Naturally, when the workshop was done, we continued our discussions at the Rapperswil Christmas market over Glühwein with our colleagues from Swisscom, HP and the ICCLab, because of course – there was much more to discuss. 🙂


Finally, thanks to all the presenters and attendees for contributing to yet another interesting workshop. If you happen to be curious to attend and present at our event, don't hesitate to get in touch with Kurt or myself (Irena); we will be glad to allocate a presenter slot for your great solutions and ideas. Registration for our next event is open on the SDN meetup page – this time hosted by Swisscom and, as always, cordially sponsored by SWITCH and supported by Cisco.

10th SDN Workshop at Cisco

10 ways how SDN influences the modern networking,

— Which one is yours?—

When we think of disruption, we think of innovations that displace earlier technologies and take things to a new level. SDN is an "emerging" concept – highly dynamic, manageable, cost-effective and adaptable – and is by now eclipsed by Software-Defined Everything (SDx): more than networking is being transformed by software. Understanding the fundamentals of the SDx ecosystem was our challenge, but hang on – first we were happy to celebrate our 5th anniversary. The "#SDN_CH – 5" polo shirt was a gift for all participants, a sign of confidence and excellent work in the past – THANK YOU VERY MUCH for all your efforts. Keeping the tradition of holding SDN workshops twice per year, Irena Trajkovska (Cisco) and Kurt Baumann (SWITCH) were proud to welcome up to 30 networkers in midsummer, on the 29th of June 2018, at the location of Cisco Switzerland. We had exciting discussions about new ideas and topics, from Wireless Sensor Networks (WSNs) and SDN, through visions and missions and implementation forms at the industry or "carrier-grade" level, to technology considerations for the SDx ecosystem.

The opening session started with a fundamental question: "Can I be sure that the network is doing what I intended it to do?" Irena Trajkovska (Cisco) showed how to find answers to issues that arise from reactive network management (the current approach) and how they can be improved, highlighting the importance of network assurance. She gave a demo based on use cases from real scenarios that justify the need for proactive network assurance in a dynamic and continuous fashion.

With "SDNWisebed", Jacob Schaerer (University of Bern) introduced a real-world testbed environment for software-defined WSNs, where, using TARWIS, a functional and performance evaluation was conducted on a multi-tier network architecture. In a WSN, the sensors' power consumption is critical: routing nodes may run out of battery, which can cause the network to fail. One way to avoid this is to route traffic more smartly around congested nodes, and the best approach for that is shortest-path routing rather than RPL. Hence the statement: dynamic routing prolongs the network's lifetime.
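The "route around congested nodes" idea can be sketched as plain Dijkstra that simply skips battery-low nodes (a toy model for illustration, not the SDNWisebed implementation):

```python
import heapq

def shortest_path(graph, src, dst, congested=frozenset()):
    """Dijkstra over a WSN graph, skipping congested (battery-low) nodes,
    so traffic is steered around hot relays instead of depleting them."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:                       # reconstruct the path
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path))
        if d > dist.get(u, float("inf")):  # stale heap entry
            continue
        for v, w in graph.get(u, []):
            if v in congested:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return None

graph = {
    "s": [("a", 1), ("b", 2)],
    "a": [("t", 1)],
    "b": [("t", 1)],
}
print(shortest_path(graph, "s", "t"))                   # ['s', 'a', 't']
print(shortest_path(graph, "s", "t", congested={"a"}))  # ['s', 'b', 't']
```

An SDN controller with a global view can recompute such paths centrally as battery levels change, which is exactly what a per-node protocol like RPL struggles to do.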

Christian Kuster asked the audience "What is SDN (not), today and tomorrow?" and explained the current state and meaning of SDN from a Huawei perspective. He proposed T-SDN – transport SDN on the optical layer – showing SD-WAN use cases from a large carrier in Japan and a demonstration of a metro network evolution: a 12-node cluster distributed across DCs. He introduced the building blocks of the proposed architecture, pointing out agile SDN controllers that steer the over- and underlays of the metro fabrics.

A distributed "DDoS Defense in SDN", presented by Gürkan Gür, pointed out that "if you are able to control the brain, you can control the body too". That networks have natural protection against common threats and vulnerabilities, and that SDN works against some of those properties, was not surprising but a bit worrying: the controller, network segments, links, switches etc. can all be attacked. Gürkan explained JESS (joint entropy-based security scheme), which aims to enhance SDN security through an architecture reinforced against DDoS: deep packet inspection is triggered from the controller and, after traffic analysis, the data is sent back to the controller to calculate the entropy.
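The entropy calculation at the heart of such schemes can be sketched in a few lines (a toy illustration of the idea, not the JESS implementation): traffic concentrating on one victim collapses the entropy of the destination-address distribution, and a sharp drop signals a possible attack.

```python
import math
from collections import Counter

def entropy(items):
    """Shannon entropy of a sample, e.g. destination IPs in a flow window."""
    counts = Counter(items)
    total = len(items)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Normal traffic spreads over many destinations -> high entropy.
normal = ["10.0.0.%d" % (i % 50) for i in range(1000)]
# A volumetric DDoS concentrates on one victim -> entropy collapses.
attack = ["10.0.0.7"] * 950 + ["10.0.0.%d" % (i % 50) for i in range(50)]

print(entropy(normal) > entropy(attack))  # True: the drop flags the attack
```

In an SDN setting the controller can compute this over flow statistics it already collects, then push drop or redirect rules only when the entropy falls below a threshold.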

Thomas Graf, CTO and co-founder of Covalent, gave us a closer look at "Cilium – bringing the BPF revolution to Kubernetes networking and security". BPF (Berkeley Packet Filter) became interesting as it came to replace iptables, and was presented as the fastest-emerging technology in the Linux kernel. This kind of programmability (comparable to HW programmability with P4) is the basis of the Cilium project, which provides API-aware, efficient networking, security and load balancing for containers and microservices. Future aspects shown in the roadmap involve socket networking and the deployment of service-mesh datapath capabilities, providing efficient networking between cloud-native apps and sidecar proxies.

In the context of the DPDK evolution, Luis Pedrosa from EPFL presented the “Automated Synthesis of Adversarial Workloads for Network Functions”. He introduced the good, the bad, and the ugly in deploying software network functions (sNFs), and referred to their benefits: simplified network service deployment and reduced network operations costs. With this in mind, and given their unpredictable performance variability, it is imperative for sNF deployment that network operators consider performance not only under typical but also under adversarial workloads; but this demands better tools, so they contributed a “Cycle Approximating Symbolic Timing Analysis for sNFs” tool called CASTAN. For future work they aim to analyze more sNFs (e.g. Intel EPC), to analyze more core sNFs (e.g. attacking cache coherence) and to figure out performance contracts under variation of the worst-case execution time.
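Why adversarial workloads hurt can be shown with a toy example. CASTAN finds such inputs automatically via symbolic analysis of the NF’s code; here the colliding keys are crafted by hand against a deliberately weak hash, purely to illustrate the effect: a flow table whose average-case O(1) lookup degrades into a linear scan of a single chain.

```python
from collections import defaultdict

def toy_hash(key, buckets=64):
    """Deliberately weak hash (sum of bytes mod bucket count), standing in
    for a flow-table hash that an attacker has reverse-engineered."""
    return sum(key.encode()) % buckets

def adversarial_key(i, buckets=64):
    """Pad a flow id with one printable character chosen so that every
    key hashes to bucket 0."""
    base = f"flow{i}"
    pad = 64 + (-sum(base.encode())) % buckets
    return base + chr(pad)

def bucket_histogram(keys, buckets=64):
    table = defaultdict(list)
    for k in keys:
        table[toy_hash(k, buckets)].append(k)
    return {b: len(chain) for b, chain in table.items()}

typical = [f"flow{i}" for i in range(256)]
attack = [adversarial_key(i) for i in range(256)]
print(max(bucket_histogram(typical).values()))  # short chains
print(bucket_histogram(attack))                 # {0: 256} – one long chain
```

With 256 benign flows the chains stay short; with the crafted workload every packet’s lookup walks a 256-entry chain, which is exactly the kind of worst-case cycle count a performance contract needs to capture.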

Sean Murphy (ICCLab) gave us a lightning talk titled “Configuring OpenStack with OVN in a containerized context” about the Open Virtual Network (OVN) activities originally launched by the OVS team, and his work in progress based on OVN. He is using OVN in a testbed environment running on OpenStack, realized with Kolla production-ready containers. He explained that OpenStack support for OVN is relatively new, and he pointed out the challenges in his work and potential ways to tackle them.

Atilla de Groot, a Senior Systems Engineer from Cumulus Networks, discussed “If (network == server) { magic happens }”. His main focus is Cumulus Linux, a NOS that runs on data center hardware platforms and uses the Linux kernel for routing, ARP, the bridge table, etc. Atilla presented a showcase related to orchestration: using DevOps tools and CI/CD concepts, he programmed a Python script that uses Ansible to push the configuration defined in NetBox to the switches. There was a question about the scalability of running Ansible, which is a current debate in the community. Doing the configuration only on the hosts that actually changed would probably address this Ansible performance issue, and his PoC in progress with 300 switches will give a more detailed answer and vision. You can find out more about this demo here.
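The “only configure what changed” idea from that discussion can be sketched in a few lines. The device records below are hypothetical stand-ins for data fetched from the NetBox REST API, and the diffing logic is an illustration, not Atilla’s actual script: compare the intended config against the last applied one, then limit the Ansible run to the hosts that differ.

```python
# Hypothetical NetBox-style device records (in reality fetched via the
# NetBox REST API); "applied" would come from a state cache or the device.
devices = [
    {"name": "leaf01", "intended": {"vlan": 10}, "applied": {"vlan": 10}},
    {"name": "leaf02", "intended": {"vlan": 20}, "applied": {"vlan": 10}},
    {"name": "spine1", "intended": {"mtu": 9216}, "applied": {}},
]

def changed_hosts(devices):
    """Return only hosts whose intended config differs from the applied one."""
    return [d["name"] for d in devices if d["intended"] != d["applied"]]

hosts = changed_hosts(devices)
print(hosts)  # ['leaf02', 'spine1']
# The Ansible run would then be limited to these hosts, e.g.:
#   ansible-playbook push_config.yml --limit "leaf02,spine1"
```

With 300 switches, touching only the handful that changed keeps each run small regardless of fleet size, which is the essence of the scalability argument.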

The GÉANT contribution presented by Kurt Baumann (SWITCH) focused on “WiFiMon – Wi-Fi Performance as Experienced by the End-User”. The measurements are elaborated on eduroam-enabled university campuses. Kurt introduced the measurement architecture (building blocks) and the deployed algorithms for measuring, collecting and correlating real data experienced by the end-users. An individually customized portal provided by Kibana visualizes the real-time measurements. Kurt pointed out that future work will include improvements in performance verification, app measurements, performance benchmarking, etc., in order to deploy prediction algorithms for answering strategic and technical questions.
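The correlation step can be illustrated with a small sketch. WiFiMon’s actual pipeline (built around Elasticsearch and Kibana) differs; this example, with invented sample data, only shows the core idea of pairing each end-user throughput sample with the hardware-probe measurement closest in time.

```python
import bisect

def correlate(user_samples, probe_samples, max_skew=5.0):
    """Pair each (timestamp, Mbit/s) end-user sample with the probe sample
    closest in time, discarding pairs more than max_skew seconds apart."""
    probe_ts = [t for t, _ in probe_samples]
    pairs = []
    for t, mbps in user_samples:
        i = bisect.bisect_left(probe_ts, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(probe_ts)]
        j = min(candidates, key=lambda j: abs(probe_ts[j] - t))
        if abs(probe_ts[j] - t) <= max_skew:
            pairs.append((mbps, probe_samples[j][1]))
    return pairs

user = [(100.0, 42.0), (130.0, 8.5)]            # (timestamp, Mbit/s)
probe = [(101.0, 45.0), (128.0, 9.0), (300.0, 50.0)]
print(correlate(user, probe))  # [(42.0, 45.0), (8.5, 9.0)]
```

Once browser-side samples and probe samples are paired up like this, a sustained gap between the two series points at the Wi-Fi segment rather than the backbone.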

And as all nice things come to an end quickly, with this we called for discussion and a wrap-up of the sessions. As a reminder, we learnt how academia and industry perceive and implement the SDN concepts in a set of solutions that drive modern networking forward. Data center networks, WAN, campus, provider networks – over bare-metal, virtual and hybrid solutions – you name it – they all bring something we can call a successor of the well-known networking principles, in liaison with the modern SDN and SDx trends.

Once again, many thanks to our presenters and to the organization team for their efforts in making the anniversary event a special day.

#SDN_CH in front of the famous Eiger 🙂

See you at our next meetup, around the end of 2019! If you want to know more about what the SDN Switzerland MeetUp is about, don’t forget to visit our Twitter feed (#SDN_CH) and find out details from the past five years of workshops on our meetup page.

With this we want to remind you that this workshop is a collaboration across the entire Swiss SDN community, so don’t be shy – go ahead and come back with a talk proposal or a demo – we will make sure to shine a light on your efforts and make you shine 😉