International conference of developers
and users of free / open source software

FlowForwarding Warp: how the JVM runs an SDN controller

Dmitry Orekhov, Minsk, Belarus

LVEE 2014

How do you build a fast, scalable and portable SDN controller? Is the JVM an appropriate platform for this? What solutions does the Java world offer for distributed systems and data serialization? And how fast is it, in the end? These questions are the subject of this presentation.

Introduction: be ready for the Real World

Today software-defined networking is a real factor in the industry. Network function virtualization, service insertion in datacenters and clouds, dynamic WAN rerouting and interconnection, bandwidth on demand for providers: that is only a short list of SDN use cases. The best-known user of SDN is OpenStack, an open source cloud computing platform created and supported by free software developers in tight collaboration with enterprise vendors.

When SDN is considered as an enterprise technology, new non-functional requirements come into play: stable operation under high load (hundreds or thousands of controllers and switches) and scalability.

For open source developers we may add one more requirement: portability. Enterprise vendors can tune software carefully for specific hardware; open source developers cannot afford that.

Instead, the strategy is to provide open and portable solutions, so that everybody can use them on their favorite platform, improve them and customize them. The JVM is probably one of the best platforms for such solutions today.
Additionally, the most interesting open source initiatives in SDN, OpenDaylight and ONOS, are written in Java. One can also take Hadoop as an example: we have experimented with an OpenFlow controller Java library to make the Hadoop topology more adaptive.

So our decision is the JVM.

Apache Avro: a fast run-time serialization framework for Java

In the real world, a highly desirable feature for an SDN (and, in particular, OpenFlow) controller is the ability to update itself with new versions on the fly. This requires, first, separating the protocol definition from the rest of the code and, second, loading the protocol dynamically at run time, since it may be critical for the topology to update the SDN controller without stopping it. To meet these requirements we chose Apache Avro.

Avro is a data serialization and remote procedure call framework. For us, the most important distinction of Avro from similar solutions such as Protocol Buffers or Thrift is that Avro does not require code generation and can parse a protocol and apply protocol changes at run time.

When you use Avro, the workflow is:

  1. Define the protocol in the JSON-based Avro language.
  2. If you do not need run-time protocol updates, you can compile the protocol, get a bunch of generated classes, and gain all the advantages of static typing.
  3. For run-time protocol updates in Avro to be quick enough, caching and pools of pre-built objects should be used widely.
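As an illustration of step 1, a protocol definition in Avro's JSON form might look like the following. This is a hypothetical, simplified echo-style message of our own, not Warp's real OpenFlow definition; such a file can be compiled ahead of time or parsed at run time:

```json
{
  "protocol": "EchoProtocol",
  "namespace": "org.example.sdn",
  "types": [
    {
      "type": "record",
      "name": "EchoRequest",
      "fields": [
        {"name": "xid", "type": "int"},
        {"name": "payload", "type": "bytes"}
      ]
    }
  ],
  "messages": {}
}
```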


Akka: an actor model for the JVM

The Akka library was developed to simplify the development of distributed and concurrent software on the JVM. It was inspired by Erlang and implements a high-performance actor model. Millions of messages per second, a very small footprint and distribution by design make Akka a very good fit for distributed software on the JVM.

We chose Akka because the actor model fits the SDN controller architecture ideally. An SDN controller must run multiple independent, stateless sessions, one per connected switch, and no session can harm another. Further, according to SDN controller ideology, the controller is stateless and therefore should not store any information about switch state, so we do not need any failover. If a controller session wants to crash, we should let it crash. This matches perfectly the actor model implemented by Akka (and by Erlang before it).
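The per-switch, let-it-crash pattern can be sketched with nothing but the JDK. This is a simplified illustration of the idea (one mailbox-driven "actor" per switch, with a failure isolated to its own session), not Warp's actual Akka-based code; all names here are ours:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SwitchSessions {
    // Minimal actor-like session: one mailbox and one worker thread per switch.
    static final class SwitchActor {
        final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
        volatile boolean crashed = false;

        SwitchActor(String switchId) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        String msg = mailbox.take();
                        if (msg.equals("malformed")) {
                            throw new RuntimeException("bad message from " + switchId);
                        }
                        // Normal handling would go here; state stays inside this session.
                    }
                } catch (Exception e) {
                    crashed = true; // "let it crash": only this one session dies
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    /** Runs two sessions, crashes one, and reports which are still alive. */
    static Map<String, Boolean> runDemo() {
        Map<String, SwitchActor> sessions = new LinkedHashMap<>();
        sessions.put("sw1", new SwitchActor("sw1"));
        sessions.put("sw2", new SwitchActor("sw2"));
        try {
            sessions.get("sw1").mailbox.put("echo_request");
            sessions.get("sw2").mailbox.put("malformed"); // crashes sw2 only
            Thread.sleep(200); // give the workers time to process
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        Map<String, Boolean> alive = new LinkedHashMap<>();
        sessions.forEach((id, a) -> alive.put(id, !a.crashed));
        return alive;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // sw1 survives, sw2 does not
    }
}
```

Akka (like Erlang) builds this isolation in: a crashing actor cannot corrupt its siblings, which is exactly the property a per-switch session needs.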


Akka is written in Scala, which brings support for functional programming.
Scala is also scalable by design. One Scala feature we are not using yet is its very powerful parsing facility, which we know and use in another of our projects, dedicated to a domain-specific language for binary protocols such as OpenFlow.

Assemble everything together

Using Akka, we have built a pool of actors that communicate by sending messages. Every switch is handled by a separate Switch Connector. Switch Connectors define only basic functionality; a developer can customize behavior by implementing event handlers and registering an actor for a specific event. Currently we have several custom actors implementing a REST API for the Warp controller.
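The registration scheme just described can be sketched as a small event registry. This is a hypothetical illustration, with class and event names of our own choosing rather than Warp's actual API; in Warp the handlers would be actors receiving messages:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class EventBus {
    // Event type -> handlers registered for it (in Warp: actors, not lambdas).
    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();

    /** Register a handler for one specific event type. */
    public void register(String eventType, Consumer<String> handler) {
        handlers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    /** Dispatch an event only to handlers registered for its type. */
    public void publish(String eventType, String payload) {
        handlers.getOrDefault(eventType, Collections.emptyList())
                .forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // A custom "REST" handler subscribes only to PacketIn events.
        bus.register("PacketIn", p -> System.out.println("REST handler saw: " + p));
        bus.publish("PacketIn", "flow-miss from sw1"); // delivered
        bus.publish("PortStatus", "nobody registered"); // silently dropped
    }
}
```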

Protocol drivers are completely separated from the controller part. Via a simple API, every controller actor can build, customize and serialize messages. Moreover, the protocol drivers can be used in any other JVM application; as a proof of concept, we have used the Warp OpenFlow driver in the Floodlight controller.
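A build-customize-serialize driver API might look roughly like this. The 8-byte header layout (version, type, length, xid, in network byte order) is the standard OpenFlow one; the class and method names are our sketch, not Warp's actual driver interface:

```java
import java.nio.ByteBuffer;

public class OfMessageBuilder {
    private byte version = 0x04; // OpenFlow 1.3
    private byte type;           // e.g. 0 = OFPT_HELLO
    private int xid;             // transaction id, echoed by the switch

    public OfMessageBuilder type(int t) { this.type = (byte) t; return this; }
    public OfMessageBuilder xid(int x)  { this.xid = x; return this; }

    /** Serialize the 8-byte OpenFlow header (a HELLO carries no extra body). */
    public byte[] build() {
        ByteBuffer buf = ByteBuffer.allocate(8); // big-endian by default
        buf.put(version).put(type).putShort((short) 8).putInt(xid);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] hello = new OfMessageBuilder().type(0).xid(42).build();
        System.out.println(hello.length + " bytes, xid="
                + ByteBuffer.wrap(hello).getInt(4));
    }
}
```

Because such a builder has no dependency on the controller, any JVM application can embed it, which is what made the Floodlight proof of concept possible.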

Currently only OpenFlow, the most widely used protocol in SDN, is implemented.

High-load testing

Well, we’ve just started yet. We have performed quick and initial testing of Warp controller running on on 4-core CPU against 600 LINC logical switches running on two 4-core CPUs. I would say, we were satisfied:

  1. There were about a thousand heartbeats per minute from different switches.
  2. Session establishment (TCP connection and handshake) took 25 seconds.
  3. Using a script to establish the sessions, we observed about 60-70% CPU utilization across all cores during these tests.



Abstract licensed under Creative Commons Attribution-ShareAlike 3.0 license