Configure the default docker network with systemd

Docker is a really good tool for keeping dependency-heavy applications from polluting your system. This is really useful when you want to keep your system as clean as possible.

The other day, we were studying databases at my engineering school, and we had to install oracle-xe. I really don't like installing this kind of software on my system, because it's ugly, it doesn't behave like everything else (no pacman -S oracle-xe), and so on. So I decided to use Docker to run oracle-xe in its own environment instead of installing it directly on my system.

The catch is that Docker uses a bridge interface to run its containers. This bridge interface is pretty useful, except when... your route -n looks like this :

root@kripton ~ # route -n
Kernel IP routing table
Destination      Gateway          Genmask         Flags Metric Ref    Use Iface
0.0.0.0          <LAN gateway>    0.0.0.0         UG    202    0        0 eno1
<docker subnet>  <bridge address> <docker mask>   UG    0      0        0 docker0
<docker subnet>  0.0.0.0          <docker mask>   U     202    0        0 docker0

Because Docker's route is more specific than your default route (the eno1 route), when you want to reach a computer on your LAN, your system routes the packets through the docker0 interface instead of the eno1 interface. Conclusion ? You can't reach this computer. Uuuuuh ;(

How does it work ?

Docker is a container engine that manages your dependencies and your application's runtime environment. In order to have its own network, it creates a virtual network interface, also known as a bridge interface. This interface lets you connect to your application through an isolated network, which is... pretty nice.

The bridge interface is created by Docker at daemon startup and is entirely managed by it if you do not configure your own interface. Because it's created by Docker, it comes with default values that might not fit your system or network configuration.

This was exactly my issue. The Docker bridge interface's default subnet conflicted with my school's network subnet. Because of this, I couldn't connect to my school VPN, and I couldn't use the Internet and Docker at the same time.

Configure your bridge interface

If, like me, you never wrote anything in your systemd network management configuration, your /etc/systemd/network is empty. This folder contains all the network interface and virtual interface configuration used by the systemd-networkd service to configure your network. To test your configuration, you can simply restart the service :

systemctl restart systemd-networkd

How to organise your /etc/systemd/network folder ?

Files are read in alphabetical order. As with a lot of other configuration folders of this kind, you should prefix all your files with a two-digit number between 00 and 99 (see the example layout below).

Files also have 2 different extensions :

*.netdev  for network device creation and setup
*.network for network device configuration
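For example, a layout could look like this (apart from the .netdev file used later in this post, the file names here are just an illustration) :

$ ls /etc/systemd/network/
20-bridge-docker0.netdev  20-bridge-docker0.network  30-wired-eno1.network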

Before going any further, you should read man systemd.network and man systemd.netdev. They are really useful to understand what every section means and why we use them. systemd's network configuration does not work like the old-style configuration (through the /etc/network/interfaces file).

Before doing anything, stop docker

You are going to make some significant modifications to the Docker network infrastructure. That might confuse a running daemon, so just stop it before doing anything.

systemctl stop docker

Create your Docker bridge network

The first thing to do is to create the bridge interface for Docker. You can name this bridge whatever you want, but I like the docker0 name (and the numbering is handy in case you want to have more than one instance of Docker running).

In order to create it, you must create a .netdev file containing :

  • the device name (docker0 is an example)
  • the device kind (see man systemd.netdev for all available device kinds)
  • and, if you want, a device description.
$ cat /etc/systemd/network/20-bridge-docker0.netdev
[NetDev]
Name=docker0
Kind=bridge
Description=Docker bridge network

Configure your main interface(s)

Before configuring your Docker bridge, set up your other interfaces. Here is the .network file that configures my eno1 interface :

$ cat /etc/systemd/network/
[Match]
Name=eno1

[Network]
DHCP=yes

As you can see, my interface eno1 is in DHCP mode. You can also configure it statically, like this :

$ cat /etc/systemd/network/
[Match]
Name=eno1

[Network]
# replace the placeholders with your own addressing
Address=<your address>/<prefix>
Gateway=<your gateway>
DNS=<your dns server>

Configure your bridge

Configuring your bridge interface is nearly the same as configuring your basic network interfaces. The main catch is that if you specify a Gateway parameter in your configuration (mandatory for Docker), systemd-networkd creates a default route for your system. Sometimes this default route overrides your main interface's default route (eno1 in my case). The consequence is that you can't access the Internet anymore, which is quite annoying.

So we need a more specific route that only covers the Docker network, and we define it in the .network configuration file.

$ cat /etc/systemd/network/
[Match]
Name=docker0

[Network]
# example values : pick a subnet that does not conflict with your LAN or VPN
Address=<bridge address>/<prefix>

[Route]
Destination=<docker subnet>/<prefix>
Gateway=<bridge address>

Tell docker which interface to use

The default interface Docker uses is docker0. But if for some reason you want to rename this device, you must tell Docker which device it has to use. I haven't found any other way so far, so here is what I did :

# cat /etc/systemd/system/
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
ExecStart=/usr/bin/docker daemon -b docker0 -H fd://
The only thing I modified in this file is the -b docker0 option on the ExecStart line of the [Service] section. It tells Docker to use the docker0 bridge. And that's it.

Restart your networkd and check that everything is okay

After this, just restart systemd-networkd and check the result (Docker itself will be restarted at the very end) :

systemctl restart systemd-networkd
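You can also ask networkd about the bridge itself with networkctl, which ships with systemd (it should show the address you configured on docker0) :

networkctl status docker0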

At this point, your route should look like this :

# route -n
Kernel IP routing table
Destination      Gateway          Genmask          Flags Metric Ref    Use Iface
0.0.0.0          <LAN gateway>    0.0.0.0          UG    202    0        0 eno1
<docker subnet>  0.0.0.0          <docker mask>    U     0      0        0 docker0
<docker subnet>  <bridge address> <docker mask>    UG    0      0        0 docker0
<LAN subnet>     0.0.0.0          <LAN mask>       U     202    0        0 eno1
<some host>      <LAN gateway>    255.255.255.255  UH    1024   0        0 eno1

And your interfaces like this :

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether ec:ff:bb:cc:00:22 brd ff:ff:ff:ff:ff:ff
    inet <address>/<prefix> brd <broadcast> scope global dynamic eno1
       valid_lft 169262sec preferred_lft 169262sec
    inet <address>/<prefix> brd <broadcast> scope global secondary dynamic eno1
       valid_lft 170251sec preferred_lft 170251sec
    inet <address>/<prefix> brd <broadcast> scope global secondary eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::d569:a090:c38d:5a6/64 scope link 
       valid_lft forever preferred_lft forever
31: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:f9:ce:3c:c8 brd ff:ff:ff:ff:ff:ff
    inet <bridge address>/<prefix> brd <broadcast> scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::d252:9a4:c734:a541/64 scope link 
       valid_lft forever preferred_lft forever

Small note : sometimes, the docker0 interface might keep some traces of the old configuration (the one set up by Docker). You might see something like this :

# ip a
31: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:f9:ce:3c:c8 brd ff:ff:ff:ff:ff:ff
    inet <old docker address>/<prefix> brd <broadcast> scope global docker0
       valid_lft forever preferred_lft forever
    inet <your new address>/<prefix> brd <broadcast> scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::d252:9a4:c734:a541/64 scope link 
       valid_lft forever preferred_lft forever

To get rid of it, just tell ip to delete the old address on your bridge interface :

ip addr del <old address>/<prefix> dev docker0

You're done with this part if :

  • your bridge interface is configured on the sub-network you chose (you can verify all of this with the commands right after this list)
  • you have a route that points to this sub-network, like this :
    Destination      Gateway          Genmask        Flags Metric Ref    Use Iface
    <docker subnet>  <bridge address> <docker mask>  UG    0      0        0 docker0
  • you do NOT have a default route using your bridge network
  • your default route uses your main interface, like this :
    Destination      Gateway        Genmask          Flags Metric Ref    Use Iface
    0.0.0.0          <LAN gateway>  0.0.0.0          UG    202    0        0 eno1
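You can quickly check these points with iproute2 :

ip route show default        # must go through eno1, not docker0
ip route show dev docker0    # must only contain your Docker sub-network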

Time to restart docker and enjoy !

You can now restart docker.

systemctl restart docker
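To double-check that containers now get addresses on the subnet you chose, you can start a throwaway container (the busybox image is just an example here) :

docker run --rm busybox ip addr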

Any issues ?

Please tell me !

Let's forget about net-tools and welcome iproute2

iproute2 is a collection of tools that aims to replace old commands like ifconfig, route or arp from the net-tools package. A lot of people are still using net-tools but, as described in this official statement, net-tools is headed for deprecation and does not support some newer kernel network features. Some Linux distributions like Archlinux have already deprecated the net-tools utilities but still provide them in their repositories for backward compatibility with old scripts.

Besides the missing functionality, there is also an efficiency gap in the net-tools package. net-tools commands read their information from the /proc directory, while iproute2 uses the Netlink kernel interface, which is much faster.

An example of iproute2's strength versus net-tools

Let's compare net-tools' route utility with the ip route utility. In most cases, classic routing is enough for what you want to do. For example, if you want to modify the default route, both iproute2 and net-tools provide this functionality :

# net-tools version :
route add default gw $ip

# iproute2 version :
ip route add default via $ip

However, sometimes you do not want to route a packet based only on its destination address. As the ip rule manual says :

In some circumstances we want to route packets differently depending not only on destination addresses, but also on other packet fields: source address, IP protocol, transport protocol ports or even packet payload. This task is called 'policy routing'.

Example of a situation that requires policy routing

Let's say we have the following machine :

+----- Machine -----+      VPN subnet       +-- Router --+
| 'tun0' interface  >----------------------<             |
|       10.10.10.x  |                       +------------+
|                   |
|                   |      WAN subnet       +-- Router --+
| 'wan' interface   >----------------------<             |
|                   |                       +------------+
+-------------------+

This machine is subject to these constraints :

  • the default route for every packet goes through the VPN's router (i.e. the router reachable over tun0) ;
  • only packets marked 32/32 (VPN packets and some other packets) must go through the wan interface (32/32 is just an example mark).

In this situation, the problem is the last constraint. With classic routing, which is all net-tools' route utility can do, you cannot associate a mark with a specific route.

A solution (there are tons of different solutions) is to create a new routing table named wan-force in /etc/iproute2/rt_tables :

# reserved values
255 local
254 main
253 default
0   unspec
# local
1 wan-force

Then bind every packet marked 32/32 to the wan-force table using ip rule :

ip rule add fwmark "32/32" priority 16286 lookup wan-force

If you now list your rules using ip rule list, you will see the different rules and their priorities :

0:      from all lookup local 
16286:  from all fwmark 0x20/0x20 lookup wan-force 
32766:  from all lookup main 
32767:  from all lookup default

Finally, add the default routes to the wan-force and main tables :

ip route add default via "$wan_router_ip" table wan-force
ip route add default via "$vpn_router_ip" # the main table is selected automatically

Now every packet marked 32/32 will be automatically routed through the wan interface.
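To verify which route the kernel would pick for a given packet, you can ask it directly with ip route get (the 8.8.8.8 address and the 0x20 mark are just examples) :

ip route get 8.8.8.8              # resolved with the main table : goes through the VPN
ip route get 8.8.8.8 mark 0x20    # matches the fwmark rule : goes through wan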

net-tools and iproute2 commands cheatsheet

It is sometimes hard to get into iproute2 when you are used to the net-tools commands. To make it easier, here is a small cheatsheet based on the Red Hat Enterprise Linux ip command cheatsheet.

`net-tools` command       `iproute2` command                     Command purpose
`arp -na`                 `ip neigh`                             Display the neighbour objects (i.e. the ARP table)
`ifconfig`                `ip link`                              Manage and display the state of all network interfaces
`ifconfig -a`             `ip addr show`                         Display IP addresses and property information
`ifconfig -s`             `ip -s link`                           Display the network statistics per interface
`ifconfig eth0 up|down`   `ip link set eth0 up|down`             Enable / disable an interface
`netstat`                 `ss`                                   Display socket statistics (see the Red Hat cheatsheet for some useful options)
`route [...]`             `ip route [...] [table {table-name}]`  Display and alter the routing table(s)
—                         `ip rule`                              Display and manage the rules for routing table selection

Python: dive into the functions

Recently I found a really useful blog, and it motivated me to write my own posts about Python and some other technologies. As its author said in his first post, writing about the things you discover helps you remember how they work, and I hope I won't be too lazy to keep writing :)

Let's define an easy function.

In [1]:
def foo(a=[]):
    a.append(5)
    return a

What does the following code do ?

for i in xrange(4):
    foo()

result = foo()

For those who think that result == [5], you need to learn some things about Python (hint : the default value a=[] is created only once, when the function object itself is created).

As you may know, everything in Python is an object. 3, 3.5, "Hello World"... are objects, but even the types (int, float or type) and the functions are objects. The question is : what is a function object ?

 The function object ?

At this point, you can see that in your globals, the foo name points to a function object :

>>> globals()
    # .../...
    'foo': <function foo at 0x2b919f6a17d0>,
    # .../...
>>> type(foo)
<type 'function'>
>>> print type(foo).__doc__
function(code, globals[, name[, argdefs[, closure]]])

Create a function object from a code object and a dictionary.
The optional name string overrides the name from the code object.
The optional argdefs tuple specifies the default argument values.
The optional closure tuple supplies the bindings for free variables.

Functions are first-class objects. This means that, as with any other object, you can (a quick example follows the list) :

  • store them in variables or data structures (i.e. classes)
  • compare them with other entities
  • pass them as parameters to, or return them from, other functions
  • construct them at runtime
  • print or read them
  • manipulate their attributes
  • etc...
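A quick illustration of a few of these points (Python 2, like the rest of this post ; the names are made up for the example) :

def shout(message):
    return message.upper() + " !"

def whisper(message):
    return message.lower() + "..."

# store functions in a data structure
styles = {"shout": shout, "whisper": whisper}

# pass a function around like any other value
def speak(style, message):
    return styles[style](message)

print speak("shout", "hello")    # HELLO !
print speak("whisper", "HELLO")  # hello...

# compare and inspect them like any other object
print shout is styles["shout"]   # True
print shout.func_name            # shout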

The function object's attributes

In [2]:
print [e for e in dir(type(foo)) if not e.startswith("_")]
['func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals', 'func_name']

A function instance is composed of 7 public attributes. Note that this notebook is written with the Python 2.7 interpreter.

From the What's new in Python 3.0 :

The function attributes named `func_X` have been renamed to use the `__X__` form, freeing up these names in the function attribute namespace for user-defined attributes.

So Python 3.x does not support these attributes anymore, and uses the following instead :

In [3]:
print ["__%s__" % e[5:] for e in dir(type(foo)) if not e.startswith("_")]
['__closure__', '__code__', '__defaults__', '__dict__', '__doc__', '__globals__', '__name__']
In [4]:
public_attributes = (e for e in dir(foo) if not e.startswith('_') and e != 'func_globals')
print "\n".join("%-14s : %s" % (e, getattr(foo, e)) for e in public_attributes)
func_closure   : None
func_code      : <code object foo at 0x13bdbb0, file "<ipython-input-1-a62784216969>", line 1>
func_defaults  : ([],)
func_dict      : {}
func_doc       : None
func_name      : foo
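Notice the func_defaults attribute : this is where the default value a=[] from the opening foo lives. Since it is created once and stored on the function object, every call that mutates it sees the previous calls' changes :

print foo.func_defaults    # ([],)
foo()
foo()
print foo.func_defaults    # ([5, 5],) : the same list object, mutated by each call
print foo()                # [5, 5, 5]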

The function describers

The following attributes help represent the Python function for the user :

  • the func_name (or __name__) attribute is the original function name. This value is used when you want to represent your object (i.e. in the __repr__ method).
    def my_func(): pass
    bar = my_func
    assert bar.func_name == my_func.func_name == "my_func"  # True
    assert str(bar).startswith("<function my_func at")  # True
  • the func_doc (or __doc__) attribute contains the docstring of your function. It is heavily used by tools such as Sphinx (or even the help function) to generate the function's documentation, so don't forget to write your docstrings ! A small example of both attributes follows.
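Here is a quick illustration (the greet function is made up for the example) :

def greet(name):
    """ Return a greeting for the given name. """
    return "Hello %s" % name

print greet.func_name          # greet
print greet.__name__           # greet (same attribute, other name)
print greet.func_doc.strip()   # Return a greeting for the given name.
help(greet)                    # help() also reads the docstring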

The end-user storage properties

Sometimes it is useful to mark your function, for whatever reason. If you often use decorators, you may have heard about the synchronized decorator. It intends to do the same thing as the synchronized keyword in Java : in a threaded context, you sometimes don't want two threads to enter a function at the same time. The decorator looks like this :

from functools import wraps
import threading

def synchronized(func):
    """ Decorator to make a function thread-safe """
    lock = threading.Lock()

    @wraps(func)
    def _wrapper(*args, **kwargs):
        """ Wrapper of the function to lock the thread until the lock is acquired """
        with lock:
            return func(*args, **kwargs)

    return _wrapper

@synchronized
def foo():
    print "I'm synchronized !"

Sounds cool, doesn't it ? Yes, but sometimes, for some reason, you want a specific lock on your function. For example, if you have 2 functions that must share the same lock (because they read the same socket, for example), a good way could be to register the lock in a shared place and then adapt the synchronized decorator a bit.

Good news : you have an end-user dictionary available for registering metadata about the function (in this example, a lock). This dictionary is exposed as the func_dict attribute, and here is an example of how to use it.

def synchronized_v2(lock=None):
    if lock is None:
        lock = threading.Lock()

    def _decorator(func):
        func.func_dict["lock"] = lock

        @wraps(func)
        def _wrapper(*args, **kwargs):
            with func.func_dict["lock"]:
                return func(*args, **kwargs)

        return _wrapper
    return _decorator

# a single lock shared by both functions (the name is illustrative)
shared_lock = threading.Lock()

@synchronized_v2(shared_lock)
def bar(): pass

@synchronized_v2(shared_lock)
def joe(): pass

The functions bar and joe are now synchronized with the same lock.

With this kind of code, you can easily maintain or change the lock of a function by changing its func_dict["lock"] attribute. This is one good way to deal with the problem, but there are tons of others, as you can see on Graham Dumpleton's blog.

A scope and closure overview

One of the most important aspects of a Python function is its scopes. Before Python 2.2, Python defined 3 scopes (a short example follows the list) :

  • the local namespace, which references all the names defined in the body of the function, the class or the method ;
  • the global namespace, which references all the global objects defined in the module ;
  • the built-in namespace, which references all the built-in functions.
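A quick illustration of these three lookups (the names are invented for the example) :

x = "global"                 # lives in the module's global namespace

def show_scopes():
    y = "local"              # lives in the function's local namespace
    print y                  # resolved in the local namespace
    print x                  # resolved in the global namespace
    print len("abc")         # len is resolved in the built-in namespace

show_scopes()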

This was useful, but not enough. A common issue happens with nested functions :

def outer():
    somevar = []

    def inner():
        return somevar

    return inner

The function outer is a function factory : it generates the inner function, which uses the variable somevar. But... what is the scope of this variable ? Obviously not built-in. For it to be global, the function definition would have to look like this :

def outer_global():
    global somevar
    somevar = []

    def inner():
        global somevar
        assert "somevar" in globals() and not "somevar" in locals()
        return somevar

    return inner
In [5]:
def outer_test():
    somevar = []
    def inner():
        assert "somevar" in locals()
        return somevar
    return inner

inner = outer_test()
inner()

Alright, the assertion passed. The overall idea here would be to say that all the variables from the outer scope are copied into the current scope (except when the outer scope is the module scope, because then we fall into the global namespace).

Let's see...

In [6]:
def outer_test_2():
    somevar, someothervar = [], []

    def inner():
        assert "somevar" in locals() and "someothervar" in locals()
        somevar.append(5)
        return somevar

    return inner

inner = outer_test_2()
inner()
AssertionError                            Traceback (most recent call last)
<ipython-input-6-a0c8881c4412> in <module>()
     11 inner = outer_test_2()
---> 12 inner()

<ipython-input-6-a0c8881c4412> in inner()
      4     def inner():
----> 5         assert "somevar" in locals() and "someothervar" in locals()
      6         somevar.append(5)
      7         return somevar


Okay, so it only copies some of the variables (the ones used in the current scope and defined in the outer scope).

And what if the variables are not copied ?

In [7]:
def outer_test_3():
    somevar = []
    def inner_1():
        somevar.append(5)
        return somevar
    def inner_2():
        somevar.append(6)
        return somevar
    return inner_1, inner_2

inner_1, inner_2 = outer_test_3()
assert inner_1() == [5]
inner_2()
[5, 6]

Hmm, it looks like the variables are not copied at all : they are just references to the objects from the outer scope.

Let's destroy all your dreams.

In [8]:
del outer_test_3

try:
    _ = outer_test_3()
except NameError:
    print "Yes, it does not exist anymore."

print inner_1()
print inner_2()
Yes, it does not exist anymore.
[5, 6, 5]
[5, 6, 5, 6]

As you may know, Python objects have a reference counter. When the reference counter drops to 0, the object is ready to be garbage collected.

A reference is a variable bound to the object, the fact that the object is held by another object (for example a list, or an attribute of another object), or anything else pointing to this object. Here, the inner functions have a local variable somevar pointing to the list created in the outer function. Because it's local, the inner's somevar variable is automatically unreferenced at the end of the function's execution.

So, since we deleted the outer function, it looks like nothing references the list anymore. But that's not true. Python keeps a reference to the object in an attribute of the function, so the function can still use it and it doesn't get garbage collected. This is known as a function closure.

In [9]:
print "Function closure content  : ", function.func_closure
print "Closure cell contents     : ", function.func_closure[0].cell_contents
print "Id of the closure object  : ", id(function.func_closure[0].cell_contents)
print "Id of the returned object : ", id(function())
print "Returned == closure ?     : ", id(function()) == id(function.func_closure[0].cell_contents)
Function closure content  : 
NameError                                 Traceback (most recent call last)
<ipython-input-9-03fb8ad932d7> in <module>()
----> 1 print "Function closure content  : ", function.func_closure
      2 print "Closure cell contents     : ", function.func_closure[0].cell_contents
      3 print "Id of the closure object  : ", id(function.func_closure[0].cell_contents)
      4 print "Id of the returned object : ", id(function())
      5 print "Returned == closure ?     : ", id(function()) == id(function.func_closure[0].cell_contents)

NameError: name 'function' is not defined

How does Python manage closures ?

When Python compiles a function, for every direct variable reference, the compiler resolves the reference in the following order :

  • If the action is an assignment, the reference will be created at runtime in the locals ;
  • If the action is not an assignment, but an assignment with the same name has been made before, the reference will be resolved at runtime in the locals ;
  • If the action is not an assignment, check whether the name is defined in the locals of an outer scope (going outwards until the global scope is reached) ; if so, add a closure associated with this reference ;
  • If the previous case failed, find the reference at runtime in the globals (and raise a NameError if it is not found).

Once a closure is detected, a cell is created in the func_closure attribute of the function, containing the reference to the object, and the name is recorded in the code objects : as a co_freevars entry of the inner function, and as a co_cellvars entry of the outer function.
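You can observe this on an outer/inner pair like the one from the beginning of this section (Python 2) :

def outer():
    somevar = []
    def inner():
        return somevar
    return inner

inner = outer()
print outer.func_code.co_cellvars   # ('somevar',) : outer provides the cell
print inner.func_code.co_freevars   # ('somevar',) : inner uses it as a free variable
print inner.func_closure            # a 1-element tuple containing the cell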

I will write another article on this, because it's a bit hard to grasp and requires a deeper analysis of Python's behaviour, which is not the current subject.

Closures side effects

Closures also come with a not-so-well-known issue : if the variable captured by the closure is changed for some reason outside of the function, the function will not behave the same way anymore. Let's see an example :

In [10]:
def closure_issue(value):
    funcs = []
    for i in xrange(3):
        def my_func(x):
            return x * i
        funcs.append(my_func)
    for func in funcs:
        print func(value),
    return funcs

funcs = closure_issue(2)
4 4 4

Here, we expected a result like "0 2 4" and we got "4 4 4". Yet 3 different functions have indeed been created :

In [11]:
assert id(funcs[0]) != id(funcs[1]) != id(funcs[2])

The culprit is the function closure. For every instance of my_func, i is a closure variable. This means that if i changes outside of the function, my_func's i changes too. Even though funcs is a list of 3 different functions, all the closures point to the same object :

In [12]:
closures_contents = [func.func_closure[0].cell_contents for func in funcs]
assert id(closures_contents[0]) == id(closures_contents[1]) == id(closures_contents[2])

A way to deal with this kind of issue is to bind the closure variable's current value as a default parameter.

In [13]:
def closure_solved(value):
    funcs = []
    for i in xrange(3):
        def my_func(x, i=i):
            print "Value of i:", i, " - Id of i:", id(i)
            return x * i
        funcs.append(my_func)
    print " ".join(str(func(value)) for func in funcs)
    return funcs

funcs = closure_solved(2)
Value of i: 0  - Id of i: 9369584
Value of i: 1  - Id of i: 9369560
Value of i: 2  - Id of i: 9369536
0 2 4

Written by Axel Martin

I am a French engineering school student, passionate about Python and new technologies. I like a lot of things, think about a lot of things, and try to share as much as possible with people.