About this series
Ever since I first saw VPP - the Vector Packet Processor - I have been deeply impressed with its performance and versatility. For those of us who have used Cisco IOS/XR devices, like the classic ASR (Aggregation Services Router), VPP will look and feel quite familiar as many of the approaches are shared between the two.
You’ll hear me talk about VPP being API centric, with no configuration persistence, and that’s by design. However, there is also a CLI utility called vppctl, right, so what gives? In truth, the CLI is used a lot by folks to configure their dataplane, but it was really always meant to be a debug utility. There’s a whole wealth of programmability that is not exposed via the CLI at all, and the VPP community develops and maintains an elaborate set of tools to allow external programs to (re)configure the dataplane. One such tool is my own [vppcfg], which takes a YAML specification that describes the dataplane configuration and applies it safely to a running VPP instance.
Introduction
In case you’re interested in writing your own automation, this article is for you! I’ll provide a deep dive into the Python API which ships with VPP. It’s actually very easy to use once you get used to it – assuming you know a little bit of Python, of course :)
VPP API: Anatomy
When developers write their VPP features, they’ll add an API definition file that describes the control-plane messages, which are typically exchanged via a shared memory interface – which explains why these things are called memclnt in VPP. API types can be defined, resembling their underlying C structures, and these types are passed along in messages. Finally, a service is a Request/Reply pair of messages. When a request is received, VPP executes a handler whose job it is to parse the request and send either a singular reply, or a stream of replies (like a list of interfaces).
Clients connect to a unix domain socket, typically /run/vpp/api.sock. A TCP port can also be used, with the caveat that there is no access control provided. Messages are exchanged over this channel asynchronously. A common pattern of async API design is to have a client identifier (called a client_index) and some random number (called a context) with which the client identifies their request. Using these two things, VPP can issue a callback: (a) the client_index tells it which client to send the reply to, and (b) the context tells the client which request the reply belongs to.
By the way, this asynchronous design pattern gives programmers one really cool benefit out of the box: events that are not explicitly requested, like say, link-state change on an interface, can now be implemented by simply registering a standing callback for a certain message type - I’ll show how that works at the end of this article. As a result, any number of clients, their requests and even arbitrary VPP initiated events can be in flight at the same time, which is pretty slick!
API Types
Most API requests pass along data structures, which follow their internal representation in VPP. I’ll start by taking a look at a simple example – the VPE itself. It defines a few things in src/vpp/vpe_types.api, notably a few type definitions and one enum:
typedef version
{
  u32 major;
  u32 minor;
  u32 patch;
  /* since we can't guarantee that only fixed length args will follow the typedef,
     string type not supported for typedef for now. */
  u8 pre_release[17];     /* 16 + "\0" */
  u8 build_metadata[17];  /* 16 + "\0" */
};

typedef f64 timestamp;
typedef f64 timedelta;

enum log_level {
  VPE_API_LOG_LEVEL_EMERG = 0,    /* emerg */
  VPE_API_LOG_LEVEL_ALERT = 1,    /* alert */
  VPE_API_LOG_LEVEL_CRIT = 2,     /* crit */
  VPE_API_LOG_LEVEL_ERR = 3,      /* err */
  VPE_API_LOG_LEVEL_WARNING = 4,  /* warn */
  VPE_API_LOG_LEVEL_NOTICE = 5,   /* notice */
  VPE_API_LOG_LEVEL_INFO = 6,     /* info */
  VPE_API_LOG_LEVEL_DEBUG = 7,    /* debug */
  VPE_API_LOG_LEVEL_DISABLED = 8, /* disabled */
};
By doing this, API requests and replies can start referring to these types. Reading this feels a bit like reading a C header file, showing me the structure. For example, if I ever need to pass along an argument called log_level, I know which values I can provide, together with their meaning.
API Messages
I now take a look at src/vpp/api/vpe.api itself, which is where the VPE API is defined. It imports the vpe_types.api file from above, so it can reference those typedefs and the enum. Here, I see a few messages defined that constitute a Request/Reply pair:
define show_version
{
  u32 client_index;
  u32 context;
};

define show_version_reply
{
  u32 context;
  i32 retval;
  string program[32];
  string version[32];
  string build_date[32];
  string build_directory[256];
};
There’s one small surprise here right out of the gate. I would’ve expected that beautiful typedef called version from the vpe_types.api file to make an appearance, but it’s conspicuously missing from the show_version_reply message. Ha! The rest of it seems reasonably self-explanatory – as I already know about the client_index and context fields, I now know that this request does not carry any arguments, and that the reply has a retval for application errors, similar to how most libC functions return 0 on success and a negative error number defined in [errno.h] otherwise. Then, there are four strings of the given lengths, which I should be able to consume.
API Services
The VPP API defines three types of message exchanges (see the sketch after this list):

- Request/Reply - The client sends a request message and the server replies with a single reply message. The convention is that the reply message is named method_name + _reply.
- Dump/Detail - The client sends a "bulk" request message to the server, and the server replies with a set of detail messages. These messages may be of a different type. The method name ends in method + _dump, and the reply messages are named method + _details. These Dump/Detail methods are typically used for acquiring bulk information, like the complete FIB table.
- Events - The client can register for asynchronous notifications from the server. This is useful for getting interface state changes, and so on. The method name for requesting notifications is conventionally prefixed with want_, for example want_interface_events.
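To make these three patterns concrete, here is what each one will look like from the Python API that I’ll introduce later in this article. A minimal sketch, assuming vpp is an already connected VPPApiClient:

# 1. Request/Reply: a single call returns a single reply message.
reply = vpp.api.show_version()

# 2. Dump/Detail: a single call returns a list of zero-or-more detail messages.
for iface in vpp.api.sw_interface_dump():
    print(iface.interface_name)

# 3. Events: register a standing callback, then ask VPP to start sending events.
def on_event(msg_name, msg):
    print(msg_name, msg)

vpp.register_event_callback(on_event)
vpp.api.want_interface_events(enable_disable=True, pid=8298)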
If the convention is kept, the API machinery will correlate the foo and foo_reply messages into RPC services. But it’s also possible to be explicit about these, by defining service scopes in the *.api files. I’ll take two examples; the first one is from the Linux Control Plane plugin (which I’ve [written about] a lot while I was contributing to it back in 2021).
Dump/Detail (example): When enumerating Linux Interface Pairs, the service definition looks like this:
service {
  rpc lcp_itf_pair_get returns lcp_itf_pair_get_reply
    stream lcp_itf_pair_details;
};
To puzzle this together: the request called lcp_itf_pair_get is paired up with a reply called lcp_itf_pair_get_reply, followed by a stream of zero-or-more lcp_itf_pair_details messages. Note the use of the pattern rpc X returns Y stream Z.
Events (example): I also take a look at an event handler like the one in the interface API that made an appearance in my list of API message types, above:
service {
  rpc want_interface_events returns want_interface_events_reply
    events sw_interface_event;
};
Here, the request is want_interface_events, which returns a want_interface_events_reply followed by zero or more sw_interface_event messages – very similar to the streaming (Dump/Detail) pattern. The semantic difference is that streams are lists of things, while events are things asynchronously happening in the dataplane – in other words, the stream is meant to end, while the event messages are generated by VPP whenever the event occurs. In this case, if an interface is created or deleted, or the link state of an interface changes, one of these is sent from VPP to the client(s) that registered an interest in it by calling the want_interface_events RPC.
JSON Representation
VPP comes with an internal API compiler that scans the source code for these *.api files and assembles them into a few output formats. I take a look at the Python implementation of it in src/tools/vppapigen/ and see that it generates C, Go and JSON. As an aside, I chuckle a little bit at a Python script generating Go and C, but I quickly get over myself. I’m not that funny.
The vppapigen tool outputs a bunch of JSON files, one per API specification, each of which wraps up all of the information from the types, unions and enums, the message and service definitions, together with a few other bits and bobs. When VPP is installed, these end up in /usr/share/vpp/api/. As of the upcoming VPP 24.02 release, there are about 50 of these core APIs and an additional 80 or so APIs defined by plugins like the Linux Control Plane.
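If you’re curious what’s inside, a few lines of Python are enough to peek at one of these files. A minimal sketch – the exact path (core/vpe.api.json) and the top-level JSON keys are my assumptions from a stock install, so your layout may differ:

import json

# Peek into one of the generated API definitions. The path assumes a
# stock package install; adjust if your files live elsewhere.
with open("/usr/share/vpp/api/core/vpe.api.json") as f:
    api = json.load(f)

# Each file carries the message and service definitions from its *.api
# source; the first element of each message entry is its name.
print([msg[0] for msg in api["messages"]])
print(list(api["services"].keys()))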
Implementing APIs is pretty user friendly, largely due to the vppapigen tool taking care of so much of the boilerplate and autogenerating things. As an example, I needed to be able to enumerate the interfaces that are MPLS enabled, so that I can use my [vppcfg] utility to configure MPLS. I contributed an API called mpls_interface_dump which returns a stream of mpls_interface_details messages. You can see that small contribution in the merged [Gerrit 39022].
VPP Python API
The VPP API has been ported to many languages (C, C++, Go, Lua, Rust, Python, and probably a few others). I am primarily a user of the Python API, which ships alongside VPP in a separate Debian package. The source code lives in src/vpp-api/python/ and doesn’t have any dependencies other than Python’s own setuptools. Its implementation is canonically called vpp_papi, which, I cannot tell a lie, reminds me of Spanish rap music. But, if you’re still reading, maybe now is a good time to depart from the fundamentals and get to the practical!
Example: Hello World
Without further ado, I dive right in with this tiny program:
from vpp_papi import VPPApiClient, VPPApiJSONFiles
vpp_api_dir = VPPApiJSONFiles.find_api_dir([])
vpp_api_files = VPPApiJSONFiles.find_api_files(api_dir=vpp_api_dir)
vpp = VPPApiClient(apifiles=vpp_api_files, server_address="/run/vpp/api.sock")
vpp.connect("ipng-client")
api_reply = vpp.api.show_version()
print(api_reply)
The first thing this program does is construct a so-called VPPApiClient object. To do this, I need to feed it a list of JSON definitions, so that it knows what types of APIs are available. As I mentioned above, those live on disk; I could assemble the list of files myself, but there are two handy helpers here:

- find_api_dir() - This is a helper that finds the location of the API files. Normally, the JSON files get installed in /usr/share/vpp/api/, but when I’m writing code, it’s more likely that the files are in /home/pim/src/vpp/ somewhere. This helper function tries to do the right thing: it detects whether I’m in a source tree or using a production install, and returns the correct directory.
- find_api_files() - Now, I could rummage through that directory and find the JSON files myself, but there’s another handy helper that does that for me, given a directory (like the one I was just handed). Life is easy.
Once I have the JSON files in hand, I can construct a client by specifying the server_address location to connect to – this is typically a unix domain socket at /run/vpp/api.sock, but it can also be a TCP endpoint. As a quick aside: if you, like me, stumbled over the socket being owned by root:vpp but not writable by the group, that finally got fixed by Georgy in [Gerrit 39862].
Once I’m connected, I can start calling arbitrary API methods, like show_version(), which does not take any arguments. Its reply is a named tuple, and it looks like this:
pim@vpp0-0:~/vpp_papi_examples$ ./00-version.py
show_version_reply(_0=1415, context=1,
retval=0, program='vpe', version='24.02-rc0~46-ga16463610',
build_date='2023-10-15T14:50:49', build_directory='/home/pim/src/vpp')
And here is my beautiful hello world in seven (!) lines of code. All that reading and preparing finally starts paying off. Neat-oh!
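One habit worth forming right away: check retval on a reply before trusting the rest of its fields. A minimal sketch of what that looks like for this call:

# Named tuple fields are accessed as attributes; retval of 0 means success.
api_reply = vpp.api.show_version()
if api_reply.retval != 0:
    raise RuntimeError(f"show_version failed with retval={api_reply.retval}")
print(f"VPP version is {api_reply.version}")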
Example: Listing Interfaces
From here on out, it’s just incremental learning. Here’s an example of how to extend the hello world example above and make it list the dataplane interfaces and their IPv4/IPv6 addresses:
# Enumerate all interfaces, then dump the IPv4 and IPv6 addresses of each.
api_reply = vpp.api.sw_interface_dump()
for iface in api_reply:
    str = f"[{iface.sw_if_index}] {iface.interface_name}"
    ipr = vpp.api.ip_address_dump(sw_if_index=iface.sw_if_index, is_ipv6=False)
    for addr in ipr:
        str += f" {addr.prefix}"
    ipr = vpp.api.ip_address_dump(sw_if_index=iface.sw_if_index, is_ipv6=True)
    for addr in ipr:
        str += f" {addr.prefix}"
    print(str)
The API method sw_interface_dump() can take a few optional arguments. Notably, if sw_if_index is set, the call will dump that exact interface. If it’s not set, it defaults to -1, which dumps all interfaces, and this is how I use it here. For completeness, the method also has an optional string name_filter, which will dump all interfaces whose name contains a given substring. For example, passing name_filter='loop' and name_filter_valid=True as arguments would enumerate all interfaces that have the word ’loop’ in them, as the sketch below shows.
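A quick sketch of that filtered variant, building on the connected vpp client from the hello world example:

# Dump only the interfaces whose name contains the substring 'loop'.
api_reply = vpp.api.sw_interface_dump(name_filter="loop", name_filter_valid=True)
for iface in api_reply:
    print(f"[{iface.sw_if_index}] {iface.interface_name}")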
Now, the definition of the sw_interface_dump method suggests that it returns a stream (remember the Dump/Detail pattern above), so I can predict that the messages I will receive are of type sw_interface_details. There’s lots of cool information in there, like the MAC address, MTU, and encapsulation (if this is a sub-interface), but for now I’ll only make note of the sw_if_index and interface_name.
Using this interface index, I then call the ip_address_dump() method, which looks like this:
define ip_address_dump
{
  u32 client_index;
  u32 context;
  vl_api_interface_index_t sw_if_index;
  bool is_ipv6;
};

define ip_address_details
{
  u32 context;
  vl_api_interface_index_t sw_if_index;
  vl_api_address_with_prefix_t prefix;
};
Alright then! If I want the IPv4 addresses for a given interface (referred to not by its name, but by its index), I can call it with the argument is_ipv6=False. The return is zero or more messages that contain the index again, and a prefix whose precise type can be looked up in ip_types.api. After doing a form of layer-one traceroute through the API specification files, it turns out that this prefix is cast to an instance of the IPv4Interface() class in Python. I won’t bore you with it, but the second call sets is_ipv6=True and, unsurprisingly, returns a bunch of IPv6Interface() objects.
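These are the standard classes from Python’s ipaddress module, so all the usual attributes come for free. A small sketch of what a returned prefix offers:

from ipaddress import IPv4Interface

# The prefix field of an ip_address_details message behaves like this:
addr = IPv4Interface("192.168.10.5/31")
print(addr.ip)               # 192.168.10.5
print(addr.network)          # 192.168.10.4/31
print(addr.with_prefixlen)   # 192.168.10.5/31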
To put it all together, the output of my little script:
pim@vpp0-0:~/vpp_papi_examples$ ./01-interface.py
VPP version is 24.02-rc0~46-ga16463610
[0] local0
[1] GigabitEthernet10/0/0 192.168.10.5/31 2001:678:d78:201::fffe/112
[2] GigabitEthernet10/0/1 192.168.10.6/31 2001:678:d78:201::1:0/112
[3] GigabitEthernet10/0/2
[4] GigabitEthernet10/0/3
[5] loop0 192.168.10.0/32 2001:678:d78:200::/128
Example: Linux Control Plane
Normally, services are either of the Request/Reply or the Dump/Detail type. But careful readers may have noticed that the Linux Control Plane does a little bit of both. It has a Request/Reply/Detail triplet, because for the request lcp_itf_pair_get, it will return a lcp_itf_pair_get_reply AND a stream of lcp_itf_pair_details. Perhaps in hindsight a more idiomatic way to do this would have been to simply create an lcp_itf_pair_dump, but considering this is what we ended up with, I can use it as a good example case – how might I handle such a response?
# The reply is a tuple: (lcp_itf_pair_get_reply, [lcp_itf_pair_details, ...]).
api_reply = vpp.api.lcp_itf_pair_get()
if isinstance(api_reply, tuple) and api_reply[0].retval == 0:
    for lcp in api_reply[1]:
        str = f"[{lcp.vif_index}] {lcp.host_if_name}"
        api_reply2 = vpp.api.sw_interface_dump(sw_if_index=lcp.host_sw_if_index)
        tap_iface = api_reply2[0]
        api_reply2 = vpp.api.sw_interface_dump(sw_if_index=lcp.phy_sw_if_index)
        phy_iface = api_reply2[0]
        str += f" tap {tap_iface.interface_name} phy {phy_iface.interface_name} mtu {phy_iface.link_mtu}"
        print(str)
This particular API first sends its reply and then its stream, so I can expect the result to be a tuple with the first element being a namedtuple and the second element being a list of details messages. A good way to ensure that is to check the reply’s retval field to be 0 (success) before trying to enumerate the Linux Interface Pairs. These consist of a VPP interface (say GigabitEthernet10/0/0), which corresponds to a TUN/TAP device which in turn has a VPP name (e.g. tap1) and a Linux name (e.g. e0).
The Linux Control Plane call will return these dataplane objects as numerical interface indexes, not names. However, I can resolve them to names by calling the sw_interface_dump() method and specifying the index as an argument. Because this is a Dump/Detail type API call, the return will be a stream (a list), which will have either zero elements (if the index didn’t exist) or one (if it did).
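Because this resolve-an-index-to-a-name dance comes up so often, it’s worth a tiny helper. A sketch – the function name is mine, not part of vpp_papi:

def get_interface_name(vpp, sw_if_index):
    # Returns the interface name for sw_if_index, or None if it doesn't exist.
    details = vpp.api.sw_interface_dump(sw_if_index=sw_if_index)
    return details[0].interface_name if details else None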
Using this I can puzzle together the following output:
pim@vpp0-0:~/vpp_papi_examples$ ./02-lcp.py
VPP version is 24.02-rc0~46-ga16463610
[2] loop0 tap tap0 phy loop0 mtu 9000
[3] e0 tap tap1 phy GigabitEthernet10/0/0 mtu 9000
[4] e1 tap tap2 phy GigabitEthernet10/0/1 mtu 9000
[5] e2 tap tap3 phy GigabitEthernet10/0/2 mtu 9000
[6] e3 tap tap4 phy GigabitEthernet10/0/3 mtu 9000
VPP’s Python API objects
The objects in the VPP dataplane can be arbitrarily complex. They can have nested objects, enums, unions, repeated fields and so on. To illustrate a more complete example, I will take a look at an MPLS tunnel object in the dataplane. I first create the MPLS tunnel using the CLI, as follows:
vpp# mpls tunnel l2-only via 192.168.10.3 GigabitEthernet10/0/1 out-labels 8298 100 200
vpp# mpls local-label add 8298 eos via l2-input-on mpls-tunnel0
The first command creates an interface called mpls-tunnel0 which, if it receives an ethernet frame, will encapsulate it into an MPLS datagram with a label stack of 8298.100.200, and then forward it on to the router at 192.168.10.3. The second command adds a FIB entry to the MPLS table: upon receipt of a datagram with the label 8298, unwrap it and present the resulting datagram contents as an ethernet frame to mpls-tunnel0. By cross connecting this MPLS tunnel with any other dataplane interface (for example, HundredGigabitEthernet10/0/1.1234), this would be an elegant way to configure a classic L2VPN ethernet-over-MPLS transport. Which is hella cool, but I digress :)
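For what it’s worth, that cross connect can also be made over the API. If I recall correctly, the l2 API’s sw_interface_set_l2_xconnect message does this – treat the following as a sketch with hypothetical interface indexes rather than a tested recipe:

# Cross connect the MPLS tunnel with a customer-facing interface, in both
# directions. The sw_if_index values here are hypothetical.
tunnel_idx = 17     # mpls-tunnel0
customer_idx = 5    # e.g. a sub-interface carrying the L2VPN payload
vpp.api.sw_interface_set_l2_xconnect(rx_sw_if_index=tunnel_idx,
                                     tx_sw_if_index=customer_idx, enable=True)
vpp.api.sw_interface_set_l2_xconnect(rx_sw_if_index=customer_idx,
                                     tx_sw_if_index=tunnel_idx, enable=True)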
I want to inspect this tunnel using the API, and I find an mpls_tunnel_dump() method. As we know well by now, this is a Dump/Detail type method, so the return value will be a list of zero or more mpls_tunnel_details messages.

The mpls_tunnel_details message is simply a wrapper around an mpls_tunnel type, as can be seen in mpls.api, and it references the fib_path type as well. Here they are:
typedef fib_path
{
  u32 sw_if_index;
  u32 table_id;
  u32 rpf_id;
  u8 weight;
  u8 preference;
  vl_api_fib_path_type_t type;
  vl_api_fib_path_flags_t flags;
  vl_api_fib_path_nh_proto_t proto;
  vl_api_fib_path_nh_t nh;
  u8 n_labels;
  vl_api_fib_mpls_label_t label_stack[16];
};

typedef mpls_tunnel
{
  vl_api_interface_index_t mt_sw_if_index;
  u32 mt_tunnel_index;
  bool mt_l2_only;
  bool mt_is_multicast;
  string mt_tag[64];
  u8 mt_n_paths;
  vl_api_fib_path_t mt_paths[mt_n_paths];
};

define mpls_tunnel_details
{
  u32 context;
  vl_api_mpls_tunnel_t mt_tunnel;
};
Taking a closer look, the mpls_tunnel type consists of an interface index, then an mt_tunnel_index which corresponds to the tunnel number (ie. interface mpls-tunnelN), some boolean flags, and then a vector of N FIB paths. Incidentally, you’ll find FIB paths all over the place in VPP: in routes, tunnels like this one, ACLs, and so on, so it’s good to get to know them a bit.

Remember when I created the tunnel, I specified something like .. via ..? That’s a tell-tale sign that what follows is a FIB path. I specified a nexthop (192.168.10.3 GigabitEthernet10/0/1) and a list of three out-labels (8298, 100 and 200), all of which VPP has tucked away in this mt_paths field.
Although it’s a bit verbose, I’ll paste the complete object for this tunnel, including the FIB path. You know, for science:
mpls_tunnel_details(_0=1185, context=5,
  mt_tunnel=vl_api_mpls_tunnel_t(
    mt_sw_if_index=17,
    mt_tunnel_index=0,
    mt_l2_only=True,
    mt_is_multicast=False,
    mt_tag='',
    mt_n_paths=1,
    mt_paths=[
      vl_api_fib_path_t(sw_if_index=2, table_id=0, rpf_id=0, weight=1, preference=0,
        type=<vl_api_fib_path_type_t.FIB_API_PATH_TYPE_NORMAL: 0>,
        flags=<vl_api_fib_path_flags_t.FIB_API_PATH_FLAG_NONE: 0>,
        proto=<vl_api_fib_path_nh_proto_t.FIB_API_PATH_NH_PROTO_IP4: 0>,
        nh=vl_api_fib_path_nh_t(
          address=vl_api_address_union_t(
            ip4=IPv4Address('192.168.10.3'), ip6=IPv6Address('c0a8:a03::')),
          via_label=0, obj_id=0, classify_table_index=0),
        n_labels=3,
        label_stack=[
          vl_api_fib_mpls_label_t(is_uniform=0, label=8298, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=100, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=200, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0),
          vl_api_fib_mpls_label_t(is_uniform=0, label=0, ttl=0, exp=0)
        ]
      )
    ]
  )
)
This mt_paths field is really interesting, and I’d like to make a few observations:

- type, flags and proto are enums, which I can find in fib_types.api.
- nh is the nexthop - there is only one nexthop specified per path entry, so when things like ECMP multipath are in play, this will be a vector of N paths, each with one nh. Good to know. This nexthop specifies an address, which is a union just like in C: it can be either an ip4 or an ip6, and I know which one to choose due to the proto field above.
- n_labels and label_stack: the MPLS label stack has a fixed size. VPP reveals here (in the API definition but also in the response) that the label stack can be at most 16 labels deep. I feel like this is an interview question at Cisco, somehow. I know how many labels are relevant because of the n_labels field above. Their type is fib_mpls_label, which can be found in mpls.api.
After having consumed all of this, I am ready to write a program that wheels over these message types and prints something a little bit more compact. The final program, in all of its glory –
from vpp_papi import VPPApiClient, VPPApiJSONFiles, VppEnum

def format_path(path):
    str = ""
    if path.proto == VppEnum.vl_api_fib_path_nh_proto_t.FIB_API_PATH_NH_PROTO_IP4:
        str += f" ipv4 via {path.nh.address.ip4}"
    elif path.proto == VppEnum.vl_api_fib_path_nh_proto_t.FIB_API_PATH_NH_PROTO_IP6:
        str += f" ipv6 via {path.nh.address.ip6}"
    elif path.proto == VppEnum.vl_api_fib_path_nh_proto_t.FIB_API_PATH_NH_PROTO_MPLS:
        str += " mpls"
    elif path.proto == VppEnum.vl_api_fib_path_nh_proto_t.FIB_API_PATH_NH_PROTO_ETHERNET:
        api_reply2 = vpp.api.sw_interface_dump(sw_if_index=path.sw_if_index)
        iface = api_reply2[0]
        str += f" ethernet to {iface.interface_name}"
    else:
        print(path)
    if path.n_labels > 0:
        str += " label"
        for i in range(path.n_labels):
            str += f" {path.label_stack[i].label}"
    return str
api_reply = vpp.api.mpls_tunnel_dump()
for tunnel in api_reply:
    str = f"Tunnel [{tunnel.mt_tunnel.mt_sw_if_index}] mpls-tunnel{tunnel.mt_tunnel.mt_tunnel_index}"
    for path in tunnel.mt_tunnel.mt_paths:
        str += format_path(path)
    print(str)

api_reply = vpp.api.mpls_table_dump()
for table in api_reply:
    print(f"Table [{table.mt_table.mt_table_id}] {table.mt_table.mt_name}")
    api_reply2 = vpp.api.mpls_route_dump(table=table.mt_table.mt_table_id)
    for route in api_reply2:
        str = f" label {route.mr_route.mr_label} eos {route.mr_route.mr_eos}"
        for path in route.mr_route.mr_paths:
            str += format_path(path)
        print(str)
Funny detail - it took me almost two years to discover VppEnum, which contains all of these symbols. If you end up reading this after a Bing, Yahoo or DuckDuckGo search, feel free to buy me a bottle of Glenmorangie - sláinte!
The format_path() method here has the smarts. Depending on the proto field, I print either an IPv4 path, an IPv6 path, an internal MPLS path (for example for the reserved labels 0..15), or an Ethernet path, which is the case in the FIB entry above that diverts incoming packets with label 8298 to be presented as ethernet datagrams to the interface mpls-tunnel0. If it is an Ethernet proto, I can use the sw_if_index field to figure out which interface it is, and retrieve its details to find its name.
Finally, the format_path() method adds the stack of labels to the returned string, if the n_labels field is non-zero.
My program’s output:
pim@vpp0-0:~/vpp_papi_examples$ ./03-mpls.py
VPP version is 24.02-rc0~46-ga16463610
Tunnel [17] mpls-tunnel0 ipv4 via 192.168.10.3 label 8298 100 200
Table [0] MPLS-VRF:0
label 0 eos 0 mpls
label 0 eos 1 mpls
label 1 eos 0 mpls
label 1 eos 1 mpls
label 2 eos 0 mpls
label 2 eos 1 mpls
label 8298 eos 1 ethernet to mpls-tunnel0
Creating VxLAN Tunnels
Until now, all I’ve done is inspect the dataplane; in other words, I’ve called a bunch of APIs that do not change state. Of course, many of VPP’s API methods change state as well. I’ll turn to another example API – the VxLAN tunnel API, which is defined in plugins/vxlan/vxlan.api and has gone through a few iterations. The VPP community tries to keep backwards compatibility, and a simple way of doing this is to create new versions of the methods by tagging them with suffixes such as _v2, while eventually marking the older versions as deprecated by setting the option deprecated; field in the definition. In this API specification I can see that we’re already at version 3 of the Request/Reply method in vxlan_add_del_tunnel_v3 and version 2 of the Dump/Detail method in vxlan_tunnel_v2_dump.
Once again using these *.api definitions, finding an incantation to create a unicast VxLAN tunnel with a given VNI, then listing the tunnels, and finally deleting the tunnel I just created, would look like this:
api_reply = vpp.api.vxlan_add_del_tunnel_v3(is_add=True, instance=100, vni=8298,
    src_address="192.0.2.1", dst_address="192.0.2.254", decap_next_index=1)
if api_reply.retval == 0:
    print(f"Created VXLAN tunnel with sw_if_index={api_reply.sw_if_index}")

api_reply = vpp.api.vxlan_tunnel_v2_dump()
for vxlan in api_reply:
    str = f"[{vxlan.sw_if_index}] instance {vxlan.instance} vni {vxlan.vni}"
    str += f" src {vxlan.src_address}:{vxlan.src_port} dst {vxlan.dst_address}:{vxlan.dst_port}"
    print(str)

api_reply = vpp.api.vxlan_add_del_tunnel_v3(is_add=False, instance=100, vni=8298,
    src_address="192.0.2.1", dst_address="192.0.2.254", decap_next_index=1)
if api_reply.retval == 0:
    print(f"Deleted VXLAN tunnel with sw_if_index={api_reply.sw_if_index}")
Many of the APIs in VPP handle create and delete in the same method, mostly by specifying the operation with an is_add argument like here. I think it’s kind of nice because it makes creation and deletion symmetric, even though the deletion needs to specify a fair bit more than strictly necessary: the instance uniquely identifies the tunnel and should have been enough.
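One way to lean into that symmetry is to keep the creation arguments in a single dict and reuse them verbatim for the deletion. A small sketch:

# The same arguments create and delete the tunnel; only is_add differs.
tunnel_args = dict(instance=100, vni=8298, src_address="192.0.2.1",
                   dst_address="192.0.2.254", decap_next_index=1)
vpp.api.vxlan_add_del_tunnel_v3(is_add=True, **tunnel_args)
vpp.api.vxlan_add_del_tunnel_v3(is_add=False, **tunnel_args)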
The output of this [CRUD] sequence (which stands for Create, Read, Update, Delete, in case you haven’t come across that acronym yet) then looks like this:
pim@vpp0-0:~/vpp_papi_examples$ ./04-vxlan.py
VPP version is 24.02-rc0~46-ga16463610
Created VXLAN tunnel with sw_if_index=18
[18] instance 100 vni 8298 src 192.0.2.1:4789 dst 192.0.2.254:4789
Deleted VXLAN tunnel with sw_if_index=18
Listening to Events
But wait, there’s more! Just one more thing, I promise. Way in the beginning of this article, I mentioned that there is a special variant of the Dump/Detail pattern, and that’s the Events pattern. With the VPP API client, first I register a single callback function, and then I can enable/disable events to trigger this callback.
One important note to this: enabling this callback will spawn a new (Python) thread so that the main program can continue to execute. Because of this, all the standard care has to be taken to make the program thread-aware. Make sure to pass information from the events-thread to the main-thread in a safe way!
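A simple way to do that is the standard library’s thread-safe queue.Queue: the callback only enqueues messages, and the main thread dequeues them at its leisure. A minimal sketch:

import queue

events = queue.Queue()

def vpp_event_callback(msg_name, msg):
    # Runs on the vpp_papi event thread: do as little work as possible
    # here, and hand the message over to the main thread.
    events.put((msg_name, msg))

# ... and after registering the callback, the main thread drains the queue:
while True:
    msg_name, msg = events.get()
    print(f"main thread got {msg_name}: {msg}")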
Let me demonstrate this powerful functionality with a program that listens for want_interface_events, which is defined in interface.api:
service {
  rpc want_interface_events returns want_interface_events_reply
    events sw_interface_event;
};

define sw_interface_event
{
  u32 client_index;
  u32 pid;
  vl_api_interface_index_t sw_if_index;
  vl_api_if_status_flags_t flags;
  bool deleted;
};
Here’s a complete program, shebang and all, that accomplishes this in a minimalistic way:
#!/usr/bin/env python3
import time
from vpp_papi import VPPApiClient, VPPApiJSONFiles, VppEnum

def sw_interface_event(msg):
    print(msg)

def vpp_event_callback(msg_name, msg):
    if msg_name == "sw_interface_event":
        sw_interface_event(msg)
    else:
        print(f"Received unknown callback: {msg_name} => {msg}")

vpp_api_dir = VPPApiJSONFiles.find_api_dir([])
vpp_api_files = VPPApiJSONFiles.find_api_files(api_dir=vpp_api_dir)
vpp = VPPApiClient(apifiles=vpp_api_files, server_address="/run/vpp/api.sock")
vpp.connect("ipng-client")

vpp.register_event_callback(vpp_event_callback)
vpp.api.want_interface_events(enable_disable=True, pid=8298)

api_reply = vpp.api.show_version()
print(f"VPP version is {api_reply.version}")

try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    pass
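On shutdown, it’s polite to stop the event stream and disconnect cleanly; a sketch of what could go after the try/except block:

# Unsubscribe from interface events and tear down the API connection.
vpp.api.want_interface_events(enable_disable=False, pid=8298)
vpp.disconnect()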
Results
After all of this deep-diving, all that’s left is for me to demonstrate the API by means of this little screencast [asciinema, gif] - I hope you enjoy it as much as I enjoyed creating it:
Note to self:
$ asciinema-edit quantize --range 0.18,0.8 --range 0.5,1.5 --range 1.5 \
vpp_papi.cast > clean.cast
$ Insert the ANSI colorcodes from the mac's terminal into clean.cast's header:
"theme":{"fg": "#ffffff","bg":"#000000",
"palette":"#000000:#990000:#00A600:#999900:#0000B3:#B300B3:#999900:#BFBFBF:
#666666:#F60000:#00F600:#F6F600:#0000F6:#F600F6:#00F6F6:#F6F6F6"}
$ agg --font-size 18 clean.cast clean.gif
$ gifsicle --lossy=80 -k 128 -O2 -Okeep-empty clean.gif -o vpp_papi_clean.gif