[Nsrc-rwnog16] Virtual training platform
Brian Candler
brian at nsrc.org
Mon Jul 18 03:10:48 PDT 2016
Oh, something I forgot to add.
Several of you asked me about how you could set up a training platform
like the one we had in the workshop - that is, to build a network of
virtual PCs and virtual Cisco routers.
At the moment, I'm afraid I don't have a simple step-by-step guide for
setting this up. What I am in the process of doing is creating a single
virtual machine image which contains all the pieces pre-configured and
ready-to-run; you would then just download this VM and fire it up under
any VM platform of your choice (e.g. KVM, VirtualBox, ESXi). Much work
has been done, but quite a lot more remains. I'm hoping this will
be released before the end of the year.
In the meantime, Alexandrique has kindly provided a hosted server in
Rwanda, and I have installed all the pieces on it in the same way that
the Mac Mini was set up. Talk to him about whether it's possible to get
access to it. (The best way to do this would be to E-mail him your ssh
*public* key and then he can install it in ~/.ssh/authorized_keys on the
host)
I can summarise the pieces used:
* The base OS on the server is Ubuntu 14.04; in future we're likely to
move to 16.04.
* The Cisco emulation is using dynamips, controlled by dynagen.
If you just want to play with this part, I suggest you download the
package "GNS3" which is available for Windows, Linux or OSX, and run it
directly on your laptop. This allows you to emulate one or more virtual
routers and connect them together, and has a good graphical user
interface. There's also lots of help available that you can search for
online.
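For a flavour of what the dynagen side looks like, here is a minimal
.net file wiring a single emulated 7200 between two tap interfaces.
This is a sketch, not our exact workshop config - the IOS image path
and all names are placeholders (the tap interfaces are explained
further down):

    [localhost]
        [[7200]]
            image = /opt/images/c7200-ios.image
            npe = npe-400
            ram = 256
        [[ROUTER rtr1]]
            slot0 = PA-C7200-IO-FE
            slot1 = PA-FE-TX
            # outside interface -> tap port on the backbone bridge
            f0/0 = NIO_tap:tap-lan1
            # inside interface -> tap port on the group's bridge
            f1/0 = NIO_tap:tap-int1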
* In previous workshops, the Linux server emulation was done using KVM
virtual machines controlled by libvirt.
This time we were using "lxd" containers, which have lower resource
demands, since they share the host's kernel. They also allow easy image
creation and cloning. However, they are less isolated from each other
than "real" VMs, because they share resources in the same underlying
kernel.
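The day-to-day container workflow is only a handful of commands,
roughly like this (container names are illustrative):

    # create a container from the ubuntu 16.04 image
    lxc launch ubuntu:16.04 pc1
    # clone it to make more student PCs (stop it first)
    lxc stop pc1
    lxc copy pc1 pc2
    lxc start pc1 pc2
    # get a shell inside a container
    lxc exec pc1 -- /bin/bash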
lxd is a new feature native to ubuntu 16.04, but you can get it on
ubuntu 14.04 by installing lxd from a personal package archive (PPA). The
instructions for doing this are here:
https://linuxcontainers.org/lxd/getting-started-cli/
For using lxd containers with ubuntu 14.04, you should also update your
kernel using "apt-get install linux-generic-lts-xenial"
Note that for this workshop the containers themselves were running
ubuntu 16.04, even though the base platform was 14.04.
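The 14.04 install boils down to something like this (the PPA name is
the one the lxd docs pointed at when we set this up; do check the link
above for current instructions):

    sudo apt-get install software-properties-common
    sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
    sudo apt-get update
    sudo apt-get install lxd
    # newer kernel from the xenial HWE stack, then reboot
    sudo apt-get install linux-generic-lts-xenial
    sudo reboot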
* Connection between dynamips and the containers is done using Linux
software virtual bridges and tap interfaces.
For example: the backbone network 10.10.0.0/24 is on a bridge called
"br-lan" where the Mac Mini and the wifi access points also connect. We
have 9 tap interfaces (tap-lan1 to tap-lan9) as ports on that bridge,
and the dynagen config file has the 9 Cisco routers with their outside
interfaces connected to those tap interfaces.
Then there are 9 more bridges (br-int1 to br-int9), each with a single
tap interface (tap-int1 to tap-int9). On each bridge there are
four virtual PCs, and the inside interface of one of the virtual Ciscos
connects to that bridge. That ensures that the traffic from each group
of four virtual PCs flows through the group's router.
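The plumbing for one group looks roughly like this, using the "ip"
tool (brctl works too; in practice you'd make these persistent in
/etc/network/interfaces rather than typing them by hand):

    # backbone bridge with one tap port per group router
    ip link add br-lan type bridge
    ip tuntap add dev tap-lan1 mode tap
    ip link set tap-lan1 master br-lan
    ip link set br-lan up
    ip link set tap-lan1 up

    # group 1's inside bridge and its tap port
    ip link add br-int1 type bridge
    ip tuntap add dev tap-int1 mode tap
    ip link set tap-int1 master br-int1
    ip link set br-int1 up
    ip link set tap-int1 up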
* There are a number of other pieces in the platform: it has its own DNS
server (for local names like pc1.ws.nsrc.org which resolve to private IP
addresses), its own DHCP server, apt-cacher-ng to speed up package
downloads, and softflowd to generate netflow records. There is also NAT
routing configured between the br-lan bridge and the outside interface,
so that the containers have access to the Internet.
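The NAT part is plain Linux masquerading; something like this, if we
assume eth1 is the uplink interface (substitute your own):

    # enable forwarding, then masquerade traffic leaving via the uplink
    sysctl -w net.ipv4.ip_forward=1
    iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE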
* Managing the student PCs in bulk is done using ansible. If you're
interested in this, there was a presentation and some exercises in the
RWNOG2 system administration workshop last year:
https://nsrc.org/workshops/2015/rwnog/wiki/Track2Agenda
(see the Tuesday sessions)
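To give a taste of it: with an inventory file listing the virtual PCs,
you can run a command across all of them at once (the names here are
just illustrative):

    # inventory file "hosts"
    [pcs]
    pc[1:36].ws.nsrc.org

    # check reachability, then install a package everywhere
    ansible pcs -i hosts -m ping
    ansible pcs -i hosts -b -m apt -a "name=tcpdump state=present"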
Regarding the hardware:
* Any physical server will do, although for this number of students we
really need at least 16GB of RAM, and using SSDs rather than spinning
disks makes the platform run much, much faster.
In our case, for portability we use a Mac Mini, Late 2012 Server
edition, which has a quad-core i7 processor and 16GB of RAM (the maximum
allowed), with two internal 2.5" SSDs. We use the internal ethernet port
for the lab network, and an external USB ethernet adaptor for the uplink
into the provider's network.
Unfortunately, the currently available model of Mac Mini (Late 2014) only
has a dual-core processor:
https://en.wikipedia.org/wiki/Mac_Mini#Unibody_Mac_Mini
So we are not buying any more Mac Minis. What we will probably use going
forward is the new Intel NUC6I7KYK "Skull Canyon", which is even smaller
than the Mac Mini, although it has an external power brick.
It has a quad-core i7, and supports up to 32GB of RAM and dual NVMe SSDs.
http://www.intel.com/content/www/us/en/nuc/nuc-kit-nuc6i7kyk-features-configurations.html
* The class wifi access point is a Ubiquiti Unifi Pro, which is
dual-band. These are deployed successfully in many challenging
environments - one university we know has over 900 of these APs.
There is an even cheaper and smaller version of this, which is the Unifi
AC Lite, still dual-band. This is what I'll likely be upgrading my home
network with in the future :-)
I don't really understand what was going on with the wireless slowness
and disconnects some of you experienced during the week. We do know that
there was a CMU access point mounted on the ceiling and therefore giving
a very strong interfering signal, but I suspect there is more to it than
that. This will be a focus for future investigation.
* The switch is a Netgear GS110TP, which is a fully-managed portable
switch with 8 gigabit copper ports (PoE capable) plus two SFP ports. It
has SNMP, a web interface and even an undocumented command line, which
you can reach by telnetting to port 60000.
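For example, if the switch's management address were 10.10.0.253 (just
an illustration - use your own):

    telnet 10.10.0.253 60000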
Cheers,
Brian.