GNS3
I’m pretty sure anyone who has known me for more than a day has already noticed that I’m quite fond of the Fediverse. For example, one of the main reasons why I chose Duckquill as the base theme for this blog was the Mastodon integration for comments (though this is the first post that will make use of the feature).
I’ve been using Mastodon ever since the great Twitter migration, and I have even launched a PeerTube instance, but I still only know the bare minimum about ActivityPub, the beating heart of the network. And when I don’t know something, it tends to keep me up at night.
My friend Sky recently added more fuel to that fire by letting me know about a project in very early planning stages that could potentially benefit from ActivityPub integration. It would give me an actual reason to learn more about the protocol beyond “it’s interesting”.
Problem: it’s a network, so it is inherently multi-machine, and depends on a lot of scaffolding that I would rather not disturb with buggy pre-alpha code and experiments. The last thing I would like is to accidentally broadcast a thousand follow requests or random mentions. But I still wanted to be able to see how my activities and objects would show up in other software, so I made the completely reasonable decision to rebuild the internet from scratch.
If you’ve had any exposure to networking more advanced than plugging in a router and going through the setup guide, you might have heard of a tool called Packet Tracer. It is a really cool piece of software for learning the ropes on Cisco hardware and seeing how packets are routed based on your configuration. But if you want to break out of that specific ecosystem, you’re probably going to be much better served by a FOSS network simulation tool called GNS3.
It lets you build graphs made up of devices ranging from emulations of Cisco hardware to full-blown QEMU virtual machines, and run them as if they were separate physical machines in an actual network. It has a pretty wide range of preconfigured “appliances”:
And it also lets you build your own from scratch:
Then you can wire these components together like packet-switched LEGO:
And interact with the elements as if they were real hardware:
In the background the graph is translated into the right virtual machines, adapters, configuration files etc., which makes for a surprisingly performant framework for all your network building needs.
So let’s build a network!
Even though the real internet is a network of networks, we thankfully won’t have to simulate this aspect for our toy internet. Though BGP would definitely be an interesting aspect to explore, I’d rather keep things simple this time. This time. For our use, a single centralised network is enough, which can all be handled by a main router running OpenWRT.
Though OpenWRT can be configured through the serial console exposed by GNS3 alone, I would prefer to be able to use LuCI, so we’ll also need some sort of management machine. For this, I spawned a Kali Linux virtual machine and attached it to the router via a simple network switch.
We have to keep in mind the usage notes for the OpenWRT node, which say that the default configuration is for Ethernet0 to be the LAN interface and for Ethernet1 to be WAN, with the other two interfaces going unused.
GNS3’s Kali template assigns an absolutely pitiful amount of RAM and only a single CPU core to the freshly created VM, so we better up those numbers in the node properties accessed through the right click menu. Four gigs of RAM and four cores should make for a happy and snappy OS.
Starting the graph and opening the console for the Kali node through its right-click menu should net us GRUB, and after a quick boot, the default Kali desktop environment. OpenWRT defaults to 192.168.1.1/24, so we can easily start configuring our fresh new network through Firefox. You might also be able to use http://OpenWRT.lan if the IP fails. (Don’t mind the different domain name shown in the following screenshots; they were taken after I was done with the next configuration steps.)
The default password for root is blank, so just go ahead and click login.
OpenWRT already has both a DHCP and a DNS server provided by dnsmasq, configurable under Network->DHCP and DNS. The only parameter I’d like to change here is the default .lan TLD, swapping it for something more appropriate, like .test. Just edit the local server and local domain lines:
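If you prefer the shell over LuCI, the same change can be made through OpenWRT’s uci tool; a sketch, assuming the default config layout:

```shell
# Swap the default .lan domain for .test via UCI (run on the OpenWRT router)
uci set dhcp.@dnsmasq[0].local='/test/'
uci set dhcp.@dnsmasq[0].domain='test'
uci commit dhcp
/etc/init.d/dnsmasq restart
```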
I would also prefer to move our virtual internet’s subnet a little further away from the default 192.168.1.0/24. I’m not going to just let it loose and assign 0.0.0.0/0 because I would prefer to be able to still communicate with the real internet when I need to (for instance while installing and configuring other services). So let’s just nudge it over to 192.168.2.0/24.
This part of the configuration isn’t accessible here, because dnsmasq inherits the subnet it’s allowed to assign IPs in from the adapter configuration. Makes sense, since it wouldn’t be too smart to give devices IPs in a subnet that OpenWRT isn’t a part of. A quick trot over to Network->Interfaces, and we should be done with most of the configuration.
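The same move can be scripted over the serial console or SSH instead of clicking through LuCI; a sketch of the equivalent uci commands:

```shell
# Move the LAN interface (and with it dnsmasq's DHCP range) to 192.168.2.0/24
uci set network.lan.ipaddr='192.168.2.1'
uci commit network
/etc/init.d/network restart
```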
A quick restart of everything involved, and we’re now on the new subnet.
The next step could be to start assigning static DHCP leases, but since dnsmasq by default assigns a domain name in the style of hostname.tld to everything it can get the hostname of, we shouldn’t need to bother.
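A quick way to confirm the automatic naming works, from the Kali VM (openwrt is the router’s default hostname):

```shell
# Ask the router's dnsmasq to resolve its own name under the new TLD
nslookup openwrt.test 192.168.2.1
```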
You might think me crazy for wanting to encrypt connections after ignoring OpenWRT’s pleas to set a password so hard it didn’t even get a mention, but since the real internet also has TLS and our software will have to work with it, this is a step we unfortunately cannot skip.
But for TLS, we’re going to need certificates. A lot of them. And no matter how much I would beg, no certificate authority would humour me with a cert valid for *.test, so we’ll have to become one ourselves, at least within our network.
I’m not a cryptography expert, far from it, so take this entire section with a grain of salt, but here’s the process as far as I could understand it:
To issue certificates, you don’t really need anything, just openssl. But for other computers to respect your signature, you have to get added to an internet-wide list of blessed Certificate Authorities. This list then gets replicated across all computers as part of an OS package or the web browser. The simplest way to look at the current list is to open your browser of choice and navigate to the security settings. In Firefox, you can list the current authorities under Privacy and Security, Security, Certificates, View Certificates…
There are many entries here from well-known companies like Amazon and Microsoft all the way to whole countries like the Netherlands. These are all globally trusted entities that either directly sign certificates for websites and services or delegate their status by signing an intermediate certificate for them.
If we now look at the certificate provided by my blog, we can look at the chain of certificates that gives the computer hosting my blog authority over the blog.karcsesz.hu domain:
Navigating the tabs up top, we can see that the main certificate has a “Common Name” of blog.karcsesz.hu. The issuer is from the US and is called Let’s Encrypt, with the common name R11. And who would have guessed, the next certificate tab is called R11.
This certificate is what lets Let’s Encrypt sign certificates for other domains. But it’s not part of the earlier global list, so it has to import authority from another issuer which is the Internet Security Research Group and their ISRG Root X1 certificate.
And we’ve reached the end of the road, this is a globally trusted certificate.
So to be able to sign certificates, we either have to get added to the list of authorities, or get our hands on a permissive certificate that was signed by one of them. Fortunately, it’s surprisingly trivial to do the former if you have root access to the computers you want your certificates to work on.
The actual name of what we need is “root certificate”. So to become a CA, we just have to generate one of those and then add it to the certificate store of every computer in our network. I’ll just use the Kali VM from earlier to handle this.
Tip
This is the more verbose and manual way to handle certificate signing. A guide to using configuration files for setting these details can be found here
We are dealing with asymmetric cryptography (the kind with both a private and a public key), so we first have to generate a private key that we will use to create signatures. RSA with a length of 4096 bits should be more than enough.
> openssl genrsa -out rootCA.key 4096
This generates an RSA signing key and saves it to rootCA.key (set by the -out parameter). The next step is to use this key to sign the first certificate, which will act as the root certificate on our clients.
> openssl req -x509 -new -noenc -key rootCA.key -sha256 -days 1024 -out rootCA.crt
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:
openssl req is a command that works with certificate requests. We’ll see why they are requests soon, but for now we are self-signing, so no request is generated. We pass quite a few flags:

- -x509 sets the standard to follow with certificate generation
- -new marks that we are generating a new certificate request instead of working with an already created one
- -noenc marks that the private key isn’t encrypted and shouldn’t be
- -key is how we pass in the key we just generated
- -sha256 sets the hashing function to use
- -days sets the length the certificate should be valid for, in this case 1024 days, which will be just long enough to forget about having to renew
- -out sets the file our certificate should be written to

This initiates a series of queries from the application requesting information about who you are. These are the data points that will get integrated into the certificate, as we have seen earlier. And just like that, we now have a self-signed root certificate.
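It’s worth double-checking what actually ended up in the certificate; openssl can pretty-print it:

```shell
# Show the subject and validity window of our new root certificate
openssl x509 -in rootCA.crt -noout -subject -dates
```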
Every operating system (and browser and email client and…) has its own way to store certificates. It isn’t always standardised even between linux distributions, so you might have to do some research. For now, I’m only going to install it to the Kali VM, which is Debian based and has Firefox.
On Debian-based systems, certs are stored in /usr/local/share/ca-certificates
, but we can’t just copy the file in there, because it is not the right format yet. So first we have to create a PEM format certificate:
> openssl x509 -in rootCA.crt -out rootCA.pem
But to ensure maximum confusion, we now have to change the extension on it to once again say .crt. We can do that and put it in its new home in a single command:
> mv ./rootCA.pem /usr/local/share/ca-certificates/rootCA.crt
Now we just have to tell the OS to update its database:
> sudo update-ca-certificates
Voilà, the OS now knows about our upstart CA.
To install into Firefox, we just have to use the “Import…” button under the list of known certificate authorities and then select our new certificate. It can take the .crt file directly, so no format shenanigans are necessary.
With all that scaffolding taken care of, now we can start creating certificates and signing keys for HTTPS services. Let’s upgrade OpenWRT to use something more respected than its default self-signed certificate.
The process will require three steps: generating a new signing key, creating a certificate signing request, and having the CA sign it.
Generating the new signing key is just as simple as generating the CA signing key, this time only using 2048-bit RSA:
> openssl genrsa -out openwrt.test.key 2048
Generating the signing request will also look familiar, though we will have to add some fields that the interactive queries don’t automatically ask for:
> openssl req -new -sha256 -key openwrt.test.key -out openwrt.test.csr -addext "subjectAltName = DNS:openwrt.test"
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:openwrt.test
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
This will generate a new Certificate Signing Request (CSR), which contains the information required to generate a certificate, minus the signature. Take note of the -addext flag adding a subjectAltName of DNS:openwrt.test. Assigning domains to certificates used to be simple, because browsers would just validate that the Common Name field matched. But due to security issues, browsers no longer use that field and instead require that one or more DNS entries be given in the Subject Alternative Name extension.
At this point, the file would be forwarded to the CA to get their stamp and signature. They can dump the contents with openssl req -in openwrt.test.csr -noout -text to verify their legitimacy, then generate the certificate:
> openssl x509 -req -in openwrt.test.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out openwrt.test.crt -days 500 -sha256 -copy_extensions copy
Certificate request self-signature ok
subject=C=AU, ST=Some-State, O=Internet Widgits Pty Ltd
This takes the request as input, along with the CA certificate and CA signing key, then generates the certificate that will be valid on all computers that trust the root certificate. There are some interesting flags here:

- -CAcreateserial makes sure the “serial number file” exists for the passed-in CA. Every signature has a unique serial number, and this is how openssl ensures that no duplicates are created.
- -copy_extensions copy lets the CA decide which extensions they allow into the certificate. copy just copies everything. This is required, as by default every extension is stripped and your certificate won’t work with anything. Ask me how I know.

And voilà, certificate. Let’s install it!
The only service we have right now is OpenWRT, so it will have to act as a test subject. Both the signing key and the generated certificate need to be uploaded to its filesystem, so I’m just going to pipe them in over ssh:
> cat openwrt.test.key | ssh root@openwrt.test "tee openwrt.test.key"
[snip]
> cat openwrt.test.crt | ssh root@openwrt.test "tee openwrt.test.crt"
[snip]
Now we can replace the self-signed certificate in the /etc directory:
> ssh root@openwrt.test
BusyBox v1.36.1 (2023-10-09 21:45:35 UTC) built-in shell (ash)
[snip]
root@OpenWrt:~# ls /etc/uhttpd*
/etc/uhttpd.crt /etc/uhttpd.key
root@OpenWrt:~# mv openwrt.test.crt /etc/uhttpd.crt
root@OpenWrt:~# mv openwrt.test.key /etc/uhttpd.key
Then, after a quick restart of uhttpd…
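For reference, that restart is a one-liner on OpenWRT:

```shell
# Reload uhttpd so it picks up the new certificate and key
/etc/init.d/uhttpd restart
```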
Victory!
The test services we are about to stand up all expect a place to send emails for notifications, error messages, initial setup, verification… so we better have something for that as well. There are multiple ways to set up a nice web-accessible email client for testing (something like MailHog for instance), but where’s the fun in that?
Let’s do postfix!
For this, I have created a blank VM with some storage, CPU, and RAM, and uploaded the Ubuntu 22.04.5 Server install ISO. A quick install later (making sure to set the hostname to email so dnsmasq assigns us the memorable hostname of email.test), we have a plain and simple Ubuntu Server install.
Before we can start installing any packages, we will unfortunately have to break containment for our network. Thankfully this is extremely easy thanks to GNS3’s NAT node. Placing one down and attaching it to the WAN port (Remember, it’s Ethernet1) of the OpenWRT node is all we need to let servers call out to the real internet. And we can always delete the connection to isolate again.
Now we can install and configure postfix. Just follow the official Ubuntu Postfix guide. I didn’t configure TLS, but it should be simple enough to generate a certificate and pass it in. Then making new email accounts will be as simple as making new users on the server.
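Once postfix is up, a quick way to smoke-test delivery from the server itself (the karcsesz account is my assumption; substitute any local user):

```shell
# Hand a minimal message to the local postfix instance
printf 'Subject: toy internet test\n\nHello from email.test!\n' \
    | sendmail karcsesz@email.test
```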
To check emails, let’s install a nicer CLI instead of having to cat and manually truncate the inbox file.
> sudo apt install mailutils
And now checking your mail is as simple as typing mail!
Having an isolated internet is nice and all, but I would prefer not having to VNC into a virtual machine to do development. Wouldn’t it be nice to be able to connect in from my laptop as if it was a node on the GNS graph? How hard can that be…
This section is going to require some details about how I am running my graph. My brother recently went mad with power and invested in 128 GB of ECC RAM for a server he uses for development, making it the ideal VM runner. And he was kind enough to let me VPN into it through Wireguard and install the GNS3 runtime. This means I have immense computing power at my disposal, but I’m limited in how I am allowed to mess with the host’s network interfaces. But I am definitely not limited in what my GNS3 nodes can do!
The key is the NAT node. There’s also a cloud node that attaches a specific network interface to the graph, but I didn’t want to create more random interfaces outside GNS3 than absolutely necessary. But NAT is usually one way only. You are allowed to connect out to the wider network, but without explicitly forwarding ports, the other direction is not possible.
Thankfully, we have read David Anderson’s excellent post on Tailscale’s blog explaining How NAT traversal works, so we know that as long as the connection is initiated from inside the NAT-ed network, the reverse connection also becomes possible. And we are also familiar with the fact that OpenWRT easily supports Wireguard VPNs. The only missing piece is a bit of creativity.
Wireguard by default requires at least one direction of a connection to be possible without punching through NAT, but guess what, we have exactly that through the already running Wireguard connection attaching to the server running the node. What’s stopping us from double-bagging our VPN packets?
Nothing.
…except for performance considerations but who cares this is for development purposes only.
The OpenWRT image running in GNS3 doesn’t ship with Wireguard installed, but opkg makes short work of that issue. Installing kmod-wireguard, wireguard-tools, and luci-proto-wireguard gives us a nice GUI for everything.
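For reference, the install itself is just:

```shell
# Update package lists, then pull in Wireguard plus its LuCI integration
opkg update
opkg install kmod-wireguard wireguard-tools luci-proto-wireguard
```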
Wireguard connections are created under Network->Interfaces->Add new interface… Give the interface a name, then set the protocol to Wireguard VPN.
Wireguard works on public and private keys, with the public keys having to be shared between peers to secure the connection. LuCI exposes a button to generate the keypair for itself, and we can use the Peers tab to input the one we will generate. I’ve set the router’s IP to be 192.168.10.1, so we are definitely not going to collide with anything on the GNS network.
In firewall settings, assign this interface to the lan zone to be able to access all devices on the network. It’s possible to configure a DHCP server as well, but fixed IPs will do fine for this. Now we can add the external computer as a peer.
The external computer’s configuration is similarly a breeze thanks to NetworkManager. We can generate a private key using wg genkey, derive the public key using wg pubkey, then plug these values into the connection parameters. The listen port can be anything (I used 12345), but it’s important to set the correct fixed IP address. Here I went with 192.168.10.2 with a netmask of /16 and a blank gateway. Under peers, you can plug in the router’s public key and set allowed IPs to include 192.168.10.1/32 as well as 192.168.2.0/24.
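The keypair generation mentioned above fits in one pipeline (the filenames are my own choice):

```shell
# Generate a private key and derive its public half in one go
wg genkey | tee wg-private.key | wg pubkey > wg-public.key
```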
Peer config on the OpenWRT side will be similar. Set the public key, set Allowed IPs to 192.168.10.2, and set the endpoint host to the IP of the external machine on the VPN network; here it’s 10.200.200.3. The port is 12345 from earlier, and the final touch is setting a persistent keepalive of 25, as recommended by LuCI itself.
Now as long as OpenWRT is alive and the interface is up, it will send out packets trying to connect. And every attempt punches through NAT, making sure that when the connection is eventually successful, there will be a channel running backwards too.
And almost as if by magic:
> sudo wg
interface: wg2
public key: [snip]
private key: (hidden)
listening port: 12345
peer: [snip]
endpoint: 10.200.200.1:43203
allowed ips: 192.168.10.1/32, 192.168.2.0/24
latest handshake: 9 seconds ago
transfer: 180 B received, 972 B sent
And it actually works:
[karcsesz@Littlepip ~]$ ssh karcsesz@192.168.2.124
[snip]
You have mail.
Last login: Tue Oct 8 13:37:25 2024
karcsesz@email:~$
Which means now you can start adding whatever further servers and services you end up needing for your experiments. Enjoy!
Comments
You can comment on this blog post by publicly replying to this post using a Mastodon or other ActivityPub/Fediverse account. Known non-private replies are displayed below.