CGHMN-Demo-Network
[[Category:Compu-Global-Hyper-Mega-Net]]
=== Demo Network for the Interim Computer Festival ===
This page documents the quickly set-up demo network used to show off the CGHMN network at the [https://sdf.org/icf/ SDF's Interim Computer Festival] taking place March 22nd and 23rd. Consider this a sort-of draft, an experimental first version, a test of what might work and what doesn't.
Currently, the basics are up and running on the CGHMN Proxmox hypervisor living in the [https://devhack.net/ /dev/hack Hackerspace] in Seattle. These include a router and Wireguard endpoint through an OPNsense VM, a VXLAN tunnel endpoint container with some custom scripts to make deploying new member tunnels easier, and two containers: one running a basic authoritative BIND DNS server for <code>.cghmn</code> and <code>.retro</code>, and one hosting a custom, internal Certificate Authority for those domains.
=== Changes to the network layout ===
Since this page was written, there have been quite a lot of discussions about how and what we might change going forward, after the initial test of the network at the ICF was a success. The biggest change so far has been the idea to move away from VXLANs to GRETAP tunnels for the Layer 2 and non-IP Layer 3 traffic. This is mostly because VXLAN, by its RFC definition, MUST NOT fragment packets coming into a VTEP (a VXLAN tunnel endpoint), and packets flowing out of a VTEP MAY be reassembled if fragmented, but don't necessarily have to be. In addition, the IP packets generated by the VXLAN tunnels have the Don't Fragment bit set, so those packets may not be fragmented either. This means that the underlying transport of the VXLAN tunnels, here Wireguard, would have to open a path that allows 1500 byte frames through its tunnel, which would make the tunnel packets themselves quite large at ~1600 bytes; those would then be fragmented by whatever routers sit between the client router and the CGHMN router. Turns out, that's quite inefficient.
GRETAP tunnels, on the other hand, have the two flags <code>ignore-df</code> and <code>nopmtudisc</code>, which together with <code>ttl 255</code> create a tunnel over IP that can carry ''and fragment'' 1500 byte Ethernet frames over a smaller underlying transport, still Wireguard in this case. This was a massive boost not only in speed under certain circumstances, like running all of this on a small travel router with a weak MIPS CPU, but also in reliability: fewer dropped packets were observed and MTU black holes finally stopped showing up in our testing.
To bring up a GRETAP tunnel within the network to the CGHMN central router, use the following commands on a Linux box:<blockquote><code>ip link add gretap-cghmn type gretap remote 172.23.4.103 dev wg0 ignore-df nopmtudisc ttl 255</code> | |||
<code>ip link set gretap-cghmn master br0 mtu 1500</code> | |||
<code>ip link set gretap-cghmn up</code></blockquote>Where <code>wg0</code> is your CGHMN Wireguard tunnel and <code>br0</code> is the bridge you want to attach the GRETAP tunnel to.
However, to improve performance further and make the network a little more reliable, there was another idea for a change: sending routable IP traffic not over the Layer 2 tunnel, but rather routing it directly through the Wireguard tunnel, which already is a straight Layer 3 path to the CGHMN core router. This is possible thanks to the nftables <code>bridge</code> filter table, which can match and filter packets on bridge interfaces, including which "bridge port" they come in and go out of. This means we can stop IP traffic from leaving the retro LAN bridge, to which you'd connect your retro machines via a physical LAN port, by creating filters that say "Block all traffic on bridge <code>br-retrolan</code> which leaves through a GRETAP interface" and "Block all traffic on bridge <code>br-retrolan</code> which comes in on a GRETAP interface". Now you can assign the router a static IP address on the bridge, so it can talk to your retro machines, enable DHCP and NAT, and route IP traffic from your machines straight to the CGHMN via Wireguard. In the future, this shall be extended to work without NAT on the client side, so that every member gets a small subnet, a /24 for example, which is routed to the Wireguard tunnel client IP. This also means that the VLAN1 described in the next section might not need an IP address in the future, so that VLAN1 carries purely non-IP traffic, at least from the CGHMN side of things.
Yet another idea mentioned was the ability to span tunnels directly between members, without going through the CGHMN core network in the first place. This can be accomplished by creating another GRETAP interface whose <code>remote</code> IP argument points to the IP of another member's router, either through the existing CGHMN Wireguard tunnel or through a separate tunnel that you span between yourself and the other member. This GRETAP interface is then bridged to the <code>br-retrolan</code> bridge, and with a couple of (perhaps default) bridge firewall rules, you and the other member should be able to communicate directly! Of course, this also means we'd have to implement some sort of loopback protection not just on the member router side (the default bridge firewall rules mentioned in the last sentence), but also on the core router side, so this idea is not yet fully implemented for testing.
IP allocations within the network are now tracked [[CGHMN-IP-Allocations|in this Wiki page]], though the IPs listed there might not be applied in the current configuration yet.
=== Network Layout ===
This network is our Layer 2 bridged network for all members who wish to participate and is intended to let retro computers communicate directly with each other, even across the globe. This is accomplished by spanning a VXLAN tunnel across a Wireguard connection from the CGHMN server infrastructure to each member's router endpoint, which can be any OpenWRT compatible device that contains the packages for VXLANs and Wireguard. The idea is to bridge one of at least two available interfaces from said router to the VXLAN network and thus directly bridge any connected retro machines to VLAN1. All members will be in the same L2 broadcast domain, meaning even non-IP protocols that are able to run over Ethernet should be able to communicate from all over the world.
Machines on this network are able to connect to all hosts on the Server VLAN (see below), to the firewall for DNS, NTP and ICMP queries, and to the root DNS and CA servers for DNS queries and HTTP access to the CA web server. They may also send DNS queries to the legacy DNS server at <code>172.23.0.104</code>. They are not, however, able to communicate with any hosts on the internet, the /dev/hack network or any of the other existing VLANs aside from specific exceptions.
Addresses are handed out via DHCP by the router in the range <code>172.23.2.1-172.23.3.254</code>; the range <code>172.23.0.11-172.23.1.255</code> is reserved for static hosts. The search domain for this network is <code>clients.retro</code>.
This VLAN is intended for core internal services, like the root DNS server, the VXLAN endpoint and our custom Certificate Authority. The Proxmox host also has an IP address in this subnet (<code>172.23.4.11</code>); it does not, however, have any routes to the rest of this CGHMN demo infrastructure and thus can only be accessed from clients in the Core Services subnet.
Hosts in this subnet may currently access the internet, reach the router for DNS, NTP and ICMP queries, and send DNS queries to the legacy DNS server at <code>172.23.4.104</code>; the VXLAN endpoint may additionally send UDP datagrams to anyone at port <code>4789</code> for VXLAN tunnel replies. Any other internal connections are prohibited.
Addresses are handed out via DHCP by the router in the range <code>172.23.7.1-172.23.7.254</code>; the range <code>172.23.4.11-172.23.6.255</code> is reserved for static hosts. The search domain for this network is <code>core.cghmn</code>.
This VLAN will contain all servers hosted and managed by members, which can be any (retro) service that works across an IP router. Anything that requires direct Layer 2 access or the same broadcast domain as the client machines should instead be hosted in the Global LAN network. This is the only VLAN clients from the bridged Global LAN network may access freely, so members should be wary about what ports they open up to anyone outside of localhost. Another option is to run a tiny OpenWRT-based router instance in front of your server, acting as a basic firewall and NAT router behind which one can run their services.
Hosts in this subnet may not access the internet inherently; however, a firewall rule is in place that allows specific servers internet access. It is still uncertain whether this will make it into the final CGHMN or whether this subnet is supposed to be entirely sealed off from the public internet. During a few chats on the Discord server, the idea of hosting local package mirrors of popular distros and projects was mentioned, so that both modern and retro systems won't need to connect to internet servers for package installations and upgrades. Hosts may access the router for DNS, NTP and ICMP queries and send DNS queries to the legacy DNS server at <code>172.23.4.104</code>; other internal connections are prohibited.
Addresses are handed out via DHCP by the router in the range <code>172.23.11.1-172.23.11.254</code>; the range <code>172.23.8.11-172.23.10.255</code> is reserved for static hosts. The search domain for this network is <code>hosting.retro</code>.
==== VLAN 12 - DMZ (172.23.12.0/22) ====
Currently not in use.
=== Containers and VMs ===
The /root directory of this container contains a script called <code>create-and-sign-server-csr.sh</code> that, when run without any arguments, will ask a few questions on the command line and generate a signed TLS certificate in the root directory for the specified DNS names, to make deployment of new TLS certificates a little easier. This requires the password of the private key of the intermediate CA, which again is currently stored in Snep's password manager but will of course be copied to a safe password store once one is available for the CGHMN.
Currently, there is no root password set; console access works either via key-based SSH or by entering <code>pct enter 10402</code> on the Proxmox host console.
==== Container 10403 (demo-cghmn-vxlan-endpoint, VLAN4, 172.23.4.103) ====
This container is only reachable by the firewall itself and by the clients connecting their VXLAN bridge to port 4789 from the Wireguard tunnel, as it doesn't do any routing or hosting of services itself aside from the VXLAN endpoint.
Currently, there is no root password set; console access works either via key-based SSH or by entering <code>pct enter 10403</code> on the Proxmox host console.
==== Container 10404 (demo-cghmn-legacy-dns, VLAN4, 172.23.4.104) ====
This container, based on Alpine, runs a dnsmasq instance configured to look up certain DNS overrides either in the hosts file at <code>/etc/cghmn-dns-overrides</code> or via dnsmasq configuration files included from <code>/etc/dnsmasq-cghmn.d/*.conf</code>. Any other requests it cannot resolve locally are forwarded to the Unbound DNS resolver running on the OPNsense router VM. This setup is used to create DNS overrides for existing domains, to make old software that is hardcoded to specific DNS entries work again with custom servers hosted internally.
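As an illustration only (a hypothetical override, not an actual entry from the container), a drop-in file under <code>/etc/dnsmasq-cghmn.d/</code> could look like this; the router address 172.23.4.1 is assumed from the note below that the router holds the first IP in each subnet:

```
# /etc/dnsmasq-cghmn.d/example.conf (hypothetical)
# Answer a hostname hardcoded into old software with an internal server:
address=/update.example-oldsoft.com/172.23.8.50
# Forward everything dnsmasq cannot answer locally to Unbound on the router:
server=172.23.4.1
```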
Currently, there is no root password set; console access works either via key-based SSH or by entering <code>pct enter 10404</code> on the Proxmox host console.
=== Proposed Organization of IDs and IPs ===
After that, the first half of the subnet (see above under Network Layout for the actual start and end of this range) is reserved for any hosts that are set up with a fully static IP. This is entirely outside of the DHCP range to avoid any conflicts. The DHCP range then starts with the second half of the subnet and goes up to the last available host IP of each subnet.
=== Other Notes ===
* Currently, the OPNsense router does DHCP, as it already has an IP in each VLAN and comes with a solid DHCP server that also supports failover out of the box (ISC DHCP). I (Snep) chose this route over a standalone DHCP server to avoid having a second container/VM in each subnet that solely does DHCP or DHCP proxying, mainly to keep the setup and maintenance work as low as possible.
* Customized OpenWRT images for the Gl.iNet MT300n and AR300n are currently being built and tested, which include the required packages and UCI configurations out of the box to make joining the network perhaps a little bit easier. Will update this page or create a new one and link to it once a working image exists!
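For a rough idea of what such a preconfigured image would carry, the Wireguard part of an OpenWRT UCI network config might look like the following sketch. The keys, interface name and tunnel address are placeholders, not values from the actual images; only the endpoint is taken from the router description above:

```
# /etc/config/network fragment (illustrative sketch, placeholder values)
config interface 'wgcghmn'
        option proto 'wireguard'
        option private_key 'MEMBER_PRIVATE_KEY_HERE'
        list addresses '100.89.128.2/22'

config wireguard_wgcghmn
        option description 'CGHMN core'
        option public_key 'SERVER_PUBLIC_KEY_HERE'
        option endpoint_host '66.170.190.194'
        option endpoint_port '42070'
        option persistent_keepalive '25'
        list allowed_ips '172.23.0.0/16'
        list allowed_ips '100.89.128.0/22'
```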
=== Reserved static IPs ===
* '''VLAN1, 172.23.0.11:''' WIREGUARD-EXTERNAL (CursedSilicon)
Latest revision as of 18:33, 10 May 2025
Demo Network for the Interim Computer Festival
This page documents the quickly set-up demo network used to show off the CGHMN network at the SDF's Interim Computer Festival taking place March 22nd and 23rd. Consider this a sort-of draft, an experimental first version, a test of what might work and what doesn't.
Currently, the basics are up and running on the CGHMN Proxmox hypervisor living in the /dev/hack Hackerspace in Seattle. These include a router and Wireguard endpoint through an OPNsense VM, a VXLAN tunnel endpoint container with some custom scripts to make deploying new member tunnels easier, and two containers: one running a basic authoritative BIND DNS server for .cghmn and .retro, and one hosting a custom, internal Certificate Authority for those domains.
Changes to the network layout
Since this page was written, there have been quite a lot of discussions about how and what we might change going forward, after the initial test of the network at the ICF was a success. The biggest change so far has been the idea to move away from VXLANs to GRETAP tunnels for the Layer 2 and non-IP Layer 3 traffic. This is mostly because VXLAN, by its RFC definition, MUST NOT fragment packets coming into a VTEP (a VXLAN tunnel endpoint), and packets flowing out of a VTEP MAY be reassembled if fragmented, but don't necessarily have to be. In addition, the IP packets generated by the VXLAN tunnels have the Don't Fragment bit set, so those packets may not be fragmented either. This means that the underlying transport of the VXLAN tunnels, here Wireguard, would have to open a path that allows 1500 byte frames through its tunnel, which would make the tunnel packets themselves quite large at ~1600 bytes; those would then be fragmented by whatever routers sit between the client router and the CGHMN router. Turns out, that's quite inefficient.
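To put rough numbers on that "~1600 bytes" (a back-of-the-envelope sketch; the 60-byte figure is the commonly cited Wireguard transport overhead for IPv4, not something measured on this network):

```shell
# Inner Ethernet frame: 1500 bytes payload + 14 bytes header,
# then VXLAN encapsulation adds 8 (VXLAN) + 8 (UDP) + 20 (outer IPv4).
echo $((1500 + 14 + 8 + 8 + 20))   # prints 1550

# Wireguard adds roughly 60 more bytes on IPv4, landing near the ~1600 mark.
echo $((1550 + 60))                # prints 1610
```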
GRETAP tunnels, on the other hand, have the two flags ignore-df and nopmtudisc, which together with ttl 255 create a tunnel over IP that can carry and fragment 1500 byte Ethernet frames over a smaller underlying transport, still Wireguard in this case. This was a massive boost not only in speed under certain circumstances, like running all of this on a small travel router with a weak MIPS CPU, but also in reliability: fewer dropped packets were observed and MTU black holes finally stopped showing up in our testing.
To bring up a GRETAP tunnel within the network to the CGHMN central router, use the following commands on a Linux box:
ip link add gretap-cghmn type gretap remote 172.23.4.103 dev wg0 ignore-df nopmtudisc ttl 255
ip link set gretap-cghmn master br0 mtu 1500
ip link set gretap-cghmn up
Where wg0 is your CGHMN Wireguard tunnel and br0 is the bridge you want to attach the GRETAP tunnel to.
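One quick way to sanity-check that full-size frames actually make it across the tunnel is a do-not-fragment ping at maximum payload size: 1472 bytes of payload plus 28 bytes of ICMP and IP headers gives exactly 1500. The target address here is just an example of a host reachable over the tunnel, not a designated test host:

```
ping -M do -s 1472 -c 3 172.23.0.1
```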
However, to improve performance further and make the network a little more reliable, there was another idea for a change: sending routable IP traffic not over the Layer 2 tunnel, but rather routing it directly through the Wireguard tunnel, which already is a straight Layer 3 path to the CGHMN core router. This is possible thanks to the nftables bridge filter table, which can match and filter packets on bridge interfaces, including which "bridge port" they come in and go out of. This means we can stop IP traffic from leaving the retro LAN bridge, to which you'd connect your retro machines via a physical LAN port, by creating filters that say "Block all traffic on bridge br-retrolan which leaves through a GRETAP interface" and "Block all traffic on bridge br-retrolan which comes in on a GRETAP interface". Now you can assign the router a static IP address on the bridge, so it can talk to your retro machines, enable DHCP and NAT, and route IP traffic from your machines straight to the CGHMN via Wireguard. In the future, this shall be extended to work without NAT on the client side, so that every member gets a small subnet, a /24 for example, which is routed to the Wireguard tunnel client IP. This also means that the VLAN1 described in the next section might not need an IP address in the future, so that VLAN1 carries purely non-IP traffic, at least from the CGHMN side of things.
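A minimal nftables sketch of those two rules might look like the following. This is an assumption-laden illustration, not the actual member router ruleset: the table and chain names are made up, and a real deployment would likely need additional rules (ARP handling, established-state exceptions, and so on):

```
# Illustrative bridge-family ruleset (hypothetical names, incomplete)
table bridge retrofilter {
    chain forward {
        type filter hook forward priority 0; policy accept;
        # Block IPv4/IPv6 frames leaving br-retrolan through any GRETAP port
        meta ibrname "br-retrolan" oifname "gretap*" ether type { ip, ip6 } drop
        # Block IPv4/IPv6 frames entering br-retrolan from any GRETAP port
        meta ibrname "br-retrolan" iifname "gretap*" ether type { ip, ip6 } drop
    }
}
```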
Yet another idea mentioned was the ability to span tunnels directly between members, without going through the CGHMN core network in the first place. This can be accomplished by creating another GRETAP interface whose remote IP argument points to the IP of another member's router, either through the existing CGHMN Wireguard tunnel or through a separate tunnel that you span between yourself and the other member. This GRETAP interface is then bridged to the br-retrolan bridge, and with a couple of (perhaps default) bridge firewall rules, you and the other member should be able to communicate directly! Of course, this also means we'd have to implement some sort of loopback protection not just on the member router side (the default bridge firewall rules mentioned in the last sentence), but also on the core router side, so this idea is not yet fully implemented for testing.
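Following the same pattern as the earlier commands, such a member-to-member link would look something like the sketch below. The peer address 100.89.128.3 is a made-up example of another member's Wireguard client IP, reachable through your existing wg0 tunnel:

```
ip link add gretap-peer type gretap remote 100.89.128.3 dev wg0 ignore-df nopmtudisc ttl 255
ip link set gretap-peer master br-retrolan mtu 1500
ip link set gretap-peer up
```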
IP allocations within the network are now tracked in this Wiki page, though the IPs listed there might not be applied in the current configuration yet.
Network Layout
This section describes the network layout currently set up for the CGHMN demo network, none of which is necessarily permanent or set in stone. I (Snep) made some assumptions about domain names, IP addresses, firewall rules and general design ideas to get something up and running for the computer festival, based on info from the many chats and discussions in the Cursed Silicon Discord's CGHMN channel (see Signup for more details). So please feel free to give input on things you'd like to see changed or added!
On the Proxmox host, all VLANs mentioned below are available tagged on the bridge brcghmn, with exception of VLAN1, which is untagged and the default network when a new container or VM is added to this bridge.
For servers and retro clients, the subnet 172.23.0.0/16 is currently in place, divided into smaller subnets, and might be subject to change later down the line. For Wireguard clients, the 100.89.128.0/22 subnet out of the CGNAT block is used and again, might change later.
Below is a further breakdown of VLANs existing in this CGHMN demo network:
VLAN 1 - The Global LAN (172.23.0.0/22)
This network is our Layer 2 bridged network for all members who wish to participate and is intended to let retro computers communicate directly with each other, even across the globe. This is accomplished by spanning a VXLAN tunnel across a Wireguard connection from the CGHMN server infrastructure to each member's router endpoint, which can be any OpenWRT compatible device that contains the packages for VXLANs and Wireguard. The idea is to bridge one of at least two available interfaces from said router to the VXLAN network and thus directly bridge any connected retro machines to VLAN1. All members will be in the same L2 broadcast domain, meaning even non-IP protocols that are able to run over Ethernet should be able to communicate from all over the world.
Machines on this network are able to connect to all hosts on the Server VLAN (see below), to the firewall for DNS, NTP and ICMP queries, and to the root DNS and CA servers for DNS queries and HTTP access to the CA web server. They may also send DNS queries to the legacy DNS server at 172.23.0.104. They are not, however, able to communicate with any hosts on the internet, the /dev/hack network or any of the other existing VLANs aside from specific exceptions.
Addresses are handed out via DHCP by the router in the range 172.23.2.1-172.23.3.254; the range 172.23.0.11-172.23.1.255 is reserved for static hosts. The search domain for this network is clients.retro.
VLAN 4 - Core Services (172.23.4.0/22)
This VLAN is intended for core internal services, like the root DNS server, the VXLAN endpoint and our custom Certificate Authority. The Proxmox host also has an IP address in this subnet (172.23.4.11); it does not, however, have any routes to the rest of this CGHMN demo infrastructure and thus can only be accessed from clients in the Core Services subnet.
Hosts in this subnet may currently access the internet, reach the router for DNS, NTP and ICMP queries, and send DNS queries to the legacy DNS server at 172.23.4.104; the VXLAN endpoint may additionally send UDP datagrams to anyone at port 4789 for VXLAN tunnel replies. Any other internal connections are prohibited.
Addresses are handed out via DHCP by the router in the range 172.23.7.1-172.23.7.254; the range 172.23.4.11-172.23.6.255 is reserved for static hosts. The search domain for this network is core.cghmn.
VLAN 8 - Servers (172.23.8.0/22)
This VLAN will contain all servers hosted and managed by members, which can be any (retro) service that works across an IP router. Anything that requires direct Layer 2 access or the same broadcast domain as the client machines should instead be hosted in the Global LAN network. This is the only VLAN clients from the bridged Global LAN network may access freely, so members should be wary about what ports they open up to anyone outside of localhost. Another option is to run a tiny OpenWRT-based router instance in front of your server, acting as a basic firewall and NAT router behind which one can run their services.
Hosts in this subnet may not access the internet inherently; however, a firewall rule is in place that allows specific servers internet access. It is still uncertain whether this will make it into the final CGHMN or whether this subnet is supposed to be entirely sealed off from the public internet. During a few chats on the Discord server, the idea of hosting local package mirrors of popular distros and projects was mentioned, so that both modern and retro systems won't need to connect to internet servers for package installations and upgrades. Hosts may access the router for DNS, NTP and ICMP queries and send DNS queries to the legacy DNS server at 172.23.4.104; other internal connections are prohibited.
Addresses are handed out via DHCP by the router in the range 172.23.11.1-172.23.11.254; the range 172.23.8.11-172.23.10.255 is reserved for static hosts. The search domain for this network is hosting.retro.
VLAN 12 - DMZ (172.23.12.0/22)
Currently not in use.
Containers and VMs
Containers and VMs on the Proxmox host are currently assigned in the 10000 ID range to keep clear of existing VMs.
At the time of writing there are one VM and four containers:
VM 10001 (demo-cghmn-router)
This is the OPNsense VM running as the primary router, firewall, DHCP server and Wireguard endpoint for the demo network. Its login credentials are currently in the paws of Snep, as I'm still unsure where any passwords for the CGHMN are going to be stored safely and with proper access rights.
The router has the first IP in any of the available demo network subnets and responds to IPv4 ICMP packets, DNS queries to its local Unbound resolver and NTP sync requests to the built-in NTP server.
Unbound currently resolves anything it cannot answer locally by recursing against the internet root servers and returns those replies to clients; this may change, as we potentially plan on sealing the network off more. It is configured to forward all requests for the TLDs .cghmn and .retro to the internal DNS root server.
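On a plain Unbound install, that forwarding behavior corresponds to configuration along these lines. OPNsense generates its own Unbound config through the GUI, so this is only an illustrative sketch of the equivalent settings:

```
# Illustrative unbound.conf fragment (not the actual OPNsense-generated config)
forward-zone:
        name: "cghmn."
        forward-addr: 172.23.4.101
forward-zone:
        name: "retro."
        forward-addr: 172.23.4.101
```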
The Wireguard endpoint serves as the connection into the CGHMN from the outside internet on 66.170.190.194:42070 for anyone who wishes to partake in the network. See Signup for more details on how to join.
Container 10401 (demo-cghmn-root-dns, VLAN4, 172.23.4.101)
This container, based on the absolutely tiny Alpine image, hosts the BIND-based root DNS server for the internal CGHMN domains .retro and .cghmn, together with the reverse DNS zone for the 172.23.0.0 network. It lives in the Core Services subnet and is reachable on port 53 for DNS queries from every other internal subnet. Zones are configured in the zone files under /etc/bind/zones and loaded by the zone blocks in /etc/named.conf.
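The zone blocks in /etc/named.conf could look roughly like this; the exact zone file names under /etc/bind/zones are assumptions, not the container's actual layout:

```
// Illustrative named.conf zone blocks
zone "cghmn" {
    type master;
    file "/etc/bind/zones/db.cghmn";
};

zone "retro" {
    type master;
    file "/etc/bind/zones/db.retro";
};

// Reverse zone for 172.23.0.0
zone "23.172.in-addr.arpa" {
    type master;
    file "/etc/bind/zones/db.172.23";
};
```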
Currently, there is no root password set, console access works either via key-based SSH or by entering pct enter 10401 on the Proxmox host console.
Container 10402 (demo-cghmn-ca, VLAN4, 172.23.4.102)
This container, also based on Alpine, hosts the custom Certificate Authority, built from OpenSSL-generated, self-signed certificate files. It uses a Root CA -> Intermediate CA -> Server Certificates structure, in which the root CA signs the certificate of the intermediate CA, which in turn signs all certificates requested for servers and clients on the network. Clients thus should only need to install the root CA certificate into their trusted keychain to get valid TLS connections to servers using certificates signed by this internal CA.
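The chain structure described above can be sketched with plain OpenSSL commands. This is a minimal illustration of the Root CA -> Intermediate CA -> server-certificate flow, not the container's actual setup; all file names and subject names here are made up:

```shell
# 1. Self-signed root CA
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout root.key -out root.crt \
  -subj "/CN=CGHMN Demo Root CA"

# 2. Intermediate CA: CSR signed by the root, marked as a CA
openssl req -newkey rsa:2048 -nodes \
  -keyout intermediate.key -out intermediate.csr \
  -subj "/CN=CGHMN Demo Intermediate CA"
printf 'basicConstraints=critical,CA:TRUE\n' > int-ext.cnf
openssl x509 -req -days 365 -in intermediate.csr \
  -CA root.crt -CAkey root.key -CAcreateserial \
  -extfile int-ext.cnf -out intermediate.crt

# 3. Server certificate signed by the intermediate
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr \
  -subj "/CN=certs.cghmn"
openssl x509 -req -days 365 -in server.csr \
  -CA intermediate.crt -CAkey intermediate.key -CAcreateserial \
  -out server.crt

# 4. Verify the full chain against the root only
openssl verify -CAfile root.crt -untrusted intermediate.crt server.crt
```

A client that trusts root.crt will then accept server.crt presented together with intermediate.crt, which is exactly why installing only the root certificate is enough on the retro machines.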
Clients can access a web server on certs.cghmn:80 or 172.23.4.102:80 via plain HTTP to download the root CA and intermediate CA certificate files for installation on their retro machines. Note: this is not meant to be secure. Once you add this root CA, we could impersonate any server on the internet under any domain, and any system that has the root CA or intermediate CA certificate installed will trust it. Don't add it on machines that hold personal data or that you would let onto the public internet!
The /root directory of this container contains a script called create-and-sign-server-csr.sh that, when run without any arguments, asks a few questions on the command line and generates a signed TLS certificate in the root directory for the specified DNS names, making deployment of new TLS certificates a little easier. This requires the password of the intermediate CA's private key, which again is currently stored in Snep's password manager but will of course be moved to a safe password store once one is available for the CGHMN.
Currently, there is no root password set, console access works either via key-based SSH or by entering pct enter 10402 on the Proxmox host console.
Container 10403 (demo-cghmn-vxlan-endpoint, VLAN4, 172.23.4.103)
This container, another Alpine instance, connects all the VXLAN clients together under one virtual Linux bridge and is constructed with a couple of if-up/if-down scripts and a Bash script to create new tunnels at /opt/vxlan-scripts/create-vxlan-interface.sh.
This script, when called with a client IP, for example create-vxlan-interface.sh 100.89.128.90, will do the following:
- Find the first unused VXLAN ID
- Output the VXLAN ID for configuring a new VXLAN tunnel on the client side
- Add an interface configuration to /etc/vxlan-interfaces/, which is sourced by ifupdown
- Bring up that new VXLAN interface, which bridges it into the Global LAN bridge

after which the client with IP 100.89.128.90 can connect a VXLAN tunnel with the newly added VXLAN ID to their router and join the network.
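The generated per-tunnel stanza could look roughly like the following ifupdown fragment. This is a sketch; the interface name, the VXLAN ID, the bridge name br-global and the exact file layout under /etc/vxlan-interfaces/ are all assumptions:

```
# Illustrative /etc/vxlan-interfaces/ stanza for one client tunnel
auto vxlan103
iface vxlan103 inet manual
    pre-up ip link add vxlan103 type vxlan id 103 \
        remote 100.89.128.90 dstport 4789
    up ip link set vxlan103 master br-global up
    down ip link set vxlan103 down
    post-down ip link del vxlan103
```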
This is still a very manual process, though one which will probably become more streamlined in the future of the CGHMN network, perhaps with some APIs and/or custom OpenWRT web interface *wink wink*.
This container is only reachable by the firewall itself and by clients connecting their VXLAN bridges to port 4789 through the Wireguard tunnel, as it doesn't do any routing or host any services directly aside from the VXLAN endpoint.
Currently, there is no root password set, console access works either via key-based SSH or by entering pct enter 10403 on the Proxmox host console.
Container 10404 (demo-cghmn-legacy-dns, VLAN4, 172.23.4.104)
This container, based on Alpine, runs a dnsmasq instance configured to look up certain DNS overrides either in the hosts file at /etc/cghmn-dns-overrides or via dnsmasq configuration files included from /etc/dnsmasq-cghmn.d/*.conf. Any other requests it cannot resolve locally are forwarded to the Unbound DNS resolver running on the OPNsense router VM. This setup is used to create DNS overrides for existing domains, making old software that is hardcoded to specific DNS entries work again with custom servers hosted internally.
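An override of that kind would look roughly like this in dnsmasq terms. The overridden domain and target IP below are invented for illustration; 172.23.4.1 assumes the router holds the first IP of the Core Services subnet, as described above:

```
# Illustrative /etc/dnsmasq-cghmn.d/example.conf:
# answer a hardcoded vendor domain with an internal server
address=/update.example.com/172.23.8.42

# Illustrative dnsmasq.conf fragment: hosts-file overrides,
# and forward everything else to the OPNsense Unbound resolver
addn-hosts=/etc/cghmn-dns-overrides
no-resolv
server=172.23.4.1
```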
Currently, there is no root password set, console access works either via key-based SSH or by entering pct enter 10404 on the Proxmox host console.
Proposed Organization of IDs and IPs
My (Snep's) idea behind Proxmox container and VM IDs is as follows:
101xx - 103xx are for Containers and VMs in the bridged layer 2 network, so any hosts that members want to run in the bridged network directly.
104xx - 107xx are for Containers and VMs in the Core Services VLAN4, so anything that is necessary for the operation of the CGHMN network.
108xx - 111xx are for Containers and VMs in the Servers VLAN8, so anything that members would choose to host on the CGHMN Proxmox.
For IPs, I left the first 10 IPs in each subnet reserved for things like routers (perhaps a second router and a virtual IP for failover down the line?).
After that, the first half of the subnet (see above under Network Layout for the actual start and end of this range) is supposed to be reserved for any hosts that are set up with a fully static IP. This is entirely outside of the DHCP range to avoid any conflicts. That DHCP range then starts with the second half of the subnet and goes up to the last available host IP of each subnet.
Other Notes
- Currently, the advertised DNS server via DHCP is the included Unbound Server on the OPNsense instance. If we want to completely seal off clients and servers from the rest of the internet, we could directly point the clients towards our root DNS server for all requests.
- Currently, the OPNsense router does DHCP as it already has an IP in each VLAN and comes with a solid DHCP server that can also support failover out of the box (ISC DHCP). I (Snep) chose this route over a standalone DHCP server to avoid having a second container/VM in each subnet that solely does DHCP or DHCP proxying, mainly to keep the setup and maintenance work as low as possible.
- Customized OpenWRT images for the GL.iNet MT300n and AR300n are currently being built and tested, which include the required packages and UCI configuration out of the box to make joining the network perhaps a little bit easier. Will update this page or create a new one and link to it once a working image exists!
Reserved static IPs
- VLAN1, 172.23.0.11: WIREGUARD-EXTERNAL (CursedSilicon)