Server behind CGNAT - Reverse VPN? Or how to bypass?
So... in short: the title. I have a server in a remote location which also happens to be behind CGNAT. I only get to visit this location once a year at best, so if anything goes down... it stays down for the rest of the year until I can go and troubleshoot.
I have a main location/home where everything works, I get a fixed IP and I can connect multiple services as desired.
I'd like to set this up so I could publish internal services such as HA (Home Assistant) or similar at the remote location, and reach them in a way easy enough that I could install the apps for non-tech users and they could just use them through a normal URL. Is this possible? I already have PiVPN running WireGuard at the main location, and I just tested an LXC container at the remote location; it connects via WireGuard to the main location just fine and can ping/ssh machines correctly. But I can't reach this VPN-connected machine from the main location.
Alternatively, I'm happy to hear other solutions/ideas on how to connect this remote location to the main one somehow.
Install Tailscale on the remote server and leave it up. Whenever you need to connect to it launch Tailscale on another device that you have access to, and you'll be able to connect to the remote server at its Tailscale IP.
Tailscale consists of a config tool called tailscale and a daemon called tailscaled. The daemon needs to be up for connectivity, and it will raise a network interface called tailscale0 when it's working. To connect to or disconnect from the tailnet you run tailscale up or tailscale down. This is independent of whether the daemon runs or not – that's a separate issue that's usually handled by systemd or whatever service manager you use.
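For example, on a systemd-based distro the basic lifecycle looks something like this (a minimal sketch using the standard Linux package):

    # make sure the daemon starts at boot and is running now
    sudo systemctl enable --now tailscaled

    # join the tailnet (prints an auth URL the first time)
    sudo tailscale up

    # check that the node is connected and the interface exists
    tailscale status
    ip addr show tailscale0

    # leave the tailnet without stopping the daemon
    sudo tailscale down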
Tailscale doesn't need public IPs because all the clients connect outward to a coordination server, which uses STUN to negotiate direct connections between the nodes. The connection keys always stay on each client machine, the established connections cannot be snooped by Tailscale, and the clients are FOSS so you can verify that.
If by any chance the ISP of any node does aggressive UDP filtering, STUN may not work, and connections will instead be relayed through a network of so-called DERP servers maintained by Tailscale. These servers are limited in number and location, so relayed connections will be bandwidth- and latency-limited. If STUN succeeds you'll only be limited by each node's internet connection.
Tailscale can provide DNS names for the enrolled nodes if you want, but you can also assign fixed IPs to each node in the range 100.64.0.0/10. I'm not a huge fan of the provided DNS because it's a bit invasive (it works by temporarily replacing /etc/resolv.conf with a version that resolves via 100.100.100.100 on the tailnet, and integrating that with local DNS can be a chore, as you can imagine). There's an option for Tailscale nodes not to accept this DNS.
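That opt-out is a flag on the client:

    # keep the node's own /etc/resolv.conf; don't use tailnet-provided DNS
    sudo tailscale up --accept-dns=false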
Make sure the services on the remote server that you want to access via the tailnet (also) bind to the Tailscale IP (or to 0.0.0.0).
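A quick way to check what each service is actually bound to (the 100.64.0.5 address below is just an example from the 100.64.0.0/10 range):

    # list listening TCP sockets and their bind addresses
    ss -ltn

    # a service on 0.0.0.0:8123 is reachable on every interface,
    # one on 100.64.0.5:8123 only via the tailnet,
    # and one on 127.0.0.1:8123 isn't reachable remotely at all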
Should you mess up, as long as the Tailscale client is still up on the remote server and it has an internet connection, you can still reach it by enabling Tailscale's "fake" SSH service, which works through the Tailscale client rather than a real SSH daemon. But please read up on what it takes to have this fake SSH access available (you don't want to have to issue a command on the remote server to enable it).
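For reference, Tailscale SSH is switched on per node with a client flag, and who can actually log in is then governed by your tailnet ACLs, so enable it before you need it:

    # advertise this node as a Tailscale SSH server
    sudo tailscale up --ssh

    # then from any other node on the tailnet:
    ssh user@remote-node-tailscale-ip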
Thanks... I think I now have containers on both the home and remote servers within the Tailscale network, each showing its own IP. This will make the site accessible via SSH, if I get this right. I'm not sure, though, how to route whichever remote server I build to the home network.
You mean how to expose the local services on the machine via Tailscale? You can use the TS_DEST_IP env var. Let me show you my compose.yaml:
services:
  tailscale:
    image: tailscale/tailscale:v1.62.0
    container_name: tailscale
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - TS_HOSTNAME=tailnet-node-name
      - TS_AUTHKEY=tailnet-node-key
      - TS_STATE_DIR=/var/lib/tailscale # see below for persistence
      - TS_ACCEPT_DNS=true # to resolve tailnet names (not necessary if you use IPs)
      - TS_USERSPACE=false # to be able to raise the tailscale0 interface
      - TS_DEST_IP=local-ip # the IP where you bind services you want to expose
      - TS_ROUTES=192.168.1.0/24 # optionally expose other local devices
    volumes:
      - "./data/var/lib/tailscale:/var/lib/tailscale" # persist daemon preferences
      - "/dev/net/tun:/dev/net/tun" # to be able to raise the tailscale0 interface
    deploy:
      resources:
        limits:
          memory: 256M
          pids: 100
    restart: always
With TS_USERSPACE=false Tailscale will use kernel access to raise tailscale0 and allocate the tailnet IP you've specified in the Tailscale admin panel.
(TS_USERSPACE=true doesn't use a network interface at all; you have to use a forward HTTP or SOCKS5 proxy instead. If you use =true and set up a proxy, you don't have to use TS_DEST_IP.)
With TS_DEST_IP you can bridge the tailnet IP to a local IP. By choosing that local IP well you can expose services to Tailscale selectively.
The simplest approach is to have your docker services listen on 0.0.0.0 (this happens by default if you don't specify an IP in the "ports:" section) and put the machine's loopback or LAN IP in TS_DEST_IP. That way you'll expose all the services automatically.
But you can also expose things explicitly. Let's say the machine's LAN IP is 192.168.1.1. You make up 10.100.100.100 on the host specifically for Tailscale exposure, so you set TS_DEST_IP=10.100.100.100. Then in the "ports:" section in compose you add ports explicitly depending on where you want them exposed. Say you want Jellyfin exposed on both LAN and Tailscale: you add an entry for - 192.168.1.1:8096:8096/tcp and one for - 10.100.100.100:8096:8096/tcp. You only want CUPS exposed on the LAN, not on Tailscale, so you only add - 192.168.1.1:631:631/tcp for it. For Deluge you want the web interface on the LAN, so you add - 192.168.1.1:8112:8112/tcp, but you want RPC over Tailscale so you can use a phone admin app, so you add - 10.100.100.100:58846:58846/tcp.
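Put together as a compose snippet (same example IPs and ports as above; the service names are just placeholders):

    services:
      jellyfin:
        ports:
          - "192.168.1.1:8096:8096/tcp"      # LAN
          - "10.100.100.100:8096:8096/tcp"   # Tailscale
      cups:
        ports:
          - "192.168.1.1:631:631/tcp"        # LAN only
      deluge:
        ports:
          - "192.168.1.1:8112:8112/tcp"      # web UI, LAN only
          - "10.100.100.100:58846:58846/tcp" # daemon RPC over Tailscale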
You get the idea. Yeah it's a bit more work to specify everything explicitly but it's a lot more precise (and you can still use the default and listen on 0.0.0.0 for when you want to expose everywhere).
I've been told that ZeroTier is even better. Haven't tried it myself (it looks more complicated to self-host), but the guy suggesting it knows waaaaay more than me about these things. Just mentioning it in case you want to look into another option.
For what it's worth (from a random guy on the internet), self-hosting Tailscale is quite easy! 🙂
Yes, but autossh will automatically try to reestablish the connection when it's down, which is perfect for servers behind CGNAT that you can't physically access. Basically a set-up-and-forget kind of app.
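A rough sketch of what that looks like for this thread's use case (home.example.com, tunneluser, and port 8123 are all made-up placeholders; 8123 is just Home Assistant's default port):

    # reverse tunnel from the CGNAT'd server to the fixed-IP home server;
    # autossh restarts the ssh session whenever it dies
    autossh -M 0 -N \
      -o "ServerAliveInterval 30" \
      -o "ServerAliveCountMax 3" \
      -o "ExitOnForwardFailure yes" \
      -R 8123:localhost:8123 \
      tunneluser@home.example.com

With that up, anything hitting port 8123 on the home server gets forwarded back to the remote machine. Note that by default the forwarded port only binds to the home server's loopback; sshd's GatewayPorts setting controls whether it's exposed more widely.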
I'm a noob and this was simple. Works like a charm. It has ready-made installers for WireGuard on different VPS providers and an installer for your client (home server).
https://github.com/mochman/Bypass_CGNAT
Thanks... I use my own home server, so I'd try to avoid the VPS part if I can and point it directly at the home server, since I already have that one with a working fixed domain etc.
I use this too... though I had to make some modifications to the WireGuard script (it cleared iptables and blocked the SSH port). Other than that it works great.
Can the ISP at the remote location remove your remote location from the CGNAT? I have a similar issue where sometimes it will reset and my services become inaccessible. A quick call to support usually has the problem fixed in a few minutes.
That's unfortunate that you have to put in a ticket. They should be able to help you with this during a 5-10 minute phone call. Hopefully they get to your ticket in a timely manner.
You probably need the server to do relatively aggressive keepalive to keep the connection alive. You go through CGNAT, so if the server doesn't talk over the VPN for, say, 30 seconds, the NAT may drop the mapping and now it's gone. WireGuard doesn't send any packets unless it's actively talking to the other peer, so you need to enable keepalive so it sends traffic often enough that the connection doesn't drop, and if it does, it's quickly brought back up.
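In WireGuard that's the PersistentKeepalive option on the peer. A minimal sketch of the [Peer] section on the CGNAT'd side (the key, endpoint, and subnet are placeholders; 25 seconds is the commonly recommended value):

    # /etc/wireguard/wg0.conf on the remote (CGNAT'd) server
    [Peer]
    PublicKey = <main-location-public-key>
    Endpoint = your-fixed-home-ip:51820
    AllowedIPs = 192.168.3.0/24
    PersistentKeepalive = 25   # send a keepalive every 25s so the CGNAT mapping stays open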
Also make sure, if you don't NAT the VPN, that everything has a route back through the VPN. If 192.168.1.34 (main location) talks to 192.168.2.69 (remote location) over a VPN on 192.168.3.0/24, without NAT, both ends need to know to route that traffic through the VPN network. Your PiVPN probably does NAT, so it works one way but not the other. Traceroute from both ends should give you some insight.
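Concretely, with the example subnets above, the missing routes would look something like this (the interface name wg0 is an assumption):

    # on a main-location machine (or better, its router):
    ip route add 192.168.2.0/24 dev wg0    # send remote-LAN traffic into the tunnel

    # on the remote server:
    ip route add 192.168.1.0/24 dev wg0    # send main-LAN traffic into the tunnel

    # then check the path from each end, e.g.:
    traceroute 192.168.2.69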
From your remote location I would set up at least two different tunnels back to your home network. Perhaps one service using Cloudflare tunneling, and one using WireGuard as you mentioned. That way if one of your tunnels goes down you have time to fix it using the other tunnel.
If you have the budget for it, Ubiquiti gear is pretty good, and using their cloud configuration makes sense in this scenario. The Ubiquiti gateway would sit at your remote location, maintaining the tunnels, and if there are any issues you could fix them through the UI.com interface.
Can the ISP offer dedicated IPv4 addresses? We had a similar issue with the new rural fiber provider. I spent hours tinkering and researching, only to finally call support.
15 minutes and $2/mo later, it's all taken care of. I have a direct IP and none of the maintenance nightmare where I have to sacrifice a goat to the printer gods and pray for mercy to make it work.