Pangolin performance via Newt #512
23 comments · 63 replies
-
Is nobody else experiencing this problem?
-
I'm having the same issue. Maybe the problem is with Traefik (or gerbil?), since the issue persists even locally. Some people have fixed it by switching VPS. Temporarily, I solved it by enabling the Cloudflare proxy, which tripled my upload speed.
-
I'm also experiencing a severe performance issue with my Pangolin/Newt/Gerbil setup, and I can't understand why. It works flawlessly for serving low-bandwidth workloads (websites); it's great, and I love it. My setup is as follows: …

What I've tried to identify the root cause:

- Suspected CPU bottleneck (WireGuard overhead): I initially suspected that Newt and/or Gerbil/Traefik were CPU-bound due to WireGuard overhead.
- Suspected bandwidth bottleneck: I ran … Everything seems to have a decent internet connection.
- Suspected network throughput between components: I deployed … So everything seems to have proper link speed internally despite the abstraction layers (VMs, Docker, VPS). When monitoring …

Given the above, it doesn't appear to be a CPU, RAM, or bandwidth issue, which leaves either the Newt/Gerbil/Traefik codebase or their configuration. Let me know if you need more details or logs; I'd be happy to help debug this further.
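The hop-by-hop testing described above can be sketched as an iperf3 matrix. This is only an illustration, not the commands the author actually ran: all addresses are placeholders, and the three hops are assumptions about a typical Pangolin layout (home client, VPS, Newt host on the LAN).

```shell
# Hypothetical iperf3 matrix to find which hop loses throughput.
# Replace the <...> placeholders; run each test for ~30 s (-t 30).

# On the VPS: start a plain iperf3 server
iperf3 -s

# 1) Raw internet path, no tunnel: home -> VPS public IP
iperf3 -c <VPS_PUBLIC_IP> -t 30

# 2) Through the tunnel: home -> VPS WireGuard/tunnel IP
iperf3 -c <VPS_WG_IP> -t 30

# 3) Inside the LAN: client -> machine running Newt
iperf3 -c <NEWT_HOST_LAN_IP> -t 30

# If (1) and (3) saturate the line but (2) collapses, the loss is in the
# tunnel path (Newt/Gerbil/MTU), not in raw bandwidth or the LAN.
```

Adding `-R` to each client invocation repeats the test in the reverse (download) direction, which is worth checking since several reports here are asymmetric.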
-
By the way, I'd like to share my workaround until this is fixed. It's simple: give every service/resource its own Newt tunnel/site.
-
I've been pulling my hair out over this until I found this discussion. I had no problems at all for several months, but only with low-bandwidth services. Recently I added my photo library to the setup, and then realized I'm not able to stream any videos reliably.

The behavior I see is consistently the following: with larger files, the initial transfer speed is high for a few seconds (meaning a few megabytes), and then it rapidly drops to unacceptable levels (around 10 kB/s!). After quite a long time (1-2 minutes) it just as rapidly picks up speed again and then keeps saturating my line until the transfer is complete (several MB/s). If I interrupt the transfer and restart, it starts all over again with the same behavior. During the time it's only dripping, the load on the VPS is non-existent. Once it picks up speed again I see Traefik at ~25% CPU.

I did extensive testing to check for bandwidth and speeds all along the setup, and I'm able to saturate my public line to and from the VPS (Strato Germany) no matter what (raw transfer through SCP, with or without Docker, etc.). I'm also able to max out the local GBit network when I access the services locally. It doesn't sound like exactly the symptoms you describe, but it's similar in that I cannot find any issues at any stage of the setup, yet when I add Newt to the equation it reproducibly shows this behavior.
-
I migrated my Pangolin instance from the default SQLite database (pangolin:1.12.2) to a PostgreSQL backend (pangolin:postgresql-1.12.2). It was definitely worth trying, and the performance boost for the UI and static pages is appreciated.
-
This discussion seems to be one of the most active ones, yet I haven't seen this issue being addressed. I used to really like Pangolin, but it's time for me to move on from the project. Thank you very much for everything so far. I wish the project and everyone involved all the best, and as they say, paths always cross again.
-
Unfortunately the issue still persists in the latest Newt version (IPv6 only).
-
After struggling with inconsistent throughput over Newt, I switched to a Basic WireGuard Site. Setup was pretty painless and immediately fixed the "bursty stall/buffering" behavior for long-lived streams.

Problem: persistent connections would stall in bursts over Newt tunnels (CPU wasn't pegged). Shorter/lower-bitrate flows were OK, which strongly pointed to PMTU/fragmentation (TCP bursts plus long stalls).

Fix: use a Basic WireGuard Site and route traffic over kernel WireGuard. The key points are in the solution below.

Solution:

1) Create a Basic WireGuard Site in Pangolin.

Note: the server-side WG interface may live inside the gerbil container:

```
docker exec -it gerbil ip -br addr   # should show wg0 with e.g. 100.89.x.x/24
```

2) Point the resource upstream to the WG peer IP (NOT the LAN IP). In Pangolin, set the resource Target/Upstream to the WG peer's tunnel IP.

3) On the WG peer gateway, DNAT only what you need to <TARGET_IP>:<PORT>. Put the following in /etc/wireguard/wg0.conf on the WG peer (replace <TARGET_IP>, <PORT>, and your LAN interface name if it's not eth0):

```
[Interface]
Address = <WG_PEER_IP_CIDR>
PrivateKey = ...
MTU = 1280

# Forward <PORT> from WG -> target
PostUp = sysctl -w net.ipv4.ip_forward=1
PostUp = iptables -t nat -A PREROUTING -i %i -p tcp --dport <PORT> -j DNAT --to-destination <TARGET_IP>:<PORT>
PostUp = iptables -t nat -A POSTROUTING -o eth0 -p tcp -d <TARGET_IP> --dport <PORT> -j MASQUERADE
PostUp = iptables -A FORWARD -i %i -o eth0 -p tcp -d <TARGET_IP> --dport <PORT> -j ACCEPT
PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp -s <TARGET_IP> --sport <PORT> -j ACCEPT

# MSS clamp (prevents PMTU blackholes / burst-stall behavior)
PostUp = iptables -t mangle -A FORWARD -i %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
PostUp = iptables -t mangle -A FORWARD -o %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

PostDown = iptables -t nat -D PREROUTING -i %i -p tcp --dport <PORT> -j DNAT --to-destination <TARGET_IP>:<PORT>
PostDown = iptables -t nat -D POSTROUTING -o eth0 -p tcp -d <TARGET_IP> --dport <PORT> -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -o eth0 -p tcp -d <TARGET_IP> --dport <PORT> -j ACCEPT
PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp -s <TARGET_IP> --sport <PORT> -j ACCEPT
PostDown = iptables -t mangle -D FORWARD -i %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
PostDown = iptables -t mangle -D FORWARD -o %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
```

4) Persist across reboots:

```
systemctl enable --now wg-quick@wg0
```

Quick sanity checks: …
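For anyone wondering where numbers like MTU 1280 come from: a quick back-of-the-envelope, assuming IPv4 and the standard WireGuard-over-UDP overhead of 60 bytes per encapsulation layer (20 IP + 8 UDP + 32 WireGuard). The two-layer case is an assumption about setups where traffic crosses more than one tunnel.

```shell
# Per-layer overhead for WireGuard over IPv4: 20 (IP) + 8 (UDP) + 32 (WG) = 60
WAN_MTU=1500
WG_OVERHEAD=60

ONE_LAYER=$((WAN_MTU - WG_OVERHEAD))     # safe MTU inside one tunnel layer
TWO_LAYERS=$((ONE_LAYER - WG_OVERHEAD))  # with a second encapsulation layer

# TCP MSS at MTU 1280: subtract 20 (IP) + 20 (TCP) header bytes
MSS=$((1280 - 40))

echo "$ONE_LAYER $TWO_LAYERS $MSS"   # prints: 1440 1380 1240
```

So 1280 sits comfortably below even a double-encapsulated path, and the MSS clamp makes TCP advertise a segment size that fits, which is exactly what prevents the PMTU blackhole stalls.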
-
Same issue here.
-
Yeah, I've been running into the same issue with large file streams. I also tried replacing Newt with a plain WireGuard connection, but unfortunately the speeds still dropped quite a bit. For now, I've had to switch to a different tool for the affected applications. Really hoping this gets improved at some point. Everything else works great, so it would be amazing to see this resolved.
-
Same issue here. I understand there's likely no easy fix, but it seems like a pretty critical issue.
-
I've encountered the same issue. Pangolin is installed and running on a 2 vCPU VPS, with Newt running as a service on my Win11 Pro box. I have a nice, "elegant" solution that I can point people to (plex.mydomain.com, overseer.mydomain.com, etc.), but Plex streaming is limited to 720p / 4 Mbps through the Newt tunnel. I've had to switch to Tailscale, which isn't quite as elegant, as it requires something installed on my Plex users' machines. I too would really like to see this resolved, as Pangolin is a super-neat solution otherwise.
-
Same here. Streaming 4K over Newt is impossible; it works OK over standard WireGuard.
-
On a RackNerd 2 vCPU VPS, streaming 4K was working, but it took a good minute to start. I switched to standard WireGuard and it's almost instant.
-
Same issue for me: with Newt I get 120-150 Mbps (6-30 MB/s), while iperf between the local VM and my VPS (without the tunnel) shows 6 Gbps. [EDIT] With WireGuard instead of Newt: 600+ Mbps. That's 5-6 times faster.
-
This issue is painful, and it really has me looking for an alternate system despite Pangolin serving me so well. What I find puzzling is why this happened sometime in recent months; I recall my network transfer speed being almost as fast as local in the past.
-
I get usable 15 MB/s speeds from my Windows host using Newt via Docker. I'd like it to be more, but compared to the others here I can't complain. My VPS offers 500 Mbit speeds, and iperf3 to the VPS really does deliver that. When downloading large files from Nextcloud, all involved containers (Nextcloud, Authentik, Nginx, and ultimately Newt) consume a lot of CPU power on the host, with almost all 16 cores in use. If I reduce the CPU max speed to 90% (via Power Settings), download speeds drop to 13 MB/s; at max speed I get 15 MB/s. Upgrading the VPS from 1 to 2 cores did absolutely nothing, however.
-
Hey all, just an update: on Newt I was getting 32 Mbps; after switching to WireGuard I'm now at slightly below 1000 Mbps.
-
It's been almost a year and we still have this issue :))
-
After struggling with this issue for several weeks, I found the solution for my configuration. The main problem comes mainly from Docker, so don't expect more speed with the default setup. You MUST run Newt AND Gerbil in pass-through mode (host networking, not bridge mode) to allow Gerbil and Newt to create a true WireGuard virtual interface on your servers. Without this, you can't exceed a speed of ~300-400 MB/s. In addition, for 4K streamers, in my opinion you MUST have a dedicated WireGuard site on your router/server (used only for this purpose). It's a painful setup for some, but it's worth it. For example, I now reach 750-850 MB/s on a line with a real upload capacity of 950 MB/s.
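For reference, a sketch of what the host-networking change can look like in docker-compose. The service names (`gerbil`, `newt`) are assumptions; match them to your actual compose file. Note that `network_mode: host` makes any `ports:` mappings on that service ineffective, so remove them.

```yaml
# Illustrative docker-compose override; adapt service names to your stack.
services:
  gerbil:
    network_mode: host   # bypasses Docker's bridge/NAT; 'ports:' entries
                         # are ignored in host mode and should be removed
  newt:
    network_mode: host
```

Host mode removes the userspace NAT/veth hop between the container and the NIC, which is why it can matter for WireGuard throughput.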
-
I'd love to see a straightforward set of instructions for running Pangolin on a VPS and setting up a WireGuard client on the Windows 11 end. I can't be the only one with this setup (currently with Newt, and struggling with the speed for streaming media). I'm using Tailscale now, which does work, but I'd like to make use of the VPS I'm paying for. Everyone says setting up a WG tunnel is the answer, but I can't find anywhere that explains exactly how to do it, written as if I'm an idiot with no experience of WG. The guides either have Windows at both ends or a Linux distro where my Win 11 box is. Please?
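Not an authoritative guide, but roughly what the Windows side can look like, assuming a Basic WireGuard Site created in Pangolin (which generates the real keys, addresses, and endpoint for you). Every value below is a placeholder; paste the generated config, or this shape of config, into "Add empty tunnel" in the official WireGuard for Windows app.

```ini
[Interface]
PrivateKey = <generated-by-the-wireguard-app-or-pangolin>
Address = <WG_PEER_IP>/24        # the tunnel IP Pangolin assigns to this site
MTU = 1280                       # conservative; avoids PMTU blackholes

[Peer]
PublicKey = <server-public-key-from-pangolin>
Endpoint = <your-vps-ip>:<udp-port>
AllowedIPs = <WG_SUBNET>/24      # only route the tunnel subnet, not 0.0.0.0/0
PersistentKeepalive = 25         # keeps the NAT mapping alive from behind a router
```

Keeping `AllowedIPs` to just the tunnel subnet means only Pangolin-bound traffic uses the tunnel, while normal browsing stays on your regular connection.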





-
Hey everyone,
First of all, thanks for the great work on Pangolin – I really like how clean and straightforward it is to set up.
I’m currently testing Pangolin in a fairly performant setup:
• Home connection: 1 Gbit fiber
• VPS: 2.5 Gbit throughput
• Service behind Pangolin: Pingvin Share (self-hosted file sharing tool)
The problem I’m running into is related to throughput performance. When I upload a file to Pingvin through Pangolin, I only get about 4 MB/s (≈32 Mbit/s), and the speed occasionally drops even further. The same happens when I try to download the file back – it’s consistently low and doesn’t reflect the bandwidth I actually have on either side.
So now I’m wondering:
Where’s the bottleneck? Is it:
• A known limitation or default setting in Pangolin?
• Related to WireGuard performance under load?
• Possibly something in the configuration (like MTU size, buffer sizes, etc.)?
I’d really appreciate if someone could share similar experiences, tuning tips, or diagnostic steps I might try.
Thanks in advance!
Alex
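Regarding the MTU question: one way to check whether MTU is the culprit is to probe the largest packet that crosses the path without fragmentation. This is a sketch; the address is a placeholder, and the sizes are ICMP payload bytes (path MTU = size + 28 for IPv4).

```shell
# Linux; on Windows 11 the equivalent is: ping -f -l <size> <host>
VPS=<your-vps-ip>                     # placeholder
for size in 1472 1412 1380 1252; do   # 1472 ~ MTU 1500, 1252 ~ MTU 1280
  if ping -c 1 -W 2 -M do -s "$size" "$VPS" >/dev/null 2>&1; then
    echo "payload $size: passes unfragmented"
  else
    echo "payload $size: blocked (path MTU below $((size + 28)))"
  fi
done
```

Run it once against the VPS public IP and once against the tunnel IP; if the tunnel path chokes at a much smaller size than the raw path, lowering the tunnel MTU or clamping MSS is worth trying.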