Awesome growing list of network automation enthusiasts
go.bsky.app/N9nHqzg
6am, rise and shine! Nothing like starting the day by making sure the Border0 packet parser handles fragmented IP packets correctly. #NetworkingGeek
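Border0's actual parser isn't shown here, so purely as an illustration: the core check for IPv4 fragmentation is whether the More Fragments (MF) flag is set or the fragment offset is non-zero. A minimal sketch (`isFragment` is my own hypothetical name, not Border0 code):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// isFragment reports whether a raw IPv4 header describes a fragment:
// the More Fragments (MF) flag is set, or the fragment offset is non-zero.
func isFragment(hdr []byte) bool {
	if len(hdr) < 20 {
		return false // too short to be a valid IPv4 header
	}
	// Bytes 6-7: 3 flag bits (reserved, DF, MF) + 13-bit fragment offset.
	flagsFrag := binary.BigEndian.Uint16(hdr[6:8])
	mf := flagsFrag&0x2000 != 0  // MF flag
	offset := flagsFrag & 0x1FFF // offset in 8-byte units
	return mf || offset != 0
}

func main() {
	hdr := make([]byte, 20)
	hdr[0] = 0x45 // version 4, IHL 5
	hdr[6] = 0x20 // MF flag set: first fragment of a series
	fmt.Println(isFragment(hdr)) // true
	hdr[6] = 0x40 // DF flag only: not a fragment
	fmt.Println(isFragment(hdr)) // false
}
```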
I'm heading to AWS re:Invent next week!
If you're attending, swing by our Booth 1768 to meet the Border0 team (and me!) in person. We're showcasing the World's First Application-Aware VPN! Stop by for a demo, a chat, or just to say hello!
Find us at Booth 1768 #Reinvent2024
Wow, just wow! These Orcas came to visit right in front of our place. A mother Orca and her three "little" ones. They're so majestic! #Vancouver www.youtube.com/watch?v=NYBX...
Was talking about exactly this with someone yesterday; it's a common response after an outage:
"Be mindful of the knee-jerk management response: 'We need more change management process!' Unless you're a real YOLO shop, this is rarely the answer."
toonk.io/navigating-i...
This was posted in the Equinix Metal community Slack a few days ago.
Ooh man, bummed to hear Equinix Metal is shutting down. I loved the original Packet service and later the Equinix version. Ran many of my network perf tests and BGP anycast pet projects on their amazing infra. End of an era.
To be clear, there's no issue here, this was just me being curious. Either way, that was my geeky early Saturday morning: a mix of coffee, FreeBSD, Go, and curiosity.
Anyone still use hosts.allow style filtering?
Example code for the curious: gist.github.com/atoonk/8863c...
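For reference, FreeBSD's tcp wrappers read a combined /etc/hosts.allow where the first matching rule wins. A minimal deny rule along these lines reproduces the accept-then-disconnect behavior (the address range here is a documentation placeholder, not what I actually used):

```
# /etc/hosts.allow, first matching rule wins
sshd : 192.0.2.0/24 : allow
sshd : ALL : deny
```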
I'm sure there's a good reason though. My guess? Likely performance related: avoiding pf to squeeze more performance out of these boxes and to make them more resilient against attacks.
Now, I have no idea if Netflix uses this method; it could be in-app (bgpd/sshd) filtering, or some other proxy (even nginx) filtering TCP entirely. What's intriguing to me is the choice not to use the kernel firewall (pf) for this kind of traffic filtering, as that would be the "obvious" choice.
To make it interesting, I wrote a simple Go program that integrates with libwrap, the library that implements the /etc/hosts.allow functionality. Sure enough, after adding a deny statement I replicated the same behavior: TCP session established, followed by an immediate disconnect.
It's been about 20 years since I last used that feature, but I woke up early this morning, made some coffee, and decided to revisit it and poke around. I spun up a FreeBSD box on Vultr (Netflix famously uses FreeBSD for its caching servers) and started experimenting with /etc/hosts.allow.
It got me wondering: Is this filtering happening in the applications themselves (e.g., sshd or bgpd) or somewhere else? Or perhaps a blast from the past... could this be the classic /etc/hosts.allow and /etc/hosts.deny at play?
Running an nmap scan, I noticed something interesting: ports 22 (SSH) and 179 (BGP) appeared to be wide open. This surprised me; best practice dictates that sensitive services like these should only be accessible from trusted sources, not the wide-open internet!
Yesterday, while Netflix was grappling with the live streams covering Iron Mike, I got curious and decided to poke around a bit, specifically checking where my caching server was located.
According to @kentik.bsky.social's OTT Service Tracking, Netflix traffic volume is currently up almost 3x normal. #TysonPaul
Are you experiencing any buffering?
www.kentik.com/product/subs...
Hmm, interesting.
Project for later, thanks!
It feels friendly :) Great first impression.
And yeah, it reminds me of early Twitter days.