Most of the internet’s critical plumbing was built between 1981 and 1994. That’s not trivia. IPv4, BGP, DNS, SMTP: these protocols move your data, route your packets, and deliver your mail on infrastructure that predates Google by over a decade.
People spend a lot of energy talking about what’s next in networking. Less attention goes to why the stuff from the Reagan era still works.
IPv4 Refuses to Die
When USC’s Information Sciences Institute published RFC 791 in September 1981, IPv4’s 4.3 billion addresses felt inexhaustible. The internet had maybe a few hundred hosts at that point. Nobody was planning for a world where a single household might burn through five or six IPs between phones, laptops, smart TVs, and whatever else is on the Wi-Fi.
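That 4.3 billion figure falls straight out of the 32-bit address format. A quick sketch using Python's standard `ipaddress` module (the address shown is from a range reserved for documentation):

```python
import ipaddress

# IPv4 addresses are 32-bit integers: 2**32 possible values.
total = 2 ** 32
print(total)  # 4294967296 -- roughly 4.3 billion

# The ipaddress module maps dotted-quad notation onto that integer space.
addr = ipaddress.IPv4Address("192.0.2.1")  # TEST-NET-1, reserved for docs
print(int(addr))   # the same address as a plain 32-bit integer
print(addr + 1)    # 192.0.2.2 -- arithmetic works on the integer form
```

Every address your phone, laptop, and TV use is just one of those 2^32 integers, which is why a busy household chews through them so fast.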
Addresses ran out (officially, the last top-level blocks were allocated back in 2011), but IPv4 didn’t go away. Network Address Translation stepped in. NAT lets a whole office or apartment building share one public IP, and it turned out to be good enough that the migration to IPv6 basically stalled.
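At its core, NAT is just a translation table: many private (address, port) pairs mapped onto ports of one public address. Here's a toy sketch of the idea — the class and names are hypothetical illustrations, nothing like a production router's implementation:

```python
# Toy NAT sketch. All names here are illustrative, not a real router's API.
PUBLIC_IP = "203.0.113.7"  # documentation range, stands in for the shared address

class ToyNAT:
    def __init__(self):
        self.table = {}          # (private_ip, private_port) -> public_port
        self.reverse = {}        # public_port -> (private_ip, private_port)
        self.next_port = 40000   # arbitrary start of the ephemeral port range

    def outbound(self, private_ip, private_port):
        """Rewrite an outgoing connection to use the shared public address."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.table[key]

    def inbound(self, public_port):
        """Route a reply on a public port back to the right private host."""
        return self.reverse[public_port]

nat = ToyNAT()
print(nat.outbound("10.0.0.5", 51000))  # ('203.0.113.7', 40000)
print(nat.outbound("10.0.0.9", 51000))  # same public IP, different port
print(nat.inbound(40001))               # ('10.0.0.9', 51000)
```

Two hosts using the same private port still get distinct public ports, which is the whole trick: one routable address, arbitrarily many devices behind it.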
Google tracks IPv6 adoption globally, and the number sits at around 46%. So after nearly three decades of IPv6 being available, it still hasn’t cracked the halfway mark. That’s wild if you think about it.
The practical fallout is real for people who work with proxies and web automation. Anyone running scraping jobs, ad verification, or geo-testing still needs IPv4 proxies because plenty of websites either serve different content to IPv6 connections or just don’t support them properly yet. Compatibility is still an IPv4 story.
There’s a financial angle too. Gartner analyst Andrew Lerner pointed out (in an interview with The Register) that migration costs remain high and the ROI case is weak, with some enterprises actually disabling IPv6 to avoid performance headaches.
Meanwhile, a /24 block of IPv4 addresses sells for tens of thousands of dollars on the secondary market, because no new addresses exist.
BGP: A Napkin Sketch That Routes the Planet
The Border Gateway Protocol has one of the better origin stories in tech. Yakov Rekhter and Kirk Lougheed sketched it on two napkins at the 12th IETF meeting in Austin, Texas, in January 1989. It was supposed to be a quick fix.
Thirty-six years later, that quick fix routes basically all traffic between autonomous systems on the public internet. And it runs on trust, which is the scary part. When an AS announces that it owns a block of IP addresses, neighboring routers take its word for it.
Pakistan Telecom proved how badly that can go in 2008, when a misconfigured route announcement hijacked YouTube’s traffic globally for about two hours. These kinds of incidents (called BGP hijacks or leaks) happen more often than most people realize. RPKI gives network operators a way to verify route announcements cryptographically, but adoption has been slow, especially among smaller providers.
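The mechanics of that hijack come down to longest-prefix matching: routers prefer the most specific route, so a bogus /24 beats a legitimate /22 for every address inside it. A simplified sketch with the stdlib `ipaddress` module, using the prefixes from the 2008 incident (real route selection weighs more attributes than this):

```python
import ipaddress

def best_route(dest, routes):
    """Longest-prefix match: pick the most specific route covering dest."""
    dest = ipaddress.ip_address(dest)
    matches = [(net, origin) for net, origin in routes
               if dest in ipaddress.ip_network(net)]
    # Larger prefix length = more specific = preferred.
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)

routes = [
    ("208.65.152.0/22", "AS36561 (YouTube)"),           # legitimate announcement
    ("208.65.153.0/24", "AS17557 (Pakistan Telecom)"),  # bogus, more specific
]

# Traffic for any host inside the bogus /24 follows the hijacker's route.
print(best_route("208.65.153.238", routes))
```

Nothing in the protocol itself asks whether AS17557 had any right to announce that /24 — that's the trust problem in one line.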
Replacing BGP would mean getting every AS on earth to agree on something new and switch over together. That isn’t a realistic scenario.
DNS and SMTP Keep Doing Their Jobs
DNS is older than most people reading this article. It’s been resolving domain names to IP addresses since 1983, operating on specs (RFC 1034 and 1035) finalized in November 1987. The extensions bolted on since then, like DNSSEC, address real security gaps but haven’t changed the fundamental lookup process.
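That lookup process still runs on the wire format RFC 1035 defined. Here's a minimal sketch of building a DNS query packet by hand — just `struct` packing, no resolver library; the flags value simply requests recursion:

```python
import struct

def build_query(name, qtype=1, query_id=0x1234):
    """Build a minimal DNS query per RFC 1035 (qtype 1 = A record)."""
    # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1, rest zero.
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", qtype, 1)  # QCLASS 1 = IN
    return header + question

packet = build_query("example.com")
print(packet.hex())
# This 1987-vintage format could still be sent to any resolver on UDP port 53.
```

Every modern resolver, DNSSEC or not, still speaks this exact framing.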
SMTP has an even longer pedigree, going back to ARPANET. Formalized in 1982, it still carries every email sent anywhere in the world. TLS encryption, SPF, and DKIM were added over the years to deal with spoofing and spam, but the basic mechanics of message relay haven’t really shifted.
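Those basic mechanics are a plain-text command dialogue that has barely changed since RFC 821. A sketch of the envelope exchange a relay still performs — command strings only, no actual connection; Python's stdlib `smtplib` would issue essentially this same sequence under the hood:

```python
def envelope_commands(sender, recipient, helo_host):
    """The RFC 821-era command sequence for relaying one message."""
    return [
        f"HELO {helo_host}",       # identify the sending host
        f"MAIL FROM:<{sender}>",   # envelope sender
        f"RCPT TO:<{recipient}>",  # envelope recipient
        "DATA",                    # headers + body follow, ended by a lone "."
        "QUIT",
    ]

for cmd in envelope_commands("alice@example.com", "bob@example.org",
                             "mail.example.com"):
    print(cmd)
```

SPF and DKIM bolt verification onto this dialogue; TLS wraps it in encryption; but the conversation itself is pure 1982.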
What both protocols have in common is that they were written for a friendly network. The engineers who designed them assumed good faith from all participants. That assumption gave us decades of spam and phishing attacks, plus DNS cache poisoning as a bonus.
The Replacement Problem
Swapping out a protocol that billions of devices depend on is a coordination problem that borders on impossible. You need backward compatibility, buy-in from competitors, and a budget for something that (from a quarterly earnings perspective) won’t show results for years.
IPv6 is the test case. It shipped in 1998, and 27 years later it’s still the underdog.
The original IPv4 spec is 45 pages, written for a network of a few hundred machines, and it still runs a network of billions. That says something about the difference between protocols that need to be perfect and protocols that just need to not break. These protocols chose the second option, and it’s worked out better than anyone expected.