The Internet Isn’t in the Cloud. It’s Deep Under the Ocean.

You’ve been told everything lives in the cloud. In reality, roughly 99% of intercontinental traffic moves through cables under the ocean. That changes how you think about DNS, DR, compliance, and everything you build.

📘 This article is part of my ongoing blog series:
The True Code of a Complete Engineer
(Explore all episodes →)


The Cloud Isn’t in the Sky. It’s Underwater.

If someone had told me early in my career that understanding geography—yes, the literal map of Earth—would help me make better cloud decisions, I would’ve laughed.

Turns out, they were right.

And I wish someone had shown me how most of what we call "cloud" is actually deep under the ocean, riding on invisible cables across continents. Because once you see this, your decisions change:

  • How you choose cloud regions
  • Where your DR setup lives
  • How your app responds to DNS issues
  • Even why your page loads slowly only in certain countries
  • Whether you’re accidentally violating GDPR or data sovereignty laws

This isn’t a tutorial. It’s a story. About how I discovered what’s really under the cloud—and how that changed the way I build.


🧭 Start Here: Most of the Internet Lives Under the Ocean

Imagine a giant spider web under the sea: roughly 1.4 million kilometers of fiber-optic cables stretching across oceans, quietly carrying nearly 99% of intercontinental internet traffic.

Let that sink in: 99%.

These cables connect continents:

  • India to Singapore
  • Singapore to Europe
  • Europe to the US
  • And loop back across Pacific routes

They're not just backups. They're the default highway your app traffic uses.

So when you’re sitting in India and your DNS query or image request travels to a US server, it’s not “airborne cloud magic.” It’s literally riding a cable laid along the ocean floor.

That’s where the cloud lives.


🧠 Why Geography Suddenly Becomes a Tech Skill

Most devs think:

“We’ve hosted the site on AWS. It’s in the cloud. All good.”

But that’s like saying:

“We’ve built a city. It has roads. Everyone can reach it easily.”

No. It depends on where you built it, how far the roads go, how many lanes they have, and whether there’s a toll gate or a traffic jam.

Here’s what I mean:

  • A user in Delhi visiting a US-hosted site might travel via Singapore → Pacific cable → California
  • That’s well over ten thousand kilometers of fiber, and every kilometer adds latency
  • And one cable cut in the middle of the ocean? Total chaos

This is where geography meets architecture. And ignoring it can be the costliest blind spot.
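You can put a floor on that latency with nothing but physics. Here’s a back-of-envelope sketch in Python; the ~15,000 km route length is a rough illustrative guess on my part, not a measured path:

```python
# Back-of-envelope: minimum round-trip time over fiber.
# Light in fiber travels at roughly 200,000 km/s (about two-thirds of c).

FIBER_SPEED_KM_PER_S = 200_000

def min_rtt_ms(route_km: float) -> float:
    """Theoretical best-case round trip, ignoring routing detours and queuing."""
    return (2 * route_km / FIBER_SPEED_KM_PER_S) * 1000

# Delhi -> Singapore -> California: assume very roughly 15,000 km of cable one way
print(f"Best-case RTT: {min_rtt_ms(15_000):.0f} ms")  # ~150 ms before any server work
```

And that 150 ms is the best case. Routing detours, congestion, and TLS handshakes only stack on top of it.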


📍 Simple, But Powerful Example 1: DNS Resolution

You type myapp.com. Hit enter.

But where does your computer actually ask: “Where is this site?”

It sends a DNS query to a recursive resolver—maybe Google DNS (8.8.8.8) or Cloudflare (1.1.1.1).

If your nearest resolver is in India but your authoritative DNS server is in Virginia, USA, here’s what happens:

  • The resolver crosses the ocean to ask for the answer
  • Comes back with an IP
  • Only then do you start connecting to the site

So before a single byte of your app loads, a roundtrip under the ocean has already happened.

Every time the resolver’s cached answer expires, that ocean trip repeats.

🧠 Lesson: Place your authoritative DNS close to your users, or use an anycast or geo-aware DNS provider like Route 53 or Cloudflare. Don’t make users in Asia go hunting for answers in America.
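You can see this cost directly. Below is a minimal sketch using the dnspython library (an assumption on my part; install it with pip install dnspython) that times the same lookup against two public resolvers:

```python
# Time how long public resolvers take to answer for a domain.
# Requires dnspython 2.x: pip install dnspython
import time
import dns.resolver

RESOLVERS = {"Google": "8.8.8.8", "Cloudflare": "1.1.1.1"}
DOMAIN = "example.com"  # swap in your own domain

for name, server_ip in RESOLVERS.items():
    resolver = dns.resolver.Resolver()
    resolver.nameservers = [server_ip]
    start = time.perf_counter()
    answer = resolver.resolve(DOMAIN, "A")
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{name} ({server_ip}): {next(iter(answer))} in {elapsed_ms:.1f} ms")
```

Run it twice. The second run is usually much faster because the resolver has cached the answer, which is exactly the caching effect described above.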


📍 Example 2: CDN Confusion — Why the Image Took 3 Seconds

Let’s say you’re using a global CDN like Cloudflare or Akamai. Great.

But:

  • Your origin server is in Oregon (US West)
  • And your user is in Hyderabad, India

Now if the image was never cached at the Asian edge node, the CDN has to fetch it from the origin.

That fetch = Ocean trip again.

This might happen for:

  • Rare images
  • Personalized assets
  • Cache misses

Result? A 2–3 second image load for first-time users. Your first impression is ruined, not because you did anything “wrong,” but because the default behavior crossed an ocean.

🧠 Lesson: Use pre-warming strategies for key CDN assets. Push static content to the CDN edge manually if needed.
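One way to catch this in the wild is to inspect the cache headers the CDN returns. Here’s a small sketch with Python’s requests library; the asset URL is a placeholder, and header names vary by CDN (Cloudflare uses cf-cache-status, many others use x-cache):

```python
# Check whether a CDN served an asset from edge cache or went back to origin.
# Requires: pip install requests
import requests

URL = "https://example.com/hero.jpg"  # placeholder asset URL

resp = requests.get(URL, timeout=10)
for header in ("cf-cache-status", "x-cache", "age"):
    if header in resp.headers:  # requests header lookups are case-insensitive
        print(f"{header}: {resp.headers[header]}")
# HIT  -> served from an edge node near you
# MISS -> the edge fetched from origin, possibly across an ocean
```

A handy side effect: fetching an asset through the CDN also warms the edge that served you, which is the crudest possible pre-warming strategy.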


📍 Example 3: Disaster Recovery That’s Not Really a Recovery

You designed a smart DR setup:

  • Primary app in Azure Central India
  • DR in Azure Canada Central

All good, right?

But when India goes down, users now:

  • Send requests across the entire planet
  • Bounce off multiple oceanic cables
  • Get higher latency and maybe fail at compliance (if your app has data sovereignty needs)

Also:

  • What if a geopolitical issue delays that cable route?
  • Or a natural disaster impacts multiple cables in one region?

This is where your DR looks solid on paper, but weak in the real world.

🧠 Lesson: Build DR not just across cloud regions, but across geography-aware regions. Sometimes, hosting DR in the same continent but different seismic zone is safer than crossing hemispheres.
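Before committing to a DR region, measure instead of assuming. Here’s a rough sketch that compares raw TCP connect times to candidate endpoints; the hostnames are placeholders for wherever your own regional deployments live:

```python
# Compare TCP connect latency to candidate DR endpoints.
import socket
import time

CANDIDATES = {
    "Canada Central": "dr-canada.example.com",  # placeholder hostnames
    "Southeast Asia": "dr-sg.example.com",
}

for region, host in CANDIDATES.items():
    start = time.perf_counter()
    try:
        with socket.create_connection((host, 443), timeout=5):
            elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{region}: TCP connect in {elapsed_ms:.0f} ms")
    except OSError as exc:
        print(f"{region}: unreachable ({exc})")
```

Run it from where your users actually are, not from your laptop, and the geography argument usually makes itself.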


📍 Example 4: GDPR & Geography — Hidden Risk for Global Apps

Here’s a real problem I once saw:

  • A European company hosted its app on Azure East US.
  • They claimed “GDPR compliance” because they deleted user data quickly.

But they didn’t realize:

  • Every time a user in France visited the app, their request (and session) traveled through submarine cables to the US.
  • And logs, session tokens, auth headers—all of that briefly lived on infra outside the EU.

That’s a GDPR violation in spirit and possibly in law.

Even using a third-party API with an endpoint in a different region can cause this.

🧠 Lesson: Geography matters in GDPR. Hosting in the EU is not enough. You need to ask: “Is any traffic or metadata crossing borders via these ocean cables?” Use tools like Azure Network Watcher, plus your cloud provider’s data residency documentation, to verify paths.

Or go deeper: run Wireshark + Traceroute and prove it.
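Here’s a rough way to start that investigation from a script. It shells out to traceroute and reverse-resolves each hop; router hostnames often embed city or airport codes, which hints at where your packets actually travel. The endpoint is a placeholder, and this is a sketch for exploration, not legal proof of data residency:

```python
# Run traceroute and reverse-resolve each hop to spot geographic hints.
import platform
import re
import socket
import subprocess

TARGET = "api.example.com"  # placeholder third-party endpoint

cmd = ["tracert", TARGET] if platform.system() == "Windows" else ["traceroute", TARGET]
output = subprocess.run(cmd, capture_output=True, text=True, timeout=120).stdout

for hop_ip in sorted(set(re.findall(r"\d+\.\d+\.\d+\.\d+", output))):
    try:
        hostname = socket.gethostbyaddr(hop_ip)[0]
    except OSError:
        hostname = "(no reverse DNS)"
    print(f"{hop_ip:15}  {hostname}")
```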


📍 A Mistake I Made (And What It Taught Me)

Years ago, we had a site hosted in a US region. Seemed fast enough for everyone.

Until one day, a submarine cable between Asia and US West was down.

Suddenly:

  • Users in India saw 5–8 second delays
  • Some pages wouldn’t load at all
  • Our uptime monitors (in the US) showed green

But the actual users? They were stuck.

That day I learned: Uptime without geography awareness is a lie.
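Since then, my rule has been: monitor from where the users are, not from where the servers are. Here’s a minimal sketch of a latency-aware health check, assuming a placeholder health endpoint; the point is to run the same script from probes in several countries and compare the numbers:

```python
# A monitor is only as honest as its vantage point.
# Requires: pip install requests
import time
import requests

URL = "https://myapp.example.com/health"  # placeholder endpoint
THRESHOLD_MS = 2000  # what users in *this* region would call slow

start = time.perf_counter()
try:
    resp = requests.get(URL, timeout=10)
    elapsed_ms = (time.perf_counter() - start) * 1000
    status = "OK" if resp.ok and elapsed_ms < THRESHOLD_MS else "DEGRADED"
    print(f"{status}: HTTP {resp.status_code} in {elapsed_ms:.0f} ms")
except requests.RequestException as exc:
    print(f"DOWN: {exc}")
```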


📍 This Isn’t Theory. This Is What the Best Engineers Know

Some of the smartest teams I’ve worked with now:

  • Run traceroute from each country to see how their app loads
  • Choose cloud regions not just by “cost” but by cable proximity
  • Keep a dashboard of major submarine cable outages
  • Choose DNS/CDN/DR based on latency maps, not just checkboxes
  • Design data flow per compliance region, not just per availability zone

It’s not paranoia. It’s preparedness.


🌐 See It With Your Own Eyes (Do This Today)

Want to feel the ocean?

Run this command from your terminal:

```
tracert www.google.com      # Windows
traceroute www.google.com   # macOS / Linux
```

Notice the hops? Some of them are in different countries. Some hostnames even hint at coastal cable landing stations. This is your app traveling continents, in real time.

Try this from:

  • Your home Wi-Fi
  • A VPS in another country

You’ll see the difference geography makes.


💡 Final Takeaways (That No One Told Me Early)

  • The cloud lives under oceans, not in the sky
  • DNS and CDN performance depends on where things live
  • DR must consider earth geography, not just cloud regions
  • Traceroute is your friend. So is submarine cable awareness
  • GDPR, HIPAA, RBI and other compliance frameworks demand geo-awareness
  • Cloud decisions are better when made with a map in your mind

TL;DR

“Understanding the internet’s geography is no longer optional. It’s your edge.”

Next time someone asks what region to deploy in, don’t just open the AWS/Azure pricing calculator.

Open a map.

And maybe, zoom into the ocean.


🔗 Want a visual map of major submarine cables? Visit: submarinecablemap.com

It’s not art. It’s architecture.


🧠 Written for “The True Code of a Complete Engineer” — Episode 3
Want to catch the rest of the journey?
👉 Browse the full series here


👉 Follow me on LinkedIn for more insights like this.
▶️ And check out my YouTube channel if you prefer quick audio versions of this series.