The internet commons under siege: Why 33 Tbps DDoS attacks are everyone's problem


On October 9, Nokia Deepfield observed a 33 terabits-per-second (Tbps) distributed denial-of-service (DDoS) attack against a gaming provider. For context, the total capacity of many national internet backbones falls in the same terabit range. An attack of this magnitude isn't just targeting a server or even a network — it's consuming infrastructure at internet scale.

This wasn't an isolated incident. As Brian Krebs reported, the Aisuru botnet has been systematically breaking DDoS records, with a 29.6 Tbps attack recorded just days before, on October 6. The trajectory is clear: what was unthinkable two years ago is now routine. What was routine six months ago is now inadequate for defense. We're in a new regime, and most of our defensive thinking hasn't caught up.

The hidden costs of internet-scale attacks

What makes these attacks fundamentally different from what we have seen before is the collateral damage. When someone launches a 33 Tbps attack at a single IP address, they're not just attacking that target. They're attacking the internet infrastructure itself.

The attack traffic has to travel through the internet's shared infrastructure: peering links, internet exchange points (IXPs), backbone networks. At these volumes, all that infrastructure becomes congested. Peering links saturate. Internet exchange (IX) ports max out. And suddenly, traffic that has nothing to do with the attack starts getting dropped. Legitimate users can't access legitimate services because the internet pipes are full of “garbage” traffic.

This is a classic tragedy of the commons. The attacker bears almost none of the expense. The target bears some. But most of the burden falls on everyone else who shares the infrastructure: other networks, other users, other services. The costs are externalized across the entire internet.

The problem inside your network

There's another cost that's getting less attention: the impact on the networks hosting the attacking devices. According to Krebs's investigation, a significant portion of Aisuru's 300,000 compromised devices is hosted within communications service provider (CSP) networks in the U.S. One security researcher observed 500 gigabits per second of attack traffic leaving a single provider's network during an Aisuru campaign.

It’s worth noting the U.S. doesn't have the most bots. It has bots with more bandwidth. Average U.S. residential connections are faster than those in most countries, so each compromised device can push more attack traffic. As residential broadband speeds increase globally, this geographic concentration will shift. The problem is becoming more distributed and more global. 
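To see why per-device bandwidth matters as much as bot count, a rough back-of-envelope calculation helps. Here is a minimal sketch in Python, using the roughly 300,000 devices from the Aisuru reporting above; the upstream rates are illustrative assumptions, not measured values:

# Back-of-envelope: aggregate attack capacity as a function of
# per-bot upstream bandwidth. The bot count comes from the reporting
# above; the upstream rates are assumed for illustration.
BOTS = 300_000

def aggregate_tbps(bots: int, upstream_mbps: float) -> float:
    """Aggregate capacity in Tbps if every bot saturates its uplink."""
    return bots * upstream_mbps / 1_000_000  # Mbps -> Tbps

for mbps in (10, 50, 100, 200):
    print(f"{BOTS:,} bots x {mbps} Mbps upstream ~= {aggregate_tbps(BOTS, mbps):.1f} Tbps")

At an average of 10 Mbps upstream, 300,000 bots top out around 3 Tbps. At 100 Mbps, the same botnet reaches roughly 30 Tbps — the scale of the attacks described above. Faster residential uplinks, not more bots, are the multiplier to watch.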

Think about what that means for the service providers. Their customers' devices (compromised routers, security cameras, DVRs) are all blasting DDoS traffic outbound. That traffic consumes bandwidth that could otherwise serve legitimate purposes. It strains infrastructure like carrier-grade network address translation (CG-NAT) gateways that weren't designed to handle this kind of load. It generates complaints from peer networks about congestion.

Service providers didn't launch these attacks. Their customers didn't knowingly launch these attacks. But they are paying the costs: in infrastructure strain, in customer support calls, in degraded service quality, and in relationships with peer networks.

This is a textbook case of misaligned incentives. The manufacturers who build insecure IoT devices don't bear the costs when those devices are compromised. The customers who buy them don't have the expertise to secure them (or even know they need to). The broadband service providers who host them can't easily control what their customers do with their connections. And the attackers operate with near impunity from jurisdictions where enforcement is difficult or non-existent.

Why centralization makes things worse

The obvious response is to scale up defenses. If attacks are exceeding 30 Tbps, then we need providers who can absorb 30+ Tbps attacks, right? Just funnel all traffic through a handful of mega-providers with the capacity to handle these volumes.

This is an intuitive argument, but it's wrong for several reasons.

First, it would concentrate traffic through a small number of chokepoints. That reduces resilience. If one of these mega-providers runs into trouble (a technical failure, a misconfiguration, or becoming a target itself), the impact would be even more widespread.

Second, this is economically unsustainable for most organizations. Diverting all your traffic through a third-party scrubbing center is expensive. Smaller organizations and those in regions where these services aren't well-established can't afford it.

Third, and most importantly, this approach would not solve the underlying problem—it's like building bigger emergency rooms instead of preventing accidents. The attack traffic would still flood the internet. It would still congest shared infrastructure. It would still create collateral damage. You would just be dealing with it downstream instead of at the source.

A better approach: Distributed, network-native defense

Instead, we need defense built into the network fabric itself. That means three things:

First, service providers need to monitor and control outbound attack traffic. This isn't about censorship or restricting what users can do. It's about detecting when thousands of devices on your network are simultaneously blasting traffic at the same target (which is never legitimate behavior) and taking action. The recent attacks make clear that effective and universal outbound DDoS attack suppression can no longer be treated as optional infrastructure.
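What does that detection look like in practice? Here is a minimal sketch, assuming hypothetical flow records of the kind routers export; the FlowRecord fields and the thresholds are illustrative assumptions, not any particular vendor's schema:

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str           # on-net subscriber device
    dst_ip: str           # external destination
    bytes_per_sec: float  # observed outbound rate

MIN_SOURCES = 1_000        # distinct local devices converging on one target
MIN_AGGREGATE_GBPS = 1.0   # combined outbound rate toward that target

def suspicious_targets(flows: list[FlowRecord]) -> list[str]:
    """Flag destinations that many local devices are flooding at once."""
    sources = defaultdict(set)
    gbps = defaultdict(float)
    for f in flows:
        sources[f.dst_ip].add(f.src_ip)
        gbps[f.dst_ip] += f.bytes_per_sec * 8 / 1e9  # bytes/s -> Gbps
    return [dst for dst in sources
            if len(sources[dst]) >= MIN_SOURCES
            and gbps[dst] >= MIN_AGGREGATE_GBPS]

In a real deployment, logic like this would run over sampled NetFlow/IPFIX telemetry and trigger mitigation such as access-control lists or rate limits. But the core signal is the same: thousands of distinct local sources converging on a single destination at high aggregate rate.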

Second, internet exchange points and transit providers need to become active participants in defense. Their infrastructure sits at natural chokepoints where DDoS attack traffic can be detected and filtered before it spreads. NL-ix, a major European internet exchange, has deployed exactly this kind of capability, using network-native DDoS protection that filters attacks inline without diverting traffic to remote scrubbing centers (and has even demonstrated these capabilities in a live demo).
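Conceptually, the inline model reduces to a match-and-drop decision applied in the forwarding path, with no traffic diverted anywhere. This sketch loosely mirrors the match fields of BGP Flowspec (RFC 8955); the rule values and packet dictionaries are illustrative assumptions, not a real router's data plane:

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class FilterRule:
    dst_ip: str                      # the victim address, e.g. a single /32
    protocol: int                    # 17 = UDP, 6 = TCP
    dst_port: Optional[int] = None   # None matches any port

def matches(rule: FilterRule, pkt: dict) -> bool:
    return (pkt["dst_ip"] == rule.dst_ip
            and pkt["protocol"] == rule.protocol
            and (rule.dst_port is None or pkt["dst_port"] == rule.dst_port))

def forward(pkt: dict, rules: list[FilterRule]) -> bool:
    """Drop attack traffic in place; everything else keeps its normal path."""
    return not any(matches(r, pkt) for r in rules)

rules = [FilterRule(dst_ip="203.0.113.10", protocol=17)]
print(forward({"dst_ip": "203.0.113.10", "protocol": 17, "dst_port": 443}, rules))  # False: dropped
print(forward({"dst_ip": "198.51.100.7", "protocol": 6, "dst_port": 443}, rules))   # True: forwarded

The point of the design is placement: because the filter runs where the traffic already flows, legitimate packets are never rerouted, and the attack is absorbed before it can congest links further downstream.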

Third, we need everyone in this global ecosystem to work together to make the internet more secure. Manufacturers need to build devices that are secure by default, not because they're altruistic, but because the market and their customers demand it. Service providers need tools and support to detect and mitigate outbound DDoS attacks. And users need better defaults: devices that are secure out of the box, not after reading a 50-page hardening manual.

What needs to happen

The 33 Tbps attack we observed isn't just a milestone. It's a signal that the current approach isn't working. We can't scale our way out of this by building bigger scrubbing centers. We can't ignore it and hope the problem goes away.

What we need is a shift in our thinking. 

DDoS defense needs to be distributed, not centralized. It needs to be proactive, not reactive. It needs to address the economics and incentives, not just the technical capabilities. That means service providers prioritizing outbound controls, internet exchanges and transit providers deploying inline filtering, and manufacturers building more secure devices. None of this is easy. 

The tragedy of the internet commons is notoriously difficult to solve: it requires coordination, shared standards, and collective action in an ecosystem that prizes autonomy. But the alternative is watching the internet's shared infrastructure buckle under increasingly massive DDoS attacks while we wait for someone else to solve the problem. The internet commons doesn't defend itself. Either we build distributed defense into the fabric of our networks, or we accept that 33 Tbps attacks are just the beginning.

Learn more about what is fueling the rise of these hyperscale DDoS attacks and about the latest cybersecurity threats in the Nokia Threat Intelligence Report 2025.

Jérôme Meyer

About Jérôme Meyer

Jérôme is a Security Researcher at Nokia Deepfield, where he helps develop the Deepfield network security and analytics portfolio. He is also the co-creator of Nokia’s OUTstanding Leaders, a leadership development program empowering LGBT+ leaders across Nokia and its ecosystem of customers, partners, and suppliers.

He graduated with a Master’s degree from the Institut National des Sciences Appliquées in Lyon, France.

Connect with Jérôme on LinkedIn
