💾 Archived View for dioskouroi.xyz › thread › 29419448 captured on 2021-12-03 at 14:04:38. Gemini links have been rewritten to link to archived content


Comcast reduced “working latency” 90% by deploying AQM during Covid

Author: dtaht

Score: 19

Comments: 9

Date: 2021-12-02 17:37:23

Web Link

________________________________________________________________________________

winternett wrote at 2021-12-02 18:09:50:

I got Comcast internet in September: the 200 Mbps plan for $50 per month. For the first two months, speed tests averaged 150 Mbps down and 6 Mbps up; then speed degraded to 50 Mbps down and 3 Mbps up, somehow, and isn't going back up. Criminals.

The main reason I didn't have much choice was that Verizon was out of stock on routers and I needed a connection immediately. Verizon was more expensive anyway, and their plans were tied to specific hardware, which would make it more difficult and costly to upgrade later on.

I also had an order in for T-Mobile's home internet at $50 per month, where speed tests averaged 50 Mbps down and 3 Mbps up...

I personally believe that all services are seeking advantage with any customer plan, so my solution is to keep both services and just switch to whichever network is more reliable. As a result, my Internet is less dependent on a single point of failure, with my Verizon phone as a potential backup hotspot.

Plain and simple, it's fleecing and theft that the government has permitted over a long span of time, and it will not change. All of the plans and terms of service are changed at the will of the service providers. They learned their tactics from mobsters and drug dealers... I'm thoroughly convinced that descendants of crime organizations run a lot of these companies, to be honest.

May your costs be low and your speed high my friends... Ay... :/

nickysielicki wrote at 2021-12-02 18:22:16:

This is damning evidence that packet shaping is completely necessary and that net neutrality is unrealistic.

Not all packets are created equal: Zoom packets for client A or Call of Duty packets for client B _should_ go ahead of a 30-second chunk of Netflix video for client C, who already has 60 seconds of video in the tank.

The net neutrality solution to this problem is to spend billions of dollars building out redundant and wide pipes so that everyone can go fast all the time.

Comcast solved it with router configuration.

traverseda wrote at 2021-12-02 18:32:53:

There's a way to do this that doesn't completely violate net neutrality: make some mechanism that programs can use to mark connections as bulk-rate or priority, and limit each user's priority bandwidth to maybe 10% of their rated max throughput.

That requires some new technology, but letting the user _choose_ when a connection should be considered low-latency priority is important to keep this kind of system from turning into a bunch of consumer-unfriendly back-room dealing. An opaque system with unclear rules about which connections get which priority level would also make it very difficult for new businesses to get their start; eventually, I suspect you'd only be considered high priority if you're a big business, or if you're mimicking the characteristics of some big business's application.
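For what it's worth, a mechanism like this already exists at the IP layer: applications can mark packets via the DSCP bits of the TOS byte, which is what diffserv uses. A minimal sketch (assuming Linux/macOS, where `socket.IP_TOS` is exposed; whether any network honors the marking is a separate question entirely):

```python
import socket

# DSCP "Expedited Forwarding" (EF, decimal 46) is the conventional
# marking for latency-sensitive traffic such as VoIP. DSCP occupies the
# upper 6 bits of the IP TOS byte, so EF on the wire is 46 << 2 == 184.
EF_TOS = 46 << 2

def mark_low_latency(sock: socket.socket) -> int:
    """Mark a socket's traffic as latency-sensitive via the TOS/DSCP byte,
    then read the value back to confirm the kernel accepted it."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
    return sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)

if __name__ == "__main__":
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    print(mark_low_latency(s))  # 184
    s.close()
```

In practice many ISPs bleach or ignore DSCP at their edge, which is part of why the scheme discussed above never took off.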

dtaht wrote at 2021-12-02 19:22:49:

Keeping the queues short with AQM makes every application share more fairly. Low rate applications like gaming and zoom benefit.

QoS, as you describe, has been tried (see diffserv and intserv), and fell down largely because everyone felt _their_ application had priority. Being fair instead, and improving statistical multiplexing in particular, lets 'packets be packets'.
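The "fairness instead of priority" point can be shown with a toy scheduler. This is a simplified round-robin over per-flow queues (real fq_codel uses deficit round robin with byte accounting, not packet counts); note that the sparse VoIP flow gets through early without any priority field at all:

```python
from collections import deque

def fair_schedule(flows):
    """Round-robin over per-flow queues: each flow sends one packet per
    turn, so a sparse flow never waits behind a bulk flow's backlog."""
    out = []
    queues = [deque(f) for f in flows]
    while any(queues):
        for q in queues:
            if q:
                out.append(q.popleft())
    return out

# A bulk flow has 6 packets queued, VoIP has 2; FQ interleaves them.
bulk = ["N1", "N2", "N3", "N4", "N5", "N6"]
voip = ["V1", "V2"]
print(fair_schedule([bulk, voip]))
# ['N1', 'V1', 'N2', 'V2', 'N3', 'N4', 'N5', 'N6']
```

Under a single FIFO, both VoIP packets would have waited behind all six bulk packets; fair queuing gets them out in the first two rounds with no application ever declaring itself "important".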

willcipriano wrote at 2021-12-02 18:51:17:

Most people don't use the whole pipe, I doubt I use 10% of my total available traffic each month and I'm a power user. At 100 Mbit/s you'd have 100 GB per day of high priority traffic to use.

traverseda wrote at 2021-12-02 19:18:20:

I think imagining it as 100 GB of high-priority traffic per day is the wrong way to think about it. I know that in some places in the US, for some reason, you pay for the bandwidth you use instead of for the capacity of your line. Personally, I think that's silly.

With a 100 Mbit/s connection you'd have 10 Mbit/s of high-priority traffic, every second. I feel like an ISP should be able to avoid oversubscribing to the point where you can't even use 10% of your connection at low latency.
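The two framings in this exchange are consistent, as a quick sanity check of the arithmetic shows (illustrative numbers only):

```python
# A 10% priority share of a 100 Mbit/s line, viewed two ways:
line_mbps = 100
share = 0.10

# Rate view: 10 Mbit/s of priority headroom at any instant.
priority_mbps = line_mbps * share

# Volume view: that same rate, sustained for a full day.
seconds_per_day = 86_400
# Megabits/day -> gigabytes/day: divide by 8 for bytes, 1000 for GB.
gb_per_day = priority_mbps * seconds_per_day / 8 / 1000

print(priority_mbps, round(gb_per_day))  # prints: 10.0 108
```

So the "100 GB per day" figure above is roughly right (~108 GB), but as noted, the rate view is the one that matters for latency.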

dtaht wrote at 2021-12-02 19:38:46:

I've given more than a few lectures on how packets and queues and aqm are supposed to work, with very physical demos, using "packets as people", such as this one:

https://blog.apnic.net/2020/01/22/bufferbloat-may-be-solved-...

I'm cited in the article, participated in the study, and have been trying for many, many years to counter this misimpression that "priority", rather than filling the pipes but not the queues, is the answer to better network latency. It's still a tough slog, it seems, and I fear this will be a long day on reddit for me! But with Comcast's now-enormous deployed existence proof, perhaps more will deeply grok it, and we'll see other ISPs move to apply the same technologies so all their users benefit.

dtaht wrote at 2021-12-02 19:34:30:

By multiplexing better, you generally need not differentiate between high and low priority at all. Since I've been trying to convince people of this for years, all I can do is point at resources demonstrating that how we think packets work is frequently wrong, and/or try to provide a demonstrable example.

Let's take VoIP vs Netflix over 60ms. Netflix, like most DASH video traffic from most video services, is very bursty: it tries to grab a chunk of video and, over the course of 1-2 seconds, fills the pipe, while VoIP sends one tiny packet every 20ms. That kind of looks like this:

What you want:

NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN -> you

V V V

What you get:

VVVNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN -> you

What AQM does is keep that induced queue short (roughly 16ms with DOCSIS-PIE). The netflix "pulse" goes away and you end up with

NNNNNNNNNNNVNNNNNNNNNNNNNN -> you

Netflix loads the next segment a tiny fraction slower, but your VoIP call no longer suffers the jitter and latency side effects.
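The latency cost of that queued burst is straightforward to estimate. A toy calculation, with assumed numbers (a 20 Mbit/s bottleneck and a 250 KB DASH burst queued ahead of the VoIP packet; the 16 ms figure is the DOCSIS-PIE target mentioned above):

```python
# Toy scenario: how long does a VoIP packet wait at the bottleneck?
bottleneck_mbps = 20
burst_bytes = 250_000            # DASH burst already sitting in the queue
burst_bits = burst_bytes * 8

# Without AQM, the VoIP packet drains behind the entire burst:
bloated_ms = burst_bits / (bottleneck_mbps * 1_000_000) * 1000

# With an AQM like DOCSIS-PIE holding the standing queue near a 16 ms
# target, the backlog any packet can be stuck behind is bounded:
AQM_TARGET_MS = 16
aqm_ms = min(bloated_ms, AQM_TARGET_MS)

print(bloated_ms, aqm_ms)  # prints: 100.0 16
```

That 100 ms vs 16 ms gap, repeated on every burst, is exactly the jitter the diagram above depicts.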

dtaht wrote at 2021-12-02 19:17:37:

AQM technologies were described as necessary best current practice by the IETF back in the 1990s, and the recommendations were revised in 2015 (RFC 7567):

https://datatracker.ietf.org/doc/rfc7567/

The revision was needed because the algorithm originally chosen (RED) didn't work well enough. Two new algorithms appeared around 2012 (PIE and CoDel/fq_codel) that did.
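The key advance in CoDel over RED is that it reacts to how long packets have been sitting in the queue (sojourn time) rather than to queue length. A simplified sketch of that core test (the real algorithm also uses a control-law drop schedule; `TARGET_MS` and `INTERVAL_MS` are CoDel's published defaults):

```python
TARGET_MS = 5      # acceptable standing queue delay
INTERVAL_MS = 100  # how long delay must persist before acting

def should_drop(sojourn_ms, first_above_ms, now_ms):
    """Simplified CoDel test: drop only when queueing delay has stayed
    above TARGET_MS for at least INTERVAL_MS.
    Returns (drop, new_first_above_ms)."""
    if sojourn_ms < TARGET_MS:
        return False, None                # delay is fine: reset tracking
    if first_above_ms is None:
        return False, now_ms              # start the persistence clock
    if now_ms - first_above_ms >= INTERVAL_MS:
        return True, first_above_ms       # persistent queue: signal sender
    return False, first_above_ms
```

Short bursts (like a DASH chunk) pass through untouched because they drain before the interval expires; only a standing queue draws drops, which is why it needs no tuning per link rate.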

I was very frustrated by the network neutrality "debate", with partisans excluding the idea that the root of the problem was _actually_ a huge technical flaw in how the internet is structured when many flows attempt to coexist:

https://blog.cerowrt.org/post/net_neutrality_customers/

Now solved. Thoroughly. By those two theoretical breakthroughs. Since then, zillions of knowledgeable users and new products (like those from OpenWrt, eero, Google Wifi, and many others) have managed to fix the underlying bufferbloat problem for themselves via "smart queue management" or "optimizing for conferencing and gaming", but that required manual configuration and tweaking. The _right place_ for better bandwidth does indeed lie within the ISPs' shapers and CPE.

And _on by default_.

However, it's not as simple as "Call of Duty or Zoom" going "ahead". Better multiplexing (FQ) and shorter queues (AQM) let those applications' packets mix into the heavier Netflix flows without manual intervention.

Comcast