• benkuhn 3 days ago

    > the median global network and device are both quite slow. This means a lot of the network gains from QUIC are potentially (largely) undone by the slower hardware.

    The rest of the article made reasonable points but this is a really bad argument.

    The cited claim that congestion control was application-limited 50% of the time used bandwidths of either 10Mbps or 50Mbps. Meanwhile, the median mobile device is on a low-end (~15th percentile) 3G connection, which I would guess is around 1Mbps or less; 45% of devices were on 2G in 2017 according to the author's own citation.

    I'm writing this from Senegal, currently on great internet with a 350ms RTT; mobile RTT often hits 1s. At those speeds I really couldn't care less how much time my phone spends decrypting if you can save me roundtrips!
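
    For a rough sense of scale (back-of-the-envelope only; the round-trip counts assume TLS 1.3 and QUIC's standard 1-RTT/0-RTT handshakes, and the RTT is the one above):

        # Handshake cost before the request can even be sent, at a 350ms RTT.
        # Illustrative arithmetic, not a measurement.
        rtt_ms = 350

        handshakes = {
            "TCP + TLS 1.3 (fresh)": 2,   # 1 RTT for TCP, 1 RTT for TLS
            "QUIC (fresh)": 1,            # transport and crypto combined
            "QUIC 0-RTT (resumed)": 0,    # request rides in the first flight
        }

        for name, rtts in handshakes.items():
            print(f"{name}: ~{rtts * rtt_ms} ms of handshake before data")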

  • karmakaze 3 days ago

    This brings up an interesting point: who is QUIC/HTTP3 for? Who benefits, and who doesn't? I was initially going to ask why we don't have wider adoption of larger packet sizes alongside all the increases in bandwidth. If it's feasible, it should be put into the protocol sooner rather than later.

    But on the issue of who it's for, if it helps the less connected be much more connected, then it does get hard to argue against it, unless it leads to a loss of decentralization or some other prime principle.

  • kjeetgill 2 days ago

    I'm no expert, but generally, large packet sizes have all sorts of issues.

    While they decrease some overhead, they increase the exposure to single-bit errors, require larger send/receive and retransmission buffers, and hurt bad connections while only minimally improving good ones.
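
    As a rough illustration of the single-bit-error point (this assumes uniform, independent bit errors, which real links don't strictly follow):

        # Chance a packet arrives corrupted for a given bit error rate (BER).
        def p_corrupted(packet_bytes, ber=1e-7):
            bits = packet_bytes * 8
            return 1 - (1 - ber) ** bits

        for size in (576, 1500, 9000, 64000):
            print(f"{size:>6} B packet: {p_corrupted(size):.2%} chance of corruption")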

    I don't know the full protocol, but if you can decrease round trips you improve latency while also taking a ton of load off all the intermediate systems, lowering the RAM used for buffering, etc. "Buffer bloat" has been an issue for a long time.
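
    The buffering cost scales roughly with the bandwidth-delay product: a sender has to hold everything unacknowledged in case it needs to retransmit. A quick sketch with made-up but plausible figures:

        # Bandwidth-delay product: roughly how much in-flight data a sender
        # must buffer per connection to keep the path full.
        def bdp_bytes(bandwidth_mbps, rtt_ms):
            return bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1000)

        for mbps, rtt in [(1, 350), (10, 350), (50, 50)]:
            print(f"{mbps} Mbps @ {rtt} ms RTT: ~{bdp_bytes(mbps, rtt) / 1024:.0f} KiB in flight")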

  • karmakaze a day ago

    I was just thinking it odd that we're still using the same packet sizes from the 70s despite all the changes in communications technology. Have we just worked around this, or is it actually optimal in some way?

    As for the pros/cons, I'm only suggesting increasing the maximum packet size and using whatever size works for a given connection. I suppose that adds some complexity to negotiation and operating modes. The cost of memory buffers is a good explanation.
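
    For what it's worth, the per-packet header saving from bigger packets looks fairly small on a clean link; the bigger win is presumably fewer packets to process. A rough sketch (Ethernet + IPv4 + TCP headers assumed, no options):

        ETH, IP, TCP = 14, 20, 20        # header sizes in bytes

        for mtu in (1500, 9000):
            payload = mtu - IP - TCP     # application bytes per packet
            on_wire = ETH + mtu
            overhead = (on_wire - payload) / on_wire
            print(f"MTU {mtu}: {payload} B payload, ~{overhead:.1%} header overhead")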

  • melan13 2 days ago

    The way they think they can fight amplification DDoS attacks is a joke. They need to do more research into how to beat layer 7 attacks before crying victory with the release version.
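
    For context, what that mechanism actually is (my reading of RFC 9000, Section 8; the sketch below is illustrative, not any real implementation): before a client's address is validated, a QUIC server may send at most three times the bytes it has received from that address, which caps the amplification factor.

        # Sketch of QUIC's anti-amplification limit for an unvalidated path.
        class UnvalidatedPath:
            AMPLIFICATION_LIMIT = 3   # per RFC 9000, Section 8

            def __init__(self):
                self.bytes_received = 0
                self.bytes_sent = 0

            def on_datagram_received(self, size):
                self.bytes_received += size

            def can_send(self, size):
                # Stay within 3x received until the address is validated.
                return self.bytes_sent + size <= self.AMPLIFICATION_LIMIT * self.bytes_received

            def on_datagram_sent(self, size):
                self.bytes_sent += size

        path = UnvalidatedPath()
        path.on_datagram_received(1200)                    # client's Initial packet
        print(path.can_send(3600), path.can_send(3601))    # True, False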