[clug] Emitting Source Quench When Queues Are Full?

Alex Satrapa grail at goldweb.com.au
Wed Oct 1 20:56:42 EST 2003


On Wednesday, October 1, 2003, at 09:50, Matthew Hawkins wrote:

> Why not configure your QoS stuff (with tc, etc) to shape your bandwidth
> the way you need it?  ICMP source quench is a little pointless when some
> old legacy systems like Microsoft Windows ignore it and continue
> flooding you.

The QoS stuff ultimately ends up dropping packets in order to force the 
remote end to slow down.
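
Something like this HTB setup is what I mean by "the QoS stuff" -- a 
rough sketch only, with eth0, the class numbers and the rates standing 
in for whatever the real link looks like:

  # Root HTB qdisc; anything unclassified falls into class 1:20 (bulk).
  tc qdisc add dev eth0 root handle 1: htb default 20

  # Cap the parent class a little under the real link rate, so the
  # queue builds up here rather than in the modem.
  tc class add dev eth0 parent 1: classid 1:1 htb rate 480kbit ceil 480kbit

  # Interactive traffic gets a guaranteed slice, bulk gets the rest.
  tc class add dev eth0 parent 1:1 classid 1:10 htb \
      rate 160kbit ceil 480kbit prio 0
  tc class add dev eth0 parent 1:1 classid 1:20 htb \
      rate 320kbit ceil 480kbit prio 1

  # SFQ on the leaves so one bulk flow can't starve the others.
  tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
  tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10

But once the shaped class fills up, the only lever tc has left is to 
drop.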

ECN breaks stuff (many broken routers and firewalls treat the ECN bits 
in the ToS byte as invalid and drop the packets or reset the 
connection). I wonder if I can selectively disable ECN for certain 
connections... or even certain routes?  Hmm.
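
The only control I've found so far is the global sysctl, which is 
per-host rather than per-connection or per-route -- a quick sketch, 
assuming the usual /proc interface:

  # See whether ECN is being negotiated on outgoing TCP connections.
  cat /proc/sys/net/ipv4/tcp_ecn

  # Turn it off for the whole host.
  sysctl -w net.ipv4.tcp_ecn=0

  # ...or put "net.ipv4.tcp_ecn = 0" in /etc/sysctl.conf so it sticks
  # across reboots.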

I'm hoping that enough stuff out there will honour source quench, so 
that I don't end up having to drop packets. My Windows boxen will be 
collecting most of their downloads through a proxy server, which is 
running a real operating system (Linux for now, *BSD when I start 
getting adventurous), so source quench *will* work, if I can figure out 
how to get the SQ messages onto the wire automagically.

I'd really like to be able to delay ACK packets "enough" to force the 
remote end to slow down too (i.e. make it run out of transmit window) 
without triggering retransmissions.
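
The closest I've got is picking the bare ACKs out with a u32 filter 
(this is essentially the match the wondershaper scripts use) and 
steering them into a class of their own -- 1:30 here is hypothetical 
and would need to exist.  Actually *delaying* that class by a fixed 
amount is the bit I don't have an answer for yet.

  # TCP (protocol 6), 20-byte IP header, total length under 64 bytes,
  # and only the ACK flag set -- i.e. a bare acknowledgement.
  tc filter add dev eth0 parent 1: protocol ip prio 10 u32 \
      match ip protocol 6 0xff \
      match u8 0x05 0x0f at 0 \
      match u16 0x0000 0xffc0 at 2 \
      match u8 0x10 0xff at 33 \
      flowid 1:30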

I'd prefer to try source quench when the buffers become, say, 
half-full, then start dropping packets when they reach 75% - most 
likely by reclassifying "bandwidth hogging" connections into a RED 
queue. I love the euphemism "Random Early Detection".  Dammit... the 
packet's made it three quarters of the way around the world, only to be 
dropped at the last hop before my connection?  Hello?!?!
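
For the record, the RED leaf I'm picturing looks something like this 
(the numbers are pure guesses for a ~500kbit link, reusing the 
hypothetical 1:20 bulk class from the sketch above in place of its SFQ 
leaf):

  # Random dropping starts once the average queue passes "min" (half of
  # the 60KB hard cap here); above "max" (75%) everything gets dropped.
  tc qdisc add dev eth0 parent 1:20 handle 20: red \
      limit 60000 min 30000 max 45000 avpkt 1000 \
      burst 35 bandwidth 480kbit probability 0.02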

On Wednesday, October 1, 2003, at 06:39, Sam Couter wrote:

> Retransmission will also cause the sending TCP stack to back off. Is 
> this not the goal?

Absolutely not!  The goal is to avoid dropping packets *or* having 
packets retransmitted.  Either way ends up wasting bandwidth.

I'd rather waste 10MB of RAM on queues and buffers than drop a single 
packet.  Ideally, any packet-dropping would be done by my upstream, 
before they try to shove packets down the skinny pipe downstream to my 
network.  I don't see the sense in shaping traffic over my link by 
dropping packets once they've already been sent to this network.  
Dropping packets at my end seems an awful waste of bandwidth for every 
intermediate network along the way.
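
In tc terms that's just a big dumb byte-FIFO on the bulk leaf instead 
of anything clever -- again a sketch, reusing the hypothetical 1:20 
class:

  # Queue up to ~10MB of packets rather than dropping them, at the cost
  # of whatever latency builds up in that queue.
  tc qdisc add dev eth0 parent 1:20 handle 20: bfifo limit 10000000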

I would also like to try shaping traffic before it leaves the remote 
network - if the remote end starts off slow and stays slow, usage of my 
local network link can be more fairly allocated.

Perhaps one day people will be using Fast TCP (or TCP Vegas, I think 
it's called), and I can start playing silly buggers with delay queues 
for ACK packets ;)

Alex



