Updates using idle bandwidth

Bruno Wolff III bruno at wolff.to
Wed Mar 12 15:50:42 UTC 2008


On Wed, Mar 12, 2008 at 01:59:46 -0700,
  Andrew Farris <lordmorgul at gmail.com> wrote:
> 
> I was thinking more along the lines of just the local machine's behavior 
> with different connections having higher or lower priority for outbound 
> (which is often what hurts response time the most for slower connections 
> while longer running transfers occur).  I really don't know how effective 
> QOS is, so it may be a bad way to approach this issue.

You would still need to write the rules that do the shaping. However, some
applications already set QoS marks (particularly to distinguish interactive
from bulk traffic), so those marks can be useful to classify on. For outbound
packets that works reasonably well; for inbound traffic, not so much.
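
For what it's worth, here is a rough sketch of how an application can mark
its own traffic from C on Linux using the standard IP_TOS socket option;
whether the shaping rules on the box actually classify on those marks, and
whether anything upstream honors them, is a separate question.

  /* Mark a socket's outbound packets as bulk or interactive via the
   * IP TOS byte.  Local shaping rules can then classify on this value. */
  #include <netinet/in.h>
  #include <netinet/ip.h>
  #include <sys/socket.h>

  int mark_bulk(int sock)
  {
      int tos = IPTOS_THROUGHPUT;   /* bulk transfer, e.g. an update download */
      return setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
  }

  int mark_interactive(int sock)
  {
      int tos = IPTOS_LOWDELAY;     /* latency-sensitive, e.g. a page request */
      return setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
  }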

> If an update connection had low priority for the bandwidth resources, that 
> connection should be postponed whenever a higher priority connection wants 
> to push outbound traffic.  A browser then would get to send its page 
> requests or acks ahead of running transfer packets from the update utility; 
> the result would be a much more responsive browser while still using most 
> of the available bandwidth.  Whether the QOS flags are being 
> stripped/mangled once the traffic leaves the local machine should not 
> really hurt that improvement would it?

It makes it hard to handle inbound traffic, which you may also need to
manage, though in a particular case that may not be the bottleneck. In your
case it looks like you will need to throttle inbound traffic, so this is
relevant. The way this shaping is done, you either drop some packets from the
connections you want to slow down, or you set bits in the acknowledgement
that tell the sender to slow down as if you had dropped the packet (explicit
congestion notification). Not all network stacks support the latter feature,
and I don't know what fraction do these days; in practice it may be nearly
all of them.

So you aren't blocking outbound requests in order to prevent applications
from retrieving data. That kind of approach would be a lot different and
would have to be customized to each application.
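
To make the drop-or-mark choice concrete, here is a toy sketch; the struct
and function are made up for illustration and are not any real kernel
interface.

  /* Toy shaper decision: when a flow exceeds its allotted rate, either
   * mark the packet's ECN bits (if the connection negotiated ECN) or
   * drop it.  Either way the TCP sender is expected to back off. */
  enum verdict { PASS, MARK_ECN, DROP };

  struct flow {
      int over_rate;      /* flow is exceeding its configured share */
      int ecn_capable;    /* both ends negotiated ECN */
  };

  enum verdict shape(const struct flow *f)
  {
      if (!f->over_rate)
          return PASS;
      if (f->ecn_capable)
          return MARK_ECN;  /* signal congestion without losing data */
      return DROP;          /* fall back to dropping the packet */
  }
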
> 
> I'm just thinking it may not require full end-to-end to enjoy some benefit. 
> The incoming connection would not be slowed or postponed to let the browser 
> respond, but by not acking what comes in until the outbound clears up I 
> think it might help anyway.

You don't really want to drop all packets, just some. The sender is supposed
to back off, reducing its send rate exponentially until packets stop getting
dropped. If you block all of them, the application will likely assume the
connection has been broken and stop working. Generally, throttling bulk
traffic and giving priority to low-latency packets should work fairly well.
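
The priority part can be pictured as nothing more than a strict two-queue
scheduler on the outbound side; a rough sketch, again just an illustration
and not how the kernel's queueing disciplines are actually written:

  /* Strict-priority dequeue: interactive (low-delay) packets always go
   * out before bulk packets, so a browser's requests and ACKs are not
   * stuck behind an update download. */
  #include <stddef.h>

  struct pkt { int len; };

  struct queue {
      struct pkt *items;
      int head, tail;
  };

  static int empty(const struct queue *q) { return q->head == q->tail; }

  struct pkt *next_to_send(struct queue *interactive, struct queue *bulk)
  {
      if (!empty(interactive))
          return &interactive->items[interactive->head++];
      if (!empty(bulk))
          return &bulk->items[bulk->head++];
      return NULL;   /* nothing queued */
  }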
