greg@linuxpower.cx
remco@virtu.nl
kleptog@cupid.suninternet.com
paulsch@us.ibm.com
jasper@spaans.ds9a.nl
Revision History
Revision $Revision: 1.12 $    $Date: 2002/07/20 14:44:26 $
DocBook Edition
You can always reach us by writing to the HOWTO team. However, please consider posting to the mailing list (see the relevant section) if you have questions which are not directly related to this HOWTO. We are not a free helpdesk, but we will often answer questions asked on the list.
Before losing your way in this HOWTO, if all you want to do is simple traffic shaping, skip everything and head to the Other possibilities chapter, and read about CBQ.init.
Here are some other references which might help teach you more:
Very nice introduction, explaining what a network is, and how it is connected to other networks.
Great stuff, although very verbose. It teaches you a lot of stuff that's already configured if you are able to connect to the Internet. Should be located in /usr/doc/HOWTO/NET3-4-HOWTO.txt but can also be found online.
The canonical location for the HOWTO is here.
We now have anonymous CVS access available to the world at large. This is good in a number of ways. You can easily upgrade to newer versions of this HOWTO and submitting patches is no work at all.
Furthermore, it allows the authors to work on the source independently, which is good too.
$ export CVSROOT=:pserver:anon@outpost.ds9a.nl:/var/cvsroot
$ cvs login
CVS password: [enter 'cvs' (without the quotes)]
$ cvs co 2.4routing
cvs server: Updating 2.4routing
U 2.4routing/2.4routing.sgml
If you spot an error, or want to add something, just fix it locally, run cvs diff -u, and send the result off to us.
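For instance, a contribution session might look like this (a minimal sketch; the file name is the one from the checkout above):

$ cd 2.4routing
$ vi 2.4routing.sgml            # make your fix
$ cvs diff -u > my-fix.diff     # unified diff, ready to mail to us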
A Makefile is supplied which should help you create postscript, dvi, pdf, html and plain text. You may need to install docbook, docbook-utils, ghostscript and tetex to get all formats.
The authors receive an increasing amount of mail about this HOWTO. Because of the clear interest of the community, it has been decided to start a mailing list where people can talk to each other about Advanced Routing and Traffic Control. You can subscribe to the list here.
It should be pointed out that the authors are very hesitant to answer questions not asked on the list. We would like the archive of the list to become some kind of knowledge base. If you have a question, please search the archive first, and then post to the mailing list.
We will be focusing mostly on what is possible by combining netfilter and iproute2.
With iproute2, tunnels are an integral part of the tool set.
This new framework makes it possible to clearly express features previously beyond Linux's reach.
You can also try here for the latest version.
Some parts of iproute require you to have certain kernel options enabled. It should also be noted that all releases of RedHat up to and including 6.2 come without most of the traffic control features in the default kernel.
RedHat 7.2 has everything in by default.
Also make sure that you have netlink support, should you choose to roll your own kernel. Iproute2 needs it.
The ip tool is central, and we'll ask it to display our interfaces for us.
[ahu@home ahu]$ ip link list
1: lo: <LOOPBACK,UP> mtu 3924 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: dummy: <BROADCAST,NOARP> mtu 1500 qdisc noop
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: eth0: <BROADCAST,MULTICAST,PROMISC,UP> mtu 1400 qdisc pfifo_fast qlen 100
    link/ether 48:54:e8:2a:47:16 brd ff:ff:ff:ff:ff:ff
4: eth1: <BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc pfifo_fast qlen 100
    link/ether 00:e0:4c:39:24:78 brd ff:ff:ff:ff:ff:ff
3764: ppp0: <POINTOPOINT,MULTICAST,NOARP,UP> mtu 1492 qdisc pfifo_fast qlen 10
    link/ppp
It does show us the MAC addresses though, the hardware identifier of our ethernet interfaces.
[ahu@home ahu]$ ip address show
1: lo: <LOOPBACK,UP> mtu 3924 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
2: dummy: <BROADCAST,NOARP> mtu 1500 qdisc noop
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: eth0: <BROADCAST,MULTICAST,PROMISC,UP> mtu 1400 qdisc pfifo_fast qlen 100
    link/ether 48:54:e8:2a:47:16 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/8 brd 10.255.255.255 scope global eth0
4: eth1: <BROADCAST,MULTICAST,PROMISC,UP> mtu 1500 qdisc pfifo_fast qlen 100
    link/ether 00:e0:4c:39:24:78 brd ff:ff:ff:ff:ff:ff
3764: ppp0: <POINTOPOINT,MULTICAST,NOARP,UP> mtu 1492 qdisc pfifo_fast qlen 10
    link/ppp
    inet 212.64.94.251 peer 212.64.94.1/32 scope global ppp0
You may also note 'qdisc', which stands for Queueing Discipline. This will become vital later on.
[ahu@home ahu]$ ip route show
212.64.94.1 dev ppp0  proto kernel  scope link  src 212.64.94.251
10.0.0.0/8 dev eth0  proto kernel  scope link  src 10.0.0.1
127.0.0.0/8 dev lo  scope link
default via 212.64.94.1 dev ppp0
For reference, this is what the old route utility shows us:
[ahu@home ahu]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
212.64.94.1     0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
10.0.0.0        0.0.0.0         255.0.0.0       U     0      0        0 eth0
127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
0.0.0.0         212.64.94.1     0.0.0.0         UG    0      0        0 ppp0
ARP is the Address Resolution Protocol as described in RFC 826. ARP is used by a networked machine to resolve the hardware location/address of another machine on the same local network. Machines on the Internet are generally known by their names which resolve to IP addresses. This is how a machine on the foo.com network is able to communicate with another machine which is on the bar.net network. An IP address, though, cannot tell you the physical location of a machine. This is where ARP comes into the picture.
Let's take a very simple example. Suppose I have a network composed of several machines. Two of the machines which are currently on my network are foo with an IP address of 10.0.0.1 and bar with an IP address of 10.0.0.2. Now foo wants to ping bar to see that he is alive, but alas, foo has no idea where bar is. So when foo decides to ping bar he will need to send out an ARP request. This ARP request is akin to foo shouting out on the network "Bar (10.0.0.2)! Where are you?" As a result, every machine on the network will hear foo shouting, but only bar (10.0.0.2) will respond. Bar will then send an ARP reply directly back to foo, which is akin to bar saying, "Foo (10.0.0.1), I am here at 00:60:94:E9:08:12." After this simple transaction used to locate his friend on the network, foo is able to communicate with bar until he (his ARP cache) forgets where bar is (typically after 15 minutes on Unix).
Now let's see how this works. You can view your machine's current ARP/neighbor cache/table like so:
[root@espa041 /home/src/iputils]# ip neigh show
9.3.76.42 dev eth0 lladdr 00:60:08:3f:e9:f9 nud reachable
9.3.76.1 dev eth0 lladdr 00:06:29:21:73:c8 nud reachable
As you can see my machine espa041 (9.3.76.41) knows where to find espa042 (9.3.76.42) and espagate (9.3.76.1). Now let's add another machine to the arp cache.
[root@espa041 /home/paulsch/.gnome-desktop]# ping -c 1 espa043
PING espa043.austin.ibm.com (9.3.76.43) from 9.3.76.41 : 56(84) bytes of data.
64 bytes from 9.3.76.43: icmp_seq=0 ttl=255 time=0.9 ms

--- espa043.austin.ibm.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.9/0.9/0.9 ms

[root@espa041 /home/src/iputils]# ip neigh show
9.3.76.43 dev eth0 lladdr 00:06:29:21:80:20 nud reachable
9.3.76.42 dev eth0 lladdr 00:60:08:3f:e9:f9 nud reachable
9.3.76.1 dev eth0 lladdr 00:06:29:21:73:c8 nud reachable
As a result of espa041 trying to contact espa043, espa043's hardware address/location has now been added to the arp/neighbor cache. So until the entry for espa043 times out (as a result of no communication between the two) espa041 knows where to find espa043 and has no need to send an ARP request.
Now let's delete espa043 from our arp cache:
[root@espa041 /home/src/iputils]# ip neigh delete 9.3.76.43 dev eth0
[root@espa041 /home/src/iputils]# ip neigh show
9.3.76.43 dev eth0  nud failed
9.3.76.42 dev eth0 lladdr 00:60:08:3f:e9:f9 nud reachable
9.3.76.1 dev eth0 lladdr 00:06:29:21:73:c8 nud stale
Now espa041 has again forgotten where to find espa043 and will need to send another ARP request the next time he needs to communicate with espa043. You can also see from the above output that espagate (9.3.76.1) has been changed to the "stale" state. This means that the location shown is still valid, but it will have to be confirmed at the first transaction to that machine.
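If you want an entry that never goes stale or fails, you can also add one by hand. This is a sketch using the addresses from the example above; 'nud permanent' tells the kernel never to expire the entry:

# ip neigh add 9.3.76.43 lladdr 00:06:29:21:80:20 dev eth0 nud permanent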
[ahu@home ahu]$ ip rule list
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default
[ahu@home ahu]$ ip route list table local
broadcast 127.255.255.255 dev lo  proto kernel  scope link  src 127.0.0.1
local 10.0.0.1 dev eth0  proto kernel  scope host  src 10.0.0.1
broadcast 10.0.0.0 dev eth0  proto kernel  scope link  src 10.0.0.1
local 212.64.94.251 dev ppp0  proto kernel  scope host  src 212.64.94.251
broadcast 10.255.255.255 dev eth0  proto kernel  scope link  src 10.0.0.1
broadcast 127.0.0.0 dev lo  proto kernel  scope link  src 127.0.0.1
local 212.64.78.148 dev ppp2  proto kernel  scope host  src 212.64.78.148
local 127.0.0.1 dev lo  proto kernel  scope host  src 127.0.0.1
local 127.0.0.0/8 dev lo  proto kernel  scope host  src 127.0.0.1
[ahu@home ahu]$ ip route list table main
195.96.98.253 dev ppp2  proto kernel  scope link  src 212.64.78.148
212.64.94.1 dev ppp0  proto kernel  scope link  src 212.64.94.251
10.0.0.0/8 dev eth0  proto kernel  scope link  src 10.0.0.1
127.0.0.0/8 dev lo  scope link
default via 212.64.94.1 dev ppp0
# echo 200 John >> /etc/iproute2/rt_tables
# ip rule add from 10.0.0.10 table John
# ip rule ls
0:      from all lookup local
32765:  from 10.0.0.10 lookup John
32766:  from all lookup main
32767:  from all lookup default
Now all that is left is to generate John's table, and flush the route cache:
# ip route add default via 195.96.98.253 dev ppp2 table John
# ip route flush cache
And we are done. It is left as an exercise for the reader to implement this in ip-up.
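As a sketch of that exercise (the path is a Debian-style ip-up hook directory; adapt it to your distribution), such a script might look like this:

#!/bin/sh
# hypothetical /etc/ppp/ip-up.d/john: reinstall John's default route
# every time the link comes up
ip route add default via 195.96.98.253 dev ppp2 table John
ip route flush cache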
                                                              ________
                                       +------------+        /
                                       |            |       |
                         +-------------+ Provider 1 +-------
     __                  |             |            |       |
 ___/  \_        +-------+------+      +------------+       |
_/       \__     |     if1      |                          /
/           \    |              |                         |
| Local network --+ Linux router |                         |   Internet
\_          __/  |              |                         |
  \__    __/     |     if2      |                          \
     \___/       +-------+------+      +------------+       |
                         |             |            |       |
                         +-------------+ Provider 2 +-------
                                       |            |        \________
                                       +------------+
ip route add $P1_NET dev $IF1 src $IP1 table T1
ip route add default via $P1 table T1
ip route add $P2_NET dev $IF2 src $IP2 table T2
ip route add default via $P2 table T2
ip route add $P1_NET dev $IF1 src $IP1
ip route add $P2_NET dev $IF2 src $IP2
ip route add default via $P1
ip rule add from $IP1 table T1
ip rule add from $IP2 table T2
ip route add default scope global nexthop via $P1 dev $IF1 weight 1 \
   nexthop via $P2 dev $IF2 weight 1
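The weights need not be equal. To prefer provider 1 roughly two to one, for example, you could use:

ip route add default scope global nexthop via $P1 dev $IF1 weight 2 \
   nexthop via $P2 dev $IF2 weight 1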
Furthermore, if you really want to do this, you probably also want to look at Julian Anastasov's patches at http://www.linuxvirtualserver.org/~julian/#routes (Julian's route patch page). They will make things nicer to work with.
network 10.0.1.0
netmask 255.255.255.0
router  10.0.1.1
The router has address 172.16.17.18 on network C.
network 10.0.2.0
netmask 255.255.255.0
router  10.0.2.1
The router has address 172.19.20.21 on network C.
First, make sure the modules are installed:
insmod ipip.o
insmod new_tunnel.o
Then, on the router of network A, you do the following:
ifconfig tunl0 10.0.1.1 pointopoint 172.19.20.21
route add -net 10.0.2.0 netmask 255.255.255.0 dev tunl0
And on the router of network B:
ifconfig tunl0 10.0.2.1 pointopoint 172.16.17.18
route add -net 10.0.1.0 netmask 255.255.255.0 dev tunl0
And if you're finished with your tunnel:
ifconfig tunl0 down
In Linux, you'll need the ip_gre.o module.
Let's do IPv4 tunneling first:
network 10.0.1.0
netmask 255.255.255.0
router  10.0.1.1
network 10.0.2.0
netmask 255.255.255.0
router  10.0.2.1
On the router of network A, you do the following:
ip tunnel add netb mode gre remote 172.19.20.21 local 172.16.17.18 ttl 255
ip link set netb up
ip addr add 10.0.1.1 dev netb
ip route add 10.0.2.0/24 dev netb
The second line enables the device.
But enough about this, let's go on with the router of network B.
ip tunnel add neta mode gre remote 172.16.17.18 local 172.19.20.21 ttl 255
ip link set neta up
ip addr add 10.0.2.1 dev neta
ip route add 10.0.1.0/24 dev neta
ip link set netb down
ip tunnel del netb
See Section 6 for a short bit about IPv6 Addresses.
Network 3ffe:406:5:1:5:a:2:1/96
ip tunnel add sixbone mode sit remote 172.22.23.24 local 172.16.17.18 ttl 255
ip link set sixbone up
ip addr add 3ffe:406:5:1:5:a:2:1/96 dev sixbone
ip route add 3ffe::/15 dev sixbone
By Marco Davids <marco@sara.nl>
A short bit about IPv6 addresses:
An example: 2002:836b:9820:0000:0000:0000:836b:9886
The address 2002:836b:9820:0000:0000:0000:836b:9886 can be written down as 2002:836b:9820::836b:9886, which is somewhat friendlier.
Another example, the address 3ffe:0000:0000:0000:0000:0020:34A1:F32C can be written down as 3ffe::20:34A1:F32C, which is a lot shorter.
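Note that the '::' shorthand may only be used once per address; otherwise it would be ambiguous how many zero groups it replaces. For example:

3ffe:0:0:1:0:0:0:2  ->  3ffe:0:0:1::2    (valid)
3ffe::1::2                               (invalid: '::' used twice)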
IPv6 is intended to be the successor of the current IPv4. Because it is relatively new technology, there is no worldwide native IPv6 network yet. To be able to move forward swiftly, the 6bone was introduced.
Native IPv6 networks are connected to each other by encapsulating the IPv6 protocol in IPv4 packets and sending them over the existing IPv4 infrastructure from one IPv6 site to another.
That is precisely where the tunnel steps in.
To be able to use IPv6, we should have a kernel that supports it. There are many good documents on how to achieve this. But it all comes down to a few steps:
Get yourself a recent Linux distribution, with suitable glibc.
Then get yourself an up-to-date kernel source.
Go to /usr/src/linux and type:
make menuconfig
Choose "Networking Options"
Select "The IPv6 protocol", "IPv6: enable EUI-64 token format", "IPv6: disable provider based addresses"
In other words, compile IPv6 as 'built-in' in your kernel. You can then save your config like usual and go ahead with compiling the kernel.
HINT: Before doing so, consider editing the Makefile: change EXTRAVERSION = -x to EXTRAVERSION = -x-IPv6
There is a lot of good documentation about compiling and installing a kernel, however this document is about something else. If you run into problems at this stage, go and look for documentation about compiling a Linux kernel according to your own specifications.
The file /usr/src/linux/README might be a good start. After you have accomplished all this, and rebooted with your brand new kernel, you might want to issue '/sbin/ifconfig -a' and notice the brand new 'sit0' device. SIT stands for Simple Internet Transition. You may give yourself a compliment; you are now one major step closer to IP, the Next Generation ;-)
Now on to the next step. You want to connect your host, or maybe even your entire LAN to another IPv6 capable network. This might be the "6bone" that is setup especially for this particular purpose.
Let's assume that you have the following IPv6 network: 3ffe:604:6:8::/64 and you want to connect it to 6bone, or a friend. Please note that the /64 subnet notation works just like with regular IP addresses.
Your IPv4 address is 145.100.24.181 and the 6bone router has IPv4 address 145.100.1.5
# ip tunnel add sixbone mode sit remote 145.100.1.5 [local 145.100.24.181 ttl 255]
# ip link set sixbone up
# ip addr add 3FFE:604:6:7::2/126 dev sixbone
# ip route add 3ffe::0/16 dev sixbone
Let's discuss this. In the first line, we created a tunnel device called sixbone. We gave it mode sit (which is IPv6 in IPv4 tunneling) and told it where to go to (remote) and where to come from (local). TTL is set to maximum, 255.
Next, we made the device active (up). After that, we added our own network address, and set a route for 3ffe::/15 (which is currently all of 6bone) through the tunnel. If the particular machine you run this on is your IPv6 gateway, then consider adding the following lines:
# echo 1 >/proc/sys/net/ipv6/conf/all/forwarding
# /usr/local/sbin/radvd
The latter, radvd, is (like zebra) a router advertisement daemon, to support IPv6's autoconfiguration features. Search for it with your favourite search engine if you like. You can check things like this:
# /sbin/ip -f inet6 addr
If you happen to have radvd running on your IPv6 gateway and boot your IPv6 capable Linux on a machine on your local LAN, you would be able to enjoy the benefits of IPv6 autoconfiguration:
# /sbin/ip -f inet6 addr
1: lo: <LOOPBACK,UP> mtu 3924 qdisc noqueue
   inet6 ::1/128 scope host

3: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 100
   inet6 3ffe:604:6:8:5054:4cff:fe01:e3d6/64 scope global dynamic
   valid_lft forever preferred_lft 604646sec
   inet6 fe80::5054:4cff:fe01:e3d6/10 scope link
You could go ahead and configure your bind for IPv6 addresses. The A type has an equivalent for IPv6: AAAA. The in-addr.arpa's equivalent is: ip6.int. There's a lot of information available on this topic.
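As a small sketch of what a forward zone entry might look like (the host name and address here are made up):

; IPv6 address for a host, using the AAAA record type
ipv6host    IN    AAAA    3ffe:604:6:8::1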
There is an increasing number of IPv6-aware applications available, including secure shell, telnet, inetd, Mozilla the browser, Apache the webserver and a lot of others. But this is all outside the scope of this Routing document ;-)
On the Cisco side the configuration would be something like this:
!
interface Tunnel1
 description IPv6 tunnel
 no ip address
 no ip directed-broadcast
 ipv6 enable
 ipv6 address 3FFE:604:6:7::1/126
 tunnel source Serial0
 tunnel destination 145.100.24.181
 tunnel mode ipv6ip
!
ipv6 route 3FFE:604:6:8::/64 Tunnel1
FIXME: editor vacancy. In the meantime, see: The FreeS/WAN project. Another IPSec implementation for Linux is Cerberus, by NIST. However, their web pages have not been updated in over a year, and their version tended to trail well behind the current Linux kernel. USAGI, an alternative IPv6 implementation for Linux, also includes an IPSec implementation, but that might only be for IPv6.
ip route add 224.0.0.0/4 dev eth0
Now, tell Linux to forward packets...
Linux even goes far beyond what Frame and ATM provide.
Just to prevent confusion, tc uses the following rules for bandwidth specification:
mbps = 1024 kbps = 1024 * 1024 bps  => byte/s
mbit = 1024 kbit                    => kilo bit/s.
mb = 1024 kb = 1024 * 1024 b        => byte
mbit = 1024 kbit                    => kilo bit.
But when tc prints the rate, it uses the following:
1Mbit = 1024 Kbit = 1024 * 1024 bps  => bit/s
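A worked example may prevent surprises: 'rate 1mbit' asks for 1024 kilobit/s, roughly 128 kilobyte/s, whereas 'rate 1mbps' asks for 1024 kilobyte/s, eight times as much. So the following TBF (parameters otherwise illustrative) shapes to about 128 kilobytes per second:

# 1mbit = 1024 kbit, in bits per second; for bytes per second, use 'mbps'
tc qdisc add dev eth0 root tbf rate 1mbit burst 10kb latency 70ms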
Each of these queues has specific strengths and weaknesses. Not all of them may be as well tested.
   0     1     2     3     4     5     6     7
+-----+-----+-----+-----+-----+-----+-----+-----+
|                 |                       |     |
|   PRECEDENCE    |          TOS          | MBZ |
|                 |                       |     |
+-----+-----+-----+-----+-----+-----+-----+-----+
The four TOS bits (the 'TOS field') are defined as:
Binary  Decimal  Meaning
-----------------------------------------
1000    8        Minimize delay (md)
0100    4        Maximize throughput (mt)
0010    2        Maximize reliability (mr)
0001    1        Minimize monetary cost (mmc)
0000    0        Normal Service
TOS     Bits  Means                    Linux Priority    Band
------------------------------------------------------------
0x0     0     Normal Service           0 Best Effort     1
0x2     1     Minimize Monetary Cost   1 Filler          2
0x4     2     Maximize Reliability     0 Best Effort     1
0x6     3     mmc+mr                   0 Best Effort     1
0x8     4     Maximize Throughput      2 Bulk            2
0xa     5     mmc+mt                   2 Bulk            2
0xc     6     mr+mt                    2 Bulk            2
0xe     7     mmc+mr+mt                2 Bulk            2
0x10    8     Minimize Delay           6 Interactive     0
0x12    9     mmc+md                   6 Interactive     0
0x14    10    mr+md                    6 Interactive     0
0x16    11    mmc+mr+md                6 Interactive     0
0x18    12    mt+md                    4 Int. Bulk       1
0x1a    13    mmc+mt+md                4 Int. Bulk       1
0x1c    14    mr+mt+md                 4 Int. Bulk       1
0x1e    15    mmc+mr+mt+md             4 Int. Bulk       1
1, 2, 2, 2, 1, 2, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1
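To trace one packet through this map: a packet with TOS 0x10 (Minimize Delay) maps to Linux priority 6 according to the table above, and entry 6 of the priomap is 0, so the packet lands in band 0, the band that is emptied first:

priomap:  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
                      ^
           priority 6 -> band 0 (highest priority)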
TELNET                  1000 (minimize delay)
FTP
        Control         1000 (minimize delay)
        Data            0100 (maximize throughput)
TFTP                    1000 (minimize delay)
SMTP
        Command phase   1000 (minimize delay)
        DATA phase      0100 (maximize throughput)
Domain Name Service
        UDP Query       1000 (minimize delay)
        TCP Query       0000
        Zone Transfer   0100 (maximize throughput)
NNTP                    0001 (minimize monetary cost)
ICMP
        Errors          0000
        Requests        0000 (mostly)
        Responses       <same as request> (mostly)
The last scenario is very important, because it allows administrators to shape the bandwidth available to data that passes the filter.
The accumulation of tokens allows a short burst of overlimit data to still be passed without loss, but any lasting overload will cause packets to be constantly delayed, and then dropped.
Please note that in the actual implementation, tokens correspond to bytes, not packets.
A simple but *very* useful configuration is this:
# tc qdisc add dev ppp0 root tbf rate 220kbit latency 50ms burst 1540
# tc qdisc add dev ppp0 root sfq perturb 10
# tc -s -d qdisc ls
qdisc sfq 800c: dev ppp0 quantum 1514b limit 128p flows 128/1024 perturb 10sec
 Sent 4812 bytes 62 pkts (dropped 0, overlimits 0)
The following tips may help in choosing which queue to use. It mentions some qdiscs described in Chapter 14.
To purely slow down outgoing traffic, use the Token Bucket Filter. Works up to huge bandwidths, if you scale the bucket.
If your link is truly full and you want to make sure that no single session can dominate your outgoing bandwidth, use Stochastical Fairness Queueing.
If you have a big backbone and know what you are doing, consider Random Early Drop (see Advanced chapter).
To 'shape' incoming traffic which you are not forwarding, use the Ingress Policer. Incoming shaping is called 'policing', by the way, not 'shaping'.
If you *are* forwarding it, use a TBF on the interface you are forwarding the data to. Unless you want to shape traffic that may go out over several interfaces, in which case the only common factor is the incoming interface. In that case use the Ingress Policer.
If you don't want to shape, but only want to see if your interface is so loaded that it has to queue, use the pfifo queue (not pfifo_fast). It lacks internal bands but does account for the size of its backlog.
Finally - you can also do "social shaping". You may not always be able to use technology to achieve what you want. Users experience technical constraints as hostile. A kind word may also help with getting your bandwidth to be divided right!
The following is loosely based on draft-ietf-diffserv-model-06.txt, An Informal Management Model for Diffserv Routers. It can currently be found at http://www.ietf.org/internet-drafts/draft-ietf-diffserv-model-06.txt.
Read it for the strict definitions of the terms used.
An algorithm that manages the queue of a device, either incoming (ingress) or outgoing (egress).
A qdisc with no configurable internal subdivisions.
A classful qdisc contains multiple classes. Each of these classes contains a further qdisc, which may again be classful, but need not be. According to the strict definition, pfifo_fast *is* classful, because it contains three bands which are, in fact, classes. However, from the user's configuration perspective, it is classless as the classes can't be touched with the tc tool.
A classful qdisc may have many classes, which each are internal to the qdisc. Each of these classes may contain a real qdisc.
Each classful qdisc needs to determine to which class it needs to send a packet. This is done using the classifier.
Classification can be performed using filters. A filter contains a number of conditions which if matched, make the filter match.
A qdisc may, with the help of a classifier, decide that some packets need to go out earlier than others. This process is called Scheduling, and is performed for example by the pfifo_fast qdisc mentioned earlier. Scheduling is also called 'reordering', but this is confusing.
The process of delaying packets before they go out to make traffic conform to a configured maximum rate. Shaping is performed on egress. Colloquially, dropping packets to slow traffic down is also often called Shaping.
Delaying or dropping packets in order to make traffic stay below a configured bandwidth. In Linux, policing can only drop a packet and not delay it - there is no 'ingress queue'.
A work-conserving qdisc always delivers a packet if one is available. In other words, it never delays a packet if the network adaptor is ready to send one (in the case of an egress qdisc).
Some queues, like for example the Token Bucket Filter, may need to hold on to a packet for a certain time in order to limit the bandwidth. This means that they sometimes refuse to give up a packet, even though they have one available.
Now that we have our terminology straight, let's see where all these things are.
         Userspace programs
                     ^
                     |
     +---------------+-----------------------------------------+
     |               Y                                         |
     |    -------> IP Stack                                    |
     |   |              |                                      |
     |   |              Y                                      |
     |   |              Y                                      |
     |   ^              |                                      |
     |   |  / ----------> Forwarding ->                        |
     |   ^ /                           |                       |
     |   |/                            Y                       |
     |   |                             |                       |
     |   ^                             Y          /-qdisc1-\   |
     |   |                            Egress     /--qdisc2--\  |
  --->->Ingress                       Classifier ---qdisc3---- | ->
     |   Qdisc                                   \__qdisc4__/  |
     |                                            \-qdiscN_/   |
     |                                                         |
     +---------------------------------------------------------+
The big block represents the kernel. The leftmost arrow represents traffic entering your machine from the network. It is then fed to the Ingress Qdisc which may apply Filters to a packet, and decide to drop it. This is called 'Policing'.
This happens at a very early stage, before it has seen a lot of the kernel. It is therefore a very good place to drop traffic very early, without consuming a lot of CPU power.
If the packet is allowed to continue, it may be destined for a local application, in which case it enters the IP stack in order to be processed, and handed over to a userspace program. The packet may also be forwarded without entering an application, in which case it is destined for egress. Userspace programs may also deliver data, which is then examined and forwarded to the Egress Classifier.
There it is investigated and enqueued to any of a number of qdiscs. In the unconfigured default case, there is only one egress qdisc installed, the pfifo_fast, which always receives the packet. This is called 'enqueueing'.
The packet now sits in the qdisc, waiting for the kernel to ask for it for transmission over the network interface. This is called 'dequeueing'.
This picture also holds in case there is only one network adaptor - the arrows entering and leaving the kernel should not be taken too literally. Each network adaptor has both ingress and egress hooks.
More about CBQ and its alternatives shortly.
Classes need to have the same major number as their parent.
Recapping, a typical hierarchy might look like this:
          root 1:
            |
          _1:1_
         /  |  \
        /   |   \
       /    |    \
     10:   11:   12:
    /   \        /   \
 10:1  10:2  12:1  12:2
A packet might get classified in a chain like this:
In this case, a filter attached to the root decided to send the packet directly to 12:2.
In formal words, the PRIO qdisc is a Work-Conserving scheduler.
The following parameters are recognized by tc:
Reiterating, band 0 goes to minor number 1! Band 1 to minor number 2, etc.
       root 1: prio
     /      |       \
   1:1     1:2     1:3
    |       |       |
   10:     20:     30:
   sfq     tbf     sfq
band 0      1       2
Bulk traffic will go to 30:, interactive traffic to 20: or 10:.
# tc qdisc add dev eth0 root handle 1: prio
## This *instantly* creates classes 1:1, 1:2, 1:3

# tc qdisc add dev eth0 parent 1:1 handle 10: sfq
# tc qdisc add dev eth0 parent 1:2 handle 20: tbf rate 20kbit buffer 1600 limit 3000
# tc qdisc add dev eth0 parent 1:3 handle 30: sfq
Now let's see what we created:
# tc -s qdisc ls dev eth0
qdisc sfq 30: quantum 1514b
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)

qdisc tbf 20: rate 20Kbit burst 1599b lat 667.6ms
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0)

qdisc sfq 10: quantum 1514b
 Sent 132 bytes 2 pkts (dropped 0, overlimits 0)

qdisc prio 1: bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 174 bytes 3 pkts (dropped 0, overlimits 0)
We now do some bulk data transfer with a tool that properly sets TOS flags, and take another look:
# scp tc ahu@10.0.0.11:./
ahu@10.0.0.11's password:
tc                   100% |*****************************|   353 KB    00:00
# tc -s qdisc ls dev eth0
qdisc sfq 30: quantum 1514b
 Sent 384228 bytes 274 pkts (dropped 0, overlimits 0)

qdisc tbf 20: rate 20Kbit burst 1599b lat 667.6ms
 Sent 2640 bytes 20 pkts (dropped 0, overlimits 0)

qdisc sfq 10: quantum 1514b
 Sent 2230 bytes 31 pkts (dropped 0, overlimits 0)

qdisc prio 1: bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 389140 bytes 326 pkts (dropped 0, overlimits 0)
# tc -s qdisc ls dev eth0
qdisc sfq 30: quantum 1514b
 Sent 384228 bytes 274 pkts (dropped 0, overlimits 0)

qdisc tbf 20: rate 20Kbit burst 1599b lat 667.6ms
 Sent 2640 bytes 20 pkts (dropped 0, overlimits 0)

qdisc sfq 10: quantum 1514b
 Sent 14926 bytes 193 pkts (dropped 0, overlimits 0)

qdisc prio 1: bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 401836 bytes 488 pkts (dropped 0, overlimits 0)
These are parameters you can specify in order to configure shaping:
The following parameters control the WRR process:
Please note that all classes within a CBQ hierarchy need to share the same major number!
Within such an agency class, there might be other classes which are allowed to swap bandwidth.
# tc qdisc add dev eth0 root handle 1:0 cbq bandwidth 100Mbit         \
  avpkt 1000 cell 8
# tc class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth 100Mbit  \
  rate 6Mbit weight 0.6Mbit prio 8 allot 1514 cell 8 maxburst 20      \
  avpkt 1000 bounded
# tc class add dev eth0 parent 1:1 classid 1:3 cbq bandwidth 100Mbit  \
  rate 5Mbit weight 0.5Mbit prio 5 allot 1514 cell 8 maxburst 20      \
  avpkt 1000
# tc class add dev eth0 parent 1:1 classid 1:4 cbq bandwidth 100Mbit  \
  rate 3Mbit weight 0.3Mbit prio 5 allot 1514 cell 8 maxburst 20      \
  avpkt 1000
# tc qdisc add dev eth0 parent 1:3 handle 30: sfq
# tc qdisc add dev eth0 parent 1:4 handle 40: sfq
# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 match ip \
  sport 80 0xffff flowid 1:3
# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 match ip \
  sport 25 0xffff flowid 1:4
These commands, attached directly to the root, send traffic to the right qdiscs.
# tc qdisc add dev eth1 root handle 1: cbq bandwidth 10Mbit allot 1514 \
  cell 8 avpkt 1000 mpu 64
# tc class add dev eth1 parent 1:0 classid 1:1 cbq bandwidth 10Mbit    \
  rate 10Mbit allot 1514 cell 8 weight 1Mbit prio 8 maxburst 20        \
  avpkt 1000
Defmap refers to TC_PRIO bits, which are defined as follows:
TC_PRIO..          Num  Corresponds to TOS
-------------------------------------------------
BESTEFFORT         0    Maximize Reliability
FILLER             1    Minimize Cost
BULK               2    Maximize Throughput (0x8)
INTERACTIVE_BULK   4
INTERACTIVE        6    Minimize Delay (0x10)
CONTROL            7
Now the interactive and the bulk classes:
# tc class add dev eth1 parent 1:1 classid 1:2 cbq bandwidth 10Mbit  \
  rate 1Mbit allot 1514 cell 8 weight 100Kbit prio 3 maxburst 20     \
  avpkt 1000 split 1:0 defmap c0

# tc class add dev eth1 parent 1:1 classid 1:3 cbq bandwidth 10Mbit  \
  rate 8Mbit allot 1514 cell 8 weight 800Kbit prio 7 maxburst 20     \
  avpkt 1000 split 1:0 defmap 3f
Node 1:0 now has a table like this:
priority    send to
0           1:3
1           1:3
2           1:3
3           1:3
4           1:3
5           1:3
6           1:2
7           1:2
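The defmap values themselves are easy to derive: defmap is a bitmap with one bit per priority. 'defmap c0' sets bits 6 and 7, and 'defmap 3f' sets bits 0 through 5:

c0 = 11000000  -> priorities 6 and 7      (interactive)
3f = 00111111  -> priorities 0 through 5  (bulk)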
# tc class change dev eth1 classid 1:2 cbq defmap 01/01
The priority map over at 1:0 now looks like this:
priority    send to
0           1:2
1           1:3
2           1:3
3           1:3
4           1:3
5           1:3
6           1:2
7           1:2
FIXME: did not test 'tc class change', only looked at the source.
HTB works just like CBQ but does not resort to idle time calculations to shape. Instead, it is a classful Token Bucket Filter - hence the name. It has only a few parameters, which are well documented on its author's site.
As your HTB configuration gets more complex, your configuration scales well. With CBQ it is already complex even in simple cases! HTB is not yet a part of the standard kernel, but it should soon be!
If you are in a position to patch your kernel, by all means consider HTB.
Functionally almost identical to the CBQ sample configuration above:
# tc qdisc add dev eth0 root handle 1: htb default 30

# tc class add dev eth0 parent 1: classid 1:1 htb rate 6mbit burst 15k

# tc class add dev eth0 parent 1:1 classid 1:10 htb rate 5mbit burst 15k
# tc class add dev eth0 parent 1:1 classid 1:20 htb rate 3mbit ceil 6mbit burst 15k
# tc class add dev eth0 parent 1:1 classid 1:30 htb rate 1kbit ceil 6mbit burst 15k
The author then recommends SFQ for beneath these classes:
# tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
# tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
# tc qdisc add dev eth0 parent 1:30 handle 30: sfq perturb 10
Add the filters which direct traffic to the right classes:
# U32="tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32"
# $U32 match ip dport 80 0xffff flowid 1:10
# $U32 match ip sport 25 0xffff flowid 1:20
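To verify that traffic actually lands in the intended classes, you can watch the per-class counters; this is just a check, not part of the setup:

# tc -s class show dev eth0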
To reiterate the tree, which is not a tree:
          root 1:
            |
          _1:1_
         /  |  \
        /   |   \
       /    |    \
     10:   11:   12:
    /   \        /   \
 10:1  10:2  12:1  12:2
# tc filter add dev eth0 protocol ip parent 10: prio 1 u32 match \
  ip dport 22 0xffff flowid 10:1
# tc filter add dev eth0 protocol ip parent 10: prio 1 u32 match \
  ip sport 80 0xffff flowid 10:1
# tc filter add dev eth0 protocol ip parent 10: prio 2 flowid 10:2
To select on an IP address, use this:
# tc filter add dev eth0 parent 10:0 protocol ip prio 1 u32 \
  match ip dst 4.3.2.1/32 flowid 10:1
# tc filter add dev eth0 parent 10:0 protocol ip prio 1 u32 \
  match ip src 1.2.3.4/32 flowid 10:1
# tc filter add dev eth0 protocol ip parent 10: prio 2 \
  flowid 10:2
You can concatenate matches; to match on traffic from 4.3.2.1 and from port 80, do this:
# tc filter add dev eth0 parent 10:0 protocol ip prio 1 u32 \
  match ip src 4.3.2.1/32 match ip sport 80 0xffff flowid 10:1
Most shaping commands presented here start with this preamble:
# tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 ..
Use the numbers from /etc/protocols, for example, icmp is 1: 'match ip protocol 1 0xff'.
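A complete filter line using this might look as follows (a sketch; class 10:1 is assumed to exist, as in the earlier examples):

# tc filter add dev eth0 parent 10:0 protocol ip prio 1 u32 \
  match ip protocol 1 0xff flowid 10:1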
You can place a mark like this:
# iptables -A PREROUTING -t mangle -i eth0 -j MARK --set-mark 6
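A tc 'fw' filter can then select packets carrying that mark. As a sketch (assuming a class 1:1 exists on eth0), 'handle 6' refers to the mark value set above:

# tc filter add dev eth0 protocol ip parent 1:0 prio 1 handle 6 fw flowid 1:1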
To select interactive, minimum delay traffic:
# tc filter add dev ppp0 parent 1:0 protocol ip prio 10 u32 \
  match ip tos 0x10 0xff \
  flowid 1:4
For more filtering commands, see the Advanced Filters chapter.
2. A qdisc can only see the traffic of one interface; global limits can't be placed.
tc qdisc add dev imq0 root handle 1: htb default 20

tc class add dev imq0 parent 1: classid 1:1 htb rate 2mbit burst 15k

tc class add dev imq0 parent 1:1 classid 1:10 htb rate 1mbit
tc class add dev imq0 parent 1:1 classid 1:20 htb rate 1mbit

tc qdisc add dev imq0 parent 1:10 handle 10: pfifo
tc qdisc add dev imq0 parent 1:20 handle 20: sfq

tc filter add dev imq0 parent 10:0 protocol ip prio 1 u32 match \
   ip dst 10.0.0.230/32 flowid 1:10
iptables -t mangle -A PREROUTING -i eth0 -j IMQ --todev 0

ip link set imq0 up
IMQ [ --todev n ]    n : number of imq device
enum nf_ip_hook_priorities {
        NF_IP_PRI_FIRST = INT_MIN,
        NF_IP_PRI_CONNTRACK = -200,
        NF_IP_PRI_MANGLE = -150,
        NF_IP_PRI_NAT_DST = -100,
        NF_IP_PRI_FILTER = 0,
        NF_IP_PRI_NAT_SRC = 100,
        NF_IP_PRI_LAST = INT_MAX,
};
The patches and some more information can be found at the imq site.
                 +-------+   eth1   +-------+
                 |       |==========|       |
 'network 1' ----|   A   |          |   B   |---- 'network 2'
                 |       |==========|       |
                 +-------+   eth2   +-------+
The distributing part is done by a 'TEQL' device, like this (it couldn't be easier):
# tc qdisc add dev eth1 root teql0
# tc qdisc add dev eth2 root teql0
# ip link set dev teql0 up
Don't forget the 'ip link set up' command!
# ip addr add dev eth1 10.0.0.0/31
# ip addr add dev eth2 10.0.0.2/31
# ip addr add dev teql0 10.0.0.4/31
# ip addr add dev eth1 10.0.0.1/31
# ip addr add dev eth2 10.0.0.3/31
# ip addr add dev teql0 10.0.0.5/31
# echo 0 > /proc/sys/net/ipv4/conf/eth1/rp_filter
# echo 0 > /proc/sys/net/ipv4/conf/eth2/rp_filter
However, for lots of applications, link load balancing is a great idea.
William Stearns has used an advanced tunneling setup to achieve good use of multiple, unrelated, internet connections together. It can be found on his tunneling page.
The HOWTO may feature more about this in the future.
So far we've seen how iproute works, and netfilter was mentioned a few times. This would be a good time to browse through Rusty's Remarkably Unreliable Guides. Netfilter itself can be found here.
Netfilter allows us to filter packets, or mangle their headers. One special feature is that we can mark a packet with a number. This is done with the --set-mark facility.
As an example, this command marks all packets destined for port 25, outgoing mail:
# iptables -A PREROUTING -i eth0 -t mangle -p tcp --dport 25 \
  -j MARK --set-mark 1
Let's say that we have multiple connections, one that is fast (and expensive, per megabyte) and one that is slower, but flat fee. We would most certainly like outgoing mail to go via the cheap route.
We've already marked the packets with a '1'; we now instruct the routing policy database to act on this:
# echo 201 mail.out >> /etc/iproute2/rt_tables
# ip rule add fwmark 1 table mail.out
# ip rule ls
0:      from all lookup local
32764:  from all fwmark 1 lookup mail.out
32766:  from all lookup main
32767:  from all lookup default
Now we generate the mail.out table with a route to the slow but cheap link:
# /sbin/ip route add default via 195.96.98.253 dev ppp0 table mail.out
And we are done. Should we want to make exceptions, there are lots of ways to achieve this. We can modify the netfilter statement to exclude certain hosts, or we can insert a rule with a lower priority that points to the main table for our excepted hosts.
We can also use this feature to honour TOS bits by marking packets with a different type of service with different numbers, and creating rules to act on that. This way you can even dedicate, say, an ISDN line to interactive sessions.
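As a sketch of that idea (the mark value, table name and ISDN device are all hypothetical):

# iptables -A PREROUTING -t mangle -m tos --tos Minimize-Delay -j MARK --set-mark 2
# echo 202 interactive >> /etc/iproute2/rt_tables
# ip rule add fwmark 2 table interactive
# ip route add default dev ippp0 table interactive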
Needless to say, this also works fine on a host that's doing NAT ('masquerading').
IMPORTANT: We received a report that MASQ and SNAT at least collide with marking packets. Rusty Russell explains it in this posting. Turn off the reverse path filter to make it work properly.
Note: to mark packets, you need to have some options enabled in your kernel:
IP: advanced router (CONFIG_IP_ADVANCED_ROUTER) [Y/n/?]
IP: policy routing (CONFIG_IP_MULTIPLE_TABLES) [Y/n/?]
IP: use netfilter MARK value as routing key (CONFIG_IP_ROUTE_FWMARK) [Y/n/?]
See also the Section 15.5 in the Cookbook.
Here is an incomplete list of classifiers available:
Bases the decision on fields within the packet (e.g. the source IP address).
Bases the decision on the route by which the packet will be routed.
Routes packets based on RSVP. Only useful on networks you control - the Internet does not respect RSVP.
Used in the DSMARK qdisc, see the relevant section.
Note that in general there are many ways in which you can classify packets, and that it generally comes down to preference as to which system you wish to use.
Classifiers in general accept a few arguments in common. They are listed here for convenience:
The protocol this classifier will accept. Generally you will only be accepting IP traffic. Required.
The handle this classifier is to be attached to. This handle must be an already existing class. Required.
The priority of this classifier. Lower numbers get tested first.
This handle means different things to different filters.
All the following sections will assume you are trying to shape the traffic going to HostA. They will assume that the root class has been configured on 1: and that the class you want to send the selected traffic to is 1:1.
tc filter add dev IF [ protocol PROTO ] [ (preference|priority) PRIO ] [ parent CBQ ]
The options described above apply to all filters, not only U32.
# tc filter add dev eth0 protocol ip parent 1:0 pref 10 u32 \
  match u32 00100000 00ff0000 at 0 flowid 1:10
# tc filter add dev eth0 protocol ip parent 1:0 pref 10 u32 \
  match u32 00000016 0000ffff at nexthdr+0 flowid 1:10
match [ u32 | u16 | u8 ] PATTERN MASK [ at OFFSET | nexthdr+OFFSET ]
# tc filter add dev ppp14 parent 1:0 prio 10 u32 \
  match u8 64 0xff at 8 \
  flowid 1:4
# tc filter add dev ppp14 parent 1:0 prio 10 u32 \
  match u8 0x10 0xff at nexthdr+13 \
  protocol tcp \
  flowid 1:3
FIXME: it has been pointed out that this syntax does not work currently.
Use this to match ACKs on packets smaller than 64 bytes:
## match acks the hard way,
## IP protocol 6,
## IP header length 0x5(32 bit words),
## IP Total length 0x34 (ACK + 12 bytes of TCP options)
## TCP ack set (bit 5, offset 33)
# tc filter add dev ppp14 parent 1:0 protocol ip prio 10 u32 \
  match ip protocol 6 0xff \
  match u8 0x05 0x0f at 0 \
  match u16 0x0000 0xffc0 at 2 \
  match u8 0x10 0xff at 33 \
  flowid 1:3
FIXME: table placeholder - the table is in separate file ,,selector.html''
FIXME: it's also still in Polish :-(
# tc filter add dev ppp0 parent 1:0 prio 10 u32 \
  match ip tos 0x10 0xff \
  flowid 1:4
FIXME: tcp dst match does not work as described below:
# tc filter add dev ppp0 parent 1:0 prio 10 u32 \
  match tcp dst 53 0xffff \
  match ip protocol 0x6 0xff \
  flowid 1:2
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 route
# ip route add Host/Network via Gateway dev Device realm RealmNumber
For instance, we can define our destination network 192.168.10.0 with a realm number 10:
# ip route add 192.168.10.0/24 via 192.168.10.1 dev eth1 realm 10
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 \
  route to 10 classid 1:10
The above rule says packets going to the network 192.168.10.0 match class id 1:10.
# ip route add 192.168.2.0/24 dev eth2 realm 2
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 \
  route from 2 classid 1:2
Uses the following parameters:
These behave mostly identically to those described in the Token Bucket Filter section. Please note however that if you set the mtu of a TBF policer too low, *no* packets will pass, whereas the egress TBF qdisc will just pass them slower.
Another difference is that a policer can only let a packet pass, or drop it. It cannot hold on to a packet in order to delay it.
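As a sketch of a policing filter (the rate and classid are illustrative), this drops everything beyond 1mbit on an ingress qdisc installed with 'tc qdisc add dev eth0 ingress':

# tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
  match ip src 0.0.0.0/0 \
  police rate 1mbit burst 10k drop flowid :1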
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.0.0 classid 1:1
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.0.1 classid 1:1
...
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.3.254 classid 1:3
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.3.255 classid 1:2
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.0.0 classid 1:1
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.1.0 classid 1:1
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.2.0 classid 1:3
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.3.0 classid 1:2
The next one starts like this:
# tc filter add dev eth1 parent 1:0 protocol ip prio 100 match ip src \
  1.2.0.1 classid 1:1
...
This way, only four checks are needed at most, two on average.
# tc filter add dev eth1 parent 1:0 prio 5 protocol ip u32

# tc filter add dev eth1 parent 1:0 prio 5 handle 2: protocol ip u32 divisor 256
Now we add some rules to entries in the created table:
# tc filter add dev eth1 protocol ip parent 1:0 prio 5 u32 ht 2:7b: \
  match ip src 1.2.0.123 flowid 1:1
# tc filter add dev eth1 protocol ip parent 1:0 prio 5 u32 ht 2:7b: \
  match ip src 1.2.1.123 flowid 1:2
# tc filter add dev eth1 protocol ip parent 1:0 prio 5 u32 ht 2:7b: \
  match ip src 1.2.3.123 flowid 1:3
# tc filter add dev eth1 protocol ip parent 1:0 prio 5 u32 ht 2:7b: \
  match ip src 1.2.4.123 flowid 1:2
Next create a 'hashing filter' that directs traffic to the right entry in the hashing table:
# tc filter add dev eth1 protocol ip parent 1:0 prio 5 u32 ht 800:: \
  match ip src 1.2.0.0/16 \
  hashkey mask 0x000000ff at 12 \
  link 2:
The following fragment will turn this on for all current and future interfaces.
# for i in /proc/sys/net/ipv4/conf/*/rp_filter ; do
>  echo 2 > $i
> done
# echo 1 >/proc/sys/net/ipv4/conf/<interfacename>/log_martians
FIXME: is setting the conf/{default,all}/* files enough? - martijn
The rate at which echo replies are sent to any one destination.
Maximum number of listening igmp (multicast) sockets on the host. FIXME: Is this true?
If the kernel should attempt to forward packets. Off by default.
Range of local ports for outgoing connections. Actually quite small by default, 1024 to 4999.
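The file simply takes two numbers. On a busy proxy you may want to enlarge the range, for example (a common choice, not a requirement):

# echo "32768 61000" > /proc/sys/net/ipv4/ip_local_port_range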
Set this if you want to disable Path MTU discovery - a technique to determine the largest Maximum Transfer Unit possible on your path. See also the section on Path MTU discovery in the Cookbook chapter.
Maximum memory used to reassemble IP fragments. When ipfrag_high_thresh bytes of memory is allocated for this purpose, the fragment handler will toss packets until ipfrag_low_thresh is reached.
Set this if you want your applications to be able to bind to an address which doesn't belong to a device on your system. This can be useful when your machine is on a non-permanent (or even dynamic) link, so your services are able to start up and bind to a specific address when your link is down.
Minimum memory used to reassemble IP fragments.
Time in seconds to keep an IP fragment in memory.
A boolean flag controlling the behaviour under lots of incoming connections. When enabled, this causes the kernel to actively send RST packets when a service is overloaded.
Time in seconds to hold a socket in state FIN-WAIT-2, if it was closed by our side. The peer can be broken and never close its side, or even die unexpectedly. The default value is 60 seconds. The usual value used in 2.2 was 180 seconds; you may restore it, but remember that even an underloaded web server risks overflowing memory with kilotons of dead sockets. FIN-WAIT-2 sockets are less dangerous than FIN-WAIT-1, because they eat at most 1.5K of memory, but they tend to live longer. Cf. tcp_max_orphans.
How often TCP sends out keepalive messages when keepalive is enabled. Default: 2 hours.
How frequently probes are retransmitted when a probe isn't acknowledged. Default: 75 seconds.
How many keepalive probes TCP will send, until it decides that the connection is broken. Default value: 9. Multiplied with tcp_keepalive_intvl, this gives the time a link can be non-responsive after a keepalive has been sent.
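With the defaults, the worked arithmetic for detecting a dead link is:

tcp_keepalive_time + tcp_keepalive_probes * tcp_keepalive_intvl
  = 7200 s + 9 * 75 s = 7875 s   (roughly 2 hours and 11 minutes)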
Maximal number of TCP sockets not attached to any user file handle, held by the system. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; you must not rely on this or lower the limit artificially, but rather increase it (probably after increasing installed memory) if network conditions require more than the default value, and tune network services to linger and kill such states more aggressively. Let me remind you again: each orphan eats up to about 64K of unswappable memory.
How many times to retry before killing a TCP connection closed by our side. The default value of 7 corresponds to about 50 seconds to 16 minutes, depending on RTO. If your machine is a loaded web server, you should think about lowering this value, as such sockets may consume significant resources. Cf. tcp_max_orphans.
Maximal number of remembered connection requests which still have not received an acknowledgment from the connecting client. The default value is 1024 for systems with more than 128Mb of memory, and 128 for low-memory machines. If the server suffers from overload, try increasing this number. Warning! If you make it greater than 1024, it would be better to change TCP_SYNQ_HSIZE in include/net/tcp.h to keep TCP_SYNQ_HSIZE*16<=tcp_max_syn_backlog, and to recompile the kernel.
Maximal number of timewait sockets held by the system simultaneously. If this number is exceeded, the time-wait socket is immediately destroyed and a warning is printed. This limit exists only to prevent simple DoS attacks; you must not lower the limit artificially, but rather increase it (probably after increasing installed memory) if network conditions require more than the default value.
Bug-to-bug compatibility with some broken printers. On retransmit try to send bigger packets to work around bugs in certain TCP stacks.
How many times to retry before deciding that something is wrong and it is necessary to report this suspicion to the network layer. The minimal RFC value is 3; it is the default, and corresponds to about 3 seconds to 8 minutes, depending on RTO.
How many times to retry before killing an alive TCP connection. RFC 1122 says that the limit should be longer than 100 seconds, but that is too low a number. The default value of 15 corresponds to about 13 to 30 minutes, depending on RTO.
This boolean enables a fix for 'time-wait assassination hazards in tcp', described in RFC 1337. If enabled, this causes the kernel to drop RST packets for sockets in the time-wait state. Default: 0
Use Selective ACK which can be used to signify that specific packets are missing - therefore helping fast recovery.
Use the Host requirements interpretation of the TCP urg pointer field. Most hosts use the older BSD interpretation, so if you turn this on Linux might not communicate correctly with them. Default: FALSE
Number of SYN packets the kernel will send before giving up on the new connection.
To open the other side of the connection, the kernel sends a SYN with a piggybacked ACK on it, to acknowledge the earlier received SYN. This is part 2 of the threeway handshake. This setting determines the number of SYN+ACK packets sent before the kernel gives up on the connection.
Timestamps are used, amongst other things, to protect against wrapping sequence numbers. A 1 gigabit link might conceivably re-encounter a previous sequence number with an out-of-line value, because it was of a previous generation. The timestamp will let it recognize this 'ancient packet'.
Enable fast recycling TIME-WAIT sockets. Default value is 1. It should not be changed without advice/request of technical experts.
TCP/IP normally allows windows up to 65535 bytes big. For really fast networks, this may not be enough. The window scaling options allows for almost gigabyte windows, which is good for high bandwidth*delay products.
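The arithmetic behind this: a sender can have at most one window of unacknowledged data in flight per round trip. Without scaling, a path with a 100 ms round-trip time is therefore limited to roughly:

65535 bytes / 0.1 s = 655350 byte/s, about 5 megabit/s per connection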
The default is 0, since this feature is not implemented yet (kernel version 2.2.12).
If we do multicast forwarding on this interface
If you set this to 1, this interface will respond to ARP requests for addresses the kernel has routes to. Can be very useful when building 'ip pseudo bridges'. Do take care that your netmasks are very correct before enabling this! Also be aware that the rp_filter, mentioned elsewhere, also operates on ARP queries!
See the section on Reverse Path Filtering.
Accept ICMP redirect messages only for gateways listed in the default gateway list. Enabled by default.
If we send the above mentioned redirects.
If it is not set the kernel does not assume that different subnets on this device can communicate directly. Default setting is 'yes'.
FIXME: fill this in
Determines the number of requests to send to the user level ARP daemon. Use 0 to turn off.
A base value used for computing the random reachable time value as specified in RFC2461.
Delay for the first time probe if the neighbor is reachable. (see gc_stale_time)
Maximum queue length of the delayed proxy arp timer. (see proxy_delay).
Writing to this file results in a flush of the routing cache.
See /proc/sys/net/ipv4/route/gc_elasticity.
See /proc/sys/net/ipv4/route/gc_elasticity.
See /proc/sys/net/ipv4/route/gc_elasticity.
See /proc/sys/net/ipv4/route/gc_elasticity.
Delays for flushing the routing cache.
Maximum size of the routing cache. Old entries will be purged once the cache has reached this size.
FIXME: fill this in
Delays for flushing the routing cache.
FIXME: fill this in
FIXME: fill this in
Factors which determine if more ICMP redirects should be sent to a specific host. No redirects will be sent once the load limit or the maximum number of redirects has been reached.
See /proc/sys/net/ipv4/route/redirect_load.
Timeout for redirects. After this period redirects will be sent again, even if this has been stopped, because the load or number limit has been reached.
David D. Clark, Scott Shenker and Lixia Zhang Supporting Real-Time Applications in an Integrated Services Packet Network: Architecture and Mechanism.
As I understand it, the main idea is to create WFQ flows for each guaranteed service and to allocate the rest of the bandwidth to the dummy flow-0. Flow-0 comprises the predictive services and the best-effort traffic; it is handled by a priority scheduler with the highest priority band allocated to predictive services, and the rest to the best-effort packets.
Note that in CSZ flows are NOT limited to their bandwidth. It is supposed that the flow passed admission control at the edge of the QoS network and it doesn't need further shaping. Any attempt to improve the flow or to shape it to a token bucket at intermediate hops will introduce undesired delays and raise jitter.
At the moment CSZ is the only scheduler that provides true guaranteed service. Other schemes (including CBQ) do not provide guaranteed delay and randomize jitter.
Does not currently seem like a good candidate to use, unless you've read and understand the article mentioned.
Esteve Camps
This text is an extract from my thesis on QoS Support in Linux, September 2000.
Source documents:
Examples in iproute2 distribution.
White Paper-QoS protocols and architectures and IP QoS Frequently Asked Questions both by Quality of Service Forum.
This chapter was written by Esteve Camps <esteve@hades.udg.es>.
First of all, it would be a great idea for you to read the RFCs written about this (RFC 2474, RFC 2475, RFC 2597 and RFC 2598) at the IETF DiffServ Working Group web site and at Werner Almesberger's web site (he wrote the code to support Differentiated Services on Linux).
Now, take a look at the DSMARK qdisc command and its parameters:
... dsmark indices INDICES [ default_index DEFAULT_INDEX ] [ set_tc_index ]
indices: size of table of (mask,value) pairs. Maximum value is 2^n, where n>=0.
Default_index: the default table entry index if the classifier finds no match.
Set_tc_index: instructs dsmark discipline to retrieve the DS field and store it onto skb->tc_index.
This qdisc will apply the following steps:
New_Ds_field = ( Old_DS_field & mask ) | value
skb->ihp->tos
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - >
     |                                                       |     ^
     | -- If you declare set_tc_index, we set DS             |     |  <-----May change
     |    value into skb->tc_index variable                  |     |O       DS field
     |                                                      A|     |R
   +-|-+      +------+    +---+-+    Internal   +-+     +---N|-----|----+
   | | |      | tc   |--->|   | |-->  . . .  -->| |     |   D|     |    |
   | | |----->|index |--->|   | |     Qdisc     | |---->|    v     |    |
   | | |      |filter|--->| | | +---------------+ |   ---->(mask,value) |
-->| O |      +------+    +-|-+--------------^----+  /     |  (. , .)   |
   | | |          ^         |                |       |     |  (. , .)   |
   | | +----------|---------|----------------|-------|--+  |  (. , .)   |
   | | sch_dsmark |         |                |       |  |  +------------+
   +-|------------|---------|----------------|-------|------------------+
     |            |         | <- tc_index -> |       |
     |            |(read)   |    may change  |       |  <--------------Index to the
     |            |         |                |       |                  (mask,value)
     v            |         v                v       |                  pairs table
 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - >
                          skb->tc_index
How to do marking? Just change the mask and value of the class you want to remark. See next line of code:
tc class change dev eth0 classid 1:1 dsmark mask 0x3 value 0xb8
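To see what this particular command does to the DS byte, plug its mask and value into the formula above. With mask 0x3, only the two low bits of the old field survive, and every packet in the class is rewritten to 0xb8, which happens to be the EF (Expedited Forwarding) codepoint:

New_Ds_field = ( Old_DS_field & 0x3 ) | 0xb8
e.g.  ( 0x00 & 0x3 ) | 0xb8 = 0xb8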
Now we'll explain how the TC_INDEX filter works and how it fits into this. Besides, the TCINDEX filter can be used in configurations other than those involving DS services.
This is the basic command to declare a TC_INDEX filter:
... tcindex [ hash SIZE ] [ mask MASK ] [ shift SHIFT ]
            [ pass_on | fall_through ]
            [ classid CLASSID ] [ police POLICE_SPEC ]
                                 TC INDEX FILTER
   +---+    +-------+    +---+-+    +------+     +-+    +-+    +-------+
   |   |    |       |    |   | |    |FILTER|     | |    | |    |       |
   |   |----| MASK  | -> |   | | -> |HANDLE|---> | | -> | | -> |       | ->
   |   |  . | =0xfc |    |   | |    |0x2E  |     +-+----+-+    |       |
   |   |  . |       |    |   | |    +------+                   |       |
   |   |  . |       |    |   | |                               |       |
-->|   |  . | SHIFT |    |   | |                               |       |-->
   |   |  . | =2    |    |   | |    +--------------------------+       |
   |   |    |       |    |   | |    |          CBQ 2:0                 |
   |   |    +-------+    +---+------+----------------------------------+
   |   |                                                               |
   |   +---------------------------------------------------------------+
   |                           DSMARK 1:0                              |
   +-------------------------------------------------------------------+
Value1 = skb->tc_index & MASK
Key    = Value1 >> SHIFT
In the example, MASK=0xFC and SHIFT=2.
Value1 = 10111000 & 11111100 = 10111000
Key    = 10111000 >> 2 = 00101110  ->  0x2E in hexadecimal
Finally, let's see which values all these TCINDEX parameters can take:
TC Name                  Value               Default
-----------------------------------------------------------------
Hash                     1...0x10000         Implementation dependent
Mask                     0...0xffff          0xffff
Shift                    0...15              0
Fall through / Pass_on   Flag                Fall_through
Classid                  Major:minor         None
Police                   .....               None
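Putting it together, a typical tcindex filter attached to a dsmark qdisc might be declared like this (a sketch using the mask and shift values from the example above):

# tc filter add dev eth0 parent 1:0 protocol ip prio 1 \
  tcindex mask 0xfc shift 2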
# tc qdisc add dev eth0 ingress
For a contrived example of how the ingress qdisc could be used, see the Cookbook.
Read the paper on RED queueing by Sally Floyd and Van Jacobson for technical information.
Not a lot is known about GRED (Generalized RED). It looks like RED with several internal queues, whereby the internal queue is chosen based on the Diffserv tcindex field. According to a slide found here, it contains the capabilities of Cisco's 'Distributed Weighted RED', as well as Dave Clark's RIO.

Each virtual queue can have its own drop parameters specified.
FIXME: get Jamal or Werner to tell us more
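The tc syntax for it, at least, is documented in iproute2; a minimal, untuned sketch (all numbers are illustrative):

# create the GRED table: 4 virtual queues (DPs), number 1 is the default,
# 'grio' enables priority buffering between the virtual queues
tc qdisc add dev eth0 root handle 1: gred setup DPs 4 default 1 grio
# then give virtual queue 1 its own RED drop parameters
tc qdisc change dev eth0 root gred limit 60KB min 15KB max 45KB \
  burst 20 avpkt 1000 bandwidth 10Mbit DP 1 probability 0.02 prio 2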
This qdisc is not included in the standard kernels but can be downloaded from the WRR project page. Currently the qdisc has only been tested with Linux 2.2 kernels, but it will probably work with 2.4/2.5 kernels too.
The WRR qdisc distributes bandwidth between its classes using the weighted round robin scheme. That is, like the CBQ qdisc, it contains classes into which arbitrary qdiscs can be plugged. All classes with sufficient demand get bandwidth proportional to the weights associated with them. The weights can be set manually using the tc program, but they can also be made to decrease automatically for classes transferring large amounts of data.

The qdisc has a built-in classifier which assigns packets coming from or sent to different machines to different classes. Either the MAC or IP address can be used, matched on either source or destination. The MAC address can only be used when the Linux box is acting as an ethernet bridge, however. The classes are automatically assigned to machines based on the packets seen.
The qdisc can be very useful at sites such as dorms where a lot of unrelated individuals share an Internet connection. A set of scripts setting up a relevant behavior for such a site is a central part of the WRR distribution.
First we give each site (or customer) its own IP address on the interface:

# ip address add 188.177.166.1 dev eth0
# ip address add 188.177.166.2 dev eth0
We first attach a CBQ qdisc to eth0:
# tc qdisc add dev eth0 root handle 1: cbq bandwidth 10Mbit cell 8 avpkt 1000 \
  mpu 64
We then create classes for our customers:
# tc class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth 10Mbit rate \
  2Mbit avpkt 1000 prio 5 bounded isolated allot 1514 weight 1 maxburst 21
# tc class add dev eth0 parent 1:0 classid 1:2 cbq bandwidth 10Mbit rate \
  5Mbit avpkt 1000 prio 5 bounded isolated allot 1514 weight 1 maxburst 21
Then we add filters for our two classes:
##FIXME: Why this line, what does it do?, what is a divisor?:
##FIXME: A divisor has something to do with a hash table, and the number of
##       buckets - ahu
# tc filter add dev eth0 parent 1:0 protocol ip prio 5 handle 1: u32 divisor 1
# tc filter add dev eth0 parent 1:0 prio 5 u32 match ip src 188.177.166.1 flowid 1:1
# tc filter add dev eth0 parent 1:0 prio 5 u32 match ip src 188.177.166.2 flowid 1:2
FIXME: why no token bucket filter? is there a default pfifo_fast fallback somewhere?
If you want to protect an entire network, skip this script, which is best suited for a single host.
#! /bin/sh -x
#
# sample script on using the ingress capabilities
# this script shows how one can rate limit incoming SYNs
# Useful for TCP-SYN attack protection. You can use
# iptables to add more powerful matches to the SYN rule
# (eg also match on the subnet)
#
# path to various utilities;
# change to reflect yours.
#
TC=/sbin/tc
IP=/sbin/ip
IPTABLES=/sbin/iptables
INDEV=eth2
#
# tag all incoming SYN packets through $INDEV as mark value 1
############################################################
$IPTABLES -A PREROUTING -i $INDEV -t mangle -p tcp --syn \
  -j MARK --set-mark 1
############################################################
#
# install the ingress qdisc on the ingress interface
############################################################
$TC qdisc add dev $INDEV handle ffff: ingress
############################################################
#
# SYN packets are 40 bytes (320 bits) so three SYNs equals
# 960 bits (approximately 1kbit); so below we rate limit
# the incoming SYNs to 3/sec (not very useful really; but
# serves to show the point - JHS
############################################################
$TC filter add dev $INDEV parent ffff: protocol ip prio 50 handle 1 fw \
  police rate 1kbit burst 40 mtu 9k drop flowid :1
############################################################
#
echo "---- qdisc parameters Ingress  ----------"
$TC qdisc ls dev $INDEV
echo "---- Class parameters Ingress  ----------"
$TC class ls dev $INDEV
echo "---- filter parameters Ingress ----------"
$TC filter ls dev $INDEV parent ffff:

# deleting the ingress qdisc
#$TC qdisc del dev $INDEV ingress
Rate limiting goes much as shown earlier. To refresh your memory, our ASCIIgram again:
[The Internet] ---<E3, T3, whatever>--- [Linux router] --- [Office+ISP]
                                      eth1          eth0
We first set up the prerequisite parts:
# tc qdisc add dev eth0 root handle 10: cbq bandwidth 10Mbit avpkt 1000
# tc class add dev eth0 parent 10:0 classid 10:1 cbq bandwidth 10Mbit rate \
  10Mbit allot 1514 prio 5 maxburst 20 avpkt 1000
# tc class add dev eth0 parent 10:1 classid 10:100 cbq bandwidth 10Mbit rate \
  100Kbit allot 1514 weight 800Kbit prio 5 maxburst 20 avpkt 250 \
  bounded
This limits the class to 100Kbit. Now we need a filter to assign ICMP traffic to it:
# tc filter add dev eth0 parent 10:0 protocol ip prio 100 u32 \
  match ip protocol 1 0xFF flowid 10:100
We blatantly adapt from the (soon to be obsolete) ipchains HOWTO:
Especially the "Minimum Delay" is important for me. I switch it on for "interactive" packets in my upstream (Linux) router. I'm behind a 33k6 modem link. Linux prioritizes packets in 3 queues. This way I get acceptable interactive performance while doing bulk downloads at the same time.
# iptables -A PREROUTING -t mangle -p tcp --sport telnet \
  -j TOS --set-tos Minimize-Delay
# iptables -A PREROUTING -t mangle -p tcp --sport ftp \
  -j TOS --set-tos Minimize-Delay
# iptables -A PREROUTING -t mangle -p tcp --sport ftp-data \
  -j TOS --set-tos Maximize-Throughput
The same, for traffic originating on the machine itself (the OUTPUT chain sees locally generated packets):

# iptables -A OUTPUT -t mangle -p tcp --dport telnet \
  -j TOS --set-tos Minimize-Delay
# iptables -A OUTPUT -t mangle -p tcp --dport ftp \
  -j TOS --set-tos Minimize-Delay
# iptables -A OUTPUT -t mangle -p tcp --dport ftp-data \
  -j TOS --set-tos Maximize-Throughput
This section was sent in by reader Ram Narula from Internet for Education (Thailand).
|----------------|
| Implementation |
|----------------|

Addresses used:
10.0.0.1        naret (NetFilter server)
10.0.0.2        silom (Squid server)
10.0.0.3        donmuang (Router connected to the Internet)
10.0.0.4        kaosarn (other server on network)
10.0.0.5        RAS
10.0.0.0/24     main network
10.0.0.0/19     total network

|---------------|
|Network diagram|
|---------------|

Internet
|
donmuang
|
------------hub/switch----------
|         |        |       |
naret   silom   kaosarn  RAS etc.
(All servers on my network had 10.0.0.1 as the default gateway, which was the former IP address of the donmuang router, so I changed the IP address of donmuang to 10.0.0.3 and gave naret the IP address 10.0.0.1.)
Silom
-----
-setup squid and ipchains
Set up the Squid server on silom and make sure it supports transparent caching/proxying. The default port is usually 3128, so all traffic for port 80 has to be redirected to port 3128 locally. This can be done using ipchains as follows:
silom# ipchains -N allow1
silom# ipchains -A allow1 -p TCP -s 10.0.0.0/19 -d 0/0 80 -j REDIRECT 3128
silom# ipchains -I input -j allow1
Or, in netfilter lingo:
silom# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
(note: you might have other entries as well)
For more information on setting up the Squid server, please refer to the Squid FAQ page at http://squid.nlanr.net.
Make sure IP forwarding is enabled on this server, and that its default gateway is the donmuang router (NOT naret).
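On most systems, enabling forwarding boils down to:

silom# echo 1 > /proc/sys/net/ipv4/ip_forward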
Naret
-----
-setup iptables and iproute2
-disable icmp REDIRECT messages (if needed)
"Mark" packets of destination port 80 with value 2
naret# iptables -A PREROUTING -i eth0 -t mangle -p tcp --dport 80 \
       -j MARK --set-mark 2
Set up iproute2 so it routes packets with mark 2 to silom:
naret# echo 202 www.out >> /etc/iproute2/rt_tables
naret# ip rule add fwmark 2 table www.out
naret# ip route add default via 10.0.0.2 dev eth0 table www.out
naret# ip route flush cache
If donmuang and naret are on the same subnet, naret should not send out ICMP REDIRECT messages. In this case they are, so ICMP REDIRECTs have to be disabled:
naret# echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
naret# echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
naret# echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects
The setup is complete; now verify the configuration:
On naret:

naret# iptables -t mangle -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source        destination
MARK       tcp  --  anywhere      anywhere      tcp dpt:www MARK set 0x2

Chain OUTPUT (policy ACCEPT)
target     prot opt source        destination

naret# ip rule ls
0:      from all lookup local
32765:  from all fwmark 2 lookup www.out
32766:  from all lookup main
32767:  from all lookup default

naret# ip route list table www.out
default via 203.114.224.8 dev eth0

naret# ip route
10.0.0.1 dev eth0  scope link
10.0.0.0/24 dev eth0  proto kernel  scope link  src 10.0.0.1
127.0.0.0/8 dev lo  scope link
default via 10.0.0.3 dev eth0

(make sure silom belongs to one of the above lines, in this case
it's the line with 10.0.0.0/24)

|------|
|-DONE-|
|------|
|-----------------------------------------|
|Traffic flow diagram after implementation|
|-----------------------------------------|

                          INTERNET
                            /\
                            ||
                            \/
 -----------------donmuang router---------------------
        /\                    /\                ||
        ||                    ||                ||
        ||                    \/                ||
      naret                  silom              ||
        *destination port 80 traffic=========>(cache)
        /\                    ||                ||
        ||                    \/                \/
        \\===================================kaosarn, RAS, etc.
Note that the network is asymmetric, as there is one extra hop on the general outgoing path.
Here is a run-down of packets traversing the network from kaosarn to and from the Internet.

For web/http traffic:
kaosarn http request -> naret -> silom -> donmuang -> Internet
http replies from Internet -> donmuang -> silom -> kaosarn

For non-web/http requests (eg. telnet):
kaosarn outgoing data -> naret -> donmuang -> Internet
incoming data from Internet -> donmuang -> kaosarn
This process is called 'Path MTU Discovery', where MTU stands for 'Maximum Transfer Unit.'
Consider the following problem: I set the MTU/MRU of my leased line running ppp to 296, because it's only 33k6 and I cannot influence the queueing on the other side. At 296, the response to a keypress arrives within a reasonable time frame.

And on my side I have a masquerading router running (of course) Linux.
Recently I split 'server' and 'router' so most applications are run on a different machine than the routing happens on.
I then had trouble logging into IRC. Big panic! Some digging revealed that I was connected to IRC, and even showed up as 'connected' on the server, but I did not receive the motd. I checked what could be wrong and remembered that I had already had trouble reaching certain websites, trouble which was related to the MTU: I had no problem reaching them when the MTU was 1500, the problem only appeared when the MTU was set to 296. Since IRC servers block almost every kind of traffic not needed for their immediate operation, they also block ICMP, and that breaks Path MTU Discovery: the 'fragmentation needed' messages never come back, so full-sized packets silently vanish.
I managed to convince the operators of a webserver that this was the cause of their problem, but the IRC server operators were not going to fix it.
So, I had to make sure outgoing masqueraded traffic started with the lower mtu of the outside link. But I want local ethernet traffic to have the normal mtu (for things like nfs traffic).
Solution:
ip route add default via 10.0.0.1 mtu 296

(10.0.0.1 being the default gateway, the inside address of the masquerading router)
In general, it is possible to override PMTU Discovery by setting specific routes. For example, if only a certain subnet is giving problems, this should help:
ip route add 195.96.96.0/24 via 10.0.0.1 mtu 1000
If such routes are not feasible, you can resort to MSS clamping: rewriting the Maximum Segment Size option of passing TCP SYN packets, so that sessions never attempt to send segments larger than the path can carry. This rule clamps the MSS to the path MTU:

# iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
Alternatively, to set the MSS to a fixed value:

# iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 128
I attempted to create the holy grail:
Even though http is 'bulk' traffic, other traffic should not drown it out too much.
This is a much observed phenomenon where upstream traffic simply destroys download speed.
The next section explains in depth what causes the delays, and how we can fix them. You can safely skip it and head straight for the script if you don't care how the magic is performed.
Baseline latency:
round-trip min/avg/max = 14.4/17.1/21.7 ms

Without traffic conditioner, while downloading:
round-trip min/avg/max = 560.9/573.6/586.4 ms

Without traffic conditioner, while uploading:
round-trip min/avg/max = 2041.4/2332.1/2427.6 ms

With conditioner, during 220kbit/s upload:
round-trip min/avg/max = 15.7/51.8/79.9 ms

With conditioner, during 850kbit/s download:
round-trip min/avg/max = 20.4/46.9/74.0 ms

When uploading, downloads proceed at ~80% of the available speed. Uploads
at around 90%. Latency then jumps to 850 ms, still figuring out why.
Downstream traffic is policed using a tc filter containing a Token Bucket Filter.
#!/bin/bash

# The Ultimate Setup For Your Internet Connection At Home
#
#
# Set the following values to somewhat less than your actual download
# and uplink speed. In kilobits
DOWNLINK=800
UPLINK=220
DEV=ppp0

# clean existing down- and uplink qdiscs, hide errors
tc qdisc del dev $DEV root    2> /dev/null > /dev/null
tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null

###### uplink

# install root CBQ

tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 10mbit

# shape everything at $UPLINK speed - this prevents huge queues in your
# DSL modem which destroy latency:
# main class

tc class add dev $DEV parent 1: classid 1:1 cbq rate ${UPLINK}kbit \
   allot 1500 prio 5 bounded isolated

# high prio class 1:10:

tc class add dev $DEV parent 1:1 classid 1:10 cbq rate ${UPLINK}kbit \
   allot 1600 prio 1 avpkt 1000

# bulk and default class 1:20 - gets slightly less traffic,
# and a lower priority:

tc class add dev $DEV parent 1:1 classid 1:20 cbq rate $[9*$UPLINK/10]kbit \
   allot 1600 prio 2 avpkt 1000

# both get Stochastic Fairness:
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10

# start filters
# TOS Minimum Delay (ssh, NOT scp) in 1:10:
tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
   match ip tos 0x10 0xff flowid 1:10

# ICMP (ip protocol 1) in the interactive class 1:10 so we
# can do measurements & impress our friends:
tc filter add dev $DEV parent 1:0 protocol ip prio 11 u32 \
   match ip protocol 1 0xff flowid 1:10

# To speed up downloads while an upload is going on, put ACK packets in
# the interactive class:

tc filter add dev $DEV parent 1: protocol ip prio 12 u32 \
   match ip protocol 6 0xff \
   match u8 0x05 0x0f at 0 \
   match u16 0x0000 0xffc0 at 2 \
   match u8 0x10 0xff at 33 \
   flowid 1:10

# rest is 'non-interactive' ie 'bulk' and ends up in 1:20

tc filter add dev $DEV parent 1: protocol ip prio 13 u32 \
   match ip dst 0.0.0.0/0 flowid 1:20

########## downlink #############
# slow downloads down to somewhat less than the real speed to prevent
# queuing at our ISP. Tune to see how high you can set it.
# ISPs tend to have *huge* queues to make sure big downloads are fast
#
# attach ingress policer:

tc qdisc add dev $DEV handle ffff: ingress

# filter *everything* to it (0.0.0.0/0), drop everything that's
# coming in too fast:

tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
   0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1
If the last two lines give an error, update your tc tool to a newer version!
#!/bin/bash

# The Ultimate Setup For Your Internet Connection At Home
#
#
# Set the following values to somewhat less than your actual download
# and uplink speed. In kilobits
DOWNLINK=800
UPLINK=220
DEV=ppp0

# clean existing down- and uplink qdiscs, hide errors
tc qdisc del dev $DEV root    2> /dev/null > /dev/null
tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null

###### uplink

# install root HTB, point default traffic to 1:20:

tc qdisc add dev $DEV root handle 1: htb default 20

# shape everything at $UPLINK speed - this prevents huge queues in your
# DSL modem which destroy latency:

tc class add dev $DEV parent 1: classid 1:1 htb rate ${UPLINK}kbit burst 6k

# high prio class 1:10:

tc class add dev $DEV parent 1:1 classid 1:10 htb rate ${UPLINK}kbit \
   burst 6k prio 1

# bulk & default class 1:20 - gets slightly less traffic,
# and a lower priority:

tc class add dev $DEV parent 1:1 classid 1:20 htb rate $[9*$UPLINK/10]kbit \
   burst 6k prio 2

# both get Stochastic Fairness:
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10

# TOS Minimum Delay (ssh, NOT scp) in 1:10:
tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
   match ip tos 0x10 0xff flowid 1:10

# ICMP (ip protocol 1) in the interactive class 1:10 so we
# can do measurements & impress our friends:
tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
   match ip protocol 1 0xff flowid 1:10

# To speed up downloads while an upload is going on, put ACK packets in
# the interactive class:

tc filter add dev $DEV parent 1: protocol ip prio 10 u32 \
   match ip protocol 6 0xff \
   match u8 0x05 0x0f at 0 \
   match u16 0x0000 0xffc0 at 2 \
   match u8 0x10 0xff at 33 \
   flowid 1:10

# rest is 'non-interactive' ie 'bulk' and ends up in 1:20

########## downlink #############
# slow downloads down to somewhat less than the real speed to prevent
# queuing at our ISP. Tune to see how high you can set it.
# ISPs tend to have *huge* queues to make sure big downloads are fast
#
# attach ingress policer:

tc qdisc add dev $DEV handle ffff: ingress

# filter *everything* to it (0.0.0.0/0), drop everything that's
# coming in too fast:

tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
   0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1
If you want this script to be run by ppp on connect, copy it to /etc/ppp/ip-up.d.
If the last two lines give an error, update your tc tool to a newer version!
This three line script does the trick:
tc qdisc add dev $DEV root handle 1: cbq avpkt 1000 bandwidth 10mbit

tc class add dev $DEV parent 1: classid 1:1 cbq rate 512kbit \
   allot 1500 prio 5 bounded isolated

tc filter add dev $DEV parent 1: protocol ip prio 16 u32 \
   match ip dst 195.96.96.97 flowid 1:1
The second line creates a 512kbit class with some reasonable defaults. For details, see the cbq manpages and Chapter 9.
The last line tells which traffic should go to the shaped class. Traffic not matched by this rule is NOT shaped. To make more complicated matches (subnets, source ports, destination ports), see Section 9.6.2.
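For instance, a sketch of a more complicated match (subnet and port are examples of ours), shaping only web traffic headed for one subnet:

tc filter add dev $DEV parent 1: protocol ip prio 16 u32 \
   match ip dst 195.96.96.0/24 \
   match ip dport 80 0xffff \
   flowid 1:1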
If you changed anything and want to reload the script, execute 'tc qdisc del dev $DEV root' to clean up your existing configuration.
The script can further be improved by adding a last optional line 'tc qdisc add dev $DEV parent 1:1 sfq perturb 10'. See Section 9.2.3 for details on what this does.
The Linux 2.4/2.5 bridge is documented on this page.
Assign an IP address to both interfaces, the 'left' and the 'right' one
Create routes so your machine knows which hosts reside on the left, and which on the right
Also, do not forget to turn on the ip_forwarding flag! When converting from a true bridge, you may find that this flag was turned off as it is not needed when bridging.
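What makes this act as a pseudo-bridge rather than a plain router is proxy-ARP: the box answers ARP queries on behalf of the hosts on the other side. A minimal sketch of all the steps (addresses, netmasks and interface names are our own assumptions):

# the router owns 10.0.3.1 on both legs of the split subnet
ip address add 10.0.3.1/24 dev eth0        # the 'left' interface
ip address add 10.0.3.1/24 dev eth1        # the 'right' interface
# tell the kernel which hosts live where
ip route add 10.0.3.5/32 dev eth1          # a host on the 'right'
# answer ARP queries on behalf of the other side
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/eth1/proxy_arp
# and turn on forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward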
Another thing you might note when converting is that you need to clear the arp cache of computers in the network - the arp cache might contain old pre-bridge hardware addresses which are no longer correct.
On a Cisco, this is done using the command 'clear arp-cache'; under Linux, use 'arp -d ip.address'. You can also wait for the cache to expire, which can take rather long.
You can speed this up using the wonderful 'arping' tool, which on many distributions is part of the 'iputils' package. Using 'arping' you can send out unsolicited ARP messages so as to update remote arp caches.
This is a very powerful technique that is also used by 'black hats' to subvert your routing!
Note: On Linux 2.4, you may need to execute 'echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind' before being able to send out unsolicited ARP messages!
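A sketch of such an update (address and interface are examples):

# send three unsolicited ARP replies for 10.0.0.1 via eth0, so neighbouring
# ARP caches learn the new hardware address
arping -c 3 -U -I eth0 10.0.0.1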
You may also discover that your network was misconfigured if you are (or were) in the habit of specifying routes without netmasks. To explain: some versions of route may have guessed your netmask correctly in the past, or guessed wrong without you noticing. When doing surgical routing as described above, it is *vital* that you check your netmasks!
Cisco Systems Designing large-scale IP Internetworks
For OSPF:
Moy, John T. "OSPF. The anatomy of an Internet routing protocol" Addison Wesley. Reading, MA. 1998.
Halabi has also written a good guide to OSPF routing design, but this appears to have been dropped from the Cisco web site.
For BGP:
Halabi, Bassam "Internet routing architectures" Cisco Press (New Riders Publishing). Indianapolis, IN. 1997.
also
Cisco Systems
Using the Border Gateway Protocol for interdomain routing
Although the examples are Cisco-specific, they are remarkably similar to the configuration language in Zebra :-)
VLANs are a very cool way to segregate your networks in a more virtual than physical way. Good information on VLANs can be found here. With this implementation, you can have your Linux box talk VLANs with machines like Cisco Catalyst, 3Com {Corebuilder, Netbuilder II, SuperStack II switch 630}, Extreme Networks Summit 48, and Foundry {ServerIronXL, FastIron}.
A great HOWTO about VLANs can be found here.
Update: has been included in the kernel as of 2.4.14 (perhaps 13).
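Configuration of the in-kernel implementation is done with its 'vconfig' userspace tool; a minimal sketch (VLAN id and address are examples of ours):

# create VLAN 5 on top of eth0; the resulting interface is named eth0.5
vconfig add eth0 5
# then configure it like any other interface
ip link set eth0.5 up
ip address add 10.0.5.1/24 dev eth0.5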
Alternative VLAN implementation for linux. This project was started out of disagreement with the 'established' VLAN project's architecture and coding style, resulting in a cleaner overall design.
These people are brilliant. The Linux Virtual Server is a highly scalable and highly available server built on a cluster of real servers, with the load balancer running on the Linux operating system. The architecture of the cluster is transparent to end users. End users only see a single virtual server.
In short: whatever you need to load balance, at whatever level of traffic, LVS will have a way of doing it. Some of their techniques are positively evil! For example, they let several machines have the same IP address on a segment, but turn off ARP on them. Only the LVS machine does ARP - it then decides which of the backend hosts should handle an incoming packet, and sends it directly to the right MAC address of the backend server. Outgoing traffic flows directly to the router, and not via the LVS machine, which therefore does not need to see your 5Gbit/s of content flowing to the world, and cannot become a bottleneck.
The LVS is implemented as a kernel patch in Linux 2.0 and 2.2, but as a Netfilter module in 2.4/2.5, so it does not need kernel patches! Their 2.4 support is still in early development, so beat on it and give feedback or send patches.
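To give a flavour of the configuration, a minimal ipvsadm sketch (all addresses are examples of ours) using the no-ARP trick described above:

# define a virtual HTTP service on the shared address, scheduled round-robin
ipvsadm -A -t 10.0.0.22:80 -s rr
# add two real servers; '-g' is direct routing: the director forwards by
# rewriting the MAC address, so replies bypass it completely
ipvsadm -a -t 10.0.0.22:80 -r 10.0.0.10 -g
ipvsadm -a -t 10.0.0.22:80 -r 10.0.0.11 -g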
Configuring CBQ can be a bit daunting, especially if all you want to do is shape some computers behind a router. CBQ.init can help you configure Linux with a simplified syntax.
For example, if you want all computers in your 192.168.1.0/24 subnet (on 10mbit eth1) to be limited to 28kbit/s download speed, put this in the CBQ.init configuration file:
DEVICE=eth1,10Mbit,1Mbit
RATE=28Kbit
WEIGHT=2Kbit
PRIO=5
RULE=192.168.1.0/24
By all means use this program if the 'how and why' don't interest you. We're using CBQ.init in production and it works very well. It can even do some more advanced things, like time dependent shaping. The documentation is embedded in the script, which explains why you can't find a README.
Stephan Mueller (smueller@chronox.de) wrote two useful scripts, 'limit.conn' and 'shaper'. The first one allows you to easily throttle a single download session, like this:
# limit.conn -s SERVERIP -p SERVERPORT -l LIMIT
It works on Linux 2.2 and 2.4/2.5.
The second script is more complicated, and can be used to make lots of different queues based on iptables rules, which are used to mark packets which are then shaped.
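The underlying mechanism is the same 'fw' classifier used in the SYN-limiting script earlier: a hypothetical sketch which marks outgoing SMTP traffic and steers it to a (previously created) class 1:10:

# mark outgoing SMTP packets with '1' (all numbers are examples)
iptables -t mangle -A OUTPUT -p tcp --dport 25 -j MARK --set-mark 1
# let the fw classifier send packets carrying mark 1 to class 1:10
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 1 fw flowid 1:10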
This is purely for redundancy. Two machines, each with their own IP and MAC address, together create a third IP address and MAC address, which is virtual. Originally intended purely for routers, which need constant MAC addresses, it also works for other servers.
The beauty of this approach is the incredibly easy configuration. No kernel compiling or patching required, all userspace.
Just run this on all machines participating in a service:
# vrrpd -i eth0 -v 50 10.0.0.22
And you are in business! 10.0.0.22 is now carried by one of your servers, probably the first one to run the vrrp daemon. Now disconnect that computer from the network and very rapidly one of the other computers will assume the 10.0.0.22 address, as well as the MAC address.
I tried this over here and had it up and running in 1 minute. For some strange reason it decided to drop my default gateway, but the -n flag prevented that.
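So a more careful invocation, with the -n flag just mentioned, presumably becomes:

# vrrpd -i eth0 -v 50 -n 10.0.0.22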
This is a 'live' fail over:
64 bytes from 10.0.0.22: icmp_seq=3 ttl=255 time=0.2 ms
64 bytes from 10.0.0.22: icmp_seq=4 ttl=255 time=0.2 ms
64 bytes from 10.0.0.22: icmp_seq=5 ttl=255 time=16.8 ms
64 bytes from 10.0.0.22: icmp_seq=6 ttl=255 time=1.8 ms
64 bytes from 10.0.0.22: icmp_seq=7 ttl=255 time=1.7 ms
Not *one* ping packet was lost! Just after packet 4, I disconnected my P200 from the network, and my 486 took over, which you can see from the higher latency.
Contains lots of technical information, comments from the kernel
Slides by Jamal Hadi Salim, one of the authors of Linux traffic control
HTML version of Alexey's LaTeX documentation - explains part of iproute2 in great detail
Sally Floyd has a good page on CBQ, including her original papers. None of it is Linux specific, but it does a fair job discussing the theory and uses of CBQ. Very technical stuff, but good reading for those so inclined.
This document by Werner Almesberger, Jamal Hadi Salim and Alexey Kuznetsov describes DiffServ facilities in the Linux kernel, amongst which are TBF, GRED, the DSMARK qdisc and the tcindex classifier.
Yet another HOWTO, this time in Polish! You can copy/paste the command lines, however; they work just the same in every language. The author is cooperating with us and may soon author sections of this HOWTO.
From the helpful folks at Cisco, who have the laudable habit of putting their documentation online. Cisco syntax is different, but the concepts are the same, except that we can do more, and without routers that cost as much as cars :-)
Stef Coene is busy convincing his boss to sell Linux support, and so he is experimenting a lot, especially with managing bandwidth. His site has a lot of practical information, examples, tests and also points out some CBQ/tc bugs.
Required reading if you truly want to understand TCP/IP. Entertaining as well.
Joe Van Andel
Michael T. Babcock
Christopher Barton
Ard van Breemen
Ron Brinker
Łukasz Bromirski
Lennert Buytenhek
Esteve Camps
Stef Coene
Don Cohen
Jonathan Corbet
Gerry N5JXS Creager
Marco Davids
Jonathan Day
Martin aka devik Devera
Stephan "Kobold" Gehring
Jacek Glinkowski
Andrea Glorioso
Nadeem Hasan
Erik Hensema
Vik Heyndrickx
Spauldo Da Hippie
Koos van den Hout
Stefan Huelbrock <shuelbrock%datasystems.de>
Alexander W. Janssen <yalla%ynfonatic.de>
Gareth John <gdjohn%zepler.org>
Martin Josefsson <gandalf%wlug.westbo.se>
Andi Kleen <ak%suse.de>
Andreas J. Koenig <andreas.koenig%anima.de>
Pawel Krawczyk <kravietz%alfa.ceti.pl>
Amit Kucheria <amitk@ittc.ku.edu>
Edmund Lau <edlau%ucf.ics.uci.edu>
Philippe Latu <philippe.latu%linux-france.org>
Arthur van Leeuwen <arthurvl%sci.kun.nl>
Jason Lunz <j@cc.gatech.edu>
Stuart Lynne <sl@fireplug.net>
Alexey Mahotkin <alexm@formulabez.ru>
Predrag Malicevic <pmalic@ieee.org>
Patrick McHardy <kaber@trash.net>
Andreas Mohr <andi%lisas.de>
Andrew Morton <akpm@zip.com.au>
Wim van der Most
Stephan Mueller <smueller@chronox.de>
Togan Muftuoglu <toganm%yahoo.com>
Chris Murray <cmurray@stargate.ca>
Patrick Nagelschmidt <dto%gmx.net>
Ram Narula <ram@princess1.net>
Jorge Novo <jnovo@educanet.net>
Patrik <ph@kurd.nu>
Pál Osgyány <oplab%westel900.net>
Lutz Preßler <Lutz.Pressler%SerNet.DE>
Jason Pyeron <jason%pyeron.com>
Rusty Russell <rusty%rustcorp.com.au>
Mihai RUSU <dizzy%roedu.net>
Jamal Hadi Salim <hadi%cyberus.ca>
David Sauer <davids%penguin.cz>
Sheharyar Suleman Shaikh <sss23@drexel.edu>
Stewart Shields <MourningBlade%bigfoot.com>
Nick Silberstein <nhsilber%yahoo.com>
Konrads Smelkov <konrads@interbaltika.com>
William Stearns
Andreas Steinmetz <ast%domdv.de>
Jason Tackaberry <tack@linux.com>
Charles Tassell <ctassell%isn.net>
Glen Turner <glen.turner%aarnet.edu.au>
Tea Sponsor: Eric Veldhuyzen <eric%terra.nu>
Song Wang <wsong@ece.uci.edu>
Lazar Yanackiev