1. How Much Bandwidth Do You Need?
Server bandwidth is one of the most important factors in the performance of your web site. The math to determine what bandwidth you need is, in essence, very simple:
hits/second * average size of a hit in bits = bits/second
That is, you need an estimate of the number of hits per second you want to be able to serve, and the average size of one of those hits. From those two numbers, you know what sort of network bandwidth you need.
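Here is a minimal sketch of that formula in code; the traffic numbers are assumptions chosen purely for illustration:

    # hits/second * average size of a hit in bits = bits/second
    hits_per_second = 25           # assumed target load
    avg_hit_bytes = 10_000         # assumed average hit size (decimal units)

    required_bps = hits_per_second * avg_hit_bytes * 8
    print(f"{required_bps / 1e6:.1f} Mbps needed")   # 2.0 Mbps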
1.1. Latencies Are More Important than Bandwidth
It has become clear that, once users are beyond ordinary dial-up modems, the number of packets is a more significant determinant of web performance than raw bandwidth. This is because each packet must be acknowledged, and while bandwidth keeps increasing, the speed of light is fixed. It may take 20 milliseconds for a 1500-byte packet to cross the network to a PC on a DSL line, but only 12 milliseconds to transmit it over the DSL link from the network into the PC. It will take another 20 milliseconds for the acknowledgment to get back to the sender. So in this case the 40 milliseconds of round-trip latency is more than three times as significant as the 12 milliseconds of transmission time, and latency will only get more important as bandwidth increases.
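To make the arithmetic concrete, here is a minimal sketch of that comparison; the 1 Mbps DSL rate is an assumption chosen so that a 1500-byte packet takes 12 milliseconds to serialize:

    # Compare round-trip latency with transmission (serialization) time.
    PACKET_BITS = 1500 * 8         # one 1500-byte packet
    DSL_BPS = 1_000_000            # assumed 1 Mbps DSL link
    ONE_WAY_LATENCY_MS = 20        # one-way network latency, from the text

    transmit_ms = PACKET_BITS / DSL_BPS * 1000    # 12 ms on the wire
    round_trip_ms = 2 * ONE_WAY_LATENCY_MS        # 40 ms: packet out, ack back

    print(f"transmission: {transmit_ms:.0f} ms")
    print(f"round trip:   {round_trip_ms} ms")
    print(f"latency is {round_trip_ms / transmit_ms:.1f}x transmission time")  # 3.3x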
This is why it is so important to keep the number of individual items on a page to a minimum. Still, because most browsers are multithreaded, some latencies can happen in parallel. It turns out through experimentation that the best number of embedded images on a page is about the same as the number of threads the browser uses. For example, Netscape uses four threads, so you may get the best performance by breaking a single large image into four smaller ones, so that acknowledgments can proceed in parallel rather than strictly serially. But this holds only when the browser uses HTTP persistent connections ("keepalives") to avoid the overhead of setting up a TCP connection for each of the four smaller images.
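As a rough sketch of why this helps, the following toy calculation compares fetching four images serially over one connection with fetching them over four parallel threads; it models each image as a single round trip and ignores bandwidth entirely, which oversimplifies but shows the shape of the effect:

    # Toy model: each image costs one 40 ms round trip; bandwidth is ignored.
    RTT_MS = 40
    IMAGES = 4

    serial_ms = IMAGES * RTT_MS    # one connection: round trips add up
    parallel_ms = RTT_MS           # four threads: round trips overlap

    print(serial_ms)     # 160 ms
    print(parallel_ms)   # 40 ms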
Here are some numbers to help think about latency. While the latency from CPU to memory is on the order of 100 nanoseconds, LAN latency is usually about 1 millisecond, or 10,000 times slower. Going across a campus with its own network, latencies are about 5 milliseconds. Going across the Internet, latencies range from 10 to 500 milliseconds. Satellite links can take a whole second or more.
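Ratios spanning six or seven orders of magnitude are easy to lose track of; the comparison below simply restates the figures above as multiples of memory latency:

    # Latency figures from the text, in seconds.
    latencies = {
        "CPU to memory":  100e-9,   # ~100 nanoseconds
        "LAN":            1e-3,     # ~1 millisecond
        "campus network": 5e-3,     # ~5 milliseconds
        "Internet":       500e-3,   # up to ~500 milliseconds
        "satellite":      1.0,      # a second or more
    }

    memory = latencies["CPU to memory"]
    for name, seconds in latencies.items():
        print(f"{name}: {seconds / memory:,.0f}x memory latency")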
1.2. Thinking About Bandwidth
You can get some perspective when thinking about bandwidth from the following table. Note that the table uses the decimal million (1,000,000), not "mega," which is 2^20 = 1,048,576.
The table ignores latency, which varies even from bit to bit and can be huge, especially upon startup of any component. If you're a bit romantic, you can imagine a blurry picture of virtual reality coming into focus over the years in this table, from short symbol transmissions twenty years ago, to the limit of human audio perception today, to the limit of visual perception in the coming years.
1.3. Estimating Web Server Network Bandwidth
The following table displays an estimate of the number of hits per second of a given size (y-axis) that a given amount of bandwidth (x-axis) can handle, with a 30 percent deduction for TCP/IP and other network overhead. Numbers are truncated to integers, so 0 means "less than one per second" rather than truly zero.
You can use the table to estimate, for example, how many 4K files per second your T1 line can handle. The answer is 33. Keep in mind that the table refers only to network capacity; it does not say whether the load was generated by static HTML or CGIs. That is, network capacity says nothing about the capacity of your disk or server CPU, or of a database behind the web server.
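The arithmetic behind such an estimate can be sketched as follows, assuming the table simply divides usable bandwidth by hit size and uses decimal units (so 4K = 4,000 bytes):

    # Hits/second a link can carry, after a 30% deduction for TCP/IP
    # and other network overhead. Decimal units throughout.
    def hits_per_second(link_bps, hit_bytes, overhead=0.30):
        usable_bps = link_bps * (1 - overhead)
        return int(usable_bps / (hit_bytes * 8))   # truncate, as the table does

    # The example from the text: 4K files over a T1 line (1.544 Mbps).
    print(hits_per_second(1_544_000, 4_000))   # 33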
In fact, the capacity of a server to fill a network connection is distinctly nonlinear: smaller packets impose a larger overhead in terms of interrupts and packet-header processing, so sending data as two packets requires more of the server than combining it into one larger packet.
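One way to see this is with a toy cost model in which each packet carries a fixed per-packet cost on top of a per-byte cost; both constants below are invented purely for illustration:

    # Toy cost model (invented constants): each packet costs a fixed amount
    # for interrupt and header processing, plus a per-byte copying cost.
    PER_PACKET_US = 50      # assumed fixed cost per packet, in microseconds
    PER_BYTE_US = 0.01      # assumed cost per byte, in microseconds

    def server_cost_us(packet_sizes):
        return sum(PER_PACKET_US + size * PER_BYTE_US for size in packet_sizes)

    print(server_cost_us([750, 750]))   # two small packets: 115.0 us
    print(server_cost_us([1500]))       # one combined packet: 65.0 us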
The table is also a bit deceptive in that you will rarely see a smooth distribution of hits filling your network capacity. Rather, there will be peaks of three to four times the average rate per second and long troughs of no load at all.
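A back-of-the-envelope consequence: size the link for the peaks, not the average. The average rate and hit size below are assumed numbers; the peak factor comes from the text:

    # Provision for peak load. Average rate and hit size are assumptions;
    # the 4x peak factor is the high end of the three-to-four-times range.
    avg_hits_per_sec = 10
    peak_factor = 4
    hit_bits = 4_000 * 8            # 4K hit, decimal units

    required_bps = avg_hits_per_sec * peak_factor * hit_bits
    print(f"{required_bps / 1e6:.2f} Mbps")   # 1.28 Mbps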
To scale any of these connection types, you can add network cards or modems until you reach the maximum number the server has room for. Then you can move up to the next connection type or to a bigger server. This is the easy way to do things: throwing more hardware into a single server. Scaling across multiple servers is typically more complicated, requiring load-balancing strategies.