Now that we know logic and decision making can be easily driven and encoded via electrons in transistors, we now have the means to make decisions on how to route information on a network. The basic operational concept of network-routed information relies on the unit known as a packet. A reasonable analogy is that a single packet is a combination of individual bits of information, let's call them letters, into a word. Stringing together several words (e.g. packets) makes a sentence. All your life you have serially processed verbal communication, waiting for the next packet to arrive. For instance, a packet stream might look like this:

Stop ... and ... beware ... that ... many ... angry ... cows ... aim ... to ... push ... you ... over ... that ... cliff

From your point of view, you cannot make a decision until you have received the entire packet stream, which contains the key word/packet CLIFF !!

Basic Packet Transmission:

Later on we will dissect the TCP/IP layers directly to show how the bitstream and headers are constructed. For now we just want to concentrate on basic packet transmission.

    TCP sends data serially, one packet at a time, in order, until the receiver has a full buffer. This buffer is called the receiver window or data window.

    Soon the data window becomes full - this is why it's important to control the transmission rate.

    When the data window is full, the receiver sends an ACK (acknowledgement) back to the sender so the process can start over. So TCP/IP is not strictly one-way - there is occasionally communication from the receiver back to the sender. However, this is an oversimplification of the actual process, which relies on overlapping packets as discussed below.
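The fill-the-window-then-ACK cycle described above can be sketched as a toy simulation. This is only a model of the simplified description here (real TCP slides the window rather than draining it all at once), and the function names are made up for illustration:

```python
# Toy model of the simplified send/ACK cycle: the sender streams packets,
# and each time the receiver window fills, the receiver ACKs and the
# buffered data is delivered. Real TCP uses a sliding window instead.

def transmit(data, window_size):
    """Send `data` one packet at a time; ACK whenever the window fills."""
    received = []   # data delivered to the receiving application
    window = []     # the receiver's buffer (the "data window")
    acks = 0
    for packet in data:
        window.append(packet)            # packet arrives at the receiver
        if len(window) == window_size:   # receiver window is full...
            received.extend(window)      # ...deliver the buffered data
            window = []
            acks += 1                    # ...and ACK back to the sender
    received.extend(window)              # flush any final partial window
    return received, acks

data = list(range(10))
received, acks = transmit(data, window_size=4)
print(received == data, acks)  # True 2 -- all delivered, two full-window ACKs
```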



The physical time it takes each packet to arrive at the receiver and/or the time it takes the ACK to come back is called latency (all you gamers out there constantly bitch about this...). The latency of a typical wired LAN is less than 10 milliseconds. The latency of a wireless LAN can be much higher; one reason is that packet loss is more probable. In the early days of wireless (around 2000), packet loss was a large issue. The last 15 years of evolution of wireless technology have greatly eased this problem. We will discuss packet loss in detail later.



Typically in a wide area network or WAN (like the public internet), the latencies are much higher, so each packet arrival and ACK have longer transaction times. This can drastically slow down overall network throughput. The Internet global traffic monitor provides a glimpse into the current performance of the global internet.



Typical WAN latency is about 100 milliseconds, or 10 times worse than LAN communication. Latency, however, is also affected by the actual protocol used to transmit the signal.
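A concrete consequence of latency: if the sender must wait one round trip per window of data, throughput is capped at roughly window size divided by round-trip time, no matter how fast the link is. A quick back-of-the-envelope calculation, assuming a 64 KB receive window (a common historical default; the exact size varies by system):

```python
# Throughput of a window-limited transfer is capped at window / RTT.
# Assumes a 64 KB receive window for illustration.

WINDOW_BYTES = 64 * 1024

def max_throughput_mbps(rtt_seconds):
    """Upper bound on throughput in megabits per second."""
    return WINDOW_BYTES * 8 / rtt_seconds / 1e6

print(f"LAN (10 ms RTT):  {max_throughput_mbps(0.010):.1f} Mbps")  # ~52 Mbps
print(f"WAN (100 ms RTT): {max_throughput_mbps(0.100):.1f} Mbps")  # ~5 Mbps
```

The same window that is plenty for a low-latency LAN starves a high-latency WAN path, which is exactly why WAN transactions feel so much slower.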

More severe than latency, however, is packet loss or data loss, which occurs on all networks.

How do you think packet loss should be handled? How do you handle it personally?

Loss is most typically caused by congestion, which is the overloading of any single point on the network between sender and receiver.

Because TCP/IP requires that all packets arrive in order (thus TCP/IP is not very smart), any single lost packet means that the entire contents of the receive buffer must be retransmitted. Sufficient loss can cause some TCP/IP-based applications to completely fail.
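Here is a toy illustration of why one lost packet is so expensive when everything must arrive in order: the sender restarts from the lost packet, so total transmissions grow well beyond the packet count. This is a simplified go-back-N style model (modern TCP with selective acknowledgments is considerably smarter):

```python
import random

# Toy in-order sender: on any loss, everything from the lost packet
# onward must be retransmitted, inflating the total number of sends.

def sends_needed(n_packets, loss_rate, seed=42):
    rng = random.Random(seed)   # fixed seed for a repeatable illustration
    sends, next_needed = 0, 0
    while next_needed < n_packets:
        for pkt in range(next_needed, n_packets):
            sends += 1
            if rng.random() < loss_rate:   # packet lost in transit...
                break                      # ...restart from this packet
            next_needed = pkt + 1          # accepted in order
    return sends

print(sends_needed(100, 0.0))   # no loss: exactly 100 sends
print(sends_needed(100, 0.02))  # 2% loss: at least 100, typically more
```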



In the bad old days of poor bandwidth (e.g. 1985-1993), FTP transport of data failed more often than it succeeded! The use of the ping command on any PC to any IP host will show packet loss statistics as well as latency. (e.g. ping -t comcast.net).



The more geographically separated two networks are, the higher the latency will be (due to finite travel times). Together, latency and loss can seriously compromise TCP/IP performance. This means that a typical pipeline (of bandwidth X) will have a throughput rate of 0.2 to 0.3 X. Not real good, but mostly unavoidable under TCP/IP transmission.
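The combined effect of latency and loss on TCP throughput is often estimated with the Mathis approximation: throughput is at most MSS / (RTT x sqrt(loss)), where MSS is the maximum segment size. A rough calculation, assuming a typical 1460-byte Ethernet segment:

```python
import math

# Mathis et al. approximation for TCP throughput under random loss:
#   throughput <= MSS / (RTT * sqrt(p))
# MSS = maximum segment size in bytes, p = packet loss probability.

def mathis_throughput_mbps(mss_bytes, rtt_s, loss):
    return mss_bytes * 8 / (rtt_s * math.sqrt(loss)) / 1e6

# Typical WAN path: 100 ms RTT, 1% loss, 1460-byte segments
print(f"{mathis_throughput_mbps(1460, 0.100, 0.01):.2f} Mbps")  # ~1.17 Mbps
```

Note how a path with plenty of raw bandwidth gets throttled to about a megabit per second once modest loss and WAN latency combine, consistent with the 0.2-0.3 X figure above.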

Is there any way to avoid this basic problem?

One attempt would be to use overlapping transfers, which is what TCP has always done, with varying degrees of efficiency. Overlapping transmission can help to overcome latency by having more packets in flight at any given time. That way the sender is not stuck waiting for an ACK.
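The benefit of keeping packets in flight can be seen with simple arithmetic: sending one packet per round trip costs one RTT per packet, while a window of W packets in flight costs roughly one RTT per W packets. This idealized sketch ignores transmission time and counts only round trips:

```python
# Idealized transfer time: one packet per RTT vs. W packets in flight.
# Counts only round trips; serialization time is ignored.

def transfer_time_s(n_packets, rtt_s, window=1):
    round_trips = -(-n_packets // window)  # ceiling division
    return round_trips * rtt_s

N, RTT = 1000, 0.100  # 1000 packets over a 100 ms WAN path
print(transfer_time_s(N, RTT, window=1))   # ~100 s, one RTT per packet
print(transfer_time_s(N, RTT, window=50))  # ~2 s, 50 packets in flight
```

Fifty packets in flight cuts the idealized transfer time by a factor of fifty, which is why pipelining matters so much on high-latency paths.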

However, with overlapping packet transmission, as you might expect, loss becomes an even more critical problem.

The best solution is to develop intelligent packet reconstruction protocols in which the data can be reconstructed via meta-data encoded into the packet stream. This means that no single packet of data has any particular impact on the overall reliability of the stream.



Think of it via this analogy.

If I wanted to send a box of cereal between two points, TCP/IP would send each grain of cereal, one piece at a time, bit by bit. If any grain is missing then the entire box is not correct and must be resent.

Meta-content or meta-tagging instead takes the box of cereal as an entity and develops a recipe for how to make it. It sends the recipe for the cereal, and how it is to be put in the box, over the network. This builds in redundancy, and any loss of packets (individual grains of cereal) along the way can be recreated from another group of packets that has already arrived or will arrive. As a result, you will always have the ability to recreate a box of cereal. Conceptually this approach is great; the difficulty lies in how best to construct and transmit the recipe.
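The "recipe" idea corresponds to what networking calls erasure coding: send a little redundant repair data so the receiver can rebuild a lost packet instead of asking for a retransmission. The simplest possible scheme XORs a group of packets into one parity packet. This is a toy sketch of the concept; real protocols use more powerful codes such as Reed-Solomon or fountain codes:

```python
# Toy erasure code: one XOR parity packet per group lets the receiver
# rebuild any single lost packet in that group without a retransmission.

def make_parity(packets):
    """XOR all packets (equal length) together into one parity packet."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(survivors, parity):
    """Rebuild the one missing packet from the survivors plus parity."""
    missing = parity
    for p in survivors:
        missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing

group = [b"corn", b"oats", b"rice"]   # three data packets of equal size
parity = make_parity(group)           # the "recipe" packet, sent alongside
# Suppose the middle packet (b"oats") is lost in transit...
rebuilt = recover([group[0], group[2]], parity)
print(rebuilt)  # b'oats' -- recreated from the packets that did arrive
```

Because XOR is its own inverse, combining the surviving packets with the parity packet regenerates whichever single packet went missing, at the cost of one extra packet per group.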

Moreover, these days someone is likely to steal your packets of cereal instructions to make their own, and so you also have to develop encryption to prevent packet theft. This adds a whole new layer to the entire process.