XMODEM
XMODEM is a simple file transfer protocol developed as a quick hack by Ward Christensen for use in his 1977 MODEM.ASM terminal program. It allowed users to transmit files between their computers when both sides used MODEM. Keith Petersen made a minor update to always turn on "quiet mode", and called the result XMODEM.[3][4]

XMODEM, like most file transfer protocols, breaks the original data up into a series of "packets" that are sent to the receiver, along with additional information allowing the receiver to determine whether each packet was correctly received. If an error is detected, the receiver requests that the packet be re-sent. A string of bad packets causes the transfer to abort.

XMODEM became extremely popular in the early bulletin board system (BBS) market, largely because it was simple to implement. It was also fairly inefficient, and as modem speeds increased this problem led to the development of a number of modified versions of XMODEM to improve performance or address other problems with the protocol.[4] Christensen believed his original XMODEM to be "the single most modified program in computing history".[5] Chuck Forsberg collected a number of common modifications into his YMODEM protocol, but poor implementation led to further fracturing before they were re-unified by his later ZMODEM protocol. ZMODEM became very popular, but never completely replaced XMODEM in the BBS market.

Packet structure

The original XMODEM used a 128-byte data packet, the block size used on CP/M floppy disks. The packet was prefixed by a simple 3-byte header containing a <SOH> character, a "block number" from 1 to 255, and the "inverse" block number, 255 minus the block number. Block numbering starts with 1 for the first block sent, not 0. The header was followed by the 128 bytes of data and then a single-byte checksum, the sum of all 128 data bytes in the packet modulo 256. The complete packet was thus 132 bytes long, carrying 128 bytes of payload data, for a total channel efficiency of about 97%.

The file was marked "complete" with an <EOT> character sent after the last block. This character was not in a packet, but sent alone as a single byte. Since the file length was not sent as part of the protocol, the last packet was padded out with a "known character" that could be dropped. In the original specification this defaulted to <SUB>, 26 decimal, which CP/M used as the end-of-file marker inside its own disk format. The standard suggested any character could be used for padding, but there was no way for it to be changed within the protocol itself; if an implementation changed the padding character, only clients using the same implementation would correctly interpret the new padding character.
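The layout is simple enough to show directly. The following is a minimal sketch of a packet builder based on the description above; the function name and the validation check are my own, and real implementations also had to handle block-number wrap-around on files longer than 255 blocks.

```python
SOH = 0x01  # start-of-header byte that opens every packet
SUB = 0x1A  # CP/M end-of-file marker, the default padding byte

def build_packet(block_num: int, payload: bytes) -> bytes:
    """Assemble one 132-byte XMODEM packet: <SOH>, block number,
    inverse block number, 128 data bytes, additive checksum."""
    if len(payload) > 128:
        raise ValueError("XMODEM packets carry at most 128 data bytes")
    blk = block_num & 0xFF                   # numbering starts at 1 and wraps
    data = payload.ljust(128, bytes([SUB]))  # pad a short final block
    checksum = sum(data) % 256               # sum of the data bytes, mod 256
    return bytes([blk, 255 - blk]) and bytes([SOH, blk, 255 - blk]) + data + bytes([checksum])
```

For example, build_packet(1, b"hello") yields a 132-byte packet whose first three bytes are 0x01, 0x01, 0xFE.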
Transfer details

Files were transferred one packet at a time. When received, the packet's checksum was calculated by the receiver and compared to the one received from the sender at the end of the packet. If the two matched, the receiver sent an <ACK> message back to the sender, which then sent the next packet in sequence. If there was a problem with the checksum, the receiver instead sent a <NAK>. If a <NAK> was received, the sender re-sent the packet,[4] and continued to try several times, normally ten, before aborting the transfer. A <NAK> was also sent if the receiver did not receive a valid packet within ten seconds while still expecting data due to the lack of an <EOT> character. A seven-second timeout was also used within a packet, guarding against dropped connections in mid-packet.

The block numbers were also examined in a simple way to check for errors. After receiving a packet successfully, the next packet should have a number one higher. If the receiver instead saw the same block number again, this was not considered serious; it implied that the <ACK> had not been received by the sender, which had then re-sent the packet. Any other packet number signalled that packets had been lost.

Transfers were receiver-driven; the transmitter would not send any data until an initial <NAK> was sent by the receiver. This was a logical outcome of the way the user interacted with the sending machine, which would be remotely located. The user would navigate to the requested file on the sending machine and then ask that machine to transfer it. Once this command was issued, the user would execute a command in their local software to start receiving. Since the delay between asking the remote system for the file and issuing a local command to receive was unknown, XMODEM allowed up to 90 seconds for the receiver to begin issuing requests for data packets.

Problems

Although XMODEM was robust enough for a journalist in 1982 to transmit stories from Pakistan to the United States with an Osborne 1 and acoustic coupler over poor-quality telephone lines,[6] the protocol had several flaws.

Minor problems

XMODEM was written for CP/M machines, and bears several marks of that operating system. Notably, files on CP/M were always multiples of 128 bytes, and their end was marked within a block with the <SUB> character. These characteristics were transplanted directly into XMODEM. However, other operating systems did not feature either of these peculiarities, and the widespread introduction of MS-DOS in the early 1980s led to XMODEM having to be updated to notice either a <EOT> or <EOF> as the end-of-file marker.

For some time it was suggested that sending a <CAN> character instead of an <ACK> or <NAK> should be supported in order to easily abort the transfer from the receiving end. Likewise, a <CAN> received in place of the <SOH> indicated the sender wished to cancel the transfer. However, this character could be easily "created" by simple noise-related corruption of what was meant to be an <ACK> or <NAK>. A double-<CAN> was proposed to avoid this problem, but it is not clear whether this was widely implemented.

Major problems

XMODEM was designed for simplicity, without much knowledge of other file transfer protocols, which were fairly rare anyway. Due to its simplicity, there were a number of very basic errors that could cause a transfer to fail or, worse, result in an incorrect file that went unnoticed by the protocol. Most of this was due to the use of a simple checksum for error detection,[4] which is susceptible to missing errors in the data when two bits flip in compensating directions, something a suitably short burst of noise can produce. Additionally, similar damage to the header or checksum could lead to a failed transfer in cases where the data itself was undamaged.

Many authors introduced extensions to XMODEM to address these and other problems, and many asked for these extensions to be included as part of a new XMODEM standard. However, Ward Christensen refused to do this, as it was precisely the lack of these features, and the associated coding needed to support them, that had led to XMODEM's widespread use.
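The checksum's weakness is easy to demonstrate. In the sketch below (the sample data and variable names are mine), two single-bit errors that push byte values in opposite directions leave the additive checksum unchanged, as does transposing two bytes:

```python
data = bytearray(b"HELLO, XMODEM!".ljust(128, b"\x1a"))  # one padded block
good_sum = sum(data) % 256

# Two compensating single-bit flips: bit 2 turns on in one byte (+4)
# and off in another (-4), so the modulo-256 sum is unchanged.
flipped = bytearray(data)
flipped[0] ^= 0x04   # 'H' (0x48) becomes 'L' (0x4C)
flipped[1] ^= 0x04   # 'E' (0x45) becomes 'A' (0x41)
assert flipped != data and sum(flipped) % 256 == good_sum

# Addition is order-independent, so swapped bytes also go unnoticed.
swapped = bytearray(data)
swapped[3], swapped[4] = swapped[4], swapped[3]   # "LO" becomes "OL"
assert swapped != data and sum(swapped) % 256 == good_sum
```

The 16-bit CRC introduced below catches both kinds of damage.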
Batch transfers

Another problem with XMODEM was that it required the transfer to be user-driven rather than automated.[4] Typically this meant the user would navigate on the sender's system to select the file they wanted, and then use a command to put that system into the "ready to send" mode. They would then trigger the transfer from their end using a command in their terminal emulator. If the user wanted to transfer another file, they had to repeat this process again.

For automated transfers between two sites, a number of add-ons to the XMODEM protocol were implemented over time. These generally assumed the sender would continue sending file after file, with the receiver attempting to trigger the next file by sending a <NAK> as it normally would at the start of a transfer.

MODEM7

MODEM7, also known as MODEM7 batch or Batch XMODEM, was the first known extension of the XMODEM protocol. A normal XMODEM file transfer starts with the receiver sending a single <NAK> to the sender. MODEM7 changed this behavior only slightly, by sending the filename, in 8.3 filename format, before the <SOH>. Each character was sent individually and had to be echoed by the receiver as a form of error correction. For a non-aware XMODEM implementation, this data would simply be ignored while it waited for the <SOH>.

Jerry Pournelle in 1983 described MODEM7 as "probably the most popular microcomputer communications program in existence".[7]

TeLink

MODEM7 sent the filename as normal text, which meant it could be corrupted by the same problems that XMODEM was attempting to avoid. This led to the introduction of TeLink by Tom Jennings, author of the original FidoNet mailers. TeLink avoided MODEM7's problems by standardizing a new "zero packet" containing information about the original file. This included the file's name, size, and timestamp, which were placed in a regular 128-byte XMODEM block. Whereas a normal XMODEM transfer would start with the sender sending "block 1", a TeLink transfer started with this "block 0". A normal XMODEM implementation would simply discard the packet, the assumption being that the packet number had been corrupted. But this led to a potential time delay if the packet were discarded, as the sender could not tell whether the receiver had responded with a <NAK> because it did not understand the zero packet or because the packet had been damaged in transit.

The basic "block 0" system became a standard in the FidoNet community, and was re-used by a number of future protocols like SEAlink and YMODEM.

XMODEM-CRC

The checksum used in the original protocol was extremely simple, and errors within the packet could go unnoticed. This led to the introduction of XMODEM-CRC by John Byrns,[9][10] which used a 16-bit CRC in place of the 8-bit checksum.[4] CRCs encode not only the data in the packet, but its location as well, allowing them to notice the bit-replacement errors that a checksum would miss. Statistically, this made the chance of detecting an error less than 16 bits long 99.9969%, and even higher for longer error bit strings.[11]

XMODEM-CRC was designed to be backward compatible with XMODEM. To do this, the receiver sent a C (capital C) character instead of a <NAK> to start the transfer. If the sender responded by sending a packet, it was assumed the sender "knew" XMODEM-CRC, and the receiver continued sending C's. If no packet was forthcoming, the receiver assumed the sender did not know the protocol, and sent a <NAK> to start a "traditional" XMODEM transfer.[11]

Unfortunately, this attempt at backward compatibility had a downside. Since it was possible that the initial C character would be lost or corrupted, it could not be assumed that the receiver did not support XMODEM-CRC if the first attempt to trigger the transfer failed. The receiver thus tried to start the transfer three times with C, waiting three seconds between each attempt. This meant that if the user selected XMODEM-CRC while attempting to talk to any XMODEM implementation, as was intended, there was a potential 10-second delay before the transfer started.[11] To avoid the delay, senders and receivers would generally list XMODEM-CRC separately from XMODEM, allowing the user to select "basic" XMODEM if the sender didn't explicitly list XMODEM-CRC. To the average user, XMODEM-CRC was essentially a "second protocol", and treated as such. This was not true of FidoNet mailers, however, where CRC was defined as the standard for all TeLink transfers.[8]
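The CRC in question is the variant now commonly catalogued as CRC-16/XMODEM: polynomial 0x1021, initial value zero, with the two result bytes conventionally sent high byte first in place of the single checksum byte. A compact, unoptimized rendering (the function name is mine):

```python
def crc16_xmodem(data: bytes) -> int:
    """Bit-by-bit CRC-16 with polynomial 0x1021 and initial value 0,
    the parameters used by XMODEM-CRC. Real implementations usually
    replaced this inner loop with a 256-entry lookup table."""
    crc = 0
    for byte in data:
        crc ^= byte << 8                 # feed the next byte into the top
        for _ in range(8):
            if crc & 0x8000:             # top bit set: shift and apply poly
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

assert crc16_xmodem(b"123456789") == 0x31C3   # standard check value
```

Unlike the additive checksum, this value changes if bytes are transposed or if compensating bit flips occur, because every bit's contribution depends on its position in the stream.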
Higher throughput

Since the XMODEM protocol required the sender to stop and wait for an <ACK> or <NAK> message from the receiver, it tended to be quite slow. In the era of 300 bit/s modems, the entire 132-byte packet required 4.4 seconds to send (132 bytes x (8 bits per byte + 1 start bit + 1 stop bit) / 300 bits per second). Assuming it takes 0.2 seconds for the receiver's <ACK> to make it back to the sender and the next packet to start arriving at the receiver (0.1 seconds in each direction), the overall time for one packet would be 4.6 seconds, just over 92% channel efficiency.

The time for the <ACK>/<NAK> process was a fixed function of the underlying communications network, not of the performance of the modems. As modem speeds increased, the fixed delay grew in proportion to the time needed to send the packet. For instance, at 2400 bit/s the packets took only 0.55 seconds to send, so if the <ACK>/<NAK> still took 0.2 seconds to make it back to the user's machine, the efficiency fell to 71%. At 9600 bit/s it is just under 40%; more time is spent waiting for the reply than is needed to send the packet.

A number of new versions of XMODEM were introduced in order to address these problems. Like earlier extensions, these versions tended to be backward-compatible with the original XMODEM, and like those extensions, this led to further fracturing of the XMODEM landscape in the user's terminal emulator. In the end, dozens of versions of XMODEM emerged.

WXModem

WXmodem, short for "Windowed Xmodem", is a variant of XMODEM developed by Peter Boswell in 1986 for use on high-latency lines, specifically public X.25 systems and PC Pursuit. These have latencies far higher than the plain old telephone service, which leads to very poor efficiency in XMODEM. Additionally, these networks often use control characters for flow control and other tasks; notably, XON/XOFF will stop the data flow. Finally, in the case of an error that required a resend, it was sometimes difficult to know whether a received <SOH> marked the start of a new packet or was part of the noise. WXmodem made several changes to address these issues.

One change was to escape a small set of control characters, among them <XON> and <XOFF>, so they would not be acted on by the network. Additionally, all packets were prefixed with a <SYN> character, making the start of a packet easier to identify reliably.

The major change in WXmodem is the use of a sliding window to improve throughput on high-latency links. To do so, the <ACK> and <NAK> messages carried the number of the packet they referred to, and the sender was allowed to keep a small number of packets outstanding, continuing to transmit while earlier acknowledgements were still in flight.
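The effect of windowing is easiest to see as a timeline. The toy model below is mine (the window size of four is chosen purely for illustration and is not the WXmodem wire format); it shows how, on an error-free link, sends run ahead while acknowledgements trail behind, so the line never goes idle.

```python
from collections import deque

def windowed_schedule(num_packets: int, window: int = 4) -> list[str]:
    """Order of events under a sliding window on an error-free link:
    the sender transmits while the window has room, and only waits
    for an acknowledgement once the window is full."""
    events: list[str] = []
    in_flight: deque[int] = deque()
    next_block, next_ack = 1, 1          # XMODEM numbering starts at 1
    while next_ack <= num_packets:
        if next_block <= num_packets and len(in_flight) < window:
            in_flight.append(next_block)
            events.append(f"send {next_block}")
            next_block += 1
        else:
            in_flight.popleft()
            events.append(f"ack {next_ack}")
            next_ack += 1
    return events

print(windowed_schedule(6))
# ['send 1', 'send 2', 'send 3', 'send 4', 'ack 1', 'send 5',
#  'ack 2', 'send 6', 'ack 3', 'ack 4', 'ack 5', 'ack 6']
```

With a window of one this degenerates to stop-and-wait, where every send stalls until its acknowledgement; with a window of four, acknowledgements overlap later sends, which is how WXmodem and SEAlink hide the link latency.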
SEAlink

One of the first third-party mailers for the FidoNet system was SEAdog, written by the same author as the then-popular .arc data compression format. SEAdog included a wide variety of improvements, including SEAlink, an improved transfer protocol based on the same sliding window concept as WXmodem.[12] It differed from WXmodem mostly in details.

One difference is that SEAlink supported the "zero packet" introduced by TeLink, which it needed in order to operate as a drop-in replacement for TeLink in FidoNet systems where the header was expected. SEAlink was not expected to operate over X.25 or similar links, and thus did not perform escaping. This was also needed for the zero packet to work properly, as that format used characters that would otherwise have been escaped.

SEAlink later added a number of other improvements and was a useful general-purpose protocol. However, it remained rare outside the FidoNet world, and was rarely seen in user-facing software.

XMODEM-1K

Another way to solve the throughput problem is to increase the packet size. Although the fundamental problem of latency remains, the speed at which it becomes a problem is higher. XMODEM-1K, with 1024-byte packets,[4] was the most popular such solution. In this case the throughput at 9600 bit/s is 81%, given the same assumptions as above.

XMODEM-1K was an expanded version of XMODEM-CRC, in which the sender indicated the longer block size by starting a packet with the <STX> character instead of <SOH>. Like other backward-compatible XMODEM extensions, it was intended that a -1K transfer could be started with any implementation of XMODEM on the other end, backing off features as required.

XMODEM-1K was originally one of the many improvements to XMODEM introduced by Chuck Forsberg in his YMODEM protocol. Forsberg suggested that the various improvements were optional, expecting software authors to implement as many of them as possible. Instead, they generally implemented the bare minimum, leading to a profusion of semi-compatible implementations, and eventually the splitting out of the name "YMODEM" into "XMODEM-1K" and a variety of YMODEMs. Thus XMODEM-1K actually post-dates YMODEM, but remained fairly common anyway.

NMODEM

NMODEM is a file transfer protocol developed by L. B. Neal, who released it in 1990. NMODEM is essentially a version of XMODEM-CRC using larger 2048-byte blocks, as opposed to XMODEM's 128-byte blocks. NMODEM was implemented as a separate program, written in Turbo Pascal 5.0 for the IBM PC compatible family of computers. The block size was chosen to match the common cluster size of the MS-DOS FAT file system on contemporary hard drives, making buffering data for writing simpler.[13][14]

Protocol spoofing

Over reliable (error-free) connections, it is possible to eliminate latency by "pre-acknowledging" the packets, a technique known more generally as "protocol spoofing". This was normally accomplished in the link hardware, notably Telebit modems. When the option was turned on, the modems would notice the XMODEM header and immediately send an <ACK> themselves, leaving their own error-corrected link to deliver the packet intact.

The system can also be implemented in the protocol itself, and variations of XMODEM offered this feature. In these cases, the receiver would send the <ACK> messages without waiting to verify each packet, relying on the underlying connection to be error-free. This concept should be contrasted with the one used in SEAlink, which changes the behavior on both sides of the link: in SEAlink, the receiver stops sending the <ACK> messages altogether, and the sender stops waiting for them.
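The arithmetic behind all of these variants fits in a few lines. The sketch below (function and parameter names are mine) reproduces the stop-and-wait figures quoted earlier and shows what pre-acknowledgement recovers: once the turnaround is hidden, only the framing overhead remains, so efficiency returns to roughly 97% regardless of line speed.

```python
def efficiency(bps: float, payload: int = 128, overhead: int = 4,
               turnaround: float = 0.2) -> tuple[float, float]:
    """Channel efficiency of stop-and-wait XMODEM versus a
    pre-acknowledged (spoofed) link. Every byte costs 10 bit times:
    8 data bits plus one start and one stop bit."""
    packet_time = (payload + overhead) * 10 / bps   # 132-byte packet on the wire
    payload_time = payload * 10 / bps               # the 128 useful bytes alone
    stop_and_wait = payload_time / (packet_time + turnaround)
    spoofed = payload / (payload + overhead)        # no waiting, framing only
    return stop_and_wait, spoofed

for bps in (300, 2400, 9600):
    sw, sp = efficiency(bps)
    print(f"{bps:>5} bit/s: stop-and-wait {sw:5.1%}, pre-acknowledged {sp:5.1%}")
# 300 bit/s: ~92.8%   2400 bit/s: ~71.1%   9600 bit/s: ~39.5%   spoofed: 97.0%
```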