Friday, November 27, 2009

 

Data Communication - Module 3




Module 3

Digital data transmission – Serial, Parallel, Synchronous, Asynchronous and Isochronous transmission. Transmission modes – Simplex, Half duplex, Full duplex. Noise – different types of noise. Basic principles of switching (circuit, packet, message switching).

 

 

Timing

Timing refers to how the receiving system knows that it received the start of a group of bits and the end of a group of bits. Two major timing schemes are used: Asynchronous and Synchronous Transmission.

  1. Asynchronous Transmission sends only 1 character at a time, a character being a letter of the alphabet, a number or a control character. Each character is preceded by a Start bit and followed by 1 or more Stop bits.
  2. Synchronous Transmission sends packets of characters at a time. Each packet is preceded by a Start Frame, which tells the receiving station that a new packet of characters is arriving and synchronizes the receiving station's internal clock. Each packet also ends with an End Frame to indicate the end of the packet. A packet can contain up to 64,000 bits. Both Start and End Frames carry a special bit sequence that the receiving station recognizes as the start or end of a packet, and each frame may be only 2 bytes long.


In the conventional diagrams, asynchronous data is drawn flowing left to right and synchronous data flowing right to left.

 

 Asynchronous vs. Synchronous Transmission

Asynchronous transmission is simple and inexpensive to implement. It is used mainly with serial ports and dial-up connections. It requires start and stop bits for each character, which adds a high overhead to transmission. For example, for every byte of data we add 1 Start bit and 2 Stop bits: 11 bits are required to send 8 bits! Asynchronous transmission is used at slow transfer rates, typically up to 56 kbps.

Synchronous transmission is more efficient: as little as 4 bytes of framing (3 Start Framing bytes and 1 Stop Framing byte) are required to transmit up to 64 kbits of data. Synchronous transmission is more difficult and expensive to implement. It is used at all higher communication transfer rates (Ethernet, Token Ring, etc.), typically 56 kbps to 100 Mbps.
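As a rough illustration of the overhead difference, the sketch below (Python; the frame sizes are the ones assumed above, i.e. 1 start bit, 8 data bits and 2 stop bits per asynchronous character and 4 framing bytes per 64,000-bit synchronous packet, and the function names are my own) computes the fraction of transmitted bits that actually carry data.

```python
# Rough efficiency comparison of asynchronous vs synchronous framing.
# The frame sizes match the figures quoted in the text above, not any fixed standard.

def async_efficiency(data_bits=8, start_bits=1, stop_bits=2):
    """Fraction of transmitted bits that carry data in asynchronous framing."""
    total = start_bits + data_bits + stop_bits
    return data_bits / total

def sync_efficiency(payload_bits=64_000, framing_bytes=4):
    """Fraction of transmitted bits that carry data in synchronous framing."""
    total = payload_bits + framing_bytes * 8
    return payload_bits / total

print(f"Asynchronous: {async_efficiency():.1%} efficient (8 useful bits out of 11)")
print(f"Synchronous:  {sync_efficiency():.1%} efficient (64,000 useful bits out of 64,032)")
```

This is why the per-character start/stop scheme is described as high overhead: roughly a quarter of every asynchronous transmission is framing, whereas synchronous framing costs a fraction of a percent.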

Historically, synchronous communications operated over 2400/4800 baud modems on point-to-point links, for example the IBM 2770/IBM 2780/IBM 3780 terminals (historical information courtesy of Jacques Sincennes, University of Ottawa).


Transmission | Advantages           | Disadvantages
Asynchronous | Simple & Inexpensive | High Overhead
Synchronous  | Efficient            | Complex and Expensive

 Asynchronous Communications

Asynchronous communication (or transmission) sends individual characters one at a time, framed by a start bit and 1 or 2 stop bits.


Start/Stop bits

The purpose of the Start bit is to notify the receiving station that a new character is arriving. Typically data is shown moving left to right, which is how it would appear on a storage oscilloscope or network analyser. On the wire, most asynchronous serial links (e.g. RS-232 UARTs) send the LSB (Least Significant Bit) first and the MSB (Most Significant Bit) last.


The purpose of the Stop bits is to indicate the end of the character. There can be 1 or 2 stop bits, with 1 being the typical number used today. In asynchronous transmission, the characters are sent individually with a quiet period in between (quiet meaning the line simply rests at its idle level). Asynchronous communication requires the transmitting station and the receiving station to have individual free-running internal clocks operating at the same frequency. Free-running means that the clocks are not locked together.

Both clocks operating at same frequency:

The receive station starts checking for data after the Start bit is received (Start bit is a wake up call!).

The receive station samples the transmitted data in the middle of each data bit. The samples are evenly spaced and match the transmitted data because both transmit and receive clocks are operating at the same frequency.

Receive clock frequency higher than transmitted frequency:


If the receive station's clock is higher in frequency, the samples will be spaced closer together (higher frequency means a shorter period). In the above example, we transmitted the data 0100 1010 but received 0100 0101: the samples fall out of synchronization with the transmitted data, and we receive the data in error.

Clocks are controlled by crystals (abbreviated: Xtal). Crystals are metal cans that hold a piezo-electric element that resonates at a certain frequency when a voltage is applied to it. If you drop a crystal or a printed circuit board (PCB) that has a crystal on it, the crystal can fracture inside the metal can. Either it will stop working or change its frequency, both result in a malfunctioning circuit! Crystals are also temperature sensitive and change frequency with temperature!

Receive clock frequency lower than transmitted frequency:

If the receiving station's clock is lower in frequency than the transmitted frequency, then the samples fall farther apart (lower frequency means a wider period). Again the samples fall out of sync with the transmitted data!

The transmitted data is 0100 1010 but the received data is 0101 0101! Again we would have receive errors.

This is a basic problem with asynchronous communications, both transmitter and receiver require a very stable clock to work properly. At high frequencies (which result in high transfer rates), clock stability is critical and asynchronous transmission is very difficult to accomplish. Because of this inherent problem with asynchronous transmission, it is used at low frequency/slow transfer rates.
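The effect of a mismatched receive clock can be shown with a small simulation. The sketch below is a simplified model (the bit pattern 0100 1010 comes from the examples above, while the 15% drift figures are arbitrary choices of mine): the receiver samples what it believes is the middle of each bit period, timed by its own clock. With matched clocks the data is recovered exactly; with a clock running about 15% fast, the model happens to reproduce the erroneous 0100 0101 pattern described above, and with a slow clock bits are skipped altogether.

```python
# Simplified model of asynchronous sampling with a mismatched receive clock.
# The transmitter holds each bit for one bit period; the receiver samples at
# what it *believes* is the middle of each bit, timed by its own clock.

def sample_bits(tx_bits, clock_ratio):
    """clock_ratio = receive clock frequency / transmit clock frequency."""
    rx_bits = []
    for i in range(len(tx_bits)):
        # The receiver's i-th sample occurs (i + 0.5) of *its* bit periods
        # after the start bit, i.e. at (i + 0.5) / clock_ratio transmit periods.
        sample_time = (i + 0.5) / clock_ratio
        index = int(sample_time)          # which transmitted bit is on the line
        if index >= len(tx_bits):
            break                         # sampled past the end of the character
        rx_bits.append(tx_bits[index])
    return rx_bits

tx = [0, 1, 0, 0, 1, 0, 1, 0]             # the 0100 1010 pattern from the text

for ratio in (1.00, 1.15, 0.85):          # matched, fast and slow receive clocks
    print(f"clock ratio {ratio:.2f}: sent {tx} received {sample_bits(tx, ratio)}")
```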

 

Data Transmission Modes

The term transmission mode defines the direction of data flow between two linked devices; the manner in which data is transmitted from one place to another is called the data transmission mode. There are three ways of transmitting data from one location to another. These are:

  1. Simplex mode
  2. Half-Duplex mode
  3. Full-Duplex mode

1. Simplex Mode

In simplex mode, communication can take place in only one direction: a terminal can only send data and cannot receive it, or it can only receive data and cannot send it. Communication is uni-directional. Today, this mode of data communication is not popular, because most modern communications require a two-way exchange of data. However, it is still used in business at certain point-of-sale terminals in which sales data is entered without a corresponding reply. Other examples of simplex communication are radio and TV transmissions.

In a computer system, the keyboard, monitor and printer are examples of simplex devices. The keyboard can only be used to enter data into the computer, while the monitor and printer can only accept (display/print) output.

2. Half-Duplex Mode

In half-duplex mode, communication can take place in both directions, but only in one direction at a time; data is sent and received alternately. It is like a one-lane bridge where traffic in one direction must wait until traffic in the other direction has crossed.

In half-duplex mode, only one end transmits at a time while the other end receives. In addition, it is possible to perform error detection and request the sender to re-transmit information. Internet browsing behaves much like half duplex: when we issue a request to download a web document, that document is downloaded and displayed before we issue another request.

3. Full-Duplex Mode

In full-duplex mode, communication can take place in both directions simultaneously, i.e. at the same time on the same channel. It is the fastest of the three modes. A conversation between two people over the telephone is an example of full-duplex communication, and this type of communication is similar to automobile traffic on a two-lane road.


Types of Data Transmission Modes

There are two types of data transmission modes. These are:

  1. Parallel Transmission
  2. Serial Transmission

1. Parallel Transmission

In parallel transmission, bits of data flow concurrently through separate communication lines, as shown in the figure below. The automobile traffic on a multi-lane highway is an example of parallel transmission. Inside the computer, binary data flows from one unit to another using the parallel mode: if the computer uses a 32-bit internal structure, all 32 bits of data are transferred simultaneously over 32 parallel connections. Similarly, parallel transmission is commonly used to transfer data from a computer to a printer: the printer is connected to the parallel port of the computer with a parallel cable that contains many wires. It is a very fast data transmission mode.

2. Serial Transmission

In serial data transmission, bits of data flow in sequential order through a single communication line, as shown in the figure below. The flow of traffic on a one-lane residential street is an example of serial data transmission. Serial transmission is typically slower than parallel transmission, because data is sent sequentially, bit by bit. A serial mouse uses the serial transmission mode.
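A toy illustration of the two modes (the function names and byte value are my own, chosen only to make the contrast visible): the same byte is presented either as eight simultaneous values on eight separate 'lines' or as a sequence of single bits on one line.

```python
# Toy illustration of parallel vs serial transfer of one byte (0x4A = 0100 1010).

def to_bits(byte):
    """Return the 8 bits of a byte, most significant bit first (for readability)."""
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

def send_parallel(byte):
    """All eight bits appear on eight separate lines within the same clock period."""
    return {f"line_{i}": bit for i, bit in enumerate(to_bits(byte))}

def send_serial(byte):
    """The same eight bits leave a single line in sequence, one per clock period."""
    return to_bits(byte)

print(send_parallel(0x4A))   # eight lines, one clock period
print(send_serial(0x4A))     # one line, eight clock periods
```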

Synchronous & Asynchronous Transmissions

Synchronous Transmission

In synchronous transmission, large volumes of information can be transmitted at a time. In this type of transmission, data is transmitted block-by-block or word-by-word. Each block may contain several bytes of data. Synchronous transmission requires a synchronized clock to schedule the transmission of information, and this special communication equipment is expensive.

Asynchronous Transmission

In asynchronous transmission, data is transmitted one byte at a time. This type of transmission is most commonly used by microcomputers. The data is transmitted character-by-character as the user types it on a keyboard.

An asynchronous line that is idle (not being used) is identified with a value 1, also known as 'Mark' state. This value is used by the communication devices to find whether the line is idle or disconnected. When a character (or byte) is about to be transmitted, a start bit is sent. A start bit has a value of 0, also called a space state. Thus, when the line switches from a value of 1 to a value of 0, the receiver is alerted that a character is coming.

 

Circuit switching

In circuit switching, transmission between a source and a destination is achieved through a dedicated physical link for the entire duration of the transmission. The entire link remains dedicated, and no other users can use it even when the path happens to be idle. Circuit switching is used in telephony. A circuit-switched telephone circuit is only 30-40 per cent efficient, as most of the time each party is only listening. Synchronous transfer mode (STM) uses circuit switching; the common T-carrier system for digitised voice uses STM.

In order to avoid irregular gaps and delays between words, synchronised time-division multiplexing is used in voice communication. Each slot in the multiplexing system is assigned to a voice call, and so access is guaranteed for as long as the call lasts. All the multiplexed time slots form a frame, and each slot in the frame is synchronised through the frame bit and its position.

Advantages of Circuit Switching:

    * Once the circuit has been set up, communication is fast and without error.

    * It is highly reliable

Disadvantages:

    * Involves a lot of overhead during channel set-up.

    * Wastes a lot of bandwidth, especially in speech, where a user is sometimes listening and not talking.

    * Channel set-up may take a long time.

 

To overcome the disadvantages of circuit switching, packet switching was introduced: instead of dedicating a channel to only two parties for the duration of the call, it routes packets individually as they become available. This mechanism is referred to as connectionless.

 

Packet Switching

Since its introduction in the early 1970s, packet switching has received widespread acceptance. Public networks have been constructed in most developed countries and many developing countries. The internetwork ITU-T X.75 protocol provides for interlinking of national networks at an international level. The ITU-T X.25 Recommendation is the original standard for packet-switching architecture.

Packet switching has several advantages over conventional circuit-switched networks. The circuit-switched network maintains a fixed bandwidth between the transmitter and receiver for the duration of a call. Also, the circuit-switched network is bit stream transparent, meaning it is not concerned with the data content or error-checking process. This is not the case for packet switching, where bandwidth is allocated dynamically on an "as required" basis. Data is transmitted in packets, each containing a header that contains the destination of the packet and a tail, or footer, for error-checking information. Packets from different sources can coexist on the same customer-to-network physical link without interference. The simultaneous call and variable bandwidth facilities improve the efficiency of the overall network. The buffering in the system which allows terminals operating at different bit rates to interwork with each other is a significant advantage of packet switching. The obvious disadvantage is the extra dimension of complexity with respect to the switches and network-to-customer protocol.
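As a rough sketch of the idea that every packet carries its own destination in a header and error-check information in a footer, the toy example below (the field layout, names and checksum are invented purely for illustration; they are not the X.25 packet format) splits a message into fixed-size packets and wraps each one.

```python
# Hypothetical packet format: the header carries a destination and a sequence
# number, the footer carries a simple checksum. Real protocols such as X.25
# define much richer headers; this only illustrates the principle.

def make_packets(message: bytes, destination: str, payload_size: int = 16):
    packets = []
    for seq, start in enumerate(range(0, len(message), payload_size)):
        payload = message[start:start + payload_size]
        header = {"dest": destination, "seq": seq}
        footer = {"checksum": sum(payload) % 256}   # toy error check
        packets.append({"header": header, "payload": payload, "footer": footer})
    return packets

for p in make_packets(b"Packets from different sources can share one link.", "host-B"):
    print(p["header"], p["payload"], p["footer"])
```

Because each packet is self-describing, packets belonging to different calls can be interleaved on the same physical link and reassembled at their destinations using the sequence numbers.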

Furthermore, in certain circumstances, packet switching has several advantages over other methods of data communication:

1. Packet switching might be more economical than using private lines if the amount of traffic between terminals does not warrant a dedicated circuit.

2. Packet switching might be more economical than dialed data when the data communication sessions are shorter than a telephone call minimum chargeable time unit.

3. Destination information is contained in each packet, so numerous messages can be sent very quickly to many different destinations. The rate depends on how fast the data terminal equipment (DTE) can transmit the packets.

4. Computers at each node allow dynamic data routing. This inherent intelligence in the network picks the best possible route for a packet to take through the network at any particular time. Throughput and efficiency are therefore maximized.

5. The packet network inherent intelligence also allows graceful degradation of the network in the event of a node or path (link) failure. Automatic rerouting of the packets around the failed area causes more congestion in those areas, but the overall system is still operable.

6. Other features of this intelligence are error detection and correction, fault diagnosis, verification of message delivery, message sequence checking, reverse billing (charging), etc.

 

Message switching

A form of store-and-forward switching known as message switching has existed for many decades in telegraphy. E-mail is a more recent example of message switching.

The message, a meaningful unit of information or a set of data, is sent as a whole from the source to its nearest node. There, depending on the availability of a free link, it is either forwarded immediately or stored until the next chance to forward it.

A major problem in message switching is that if a message is long, an urgent message may have to wait behind an ordinary one, because once a message is being transmitted it cannot be stopped. To avoid this problem and to make switching more practical, the concept of packet switching was introduced.

Comparison

A packet switching network is expected to deliver its packets in a fraction of a second, whereas a message switching network is expected to deliver a message typically in a fraction of an hour.

A packet switching node deletes a message from memory as soon as an acknowledgement of correct receipt is received from the next node. In contrast, a message switching system files the message for possible retrieval in the future. A message switching network typically has a mesh topology in which no particular node dominates the structure. Packet switching thus incorporates advantages of both circuit switching and message switching.

In packet switching, there are two approaches, virtual circuit and datagram, for transmitting data from a source to the destination. In the virtual circuit approach, a logical connection between the source and the destination is established prior to the transmission of packets. The approach is similar to circuit switching, but the established path is not physically dedicated and may be shared by other sets of users. In the datagram service, each packet is routed through the network independently, based on the current information available at each node.

The basic difference between the virtual circuit and datagram approaches is that in the virtual circuit approach the nodes do not take a routing decision for each packet, as they do in the datagram approach.

Circuit switching is best suited for time-sensitive communications, like voice, because such traffic does not tolerate delay and jitter, and circuit switching provides low delay and avoids jitter. Packet switching, on the other hand, is suitable for time-insensitive communications, like data, which does not tolerate errors but can tolerate delay. Circuit switching can be viewed as the special case of packet switching in which a single packet occupies the channel for the entire duration of the call.

 




 

Data Communication - Module 2






Module 2

Multiplexing – Frequency Division Multiplexing (FDM) – Time Division Multiplexing (TDM), Synchronous Time Division Multiplexing – Statistical Time Division Multiplexing – Keying Techniques – ASK, FSK, PSK, DPSK – Channel capacity – Shannon's Theorem.

 

Multiplexing

Multiplexing is the transmission of multiple data communication sessions over a common wire or medium. Multiplexing reduces the number of wires or cables required to connect multiple sessions. A session is data communication between two devices: computer to computer, terminal to computer, etc.

Individual lines running from 3 terminals to one mainframe are not a problem, but when the number of terminals grows to 10 and beyond, it becomes one. Imagine a mainframe computer with 1200 terminals connected, each running its own wire to the mainframe. If each wire were 1/4" in diameter (typical Cat 5 cable), the wiring bundle going into the computer would be roughly 2 feet in diameter.

A multiplexer allows sharing of a common line to transmit the many terminal communications as in the above example. The connection between the multiplexer and the mainframe is normally a high speed data link and is not usually divided into separate lines.

The operation of multiplexers (abbreviated MUXs) is transparent to the sending and receiving computers or terminals. Transparent means that as far as everyone is concerned, they appear to be directly connected to the mainframe with individual wires. The multiplexer does not interfere with the normal flow of data and it can allow a significant reduction in the overall cost of connecting to remote sites, through the reduced cost of cable and telephone line charges.

Multiplexers are used to connect terminals located throughout a building to a central mainframe. They are also used to connect terminals located at remote locations to a central mainframe through the phone lines.

There are 3 basic techniques used for multiplexing:

  1. Frequency Division Multiplexing (FDM)
  2. Time Division Multiplexing (TDM)
  3. Statistical Time Division Multiplexing (STDM)

FDM - Frequency Division Multiplexing

Frequency Division Multiplexing (FDM) is an analog technique where each communications channel is assigned a carrier frequency. A guard-band is used to separate the channels and ensure that they do not interfere with each other.

For example, if we had our 3 terminals each requiring a bandwidth of 3 kHz and a 300 Hz guard-band, Terminal 1 would be assigned the lowest frequency channel 0 - 3 kHz, Terminal 2 would be assigned the next frequency channel 3.3 kHz - 6.3 kHz and Terminal 3 would be assigned the final frequency channel 6.6 kHz - 9.6 kHz.

The frequencies are stacked on top of each other and many channels can be sent at once. The downside is that the overall line bandwidth increases: each terminal individually required only 3 kHz of bandwidth, but in the above example the bandwidth needed to transmit all 3 terminals is 9.6 kHz.
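The channel assignments in the example above come from simple arithmetic; the sketch below (assuming the same 3 kHz channels and 300 Hz guard bands; the function name is my own) stacks each terminal's band above the previous one.

```python
# FDM channel allocation: each terminal gets a 3 kHz band, separated from the
# previous band by a 300 Hz guard band (the figures used in the text above).

def allocate_fdm(num_channels, bandwidth_hz=3000, guard_hz=300):
    channels = []
    low = 0
    for n in range(num_channels):
        high = low + bandwidth_hz
        channels.append((n + 1, low, high))
        low = high + guard_hz          # leave a guard band before the next channel
    return channels

for terminal, low, high in allocate_fdm(3):
    print(f"Terminal {terminal}: {low / 1000:.1f} kHz - {high / 1000:.1f} kHz")

# Total line bandwidth needed: 0 - 9.6 kHz, versus 3 kHz for one terminal alone.
```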

FDM does not require all channels to terminate at a single location. Channels can be extracted using a multi-drop technique, so terminals can be stationed at different locations within a building or a city.

FDM is an analog and slightly historical multiplexing technique. It is prone to noise problems and has been overtaken by Time Division Multiplexing which is better suited for digital data.

 

TDM - Time Division Multiplexing

Time Division Multiplexing is a technique where a short time sample of each channel is inserted into the multiplexed data stream. Each channel is sampled in turn and then the sequence is repeated. The sample period has to be fast enough to sample each channel according to the Nyquist Theory (2x highest frequency) and to be able to sample all the other channels within that same time period. It can be thought of as a very fast mechanical switch, selecting each channel for a very short time then going on to the next channel.

Each channel has a time slice assigned to it whether the terminal is being used or not. Again, to the sending and receiving stations, it appears as if there is a single line connecting them. All lines originate in one location and end in one location. TDM is more efficient, easier to operate, less complex and less expensive than FDM.

 

One drawback of the TDM approach, as discussed earlier, is that many of the time slots in the frame are wasted: if a particular terminal has no data to transmit at a particular instant of time, an empty time slot is still transmitted. An efficient alternative to this synchronous TDM is statistical TDM, also known as asynchronous TDM or intelligent TDM. It dynamically allocates the time slots on demand to the separate input channels, thus saving channel capacity. As with synchronous TDM, statistical multiplexers have many I/O lines with a buffer associated with each of them. During input, the multiplexer scans the input buffers, collecting data until a frame is filled, and then sends the frame. At the receiving end, the demultiplexer receives the frame and distributes the data to the appropriate buffers. The difference between synchronous TDM and statistical TDM is illustrated in Fig. 2.7.9: many slots remain unutilised in synchronous TDM, whereas in statistical TDM the slots are fully utilised, leading to a shorter transmission time and better utilisation of the bandwidth of the medium. In statistical TDM, the data in each slot must carry an address part that identifies the source of the data; since data arrive from and are distributed to the I/O lines unpredictably, this address information is required to assure proper delivery, as shown in Fig. 2.7.10. This leads to more overhead per slot, and relative addressing can be used to reduce that overhead (see the sketch below).
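The difference can also be seen in a small sketch (a simplified model; the slot contents and address format are invented for illustration): synchronous TDM emits one slot per input line in every frame whether or not that line has data, while statistical TDM emits only the occupied slots, each tagged with its source address.

```python
# Simplified frame building for synchronous TDM vs statistical TDM.
# inputs[i] is whatever terminal i currently has queued (None = nothing to send).

inputs = ["A1", None, "C1", None]            # terminals 0..3 at one scan instant

def synchronous_tdm_frame(inputs):
    # One slot per terminal, in fixed order; idle terminals still consume a slot.
    return [data if data is not None else "EMPTY" for data in inputs]

def statistical_tdm_frame(inputs):
    # Only terminals with data get a slot, so each slot carries a source address.
    return [{"addr": i, "data": data} for i, data in enumerate(inputs) if data is not None]

print("Synchronous TDM frame:", synchronous_tdm_frame(inputs))
print("Statistical TDM frame:", statistical_tdm_frame(inputs))
```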

 

Keying Techniques

Keying techniques are methods used to encode digital information in an analog world. The 3 basic keying techniques are:

  1. ASK (amplitude shift keying)
  2. FSK (frequency shift keying)
  3. PSK (phase shift keying)

All 3 keying techniques employ a carrier signal. A carrier signal is a single frequency that is used to carry the intelligence (data). For digital transmission, the intelligence is either a 1 or a 0. When we modulate the carrier, we change its characteristics to correspond to either a 1 or a 0.


ASK

ASK modifies the amplitude of the carrier to represent 1s or 0s. In the above example, a 1 is represented by the presence of the carrier for a predefined period of 3 carrier cycles; absence of the carrier indicates a 0.


Advantages:

  • Simple to design.

Disadvantages:

  • Noise spikes on transmission medium interfere with the carrier signal.
  • Loss of connection is read as 0s.

FSK

FSK modifies the frequency of the carrier to represent the 1s or 0s. In the above example, a 0 is represented by the original carrier frequency and a 1 by a much higher frequency (the cycles are spaced closer together).


Advantages:

  • Immunity to noise on the transmission medium.
  • A signal is always present, so loss of signal is easily detected.

Disadvantages:

  • Requires 2 frequencies.
  • The detection circuit needs to recognize both frequencies, and to detect when the signal is lost.

 

PSK

PSK modifies the phase of the carrier to represent a 1 or 0.

The carrier phase is switched at every occurrence of a 1 bit but remains unaffected for a 0 bit. The phase of the signal is measured relative to the phase of the preceding bit. The bits are timed to coincide with a specific number of carrier cycles (3 in this example = 1 bit).

Advantages:

  • Only 1 frequency used
  • Easy to detect loss of carrier

Disadvantages:

  • Complex circuitry is required to generate and detect phase changes (see the waveform sketch below).
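To get a feel for how the three techniques reshape the same carrier, here is a short sketch (using NumPy; the 3-carrier-cycles-per-bit figure follows the examples above, while the frequencies and sample rate are arbitrary choices of mine). For PSK it follows the rule described above, toggling the phase at every 1 bit.

```python
import numpy as np

# ASK, FSK and PSK waveforms for one bit pattern, 3 carrier cycles per bit.
bits = [0, 1, 0, 0, 1, 0, 1, 0]
fc = 3.0                                     # 3 carrier cycles per bit period
t = np.linspace(0, 1, 100, endpoint=False)   # time axis within one bit period

ask, fsk, psk = [], [], []
phase = 0.0
for b in bits:
    ask.append(b * np.sin(2 * np.pi * fc * t))                 # carrier on for 1, off for 0
    fsk.append(np.sin(2 * np.pi * (2 * fc if b else fc) * t))  # higher frequency for a 1
    if b:
        phase += np.pi       # PSK: the phase switches at every occurrence of a 1 bit
    psk.append(np.sin(2 * np.pi * fc * t + phase))

ask, fsk, psk = (np.concatenate(w) for w in (ask, fsk, psk))
print(len(ask), len(fsk), len(psk))          # 800 samples each: 8 bits x 100 samples/bit
```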

 

 

 

Differential phase shift keying (DPSK)

Differential phase shift keying (DPSK) is a common form of phase modulation that conveys data by changing the phase of the carrier wave. In the figure, a high state in plain phase shift keying contains only one carrier cycle, while in DPSK it contains one and a half cycles. The figure illustrates the PSK and DPSK signals modulated by the pulse sequence 10101110.

DPSK and PSK modulated signals

In the modulated signal, a high state appears as an 'M'-shaped wave and a low state as a 'W'-shaped wave. DPSK encodes two distinct signals of the same frequency with a 180-degree phase difference between the two. This experiment requires carrier and modulating signals that are 180 degrees out of phase. A sine wave from an oscillator is selected as the carrier signal. The DSG converts a DC input voltage into pulse trains, and these pulse trains are taken as the modulating signals. In actual practice the modulating signal is the digital form of voice or data.

Differential phase shift keying (DPSK) is a common form of phase modulation that conveys data by changing the phase of the carrier wave. As mentioned for BPSK and QPSK there is an ambiguity of phase if the constellation is rotated by some effect in the communications channel through which the signal passes. This problem can be overcome by using the data to change rather than set the phase.

For example, in differentially-encoded BPSK a binary '1' may be transmitted by adding 180° to the current phase and a binary '0' by adding 0° to the current phase. In differentially-encoded QPSK, the phase-shifts are 0°, 90°, 180°, -90° corresponding to data '00', '01', '11', '10'. This kind of encoding may be demodulated in the same way as for non-differential PSK but the phase ambiguities can be ignored. Thus, each received symbol is demodulated to one of the M points in the constellation and a comparator then computes the difference in phase between this received signal and the preceding one. The difference encodes the data as described above.
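A minimal sketch of this differential rule for the binary case (a '1' adds 180° to the current phase, a '0' adds 0°); because the decoder looks only at the phase difference between successive symbols, a constant unknown phase rotation introduced by the channel drops out. The function names and test pattern are my own.

```python
import numpy as np

def dbpsk_encode(bits, initial_phase=0.0):
    """Differential BPSK: a 1 adds 180 degrees to the current phase, a 0 adds 0."""
    phases = [initial_phase]
    for b in bits:
        phases.append(phases[-1] + (np.pi if b else 0.0))
    return np.array(phases)          # one reference symbol plus one symbol per bit

def dbpsk_decode(phases):
    """Recover the bits from the phase *difference* between successive symbols."""
    diffs = np.diff(phases)
    return [1 if np.cos(d) < 0 else 0 for d in diffs]   # 180 deg -> 1, 0 deg -> 0

data = [1, 0, 1, 0, 1, 1, 1, 0]
tx_phases = dbpsk_encode(data)

# An unknown constant phase rotation in the channel does not affect decoding:
rx_phases = tx_phases + 0.7
print(dbpsk_decode(rx_phases) == data)   # True
```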

The modulated signal is shown below for both DBPSK and DQPSK as described above. It is assumed that the signal starts with zero phase, and so there is a phase shift in both signals at t = 0.

Timing diagram for DBPSK and DQPSK. The binary data stream is above the DBPSK signal. The individual bits of the DBPSK signal are grouped into pairs for the DQPSK signal, which only changes every Ts = 2Tb.

Analysis shows that differential encoding approximately doubles the error rate compared to ordinary M-PSK, but this may be overcome by only a small increase in Eb / N0. Furthermore, this analysis (and the accompanying graphical results) is based on a system in which the only corruption is additive white Gaussian noise. However, there will also be a physical channel between the transmitter and receiver in the communication system. This channel will, in general, introduce an unknown phase-shift to the PSK signal; in these cases the differential schemes can yield a better error-rate than the ordinary schemes which rely on precise phase information.

 

Shannon–Hartley theorem

In information theory, the Shannon–Hartley theorem is an application of the noisy channel coding theorem to the archetypal case of a continuous-time analog communications channel subject to Gaussian noise. The theorem establishes Shannon's channel capacity for such a communication link, a bound on the maximum amount of error-free digital data (that is, information) that can be transmitted with a specified bandwidth in the presence of the noise interference, under the assumption that the signal power is bounded and the Gaussian noise process is characterized by a known power or power spectral density. The law is named after Claude Shannon and Ralph Hartley.

Def: channel capacity

The maximum amount of information that a channel can carry per unit time under given constraints.

Statement of the theorem

Considering all possible multi-level and multi-phase encoding techniques, the Shannon–Hartley theorem states that the channel capacity C, meaning the theoretical tightest upper bound on the information rate (excluding error correcting codes) of clean (or arbitrarily low bit error rate) data that can be sent with a given average signal power S through an analog communication channel subject to additive white Gaussian noise of power N, is:

C = B log2(1 + S/N)

where

C is the channel capacity in bits per second;

B is the bandwidth of the channel in hertz (passband bandwidth in case of a modulated signal);

S is the total received signal power over the bandwidth (in the case of a modulated signal, often denoted C, i.e. the modulated carrier), measured in watts or volts²;

N is the total noise or interference power over the bandwidth, measured in watts or volts²; and

S/N is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the communication signal to the Gaussian noise interference expressed as a linear power ratio (not as logarithmic decibels).
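A small sketch of the formula in use (the bandwidth and SNR values are illustrative, and the helper for converting decibels to a linear ratio is my own addition):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley capacity in bits per second: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def db_to_linear(snr_db):
    """Convert an SNR in decibels to a linear power ratio."""
    return 10 ** (snr_db / 10)

# Illustrative values: a 4 kHz voice channel with a 30 dB signal-to-noise ratio.
B = 4000
snr = db_to_linear(30)                    # 30 dB corresponds to a ratio of 1000
print(f"C = {channel_capacity(B, snr) / 1000:.1f} kbit/s")   # about 39.9 kbit/s
```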

 

 

Claude Shannon's development of information theory during World War II provided the next big step in understanding how much information could be reliably communicated through noisy channels. Building on Hartley's foundation, Shannon's noisy channel coding theorem (1948) describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption.[5][6] The proof of the theorem shows that a randomly constructed error correcting code is essentially as good as the best possible code; the theorem is proved through the statistics of such random codes.

Shannon's theorem shows how to compute a channel capacity from a statistical description of a channel, and establishes that given a noisy channel with capacity C and information transmitted at a line rate R, then if

R<C

there exists a coding technique which allows the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below the limit of C bits per second.

The converse is also important. If

R>C

the probability of error at the receiver increases without bound as the rate is increased. So no useful information can be transmitted beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal.

 

Shannon–Hartley theorem

The Shannon–Hartley theorem establishes what that channel capacity is for a finite-bandwidth continuous-time channel subject to Gaussian noise. It connects Hartley's result with Shannon's channel capacity theorem in a form that is equivalent to specifying the M in Hartley's line rate formula in terms of a signal-to-noise ratio, but achieving reliability through error-correction coding rather than through reliably distinguishable pulse levels.

If there were such a thing as an infinite-bandwidth, noise-free analog channel, one could transmit unlimited amounts of error-free data over it per unit of time. Real channels, however, are subject to limitations imposed by both finite bandwidth and nonzero noise.

So how do bandwidth and noise affect the rate at which information can be transmitted over an analog channel?

Surprisingly, bandwidth limitations alone do not impose a cap on maximum information rate. This is because it is still possible for the signal to take on an indefinitely large number of different voltage levels on each symbol pulse, with each slightly different level being assigned a different meaning or bit sequence. If we combine both noise and bandwidth limitations, however, we do find there is a limit to the amount of information that can be transferred by a signal of a bounded power, even when clever multi-level encoding techniques are used.

In the channel considered by the Shannon-Hartley theorem, noise and signal are combined by addition. That is, the receiver measures a signal that is equal to the sum of the signal encoding the desired information and a continuous random variable that represents the noise. This addition creates uncertainty as to the original signal's value. If the receiver has some information about the random process that generates the noise, one can in principle recover the information in the original signal by considering all possible states of the noise process. In the case of the Shannon-Hartley theorem, the noise is assumed to be generated by a Gaussian process with a known variance. Since the variance of a Gaussian process is equivalent to its power, it is conventional to call this variance the noise power.

Such a channel is called the Additive White Gaussian Noise channel, because Gaussian noise is added to the signal; "white" means equal amounts of noise at all frequencies within the channel bandwidth. Such noise can arise both from random sources of energy and also from coding and measurement error at the sender and receiver respectively. Since sums of independent Gaussian random variables are themselves Gaussian random variables, this conveniently simplifies analysis, if one assumes that such error sources are also Gaussian and independent.

Examples

  1. If the SNR is 20 dB and the available bandwidth is 4 kHz, which is appropriate for telephone communications, then C = 4000 × log2(1 + 100) = 4000 × log2(101) ≈ 26.63 kbit/s. Note that an S/N of 100 (a linear power ratio) is equivalent to an SNR of 20 dB.
  2. If it is required to transmit at 50 kbit/s and a bandwidth of 1 MHz is used, then the minimum S/N required is given by 50,000 = 1,000,000 × log2(1 + S/N), so S/N = 2^(C/B) - 1 ≈ 0.035, corresponding to an SNR of about -14.5 dB. This shows that it is possible to transmit using signals which are actually much weaker than the background noise level, as in spread-spectrum communications. (Both calculations are reproduced in the sketch below.)
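The two worked examples can be checked with the same formula; the sketch below reproduces the 26.63 kbit/s figure and the roughly -14.5 dB minimum SNR (the capacity function is redefined here so the snippet stands alone):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example 1: 4 kHz bandwidth, SNR of 20 dB (a linear ratio of 100).
c1 = channel_capacity(4000, 100)
print(f"Example 1: C = {c1 / 1000:.2f} kbit/s")                       # ~26.63 kbit/s

# Example 2: 50 kbit/s over 1 MHz; solve C = B * log2(1 + S/N) for S/N.
snr = 2 ** (50_000 / 1_000_000) - 1
print(f"Example 2: S/N = {snr:.3f} = {10 * math.log10(snr):.1f} dB")  # ~0.035, ~-14.5 dB
```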




