Friday, November 27, 2009

 

Data Communication - Module 2






Module 2

Multiplexing - Frequency Division Multiplexing (FDM) - Time Division Multiplexing (TDM) - Synchronous Time Division Multiplexing - Statistical Time Division Multiplexing - Keying Techniques: ASK, FSK, PSK, DPSK - Channel Capacity - Shannon's Theorem.

 

Multiplexing

Multiplexing is the transmission of multiple data communication sessions over a common wire or medium. Multiplexing reduces the number of wires or cables required to connect multiple sessions. A session is considered to be data communication between two devices: computer to computer, terminal to computer, etc.

Individual lines running from 3 terminals to one mainframe are not a problem, but when the number of terminals increases to 10 and up, it becomes a problem. Imagine a mainframe computer with 1200 terminals connected and each terminal running its own wire to the mainframe. If each wire were 1/4" in diameter (typical Cat 5 cable), you would have a wiring bundle going into the computer roughly 2 feet in diameter.

A multiplexer allows sharing of a common line to transmit the many terminal communications as in the above example. The connection between the multiplexer and the mainframe is normally a high speed data link and is not usually divided into separate lines.

The operation of multiplexers (abbreviated MUXs) is transparent to the sending and receiving computers or terminals. Transparent means that, as far as the connected devices are concerned, they appear to be directly connected to the mainframe with individual wires. The multiplexer does not interfere with the normal flow of data, and it can allow a significant reduction in the overall cost of connecting to remote sites through the reduced cost of cable and telephone line charges.

Multiplexers are used to connect terminals located throughout a building to a central mainframe. They are also used to connect terminals located at remote locations to a central mainframe through the phone lines.

There are 3 basic techniques used for multiplexing:

  1. Frequency Division Multiplexing (FDM)
  2. Time Division Multiplexing (TDM)
  3. Statistical Time Division Multiplexing (STDM)

FDM - Frequency Division Multiplexing

Frequency Division Multiplexing (FDM) is an analog technique where each communications channel is assigned its own carrier frequency. To keep the channels from interfering with each other, a guard band of unused frequencies is inserted between them.

For example, if we had our 3 terminals each requiring a bandwidth of 3 kHz and a 300 Hz guard-band, Terminal 1 would be assigned the lowest frequency channel 0 - 3 kHz, Terminal 2 would be assigned the next frequency channel 3.3 kHz - 6.3 kHz and Terminal 3 would be assigned the final frequency channel 6.6 kHz - 9.6 kHz.

The frequencies are stacked on top of each other and many channels can be sent at once. The downside is that the overall line bandwidth increases. Each individual terminal required only 3 kHz of bandwidth, but in the above example the bandwidth needed to transmit all 3 terminals is now 9.6 kHz.
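
To make the arithmetic concrete, here is a minimal sketch (in Python) that assigns frequency bands given a per-channel bandwidth and a guard band; the function name and the printout are illustrative, and the values simply mirror the 3-terminal example above.

    def fdm_allocate(num_channels, channel_bw_hz, guard_hz):
        """Assign non-overlapping frequency bands, separated by a guard band,
        starting from 0 Hz. Returns (channel, low edge, high edge) tuples."""
        bands = []
        start = 0.0
        for ch in range(num_channels):
            bands.append((ch + 1, start, start + channel_bw_hz))
            start += channel_bw_hz + guard_hz
        return bands

    bands = fdm_allocate(3, 3000, 300)
    for ch, lo, hi in bands:
        print(f"Terminal {ch}: {lo / 1000:.1f} kHz - {hi / 1000:.1f} kHz")
    print(f"Total line bandwidth: {bands[-1][2] / 1000:.1f} kHz")   # 9.6 kHz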

FDM does not require all channels to terminate at a single location. Channels can be extracted using a multi-drop technique, so terminals can be stationed at different locations within a building or a city.

FDM is an analog and now largely historical multiplexing technique. It is prone to noise problems and has been overtaken by Time Division Multiplexing, which is better suited to digital data.

 

TDM - Time Division Multiplexing

Time Division Multiplexing is a technique where a short time sample of each channel is inserted into the multiplexed data stream. Each channel is sampled in turn and then the sequence is repeated. The sample period has to be fast enough to sample each channel according to the Nyquist criterion (at least twice the highest frequency) and still fit all the other channels into the same period. It can be thought of as a very fast mechanical switch, selecting each channel for a very short time and then moving on to the next channel.

Each channel has a time slice assigned to it whether the terminal is being used or not. Again, to the sending and receiving stations, it appears as if there is a single line connecting them. All lines originate in one location and end in one location. TDM is more efficient, easier to operate, less complex and less expensive than FDM.
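
A minimal sketch of this round-robin behaviour, assuming each channel supplies fixed-size data units and an idle channel still consumes its slot (the filler value is an arbitrary convention for illustration):

    IDLE = "-"   # assumed filler for a channel with nothing to send

    def tdm_frames(channel_queues):
        """Build synchronous TDM frames: one fixed slot per channel per frame,
        in channel order, until every queue is empty."""
        frames = []
        while any(channel_queues):
            frame = []
            for q in channel_queues:
                frame.append(q.pop(0) if q else IDLE)   # an idle slot is wasted
            frames.append(frame)
        return frames

    queues = [["A1", "A2"], ["B1"], ["C1", "C2", "C3"]]
    for i, frame in enumerate(tdm_frames(queues)):
        print(f"frame {i}: {frame}")
    # frame 0: ['A1', 'B1', 'C1']
    # frame 1: ['A2', '-', 'C2']
    # frame 2: ['-', '-', 'C3']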

 

One drawback of the TDM approach, as discussed earlier, is that many of the time slots in a frame are wasted: if a particular terminal has no data to transmit at a particular instant, an empty time slot is transmitted anyway. An efficient alternative to this synchronous TDM is statistical TDM, also known as asynchronous TDM or intelligent TDM. It dynamically allocates time slots on demand to the separate input channels, thus saving channel capacity. As with synchronous TDM, statistical multiplexers have many I/O lines with a buffer associated with each of them. During input, the multiplexer scans the input buffers, collecting data until a frame is filled, and then sends the frame. At the receiving end, the demultiplexer receives the frame and distributes the data to the appropriate buffers.

The difference between synchronous and statistical TDM is illustrated in Fig. 2.7.9: many slots remain unutilised in the synchronous case, whereas in statistical TDM the slots are fully utilised, leading to a shorter transmission time and better utilisation of the bandwidth of the medium. In statistical TDM, the data in each slot must carry an address part that identifies the source of the data. Since data arrive from, and are distributed to, the I/O lines unpredictably, this address information is required to assure proper delivery, as shown in Fig. 2.7.10. This leads to more overhead per slot; relative addressing can be used to reduce the overhead.
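
By contrast, here is a sketch of the statistical approach: only channels that actually have data are given slots, and each slot carries a source address so the demultiplexer can route it (the frame size and one-value-per-slot addressing are illustrative assumptions):

    def stat_tdm_frames(channel_queues, slots_per_frame=3):
        """Build statistical TDM frames: slots are filled on demand and each
        slot is an (address, data) pair identifying its source channel."""
        frames, frame = [], []
        while any(channel_queues):
            for addr, q in enumerate(channel_queues):
                if q:
                    frame.append((addr, q.pop(0)))
                    if len(frame) == slots_per_frame:
                        frames.append(frame)
                        frame = []
        if frame:
            frames.append(frame)   # last, possibly partial, frame
        return frames

    queues = [["A1", "A2"], ["B1"], ["C1", "C2", "C3"]]
    for i, frame in enumerate(stat_tdm_frames(queues)):
        print(f"frame {i}: {frame}")
    # frame 0: [(0, 'A1'), (1, 'B1'), (2, 'C1')]
    # frame 1: [(0, 'A2'), (2, 'C2'), (2, 'C3')]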

 

Keying Techniques

Keying techniques are methods used to encode digital information in an analog world. The 3 basic keying techniques are:

  1. ASK (amplitude shift keying)
  2. FSK (frequency shift keying)
  3. PSK (phase shift keying)

All 3 keying techniques employ a carrier signal. A carrier signal is a single frequency that is used to carry the intelligence (data). For digital data, the intelligence is either a 1 or a 0. When we modulate the carrier, we are changing its characteristics to correspond to either a 1 or a 0.


ASK

ASK modifies the amplitude of the carrier to represent 1s and 0s. In the example illustrated above, a 1 is represented by the presence of the carrier for a predefined period (3 cycles of the carrier), and the absence of the carrier indicates a 0.


Advantages:

  • Simple to design.

Disadvantages:

  • Noise spikes on the transmission medium interfere with the carrier signal.
  • Loss of connection is read as 0s.

FSK

FSK modifies the frequency of the carrier to represent the 1s and 0s. In the example illustrated above, a 0 is represented by the original carrier frequency and a 1 by a much higher frequency (the cycles are spaced closer together).


Advantages:

  • Immunity to noise on the transmission medium.
  • A signal is always present, so loss of the signal is easily detected.

Disadvantages:

  • Requires 2 frequencies.
  • The detection circuit needs to recognize both frequencies when the signal is lost.

 

PSK

PSK modifies the phase of the carrier to represent a 1 or 0.

The carrier phase is switched at every occurrence of a 1 bit but remains unaffected for a 0 bit. The phase of the signal is measured relative to the phase of the preceding bit. The bits are timed to coincide with a specific number of carrier cycles (in this example, 3 cycles = 1 bit). A small waveform sketch covering ASK, FSK and PSK follows the lists below.

Advantage:

  • Only 1 frequency used.
  • Easy to detect loss of carrier.

Disadvantages:

  • Complex circuitry required to generate and detect phase changes.
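
A small waveform sketch for the three techniques, assuming 3 carrier cycles per bit as in the descriptions above; the carrier frequency, sample rate and FSK frequency pair are arbitrary illustration values (NumPy is used to generate the waveforms):

    import numpy as np

    bits = [1, 0, 1, 1, 0]                        # example data
    fc, fs, cycles_per_bit = 1000.0, 48000.0, 3   # carrier Hz, sample rate, cycles per bit
    t = np.arange(0, cycles_per_bit / fc, 1 / fs) # time axis for one bit period

    ask, fsk, psk = [], [], []
    phase = 0.0                                   # PSK phase, relative to the previous bit
    for b in bits:
        ask.append(b * np.sin(2 * np.pi * fc * t))                  # ASK: carrier on for 1, off for 0
        fsk.append(np.sin(2 * np.pi * (2 * fc if b else fc) * t))   # FSK: higher frequency for 1
        if b:
            phase += np.pi                                          # PSK: flip phase on every 1 bit
        psk.append(np.sin(2 * np.pi * fc * t + phase))

    ask, fsk, psk = (np.concatenate(w) for w in (ask, fsk, psk))
    print(len(ask), len(fsk), len(psk))           # same length: one waveform per technique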

 

 

 

Differential phase shift keying (DPSK)

Differential phase shift keying (DPSK), a common form of phase modulation, conveys data by changing the phase of the carrier wave. In the phase shift keying shown here, the high state contains only one carrier cycle, whereas in DPSK it contains one and a half cycles. The figure illustrates the PSK and DPSK signals modulated by the pulse sequence 10101110.

DPSK and PSK modulated signals

In the modulated signal, a high state is represented by a shape resembling an 'M' and a low state by a shape resembling a 'W'. DPSK encodes two distinct signals of the same frequency with a 180 degree phase difference between them. This experiment requires carrier and modulating signals that are 180 degrees out of phase. A sine wave from an oscillator is selected as the carrier signal. The DSG converts a DC input voltage into pulse trains, and these pulse trains are taken as the modulating signals. In actual practice, the modulating signal is voice or data in digital form.

Differential phase shift keying (DPSK) is a common form of phase modulation that conveys data by changing the phase of the carrier wave. As mentioned for BPSK and QPSK, there is an ambiguity of phase if the constellation is rotated by some effect in the communications channel through which the signal passes. This problem can be overcome by using the data to change rather than set the phase.

For example, in differentially-encoded BPSK a binary '1' may be transmitted by adding 180° to the current phase and a binary '0' by adding 0° to the current phase. In differentially-encoded QPSK, the phase-shifts are 0°, 90°, 180°, -90° corresponding to data '00', '01', '11', '10'. This kind of encoding may be demodulated in the same way as for non-differential PSK but the phase ambiguities can be ignored. Thus, each received symbol is demodulated to one of the M points in the constellation and a comparator then computes the difference in phase between this received signal and the preceding one. The difference encodes the data as described above.
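
A minimal sketch of differentially-encoded BPSK as described above, working only with symbol phases (no waveform or noise); the helper names are illustrative. The encoder adds 180° to the running phase for a '1', and the decoder recovers each bit from the phase difference between consecutive symbols, so a constant unknown rotation introduced by the channel cancels out:

    import numpy as np

    def dbpsk_encode(bits, ref_phase=0.0):
        """The first symbol is a reference; a '1' adds 180 degrees to the
        previous symbol's phase and a '0' repeats it."""
        phases = [ref_phase]
        for b in bits:
            phases.append((phases[-1] + np.pi * b) % (2 * np.pi))
        return phases

    def dbpsk_decode(phases):
        """Recover bits from the phase difference between consecutive symbols."""
        out = []
        for prev, cur in zip(phases, phases[1:]):
            diff = (cur - prev) % (2 * np.pi)
            out.append(1 if abs(diff - np.pi) < np.pi / 2 else 0)
        return out

    data = [1, 0, 1, 0, 1, 1, 1, 0]                # the 10101110 sequence used earlier
    tx = dbpsk_encode(data)
    rx = [(p + 1.23) % (2 * np.pi) for p in tx]    # unknown constant channel rotation
    print(dbpsk_decode(rx) == data)                # True: the rotation cancels out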

The modulated signal is shown below for both DBPSK and DQPSK as described above. It is assumed that the signal starts with zero phase, and so there is a phase shift in both signals at t = 0.

Timing diagram for DBPSK and DQPSK. The binary data stream is above the DBPSK signal. The individual bits of the DBPSK signal are grouped into pairs for the DQPSK signal, which only changes every Ts = 2Tb.

Analysis shows that differential encoding approximately doubles the error rate compared to ordinary M-PSK, but this may be overcome by only a small increase in Eb/N0. Furthermore, this analysis is based on a system in which the only corruption is additive white Gaussian noise. However, there will also be a physical channel between the transmitter and receiver in the communication system. This channel will, in general, introduce an unknown phase shift to the PSK signal; in these cases the differential schemes can yield a better error rate than the ordinary schemes which rely on precise phase information.

 

Shannon–Hartley theorem

In information theory, the Shannon–Hartley theorem is an application of the noisy channel coding theorem to the archetypal case of a continuous-time analog communications channel subject to Gaussian noise. The theorem establishes Shannon's channel capacity for such a communication link, a bound on the maximum amount of error-free digital data (that is, information) that can be transmitted with a specified bandwidth in the presence of the noise interference, under the assumption that the signal power is bounded and the Gaussian noise process is characterized by a known power or power spectral density. The law is named after Claude Shannon and Ralph Hartley.

Def: channel capacity

Maximum amount of data an appropriate channel can carry under given constraints (here, bandwidth and noise power).

Statement of the theorem

Considering all possible multi-level and multi-phase encoding techniques, the Shannon–Hartley theorem states that the channel capacity C, meaning the theoretical tightest upper bound on the information rate (excluding error correcting codes) of clean (or arbitrarily low bit error rate) data that can be sent with a given average signal power S through an analog communication channel subject to additive white Gaussian noise of power N, is:

C = B log2(1 + S/N)

where

C is the channel capacity in bits per second;

B is the bandwidth of the channel in hertz (passband bandwidth in case of a modulated signal);

S is the total received signal power over the bandwidth (in case of a modulated signal, often denoted C, i.e. the modulated carrier), measured in watts or volts²;

N is the total noise or interference power over the bandwidth, measured in watts or volts²; and

S/N is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the communication signal to the Gaussian noise interference expressed as a linear power ratio (not as logarithmic decibels).

 

 

Claude Shannon's development of information theory during World War II provided the next big step in understanding how much information could be reliably communicated through noisy channels. Building on Hartley's foundation, Shannon's noisy channel coding theorem (1948) describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption.[5][6] The proof of the theorem shows that a randomly constructed error correcting code is essentially as good as the best possible code; the theorem is proved through the statistics of such random codes.

Shannon's theorem shows how to compute a channel capacity from a statistical description of a channel, and establishes that given a noisy channel with capacity C and information transmitted at a line rate R, then if

R<C

there exists a coding technique which allows the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below the limit of C bits per second.

The converse is also important. If

R>C

the probability of error at the receiver increases without bound as the rate is increased. So no useful information can be transmitted beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal.

 

Shannon–Hartley theorem

The Shannon–Hartley theorem establishes what that channel capacity is for a finite-bandwidth continuous-time channel subject to Gaussian noise. It connects Hartley's result with Shannon's channel capacity theorem in a form that is equivalent to specifying the M in Hartley's line rate formula in terms of a signal-to-noise ratio, but achieving reliability through error-correction coding rather than through reliably distinguishable pulse levels.

If there were such a thing as an infinite-bandwidth, noise-free analog channel, one could transmit unlimited amounts of error-free data over it per unit of time. Real channels, however, are subject to limitations imposed by both finite bandwidth and nonzero noise.

So how do bandwidth and noise affect the rate at which information can be transmitted over an analog channel?

Surprisingly, bandwidth limitations alone do not impose a cap on maximum information rate. This is because it is still possible for the signal to take on an indefinitely large number of different voltage levels on each symbol pulse, with each slightly different level being assigned a different meaning or bit sequence. If we combine both noise and bandwidth limitations, however, we do find there is a limit to the amount of information that can be transferred by a signal of a bounded power, even when clever multi-level encoding techniques are used.

In the channel considered by the Shannon-Hartley theorem, noise and signal are combined by addition. That is, the receiver measures a signal that is equal to the sum of the signal encoding the desired information and a continuous random variable that represents the noise. This addition creates uncertainty as to the original signal's value. If the receiver has some information about the random process that generates the noise, one can in principle recover the information in the original signal by considering all possible states of the noise process. In the case of the Shannon-Hartley theorem, the noise is assumed to be generated by a Gaussian process with a known variance. Since the variance of a Gaussian process is equivalent to its power, it is conventional to call this variance the noise power.

Such a channel is called the Additive White Gaussian Noise channel, because Gaussian noise is added to the signal; "white" means equal amounts of noise at all frequencies within the channel bandwidth. Such noise can arise both from random sources of energy and also from coding and measurement error at the sender and receiver respectively. Since sums of independent Gaussian random variables are themselves Gaussian random variables, this conveniently simplifies analysis, if one assumes that such error sources are also Gaussian and independent.
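
As a small illustration of the additive model, the following sketch adds white Gaussian noise of a chosen power to a signal and measures the resulting signal-to-noise ratio; the signal shape, noise power, and sample count are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    signal = np.sin(2 * np.pi * np.arange(n) / 50)        # any bounded-power signal
    noise_power = 0.1                                     # Gaussian variance = noise power
    noise = rng.normal(0.0, np.sqrt(noise_power), n)      # "white": flat across frequencies

    received = signal + noise                             # what the receiver measures

    snr = np.mean(signal ** 2) / np.mean(noise ** 2)      # linear power ratio
    print(f"S/N = {snr:.2f} ({10 * np.log10(snr):.1f} dB)")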

Examples

  1. If the SNR is 20 dB, and the bandwidth available is 4 kHz, which is appropriate for telephone communications, then C = 4 log2(1 + 100) = 4 log2(101) = 26.63 kbit/s. Note that the value of S/N = 100 is equivalent to an SNR of 20 dB.
  2. If it is required to transmit at 50 kbit/s, and a bandwidth of 1 MHz is used, then the minimum S/N required is given by 50 = 1000 log2(1 + S/N), so S/N = 2^(C/B) - 1 = 2^0.05 - 1 ≈ 0.035, corresponding to an SNR of -14.5 dB. This shows that it is possible to transmit using signals which are actually much weaker than the background noise level, as in spread-spectrum communications.
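
A quick check of both examples, assuming nothing beyond the capacity formula itself:

    import math

    def shannon_capacity(bandwidth_hz, snr_linear):
        """Channel capacity in bit/s: C = B * log2(1 + S/N)."""
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Example 1: 4 kHz bandwidth, SNR of 20 dB (S/N = 100).
    print(shannon_capacity(4_000, 100) / 1000)            # ≈ 26.63 kbit/s

    # Example 2: S/N needed to carry 50 kbit/s in 1 MHz of bandwidth.
    snr = 2 ** (50_000 / 1_000_000) - 1
    print(snr, 10 * math.log10(snr))                      # ≈ 0.035, i.e. about -14.5 dB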

