Another angle [Under Information theory > Shannon's paper > Capacity Theorem]

John R. Pierce, a colleague of Shannon, describes another proof of the capacity theorem in his book, An Introduction to Information Theory. Here we will explore the capacity theorem a little differently, but based on the same ideas.

1) The energy per unit time of a sampled signal can be represented graphically as the square of the radius of a hypersphere, where the number of dimensions equals the number of samples and each sampled value corresponds to a coordinate value. This is based on the mathematical theorem that when we represent a point in n-dimensional hyperspace as (x₁, x₂, ..., xₙ), the distance of the point from the origin (the zero point) is √(x₁² + x₂² + ... + xₙ²). The special case of this theorem is the two-dimensional Cartesian coordinate system, in which the distance of a point from the origin is the square root of the sum of the squares of its x and y coordinates.

2) The volume of a hypersphere of n dimensions is proportional to rⁿ, where r is its radius. When we increase the number of dimensions, more and more of the volume (as a percentage) gets concentrated towards the surface: if Vₙ(r) denotes the volume of an n-dimensional hypersphere of radius r, the fraction lying in a thin outer shell, (Vₙ(r) − Vₙ(r − ε)) / Vₙ(r) = 1 − (1 − ε/r)ⁿ, grows towards 1 as n increases. A small numeric sketch of points 1) and 2) follows.
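
As that numeric sketch (all numbers are illustrative assumptions), the following Python snippet computes the energy of a sampled signal as the squared Euclidean norm of its sample vector, and shows how the fraction of a hypersphere's volume lying in a thin outer shell grows with the number of dimensions.

    import math
    import random

    # Point 1): energy of a sampled signal = squared distance of the
    # sample vector from the origin (squared Euclidean norm).
    samples = [random.gauss(0.0, 1.0) for _ in range(8)]
    energy = sum(x * x for x in samples)
    radius = math.sqrt(energy)  # radius of the hypersphere the point lies on
    print("energy:", energy, " radius^2:", radius ** 2)

    # Point 2): fraction of an n-dimensional hypersphere's volume lying in
    # a thin outer shell of thickness eps (radius r = 1). Since volume is
    # proportional to r^n, the fraction is 1 - (1 - eps)^n, which tends to 1.
    eps = 0.01
    for n in (2, 10, 100, 1000):
        shell = 1.0 - (1.0 - eps) ** n
        print(n, "dimensions: outer 1% shell holds", round(100 * shell, 1), "% of the volume")

Already at a few hundred dimensions, essentially all of the volume sits within the outermost one percent of the radius; this is the property the argument below relies on.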

Say we have an input signal of bandwidth W. We sample it at a frequency of 2W (obeying the Nyquist theorem) and transmit with an average power of P per symbol. The received signal will contain both the transmitted signal and noise (we assume an average noise power of N per symbol). Let us plot a hyperspace whose number of dimensions equals the number of samples per symbol. In such a hyperspace, symbols can be represented as points on a hypersphere of radius √P. The following diagram symbolically shows the noise symbols and the received symbols.
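
As a rough numeric illustration of this picture (a sketch only; the values of P, N, the number of samples n, and the use of Gaussian samples are assumptions), the following snippet builds a symbol of n samples whose total energy is about P, adds noise of total energy about N, and checks that the transmitted, noise, and received points lie close to hyperspheres of radii √P, √N and √(P+N) respectively.

    import math
    import random

    P, N, n = 100.0, 1.0, 10000   # illustrative energies per symbol and samples per symbol

    tx = [random.gauss(0.0, math.sqrt(P / n)) for _ in range(n)]      # total energy ~ P
    noise = [random.gauss(0.0, math.sqrt(N / n)) for _ in range(n)]   # total energy ~ N
    rx = [t + w for t, w in zip(tx, noise)]

    norm = lambda v: math.sqrt(sum(x * x for x in v))
    print("||tx||    ~", round(norm(tx), 2),    " expected sqrt(P)   =", round(math.sqrt(P), 2))
    print("||noise|| ~", round(norm(noise), 2), " expected sqrt(N)   =", round(math.sqrt(N), 2))
    print("||rx||    ~", round(norm(rx), 2),    " expected sqrt(P+N) =", round(math.sqrt(P + N), 2))

With many dimensions the norms concentrate sharply around these expected radii, which is why the points can be pictured as lying on (rather than inside) the hyperspheres.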

It is assumed that P >> N. Say we keep the transmitted power the same, but the noise power increases from N to N'. As P >> N, there is only a very marginal increase (not equivalent to the increase seen in the noise hypersphere) in the volume of the hypersphere of the received power, as shown in the diagram below:
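
To put a number on "very marginal" (illustrative values only, assuming P = 100, N = 1, N' = 2), a minimal check of how the two radii grow:

    import math

    P, N, N2 = 100.0, 1.0, 2.0   # illustrative values with P >> N; N2 plays the role of N'
    noise_growth = math.sqrt(N2) / math.sqrt(N) - 1.0        # about 41 %
    rx_growth = math.sqrt(P + N2) / math.sqrt(P + N) - 1.0   # about 0.5 %
    print("noise-sphere radius grows by   ", round(100 * noise_growth, 1), "%")
    print("received-sphere radius grows by", round(100 * rx_growth, 2), "%")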

This suggests that even though N (the noise power) has increased, the transmitted signal can still be extracted. But when N increases to the extent that the hypersphere of radius √N starts to overlap the hypersphere of radius √(P+N), extraction of the transmitted signal becomes impossible. We can say that the number of transmitted signals or information symbols that can be extracted is proportional to the ratio of the (percentage) incremental volumes of the hyperspheres of radii √(P+N) and √N, where (N' − N) is sufficiently small. If the hyperspheres are close to each other (that is, their boundaries are close), a small increment in noise would bring them still closer (that is, make them overlap, causing much greater difficulty in extraction). But if the hyperspheres are sufficiently far apart (or overlap only slightly) and P > N, a small increment in noise adds only a little difficulty in extraction.

As the number of dimensions is sufficiently large, the ratio of incremental volumes is as good as the ratio of the volumes themselves (as per point 2 above, the volume gets concentrated more and more towards the surface of the hypersphere). Thus the amount of information is equivalent to ( √(P+N) )ⁿ / ( √N )ⁿ, where n is the number of samples per symbol. As the sampling frequency is 2W, n is equal to 2W (per unit time). So we get ( (P+N)/N )^W.
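
A small worked check of this counting (illustrative numbers; the per-symbol normalisation follows the text): with (P+N)/N = 16 and a bandwidth of W = 4 per unit time, so that n = 2W = 8, the ratio of hypersphere volumes and the expression ( (P+N)/N )^W give the same count of distinguishable signals.

    import math

    P, N = 15.0, 1.0   # illustrative: (P+N)/N = 16
    W = 4.0            # illustrative bandwidth, so n = 2W = 8 samples per unit time
    n = 2 * W

    count_from_volumes = (math.sqrt(P + N) / math.sqrt(N)) ** n   # ratio of hypersphere volumes
    count_from_formula = ((P + N) / N) ** W                       # ( (P+N)/N )^W
    print(count_from_volumes, count_from_formula)                 # both 65536.0
    print("bits per unit time:", math.log2(count_from_volumes))   # 16.0 = W * log2( (P+N)/N )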

The logarithm (to base 2) of this amount of information gives the entropy of the information in bits per unit time, which is nothing but the information rate or capacity of the channel (what we calculated above is the maximum number of information signals that can be extracted, so it can be considered the capacity of the channel).

Thus,

C = log2( ( (P+N)/N )^W )

that is:

C = W log2( (P+N)/N )
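
As a quick sanity check with familiar numbers (illustrative only): for a channel of bandwidth W = 3000 Hz and a signal-to-noise ratio P/N = 1000 (30 dB), so that (P+N)/N = 1001, the formula gives roughly 30 kbit/s.

    import math

    W = 3000.0                      # bandwidth in Hz (illustrative)
    snr = 1000.0                    # P/N, i.e. 30 dB (illustrative)
    C = W * math.log2(1.0 + snr)    # (P+N)/N = 1 + P/N
    print(round(C), "bits per second")   # about 29902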

References: A Mathematical Theory of Communication by Claude E. Shannon; An Introduction to Information Theory by John R. Pierce.

Copyright © Samir Amberkar 2010-11
