<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://en.lntwww.lnt.ei.tum.de/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Rosa</id>
	<title>LNTwww - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://en.lntwww.lnt.ei.tum.de/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Rosa"/>
	<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/Special:Contributions/Rosa"/>
	<updated>2026-05-02T01:59:29Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.6</generator>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Memoryless_Sources&amp;diff=35088</id>
		<title>Information Theory/Discrete Memoryless Sources</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Memoryless_Sources&amp;diff=35088"/>
		<updated>2020-11-02T13:45:05Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{FirstPage}}&lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Entropie wertdiskreter Nachrichtenquellen&lt;br /&gt;
|Vorherige Seite=&lt;br /&gt;
|Nächste Seite=Nachrichtenquellen mit Gedächtnis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== # OVERVIEW OF THE FIRST MAIN CHAPTER # ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This first chapter describes the calculation and the meaning of entropy.&amp;amp;nbsp; According to Shannon's definition of information, entropy is a measure of the mean uncertainty about the outcome of a statistical event or of the uncertainty in the measurement of a stochastic quantity.&amp;amp;nbsp; Somewhat casually expressed, the entropy of a random quantity quantifies its &amp;quot;randomness&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
The following topics are discussed in detail:&lt;br /&gt;
&lt;br /&gt;
*the &#039;&#039;decision content&#039;&#039;&amp;amp;nbsp; and the &#039;&#039;entropy&#039;&#039;&amp;amp;nbsp; of a memoryless message source,&lt;br /&gt;
*the &#039;&#039;binary entropy function&#039;&#039;&amp;amp;nbsp; and its application to &#039;&#039;non-binary sources&#039;&#039;,&lt;br /&gt;
*the entropy calculation for &#039;&#039;sources with memory&#039;&#039;&amp;amp;nbsp; and suitable approximations,&lt;br /&gt;
*the peculiarities of &#039;&#039;Markov sources&#039;&#039;&amp;amp;nbsp; regarding the entropy calculation,&lt;br /&gt;
*the procedure for sources with a large number of symbols, for example &#039;&#039;natural texts&#039;&#039;,&lt;br /&gt;
*the &#039;&#039;entropy estimates&#039;&#039;&amp;amp;nbsp; according to Shannon and Küpfmüller.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Further information on the topic, as well as exercises, simulations and programming exercises, can be found in the experiment &amp;quot;Value Discrete Information Theory&amp;quot; of the practical course &amp;quot;Simulation Digitaler Übertragungssysteme&amp;quot; (English: Simulation of Digital Transmission Systems).&amp;amp;nbsp; This (former) LNT course at the TU Munich is based on&lt;br /&gt;
&lt;br /&gt;
*the Windows program&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Programme/WDIT.zip WDIT] &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link points to the ZIP version of the program and &lt;br /&gt;
*the associated&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Texte/Wertdiskrete_Informationstheorie.pdf lab manual]  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link refers to the PDF version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Model and requirements == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We consider a discrete-value message source&amp;amp;nbsp; $\rm Q$, which emits a sequence&amp;amp;nbsp; $ \langle q_ν \rangle$&amp;amp;nbsp; of symbols. &lt;br /&gt;
*The run variable is &amp;amp;nbsp;$ν = 1$, ... , $N$, where&amp;amp;nbsp; $N$&amp;amp;nbsp; should be &amp;quot;sufficiently large&amp;quot;. &lt;br /&gt;
*Each individual source symbol &amp;amp;nbsp;$q_ν$&amp;amp;nbsp; comes from a symbol set&amp;amp;nbsp; $\{q_μ \}$&amp;amp;nbsp; with&amp;amp;nbsp; $μ = 1$, ... , $M$, where&amp;amp;nbsp; $M$&amp;amp;nbsp; denotes the symbol range:&lt;br /&gt;
 &lt;br /&gt;
:$$q_{\nu} \in \left \{ q_{\mu}  \right \}, \hspace{0.25cm}{\rm with}\hspace{0.25cm} \nu = 1, \hspace{0.05cm} \text{ ...}\hspace{0.05cm} , N\hspace{0.25cm}{\rm and}\hspace{0.25cm}\mu = 1,\hspace{0.05cm} \text{ ...}\hspace{0.05cm} , M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The figure shows a quaternary message source&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; with the alphabet&amp;amp;nbsp; $\rm \{A, \ B, \ C, \ D\}$&amp;amp;nbsp; and an exemplary sequence of length&amp;amp;nbsp; $N = 100$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2227_Inf_T_1_1_S1a_neu.png|frame|Memoryless Quaternary Message Source]]&lt;br /&gt;
&lt;br /&gt;
The following requirements apply:&lt;br /&gt;
*The quaternary message source is fully described by&amp;amp;nbsp; $M = 4$&amp;amp;nbsp; symbol probabilities&amp;amp;nbsp; $p_μ$.&amp;amp;nbsp; In general:&lt;br /&gt;
:$$\sum_{\mu = 1}^M \hspace{0.1cm}p_{\mu} = 1 \hspace{0.05cm}.$$&lt;br /&gt;
*The message source is memoryless, i.e., the individual sequence elements are&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Statistical Dependence and Independence#General_definition_of_statistical_dependence|statistically independent of each other]]:&lt;br /&gt;
:$${\rm Pr} \left (q_{\nu} = q_{\mu} \right ) = {\rm Pr} \left (q_{\nu} = q_{\mu} \hspace{0.03cm} | \hspace{0.03cm} q_{\nu -1}, q_{\nu -2}, \hspace{0.05cm} \text{ ...}\hspace{0.05cm}\right ) \hspace{0.05cm}.$$&lt;br /&gt;
*Since the alphabet consists of symbols&amp;amp;nbsp; (and not of random variables), the specification of&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Expected_Values_and_Moments|expected values]]&amp;amp;nbsp; (linear mean, second moment, standard deviation, etc.) is not possible here, but from an information-theoretical point of view it is not necessary either.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These properties will now be illustrated with an example.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_1_S1b_vers2.png|right|frame|Relative frequencies as a function of&amp;amp;nbsp; $N$]]&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
For the symbol probabilities of a quaternary source applies: &lt;br /&gt;
:$$p_{\rm A} = 0.4 \hspace{0.05cm},\hspace{0.2cm}p_{\rm B} = 0.3 \hspace{0.05cm},\hspace{0.2cm}p_{\rm C} = 0.2 \hspace{0.05cm},\hspace{0.2cm} &lt;br /&gt;
p_{\rm D} = 0.1\hspace{0.05cm}.$$&lt;br /&gt;
For an infinitely long sequence&amp;amp;nbsp; $(N \to \infty)$ &lt;br /&gt;
*the&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/From_Random_Experiment_to_Random_Variable#Bernoulli&#039;s_Law_of_Large_Numbers|relative frequencies]]&amp;amp;nbsp; $h_{\rm A}$,&amp;amp;nbsp; $h_{\rm B}$,&amp;amp;nbsp; $h_{\rm C}$,&amp;amp;nbsp; $h_{\rm D}$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; a-posteriori parameters &lt;br /&gt;
*would be identical to the&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Some_Basic_Definitions#Event_and_Event_set|probabilities]]&amp;amp;nbsp; $p_{\rm A}$,&amp;amp;nbsp; $p_{\rm B}$,&amp;amp;nbsp; $p_{\rm C}$,&amp;amp;nbsp; $p_{\rm D}$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; a-priori parameters. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For smaller&amp;amp;nbsp; $N$,&amp;amp;nbsp; deviations may occur, as the adjacent table (result of a simulation) shows. &lt;br /&gt;
&lt;br /&gt;
*In the graphic above an exemplary sequence is shown with&amp;amp;nbsp; $N = 100$&amp;amp;nbsp; symbols. &lt;br /&gt;
*Since the set elements&amp;amp;nbsp; $\rm A$,&amp;amp;nbsp; $\rm B$,&amp;amp;nbsp; $\rm C$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm D$&amp;amp;nbsp; are symbols and not numbers, no mean values can be given. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, if you replace the symbols with numerical values, for example&amp;amp;nbsp; $\rm A \Rightarrow 1$, &amp;amp;nbsp; $\rm B \Rightarrow 2$, &amp;amp;nbsp; $\rm C \Rightarrow 3$, &amp;amp;nbsp; $\rm D \Rightarrow 4$, then you obtain &amp;lt;br&amp;gt; &amp;amp;nbsp; &amp;amp;nbsp; time averaging &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; overline &amp;amp;nbsp; &amp;amp;nbsp; or &amp;amp;nbsp; &amp;amp;nbsp; ensemble averaging &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; expected value formation&lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Moments of a Discrete Random Variable#Linear_Average_-_Direct_Component|linear average]] :&lt;br /&gt;
:$$m_1 = \overline { q_{\nu} } = {\rm E} \big [ q_{\mu} \big ] = 0.4 \cdot 1 + 0.3 \cdot 2 + 0.2 \cdot 3 + 0.1 \cdot 4&lt;br /&gt;
= 2 \hspace{0.05cm},$$ &lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Moments of a Discrete Random Variable#Square_mean_.E2.80.93_Variance_.E2.80.93_Scattering |square mean]]:&lt;br /&gt;
:$$m_2 = \overline { q_{\nu}^{\hspace{0.05cm}2}  } = {\rm E} \big [ q_{\mu}^{\hspace{0.05cm}2} \big ] = 0.4 \cdot 1^2 + 0.3 \cdot 2^2 + 0.2 \cdot 3^2 + 0.1 \cdot 4^2&lt;br /&gt;
= 5 \hspace{0.05cm},$$&lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Expected_Values_and_Moments#Some_often_used_Central_Moments|standard deviation]] (scattering) according to the &amp;quot;Theorem of Steiner&amp;quot;:&lt;br /&gt;
:$$\sigma = \sqrt {m_2 - m_1^2} = \sqrt {5 - 2^2} = 1 \hspace{0.05cm}.$$}}	&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
==Decision content - Message content==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[https://de.wikipedia.org/wiki/Claude_Shannon Claude Elwood Shannon]&amp;amp;nbsp; defined in 1948 in the standard work of information theory&amp;amp;nbsp; [Sha48]&amp;lt;ref name=&#039;Sha48&#039;&amp;gt;Shannon, C.E.: A Mathematical Theory of Communication. In: Bell Syst. Techn. J. 27 (1948), pp. 379-423 and pp. 623-656.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; the concept of information as &amp;quot;decrease of uncertainty about the occurrence of a statistical event&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
Let us carry out a thought experiment with&amp;amp;nbsp; $M$&amp;amp;nbsp; possible outcomes, which are all equally probable: &amp;amp;nbsp; $p_1 = p_2 = \hspace{0.05cm} \text{ ...}\hspace{0.05cm} = p_M = 1/M \hspace{0.05cm}.$ &lt;br /&gt;
&lt;br /&gt;
Under this assumption applies:&lt;br /&gt;
*If&amp;amp;nbsp; $M = 1$, then each individual trial yields the same result and therefore there is no uncertainty about the outcome.&lt;br /&gt;
*On the other hand, an observer gains information from an experiment with&amp;amp;nbsp; $M = 2$, for example the &amp;quot;coin toss&amp;quot; with the event set&amp;amp;nbsp; $\big \{\rm \boldsymbol{\rm Z}, \rm \boldsymbol{\rm W} \big \}$&amp;amp;nbsp; and the probabilities&amp;amp;nbsp; $p_{\rm Z} = p_{\rm W} = 0.5$; the uncertainty regarding&amp;amp;nbsp; $\rm Z$ &amp;amp;nbsp;resp.&amp;amp;nbsp; $\rm W$&amp;amp;nbsp; is resolved.&lt;br /&gt;
*In the experiment &amp;quot;dice&amp;quot;&amp;amp;nbsp; $(M = 6)$&amp;amp;nbsp; and even more so in roulette&amp;amp;nbsp; $(M = 37)$&amp;amp;nbsp; the information gained is even more significant for the observer than in the &amp;quot;coin toss&amp;quot; when he learns which number was thrown or which ball fell.&lt;br /&gt;
*Finally it should be considered that the experiment&amp;amp;nbsp; &amp;quot;triple coin toss&amp;quot;&amp;amp;nbsp; with the&amp;amp;nbsp; $M = 8$&amp;amp;nbsp; possible results&amp;amp;nbsp; $\rm ZZZ$,&amp;amp;nbsp; $\rm ZZW$,&amp;amp;nbsp; $\rm ZWZ$,&amp;amp;nbsp; $\rm ZWW$,&amp;amp;nbsp; $\rm WZZ$,&amp;amp;nbsp; $\rm WZW$,&amp;amp;nbsp; $\rm WWZ$,&amp;amp;nbsp; $\rm WWW$&amp;amp;nbsp; provides three times as much information as the single coin toss&amp;amp;nbsp; $(M = 2)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following definition fulfills all the requirements listed here for a quantitative information measure for equally probable events, characterized solely by the symbol range&amp;amp;nbsp; $M$.&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp; The&amp;amp;nbsp; &#039;&#039;&#039;decision content&#039;&#039;&#039; &amp;amp;nbsp; of a message source depends only on the symbol range&amp;amp;nbsp; $M$&amp;amp;nbsp; and results in&lt;br /&gt;
 &lt;br /&gt;
:$$H_0 = {\rm log}\hspace{0.1cm}M = {\rm log}_2\hspace{0.1cm}M \hspace{0.15cm} \text{(in &amp;quot;bit&amp;quot;)}&lt;br /&gt;
= {\rm ln}\hspace{0.1cm}M \hspace{0.15cm}\text {(in &amp;quot;nat&amp;quot;)}&lt;br /&gt;
= {\rm lg}\hspace{0.1cm}M \hspace{0.15cm}\text {(in &amp;quot;Hartley&amp;quot;)}\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*The term&amp;amp;nbsp; &#039;&#039;message content&#039;&#039; is also commonly used for this. &lt;br /&gt;
*Since&amp;amp;nbsp; $H_0$&amp;amp;nbsp; indicates the maximum value of the&amp;amp;nbsp; [[Information_Theory/Sources with Memory#Information_Content_and_Entropy|Entropy]]&amp;amp;nbsp; $H$,&amp;amp;nbsp; the short notation&amp;amp;nbsp; $H_\text{max}$&amp;amp;nbsp; is also used in our tutorial. }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Please note our nomenclature:&lt;br /&gt;
*The logarithm will be called &amp;quot;log&amp;quot; in the following, independent of the base. &lt;br /&gt;
*The relations mentioned above are fulfilled due to the following properties:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm log}\hspace{0.1cm}1 = 0 \hspace{0.05cm},\hspace{0.2cm}&lt;br /&gt;
{\rm log}\hspace{0.1cm}37 &amp;gt; {\rm log}\hspace{0.1cm}6 &amp;gt; {\rm log}\hspace{0.1cm}2\hspace{0.05cm},\hspace{0.2cm}&lt;br /&gt;
{\rm log}\hspace{0.1cm}M^k = k \cdot {\rm log}\hspace{0.1cm}M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
* Usually we use the logarithm to the base&amp;amp;nbsp; $2$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;Logarithm dualis&#039;&#039;&amp;amp;nbsp; $\rm (ld)$, where the pseudo unit &amp;quot;bit&amp;quot;, more precisely:&amp;amp;nbsp; &amp;quot;bit/symbol&amp;quot;, is then added:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm ld}\hspace{0.1cm}M = {\rm log_2}\hspace{0.1cm}M = \frac{{\rm lg}\hspace{0.1cm}M}{{\rm lg}\hspace{0.1cm}2}&lt;br /&gt;
= \frac{{\rm ln}\hspace{0.1cm}M}{{\rm ln}\hspace{0.1cm}2} &lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*In addition, you can find in the literature some further definitions, which are based on the natural logarithm&amp;amp;nbsp; $\rm (ln)$&amp;amp;nbsp; or the common logarithm&amp;amp;nbsp; $\rm (lg)$.&lt;br /&gt;
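The conversions between the three bases can be checked with a short sketch (Python purely for illustration; the function name `decision_content` is an arbitrary choice, not course notation):

```python
import math

def decision_content(M):
    """Decision content H0 of a source with symbol range M,
    expressed in the three pseudo units bit, nat and Hartley."""
    return {
        "bit": math.log2(M),        # logarithm dualis (ld)
        "nat": math.log(M),         # natural logarithm (ln)
        "Hartley": math.log10(M),   # decimal logarithm (lg)
    }

H0 = decision_content(4)
# log2(M) can always be obtained via lg or ln:
assert abs(H0["bit"] - H0["Hartley"] / math.log10(2)) < 1e-12
assert abs(H0["bit"] - H0["nat"] / math.log(2)) < 1e-12
```

For&amp;amp;nbsp; $M = 4$&amp;amp;nbsp; this yields&amp;amp;nbsp; $H_0 = 2$&amp;amp;nbsp; bit, in accordance with the definition above.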
 &lt;br /&gt;
==Information content and entropy ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We now drop the previous requirement that all&amp;amp;nbsp; $M$&amp;amp;nbsp; possible outcomes of an experiment are equally probable.&amp;amp;nbsp; In order to keep the notation as compact as possible, we define for this page only:&lt;br /&gt;
 &lt;br /&gt;
:$$p_1 &amp;gt; p_2 &amp;gt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;gt; p_\mu &amp;gt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;gt; p_{M-1} &amp;gt; p_M\hspace{0.05cm},\hspace{0.4cm}\sum_{\mu = 1}^M p_{\mu} = 1 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
We now consider the &#039;&#039;information content&#039;&#039;&amp;amp;nbsp; of the individual symbols, where we denote the &amp;quot;logarithm dualis&amp;quot; with $\log_2$:&lt;br /&gt;
 &lt;br /&gt;
:$$I_\mu = {\rm log_2}\hspace{0.1cm}\frac{1}{p_\mu}= -\hspace{0.05cm}{\rm log_2}\hspace{0.1cm}{p_\mu}&lt;br /&gt;
\hspace{0.5cm}\text{(unit: bit or bit/symbol)}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
You can see:&lt;br /&gt;
*Because of&amp;amp;nbsp; $p_μ ≤ 1$&amp;amp;nbsp; the information content is never negative.&amp;amp;nbsp; In the limiting case&amp;amp;nbsp; $p_μ \to 1$,&amp;amp;nbsp; $I_μ \to 0$. &lt;br /&gt;
*However, for&amp;amp;nbsp; $I_μ = 0$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; $p_μ = 1$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; $M = 1$&amp;amp;nbsp; the decision content is also&amp;amp;nbsp; $H_0 = 0$.&lt;br /&gt;
*For decreasing probabilities&amp;amp;nbsp; $p_μ$&amp;amp;nbsp; the information content increases continuously:&lt;br /&gt;
 &lt;br /&gt;
:$$I_1 &amp;lt; I_2 &amp;lt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;lt; I_\mu &amp;lt;\hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;lt; I_{M-1} &amp;lt; I_M \hspace{0.05cm}.$$&lt;br /&gt;
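For the quaternary source used throughout this chapter, the individual information contents can be evaluated directly (an illustrative Python sketch, not part of the course material):

```python
import math

# Information content I_mu = log2(1/p_mu) for the quaternary source
# with p_A = 0.4, p_B = 0.3, p_C = 0.2, p_D = 0.1.
probs = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}
info = {sym: math.log2(1 / p) for sym, p in probs.items()}

# the rarer the symbol, the larger its information content:
assert info["A"] < info["B"] < info["C"] < info["D"]

print({sym: round(I, 3) for sym, I in info.items()})
# {'A': 1.322, 'B': 1.737, 'C': 2.322, 'D': 3.322}
```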
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &#039;&#039;&#039;The more improbable an event is, the greater is its information content&#039;&#039;&#039;.&amp;amp;nbsp; This fact is also found in daily life:&lt;br /&gt;
*&amp;quot;Six correct numbers&amp;quot; in the lottery attract more attention than &amp;quot;three correct numbers&amp;quot; or no win at all.&lt;br /&gt;
*A tsunami in Asia also dominates the news in Germany for weeks as opposed to the almost standard Deutsche Bahn delays.&lt;br /&gt;
*A series of defeats of Bayern Munich leads to huge headlines in contrast to a winning series.&amp;amp;nbsp; With 1860 Munich exactly the opposite is the case.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, the information content of a single symbol (or event) is not very interesting.&amp;amp;nbsp; On the other hand, one obtains one of the central quantities of information theory&lt;br /&gt;
*by ensemble averaging over all possible symbols&amp;amp;nbsp; $q_μ$ &amp;amp;nbsp;or&amp;amp;nbsp; &lt;br /&gt;
*by time averaging over all elements of the sequence&amp;amp;nbsp; $\langle q_ν \rangle$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp; The&amp;amp;nbsp; &#039;&#039;&#039;Entropy&#039;&#039;&#039;&amp;amp;nbsp; $H$&amp;amp;nbsp; of a source indicates the &#039;&#039;mean information content of all symbols&#039;&#039;&amp;amp;nbsp;:&lt;br /&gt;
 &lt;br /&gt;
:$$H = \overline{I_\nu} = {\rm E}\hspace{0.01cm}[I_\mu] = \sum_{\mu = 1}^M p_{\mu} \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{p_\mu}=&lt;br /&gt;
 -\sum_{\mu = 1}^M p_{\mu} \cdot{\rm log_2}\hspace{0.1cm}{p_\mu} \hspace{0.5cm}\text{(unit: bit, more precisely: bit/symbol)} &lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The overline again marks a time average and&amp;amp;nbsp; $\rm E[\text{...}]$&amp;amp;nbsp; an ensemble average.}}&lt;br /&gt;
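The definition can be sketched as a small helper function (Python purely as an illustration; the name `entropy` is an arbitrary choice):

```python
import math

def entropy(probs):
    """Entropy H in bit/symbol: ensemble average of the individual
    information contents, H = -sum p_mu * log2(p_mu)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# quaternary source with p_A = 0.4, p_B = 0.3, p_C = 0.2, p_D = 0.1:
H = entropy([0.4, 0.3, 0.2, 0.1])
print(round(H, 3))   # 1.846

# equally probable symbols give the maximum H0 = log2(M):
assert entropy([0.25] * 4) == 2.0
```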
&lt;br /&gt;
&lt;br /&gt;
Entropy is among other things a measure for&lt;br /&gt;
*the mean uncertainty about the outcome of a statistical event,&lt;br /&gt;
*the &amp;quot;randomness&amp;quot; of this event,&amp;amp;nbsp; and&lt;br /&gt;
*the average information content of a random variable.	 &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
==Binary entropy function ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
At first we restrict ourselves to the special case&amp;amp;nbsp; $M = 2$&amp;amp;nbsp; and consider a binary source, which emits the two symbols&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$.&amp;amp;nbsp; The occurrence probabilities are &amp;amp;nbsp; $p_{\rm A} = p$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 1 - p$.&lt;br /&gt;
&lt;br /&gt;
For the entropy of this binary source applies: &lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm bin} (p) = p \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{\hspace{0.1cm}p\hspace{0.1cm}} + (1-p) \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{1-p} \hspace{0.5cm}\text{(unit: bit or bit/symbol)}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The function&amp;amp;nbsp; $H_\text{bin}(p)$&amp;amp;nbsp; is called the&amp;amp;nbsp; &#039;&#039;&#039;binary entropy function&#039;&#039;&#039;.&amp;amp;nbsp; The entropy of a source with a larger symbol range&amp;amp;nbsp; $M$&amp;amp;nbsp; can often be expressed using&amp;amp;nbsp; $H_\text{bin}(p)$.&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
The figure shows the binary entropy function for values&amp;amp;nbsp; $0 ≤ p ≤ 1$&amp;amp;nbsp; of the symbol probability of&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; $($or also of&amp;amp;nbsp; $\rm B)$.&amp;amp;nbsp; You can see:&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_1_S4_vers2.png|frame|Binary entropy function as function of&amp;amp;nbsp; $p$|right]]&lt;br /&gt;
*The maximum value&amp;amp;nbsp; $H_\text{max} = 1\; \rm bit$&amp;amp;nbsp; results for&amp;amp;nbsp; $p = 0.5$, thus for equally probable binary symbols.&amp;amp;nbsp; Then &amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$&amp;amp;nbsp; contribute the same amount to entropy.&lt;br /&gt;
* $H_\text{bin}(p)$&amp;amp;nbsp; is symmetrical about&amp;amp;nbsp; $p = 0.5$.&amp;amp;nbsp; A source with&amp;amp;nbsp; $p_{\rm A} = 0.1$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 0.9$&amp;amp;nbsp; has the same entropy&amp;amp;nbsp; $H = 0.469 \; \rm bit$&amp;amp;nbsp; as a source with&amp;amp;nbsp; $p_{\rm A} = 0.9$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 0.1$.&lt;br /&gt;
*The difference&amp;amp;nbsp; $ΔH = H_\text{max} - H$ gives&amp;amp;nbsp; the&amp;amp;nbsp; &#039;&#039;redundancy&#039;&#039;&amp;amp;nbsp; of the source and&amp;amp;nbsp; $r = ΔH/H_\text{max}$&amp;amp;nbsp; the&amp;amp;nbsp; &#039;&#039;relative redundancy&#039;&#039;. &amp;amp;nbsp; In the example,&amp;amp;nbsp; $ΔH = 0.531\; \rm bit$&amp;amp;nbsp; and&amp;amp;nbsp; $r = 53.1 \rm \%$.&lt;br /&gt;
*For&amp;amp;nbsp; $p = 0$&amp;amp;nbsp; this results in&amp;amp;nbsp; $H = 0$, since the symbol sequence &amp;amp;nbsp;$\rm B \ B \ B \text{...}$&amp;amp;nbsp; can be predicted with certainty. &amp;amp;nbsp; Actually, the symbol range is then only&amp;amp;nbsp; $M = 1$.&amp;amp;nbsp; The same applies to&amp;amp;nbsp; $p = 1$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; symbol sequence &amp;amp;nbsp;$\rm A \ A \ A \text{...}$.&lt;br /&gt;
*$H_\text{bin}(p)$&amp;amp;nbsp; is always a&amp;amp;nbsp; &#039;&#039;concave function&#039;&#039;, since its second derivative with respect to the parameter&amp;amp;nbsp; $p$&amp;amp;nbsp; is negative for all values of&amp;amp;nbsp; $p$: &lt;br /&gt;
:$$\frac{ {\rm d}^2H_{\rm bin} (p)}{ {\rm d}\,p^2} = \frac{- 1}{ {\rm ln}(2) \cdot p \cdot (1-p)}&amp;lt; 0&lt;br /&gt;
\hspace{0.05cm}.$$}}&lt;br /&gt;
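The properties listed in Example 2 can be verified numerically. The following sketch is illustrative only; the helper name `h_bin` is an arbitrary choice:

```python
import math

def h_bin(p):
    """Binary entropy function H_bin(p) in bit; 0*log2(0) := 0."""
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

assert h_bin(0.5) == 1.0                      # maximum for equally probable symbols
assert abs(h_bin(0.1) - 0.469) < 1e-3         # value from Example 2
assert abs(h_bin(0.1) - h_bin(0.9)) < 1e-12   # symmetry about p = 0.5
assert h_bin(0.0) == 0.0                      # sequence B B B ... is predictable
```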
&lt;br /&gt;
==Message sources with a larger symbol range==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In the&amp;amp;nbsp; [[Information_Theory/Sources with Memory#Model_and_Prerequisites|first section]]&amp;amp;nbsp; of this chapter we considered a quaternary message source&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; with the symbol probabilities&amp;amp;nbsp; $p_{\rm A} = 0.4$, &amp;amp;nbsp; $p_{\rm B} = 0.3$, &amp;amp;nbsp; $p_{\rm C} = 0.2$ &amp;amp;nbsp; and&amp;amp;nbsp; $ p_{\rm D} = 0.1$.&amp;amp;nbsp; This source has the following entropy:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm quat} = 0.4 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.4} + 0.3 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.2}+ 0.1 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.1}.$$&lt;br /&gt;
&lt;br /&gt;
For numerical calculation, the detour via the decimal logarithm&amp;amp;nbsp; $\lg \ x = {\rm log}_{10} \ x$&amp;amp;nbsp; is often necessary, since the &#039;&#039;logarithm dualis&#039;&#039;&amp;amp;nbsp; $ {\rm log}_2 \ x$&amp;amp;nbsp; is mostly not found on pocket calculators:&lt;br /&gt;
&lt;br /&gt;
:$$H_{\rm quat}=\frac{1}{{\rm lg}\hspace{0.1cm}2} \cdot \left [ 0.4 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.4} + 0.3 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.3} + 0.2 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.2} + 0.1 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.1} \right ] = 1.846\,{\rm bit}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp;&lt;br /&gt;
Now there are certain symmetries between the symbol probabilities: &lt;br /&gt;
[[File:Inf_T_1_1_S5_vers2.png|frame|Entropy of binary source and quaternary source]]&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = p_{\rm D} = p \hspace{0.05cm},\hspace{0.4cm}p_{\rm B} = p_{\rm C} = 0.5 - p \hspace{0.05cm},\hspace{0.3cm}{\rm with} \hspace{0.15cm}0 \le p \le 0.5 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
In this case, the binary entropy function can be used to calculate the entropy:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm quat} = 2 \cdot p \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{\hspace{0.1cm}p\hspace{0.1cm} } + 2 \cdot (0.5-p) \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.5-p}$$&lt;br /&gt;
$$\Rightarrow \hspace{0.3cm} H_{\rm quat} = 1 + H_{\rm bin}(2p) \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The graphic shows as a function of&amp;amp;nbsp; $p$&lt;br /&gt;
*the entropy of the quaternary source (blue) &lt;br /&gt;
*in comparison to the entropy curve of the binary source (red). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the quaternary source only the abscissa&amp;amp;nbsp; $0 ≤ p ≤ 0.5$&amp;amp;nbsp; is allowed. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
You can see from the blue curve for the quaternary source:&lt;br /&gt;
*The maximum entropy&amp;amp;nbsp; $H_\text{max} = 2 \; \rm bit/symbol$&amp;amp;nbsp; results for&amp;amp;nbsp; $p = 0.25$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; equally probable symbols: &amp;amp;nbsp; $p_{\rm A} = p_{\rm B} = p_{\rm C} = p_{\rm D} = 0.25$.&lt;br /&gt;
*With&amp;amp;nbsp; $p = 0$&amp;amp;nbsp; or&amp;amp;nbsp; $p = 0.5$&amp;amp;nbsp; the quaternary source degenerates to a binary source with&amp;amp;nbsp; $p_{\rm B} = p_{\rm C} = 0.5$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm A} = p_{\rm D} = 0$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; entropy&amp;amp;nbsp; $H = 1 \; \rm bit/symbol$.&lt;br /&gt;
*The source with&amp;amp;nbsp; $p_{\rm A} = p_{\rm D} = 0.1$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = p_{\rm C} = 0.4$&amp;amp;nbsp; has the following characteristics (each with the pseudo unit &amp;quot;bit/symbol&amp;quot;):&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp; &amp;amp;nbsp; &#039;&#039;&#039;(1)&#039;&#039;&#039; &amp;amp;nbsp; entropy: &amp;amp;nbsp; $H = 1 + H_{\rm bin} (2p) =1 + H_{\rm bin} (0.2) = 1.722,$&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp; &amp;amp;nbsp; &#039;&#039;&#039;(2)&#039;&#039;&#039; &amp;amp;nbsp; Redundancy: &amp;amp;nbsp; ${\rm \Delta }H = {\rm log_2}\hspace{0.1cm} M - H =2- 1.722= 0.278,$&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp; &amp;amp;nbsp; &#039;&#039;&#039;(3)&#039;&#039;&#039; &amp;amp;nbsp; relative redundancy: &amp;amp;nbsp; $r ={\rm \Delta }H/({\rm log_2}\hspace{0.1cm} M) = 0.139\hspace{0.05cm}.$&lt;br /&gt;
&lt;br /&gt;
*The redundancy of the quaternary source with&amp;amp;nbsp; $p = 0.1$&amp;amp;nbsp; is equal to&amp;amp;nbsp; $ΔH = 0.278 \; \rm bit/symbol$&amp;amp;nbsp; and thus exactly the same as the redundancy of the binary source with&amp;amp;nbsp; $p = 0.2$.}}&lt;br /&gt;
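The relation&amp;amp;nbsp; $H_{\rm quat} = 1 + H_{\rm bin}(2p)$&amp;amp;nbsp; from Example 3 can be checked numerically. A minimal sketch (the helper names `h_bin` and `h_quat` are arbitrary, illustrative choices):

```python
import math

def h_bin(p):
    """Binary entropy function in bit; 0*log2(0) := 0."""
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

def h_quat(p):
    """Entropy of the symmetric quaternary source with
    p_A = p_D = p and p_B = p_C = 0.5 - p (0 <= p <= 0.5)."""
    probs = [p, 0.5 - p, 0.5 - p, p]
    return -sum(q * math.log2(q) for q in probs if q > 0)

# the relation H_quat = 1 + H_bin(2p) holds for all admissible p:
for p in (0.05, 0.1, 0.25, 0.4):
    assert abs(h_quat(p) - (1 + h_bin(2 * p))) < 1e-12

# characteristic (1) from the example, p = 0.1:
assert abs(h_quat(0.1) - 1.722) < 1e-3
```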
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Exercises for chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:1.1 Wetterentropie|Exercise 1.1: Weather Entropy]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.1Z Binäre Entropiefunktion|Exercise 1.1Z: Binary Entropy Function]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.2 Entropie von Ternärquellen|Exercise 1.2: Entropy of Ternary Sources]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==List of sources==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Sources_with_Memory&amp;diff=35087</id>
		<title>Information Theory/Discrete Sources with Memory</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Sources_with_Memory&amp;diff=35087"/>
		<updated>2020-11-02T13:44:27Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Entropie wertdiskreter Nachrichtenquellen&lt;br /&gt;
|Vorherige Seite=Gedächtnislose Nachrichtenquellen&lt;br /&gt;
|Nächste Seite=Natürliche wertdiskrete Nachrichtenquellen&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==A simple introductory example ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
At the&amp;amp;nbsp; [[Information_Theory/Discrete Memoryless Sources#Model_and_requirements|beginning of the first chapter]]&amp;amp;nbsp; we considered a memoryless message source with the symbol set&amp;amp;nbsp; $\rm \{A, \ B, \ C, \ D\}$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 4$.&amp;amp;nbsp; An exemplary symbol sequence is shown again in the following figure as source&amp;amp;nbsp; $\rm Q1$. &lt;br /&gt;
&lt;br /&gt;
With the symbol probabilities&amp;amp;nbsp; $p_{\rm A} = 0.4 \hspace{0.05cm},\hspace{0.2cm}p_{\rm B} = 0.3 \hspace{0.05cm},\hspace{0.2cm}p_{\rm C} = 0.2 \hspace{0.05cm},\hspace{0.2cm} &lt;br /&gt;
p_{\rm D} = 0.1\hspace{0.05cm}$&amp;amp;nbsp; the entropy is&lt;br /&gt;
 &lt;br /&gt;
:$$H \hspace{-0.05cm}= 0.4 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.4} + 0.3 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.2} + 0.1 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.1} \approx 1.84 \hspace{0.05cm}{\rm bit/symbol}&lt;br /&gt;
 \hspace{0.01cm}.$$&lt;br /&gt;
&lt;br /&gt;
Due to the unequal symbol probabilities the entropy is smaller than the decision content&amp;amp;nbsp; $H_0 = \log_2 M = 2 \hspace{0.05cm} \rm bit/symbol$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2238__Inf_T_1_2_S1a_neu.png|right|frame|Quaternary message source without/with memory]]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The source&amp;amp;nbsp; $\rm Q2$&amp;amp;nbsp; is almost identical to the source&amp;amp;nbsp; $\rm Q1$, except that each individual symbol is output not only once, but twice in a row:&amp;amp;nbsp; $\rm A ⇒ AA$,&amp;amp;nbsp; $\rm B ⇒ BB$,&amp;amp;nbsp; and so on. &lt;br /&gt;
*It is obvious that&amp;amp;nbsp; $\rm Q2$&amp;amp;nbsp; has a smaller entropy (uncertainty) than&amp;amp;nbsp; $\rm Q1$. &lt;br /&gt;
*Because of the simple repetition code, the entropy&amp;amp;nbsp; &lt;br /&gt;
:$$H = 1.84/2 = 0.92 \hspace{0.05cm} \rm bit/symbol$$&lt;br /&gt;
:is only half as large, although the occurrence probabilities have not changed.}}&lt;br /&gt;
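The halving of the entropy by the repetition code can be made plausible numerically: each pair&amp;amp;nbsp; $\rm AA$,&amp;amp;nbsp; $\rm BB$, ...&amp;amp;nbsp; carries exactly the information of one&amp;amp;nbsp; $\rm Q1$&amp;amp;nbsp; symbol, but occupies two emitted symbols. A minimal illustrative sketch (not part of the course material):

```python
import math

def entropy(probs):
    """H = -sum p * log2(p) in bit."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

probs = [0.4, 0.3, 0.2, 0.1]   # p_A ... p_D, identical for Q1 and Q2
H_Q1 = entropy(probs)

# Q2 emits every Q1 symbol twice (A => AA, B => BB, ...): each pair
# carries the information of a single Q1 symbol, so the entropy per
# emitted symbol is halved.
H_Q2 = H_Q1 / 2

print(round(H_Q1, 2), round(H_Q2, 2))   # 1.85 0.92
```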
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp;&lt;br /&gt;
This example shows:&lt;br /&gt;
*The entropy of a source with memory is smaller than the entropy of a memoryless source with the same symbol probabilities.&lt;br /&gt;
*The statistical dependencies within the sequence&amp;amp;nbsp; $〈 q_ν 〉$&amp;amp;nbsp; now have to be considered, &lt;br /&gt;
*namely the dependence of the symbol&amp;amp;nbsp; $q_ν$&amp;amp;nbsp; on the predecessor symbols&amp;amp;nbsp; $q_{ν-1}$,&amp;amp;nbsp; $q_{ν-2}$, ... }}&lt;br /&gt;
 &lt;br /&gt;
	 &lt;br /&gt;
== Entropy with respect to two-tuples == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We continue to look at the source symbol sequence&amp;amp;nbsp; $〈 q_1, \hspace{0.05cm} q_2,\hspace{0.05cm}\text{ ...} \hspace{0.05cm}, q_{ν-1}, \hspace{0.05cm}q_ν, \hspace{0.05cm}\hspace{0.05cm}q_{ν+1},\hspace{0.05cm}\text{...} \hspace{0.05cm}〉$&amp;amp;nbsp; and now consider the entropy of two successive source symbols. &lt;br /&gt;
*All source symbols&amp;amp;nbsp; $q_ν$&amp;amp;nbsp; are taken from an alphabet with symbol range&amp;amp;nbsp; $M$, so that for the combination&amp;amp;nbsp; $(q_ν, \hspace{0.05cm}q_{ν+1})$&amp;amp;nbsp; there are exactly&amp;amp;nbsp; $M^2$&amp;amp;nbsp; possible symbol pairs, whose [[Theory_of_Stochastic_Signals/Set Theory Basics#Intersection|joint probabilities]] in general do not factor into the individual probabilities:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm Pr}(q_{\nu}\cap q_{\nu+1})\ne {\rm Pr}(q_{\nu}) \cdot {\rm Pr}( q_{\nu+1})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*From this, the&amp;amp;nbsp; &#039;&#039;joint entropy&#039;&#039;&amp;amp;nbsp; of an ordered pair can be computed:&lt;br /&gt;
 &lt;br /&gt;
:$$H_2\hspace{0.05cm}&#039; = \sum_{q_{\nu}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm}q_{\mu}\hspace{0.01cm} \}} \sum_{q_{\nu+1}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm} q_{\mu}\hspace{0.01cm} \}}\hspace{-0.1cm}{\rm Pr}(q_{\nu}\cap q_{\nu+1}) \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{{\rm Pr}(q_{\nu}\cap q_{\nu+1})} \hspace{0.4cm}\text{(unit: bit/two-tuple)}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
:The index &amp;quot;2&amp;quot; symbolizes that the entropy thus calculated refers to two-tuples. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To get the average information content per symbol,&amp;amp;nbsp; $H_2\hspace{0.05cm}&#039;$&amp;amp;nbsp; has to be divided in half:&lt;br /&gt;
 &lt;br /&gt;
:$$H_2 = {H_2\hspace{0.05cm}&#039;}/{2}  \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
In order to achieve a consistent nomenclature, we now label the entropy defined in chapter&amp;amp;nbsp; [[Information_Theory/Discrete Memoryless Sources#Model_and_Prerequisites|Memoryless Message Sources]]&amp;amp;nbsp; with&amp;amp;nbsp; $H_1$:&lt;br /&gt;
&lt;br /&gt;
:$$H_1 = \sum_{q_{\nu}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm}q_{\mu}\hspace{0.01cm} \}} {\rm Pr}(q_{\nu}) \cdot {\rm log_2}\hspace{0.1cm}\frac {1}{{\rm Pr}(q_{\nu})} \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The index &amp;quot;1&amp;quot; is supposed to indicate that&amp;amp;nbsp; $H_1$&amp;amp;nbsp; considers only the symbol probabilities and not the statistical bonds between symbols within the sequence.&amp;amp;nbsp; With the decision content&amp;amp;nbsp; $H_0 = \log_2 \ M$&amp;amp;nbsp; the following order relation results:&lt;br /&gt;
 &lt;br /&gt;
$$H_0 \ge H_1 \ge H_2&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
With statistically independent sequence elements,&amp;amp;nbsp; $H_2 = H_1$ holds.&lt;br /&gt;
&lt;br /&gt;
The previous equations each represent an ensemble average. &amp;amp;nbsp; The probabilities required for the calculation of&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; can, however, also be determined as time averages from a very long sequence or, more precisely, approximated by the corresponding&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/From Random Experiment to Random Variable#Bernoulli&#039;s_Law_of_Large_Numbers|relative frequencies]].&lt;br /&gt;
&lt;br /&gt;
Let us now illustrate the calculation of entropy approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; with three examples.&lt;br /&gt;
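This time-averaging approach can be sketched in a few lines. The snippet below is a minimal illustration (not part of the original text): it estimates $H_1$ and $H_2$ from the relative frequencies of the symbols and two-tuples of a given sequence.

```python
from collections import Counter
from math import log2

def h1(seq):
    """First entropy approximation from relative symbol frequencies (bit/symbol)."""
    n = len(seq)
    return sum(c / n * log2(n / c) for c in Counter(seq).values())

def h2(seq):
    """Second approximation: entropy of the two-tuples, divided by two (bit/symbol)."""
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    n = len(pairs)
    h2_prime = sum(c / n * log2(n / c) for c in Counter(pairs).values())  # bit/two-tuple
    return h2_prime / 2

# Deterministic check with the alternating sequence ABAB...
seq = "AB" * 25
print(round(h1(seq), 3))  # 1.0  (equally probable symbols)
print(round(h2(seq), 3))  # 0.5  (only AB and BA occur)
```

For short sequences the estimates deviate from the true values, exactly as discussed for $N = 50$ in the following Example 2.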
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
We will first look at the sequence&amp;amp;nbsp; $〈 q_1$, ... , $q_{50} \rangle $&amp;amp;nbsp; according to the graphic, where the sequence elements&amp;amp;nbsp; $q_ν$&amp;amp;nbsp; originate from the alphabet $\rm \{A, \ B, \ C \}$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; the symbol set size is&amp;amp;nbsp; $M = 3$.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_2_S2_vers2.png|center|frame|Ternary symbol sequence and formation of two-tuples]]&lt;br /&gt;
&lt;br /&gt;
By time averaging over the&amp;amp;nbsp; $50$&amp;amp;nbsp; symbols one gets the symbol probabilities&amp;amp;nbsp; $p_{\rm A} ≈ 0.5$, &amp;amp;nbsp; $p_{\rm B} ≈ 0.3$ &amp;amp;nbsp;and&amp;amp;nbsp; $p_{\rm C} ≈ 0.2$, with which one can calculate the first order entropy approximation:&lt;br /&gt;
 &lt;br /&gt;
:$$H_1 = 0.5 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.5} + 0.3 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.2}  \approx \, 1.486 \,{\rm bit/symbol}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
Because the symbols are not equally probable,&amp;amp;nbsp; $H_1 &amp;lt; H_0 = 1.585 \hspace{0.05cm} \rm bit/symbol$.&amp;amp;nbsp; As an approximation for the probabilities of the two-tuples one obtains from the above sequence:&lt;br /&gt;
 &lt;br /&gt;
:$$\begin{align*}p_{\rm AA} \hspace{-0.1cm}&amp;amp; = \hspace{-0.1cm} 14/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm AB} = 8/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm AC} = 3/49\hspace{0.05cm}, \\\&lt;br /&gt;
 p_{\rm BA} \hspace{-0.1cm}&amp;amp; = \hspace{0.07cm} 7/49\hspace{0.05cm}, \hspace{0.25cm}p_{\rm BB} = 2/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm BC} = 5/49\hspace{0.05cm}, \\\&lt;br /&gt;
 p_{\rm CA} \hspace{-0.1cm}&amp;amp; = \hspace{0.07cm} 4/49\hspace{0.05cm}, \hspace{0.25cm}p_{\rm CB} = 5/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm CC} = 1/49\hspace{0.05cm}.\end{align*}$$&lt;br /&gt;
&lt;br /&gt;
Please note that the&amp;amp;nbsp; $50$&amp;amp;nbsp; sequence elements can only be formed from&amp;amp;nbsp; $49$&amp;amp;nbsp; two-tuples&amp;amp;nbsp; $(\rm AA$, ... , $\rm CC)$&amp;amp;nbsp; which are marked in different colors in the graphic.&lt;br /&gt;
&lt;br /&gt;
*The entropy approximation&amp;amp;nbsp; $H_2$&amp;amp;nbsp; should actually be equal to&amp;amp;nbsp; $H_1$&amp;amp;nbsp; since the given symbol sequence comes from a memoryless source. &lt;br /&gt;
*Because of the short sequence length&amp;amp;nbsp; $N = 50$&amp;amp;nbsp; and the resulting statistical inaccuracy, however, a smaller value results: &amp;amp;nbsp; &lt;br /&gt;
:$$H_2 ≈ 1.39\hspace{0.05cm} \rm bit/symbol.$$}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp;&lt;br /&gt;
Now let us consider a&amp;amp;nbsp; &#039;&#039;memoryless&amp;amp;nbsp; binary source&#039;&#039;&amp;amp;nbsp; with equally probable symbols, i.e.&amp;amp;nbsp; $p_{\rm A} = p_{\rm B} = 1/2$.&amp;amp;nbsp; The first twenty sequence elements are &amp;amp;nbsp; $〈 q_ν 〉 =\rm BBAAABAABBBBBAAAABAB$ ...&lt;br /&gt;
*Because of the equally probable binary symbols &amp;amp;nbsp; $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$.&lt;br /&gt;
*The compound probability&amp;amp;nbsp; $p_{\rm AB}$&amp;amp;nbsp; of the combination&amp;amp;nbsp; $\rm AB$&amp;amp;nbsp; is equal to&amp;amp;nbsp; $p_{\rm A} \cdot p_{\rm B} = 1/4$.&amp;amp;nbsp; Likewise $p_{\rm AA} = p_{\rm BB} = p_{\rm BA} = 1/4$. &lt;br /&gt;
*This gives for the second entropy approximation&lt;br /&gt;
 &lt;br /&gt;
:$$H_2 = {1}/{2} \cdot \big [ {1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 + {1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 +{1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 +{1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 \big ] = 1 \,{\rm bit/symbol} = H_1 = H_0&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note&#039;&#039;: &amp;amp;nbsp; Due to the short length of the given sequence the probabilities are slightly different:&amp;amp;nbsp; $p_{\rm AA} = 6/19$,&amp;amp;nbsp; $p_{\rm BB} = 5/19$,&amp;amp;nbsp; $p_{\rm AB} = p_{\rm BA} = 4/19$.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 4:}$&amp;amp;nbsp;&lt;br /&gt;
The third sequence considered here results from the binary sequence of&amp;amp;nbsp; $\text{Example 3}$&amp;amp;nbsp; by using a simple repeat code: &lt;br /&gt;
:$$〈 q_ν 〉 =\rm BbBbAaAaAaBbAaAaBbBb \text{...} $$&lt;br /&gt;
*The repeated symbols are marked by corresponding lower case letters.&amp;amp;nbsp; Still,&amp;amp;nbsp; $M=2$&amp;amp;nbsp; applies.&lt;br /&gt;
*Because of the equally probable binary symbols, this also results in&amp;amp;nbsp; $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$.&lt;br /&gt;
*As shown in&amp;amp;nbsp; [[Aufgaben:1.3_Entropienäherungen|Exercise 1.3]]&amp;amp;nbsp; for the compound probabilities we obtain&amp;amp;nbsp; $p_{\rm AA}=p_{\rm BB} = 3/8$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm AB}=p_{\rm BA} = 1/8$.&amp;amp;nbsp; Hence&lt;br /&gt;
:$$\begin{align*}H_2 ={1}/{2} \cdot \big [ 2 \cdot {3}/{8} \cdot {\rm log}_2\hspace{0.1cm} {8}/{3} + &lt;br /&gt;
 2 \cdot {1}/{8} \cdot {\rm log}_2\hspace{0.1cm}8\big ] = {3}/{8} \cdot {\rm log}_2\hspace{0.1cm}8 - {3}/{8} \cdot{\rm log}_2\hspace{0.1cm}3 + {1}/{8} \cdot {\rm log}_2\hspace{0.1cm}8 \approx 0.906 \,{\rm bit/symbol} &amp;lt; H_1 = H_0&lt;br /&gt;
 \hspace{0.05cm}.\end{align*}$$&lt;br /&gt;
&lt;br /&gt;
A closer look at the task at hand leads to the following conclusion: &lt;br /&gt;
*The entropy should actually be&amp;amp;nbsp; $H = 0.5 \hspace{0.05cm} \rm bit/symbol$&amp;amp;nbsp; (every second symbol provides no new information). &lt;br /&gt;
*The second entropy approximation&amp;amp;nbsp; $H_2 = 0.906 \hspace{0.05cm} \rm bit/symbol$&amp;amp;nbsp; is, however, much larger than the entropy&amp;amp;nbsp; $H$.&lt;br /&gt;
*For the determination of the entropy, the second order approximation is therefore not sufficient;&amp;amp;nbsp; rather, larger contiguous blocks of&amp;amp;nbsp; $k &amp;gt; 2$&amp;amp;nbsp; symbols must be considered. &lt;br /&gt;
*In the following, such a block is referred to as&amp;amp;nbsp; $k$-tuple.}}&lt;br /&gt;
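The value $H_2 ≈ 0.906 \ \rm bit/symbol$ from Example 4 can be verified directly from the compound probabilities given above. A minimal check (illustration only, not part of the original text):

```python
from math import log2

# Two-tuple probabilities of the repeat-code sequence from Example 4
p = {"AA": 3 / 8, "BB": 3 / 8, "AB": 1 / 8, "BA": 1 / 8}

h2_prime = sum(pi * log2(1 / pi) for pi in p.values())  # bit/two-tuple
h2 = h2_prime / 2                                       # bit/symbol
print(round(h2, 3))  # 0.906
```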
&lt;br /&gt;
	 	 &lt;br /&gt;
==Generalization to $k$-tuples and the limit ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Using the compound probability&amp;amp;nbsp; $p_i^{(k)}$&amp;amp;nbsp; of a&amp;amp;nbsp; $k$-tuple as an abbreviation, we write in general:&lt;br /&gt;
 &lt;br /&gt;
:$$H_k = \frac{1}{k} \cdot \sum_{i=1}^{M^k} p_i^{(k)} \cdot {\rm log}_2\hspace{0.1cm} \frac{1}{p_i^{(k)}} \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The control variable&amp;amp;nbsp; $i$&amp;amp;nbsp; stands for one of the&amp;amp;nbsp; $M^k$&amp;amp;nbsp; tuples.&amp;amp;nbsp; The previously calculated approximation&amp;amp;nbsp; $H_2$&amp;amp;nbsp; results for&amp;amp;nbsp; $k = 2$.&lt;br /&gt;
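Assuming, as above, that the $M^k$ compound probabilities are approximated by relative frequencies from a long sequence, the $k$-th entropy approximation can be estimated as follows (a sketch for illustration; the function name is not from the text):

```python
from collections import Counter
from math import log2

def entropy_approximation(seq, k):
    """Estimate H_k: entropy of the k-tuples, divided by k (unit: bit/symbol)."""
    tuples = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    n = len(tuples)
    return sum(c / n * log2(n / c) for c in Counter(tuples).values()) / k

# Periodic example sequence with p_A = 2/3, p_B = 1/3
seq = "AAB" * 400
print(round(entropy_approximation(seq, 1), 3))  # 0.918
print(round(entropy_approximation(seq, 3), 3))  # close to log2(3)/3, i.e. 0.528
```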
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp;&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;&#039;Entropy of a message source with memory&#039;&#039;&#039;&amp;amp;nbsp; is defined as the following limit: &lt;br /&gt;
:$$H = \lim_{k \rightarrow \infty }H_k \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
For the&amp;amp;nbsp; &#039;&#039;entropy approximations&#039;&#039;&amp;amp;nbsp; $H_k$&amp;amp;nbsp; the following relations apply&amp;amp;nbsp; $(H_0$ is the decision content$)$:&lt;br /&gt;
:$$H \le \text{...} \le H_k \le \text{...} \le H_2 \le H_1 \le H_0 \hspace{0.05cm}.$$}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Except for a few special cases (see the following example), the computational effort increases with increasing&amp;amp;nbsp; $k$&amp;amp;nbsp; and, of course, also depends on the symbol set size&amp;amp;nbsp; $M$:&lt;br /&gt;
*For the calculation of&amp;amp;nbsp; $H_{10}$&amp;amp;nbsp; of a binary source&amp;amp;nbsp; $(M = 2)$,&amp;amp;nbsp; one has to average over&amp;amp;nbsp; $2^{10} = 1024$&amp;amp;nbsp; terms. &lt;br /&gt;
*With each further increase of&amp;amp;nbsp; $k$&amp;amp;nbsp; by&amp;amp;nbsp; $1$&amp;amp;nbsp; the number of sum terms doubles.&lt;br /&gt;
*In the case of a quaternary source&amp;amp;nbsp; $(M = 4)$,&amp;amp;nbsp; one must already average over&amp;amp;nbsp; $4^{10} = 1\hspace{0.08cm}048\hspace{0.08cm}576$&amp;amp;nbsp; summation terms for the determination of&amp;amp;nbsp; $H_{10}$.&lt;br /&gt;
* Considering that each of these&amp;amp;nbsp; $4^{10} =2^{20} &amp;gt;10^6$&amp;amp;nbsp; $k$-tuples should occur about&amp;amp;nbsp; $100$&amp;amp;nbsp; times in the simulation/time averaging (statistical guideline) to ensure sufficient accuracy, it follows that the sequence length should be greater than&amp;amp;nbsp; $N = 10^8$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 5:}$&amp;amp;nbsp;&lt;br /&gt;
We consider an alternating binary sequence &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $〈 q_ν 〉 =\rm ABABABAB$ ... &amp;amp;nbsp;.&amp;amp;nbsp; Accordingly it holds that&amp;amp;nbsp; $H_0 = H_1 = 1 \hspace{0.1cm} \rm bit/symbol$. &lt;br /&gt;
&lt;br /&gt;
In this special case, independent of&amp;amp;nbsp; $k$,&amp;amp;nbsp; only two compound probabilities have to be averaged to determine the&amp;amp;nbsp; $H_k$ approximation:&lt;br /&gt;
* $k = 2$: &amp;amp;nbsp;&amp;amp;nbsp; $p_{\rm AB} = p_{\rm BA} = 1/2$ &amp;amp;nbsp; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &amp;amp;nbsp; $H_2 = 1/2 \hspace{0.2cm} \rm bit/symbol$,&lt;br /&gt;
* $k = 3$:  &amp;amp;nbsp;&amp;amp;nbsp; $p_{\rm ABA} = p_{\rm BAB} = 1/2$ &amp;amp;nbsp; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &amp;amp;nbsp; $H_3 = 1/3 \hspace{0.2cm} \rm bit/symbol$,&lt;br /&gt;
* $k = 4$:  &amp;amp;nbsp;&amp;amp;nbsp; $p_{\rm ABAB} = p_{\rm BABA} = 1/2$ &amp;amp;nbsp; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &amp;amp;nbsp; $H_4 = 1/4 \hspace{0.2cm} \rm bit/symbol$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The (actual) entropy of this alternating binary sequence is therefore&lt;br /&gt;
:$$H = \lim_{k \rightarrow \infty }{1}/{k} = 0 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The result was to be expected, since the considered sequence contains only minimal information, which has no effect on the final entropy value&amp;amp;nbsp; $H$, namely: &amp;lt;br&amp;gt; &amp;amp;nbsp; &amp;amp;nbsp; &amp;quot;Does&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; occur at the even or the odd time instants?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
You can see that&amp;amp;nbsp; $H_k$&amp;amp;nbsp; approaches this final value&amp;amp;nbsp; $H = 0$&amp;amp;nbsp; only very slowly:&amp;amp;nbsp; The twentieth entropy approximation still yields&amp;amp;nbsp; $H_{20} = 0.05 \hspace{0.05cm} \rm bit/symbol$. }}&lt;br /&gt;
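Because exactly two $k$-tuples occur for the alternating sequence, each with probability $1/2$, the statement $H_k = 1/k$ can be checked in closed form. A tiny sketch (illustration only, not part of the original text):

```python
from math import log2

def h_k_alternating(k):
    """H_k for the sequence ABAB...: two k-tuples, each with probability 1/2."""
    h_prime = 2 * (1 / 2) * log2(2)  # = 1 bit per k-tuple, independent of k
    return h_prime / k               # bit/symbol

print(h_k_alternating(2))   # 0.5
print(h_k_alternating(20))  # 0.05
```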
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Summary of the results of the last pages:}$&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
*Generally, the following applies to the&amp;amp;nbsp; &#039;&#039;&#039;Entropy of a message source&#039;&#039;&#039;:&lt;br /&gt;
:$$H \le \text{...} \le H_3 \le H_2 \le H_1 \le H_0 &lt;br /&gt;
 \hspace{0.05cm}.$$ &lt;br /&gt;
*A&amp;amp;nbsp; &#039;&#039;&#039;redundancy-free source&#039;&#039;&#039; &amp;amp;nbsp; exists if all&amp;amp;nbsp; $M$&amp;amp;nbsp; symbols are equally probable and there are no statistical bonds within the sequence. &amp;lt;br&amp;gt; For such a source,&amp;amp;nbsp; $(r$&amp;amp;nbsp; denotes the &#039;&#039;relative redundancy&#039;&#039; $)$:&lt;br /&gt;
:$$H = H_0 = H_1 = H_2 = H_3 = \text{...}\hspace{0.5cm}&lt;br /&gt;
\Rightarrow \hspace{0.5cm} r = \frac{H_0 - H}{H_0}= 0 \hspace{0.05cm}.$$ &lt;br /&gt;
*A&amp;amp;nbsp; &#039;&#039;&#039;memoryless source&#039;&#039;&#039; &amp;amp;nbsp; can be quite redundant&amp;amp;nbsp; $(r&amp;gt; 0)$.&amp;amp;nbsp; This redundancy then is solely due to the deviation of the symbol probabilities from the uniform distribution.&amp;amp;nbsp; Here the following relations are valid&lt;br /&gt;
:$$H = H_1 = H_2 = H_3 = \text{...} \le H_0 \hspace{0.5cm}\Rightarrow \hspace{0.5cm}0 \le r = \frac{H_0 - H_1}{H_0}&amp;lt; 1 \hspace{0.05cm}.$$ &lt;br /&gt;
*The corresponding condition for a&amp;amp;nbsp; &#039;&#039;&#039;source with memory&#039;&#039;&#039;&amp;amp;nbsp; is&lt;br /&gt;
:$$ H &amp;lt;\text{...} &amp;lt; H_3 &amp;lt; H_2 &amp;lt; H_1 \le H_0 \hspace{0.5cm}\Rightarrow \hspace{0.5cm} 0 &amp;lt; r = \frac{H_0 - H}{H_0}\le1 \hspace{0.05cm}.$$&lt;br /&gt;
*If&amp;amp;nbsp; $H_2 &amp;lt; H_1$, then&amp;amp;nbsp; $H_3 &amp;lt; H_2$, &amp;amp;nbsp; $H_4 &amp;lt; H_3$, etc. &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; In the general equation, the&amp;amp;nbsp; &amp;quot;≤&amp;quot; character must be replaced by the&amp;amp;nbsp; &amp;quot;&amp;lt;&amp;quot; character. &lt;br /&gt;
*If the symbols are equally probable, then again&amp;amp;nbsp; $H_1 = H_0$, while&amp;amp;nbsp; $H_1 &amp;lt; H_0$&amp;amp;nbsp; applies to symbols which are not equally probable.}}&lt;br /&gt;
	 	 &lt;br /&gt;
&lt;br /&gt;
==The entropy of the AMI code ==	 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In chapter&amp;amp;nbsp; [[Digital_Signal_Transmission/Symbol-Wise Coding with Pseudo Ternary Codes#Properties_of_AMI Code|Symbol-wise Coding with Pseudo-Ternary Codes]]&amp;amp;nbsp; of the book &amp;quot;Digital Signal Transmission&amp;quot;, among other things, the AMI pseudo-ternary code is discussed. &lt;br /&gt;
*This converts the binary sequence&amp;amp;nbsp; $〈 q_ν 〉$&amp;amp;nbsp; with&amp;amp;nbsp; $q_ν ∈ \{ \rm L, \ H \}$&amp;amp;nbsp; into the ternary sequence&amp;amp;nbsp; $〈 c_ν 〉$&amp;amp;nbsp; with&amp;amp;nbsp; $c_ν ∈ \{ \rm M, \ N, \ P \}$.&lt;br /&gt;
*The names of the source symbols stand for&amp;amp;nbsp; $\rm L$ow&amp;amp;nbsp; and&amp;amp;nbsp; $\rm H$igh,&amp;amp;nbsp; those of the code symbols for&amp;amp;nbsp; $\rm M$inus,&amp;amp;nbsp; $\rm N$ull&amp;amp;nbsp; and&amp;amp;nbsp; $\rm P$lus. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coding rule of the AMI code (&amp;quot;Alternate Mark Inversion&amp;quot;) is:&lt;br /&gt;
[[File:P_ID2240__Inf_T_1_2_S4_neu.png|right|frame|Signals and symbol sequences for AMI code]]&lt;br /&gt;
&lt;br /&gt;
*Each binary symbol&amp;amp;nbsp; $q_ν =\rm L$&amp;amp;nbsp; is represented by the code symbol&amp;amp;nbsp; $c_ν =\rm N$.&lt;br /&gt;
*In contrast,&amp;amp;nbsp; $q_ν =\rm H$&amp;amp;nbsp; is alternately coded with&amp;amp;nbsp; $c_ν =\rm P$&amp;amp;nbsp; and&amp;amp;nbsp; $c_ν =\rm M$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; hence the name &amp;quot;AMI&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This special encoding adds redundancy with the sole purpose of ensuring that the code sequence does not contain a DC component. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
However, we do not consider the spectral properties of the AMI code here, but interpret this code information-theoretically:&lt;br /&gt;
*Based on the number of levels&amp;amp;nbsp; $M = 3$,&amp;amp;nbsp; the decision content of the (ternary) code sequence is equal to&amp;amp;nbsp; $H_0 = \log_2 \ 3 ≈ 1.585 \hspace{0.05cm} \rm bit/symbol$.&amp;amp;nbsp; The first entropy approximation yields&amp;amp;nbsp; $H_1 = 1.5 \hspace{0.05cm} \rm bit/symbol$, as shown in the following calculation:&lt;br /&gt;
  &lt;br /&gt;
:$$p_{\rm H} = p_{\rm L} = 1/2 \hspace{0.3cm}\Rightarrow \hspace{0.3cm}&lt;br /&gt;
p_{\rm N} = p_{\rm L} = 1/2\hspace{0.05cm},\hspace{0.2cm}p_{\rm M} = p_{\rm P}= p_{\rm H}/2 = 1/4\hspace{0.3cm}&lt;br /&gt;
\Rightarrow \hspace{0.3cm} H_1 = 1/2 \cdot {\rm log}_2\hspace{0.1cm}2 + 2 \cdot 1/4 \cdot{\rm log}_2\hspace{0.1cm}4 = 1.5 \,{\rm bit/symbol}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*Let&#039;s now look at two-tuples.&amp;amp;nbsp; With the AMI code,&amp;amp;nbsp; $\rm P$&amp;amp;nbsp; cannot directly follow&amp;amp;nbsp; $\rm P$,&amp;amp;nbsp; nor&amp;amp;nbsp; $\rm M$&amp;amp;nbsp; follow&amp;amp;nbsp; $\rm M$.&amp;amp;nbsp; The probability for&amp;amp;nbsp; $\rm NN$&amp;amp;nbsp; is equal to&amp;amp;nbsp; $p_{\rm L} \cdot p_{\rm L} = 1/4$.&amp;amp;nbsp; All other (six) two-tuples occur with the probability&amp;amp;nbsp; $1/8$.&amp;amp;nbsp; From this follows for the second entropy approximation:&lt;br /&gt;
:$$H_2 = 1/2 \cdot \big [ 1/4 \cdot {\rm log_2}\hspace{0.1cm}4 + 6 \cdot 1/8 \cdot {\rm log_2}\hspace{0.1cm}8 \big ] = 1.375 \,{\rm bit/symbol} \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*For the further entropy approximations&amp;amp;nbsp; $H_3$,&amp;amp;nbsp; $H_4$, ...&amp;amp;nbsp; and the actual entropy&amp;amp;nbsp; $H$,&amp;amp;nbsp; the following applies:&lt;br /&gt;
:$$ H &amp;lt; \hspace{0.05cm}\text{...}\hspace{0.05cm} &amp;lt; H_5 &amp;lt; H_4 &amp;lt; H_3 &amp;lt; H_2 = 1.375 \,{\rm bit/symbol} \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*Exceptionally, in this example we know the actual entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; of the code symbol sequence&amp;amp;nbsp; $〈 c_ν 〉$: &amp;amp;nbsp; Since the coder neither adds nor loses information, it has the same entropy&amp;amp;nbsp; $H = 1 \,{\rm bit/symbol} $&amp;amp;nbsp; as the redundancy-free binary sequence $〈 q_ν 〉$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; [[Aufgaben:1.4_Entropienäherungen_für_den_AMI-Code|Exercise 1.4]]&amp;amp;nbsp; shows the considerable effort required to calculate the entropy approximation&amp;amp;nbsp; $H_3$. &amp;amp;nbsp; Moreover,&amp;amp;nbsp; $H_3$&amp;amp;nbsp; still deviates significantly from the final value&amp;amp;nbsp; $H = 1 \,{\rm bit/symbol} $.&amp;amp;nbsp; A faster result is achieved if the AMI code is described by a Markov chain, as explained in the next section.&lt;br /&gt;
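The approximations $H_1 = 1.5$ and $H_2 = 1.375 \ \rm bit/symbol$ can also be reproduced by simulation. The sketch below is an illustration only; `ami_encode` and `h_k` are hypothetical helper names, and the use of a seeded random source of length $10^5$ is an assumption for the time-averaging demonstration.

```python
import random
from collections import Counter
from math import log2

def ami_encode(bits):
    """AMI rule: L is coded as N; H is coded alternately as P and M."""
    out, last = [], "M"              # with last = "M", the first H becomes P
    for q in bits:
        if q == "L":
            out.append("N")
        else:
            last = "P" if last == "M" else "M"
            out.append(last)
    return "".join(out)

def h_k(seq, k):
    """k-th entropy approximation by time averaging (unit: bit/symbol)."""
    tuples = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    n = len(tuples)
    return sum(c / n * log2(n / c) for c in Counter(tuples).values()) / k

random.seed(1)
source = "".join(random.choice("LH") for _ in range(100_000))
code = ami_encode(source)
print(round(h_k(code, 1), 2))  # close to H1 = 1.5
print(round(h_k(code, 2), 2))  # close to H2 = 1.375
```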
&lt;br /&gt;
&lt;br /&gt;
==Binary sources with Markov properties ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Inf_T_1_2_S5_vers2.png|right|frame|Markov processes with&amp;amp;nbsp; $M = 2$&amp;amp;nbsp; states]]&lt;br /&gt;
&lt;br /&gt;
Sequences with statistical bonds between the sequence elements (symbols) are often modeled by&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Markov_Chains|Markov processes]]&amp;amp;nbsp; where we limit ourselves here to first-order Markov processes.&amp;amp;nbsp; First we consider a binary Markov process&amp;amp;nbsp; $(M = 2)$&amp;amp;nbsp; with the states (symbols)&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$.&lt;br /&gt;
&lt;br /&gt;
On the right, you can see the transition diagram for a first-order binary Markov process.&amp;amp;nbsp; Only two of the four transition probabilities given there are freely selectable, for example&lt;br /&gt;
* $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = \rm Pr(A\hspace{0.01cm}|\hspace{0.01cm}B)$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; conditional probability that&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; follows&amp;amp;nbsp; $\rm B$.&lt;br /&gt;
* $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}} = \rm Pr(B\hspace{0.01cm}|\hspace{0.01cm}A)$   &amp;amp;nbsp; ⇒ &amp;amp;nbsp; conditional probability that&amp;amp;nbsp; $\rm B$&amp;amp;nbsp; follows&amp;amp;nbsp; $\rm A$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The other two transition probabilities then follow as&amp;amp;nbsp; $p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} = 1- p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}$ &amp;amp;nbsp;and &amp;amp;nbsp; $p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} = 1- p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}&lt;br /&gt;
 \hspace{0.05cm}.$&lt;br /&gt;
&lt;br /&gt;
Due to the presupposed properties of&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Auto Correlation Function (ACF)#Stationary_Random Processes|stationarity]]&amp;amp;nbsp; and&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Auto Correlation Function (ACF)#Ergodic_Random Processes|ergodicity]],&amp;amp;nbsp; the following applies to the state or symbol probabilities:&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = {\rm Pr}({\rm A}) = \frac{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}}{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}&lt;br /&gt;
 \hspace{0.05cm}, \hspace{0.5cm}p_{\rm B} = {\rm Pr}({\rm B}) = \frac{p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
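These two formulas can be evaluated directly. The snippet below is an illustration only; the numerical values $p_{\text{A|B}} = 0.2$ and $p_{\text{B|A}} = 0.4$ are assumed, not taken from the text.

```python
def stationary_probs(p_a_given_b, p_b_given_a):
    """Ergodic state probabilities of a first-order binary Markov chain."""
    s = p_a_given_b + p_b_given_a
    return p_a_given_b / s, p_b_given_a / s

p_ab, p_ba = 0.2, 0.4                  # assumed example values
p_a, p_b = stationary_probs(p_ab, p_ba)
print(round(p_a, 4), round(p_b, 4))    # 0.3333 0.6667

# Stationarity check: p_A = p_A * p_A|A + p_B * p_A|B must be reproduced
print(round(p_a * (1 - p_ba) + p_b * p_ab, 4))  # 0.3333
```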
&lt;br /&gt;
These equations allow first information theoretical statements about the Markov processes:&lt;br /&gt;
* For&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$&amp;amp;nbsp; the symbols are equally likely &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $p_{\text{A}} = p_{\text{B}}= 0.5$.&amp;amp;nbsp; The first entropy approximation returns&amp;amp;nbsp; $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$, independent of the actual values of the (conditional) transition probabilities&amp;amp;nbsp; $p_{\text{A|B}}$&amp;amp;nbsp; &amp;amp;nbsp;or &amp;amp;nbsp; $p_{\text{B|A}}$.&lt;br /&gt;
*The source entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; as the limit value&amp;amp;nbsp; $($for&amp;amp;nbsp; $k \to \infty)$&amp;amp;nbsp; of the entropy approximation of&amp;amp;nbsp; $k$-th order &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; $H_k$,&amp;amp;nbsp; however, depends very much on the actual values of&amp;amp;nbsp; $p_{\text{A|B}}$ &amp;amp;nbsp;and&amp;amp;nbsp; $p_{\text{B|A}}$&amp;amp;nbsp; and not only on their quotient.&amp;amp;nbsp; This is shown by the following example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 6:}$&amp;amp;nbsp;&lt;br /&gt;
We consider three binary symmetric Markov sources that are characterized by the numerical values of the symmetric transition probabilities&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} }$.&amp;amp;nbsp; For the symbol probabilities,&amp;amp;nbsp; $p_{\rm A} = p_{\rm B}= 0.5$&amp;amp;nbsp; applies in each case, and the other transition probabilities have the values&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2242__Inf_T_1_2_S5b_neu.png|right|frame|Three examples of binary Markov sources]] &lt;br /&gt;
:$$p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 1 - p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } =&lt;br /&gt;
p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}B} }.$$&lt;br /&gt;
&lt;br /&gt;
*The middle (blue) symbol sequence with&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.5$&amp;amp;nbsp; has the entropy&amp;amp;nbsp; $H ≈ 1 \hspace{0.1cm}  \rm bit/symbol$.&amp;amp;nbsp; That means: &amp;amp;nbsp; In this special case there are no statistical bonds within the sequence.&lt;br /&gt;
&lt;br /&gt;
*The left (red) sequence with&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.2$&amp;amp;nbsp; has fewer changes between&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$.&amp;amp;nbsp; Due to the statistical dependencies between neighboring symbols, the entropy is now smaller:&amp;amp;nbsp; $H ≈ 0.72 \hspace{0.1cm}  \rm bit/symbol$.&lt;br /&gt;
&lt;br /&gt;
*The right (green) symbol sequence with&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.8$&amp;amp;nbsp; has the exact same entropy&amp;amp;nbsp; $H ≈ 0.72 \hspace{0.1cm}  \rm bit/symbol$&amp;amp;nbsp; as the red sequence.&amp;amp;nbsp; Here you can see many areas with alternating symbols&amp;amp;nbsp; $($... $\rm ABABAB$ ... $)$.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This example is worth noting:&lt;br /&gt;
*If you had not exploited the Markov properties of the red and green sequences, you would have arrived at the respective result&amp;amp;nbsp; $H ≈ 0.72 \hspace{0.1cm}  \rm bit/symbol$&amp;amp;nbsp; only after lengthy calculations.&lt;br /&gt;
*The following pages show that for a source with Markov properties the final value&amp;amp;nbsp; $H$&amp;amp;nbsp; can be determined from the entropy approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; alone. &amp;amp;nbsp; Likewise, all further entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k$-tuples can then be calculated in a simple manner &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $H_3$,&amp;amp;nbsp; $H_4$,&amp;amp;nbsp; $H_5$, ... &amp;amp;nbsp; $H_{100}$, ...&lt;br /&gt;
	&lt;br /&gt;
 &lt;br /&gt;
== Simplified entropy calculation for Markov sources ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Inf_T_1_2_S5_vers2.png|right|frame|Markov processes with&amp;amp;nbsp; $M = 2$&amp;amp;nbsp; states]]&lt;br /&gt;
We continue to assume the first-order symmetric binary Markov source.&amp;amp;nbsp; As on the previous page, we use the following nomenclature for&lt;br /&gt;
*the transition probabilities&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}}$, &amp;amp;nbsp; $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$,&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}A}}= 1- p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$, &amp;amp;nbsp; $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}B}} = 1 - p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}}$, &amp;amp;nbsp; &lt;br /&gt;
*the ergodic probabilities&amp;amp;nbsp; $p_{\text{A}}$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\text{B}}$,&lt;br /&gt;
*the compound probabilities, for example&amp;amp;nbsp; $p_{\text{AB}} = p_{\text{A}} \cdot p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We now compute the&amp;amp;nbsp; [[Information_Theory/Sources with Memory#Entropy_in_Two_Tuple|Entropy of a two-tuple]]&amp;amp;nbsp; (with the unit &amp;quot;bit/two-tuple&amp;quot;):&lt;br /&gt;
 &lt;br /&gt;
:$$H_2\hspace{0.05cm}&#039; = p_{\rm A}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm A}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm B}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}} + p_{\rm B}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B}}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
If one now replaces the logarithms of the products by corresponding sums of logarithms, one gets the result&amp;amp;nbsp; $H_2\hspace{0.05cm}&#039; = H_1 + H_{\text{M}}$&amp;amp;nbsp; with  &lt;br /&gt;
:$$H_1 = p_{\rm A}  \cdot (p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A})\cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm A}} + p_{\rm B}  \cdot (p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B})\cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm B}} = p_{\rm A}  \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm A}} + p_{\rm B}  \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm B}} = H_{\rm bin} (p_{\rm A})= H_{\rm bin} (p_{\rm B})&lt;br /&gt;
 \hspace{0.05cm},$$&lt;br /&gt;
:$$H_{\rm M}= p_{\rm A}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm A}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm B}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}} + p_{\rm B}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B}}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
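The decomposition $H_2' = H_1 + H_{\text{M}}$ can be confirmed numerically. In this sketch (illustration only; the transition probabilities $0.4$ and $0.2$ are assumed values), $H_2'$ is computed once directly over the four two-tuples and once via the decomposition.

```python
from math import log2, isclose

p_ba, p_ab = 0.4, 0.2                       # assumed p_B|A and p_A|B
p_a = p_ab / (p_ab + p_ba)                  # ergodic probabilities
p_b = p_ba / (p_ab + p_ba)
state = {"A": p_a, "B": p_b}
trans = {("A", "A"): 1 - p_ba, ("A", "B"): p_ba,
         ("B", "A"): p_ab, ("B", "B"): 1 - p_ab}

# Direct computation of H2' over the four two-tuples ...
h2p = sum(state[x] * t * log2(1 / (state[x] * t)) for (x, y), t in trans.items())
# ... and via the decomposition H2' = H1 + H_M:
h1 = sum(p * log2(1 / p) for p in state.values())
hm = sum(state[x] * t * log2(1 / t) for (x, y), t in trans.items())
print(isclose(h2p, h1 + hm))  # True
```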
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; Thus, the following holds for the&amp;amp;nbsp; &#039;&#039;&#039;second entropy approximation&#039;&#039;&#039;&amp;amp;nbsp; (with the unit &amp;quot;bit/symbol&amp;quot;):&lt;br /&gt;
:$$H_2 = {1}/{2} \cdot {H_2\hspace{0.05cm}&#039;} = {1}/{2} \cdot \big [ H_{\rm 1} + H_{\rm M} \big] &lt;br /&gt;
 \hspace{0.05cm}.$$}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is to be noted:&lt;br /&gt;
*The first summand&amp;amp;nbsp; $H_1$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; first entropy approximation depends only on the symbol probabilities.&lt;br /&gt;
*For a symmetric Markov process &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}} $ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $p_{\text{A}} = p_{\text{B}} = 1/2$ &amp;amp;nbsp; the first summand is&amp;amp;nbsp; $H_1 = 1 \hspace{0.1cm} \rm bit/symbol$.&lt;br /&gt;
*The second summand&amp;amp;nbsp; $H_{\text{M}}$&amp;amp;nbsp; must be calculated according to the second of the two upper equations. &lt;br /&gt;
*For a symmetric Markov process you get&amp;amp;nbsp; $H_{\text{M}} = H_{\text{bin}}(p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}})$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now, this result is extended to the&amp;amp;nbsp; $k$-th entropy approximation.&amp;amp;nbsp; This exploits an advantage of Markov sources over other sources, namely that the entropy calculation for&amp;amp;nbsp; $k$-tuples is very simple.&amp;amp;nbsp; For every Markov source, the following applies:&lt;br /&gt;
 &lt;br /&gt;
:$$H_k = {1}/{k} \cdot \big [ H_{\rm 1} + (k-1) \cdot H_{\rm M}\big ] \hspace{0.3cm} \Rightarrow \hspace{0.3cm}&lt;br /&gt;
 H_2 = {1}/{2} \cdot \big [ H_{\rm 1} + H_{\rm M} \big ]\hspace{0.05cm}, \hspace{0.3cm}&lt;br /&gt;
 H_3 ={1}/{3} \cdot \big [ H_{\rm 1} + 2 \cdot H_{\rm M}\big ] \hspace{0.05cm},\hspace{0.3cm}&lt;br /&gt;
 H_4 = {1}/{4} \cdot \big [ H_{\rm 1} + 3 \cdot H_{\rm M}\big ] &lt;br /&gt;
 \hspace{0.05cm},\hspace{0.15cm}{\rm etc.}$$&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; In the limit&amp;amp;nbsp; $k \to \infty$, one obtains the actual source entropy&lt;br /&gt;
:$$H = \lim_{k \rightarrow \infty} H_k = H_{\rm M} \hspace{0.05cm}.$$&lt;br /&gt;
From this simple result important insights for the entropy calculation follow:&lt;br /&gt;
*For Markov sources it is sufficient to determine the entropy approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$.&amp;amp;nbsp; Thus, the entropy of a Markov source is &lt;br /&gt;
:$$H = 2 \cdot H_2 - H_{\rm 1}  \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*Through&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; all further entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; with&amp;amp;nbsp; $k \ge 3$&amp;amp;nbsp; are also fixed:&lt;br /&gt;
 &lt;br /&gt;
:$$H_k = \frac{2-k}{k} \cdot H_{\rm 1} + \frac{2\cdot (k-1)}{k} \cdot H_{\rm 2}&lt;br /&gt;
 \hspace{0.05cm}.$$}}&lt;br /&gt;
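The two forms of $H_k$ given above can be checked numerically. A small sketch, with `H1` and `H2` as arbitrary example inputs, verifying that $H_k$ expressed through $H_1$ and $H_2$ agrees with $1/k\cdot\big[H_1+(k-1)\cdot H_{\rm M}\big]$ and tends to $H = 2\cdot H_2 - H_1$:

```python
def Hk_from_H1_H2(k, H1, H2):
    """k-th entropy approximation of a Markov source from H1 and H2."""
    return (2 - k) / k * H1 + 2 * (k - 1) / k * H2

def Hk_from_H1_HM(k, H1, HM):
    """Equivalent form H_k = 1/k * [H1 + (k-1)*HM]."""
    return (H1 + (k - 1) * HM) / k

H1, H2 = 1.0, 0.9   # arbitrary example values
HM = 2 * H2 - H1    # source entropy H = H_M = 2*H2 - H1
for k in (2, 3, 4, 1000):
    assert abs(Hk_from_H1_H2(k, H1, H2) - Hk_from_H1_HM(k, H1, HM)) < 1e-12
# H_k approaches H = 0.8 for large k
```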
&lt;br /&gt;
&lt;br /&gt;
However, these approximations are of little importance; mostly only the limit value&amp;amp;nbsp; $H$&amp;amp;nbsp; is of interest.&amp;amp;nbsp; For sources without Markov properties the approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; are calculated only to be able to estimate this limit value, i.e. the actual entropy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Notes&#039;&#039;: &lt;br /&gt;
*In&amp;amp;nbsp; [[Aufgaben:1.5_Binäre_Markovquelle|Exercise 1.5]]&amp;amp;nbsp; the above equations are applied to the more general case of an asymmetrical binary source.&lt;br /&gt;
*All equations on this page also apply to non-binary Markov sources&amp;amp;nbsp; $(M &amp;gt; 2)$, as shown on the next page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Non-binary Markov sources == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:P_ID2243__Inf_T_1_2_S6_neu.png|right|frame|Ternary and Quaternary First Order Markov Source]]&lt;br /&gt;
&lt;br /&gt;
The following equations apply to each Markov source regardless of the symbol range:&lt;br /&gt;
 &lt;br /&gt;
:$$H = 2 \cdot H_2 - H_{\rm 1}  \hspace{0.05cm},$$&lt;br /&gt;
:$$H_k = {1}/{k} \cdot \big [ H_{\rm 1} + (k-1) \cdot H_{\rm M}\big ] \hspace{0.05cm},$$&lt;br /&gt;
:$$ \lim_{k \rightarrow \infty} H_k = H &lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
These equations allow the simple calculation of the entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; from the approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$.&lt;br /&gt;
&lt;br /&gt;
We now look at the transition diagrams sketched on the right for&lt;br /&gt;
*a ternary Markov source&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; $(M = 3$,&amp;amp;nbsp; blue coloring$)$ and &lt;br /&gt;
*a quaternary Markov source&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; $(M = 4$,&amp;amp;nbsp; red coloring$)$. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
In&amp;amp;nbsp; [[Aufgaben:1.6_Nichtbinäre_Markovquellen|Exercise 1.6]]&amp;amp;nbsp; the entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; and the source entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; are calculated as the limit of&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k \to \infty$. &amp;amp;nbsp; The results are shown in the following figure.&amp;amp;nbsp; All entropies specified there have the unit &amp;quot;bit/symbol&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2244__Inf_T_1_2_S6b_neu.png|center|frame|Entropies for&amp;amp;nbsp; $\rm MQ3$,&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; and the&amp;amp;nbsp; $\rm AMI-Code$]]&lt;br /&gt;
&lt;br /&gt;
These results can be interpreted as follows:&lt;br /&gt;
*For the ternary Markov source&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; the entropy approximations decrease continuously from&amp;amp;nbsp; $H_1 = 1.500$&amp;amp;nbsp; via&amp;amp;nbsp; $H_2 = 1.375$&amp;amp;nbsp; down to the limit&amp;amp;nbsp; $H = 1.250$.&amp;amp;nbsp; Because of&amp;amp;nbsp; $M = 3$&amp;amp;nbsp; the decision content is&amp;amp;nbsp; $H_0 = 1.585$.&lt;br /&gt;
*For the quaternary Markov source&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; one obtains&amp;amp;nbsp; $H_0 = H_1 = 2.000$&amp;amp;nbsp; (since there are four equally probable states) and&amp;amp;nbsp; $H_2 = 1.5$. &amp;amp;nbsp; From the&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; values all entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; and the final value&amp;amp;nbsp; $H = 1.000$&amp;amp;nbsp; can be calculated.&lt;br /&gt;
*The two models&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; were created in the attempt to describe the&amp;amp;nbsp; [[Information_Theory/Sources_with_Memory#The_Entropy_of_AMI.E2.80.93Codes|AMI code]]&amp;amp;nbsp; information-theoretically by Markov sources.&amp;amp;nbsp; The symbols&amp;amp;nbsp; $\rm M$,&amp;amp;nbsp; $\rm N$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm P$&amp;amp;nbsp; stand for &amp;quot;minus&amp;quot;, &amp;quot;zero&amp;quot; and &amp;quot;plus&amp;quot;.&lt;br /&gt;
*The entropy approximations&amp;amp;nbsp; $H_1$,&amp;amp;nbsp; $H_2$&amp;amp;nbsp; and&amp;amp;nbsp; $H_3$&amp;amp;nbsp; of the AMI code (green markers) were calculated in&amp;amp;nbsp; [[Aufgaben:Aufgabe 1.4: Entropienäherungen für den AMI-Code|Exercise 1.4]];&amp;amp;nbsp; the calculation of&amp;amp;nbsp; $H_4$,&amp;amp;nbsp; $H_5$, ... had to be omitted for reasons of effort.&amp;amp;nbsp; But the final value of&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k \to \infty$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $H = 1.000$&amp;amp;nbsp; is known.&lt;br /&gt;
*One can see that the Markov model&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; yields exactly the same numerical values&amp;amp;nbsp; $H_0 = 1.585$,&amp;amp;nbsp; $H_1 = 1.500$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2 = 1.375$&amp;amp;nbsp; as the AMI code. &amp;amp;nbsp; On the other hand,&amp;amp;nbsp; $H_3$&amp;amp;nbsp; differs&amp;amp;nbsp; $(1.333$&amp;amp;nbsp; instead of&amp;amp;nbsp; $1.292)$&amp;amp;nbsp; and especially the final value&amp;amp;nbsp; $H$&amp;amp;nbsp; $(1.250$&amp;amp;nbsp; compared to&amp;amp;nbsp; $1.000)$.&lt;br /&gt;
*The model&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; differs from the AMI code&amp;amp;nbsp; $(M = 3)$&amp;amp;nbsp; with respect to the decision content&amp;amp;nbsp; $H_0$&amp;amp;nbsp; and also with respect to all entropy approximations&amp;amp;nbsp; $H_k$. &amp;amp;nbsp; Nevertheless,&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; is the more suitable model for the AMI code, since the final value&amp;amp;nbsp; $H = 1.000$&amp;amp;nbsp; is the same.&lt;br /&gt;
*The model&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; yields entropy values that are too large, since the sequences&amp;amp;nbsp; $\rm PNP$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm MNM$&amp;amp;nbsp; are possible here, which cannot occur in the AMI code. &amp;amp;nbsp; The difference is already slightly noticeable in the approximation&amp;amp;nbsp; $H_3$, and clearly in the final value&amp;amp;nbsp; $H$&amp;amp;nbsp; $(1.25$&amp;amp;nbsp; instead of&amp;amp;nbsp; $1)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the model&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; the state&amp;amp;nbsp; &amp;quot;zero&amp;quot;&amp;amp;nbsp; was split into two states&amp;amp;nbsp; $\rm N$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm O$&amp;amp;nbsp; (see upper right figure on this page):&lt;br /&gt;
*For the state&amp;amp;nbsp; $\rm N$&amp;amp;nbsp; the following applies: &amp;amp;nbsp; The current binary symbol&amp;amp;nbsp; $\rm L$&amp;amp;nbsp; is represented by the amplitude value&amp;amp;nbsp; $0$&amp;amp;nbsp; (zero), as per the AMI rule.&amp;amp;nbsp; The next occurring&amp;amp;nbsp; $\rm H$ symbol, on the other hand, is represented as&amp;amp;nbsp; $-1$&amp;amp;nbsp; (minus), because the last&amp;amp;nbsp; $\rm H$ symbol was encoded as&amp;amp;nbsp; $+1$&amp;amp;nbsp; (plus).&lt;br /&gt;
*In the state&amp;amp;nbsp; $\rm O$&amp;amp;nbsp; the current binary symbol&amp;amp;nbsp; $\rm L$&amp;amp;nbsp; is likewise represented by the ternary value&amp;amp;nbsp; $0$. &amp;amp;nbsp; In contrast to the state&amp;amp;nbsp; $\rm N$, however, the next occurring&amp;amp;nbsp; $\rm H$ symbol is now represented as&amp;amp;nbsp; $+1$&amp;amp;nbsp; (plus), since the last&amp;amp;nbsp; $\rm H$ symbol was encoded as&amp;amp;nbsp; $-1$&amp;amp;nbsp; (minus).&lt;br /&gt;
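The AMI rule described above (binary $\rm L$ → ternary $0$; binary $\rm H$ → alternately $+1$ and $-1$) can be sketched as a small encoder; the function name and the representation of the symbols as Python characters and integers are our own choices:

```python
def ami_encode(binary_symbols):
    """AMI code: 'L' -> 0 (zero); 'H' -> alternately +1 (plus) and -1 (minus).

    Consecutive H symbols get alternating polarity, so sequences such as
    +1, 0, +1 (PNP) or -1, 0, -1 (MNM) can never occur in the output.
    """
    out, last_h = [], -1        # polarity of the most recent H symbol
    for s in binary_symbols:
        if s == 'L':
            out.append(0)
        else:                   # 'H': invert the polarity of the previous H
            last_h = -last_h
            out.append(last_h)
    return out

print(ami_encode('HLHHLH'))     # [1, 0, -1, 1, 0, -1]
```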
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &lt;br /&gt;
*The&amp;amp;nbsp; $\rm MQ4$&amp;amp;ndash;output sequence actually follows the rules of the AMI code and exhibits the entropy&amp;amp;nbsp; $H = 1.000 \hspace{0.15cm} \rm bit/symbol$. &lt;br /&gt;
*Because of the new state&amp;amp;nbsp; $\rm O$, however,&amp;amp;nbsp; $H_0 = 2.000 \hspace{0.15cm} \rm bit/symbol$&amp;amp;nbsp; $($compared to&amp;amp;nbsp; $1.585 \hspace{0.15cm} \rm bit/symbol)$&amp;amp;nbsp; is now clearly too large. &lt;br /&gt;
*Also all&amp;amp;nbsp; $H_k$ approximations are larger than for the AMI code. &lt;br /&gt;
*Only for &amp;amp;nbsp;$k \to \infty$&amp;amp;nbsp; do the model&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; and the AMI code match exactly: &amp;amp;nbsp; $H = 1.000 \hspace{0.15cm} \rm bit/symbol$.}}&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
== Exercises for the chapter ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:1.3 Entropienäherungen|Exercise 1.3: Entropienäherungen]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.4 Entropienäherungen für den AMI-Code|Exercise 1.4: Entropienäherungen für den AMI-Code]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.4Z Entropie der AMI-Codierung|Exercise 1.4Z: Entropie der AMI-Codierung]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.5 Binäre Markovquelle|Exercise 1.5: Binäre Markovquelle]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.5Z Symmetrische Markovquelle|Exercise 1.5Z: Symmetrische Markovquelle]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.6 Nichtbinäre Markovquellen|Exercise 1.6: Nichtbinäre Markovquellen]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.6Z Ternäre Markovquelle|Exercise 1.6Z: Ternäre Markovquelle]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Natural_Discrete_Sources&amp;diff=35086</id>
		<title>Information Theory/Natural Discrete Sources</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Natural_Discrete_Sources&amp;diff=35086"/>
		<updated>2020-11-02T13:37:57Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Entropie wertdiskreter Nachrichtenquellen&lt;br /&gt;
|Vorherige Seite=Nachrichtenquellen mit Gedächtnis&lt;br /&gt;
|Nächste Seite=Allgemeine Beschreibung&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Difficulties with the determination of entropy ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Up to now, we have been dealing exclusively with artificially generated symbol sequences.&amp;amp;nbsp; Now we consider written texts.&amp;amp;nbsp; Such a text can be seen as a natural discrete-value message source, which of course can also be analyzed information-theoretically by determining its entropy.&lt;br /&gt;
&lt;br /&gt;
Even today (2011), natural texts are still often represented with the 8 bit character set according to ANSI (&#039;&#039;American National Standards Institute&#039;&#039;), although there are several &amp;quot;more modern&amp;quot; encodings. &lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; $M = 2^8 = 256$&amp;amp;nbsp; ANSI characters are used as follows:&lt;br /&gt;
* &#039;&#039;&#039;No.&amp;amp;nbsp; 0 &amp;amp;nbsp; to &amp;amp;nbsp; 31&#039;&#039;&#039;: &amp;amp;nbsp; control commands that cannot be printed or displayed,&lt;br /&gt;
* &#039;&#039;&#039;No.&amp;amp;nbsp; 32 &amp;amp;nbsp; to &amp;amp;nbsp;127&#039;&#039;&#039;: &amp;amp;nbsp; identical to the characters of the 7 bit ASCII code,&lt;br /&gt;
* &#039;&#039;&#039;No.&amp;amp;nbsp; 128 &amp;amp;nbsp; to 159&#039;&#039;&#039;: &amp;amp;nbsp; additional control characters or alphanumeric characters for Windows,&lt;br /&gt;
* &#039;&#039;&#039;No.&amp;amp;nbsp; 160 &amp;amp;nbsp; to &amp;amp;nbsp; 255&#039;&#039;&#039;: &amp;amp;nbsp; identical to the Unicode charts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Theoretically, one could also define the entropy here as the limit of the entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k \to \infty$,&amp;amp;nbsp; according to the procedure from the&amp;amp;nbsp; [[Information_Theory/Sources_with_Memory#Generalization to k -tuple and boundary crossing|last chapter]].&amp;amp;nbsp; In practice, however, insurmountable numerical limitations arise here as well:&lt;br /&gt;
&lt;br /&gt;
*Already for the entropy approximation&amp;amp;nbsp; $H_2$&amp;amp;nbsp; there are&amp;amp;nbsp; $M^2 = 256^2 = 65\hspace{0.1cm}536$&amp;amp;nbsp; possible two-tuples.&amp;amp;nbsp; Thus, the calculation requires the same number of memory locations (in bytes). &amp;amp;nbsp; If one assumes that on average&amp;amp;nbsp; $100$&amp;amp;nbsp; occurrences per tuple are needed for sufficiently reliable statistics, the length of the source symbol sequence should already be&amp;amp;nbsp; $N &amp;gt; 6.5 · 10^6$.&lt;br /&gt;
*The number of possible three-tuples is&amp;amp;nbsp; $M^3 &amp;gt; 16 · 10^6$&amp;amp;nbsp; and thus the required source symbol length is already&amp;amp;nbsp; $N &amp;gt; 1.6 · 10^9$.&amp;amp;nbsp; With&amp;amp;nbsp; $42$&amp;amp;nbsp; lines per page and&amp;amp;nbsp; $80$&amp;amp;nbsp; characters per line this corresponds to a book with about&amp;amp;nbsp; $500\hspace{0.1cm}000$&amp;amp;nbsp; pages.&lt;br /&gt;
*For a natural text the statistical dependencies extend much further than over two or three characters.&amp;amp;nbsp; Küpfmüller gives a value of&amp;amp;nbsp; $100$&amp;amp;nbsp; for the German language.&amp;amp;nbsp; To determine the 100th entropy approximation one needs&amp;amp;nbsp; $2^{800} ≈ 10^{240}$&amp;amp;nbsp; frequencies and, for reliable statistics,&amp;amp;nbsp; $100$&amp;amp;nbsp; times more characters.&lt;br /&gt;
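The tuple counting described in these bullet points is feasible for small $k$ and small texts; the sketch below (the sample string is our own toy example) estimates $H_k$ from the relative $k$-tuple frequencies exactly as the argument above assumes:

```python
from collections import Counter
from math import log2

def entropy_approx(text, k):
    """Estimate the k-th entropy approximation H_k (bit/character)
    from the relative frequencies of all k-tuples in a sample text."""
    tuples = [text[i:i + k] for i in range(len(text) - k + 1)]
    counts = Counter(tuples)
    n = len(tuples)
    # H_k' = sum p * log2(1/p) over all observed k-tuples;  H_k = H_k'/k
    return sum(c / n * log2(n / c) for c in counts.values()) / k

sample = "abababababababab"     # toy text with strong memory
H1 = entropy_approx(sample, 1)  # 1.0: 'a' and 'b' equally frequent
H2 = entropy_approx(sample, 2)  # < 1: the statistical ties reduce H_2
```

For the $M = 256$, $k = 100$ case in the text, the `Counter` would need up to $256^{100}$ entries, which is exactly the infeasibility the bullet points describe.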
&lt;br /&gt;
&lt;br /&gt;
A justified question is therefore: &amp;amp;nbsp; How did&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Karl_K%C3%BCpfm%C3%BCller Karl Küpfmüller]&amp;amp;nbsp; determine the entropy of the German language in 1954?&amp;amp;nbsp; How did&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Claude_Shannon Claude Elwood Shannon]&amp;amp;nbsp; do the same for the English language, even before Küpfmüller?&amp;amp;nbsp; One thing is revealed beforehand: &amp;amp;nbsp; Not with the approach described above.	 	 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Entropy estimation according to Küpfmüller ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Karl Küpfmüller investigated the entropy of German texts in&amp;amp;nbsp; [Küpf54]&amp;lt;ref name =&#039;Küpf54&#039;&amp;gt;Küpfmüller, K.: &#039;&#039;Die Entropie der deutschen Sprache&#039;&#039;. Fernmeldetechnische Zeitung 7, 1954, S. 265-272.&amp;lt;/ref&amp;gt;.&amp;amp;nbsp; There, the following assumptions are made:&lt;br /&gt;
*an alphabet with&amp;amp;nbsp; $26$&amp;amp;nbsp; letters&amp;amp;nbsp; (no umlauts or punctuation marks),&lt;br /&gt;
*no consideration of the space character,&lt;br /&gt;
*no distinction between upper and lower case.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The decision content is therefore&amp;amp;nbsp; $H_0 = \log_2 (26) ≈ 4.7\ \rm bit/letter$. &lt;br /&gt;
&lt;br /&gt;
Küpfmüller&#039;s estimate is based on the following considerations:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(1)&#039;&#039;&#039;&amp;amp;nbsp; The&amp;amp;nbsp; &#039;&#039;&#039;first entropy approximation&#039;&#039;&#039;&amp;amp;nbsp; results from the letter frequencies in German texts.&amp;amp;nbsp; According to a study from 1939, &amp;quot;e&amp;quot; is the most frequent letter with a frequency of&amp;amp;nbsp; $16.7\%$;&amp;amp;nbsp; the rarest is &amp;quot;x&amp;quot; with&amp;amp;nbsp; $0.02\%$.&amp;amp;nbsp; Averaged over all letters we obtain&amp;amp;nbsp; $H_1 \approx 4.1\,\, {\rm bit/letter}\hspace{0.05 cm}.$&lt;br /&gt;
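Step (1) corresponds to evaluating $H_1 = \sum p_i \cdot \log_2(1/p_i)$ over the letter probabilities. A sketch with a hypothetical three-letter toy alphabet; the values $16.7\%$ ("e") and $0.02\%$ ("x") from the text belong to the full 26-letter table, which is not reproduced here:

```python
from math import log2

def first_entropy_approx(probs):
    """H1 = sum over all letters of p * log2(1/p), in bit/letter."""
    return sum(p * log2(1 / p) for p in probs.values() if p > 0)

# Hypothetical toy alphabet with probabilities summing to 1
toy = {'a': 0.5, 'b': 0.25, 'c': 0.25}
print(first_entropy_approx(toy))   # 1.5 bit/letter for this toy alphabet
```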
&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;(2)&#039;&#039;&#039;&amp;amp;nbsp; Regarding the&amp;amp;nbsp; &#039;&#039;&#039;syllable frequency&#039;&#039;&#039;,&amp;amp;nbsp; Küpfmüller evaluates the &amp;quot;Häufigkeitswörterbuch der deutschen Sprache&amp;quot; (Frequency Dictionary of the German Language), published by&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Friedrich_Wilhelm_Kaeding Friedrich Wilhelm Kaeding]&amp;amp;nbsp; in 1898.&amp;amp;nbsp; He distinguishes between root syllables, prefixes and ending syllables and thus arrives at the average information content of all syllables:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm syllable} = \hspace{-0.1cm} H_{\rm stem} + H_{\rm front} + H_{\rm end} + H_{\rm rest} \approx &lt;br /&gt;
4.15 + 0.82+1.62 + 2.0 \approx 8.6\,\, {\rm bit/syllable}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
:The following proportions were taken into account:&lt;br /&gt;
:*According to the Kaeding study of 1898, the&amp;amp;nbsp; $400$&amp;amp;nbsp; most common root syllables&amp;amp;nbsp; (beginning with &amp;quot;de&amp;quot;)&amp;amp;nbsp; represent&amp;amp;nbsp; $47\%$&amp;amp;nbsp; of a German text and contribute to the entropy with&amp;amp;nbsp; $H_{\text{stem}} ≈ 4.15 \ \rm bit/syllable$.&lt;br /&gt;
:*The contribution of the&amp;amp;nbsp; $242$&amp;amp;nbsp; most common prefixes - in first place &amp;quot;ge&amp;quot; with&amp;amp;nbsp; $9\%$ - is put by Küpfmüller at&amp;amp;nbsp; $H_{\text{front}} ≈ 0.82 \ \rm bit/syllable$.&lt;br /&gt;
:*The contribution of the&amp;amp;nbsp; $118$&amp;amp;nbsp; most used ending syllables is&amp;amp;nbsp; $H_{\text{end}} ≈ 1.62 \ \rm bit/syllable$.&amp;amp;nbsp; Most frequently, &amp;quot;en&amp;quot; appears at the end of words with&amp;amp;nbsp; $30\%$.&lt;br /&gt;
:*The remaining&amp;amp;nbsp; $14\%$&amp;amp;nbsp; are distributed over syllables not yet counted.&amp;amp;nbsp; Küpfmüller assumes that there are about&amp;amp;nbsp; $4000$&amp;amp;nbsp; of them and that they are equally probable.&amp;amp;nbsp; For this he assumes&amp;amp;nbsp; $H_{\text{rest}} ≈ 2 \ \rm bit/syllable$.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(3)&#039;&#039;&#039;&amp;amp;nbsp; As the average number of letters per syllable Küpfmüller determined the value&amp;amp;nbsp; $3.03$.&amp;amp;nbsp; From this he deduced the&amp;amp;nbsp; &#039;&#039;&#039;third entropy approximation&#039;&#039;&#039;&amp;amp;nbsp; regarding the letters: &lt;br /&gt;
:$$H_3 \approx {8.6}/{3.03}\approx 2.8\,\, {\rm bit/letter}\hspace{0.05 cm}.$$&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(4)&#039;&#039;&#039;&amp;amp;nbsp; Küpfmüller&#039;s estimate of the entropy approximation&amp;amp;nbsp; $H_3$&amp;amp;nbsp; was based mainly on the syllable frequencies according to&amp;amp;nbsp; &#039;&#039;&#039;(2)&#039;&#039;&#039;&amp;amp;nbsp; and the mean value of&amp;amp;nbsp; $3.03$&amp;amp;nbsp; letters per syllable.&amp;amp;nbsp; To obtain a further entropy approximation&amp;amp;nbsp; $H_k$&amp;amp;nbsp; with larger&amp;amp;nbsp; $k$,&amp;amp;nbsp; Küpfmüller additionally analyzed the words in German texts.&amp;amp;nbsp; He came to the following results:&lt;br /&gt;
&lt;br /&gt;
:*The&amp;amp;nbsp; $322$&amp;amp;nbsp; most common words provide an entropy contribution of&amp;amp;nbsp; $4.5 \ \rm bit/word$. &lt;br /&gt;
:*The contributions of the remaining&amp;amp;nbsp; $40\hspace{0.1cm}000$&amp;amp;nbsp; words were estimated, assuming that the frequencies of rare words are inversely proportional to their rank ([https://en.wikipedia.org/wiki/Zipf%27s_law Zipf&#039;s law]). &lt;br /&gt;
:*With these assumptions the average information content (related to words) is about&amp;amp;nbsp; $11 \ \rm bit/word$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(5)&#039;&#039;&#039;&amp;amp;nbsp; Counting the letters per word resulted in an average of&amp;amp;nbsp; $5.5$.&amp;amp;nbsp; Analogous to point&amp;amp;nbsp; &#039;&#039;&#039;(3)&#039;&#039;&#039;,&amp;amp;nbsp; the entropy approximation for&amp;amp;nbsp; $k = 5.5$&amp;amp;nbsp; was approximated.&amp;amp;nbsp; Küpfmüller gives the value&amp;amp;nbsp; $H_{5.5} \approx {11}/{5.5}\approx 2\,\, {\rm bit/letter}\hspace{0.05 cm}.$&amp;amp;nbsp; Of course,&amp;amp;nbsp; $k$&amp;amp;nbsp; can only assume integer values&amp;amp;nbsp; according to&amp;amp;nbsp; [[Information_Theory/Sources_with_Memory#Generalization to k-tuple and boundary crossing|its definition]].&amp;amp;nbsp; This equation is therefore to be interpreted in such a way that&amp;amp;nbsp; $H_5$&amp;amp;nbsp; will yield a somewhat larger and&amp;amp;nbsp; $H_6$&amp;amp;nbsp; a somewhat smaller value than&amp;amp;nbsp; $2 \ {\rm bit/letter}$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2303__Inf_T_1_3_S2.png|right|frame|Approximate values of the entropy of the German language according to Küpfmüller]]&lt;br /&gt;
&#039;&#039;&#039;(6)&#039;&#039;&#039;&amp;amp;nbsp; Now one can try to obtain the final entropy value for&amp;amp;nbsp; $k \to \infty$&amp;amp;nbsp; by extrapolation from these three points:&lt;br /&gt;
:*The continuous line, taken from Küpfmüller&#039;s original work&amp;amp;nbsp; [Küpf54]&amp;lt;ref name =&#039;Küpf54&#039;&amp;gt;Küpfmüller, K.: &#039;&#039;Die Entropie der deutschen Sprache&#039;&#039;. Fernmeldetechnische Zeitung 7, 1954, S. 265-272.&amp;lt;/ref&amp;gt;,&amp;amp;nbsp; leads to the final entropy value&amp;amp;nbsp; $H = 1.6 \ \rm bit/letter$. &lt;br /&gt;
:*The green curves are two extrapolation attempts (of a continuous function through three points) by the&amp;amp;nbsp; $\rm LNTwww$&amp;amp;nbsp; author.  &lt;br /&gt;
:*These and the brown arrows are actually only meant to show that such an extrapolation is&amp;amp;nbsp; (carefully worded)&amp;amp;nbsp; somewhat vague.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(7)&#039;&#039;&#039;&amp;amp;nbsp; Küpfmüller then tried to verify the final value&amp;amp;nbsp; $H = 1.6 \ \rm bit/letter$&amp;amp;nbsp; found with this first estimate using a completely different methodology - see next section.&amp;amp;nbsp; After this second estimation he revised his result slightly to&amp;amp;nbsp; $H = 1.51 \ \rm bit/letter$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(8)&#039;&#039;&#039;&amp;amp;nbsp; Three years earlier, following a completely different approach, Claude E. Shannon had given the entropy value&amp;amp;nbsp; $H ≈ 1 \ \rm bit/letter$&amp;amp;nbsp; for the English language, but taking into account the space character.&amp;amp;nbsp; In order to be able to compare his results with Shannon&#039;s, Küpfmüller subsequently included the space character in his result. &lt;br /&gt;
&lt;br /&gt;
:*The correction factor is the quotient of the average word length without considering the space&amp;amp;nbsp; $(5.5)$&amp;amp;nbsp; and the average word length with consideration of the space&amp;amp;nbsp; $(5.5+1 = 6.5)$. &lt;br /&gt;
:*This correction led to Küpfmüller&#039;s final result&amp;amp;nbsp; $H =1.51 \cdot {5.5}/{6.5}\approx 1.3\,\, {\rm bit/letter}\hspace{0.05 cm}.$&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==A further entropy estimation by Küpfmüller ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the sake of completeness, Küpfmüller&#039;s considerations are presented here, which led him to the final result&amp;amp;nbsp; $H = 1.51 \ \rm bit/letter$.&amp;amp;nbsp; Since there was no documentation on the statistics of word groups or whole sentences, he estimated the entropy value of the German language as follows:&lt;br /&gt;
*An arbitrary contiguous German text is covered up after a certain word.&amp;amp;nbsp; The preceding text is read, and the reader tries to determine the following word from the context of the preceding text.&lt;br /&gt;
*For a large number of such attempts, the percentage of hits gives a measure of the statistical ties between words and sentences.&amp;amp;nbsp; It can be seen that for one and the same type of text (novels, scientific writings, etc.) by one and the same author, a constant final value of this hit ratio is reached relatively quickly&amp;amp;nbsp; (after about one hundred to two hundred attempts).&lt;br /&gt;
*The hit ratio, however, depends quite strongly on the type of text.&amp;amp;nbsp; For different texts, values between&amp;amp;nbsp; $15\%$&amp;amp;nbsp; and&amp;amp;nbsp; $33\%$, with the mean value at&amp;amp;nbsp; $22\%$, are obtained.&amp;amp;nbsp; This also means: &amp;amp;nbsp; On average,&amp;amp;nbsp; $22\%$&amp;amp;nbsp; of the words in a German text can be determined from the context.&lt;br /&gt;
*Alternatively: &amp;amp;nbsp; The word count of a long text can be reduced by the factor&amp;amp;nbsp; $0.78$&amp;amp;nbsp; without a significant loss of the message content of the text.&amp;amp;nbsp; Starting from the reference value&amp;amp;nbsp; $H_{5.5} = 2 \ \rm bit/letter$&amp;amp;nbsp; $($see point&amp;amp;nbsp; &#039;&#039;&#039;(5)&#039;&#039;&#039;&amp;amp;nbsp; in the last section$)$&amp;amp;nbsp; for a word of medium length, this results in the entropy&amp;amp;nbsp; $H ≈ 0.78 · 2 = 1.56 \ \rm bit/letter$.&lt;br /&gt;
*Küpfmüller verified this value with a comparable empirical study regarding the syllables and thus determined the reduction factor&amp;amp;nbsp; $0.54$&amp;amp;nbsp; (regarding syllables).&amp;amp;nbsp; As the final result Küpfmüller obtained&amp;amp;nbsp; $H = 0.54 · H_3 ≈ 1.51 \ \rm bit/letter$, where&amp;amp;nbsp; $H_3 ≈ 2.8 \ \rm bit/letter$&amp;amp;nbsp; corresponds to the entropy of a syllable of medium length&amp;amp;nbsp; $($about three letters, see point&amp;amp;nbsp; &#039;&#039;&#039;(3)&#039;&#039;&#039;&amp;amp;nbsp; in the last section$)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The remarks on this and the previous page, which may be perceived as very critical, are not intended to diminish the importance of either Küpfmüller&#039;s entropy estimation or Shannon&#039;s contributions to the same topic. &lt;br /&gt;
*They are only meant to point out the great difficulties that arise in this task. &lt;br /&gt;
*This is perhaps also the reason why no one has dealt with this problem intensively since the 1950s.&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==Some own simulation results==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The information given by Karl Küpfmüller regarding the entropy of the German language shall now be compared with some (very simple) simulation results, which were developed by the author of this chapter (Günter Söder) at the Chair of Communications Engineering at the Technical University of Munich in the course of an internship.&amp;amp;nbsp; The results are based on&lt;br /&gt;
*the Windows program&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Programme/WDIT.zip WDIT] &amp;amp;nbsp;&amp;amp;rArr;&amp;amp;nbsp; the link refers to the ZIP version of the program; &lt;br /&gt;
*the associated practical training manual&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Texte/Wertdiskrete_Informationstheorie.pdf Wertdiskrete Informationstheorie (Value Discrete Information Theory)].  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link refers to the PDF version;&lt;br /&gt;
*the German Bible in ASCII format with&amp;amp;nbsp; $N \approx 4.37 \cdot 10^6$&amp;amp;nbsp; characters. This corresponds to a book with&amp;amp;nbsp; $1300$&amp;amp;nbsp; pages at&amp;amp;nbsp; $42$&amp;amp;nbsp; lines per page and&amp;amp;nbsp; $80$&amp;amp;nbsp; characters per line. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The symbol range has been reduced to&amp;amp;nbsp; $M = 33$&amp;amp;nbsp; and includes the characters &#039;&#039;&#039;a&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;b&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;c&#039;&#039;&#039;,&amp;amp;nbsp; ...,&amp;amp;nbsp; &#039;&#039;&#039;x&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;y&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;z&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ä&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ö&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ü&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ß&#039;&#039;&#039;,&amp;amp;nbsp; $\rm LZ$,&amp;amp;nbsp; $\rm ZI$,&amp;amp;nbsp; $\rm IP$. &amp;amp;nbsp; Our analysis did not differentiate between upper and lower case letters.&lt;br /&gt;
&lt;br /&gt;
In contrast to Küpfmüller&#039;s analysis, we also took into account:&lt;br /&gt;
*the German umlauts&amp;amp;nbsp; &#039;&#039;&#039;ä&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ö&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ü&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;ß&#039;&#039;&#039;, which make up about&amp;amp;nbsp; $1.2\%$&amp;amp;nbsp; of the biblical text, &lt;br /&gt;
*the class punctuation&amp;amp;nbsp; $\rm IP$&amp;amp;nbsp; (Interpunktion) with approx.&amp;amp;nbsp; $3\%$,&lt;br /&gt;
*the class digit&amp;amp;nbsp; $\rm ZI$&amp;amp;nbsp; (Ziffer) with approx.&amp;amp;nbsp; $1.3\%$&amp;amp;nbsp; because of the verse numbering within the Bible,&lt;br /&gt;
*the space (Leerzeichen)&amp;amp;nbsp; $\rm (LZ)$&amp;amp;nbsp; as the most common character&amp;amp;nbsp; $(17.8\%)$, even more frequent than the &amp;quot;e&amp;quot;&amp;amp;nbsp; $(12.8\%)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following table summarizes the results. &amp;amp;nbsp; $N$&amp;amp;nbsp; indicates the analyzed file size in characters (bytes). &amp;amp;nbsp; The decision content&amp;amp;nbsp; $H_0$&amp;amp;nbsp; as well as the entropy approximations&amp;amp;nbsp; $H_1$,&amp;amp;nbsp; $H_2$&amp;amp;nbsp; and&amp;amp;nbsp; $H_3$&amp;amp;nbsp; were each determined from&amp;amp;nbsp; $N$&amp;amp;nbsp; characters and are each given in &amp;quot;bit/character&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_3_S3_vers2.png|left|frame|Entropy values (in bit/characters) of the German Bible]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
*Please do not consider these results to be scientific research.&lt;br /&gt;
*It is only an attempt to give students an understanding of the subject matter in an internship. &lt;br /&gt;
*The basis of this study was the Bible, since we had both its German and English versions available to us in the appropriate ASCII format.	 &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The results of the above table can be summarized as follows:&lt;br /&gt;
*In all rows the entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; decreases monotously with increasing&amp;amp;nbsp; $k$.&amp;amp;nbsp; The decrease is convex, that means &amp;amp;nbsp; $H_1 - H_2 &amp;gt; H_2 - H_3$. &amp;amp;nbsp; The extrapolation of the final value&amp;amp;nbsp; $(k \to \infty)$&amp;amp;nbsp; is not (or only extremely vague) possible from the three entropy approximations determined in each case.&lt;br /&gt;
*If the evaluation of the numbers&amp;amp;nbsp; $(\rm ZI$, line 2 &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 32)$&amp;amp;nbsp; and additionally the evaluation of the punctuation marks&amp;amp;nbsp; $(\rm IP$, line 3 &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 31)$ is omitted, the entropy approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; $($by&amp;amp;nbsp; $0. 114)$,&amp;amp;nbsp; $H_2$&amp;amp;nbsp; $($by&amp;amp;nbsp; $0.063)$&amp;amp;nbsp; and&amp;amp;nbsp; $H_3$&amp;amp;nbsp; $($by&amp;amp;nbsp; $0.038)$&amp;amp;nbsp; decrease. &amp;amp;nbsp; On the final value&amp;amp;nbsp; $H$&amp;amp;nbsp; as the limit value of&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k \to \infty$&amp;amp;nbsp; the omission of numbers and punctuation will probably have little effect.&lt;br /&gt;
*If one leaves the space&amp;amp;nbsp; $(\rm LZ$, line 4 &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 30)$&amp;amp;nbsp; out of consideration, the result is almost the same constellation as originally considered by Küpfmüller.&amp;amp;nbsp; The only differences are the rather rare German special characters &#039;&#039;&#039;ä&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ö&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ü&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;ß&#039;&#039;&#039;.&lt;br /&gt;
*The&amp;amp;nbsp; $H_1$-value&amp;amp;nbsp; $(4.132)$&amp;amp;nbsp; given in the last line agrees very well with the value&amp;amp;nbsp; $H_1 ≈ 4.1$&amp;amp;nbsp; determined by Küpfmüller. &amp;amp;nbsp; With regard to the&amp;amp;nbsp; $H_3$-values, however, there are clear differences: &amp;amp;nbsp; Our analysis yields&amp;amp;nbsp; $H_3 ≈ 3.4$, while Küpfmüller gives&amp;amp;nbsp; $H_3 ≈ 2.8$&amp;amp;nbsp; (all values in bit/letter).&lt;br /&gt;
*From the frequency of occurrence of the space&amp;amp;nbsp; $(17.8\%)$&amp;amp;nbsp; an average word length of&amp;amp;nbsp; $1/0.178 - 1 ≈ 4.6$&amp;amp;nbsp; results here, a smaller value than the&amp;amp;nbsp; $5.5$&amp;amp;nbsp; given by Küpfmüller.&amp;amp;nbsp; The discrepancy can be partly explained by our analysis file &amp;quot;Bible&amp;quot; (many spaces due to the verse numbering).&lt;br /&gt;
*The comparison of lines 3 and 4 is interesting:&amp;amp;nbsp; If the space is taken into account,&amp;amp;nbsp; $H_0$&amp;amp;nbsp; increases from&amp;amp;nbsp; $\log_2 \ (30) \approx 4.907$&amp;amp;nbsp; to&amp;amp;nbsp; $\log_2 \ (31) \approx 4.954$,&amp;amp;nbsp; but&amp;amp;nbsp; $H_1$&amp;amp;nbsp; $($by the factor&amp;amp;nbsp; $0.98)$,&amp;amp;nbsp; $H_2$&amp;amp;nbsp; $($by&amp;amp;nbsp; $0.96)$&amp;amp;nbsp; and&amp;amp;nbsp; $H_3$&amp;amp;nbsp; $($by&amp;amp;nbsp; $0.93)$&amp;amp;nbsp; are reduced.&amp;amp;nbsp; Küpfmüller intuitively took this factor into account with&amp;amp;nbsp; $85\%$.&lt;br /&gt;
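As an illustration of how such values arise, here is a minimal sketch (not part of the original study) that computes the decision content&amp;amp;nbsp; $H_0$&amp;amp;nbsp; and the first-order entropy approximation&amp;amp;nbsp; $H_1$&amp;amp;nbsp; from the character frequencies of a toy string instead of a Bible file:&lt;br /&gt;

```python
from collections import Counter
from math import log2

def h0_h1(text):
    """Decision content H0 = log2(M) and first-order entropy
    approximation H1 = sum of -p*log2(p), both in bit/character."""
    counts = Counter(text)
    n = len(text)
    m = len(counts)  # symbol range actually occurring in the text
    h1 = sum(-(c / n) * log2(c / n) for c in counts.values())
    return log2(m), h1

# Toy example; the table above was obtained analogously from Bible files.
h0, h1 = h0_h1("abracadabra")
print(round(h0, 3), round(h1, 3))
```

Higher approximations&amp;amp;nbsp; $H_2$,&amp;amp;nbsp; $H_3$&amp;amp;nbsp; would count character pairs and triples in the same way.&lt;br /&gt;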
&lt;br /&gt;
&lt;br /&gt;
Although we consider this own study to be rather insignificant, we believe that for today&#039;s texts the&amp;amp;nbsp; $1.0 \ \rm bit/letter$&amp;amp;nbsp; given by Shannon for the English language is somewhat too low, as is Küpfmüller&#039;s&amp;amp;nbsp; $1.3 \ \rm bit/letter$&amp;amp;nbsp; for the German language, among other things because&lt;br /&gt;
*the symbol range today is larger than that considered by Shannon and Küpfmüller in the 1950s; for example, for the ASCII character set&amp;amp;nbsp; $M = 256$,&lt;br /&gt;
*the multiple formatting options (underlining, bold and italics, indents, colors) further increase the information content of a document.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Synthetically generated texts == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The graphic shows artificially generated German and English texts, which are taken from&amp;amp;nbsp; [Küpf54]&amp;lt;ref name =&#039;Küpf54&#039;&amp;gt;Küpfmüller, K.: &#039;&#039;Die Entropie der deutschen Sprache&#039;&#039;. Fernmeldetechnische Zeitung 7, 1954, S. 265-272.&amp;lt;/ref&amp;gt;.&amp;amp;nbsp; The underlying symbol range is&amp;amp;nbsp; $M = 27$,&amp;amp;nbsp; that is, all letters&amp;amp;nbsp; (without umlauts and &#039;&#039;&#039;ß&#039;&#039;&#039;)&amp;amp;nbsp; and the space character are considered.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_3_S4_vers2.png|right|frame|artificially generated German and English texts]]&lt;br /&gt;
&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;Zero-order letter approximation&#039;&#039;&amp;amp;nbsp; assumes equally probable characters in each case.&amp;amp;nbsp; There is therefore no difference between German (red) and English (blue).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;first-order letter approximation&#039;&#039;&amp;amp;nbsp; already takes the different letter frequencies into account; the higher-order approximations also consider the preceding characters.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*In the&amp;amp;nbsp; &#039;&#039;4th order synthesis&#039;&#039;&amp;amp;nbsp; one can already recognize meaningful words.&amp;amp;nbsp; Here the probability for a new letter depends on the last three characters.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;first-order word approximation&#039;&#039;&amp;amp;nbsp; synthesizes sentences according to the word probabilities; the&amp;amp;nbsp; &#039;&#039;second-order word approximation&#039;&#039;&amp;amp;nbsp; also considers the previous word.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Further information on the synthetic generation of German and English texts can be found in&amp;amp;nbsp; [[Aufgaben:1.8_Synthetisch_erzeugte_Texte|Exercise 1.8]].&lt;br /&gt;
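The letter approximations described above can be imitated with a few lines of code.&amp;amp;nbsp; The following sketch is purely illustrative (not the method used for the graphic); each new character is drawn according to its conditional frequency after the preceding characters, so in the notation of this page the&amp;amp;nbsp; $k$-th order synthesis corresponds to&amp;amp;nbsp; order $= k-1$:&lt;br /&gt;

```python
import random
from collections import defaultdict

def synthesize(text, order, length, seed=0):
    """Draw each new character according to its relative frequency
    after the preceding `order` characters of the training text
    (order must be at least 1)."""
    random.seed(seed)
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    state = text[:order]
    out = state
    for _ in range(length):
        # Fall back to the plain letter frequencies for unseen states.
        nxt = random.choice(model.get(state) or list(text))
        out += nxt
        state = out[-order:]
    return out

training = "the quick brown fox jumps over the lazy dog and the cat "
print(synthesize(training, order=3, length=40))
```

With a large training corpus and&amp;amp;nbsp; order $= 3$, recognizable words of the training language start to appear, just as in the fourth-order synthesis shown in the graphic.&lt;br /&gt;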
&lt;br /&gt;
 	 &lt;br /&gt;
==Exercises for the chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:1.7 Entropie natürlicher Texte|Exercise 1.7: Entropy of Natural Texts]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.8 Synthetisch erzeugte Texte|Exercise 1.8: Synthetically Generated Texts]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/General_Description&amp;diff=35085</id>
		<title>Information Theory/General Description</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/General_Description&amp;diff=35085"/>
		<updated>2020-11-02T13:33:38Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Quellencodierung – Datenkomprimierung&lt;br /&gt;
|Vorherige Seite=Natürliche wertdiskrete Nachrichtenquellen&lt;br /&gt;
|Nächste Seite=Komprimierung nach Lempel, Ziv und Welch&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== # OVERVIEW OF THE SECOND MAIN CHAPTER # ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Shannon&#039;s information theory is applied, for example, to the&amp;amp;nbsp; &#039;&#039;source coding&#039;&#039;&amp;amp;nbsp; of digital (i.e. discrete-value and discrete-time) message sources.&amp;amp;nbsp; In this context one also speaks of&amp;amp;nbsp; &#039;&#039;data compression&#039;&#039;. &lt;br /&gt;
*Attempts are made to reduce the redundancy of natural digital sources such as measurement data, texts, or voice and image files (after digitization) by recoding them, so that they can be stored and transmitted more efficiently. &lt;br /&gt;
*In most cases, source encoding is associated with a change of the symbol range.&amp;amp;nbsp; In the following, the output sequence is always binary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following topics are discussed in detail:&lt;br /&gt;
&lt;br /&gt;
*the different objectives of &#039;&#039;source coding&#039;&#039;, &#039;&#039;channel coding&#039;&#039;&amp;amp;nbsp; and &#039;&#039;line coding&#039;&#039;,&lt;br /&gt;
*&#039;&#039;lossy&#039;&#039;&amp;amp;nbsp; encoding methods for analog sources, for example GIF, TIFF, JPEG, PNG, MP3,&lt;br /&gt;
*the &#039;&#039;source encoding theorem&#039;&#039;, which specifies a limit for the average codeword length,&lt;br /&gt;
*the frequently used data compression according to &#039;&#039;Lempel&#039;&#039;, &#039;&#039;Ziv&#039;&#039;&amp;amp;nbsp; and &#039;&#039;Welch&#039;&#039;,&lt;br /&gt;
*the &#039;&#039;Huffman code&#039;&#039;&amp;amp;nbsp; as the best known and most efficient form of entropy coding,&lt;br /&gt;
*the &#039;&#039;Shannon–Fano code&#039;&#039;&amp;amp;nbsp; as well as &#039;&#039;arithmetic coding&#039;&#039; - both also belong to the class of entropy encoders,&lt;br /&gt;
*the &#039;&#039;run-length encoding&#039;&#039;&amp;amp;nbsp; and the &#039;&#039;Burrows-Wheeler Transformation&#039;&#039;&amp;amp;nbsp; (BWT).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Further information on the topic as well as exercises, simulations and programming tasks can be found in the experiment &amp;quot;Value Discrete Information Theory&amp;quot; (Wertdiskrete Informationstheorie) of the practical course &amp;quot;Simulation of Digital Transmission Systems&amp;quot; (Simulation Digitaler Übertragungssysteme).&amp;amp;nbsp; This (former) LNT course at the TU Munich is based on&lt;br /&gt;
&lt;br /&gt;
*the Windows program&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Programme/WDIT.zip WDIT] &amp;amp;nbsp;&amp;amp;rArr;&amp;amp;nbsp; link points to the ZIP version of the program, and &lt;br /&gt;
*the associated&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Texte/Wertdiskrete_Informationstheorie.pdf lab course guide] &amp;amp;nbsp;&amp;amp;rArr;&amp;amp;nbsp; link refers to the PDF version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Source coding - Channel coding - Line coding ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the descriptions in this second chapter we consider the following digital transfer model:&lt;br /&gt;
*The source signal&amp;amp;nbsp; $q(t)$&amp;amp;nbsp; can be analog as well as digital, just like the sink signal&amp;amp;nbsp; $v(t)$.&amp;amp;nbsp; All other signals in this block diagram, even those not explicitly named here, are digital signals.&lt;br /&gt;
*In particular, the signals&amp;amp;nbsp; $x(t)$&amp;amp;nbsp; and&amp;amp;nbsp; $y(t)$&amp;amp;nbsp; at the input and output of the &amp;quot;digital channel&amp;quot; are digital and can therefore also be described completely by the symbol sequences&amp;amp;nbsp; $〈x_ν〉$&amp;amp;nbsp; and&amp;amp;nbsp; $〈y_ν〉$.&lt;br /&gt;
*The &amp;quot;digital channel&amp;quot; includes not only the transmission medium and interference (noise) but also components of the transmitter (modulator, transmitter pulse shaper, etc.) and the receiver (demodulator, receive filter or detector, decision maker). &lt;br /&gt;
*The chapter&amp;amp;nbsp; [[Digital_Signal_Transmission/Parameters of Digital Channel Models|Parameters of Digital Channel Models]]&amp;amp;nbsp; in the book &amp;quot;Digital Signal Transmission&amp;quot; describes the modeling of the &amp;quot;Digital Channel&amp;quot; .&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2315__Inf_T_2_1_S1_neu.png|center|frame|Simplified model of a message transmission system]]&lt;br /&gt;
&lt;br /&gt;
As can be seen from this block diagram, there are three different types of coding, depending on the objective, each realized by an encoder on the transmission side and the corresponding decoder at the receiver:&lt;br /&gt;
&lt;br /&gt;
The task of&amp;amp;nbsp; &#039;&#039;&#039;source coding&#039;&#039;&#039; &amp;amp;nbsp; is redundancy reduction for data compression, as used for example in image coding.&amp;amp;nbsp; By exploiting the statistical dependencies between the individual points of an image, or between the brightness values of a pixel at different times&amp;amp;nbsp; (for moving picture sequences),&amp;amp;nbsp; procedures can be developed which lead to a noticeable reduction of the amount of data (measured in bit or byte) with nearly the same picture quality.&amp;amp;nbsp; A simple example is&amp;amp;nbsp; &#039;&#039;differential pulse code modulation&#039;&#039;&amp;amp;nbsp; (DPCM).&lt;br /&gt;
&lt;br /&gt;
With&amp;amp;nbsp; &#039;&#039;&#039;channel coding&#039;&#039;&#039;&amp;amp;nbsp; on the other hand, a noticeable improvement of the transmission behavior is achieved by using a redundancy specifically added at the transmitter to detect and correct transmission errors at the receiver side.&amp;amp;nbsp; Such codes, whose most important representatives are block codes, convolutional codes and turbo codes, are of great importance, especially for heavily disturbed channels. The greater the relative redundancy of the coded signal, the better the correction properties of the code, but at a reduced payload data rate.&lt;br /&gt;
&lt;br /&gt;
A&amp;amp;nbsp; &#039;&#039;&#039;line coding&#039;&#039;&#039;&amp;amp;nbsp; - sometimes also called &#039;&#039;transmission coding&#039;&#039;&amp;amp;nbsp; - is used to adapt the transmitted signal to the spectral characteristics of the channel and the receiving equipment by recoding the source symbols. &amp;amp;nbsp; For example, in the case of an (analog) transmission channel over which no DC component can be transmitted, for which thus&amp;amp;nbsp; $H_{\rm K}(f = 0) = 0$, line coding must ensure that the code symbol sequence does not contain long runs of the same polarity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The focus of this chapter is on lossless source coding, which generates a data-compressed code symbol sequence&amp;amp;nbsp; $〈c_ν〉$&amp;amp;nbsp; from the source symbol sequence&amp;amp;nbsp; $〈q_ν〉$&amp;amp;nbsp; based on the results of information theory.&lt;br /&gt;
&lt;br /&gt;
* Channel coding is the subject of a separate book in our tutorial with the following&amp;amp;nbsp; [[Channel_Coding|content]]. &lt;br /&gt;
*Line coding is discussed in detail in the chapter &amp;quot;Coded and multilevel transmission&amp;quot; of the book&amp;amp;nbsp; [[Digital_Signal_Transmission]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note&#039;&#039;: &amp;amp;nbsp; We uniformly use &amp;quot;$\nu$&amp;quot; here as the running index of a symbol sequence.&amp;amp;nbsp; Strictly speaking, different indices should be used for&amp;amp;nbsp; $〈q_ν〉$,&amp;amp;nbsp; $〈c_ν〉$&amp;amp;nbsp; and&amp;amp;nbsp; $〈x_ν〉$&amp;amp;nbsp; if the rates do not match.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lossy source encoding for images==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For digitizing analog source signals such as speech, music or pictures, only lossy source coding methods can be used.&amp;amp;nbsp; Even the storage of a photo in&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Windows_Bitmap BMP]-format is always associated with a loss of information due to sampling, quantization and the finite color depth.&lt;br /&gt;
&lt;br /&gt;
However, there are also a number of compression methods for images that result in much smaller image files than &amp;quot;BMP&amp;quot;, for example:&lt;br /&gt;
*[https://en.wikipedia.org/wiki/GIF GIF]&amp;amp;nbsp; (&#039;&#039;Graphics Interchange Format&#039;&#039;&amp;amp;nbsp;), developed by Steve Wilhite in 1987.&lt;br /&gt;
*[https://de.wikipedia.org/wiki/JPEG JPEG]&amp;amp;nbsp; - a format that was introduced in 1992 by the &#039;&#039;Joint Photographic Experts Group&#039;&#039;&amp;amp;nbsp; and is now the standard for digital cameras.&amp;amp;nbsp; File extension: &amp;amp;nbsp; &amp;quot;jpeg&amp;quot; or &amp;quot;jpg&amp;quot;.&lt;br /&gt;
*[https://de.wikipedia.org/wiki/Tagged_Image_File_Format TIFF]&amp;amp;nbsp; (&#039;&#039;Tagged Image File Format&#039;&#039;), developed around 1990 by Aldus Corp. (now Adobe) and Microsoft; still the quasi-standard for print-ready images of the highest quality.&lt;br /&gt;
*[https://de.wikipedia.org/wiki/Portable_Network_Graphics PNG]&amp;amp;nbsp; (&#039;&#039;Portable Network Graphics&#039;&#039;), designed in 1995 by T. Boutell &amp;amp; T. Lane as a replacement for the patent-encumbered GIF format, is less complex than TIFF.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These compression methods partly use &lt;br /&gt;
*Vector quantization for redundancy reduction of correlated pixels, &lt;br /&gt;
*at the same time the lossless compression algorithms according to&amp;amp;nbsp; [[Information_Theory/Entropy Coding According to Huffman#The_Huffman.E2.80.93Algorithm|Huffman]]&amp;amp;nbsp; and&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch|Lempel/Ziv]], &lt;br /&gt;
*possibly also transformation coding based on DFT&amp;amp;nbsp; (&#039;&#039;Discrete Fourier Transformation&#039;&#039;&amp;amp;nbsp;)&amp;amp;nbsp; and&amp;amp;nbsp; DCT&amp;amp;nbsp; (&#039;&#039;Discrete Cosine Transformation&#039;&#039;&amp;amp;nbsp;), &lt;br /&gt;
*and subsequently quantization and transmission in the transform domain.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We now compare the effects of two compression methods on the subjective quality of photos and graphics, namely:&lt;br /&gt;
*&#039;&#039;&#039;JPEG&#039;&#039;&#039;&amp;amp;nbsp; $($with compression factor&amp;amp;nbsp; $8)$,&amp;amp;nbsp; and&lt;br /&gt;
*&#039;&#039;&#039;PNG&#039;&#039;&#039;&amp;amp;nbsp; $($with compression factor&amp;amp;nbsp; $24)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp; &lt;br /&gt;
In the upper part of the following figure you can see two compressions of a photo.&lt;br /&gt;
[[File:P_ID2920__Inf_T_2_1_S2_neu.png|right|frame|Compare JPEG and PNG compression]]&lt;br /&gt;
The format&amp;amp;nbsp; &#039;&#039;&#039;JPEG&#039;&#039;&#039; &amp;amp;nbsp; (left image) allows a compression factor of&amp;amp;nbsp; $8$&amp;amp;nbsp; to&amp;amp;nbsp; $15$&amp;amp;nbsp; with (nearly) lossless compression. &lt;br /&gt;
*Even with the compression factor&amp;amp;nbsp; $35$&amp;amp;nbsp; the result can still be called &amp;quot;good&amp;quot;. &lt;br /&gt;
*For most consumer digital cameras, &amp;quot;JPEG&amp;quot; is the default storage format.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shown on the right was compressed with&amp;amp;nbsp; &#039;&#039;&#039;PNG&#039;&#039;&#039;&amp;amp;nbsp;. &lt;br /&gt;
*The quality is similar to that of the left JPEG image, although the compression is stronger by about a factor of&amp;amp;nbsp; $3$. &lt;br /&gt;
*In contrast, PNG achieves a worse compression result than JPEG if the photo contains a lot of color gradations. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
PNG is also better suited for line drawings with captions than JPEG (lower images).&amp;amp;nbsp; The quality of the JPEG compression (left) is significantly worse than the PNG result, although the resulting file size is about three times as large.&amp;amp;nbsp; Especially fonts look &amp;quot;washed out&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note&#039;&#039;: &amp;amp;nbsp; Due to technical limitations of&amp;amp;nbsp; $\rm LNTwww$&amp;amp;nbsp; all graphics had to be saved as &amp;quot;PNG&amp;quot;. &lt;br /&gt;
*In the above graphic, &amp;quot;JPEG&amp;quot; means the PNG conversion of a file previously compressed with &amp;quot;JPEG&amp;quot;. &lt;br /&gt;
*However, the associated loss is negligible. }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
	 &lt;br /&gt;
== Lossy source coding for audio signals==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
A first example of source coding for speech and music is&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Pulse-code_modulation pulse-code modulation]&amp;amp;nbsp; (PCM), invented in 1938, which extracts the code symbol sequence&amp;amp;nbsp; $〈c_ν〉$&amp;amp;nbsp; from an analog source signal&amp;amp;nbsp; $q(t)$ by means of the three processing blocks &lt;br /&gt;
[[File:P_ID2925__Mod_T_4_1_S1_neu.png|right|frame|Principle of Pulse Code Modulation (PCM)]]&lt;br /&gt;
*Sampling,&lt;br /&gt;
*Quantization, and&lt;br /&gt;
*PCM encoding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graphic illustrates the PCM principle.&amp;amp;nbsp; A detailed description of the picture can be found on the first pages of the chapter&amp;amp;nbsp; [[Modulation_Methods/Pulse Code Modulation|Pulse Code Modulation]]&amp;amp;nbsp; in the book &amp;quot;Modulation Methods&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Because of the necessary band limitation and quantization, this transformation is always lossy.&amp;amp;nbsp; That means&lt;br /&gt;
*The code sequence&amp;amp;nbsp; $〈c_ν〉$&amp;amp;nbsp; has less information than the signal&amp;amp;nbsp; $q(t)$.&lt;br /&gt;
*The sink signal&amp;amp;nbsp; $v(t)$&amp;amp;nbsp; is fundamentally different from&amp;amp;nbsp; $q(t)$. &lt;br /&gt;
*Mostly, however, the deviation is not very large.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
We will now mention two transmission methods based on pulse code modulation as examples.&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;  &lt;br /&gt;
The following data is taken from the&amp;amp;nbsp; [[Examples_of_Communication_Systems/Entire GSM Transmission System#Components_of_Language.E2.80.93_and_Data.C3.BCtransmission|GSM-Specification]]&amp;amp;nbsp;:&lt;br /&gt;
*If a speech signal is spectrally limited to the bandwidth&amp;amp;nbsp; $B = 4 \, \rm kHz$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; sampling rate $f_{\rm A} = 8 \, \rm kHz$,&amp;amp;nbsp; then with a quantization of $13 \, \rm Bit$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; number of quantization levels&amp;amp;nbsp; $M = 2^{13} = 8192$,&amp;amp;nbsp; a binary data stream of data rate&amp;amp;nbsp; $R = 104 \, \rm kbit/s$ results. &lt;br /&gt;
*The quantization noise ratio is then&amp;amp;nbsp; $20 \cdot \lg \ M ≈ 78 \, \rm dB$. &lt;br /&gt;
*For quantization with&amp;amp;nbsp; $16 \, \rm Bit$&amp;amp;nbsp; this increases to&amp;amp;nbsp; $96 \, \rm dB$.&amp;amp;nbsp; At the same time, however, the required data rate increases to&amp;amp;nbsp; $R = 128 \, \rm kbit/s$. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The interactive applet&amp;amp;nbsp; [[Applets:Bandwidth Limitation|Impact of a Bandwidth Limitation for Speech and Music]] illustrates the effects of a bandwidth limitation.}}&lt;br /&gt;
&lt;br /&gt;
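The quantization noise ratio&amp;amp;nbsp; $20 \cdot \lg \ M$&amp;amp;nbsp; used in Example 2 is easy to reproduce.&amp;amp;nbsp; The following lines are a sketch for checking the numbers, not part of the GSM specification:&lt;br /&gt;

```python
from math import log10

def quantization_snr_db(bits):
    """Quantization SNR approximated as 20*lg(M) in dB,
    with M = 2**bits quantization levels."""
    m = 2 ** bits
    return 20 * log10(m)

# 13-bit quantization (GSM speech) and 16-bit quantization:
print(round(quantization_snr_db(13), 1))
print(round(quantization_snr_db(16), 1))
```

The printed values reproduce the roughly&amp;amp;nbsp; $78 \, \rm dB$&amp;amp;nbsp; and&amp;amp;nbsp; $96 \, \rm dB$&amp;amp;nbsp; quoted above.&lt;br /&gt;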
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp;  &lt;br /&gt;
The standard&amp;amp;nbsp; [[Examples_of_Communication_Systems/General_Description_of_ISDN|ISDN]]&amp;amp;nbsp; (&#039;&#039;Integrated Services Digital Network&#039;&#039;&amp;amp;nbsp;) for telephony via two-wire line is also based on the PCM principle, whereby each user is assigned two B-channels&amp;amp;nbsp; (&#039;&#039;Bearer Channels&#039;&#039;&amp;amp;nbsp;)&amp;amp;nbsp; with &amp;amp;nbsp;$64 \, \rm kbit/s$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 2^{8} = 256$&amp;amp;nbsp; and a D-channel&amp;amp;nbsp; (&#039;&#039;Data Channel&#039;&#039;&amp;amp;nbsp;)&amp;amp;nbsp; with &amp;amp;nbsp;$ 16 \, \rm kbit/s$. &lt;br /&gt;
*The net data rate is thus&amp;amp;nbsp; $R_{\rm net} = 144 \, \rm kbit/s$. &lt;br /&gt;
*Taking into account the channel coding and the control bits (required for organizational reasons), the ISDN gross data rate is&amp;amp;nbsp; $R_{\rm gross} = 192 \, \rm kbit/s$.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In mobile communications, very high data rates often could not (yet) be handled.&amp;amp;nbsp; In the 1990s, voice coding procedures were therefore developed that achieved data compression by a factor of&amp;amp;nbsp; $8$&amp;amp;nbsp; and more.&amp;amp;nbsp; From today&#039;s point of view, worth mentioning are&lt;br /&gt;
*the&amp;amp;nbsp; [[Examples_of_Communication_Systems/Voice Coding#Halfrate_Vocoder_and_Enhanced_Fullrate_Codec|Enhanced Full-Rate Codec]]&amp;amp;nbsp; (&#039;&#039;&#039;EFR&#039;&#039;&#039;), which extracts exactly&amp;amp;nbsp; $244 \, \rm Bit$&amp;amp;nbsp; for each speech frame of&amp;amp;nbsp; $20\, \rm ms$&amp;amp;nbsp;$($data rate: &amp;amp;nbsp; $12.2 \, \rm kbit/s)$; &amp;lt;br&amp;gt; this data compression by more than a factor of&amp;amp;nbsp; $8$&amp;amp;nbsp; is achieved by chaining several procedures: &lt;br /&gt;
:# &amp;amp;nbsp;&#039;&#039;Linear Predictive Coding&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;LPC&#039;&#039;&#039;, short term prediction), &lt;br /&gt;
:# &amp;amp;nbsp;&#039;&#039;Long Term Prediction&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;LTP&#039;&#039;&#039;) and &lt;br /&gt;
:# &amp;amp;nbsp;&#039;&#039;Regular Pulse Excitation&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;RPE&#039;&#039;&#039;);&lt;br /&gt;
*the&amp;amp;nbsp; [[Examples_of_Communication_Systems/Voice Coding#Adaptive_Multi.E2.80.93Rate_Codec|Adaptive Multi-Rate Codec]]&amp;amp;nbsp; (&#039;&#039;&#039;AMR&#039;&#039;&#039;), based on&amp;amp;nbsp; [[Examples_of_Communication_Systems/Voice Coding#Algebraic_Code_Excited_Linear_Prediction|ACELP]]&amp;amp;nbsp; (&#039;&#039;Algebraic Code Excited Linear Prediction&#039;&#039;), with several modes between&amp;amp;nbsp; $12.2 \, \rm kbit/s$&amp;amp;nbsp; (EFR) and&amp;amp;nbsp; $4.75 \, \rm kbit/s$,&amp;amp;nbsp; so that improved channel coding can be used in case of poorer channel quality;&lt;br /&gt;
*the&amp;amp;nbsp; [[Examples_of_Communication_Systems/Voice Coding#Various_Language Coding Methods|Wideband-AMR]]&amp;amp;nbsp; (&#039;&#039;&#039;WB-AMR&#039;&#039;&#039;) with nine modes between&amp;amp;nbsp; $6.6 \, \rm kbit/s$&amp;amp;nbsp; and&amp;amp;nbsp; $23.85 \, \rm kbit/s$. &amp;amp;nbsp; This is used with&amp;amp;nbsp; [[Examples_of_Communication_Systems/General_Description_of_UMTS|UMTS]]&amp;amp;nbsp; and is suitable for broadband signals between&amp;amp;nbsp; $200 \, \rm Hz$&amp;amp;nbsp; and&amp;amp;nbsp; $7 \, \rm kHz$&amp;amp;nbsp;. &amp;amp;nbsp; Sampling is done with&amp;amp;nbsp; $16 \, \rm kHz$, quantization with&amp;amp;nbsp; $4 \, \rm Bit$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
All these compression methods are described in detail in the chapter&amp;amp;nbsp; [[Examples_of_Communication_Systems/Voice Coding|Voice Coding]]&amp;amp;nbsp; of the book &amp;quot;Examples of Communication Systems&amp;quot;.&amp;amp;nbsp; The audio module&amp;amp;nbsp; [[Applets:Quality of different voice codecs (Applet)|Quality of different voice codecs]]&amp;amp;nbsp; allows a subjective comparison of these codecs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==MPEG-2 Audio Layer III - short MP3 ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Today (2015) the most common compression method for audio files is&amp;amp;nbsp; [https://en.wikipedia.org/wiki/MP3 MP3].&amp;amp;nbsp; This format was developed from 1982 onwards at the Fraunhofer Institute for Integrated Circuits (IIS) in Erlangen under the direction of Prof.&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Hans-Georg_Musmann Hans-Georg Musmann]&amp;amp;nbsp; in collaboration with the Friedrich Alexander University Erlangen-Nuremberg and AT&amp;amp;T Bell Labs.&amp;amp;nbsp; Other institutions also assert patent claims in this regard, so that since 1998 various lawsuits have been filed which, to the authors&#039; knowledge, have not yet been finally concluded.&lt;br /&gt;
&lt;br /&gt;
In the following, some of the measures are named that MP3 uses to reduce the amount of data relative to the raw version in the&amp;amp;nbsp; [https://en.wikipedia.org/wiki/WAV WAV] format.&amp;amp;nbsp; The compilation is not complete.&amp;amp;nbsp; A comprehensive description can be found, for example, in a&amp;amp;nbsp; [https://de.wikipedia.org/wiki/MP3 Wikipedia article].&lt;br /&gt;
*The audio compression method &amp;quot;MP3&amp;quot; uses, among other things, psychoacoustic effects of perception.&amp;amp;nbsp; For example, a person can only distinguish two tones from each other above a certain minimum difference in pitch.&amp;amp;nbsp; One speaks of so-called &amp;quot;masking effects&amp;quot;.&lt;br /&gt;
*Using the masking effects, MP3 stores signal components that are less important for the auditory impression with fewer bits (reduced accuracy).&amp;amp;nbsp; A dominant tone at&amp;amp;nbsp; $4 \, \rm kHz$&amp;amp;nbsp; can, for example, cause neighboring frequencies up to&amp;amp;nbsp; $11 \, \rm kHz$&amp;amp;nbsp; to be of only minor importance for the current auditory sensation.&lt;br /&gt;
*The greatest saving of MP3 coding, however, is that the sounds are stored with just enough bits so that the resulting&amp;amp;nbsp; [[Modulation_Methods/Pulse Code Modulation#Quantization_and_Quantization Noise|Quantization Noise]]&amp;amp;nbsp; is still masked and is not audible.&lt;br /&gt;
*Other MP3 compression mechanisms are the exploitation of the correlations between the two channels of a stereo signal by difference formation as well as the&amp;amp;nbsp; [[Information_Theory/Entropy Coding According to Huffman|Huffman Coding]]&amp;amp;nbsp; of the resulting data stream.&amp;amp;nbsp; Both measures are lossless.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A disadvantage of MP3 coding is that with strong compression &amp;quot;important&amp;quot; frequency components are also unintentionally affected, so that audible errors occur.&amp;amp;nbsp; It is also disturbing that, due to the blockwise application of the MP3 procedure, gaps can occur at the end of a file.&amp;amp;nbsp; A remedy is the use of the&amp;amp;nbsp; [https://en.wikipedia.org/wiki/LAME LAME] coder, an &#039;&#039;open source project&#039;&#039;, together with a corresponding player.	 	&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==Description of lossless source encoding &amp;amp;ndash; Requirements==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In the following, we only consider lossless source coding methods and make the following assumptions:&lt;br /&gt;
*The digital source has the symbol range&amp;amp;nbsp; $M$.&amp;amp;nbsp; For the individual source symbols of the sequence&amp;amp;nbsp; $〈q_ν〉$&amp;amp;nbsp; with the alphabet&amp;amp;nbsp; $\{q_μ\}$ the following applies:&lt;br /&gt;
 &lt;br /&gt;
:$$q_{\nu} \in \{ q_{\mu} \}\hspace{0.05cm}, \hspace{0.2cm}\mu = 1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, M \hspace{0.05cm}. $$&lt;br /&gt;
&lt;br /&gt;
The individual sequence elements&amp;amp;nbsp; $q_ν$&amp;amp;nbsp; may be statistically independent or exhibit statistical dependencies;&lt;br /&gt;
* First we consider&amp;amp;nbsp; &#039;&#039;&#039;message sources without memory&#039;&#039;&#039;, which are fully characterized by symbol probabilities alone, for example:&lt;br /&gt;
:$$M = 4\text{:} \  \  \  q_μ \in \{ {\rm A}, \ {\rm B}, \ {\rm C}, \ {\rm D} \}, \hspace{2.5cm} \text{with the probabilities}\ p_{\rm A},\ p_{\rm B},\ p_{\rm C},\ p_{\rm D},$$&lt;br /&gt;
:$$M = 8\text{:} \  \  \  q_μ \in \{ {\rm A}, \ {\rm B}, \ {\rm C}, \ {\rm D},\ {\rm E}, \ {\rm F}, \ {\rm G}, \ {\rm H} \}, \hspace{0.5cm} \text{with the probabilities }\ p_{\rm A},\hspace{0.05cm}\text{...} \hspace{0.05cm} ,\ p_{\rm H}.$$&lt;br /&gt;
&lt;br /&gt;
*The source encoder replaces the source symbol&amp;amp;nbsp; $q_μ$&amp;amp;nbsp; with the code word&amp;amp;nbsp; $\mathcal{C}(q_μ)$, consisting of&amp;amp;nbsp; $L_μ$&amp;amp;nbsp; code symbols from a new alphabet&amp;amp;nbsp; $\{0, \ 1$, ... ,&amp;amp;nbsp; $D - 1\}$&amp;amp;nbsp; of symbol range&amp;amp;nbsp; $D$.&amp;amp;nbsp; This gives the&amp;amp;nbsp; &#039;&#039;&#039;average code word length&#039;&#039;&#039;:&lt;br /&gt;
 &lt;br /&gt;
:$$L_{\rm M} = \sum_{\mu=1}^{M} \hspace{0.1cm} p_{\mu} \cdot L_{\mu} \hspace{0.05cm}, \hspace{0.2cm}\text{with} \hspace{0.2cm}p_{\mu} = {\rm Pr}(q_{\mu}) \hspace{0.05cm}. $$&lt;br /&gt;
&lt;br /&gt;
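This formula can be sketched numerically in a few lines; the probabilities and code word lengths below are hypothetical, not those of the table in Example 4:&lt;br /&gt;

```python
def average_codeword_length(probs, lengths):
    """L_M = sum over mu of p_mu * L_mu (code symbols per source symbol)."""
    return sum(p * l for p, l in zip(probs, lengths))

# Hypothetical M = 4 source with codeword lengths 1, 2, 3, 3:
probs = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]
print(average_codeword_length(probs, lengths))  # 1.75
```

Note that frequent symbols are assigned short code words here; this is exactly what makes&amp;amp;nbsp; $L_{\rm M}$&amp;amp;nbsp; smaller than with fixed-length coding.&lt;br /&gt;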
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 4:}$&amp;amp;nbsp; We consider two different types of source encoding, each with the parameters&amp;amp;nbsp; $M = 9$&amp;amp;nbsp; and&amp;amp;nbsp; $D = 3$.&lt;br /&gt;
&lt;br /&gt;
*In the first encoding&amp;amp;nbsp; $\mathcal{C}_1(q_μ)$&amp;amp;nbsp; according to line 2 (red) of the table below, each source symbol&amp;amp;nbsp; $q_μ$&amp;amp;nbsp; is replaced by two ternary symbols&amp;amp;nbsp; $(0$,&amp;amp;nbsp; $1$&amp;amp;nbsp; or&amp;amp;nbsp; $2)$.&amp;amp;nbsp; For example, the mapping:&lt;br /&gt;
: $$\rm A C F B I G \ ⇒ \ 00 \ 02 \ 12 \ 01 \ 22 \ 20.$$&lt;br /&gt;
*With this coding, all code words&amp;amp;nbsp; $\mathcal{C}_1(q_μ)$&amp;amp;nbsp; with&amp;amp;nbsp; $1 ≤ μ ≤ 9$&amp;amp;nbsp; have the same length&amp;amp;nbsp; $L_μ = 2$.&amp;amp;nbsp; Thus, the average code word length is&amp;amp;nbsp; $L_{\rm M} = 2$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2316__Inf_T_2_1_S3_Ganz_neu.png|center|frame|Two examples of source encoding]]&lt;br /&gt;
&lt;br /&gt;
*The second, blue source encoder uses code words of length&amp;amp;nbsp; $L_μ ∈ \{1, 2 \}$,&amp;amp;nbsp; and accordingly the average code word length&amp;amp;nbsp; $L_{\rm M}$&amp;amp;nbsp; will be less than two code symbols per source symbol.&amp;amp;nbsp; Here we have, for example, this mapping:&lt;br /&gt;
: $$\rm A C F B I G \ ⇒ \ 0 \ 02 \ 12 \ 01 \ 22 \ 2.$$&lt;br /&gt;
&lt;br /&gt;
*It is obvious that this second code symbol sequence cannot be decoded unambiguously, since the symbol sequence naturally does not include the spaces inserted in this text for display reasons. }}&lt;br /&gt;
 	 &lt;br /&gt;
&lt;br /&gt;
==Kraft–McMillan inequality - Prefix-free codes == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Binary codes for compressing a memoryless value discrete source are characterized by the fact that the individual symbols are represented by code symbol sequences of different lengths:&lt;br /&gt;
 &lt;br /&gt;
:$$L_{\mu} \ne {\rm const.}  \hspace{0.4cm}(\mu = 1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, M ) \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
Only then is it possible&lt;br /&gt;
*that the&amp;amp;nbsp; &#039;&#039;&#039;average code word length becomes minimal&#039;&#039;&#039;,&lt;br /&gt;
*if the&amp;amp;nbsp; &#039;&#039;&#039;source symbols are not equally probable&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To enable a unique decoding, the code must also be &amp;quot;prefix-free&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp; The property&amp;amp;nbsp; &#039;&#039;&#039;prefix-free&#039;&#039;&#039;&amp;amp;nbsp; indicates that no code word may be the prefix (beginning) of a longer code word.&amp;amp;nbsp; Such a code is immediately decodable.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The blue code in the&amp;amp;nbsp; [[Information_Theory/General_Description#Description_of_lossless_source_encoding_.E2.80.93_Prerequisites|Example 4]]&amp;amp;nbsp; is not prefix-free.&amp;amp;nbsp; For example, the code symbol sequence &amp;quot;01&amp;quot; could be interpreted by the decoder as&amp;amp;nbsp; $\rm AD$&amp;amp;nbsp; but also as&amp;amp;nbsp; $\rm B$. &lt;br /&gt;
*The red code, on the other hand, is prefix-free, although prefix freedom would not be absolutely necessary here because of&amp;amp;nbsp; $L_μ = \rm const.$&amp;amp;nbsp;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Without proof:}$&amp;amp;nbsp;  &lt;br /&gt;
The necessary&amp;amp;nbsp; &#039;&#039;&#039;condition for the existence of a prefix-free code&#039;&#039;&#039;&amp;amp;nbsp; was specified by Leon Kraft in his 1949 master&#039;s thesis at the&amp;amp;nbsp; &#039;&#039;Massachusetts Institute of Technology&#039;&#039;&amp;amp;nbsp; (MIT):&lt;br /&gt;
 &lt;br /&gt;
:$$\sum_{\mu=1}^{M} \hspace{0.2cm} D^{-L_{\mu}} \le 1 \hspace{0.05cm}.$$}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 5:}$&amp;amp;nbsp;  &lt;br /&gt;
If you check the second (blue) code of&amp;amp;nbsp; [[Information_Theory/General_Description#Description of lossless source encoding.E2.80.93_Requirements|Example 4]]&amp;amp;nbsp; with&amp;amp;nbsp; $M = 9$&amp;amp;nbsp; and&amp;amp;nbsp; $D = 3$, you get:&lt;br /&gt;
 &lt;br /&gt;
:$$3 \cdot 3^{-1} + 6 \cdot 3^{-2} = 1.667 &amp;gt; 1 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
From this you can see that this code cannot be prefix-free.}}&lt;br /&gt;
&lt;br /&gt;
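The check of Example 5 can be reproduced with a few lines of Python.&amp;amp;nbsp; This is a minimal sketch; exact rational arithmetic via the `fractions` module avoids rounding issues:&lt;br /&gt;

```python
from fractions import Fraction

def kraft_sum(lengths, D):
    """Kraft sum over the code word lengths L_mu for a D-ary code alphabet."""
    return sum(Fraction(1, D ** L) for L in lengths)

# Blue code of Example 4/5: three code words of length 1, six of length 2 (D = 3):
blue = kraft_sum([1] * 3 + [2] * 6, D=3)
print(float(blue))   # 1.666... > 1  ->  no prefix-free code with these lengths
# Red code: nine code words of length 2 satisfy the inequality with equality:
red = kraft_sum([2] * 9, D=3)
print(float(red))    # 1.0
```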
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 6:}$&amp;amp;nbsp; Let&#039;s look at the binary code&lt;br /&gt;
 &lt;br /&gt;
:$$\boldsymbol{\rm A } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 0 &lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm B } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 00&lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm C } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 11&lt;br /&gt;
\hspace{0.05cm}, $$&lt;br /&gt;
&lt;br /&gt;
which is obviously not prefix-free, since the code word&amp;amp;nbsp; &#039;&#039;&#039;0&#039;&#039;&#039;&amp;amp;nbsp; is the prefix of&amp;amp;nbsp; &#039;&#039;&#039;00&#039;&#039;&#039;.&amp;amp;nbsp; The equation&lt;br /&gt;
 &lt;br /&gt;
:$$1 \cdot 2^{-1} + 2 \cdot 2^{-2} = 1 $$&lt;br /&gt;
&lt;br /&gt;
does not mean that this code is actually prefix-free; it only means that a prefix-free code with the same length distribution exists, for example&lt;br /&gt;
  &lt;br /&gt;
:$$\boldsymbol{\rm A } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 0 &lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm B } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 10&lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm C } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 11&lt;br /&gt;
\hspace{0.05cm}.$$}}&lt;br /&gt;
&lt;br /&gt;
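The prefix freedom of a given code table can also be tested mechanically.&amp;amp;nbsp; A minimal sketch (quadratic in the number of code words, which is fine for small tables):&lt;br /&gt;

```python
def is_prefix_free(codewords):
    """True if no code word is the prefix of another (distinct) code word."""
    for c in codewords:
        for d in codewords:
            if c != d and d.startswith(c):
                return False
    return True

print(is_prefix_free(["0", "00", "11"]))  # False: "0" is a prefix of "00"
print(is_prefix_free(["0", "10", "11"]))  # True
```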
&lt;br /&gt;
== Source encoding theorem==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We now look at a redundant message source with the symbol set&amp;amp;nbsp; $〈q_μ〉$, where the index&amp;amp;nbsp; $μ$&amp;amp;nbsp; takes all values between&amp;amp;nbsp; $1$&amp;amp;nbsp; and the symbol set size&amp;amp;nbsp; $M$.&amp;amp;nbsp; The source entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; is smaller than the decision content&amp;amp;nbsp; $H_0$.&lt;br /&gt;
&lt;br /&gt;
The redundancy&amp;amp;nbsp; $(H_0- H)$&amp;amp;nbsp; is caused either by&lt;br /&gt;
*symbols that are not equally probable &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $p_μ ≠ 1/M$,&amp;amp;nbsp; and/or&lt;br /&gt;
*statistical dependencies within the sequence&amp;amp;nbsp; $〈q_\nu〉$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A source encoder replaces the source symbol&amp;amp;nbsp; $q_μ$&amp;amp;nbsp; with the binary codeword&amp;amp;nbsp; $\mathcal{C}(q_μ)$, consisting of&amp;amp;nbsp; $L_μ$&amp;amp;nbsp; binary symbols (zeros or ones).&amp;amp;nbsp; This results in an average codeword length:&lt;br /&gt;
 &lt;br /&gt;
:$$L_{\rm M} = \sum_{\mu=1}^{M} \hspace{0.2cm} p_{\mu} \cdot L_{\mu} \hspace{0.05cm}, \hspace{0.2cm}{\rm with} \hspace{0.2cm}p_{\mu} = {\rm Pr}(q_{\mu}) \hspace{0.05cm}. $$&lt;br /&gt;
&lt;br /&gt;
For the source encoding task described here the following&amp;amp;nbsp; &#039;&#039;&#039;limit&#039;&#039;&#039;&amp;amp;nbsp; can be specified:&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Theorem:}$&amp;amp;nbsp;  &lt;br /&gt;
For a complete reconstruction of the transmitted symbol sequence from the binary sequence to be possible, it is sufficient, but also necessary, that &lt;br /&gt;
&lt;br /&gt;
*for encoding on the transmitting side at least&amp;amp;nbsp; $H$&amp;amp;nbsp; binary symbols per source symbol are used. &lt;br /&gt;
&lt;br /&gt;
*the average code word length&amp;amp;nbsp; $L_{\rm M}$&amp;amp;nbsp; cannot be smaller than the entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; of the source symbol sequence: &amp;amp;nbsp; &lt;br /&gt;
:$$L_{\rm M} \ge H \hspace{0.05cm}. $$&lt;br /&gt;
&lt;br /&gt;
This regularity is called the&amp;amp;nbsp; &#039;&#039;&#039;Source Coding Theorem&#039;&#039;&#039;, which goes back to&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Claude_Shannon Claude Elwood Shannon].&amp;amp;nbsp; If the source encoder considers only the different occurrence probabilities, but not the statistical dependencies within the sequence, then&amp;amp;nbsp; $L_{\rm M} ≥ H_1$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; [[Information_Theory/Sources with Memory#Entropy with respect to two-tuples|first entropy approximation]].}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 7:}$&amp;amp;nbsp;  &lt;br /&gt;
For a quaternary source with the symbol probabilities&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = 2^{-1}\hspace{0.05cm}, \hspace{0.2cm}p_{\rm B} = 2^{-2}\hspace{0.05cm}, \hspace{0.2cm}p_{\rm C} = p_{\rm D} = 2^{-3}&lt;br /&gt;
\hspace{0.3cm} \Rightarrow \hspace{0.3cm} H = H_1 = 1.75\,\, {\rm bit/source symbol} $$&lt;br /&gt;
&lt;br /&gt;
equality holds in the above equation &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $L_{\rm M} = H$, if for example the following assignment is chosen:&lt;br /&gt;
 &lt;br /&gt;
:$$\boldsymbol{\rm A } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 0 &lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm B } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 10&lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm C} \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 110&lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm D }\hspace{0.15cm} \Rightarrow \hspace{0.15cm} 111&lt;br /&gt;
\hspace{0.05cm}. $$&lt;br /&gt;
&lt;br /&gt;
In contrast, with the same mapping and&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = 0.4\hspace{0.05cm}, \hspace{0.2cm}p_{\rm B} = 0.3\hspace{0.05cm}, \hspace{0.2cm}p_{\rm C} = 0.2&lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}p_{\rm D} = 0.1\hspace{0.05cm}&lt;br /&gt;
\hspace{0.3cm} \Rightarrow \hspace{0.3cm} H = 1.845\,\, {\rm bit/source symbol}$$&lt;br /&gt;
&lt;br /&gt;
the average code word length&lt;br /&gt;
 &lt;br /&gt;
:$$L_{\rm M} = 0.4 \cdot 1 + 0.3 \cdot 2 + 0.2 \cdot 3 + 0.1 \cdot 3 &lt;br /&gt;
= 1.9\,\, {\rm bit/source symbol}\hspace{0.05cm}. $$&lt;br /&gt;
&lt;br /&gt;
Because of the unfavorably chosen symbol probabilities&amp;amp;nbsp; (not negative powers of two),&amp;amp;nbsp; $L_{\rm M} &amp;gt; H$ holds.}}&lt;br /&gt;
&lt;br /&gt;
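The numbers of Example 7 can be verified with a short Python sketch; entropy and average code word length are computed directly from their definitions:&lt;br /&gt;

```python
from math import log2

def entropy(p):
    """Entropy H in bit/source symbol of a discrete memoryless source."""
    return -sum(p_mu * log2(p_mu) for p_mu in p if p_mu > 0)

lengths = [1, 2, 3, 3]          # code A -> 0, B -> 10, C -> 110, D -> 111

# Negative powers of two: L_M = H = 1.75 bit/source symbol
p1 = [0.5, 0.25, 0.125, 0.125]
print(entropy(p1), sum(p * L for p, L in zip(p1, lengths)))

# Unfavorable probabilities: L_M = 1.9 bit/source symbol is larger than H
p2 = [0.4, 0.3, 0.2, 0.1]
print(entropy(p2), sum(p * L for p, L in zip(p2, lengths)))
```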
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 8:}$&amp;amp;nbsp;  &lt;br /&gt;
We will look at some very early attempts at source encoding for the transmission of natural texts, based on the letter frequencies given in the table below. &lt;br /&gt;
*In the literature, many different frequency values can be found,&amp;amp;nbsp; partly because the investigations were carried out for different languages. &lt;br /&gt;
*Mostly, however, the list starts with the blank and &amp;quot;E&amp;quot; and ends with letters like &amp;quot;X&amp;quot;,&amp;amp;nbsp; &amp;quot;Y&amp;quot;&amp;amp;nbsp; and&amp;amp;nbsp; &amp;quot;Q&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2323__Inf_T_2_1_S6_ganz_neu.png|center|frame|Letter encodings according to Bacon/Bandot, Morse and Huffman]]&lt;br /&gt;
&lt;br /&gt;
Please note the following about this table:&lt;br /&gt;
*The entropy of this alphabet with&amp;amp;nbsp; $M = 27$&amp;amp;nbsp; characters is approximately&amp;amp;nbsp; $H≈ 4 \, \rm bit/character$;&amp;amp;nbsp; we have not recalculated this here.&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Francis_Bacon Francis Bacon]&amp;amp;nbsp; had already given a binary code in 1623, in which each letter is represented by five bits: &amp;amp;nbsp; $L_{\rm M} = 5$.&lt;br /&gt;
*About 250 years later&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Baudot_code Jean-Maurice-Émile Baudot]&amp;amp;nbsp; adopted this code, which was later standardized for the entire telegraphy.&amp;amp;nbsp; One consideration important to him was that a code with a uniform five binary characters per letter is more difficult for an enemy to decipher, since he cannot draw conclusions about the transmitted character from the frequency of its occurrence.&lt;br /&gt;
*The last line in the above table gives an exemplary&amp;amp;nbsp; [[Information_Theory/Entropy_Coding_According_to_Huffman#The_Huffman.E2.80.93Algorithm|Huffman-Code]]&amp;amp;nbsp; for the above frequency distribution.&amp;amp;nbsp; Probable characters like &amp;quot;E&amp;quot; or &amp;quot;N&amp;quot; and also the &amp;quot;Blank&amp;quot; are represented with only three bits, the rare &amp;quot;Q&amp;quot; on the other hand with&amp;amp;nbsp; $11$&amp;amp;nbsp; bits. &lt;br /&gt;
*The average code word length &amp;amp;nbsp;$L_{\rm M} = H + ε$&amp;amp;nbsp; is slightly larger than&amp;amp;nbsp; $H$;&amp;amp;nbsp; we will not go into more detail here about the small positive quantity&amp;amp;nbsp; $ε$.&amp;amp;nbsp; Only this much: &amp;amp;nbsp; there is no prefix-free code with a smaller average code word length than the Huffman code.&lt;br /&gt;
*Also&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Morse_code Samuel Morse]&amp;amp;nbsp; took into account the different frequencies in his code for telegraphy, already in the 1830s.&amp;amp;nbsp; The Morse code of each character consists of two to four binary characters, which are designated here according to the application with dot (&amp;quot;short&amp;quot;) and bar (&amp;quot;long&amp;quot;).&lt;br /&gt;
*It is obvious that&amp;amp;nbsp; $L_{\rm M} &amp;lt; 4$&amp;amp;nbsp; applies to the Morse code, according to the penultimate line.&amp;amp;nbsp; But this is connected with the fact that this code is not prefix-free.&amp;amp;nbsp; Therefore, the radio operator had to pause after each short-long sequence so that the receiving station could decode the signal as well.}}&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
== Exercises for chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:2.1 Codierung mit und ohne Verlust|Aufgabe 2.1: Codierung mit und ohne Verlust]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:2.2 Kraftsche Ungleichung|Aufgabe 2.2: Kraftsche Ungleichung]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:2.2Z Mittlere Codewortlänge|Aufgabe 2.2Z: Mittlere Codewortlänge]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Compression_According_to_Lempel,_Ziv_and_Welch&amp;diff=35084</id>
		<title>Information Theory/Compression According to Lempel, Ziv and Welch</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Compression_According_to_Lempel,_Ziv_and_Welch&amp;diff=35084"/>
		<updated>2020-11-02T13:29:28Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Quellencodierung – Datenkomprimierung&lt;br /&gt;
|Vorherige Seite=Allgemeine Beschreibung&lt;br /&gt;
|Nächste Seite=Entropiecodierung nach Huffman&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Static and dynamic dictionary techniques == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Many data compression methods use dictionaries.&amp;amp;nbsp; The idea is the following: &lt;br /&gt;
*Construct a list of character patterns that occur in the text, &lt;br /&gt;
*and encode these patterns as indices of the list. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This procedure is particularly efficient if certain patterns are repeated frequently in the text and this is also taken into account in the coding.&amp;amp;nbsp; A distinction is made between&lt;br /&gt;
*procedures with a static dictionary,&lt;br /&gt;
*procedures with a dynamic dictionary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
$\text{(1) Procedure with static dictionary}$&lt;br /&gt;
&lt;br /&gt;
A static dictionary is only useful for very special applications, for example for a file of the following form:&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2424__Inf_T_2_2_S1a.png|center|frame|File to edit in this section]]&lt;br /&gt;
&lt;br /&gt;
For example, the assignments result in&lt;br /&gt;
     &lt;br /&gt;
:$$&amp;quot;\boldsymbol{\rm 0}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 000000} \hspace{0.05cm},\hspace{0.15cm} ... \hspace{0.15cm},\hspace{0.05cm}&lt;br /&gt;
&amp;quot;\boldsymbol{\rm 9}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 001001} \hspace{0.05cm},&lt;br /&gt;
&amp;quot;\hspace{-0.03cm}\_\hspace{-0.03cm}\_\hspace{0.03cm}&amp;quot; \hspace{0.1cm}{\rm (Blank)}\hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 001010} \hspace{0.05cm},$$&lt;br /&gt;
&lt;br /&gt;
:$$&amp;quot;\hspace{-0.01cm}.\hspace{-0.01cm}&amp;quot; \hspace{0.1cm}{\rm (period)}\hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 001011} \hspace{0.05cm},&lt;br /&gt;
&amp;quot;\hspace{-0.01cm},\hspace{-0.01cm}&amp;quot; \hspace{0.1cm}{\rm (comma)}\hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 001011} \hspace{0.05cm},&lt;br /&gt;
&amp;quot;{\rm end\hspace{-0.1cm}-\hspace{-0.1cm}of\hspace{-0.1cm}-\hspace{-0.1cm}line}&amp;quot;\hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 001101} \hspace{0.05cm},$$&lt;br /&gt;
&lt;br /&gt;
:$$&amp;quot;\boldsymbol{\rm A}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 100000} \hspace{0.05cm},\hspace{0.15cm} ... \hspace{0.15cm},\hspace{0.05cm}&lt;br /&gt;
&amp;quot;\boldsymbol{\rm E}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 100100} \hspace{0.05cm},&lt;br /&gt;
\hspace{0.15cm} ... \hspace{0.15cm},\hspace{0.05cm}&lt;br /&gt;
&amp;quot;\boldsymbol{\rm L}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 101011} \hspace{0.05cm},\hspace{0.15cm}&amp;quot;\boldsymbol{\rm M}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 101100} \hspace{0.05cm},$$&lt;br /&gt;
&lt;br /&gt;
:$$&amp;quot;\boldsymbol{\rm O}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 101110} \hspace{0.05cm},\hspace{0.15cm} ... \hspace{0.15cm},\hspace{0.05cm}&lt;br /&gt;
&amp;quot;\boldsymbol{\rm U}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 110100} \hspace{0.05cm},&lt;br /&gt;
&amp;quot;\boldsymbol{\rm Name\hspace{-0.1cm}:\hspace{-0.05cm}\_\hspace{-0.03cm}\_}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 010000} \hspace{0.05cm},\hspace{0.05cm}$$&lt;br /&gt;
&lt;br /&gt;
:$$&amp;quot;\boldsymbol{\rm ,\_\hspace{-0.03cm}\_Vorname\hspace{-0.1cm}:\hspace{-0.05cm}\_\hspace{-0.03cm}\_}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 010001} \hspace{0.05cm},\hspace{0.05cm}&lt;br /&gt;
&amp;quot;\boldsymbol{\rm ,\_\hspace{-0.03cm}\_Wohnort\hspace{-0.1cm}:\hspace{-0.05cm}\_\hspace{-0.03cm}\_}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 010010} \hspace{0.05cm},\hspace{0.15cm} ... \hspace{0.15cm}$$&lt;br /&gt;
&lt;br /&gt;
for the first line of the above text, binary source coded with six bits per character:&lt;br /&gt;
    &lt;br /&gt;
:$$\boldsymbol{010000} \hspace{0.15cm}\boldsymbol{100000} \hspace{0.15cm}\boldsymbol{100001} \hspace{0.15cm}\boldsymbol{100100} \hspace{0.15cm}\boldsymbol{101011} \hspace{0.3cm}&lt;br /&gt;
\Rightarrow \hspace{0.3cm}&lt;br /&gt;
\boldsymbol{(\rm Name\hspace{-0.1cm}:\hspace{-0.05cm}\_\hspace{-0.03cm}\_)&lt;br /&gt;
\hspace{0.05cm}(A)\hspace{0.05cm}(B)\hspace{0.05cm}(E)\hspace{0.05cm}(L)}$$&lt;br /&gt;
&lt;br /&gt;
:$$\boldsymbol{010001} \hspace{0.15cm}\boldsymbol{101011}\hspace{0.15cm} \boldsymbol{100100} \hspace{0.15cm}\boldsymbol{101110} &lt;br /&gt;
 \hspace{0.3cm}&lt;br /&gt;
\Rightarrow \hspace{0.3cm}&lt;br /&gt;
\boldsymbol{(,\hspace{-0.05cm}\_\hspace{-0.03cm}\_\rm Vorname\hspace{-0.1cm}:\hspace{-0.05cm}\_\hspace{-0.03cm}\_)&lt;br /&gt;
\hspace{0.05cm}(L)\hspace{0.05cm}(E)\hspace{0.05cm}(O)}$$&lt;br /&gt;
&lt;br /&gt;
:$$\boldsymbol{010010} \hspace{0.15cm}\boldsymbol{110100} \hspace{0.15cm}\boldsymbol{101011} \hspace{0.15cm}\boldsymbol{101100} &lt;br /&gt;
 \hspace{0.3cm}\Rightarrow \hspace{0.3cm}&lt;br /&gt;
\boldsymbol{(,\hspace{-0.05cm}\_\hspace{-0.03cm}\_\rm Wohnort\hspace{-0.1cm}:\hspace{-0.05cm}\_\hspace{-0.03cm}\_)&lt;br /&gt;
\hspace{0.05cm}(U)\hspace{0.05cm}(L)\hspace{0.05cm}(M)}&lt;br /&gt;
\hspace{0.05cm} $$&lt;br /&gt;
&lt;br /&gt;
:$$\boldsymbol{001101}&lt;br /&gt;
 \hspace{0.3cm}\Rightarrow \hspace{0.3cm}&lt;br /&gt;
({\rm end\hspace{-0.1cm}-\hspace{-0.1cm}of\hspace{-0.1cm}-\hspace{-0.1cm}line})&lt;br /&gt;
\hspace{0.05cm}$$&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &lt;br /&gt;
In this specific application, the first line can be represented with&amp;amp;nbsp; $14 \cdot 6 = 84$&amp;amp;nbsp; bits. &lt;br /&gt;
*In contrast, conventional binary coding would require&amp;amp;nbsp; $39 \cdot 7 = 273$&amp;amp;nbsp; bits, &lt;br /&gt;
*because due to the lowercase letters in the text, six bits per character would not be sufficient here. &lt;br /&gt;
*For the entire text, this results in&amp;amp;nbsp; $103 \cdot 6 = 618$&amp;amp;nbsp; bits versus&amp;amp;nbsp; $196 \cdot 7 = 1372$&amp;amp;nbsp; bits. &lt;br /&gt;
*However, the code table must also be known to the recipient.}}&lt;br /&gt;
&lt;br /&gt;
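The static-dictionary idea can be sketched in a few lines of Python.&amp;amp;nbsp; The table below contains only the five 6-bit entries that appear in the first line of the example above (here&amp;amp;nbsp; `__`&amp;amp;nbsp; stands for the two blanks); all other entries are omitted:&lt;br /&gt;

```python
# Static dictionary: every entry, whether a single character or a whole
# phrase, is mapped to one fixed-length 6-bit code word.
TABLE = {
    "Name:__": "010000",
    "A": "100000", "B": "100001", "E": "100100", "L": "101011",
}

def encode(tokens):
    """Concatenate the 6-bit code words of the given tokens."""
    return "".join(TABLE[t] for t in tokens)

def decode(bits, table=TABLE):
    """Split the bit string into 6-bit chunks and invert the table."""
    inverse = {code: token for token, code in table.items()}
    return [inverse[bits[i:i + 6]] for i in range(0, len(bits), 6)]

coded = encode(["Name:__", "A", "B", "E", "L"])
print(len(coded))      # 5 tokens * 6 bit = 30 bit
print(decode(coded))
```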
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
$\text{(2) Procedure with dynamic dictionary}$&lt;br /&gt;
&lt;br /&gt;
Nevertheless, all relevant compression methods do not work with static dictionaries, but with &#039;&#039;dynamic dictionaries&#039;&#039;, which are created successively only during the coding:&lt;br /&gt;
*Such procedures are flexible and do not have to be adapted to the application.&amp;amp;nbsp; One speaks of &#039;&#039;universal source coding procedures&#039;&#039;.&lt;br /&gt;
*A single pass is sufficient, whereas with a static dictionary the file must first be analyzed before the encoding process.&lt;br /&gt;
*At the sink, the dynamic dictionary is generated in the same way as at the source.&amp;amp;nbsp; This eliminates the need to transfer the dictionary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2926__Inf_T_2_2_S1b_neu.png|frame|Extract from the hexdump of a natural image in BMP format]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp; &lt;br /&gt;
The graphic shows a small section of&amp;amp;nbsp; $80$&amp;amp;nbsp; bytes of a&amp;amp;nbsp; [[Digital_Signal_Transmission/Applications for Multimedia Files#Pictures_in_BMP.E2.80.93Format_.281.29|BMP file]]&amp;amp;nbsp; in hexadecimal representation.&amp;amp;nbsp; It is the uncompressed representation of a natural image.&lt;br /&gt;
&lt;br /&gt;
*You can see that in this small section of a landscape image the bytes&amp;amp;nbsp; $\rm FF$,&amp;amp;nbsp; $\rm 55$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm 47$&amp;amp;nbsp; occur very frequently. &lt;br /&gt;
*Data compression is therefore promising. &lt;br /&gt;
*But since other parts of the&amp;amp;nbsp; $\text{4 MByte}$ file or other byte combinations dominate in other image contents, the use of a static dictionary would not be appropriate here.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2927__Inf_T_2_2_S1c_GANZ_neu.png|right|frame|Possible encoding of a simple graphic]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp; &lt;br /&gt;
For an artificially created graphic, for example a form, you could work with a static dictionary. &lt;br /&gt;
&lt;br /&gt;
We are looking at a b/w image with&amp;amp;nbsp; $27 × 27$&amp;amp;nbsp; pixels, where the mapping &amp;quot;black&amp;quot; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;&#039;0&#039;&#039;&#039;&amp;amp;nbsp; and &amp;quot;white&amp;quot; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;&#039;1&#039;&#039;&#039;&amp;amp;nbsp; has been agreed upon.&lt;br /&gt;
&lt;br /&gt;
*At the top (black marker) each line is described by&amp;amp;nbsp; $27$&amp;amp;nbsp; zeros.&lt;br /&gt;
*In the middle (blue marking), three zeros and three ones always alternate.&lt;br /&gt;
*At the bottom (red marking), in each line&amp;amp;nbsp; $25$&amp;amp;nbsp; ones are delimited by two zeros.}}&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==LZ77 - the basic form of the Lempel-Ziv-algorithms ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The most important procedures for data compression with a dynamic dictionary go back to&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Abraham_Lempel Abraham Lempel]&amp;amp;nbsp; and&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Jacob_Ziv Jacob Ziv].&amp;amp;nbsp; The entire Lempel-Ziv family&amp;amp;nbsp; (in the following we will briefly call them: &amp;amp;nbsp; LZ procedures)&amp;amp;nbsp; can be characterized as follows:&lt;br /&gt;
*Lempel-Ziv methods use the fact that often whole words, or at least parts of them, occur several times in a text.&amp;amp;nbsp; One collects all word fragments, which are also called&amp;amp;nbsp; &#039;&#039;phrases&#039;&#039;,&amp;amp;nbsp; in a sufficiently large dictionary.&lt;br /&gt;
*Contrary to the entropy coding developed before (by Shannon and Huffman), the frequency of single characters or character strings is not the basis of the compression here, so that the LZ procedures can be applied even without knowledge of the source statistics.&lt;br /&gt;
*Accordingly, LZ compression manages with a single pass, and neither the source symbol set size&amp;amp;nbsp; $M$&amp;amp;nbsp; nor the symbol set&amp;amp;nbsp; $\{q_μ\}$&amp;amp;nbsp; with&amp;amp;nbsp; $μ = 1$, ... , $M$&amp;amp;nbsp; has to be known.&amp;amp;nbsp; This is called &#039;&#039;universal source coding&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We first look at the Lempel-Ziv algorithm in its original form from 1977, known as&amp;amp;nbsp; [https://en.wikipedia.org/wiki/LZ77_and_LZ78#LZ77 LZ77]: &lt;br /&gt;
*This works with a window that is successively moved over the text;&amp;amp;nbsp; one also speaks of a&amp;amp;nbsp; &#039;&#039;sliding window&#039;&#039;. &lt;br /&gt;
*The window size&amp;amp;nbsp; $G$&amp;amp;nbsp; is an important parameter that decisively influences the compression result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2426__Inf_T_2_2_S2a_neu.png|center|frame|Sliding window with LZ77 compression]]&lt;br /&gt;
&lt;br /&gt;
The graphic shows an example of the&amp;amp;nbsp; &#039;&#039;sliding window&#039;&#039;.&amp;amp;nbsp; It is divided into&lt;br /&gt;
*the preview buffer&amp;amp;nbsp; $($blue background),&amp;amp;nbsp; and&lt;br /&gt;
*the search buffer&amp;amp;nbsp; $($red background, with the positions&amp;amp;nbsp; $P = 0$, ... , $7$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; window size&amp;amp;nbsp; $G = 8)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The edited text consists of the four words&amp;amp;nbsp; &#039;&#039;&#039;Miss&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;Mission&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;Mississippi&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;Mistral&#039;&#039;&#039;, each separated by a hyphen.&amp;amp;nbsp; At the time in question the preview buffer contains&amp;amp;nbsp; &#039;&#039;&#039;Mississi&#039;&#039;&#039;.&lt;br /&gt;
*Now search in the search buffer for the best match &amp;amp;nbsp; ⇒ &amp;amp;nbsp; the string with the maximum match length&amp;amp;nbsp; $L$.&amp;amp;nbsp; Here this results, for the position&amp;amp;nbsp; $P = 7$&amp;amp;nbsp; and the length&amp;amp;nbsp; $L = 5$, in&amp;amp;nbsp; &#039;&#039;&#039;Missi&#039;&#039;&#039;.&lt;br /&gt;
*This step is then expressed by the &#039;&#039;triple&#039;&#039;&amp;amp;nbsp; $(7,&amp;amp;nbsp; 5,&amp;amp;nbsp; $ &#039;&#039;&#039;s&#039;&#039;&#039;$)$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; in general&amp;amp;nbsp; $(P, \ L, \ Z)$, where&amp;amp;nbsp; $Z =$&amp;amp;nbsp;&#039;&#039;&#039;s&#039;&#039;&#039;&amp;amp;nbsp; specifies the first character that no longer matches the string found in the search buffer.&lt;br /&gt;
*At the end, the window is moved&amp;amp;nbsp; $L + 1 = 6$&amp;amp;nbsp; characters to the right.&amp;amp;nbsp; The preview buffer now contains&amp;amp;nbsp; &#039;&#039;&#039;sippi-Mi&#039;&#039;&#039;,&amp;amp;nbsp; the search buffer&amp;amp;nbsp; &#039;&#039;&#039;n-Missis&#039;&#039;&#039;,&amp;amp;nbsp; and the encoding gives the triple&amp;amp;nbsp; $(2, 2,$&amp;amp;nbsp; &#039;&#039;&#039;p&#039;&#039;&#039;$)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the following example, the LZ77 coding algorithm is described in more detail.&amp;amp;nbsp; The decoding runs in a similar way.&lt;br /&gt;
	 &lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp; &lt;br /&gt;
We consider the LZ77 encoding of the string&amp;amp;nbsp; &#039;&#039;&#039;ABABCBCBAABCABe&#039;&#039;&#039;&amp;amp;nbsp; according to the following graphic.&amp;amp;nbsp; The input sequence has the length $N = 15$.&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Further is assumed:&lt;br /&gt;
*The characters are&amp;amp;nbsp; $Z ∈ \{$ &#039;&#039;&#039;A&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;e&#039;&#039;&#039; $\}$,&amp;amp;nbsp; where&amp;amp;nbsp; &#039;&#039;&#039;e&#039;&#039;&#039;&amp;amp;nbsp; corresponds to the&amp;amp;nbsp; &#039;&#039;end-of-file&#039;&#039;&amp;amp;nbsp; marker (end of the input string).&lt;br /&gt;
*The sizes of the preview buffer and the search buffer are each&amp;amp;nbsp; $G = 4$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; positions&amp;amp;nbsp; $P ∈ \{0, 1, 2, 3\}$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2427__Inf_T_2_2_S2b_neu.png|frame|To illustrate the LZ77 encoding]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Display of the encoding process&amp;lt;/u&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Steps 1 and 2&amp;lt;/u&amp;gt;: &amp;amp;nbsp; The characters&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;&amp;amp;nbsp; are encoded by the triples&amp;amp;nbsp; $(0, 0,&amp;amp;nbsp; $ &#039;&#039;&#039;A&#039;&#039;&#039;$)$&amp;amp;nbsp; and&amp;amp;nbsp; $(0, 0,&amp;amp;nbsp; $ &#039;&#039;&#039;B&#039;&#039;&#039;$)$, because they are not yet stored in the search buffer. &amp;amp;nbsp; Then the sliding window is moved one position to the right.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step 3&amp;lt;/u&amp;gt;: &amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;&amp;amp;nbsp; is covered by the search buffer and at the same time the still unknown character&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039;&amp;amp;nbsp; is appended.&amp;amp;nbsp; After that, the sliding window is moved three positions to the right.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step 4&amp;lt;/u&amp;gt;: &amp;amp;nbsp; This step shows that the matched string&amp;amp;nbsp; &#039;&#039;&#039;BCB&#039;&#039;&#039;&amp;amp;nbsp; may also extend into the preview buffer.&amp;amp;nbsp; Now the window can be moved four positions to the right.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step 5&amp;lt;/u&amp;gt;: &amp;amp;nbsp; Only&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; is found in the search buffer, and&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;&amp;amp;nbsp; is appended as the new character.&amp;amp;nbsp; With a larger search buffer, however,&amp;amp;nbsp; &#039;&#039;&#039;ABC&#039;&#039;&#039;&amp;amp;nbsp; could be matched together.&amp;amp;nbsp; For this,&amp;amp;nbsp; $G ≥ 7$&amp;amp;nbsp; would be required.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step 6&amp;lt;/u&amp;gt;: &amp;amp;nbsp; Likewise, the character&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039;&amp;amp;nbsp; must be encoded separately because the buffer is too small.&amp;amp;nbsp; But since&amp;amp;nbsp; &#039;&#039;&#039;CA&#039;&#039;&#039;&amp;amp;nbsp; has not occurred before,&amp;amp;nbsp; $G = 7$&amp;amp;nbsp; would not improve the compression here either.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step 7&amp;lt;/u&amp;gt;: &amp;amp;nbsp; With the consideration of the end-of-file&amp;amp;nbsp; (&#039;&#039;&#039;e&#039;&#039;&#039;)&amp;amp;nbsp; together with&amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;&amp;amp;nbsp; from the search buffer, the encoding process is finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Before transmission, the specified triples must of course be binary coded.&amp;amp;nbsp; In this example, one needs&lt;br /&gt;
*two bits for the position&amp;amp;nbsp; $P ∈ \{0, 1, 2, 3\}$&amp;amp;nbsp; (yellow background in the table above),&lt;br /&gt;
*three bits for the copy length&amp;amp;nbsp; $L$&amp;amp;nbsp; (green background), so that&amp;amp;nbsp; $L = 7$&amp;amp;nbsp; could also be represented,&lt;br /&gt;
*two bits for each character&amp;amp;nbsp; (white background),&amp;amp;nbsp; for example&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039; &amp;amp;#8594; &#039;&#039;&#039;00&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039; &amp;amp;#8594; &#039;&#039;&#039;01&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039; &amp;amp;#8594; &#039;&#039;&#039;10&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;e&#039;&#039;&#039; (&amp;quot;end-of-file&amp;quot;) &amp;amp;#8594; &#039;&#039;&#039;11&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Thus the LZ77 output sequence has a length of&amp;amp;nbsp; $7 \cdot 7 = 49$&amp;amp;nbsp; bits, while the input sequence only needed&amp;amp;nbsp; $15 \cdot 2 = 30$&amp;amp;nbsp; bits.}}&lt;br /&gt;
&lt;br /&gt;
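The encoding just described can be sketched as a small LZ77 encoder/decoder pair in Python.&amp;amp;nbsp; This is a minimal sketch under simplifying assumptions: the position is stored as the distance from the current position back to the match start (the position numbering of the figures may differ), and a match plus the following character&amp;amp;nbsp; $Z$&amp;amp;nbsp; must fit into the preview buffer:&lt;br /&gt;

```python
def lz77_encode(text, G=4):
    """Greedy LZ77 with search buffer and preview buffer of size G each.
    Emits triples (P, L, Z); here P is the distance back to the match start."""
    triples, i = [], 0
    while i < len(text):
        best_len, best_pos = 0, 0
        for p in range(max(0, i - G), i):      # match starts in the search buffer
            l = 0
            # the match may extend into the preview buffer (overlap), but
            # match + next character Z must fit into the preview buffer
            while l < G - 1 and i + l < len(text) - 1 and text[p + l] == text[i + l]:
                l += 1
            if l > best_len:
                best_len, best_pos = l, i - p
        triples.append((best_pos, best_len, text[i + best_len]))
        i += best_len + 1
    return triples

def lz77_decode(triples):
    out = []
    for pos, length, ch in triples:
        start = len(out) - pos
        for k in range(length):                # char-by-char copy handles overlap
            out.append(out[start + k])
        out.append(ch)
    return "".join(out)

triples = lz77_encode("ABABCBCBAABCABe")
print(len(triples))            # 7 triples, i.e. 7 * 7 = 49 bit as above
print(lz77_decode(triples))    # ABABCBCBAABCABe
```

With these assumptions the sketch reproduces the seven triples of Example 3, and the decoder recovers the input sequence exactly.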
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &#039;&#039;&#039;A Lempel-Ziv compression only makes sense with large files!&#039;&#039;&#039;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==The Lempel-Ziv-Variant LZ78 ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The LZ77 algorithm produces very inefficient output if frequent strings are repeated only at larger distances.&amp;amp;nbsp; Such repetitions can often not be recognized due to the limited buffer size&amp;amp;nbsp; $G$&amp;amp;nbsp; of the&amp;amp;nbsp; &#039;&#039;sliding window&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Lempel and Ziv corrected this shortcoming already one year after the release of the first version LZ77: &lt;br /&gt;
*The algorithm LZ78 uses a global dictionary for compression instead of the local dictionary (search buffer). &lt;br /&gt;
*The unrestricted dictionary size allows efficient compression even of phrases that occurred a long time earlier.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 4:}$&amp;amp;nbsp; &lt;br /&gt;
To explain the LZ78 algorithm we consider the same sequence&amp;amp;nbsp; &#039;&#039;&#039;ABABCBCBAABCABe&#039;&#039;&#039;&amp;amp;nbsp; as for the LZ77-$\text{Example 3}$.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_2_2_S3_neu.png|frame|Generation of the dictionary and output at LZ78]]&lt;br /&gt;
&lt;br /&gt;
*The graphic shows&amp;amp;nbsp; (with red background)&amp;amp;nbsp; the dictionary with index&amp;amp;nbsp; $I$&amp;amp;nbsp; (in decimal and binary representation, columns 1 and 2)&amp;amp;nbsp; and the corresponding content (column 3), which is entered at coding step&amp;amp;nbsp; $i$&amp;amp;nbsp; (column 4).&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
*For LZ78, the coding step&amp;amp;nbsp; $i$&amp;amp;nbsp; and the dictionary index&amp;amp;nbsp; $I$&amp;amp;nbsp; always coincide&amp;amp;nbsp; $(i = I)$, both for encoding and decoding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*In column 5 you find the formalized code output&amp;amp;nbsp; $($Index&amp;amp;nbsp; $I$,&amp;amp;nbsp; new character&amp;amp;nbsp; $Z)$.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
*In column 6 the corresponding binary coding is given with four bits for the index and the same character assignment&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039; &amp;amp;#8594; &#039;&#039;&#039;00&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039; &amp;amp;#8594; &#039;&#039;&#039;01&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039; &amp;amp;#8594; &#039;&#039;&#039;10&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;e&#039;&#039;&#039; (&amp;quot;end-of-file&amp;quot;) &amp;amp;#8594; &#039;&#039;&#039;11&#039;&#039;&#039;&amp;amp;nbsp; as in&amp;amp;nbsp; $\text{Example 3}$.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
*At the beginning&amp;amp;nbsp; (step $\underline{i = 0}$)&amp;amp;nbsp; the dictionary is&amp;amp;nbsp; (WB)&amp;amp;nbsp; empty except for the entry&amp;amp;nbsp; &#039;&#039;&#039;ε&#039;&#039;&#039;&amp;amp;nbsp; $($empty character, not to be confused with the space character, which is not used here$)$&amp;amp;nbsp; with index&amp;amp;nbsp; $I = 0$.&lt;br /&gt;
*In the step&amp;amp;nbsp; $\underline{i = 1}$&amp;amp;nbsp; there is no usable entry in the dictionary yet, and it becomes&amp;amp;nbsp; (&#039;&#039;&#039;0,&amp;amp;nbsp; A&#039;&#039;&#039;)&amp;amp;nbsp; output&amp;amp;nbsp; (&#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; follows&amp;amp;nbsp; &#039;&#039;&#039;ε&#039;&#039;&#039;). &amp;amp;nbsp; In the dictionary, the entry&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; follows in line&amp;amp;nbsp; $I = 1$&amp;amp;nbsp; (abbreviated&amp;amp;nbsp; &#039;&#039;&#039;1: A&#039;&#039;&#039;).&lt;br /&gt;
*The procedure in the second step&amp;amp;nbsp; $(\underline{i = 2})$&amp;amp;nbsp; is analogous:&amp;amp;nbsp; The output is&amp;amp;nbsp; (&#039;&#039;&#039;0,&amp;amp;nbsp; B&#039;&#039;&#039;)&amp;amp;nbsp; and the dictionary entry is&amp;amp;nbsp; &#039;&#039;&#039;2: B&#039;&#039;&#039;.&lt;br /&gt;
*Since in step&amp;amp;nbsp; $\underline{i = 3}$&amp;amp;nbsp; the entry&amp;amp;nbsp; &#039;&#039;&#039;1: A&#039;&#039;&#039;&amp;amp;nbsp; is already found, the characters&amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;&amp;amp;nbsp; can be coded together by&amp;amp;nbsp; (&#039;&#039;&#039;1, B&#039;&#039;&#039;)&amp;amp;nbsp; and the new dictionary entry&amp;amp;nbsp; &#039;&#039;&#039;3: AB&#039;&#039;&#039;&amp;amp;nbsp; is made.&lt;br /&gt;
*After the new character&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039;&amp;amp;nbsp; has been coded and entered in step&amp;amp;nbsp; $\underline{i = 4}$, the character pair&amp;amp;nbsp; &#039;&#039;&#039;BC&#039;&#039;&#039;&amp;amp;nbsp; is coded together in step&amp;amp;nbsp; $\underline{i = 5}$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; (&#039;&#039;&#039;2, C&#039;&#039;&#039;)&amp;amp;nbsp; and entered into the dictionary as&amp;amp;nbsp; &#039;&#039;&#039;5: BC&#039;&#039;&#039;.&lt;br /&gt;
*In step&amp;amp;nbsp; $\underline{i = 6}$&amp;amp;nbsp; two characters are again treated together&amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;&#039;6: BA&#039;&#039;&#039;,&amp;amp;nbsp; and in the last two steps three each, namely&amp;amp;nbsp; &#039;&#039;&#039;7: ABC&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;8: ABe&#039;&#039;&#039;. &lt;br /&gt;
*The output&amp;amp;nbsp; (3, &#039;&#039;&#039;C&#039;&#039;&#039;)&amp;amp;nbsp; in step&amp;amp;nbsp; $\underline{i = 7}$&amp;amp;nbsp; stands for&amp;amp;nbsp; &amp;quot;WB(3) + &#039;&#039;&#039;C&#039;&#039;&#039;&amp;quot; = &#039;&#039;&#039;ABC&#039;&#039;&#039;,&amp;amp;nbsp; and the output&amp;amp;nbsp; (3, &#039;&#039;&#039;e&#039;&#039;&#039;)&amp;amp;nbsp; in step&amp;amp;nbsp; $\underline{i = 8}$&amp;amp;nbsp; for&amp;amp;nbsp; &#039;&#039;&#039;ABe&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this&amp;amp;nbsp; $\text{Example 4}$&amp;amp;nbsp; the LZ78 code symbol sequence thus consists of&amp;amp;nbsp; $8 \cdot 6 = 48$&amp;amp;nbsp; bits.&amp;amp;nbsp; The result is comparable to the LZ77-$\text{Example 3}$&amp;amp;nbsp; $(49$ bits$)$.}}&lt;br /&gt;
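The coding steps of&amp;amp;nbsp; $\text{Example 4}$&amp;amp;nbsp; can be reproduced with a compact LZ78 sketch.&amp;amp;nbsp; Function name and data layout are our own; the algorithm itself follows the description above (the dictionary starts with only the empty phrase, and grows by one entry per step):&lt;br /&gt;

```python
def lz78_encode(text):
    """LZ78: output (index, new character) pairs; the dictionary starts
    with only the empty phrase at index 0 and gains one entry per step."""
    dictionary = {"": 0}          # index 0: empty phrase (epsilon)
    output = []
    phrase = ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch          # extend the phrase while it is still known
        else:
            output.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)   # new dictionary entry
            phrase = ""
    if phrase:                    # flush a possibly pending phrase
        output.append((dictionary[phrase[:-1]], phrase[-1]))
    return output

pairs = lz78_encode("ABABCBCBAABCABe")
print(pairs)    # 8 pairs, e.g. step 3 codes "AB" as (1, 'B')
```

With four bits per index and two bits per character this gives the&amp;amp;nbsp; $8 \cdot 6 = 48$&amp;amp;nbsp; bits stated above.&lt;br /&gt;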
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; Details and improvements of LZ78 will be omitted here.&amp;amp;nbsp; Instead, we refer to the&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch#The_Lempel.E2.80.93Ziv.E2.80.93Welch.E2.80.93Algorithm|LZW-Algorithm]], which will be described on the following pages.&amp;amp;nbsp; Only this much will be said now:&lt;br /&gt;
*The index&amp;amp;nbsp; $I$&amp;amp;nbsp; is uniformly represented here with four bits, whereby the dictionary is limited to&amp;amp;nbsp; $16$&amp;amp;nbsp; entries.&amp;amp;nbsp; By a &#039;&#039;variable number of bits&#039;&#039;&amp;amp;nbsp; for the index one can bypass this limitation.&amp;amp;nbsp; At the same time one gets a better compression factor.&lt;br /&gt;
*With all LZ variants, the dictionary does not have to be transmitted; it is generated at the decoder in exactly the same way as at the coder.&amp;amp;nbsp; With LZ78, but not with LZW, decoding also proceeds in the same way as encoding.&lt;br /&gt;
*All LZ procedures are asymptotically optimal, i.e., for infinitely long sequences the mean code word length&amp;amp;nbsp; $L_{\rm M}$&amp;amp;nbsp; per source symbol is equal to the source entropy&amp;amp;nbsp; $H$.&lt;br /&gt;
*For short sequences, however, the deviation is considerable.&amp;amp;nbsp; More about this at&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch#Quantitative statements on asymptotic optimality|end of chapter]].}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==The Lempel-Ziv-Welch algorithm ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The most common variant of Lempel-Ziv compression used today was designed by&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Terry_Welch Terry Welch]&amp;amp;nbsp; and published in 1983.&amp;amp;nbsp; In the following we refer to it as the&amp;amp;nbsp; &#039;&#039;Lempel-Ziv-Welch-Algorithm&#039;&#039;, abbreviated as &amp;quot;LZW&amp;quot;. &amp;amp;nbsp; Just as LZ78 has slight advantages over LZ77&amp;amp;nbsp; (as expected, why else would the algorithm have been modified?),&amp;amp;nbsp; LZW also has more advantages than disadvantages compared to LZ78.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2430__Inf_T_2_2_S4_neu.png|center|frame|LZW encoding of the sequence&amp;amp;nbsp; &#039;&#039;&#039;ABABCBCBAABCABe&#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
The graphic shows the coder output for the exemplary input sequence&amp;amp;nbsp; &#039;&#039;&#039;ABABCBCBAABCABe&#039;&#039;&#039;.&amp;amp;nbsp; On the right is the dictionary (highlighted in red), which is successively generated during LZW encoding.&amp;amp;nbsp; The differences to LZ78 can be seen in comparison to the graphic on the last page, namely&lt;br /&gt;
*For LZW, all characters occurring in the text are already entered into the dictionary at the beginning&amp;amp;nbsp; $(i = 0)$&amp;amp;nbsp; and assigned a binary sequence, in the example with the indices&amp;amp;nbsp; $I = 0$, ... ,&amp;amp;nbsp; $I = 3$.&amp;amp;nbsp; This also means that LZW requires some knowledge of the message source, whereas LZ78 is a &amp;quot;true universal encoding&amp;quot;.&lt;br /&gt;
*For LZW, only the dictionary index&amp;amp;nbsp; $I$&amp;amp;nbsp; is transmitted for each encoding step&amp;amp;nbsp; $i$,&amp;amp;nbsp; while for LZ78 the output is the combination&amp;amp;nbsp; $(I$,&amp;amp;nbsp; $Z)$,&amp;amp;nbsp; where&amp;amp;nbsp; $Z$&amp;amp;nbsp; denotes the current new character. &amp;amp;nbsp; Due to the absence of&amp;amp;nbsp; $Z$&amp;amp;nbsp; in the code output, LZW decoding is more complicated than LZ78 decoding, as described in the section&amp;amp;nbsp; [[Information_Theory/Compression_According_to_Lempel,_Ziv_and_Welch#Decoding_of_LZW.E2.80.93Algorithm|Decoding of LZW&amp;amp;ndash;Algorithm]].&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 5:}$&amp;amp;nbsp; For this exemplary LZW encoding, as with &amp;quot;LZ77&amp;quot; and &amp;quot;LZ78&amp;quot; again the input sequence&amp;amp;nbsp; &#039;&#039;&#039;ABABCBCBAABCABe&#039;&#039;&#039;&amp;amp;nbsp; is assumed.&amp;amp;nbsp; So the following description refers to the above graphic.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step i = 0&amp;lt;/u&amp;gt; (default): &amp;amp;nbsp; The allowed characters&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;e&#039;&#039;&#039;&amp;amp;nbsp; (&amp;quot;end-of-file&amp;quot;) are entered into the dictionary and assigned to the indices&amp;amp;nbsp; $I = 0$, ... , $I = 3$&amp;amp;nbsp;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step i = 1&amp;lt;/u&amp;gt;: &amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; is coded by the decimal index&amp;amp;nbsp; $I = 0$&amp;amp;nbsp; and its binary representation&amp;amp;nbsp; &#039;&#039;&#039;0000&#039;&#039;&#039;&amp;amp;nbsp; is transmitted. &amp;amp;nbsp; Then the combination of the current character&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; and the following character&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;&amp;amp;nbsp; of the input sequence is stored in the dictionary under the index&amp;amp;nbsp; $I = 4$&amp;amp;nbsp;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step i = 2&amp;lt;/u&amp;gt;: &amp;amp;nbsp; Representation of&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;&amp;amp;nbsp; by index&amp;amp;nbsp; $I = 1$&amp;amp;nbsp; (binary: &#039;&#039;&#039;0001&#039;&#039;&#039;) as well as dictionary entry of&amp;amp;nbsp; &#039;&#039;&#039;BA&#039;&#039;&#039;&amp;amp;nbsp; placed under index&amp;amp;nbsp; $I = 5$.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step i = 3&amp;lt;/u&amp;gt;: &amp;amp;nbsp; Because of the entry&amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;&amp;amp;nbsp; at time&amp;amp;nbsp; $i = 1$&amp;amp;nbsp; the index to be transmitted is&amp;amp;nbsp; $I = 4$&amp;amp;nbsp; (binary: &#039;&#039;&#039;0100&#039;&#039;&#039;).&amp;amp;nbsp; New dictionary entry of&amp;amp;nbsp; &#039;&#039;&#039;ABC&#039;&#039;&#039;&amp;amp;nbsp; under&amp;amp;nbsp; $I = 6$.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step i = 8&amp;lt;/u&amp;gt;: &amp;amp;nbsp; Here the characters&amp;amp;nbsp; &#039;&#039;&#039;ABC&#039;&#039;&#039;&amp;amp;nbsp; are represented together by the index&amp;amp;nbsp; $I = 6$&amp;amp;nbsp; (binary: &#039;&#039;&#039;0110&#039;&#039;&#039;)&amp;amp;nbsp; and the entry for&amp;amp;nbsp; &#039;&#039;&#039;ABCA&#039;&#039;&#039;&amp;amp;nbsp; is made.&lt;br /&gt;
&lt;br /&gt;
With the encoding of&amp;amp;nbsp; &#039;&#039;&#039;e&#039;&#039;&#039;&amp;amp;nbsp; (EOF mark) the encoding process is finished after ten steps.&amp;amp;nbsp; With LZ78 only eight steps were needed.&amp;amp;nbsp; But it has to be considered:&lt;br /&gt;
*The LZW algorithm needs only&amp;amp;nbsp; $10 \cdot 4 = 40$&amp;amp;nbsp; bits versus the&amp;amp;nbsp; $8 \cdot 6 = 48$&amp;amp;nbsp; bits for LZ78.&amp;amp;nbsp; This simple calculation assumes that four bits are used for each index.&lt;br /&gt;
*Both LZW and LZ78 require fewer bits&amp;amp;nbsp; $($namely &amp;amp;nbsp; $34$&amp;amp;nbsp; or &amp;amp;nbsp; $42)$, if one considers that in step&amp;amp;nbsp; $i = 1$&amp;amp;nbsp; the index only has to be coded with two bits&amp;amp;nbsp; $(I ≤ 3)$&amp;amp;nbsp; and that for&amp;amp;nbsp; $2 ≤ i ≤ 5$&amp;amp;nbsp; three bits are sufficient&amp;amp;nbsp; $(I ≤ 7)$.}}&lt;br /&gt;
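The ten indices of this LZW example can be reproduced with a minimal coder sketch.&amp;amp;nbsp; The four characters are preassigned as in the graphic; the function name and data layout are our own:&lt;br /&gt;

```python
def lzw_encode(text, alphabet=("A", "B", "C", "e")):
    """LZW: the dictionary is preassigned with all allowed characters;
    only indices are transmitted, never explicit characters."""
    dictionary = {ch: i for i, ch in enumerate(alphabet)}
    output = []
    phrase = ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch                             # extend known phrase
        else:
            output.append(dictionary[phrase])        # transmit index only
            dictionary[phrase + ch] = len(dictionary)  # new entry, next index
            phrase = ch
    output.append(dictionary[phrase])                # flush the last phrase
    return output

indices = lzw_encode("ABABCBCBAABCABe")
print(indices)      # ten indices, i.e. 10 * 4 = 40 bits with 4-bit indices
```

Step&amp;amp;nbsp; $i = 8$&amp;amp;nbsp; indeed outputs index&amp;amp;nbsp; $I = 6$&amp;amp;nbsp; for&amp;amp;nbsp; &#039;&#039;&#039;ABC&#039;&#039;&#039;, as described above.&lt;br /&gt;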
&lt;br /&gt;
&lt;br /&gt;
The following pages describe in detail the variable bit count for index representation and the decoding of LZ78- and LZW-encoded binary sequences.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Lempel-Ziv-Coding with variable index bit length == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the most compact possible representation, we now consider only binary sources with the value set&amp;amp;nbsp; $\{$&#039;&#039;&#039;A&#039;&#039;&#039;, &#039;&#039;&#039;B&#039;&#039;&#039;$\}$.&amp;amp;nbsp; The terminating character&amp;amp;nbsp; &#039;&#039;&#039;end-of-file&#039;&#039;&#039;&amp;amp;nbsp; is also not considered.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2432__Inf_T_2_2_S5_neu.png|center|frame|LZW-Coding of a binary input sequence]]&lt;br /&gt;
&lt;br /&gt;
We demonstrate the LZW coding by means of a screenshot of our interactive Flash module&amp;amp;nbsp; [[Applets:Lempel-Ziv-Welch|Lempel-Ziv-Welch&amp;amp;ndash;Algorithms]]. &lt;br /&gt;
&lt;br /&gt;
*In the first coding step&amp;amp;nbsp; $(i = 1)$,&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; is coded as&amp;amp;nbsp; &#039;&#039;&#039;0&#039;&#039;&#039;.&amp;amp;nbsp; Afterwards, the entry with index&amp;amp;nbsp; $I = 2$&amp;amp;nbsp; and content&amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;&amp;amp;nbsp; is made.&lt;br /&gt;
*As there are only two entries in step&amp;amp;nbsp; $i = 1$&amp;amp;nbsp; in the dictionary (&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;&amp;amp;nbsp;) one bit is sufficient. &amp;amp;nbsp; On the other hand, for step&amp;amp;nbsp; $i = 2$&amp;amp;nbsp; and&amp;amp;nbsp; $i = 3$&amp;amp;nbsp; for&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039; &amp;amp;nbsp;⇒&amp;amp;nbsp; &#039;&#039;&#039;01&#039;&#039;&#039;&amp;amp;nbsp; and &amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039; &amp;amp;nbsp;⇒&amp;amp;nbsp; &#039;&#039;&#039;00&#039;&#039;&#039;&amp;amp;nbsp; two bits are needed in each case.&lt;br /&gt;
*Starting from&amp;amp;nbsp; $i = 4$&amp;amp;nbsp; the index representation is done with three bits, then from&amp;amp;nbsp; $i = 8$&amp;amp;nbsp; with four bits and from&amp;amp;nbsp; $i = 16$&amp;amp;nbsp; with five bits.&amp;amp;nbsp; A simple algorithm for the respective index bit number&amp;amp;nbsp; $L(i)$&amp;amp;nbsp; can be derived.&lt;br /&gt;
*Let us finally consider the coding step&amp;amp;nbsp; $i = 18$.&amp;amp;nbsp; Here, the sequence&amp;amp;nbsp; &#039;&#039;&#039;ABABB&#039;&#039;&#039; marked in red, which was entered into the dictionary at time&amp;amp;nbsp; $i = 11$&amp;amp;nbsp; $($index&amp;amp;nbsp; $I = 13$ ⇒ &#039;&#039;&#039;1101&#039;&#039;&#039;$)$,&amp;amp;nbsp; is processed. &amp;amp;nbsp; However, because of&amp;amp;nbsp; $i ≥ 16$&amp;amp;nbsp; the coder output is now&amp;amp;nbsp; &#039;&#039;&#039;01101&#039;&#039;&#039;&amp;amp;nbsp; (green mark at the coder output).&lt;br /&gt;
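The &amp;quot;simple algorithm&amp;quot; for the index bit number mentioned above can be stated explicitly:&amp;amp;nbsp; at coding step&amp;amp;nbsp; $i$&amp;amp;nbsp; the dictionary holds the indices&amp;amp;nbsp; $0$, ... , $i$, so&amp;amp;nbsp; $L(i) = \lfloor \log_2 i\rfloor + 1$&amp;amp;nbsp; bits suffice.&amp;amp;nbsp; A sketch (the function name is ours):&lt;br /&gt;

```python
def index_bits(i):
    """Number of bits for the index at coding step i (binary source, LZW):
    before step i the dictionary holds the indices 0 ... i, so
    floor(log2(i)) + 1 bits are sufficient."""
    return i.bit_length()   # 1 for i=1, 2 for i=2..3, 3 for i=4..7, ...

print([index_bits(i) for i in (1, 2, 3, 4, 7, 8, 15, 16)])
```

This reproduces the values above: one bit at&amp;amp;nbsp; $i = 1$, two bits for&amp;amp;nbsp; $i = 2, 3$, three bits from&amp;amp;nbsp; $i = 4$, four from&amp;amp;nbsp; $i = 8$, five from&amp;amp;nbsp; $i = 16$.&lt;br /&gt;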
&lt;br /&gt;
&lt;br /&gt;
The statements also apply to&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch#The_Lempel.E2.80.93Ziv.E2.80.93Variant_LZ78|LZ78]].&amp;amp;nbsp; That is: &amp;amp;nbsp; With LZ78, a variable index bit length yields the same improvement as with LZW.&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==Decoding of the LZW algorithm == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The decoder now receives the coder output sequence of the&amp;amp;nbsp; [[Information_Theory/Compression_by_Lempel,_Ziv_and_Welch#Lempel-Ziv-Coding with variable index bit length|last page]]&amp;amp;nbsp; as its input sequence.&amp;amp;nbsp; The graphic shows that this sequence can be decoded uniquely even with variable index bit lengths.&amp;amp;nbsp; Please note:&lt;br /&gt;
&lt;br /&gt;
*The decoder knows that in the first coding step&amp;amp;nbsp; $(i = 1)$&amp;amp;nbsp; the index&amp;amp;nbsp; $I$&amp;amp;nbsp; was coded with only one bit, in the steps&amp;amp;nbsp; $i = 2$&amp;amp;nbsp; and&amp;amp;nbsp; $i = 3$&amp;amp;nbsp; with two bits, from&amp;amp;nbsp; $i = 4$&amp;amp;nbsp; with three bits, from&amp;amp;nbsp; $i = 8$&amp;amp;nbsp; with four bits, and so on.&lt;br /&gt;
*The decoder generates the same dictionary as the coder, but the dictionary entries are made one time step later. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2433__Inf_T_2_2_S6_neu.png|center|frame|LZW-Decoding of a binary input sequence]]&lt;br /&gt;
&lt;br /&gt;
*At step&amp;amp;nbsp; $\underline{i = 1}$&amp;amp;nbsp; the adjacent symbol&amp;amp;nbsp; &#039;&#039;&#039;0&#039;&#039;&#039;&amp;amp;nbsp; is decoded as&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp;. &amp;amp;nbsp; Likewise, the following results for step&amp;amp;nbsp; $\underline{i = 2}$&amp;amp;nbsp; from the preassignment of the dictionary and the two-bit representation agreed upon for this: &amp;amp;nbsp; &#039;&#039;&#039;01&#039;&#039;&#039; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;.&lt;br /&gt;
*The entry of the line&amp;amp;nbsp; $\underline{I = 2}$&amp;amp;nbsp; $($content: &amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;$)$&amp;amp;nbsp; of the dictionary is therefore only made at the step&amp;amp;nbsp; $\underline{i = 2}$, while at the&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch#Lempel-Ziv-Coding with variable index bit length|Coding process]]&amp;amp;nbsp; this could already be done at the end of step&amp;amp;nbsp; $i = 1$&amp;amp;nbsp;.&lt;br /&gt;
*Let us now consider the decoding for&amp;amp;nbsp; $\underline{i = 4}$. &amp;amp;nbsp; The received index&amp;amp;nbsp; $\underline{I = 2}$&amp;amp;nbsp; $($binary: &amp;amp;nbsp; &#039;&#039;&#039;010&#039;&#039;&#039;$)$&amp;amp;nbsp; returns the decoding result&amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;,&amp;amp;nbsp; and in the next step&amp;amp;nbsp; $(\underline{i = 5})$&amp;amp;nbsp; the dictionary line&amp;amp;nbsp; $\underline{I = 5}$&amp;amp;nbsp; is filled with&amp;amp;nbsp; &#039;&#039;&#039;ABA&#039;&#039;&#039;.&lt;br /&gt;
*This time difference with respect to the dictionary entries can lead to decoding problems.&amp;amp;nbsp; For example, for step&amp;amp;nbsp; $\underline{i = 7}$&amp;amp;nbsp; there is no dictionary entry with index&amp;amp;nbsp; $\underline{I= 7}$.&lt;br /&gt;
*What is to be done in such a case&amp;amp;nbsp; $(\underline{I = i})$?&amp;amp;nbsp; One takes the result of the previous decoding step&amp;amp;nbsp; $($here: &amp;amp;nbsp; &#039;&#039;&#039;BA&#039;&#039;&#039;&amp;amp;nbsp; for&amp;amp;nbsp; $\underline{i = 6})$&amp;amp;nbsp; and appends the first character of this sequence at its end again. &amp;amp;nbsp; This gives the decoding result for&amp;amp;nbsp; $\underline{i = 7}$&amp;amp;nbsp; as&amp;amp;nbsp; &#039;&#039;&#039;111&#039;&#039;&#039; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;&#039;BAB&#039;&#039;&#039;.&lt;br /&gt;
*Naturally, it is unsatisfactory to specify only one recipe.&amp;amp;nbsp; In the&amp;amp;nbsp; [[Aufgaben:Aufgabe_2.4Z:_Nochmals_LZW-Codierung_und_-Decodierung|Exercise 2.4Z]]&amp;amp;nbsp; you should justify the procedure demonstrated here.&amp;amp;nbsp; We refer to the sample solution for this exercise.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With LZ78 decoding, the problem described here does not occur, because not only the index&amp;amp;nbsp; $I$&amp;amp;nbsp; but also the current character&amp;amp;nbsp; $Z$&amp;amp;nbsp; is included in the encoding result and transmitted.&lt;br /&gt;
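The decoding rule described above, including the special case&amp;amp;nbsp; $I = i$&amp;amp;nbsp; (index not yet entered), can be sketched as follows.&amp;amp;nbsp; Variable names are our own; the dictionary is preassigned with the source symbols exactly as at the coder:&lt;br /&gt;

```python
def lzw_decode(indices, alphabet=("A", "B", "C", "e")):
    """LZW decoder: rebuilds the coder dictionary one step later.
    If a received index is not yet in the dictionary (the case I = i),
    the new phrase is the previous phrase plus its own first character."""
    dictionary = {i: ch for i, ch in enumerate(alphabet)}
    phrase = dictionary[indices[0]]
    result = [phrase]
    for index in indices[1:]:
        if index in dictionary:
            entry = dictionary[index]
        else:                                  # index not yet entered: I = i
            entry = phrase + phrase[0]         # previous phrase + its 1st char
        result.append(entry)
        dictionary[len(dictionary)] = phrase + entry[0]   # delayed entry
        phrase = entry
    return "".join(result)

print(lzw_decode([0, 1, 4, 2, 1, 7, 0, 6, 4, 3]))
```

Applied to the index sequence of the LZW coding example, this recovers the original sequence&amp;amp;nbsp; &#039;&#039;&#039;ABABCBCBAABCABe&#039;&#039;&#039;.&lt;br /&gt;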
 	&lt;br /&gt;
 &lt;br /&gt;
==Remaining redundancy as a measure for the efficiency of encoding methods==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the rest of this chapter we assume the following prerequisites:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;symbol range&#039;&#039;&amp;amp;nbsp; of the source&amp;amp;nbsp; $($or in the transmission sense: &amp;amp;nbsp; the number of stages$)$&amp;amp;nbsp; is&amp;amp;nbsp; $M$, where&amp;amp;nbsp; $M$&amp;amp;nbsp; represents a power of two &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 2, \ 4, \ 8, \ 16$, ....&lt;br /&gt;
*The source entropy is&amp;amp;nbsp; $H$.&amp;amp;nbsp; If there are no statistical bonds between the symbols and if they are equally probable, then&amp;amp;nbsp; $H = H_0$, where&amp;amp;nbsp; $H_0 = \log_2 \ M$&amp;amp;nbsp; indicates the decision content.&amp;amp;nbsp; Otherwise, $H &amp;lt; H_0$ applies.&lt;br /&gt;
*A symbol sequence of length&amp;amp;nbsp; $N$&amp;amp;nbsp; is source-coded and returns a binary code sequence of length&amp;amp;nbsp; $L$.&amp;amp;nbsp; For the time being we do not make any statement about the type of source coding.&lt;br /&gt;
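These quantities can be illustrated numerically.&amp;amp;nbsp; The probabilities below are those of the redundant quaternary source considered further down on this page; the helper function is our own:&lt;br /&gt;

```python
from math import log2

def entropy(probs):
    """Entropy H in bit/source symbol of a discrete memoryless source."""
    return -sum(p * log2(p) for p in probs if p > 0)

M = 4
H0 = log2(M)                          # decision content: 2 bit/source symbol
H = entropy([0.7, 0.1, 0.1, 0.1])     # approx. 1.357 bit/source symbol
print(H0, round(H, 3))
```

As stated above, for equally probable, independent symbols one would get&amp;amp;nbsp; $H = H_0$; here&amp;amp;nbsp; $H$&amp;amp;nbsp; is smaller than&amp;amp;nbsp; $H_0 = 2$.&lt;br /&gt;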
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
According to the&amp;amp;nbsp; [[Information_Theory/General_Description#Source Encoding Theorem|Source Encoding Theorem]]&amp;amp;nbsp; the mean code word length&amp;amp;nbsp; $L_{\rm M}$&amp;amp;nbsp; must be greater than or equal to the source entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; (in bit/source symbol).&amp;amp;nbsp; This means&lt;br /&gt;
*for the total length of the source-encoded binary sequence:&lt;br /&gt;
:$$L \ge N \cdot H \hspace{0.05cm},$$ &lt;br /&gt;
*for the relative redundancy of the code sequence, in the following called&amp;amp;nbsp; &#039;&#039;&#039;residual redundancy&#039;&#039;&#039;:&lt;br /&gt;
:$$r = \frac{L - N \cdot H}{L} \hspace{0.05cm}.$$&lt;br /&gt;
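This definition can be evaluated directly.&amp;amp;nbsp; The numbers below are those of the LZW simulation for the redundant binary source further down on this page&amp;amp;nbsp; $(H = 0.5$, $N = 10000$, $L = 6800)$; the function name is ours:&lt;br /&gt;

```python
def residual_redundancy(L, N, H):
    """Relative redundancy r = (L - N*H) / L of a code sequence of length L
    for a source sequence of length N with entropy H (bit/source symbol)."""
    return (L - N * H) / L

# Redundant binary source, H = 0.5 bit/source symbol, simulated L(N = 10000):
r = residual_redundancy(L=6800, N=10000, H=0.5)
print(round(r * 100, 1))   # 26.5 (percent)
```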
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 6:}$&amp;amp;nbsp; If there were a&amp;amp;nbsp; &#039;&#039;perfect source encoding&#039;&#039; for a redundancy-free binary source symbol sequence&amp;amp;nbsp; $(M = 2,\ p_{\rm A} = p_{\rm B} = 0.5$,&amp;amp;nbsp; without statistical bonds$)$&amp;amp;nbsp; of length&amp;amp;nbsp; $N = 10000$, the code sequence would have length&amp;amp;nbsp; $L = 10000$. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Consequence:&amp;lt;/u&amp;gt; &amp;amp;nbsp; If for a code the result&amp;amp;nbsp; $L = N$&amp;amp;nbsp; is never possible, this code is called&amp;amp;nbsp; &#039;&#039;non&amp;amp;ndash;perfect&#039;&#039;.&lt;br /&gt;
*Lempel-Ziv is not suitable for this redundancy-free message source; it will always be&amp;amp;nbsp; $L &amp;gt; N$.&amp;amp;nbsp; You can also put it quite succinctly: &amp;amp;nbsp; The perfect source encoding here is &amp;quot;no encoding at all&amp;quot;.&lt;br /&gt;
*A redundant binary source with &amp;amp;nbsp;$p_{\rm A} = 0.89$,&amp;amp;nbsp; $p_{\rm B} = 0.11$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $H = 0.5$&amp;amp;nbsp; could be represented with a perfect source encoding with&amp;amp;nbsp;$L = 5000$&amp;amp;nbsp; bit, without being able to say what this perfect source encoding looks like.&lt;br /&gt;
*For a quaternary source,&amp;amp;nbsp; $H &amp;gt; 1 \ \rm (bit/source symbol)$&amp;amp;nbsp; is possible, so that even with perfect source encoding there will always be&amp;amp;nbsp; $L &amp;gt; N$&amp;amp;nbsp;.&amp;amp;nbsp; If the source is redundancy-free&amp;amp;nbsp; (no bonds, all&amp;amp;nbsp; $M$&amp;amp;nbsp; symbols equally probable), it has entropy&amp;amp;nbsp; $H= 2 \ \rm (bit/source symbol)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For all these examples of perfect source encoding, the relative redundancy of the code sequence (residual redundancy) is&amp;amp;nbsp; $r = 0$. That is: &amp;amp;nbsp; The zeros and ones are equally probable and there are no statistical bonds between the individual binary symbols.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The problem is: &amp;amp;nbsp; At finite sequence length&amp;amp;nbsp; $N$&amp;amp;nbsp; there is no perfect source code!&#039;&#039;&#039;}} 	&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Efficiency of Lempel-Ziv encoding ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
From the Lempel-Ziv algorithms we know&amp;amp;nbsp; (and this statement can even be proved)&amp;amp;nbsp; that they are&amp;amp;nbsp; &#039;&#039;&#039;asymptotically optimal&#039;&#039;&#039;.&amp;amp;nbsp; This means that the relative redundancy of the code symbol sequence&amp;amp;nbsp; (here written as a function of the source symbol sequence length&amp;amp;nbsp; $N)$ &lt;br /&gt;
 &lt;br /&gt;
:$$r(N) = \frac{L(N) - N \cdot H}{L(N)}= 1 - \frac{ N \cdot H}{L(N)}\hspace{0.05cm}$$&lt;br /&gt;
&lt;br /&gt;
for large&amp;amp;nbsp; $N$&amp;amp;nbsp; returns the limit value &amp;quot;zero&amp;quot;:&lt;br /&gt;
 &lt;br /&gt;
:$$\lim_{N \rightarrow \infty}r(N) = 0 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
But what does the property&amp;amp;nbsp; &amp;quot;asymptotically optimal&amp;quot;&amp;amp;nbsp; mean for practical sequence lengths?&amp;amp;nbsp; Not too much, as the following screenshot of our simulation tool&amp;amp;nbsp; [[Applets:Lempel-Ziv-Welch|Lempel-Ziv-Algorithms]]&amp;amp;nbsp; shows.&amp;amp;nbsp; All curves apply exactly only to the&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch#The Lempel-Ziv-Welch algorithm|LZW-Algorithm]].&amp;amp;nbsp; However, the results for&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch#LZ77 - the basic form of the Lempel-Ziv-algorithms|LZ77]]&amp;amp;nbsp; and&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch#The Lempel-Ziv-Variant LZ78|LZ78]]&amp;amp;nbsp; are only slightly worse.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The three graphs show for different message sources the dependence of the following quantities on the source symbol sequence length&amp;amp;nbsp; $N$:&lt;br /&gt;
*the required number of bits&amp;amp;nbsp; $N \cdot \log_2 M$&amp;amp;nbsp; without source coding&amp;amp;nbsp; (black curves),&lt;br /&gt;
*the required number of bits&amp;amp;nbsp; $N \cdot H$&amp;amp;nbsp; with perfect source encoding&amp;amp;nbsp; (grey dashed),&lt;br /&gt;
*the required number of bits&amp;amp;nbsp; $L(N)$&amp;amp;nbsp; for LZW coding&amp;amp;nbsp; (red curves after averaging),&lt;br /&gt;
*the relative redundancy &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; residual redundancy &amp;amp;nbsp;$r(N)$&amp;amp;nbsp; in case of LZW coding (green curves).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2450__Inf_T_2_2_S7b_neu.png|frame|Example curves of&amp;amp;nbsp; $L(N)$&amp;amp;nbsp; and&amp;amp;nbsp; $r(N)$]]&lt;br /&gt;
&lt;br /&gt;
$\underline{\text{Redundant binary source (upper graphic)} }$ &lt;br /&gt;
:$$M = 2, \hspace{0.1cm}p_{\rm A} = 0.89,\hspace{0.1cm} p_{\rm B} = 0.11$$&lt;br /&gt;
:$$\Rightarrow \hspace{0.15cm} H = 0.5 \ \rm bit/source symbol\text{:}$$ &lt;br /&gt;
*The black and grey curves are true straight lines (not only for this parameter set).&lt;br /&gt;
*The red curve&amp;amp;nbsp; $L(N)$&amp;amp;nbsp; is slightly curved&amp;amp;nbsp; (difficult to see with the naked eye).&lt;br /&gt;
*Because of this curvature of&amp;amp;nbsp; $L(N)$&amp;amp;nbsp; the residual redundancy (green curve) drops slightly.&lt;br /&gt;
:$$r(N) = 1 - 0.5 \cdot N/L(N).$$ &lt;br /&gt;
*The following numerical values can be read off: &lt;br /&gt;
:$$L(N = 10000) = 6800,\hspace{0.5cm}&lt;br /&gt;
r(N = 10000) = 26.5\%.$$&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
$\underline{\text{Redundancy-free binary source (middle graphic)} }$ &lt;br /&gt;
:$$M = 2,\hspace{0.1cm} p_{\rm A} = p_{\rm B} = 0.5$$ &lt;br /&gt;
:$$\Rightarrow \hspace{0.15cm} H = 1 \ \rm bit/source symbol\text{:}$$&lt;br /&gt;
* Here the grey and the black straight line coincide and the slightly curved red curve lies above it, as expected. &lt;br /&gt;
*Although the LZW coding brings a deterioration here, recognizable from the indication&amp;amp;nbsp; $L(N = 10000) = 12330$, the relative redundancy is smaller than in the upper graph: &lt;br /&gt;
:$$r(N = 10000) = 18.9\%.$$&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
$\underline{\text{Redundant quaternary source (lower graphic)} }$&lt;br /&gt;
:$$M = 4,\hspace{0.1cm}p_{\rm A} = 0.7,\hspace{0.1cm} p_{\rm B} = p_{\rm C} = p_{\rm D} = 0.1$$&lt;br /&gt;
:$$ \Rightarrow \hspace{0.15cm} H \approx 1.357 \ \rm bit/source symbol\text{:}$$&lt;br /&gt;
* Without source coding, for&amp;amp;nbsp; $N = 10000$&amp;amp;nbsp; quaternary symbols&amp;amp;nbsp; $20000$&amp;amp;nbsp; binary symbols (bits) would be required (black curve).&lt;br /&gt;
* If source encoding were perfect, this would result in&amp;amp;nbsp; $N \cdot H= 13570$&amp;amp;nbsp; bits&amp;amp;nbsp; (grey curve).&lt;br /&gt;
* With (imperfect) LZW encoding one needs&amp;amp;nbsp; $L(N = 10000) ≈ 16485$&amp;amp;nbsp; bits&amp;amp;nbsp; (red curve). &lt;br /&gt;
*The relative redundancy here is&amp;amp;nbsp; $r(N = 10000) ≈ 17.7\%$&amp;amp;nbsp; (green curve).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Quantitative statements on asymptotic optimality==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The results on the last page have shown that the relative residual redundancy&amp;amp;nbsp; $r(N = 10000)$&amp;amp;nbsp; is significantly greater than the theoretically promised value&amp;amp;nbsp; $r(N \to \infty) = 0$. &lt;br /&gt;
&lt;br /&gt;
This practically relevant result shall now be clarified using the example of the redundant binary source with&amp;amp;nbsp; $H = 0.5 \ \rm bit/source symbol$&amp;amp;nbsp; according to the upper graphic on the last page.&amp;amp;nbsp; However, we now consider values between&amp;amp;nbsp; $N=10^3$&amp;amp;nbsp; and&amp;amp;nbsp; $N=10^{12}$&amp;amp;nbsp; for the source symbol sequence length.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2443__Inf_T_2_2_S8_neu.png|frame|LZW-Rest redundancy&amp;amp;nbsp; $r(N)$&amp;amp;nbsp; with redundant binary source&amp;amp;nbsp; $(H = 0.5)$ ]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 7:}$&amp;amp;nbsp; The graphic shows simulations with&amp;amp;nbsp; $N = 1000$&amp;amp;nbsp; binary symbols. &lt;br /&gt;
*After averaging over ten series of experiments the result is&amp;amp;nbsp; $r(N = 1000) ≈ 35.2\%$. &lt;br /&gt;
*Below the yellow dot&amp;amp;nbsp; $($in the example at&amp;amp;nbsp; $N ≈ 150)$&amp;amp;nbsp; the LZW algorithm even brings a deterioration. &lt;br /&gt;
*In this range&amp;amp;nbsp; $L &amp;gt; N$&amp;amp;nbsp; holds, that is: &amp;amp;nbsp; &amp;lt;br&amp;gt;the red curve lies above the black one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the table below, the results for this redundant binary source&amp;amp;nbsp; $(H = 0.5)$&amp;amp;nbsp; are summarized:&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_2_2_S8b_neu.png|right|frame|Some numerical values for the efficiency of LZW coding]]&lt;br /&gt;
&lt;br /&gt;
*The compression factor&amp;amp;nbsp; $K(N)= L(N)/N$&amp;amp;nbsp; decreases only very slowly with increasing&amp;amp;nbsp; $N$&amp;amp;nbsp; (line 3).&lt;br /&gt;
*In line 4 the residual redundancy&amp;amp;nbsp; $r(N)$&amp;amp;nbsp; is given for different lengths between&amp;amp;nbsp; $N =1000$&amp;amp;nbsp; and&amp;amp;nbsp; $N =50000$. &lt;br /&gt;
*According to relevant literature, this residual redundancy decreases proportionally to&amp;amp;nbsp; $\big[\hspace{0.05cm}\lg(N)\hspace{0.05cm}\big]^{-1}$. &lt;br /&gt;
*In line 5 the results of an empirical formula are entered $($fitted at $N = 10000)$:&lt;br /&gt;
 &lt;br /&gt;
:$$r\hspace{0.05cm}&#039;(N) = \frac{A}{ {\rm lg}\hspace{0.1cm}(N)}\hspace{0.5cm}{\rm with}$$&lt;br /&gt;
:$$ A = {r(N = 10000)} \cdot { {\rm lg}\hspace{0.1cm}10000} = 0.265 \cdot 4 = 1.06&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
*You can see the good agreement between our simulation results&amp;amp;nbsp; $r(N)$&amp;amp;nbsp; and the rule of thumb&amp;amp;nbsp; $r\hspace{0.05cm}′(N)$. &lt;br /&gt;
*You can also see that the residual redundancy of the LZW algorithm is still&amp;amp;nbsp; $8.8\%$&amp;amp;nbsp; for&amp;amp;nbsp; $N = 10^{12}$. &lt;br /&gt;
*For other sources with other&amp;amp;nbsp; $A$&amp;amp;ndash;values you will get similar results.&amp;amp;nbsp; The principal behavior remains the same.&amp;amp;nbsp; See also&amp;amp;nbsp; [[Aufgaben:Aufgabe_2.5:_Restredundanz_bei_LZW-Codierung|Exercise 2.5]]&amp;amp;nbsp; and&amp;amp;nbsp; [[Aufgaben:Aufgabe_2.5Z:_Komprimierungsfaktor_vs._Restredundanz|Exercise 2.5Z]].}}&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==Exercises for the chapter==	   &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:2.3 Zur LZ78-Komprimierung|Exercise 2.3: On LZ78 Compression]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:2.3Z Zur LZ77-Codierung|Exercise 2.3Z: On LZ77 Coding]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:2.4 Zum LZW-Algorithmus|Exercise 2.4: On the LZW Algorithm]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:2.4Z Nochmals LZW-Codierung und -Decodierung|Exercise 2.4Z: LZW Coding and Decoding Revisited]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Aufgabe_2.5:_Restredundanz_bei_LZW-Codierung|Exercise 2.5: Relative Residual Redundancy with LZW Coding]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Aufgabe_2.5Z:_Komprimierungsfaktor_vs._Restredundanz|Exercise 2.5Z: Compression Factor vs. Residual Redundancy]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Natural_Discrete_Sources&amp;diff=35083</id>
		<title>Information Theory/Natural Discrete Sources</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Natural_Discrete_Sources&amp;diff=35083"/>
		<updated>2020-11-02T13:15:50Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Entropie wertdiskreter Nachrichtenquellen&lt;br /&gt;
|Vorherige Seite=Nachrichtenquellen mit Gedächtnis&lt;br /&gt;
|Nächste Seite=Allgemeine Beschreibung&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Difficulties with the determination of entropy ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Up to now, we have been dealing exclusively with artificially generated symbol sequences.&amp;amp;nbsp; Now we consider written texts.&amp;amp;nbsp; Such a text can be seen as a natural discrete-value message source, which of course can also be analyzed information-theoretically by determining its entropy.&lt;br /&gt;
&lt;br /&gt;
Even today (2011), natural texts are still often represented with the 8 bit character set according to ANSI (&#039;&#039;American National Standards Institute&#039;&#039;), although there are several &amp;quot;more modern&amp;quot; encodings. &lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; $M = 2^8 = 256$&amp;amp;nbsp; ANSI characters are used as follows:&lt;br /&gt;
* &#039;&#039;&#039;No.&amp;amp;nbsp; 0 &amp;amp;nbsp; to &amp;amp;nbsp; 31&#039;&#039;&#039;: &amp;amp;nbsp; control commands that cannot be printed or displayed,&lt;br /&gt;
* &#039;&#039;&#039;No.&amp;amp;nbsp; 32 &amp;amp;nbsp; to &amp;amp;nbsp;127&#039;&#039;&#039;: &amp;amp;nbsp; identical to the characters of the 7 bit ASCII code,&lt;br /&gt;
* &#039;&#039;&#039;No.&amp;amp;nbsp; 128 &amp;amp;nbsp; to 159&#039;&#039;&#039;: &amp;amp;nbsp; additional control characters or alphanumeric characters for Windows,&lt;br /&gt;
* &#039;&#039;&#039;No.&amp;amp;nbsp; 160 &amp;amp;nbsp; to &amp;amp;nbsp; 255&#039;&#039;&#039;: &amp;amp;nbsp; identical to the Unicode charts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Theoretically, one could also define the entropy here as the limit of the entropy approximation&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k \to \infty$,&amp;amp;nbsp; according to the procedure from the&amp;amp;nbsp; [[Information_Theory/Sources_with_Memory#Generalization to k -tuple and boundary crossing|last chapter]].&amp;amp;nbsp; In practice, however, insurmountable numerical limitations arise here as well:&lt;br /&gt;
&lt;br /&gt;
*Already for the entropy approximation&amp;amp;nbsp; $H_2$&amp;amp;nbsp; there are&amp;amp;nbsp; $M^2 = 256^2 = 65\hspace{0.1cm}536$&amp;amp;nbsp; possible two-tuples.&amp;amp;nbsp; Thus, the calculation requires the same amount of memory (in bytes). &amp;amp;nbsp; If one assumes that, on average, each tuple must occur&amp;amp;nbsp; $100$&amp;amp;nbsp; times for sufficiently reliable statistics, the length of the source symbol sequence should already be&amp;amp;nbsp; $N &amp;gt; 6.5 · 10^6$.&lt;br /&gt;
*The number of possible three-tuples is&amp;amp;nbsp; $M^3 &amp;gt; 16 · 10^6$&amp;amp;nbsp; and thus the required source symbol sequence length is already&amp;amp;nbsp; $N &amp;gt; 1.6 · 10^9$.&amp;amp;nbsp; With&amp;amp;nbsp; $42$&amp;amp;nbsp; lines per page and&amp;amp;nbsp; $80$&amp;amp;nbsp; characters per line, this corresponds to a book with about&amp;amp;nbsp; $500\hspace{0.1cm}000$&amp;amp;nbsp; pages.&lt;br /&gt;
*For a natural text the statistical dependencies extend much further than two or three characters.&amp;amp;nbsp; Küpfmüller gives a value of&amp;amp;nbsp; $100$&amp;amp;nbsp; for the German language.&amp;amp;nbsp; To determine the 100th entropy approximation you need&amp;amp;nbsp; $2^{800}$ ≈ $10^{240}$&amp;amp;nbsp; frequencies and, for reliable statistics,&amp;amp;nbsp; $100$&amp;amp;nbsp; times as many characters.&lt;br /&gt;
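The three bullet points can be checked with a few lines of arithmetic. The sketch below assumes, as in the text, that each $k$-tuple should occur about $100$ times for reliable statistics:

```python
import math

M = 256            # ANSI symbol range
OCCURRENCES = 100  # assumed average count per tuple for reliable statistics

for k in (2, 3):
    tuples = M ** k                 # number of possible k-tuples
    N_min = OCCURRENCES * tuples    # required source symbol sequence length
    print(f"k = {k}: {tuples:.2e} tuples  ->  N > {N_min:.2e}")

# Kuepfmueller's k = 100 for German: M^100 = 2^800, i.e. about 10^240 frequencies
digits = int(800 * math.log10(2))
print(f"k = 100: 2^800 is roughly 10^{digits} frequencies")
```

This confirms $N > 6.5\cdot 10^6$ for two-tuples and $N > 1.6\cdot 10^9$ for three-tuples, and why $k = 100$ is numerically hopeless.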
&lt;br /&gt;
&lt;br /&gt;
A justified question is therefore: &amp;amp;nbsp; How did&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Karl_K%C3%BCpfm%C3%BCller Karl Küpfmüller]&amp;amp;nbsp; determine the entropy of the German language in 1954?&amp;amp;nbsp; How did&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Claude_Shannon Claude Elwood Shannon]&amp;amp;nbsp; do the same for the English language, even before Küpfmüller?&amp;amp;nbsp; One thing shall be revealed in advance: &amp;amp;nbsp; Not with the approach described above.	 	 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Entropy estimation according to Küpfmüller ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Karl Küpfmüller investigated the entropy of German texts in his publication&amp;amp;nbsp; [Küpf54]&amp;lt;ref name =&#039;Küpf54&#039;&amp;gt;Küpfmüller, K.: &#039;&#039;Die Entropie der deutschen Sprache&#039;&#039;. Fernmeldetechnische Zeitung 7, 1954, S. 265-272.&amp;lt;/ref&amp;gt;.&amp;amp;nbsp; The following assumptions are made:&lt;br /&gt;
*an alphabet with&amp;amp;nbsp; $26$&amp;amp;nbsp; letters&amp;amp;nbsp; (no umlauts or punctuation marks),&lt;br /&gt;
*no consideration of the space character,&lt;br /&gt;
*no distinction between upper and lower case.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The decision content is therefore&amp;amp;nbsp; $H_0 = \log_2 (26) ≈ 4.7\ \rm bit/letter$. &lt;br /&gt;
&lt;br /&gt;
Küpfmüller&#039;s estimation is based on the following considerations:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(1)&#039;&#039;&#039;&amp;amp;nbsp; The&amp;amp;nbsp; &#039;&#039;&#039;first entropy approximation&#039;&#039;&#039;&amp;amp;nbsp; results from the letter frequencies in German texts.&amp;amp;nbsp; According to a study of 1939, &amp;quot;e&amp;quot; is the most frequent with a frequency of&amp;amp;nbsp; $16.7\%$,&amp;amp;nbsp; the rarest is &amp;quot;x&amp;quot; with&amp;amp;nbsp; $0.02\%$.&amp;amp;nbsp; Averaged over all letters we obtain&amp;amp;nbsp; $H_1 \approx 4.1\,\, {\rm bit/letter}\hspace{0.05 cm}.$&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;(2)&#039;&#039;&#039;&amp;amp;nbsp; Regarding the&amp;amp;nbsp; &#039;&#039;&#039;syllable frequency&#039;&#039;&#039;,&amp;amp;nbsp; Küpfmüller evaluates the &amp;quot;Häufigkeitswörterbuch der deutschen Sprache&amp;quot; (Frequency Dictionary of the German Language), published by&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Friedrich_Wilhelm_Kaeding Friedrich Wilhelm Kaeding]&amp;amp;nbsp; in 1898.&amp;amp;nbsp; He distinguishes between root syllables, prefixes, and ending syllables and thus arrives at the average information content of all syllables:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm syllable} = \hspace{-0.1cm} H_{\rm stem} + H_{\rm front} + H_{\rm end} + H_{\rm rest} \approx &lt;br /&gt;
4.15 + 0.82+1.62 + 2.0 \approx 8.6\,\, {\rm bit/syllable}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
:The following proportions were taken into account:&lt;br /&gt;
:*According to the Kaeding study of 1898, the&amp;amp;nbsp; $400$&amp;amp;nbsp; most common root syllables&amp;amp;nbsp; (beginning with &amp;quot;de&amp;quot;)&amp;amp;nbsp; represent&amp;amp;nbsp; $47\%$&amp;amp;nbsp; of a German text and contribute to the entropy with&amp;amp;nbsp; $H_{\text{Root}} ≈ 4.15 \ \rm bit/syllable$.&lt;br /&gt;
:*The contribution of the&amp;amp;nbsp; $242$&amp;amp;nbsp; most common prefixes&amp;amp;nbsp; $($in the first place &amp;quot;ge&amp;quot; with&amp;amp;nbsp; $9\%)$&amp;amp;nbsp; is put by Küpfmüller at&amp;amp;nbsp; $H_{\text{Pre}} ≈ 0.82 \ \rm bit/syllable$.&lt;br /&gt;
:*The contribution of the&amp;amp;nbsp; $118$&amp;amp;nbsp; most used ending syllables is&amp;amp;nbsp; $H_{\text{End}} ≈ 1.62 \ \rm bit/syllable$.&amp;amp;nbsp; Most often, &amp;quot;en&amp;quot; appears at the end of words with&amp;amp;nbsp; $30\%$.&lt;br /&gt;
:*The remaining&amp;amp;nbsp; $14\%$&amp;amp;nbsp; is distributed over syllables not yet measured.&amp;amp;nbsp; Küpfmüller assumes that there are&amp;amp;nbsp; $4000$&amp;amp;nbsp; of them and that they are equally probable.&amp;amp;nbsp; For these he assumes&amp;amp;nbsp; $H_{\text{Rest}} ≈ 2 \ \rm bit/syllable$.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(3)&#039;&#039;&#039;&amp;amp;nbsp; Küpfmüller determined the average number of letters per syllable to be&amp;amp;nbsp; $3.03$.&amp;amp;nbsp; From this he deduced the&amp;amp;nbsp; &#039;&#039;&#039;third entropy approximation&#039;&#039;&#039;&amp;amp;nbsp; regarding the letters: &lt;br /&gt;
:$$H_3 \approx {8.6}/{3.03}\approx 2.8\,\, {\rm bit/letter}\hspace{0.05 cm}.$$&lt;br /&gt;
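Steps '''(2)''' and '''(3)''' are plain arithmetic and can be verified directly. The variable names below are our own; the numbers are Küpfmüller's contributions from the text:

```python
# Kuepfmueller's syllable entropy contributions (bit/syllable)
H_root, H_prefix, H_end, H_rest = 4.15, 0.82, 1.62, 2.0
H_syllable = H_root + H_prefix + H_end + H_rest   # about 8.6 bit/syllable

letters_per_syllable = 3.03
H3 = H_syllable / letters_per_syllable            # about 2.8 bit/letter
print(f"H_syllable = {H_syllable:.2f} bit/syllable, H3 = {H3:.2f} bit/letter")
```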
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(4)&#039;&#039;&#039;&amp;amp;nbsp; Küpfmüller&#039;s estimation of the entropy approximation&amp;amp;nbsp; $H_3$&amp;amp;nbsp; was based mainly on the syllable frequencies according to&amp;amp;nbsp; &#039;&#039;&#039;(2)&#039;&#039;&#039;&amp;amp;nbsp; and the mean value of&amp;amp;nbsp; $3.03$&amp;amp;nbsp; letters per syllable.&amp;amp;nbsp; To obtain a further entropy approximation&amp;amp;nbsp; $H_k$&amp;amp;nbsp; with larger&amp;amp;nbsp; $k$,&amp;amp;nbsp; Küpfmüller additionally analyzed the words in German texts.&amp;amp;nbsp; He came to the following results:&lt;br /&gt;
&lt;br /&gt;
:*The&amp;amp;nbsp; $322$&amp;amp;nbsp; most common words provide an entropy contribution of&amp;amp;nbsp; $4.5 \ \rm bit/word$. &lt;br /&gt;
:*The contributions of the remaining&amp;amp;nbsp; $40\hspace{0.1cm}000$&amp;amp;nbsp; words were estimated, assuming that the frequencies of rare words are inversely proportional to their rank ([https://en.wikipedia.org/wiki/Zipf%27s_law Zipf&#039;s law]). &lt;br /&gt;
:*With these assumptions the average information content (related to words) is about&amp;amp;nbsp; $11 \ \rm bit/word$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(5)&#039;&#039;&#039;&amp;amp;nbsp; Counting the letters per word yielded an average of&amp;amp;nbsp; $5.5$.&amp;amp;nbsp; Analogous to point&amp;amp;nbsp; &#039;&#039;&#039;(3)&#039;&#039;&#039;,&amp;amp;nbsp; the entropy approximation for&amp;amp;nbsp; $k = 5.5$&amp;amp;nbsp; was formed.&amp;amp;nbsp; Küpfmüller gives the value&amp;amp;nbsp; $H_{5.5} \approx {11}/{5.5}\approx 2\,\, {\rm bit/letter}\hspace{0.05 cm}.$&amp;amp;nbsp; Of course,&amp;amp;nbsp; $k$&amp;amp;nbsp; can only assume integer values,&amp;amp;nbsp; according to&amp;amp;nbsp; [[Information_Theory/Sources_With_Memory#Generalization to k-tuple and boundary crossing|its definition]].&amp;amp;nbsp; This equation is therefore to be interpreted in such a way that&amp;amp;nbsp; $H_5$&amp;amp;nbsp; will be somewhat larger and&amp;amp;nbsp; $H_6$&amp;amp;nbsp; somewhat smaller than&amp;amp;nbsp; $2 \ {\rm bit/letter}$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2303__Inf_T_1_3_S2.png|right|frame|Approximate values of the entropy of the German language according to Küpfmüller]]&lt;br /&gt;
&#039;&#039;&#039;(6)&#039;&#039;&#039;&amp;amp;nbsp; Now one can try to obtain the final entropy value for&amp;amp;nbsp; $k \to \infty$&amp;amp;nbsp; by extrapolation from these three points:&lt;br /&gt;
:*The continuous line, taken from Küpfmüller&#039;s original work&amp;amp;nbsp; [Küpf54]&amp;lt;ref name =&#039;Küpf54&#039;&amp;gt;Küpfmüller, K.: &#039;&#039;Die Entropie der deutschen Sprache&#039;&#039;. Fernmeldetechnische Zeitung 7, 1954, S. 265-272.&amp;lt;/ref&amp;gt;,&amp;amp;nbsp; leads to the final entropy value&amp;amp;nbsp; $H = 1.6 \ \rm bit/letter$. &lt;br /&gt;
:*The green curves are two extrapolation attempts by the&amp;amp;nbsp; $\rm LNTwww$&amp;amp;nbsp; author&amp;amp;nbsp; (continuous curves through the three points).  &lt;br /&gt;
:*These and the brown arrows are actually only meant to show that such an extrapolation is&amp;amp;nbsp; (carefully worded)&amp;amp;nbsp; somewhat vague.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(7)&#039;&#039;&#039;&amp;amp;nbsp; Küpfmüller then tried to verify the final value&amp;amp;nbsp; $H = 1.6 \ \rm bit/letter$&amp;amp;nbsp; found with this first estimation using a completely different methodology (see next section).&amp;amp;nbsp; Based on this second estimation he revised his result slightly to&amp;amp;nbsp; $H = 1.51 \ \rm bit/letter$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(8)&#039;&#039;&#039;&amp;amp;nbsp; Three years earlier, using a completely different approach, Claude E. Shannon had given the entropy value&amp;amp;nbsp; $H ≈ 1 \ \rm bit/letter$&amp;amp;nbsp; for the English language, but taking into account the space character.&amp;amp;nbsp; In order to be able to compare his results with Shannon&#039;s, Küpfmüller subsequently included the space character in his result. &lt;br /&gt;
&lt;br /&gt;
:*The correction factor is the quotient of the average word length without considering the space&amp;amp;nbsp; $(5.5)$&amp;amp;nbsp; and the average word length with consideration of the space&amp;amp;nbsp; $(5.5+1 = 6.5)$. &lt;br /&gt;
:*This correction led to Küpfmüller&#039;s final result&amp;amp;nbsp; $H =1.51 \cdot {5.5}/{6.5}\approx 1.3\,\, {\rm bit/letter}\hspace{0.05 cm}.$&lt;br /&gt;
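The space correction in step '''(8)''' can be sketched in a few lines (variable names our own); the exact value is about $1.28$, which the text rounds to $1.3 \ \rm bit/letter$:

```python
H_without_space = 1.51   # bit/letter, Kuepfmueller's revised value from (7)
avg_word_len = 5.5       # letters per word, space not counted

# correction factor: word length without space / word length with space
H_with_space = H_without_space * avg_word_len / (avg_word_len + 1)
print(f"H = {H_with_space:.2f} bit/letter")
```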
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==A further entropy estimation by Küpfmüller ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the sake of completeness, Küpfmüller&#039;s considerations are presented here, which led him to the final result&amp;amp;nbsp; $H = 1.51 \ \rm bit/letter$.&amp;amp;nbsp; Since no statistics on word groups or whole sentences were available, he estimated the entropy value of the German language as follows:&lt;br /&gt;
*A contiguous German text is covered from a certain word onward.&amp;amp;nbsp; The preceding text is read and the reader should try to guess the following word from the context of the preceding text.&lt;br /&gt;
*For a large number of such attempts, the percentage of hits gives a measure of the ties between words and sentences.&amp;amp;nbsp; It can be seen that for one and the same type of text (novels, scientific writings, etc.) by one and the same author, a constant final value of this hit ratio is reached relatively quickly&amp;amp;nbsp; (after about one hundred to two hundred attempts).&lt;br /&gt;
*The hit ratio, however, depends quite strongly on the type of text.&amp;amp;nbsp; For different texts, values between&amp;amp;nbsp; $15\%$&amp;amp;nbsp; and&amp;amp;nbsp; $33\%$, with the mean value at&amp;amp;nbsp; $22\%$, are obtained.&amp;amp;nbsp; This also means: &amp;amp;nbsp; On average,&amp;amp;nbsp; $22\%$&amp;amp;nbsp; of the words in a German text can be determined from the context.&lt;br /&gt;
*Alternatively: &amp;amp;nbsp; The word count of a long text can be reduced by the factor&amp;amp;nbsp; $0.78$&amp;amp;nbsp; without a significant loss of the message content of the text.&amp;amp;nbsp; Starting from the reference value&amp;amp;nbsp; $H_{5.5} = 2 \ \rm bit/letter$&amp;amp;nbsp; $($see point&amp;amp;nbsp; &#039;&#039;&#039;(5)&#039;&#039;&#039;&amp;amp;nbsp; in the last section$)$&amp;amp;nbsp; for a word of medium length, this results in the entropy&amp;amp;nbsp; $H ≈ 0.78 · 2 = 1.56 \ \rm bit/letter$.&lt;br /&gt;
*Küpfmüller verified this value with a comparable empirical study regarding the syllables, from which he determined the reduction factor&amp;amp;nbsp; $0.54$&amp;amp;nbsp; (regarding syllables).&amp;amp;nbsp; As the final result Küpfmüller obtained&amp;amp;nbsp; $H = 0.54 · H_3 ≈ 1.51 \ \rm bit/letter$, where&amp;amp;nbsp; $H_3 ≈ 2.8 \ \rm bit/letter$&amp;amp;nbsp; corresponds to the entropy of a syllable of medium length&amp;amp;nbsp; $($about three letters, see point&amp;amp;nbsp; &#039;&#039;&#039;(3)&#039;&#039;&#039;&amp;amp;nbsp; on the last page$)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The remarks on this and the previous page, which may be perceived as very critical, are not intended to diminish the importance of either Küpfmüller&#039;s entropy estimation or Shannon&#039;s contributions to the same topic. &lt;br /&gt;
*They are only meant to point out the great difficulties that arise in this task. &lt;br /&gt;
*This is perhaps also the reason why no one has dealt with this problem intensively since the 1950s.&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==Some own simulation results==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The information given by Karl Küpfmüller regarding the entropy of the German language shall now be compared with some (very simple) simulation results, which were obtained by the author of this chapter (Günter Söder) at the Chair of Communications Engineering of the Technical University of Munich in the course of a practical course.&amp;amp;nbsp; The results are based on&lt;br /&gt;
*the Windows program&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Programme/WDIT.zip WDIT] &amp;amp;nbsp;&amp;amp;rArr;&amp;amp;nbsp; the link refers to the ZIP version of the program; &lt;br /&gt;
*the associated practical training manual&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Texte/Wertdiskrete_Informationstheorie.pdf Wertdiskrete Informationstheorie (Value Discrete Information Theory)].  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link refers to the PDF version;&lt;br /&gt;
*the German Bible in ASCII format with&amp;amp;nbsp; $N \approx 4.37 \cdot 10^6$&amp;amp;nbsp; characters. This corresponds to a book with&amp;amp;nbsp; $1300$&amp;amp;nbsp; pages at&amp;amp;nbsp; $42$&amp;amp;nbsp; lines per page and&amp;amp;nbsp; $80$&amp;amp;nbsp; characters per line. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The symbol range has been reduced to&amp;amp;nbsp; $M = 33$&amp;amp;nbsp; and includes the characters&amp;amp;nbsp; &#039;&#039;&#039;a&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;b&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;c&#039;&#039;&#039;, ...,&amp;amp;nbsp; &#039;&#039;&#039;x&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;y&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;z&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ä&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ö&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ü&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ß&#039;&#039;&#039;,&amp;amp;nbsp; $\rm LZ$,&amp;amp;nbsp; $\rm ZI$,&amp;amp;nbsp; $\rm IP$. &amp;amp;nbsp; Our analysis did not differentiate between upper and lower case letters.&lt;br /&gt;
&lt;br /&gt;
In contrast to Küpfmüller&#039;s analysis, we also took into account:&lt;br /&gt;
*the German umlauts&amp;amp;nbsp; &#039;&#039;&#039;ä&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ö&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ü&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;ß&#039;&#039;&#039;, which make up about&amp;amp;nbsp; $1.2\%$&amp;amp;nbsp; of the biblical text, &lt;br /&gt;
*the class punctuation&amp;amp;nbsp; $\rm IP$&amp;amp;nbsp; (Interpunktion) with approx.&amp;amp;nbsp; $3\%$,&lt;br /&gt;
*the class digit&amp;amp;nbsp; $\rm ZI$&amp;amp;nbsp; (Ziffer) with approx.&amp;amp;nbsp; $1.3\%$&amp;amp;nbsp; because of the verse numbering within the Bible,&lt;br /&gt;
*the space (Leerzeichen)&amp;amp;nbsp; $\rm (LZ)$&amp;amp;nbsp; as the most common character&amp;amp;nbsp; $(17.8\%)$, even more than the &amp;quot;e&amp;quot;&amp;amp;nbsp; $(12.8\%)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following table summarizes the results.&amp;amp;nbsp; $N$&amp;amp;nbsp; indicates the analyzed file size in characters (bytes). &amp;amp;nbsp; The decision content&amp;amp;nbsp; $H_0$&amp;amp;nbsp; as well as the entropy approximations&amp;amp;nbsp; $H_1$,&amp;amp;nbsp; $H_2$&amp;amp;nbsp; and&amp;amp;nbsp; $H_3$&amp;amp;nbsp; were each determined from&amp;amp;nbsp; $N$&amp;amp;nbsp; characters and are each given in &amp;quot;bit/character&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_3_S3_vers2.png|left|frame|Entropy values (in bit/characters) of the German Bible]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
*Please do not consider these results to be scientific research.&lt;br /&gt;
*It is only an attempt to give students an understanding of the subject matter in an internship. &lt;br /&gt;
*The basis of this study was the Bible, since we had both its German and English versions available to us in the appropriate ASCII format.	 &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The results of the above table can be summarized as follows:&lt;br /&gt;
*In all rows the entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; decrease monotonically with increasing&amp;amp;nbsp; $k$.&amp;amp;nbsp; The decrease is convex, that means &amp;amp;nbsp; $H_1 - H_2 &amp;gt; H_2 - H_3$. &amp;amp;nbsp; The extrapolation of the final value&amp;amp;nbsp; $(k \to \infty)$&amp;amp;nbsp; from the three entropy approximations determined in each case is not possible (or only extremely vaguely).&lt;br /&gt;
*If the evaluation of the digits&amp;amp;nbsp; $(\rm ZI$, line 2 &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 32)$&amp;amp;nbsp; and additionally the evaluation of the punctuation marks&amp;amp;nbsp; $(\rm IP$, line 3 &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 31)$&amp;amp;nbsp; is omitted, the entropy approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; $($by&amp;amp;nbsp; $0.114)$,&amp;amp;nbsp; $H_2$&amp;amp;nbsp; $($by&amp;amp;nbsp; $0.063)$&amp;amp;nbsp; and&amp;amp;nbsp; $H_3$&amp;amp;nbsp; $($by&amp;amp;nbsp; $0.038)$&amp;amp;nbsp; decrease. &amp;amp;nbsp; On the final value&amp;amp;nbsp; $H$&amp;amp;nbsp; as the limit of&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k \to \infty$,&amp;amp;nbsp; the omission of digits and punctuation will probably have little effect.&lt;br /&gt;
*If one leaves the space&amp;amp;nbsp; $(\rm LZ$, line 4 &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 30)$&amp;amp;nbsp; out of consideration, the result is almost the same constellation as Küpfmüller originally considered.&amp;amp;nbsp; The only difference is the rather rare German special characters &#039;&#039;&#039;ä&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ö&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;ü&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;ß&#039;&#039;&#039;.&lt;br /&gt;
*The&amp;amp;nbsp; $H_1$-value&amp;amp;nbsp; $(4.132)$&amp;amp;nbsp; indicated in the last line corresponds very well with the value&amp;amp;nbsp; $H_1 ≈ 4.1$&amp;amp;nbsp; determined by Küpfmüller. &amp;amp;nbsp; However, with regard to the&amp;amp;nbsp; $H_3$-values there are clear differences: &amp;amp;nbsp; Our analysis yields&amp;amp;nbsp; $H_3 ≈ 3.4$, while Küpfmüller gives&amp;amp;nbsp; $H_3 ≈ 2.8$&amp;amp;nbsp; (all values in bit/letter).&lt;br /&gt;
*From the frequency of occurrence of the space&amp;amp;nbsp; $(17.8\%)$&amp;amp;nbsp; an average word length of&amp;amp;nbsp; $1/0.178 - 1 ≈ 4.6$&amp;amp;nbsp; results here, a smaller value than the&amp;amp;nbsp; $5.5$&amp;amp;nbsp; given by Küpfmüller.&amp;amp;nbsp; The discrepancy can be partly explained by our analysis file &amp;quot;Bible&amp;quot; (many spaces due to the verse numbering).&lt;br /&gt;
*The comparison of lines 3 and 4 is interesting.&amp;amp;nbsp; If the space is taken into account,&amp;amp;nbsp; $H_0$&amp;amp;nbsp; increases from&amp;amp;nbsp; $\log_2 \ (30) \approx 4.907$&amp;amp;nbsp; to&amp;amp;nbsp; $\log_2 \ (31) \approx 4.954$,&amp;amp;nbsp; but&amp;amp;nbsp; $H_1$&amp;amp;nbsp; decreases&amp;amp;nbsp; $($by the factor&amp;amp;nbsp; $0.98)$,&amp;amp;nbsp; as do&amp;amp;nbsp; $H_2$&amp;amp;nbsp; $($factor&amp;amp;nbsp; $0.96)$&amp;amp;nbsp; and&amp;amp;nbsp; $H_3$&amp;amp;nbsp; $($factor&amp;amp;nbsp; $0.93)$.&amp;amp;nbsp; Küpfmüller intuitively took this factor into account with&amp;amp;nbsp; $85\%$.&lt;br /&gt;
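Two of the quantities discussed in this list are easy to recompute. Below is a minimal sketch: the function `H1` (our own name) estimates the first entropy approximation from relative character frequencies of any text sample, and the second part derives the average word length from the relative space frequency $p_{\rm LZ} = 0.178$:

```python
import math
from collections import Counter

def H1(text):
    """First entropy approximation (bit/character), computed from the
    relative character frequencies of a text sample."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# average word length from the relative space frequency:
# one space per word, so words take the fraction 1 - p_LZ of the characters
p_LZ = 0.178
avg_word_length = 1 / p_LZ - 1   # about 4.6 letters per word
print(f"average word length = {avg_word_length:.1f} letters")
```

Applied to the Bible file described above, such an `H1` estimator yields the tabulated value of about $4.1$ bit/character for line 1.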
&lt;br /&gt;
&lt;br /&gt;
Although we consider this own research to be rather insignificant, we believe that for today&#039;s texts the&amp;amp;nbsp; $1.0 \ \rm bit/letter$&amp;amp;nbsp; given by Shannon for the English language, as well as Küpfmüller&#039;s&amp;amp;nbsp; $1.3 \ \rm bit/letter$&amp;amp;nbsp; for the German language, are somewhat too low, among other things because&lt;br /&gt;
*the symbol range today is larger than that considered by Shannon and Küpfmüller in the 1950s; for example, for the ASCII character set&amp;amp;nbsp; $M = 256$,&lt;br /&gt;
*the multiple formatting options (underlining, bold and italics, indents, colors) further increase the information content of a document.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Synthetically generated texts == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The graphic shows artificially generated German and English texts, which are taken from&amp;amp;nbsp; [Küpf54]&amp;lt;ref name =&#039;Küpf54&#039;&amp;gt;Küpfmüller, K.: &#039;&#039;Die Entropie der deutschen Sprache&#039;&#039;. Fernmeldetechnische Zeitung 7, 1954, S. 265-272.&amp;lt;/ref&amp;gt;.&amp;amp;nbsp; The underlying symbol range is&amp;amp;nbsp; $M = 27$,&amp;amp;nbsp; that means, all letters&amp;amp;nbsp; (without umlauts and &#039;&#039;&#039;ß&#039;&#039;&#039;)&amp;amp;nbsp; and the space character are considered.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_3_S4_vers2.png|right|frame|artificially generated German and English texts]]&lt;br /&gt;
&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;Zero-order letter approximation&#039;&#039;&amp;amp;nbsp; assumes equally probable characters in each case.&amp;amp;nbsp; There is therefore no difference between German (red) and English (blue).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;first letter approximation&#039;&#039;&amp;amp;nbsp; already considers the different frequencies, the higher order approximations also the preceding characters.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*In the&amp;amp;nbsp; &#039;&#039;4th order synthesis&#039;&#039;&amp;amp;nbsp; one can already recognize meaningful words.&amp;amp;nbsp; Here the probability for a new letter depends on the last three characters.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;first-order word approximation&#039;&#039;&amp;amp;nbsp; synthesizes sentences according to the word probabilities; the&amp;amp;nbsp; &#039;&#039;second-order word approximation&#039;&#039;&amp;amp;nbsp; also considers the preceding word.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Further information on the synthetic generation of German and English texts can be found in&amp;amp;nbsp; [[Aufgaben:1.8_Synthetisch_erzeugte_Texte|Exercise 1.8]].&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
==Exercises for the chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:1.7 Entropie natürlicher Texte|Exercise 1.7: Entropy of Natural Texts]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.8 Synthetisch erzeugte Texte|Exercise 1.8: Synthetically Generated Texts]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/General_Description&amp;diff=35082</id>
		<title>Information Theory/General Description</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/General_Description&amp;diff=35082"/>
		<updated>2020-11-02T13:15:14Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Quellencodierung – Datenkomprimierung&lt;br /&gt;
|Vorherige Seite=Natürliche wertdiskrete Nachrichtenquellen&lt;br /&gt;
|Nächste Seite=Komprimierung nach Lempel, Ziv und Welch&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== # OVERVIEW OF THE SECOND MAIN CHAPTER # ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Shannon&#039;s information theory is applied, for example, to the&amp;amp;nbsp; &#039;&#039;source coding&#039;&#039;&amp;amp;nbsp; of digital (i.e. discrete-value and discrete-time) message sources.&amp;amp;nbsp; In this context one also speaks of&amp;amp;nbsp; &#039;&#039;data compression&#039;&#039;. &lt;br /&gt;
*Attempts are made to reduce the redundancy of natural digital sources such as measurement data, texts, or voice and image files (after digitization) by recoding them, so that they can be stored and transmitted more efficiently. &lt;br /&gt;
*In most cases, source encoding is associated with a change of the symbol range.&amp;amp;nbsp; In the following, the output sequence is always binary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is a detailed discussion:&lt;br /&gt;
&lt;br /&gt;
*the different aims of &#039;&#039;source coding&#039;&#039;, &#039;&#039;channel coding&#039;&#039;&amp;amp;nbsp; and &#039;&#039;line coding&#039;&#039;,&lt;br /&gt;
*&#039;&#039;lossy&#039;&#039;&amp;amp;nbsp; encoding methods for analog sources, for example GIF, TIFF, JPEG, PNG, MP3,&lt;br /&gt;
*the &#039;&#039;source encoding theorem&#039;&#039;, which specifies a limit for the average codeword length,&lt;br /&gt;
*the frequently used data compression according to &#039;&#039;Lempel&#039;&#039;, &#039;&#039;Ziv&#039;&#039;&amp;amp;nbsp; and &#039;&#039;Welch&#039;&#039;,&lt;br /&gt;
*the &#039;&#039;Huffman code&#039;&#039;&amp;amp;nbsp; as the best known and most efficient form of entropy coding,&lt;br /&gt;
*the &#039;&#039;Shannon-Fano-Code&#039;&#039;&amp;amp;nbsp; as well as the &#039;&#039;arithmetic coding&#039;&#039; - both belong to the class of entropy encoders as well,&lt;br /&gt;
*the &#039;&#039;run-length encoding&#039;&#039;&amp;amp;nbsp; and the &#039;&#039;Burrows-Wheeler Transformation&#039;&#039;&amp;amp;nbsp; (BWT).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Further information on the topic as well as exercises, simulations and programming tasks can be found in the experiment &amp;quot;Value Discrete Information Theory&amp;quot; (Wertdiskrete Informationstheorie) of the practical course &amp;quot;Simulation of Digital Transmission Systems&amp;quot; (Simulation Digitaler Übertragungssysteme).&amp;amp;nbsp; This (former) LNT course at the TU Munich is based on&lt;br /&gt;
&lt;br /&gt;
*the Windows program&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Programme/WDIT.zip WDIT] &amp;amp;nbsp;&amp;amp;rArr;&amp;amp;nbsp; Link points to the ZIP version of the program; and &lt;br /&gt;
*the associated&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Texte/Wertdiskrete_Informationstheorie.pdf lab manual] &amp;amp;nbsp;&amp;amp;rArr;&amp;amp;nbsp; Link refers to the PDF version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Source coding - Channel coding - Line coding ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the descriptions in this second chapter we consider the following digital transfer model:&lt;br /&gt;
*The source signal&amp;amp;nbsp; $q(t)$&amp;amp;nbsp; can be analog as well as digital, just like the sink signal&amp;amp;nbsp; $v(t)$.&amp;amp;nbsp; All other signals in this block diagram, even those not explicitly named here, are digital signals.&lt;br /&gt;
*In particular also the signals&amp;amp;nbsp; $x(t)$&amp;amp;nbsp; and&amp;amp;nbsp; $y(t)$&amp;amp;nbsp; at the input and output of the &amp;quot;Digital Channel&amp;quot; are digital and can therefore also be described completely by the symbol sequences&amp;amp;nbsp; $〈x_ν〉$&amp;amp;nbsp; and&amp;amp;nbsp; $〈y_ν〉$&amp;amp;nbsp;.&lt;br /&gt;
*The &amp;quot;digital channel&amp;quot; includes not only the transmission medium and interference (noise) but also components of the transmitter (modulator, transmitter pulse shaper, etc.) and the receiver (demodulator, receive filter or detector, decision maker). &lt;br /&gt;
*The chapter&amp;amp;nbsp; [[Digital_Signal_Transmission/Parameters of Digital Channel Models|Parameters of Digital Channel Models]]&amp;amp;nbsp; in the book &amp;quot;Digital Signal Transmission&amp;quot; describes the modeling of the &amp;quot;Digital Channel&amp;quot; .&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2315__Inf_T_2_1_S1_neu.png|center|frame|Simplified model of a message transmission system]]&lt;br /&gt;
&lt;br /&gt;
As can be seen from this block diagram, there are three different types of coding, depending on the objective, each realized by an encoder at the transmitter (coder) and the corresponding decoder at the receiver:&lt;br /&gt;
&lt;br /&gt;
The task of&amp;amp;nbsp; &#039;&#039;&#039;source coding&#039;&#039;&#039;&amp;amp;nbsp; is redundancy reduction for data compression, as used for example in image coding.&amp;amp;nbsp; By exploiting statistical dependencies between the individual points of an image or between the brightness values of a pixel at different times&amp;amp;nbsp; (for moving picture sequences)&amp;amp;nbsp; procedures can be developed which lead to a noticeable reduction of the amount of data (measured in bit or byte) with nearly the same picture quality.&amp;amp;nbsp; A simple example is the &#039;&#039;differential pulse code modulation&#039;&#039;&amp;amp;nbsp; (DPCM).&lt;br /&gt;
&lt;br /&gt;
With&amp;amp;nbsp; &#039;&#039;&#039;channel coding&#039;&#039;&#039;&amp;amp;nbsp; on the other hand, a noticeable improvement of the transmission behavior is achieved by using a redundancy specifically added at the transmitter to detect and correct transmission errors at the receiver side.&amp;amp;nbsp; Such codes, whose most important representatives are block codes, convolutional codes and turbo codes, are of great importance, especially for heavily disturbed channels. The greater the relative redundancy of the coded signal, the better the correction properties of the code, but at a reduced payload data rate.&lt;br /&gt;
&lt;br /&gt;
A&amp;amp;nbsp; &#039;&#039;&#039;line coding&#039;&#039;&#039;&amp;amp;nbsp; - sometimes also called &#039;&#039;transmission coding&#039;&#039;&amp;amp;nbsp; - is used to adapt the transmitted signal to the spectral characteristics of channel and receiving equipment by recoding the source symbols.&amp;amp;nbsp; For example, in the case of an (analog) transmission channel over which no DC component can be transmitted, for which thus&amp;amp;nbsp; $H_{\rm K}(f = 0) = 0$, line coding must ensure that the code symbol sequence does not contain long sequences of the same polarity.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The focus of this chapter is on lossless source coding, which generates a data-compressed code symbol sequence&amp;amp;nbsp; $〈c_ν〉$&amp;amp;nbsp; based on the results of information theory.&lt;br /&gt;
&lt;br /&gt;
* Channel coding is the subject of a separate book in our tutorial with the following&amp;amp;nbsp; [[Channel_Coding|Content]]&amp;amp;nbsp;. &lt;br /&gt;
*The line coding is discussed in detail in the chapter &amp;quot;Coded and multilevel transmission&amp;quot; of the book&amp;amp;nbsp; [[Digital_Signal_Transmission]]&amp;amp;nbsp;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note&#039;&#039;: &amp;amp;nbsp; We uniformly use &amp;quot;$\nu$&amp;quot; here as the running index of a symbol sequence.&amp;amp;nbsp; Normally, different indices should be used for&amp;amp;nbsp; $〈q_ν〉$,&amp;amp;nbsp; $〈c_ν〉$&amp;amp;nbsp; and&amp;amp;nbsp; $〈x_ν〉$&amp;amp;nbsp; if the rates do not match.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lossy source encoding for images==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For digitizing analog source signals such as speech, music or pictures, only lossy source coding methods can be used.&amp;amp;nbsp; Even the storage of a photo in&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Windows_Bitmap BMP]-format is always associated with a loss of information due to sampling, quantization and the finite color depth.&lt;br /&gt;
&lt;br /&gt;
However, there are also a number of compression methods for images that result in much smaller image files than &amp;quot;BMP&amp;quot;, for example:&lt;br /&gt;
*[https://en.wikipedia.org/wiki/GIF GIF]&amp;amp;nbsp; (&#039;&#039;Graphics Interchange Format&#039;&#039;&amp;amp;nbsp;), developed by Steve Wilhite in 1987.&lt;br /&gt;
*[https://de.wikipedia.org/wiki/JPEG JPEG]&amp;amp;nbsp; - a format that was introduced in 1992 by the &#039;&#039;Joint Photographic Experts Group&#039;&#039;&amp;amp;nbsp; and is now the standard for digital cameras.&amp;amp;nbsp; File extension: &amp;amp;nbsp; &amp;quot;jpeg&amp;quot; or &amp;quot;jpg&amp;quot;.&lt;br /&gt;
*[https://de.wikipedia.org/wiki/Tagged_Image_File_Format TIFF]&amp;amp;nbsp; (&#039;&#039;Tagged Image File Format&#039;&#039;), around 1990 by Aldus Corp. (now Adobe) and Microsoft, still the quasi-standard for print-ready images of the highest quality.&lt;br /&gt;
*[https://de.wikipedia.org/wiki/Portable_Network_Graphics PNG]&amp;amp;nbsp; (&#039;&#039;Portable Network Graphics&#039;&#039;), designed in 1995 by T. Boutell &amp;amp; T. Lane as a replacement for the patent-encumbered GIF format, is less complex than TIFF.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These compression methods partly use &lt;br /&gt;
*Vector quantization for redundancy reduction of correlated pixels, &lt;br /&gt;
*at the same time the lossless compression algorithms according to&amp;amp;nbsp; [[Information_Theory/Entropy Coding According to Huffman#The_Huffman.E2.80.93Algorithm|Huffman]]&amp;amp;nbsp; and&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch|Lempel/Ziv]], &lt;br /&gt;
*possibly also transformation coding based on DFT&amp;amp;nbsp; (&#039;&#039;Discrete Fourier Transformation&#039;&#039;&amp;amp;nbsp;)&amp;amp;nbsp; and&amp;amp;nbsp; DCT&amp;amp;nbsp; (&#039;&#039;Discrete Cosine Transformation&#039;&#039;&amp;amp;nbsp;), &lt;br /&gt;
*then quantization and transmission in the transform domain.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We now compare the effects of two compression methods on the subjective quality of photos and graphics, namely:&lt;br /&gt;
*&#039;&#039;&#039;JPEG&#039;&#039;&#039;&amp;amp;nbsp; $($with compression factor&amp;amp;nbsp; $8)$,&amp;amp;nbsp; and&lt;br /&gt;
*&#039;&#039;&#039;PNG&#039;&#039;&#039;&amp;amp;nbsp; $($with compression factor&amp;amp;nbsp; $24)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp; &lt;br /&gt;
In the upper part of the following figure you can see two compressions of a photo.&lt;br /&gt;
[[File:P_ID2920__Inf_T_2_1_S2_neu.png|right|frame|Compare JPEG and PNG compression]]&lt;br /&gt;
The format&amp;amp;nbsp; &#039;&#039;&#039;JPEG&#039;&#039;&#039; &amp;amp;nbsp; (left image) allows a compression factor of&amp;amp;nbsp; $8$&amp;amp;nbsp; to&amp;amp;nbsp; $15$&amp;amp;nbsp; with (nearly) lossless compression. &lt;br /&gt;
*Even with the compression factor&amp;amp;nbsp; $35$&amp;amp;nbsp; the result can still be called &amp;quot;good&amp;quot;. &lt;br /&gt;
*For most consumer digital cameras, &amp;quot;JPEG&amp;quot; is the default storage format.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The image shown on the right was compressed with&amp;amp;nbsp; &#039;&#039;&#039;PNG&#039;&#039;&#039;&amp;amp;nbsp;. &lt;br /&gt;
*The quality is similar to the left JPEG image, although the compression is stronger by about a factor of&amp;amp;nbsp; $3$. &lt;br /&gt;
*In contrast, PNG achieves a worse compression result than JPEG if the photo contains a lot of color gradations. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
PNG is also better suited for line drawings with captions than JPEG (lower images).&amp;amp;nbsp; The quality of the JPEG compression (left) is significantly worse than the PNG result, although the resulting file size is about three times as large.&amp;amp;nbsp; Especially fonts look &amp;quot;washed out&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note&#039;&#039;: &amp;amp;nbsp; Due to technical limitations of&amp;amp;nbsp; $\rm LNTww$&amp;amp;nbsp; all graphics had to be saved as &amp;quot;PNG&amp;quot;. &lt;br /&gt;
*In the above graphic, &amp;quot;JPEG&amp;quot; means the PNG conversion of a file previously compressed with &amp;quot;JPEG&amp;quot;. &lt;br /&gt;
*However, the associated loss is negligible. }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lossy source coding for audio signals==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
A first example of source coding for speech and music is the&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Pulse-code_modulation Pulse-code modulation]&amp;amp;nbsp; (PCM), invented in 1938, which extracts the code symbol sequence&amp;amp;nbsp; $〈c_ν〉 $&amp;amp;nbsp;from an analog source signal&amp;amp;nbsp; $q(t)$, corresponding to the three processing blocks &lt;br /&gt;
[[File:P_ID2925__Mod_T_4_1_S1_neu.png|right|frame|Principle of Pulse Code Modulation (PCM)]]&lt;br /&gt;
*Sampling,&lt;br /&gt;
*Quantization, and&lt;br /&gt;
*PCM encoding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graphic illustrates the PCM principle.&amp;amp;nbsp; A detailed description of the picture can be found on the first pages of the chapter&amp;amp;nbsp; [[Modulation_Methods/Pulse Code Modulation|Pulse Code Modulation]]&amp;amp;nbsp; in the book &amp;quot;Modulation Methods&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Because of the necessary band limitation and quantization, this transformation is always lossy.&amp;amp;nbsp; That means&lt;br /&gt;
*The code sequence&amp;amp;nbsp; $〈c_ν〉$&amp;amp;nbsp; has less information than the signal&amp;amp;nbsp; $q(t)$.&lt;br /&gt;
*The sink signal&amp;amp;nbsp; $v(t)$&amp;amp;nbsp; is fundamentally different from&amp;amp;nbsp; $q(t)$. &lt;br /&gt;
*Mostly, however, the deviation is not very large.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
We will now mention two transmission methods based on pulse code modulation as examples.&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;  &lt;br /&gt;
The following data is taken from the&amp;amp;nbsp; [[Examples_of_Communication_Systems/Entire GSM Transmission System#Components_of_Language.E2.80.93_and_Data.C3.BCtransmission|GSM-Specification]]&amp;amp;nbsp;:&lt;br /&gt;
*If a speech signal is spectrally limited to the bandwidth&amp;amp;nbsp; $B = 4 \, \rm kHz$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; sampling rate $f_{\rm A} = 8 \, \rm kHz$&amp;amp;nbsp; and quantized with $13 \, \rm Bit$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; number of quantization levels&amp;amp;nbsp; $M = 2^{13} = 8192$,&amp;amp;nbsp; a binary data stream of data rate&amp;amp;nbsp; $R = 104 \, \rm kbit/s$ results. &lt;br /&gt;
*The quantization noise ratio is then&amp;amp;nbsp; $20 \cdot \lg M ≈ 78 \, \rm dB$. &lt;br /&gt;
*For quantization with&amp;amp;nbsp; $16 \, \rm Bit$&amp;amp;nbsp; this increases to&amp;amp;nbsp; $96 \, \rm dB$.&amp;amp;nbsp; At the same time, however, the required data rate increases to&amp;amp;nbsp; $R = 128 \, \rm kbit/s$. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The interactive applet&amp;amp;nbsp; [[Applets:Bandwidth Limitation|Impact of a Bandwidth Limitation for Speech and Music]] illustrates the effects of a bandwidth limitation.}}&lt;br /&gt;
&lt;br /&gt;
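The data rate and quantization SNR figures of Example 2 follow from $R = f_{\rm A} \cdot (\text{bits per sample})$ and $20 \cdot \lg M$; a minimal sketch in Python (the function names are illustrative, not from the lesson):&lt;br /&gt;

```python
import math

def pcm_data_rate(f_a_hz: int, bits: int) -> int:
    """Data rate of a PCM stream: sampling rate times bits per sample."""
    return f_a_hz * bits

def quantization_snr_db(bits: int) -> float:
    """Approximate quantization SNR 20*lg(M) in dB, with M = 2**bits levels."""
    return 20 * math.log10(2 ** bits)

# 13-bit quantization at f_A = 8 kHz: R = 104 kbit/s, SNR about 78 dB
# 16-bit quantization:                R = 128 kbit/s, SNR about 96 dB
```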
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp;  &lt;br /&gt;
The standard&amp;amp;nbsp; [[Examples_of_Communication_Systems/General_Description_of_ISDN|ISDN]]&amp;amp;nbsp; (&#039;&#039;Integrated Services Digital Network&#039;&#039;&amp;amp;nbsp;) for telephony via two-wire line is also based on the PCM principle, whereby each user is assigned two B-channels&amp;amp;nbsp; (&#039;&#039;Bearer Channels&#039;&#039;&amp;amp;nbsp;)&amp;amp;nbsp; with &amp;amp;nbsp;$64 \, \rm kbit/s$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 2^{8} = 256$&amp;amp;nbsp; and a D-channel&amp;amp;nbsp; (&#039;&#039;Data Channel&#039;&#039;&amp;amp;nbsp;)&amp;amp;nbsp; with &amp;amp;nbsp;$ 16 \, \rm kbit/s$. &lt;br /&gt;
*The net data rate is thus&amp;amp;nbsp; $R_{\rm net} = 144 \, \rm kbit/s$. &lt;br /&gt;
*Taking into account the channel coding and the control bits (required for organizational reasons), the ISDN gross data rate is&amp;amp;nbsp; $R_{\rm gross} = 192 \, \rm kbit/s$.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In mobile communications, very high data rates often could not (yet) be handled.&amp;amp;nbsp; In the 1990s, voice coding procedures were developed that led to data compression by the factor&amp;amp;nbsp; $8$&amp;amp;nbsp; and more.&amp;amp;nbsp; From today&#039;s point of view, it is worth mentioning&lt;br /&gt;
*the&amp;amp;nbsp; [[Examples_of_Communication_Systems/Voice Coding#Halfrate_Vocoder_and_Enhanced_Fullrate_Codec|Enhanced Full-Rate Codec]]&amp;amp;nbsp; (&#039;&#039;&#039;EFR&#039;&#039;&#039;), which extracts&amp;amp;nbsp; exactly&amp;amp;nbsp; $244 \, \rm Bit$&amp;amp;nbsp; for each speech frame of&amp;amp;nbsp; $20\, \rm ms$&amp;amp;nbsp; $($data rate: &amp;amp;nbsp; $12.2 \, \rm kbit/s)$; &amp;lt;br&amp;gt; this data compression by more than a factor of&amp;amp;nbsp; $8$&amp;amp;nbsp; is achieved by combining several procedures: &lt;br /&gt;
:# &amp;amp;nbsp;&#039;&#039;Linear Predictive Coding&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;LPC&#039;&#039;&#039;, short term prediction), &lt;br /&gt;
:# &amp;amp;nbsp;&#039;&#039;Long Term Prediction&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;LTP&#039;&#039;&#039;, long term prediction) and &lt;br /&gt;
:# &amp;amp;nbsp;&#039;&#039;Regular Pulse Excitation&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;RPE&#039;&#039;&#039;);&lt;br /&gt;
*the&amp;amp;nbsp; [[Examples_of_Communication_Systems/Voice Coding#Adaptive_Multi.E2.80.93Rate_Codec|Adaptive Multi-Rate Codec]]&amp;amp;nbsp; (&#039;&#039;&#039;AMR&#039;&#039;&#039;) based on&amp;amp;nbsp; [[Examples_of_Communication_Systems/Voice Coding#Algebraic_Code_Excited_Linear_Prediction|ACELP]]&amp;amp;nbsp; (&#039;&#039;Algebraic Code Excited Linear Prediction&#039;&#039;) with several modes between&amp;amp;nbsp; $12.2 \, \rm kbit/s$&amp;amp;nbsp; (EFR) and&amp;amp;nbsp; $4.75 \, \rm kbit/s$,&amp;amp;nbsp; so that improved channel coding can be used in case of poorer channel quality;&lt;br /&gt;
*the&amp;amp;nbsp; [[Examples_of_Communication_Systems/Voice Coding#Various_Language Coding Methods|Wideband-AMR]]&amp;amp;nbsp; (&#039;&#039;&#039;WB-AMR&#039;&#039;&#039;) with nine modes between&amp;amp;nbsp; $6.6 \, \rm kbit/s$&amp;amp;nbsp; and&amp;amp;nbsp; $23.85 \, \rm kbit/s$. &amp;amp;nbsp; This is used with&amp;amp;nbsp; [[Examples_of_Communication_Systems/General_Description_of_UMTS|UMTS]]&amp;amp;nbsp; and is suitable for broadband signals between&amp;amp;nbsp; $200 \, \rm Hz$&amp;amp;nbsp; and&amp;amp;nbsp; $7 \, \rm kHz$&amp;amp;nbsp;. &amp;amp;nbsp; Sampling is done with&amp;amp;nbsp; $16 \, \rm kHz$, quantization with&amp;amp;nbsp; $4 \, \rm Bit$.&lt;br /&gt;
&lt;br /&gt;
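The EFR figures above can be checked with simple arithmetic; the PCM reference rate of $104 \, \rm kbit/s$ is the one from Example 2:&lt;br /&gt;

```python
frame_bits = 244          # bits extracted per 20 ms speech frame (EFR)
frame_duration_s = 0.020  # frame duration in seconds

efr_rate = frame_bits / frame_duration_s   # 12200 bit/s = 12.2 kbit/s

pcm_rate = 104_000        # 13-bit PCM at 8 kHz sampling, see Example 2
compression_factor = pcm_rate / efr_rate   # more than a factor of 8
```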
&lt;br /&gt;
All these compression methods are described in detail in the chapter&amp;amp;nbsp; [[Examples_of_Communication_Systems/Voice Coding|Voice Coding]]&amp;amp;nbsp; of the book &amp;quot;Examples of Communication Systems&amp;quot;.&amp;amp;nbsp; The audio module&amp;amp;nbsp; [[Applets:Quality of different voice codecs (Applet)|Quality of different voice codecs (Applet)]]&amp;amp;nbsp; allows a subjective comparison of these codecs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==MPEG-2 Audio Layer III - short MP3 ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Today (2015) the most common compression method for audio files is&amp;amp;nbsp; [https://en.wikipedia.org/wiki/MP3 MP3].&amp;amp;nbsp; This format was developed from 1982 on at the Fraunhofer Institute for Integrated Circuits (IIS) in Erlangen under the direction of Prof.&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Hans-Georg_Musmann Hans-Georg Musmann]&amp;amp;nbsp; in collaboration with the Friedrich Alexander University Erlangen-Nuremberg and AT&amp;amp;T Bell Labs.&amp;amp;nbsp; Other institutions also assert patent claims in this regard, so that since 1998 various lawsuits have been filed which, to the authors&#039; knowledge, have not yet been finally concluded.&lt;br /&gt;
&lt;br /&gt;
In the following some measures are called, which are used with MP3, in order to reduce the data quantity in relation to the raw version in the&amp;amp;nbsp; [https://en.wikipedia.org/wiki/WAV WAV]-format.&amp;amp;nbsp; The compilation is not complete.&amp;amp;nbsp; A comprehensive representation about this can be found for example in a&amp;amp;nbsp; [https://de.wikipedia.org/wiki/MP3 Wikipedia article].&lt;br /&gt;
*The audio compression method &amp;quot;MP3&amp;quot; uses among other things psychoacoustic effects of perception.&amp;amp;nbsp; So a person can only distinguish two sounds from each other from a certain minimum difference in pitch.&amp;amp;nbsp; One speaks of so-called &amp;quot;masking effects&amp;quot;.&lt;br /&gt;
*Using the masking effects, MP3 signals that are less important for the auditory impression are stored with less bits (reduced accuracy).&amp;amp;nbsp; A dominant tone at&amp;amp;nbsp; $4 \, \rm kHz$&amp;amp;nbsp; can, for example, cause neighboring frequencies to be of only minor importance for the current auditory sensation up to&amp;amp;nbsp; $11 \, \rm kHz$&amp;amp;nbsp;.&lt;br /&gt;
*The greatest saving of MP3 coding, however, is that the sounds are stored with just enough bits so that the resulting&amp;amp;nbsp; [[Modulation_Methods/Pulse Code Modulation#Quantization_and_Quantization Noise|Quantization Noise]]&amp;amp;nbsp; is still masked and is not audible.&lt;br /&gt;
*Other MP3 compression mechanisms are the exploitation of the correlations between the two channels of a stereo signal by difference formation as well as the&amp;amp;nbsp; [[Information_Theory/Entropy Coding According to Huffman|Huffman Coding]]&amp;amp;nbsp; of the resulting data stream.&amp;amp;nbsp; Both measures are lossless.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A disadvantage of the MP3 coding is that with strong compression also &amp;quot;important&amp;quot; frequency components are unintentionally captured and thus audible errors occur.&amp;amp;nbsp; Furthermore it is disturbing that due to the blockwise application of the MP3 procedure gaps can occur at the end of a file.&amp;amp;nbsp; A remedy is the use of the so-called&amp;amp;nbsp; [https://en.wikipedia.org/wiki/LAME LAME]-Coder, an &#039;&#039;Open Source Project&#039;&#039;, and a corresponding player.	 	&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==Description of lossless source encoding &amp;amp;ndash; Requirements==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In the following, we only consider lossless source coding methods and make the following assumptions:&lt;br /&gt;
*The digital source has the symbol set size&amp;amp;nbsp; $M$.&amp;amp;nbsp; For the individual source symbols of the sequence&amp;amp;nbsp; $〈q_ν〉$&amp;amp;nbsp; the following applies with the symbol set&amp;amp;nbsp; $\{q_μ\}$:&lt;br /&gt;
 &lt;br /&gt;
:$$q_{\nu} \in \{ q_{\mu} \}\hspace{0.05cm}, \hspace{0.2cm}\mu = 1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, M \hspace{0.05cm}. $$&lt;br /&gt;
&lt;br /&gt;
The individual sequence elements&amp;amp;nbsp; $q_ν$&amp;amp;nbsp; may be statistically independent or may exhibit statistical dependencies.&lt;br /&gt;
* First we consider&amp;amp;nbsp; &#039;&#039;&#039;message sources without memory&#039;&#039;&#039;, which are fully characterized by the symbol probabilities alone, for example:&lt;br /&gt;
:$$M = 4\text{:} \  \  \  q_μ \in \{ {\rm A}, \ {\rm B}, \ {\rm C}, \ {\rm D} \}, \hspace{2.5cm} \text{with the probabilities}\ p_{\rm A},\ p_{\rm B},\ p_{\rm C},\ p_{\rm D},$$&lt;br /&gt;
:$$M = 8\text{:} \  \  \  q_μ \in \{ {\rm A}, \ {\rm B}, \ {\rm C}, \ {\rm D},\ {\rm E}, \ {\rm F}, \ {\rm G}, \ {\rm H} \}, \hspace{0.5cm} \text{with the probabilities }\ p_{\rm A},\hspace{0.05cm}\text{...} \hspace{0.05cm} ,\ p_{\rm H}.$$&lt;br /&gt;
&lt;br /&gt;
*The source encoder replaces the source symbol&amp;amp;nbsp; $q_μ$&amp;amp;nbsp; with the code word&amp;amp;nbsp; $\mathcal{C}(q_μ)$, consisting of&amp;amp;nbsp; $L_μ$&amp;amp;nbsp; code symbols of a new alphabet&amp;amp;nbsp; $\{0, \ 1$, ... ,&amp;amp;nbsp; $D - 1\}$&amp;amp;nbsp; with the symbol set size&amp;amp;nbsp; $D$.&amp;amp;nbsp; This gives the&amp;amp;nbsp; &#039;&#039;&#039;average code word length&#039;&#039;&#039;:&lt;br /&gt;
 &lt;br /&gt;
:$$L_{\rm M} = \sum_{\mu=1}^{M} \hspace{0.1cm} p_{\mu} \cdot L_{\mu} \hspace{0.05cm}, \hspace{0.2cm}{\rm with} \hspace{0.2cm}p_{\mu} = {\rm Pr}(q_{\mu}) \hspace{0.05cm}. $$&lt;br /&gt;
&lt;br /&gt;
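The average code word length defined above can be sketched in a few lines of Python (the uniform probabilities in the comment are only an illustrative assumption):&lt;br /&gt;

```python
def average_codeword_length(probs, lengths):
    """L_M = sum over mu of p_mu * L_mu."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * L for p, L in zip(probs, lengths))

# With M = 9 symbols and every codeword of length 2, the average
# length is L_M = 2 for any choice of probabilities.
```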
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 4:}$&amp;amp;nbsp; We consider two different types of source encoding, each with the parameters&amp;amp;nbsp; $M = 9$&amp;amp;nbsp; and&amp;amp;nbsp; $D = 3$.&lt;br /&gt;
&lt;br /&gt;
*In the first encoding&amp;amp;nbsp; $\mathcal{C}_1(q_μ)$&amp;amp;nbsp; according to line 2 (red) of the lower table, each source symbol&amp;amp;nbsp; $q_μ$&amp;amp;nbsp; is replaced by two ternary symbols&amp;amp;nbsp; $(0$,&amp;amp;nbsp; $1$&amp;amp;nbsp; or&amp;amp;nbsp; $2)$.&amp;amp;nbsp; For example, the mapping:&lt;br /&gt;
: $$\rm A C F B I G \ ⇒ \ 00 \ 02 \ 12 \ 01 \ 22 \ 20.$$&lt;br /&gt;
*With this coding, all code words&amp;amp;nbsp; $\mathcal{C}_1(q_μ)$&amp;amp;nbsp; with&amp;amp;nbsp; $1 ≤ μ ≤ 9$&amp;amp;nbsp; have the same length&amp;amp;nbsp; $L_μ = 2$.&amp;amp;nbsp; Thus, the average code word length is&amp;amp;nbsp; $L_{\rm M} = 2$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2316__Inf_T_2_1_S3_Ganz_neu.png|center|frame|Two examples of source encoding]]&lt;br /&gt;
&lt;br /&gt;
*With the second, blue source coder,&amp;amp;nbsp; $L_μ ∈ \{1, 2 \}$&amp;amp;nbsp; holds, and accordingly the average code word length&amp;amp;nbsp; $L_{\rm M}$&amp;amp;nbsp; will be less than two code symbols per source symbol. Here we have for example this mapping:&lt;br /&gt;
: $$\rm A C F B I G \ ⇒ \ 0 \ 02 \ 12 \ 01 \ 22 \ 2.$$&lt;br /&gt;
&lt;br /&gt;
*It is obvious that this second code symbol sequence cannot be decoded unambiguously, since the symbol sequence naturally does not include the spaces inserted in this text for display reasons. }}&lt;br /&gt;
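The ambiguity of the blue code can be reproduced with a tiny sketch; only three of its assignments are used here (A → 0, B → 01 as in the mapping above, and D → 1, which the later discussion of the sequence "01" implies):&lt;br /&gt;

```python
# Fragment of the blue code from Example 4 (partly inferred, see lead-in):
code = {"A": "0", "B": "01", "D": "1"}

def encode(symbols, code):
    """Concatenate the codewords; no separators are transmitted."""
    return "".join(code[s] for s in symbols)

# Two different source sequences map to the same code symbol sequence,
# so the code symbol sequence cannot be decoded unambiguously:
# encode("AD", code) and encode("B", code) both give "01".
```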
 	 &lt;br /&gt;
&lt;br /&gt;
==Kraft–McMillan inequality - Prefix-free codes == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Binary codes for compressing a memoryless discrete-value source are characterized by the fact that the individual symbols are represented by code symbol sequences of different lengths:&lt;br /&gt;
 &lt;br /&gt;
:$$L_{\mu} \ne {\rm const.}  \hspace{0.4cm}(\mu = 1, \hspace{0.05cm}\text{...} \hspace{0.05cm}, M ) \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
Only in this way is it possible&lt;br /&gt;
*that the&amp;amp;nbsp; &#039;&#039;&#039;average code word length becomes minimal&#039;&#039;&#039;&lt;br /&gt;
*if the&amp;amp;nbsp; &#039;&#039;&#039;source symbols are not equally probable&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To enable a unique decoding, the code must also be &amp;quot;prefix-free&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp; The property&amp;amp;nbsp; &#039;&#039;&#039;prefix-free&#039;&#039;&#039;&amp;amp;nbsp; indicates that no codeword may be the prefix (beginning) of a longer codeword.&amp;amp;nbsp; Such a codeword is immediately decodable.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The blue code in the&amp;amp;nbsp; [[Information_Theory/General_Description#Description_of_lossless_source_encoding_.E2.80.93_Prerequisites|Example 4]]&amp;amp;nbsp; is not prefix-free.&amp;amp;nbsp; For example, the code symbol sequence &amp;quot;01&amp;quot; could be interpreted by the decoder as&amp;amp;nbsp; $\rm AD$&amp;amp;nbsp; but also as&amp;amp;nbsp; $\rm B$. &lt;br /&gt;
*The red code, on the other hand, is prefix-free, although prefix freedom would not be strictly necessary here because of&amp;amp;nbsp; $L_μ = \rm const.$&lt;br /&gt;
&lt;br /&gt;
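A prefix-freedom check is straightforward to sketch; the red code below is written out as the nine ternary codewords of length 2 that Example 4 implies, and the blue fragment uses the assignments A → 0, B → 01 from the same example:&lt;br /&gt;

```python
def is_prefix_free(codewords):
    """True if no codeword is a prefix of another (distinct) codeword."""
    for i, a in enumerate(codewords):
        for j, b in enumerate(codewords):
            if i != j and b.startswith(a):
                return False
    return True

# Red code of Example 4: all codewords of equal length 2 -> prefix-free
red = ["00", "01", "02", "10", "11", "12", "20", "21", "22"]

# Fragment of the blue code: "0" is a prefix of "01" -> not prefix-free
blue_fragment = ["0", "01", "2"]
```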
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Without proof:}$&amp;amp;nbsp;  &lt;br /&gt;
The necessary&amp;amp;nbsp; &#039;&#039;&#039;condition for the existence of a prefix-free code&#039;&#039;&#039;&amp;amp;nbsp; was specified by Leon Kraft in his 1949 master&#039;s thesis at the&amp;amp;nbsp; &#039;&#039;Massachusetts Institute of Technology&#039;&#039;&amp;amp;nbsp; (MIT):&lt;br /&gt;
 &lt;br /&gt;
:$$\sum_{\mu=1}^{M} \hspace{0.2cm} D^{-L_{\mu}} \le 1 \hspace{0.05cm}.$$}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 5:}$&amp;amp;nbsp;  &lt;br /&gt;
If you check the second (blue) code of&amp;amp;nbsp; [[Information_Theory/General_Description#Description of lossless source encoding.E2.80.93_Requirements|Example 4]]&amp;amp;nbsp; with&amp;amp;nbsp; $M = 9$&amp;amp;nbsp; and&amp;amp;nbsp; $D = 3$, you get:&lt;br /&gt;
 &lt;br /&gt;
:$$3 \cdot 3^{-1} + 6 \cdot 3^{-2} = 1.667 &amp;gt; 1 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
From this you can see that this code cannot be prefix-free.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 6:}$&amp;amp;nbsp; Let&#039;s look at the binary code&lt;br /&gt;
 &lt;br /&gt;
:$$\boldsymbol{\rm A } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 0 &lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm B } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 00&lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm C } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 11&lt;br /&gt;
\hspace{0.05cm}, $$&lt;br /&gt;
&lt;br /&gt;
it is obviously not prefix-free.&amp;amp;nbsp; The equation&lt;br /&gt;
 &lt;br /&gt;
:$$1 \cdot 2^{-1} + 2 \cdot 2^{-2} = 1 $$&lt;br /&gt;
&lt;br /&gt;
does not mean that this code is actually prefix-free, it just means that there is a prefix-free code with the same length distribution, for example&lt;br /&gt;
  &lt;br /&gt;
:$$\boldsymbol{\rm A } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 0 &lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm B } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 10&lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm C } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 11&lt;br /&gt;
\hspace{0.05cm}.$$}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Source encoding theorem==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We now consider a redundant message source with the symbol set&amp;amp;nbsp; $\{q_μ\}$, where the index&amp;amp;nbsp; $μ$&amp;amp;nbsp; takes all values between&amp;amp;nbsp; $1$&amp;amp;nbsp; and the symbol set size&amp;amp;nbsp; $M$.&amp;amp;nbsp; The source entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; is smaller than the decision content&amp;amp;nbsp; $H_0$.&lt;br /&gt;
&lt;br /&gt;
The redundancy&amp;amp;nbsp; $(H_0- H)$&amp;amp;nbsp; is either caused by&lt;br /&gt;
*not equally probable symbols &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $p_μ ≠ 1/M$,&amp;amp;nbsp; and/or&lt;br /&gt;
*statistical dependencies within the sequence&amp;amp;nbsp; $〈q_\nu〉$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A source encoder replaces the source symbol&amp;amp;nbsp; $q_μ$&amp;amp;nbsp; with the binary codeword&amp;amp;nbsp; $\mathcal{C}(q_μ)$, consisting of&amp;amp;nbsp; $L_μ$&amp;amp;nbsp; binary symbols (zeros or ones).&amp;amp;nbsp; This results in an average codeword length:&lt;br /&gt;
 &lt;br /&gt;
:$$L_{\rm M} = \sum_{\mu=1}^{M} \hspace{0.2cm} p_{\mu} \cdot L_{\mu} \hspace{0.05cm}, \hspace{0.2cm}{\rm with} \hspace{0.2cm}p_{\mu} = {\rm Pr}(q_{\mu}) \hspace{0.05cm}. $$&lt;br /&gt;
&lt;br /&gt;
For the source encoding task described here the following&amp;amp;nbsp; &#039;&#039;&#039;limit&#039;&#039;&#039;&amp;amp;nbsp; can be specified:&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Theorem:}$&amp;amp;nbsp;  &lt;br /&gt;
For a complete reconstruction of the transmitted symbol sequence from the binary sequence it is sufficient, but also necessary, that &lt;br /&gt;
&lt;br /&gt;
*for encoding on the transmitting side at least&amp;amp;nbsp; $H$&amp;amp;nbsp; binary symbols per source symbol are used. &lt;br /&gt;
&lt;br /&gt;
*the average code word length&amp;amp;nbsp; $L_{\rm M}$&amp;amp;nbsp; cannot be smaller than the entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; of the source symbol sequence: &amp;amp;nbsp; &lt;br /&gt;
:$$L_{\rm M} \ge H \hspace{0.05cm}. $$&lt;br /&gt;
&lt;br /&gt;
This regularity is called the&amp;amp;nbsp; &#039;&#039;&#039;Source Coding Theorem&#039;&#039;&#039;, which goes back to&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Claude_Shannon Claude Elwood Shannon].&amp;amp;nbsp; If the source coder considers only the different occurrence probabilities, but not the inner statistical dependencies of the sequence, then&amp;amp;nbsp; $L_{\rm M} ≥ H_1$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; [[Information_Theory/Sources with Memory#Entropy with respect to two-tuples|first entropy approximation]].}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 7:}$&amp;amp;nbsp;  &lt;br /&gt;
For a quaternary source with the symbol probabilities&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = 2^{-1}\hspace{0.05cm}, \hspace{0.2cm}p_{\rm B} = 2^{-2}\hspace{0.05cm}, \hspace{0.2cm}p_{\rm C} = p_{\rm D} = 2^{-3}&lt;br /&gt;
\hspace{0.3cm} \Rightarrow \hspace{0.3cm} H = H_1 = 1.75\,\, {\rm bit/source symbol} $$&lt;br /&gt;
&lt;br /&gt;
equality holds in the above equation &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $L_{\rm M} = H$, if for example the following assignment is chosen:&lt;br /&gt;
 &lt;br /&gt;
:$$\boldsymbol{\rm A } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 0 &lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm B } \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 10&lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm C} \hspace{0.15cm} \Rightarrow \hspace{0.15cm} 110&lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}\boldsymbol{\rm D }\hspace{0.15cm} \Rightarrow \hspace{0.15cm} 111&lt;br /&gt;
\hspace{0.05cm}. $$&lt;br /&gt;
&lt;br /&gt;
In contrast, with the same mapping and&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = 0.4\hspace{0.05cm}, \hspace{0.2cm}p_{\rm B} = 0.3\hspace{0.05cm}, \hspace{0.2cm}p_{\rm C} = 0.2&lt;br /&gt;
\hspace{0.05cm}, \hspace{0.2cm}p_{\rm D} = 0.1\hspace{0.05cm}&lt;br /&gt;
\hspace{0.3cm} \Rightarrow \hspace{0.3cm} H = 1.845\,\, {\rm bit/source symbol}$$&lt;br /&gt;
&lt;br /&gt;
the average code word length&lt;br /&gt;
 &lt;br /&gt;
:$$L_{\rm M} = 0.4 \cdot 1 + 0.3 \cdot 2 + 0.2 \cdot 3 + 0.1 \cdot 3 &lt;br /&gt;
= 1.9\,\, {\rm bit/source symbol}\hspace{0.05cm}. $$&lt;br /&gt;
&lt;br /&gt;
Because of the unfavorably chosen symbol probabilities&amp;amp;nbsp; (not powers of two),&amp;amp;nbsp; the result is&amp;amp;nbsp; $L_{\rm M} &amp;gt; H$.}}&lt;br /&gt;
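Both cases of&amp;amp;nbsp; $\text{Example 7}$&amp;amp;nbsp; can be reproduced numerically.&amp;amp;nbsp; A sketch, where the code word lengths 1, 2, 3, 3 follow from the assignment given above:&lt;br /&gt;

```python
import math

code_len = {"A": 1, "B": 2, "C": 3, "D": 3}   # lengths of 0, 10, 110, 111

def avg_len(p):
    """Average code word length L_M in bit/source symbol."""
    return sum(p[s] * code_len[s] for s in code_len)

def entropy(p):
    """Entropy H in bit/source symbol."""
    return -sum(q * math.log2(q) for q in p.values())

p1 = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}
p2 = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}
print(avg_len(p1), entropy(p1))   # 1.75 and 1.75  -> equality L_M = H
print(avg_len(p2), entropy(p2))   # 1.9 and approx. 1.85 -> L_M > H
```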
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 8:}$&amp;amp;nbsp;  &lt;br /&gt;
We will look at some very early attempts at source encoding for the transmission of natural texts, based on the letter frequencies given in the table. &lt;br /&gt;
*In the literature many different frequencies can be found,&amp;amp;nbsp; also because the investigations were carried out for different languages. &lt;br /&gt;
*Mostly, however, the list starts with the blank and &amp;quot;E&amp;quot; and ends with letters like &amp;quot;X&amp;quot;,&amp;amp;nbsp; &amp;quot;Y&amp;quot;&amp;amp;nbsp; and&amp;amp;nbsp; &amp;quot;Q&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2323__Inf_T_2_1_S6_ganz_neu.png|center|frame|Letter encodings according to Bacon/Bandot, Morse and Huffman]]&lt;br /&gt;
&lt;br /&gt;
Please note the following about this table:&lt;br /&gt;
*The entropy of this alphabet with&amp;amp;nbsp; $M = 27$&amp;amp;nbsp; characters is about&amp;amp;nbsp; $H≈ 4 \, \rm bit/character$;&amp;amp;nbsp; we have not recalculated this.&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Francis_Bacon Francis Bacon]&amp;amp;nbsp; had already given a binary code in 1623 in which each letter is represented by five bits: &amp;amp;nbsp; $L_{\rm M} = 5$.&lt;br /&gt;
*About 250 years later&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Baudot_code Jean-Maurice-Émile Baudot]&amp;amp;nbsp; adopted this code, which was later standardized for telegraphy as a whole.&amp;amp;nbsp; One consideration important to him was that a code with uniformly five binary characters per letter is more difficult for an enemy to decipher, since the frequency of occurrence gives no clue to the transmitted character.&lt;br /&gt;
*The last line in the above table gives an exemplary&amp;amp;nbsp; [[Information_Theory/Entropy_Coding_According_to_Huffman#The_Huffman.E2.80.93Algorithm|Huffman code]]&amp;amp;nbsp; for the above frequency distribution.&amp;amp;nbsp; Probable characters like &amp;quot;E&amp;quot; or &amp;quot;N&amp;quot; and also the &amp;quot;blank&amp;quot; are represented with only three bits, the rare &amp;quot;Q&amp;quot; on the other hand with&amp;amp;nbsp; $11$&amp;amp;nbsp; bits. &lt;br /&gt;
*The average code word length &amp;amp;nbsp;$L_{\rm M} = H + ε$&amp;amp;nbsp; is slightly larger than&amp;amp;nbsp; $H$; we will not go into more detail here about the small positive quantity&amp;amp;nbsp; $ε$.&amp;amp;nbsp; Only this much: &amp;amp;nbsp; there is no prefix-free code with a smaller average code word length than the Huffman code.&lt;br /&gt;
*[https://en.wikipedia.org/wiki/Morse_code Samuel Morse]&amp;amp;nbsp; also took the different frequencies into account in his telegraphy code, as early as the 1830s.&amp;amp;nbsp; The Morse code of each character consists of two to four binary characters, designated here according to the application as dot (&amp;quot;short&amp;quot;) and dash (&amp;quot;long&amp;quot;).&lt;br /&gt;
*It is obvious that for the Morse code&amp;amp;nbsp; $L_{\rm M} &amp;lt; 4$&amp;amp;nbsp; will apply, according to the penultimate line.&amp;amp;nbsp; But this is connected with the fact that this code is not prefix-free.&amp;amp;nbsp; Therefore, the radio operator had to take a break between each short-long sequence so that the other station could decode the radio signal as well.}}&lt;br /&gt;
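A code like the one in the last table row can be constructed with the Huffman algorithm treated later in this book.&amp;amp;nbsp; The following sketch is illustrative only; for brevity it uses the quaternary probabilities of&amp;amp;nbsp; $\text{Example 7}$&amp;amp;nbsp; rather than the&amp;amp;nbsp; $M = 27$&amp;amp;nbsp; letter frequencies of the table, and it returns only the code word lengths:&lt;br /&gt;

```python
import heapq
from itertools import count

def huffman_lengths(probs):
    """Return {symbol: code word length} of a binary Huffman code."""
    tie = count()  # tie-breaker so heapq never compares symbol lists
    heap = [(p, next(tie), [s]) for s, p in probs.items()]
    heapq.heapify(heap)
    length = {s: 0 for s in probs}
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # two least probable groups
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:                 # every merge adds one code bit
            length[s] += 1
        heapq.heappush(heap, (p1 + p2, next(tie), s1 + s2))
    return length

print(huffman_lengths({"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}))
# {'A': 1, 'B': 2, 'C': 3, 'D': 3}
```

Weighting these lengths with the probabilities again gives&amp;amp;nbsp; $L_{\rm M} = 1.75$&amp;amp;nbsp; bit/source symbol.&lt;br /&gt;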
	 &lt;br /&gt;
&lt;br /&gt;
== Exercises for chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:2.1 Codierung mit und ohne Verlust|Exercise 2.1: Coding with and without Loss]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:2.2 Kraftsche Ungleichung|Exercise 2.2: Kraft Inequality]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:2.2Z Mittlere Codewortlänge|Exercise 2.2Z: Average Code Word Length]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Compression_According_to_Lempel,_Ziv_and_Welch&amp;diff=35081</id>
		<title>Information Theory/Compression According to Lempel, Ziv and Welch</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Compression_According_to_Lempel,_Ziv_and_Welch&amp;diff=35081"/>
		<updated>2020-11-02T13:13:52Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Quellencodierung – Datenkomprimierung&lt;br /&gt;
|Vorherige Seite=Allgemeine Beschreibung&lt;br /&gt;
|Nächste Seite=Entropiecodierung nach Huffman&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Static and dynamic dictionary techniques == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Many data compression methods use dictionaries.&amp;amp;nbsp; The idea is the following: &lt;br /&gt;
*Construct a list of character patterns that occur in the text, &lt;br /&gt;
*and encode these patterns as indices of the list. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This procedure is particularly efficient if certain patterns are repeated frequently in the text and the coding takes this into account.&amp;amp;nbsp; A distinction is made between&lt;br /&gt;
*procedures with a static dictionary,&lt;br /&gt;
*procedures with a dynamic dictionary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
$\text{(1) Procedure with static dictionary}$&lt;br /&gt;
&lt;br /&gt;
A static dictionary is only useful for very special applications, for example for a file of the following form:&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2424__Inf_T_2_2_S1a.png|center|frame|File to edit in this section]]&lt;br /&gt;
&lt;br /&gt;
For example, the assignments result in&lt;br /&gt;
     &lt;br /&gt;
:$$&amp;quot;\boldsymbol{\rm 0}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 000000} \hspace{0.05cm},\hspace{0.15cm} ... \hspace{0.15cm},\hspace{0.05cm}&lt;br /&gt;
&amp;quot;\boldsymbol{\rm 9}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 001001} \hspace{0.05cm},&lt;br /&gt;
&amp;quot;\hspace{-0.03cm}\_\hspace{-0.03cm}\_\hspace{0.03cm}&amp;quot; \hspace{0.1cm}{\rm (Blank)}\hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 001010} \hspace{0.05cm},$$&lt;br /&gt;
&lt;br /&gt;
:$$&amp;quot;\hspace{-0.01cm}.\hspace{-0.01cm}&amp;quot; \hspace{0.1cm}{\rm (period)}\hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 001011} \hspace{0.05cm},&lt;br /&gt;
&amp;quot;\hspace{-0.01cm},\hspace{-0.01cm}&amp;quot; \hspace{0.1cm}{\rm (comma)}\hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 001100} \hspace{0.05cm},&lt;br /&gt;
&amp;quot;{\rm end\hspace{-0.1cm}-\hspace{-0.1cm}of\hspace{-0.1cm}-\hspace{-0.1cm}line}&amp;quot;\hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 001101} \hspace{0.05cm},$$&lt;br /&gt;
&lt;br /&gt;
:$$&amp;quot;\boldsymbol{\rm A}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 100000} \hspace{0.05cm},\hspace{0.15cm} ... \hspace{0.15cm},\hspace{0.05cm}&lt;br /&gt;
&amp;quot;\boldsymbol{\rm E}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 100100} \hspace{0.05cm},&lt;br /&gt;
\hspace{0.15cm} ... \hspace{0.15cm},\hspace{0.05cm}&lt;br /&gt;
&amp;quot;\boldsymbol{\rm L}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 101011} \hspace{0.05cm},\hspace{0.15cm}&amp;quot;\boldsymbol{\rm M}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 101100} \hspace{0.05cm},$$&lt;br /&gt;
&lt;br /&gt;
:$$&amp;quot;\boldsymbol{\rm O}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 101110} \hspace{0.05cm},\hspace{0.15cm} ... \hspace{0.15cm},\hspace{0.05cm}&lt;br /&gt;
&amp;quot;\boldsymbol{\rm U}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 110100} \hspace{0.05cm},&lt;br /&gt;
&amp;quot;\boldsymbol{\rm Name\hspace{-0.1cm}:\hspace{-0.05cm}\_\hspace{-0.03cm}\_}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 010000} \hspace{0.05cm},\hspace{0.05cm}$$&lt;br /&gt;
&lt;br /&gt;
:$$&amp;quot;\boldsymbol{\rm ,\_\hspace{-0.03cm}\_Vorname\hspace{-0.1cm}:\hspace{-0.05cm}\_\hspace{-0.03cm}\_}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 010001} \hspace{0.05cm},\hspace{0.05cm}&lt;br /&gt;
&amp;quot;\boldsymbol{\rm ,\_\hspace{-0.03cm}\_Wohnort\hspace{-0.1cm}:\hspace{-0.05cm}\_\hspace{-0.03cm}\_}&amp;quot; \hspace{0.05cm} \mapsto \hspace{0.05cm} \boldsymbol{\rm 010010} \hspace{0.05cm},\hspace{0.15cm} ... \hspace{0.15cm}$$&lt;br /&gt;
&lt;br /&gt;
for the first line of the above text, binary source coded with six bits per character:&lt;br /&gt;
    &lt;br /&gt;
:$$\boldsymbol{010000} \hspace{0.15cm}\boldsymbol{100000} \hspace{0.15cm}\boldsymbol{100001} \hspace{0.15cm}\boldsymbol{100100} \hspace{0.15cm}\boldsymbol{101011} \hspace{0.3cm}&lt;br /&gt;
\Rightarrow \hspace{0.3cm}&lt;br /&gt;
\boldsymbol{(\rm Name\hspace{-0.1cm}:\hspace{-0.05cm}\_\hspace{-0.03cm}\_)&lt;br /&gt;
\hspace{0.05cm}(A)\hspace{0.05cm}(B)\hspace{0.05cm}(E)\hspace{0.05cm}(L)}$$&lt;br /&gt;
&lt;br /&gt;
:$$\boldsymbol{010001} \hspace{0.15cm}\boldsymbol{101011}\hspace{0.15cm} \boldsymbol{100100} \hspace{0.15cm}\boldsymbol{101110} &lt;br /&gt;
 \hspace{0.3cm}&lt;br /&gt;
\Rightarrow \hspace{0.3cm}&lt;br /&gt;
\boldsymbol{(,\hspace{-0.05cm}\_\hspace{-0.03cm}\_\rm Vorname\hspace{-0.1cm}:\hspace{-0.05cm}\_\hspace{-0.03cm}\_)&lt;br /&gt;
\hspace{0.05cm}(L)\hspace{0.05cm}(E)\hspace{0.05cm}(O)}$$&lt;br /&gt;
&lt;br /&gt;
:$$\boldsymbol{010010} \hspace{0.15cm}\boldsymbol{110100} \hspace{0.15cm}\boldsymbol{101011} \hspace{0.15cm}\boldsymbol{101100} &lt;br /&gt;
 \hspace{0.3cm}\Rightarrow \hspace{0.3cm}&lt;br /&gt;
\boldsymbol{(,\hspace{-0.05cm}\_\hspace{-0.03cm}\_\rm Wohnort\hspace{-0.1cm}:\hspace{-0.05cm}\_\hspace{-0.03cm}\_)&lt;br /&gt;
\hspace{0.05cm}(U)\hspace{0.05cm}(L)\hspace{0.05cm}(M)}&lt;br /&gt;
\hspace{0.05cm} $$&lt;br /&gt;
&lt;br /&gt;
:$$\boldsymbol{001101}&lt;br /&gt;
 \hspace{0.3cm}\Rightarrow \hspace{0.3cm}&lt;br /&gt;
({\rm end\hspace{-0.1cm}-\hspace{-0.1cm}of\hspace{-0.1cm}-\hspace{-0.1cm}line})&lt;br /&gt;
\hspace{0.05cm}$$&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &lt;br /&gt;
In this specific application the first line can be represented with&amp;amp;nbsp; $14 \cdot 6 = 84$&amp;amp;nbsp; bits. &lt;br /&gt;
*In contrast, conventional binary coding would require&amp;amp;nbsp; $39 \cdot 7 = 273$&amp;amp;nbsp; bits. &lt;br /&gt;
*Because of the lowercase letters in the text, six bits per character would not be sufficient there. &lt;br /&gt;
*For the entire text, this results in&amp;amp;nbsp; $103 \cdot 6 = 618$&amp;amp;nbsp; bits versus&amp;amp;nbsp; $196 \cdot 7 = 1372$&amp;amp;nbsp; bits. &lt;br /&gt;
*However, the code table must also be known to the recipient.}}&lt;br /&gt;
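The bit counts of this conclusion can be reproduced directly; all numbers are taken from the text above:&lt;br /&gt;

```python
# Static dictionary: 6 bits per entry (characters and whole phrases alike).
first_line_static = 14 * 6      # 14 dictionary entries for the first line
first_line_plain  = 39 * 7      # 39 characters at 7 bits (lowercase needs 7)
total_static = 103 * 6          # whole text with the static dictionary
total_plain  = 196 * 7          # whole text with conventional 7-bit coding
print(first_line_static, first_line_plain)   # 84 273
print(total_static, total_plain)             # 618 1372
```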
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
$\text{(2) Procedure with dynamic dictionary}$&lt;br /&gt;
&lt;br /&gt;
Nevertheless, all relevant compression methods do not work with static dictionaries, but with &#039;&#039;dynamic dictionaries&#039;&#039;, which are created successively only during the coding:&lt;br /&gt;
*Such procedures are flexible and do not have to be adapted to the application.&amp;amp;nbsp; One speaks of &#039;&#039;universal source coding procedures&#039;&#039;.&lt;br /&gt;
*A single pass is sufficient, whereas with a static dictionary the file must first be analyzed before the encoding process.&lt;br /&gt;
*At the sink, the dynamic dictionary is generated in the same way as at the source.&amp;amp;nbsp; This eliminates the need to transfer the dictionary.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2926__Inf_T_2_2_S1b_neu.png|frame|Extract from the hexdump of a natural image in BMP format]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp; &lt;br /&gt;
The graphic shows a small section of&amp;amp;nbsp; $80$&amp;amp;nbsp; bytes of a&amp;amp;nbsp; [[Digital_Signal_Transmission/Applications for Multimedia Files#Pictures_in_BMP.E2.80.93Format_.281.29|BMP file]]&amp;amp;nbsp; in hexadecimal representation.&amp;amp;nbsp; It is the uncompressed representation of a natural picture.&lt;br /&gt;
&lt;br /&gt;
*You can see that in this small section of a landscape image the bytes&amp;amp;nbsp; $\rm FF$,&amp;amp;nbsp; $\rm 55$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm 47$&amp;amp;nbsp; occur very frequently. &lt;br /&gt;
*Data compression is therefore promising. &lt;br /&gt;
*But since other parts of the&amp;amp;nbsp; $\text{4 MByte}$ file or other byte combinations dominate in other image contents, the use of a static dictionary would not be appropriate here.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2927__Inf_T_2_2_S1c_GANZ_neu.png|right|frame|Possible encoding of a simple graphic]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp; &lt;br /&gt;
For an artificially created graphic, for example a form, you could work with a static dictionary. &lt;br /&gt;
&lt;br /&gt;
We are looking at a b/w image with&amp;amp;nbsp; $27 × 27$&amp;amp;nbsp; pixels, where the mapping &amp;quot;black&amp;quot; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;&#039;0&#039;&#039;&#039;&amp;amp;nbsp; and &amp;quot;white&amp;quot; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;&#039;1&#039;&#039;&#039;&amp;amp;nbsp; has been agreed upon.&lt;br /&gt;
&lt;br /&gt;
*At the top (black marker) each line is described by&amp;amp;nbsp; $27$&amp;amp;nbsp; zeros.&lt;br /&gt;
*In the middle (blue marking), three zeros and three ones always alternate.&lt;br /&gt;
*At the bottom (red marking), each line consists of&amp;amp;nbsp; $25$&amp;amp;nbsp; ones bounded by two zeros.}}&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==LZ77 - the basic form of the Lempel-Ziv-algorithms ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The most important procedures for data compression with a dynamic dictionary go back to&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Abraham_Lempel Abraham Lempel]&amp;amp;nbsp; and&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Jacob_Ziv Jacob Ziv].&amp;amp;nbsp; The entire Lempel-Ziv family&amp;amp;nbsp; (in the following briefly: &amp;amp;nbsp; LZ procedures)&amp;amp;nbsp; can be characterized as follows:&lt;br /&gt;
*Lempel-Ziv methods use the fact that often whole words, or at least parts of them, occur several times in a text.&amp;amp;nbsp; One collects all word fragments, which are also called&amp;amp;nbsp; &#039;&#039;phrases&#039;&#039;&amp;amp;nbsp; in a sufficiently large dictionary.&lt;br /&gt;
*Contrary to the entropy coding developed before (by Shannon and Huffman), the frequency of single characters or character strings is not the basis of the compression here, so that the LZ procedures can be applied even without knowledge of the source statistics.&lt;br /&gt;
*LZ compression accordingly manages with a single pass and also the source symbol range&amp;amp;nbsp; $M$&amp;amp;nbsp; and the symbol set&amp;amp;nbsp; $\{q_μ\}$&amp;amp;nbsp; with&amp;amp;nbsp; $μ = 1$, ... , $M$&amp;amp;nbsp; does not have to be known.&amp;amp;nbsp; This is called &#039;&#039;Universal Source Coding&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We first look at the Lempel-Ziv algorithm in its original form from 1977, known as&amp;amp;nbsp; [https://en.wikipedia.org/wiki/LZ77_and_LZ78#LZ77 LZ77]: &lt;br /&gt;
*This works with a window that is successively moved over the text;&amp;amp;nbsp; one also speaks of a&amp;amp;nbsp; &#039;&#039;sliding window&#039;&#039;. &lt;br /&gt;
*The window size&amp;amp;nbsp; $G$&amp;amp;nbsp; is an important parameter that decisively influences the compression result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2426__Inf_T_2_2_S2a_neu.png|center|frame|Sliding window with LZ77 compression]]&lt;br /&gt;
&lt;br /&gt;
The graphic shows an example of the&amp;amp;nbsp; &#039;&#039;sliding window&#039;&#039;.&amp;amp;nbsp; This is divided into&lt;br /&gt;
*the preview buffer&amp;amp;nbsp; $($blue background),&amp;amp;nbsp; and&lt;br /&gt;
*the search buffer&amp;amp;nbsp; $($red background, with the positions&amp;amp;nbsp; $P = 0$, ... , $7$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; window size&amp;amp;nbsp; $G = 8)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The edited text consists of the four words&amp;amp;nbsp; &#039;&#039;&#039;Miss&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;Mission&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;Mississippi&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;Mistral&#039;&#039;&#039;, each separated by a hyphen.&amp;amp;nbsp; At the time in question the preview buffer contains&amp;amp;nbsp; &#039;&#039;&#039;Mississi&#039;&#039;&#039;.&lt;br /&gt;
*Search the search buffer for the best match &amp;amp;nbsp; ⇒ &amp;amp;nbsp; the string with the maximum match length&amp;amp;nbsp; $L$.&amp;amp;nbsp; Here it is found at position&amp;amp;nbsp; $P = 7$&amp;amp;nbsp; with length&amp;amp;nbsp; $L = 5$: &amp;amp;nbsp; &#039;&#039;&#039;Missi&#039;&#039;&#039;.&lt;br /&gt;
*This step is then expressed by the &#039;&#039;triple&#039;&#039;&amp;amp;nbsp; $(7,&amp;amp;nbsp; 5,&amp;amp;nbsp; $ &#039;&#039;&#039;s&#039;&#039;&#039;$)$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; general&amp;amp;nbsp; $(P, \ L, \ Z)$, where&amp;amp;nbsp; $Z =$&amp;amp;nbsp;&#039;&#039;&#039;s&#039;&#039;&#039;&amp;amp;nbsp; specifies the first character that no longer matches the string found in the search buffer.&lt;br /&gt;
*At the end the window is moved&amp;amp;nbsp; $L + 1 = 6$&amp;amp;nbsp; characters to the right.&amp;amp;nbsp; The preview buffer now contains&amp;amp;nbsp; &#039;&#039;&#039;sippi-Mi&#039;&#039;&#039;,&amp;amp;nbsp; the search buffer&amp;amp;nbsp; &#039;&#039;&#039;n-Missis&#039;&#039;&#039;&amp;amp;nbsp; and the encoding yields the triple&amp;amp;nbsp; $(2, 2,$&amp;amp;nbsp; &#039;&#039;&#039;p&#039;&#039;&#039;$)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the following example the LZ77 encoding algorithm is described in more detail.&amp;amp;nbsp; The decoding runs in a similar way.&lt;br /&gt;
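The encoding step described above can also be sketched in a few lines.&amp;amp;nbsp; This is a simplified sketch under the following assumptions: search buffer and preview buffer both have size&amp;amp;nbsp; $G$,&amp;amp;nbsp; the position&amp;amp;nbsp; $P = 0$&amp;amp;nbsp; denotes the most recent search buffer character, and a match may extend into the preview buffer; the exact convention of the graphic may differ:&lt;br /&gt;

```python
def lz77_encode(text, G=8):
    """LZ77 sketch: emit triples (P, L, Z). P is the match position counted
    from the right edge of the search buffer (P = 0: most recent character);
    the match may extend into the preview buffer."""
    out, pos = [], 0
    while pos < len(text):
        best_p, best_len = 0, 0
        for j in range(max(0, pos - G), pos):    # candidate match starts
            l = 0
            while (l < G - 1 and pos + l < len(text) - 1
                   and text[j + l] == text[pos + l]):
                l += 1
            if l > best_len:
                best_p, best_len = pos - 1 - j, l
        out.append((best_p, best_len, text[pos + best_len]))
        pos += best_len + 1                      # slide the window
    return out

print(lz77_encode("ABABCBCBAABCABe", G=4))
```

Applied with&amp;amp;nbsp; $G = 4$&amp;amp;nbsp; to the string of the following example, this sketch produces seven triples covering all&amp;amp;nbsp; $15$&amp;amp;nbsp; input characters.&lt;br /&gt;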
	 &lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp; &lt;br /&gt;
We consider the LZ77 encoding of the string&amp;amp;nbsp; &#039;&#039;&#039;ABABCBCBAABCABe&#039;&#039;&#039;&amp;amp;nbsp; according to the following graphic.&amp;amp;nbsp; The input sequence has the length $N = 15$.&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Further is assumed:&lt;br /&gt;
*For the characters&amp;amp;nbsp; $Z ∈ \{$ &#039;&#039;&#039;A&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;e&#039;&#039;&#039; $\}$&amp;amp;nbsp; applies,&amp;amp;nbsp; where&amp;amp;nbsp; &#039;&#039;&#039;e&#039;&#039;&#039;&amp;amp;nbsp; corresponds to the&amp;amp;nbsp; &#039;&#039;end-of-file&#039;&#039;&amp;amp;nbsp; marker&amp;amp;nbsp; (end of the input string).&lt;br /&gt;
*The sizes of the preview buffer and the search buffer are both&amp;amp;nbsp; $G = 4$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; position&amp;amp;nbsp; $P ∈ \{0, 1, 2, 3\}$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2427__Inf_T_2_2_S2b_neu.png|frame|To illustrate the LZ77 encoding]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Display of the encoding process&amp;lt;/u&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Steps 1 and 2&amp;lt;/u&amp;gt;: &amp;amp;nbsp; The characters&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;&amp;amp;nbsp; are encoded by the triples&amp;amp;nbsp; $(0, 0,&amp;amp;nbsp; $ &#039;&#039;&#039;A&#039;&#039;&#039;$)$&amp;amp;nbsp; and&amp;amp;nbsp; $(0, 0,&amp;amp;nbsp; $ &#039;&#039;&#039;B&#039;&#039;&#039;$)$, because they are not yet stored in the search buffer.&amp;amp;nbsp; The sliding window is then moved one position to the right.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step 3&amp;lt;/u&amp;gt;: &amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;&amp;amp;nbsp; is covered by the search buffer and at the same time the still unknown character&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039;&amp;amp;nbsp; is appended.&amp;amp;nbsp; After that the sliding window is moved three positions to the right.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step 4&amp;lt;/u&amp;gt;: &amp;amp;nbsp; This step shows that the matched string&amp;amp;nbsp; &#039;&#039;&#039;BCB&#039;&#039;&#039;&amp;amp;nbsp; may also extend into the preview buffer.&amp;amp;nbsp; Now the window can be moved four positions to the right.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step 5&amp;lt;/u&amp;gt;: &amp;amp;nbsp; Only&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; is found in the search buffer, and&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;&amp;amp;nbsp; becomes the new character.&amp;amp;nbsp; With a larger search buffer, however,&amp;amp;nbsp; &#039;&#039;&#039;ABC&#039;&#039;&#039;&amp;amp;nbsp; could be covered together.&amp;amp;nbsp; For this,&amp;amp;nbsp; $G ≥ 7$&amp;amp;nbsp; would be required.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step 6&amp;lt;/u&amp;gt;: &amp;amp;nbsp; Likewise, the character&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039;&amp;amp;nbsp; must be coded separately because the buffer is too small.&amp;amp;nbsp; But since&amp;amp;nbsp; &#039;&#039;&#039;CA&#039;&#039;&#039;&amp;amp;nbsp; has not occurred before,&amp;amp;nbsp; $G = 7$&amp;amp;nbsp; would not improve the compression here.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step 7&amp;lt;/u&amp;gt;: &amp;amp;nbsp; With the consideration of the end-of-file&amp;amp;nbsp; (&#039;&#039;&#039;e&#039;&#039;&#039;)&amp;amp;nbsp; together with&amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;&amp;amp;nbsp; from the search buffer, the encoding process is finished.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Before transmission, the specified triples must of course be binary coded.&amp;amp;nbsp; In this example you need&lt;br /&gt;
*for the position&amp;amp;nbsp; $P ∈ \{0, 1, 2, 3\}$&amp;amp;nbsp; two bits&amp;amp;nbsp; (yellow background in the table above),&lt;br /&gt;
*for the copy length&amp;amp;nbsp; $L$&amp;amp;nbsp; three bits&amp;amp;nbsp; (green background), so that lengths up to&amp;amp;nbsp; $L = 7$&amp;amp;nbsp; could also be represented,&lt;br /&gt;
*for each character two bits&amp;amp;nbsp; (white background),&amp;amp;nbsp; for example&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039; &amp;amp;#8594; &#039;&#039;&#039;00&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039; &amp;amp;#8594; &#039;&#039;&#039;01&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039; &amp;amp;#8594; &#039;&#039;&#039;10&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;e&#039;&#039;&#039; (&amp;quot;end-of-file&amp;quot;) &amp;amp;#8594; &#039;&#039;&#039;11&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Thus the LZ77 output sequence has a length of&amp;amp;nbsp; $7 \cdot 7 = 49$&amp;amp;nbsp; bits, while the input sequence required only&amp;amp;nbsp; $15 \cdot 2 = 30$&amp;amp;nbsp; bits.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &#039;&#039;&#039;A Lempel-Ziv compression only makes sense with large files!&#039;&#039;&#039;}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==The Lempel-Ziv-Variant LZ78 ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The LZ77 algorithm produces very inefficient output if frequent strings are repeated only at a larger distance.&amp;amp;nbsp; Such repetitions often cannot be recognized due to the limited buffer size&amp;amp;nbsp; $G$&amp;amp;nbsp; of the&amp;amp;nbsp; &#039;&#039;sliding window&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Lempel and Ziv corrected this shortcoming already one year after the release of the first version LZ77: &lt;br /&gt;
*The algorithm LZ78 uses a global dictionary for compression instead of the local one (the search buffer). &lt;br /&gt;
*Its size allows efficient compression even of phrases that last occurred long before.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 4:}$&amp;amp;nbsp; &lt;br /&gt;
To explain the LZ78 algorithm we consider the same sequence&amp;amp;nbsp; &#039;&#039;&#039;ABABCBCBAABCABe&#039;&#039;&#039;&amp;amp;nbsp; as for the LZ77-$\text{Example 3}$.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_2_2_S3_neu.png|frame|Generation of the dictionary and output at LZ78]]&lt;br /&gt;
&lt;br /&gt;
*The graphic shows&amp;amp;nbsp; (with red background)&amp;amp;nbsp; the dictionary with index&amp;amp;nbsp; $I$&amp;amp;nbsp; (in decimal and binary representation, columns 1 and 2)&amp;amp;nbsp; and the corresponding content (column 3), which is entered at coding step&amp;amp;nbsp; $i$&amp;amp;nbsp; (column 4).&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
*For LZ78,&amp;amp;nbsp; $i = I$&amp;amp;nbsp; always holds, for both encoding and decoding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*In column 5 you find the formalized code output&amp;amp;nbsp; $($Index&amp;amp;nbsp; $I$,&amp;amp;nbsp; new character&amp;amp;nbsp; $Z)$.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
*In column 6 the corresponding binary coding is given with four bits for the index and the same character assignment&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039; &amp;amp;#8594; &#039;&#039;&#039;00&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039; &amp;amp;#8594; &#039;&#039;&#039;01&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039; &amp;amp;#8594; &#039;&#039;&#039;10&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;e&#039;&#039;&#039; (&amp;quot;end-of-file&amp;quot;) &amp;amp;#8594; &#039;&#039;&#039;11&#039;&#039;&#039;&amp;amp;nbsp; as in&amp;amp;nbsp; $\text{Example 3}$.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
*At the beginning&amp;amp;nbsp; (step $\underline{i = 0}$)&amp;amp;nbsp; the dictionary&amp;amp;nbsp; (abbreviated &amp;quot;WB&amp;quot; in the graphic)&amp;amp;nbsp; is empty except for the entry&amp;amp;nbsp; &#039;&#039;&#039;ε&#039;&#039;&#039;&amp;amp;nbsp; $($empty character, not to be confused with the space character, which is not used here$)$&amp;amp;nbsp; with index&amp;amp;nbsp; $I = 0$.&lt;br /&gt;
*In step&amp;amp;nbsp; $\underline{i = 1}$&amp;amp;nbsp; there is no usable entry in the dictionary yet, and&amp;amp;nbsp; (&#039;&#039;&#039;0,&amp;amp;nbsp; A&#039;&#039;&#039;)&amp;amp;nbsp; is output&amp;amp;nbsp; (&#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; follows&amp;amp;nbsp; &#039;&#039;&#039;ε&#039;&#039;&#039;). &amp;amp;nbsp; In the dictionary, the entry&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; is made in line&amp;amp;nbsp; $I = 1$&amp;amp;nbsp; (abbreviated&amp;amp;nbsp; &#039;&#039;&#039;1: A&#039;&#039;&#039;).&lt;br /&gt;
*The procedure in the second step&amp;amp;nbsp; $(\underline{i = 2})$&amp;amp;nbsp; is analogous.&amp;amp;nbsp; The output is&amp;amp;nbsp; (&#039;&#039;&#039;0,&amp;amp;nbsp; B&#039;&#039;&#039;)&amp;amp;nbsp; and the dictionary entry is&amp;amp;nbsp; &#039;&#039;&#039;2: B&#039;&#039;&#039;.&lt;br /&gt;
*As in step&amp;amp;nbsp; $\underline{i = 3}$&amp;amp;nbsp; the entry&amp;amp;nbsp; &#039;&#039;&#039;1: A&#039;&#039;&#039;&amp;amp;nbsp; is already found, the characters&amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;&amp;amp;nbsp; can be coded together by&amp;amp;nbsp; (&#039;&#039;&#039;1, B&#039;&#039;&#039;)&amp;amp;nbsp; and the new dictionary entry&amp;amp;nbsp; &#039;&#039;&#039;3: AB&#039;&#039;&#039;&amp;amp;nbsp; is made.&lt;br /&gt;
*After the new character&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039;&amp;amp;nbsp; has been coded and entered in step&amp;amp;nbsp; $\underline{i = 4}$,&amp;amp;nbsp; the character pair&amp;amp;nbsp; &#039;&#039;&#039;BC&#039;&#039;&#039;&amp;amp;nbsp; is coded together in step&amp;amp;nbsp; $\underline{i = 5}$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; (&#039;&#039;&#039;2, C&#039;&#039;&#039;)&amp;amp;nbsp; and entered into the dictionary as&amp;amp;nbsp; &#039;&#039;&#039;5: BC&#039;&#039;&#039;.&lt;br /&gt;
*In step&amp;amp;nbsp; $\underline{i = 6}$&amp;amp;nbsp; two characters are again combined&amp;amp;nbsp; (&#039;&#039;&#039;6: BA&#039;&#039;&#039;)&amp;amp;nbsp; and in the last two steps three each, namely&amp;amp;nbsp; &#039;&#039;&#039;7: ABC&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;8: ABe&#039;&#039;&#039;. &lt;br /&gt;
*The output&amp;amp;nbsp; (3, &#039;&#039;&#039;C&#039;&#039;&#039;)&amp;amp;nbsp; in step&amp;amp;nbsp; $\underline{i = 7}$&amp;amp;nbsp; stands for&amp;amp;nbsp; &amp;quot;WB(3) + &#039;&#039;&#039;C&#039;&#039;&#039; = &#039;&#039;&#039;ABC&#039;&#039;&#039;&amp;quot; &amp;amp;nbsp; and the output&amp;amp;nbsp; (3, &#039;&#039;&#039;e&#039;&#039;&#039;)&amp;amp;nbsp; in step&amp;amp;nbsp; $\underline{i = 8}$&amp;amp;nbsp; for&amp;amp;nbsp; &#039;&#039;&#039;ABe&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this&amp;amp;nbsp; $\text{Example 4}$&amp;amp;nbsp; the LZ78 code symbol sequence thus consists of&amp;amp;nbsp; $8 \cdot 6 = 48$&amp;amp;nbsp; bits.&amp;amp;nbsp; The result is comparable to the LZ77-$\text{Example 3}$&amp;amp;nbsp; $(49$ bits$)$.}}&lt;br /&gt;
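The dictionary construction of this example can be reproduced in a few lines.&amp;amp;nbsp; A sketch; the binary representation with&amp;amp;nbsp; $4 + 2 = 6$&amp;amp;nbsp; bits per output pair is omitted:&lt;br /&gt;

```python
def lz78_encode(text):
    """LZ78 sketch: output pairs (index I, new character Z); the dictionary
    starts with only the empty phrase at index 0 and grows while encoding."""
    dictionary = {"": 0}            # index 0: empty phrase (eps in the text)
    out, phrase = [], ""
    for z in text:
        if phrase + z in dictionary:      # keep extending the current phrase
            phrase += z
        else:
            out.append((dictionary[phrase], z))
            dictionary[phrase + z] = len(dictionary)   # new dictionary entry
            phrase = ""
    if phrase:                            # flush a trailing, known phrase
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

print(lz78_encode("ABABCBCBAABCABe"))
```

Eight pairs result, in agreement with the&amp;amp;nbsp; $48$&amp;amp;nbsp; bits stated above.&lt;br /&gt;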
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; Details and improvements of LZ78 will be omitted here.&amp;amp;nbsp; We refer instead to the&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch#The_Lempel.E2.80.93Ziv.E2.80.93Welch.E2.80.93Algorithm|LZW algorithm]], which will be described on the following pages.&amp;amp;nbsp; Only this much will be said now:&lt;br /&gt;
*The index&amp;amp;nbsp; $I$&amp;amp;nbsp; is uniformly represented here with four bits, which limits the dictionary to&amp;amp;nbsp; $16$&amp;amp;nbsp; entries.&amp;amp;nbsp; Using a &#039;&#039;variable number of bits&#039;&#039;&amp;amp;nbsp; for the index bypasses this limitation and at the same time yields a better compression factor.&lt;br /&gt;
*With all LZ variants, the dictionary does not have to be transmitted; it is generated at the decoder in exactly the same way as at the coder.&amp;amp;nbsp; With LZ78, but not with LZW, the decoding also proceeds in the same way as the coding.&lt;br /&gt;
*All LZ procedures are asymptotically optimal, i.e., for infinitely long sequences the average code word length&amp;amp;nbsp; $L_{\rm M}$&amp;amp;nbsp; per source symbol equals the source entropy&amp;amp;nbsp; $H$. &lt;br /&gt;
*For short sequences, however, the deviation is considerable.&amp;amp;nbsp; More about this at&amp;amp;nbsp; [[Information_Theory/Compression_by_Lempel,_Ziv_and_Welch#Quantitative_Statements_on_asymptotic_Optimality|end of chapter]].}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==The Lempel-Ziv-Welch algorithm ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The most common variant of Lempel-Ziv compression used today was designed by&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Terry_Welch Terry Welch]&amp;amp;nbsp; and published in 1983.&amp;amp;nbsp; In the following we refer to it as the&amp;amp;nbsp; &#039;&#039;Lempel-Ziv-Welch-Algorithm&#039;&#039;, abbreviated as &amp;quot;LZW&amp;quot;. &amp;amp;nbsp; Just as LZ78 has slight advantages over LZ77&amp;amp;nbsp; (as expected, why else would the algorithm have been modified?),&amp;amp;nbsp; LZW also has more advantages than disadvantages compared to LZ78.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2430__Inf_T_2_2_S4_neu.png|center|frame|LZW encoding of the sequence&amp;amp;nbsp; &#039;&#039;&#039;ABABCBCBAABCABe&#039;&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
The graphic shows the coder output for the exemplary input sequence&amp;amp;nbsp; &#039;&#039;&#039;ABABCBCBAABCABe&#039;&#039;&#039;.&amp;amp;nbsp; On the right is the dictionary (highlighted in red), which is successively generated during LZW encoding.&amp;amp;nbsp; The differences to LZ78 can be seen in comparison to the graphic on the previous page, namely:&lt;br /&gt;
*For LZW, all characters occurring in the text are already entered into the dictionary at the beginning&amp;amp;nbsp; $(i = 0)$&amp;amp;nbsp; and assigned binary sequences, in the example with the indices&amp;amp;nbsp; $I = 0$, ... ,&amp;amp;nbsp; $I = 3$.&amp;amp;nbsp; This also means that LZW requires some knowledge of the message source, whereas LZ78 is a &amp;quot;true universal encoding&amp;quot;.&lt;br /&gt;
*For LZW, only the dictionary index&amp;amp;nbsp; $I$&amp;amp;nbsp; is transmitted for each encoding step&amp;amp;nbsp; $i$, while for LZ78 the output is the combination&amp;amp;nbsp; $(I, \ Z)$,&amp;amp;nbsp; where&amp;amp;nbsp; $Z$&amp;amp;nbsp; denotes the current new character.&amp;amp;nbsp; Due to the absence of&amp;amp;nbsp; $Z$&amp;amp;nbsp; in the code output, LZW decoding is more complicated than LZ78 decoding, as described in the section&amp;amp;nbsp; [[Information_Theory/Compression_According_to_Lempel,_Ziv_and_Welch#Decoding_of_LZW.E2.80.93Algorithm|Decoding of the LZW algorithm]].&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 5:}$&amp;amp;nbsp; For this exemplary LZW encoding, the input sequence&amp;amp;nbsp; &#039;&#039;&#039;ABABCBCBAABCABe&#039;&#039;&#039;&amp;amp;nbsp; is again assumed, as with &amp;quot;LZ77&amp;quot; and &amp;quot;LZ78&amp;quot;.&amp;amp;nbsp; The following description refers to the above graphic.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step i = 0&amp;lt;/u&amp;gt; (default): &amp;amp;nbsp; The allowed characters&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;,&amp;amp;nbsp; &#039;&#039;&#039;C&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;e&#039;&#039;&#039;&amp;amp;nbsp; (&amp;quot;end-of-file&amp;quot;) are entered into the dictionary and assigned the indices&amp;amp;nbsp; $I = 0$, ... , $I = 3$.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step i = 1&amp;lt;/u&amp;gt;: &amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; is coded by the decimal index&amp;amp;nbsp; $I = 0$&amp;amp;nbsp; and its binary representation&amp;amp;nbsp; &#039;&#039;&#039;0000&#039;&#039;&#039;&amp;amp;nbsp; is transmitted. &amp;amp;nbsp; Then the combination of the current character&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; and the following character&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;&amp;amp;nbsp; of the input sequence is stored in the dictionary under the index&amp;amp;nbsp; $I = 4$.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step i = 2&amp;lt;/u&amp;gt;: &amp;amp;nbsp; Representation of&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;&amp;amp;nbsp; by the index&amp;amp;nbsp; $I = 1$&amp;amp;nbsp; or&amp;amp;nbsp; &#039;&#039;&#039;0001&#039;&#039;&#039;&amp;amp;nbsp; (binary), as well as the dictionary entry of&amp;amp;nbsp; &#039;&#039;&#039;BA&#039;&#039;&#039;&amp;amp;nbsp; under index&amp;amp;nbsp; $I = 5$.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step i = 3&amp;lt;/u&amp;gt;: &amp;amp;nbsp; Because of the entry&amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;&amp;amp;nbsp; at time&amp;amp;nbsp; $i = 1$&amp;amp;nbsp; the index to be transmitted is&amp;amp;nbsp; $I = 4$&amp;amp;nbsp; (binary: &#039;&#039;&#039;0100&#039;&#039;&#039;).&amp;amp;nbsp; New dictionary entry of&amp;amp;nbsp; &#039;&#039;&#039;ABC&#039;&#039;&#039;&amp;amp;nbsp; under&amp;amp;nbsp; $I = 6$.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Step i = 8&amp;lt;/u&amp;gt;: &amp;amp;nbsp; Here the characters&amp;amp;nbsp; &#039;&#039;&#039;ABC&#039;&#039;&#039;&amp;amp;nbsp; are represented together by the index&amp;amp;nbsp; $I = 6$&amp;amp;nbsp; (binary: &#039;&#039;&#039;0110&#039;&#039;&#039;)&amp;amp;nbsp; and the entry for&amp;amp;nbsp; &#039;&#039;&#039;ABCA&#039;&#039;&#039;&amp;amp;nbsp; is made.&lt;br /&gt;
&lt;br /&gt;
With the encoding of&amp;amp;nbsp; &#039;&#039;&#039;e&#039;&#039;&#039;&amp;amp;nbsp; (EOF mark) the encoding process is finished after ten steps.&amp;amp;nbsp; With LZ78 only eight steps were needed.&amp;amp;nbsp; But it has to be considered:&lt;br /&gt;
*The LZW algorithm needs only&amp;amp;nbsp; $10 \cdot 4 = 40$&amp;amp;nbsp; bits versus the&amp;amp;nbsp; $8 \cdot 6 = 48$&amp;amp;nbsp; bits for LZ78.&amp;amp;nbsp; This simple calculation assumes four bits for each index representation.&lt;br /&gt;
*Both LZW and LZ78 require fewer bits&amp;amp;nbsp; $($namely &amp;amp;nbsp; $34$&amp;amp;nbsp; and &amp;amp;nbsp; $42$, respectively$)$, if one considers that at step&amp;amp;nbsp; $i = 1$&amp;amp;nbsp; the index has to be coded with only two bits&amp;amp;nbsp; $(I ≤ 3)$&amp;amp;nbsp; and that for&amp;amp;nbsp; $2 ≤ i ≤ 5$&amp;amp;nbsp; three bits are sufficient&amp;amp;nbsp; $(I ≤ 7)$.}}&lt;br /&gt;
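The LZW encoding procedure described above can be sketched in a few lines of Python.&amp;amp;nbsp; This is only an illustration, not part of the original lesson; the function name, the preassigned alphabet and the fixed four-bit index width are assumptions of this sketch:&lt;br /&gt;

```python
def lzw_encode(text, alphabet=("A", "B", "C", "e"), index_bits=4):
    """Sketch of LZW encoding: the alphabet is preassigned at step i = 0,
    and each step transmits only a fixed-width dictionary index."""
    dictionary = {ch: i for i, ch in enumerate(alphabet)}
    output = []
    pos = 0
    while pos < len(text):
        # Find the longest phrase starting at pos that is already in the dictionary.
        phrase = text[pos]
        while pos + len(phrase) < len(text) and text[pos:pos + len(phrase) + 1] in dictionary:
            phrase = text[pos:pos + len(phrase) + 1]
        output.append(format(dictionary[phrase], "0{}b".format(index_bits)))
        # Enter "phrase + next character" as a new dictionary entry.
        if pos + len(phrase) < len(text):
            dictionary[text[pos:pos + len(phrase) + 1]] = len(dictionary)
        pos += len(phrase)
    return output
```

For the input sequence&amp;amp;nbsp; &#039;&#039;&#039;ABABCBCBAABCABe&#039;&#039;&#039;&amp;amp;nbsp; this yields ten four-bit indices, i.e. 40 bits in total, in agreement with Example 5.&lt;br /&gt;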
&lt;br /&gt;
&lt;br /&gt;
The following pages describe in detail the variable bit count for index representation and the decoding of LZ78- and LZW-encoded binary sequences.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Lempel-Ziv-Coding with variable index bit length == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the most compact representation possible, we now consider only binary sources with the symbol set&amp;amp;nbsp; $\{$&#039;&#039;&#039;A&#039;&#039;&#039;, &#039;&#039;&#039;B&#039;&#039;&#039;$\}$.&amp;amp;nbsp; The terminating character&amp;amp;nbsp; &#039;&#039;&#039;end-of-file&#039;&#039;&#039;&amp;amp;nbsp; is also not considered further.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2432__Inf_T_2_2_S5_neu.png|center|frame|LZW-Coding of a binary input sequence]]&lt;br /&gt;
&lt;br /&gt;
We demonstrate the LZW coding by means of a screenshot of our interactive Flash module&amp;amp;nbsp; [[Applets:Lempel-Ziv-Welch|Lempel-Ziv-Welch&amp;amp;ndash;Algorithms]]. &lt;br /&gt;
&lt;br /&gt;
*In the first coding step&amp;amp;nbsp; $(i = 1)$,&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; is encoded as&amp;amp;nbsp; &#039;&#039;&#039;0&#039;&#039;&#039;.&amp;amp;nbsp; Afterwards the entry with index&amp;amp;nbsp; $I = 2$&amp;amp;nbsp; and content&amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;&amp;amp;nbsp; is made.&lt;br /&gt;
*Since at step&amp;amp;nbsp; $i = 1$&amp;amp;nbsp; there are only two entries in the dictionary&amp;amp;nbsp; $($&#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;$)$, one bit is sufficient.&amp;amp;nbsp; For the steps&amp;amp;nbsp; $i = 2$&amp;amp;nbsp; and&amp;amp;nbsp; $i = 3$, on the other hand, two bits are needed in each case:&amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039; &amp;amp;nbsp;⇒&amp;amp;nbsp; &#039;&#039;&#039;01&#039;&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039; &amp;amp;nbsp;⇒&amp;amp;nbsp; &#039;&#039;&#039;00&#039;&#039;&#039;.&lt;br /&gt;
*Starting at&amp;amp;nbsp; $i = 4$&amp;amp;nbsp; the index is represented with three bits, from&amp;amp;nbsp; $i = 8$&amp;amp;nbsp; with four bits and from&amp;amp;nbsp; $i = 16$&amp;amp;nbsp; with five bits.&amp;amp;nbsp; A simple rule for the respective index bit number&amp;amp;nbsp; $L(i)$&amp;amp;nbsp; can be derived.&lt;br /&gt;
*Let us finally consider the coding step&amp;amp;nbsp; $i = 18$.&amp;amp;nbsp; Here the sequence&amp;amp;nbsp; &#039;&#039;&#039;ABABB&#039;&#039;&#039;, marked in red, which was entered into the dictionary at time&amp;amp;nbsp; $i = 11$&amp;amp;nbsp; $($index&amp;amp;nbsp; $I = 13$ ⇒ &#039;&#039;&#039;1101&#039;&#039;&#039;$)$, is processed.&amp;amp;nbsp; However, because of&amp;amp;nbsp; $i ≥ 16$&amp;amp;nbsp; the coder output is now&amp;amp;nbsp; &#039;&#039;&#039;01101&#039;&#039;&#039;&amp;amp;nbsp; (green mark at the coder output).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The statements also apply to&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch#The_Lempel.E2.80.93Ziv.E2.80.93Variant_LZ78|LZ78]].&amp;amp;nbsp; That is: &amp;amp;nbsp; With LZ78, a variable index bit length results in the same improvement as with LZW.&lt;br /&gt;
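The pattern stated above (one bit for $i = 1$, two bits for $i = 2$ and $i = 3$, three bits from $i = 4$, four bits from $i = 8$, five bits from $i = 16$) can be captured by a simple rule: before step $i$ the dictionary of a binary source holds $i + 1$ entries (two preassigned plus $i - 1$ added), so $⌈\log_2(i + 1)⌉$ bits suffice.&amp;amp;nbsp; A sketch (the function name is an assumption):&lt;br /&gt;

```python
from math import ceil, log2

def index_bits(i):
    """Index bit number L(i) at coding step i for a binary source with two
    preassigned dictionary entries and no end-of-file symbol (sketch)."""
    # Before step i the dictionary holds i + 1 entries: 2 preset + (i - 1) added.
    return ceil(log2(i + 1))
```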
&lt;br /&gt;
	 	 &lt;br /&gt;
==Decoding of the LZW algorithm == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The decoder now receives the coder output of the&amp;amp;nbsp; [[Information_Theory/Compression_by_Lempel,_Ziv_and_Welch#Lempel-Ziv-Coding with variable index bit length|last page]]&amp;amp;nbsp; as its input sequence.&amp;amp;nbsp; The graphic shows that this sequence can be uniquely decoded even with variable index bit lengths. Please note:&lt;br /&gt;
&lt;br /&gt;
*The decoder knows that in the first coding step&amp;amp;nbsp; $(i = 1)$&amp;amp;nbsp; the index&amp;amp;nbsp; $I$&amp;amp;nbsp; was coded with only one bit, in the steps&amp;amp;nbsp; $i = 2$&amp;amp;nbsp; and&amp;amp;nbsp; $i = 3$&amp;amp;nbsp; with two bits, from &amp;amp;nbsp; $i = 4$&amp;amp;nbsp; with three bits, from&amp;amp;nbsp; $i = 8$&amp;amp;nbsp; with four bits, and so on.&lt;br /&gt;
*The decoder generates the same dictionary as the coder, but the dictionary entries are made one time step later. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2433__Inf_T_2_2_S6_neu.png|center|frame|LZW-Decoding of a binary input sequence]]&lt;br /&gt;
&lt;br /&gt;
*At step&amp;amp;nbsp; $\underline{i = 1}$&amp;amp;nbsp; the adjacent symbol&amp;amp;nbsp; &#039;&#039;&#039;0&#039;&#039;&#039;&amp;amp;nbsp; is decoded as&amp;amp;nbsp; &#039;&#039;&#039;A&#039;&#039;&#039;&amp;amp;nbsp;. &amp;amp;nbsp; Likewise, the following results for step&amp;amp;nbsp; $\underline{i = 2}$&amp;amp;nbsp; from the preassignment of the dictionary and the two-bit representation agreed upon for this: &amp;amp;nbsp; &#039;&#039;&#039;01&#039;&#039;&#039; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;&#039;B&#039;&#039;&#039;.&lt;br /&gt;
*The entry of the line&amp;amp;nbsp; $\underline{I = 2}$&amp;amp;nbsp; $($content: &amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;$)$&amp;amp;nbsp; of the dictionary is therefore only made at step&amp;amp;nbsp; $\underline{i = 2}$, while during the&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch#Lempel-Ziv-Coding with variable index bit length|coding process]]&amp;amp;nbsp; this could already be done at the end of step&amp;amp;nbsp; $i = 1$.&lt;br /&gt;
*Let us now consider the decoding for&amp;amp;nbsp; $\underline{i = 4}$. &amp;amp;nbsp; The index&amp;amp;nbsp; $\underline{I = 2}$&amp;amp;nbsp; returns the decoding result&amp;amp;nbsp; &#039;&#039;&#039;010&#039;&#039;&#039; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;&#039;AB&#039;&#039;&#039;&amp;amp;nbsp; and in the next step&amp;amp;nbsp; $(\underline{i = 5})$&amp;amp;nbsp; the dictionary line&amp;amp;nbsp; $\underline{I = 5}$&amp;amp;nbsp; will be filled with&amp;amp;nbsp; &#039;&#039;&#039;ABA&#039;&#039;&#039;&amp;amp;nbsp;.&lt;br /&gt;
*This time difference with respect to the dictionary entries can lead to decoding problems.&amp;amp;nbsp; For example, at step&amp;amp;nbsp; $\underline{i = 7}$&amp;amp;nbsp; there is no dictionary entry with index&amp;amp;nbsp; $\underline{I= 7}$.&lt;br /&gt;
*What is to be done in such a case&amp;amp;nbsp; $(\underline{I = i})$?&amp;amp;nbsp; One takes the result of the previous decoding step&amp;amp;nbsp; $($here: &amp;amp;nbsp; &#039;&#039;&#039;BA&#039;&#039;&#039;&amp;amp;nbsp; for&amp;amp;nbsp; $\underline{i = 6})$&amp;amp;nbsp; and appends its first character at the end again. &amp;amp;nbsp; This gives the decoding result for&amp;amp;nbsp; $\underline{i = 7}$&amp;amp;nbsp; as&amp;amp;nbsp; &#039;&#039;&#039;111&#039;&#039;&#039; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;&#039;BAB&#039;&#039;&#039;.&lt;br /&gt;
*Admittedly, it is unsatisfactory to give only a recipe.&amp;amp;nbsp; In&amp;amp;nbsp; [[Aufgaben:Aufgabe_2.4Z:_Nochmals_LZW-Codierung_und_-Decodierung|Exercise 2.4Z]]&amp;amp;nbsp; you should justify the procedure demonstrated here.&amp;amp;nbsp; We refer to the sample solution of this exercise.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With LZ78 decoding, the problem described here does not occur, because not only the index&amp;amp;nbsp; $I$&amp;amp;nbsp; but also the current character&amp;amp;nbsp; $Z$&amp;amp;nbsp; is part of the encoding result and is transmitted.&lt;br /&gt;
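The decoding rule above, including the special case $I = i$, can be sketched as follows.&amp;amp;nbsp; For brevity the sketch works on a list of integer indices rather than on the variable-width bit stream; the names are assumptions:&lt;br /&gt;

```python
def lzw_decode(indices, alphabet=("A", "B")):
    """Sketch of LZW decoding; dictionary entries are completed one step
    later than at the coder, which forces the special case I = i."""
    dictionary = {i: ch for i, ch in enumerate(alphabet)}
    prev = dictionary[indices[0]]
    output = [prev]
    for idx in indices[1:]:
        if idx in dictionary:
            phrase = dictionary[idx]
        else:
            # Special case I = i: the entry does not exist yet. Take the previous
            # decoding result and append its first character at the end again.
            phrase = prev + prev[0]
        # Complete the entry the coder already made one step earlier.
        dictionary[len(dictionary)] = prev + phrase[0]
        output.append(phrase)
        prev = phrase
    return "".join(output)
```

For instance, the index sequence [0, 2, 0] over the alphabet $\{$&#039;&#039;&#039;A&#039;&#039;&#039;, &#039;&#039;&#039;B&#039;&#039;&#039;$\}$ triggers the special case and decodes to&amp;amp;nbsp; &#039;&#039;&#039;AAAA&#039;&#039;&#039;.&lt;br /&gt;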
 	&lt;br /&gt;
 &lt;br /&gt;
==Remaining redundancy as a measure for the efficiency of encoding methods==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the rest of this chapter we assume the following prerequisites:&lt;br /&gt;
*Let the&amp;amp;nbsp; &#039;&#039;symbol range&#039;&#039;&amp;amp;nbsp; of the source&amp;amp;nbsp; $($or, in the transmission sense: &amp;amp;nbsp; the number of stages$)$&amp;amp;nbsp; be&amp;amp;nbsp; $M$, where&amp;amp;nbsp; $M$&amp;amp;nbsp; represents a power of two &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 2, \ 4, \ 8, \ 16$, ....&lt;br /&gt;
*The source entropy is&amp;amp;nbsp; $H$.&amp;amp;nbsp; If there are no statistical bonds between the symbols and if they are equally probable, then&amp;amp;nbsp; $H = H_0$, where&amp;amp;nbsp; $H_0 = \log_2 \ M$&amp;amp;nbsp; indicates the decision content.&amp;amp;nbsp; Otherwise, $H &amp;lt; H_0$ applies.&lt;br /&gt;
*A symbol sequence of length&amp;amp;nbsp; $N$&amp;amp;nbsp; is source-coded and returns a binary code sequence of length&amp;amp;nbsp; $L$.&amp;amp;nbsp; For the time being we do not make any statement about the type of source coding.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
According to the&amp;amp;nbsp; [[Information_Theory/General_Description#Source Encoding Theorem|Source Encoding Theorem]]&amp;amp;nbsp; the mean code word length&amp;amp;nbsp; $L_{\rm M}$&amp;amp;nbsp; must be greater than or equal to the source entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; (in bit/source symbol).&amp;amp;nbsp; This means&lt;br /&gt;
*for the total length of the source-encoded binary sequence:&lt;br /&gt;
:$$L \ge N \cdot H \hspace{0.05cm},$$ &lt;br /&gt;
*for the relative redundancy of the code sequence, in the following briefly called&amp;amp;nbsp; &#039;&#039;&#039;residual redundancy&#039;&#039;&#039;:&lt;br /&gt;
:$$r = \frac{L - N \cdot H}{L} \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 6:}$&amp;amp;nbsp; If there were a&amp;amp;nbsp; &#039;&#039;perfect source encoding&#039;&#039; for a redundancy-free binary source symbol sequence&amp;amp;nbsp; $(M = 2,\ p_{\rm A} = p_{\rm B} = 0.5$,&amp;amp;nbsp; without statistical bonds$)$&amp;amp;nbsp; of length&amp;amp;nbsp; $N = 10000$, the code sequence would have length&amp;amp;nbsp; $L = 10000$. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Consequence:&amp;lt;/u&amp;gt; &amp;amp;nbsp; If for a code the result&amp;amp;nbsp; $L = N$&amp;amp;nbsp; is never possible, this code is called&amp;amp;nbsp; &#039;&#039;non-perfect&#039;&#039;.&lt;br /&gt;
*Lempel-Ziv is not suitable for this redundancy-free message source.&amp;amp;nbsp; It will always hold that&amp;amp;nbsp; $L &amp;gt; N$.&amp;amp;nbsp; One can also put it quite succinctly: &amp;amp;nbsp; The perfect source encoding here is &amp;quot;no encoding at all&amp;quot;.&lt;br /&gt;
*A redundant binary source with &amp;amp;nbsp;$p_{\rm A} = 0.89$,&amp;amp;nbsp; $p_{\rm B} = 0.11$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $H = 0.5$&amp;amp;nbsp; could be represented by a perfect source encoding with &amp;amp;nbsp;$L = 5000$&amp;amp;nbsp; bits, although we cannot say here what this perfect source encoding looks like.&lt;br /&gt;
*For a quaternary source,&amp;amp;nbsp; $H &amp;gt; 1 \ \rm (bit/source symbol)$&amp;amp;nbsp; is possible, so that even with perfect source encoding&amp;amp;nbsp; $L &amp;gt; N$&amp;amp;nbsp; will always result.&amp;amp;nbsp; If the source is redundancy-free&amp;amp;nbsp; (no bonds, all&amp;amp;nbsp; $M$&amp;amp;nbsp; symbols equally probable), it has entropy&amp;amp;nbsp; $H= 2 \ \rm (bit/source symbol)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For all these examples of perfect source encoding, the relative redundancy of the code sequence (residual redundancy) is&amp;amp;nbsp; $r = 0$. That is: &amp;amp;nbsp; The zeros and ones are equally probable and there are no statistical bonds between the individual binary symbols.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The problem is: &amp;amp;nbsp; At finite sequence length&amp;amp;nbsp; $N$&amp;amp;nbsp; there is no perfect source code&#039;&#039;&#039;&amp;amp;nbsp;!}} 	&lt;br /&gt;
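The two relations above (total length and residual redundancy) are easy to evaluate numerically.&amp;amp;nbsp; The following sketch reproduces the figures of Example 6; the function names are assumptions:&lt;br /&gt;

```python
from math import log2

def residual_redundancy(L, N, H):
    """Relative redundancy r = (L - N*H) / L of a binary code sequence of
    length L representing N source symbols of entropy H (bit/source symbol)."""
    return (L - N * H) / L

def binary_entropy(p):
    """Entropy of a memoryless binary source with probabilities p and 1 - p."""
    return p * log2(1 / p) + (1 - p) * log2(1 / (1 - p))
```

With $p_{\rm A} = 0.89$ one obtains $H ≈ 0.5$, and a perfect encoding of $N = 10000$ symbols into $L = 5000$ bits would give $r = 0$.&lt;br /&gt;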
&lt;br /&gt;
&lt;br /&gt;
==Efficiency of Lempel-Ziv encoding ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
From the Lempel-Ziv algorithms we know (and can even prove this statement) that they are&amp;amp;nbsp; &#039;&#039;&#039;asymptotically optimal&#039;&#039;&#039;.&amp;amp;nbsp; This means that the relative redundancy of the code symbol sequence&amp;amp;nbsp; (here written as a function of the source symbol sequence length&amp;amp;nbsp; $N$) &lt;br /&gt;
 &lt;br /&gt;
:$$r(N) = \frac{L(N) - N \cdot H}{L(N)}= 1 - \frac{ N \cdot H}{L(N)}\hspace{0.05cm}$$&lt;br /&gt;
&lt;br /&gt;
for large&amp;amp;nbsp; $N$&amp;amp;nbsp; returns the limit value &amp;quot;zero&amp;quot;:&lt;br /&gt;
 &lt;br /&gt;
:$$\lim_{N \rightarrow \infty}r(N) = 0 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
But what does the property&amp;amp;nbsp; &amp;quot;asymptotically optimal&amp;quot;&amp;amp;nbsp; mean for practical sequence lengths?&amp;amp;nbsp; Not too much, as the following screenshot of our simulation tool&amp;amp;nbsp; [[Applets:Lempel-Ziv-Welch|Lempel-Ziv-Algorithms]]&amp;amp;nbsp; shows.&amp;amp;nbsp; All curves apply exactly only to the&amp;amp;nbsp; [[Information_Theory/Compression_by_Lempel,_Ziv_and_Welch#The_Lempel.E2.80.93Ziv.E2.80.93Welch.E2.80.93Algorithm|LZW algorithm]].&amp;amp;nbsp; However, the results for&amp;amp;nbsp; [[Information_Theory/Compression According to Lempel, Ziv and Welch#LZ77 - the basic form of the Lempel-Ziv-algorithms|LZ77]]&amp;amp;nbsp; and&amp;amp;nbsp; [[Information_Theory/Compression_by_Lempel,_Ziv_and_Welch#The_Lempel.E2.80.93Ziv.E2.80.93Variant_LZ78|LZ78]]&amp;amp;nbsp; are only slightly worse.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The three graphs show for different message sources the dependence of the following quantities on the source symbol sequence length&amp;amp;nbsp; $N$:&lt;br /&gt;
*the required number of bits&amp;amp;nbsp; $N \cdot \log_2 M$&amp;amp;nbsp; without source coding&amp;amp;nbsp; (black curves),&lt;br /&gt;
*the required number of bits&amp;amp;nbsp; $N \cdot H$&amp;amp;nbsp; with perfect source encoding&amp;amp;nbsp; (grey dashed),&lt;br /&gt;
*the required number of bits&amp;amp;nbsp; $L(N)$&amp;amp;nbsp; for LZW coding&amp;amp;nbsp; (red curves after averaging),&lt;br /&gt;
*the relative redundancy &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; residual redundancy &amp;amp;nbsp;$r(N)$&amp;amp;nbsp; in case of LZW coding (green curves).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2450__Inf_T_2_2_S7b_neu.png|frame|Example curves of&amp;amp;nbsp; $L(N)$&amp;amp;nbsp; and&amp;amp;nbsp; $r(N)$]]&lt;br /&gt;
&lt;br /&gt;
$\underline{\text{Redundant binary source (upper graphic)} }$ &lt;br /&gt;
:$$M = 2, \hspace{0.1cm}p_{\rm A} = 0.89,\hspace{0.1cm} p_{\rm B} = 0.11$$&lt;br /&gt;
:$$\Rightarrow \hspace{0.15cm} H = 0.5 \ \rm bit/source symbol\text{:}$$ &lt;br /&gt;
*The black and grey curves are true straight lines (not only for this parameter set).&lt;br /&gt;
*The red curve&amp;amp;nbsp; $L(N)$&amp;amp;nbsp; is slightly curved&amp;amp;nbsp; (difficult to see with the naked eye).&lt;br /&gt;
*Because of this curvature of&amp;amp;nbsp; $L(N)$&amp;amp;nbsp; the residual redundancy (green curve) drops slightly.&lt;br /&gt;
:$$r(N) = 1 - 0.5 \cdot N/L(N).$$ &lt;br /&gt;
*The numerical values can be read off the graphic: &lt;br /&gt;
:$$L(N = 10000) = 6800,\hspace{0.5cm}&lt;br /&gt;
r(N = 10000) = 26.5\%.$$&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
$\underline{\text{Redundancy-free binary source (middle graphic)} }$ &lt;br /&gt;
:$$M = 2,\hspace{0.1cm} p_{\rm A} = p_{\rm B} = 0.5$$ &lt;br /&gt;
:$$\Rightarrow \hspace{0.15cm} H = 1 \ \rm bit/source symbol\text{:}$$&lt;br /&gt;
* Here the grey and the black straight line coincide and the slightly curved red curve lies above it, as expected. &lt;br /&gt;
*Although the LZW coding brings a deterioration here, as can be seen from&amp;amp;nbsp; $L(N = 10000) = 12330$, the relative redundancy is smaller than in the upper graphic: &lt;br /&gt;
:$$r(N = 10000) = 18.9\%.$$&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
$\underline{\text{Redundant quaternary source (lower graphic)} }$&lt;br /&gt;
:$$M = 4,\hspace{0.1cm}p_{\rm A} = 0.7,\hspace{0.1cm} p_{\rm B} = p_{\rm C} = p_{\rm D} = 0.1$$&lt;br /&gt;
:$$ \Rightarrow \hspace{0.15cm} H \approx 1.357 \ \rm bit/source symbol\text{:}$$&lt;br /&gt;
* Without source coding,&amp;amp;nbsp; $N = 10000$&amp;amp;nbsp; quaternary symbols would require&amp;amp;nbsp; $20000$&amp;amp;nbsp; binary symbols (bits)&amp;amp;nbsp; (black curve).&lt;br /&gt;
* With perfect source encoding, this would result in&amp;amp;nbsp; $N \cdot H= 13570$&amp;amp;nbsp; bits&amp;amp;nbsp; (grey curve).&lt;br /&gt;
* With (imperfect) LZW encoding one needs&amp;amp;nbsp; $L(N = 10000) ≈ 16485$&amp;amp;nbsp; bits&amp;amp;nbsp; (red curve). &lt;br /&gt;
*The relative redundancy here is&amp;amp;nbsp; $r(N = 10000) ≈17.7\%$&amp;amp;nbsp; (green curve).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Quantitative statements on asymptotic optimality==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The results on the last page have shown that the relative residual redundancy&amp;amp;nbsp; $r(N = 10000)$&amp;amp;nbsp; is significantly greater than the theoretically promised value&amp;amp;nbsp; $r(N \to \infty) = 0$. &lt;br /&gt;
&lt;br /&gt;
This practically relevant result shall now be clarified using the example of the redundant binary source with&amp;amp;nbsp; $H = 0.5 \ \rm bit/source symbol$&amp;amp;nbsp; according to the upper graphic on the last page. However, we now consider source symbol sequence lengths between&amp;amp;nbsp; $N=10^3$&amp;amp;nbsp; and&amp;amp;nbsp; $N=10^{12}$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2443__Inf_T_2_2_S8_neu.png|frame|LZW residual redundancy&amp;amp;nbsp; $r(N)$&amp;amp;nbsp; for the redundant binary source&amp;amp;nbsp; $(H = 0.5)$ ]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 7:}$&amp;amp;nbsp; The graphic shows simulations with&amp;amp;nbsp; $N = 1000$&amp;amp;nbsp; binary symbols. &lt;br /&gt;
*After averaging over ten series of experiments, the result is&amp;amp;nbsp; $r(N = 1000) ≈ 35.2\%$. &lt;br /&gt;
*Below the yellow dot&amp;amp;nbsp; $($in the example at&amp;amp;nbsp; $N ≈ 150)$&amp;amp;nbsp; the LZW algorithm even brings a deterioration. &lt;br /&gt;
*In this range&amp;amp;nbsp; $L &amp;gt; N$&amp;amp;nbsp; holds, that is: &amp;amp;nbsp; &amp;lt;br&amp;gt;the red curve lies above the black one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the table below, the results for this redundant binary source&amp;amp;nbsp; $(H = 0.5)$&amp;amp;nbsp; are summarized.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_2_2_S8b_neu.png|right|frame|Some numerical values for the efficiency of LZW coding]]&lt;br /&gt;
&lt;br /&gt;
*The compression factor&amp;amp;nbsp; $K(N)= L(N)/N$&amp;amp;nbsp; decreases only very slowly with increasing&amp;amp;nbsp; $N$&amp;amp;nbsp; (line 3).&lt;br /&gt;
*In line 4 the residual redundancy&amp;amp;nbsp; $r(N)$&amp;amp;nbsp; is given for different lengths between&amp;amp;nbsp; $N =1000$&amp;amp;nbsp; and&amp;amp;nbsp; $N =50000$.&lt;br /&gt;
*According to the relevant literature, this residual redundancy decreases proportionally to&amp;amp;nbsp; $\big[\hspace{0.05cm}\lg(N)\hspace{0.05cm}\big]^{-1}$.&lt;br /&gt;
*In line 5 the results of an empirical formula are entered $($fitted at $N = 10000)$:&lt;br /&gt;
 &lt;br /&gt;
:$$r\hspace{0.05cm}&#039;(N) = \frac{A}{ {\rm lg}\hspace{0.1cm}(N)}\hspace{0.5cm}{\rm with}$$&lt;br /&gt;
:$$ A = {r(N = 10000)} \cdot { {\rm lg}\hspace{0.1cm}10000} = 0.265 \cdot 4 = 1.06&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
*One can see the good agreement between our simulation results&amp;amp;nbsp; $r(N)$&amp;amp;nbsp; and the rule of thumb&amp;amp;nbsp; $r\hspace{0.05cm}′(N)$. &lt;br /&gt;
*One can also see that the residual redundancy of the LZW algorithm for&amp;amp;nbsp; $N = 10^{12}$&amp;amp;nbsp; is still&amp;amp;nbsp; $8.8\%$.&lt;br /&gt;
*For other sources with other&amp;amp;nbsp; $A$&amp;amp;ndash;values one obtains similar results.&amp;amp;nbsp; The principal behavior remains the same.&amp;amp;nbsp; See also&amp;amp;nbsp; [[Aufgaben:Aufgabe_2.5:_Restredundanz_bei_LZW-Codierung|Exercise 2.5]]&amp;amp;nbsp; and&amp;amp;nbsp; [[Aufgaben:Aufgabe_2.5Z:_Komprimierungsfaktor_vs._Restredundanz|Exercise 2.5Z]].}}&lt;br /&gt;
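The empirical rule of thumb from the table can be evaluated directly.&amp;amp;nbsp; A sketch; the constant $A = 1.06$ is the fit at $N = 10000$ stated in the text:&lt;br /&gt;

```python
from math import log10

# Fit of the rule of thumb r'(N) = A / lg(N) at N = 10^4, where r = 0.265:
A = 0.265 * log10(10**4)  # = 1.06

def r_approx(N):
    """Approximate LZW residual redundancy for the binary source with H = 0.5."""
    return A / log10(N)
```

Evaluating this at $N = 10^{12}$ gives about $0.088$, i.e. the residual redundancy is still roughly $8.8\%$ even for very long sequences.&lt;br /&gt;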
&lt;br /&gt;
	 &lt;br /&gt;
==Exercises for the chapter ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:2.3 Zur LZ78-Komprimierung|Exercise 2.3: On LZ78 Compression]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:2.3Z Zur LZ77-Codierung|Exercise 2.3Z: On LZ77 Coding]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:2.4 Zum LZW-Algorithmus|Exercise 2.4: On the LZW Algorithm]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:2.4Z Nochmals LZW-Codierung und -Decodierung|Exercise 2.4Z: LZW Coding and Decoding Again]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Aufgabe_2.5:_Restredundanz_bei_LZW-Codierung|Exercise 2.5: Relative Residual Redundancy with LZW Coding]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Aufgabe_2.5Z:_Komprimierungsfaktor_vs._Restredundanz|Exercise 2.5Z: Compression Factor vs. Residual Redundancy]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Applets:Bandbegrenzung&amp;diff=35079</id>
		<title>Applets:Bandbegrenzung</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Applets:Bandbegrenzung&amp;diff=35079"/>
		<updated>2020-11-01T20:11:22Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Applets:Bandbegrenzung to Applets:Bandwidth Limitation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Applets:Bandwidth Limitation]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Applets:Bandwidth_Limitation&amp;diff=35078</id>
		<title>Applets:Bandwidth Limitation</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Applets:Bandwidth_Limitation&amp;diff=35078"/>
		<updated>2020-11-01T20:11:22Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Applets:Bandbegrenzung to Applets:Bandwidth Limitation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{OldFlashComments}}&lt;br /&gt;
&lt;br /&gt;
{{OldFlash|Z_ID104/bandbegrenzung}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Sources_with_Memory&amp;diff=35019</id>
		<title>Information Theory/Discrete Sources with Memory</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Sources_with_Memory&amp;diff=35019"/>
		<updated>2020-10-28T23:04:52Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Entropie wertdiskreter Nachrichtenquellen&lt;br /&gt;
|Vorherige Seite=Gedächtnislose Nachrichtenquellen&lt;br /&gt;
|Nächste Seite=Natürliche wertdiskrete Nachrichtenquellen&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==A simple introductory example ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
At the&amp;amp;nbsp; [[Information_Theory/Discrete Memoryless Sources#Model_and_requirements|beginning of the first chapter]]&amp;amp;nbsp; we considered a memoryless message source with the symbol set&amp;amp;nbsp; $\rm \{A, \ B, \ C, \ D\}$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 4$.&amp;amp;nbsp; An exemplary symbol sequence is shown again in the following figure as source&amp;amp;nbsp; $\rm Q1$. &lt;br /&gt;
&lt;br /&gt;
With the symbol probabilities&amp;amp;nbsp; $p_{\rm A} = 0.4 \hspace{0.05cm},\hspace{0.2cm}p_{\rm B} = 0.3 \hspace{0.05cm},\hspace{0.2cm}p_{\rm C} = 0.2 \hspace{0.05cm},\hspace{0.2cm} &lt;br /&gt;
p_{\rm D} = 0.1\hspace{0.05cm}$&amp;amp;nbsp; the entropy is&lt;br /&gt;
 &lt;br /&gt;
:$$H \hspace{-0.05cm}= 0.4 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.4} + 0.3 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.2} + 0.1 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.1} \approx 1.84 \hspace{0.05cm}{\rm bit/symbol}&lt;br /&gt;
 \hspace{0.01cm}.$$&lt;br /&gt;
&lt;br /&gt;
Due to the unequal symbol probabilities, the entropy is smaller than the decision content&amp;amp;nbsp; $H_0 = \log_2 M = 2 \hspace{0.05cm} \rm bit/symbol$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2238__Inf_T_1_2_S1a_neu.png|right|frame|Quaternary message source without/with memory]]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The source&amp;amp;nbsp; $\rm Q2$&amp;amp;nbsp; is almost identical to the source&amp;amp;nbsp; $\rm Q1$, except that each individual symbol is output not only once, but twice in a row:&amp;amp;nbsp; $\rm A ⇒ AA$,&amp;amp;nbsp; $\rm B ⇒ BB$,&amp;amp;nbsp; and so on. &lt;br /&gt;
*It is obvious that&amp;amp;nbsp; $\rm Q2$&amp;amp;nbsp; has a smaller entropy (uncertainty) than&amp;amp;nbsp; $\rm Q1$. &lt;br /&gt;
*Because of the simple repetition code, its entropy&amp;amp;nbsp; &lt;br /&gt;
:$$H = 1.84/2 = 0.92 \hspace{0.05cm} \rm bit/symbol$$&lt;br /&gt;
:is only half as large, although the occurrence probabilities have not changed.}}&lt;br /&gt;
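The figures in this example can be verified with a short Python sketch; the helper name `entropy` is my own choice for this illustration, not part of the text:

```python
import math

def entropy(probs):
    """Entropy H = sum p * log2(1/p) in bit/symbol (terms with p = 0 contribute 0)."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Source Q1: symbol probabilities of A, B, C, D
H_Q1 = entropy([0.4, 0.3, 0.2, 0.1])   # ~1.84 bit/symbol
H0 = math.log2(4)                      # decision content: 2 bit/symbol

# Source Q2: every symbol is output twice in a row,
# so per symbol only half the information remains
H_Q2 = H_Q1 / 2                        # ~0.92 bit/symbol
```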
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp;&lt;br /&gt;
This example shows:&lt;br /&gt;
*The entropy of a source with memory is smaller than the entropy of a memoryless source with the same symbol probabilities.&lt;br /&gt;
*The statistical dependencies within the sequence&amp;amp;nbsp; $〈 q_ν 〉$&amp;amp;nbsp; now have to be considered, &lt;br /&gt;
*namely the dependence of the symbol&amp;amp;nbsp; $q_ν$&amp;amp;nbsp; on the predecessor symbols&amp;amp;nbsp; $q_{ν-1}$,&amp;amp;nbsp; $q_{ν-2}$, ... }}&lt;br /&gt;
 &lt;br /&gt;
	 &lt;br /&gt;
== Entropy with respect to two-tuples == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We continue to look at the source symbol sequence&amp;amp;nbsp; $〈 q_1, \hspace{0.05cm} q_2,\hspace{0.05cm}\text{ ...} \hspace{0.05cm}, q_{ν-1}, \hspace{0.05cm}q_ν, \hspace{0.05cm}\hspace{0.05cm}q_{ν+1}, \hspace{0.05cm}\text{...} \hspace{0.05cm}〉$&amp;amp;nbsp; and now consider the entropy of two successive source symbols. &lt;br /&gt;
*All source symbols&amp;amp;nbsp; $q_ν$&amp;amp;nbsp; are taken from an alphabet with symbol set size&amp;amp;nbsp; $M$, so that for the combination&amp;amp;nbsp; $(q_ν, \hspace{0.05cm}q_{ν+1})$&amp;amp;nbsp; there are exactly&amp;amp;nbsp; $M^2$&amp;amp;nbsp; possible symbol pairs with the following [[Theory_of_Stochastic_Signals/Set Theory Basics#Intersection|combined probabilities]]:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm Pr}(q_{\nu}\cap q_{\nu+1})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*From this, the&amp;amp;nbsp; &#039;&#039;compound entropy&#039;&#039;&amp;amp;nbsp; of an ordered pair can be computed:&lt;br /&gt;
 &lt;br /&gt;
:$$H_2\hspace{0.05cm}&#039; = \sum_{q_{\nu}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm}q_{\mu}\hspace{0.01cm} \}} \sum_{q_{\nu+1}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm} q_{\mu}\hspace{0.01cm} \}}\hspace{-0.1cm}{\rm Pr}(q_{\nu}\cap q_{\nu+1}) \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{{\rm Pr}(q_{\nu}\cap q_{\nu+1})} \hspace{0.4cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/two\hspace{0.05cm}tuple})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
:The index &amp;quot;2&amp;quot; symbolizes that the entropy thus calculated refers to two-tuples. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To get the average information content per symbol,&amp;amp;nbsp; $H_2\hspace{0.05cm}&#039;$&amp;amp;nbsp; must be halved:&lt;br /&gt;
 &lt;br /&gt;
:$$H_2 = {H_2\hspace{0.05cm}&#039;}/{2}  \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
In order to achieve a consistent nomenclature, we now label the entropy defined in chapter&amp;amp;nbsp; [[Information_Theory/Discrete Memoryless Sources#Model_and_Prerequisites|Memoryless Message Sources]]&amp;amp;nbsp; with&amp;amp;nbsp; $H_1$:&lt;br /&gt;
&lt;br /&gt;
:$$H_1 = \sum_{q_{\nu}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm}q_{\mu}\hspace{0.01cm} \}} {\rm Pr}(q_{\nu}) \cdot {\rm log_2}\hspace{0.1cm}\frac {1}{{\rm Pr}(q_{\nu})} \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The index &amp;quot;1&amp;quot; is supposed to indicate that&amp;amp;nbsp; $H_1$&amp;amp;nbsp; considers only the symbol probabilities and not the statistical dependencies between symbols within the sequence.&amp;amp;nbsp; With the decision content&amp;amp;nbsp; $H_0 = \log_2 \ M$&amp;amp;nbsp; the following ordering results:&lt;br /&gt;
 &lt;br /&gt;
$$H_0 \ge H_1 \ge H_2&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
If the sequence elements are statistically independent,&amp;amp;nbsp; $H_2 = H_1$.&lt;br /&gt;
&lt;br /&gt;
The previous equations each represent an ensemble average. &amp;amp;nbsp; The probabilities required for the calculation of&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; can, however, also be determined as time averages from a very long sequence or, more precisely, approximated by the corresponding&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/From Random Experiment to Random Variable#Bernoulli&#039;s_Law_of_Large_Numbers|relative frequencies]].&lt;br /&gt;
&lt;br /&gt;
Let us now illustrate the calculation of entropy approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; with three examples.&lt;br /&gt;
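The time-averaging procedure just described can be sketched in a few lines of Python; the function names `H1_estimate` and `H2_estimate` are hypothetical, chosen for this illustration:

```python
import math
from collections import Counter

def H1_estimate(seq):
    """First entropy approximation from relative symbol frequencies (bit/symbol)."""
    counts = Counter(seq)
    n = len(seq)
    return sum(c / n * math.log2(n / c) for c in counts.values())

def H2_estimate(seq):
    """Second entropy approximation from relative two-tuple frequencies (bit/symbol).
    N symbols yield only N - 1 overlapping two-tuples."""
    pairs = Counter(zip(seq, seq[1:]))
    n = len(seq) - 1
    H2_prime = sum(c / n * math.log2(n / c) for c in pairs.values())
    return H2_prime / 2   # bit/two-tuple -> bit/symbol
```

Applied to a short sequence such as the one in the following example, these estimates suffer from exactly the statistical inaccuracy discussed there.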
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
We will first look at the sequence&amp;amp;nbsp; $〈 q_1$, ... , $q_{50} \rangle $&amp;amp;nbsp; according to the graphic, where the sequence elements&amp;amp;nbsp; $q_ν$&amp;amp;nbsp; originate from the alphabet $\rm \{A, \ B, \ C \}$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; the symbol set size is&amp;amp;nbsp; $M = 3$.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_2_S2_vers2.png|center|frame|Ternary symbol sequence and formation of two-tuples]]&lt;br /&gt;
&lt;br /&gt;
By time averaging over the&amp;amp;nbsp; $50$&amp;amp;nbsp; symbols one gets the symbol probabilities&amp;amp;nbsp; $p_{\rm A} ≈ 0.5$, &amp;amp;nbsp; $p_{\rm B} ≈ 0.3$ &amp;amp;nbsp;and&amp;amp;nbsp; $p_{\rm C} ≈ 0.2$, with which one can calculate the first order entropy approximation:&lt;br /&gt;
 &lt;br /&gt;
:$$H_1 = 0.5 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.5} + 0.3 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.2}  \approx \, 1.486 \,{\rm bit/symbol}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
Since the symbols are not equally probable,&amp;amp;nbsp; $H_1 &amp;lt; H_0 = 1.585 \hspace{0.05cm} \rm bit/symbol$.&amp;amp;nbsp; As an approximation for the probabilities of two-tuples one obtains from the above sequence:&lt;br /&gt;
 &lt;br /&gt;
:$$\begin{align*}p_{\rm AA} \hspace{-0.1cm}&amp;amp; = \hspace{-0.1cm} 14/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm AB} = 8/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm AC} = 3/49\hspace{0.05cm}, \\\&lt;br /&gt;
 p_{\rm BA} \hspace{-0.1cm}&amp;amp; = \hspace{0.07cm} 7/49\hspace{0.05cm}, \hspace{0.25cm}p_{\rm BB} = 2/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm BC} = 5/49\hspace{0.05cm}, \\\&lt;br /&gt;
 p_{\rm CA} \hspace{-0.1cm}&amp;amp; = \hspace{0.07cm} 4/49\hspace{0.05cm}, \hspace{0.25cm}p_{\rm CB} = 5/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm CC} = 1/49\hspace{0.05cm}.\end{align*}$$&lt;br /&gt;
&lt;br /&gt;
Please note that from the&amp;amp;nbsp; $50$&amp;amp;nbsp; sequence elements only&amp;amp;nbsp; $49$&amp;amp;nbsp; two-tuples&amp;amp;nbsp; $(\rm AA$, ... , $\rm CC)$&amp;amp;nbsp; can be formed, which are marked in different colors in the graphic.&lt;br /&gt;
&lt;br /&gt;
*The entropy approximation&amp;amp;nbsp; $H_2$&amp;amp;nbsp; should actually be equal to&amp;amp;nbsp; $H_1$&amp;amp;nbsp; since the given symbol sequence comes from a memoryless source. &lt;br /&gt;
*Because of the short sequence length&amp;amp;nbsp; $N = 50$&amp;amp;nbsp; and the resulting statistical inaccuracy, however, a smaller value results: &amp;amp;nbsp; &lt;br /&gt;
:$$H_2 ≈ 1.39\hspace{0.05cm} \rm bit/symbol.$$}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp;&lt;br /&gt;
Now let us consider a&amp;amp;nbsp; &#039;&#039;memoryless&amp;amp;nbsp; binary source&#039;&#039;&amp;amp;nbsp; with equally probable symbols, i.e.&amp;amp;nbsp; $p_{\rm A} = p_{\rm B} = 1/2$.&amp;amp;nbsp; The first twenty sequence elements are &amp;amp;nbsp; $〈 q_ν 〉 =\rm BBAAABAABBBBBAAAABAB$ ...&lt;br /&gt;
*Because of the equally probable binary symbols &amp;amp;nbsp; $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$.&lt;br /&gt;
*The compound probability&amp;amp;nbsp; $p_{\rm AB}$&amp;amp;nbsp; of the combination&amp;amp;nbsp; $\rm AB$&amp;amp;nbsp; is equal to&amp;amp;nbsp; $p_{\rm A} \cdot p_{\rm B} = 1/4$.&amp;amp;nbsp; Likewise $p_{\rm AA} = p_{\rm BB} = p_{\rm BA} = 1/4$. &lt;br /&gt;
*This gives for the second entropy approximation&lt;br /&gt;
 &lt;br /&gt;
:$$H_2 = {1}/{2} \cdot \big [ {1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 + {1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 +{1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 +{1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 \big ] = 1 \,{\rm bit/symbol} = H_1 = H_0&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note&#039;&#039;: &amp;amp;nbsp; Due to the short length of the given sequence the probabilities are slightly different:&amp;amp;nbsp; $p_{\rm AA} = 6/19$,&amp;amp;nbsp; $p_{\rm BB} = 5/19$,&amp;amp;nbsp; $p_{\rm AB} = p_{\rm BA} = 4/19$.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 4:}$&amp;amp;nbsp;&lt;br /&gt;
The third sequence considered here results from the binary sequence of&amp;amp;nbsp; $\text{Example 3}$&amp;amp;nbsp; by using a simple repeat code: &lt;br /&gt;
:$$〈 q_ν 〉 =\rm BbBbAaAaAaBbAaAaBbBb \text{...} $$&lt;br /&gt;
*The repeated symbols are marked by corresponding lower case letters.&amp;amp;nbsp; Still,&amp;amp;nbsp; $M=2$&amp;amp;nbsp; applies.&lt;br /&gt;
*Because of the equally probable binary symbols, this also results in&amp;amp;nbsp; $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$.&lt;br /&gt;
*As shown in&amp;amp;nbsp; [[Aufgaben:1.3_Entropienäherungen|Exercise 1.3]]&amp;amp;nbsp; for the compound probabilities we obtain&amp;amp;nbsp; $p_{\rm AA}=p_{\rm BB} = 3/8$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm AB}=p_{\rm BA} = 1/8$.&amp;amp;nbsp; Hence&lt;br /&gt;
:$$\begin{align*}H_2 ={1}/{2} \cdot \big [ 2 \cdot {3}/{8} \cdot {\rm log}_2\hspace{0.1cm} {8}/{3} + &lt;br /&gt;
 2 \cdot {1}/{8} \cdot {\rm log}_2\hspace{0.1cm}8\big ] = {3}/{8} \cdot {\rm log}_2\hspace{0.1cm}8 - {3}/{8} \cdot{\rm log}_2\hspace{0.1cm}3 + {1}/{8} \cdot {\rm log}_2\hspace{0.1cm}8 \approx 0.906 \,{\rm bit/symbol} &amp;lt; H_1 = H_0&lt;br /&gt;
 \hspace{0.05cm}.\end{align*}$$&lt;br /&gt;
&lt;br /&gt;
A closer look at the task at hand leads to the following conclusion: &lt;br /&gt;
*The entropy should actually be&amp;amp;nbsp; $H = 0.5 \hspace{0.05cm} \rm bit/symbol$,&amp;amp;nbsp; since every second symbol provides no new information. &lt;br /&gt;
*The second entropy approximation&amp;amp;nbsp; $H_2 = 0.906 \hspace{0.05cm} \rm bit/symbol$,&amp;amp;nbsp; however, is much larger than the entropy&amp;amp;nbsp; $H$.&lt;br /&gt;
*The second order approximation is therefore not sufficient to determine the entropy;&amp;amp;nbsp; rather, larger contiguous blocks of&amp;amp;nbsp; $k &amp;gt; 2$&amp;amp;nbsp; symbols must be considered. &lt;br /&gt;
*In the following, such a block is referred to as&amp;amp;nbsp; $k$-tuple.}}&lt;br /&gt;
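Assuming the compound probabilities stated in the example above, the value $H_2 ≈ 0.906$ bit/symbol can be reproduced directly:

```python
import math

# Compound probabilities of the two-tuples AA, BB, AB, BA
# for the repetition-coded sequence of Example 4
p_pairs = [3/8, 3/8, 1/8, 1/8]

H2_prime = sum(p * math.log2(1 / p) for p in p_pairs)   # bit/two-tuple
H2 = H2_prime / 2                                       # ~0.906 bit/symbol
```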
&lt;br /&gt;
	 	 &lt;br /&gt;
==Generalization to $k$-tuples and passage to the limit ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Using the compound probability&amp;amp;nbsp; $p_i^{(k)}$&amp;amp;nbsp; of a&amp;amp;nbsp; $k$-tuple, we write in abbreviated form:&lt;br /&gt;
 &lt;br /&gt;
:$$H_k = \frac{1}{k} \cdot \sum_{i=1}^{M^k} p_i^{(k)} \cdot {\rm log}_2\hspace{0.1cm} \frac{1}{p_i^{(k)}} \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The index&amp;amp;nbsp; $i$&amp;amp;nbsp; runs over all&amp;amp;nbsp; $M^k$&amp;amp;nbsp; possible&amp;amp;nbsp; $k$-tuples.&amp;amp;nbsp; The previously calculated approximation&amp;amp;nbsp; $H_2$&amp;amp;nbsp; results for&amp;amp;nbsp; $k = 2$.&lt;br /&gt;
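A minimal sketch of this $k$-tuple averaging, with the tuple probabilities replaced by relative frequencies of a given sequence (the function name is my own):

```python
import math
from collections import Counter

def Hk_estimate(seq, k):
    """k-th entropy approximation H_k from relative k-tuple frequencies (bit/symbol).
    A sequence of N symbols yields N - k + 1 overlapping k-tuples."""
    tuples = Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))
    n = len(seq) - k + 1
    Hk_prime = sum(c / n * math.log2(n / c) for c in tuples.values())
    return Hk_prime / k   # bit/k-tuple -> bit/symbol
```

For a purely alternating sequence such as ABABAB..., this estimate yields $H_k = 1/k$, as derived in a later example.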
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp;&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;&#039;entropy of a message source with memory&#039;&#039;&#039;&amp;amp;nbsp; is the limit value&lt;br /&gt;
:$$H = \lim_{k \rightarrow \infty }H_k \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
For the&amp;amp;nbsp; &#039;&#039;entropy approximations&#039;&#039;&amp;amp;nbsp; $H_k$&amp;amp;nbsp; the following relations apply&amp;amp;nbsp; $(H_0$ is the decision content$)$:&lt;br /&gt;
:$$H \le \text{...} \le H_k \le \text{...} \le H_2 \le H_1 \le H_0 \hspace{0.05cm}.$$}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The computational effort increases with increasing&amp;amp;nbsp; $k$&amp;amp;nbsp; except for a few special cases (see the following example) and naturally also depends on the symbol set size&amp;amp;nbsp; $M$:&lt;br /&gt;
*For the calculation of&amp;amp;nbsp; $H_{10}$&amp;amp;nbsp; of a binary source&amp;amp;nbsp; $(M = 2)$,&amp;amp;nbsp; one must average over&amp;amp;nbsp; $2^{10} = 1024$&amp;amp;nbsp; terms. &lt;br /&gt;
*With each further increase of&amp;amp;nbsp; $k$&amp;amp;nbsp; by&amp;amp;nbsp; $1$,&amp;amp;nbsp; the number of sum terms doubles.&lt;br /&gt;
*In the case of a quaternary source&amp;amp;nbsp; $(M = 4)$,&amp;amp;nbsp; one must already average over&amp;amp;nbsp; $4^{10} = 1\hspace{0.08cm}048\hspace{0.08cm}576$&amp;amp;nbsp; summation terms to determine&amp;amp;nbsp; $H_{10}$.&lt;br /&gt;
* Considering that each of these&amp;amp;nbsp; $4^{10} =2^{20} &amp;gt;10^6$&amp;amp;nbsp; $k$-tuples should occur about&amp;amp;nbsp; $100$&amp;amp;nbsp; times in a simulation/time average (statistical guideline) to ensure sufficient accuracy, it follows that the sequence length should be greater than&amp;amp;nbsp; $N = 10^8$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 5:}$&amp;amp;nbsp;&lt;br /&gt;
We consider an alternating binary sequence &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $〈 q_ν 〉 =\rm ABABABAB$ ... &amp;amp;nbsp;.&amp;amp;nbsp; Accordingly it holds that&amp;amp;nbsp; $H_0 = H_1 = 1 \hspace{0.1cm} \rm bit/symbol$. &lt;br /&gt;
&lt;br /&gt;
In this special case, the&amp;amp;nbsp; $H_k$ approximation is determined, independently of&amp;amp;nbsp; $k$,&amp;amp;nbsp; by averaging over only two compound probabilities:&lt;br /&gt;
* $k = 2$: &amp;amp;nbsp;&amp;amp;nbsp; $p_{\rm AB} = p_{\rm BA} = 1/2$ &amp;amp;nbsp; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &amp;amp;nbsp; $H_2 = 1/2 \hspace{0.2cm} \rm bit/symbol$,&lt;br /&gt;
* $k = 3$:  &amp;amp;nbsp;&amp;amp;nbsp; $p_{\rm ABA} = p_{\rm BAB} = 1/2$ &amp;amp;nbsp; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &amp;amp;nbsp; $H_3 = 1/3 \hspace{0.2cm} \rm bit/symbol$,&lt;br /&gt;
* $k = 4$:  &amp;amp;nbsp;&amp;amp;nbsp; $p_{\rm ABAB} = p_{\rm BABA} = 1/2$ &amp;amp;nbsp; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &amp;amp;nbsp; $H_4 = 1/4 \hspace{0.2cm} \rm bit/symbol$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The (actual) entropy of this alternating binary sequence is therefore&lt;br /&gt;
:$$H = \lim_{k \rightarrow \infty }{1}/{k} = 0 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The result was to be expected, since the considered sequence contains only minimal information, which moreover does not affect the final entropy value&amp;amp;nbsp; $H$,&amp;amp;nbsp; namely: &amp;lt;br&amp;gt; &amp;amp;nbsp; &amp;amp;nbsp; &amp;quot;Does&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; occur at the even or at the odd time instants?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
You can see that&amp;amp;nbsp; $H_k$&amp;amp;nbsp; approaches the final value&amp;amp;nbsp; $H = 0$&amp;amp;nbsp; only very slowly:&amp;amp;nbsp; the twentieth entropy approximation still yields&amp;amp;nbsp; $H_{20} = 0.05 \hspace{0.05cm} \rm bit/symbol$. }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Summary of the results of the last pages:}$&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
*In general, the following applies to the&amp;amp;nbsp; &#039;&#039;&#039;entropy of a message source&#039;&#039;&#039;:&lt;br /&gt;
:$$H \le \text{...} \le H_3 \le H_2 \le H_1 \le H_0 &lt;br /&gt;
 \hspace{0.05cm}.$$ &lt;br /&gt;
*A&amp;amp;nbsp; &#039;&#039;&#039;redundancy-free source&#039;&#039;&#039; &amp;amp;nbsp; exists if all&amp;amp;nbsp; $M$&amp;amp;nbsp; symbols are equally probable and there are no statistical dependencies within the sequence. &amp;lt;br&amp;gt; For such a source&amp;amp;nbsp; $(r$&amp;amp;nbsp; denotes the &#039;&#039;relative redundancy&#039;&#039; $)$:&lt;br /&gt;
:$$H = H_0 = H_1 = H_2 = H_3 = \text{...}\hspace{0.5cm}&lt;br /&gt;
\Rightarrow \hspace{0.5cm} r = \frac{H_0 - H}{H_0}= 0 \hspace{0.05cm}.$$ &lt;br /&gt;
*A&amp;amp;nbsp; &#039;&#039;&#039;memoryless source&#039;&#039;&#039; &amp;amp;nbsp; can be quite redundant&amp;amp;nbsp; $(r&amp;gt; 0)$.&amp;amp;nbsp; This redundancy is then solely due to the deviation of the symbol probabilities from the uniform distribution.&amp;amp;nbsp; Here the following relations hold:&lt;br /&gt;
:$$H = H_1 = H_2 = H_3 = \text{...} \le H_0 \hspace{0.5cm}\Rightarrow \hspace{0.5cm}0 \le r = \frac{H_0 - H_1}{H_0}&amp;lt; 1 \hspace{0.05cm}.$$ &lt;br /&gt;
*The corresponding condition for a&amp;amp;nbsp; &#039;&#039;&#039;source with memory&#039;&#039;&#039;&amp;amp;nbsp; is&lt;br /&gt;
:$$ H &amp;lt;\text{...} &amp;lt; H_3 &amp;lt; H_2 &amp;lt; H_1 \le H_0 \hspace{0.5cm}\Rightarrow \hspace{0.5cm} 0 &amp;lt; r = \frac{H_0 - H}{H_0}\le1 \hspace{0.05cm}.$$&lt;br /&gt;
*If&amp;amp;nbsp; $H_2 &amp;lt; H_1$, then&amp;amp;nbsp; $H_3 &amp;lt; H_2$, &amp;amp;nbsp; $H_4 &amp;lt; H_3$, etc. &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; In the general equation, the&amp;amp;nbsp; &amp;quot;≤&amp;quot; character must be replaced by the&amp;amp;nbsp; &amp;quot;&amp;lt;&amp;quot; character. &lt;br /&gt;
*If the symbols are equally probable, then again&amp;amp;nbsp; $H_1 = H_0$, while&amp;amp;nbsp; $H_1 &amp;lt; H_0$&amp;amp;nbsp; applies to symbols which are not equally probable.}}&lt;br /&gt;
	 	 &lt;br /&gt;
&lt;br /&gt;
==The entropy of the AMI code ==	 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In chapter&amp;amp;nbsp; [[Digital_Signal_Transmission/Symbol-Wise Coding with Pseudo Ternary Codes#Properties_of_AMI Code|Symbol-wise Coding with Pseudo-Ternary Codes]]&amp;amp;nbsp; of the book &amp;quot;Digital Signal Transmission&amp;quot;, among other things, the AMI pseudo-ternary code is discussed. &lt;br /&gt;
*This converts the binary sequence&amp;amp;nbsp; $〈 q_ν 〉$&amp;amp;nbsp; with&amp;amp;nbsp; $q_ν ∈ \{ \rm L, \ H \}$&amp;amp;nbsp; into the ternary sequence&amp;amp;nbsp; $〈 c_ν 〉$&amp;amp;nbsp; with&amp;amp;nbsp; $c_ν ∈ \{ \rm M, \ N, \ P \}$.&lt;br /&gt;
*The names of the source symbols stand for&amp;amp;nbsp; $\rm L$ow&amp;amp;nbsp; and&amp;amp;nbsp; $\rm H$igh,&amp;amp;nbsp; and those of the code symbols for&amp;amp;nbsp; $\rm M$inus,&amp;amp;nbsp; $\rm N$ull&amp;amp;nbsp; and&amp;amp;nbsp; $\rm P$lus. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coding rule of the AMI code (&amp;quot;Alternate Mark Inversion&amp;quot;) is&lt;br /&gt;
[[File:P_ID2240__Inf_T_1_2_S4_neu.png|right|frame|Signals and symbol sequences for AMI code]]&lt;br /&gt;
&lt;br /&gt;
*Each binary symbol&amp;amp;nbsp; $q_ν =\rm L$&amp;amp;nbsp; is represented by the code symbol&amp;amp;nbsp; $c_ν =\rm N$.&lt;br /&gt;
*In contrast,&amp;amp;nbsp; $q_ν =\rm H$&amp;amp;nbsp; is encoded alternately as&amp;amp;nbsp; $c_ν =\rm P$&amp;amp;nbsp; and&amp;amp;nbsp; $c_ν =\rm M$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; hence the name &amp;quot;AMI&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This special encoding adds redundancy with the sole purpose of ensuring that the code sequence does not contain a DC component. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
However, we do not consider the spectral properties of the AMI code here, but interpret this code information-theoretically:&lt;br /&gt;
*Based on the symbol set size&amp;amp;nbsp; $M = 3$,&amp;amp;nbsp; the decision content of the (ternary) code sequence is&amp;amp;nbsp; $H_0 = \log_2 \ 3 ≈ 1.585 \hspace{0.05cm} \rm bit/symbol$.&amp;amp;nbsp; The first entropy approximation yields&amp;amp;nbsp; $H_1 = 1.5 \hspace{0.05cm} \rm bit/symbol$, as shown in the following calculation:&lt;br /&gt;
  &lt;br /&gt;
:$$p_{\rm H} = p_{\rm L} = 1/2 \hspace{0.3cm}\Rightarrow \hspace{0.3cm}&lt;br /&gt;
p_{\rm N} = p_{\rm L} = 1/2\hspace{0.05cm},\hspace{0.2cm}p_{\rm M} = p_{\rm P}= p_{\rm H}/2 = 1/4\hspace{0.3cm}&lt;br /&gt;
\Rightarrow \hspace{0.3cm} H_1 = 1/2 \cdot {\rm log}_2\hspace{0.1cm}2 + 2 \cdot 1/4 \cdot{\rm log}_2\hspace{0.1cm}4 = 1.5 \,{\rm bit/symbol}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*Let&#039;s now look at two-tuples.&amp;amp;nbsp; With the AMI code,&amp;amp;nbsp; $\rm P$&amp;amp;nbsp; cannot follow&amp;amp;nbsp; $\rm P$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm M$&amp;amp;nbsp; cannot follow&amp;amp;nbsp; $\rm M$.&amp;amp;nbsp; The probability for&amp;amp;nbsp; $\rm NN$&amp;amp;nbsp; is equal to&amp;amp;nbsp; $p_{\rm L} \cdot p_{\rm L} = 1/4$.&amp;amp;nbsp; All other (six) two-tuples occur with probability&amp;amp;nbsp; $1/8$.&amp;amp;nbsp; From this follows for the second entropy approximation:&lt;br /&gt;
:$$H_2 = 1/2 \cdot \big [ 1/4 \cdot {\rm log_2}\hspace{0.1cm}4 + 6 \cdot 1/8 \cdot {\rm log_2}\hspace{0.1cm}8 \big ] = 1.375 \,{\rm bit/symbol} \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*For the further entropy approximations&amp;amp;nbsp; $H_3$,&amp;amp;nbsp; $H_4$, ...&amp;amp;nbsp; and the actual entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; will apply:&lt;br /&gt;
:$$ H &amp;lt; \hspace{0.05cm}\text{...}\hspace{0.05cm} &amp;lt; H_5 &amp;lt; H_4 &amp;lt; H_3 &amp;lt; H_2 = 1.375 \,{\rm bit/symbol} \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*Exceptionally, in this example we know the actual entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; of the code symbol sequence&amp;amp;nbsp; $〈 c_ν 〉$: &amp;amp;nbsp; since the coder neither adds new information nor loses any, it has the same entropy&amp;amp;nbsp; $H = 1 \,{\rm bit/symbol} $&amp;amp;nbsp; as the redundancy-free binary sequence $〈 q_ν 〉$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; [[Aufgaben:1.4_Entropienäherungen_für_den_AMI-Code|Exercise 1.4]]&amp;amp;nbsp; shows the considerable effort required to calculate the entropy approximation&amp;amp;nbsp; $H_3$. &amp;amp;nbsp; Moreover,&amp;amp;nbsp; $H_3$&amp;amp;nbsp; still deviates significantly from the final value&amp;amp;nbsp; $H = 1 \,{\rm bit/symbol} $.&amp;amp;nbsp; A faster result is achieved if the AMI code is described by a Markov chain, as explained in the next section.&lt;br /&gt;
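As a cross-check, the AMI coding rule and the time-average estimation of $H_1$ and $H_2$ can be sketched as follows; the sequence length and seed are arbitrary choices for this illustration:

```python
import math
import random
from collections import Counter

def ami_encode(source):
    """AMI coding rule: 'L' -> 'N'; each 'H' is encoded alternately as 'P' and 'M',
    with the alternation memory persisting across intervening 'N' symbols."""
    code, last_mark = [], 'M'          # first 'H' becomes 'P'
    for q in source:
        if q == 'L':
            code.append('N')
        else:
            last_mark = 'P' if last_mark == 'M' else 'M'
            code.append(last_mark)
    return code

random.seed(1)
source = random.choices('LH', k=200_000)   # redundancy-free binary sequence
code = ami_encode(source)

n = len(code)
H1 = sum(c / n * math.log2(n / c) for c in Counter(code).values())            # ~1.5
pairs = Counter(zip(code, code[1:]))
H2 = sum(c / (n-1) * math.log2((n-1) / c) for c in pairs.values()) / 2        # ~1.375
```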
&lt;br /&gt;
&lt;br /&gt;
==Binary sources with Markov properties ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Inf_T_1_2_S5_vers2.png|right|frame|Markov processes with&amp;amp;nbsp; $M = 2$&amp;amp;nbsp; states]]&lt;br /&gt;
&lt;br /&gt;
Sequences with statistical dependencies between the sequence elements (symbols) are often modeled by&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Markov_Chains|Markov processes]],&amp;amp;nbsp; where we limit ourselves here to first-order Markov processes.&amp;amp;nbsp; First we consider a binary Markov process&amp;amp;nbsp; $(M = 2)$&amp;amp;nbsp; with the states (symbols)&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$.&lt;br /&gt;
&lt;br /&gt;
On the right you can see the transition diagram of a first-order binary Markov process.&amp;amp;nbsp; Only two of the four transition probabilities shown are freely selectable, for example&lt;br /&gt;
* $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = \rm Pr(A\hspace{0.01cm}|\hspace{0.01cm}B)$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; conditional probability that&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; follows&amp;amp;nbsp; $\rm B$.&lt;br /&gt;
* $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}} = \rm Pr(B\hspace{0.01cm}|\hspace{0.01cm}A)$   &amp;amp;nbsp; ⇒ &amp;amp;nbsp; conditional probability that&amp;amp;nbsp; $\rm B$&amp;amp;nbsp; follows&amp;amp;nbsp; $\rm A$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The other two transition probabilities then follow as&amp;amp;nbsp; $p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} = 1- p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}$ &amp;amp;nbsp;and &amp;amp;nbsp; $p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} = 1- p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}&lt;br /&gt;
 \hspace{0.05cm}.$&lt;br /&gt;
&lt;br /&gt;
Due to the presupposed properties of&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Auto-correlation Function (ACF)#Stationary_Random Processes|stationarity]]&amp;amp;nbsp; and&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Auto-correlation Function (ACF)#Ergodic_Random Processes|ergodicity]],&amp;amp;nbsp; the following applies to the state or symbol probabilities:&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = {\rm Pr}({\rm A}) = \frac{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}}{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}&lt;br /&gt;
 \hspace{0.05cm}, \hspace{0.5cm}p_{\rm B} = {\rm Pr}({\rm B}) = \frac{p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
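These stationary-probability equations can be verified numerically; the transition-probability values in the sketch below are hypothetical:

```python
def stationary_probs(p_A_given_B, p_B_given_A):
    """Stationary symbol probabilities of the binary first-order Markov chain."""
    denom = p_A_given_B + p_B_given_A
    return p_A_given_B / denom, p_B_given_A / denom

# hypothetical example values: p(A|B) = 0.2, p(B|A) = 0.4
p_A, p_B = stationary_probs(0.2, 0.4)   # p_A = 1/3, p_B = 2/3

# stationarity check: p_A = p_A * p(A|A) + p_B * p(A|B)
assert abs(p_A - (p_A * (1 - 0.4) + p_B * 0.2)) < 1e-9
```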
&lt;br /&gt;
These equations already allow some first information-theoretic statements about Markov processes:&lt;br /&gt;
* For&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$&amp;amp;nbsp; the symbols are equally probable &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $p_{\text{A}} = p_{\text{B}}= 0.5$.&amp;amp;nbsp; The first entropy approximation yields&amp;amp;nbsp; $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$, independent of the actual values of the (conditional) transition probabilities&amp;amp;nbsp; $p_{\text{A|B}}$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\text{B|A}}$.&lt;br /&gt;
*The source entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; as the limit value&amp;amp;nbsp; $($for&amp;amp;nbsp; $k \to \infty)$&amp;amp;nbsp; of the&amp;amp;nbsp; $k$th-order entropy approximation&amp;amp;nbsp; $H_k$,&amp;amp;nbsp; however, depends very much on the actual values of&amp;amp;nbsp; $p_{\text{A|B}}$ &amp;amp;nbsp;and&amp;amp;nbsp; $p_{\text{B|A}}$&amp;amp;nbsp; and not only on their quotient.&amp;amp;nbsp; This is shown by the following example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 6:}$&amp;amp;nbsp;&lt;br /&gt;
We consider three binary symmetric Markov sources, which differ in the numerical value of the symmetric transition probability&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} }$.&amp;amp;nbsp; For all three sources the symbol probabilities are&amp;amp;nbsp; $p_{\rm A} = p_{\rm B}= 0.5$, and the other transition probabilities have the values&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2242__Inf_T_1_2_S5b_neu.png|right|frame|Three examples of binary Markov sources]] &lt;br /&gt;
:$$p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 1 - p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } =&lt;br /&gt;
p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}B} }.$$&lt;br /&gt;
&lt;br /&gt;
*The middle (blue) symbol sequence with&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.5$&amp;amp;nbsp; has the entropy&amp;amp;nbsp; $H ≈ 1 \hspace{0.1cm}  \rm bit/symbol$.&amp;amp;nbsp; That means: &amp;amp;nbsp; In this special case there are no statistical dependencies within the sequence.&lt;br /&gt;
&lt;br /&gt;
*The left (red) sequence with&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.2$&amp;amp;nbsp; shows fewer changes between&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$.&amp;amp;nbsp; Due to the statistical dependencies between neighboring symbols the entropy is now smaller:&amp;amp;nbsp; $H ≈ 0.72 \hspace{0.1cm}  \rm bit/symbol$.&lt;br /&gt;
&lt;br /&gt;
*The right (green) symbol sequence with&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.8$&amp;amp;nbsp; has the exact same entropy&amp;amp;nbsp; $H ≈ 0.72 \hspace{0.1cm}  \rm bit/symbol$&amp;amp;nbsp; as the red sequence.&amp;amp;nbsp; Here you can see many areas with alternating symbols&amp;amp;nbsp; $($... $\rm ABABAB$ ... $)$.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This example is worth noting:&lt;br /&gt;
*Without exploiting the Markov properties of the red and green sequences, one would have arrived at the respective result&amp;amp;nbsp; $H ≈ 0.72 \hspace{0.1cm}  \rm bit/symbol$&amp;amp;nbsp; only after lengthy calculations.&lt;br /&gt;
*The following pages show that for a source with Markov properties the final value&amp;amp;nbsp; $H$&amp;amp;nbsp; can be determined from the entropy approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; alone. &amp;amp;nbsp; Likewise, all entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k$-tuples can be calculated in a simple manner from&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $H_3$,&amp;amp;nbsp; $H_4$,&amp;amp;nbsp; $H_5$, ... &amp;amp;nbsp; $H_{100}$, ...&lt;br /&gt;
	&lt;br /&gt;
 &lt;br /&gt;
== Simplified entropy calculation for Markov sources ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Inf_T_1_2_S5_vers2.png|right|frame|Markov processes with&amp;amp;nbsp; $M = 2$&amp;amp;nbsp; states]]&lt;br /&gt;
We continue to assume the first-order symmetric binary Markov source.&amp;amp;nbsp; As on the previous page, we use the following nomenclature for&lt;br /&gt;
*the transition probabilities&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}}$, &amp;amp;nbsp; $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$,&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}A}}= 1- p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$, &amp;amp;nbsp; $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}B}} = 1 - p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}}$, &amp;amp;nbsp; &lt;br /&gt;
*the ergodic probabilities&amp;amp;nbsp; $p_{\text{A}}$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\text{B}}$,&lt;br /&gt;
*the compound probabilities, for example&amp;amp;nbsp; $p_{\text{AB}} = p_{\text{A}} \cdot p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We now compute the&amp;amp;nbsp; [[Information_Theory/Sources with Memory#Entropy_in_Two_Tuple|Entropy of a two-tuple]]&amp;amp;nbsp; (with the unit &amp;quot;bit/two-tuple&amp;quot;):&lt;br /&gt;
 &lt;br /&gt;
:$$H_2\hspace{0.05cm}&#039; = p_{\rm A}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm A}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm B}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}} + p_{\rm B}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B}}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
If one now replaces the logarithms of the products by corresponding sums of logarithms, one gets the result&amp;amp;nbsp; $H_2\hspace{0.05cm}&#039; = H_1 + H_{\text{M}}$&amp;amp;nbsp; with  &lt;br /&gt;
:$$H_1 = p_{\rm A}  \cdot (p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A})\cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm A}} + p_{\rm B}  \cdot (p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B})\cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm B}} = p_{\rm A}  \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm A}} + p_{\rm B}  \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm B}} = H_{\rm bin} (p_{\rm A})= H_{\rm bin} (p_{\rm B})&lt;br /&gt;
 \hspace{0.05cm},$$&lt;br /&gt;
:$$H_{\rm M}= p_{\rm A}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm A}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm B}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}} + p_{\rm B}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B}}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; This yields the&amp;amp;nbsp; &#039;&#039;&#039;second entropy approximation&#039;&#039;&#039;&amp;amp;nbsp; (with the unit &amp;quot;bit/symbol&amp;quot;):&lt;br /&gt;
:$$H_2 = {1}/{2} \cdot {H_2\hspace{0.05cm}&#039;} = {1}/{2} \cdot \big [ H_{\rm 1} + H_{\rm M} \big] &lt;br /&gt;
 \hspace{0.05cm}.$$}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is to be noted:&lt;br /&gt;
*The first summand&amp;amp;nbsp; $H_1$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; first entropy approximation depends only on the symbol probabilities.&lt;br /&gt;
*For a symmetrical Markov process &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}} $ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $p_{\text{A}} = p_{\text{B}} = 1/2$ &amp;amp;nbsp; the result for this first summand is&amp;amp;nbsp; $H_1 = 1 \hspace{0.1cm} \rm bit/symbol$.&lt;br /&gt;
*The second summand&amp;amp;nbsp; $H_{\text{M}}$&amp;amp;nbsp; must be calculated according to the second of the two upper equations. &lt;br /&gt;
*For a symmetrical Markov process one obtains&amp;amp;nbsp; $H_{\text{M}} = H_{\text{bin}}(p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}})$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
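The decomposition above can be cross-checked numerically. The following Python sketch (illustrative only, not part of the LNTwww material; the transition probability `q` is an assumed value) computes the two-tuple entropy directly from the four compound probabilities and confirms that it equals $H_1 + H_{\rm M}$:

```python
import math

def h_bin(p):
    """Binary entropy function in bit."""
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1.0 / p) + (1.0 - p) * math.log2(1.0 / (1.0 - p))

# Symmetric binary Markov source; q is an assumed transition probability
q = 0.25                      # q = p_B|A = p_A|B
p_a = p_b = 0.5               # ergodic probabilities in the symmetric case
H1 = h_bin(p_a)               # first entropy approximation: 1 bit/symbol
HM = h_bin(q)                 # Markov entropy H_M = H_bin(p_A|B)

# Two-tuple entropy H2' summed over all four compound probabilities
H2_prime = 0.0
for p_state, transitions in ((p_a, (1.0 - q, q)), (p_b, (q, 1.0 - q))):
    for p_trans in transitions:
        joint = p_state * p_trans
        H2_prime += joint * math.log2(1.0 / joint)

assert math.isclose(H2_prime, H1 + HM)   # confirms H2' = H1 + H_M
print(H2_prime / 2.0)                    # H2 in bit/symbol
```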
Now this result is extended to the&amp;amp;nbsp; $k$-th entropy approximation.&amp;amp;nbsp; Here we exploit an advantage of Markov sources over other sources, namely that the entropy calculation for&amp;amp;nbsp; $k$-tuples is very simple.&amp;amp;nbsp; For every Markov source the following applies:&lt;br /&gt;
 &lt;br /&gt;
:$$H_k = {1}/{k} \cdot \big [ H_{\rm 1} + (k-1) \cdot H_{\rm M}\big ] \hspace{0.3cm} \Rightarrow \hspace{0.3cm}&lt;br /&gt;
 H_2 = {1}/{2} \cdot \big [ H_{\rm 1} + H_{\rm M} \big ]\hspace{0.05cm}, \hspace{0.3cm}&lt;br /&gt;
 H_3 ={1}/{3} \cdot \big [ H_{\rm 1} + 2 \cdot H_{\rm M}\big ] \hspace{0.05cm},\hspace{0.3cm}&lt;br /&gt;
 H_4 = {1}/{4} \cdot \big [ H_{\rm 1} + 3 \cdot H_{\rm M}\big ] &lt;br /&gt;
 \hspace{0.05cm},\hspace{0.15cm}{\rm etc.}$$&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; With the boundary condition for&amp;amp;nbsp; $k \to \infty$, one obtains the actual source entropy&lt;br /&gt;
:$$H = \lim_{k \rightarrow \infty} H_k = H_{\rm M} \hspace{0.05cm}.$$&lt;br /&gt;
From this simple result important insights for the entropy calculation follow:&lt;br /&gt;
*For Markov sources it is sufficient to determine the entropy approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$.&amp;amp;nbsp; Thus, the entropy of a Markov source is &lt;br /&gt;
:$$H = 2 \cdot H_2 - H_{\rm 1}  \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*Through&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; all further entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; are also fixed for&amp;amp;nbsp; $k \ge 3$&amp;amp;nbsp;:&lt;br /&gt;
 &lt;br /&gt;
:$$H_k = \frac{2-k}{k} \cdot H_{\rm 1} + \frac{2\cdot (k-1)}{k} \cdot H_{\rm 2}&lt;br /&gt;
 \hspace{0.05cm}.$$}}&lt;br /&gt;
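The relation in the box can be sketched in a few lines of Python (illustrative only; the values for $H_1$ and $H_2$ are hypothetical):

```python
def markov_Hk(H1, H2, k):
    """k-th entropy approximation of a first-order Markov source,
    fixed entirely by H1 and H2 (unit: bit/symbol)."""
    return (2.0 - k) / k * H1 + 2.0 * (k - 1) / k * H2

# Hypothetical values H1 = 1.0 and H2 = 0.906 bit/symbol
H1, H2 = 1.0, 0.906
H = 2.0 * H2 - H1                    # source entropy H = 2*H2 - H1
for k in (2, 3, 4, 10, 100):
    print(k, markov_Hk(H1, H2, k))   # tends to H for large k
```

The loop illustrates how slowly the approximations $H_k$ converge toward the limit $H = 2 \cdot H_2 - H_1$.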
&lt;br /&gt;
&lt;br /&gt;
However, these approximations are of minor importance;&amp;amp;nbsp; mostly only the limit value&amp;amp;nbsp; $H$&amp;amp;nbsp; is of interest.&amp;amp;nbsp; For sources without Markov properties the approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; are calculated only to be able to estimate this limit value, i.e. the actual entropy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Notes&#039;&#039;: &lt;br /&gt;
*In&amp;amp;nbsp; [[Aufgaben:1.5_Binäre_Markovquelle|Exercise 1.5]]&amp;amp;nbsp; the above equations are applied to the more general case of an asymmetric binary source.&lt;br /&gt;
*All equations on this page also apply to non-binary Markov sources&amp;amp;nbsp; $(M &amp;gt; 2)$, as shown on the next page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Non-binary Markov sources == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:P_ID2243__Inf_T_1_2_S6_neu.png|right|frame|Ternary and Quaternary First Order Markov Source]]&lt;br /&gt;
&lt;br /&gt;
The following equations apply to each Markov source regardless of the symbol range:&lt;br /&gt;
 &lt;br /&gt;
:$$H = 2 \cdot H_2 - H_{\rm 1}  \hspace{0.05cm},$$&lt;br /&gt;
:$$H_k = {1}/{k} \cdot \big [ H_{\rm 1} + (k-1) \cdot H_{\rm M}\big ] \hspace{0.05cm},$$&lt;br /&gt;
:$$ \lim_{k \rightarrow \infty} H_k = H &lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
These equations allow the simple calculation of the entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; from the approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$.&lt;br /&gt;
&lt;br /&gt;
We now look at the transition diagrams sketched on the right for&lt;br /&gt;
*a ternary Markov source&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; $(M = 3$,&amp;amp;nbsp; blue coloring$)$ and &lt;br /&gt;
*a quaternary Markov source&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; $(M = 4$,&amp;amp;nbsp; red coloring$)$. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
In&amp;amp;nbsp; [[Aufgaben:1.6_Nichtbinäre_Markovquellen|Exercise 1.6]]&amp;amp;nbsp; the entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; and the source entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; are calculated as the limit of&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k \to \infty$. &amp;amp;nbsp; The results are shown in the following figure.&amp;amp;nbsp; All entropies specified there have the unit &amp;quot;bit/symbol&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2244__Inf_T_1_2_S6b_neu.png|center|frame|Entropies for&amp;amp;nbsp; $\rm MQ3$,&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; and the&amp;amp;nbsp; $\rm AMI-Code$]]&lt;br /&gt;
&lt;br /&gt;
These results can be interpreted as follows:&lt;br /&gt;
*For the ternary Markov source&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; the entropy approximations decrease continuously from&amp;amp;nbsp; $H_1 = 1.500$&amp;amp;nbsp; via&amp;amp;nbsp; $H_2 = 1.375$&amp;amp;nbsp; to the limit&amp;amp;nbsp; $H = 1.250$.&amp;amp;nbsp; Because of&amp;amp;nbsp; $M = 3$&amp;amp;nbsp; the decision content is&amp;amp;nbsp; $H_0 = 1.585$.&lt;br /&gt;
*For the quaternary Markov source&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; one obtains&amp;amp;nbsp; $H_0 = H_1 = 2.000$&amp;amp;nbsp; (since there are four equally probable states) and&amp;amp;nbsp; $H_2 = 1.5$. &amp;amp;nbsp; From the&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; values all entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; and the final value&amp;amp;nbsp; $H = 1.000$&amp;amp;nbsp; can be calculated.&lt;br /&gt;
*The two models&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; were created in the attempt to describe the&amp;amp;nbsp; [[Information_Theory/Sources_with_Memory#The_Entropy_of_AMI.E2.80.93Codes|AMI code]]&amp;amp;nbsp; information-theoretically by Markov sources.&amp;amp;nbsp; The symbols&amp;amp;nbsp; $\rm M$,&amp;amp;nbsp; $\rm N$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm P$&amp;amp;nbsp; stand for &amp;quot;minus&amp;quot;, &amp;quot;zero&amp;quot; and &amp;quot;plus&amp;quot;.&lt;br /&gt;
*The entropy approximations&amp;amp;nbsp; $H_1$,&amp;amp;nbsp; $H_2$&amp;amp;nbsp; and&amp;amp;nbsp; $H_3$&amp;amp;nbsp; of the AMI code (green markers) were calculated in&amp;amp;nbsp; [[Tasks:1.4_Entropy Approximations_Hk|Exercise 1.4]];&amp;amp;nbsp; the calculation of&amp;amp;nbsp; $H_4$,&amp;amp;nbsp; $H_5$, ... had to be omitted for reasons of effort.&amp;amp;nbsp; But the final value of&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k \to \infty$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $H = 1.000$&amp;amp;nbsp; is known.&lt;br /&gt;
*You can see that the Markov model&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; yields exactly the same numerical values&amp;amp;nbsp; $H_0 = 1.585$,&amp;amp;nbsp; $H_1 = 1.500$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2 = 1.375$&amp;amp;nbsp; as the AMI code. &amp;amp;nbsp; On the other hand,&amp;amp;nbsp; $H_3$&amp;amp;nbsp; deviates&amp;amp;nbsp; $(1.333$&amp;amp;nbsp; instead of&amp;amp;nbsp; $1.292)$&amp;amp;nbsp; and especially the final value&amp;amp;nbsp; $H$&amp;amp;nbsp; $(1.250$&amp;amp;nbsp; compared to&amp;amp;nbsp; $1.000)$.&lt;br /&gt;
*The model&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; differs from the AMI code&amp;amp;nbsp; $(M = 3)$&amp;amp;nbsp; with respect to the decision content&amp;amp;nbsp; $H_0$&amp;amp;nbsp; and also with respect to all entropy approximations&amp;amp;nbsp; $H_k$. &amp;amp;nbsp; Nevertheless, $\rm MQ4$&amp;amp;nbsp; is the more suitable model for the AMI code, since the final value&amp;amp;nbsp; $H = 1.000$&amp;amp;nbsp; is the same.&lt;br /&gt;
*The model&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; yields entropy values that are too large, since the sequences&amp;amp;nbsp; $\rm PNP$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm MNM$&amp;amp;nbsp; are possible here, which cannot occur in the AMI code. &amp;amp;nbsp; The difference is already slightly noticeable in the approximation&amp;amp;nbsp; $H_3$, and clearly so in the final value&amp;amp;nbsp; $H$&amp;amp;nbsp; $(1.25$&amp;amp;nbsp; instead of&amp;amp;nbsp; $1)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the model&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; the state&amp;amp;nbsp; &amp;quot;zero&amp;quot;&amp;amp;nbsp; was split into two states&amp;amp;nbsp; $\rm N$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm O$&amp;amp;nbsp; (see upper right figure on this page):&lt;br /&gt;
*For the state&amp;amp;nbsp; $\rm N$&amp;amp;nbsp; the following applies: &amp;amp;nbsp; The current binary symbol&amp;amp;nbsp; $\rm L$&amp;amp;nbsp; is represented by the amplitude value&amp;amp;nbsp; $0$&amp;amp;nbsp; (zero), as per the AMI rule.&amp;amp;nbsp; The next occurring&amp;amp;nbsp; $\rm H$ symbol, on the other hand, is represented by&amp;amp;nbsp; $-1$&amp;amp;nbsp; (minus), because the last&amp;amp;nbsp; $\rm H$ symbol was encoded as&amp;amp;nbsp; $+1$&amp;amp;nbsp; (plus).&lt;br /&gt;
*In the state&amp;amp;nbsp; $\rm O$&amp;amp;nbsp; the current binary symbol&amp;amp;nbsp; $\rm L$&amp;amp;nbsp; is likewise represented by the ternary value&amp;amp;nbsp; $0$. &amp;amp;nbsp; In contrast to the state&amp;amp;nbsp; $\rm N$, however, the next occurring&amp;amp;nbsp; $\rm H$ symbol is now represented by&amp;amp;nbsp; $+1$&amp;amp;nbsp; (plus), since the last&amp;amp;nbsp; $\rm H$ symbol was encoded as&amp;amp;nbsp; $-1$&amp;amp;nbsp; (minus).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &lt;br /&gt;
*The&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; output sequence actually follows the rules of the AMI code and exhibits the entropy&amp;amp;nbsp; $H = 1.000 \hspace{0.15cm} \rm bit/symbol$. &lt;br /&gt;
*Because of the new state&amp;amp;nbsp; $\rm O$, however,&amp;amp;nbsp; $H_0 = 2.000 \hspace{0.15cm} \rm bit/symbol$&amp;amp;nbsp; $($compared to&amp;amp;nbsp; $1.585 \hspace{0.15cm} \rm bit/symbol)$&amp;amp;nbsp; is now clearly too large. &lt;br /&gt;
*All&amp;amp;nbsp; $H_k$ approximations are also larger than for the AMI code. &lt;br /&gt;
*Only for &amp;amp;nbsp;$k \to \infty$&amp;amp;nbsp; do the model&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; and the AMI code match exactly: &amp;amp;nbsp; $H = 1.000 \hspace{0.15cm} \rm bit/symbol$.}}&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
== Exercises for the chapter ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:1.3 Entropienäherungen|Aufgabe 1.3: Entropienäherungen]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.4 Entropienäherungen für den AMI-Code|Aufgabe 1.4: Entropienäherungen für den AMI-Code]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.4Z Entropie der AMI-Codierung|Zusatzaufgabe 1.4Z: Entropie der AMI-Codierung]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.5 Binäre Markovquelle|Aufgabe 1.5: Binäre Markovquelle]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.5Z Symmetrische Markovquelle|Aufgabe 1.5Z: Symmetrische Markovquelle]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.6 Nichtbinäre Markovquellen|Aufgabe 1.6: Nichtbinäre Markovquellen]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.6Z Ternäre Markovquelle|Aufgabe 1.6Z: Ternäre Markovquelle]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Sources_with_Memory&amp;diff=35018</id>
		<title>Information Theory/Discrete Sources with Memory</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Sources_with_Memory&amp;diff=35018"/>
		<updated>2020-10-28T22:52:24Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Entropie wertdiskreter Nachrichtenquellen&lt;br /&gt;
|Vorherige Seite=Gedächtnislose Nachrichtenquellen&lt;br /&gt;
|Nächste Seite=Natürliche wertdiskrete Nachrichtenquellen&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==A simple introductory example ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
At the&amp;amp;nbsp; [[Information_Theory/Discrete Memoryless Sources#Model_and_requirements|beginning of the first chapter]]&amp;amp;nbsp; we considered a memoryless message source with the symbol set&amp;amp;nbsp; $\rm \{A, \ B, \ C, \ D\}$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 4$. &amp;amp;nbsp; An exemplary symbol sequence is shown again in the following figure as source&amp;amp;nbsp; $\rm Q1$. &lt;br /&gt;
&lt;br /&gt;
With the symbol probabilities&amp;amp;nbsp; $p_{\rm A} = 0.4 \hspace{0.05cm},\hspace{0.2cm}p_{\rm B} = 0.3 \hspace{0.05cm},\hspace{0.2cm}p_{\rm C} = 0.2 \hspace{0.05cm},\hspace{0.2cm} &lt;br /&gt;
p_{\rm D} = 0.1\hspace{0.05cm}$&amp;amp;nbsp; the entropy is&lt;br /&gt;
 &lt;br /&gt;
:$$H \hspace{-0.05cm}= 0.4 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.4} + 0.3 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.2} + 0.1 \cdot {\rm log}_2\hspace{0.05cm}\frac {1}{0.1} \approx 1.84 \hspace{0.05cm}{\rm bit/symbol}&lt;br /&gt;
 \hspace{0.01cm}.$$&lt;br /&gt;
&lt;br /&gt;
Due to the unequal symbol probabilities the entropy is smaller than the decision content&amp;amp;nbsp; $H_0 = \log_2 M = 2 \hspace{0.05cm} \rm bit/symbol$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2238__Inf_T_1_2_S1a_neu.png|right|frame|Quaternary message source without/with memory]]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The source&amp;amp;nbsp; $\rm Q2$&amp;amp;nbsp; is almost identical to the source&amp;amp;nbsp; $\rm Q1$, except that each individual symbol is output not only once, but twice in a row:&amp;amp;nbsp; $\rm A ⇒ AA$,&amp;amp;nbsp; $\rm B ⇒ BB$,&amp;amp;nbsp; and so on. &lt;br /&gt;
*It is obvious that&amp;amp;nbsp; $\rm Q2$&amp;amp;nbsp; has a smaller entropy (uncertainty) than&amp;amp;nbsp; $\rm Q1$. &lt;br /&gt;
*Because of the simple repetition code, the entropy&amp;amp;nbsp; &lt;br /&gt;
:$$H = 1.84/2 = 0.92 \hspace{0.05cm} \rm bit/symbol$$&lt;br /&gt;
:is only half as large, although the occurrence probabilities have not changed.}}&lt;br /&gt;
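The entropy values of the example can be reproduced with a short Python sketch (illustrative only; the probabilities are those of source $\rm Q1$ given above):

```python
import math

# Symbol probabilities of source Q1 from Example 1
probs = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}
H_Q1 = sum(p * math.log2(1.0 / p) for p in probs.values())
print(H_Q1)      # about 1.846 bit/symbol (the text rounds to 1.84)

# Source Q2 repeats every symbol once, halving the entropy per symbol
H_Q2 = H_Q1 / 2.0
print(H_Q2)      # about 0.92 bit/symbol
```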
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp;&lt;br /&gt;
This example shows:&lt;br /&gt;
*The entropy of a source with memory is smaller than the entropy of a memoryless source with equal symbol probabilities.&lt;br /&gt;
*The statistical dependencies within the sequence&amp;amp;nbsp; $〈 q_ν 〉$&amp;amp;nbsp; must now be taken into account, &lt;br /&gt;
*namely the dependency of the symbol&amp;amp;nbsp; $q_ν$&amp;amp;nbsp; on the predecessor symbols&amp;amp;nbsp; $q_{ν-1}$,&amp;amp;nbsp; $q_{ν-2}$, ... }}&lt;br /&gt;
 &lt;br /&gt;
	 &lt;br /&gt;
== Entropy with respect to two-tuples == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We continue to look at the source symbol sequence&amp;amp;nbsp; $〈 q_1, \hspace{0.05cm} q_2,\hspace{0.05cm}\text{ ...} \hspace{0.05cm}, q_{ν-1}, \hspace{0.05cm}q_ν, \hspace{0.05cm}\hspace{0.05cm}q_{ν+1}, \hspace{0.05cm}\text{...} \hspace{0.05cm}〉$&amp;amp;nbsp; and now consider the entropy of two successive source symbols. &lt;br /&gt;
*All source symbols&amp;amp;nbsp; $q_ν$&amp;amp;nbsp; are taken from an alphabet with the symbol set size&amp;amp;nbsp; $M$, so that for the combination&amp;amp;nbsp; $(q_ν, \hspace{0.05cm}q_{ν+1})$&amp;amp;nbsp; there are exactly&amp;amp;nbsp; $M^2$&amp;amp;nbsp; possible symbol pairs with the following [[Theory_of_Stochastic_Signals/Set Theory Basics#Intersection|combined probabilities]]:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm Pr}(q_{\nu}\cap q_{\nu+1})\le {\rm Pr}(q_{\nu}) \cdot {\rm Pr}( q_{\nu+1})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*From this, the&amp;amp;nbsp; &#039;&#039;compound entropy&#039;&#039;&amp;amp;nbsp; of a two-tuple can be computed:&lt;br /&gt;
 &lt;br /&gt;
:$$H_2\hspace{0.05cm}&#039; = \sum_{q_{\nu}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm}q_{\mu}\hspace{0.01cm} \}} \sum_{q_{\nu+1}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm} q_{\mu}\hspace{0.01cm} \}}\hspace{-0.1cm}{\rm Pr}(q_{\nu}\cap q_{\nu+1}) \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{{\rm Pr}(q_{\nu}\cap q_{\nu+1})} \hspace{0.4cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/two-tuple})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
:The index &amp;quot;2&amp;quot; symbolizes that the entropy thus calculated refers to two-tuples. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To get the average information content per symbol,&amp;amp;nbsp; $H_2\hspace{0.05cm}&#039;$&amp;amp;nbsp; has to be divided in half:&lt;br /&gt;
 &lt;br /&gt;
:$$H_2 = {H_2\hspace{0.05cm}&#039;}/{2}  \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
In order to achieve a consistent nomenclature, we now label the entropy defined in chapter&amp;amp;nbsp; [[Information_Theory/Memoryless_Message Sources#Model_and_Prerequisites|Memoryless Message Sources]]&amp;amp;nbsp; with&amp;amp;nbsp; $H_1$:&lt;br /&gt;
&lt;br /&gt;
:$$H_1 = \sum_{q_{\nu}\hspace{0.05cm} \in \hspace{0.05cm}\{ \hspace{0.05cm}q_{\mu}\hspace{0.01cm} \}} {\rm Pr}(q_{\nu}) \cdot {\rm log_2}\hspace{0.1cm}\frac {1}{{\rm Pr}(q_{\nu})} \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The index &amp;quot;1&amp;quot; is supposed to indicate that&amp;amp;nbsp; $H_1$&amp;amp;nbsp; considers only the symbol probabilities and not the statistical dependencies between symbols within the sequence.&amp;amp;nbsp; With the decision content&amp;amp;nbsp; $H_0 = \log_2 \ M$&amp;amp;nbsp; the following ordering relation results:&lt;br /&gt;
 &lt;br /&gt;
:$$H_0 \ge H_1 \ge H_2&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
If the sequence elements are statistically independent,&amp;amp;nbsp; $H_2 = H_1$.&lt;br /&gt;
&lt;br /&gt;
The previous equations each specify an ensemble average. &amp;amp;nbsp; The probabilities required for the calculation of&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; can, however, also be determined as time averages from a very long sequence or, more precisely, approximated by the corresponding&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/From Random Experiment to Random Variable#Bernoulli&#039;s_Law_of_Large_Numbers|relative frequencies]].&lt;br /&gt;
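The time-average approach can be sketched in Python (illustrative only; the source, its probabilities and the sequence length are assumptions chosen to match Example 2 below):

```python
import math
import random
from collections import Counter

random.seed(1)
# Hypothetical memoryless source with M = 3 and the probabilities of Example 2
symbols, weights = "ABC", [0.5, 0.3, 0.2]
seq = random.choices(symbols, weights=weights, k=100_000)

# H1 from the relative frequencies of single symbols
n = len(seq)
H1 = sum(c / n * math.log2(n / c) for c in Counter(seq).values())

# H2 from the relative frequencies of overlapping two-tuples
pairs = Counter(zip(seq, seq[1:]))
n2 = n - 1
H2 = 0.5 * sum(c / n2 * math.log2(n2 / c) for c in pairs.values())

print(H1, H2)   # both close to 1.485 bit/symbol for this memoryless source
```

For a long enough sequence both estimates approach the ensemble values; for a short sequence the statistical inaccuracy discussed in Example 2 appears.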
&lt;br /&gt;
Let us now illustrate the calculation of entropy approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; with three examples.&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
We will first look at the sequence&amp;amp;nbsp; $〈 q_1$, ... , $q_{50} \rangle $&amp;amp;nbsp; according to the graphic, where the sequence elements&amp;amp;nbsp; $q_ν$&amp;amp;nbsp; originate from the alphabet $\rm \{A, \ B, \ C \}$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; the symbol range is&amp;amp;nbsp; $M = 3$.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_2_S2_vers2.png|center|frame|Ternary symbol sequence and formation of two-tuples]]&lt;br /&gt;
&lt;br /&gt;
By time averaging over the&amp;amp;nbsp; $50$&amp;amp;nbsp; symbols one gets the symbol probabilities&amp;amp;nbsp; $p_{\rm A} ≈ 0.5$, &amp;amp;nbsp; $p_{\rm B} ≈ 0.3$ &amp;amp;nbsp;and&amp;amp;nbsp; $p_{\rm C} ≈ 0.2$, with which one can calculate the first order entropy approximation:&lt;br /&gt;
 &lt;br /&gt;
:$$H_1 = 0.5 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.5} + 0.3 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{0.2}  \approx \, 1.486 \,{\rm bit/symbol}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
Due to the not equally probable symbols&amp;amp;nbsp; $H_1 &amp;lt; H_0 = 1.585 \hspace{0.05cm} \rm bit/symbol$.&amp;amp;nbsp; As an approximation for the probabilities of two-tuples one obtains from the above sequence:&lt;br /&gt;
 &lt;br /&gt;
:$$\begin{align*}p_{\rm AA} \hspace{-0.1cm}&amp;amp; = \hspace{-0.1cm} 14/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm AB} = 8/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm AC} = 3/49\hspace{0.05cm}, \\&lt;br /&gt;
 p_{\rm BA} \hspace{-0.1cm}&amp;amp; = \hspace{0.07cm} 7/49\hspace{0.05cm}, \hspace{0.25cm}p_{\rm BB} = 2/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm BC} = 5/49\hspace{0.05cm}, \\&lt;br /&gt;
 p_{\rm CA} \hspace{-0.1cm}&amp;amp; = \hspace{0.07cm} 4/49\hspace{0.05cm}, \hspace{0.25cm}p_{\rm CB} = 5/49\hspace{0.05cm}, \hspace{0.2cm}p_{\rm CC} = 1/49\hspace{0.05cm}.\end{align*}$$&lt;br /&gt;
&lt;br /&gt;
Please note that the&amp;amp;nbsp; $50$&amp;amp;nbsp; sequence elements can only be formed from&amp;amp;nbsp; $49$&amp;amp;nbsp; two-tuples&amp;amp;nbsp; $(\rm AA$, ... , $\rm CC)$&amp;amp;nbsp; which are marked in different colors in the graphic.&lt;br /&gt;
&lt;br /&gt;
*The entropy approximation&amp;amp;nbsp; $H_2$&amp;amp;nbsp; should actually be equal to&amp;amp;nbsp; $H_1$&amp;amp;nbsp; since the given symbol sequence comes from a memoryless source. &lt;br /&gt;
*Because of the short sequence length&amp;amp;nbsp; $N = 50$&amp;amp;nbsp; and the resulting statistical inaccuracy, however, a smaller value results: &amp;amp;nbsp; &lt;br /&gt;
:$$H_2 ≈ 1.39\hspace{0.05cm} \rm bit/symbol.$$}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp;&lt;br /&gt;
Now let us consider a&amp;amp;nbsp; &#039;&#039;memoryless&amp;amp;nbsp; binary source&#039;&#039;&amp;amp;nbsp; with equally probable symbols, i.e.&amp;amp;nbsp; $p_{\rm A} = p_{\rm B} = 1/2$.&amp;amp;nbsp; The first sequence elements are &amp;amp;nbsp; $〈 q_ν 〉 =\rm BBAAABAABBBBBAAAABABAB$ ...&lt;br /&gt;
*Because of the equally probable binary symbols &amp;amp;nbsp; $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$.&lt;br /&gt;
*The compound probability&amp;amp;nbsp; $p_{\rm AB}$&amp;amp;nbsp; of the combination&amp;amp;nbsp; $\rm AB$&amp;amp;nbsp; is equal to&amp;amp;nbsp; $p_{\rm A} \cdot p_{\rm B} = 1/4$.&amp;amp;nbsp; Likewise $p_{\rm AA} = p_{\rm BB} = p_{\rm BA} = 1/4$. &lt;br /&gt;
*This gives for the second entropy approximation&lt;br /&gt;
 &lt;br /&gt;
:$$H_2 = {1}/{2} \cdot \big [ {1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 + {1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 +{1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 +{1}/{4} \cdot {\rm log}_2\hspace{0.1cm}4 \big ] = 1 \,{\rm bit/symbol} = H_1 = H_0&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note&#039;&#039;: &amp;amp;nbsp; Due to the short length of the given sequence the probabilities are slightly different:&amp;amp;nbsp; $p_{\rm AA} = 6/19$,&amp;amp;nbsp; $p_{\rm BB} = 5/19$,&amp;amp;nbsp; $p_{\rm AB} = p_{\rm BA} = 4/19$.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 4:}$&amp;amp;nbsp;&lt;br /&gt;
The third sequence considered here results from the binary sequence of&amp;amp;nbsp; $\text{Example 3}$&amp;amp;nbsp; by using a simple repeat code: &lt;br /&gt;
:$$〈 q_ν 〉 =\rm BbBbAaAaAaBbAaAaBbBb \text{...} $$&lt;br /&gt;
*The repeated symbols are marked by corresponding lower case letters.&amp;amp;nbsp; It still applies&amp;amp;nbsp; $M=2$.&lt;br /&gt;
*Because of the equally probable binary symbols, this also results in&amp;amp;nbsp; $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$.&lt;br /&gt;
*As shown in&amp;amp;nbsp; [[Exercise:1.3_Entropienäherungen|Exercise 1.3]]&amp;amp;nbsp; for the compound probabilities we obtain&amp;amp;nbsp; $p_{\rm AA}=p_{\rm BB} = 3/8$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm AB}=p_{\rm BA} = 1/8$.&amp;amp;nbsp; Hence&lt;br /&gt;
:$$\begin{align*}H_2 ={1}/{2} \cdot \big [ 2 \cdot {3}/{8} \cdot {\rm log}_2\hspace{0.1cm} {8}/{3} + &lt;br /&gt;
 2 \cdot {1}/{8} \cdot {\rm log}_2\hspace{0.1cm}8\big ] = {3}/{8} \cdot {\rm log}_2\hspace{0.1cm}8 - {3}/{8} \cdot{\rm log}_2\hspace{0.1cm}3 + {1}/{8} \cdot {\rm log}_2\hspace{0.1cm}8 \approx 0.906 \,{\rm bit/symbol} &amp;lt; H_1 = H_0&lt;br /&gt;
 \hspace{0.05cm}.\end{align*}$$&lt;br /&gt;
&lt;br /&gt;
A closer look at the task at hand leads to the following conclusion: &lt;br /&gt;
*The entropy should actually be&amp;amp;nbsp; $H = 0.5 \hspace{0.05cm} \rm bit/symbol$&amp;amp;nbsp; (every second symbol provides no new information). &lt;br /&gt;
*The second entropy approximation&amp;amp;nbsp; $H_2 = 0.906 \hspace{0.05cm} \rm bit/symbol$,&amp;amp;nbsp; however, is much larger than the entropy&amp;amp;nbsp; $H$.&lt;br /&gt;
*The second-order approximation is therefore not sufficient for determining the entropy.&amp;amp;nbsp; Rather, larger contiguous blocks with&amp;amp;nbsp; $k &amp;gt; 2$&amp;amp;nbsp; symbols must be considered. &lt;br /&gt;
*In the following, such a block is referred to as&amp;amp;nbsp; $k$-tuple.}}&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==Generalization to $k$-tuples and passing to the limit ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Using the compound probability&amp;amp;nbsp; $p_i^{(k)}$&amp;amp;nbsp; of a&amp;amp;nbsp; $k$-tuple as an abbreviation, we write in general:&lt;br /&gt;
 &lt;br /&gt;
:$$H_k = \frac{1}{k} \cdot \sum_{i=1}^{M^k} p_i^{(k)} \cdot {\rm log}_2\hspace{0.1cm} \frac{1}{p_i^{(k)}} \hspace{0.5cm}({\rm unit\hspace{-0.1cm}: \hspace{0.1cm}bit/symbol})&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The control variable&amp;amp;nbsp; $i$&amp;amp;nbsp; stands for one of the&amp;amp;nbsp; $M^k$&amp;amp;nbsp; tuples.&amp;amp;nbsp; The previously calculated approximation&amp;amp;nbsp; $H_2$&amp;amp;nbsp; results with&amp;amp;nbsp; $k = 2$.&lt;br /&gt;
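The $k$-tuple estimator can be sketched directly in Python (illustrative only; `entropy_approx` is a hypothetical helper that averages over the relative frequencies of overlapping $k$-tuples, here applied to the alternating sequence of Example 5 below):

```python
import math
from collections import Counter

def entropy_approx(seq, k):
    """k-th entropy approximation H_k in bit/symbol, estimated from the
    relative frequencies of all overlapping k-tuples of the sequence."""
    tuples = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    n = len(tuples)
    Hk_prime = sum(c / n * math.log2(n / c) for c in Counter(tuples).values())
    return Hk_prime / k

# Alternating sequence as in Example 5 below: H_k is close to 1/k bit/symbol
seq = "AB" * 500
for k in (1, 2, 3, 4):
    print(k, entropy_approx(seq, k))
```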
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp;&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;&#039;entropy of a message source with memory&#039;&#039;&#039;&amp;amp;nbsp; is the following limit value: &lt;br /&gt;
:$$H = \lim_{k \rightarrow \infty }H_k \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
For the&amp;amp;nbsp; &#039;&#039;entropy approximations&#039;&#039;&amp;amp;nbsp; $H_k$&amp;amp;nbsp; the following relations apply&amp;amp;nbsp; $(H_0$ is the decision content$)$:&lt;br /&gt;
:$$H \le \text{...} \le H_k \le \text{...} \le H_2 \le H_1 \le H_0 \hspace{0.05cm}.$$}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The computational effort increases with increasing&amp;amp;nbsp; $k$&amp;amp;nbsp; except for a few special cases (see the following example) and of course also depends on the symbol set size&amp;amp;nbsp; $M$:&lt;br /&gt;
*For the calculation of&amp;amp;nbsp; $H_{10}$&amp;amp;nbsp; of a binary source&amp;amp;nbsp; $(M = 2)$,&amp;amp;nbsp; averaging over&amp;amp;nbsp; $2^{10} = 1024$&amp;amp;nbsp; terms is required. &lt;br /&gt;
*With each further increase of&amp;amp;nbsp; $k$&amp;amp;nbsp; by&amp;amp;nbsp; $1$&amp;amp;nbsp; the number of sum terms doubles.&lt;br /&gt;
*In the case of a quaternary source&amp;amp;nbsp; $(M = 4)$,&amp;amp;nbsp; averaging over&amp;amp;nbsp; $4^{10} = 1\hspace{0.08cm}048\hspace{0.08cm}576$&amp;amp;nbsp; summation terms is already required for the determination of&amp;amp;nbsp; $H_{10}$.&lt;br /&gt;
*Considering that each of these&amp;amp;nbsp; $4^{10} =2^{20} &amp;gt;10^6$&amp;amp;nbsp; $k$-tuples should occur in a simulation/time averaging about&amp;amp;nbsp; $100$&amp;amp;nbsp; times (statistical guideline) to ensure sufficient accuracy, it follows that the sequence length should be greater than&amp;amp;nbsp; $N = 10^8$.&lt;br /&gt;
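The effort figures above follow from simple arithmetic, sketched here (illustrative only; `simulation_effort` and the rule of thumb of 100 occurrences per tuple are taken from the text):

```python
# Number of summation terms M**k for the k-th entropy approximation, and a
# rough required sequence length (about 100 occurrences per k-tuple).
def simulation_effort(M, k, occurrences=100):
    terms = M ** k                      # number of k-tuples to average over
    return terms, occurrences * terms

print(simulation_effort(2, 10))   # (1024, 102400)
print(simulation_effort(4, 10))   # (1048576, 104857600)
```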
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 5:}$&amp;amp;nbsp;&lt;br /&gt;
We consider an alternating binary sequence &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $〈 q_ν 〉 =\rm ABABABAB$ ...&amp;amp;nbsp; Accordingly it holds that&amp;amp;nbsp; $H_0 = H_1 = 1 \hspace{0.1cm} \rm bit/symbol$. &lt;br /&gt;
&lt;br /&gt;
In this special case, the&amp;amp;nbsp; $H_k$ approximation can be determined for every&amp;amp;nbsp; $k$&amp;amp;nbsp; by averaging over only two compound probabilities:&lt;br /&gt;
* $k = 2$: &amp;amp;nbsp;&amp;amp;nbsp; $p_{\rm AB} = p_{\rm BA} = 1/2$ &amp;amp;nbsp; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &amp;amp;nbsp; $H_2 = 1/2 \hspace{0.2cm} \rm bit/symbol$,&lt;br /&gt;
* $k = 3$:  &amp;amp;nbsp;&amp;amp;nbsp; $p_{\rm ABA} = p_{\rm BAB} = 1/2$ &amp;amp;nbsp; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &amp;amp;nbsp; $H_3 = 1/3 \hspace{0.2cm} \rm bit/symbol$,&lt;br /&gt;
* $k = 4$:  &amp;amp;nbsp;&amp;amp;nbsp; $p_{\rm ABAB} = p_{\rm BABA} = 1/2$ &amp;amp;nbsp; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &amp;amp;nbsp; $H_4 = 1/4 \hspace{0.2cm} \rm bit/symbol$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The (actual) entropy of this alternating binary sequence is therefore&lt;br /&gt;
:$$H = \lim_{k \rightarrow \infty }{1}/{k} = 0 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The result was to be expected, since the considered sequence contains only minimal information, which moreover does not show up in the entropy final value&amp;amp;nbsp; $H$,&amp;amp;nbsp; namely: &amp;lt;br&amp;gt; &amp;amp;nbsp; &amp;amp;nbsp; &amp;quot;Does&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; occur at the even or the odd times?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
You can see that&amp;amp;nbsp; $H_k$&amp;amp;nbsp; comes closer to this final value&amp;amp;nbsp; $H = 0$&amp;amp;nbsp; very slowly:&amp;amp;nbsp; The twentieth entropy approximation still returns&amp;amp;nbsp; $H_{20} = 0.05 \hspace{0.05cm} \rm bit/symbol$. }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Summary of the results of the last pages:}$&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
*In general, the following applies to the&amp;amp;nbsp; &#039;&#039;&#039;entropy of a message source&#039;&#039;&#039;:&lt;br /&gt;
:$$H \le \text{...} \le H_3 \le H_2 \le H_1 \le H_0 &lt;br /&gt;
 \hspace{0.05cm}.$$ &lt;br /&gt;
*A&amp;amp;nbsp; &#039;&#039;&#039;redundancy-free source&#039;&#039;&#039; &amp;amp;nbsp; exists if all&amp;amp;nbsp; $M$&amp;amp;nbsp; symbols are equally probable and there are no statistical dependencies within the sequence. &amp;lt;br&amp;gt; For such a source,&amp;amp;nbsp; $(r$&amp;amp;nbsp; denotes the &#039;&#039;relative redundancy&#039;&#039; $)$:&lt;br /&gt;
:$$H = H_0 = H_1 = H_2 = H_3 = \text{...}\hspace{0.5cm}&lt;br /&gt;
\Rightarrow \hspace{0.5cm} r = \frac{H_0 - H}{H_0}= 0 \hspace{0.05cm}.$$ &lt;br /&gt;
*A&amp;amp;nbsp; &#039;&#039;&#039;memoryless source&#039;&#039;&#039; &amp;amp;nbsp; can be quite redundant&amp;amp;nbsp; $(r&amp;gt; 0)$.&amp;amp;nbsp; This redundancy is then solely due to the deviation of the symbol probabilities from the uniform distribution.&amp;amp;nbsp; Here the following relations are valid:&lt;br /&gt;
:$$H = H_1 = H_2 = H_3 = \text{...} \le H_0 \hspace{0.5cm}\Rightarrow \hspace{0.5cm}0 \le r = \frac{H_0 - H_1}{H_0}&amp;lt; 1 \hspace{0.05cm}.$$ &lt;br /&gt;
*The corresponding condition for a&amp;amp;nbsp; &#039;&#039;&#039;source with memory&#039;&#039;&#039;&amp;amp;nbsp; is&lt;br /&gt;
:$$ H &amp;lt;\text{...} &amp;lt; H_3 &amp;lt; H_2 &amp;lt; H_1 \le H_0 \hspace{0.5cm}\Rightarrow \hspace{0.5cm} 0 &amp;lt; r = \frac{H_0 - H}{H_0}\le1 \hspace{0.05cm}.$$&lt;br /&gt;
*If&amp;amp;nbsp; $H_2 &amp;lt; H_1$, then&amp;amp;nbsp; $H_3 &amp;lt; H_2$, &amp;amp;nbsp; $H_4 &amp;lt; H_3$, etc. &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; In the general equation, the&amp;amp;nbsp; &amp;quot;≤&amp;quot; character must be replaced by the&amp;amp;nbsp; &amp;quot;&amp;lt;&amp;quot; character. &lt;br /&gt;
*If the symbols are equally probable, then again&amp;amp;nbsp; $H_1 = H_0$, while&amp;amp;nbsp; $H_1 &amp;lt; H_0$&amp;amp;nbsp; applies to symbols which are not equally probable.}}&lt;br /&gt;
	 	 &lt;br /&gt;
&lt;br /&gt;
==The entropy of the AMI code ==	 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In chapter&amp;amp;nbsp; [[Digital_Signal_Transmission/Symbol-wise_Coding_with_Pseudo_Ternary Codes#Properties_of_AMI Code|Symbol-wise Coding with Pseudo-Ternary Codes]]&amp;amp;nbsp; of the book &amp;quot;Digital Signal Transmission&amp;quot;, among other things, the AMI pseudo-ternary code is discussed. &lt;br /&gt;
*This converts the binary sequence&amp;amp;nbsp; $〈 q_ν 〉$&amp;amp;nbsp; with&amp;amp;nbsp; $q_ν ∈ \{ \rm L, \ H \}$&amp;amp;nbsp; into the ternary sequence&amp;amp;nbsp; $〈 c_ν 〉$&amp;amp;nbsp; with&amp;amp;nbsp; $c_ν ∈ \{ \rm M, \ N, \ P \}$.&lt;br /&gt;
*The names of the source symbols stand for&amp;amp;nbsp; $\rm L$ow&amp;amp;nbsp; and&amp;amp;nbsp; $\rm H$igh,&amp;amp;nbsp; those of the code symbols for&amp;amp;nbsp; $\rm M$inus,&amp;amp;nbsp; $\rm N$ull&amp;amp;nbsp; and&amp;amp;nbsp; $\rm P$lus. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The coding rule of the AMI code (&amp;quot;Alternate Mark Inversion&amp;quot;) is:&lt;br /&gt;
[[File:P_ID2240__Inf_T_1_2_S4_neu.png|right|frame|Signals and symbol sequences for AMI code]]&lt;br /&gt;
&lt;br /&gt;
*Each binary symbol&amp;amp;nbsp; $q_ν =\rm L$&amp;amp;nbsp; is represented by the code symbol&amp;amp;nbsp; $c_ν =\rm N$.&lt;br /&gt;
*In contrast,&amp;amp;nbsp; $q_ν =\rm H$&amp;amp;nbsp; is coded alternately with&amp;amp;nbsp; $c_ν =\rm P$&amp;amp;nbsp; and&amp;amp;nbsp; $c_ν =\rm M$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; hence the name &amp;quot;AMI&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This special encoding adds redundancy with the sole purpose of ensuring that the code sequence does not contain a DC component. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
However, we do not consider the spectral properties of the AMI code here, but interpret this code information-theoretically:&lt;br /&gt;
*Based on the symbol range&amp;amp;nbsp; $M = 3$&amp;amp;nbsp; the decision content of the (ternary) code sequence is equal to&amp;amp;nbsp; $H_0 = \log_2 \ 3 ≈ 1.585 \hspace{0.05cm} \rm bit/symbol$.&amp;amp;nbsp; The first entropy approximation yields&amp;amp;nbsp; $H_1 = 1.5 \hspace{0.05cm} \rm bit/symbol$, as shown in the following calculation:&lt;br /&gt;
  &lt;br /&gt;
:$$p_{\rm H} = p_{\rm L} = 1/2 \hspace{0.3cm}\Rightarrow \hspace{0.3cm}&lt;br /&gt;
p_{\rm N} = p_{\rm L} = 1/2\hspace{0.05cm},\hspace{0.2cm}p_{\rm M} = p_{\rm P}= p_{\rm H}/2 = 1/4\hspace{0.3cm}&lt;br /&gt;
\Rightarrow \hspace{0.3cm} H_1 = 1/2 \cdot {\rm log}_2\hspace{0.1cm}2 + 2 \cdot 1/4 \cdot{\rm log}_2\hspace{0.1cm}4 = 1.5 \,{\rm bit/symbol}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*Let&#039;s now look at two-tuples.&amp;amp;nbsp; With the AMI code,&amp;amp;nbsp; $\rm P$&amp;amp;nbsp; cannot follow&amp;amp;nbsp; $\rm P$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm M$&amp;amp;nbsp; cannot follow&amp;amp;nbsp; $\rm M$.&amp;amp;nbsp; The probability for&amp;amp;nbsp; $\rm NN$&amp;amp;nbsp; is equal to&amp;amp;nbsp; $p_{\rm L} \cdot p_{\rm L} = 1/4$.&amp;amp;nbsp; All other (six) two-tuples each occur with the probability&amp;amp;nbsp; $1/8$.&amp;amp;nbsp; From this follows for the second entropy approximation:&lt;br /&gt;
:$$H_2 = 1/2 \cdot \big [ 1/4 \cdot {\rm log_2}\hspace{0.1cm}4 + 6 \cdot 1/8 \cdot {\rm log_2}\hspace{0.1cm}8 \big ] = 1.375 \,{\rm bit/symbol} \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*For the further entropy approximations&amp;amp;nbsp; $H_3$,&amp;amp;nbsp; $H_4$, ...&amp;amp;nbsp; and the actual entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; the following applies:&lt;br /&gt;
:$$ H &amp;lt; \hspace{0.05cm}\text{...}\hspace{0.05cm} &amp;lt; H_5 &amp;lt; H_4 &amp;lt; H_3 &amp;lt; H_2 = 1.375 \,{\rm bit/symbol} \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*Exceptionally, in this example we know the actual entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; of the code symbol sequence&amp;amp;nbsp; $〈 c_ν 〉$: &amp;amp;nbsp; since the coder neither adds new information nor loses any, the code sequence has the same entropy&amp;amp;nbsp; $H = 1 \,{\rm bit/symbol} $&amp;amp;nbsp; as the redundancy-free binary source sequence $〈 q_ν 〉$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; [[Aufgaben:1.4_Entropienäherungen_für_den_AMI-Code|Exercise 1.4]]&amp;amp;nbsp; shows the considerable effort required to calculate the entropy approximation&amp;amp;nbsp; $H_3$. &amp;amp;nbsp; Moreover,&amp;amp;nbsp; $H_3$&amp;amp;nbsp; still deviates significantly from the final value&amp;amp;nbsp; $H = 1 \,{\rm bit/symbol} $.&amp;amp;nbsp; A faster result is achieved if the AMI code is described by a Markov chain, as explained in the next section.&lt;br /&gt;
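The values $H_1 = 1.5$ and $H_2 = 1.375 \rm\ bit/symbol$ can also be checked by simulation. A minimal sketch, assuming a redundancy-free binary source; `ami_encode` and `h_k` are our own hypothetical helper names:

```python
from collections import Counter
from math import log2
import random

def ami_encode(bits):
    """AMI rule: L -> N; H is coded alternately as P and M."""
    out, last = [], 'M'          # with this start value, the first H becomes P
    for q in bits:
        if q == 'L':
            out.append('N')
        else:
            last = 'P' if last == 'M' else 'M'
            out.append(last)
    return ''.join(out)

def h_k(seq, k):
    """k-th entropy approximation in bit/symbol from k-tuple frequencies."""
    counts = Counter(seq[i:i+k] for i in range(len(seq) - k + 1))
    n = sum(counts.values())
    return sum((v / n) * log2(n / v) for v in counts.values()) / k

random.seed(1)
c = ami_encode(random.choice('LH') for _ in range(200000))
print(h_k(c, 1))   # ≈ 1.5   bit/symbol
print(h_k(c, 2))   # ≈ 1.375 bit/symbol
```

The estimates approach the analytical values for long sequences; for small $N$ the statistical error dominates.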
&lt;br /&gt;
&lt;br /&gt;
==Binary sources with Markov properties ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Inf_T_1_2_S5_vers2.png|right|frame|Markov processes with&amp;amp;nbsp; $M = 2$&amp;amp;nbsp; states]]&lt;br /&gt;
&lt;br /&gt;
Sequences with statistical dependencies between the sequence elements (symbols) are often modeled by&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Markov_Chains|Markov processes]],&amp;amp;nbsp; where we limit ourselves here to first-order Markov processes.&amp;amp;nbsp; First we consider a binary Markov process&amp;amp;nbsp; $(M = 2)$&amp;amp;nbsp; with the states (symbols)&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$.&lt;br /&gt;
&lt;br /&gt;
On the right you can see the transition diagram of a first-order binary Markov process.&amp;amp;nbsp; Only two of the four transition probabilities given there are freely selectable, for example&lt;br /&gt;
* $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = \rm Pr(A\hspace{0.01cm}|\hspace{0.01cm}B)$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; conditional probability that&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; follows&amp;amp;nbsp; $\rm B$.&lt;br /&gt;
* $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}} = \rm Pr(B\hspace{0.01cm}|\hspace{0.01cm}A)$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; conditional probability that&amp;amp;nbsp; $\rm B$&amp;amp;nbsp; follows&amp;amp;nbsp; $\rm A$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The other two transition probabilities then follow as&amp;amp;nbsp; $p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} = 1- p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}$ &amp;amp;nbsp;and &amp;amp;nbsp; $p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} = 1- p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}&lt;br /&gt;
 \hspace{0.05cm}.$&lt;br /&gt;
&lt;br /&gt;
Due to the presupposed properties of&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Auto-correlation Function_(ACF)#Stationary_Random Processes|stationarity]]&amp;amp;nbsp; and&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Auto-correlation Function_(ACF)#Ergodic_Random Processes|ergodicity]],&amp;amp;nbsp; the following applies to the state or symbol probabilities:&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = {\rm Pr}({\rm A}) = \frac{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}}{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}&lt;br /&gt;
 \hspace{0.05cm}, \hspace{0.5cm}p_{\rm B} = {\rm Pr}({\rm B}) = \frac{p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}{p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
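The two formulas above translate directly into a few lines of code; a sketch with a hypothetical helper name `stationary_probs` and assumed example values for the transition probabilities:

```python
def stationary_probs(p_a_given_b, p_b_given_a):
    """Stationary symbol probabilities p_A, p_B of a binary first-order
    Markov chain, computed from the two free transition probabilities."""
    s = p_a_given_b + p_b_given_a
    return p_a_given_b / s, p_b_given_a / s

# assumed example values: Pr(A|B) = 0.4, Pr(B|A) = 0.2
p_A, p_B = stationary_probs(0.4, 0.2)
print(p_A, p_B)   # p_A ≈ 2/3,  p_B ≈ 1/3
```

As required, the two probabilities always sum to $1$.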
&lt;br /&gt;
These equations allow first information-theoretical statements about Markov processes:&lt;br /&gt;
* For&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$&amp;amp;nbsp; the symbols are equally probable &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $p_{\text{A}} = p_{\text{B}}= 0.5$.&amp;amp;nbsp; The first entropy approximation then yields&amp;amp;nbsp; $H_1 = H_0 = 1 \hspace{0.05cm} \rm bit/symbol$, independent of the actual values of the (conditional) transition probabilities&amp;amp;nbsp; $p_{\text{A|B}}$&amp;amp;nbsp; &amp;amp;nbsp;and &amp;amp;nbsp; $p_{\text{B|A}}$.&lt;br /&gt;
*The source entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; as the limit value&amp;amp;nbsp; $($for&amp;amp;nbsp; $k \to \infty)$&amp;amp;nbsp; of the&amp;amp;nbsp; [[Information_Theory/News_sources_with_Memory|entropy approximation of&amp;amp;nbsp; $k$-th order]] &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; $H_k$,&amp;amp;nbsp; however, depends strongly on the actual values of&amp;amp;nbsp; $p_{\text{A|B}}$ &amp;amp;nbsp;and&amp;amp;nbsp; $p_{\text{B|A}}$&amp;amp;nbsp; and not only on their quotient.&amp;amp;nbsp; This is shown by the following example.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 6:}$&amp;amp;nbsp;&lt;br /&gt;
We consider three binary symmetric Markov sources that differ in the numerical values of the symmetric transition probabilities&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} }$.&amp;amp;nbsp; For the symbol probabilities the following applies in each case:&amp;amp;nbsp; $p_{\rm A} = p_{\rm B}= 0.5$, and the other transition probabilities have the values&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2242__Inf_T_1_2_S5b_neu.png|right|frame|Three examples of binary Markov sources]] &lt;br /&gt;
:$$p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 1 - p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } =&lt;br /&gt;
p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}B} }.$$&lt;br /&gt;
&lt;br /&gt;
*The middle (blue) symbol sequence with&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.5$&amp;amp;nbsp; has the entropy&amp;amp;nbsp; $H ≈ 1 \hspace{0.1cm}  \rm bit/symbol$.&amp;amp;nbsp; That means: &amp;amp;nbsp; In this special case there are no statistical dependencies within the sequence.&lt;br /&gt;
&lt;br /&gt;
*The left (red) sequence with&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.2$&amp;amp;nbsp; exhibits fewer changes between&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$.&amp;amp;nbsp; Due to the statistical dependencies between neighboring symbols, the entropy is now smaller:&amp;amp;nbsp; $H ≈ 0.72 \hspace{0.1cm}  \rm bit/symbol$.&lt;br /&gt;
&lt;br /&gt;
*The right (green) symbol sequence with&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}\vert\hspace{0.01cm}B} } = p_{\rm {B\hspace{0.01cm}\vert\hspace{0.01cm}A} } = 0.8$&amp;amp;nbsp; has exactly the same entropy&amp;amp;nbsp; $H ≈ 0.72 \hspace{0.1cm}  \rm bit/symbol$&amp;amp;nbsp; as the red sequence.&amp;amp;nbsp; Here you can see many regions with alternating symbols&amp;amp;nbsp; $($... $\rm ABABAB$ ... $)$.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This example is worth noting:&lt;br /&gt;
*If you had not exploited the Markov properties of the red and green sequences, you would have arrived at the respective result&amp;amp;nbsp; $H ≈ 0.72 \hspace{0.1cm}  \rm bit/symbol$&amp;amp;nbsp; only after lengthy calculations.&lt;br /&gt;
*The following pages show that for a source with Markov properties the final value&amp;amp;nbsp; $H$&amp;amp;nbsp; can be determined from the entropy approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; alone. &amp;amp;nbsp; Likewise, all further entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k$-tuples can be calculated in a simple manner from&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $H_3$,&amp;amp;nbsp; $H_4$,&amp;amp;nbsp; $H_5$, ... &amp;amp;nbsp; $H_{100}$, ...&lt;br /&gt;
	&lt;br /&gt;
 &lt;br /&gt;
== Simplified entropy calculation for Markov sources ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Inf_T_1_2_S5_vers2.png|right|frame|Markov processes with&amp;amp;nbsp; $M = 2$&amp;amp;nbsp; states]]&lt;br /&gt;
We continue to assume the first-order symmetric binary Markov source.&amp;amp;nbsp; As on the previous page, we use the following nomenclature for&lt;br /&gt;
*the transition probabilities&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}}$, &amp;amp;nbsp; $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$,&amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}A}}= 1- p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$, &amp;amp;nbsp; $p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}B}} = 1 - p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}}$, &amp;amp;nbsp; &lt;br /&gt;
*the ergodic probabilities&amp;amp;nbsp; $p_{\text{A}}$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\text{B}}$,&lt;br /&gt;
*the compound probabilities, for example&amp;amp;nbsp; $p_{\text{AB}} = p_{\text{A}} \cdot p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}}$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We now compute the&amp;amp;nbsp; [[Information_Theory/Sources with Memory#Entropy_in_Two_Tuple|Entropy of a two-tuple]]&amp;amp;nbsp; (with the unit &amp;quot;bit/two-tuple&amp;quot;):&lt;br /&gt;
 &lt;br /&gt;
:$$H_2\hspace{0.05cm}&#039; = p_{\rm A}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm A}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm B}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}} + p_{\rm B}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B}}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
If one now replaces the logarithms of the products by corresponding sums of logarithms, one gets the result&amp;amp;nbsp; $H_2\hspace{0.05cm}&#039; = H_1 + H_{\text{M}}$&amp;amp;nbsp; with  &lt;br /&gt;
:$$H_1 = p_{\rm A}  \cdot (p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A})\cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm A}} + p_{\rm B}  \cdot (p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} + p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B})\cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm B}} = p_{\rm A}  \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm A}} + p_{\rm B}  \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{p_{\rm B}} = H_{\rm bin} (p_{\rm A})= H_{\rm bin} (p_{\rm B})&lt;br /&gt;
 \hspace{0.05cm},$$&lt;br /&gt;
:$$H_{\rm M}= p_{\rm A}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm A}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}A}} + p_{\rm B}  \cdot p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm A\hspace{0.01cm}|\hspace{0.01cm}B}} + p_{\rm B}  \cdot p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B} \cdot {\rm log}_2\hspace{0.1cm}\frac {1}{ p_{\rm B\hspace{0.01cm}|\hspace{0.01cm}B}}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; Thus the&amp;amp;nbsp; &#039;&#039;&#039;second entropy approximation&#039;&#039;&#039;&amp;amp;nbsp; (with the unit &amp;quot;bit/symbol&amp;quot;) is:&lt;br /&gt;
:$$H_2 = {1}/{2} \cdot {H_2\hspace{0.05cm}&#039;} = {1}/{2} \cdot \big [ H_{\rm 1} + H_{\rm M} \big] &lt;br /&gt;
 \hspace{0.05cm}.$$}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is to be noted:&lt;br /&gt;
*The first summand&amp;amp;nbsp; $H_1$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; first entropy approximation depends only on the symbol probabilities.&lt;br /&gt;
*For a symmetrical Markov process &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}} $ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $p_{\text{A}} = p_{\text{B}} = 1/2$ &amp;amp;nbsp; this first summand yields&amp;amp;nbsp; $H_1 = 1 \hspace{0.1cm} \rm bit/symbol$.&lt;br /&gt;
*The second summand&amp;amp;nbsp; $H_{\text{M}}$&amp;amp;nbsp; must be calculated according to the second of the two upper equations. &lt;br /&gt;
*For a symmetrical Markov process one obtains&amp;amp;nbsp; $H_{\text{M}} = H_{\text{bin}}(p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}})$.&lt;br /&gt;
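The decomposition $H_2\hspace{0.05cm}' = H_1 + H_{\rm M}$ can be verified numerically. A minimal sketch for the symmetric case with the assumed value $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}} = 0.2$ from Example 6 (`h_bin` is our own helper name):

```python
from math import log2

def h_bin(p):
    """Binary entropy function in bit."""
    return p * log2(1 / p) + (1 - p) * log2(1 / (1 - p))

p = 0.2                  # assumed: p_A|B = p_B|A (symmetric Markov source)
p_a = p_b = 0.5          # stationary symbol probabilities

# the four transition probabilities p_(next | prev)
trans = {('A', 'A'): 1 - p, ('A', 'B'): p, ('B', 'A'): p, ('B', 'B'): 1 - p}

# entropy of a two-tuple in bit/two-tuple: sum over p_prev * p_(next|prev)
h2_prime = sum(0.5 * pt * log2(1 / (0.5 * pt)) for pt in trans.values())

h1 = h_bin(p_a)          # first entropy approximation: 1 bit/symbol
h_m = h_bin(p)           # ≈ 0.722 bit/symbol
print(abs(h2_prime - (h1 + h_m)))   # ≈ 0  ⇒  H2' = H1 + HM
print(0.5 * h2_prime)               # H2 ≈ 0.861 bit/symbol
```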
&lt;br /&gt;
&lt;br /&gt;
This result is now extended to the&amp;amp;nbsp; $k$-th entropy approximation.&amp;amp;nbsp; Here we exploit the advantage of Markov sources over other sources, namely that the entropy calculation for&amp;amp;nbsp; $k$-tuples is very simple.&amp;amp;nbsp; For each Markov source the following applies:&lt;br /&gt;
 &lt;br /&gt;
:$$H_k = {1}/{k} \cdot \big [ H_{\rm 1} + (k-1) \cdot H_{\rm M}\big ] \hspace{0.3cm} \Rightarrow \hspace{0.3cm}&lt;br /&gt;
 H_2 = {1}/{2} \cdot \big [ H_{\rm 1} + H_{\rm M} \big ]\hspace{0.05cm}, \hspace{0.3cm}&lt;br /&gt;
 H_3 ={1}/{3} \cdot \big [ H_{\rm 1} + 2 \cdot H_{\rm M}\big ] \hspace{0.05cm},\hspace{0.3cm}&lt;br /&gt;
 H_4 = {1}/{4} \cdot \big [ H_{\rm 1} + 3 \cdot H_{\rm M}\big ] &lt;br /&gt;
 \hspace{0.05cm},\hspace{0.15cm}{\rm etc.}$$&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; With the boundary condition for&amp;amp;nbsp; $k \to \infty$, one obtains the actual source entropy&lt;br /&gt;
:$$H = \lim_{k \rightarrow \infty} H_k = H_{\rm M} \hspace{0.05cm}.$$&lt;br /&gt;
From this simple result important insights for the entropy calculation follow:&lt;br /&gt;
*For Markov sources it is sufficient to determine the entropy approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$.&amp;amp;nbsp; Thus, the entropy of a Markov source is &lt;br /&gt;
:$$H = 2 \cdot H_2 - H_{\rm 1}  \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*Through&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; all further entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; are also fixed for&amp;amp;nbsp; $k \ge 3$&amp;amp;nbsp;:&lt;br /&gt;
 &lt;br /&gt;
:$$H_k = \frac{2-k}{k} \cdot H_{\rm 1} + \frac{2\cdot (k-1)}{k} \cdot H_{\rm 2}&lt;br /&gt;
 \hspace{0.05cm}.$$}}&lt;br /&gt;
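These relations are easy to check numerically. A sketch for the symmetric binary Markov source with the assumed value $p_{\rm {A\hspace{0.01cm}|\hspace{0.01cm}B}} = p_{\rm {B\hspace{0.01cm}|\hspace{0.01cm}A}} = 0.2$, where $H_1 = 1$ and $H_{\rm M} = H_{\rm bin}(0.2)$ (`h_bin` is our own helper name):

```python
from math import log2

def h_bin(p):
    """Binary entropy function in bit."""
    return p * log2(1 / p) + (1 - p) * log2(1 / (1 - p))

h1 = 1.0               # equally probable symbols  =>  H1 = 1 bit/symbol
h_m = h_bin(0.2)       # ≈ 0.722 bit/symbol

h2 = 0.5 * (h1 + h_m)  # second entropy approximation
h = 2 * h2 - h1        # source entropy; for a Markov source H = HM

for k in (2, 3, 4, 10, 100):
    h_k = (h1 + (k - 1) * h_m) / k      # k-th entropy approximation
    print(k, round(h_k, 4))             # decreases monotonically towards H
print(round(h, 4))                      # ≈ 0.7219 bit/symbol
```

The printed sequence $H_2, H_3, \dots$ decreases monotonically and converges to $H = H_{\rm M}$, exactly as stated in the conclusion box.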
&lt;br /&gt;
&lt;br /&gt;
However, these approximations are rarely of interest in themselves;&amp;amp;nbsp; mostly only the limit value&amp;amp;nbsp; $H$&amp;amp;nbsp; matters.&amp;amp;nbsp; For sources without Markov properties, the approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; are calculated only to be able to estimate this limit value, i.e. the actual entropy.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Notes&#039;&#039;: &lt;br /&gt;
*In the&amp;amp;nbsp; [[Aufgaben:1.5_Binäre_Markovquelle|Task 1.5]]&amp;amp;nbsp; the above equations are applied to the more general case of an asymmetric binary source.&lt;br /&gt;
*All equations on this page also apply to non-binary Markov sources&amp;amp;nbsp; $(M &amp;gt; 2)$, as shown on the next page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Non-binary Markov sources == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:P_ID2243__Inf_T_1_2_S6_neu.png|right|frame|Ternary and Quaternary First Order Markov Source]]&lt;br /&gt;
&lt;br /&gt;
The following equations apply to each Markov source regardless of the symbol range:&lt;br /&gt;
 &lt;br /&gt;
:$$H = 2 \cdot H_2 - H_{\rm 1}  \hspace{0.05cm},$$&lt;br /&gt;
:$$H_k = {1}/{k} \cdot \big [ H_{\rm 1} + (k-1) \cdot H_{\rm M}\big ] \hspace{0.05cm},$$&lt;br /&gt;
:$$ \lim_{k \rightarrow \infty} H_k = H &lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
These equations allow the simple calculation of the entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; from the approximations&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$.&lt;br /&gt;
&lt;br /&gt;
We now look at the transition diagrams sketched on the right of&lt;br /&gt;
*a ternary Markov source&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; $(M = 3$,&amp;amp;nbsp; blue coloring$)$ and &lt;br /&gt;
*a quaternary Markov source&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; $(M = 4$,&amp;amp;nbsp; red color$)$. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
In&amp;amp;nbsp; [[Aufgaben:1.6_Nichtbinäre_Markovquellen|Exercise 1.6]]&amp;amp;nbsp; the entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; and the source entropy&amp;amp;nbsp; $H$&amp;amp;nbsp; are calculated as the limit of&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k \to \infty$&amp;amp;nbsp;. &amp;amp;nbsp; The results are shown in the following figure.&amp;amp;nbsp; All entropies specified there have the unit &amp;quot;bit/symbol&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2244__Inf_T_1_2_S6b_neu.png|center|frame|Entropies for&amp;amp;nbsp; $\rm MQ3$,&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; and the AMI code]] &lt;br /&gt;
&lt;br /&gt;
These results can be interpreted as follows:&lt;br /&gt;
*For the ternary Markov source&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; the entropy approximations decrease continuously from&amp;amp;nbsp; $H_1 = 1.500$&amp;amp;nbsp; via&amp;amp;nbsp; $H_2 = 1.375$&amp;amp;nbsp; down to the limit value&amp;amp;nbsp; $H = 1.250$.&amp;amp;nbsp; Because of the symbol range&amp;amp;nbsp; $M = 3$&amp;amp;nbsp; the decision content is&amp;amp;nbsp; $H_0 = 1.585$.&lt;br /&gt;
*For the quaternary Markov source&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; one obtains&amp;amp;nbsp; $H_0 = H_1 = 2.000$&amp;amp;nbsp; (since there are four equally probable states) and&amp;amp;nbsp; $H_2 = 1.5$. &amp;amp;nbsp; From the&amp;amp;nbsp; $H_1$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2$&amp;amp;nbsp; values, all entropy approximations&amp;amp;nbsp; $H_k$&amp;amp;nbsp; and the final value&amp;amp;nbsp; $H = 1.000$&amp;amp;nbsp; can be calculated.&lt;br /&gt;
*The two models&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; were created in an attempt to describe the&amp;amp;nbsp; [[Information_Theory/Sources_with_Memory#The_Entropy_of_AMI.E2.80.93Codes|AMI code]]&amp;amp;nbsp; information-theoretically by Markov sources.&amp;amp;nbsp; The symbols&amp;amp;nbsp; $\rm M$,&amp;amp;nbsp; $\rm N$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm P$&amp;amp;nbsp; stand for &amp;quot;minus&amp;quot;, &amp;quot;zero&amp;quot; and &amp;quot;plus&amp;quot;.&lt;br /&gt;
*The entropy approximations&amp;amp;nbsp; $H_1$,&amp;amp;nbsp; $H_2$&amp;amp;nbsp; and&amp;amp;nbsp; $H_3$&amp;amp;nbsp; of the AMI code (green markers) were calculated in&amp;amp;nbsp; [[Tasks:1.4_Entropy Approximations_Hk|Exercise 1.4]];&amp;amp;nbsp; the calculation of&amp;amp;nbsp; $H_4$,&amp;amp;nbsp; $H_5$, ... had to be omitted for reasons of effort.&amp;amp;nbsp; But the final value of&amp;amp;nbsp; $H_k$&amp;amp;nbsp; for&amp;amp;nbsp; $k \to \infty$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $H = 1.000$&amp;amp;nbsp; is known.&lt;br /&gt;
*You can see that the Markov model&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; yields exactly the same numerical values&amp;amp;nbsp; $H_0 = 1.585$,&amp;amp;nbsp; $H_1 = 1.500$&amp;amp;nbsp; and&amp;amp;nbsp; $H_2 = 1.375$&amp;amp;nbsp; as the AMI code. &amp;amp;nbsp; On the other hand,&amp;amp;nbsp; $H_3$&amp;amp;nbsp; differs&amp;amp;nbsp; $(1.333$&amp;amp;nbsp; instead of&amp;amp;nbsp; $1.292)$&amp;amp;nbsp; and especially the final value&amp;amp;nbsp; $H$&amp;amp;nbsp; $(1.250$&amp;amp;nbsp; compared to&amp;amp;nbsp; $1.000)$.&lt;br /&gt;
*The model&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; differs from the AMI code&amp;amp;nbsp; $(M = 3)$&amp;amp;nbsp; with respect to the decision content&amp;amp;nbsp; $H_0$&amp;amp;nbsp; and also with respect to all entropy approximations&amp;amp;nbsp; $H_k$. &amp;amp;nbsp; Nevertheless, $\rm MQ4$&amp;amp;nbsp; is the more suitable model for the AMI code, since the final value&amp;amp;nbsp; $H = 1.000$&amp;amp;nbsp; is the same.&lt;br /&gt;
*The model&amp;amp;nbsp; $\rm MQ3$&amp;amp;nbsp; yields entropy values that are too large, since the sequences&amp;amp;nbsp; $\rm PNP$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm MNM$&amp;amp;nbsp; are possible here, which cannot occur in the AMI code. &amp;amp;nbsp; Already in the approximation&amp;amp;nbsp; $H_3$&amp;amp;nbsp; the difference is slightly noticeable, and in the final value&amp;amp;nbsp; $H$&amp;amp;nbsp; it is even clearly visible&amp;amp;nbsp; $(1.25$&amp;amp;nbsp; instead of&amp;amp;nbsp; $1)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the model&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; the state&amp;amp;nbsp; &amp;quot;zero&amp;quot;&amp;amp;nbsp; was split into two states&amp;amp;nbsp; $\rm N$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm O$&amp;amp;nbsp; (see upper right figure on this page):&lt;br /&gt;
*For the state&amp;amp;nbsp; $\rm N$&amp;amp;nbsp; the following applies: &amp;amp;nbsp; The current binary symbol&amp;amp;nbsp; $\rm L$&amp;amp;nbsp; is represented by the amplitude value&amp;amp;nbsp; $0$&amp;amp;nbsp; (zero), as per the AMI rule.&amp;amp;nbsp; The next occurring&amp;amp;nbsp; $\rm H$ symbol, on the other hand, is represented as&amp;amp;nbsp; $-1$&amp;amp;nbsp; (minus), because the last&amp;amp;nbsp; $\rm H$ symbol was encoded as&amp;amp;nbsp; $+1$&amp;amp;nbsp; (plus).&lt;br /&gt;
*In the state&amp;amp;nbsp; $\rm O$&amp;amp;nbsp; the current binary symbol&amp;amp;nbsp; $\rm L$&amp;amp;nbsp; is likewise represented by the ternary value&amp;amp;nbsp; $0$. &amp;amp;nbsp; In contrast to the state&amp;amp;nbsp; $\rm N$,&amp;amp;nbsp; however, the next occurring&amp;amp;nbsp; $\rm H$ symbol is now represented as&amp;amp;nbsp; $+1$&amp;amp;nbsp; (plus), since the last&amp;amp;nbsp; $\rm H$ symbol was encoded as&amp;amp;nbsp; $-1$&amp;amp;nbsp; (minus).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &lt;br /&gt;
*The&amp;amp;nbsp; $\rm MQ4$&amp;amp;ndash;output sequence actually follows the rules of the AMI code and exhibits the entropy&amp;amp;nbsp; $H = 1.000 \hspace{0.15cm} \rm bit/symbol$. &lt;br /&gt;
*Because of the new state&amp;amp;nbsp; $\rm O$,&amp;amp;nbsp; however,&amp;amp;nbsp; $H_0 = 2.000 \hspace{0.15cm} \rm bit/symbol$&amp;amp;nbsp; is now clearly too large&amp;amp;nbsp; $($compared to&amp;amp;nbsp; $1.585 \hspace{0.15cm} \rm bit/symbol)$. &lt;br /&gt;
*Likewise, all&amp;amp;nbsp; $H_k$ approximations are larger than for the AMI code. &lt;br /&gt;
*Only for &amp;amp;nbsp;$k \to \infty$&amp;amp;nbsp; do the model&amp;amp;nbsp; $\rm MQ4$&amp;amp;nbsp; and the AMI code agree exactly: &amp;amp;nbsp; $H = 1.000 \hspace{0.15cm} \rm bit/symbol$.}}&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
== Exercises for the chapter ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:1.3 Entropienäherungen|Exercise 1.3: Entropy Approximations]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.4 Entropienäherungen für den AMI-Code|Exercise 1.4: Entropy Approximations for the AMI Code]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.4Z Entropie der AMI-Codierung|Supplementary Exercise 1.4Z: Entropy of the AMI Coding]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.5 Binäre Markovquelle|Exercise 1.5: Binary Markov Source]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.5Z Symmetrische Markovquelle|Exercise 1.5Z: Symmetric Markov Source]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.6 Nichtbinäre Markovquellen|Exercise 1.6: Non-Binary Markov Sources]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.6Z Ternäre Markovquelle|Exercise 1.6Z: Ternary Markov Source]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Memoryless_Sources&amp;diff=35017</id>
		<title>Information Theory/Discrete Memoryless Sources</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Memoryless_Sources&amp;diff=35017"/>
		<updated>2020-10-28T01:39:07Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{FirstPage}}&lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Entropie wertdiskreter Nachrichtenquellen&lt;br /&gt;
|Vorherige Seite=&lt;br /&gt;
|Nächste Seite=Nachrichtenquellen mit Gedächtnis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== # OVERVIEW OF THE FIRST MAIN CHAPTER # ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This first chapter describes the calculation and the meaning of entropy.&amp;amp;nbsp; According to the Shannonian information definition, entropy is a measure of the mean uncertainty about the outcome of a statistical event or the uncertainty in the measurement of a stochastic quantity.&amp;amp;nbsp; Somewhat casually expressed, the entropy of a random quantity quantifies its &amp;quot;randomness&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
In detail are discussed:&lt;br /&gt;
&lt;br /&gt;
*the &#039;&#039;decision content&#039;&#039;&amp;amp;nbsp; and the &#039;&#039;entropy&#039;&#039;&amp;amp;nbsp; of a memoryless message source,&lt;br /&gt;
*the &#039;&#039;binary entropy function&#039;&#039;&amp;amp;nbsp; and its application to &#039;&#039;non-binary sources&#039;&#039;,&lt;br /&gt;
*the entropy calculation for &#039;&#039;sources with memory&#039;&#039;&amp;amp;nbsp; and suitable approximations,&lt;br /&gt;
*the peculiarities of &#039;&#039;Markov sources&#039;&#039;&amp;amp;nbsp; regarding the entropy calculation,&lt;br /&gt;
*the procedure for sources with a large number of symbols, for example &#039;&#039;natural texts&#039;&#039;,&lt;br /&gt;
*the &#039;&#039;entropy estimates&#039;&#039;&amp;amp;nbsp; according to Shannon and Küpfmüller.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Further information on the topic as well as exercises, simulations and programming tasks can be found in the experiment &amp;quot;Wertdiskrete Informationstheorie&amp;quot; (English: Discrete-Value Information Theory) of the practical course &amp;quot;Simulation Digitaler Übertragungssysteme&amp;quot; (English: Simulation of Digital Transmission Systems).&amp;amp;nbsp; This (former) LNT course at the TU Munich is based on&lt;br /&gt;
&lt;br /&gt;
*the Windows program&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Programme/WDIT.zip WDIT] &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link points to the ZIP version of the program and &lt;br /&gt;
*the associated&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Texte/Wertdiskrete_Informationstheorie.pdf lab manual]  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link refers to the PDF version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Model and requirements == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We consider a discrete-value message source&amp;amp;nbsp; $\rm Q$, which emits a sequence&amp;amp;nbsp; $ \langle q_ν \rangle$&amp;amp;nbsp; of symbols. &lt;br /&gt;
*The run variable is &amp;amp;nbsp;$ν = 1$, ... , $N$, where&amp;amp;nbsp; $N$&amp;amp;nbsp; should be &amp;quot;sufficiently large&amp;quot;. &lt;br /&gt;
*Each individual source symbol &amp;amp;nbsp;$q_ν$&amp;amp;nbsp; comes from a symbol set&amp;amp;nbsp; $\{q_μ \}$&amp;amp;nbsp; with&amp;amp;nbsp; $μ = 1$, ... , $M$, where&amp;amp;nbsp; $M$&amp;amp;nbsp; denotes the symbol range:&lt;br /&gt;
 &lt;br /&gt;
:$$q_{\nu} \in \left \{ q_{\mu}  \right \}, \hspace{0.25cm}{\rm with}\hspace{0.25cm} \nu = 1, \hspace{0.05cm} \text{ ...}\hspace{0.05cm} , N\hspace{0.25cm}{\rm and}\hspace{0.25cm}\mu = 1,\hspace{0.05cm} \text{ ...}\hspace{0.05cm} , M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The figure shows a quaternary message source&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; with the alphabet&amp;amp;nbsp; $\rm \{A, \ B, \ C, \ D\}$&amp;amp;nbsp; and an exemplary sequence of length&amp;amp;nbsp; $N = 100$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2227_Inf_T_1_1_S1a_neu.png|frame|Memoryless Quaternary Message Source]]&lt;br /&gt;
&lt;br /&gt;
The following requirements apply:&lt;br /&gt;
*The quaternary message source is fully described by the&amp;amp;nbsp; $M = 4$&amp;amp;nbsp; symbol probabilities&amp;amp;nbsp; $p_μ$.&amp;amp;nbsp; In general:&lt;br /&gt;
:$$\sum_{\mu = 1}^M \hspace{0.1cm}p_{\mu} = 1 \hspace{0.05cm}.$$&lt;br /&gt;
*The message source is memoryless, i.e., the individual sequence elements are&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Statistical Dependence and Independence#General_definition_of_statistical_dependence|statistically independent of each other]]:&lt;br /&gt;
:$${\rm Pr} \left (q_{\nu} = q_{\mu} \right ) = {\rm Pr} \left (q_{\nu} = q_{\mu} \hspace{0.03cm} | \hspace{0.03cm} q_{\nu -1}, q_{\nu -2}, \hspace{0.05cm} \text{ ...}\hspace{0.05cm}\right ) \hspace{0.05cm}.$$&lt;br /&gt;
*Since the alphabet consists of symbols&amp;amp;nbsp; (and not of random variables)&amp;amp;nbsp;, the specification of&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Expected_Values_and_Moments|expected values]]&amp;amp;nbsp; (linear mean, quadratic mean, dispersion, etc.) is not possible here, but also not necessary from an information-theoretical point of view.&lt;br /&gt;
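The requirements above can be illustrated with a short sketch (illustrative Python, not part of the LNTwww course material; the function name is hypothetical): each symbol is drawn independently with the probabilities $p_μ$, which is exactly the memoryless property stated above.

```python
import random

# Illustrative sketch: a memoryless quaternary source Q with alphabet
# {A, B, C, D}. Every symbol q_nu is drawn independently, so Pr(q_nu)
# does not depend on q_(nu-1), q_(nu-2), ...
symbols = ["A", "B", "C", "D"]
probs = [0.4, 0.3, 0.2, 0.1]  # symbol probabilities p_mu, summing to 1

def draw_sequence(n, seed=0):
    """Return one exemplary sequence <q_1, ..., q_N> of length N = n."""
    rng = random.Random(seed)
    return rng.choices(symbols, weights=probs, k=n)

sequence = draw_sequence(100)  # N = 100, as in the figure
```

For large `n`, the relative frequencies of the drawn symbols approach the probabilities `probs`, as discussed in Example 1.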
&lt;br /&gt;
&lt;br /&gt;
These properties will now be illustrated with an example.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_1_S1b_vers2.png|right|frame|Relative frequencies as a function of&amp;amp;nbsp; $N$]]&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
The symbol probabilities of a quaternary source are: &lt;br /&gt;
:$$p_{\rm A} = 0.4 \hspace{0.05cm},\hspace{0.2cm}p_{\rm B} = 0.3 \hspace{0.05cm},\hspace{0.2cm}p_{\rm C} = 0.2 \hspace{0.05cm},\hspace{0.2cm} &lt;br /&gt;
p_{\rm D} = 0.1\hspace{0.05cm}.$$&lt;br /&gt;
For an infinitely long sequence&amp;amp;nbsp; $(N \to \infty)$ &lt;br /&gt;
*the&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/From_Random_Experiment_to_Random_Variable#Bernoulli&#039;s_Law_of_Large_Numbers|relative frequencies]]&amp;amp;nbsp; $h_{\rm A}$,&amp;amp;nbsp; $h_{\rm B}$,&amp;amp;nbsp; $h_{\rm C}$,&amp;amp;nbsp; $h_{\rm D}$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; a-posteriori parameters &lt;br /&gt;
*are identical to the&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Some_Basic_Definitions#Event_and_Event_set|probabilities]]&amp;amp;nbsp; $p_{\rm A}$,&amp;amp;nbsp; $p_{\rm B}$,&amp;amp;nbsp; $p_{\rm C}$,&amp;amp;nbsp; $p_{\rm D}$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; a-priori parameters. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For smaller&amp;amp;nbsp; $N$&amp;amp;nbsp; deviations may occur, as the adjacent table (the result of a simulation) shows. &lt;br /&gt;
&lt;br /&gt;
*In the graphic above an exemplary sequence is shown with&amp;amp;nbsp; $N = 100$&amp;amp;nbsp; symbols. &lt;br /&gt;
*Since the set elements are the symbols&amp;amp;nbsp; $\rm A$,&amp;amp;nbsp; $\rm B$,&amp;amp;nbsp; $\rm C$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm D$, no mean values can be given. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, if you replace the symbols with numerical values, for example&amp;amp;nbsp; $\rm A \Rightarrow 1$, &amp;amp;nbsp; $\rm B \Rightarrow 2$, &amp;amp;nbsp; $\rm C \Rightarrow 3$, &amp;amp;nbsp; $\rm D \Rightarrow 4$, then you obtain &amp;lt;br&amp;gt; &amp;amp;nbsp; &amp;amp;nbsp; by time averaging &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; overline notation &amp;amp;nbsp; &amp;amp;nbsp; or &amp;amp;nbsp; &amp;amp;nbsp; by ensemble averaging &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; expected value&lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Moments of a Discrete Random Variable#Linear_Average_-_Direct_Component|linear average]] :&lt;br /&gt;
:$$m_1 = \overline { q_{\nu} } = {\rm E} \big [ q_{\mu} \big ] = 0.4 \cdot 1 + 0.3 \cdot 2 + 0.2 \cdot 3 + 0.1 \cdot 4&lt;br /&gt;
= 2 \hspace{0.05cm},$$ &lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Moments of a Discrete Random Variable#Square_mean_.E2.80.93_Variance_.E2.80.93_Scattering |square mean]]:&lt;br /&gt;
:$$m_2 = \overline { q_{\nu}^{\hspace{0.05cm}2}  } = {\rm E} \big [ q_{\mu}^{\hspace{0.05cm}2} \big ] = 0.4 \cdot 1^2 + 0.3 \cdot 2^2 + 0.2 \cdot 3^2 + 0.1 \cdot 4^2&lt;br /&gt;
= 5 \hspace{0.05cm},$$&lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Expected_Values_and_Moments#Some_often_used_Central_Moments|standard deviation]] (scattering) according to the &amp;quot;Theorem of Steiner&amp;quot;:&lt;br /&gt;
:$$\sigma = \sqrt {m_2 - m_1^2} = \sqrt {5 - 2^2} = 1 \hspace{0.05cm}.$$}}	&lt;br /&gt;
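The moment calculation in Example 1 can be checked with a minimal sketch (the mapping A&nbsp;⇒&nbsp;1, ..., D&nbsp;⇒&nbsp;4 is the one assumed in the example):

```python
# Check of Example 1 (symbols mapped to numerical values A=1, B=2, C=3, D=4):
# linear average m1, square mean m2, standard deviation via Steiner's theorem.
values = [1, 2, 3, 4]
probs = [0.4, 0.3, 0.2, 0.1]

m1 = sum(p * v for p, v in zip(probs, values))       # linear average: 2
m2 = sum(p * v ** 2 for p, v in zip(probs, values))  # square mean: 5
sigma = (m2 - m1 ** 2) ** 0.5                        # sigma^2 = m2 - m1^2: 1
```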
&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
==Decision content - Message content==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In 1948,&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Claude_Shannon Claude Elwood Shannon]&amp;amp;nbsp; defined, in the standard work of information theory&amp;amp;nbsp; [Sha48]&amp;lt;ref name=&#039;Sha48&#039;&amp;gt;Shannon, C.E.: A Mathematical Theory of Communication. In: Bell Syst. Techn. J. 27 (1948), pp. 379-423 and pp. 623-656.&amp;lt;/ref&amp;gt;, the concept of information as the &amp;quot;decrease of uncertainty about the occurrence of a statistical event&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
Let us consider a thought experiment with&amp;amp;nbsp; $M$&amp;amp;nbsp; possible results, which are all equally probable: &amp;amp;nbsp; $p_1 = p_2 = \hspace{0.05cm} \text{ ...}\hspace{0.05cm} = p_M = 1/M \hspace{0.05cm}.$ &lt;br /&gt;
&lt;br /&gt;
Under this assumption, the following holds:&lt;br /&gt;
*If&amp;amp;nbsp; $M = 1$, then each individual trial yields the same result and there is therefore no uncertainty about the outcome.&lt;br /&gt;
*On the other hand, an observer gains information from an experiment with&amp;amp;nbsp; $M = 2$, for example the &amp;quot;coin toss&amp;quot; with the set of events&amp;amp;nbsp; $\big \{\rm \boldsymbol{\rm Z}, \rm \boldsymbol{\rm W} \big \}$&amp;amp;nbsp; and the probabilities&amp;amp;nbsp; $p_{\rm Z} = p_{\rm W} = 0.5$; the uncertainty regarding&amp;amp;nbsp; $\rm Z$ &amp;amp;nbsp;or&amp;amp;nbsp; $\rm W$&amp;amp;nbsp; is resolved.&lt;br /&gt;
*In the experiment &amp;quot;dice&amp;quot;&amp;amp;nbsp; $(M = 6)$&amp;amp;nbsp; and even more so in roulette&amp;amp;nbsp; $(M = 37)$, the information gained is even more significant for the observer than in the &amp;quot;coin toss&amp;quot; when he learns which number was thrown or on which number the ball landed.&lt;br /&gt;
*Finally, consider that the experiment&amp;amp;nbsp; &amp;quot;triple coin toss&amp;quot;&amp;amp;nbsp; with the&amp;amp;nbsp; $M = 8$&amp;amp;nbsp; possible results&amp;amp;nbsp; $\rm ZZZ$,&amp;amp;nbsp; $\rm ZZW$,&amp;amp;nbsp; $\rm ZWZ$,&amp;amp;nbsp; $\rm ZWW$,&amp;amp;nbsp; $\rm WZZ$,&amp;amp;nbsp; $\rm WZW$,&amp;amp;nbsp; $\rm WWZ$,&amp;amp;nbsp; $\rm WWW$&amp;amp;nbsp; provides three times as much information as the single coin toss&amp;amp;nbsp; $(M = 2)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following definition fulfills all the requirements listed here for a quantitative information measure for equally probable events, determined solely by the symbol range&amp;amp;nbsp; $M$.&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp; The&amp;amp;nbsp; &#039;&#039;&#039;decision content&#039;&#039;&#039; &amp;amp;nbsp; of a message source depends only on the symbol range&amp;amp;nbsp; $M$&amp;amp;nbsp; and results in&lt;br /&gt;
 &lt;br /&gt;
:$$H_0 = {\rm log}\hspace{0.1cm}M = {\rm log}_2\hspace{0.1cm}M \hspace{0.15cm} \text{(in &amp;quot;bit&amp;quot;)}&lt;br /&gt;
= {\rm ln}\hspace{0.1cm}M \hspace{0.15cm}\text {(in &amp;quot;nat&amp;quot;)}&lt;br /&gt;
= {\rm lg}\hspace{0.1cm}M \hspace{0.15cm}\text {(in &amp;quot;Hartley&amp;quot;)}\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*The term&amp;amp;nbsp; &#039;&#039;message content&#039;&#039; is also commonly used for this. &lt;br /&gt;
*Since&amp;amp;nbsp; $H_0$&amp;amp;nbsp; indicates the maximum value of the&amp;amp;nbsp; [[Information_Theory/Sources with Memory#Information_Content_and_Entropy|entropy]]&amp;amp;nbsp; $H$, the short notation&amp;amp;nbsp; $H_\text{max}$&amp;amp;nbsp; is also used in our tutorial. }}&lt;br /&gt;
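The three unit variants of the definition can be tried out directly; a minimal sketch (illustrative, using Python's `math` module; the function name is an assumption):

```python
import math

# Decision content H0 = log M, expressed in the three units named above.
def decision_content(M):
    return {
        "bit": math.log2(M),       # binary logarithm  (log_2)
        "nat": math.log(M),        # natural logarithm (ln)
        "Hartley": math.log10(M),  # decimal logarithm (lg)
    }

h0 = decision_content(4)  # quaternary source: H0 = 2 bit
```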
&lt;br /&gt;
&lt;br /&gt;
Please note our nomenclature:&lt;br /&gt;
*The logarithm will be called &amp;quot;log&amp;quot; in the following, independent of the base. &lt;br /&gt;
*The relations mentioned above are fulfilled due to the following properties:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm log}\hspace{0.1cm}1 = 0 \hspace{0.05cm},\hspace{0.2cm}&lt;br /&gt;
{\rm log}\hspace{0.1cm}37 &amp;gt; {\rm log}\hspace{0.1cm}6 &amp;gt; {\rm log}\hspace{0.1cm}2\hspace{0.05cm},\hspace{0.2cm}&lt;br /&gt;
{\rm log}\hspace{0.1cm}M^k = k \cdot {\rm log}\hspace{0.1cm}M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
* Usually we use the logarithm to the base&amp;amp;nbsp; $2$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;Logarithm dualis&#039;&#039;&amp;amp;nbsp; $\rm (ld)$, where the pseudo unit &amp;quot;bit&amp;quot;, more precisely:&amp;amp;nbsp; &amp;quot;bit/symbol&amp;quot;, is then added:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm ld}\hspace{0.1cm}M = {\rm log_2}\hspace{0.1cm}M = \frac{{\rm lg}\hspace{0.1cm}M}{{\rm lg}\hspace{0.1cm}2}&lt;br /&gt;
= \frac{{\rm ln}\hspace{0.1cm}M}{{\rm ln}\hspace{0.1cm}2} &lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*In addition, further definitions can be found in the literature, based on the natural logarithm&amp;amp;nbsp; $\rm (ln)$&amp;amp;nbsp; or the decimal logarithm&amp;amp;nbsp; $\rm (lg)$.&lt;br /&gt;
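The base conversion above is easy to verify numerically (a sketch; `math.log2` plays the role of the logarithm dualis):

```python
import math

# ld M = lg M / lg 2 = ln M / ln 2: all three routes give the same value.
M = 37
ld_direct = math.log2(M)
ld_via_lg = math.log10(M) / math.log10(2)
ld_via_ln = math.log(M) / math.log(2)
```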
 &lt;br /&gt;
==Information content and entropy ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We now drop the previous requirement that all&amp;amp;nbsp; $M$&amp;amp;nbsp; possible results of an experiment are equally probable.&amp;amp;nbsp; To keep the notation as compact as possible, we assume for this page only:&lt;br /&gt;
 &lt;br /&gt;
:$$p_1 &amp;gt; p_2 &amp;gt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;gt; p_\mu &amp;gt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;gt; p_{M-1} &amp;gt; p_M\hspace{0.05cm},\hspace{0.4cm}\sum_{\mu = 1}^M p_{\mu} = 1 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
We now consider the &#039;&#039;information content&#039;&#039;&amp;amp;nbsp; of the individual symbols, where we denote the &amp;quot;logarithm dualis&amp;quot; with $\log_2$:&lt;br /&gt;
 &lt;br /&gt;
:$$I_\mu = {\rm log_2}\hspace{0.1cm}\frac{1}{p_\mu}= -\hspace{0.05cm}{\rm log_2}\hspace{0.1cm}{p_\mu}&lt;br /&gt;
\hspace{0.5cm}\text{(unit: bit, more precisely: bit/symbol)}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
You can see:&lt;br /&gt;
*Because of&amp;amp;nbsp; $p_μ ≤ 1$, the information content is never negative.&amp;amp;nbsp; In the limiting case&amp;amp;nbsp; $p_μ \to 1$&amp;amp;nbsp; we have&amp;amp;nbsp; $I_μ \to 0$. &lt;br /&gt;
*However, for&amp;amp;nbsp; $I_μ = 0$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; $p_μ = 1$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; $M = 1$, the decision content is also&amp;amp;nbsp; $H_0 = 0$.&lt;br /&gt;
*For decreasing probabilities&amp;amp;nbsp; $p_μ$&amp;amp;nbsp; the information content increases continuously:&lt;br /&gt;
 &lt;br /&gt;
:$$I_1 &amp;lt; I_2 &amp;lt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;lt; I_\mu &amp;lt;\hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;lt; I_{M-1} &amp;lt; I_M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &#039;&#039;&#039;The more improbable an event is, the greater is its information content&#039;&#039;&#039;.&amp;amp;nbsp; This fact is also found in daily life:&lt;br /&gt;
*&amp;quot;Six correct numbers&amp;quot; in the lottery attract more attention than &amp;quot;three correct numbers&amp;quot; or no win at all.&lt;br /&gt;
*A tsunami in Asia also dominates the news in Germany for weeks as opposed to the almost standard Deutsche Bahn delays.&lt;br /&gt;
*A series of defeats of Bayern Munich leads to huge headlines in contrast to a winning series.&amp;amp;nbsp; With 1860 Munich exactly the opposite is the case.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, the information content of a single symbol (or event) is not very interesting.&amp;amp;nbsp; On the other hand, one obtains &lt;br /&gt;
*by ensemble averaging over all possible symbols&amp;amp;nbsp; $q_μ$ &amp;amp;nbsp;or&amp;amp;nbsp; &lt;br /&gt;
*by time averaging over all elements of the sequence&amp;amp;nbsp; $\langle q_ν \rangle$&lt;br /&gt;
&lt;br /&gt;
one of the central quantities of information theory. &lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp; The&amp;amp;nbsp; &#039;&#039;&#039;Entropy&#039;&#039;&#039;&amp;amp;nbsp; $H$&amp;amp;nbsp; of a source indicates the &#039;&#039;mean information content of all symbols&#039;&#039;&amp;amp;nbsp;:&lt;br /&gt;
 &lt;br /&gt;
:$$H = \overline{I_\nu} = {\rm E}\hspace{0.01cm}[I_\mu] = \sum_{\mu = 1}^M p_{\mu} \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{p_\mu}=&lt;br /&gt;
 -\sum_{\mu = 1}^M p_{\mu} \cdot{\rm log_2}\hspace{0.1cm}{p_\mu} \hspace{0.5cm}\text{(unit: bit, more precisely: bit/symbol)} &lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The overline again marks a time averaging and&amp;amp;nbsp; $\rm E[\text{...}]$&amp;amp;nbsp; an ensemble averaging.}}&lt;br /&gt;
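As a sketch (illustrative Python, not part of the course material), the definition translates directly into code; applied here to the quaternary probabilities from Example 1:

```python
import math

# Entropy H = E[I_mu] = sum of p_mu * log2(1/p_mu), in bit/symbol.
# Terms with p_mu = 0 are skipped, in line with lim p->0 of p*log2(1/p) = 0.
def entropy(probs):
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

H = entropy([0.4, 0.3, 0.2, 0.1])  # quaternary source from Example 1
```

For equally probable symbols, `entropy` returns the decision content $H_0 = \log_2 M$; for any other distribution it is smaller.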
&lt;br /&gt;
&lt;br /&gt;
Entropy is among other things a measure for&lt;br /&gt;
*the mean uncertainty about the outcome of a statistical event,&lt;br /&gt;
*the &amp;quot;randomness&amp;quot; of this event,&amp;amp;nbsp; and&lt;br /&gt;
*the average information content of a random variable.	 &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
==Binary entropy function ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
At first we restrict ourselves to the special case&amp;amp;nbsp; $M = 2$&amp;amp;nbsp; and consider a binary source, which emits the two symbols&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$.&amp;amp;nbsp; The occurrence probabilities are &amp;amp;nbsp; $p_{\rm A} = p$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 1 - p$.&lt;br /&gt;
&lt;br /&gt;
For the entropy of this binary source applies: &lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm bin} (p) = p \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{\hspace{0.1cm}p\hspace{0.1cm}} + (1-p) \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{1-p} \hspace{0.5cm}\text{(unit: bit, more precisely: bit/symbol)}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The function&amp;amp;nbsp; $H_\text{bin}(p)$&amp;amp;nbsp; is called the&amp;amp;nbsp; &#039;&#039;&#039;binary entropy function&#039;&#039;&#039;.&amp;amp;nbsp; The entropy of a source with a larger symbol range&amp;amp;nbsp; $M$&amp;amp;nbsp; can often be expressed using&amp;amp;nbsp; $H_\text{bin}(p)$.&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
The figure shows the binary entropy function for the values&amp;amp;nbsp; $0 ≤ p ≤ 1$&amp;amp;nbsp; of the symbol probability of&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; $($or also of&amp;amp;nbsp; $\rm B)$.&amp;amp;nbsp; You can see&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_1_S4_vers2.png|frame|Binary entropy function as function of&amp;amp;nbsp; $p$|right]]&lt;br /&gt;
*The maximum value&amp;amp;nbsp; $H_\text{max} = 1\; \rm bit$&amp;amp;nbsp; results for&amp;amp;nbsp; $p = 0.5$, thus for equally probable binary symbols.&amp;amp;nbsp; Then &amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$&amp;amp;nbsp; contribute the same amount to entropy.&lt;br /&gt;
* $H_\text{bin}(p)$&amp;amp;nbsp; is symmetrical about&amp;amp;nbsp; $p = 0.5$.&amp;amp;nbsp; A source with&amp;amp;nbsp; $p_{\rm A} = 0.1$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 0.9$&amp;amp;nbsp; has the same entropy&amp;amp;nbsp; $H = 0.469 \; \rm bit$&amp;amp;nbsp; as a source with&amp;amp;nbsp; $p_{\rm A} = 0.9$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 0.1$.&lt;br /&gt;
*The difference&amp;amp;nbsp; $ΔH = H_\text{max} - H$ gives&amp;amp;nbsp; the&amp;amp;nbsp; &#039;&#039;redundancy&#039;&#039;&amp;amp;nbsp; of the source and&amp;amp;nbsp; $r = ΔH/H_\text{max}$&amp;amp;nbsp; the&amp;amp;nbsp; &#039;&#039;relative redundancy&#039;&#039;. &amp;amp;nbsp; In the example,&amp;amp;nbsp; $ΔH = 0.531\; \rm bit$&amp;amp;nbsp; and&amp;amp;nbsp; $r = 53.1 \rm \%$.&lt;br /&gt;
*For&amp;amp;nbsp; $p = 0$&amp;amp;nbsp; this results in&amp;amp;nbsp; $H = 0$, since the symbol sequence &amp;amp;nbsp;$\rm B \ B \ B \text{...}$&amp;amp;nbsp; can be predicted with certainty. &amp;amp;nbsp; Actually, the symbol range is then only&amp;amp;nbsp; $M = 1$.&amp;amp;nbsp; The same applies to&amp;amp;nbsp; $p = 1$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; symbol sequence &amp;amp;nbsp;$\rm A \ A \ A \text{...}$.&lt;br /&gt;
*$H_\text{bin}(p)$&amp;amp;nbsp; is always a&amp;amp;nbsp; &#039;&#039;concave function&#039;&#039;, since its second derivative with respect to the parameter&amp;amp;nbsp; $p$&amp;amp;nbsp; is negative for all values of&amp;amp;nbsp; $p$: &lt;br /&gt;
:$$\frac{ {\rm d}^2H_{\rm bin} (p)}{ {\rm d}\,p^2} = \frac{- 1}{ {\rm ln}(2) \cdot p \cdot (1-p)}&amp;lt; 0&lt;br /&gt;
\hspace{0.05cm}.$$}}&lt;br /&gt;
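The properties listed in Example 2 can be checked with a short sketch (illustrative; `h_bin` is a hypothetical helper name):

```python
import math

# Binary entropy function H_bin(p); the edge cases p = 0 and p = 1 give H = 0.
def h_bin(p):
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1.0 / p) + (1.0 - p) * math.log2(1.0 / (1.0 - p))

assert h_bin(0.5) == 1.0                    # maximum: equally probable symbols
assert abs(h_bin(0.1) - h_bin(0.9)) < 1e-9  # symmetry about p = 0.5
```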
&lt;br /&gt;
==Message sources with a larger symbol range==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In the&amp;amp;nbsp; [[Information_Theory/Sources with Memory#Model_and_Prerequisites|first section]]&amp;amp;nbsp; of this chapter we considered a quaternary message source&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; with the symbol probabilities&amp;amp;nbsp; $p_{\rm A} = 0.4$, &amp;amp;nbsp; $p_{\rm B} = 0.3$, &amp;amp;nbsp; $p_{\rm C} = 0.2$ &amp;amp;nbsp; and&amp;amp;nbsp; $ p_{\rm D} = 0.1$.&amp;amp;nbsp; This source has the following entropy:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm quat} = 0.4 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.4} + 0.3 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.2}+ 0.1 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.1}.$$&lt;br /&gt;
&lt;br /&gt;
For numerical calculation, the detour via the decimal logarithm&amp;amp;nbsp; $\lg \ x = {\rm log}_{10} \ x$&amp;amp;nbsp; is often necessary, since the &#039;&#039;logarithm dualis&#039;&#039;&amp;amp;nbsp; $ {\rm log}_2 \ x$&amp;amp;nbsp; is usually not found on pocket calculators:&lt;br /&gt;
&lt;br /&gt;
:$$H_{\rm quat}=\frac{1}{{\rm lg}\hspace{0.1cm}2} \cdot \left [ 0.4 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.4} + 0.3 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.3} + 0.2 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.2} + 0.1 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.1} \right ] = 1.846\,{\rm bit}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
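The decimal-logarithm detour can be verified with a minimal sketch; it agrees with the direct log&#8322; computation:

```python
import math

# Pocket-calculator detour: log2(x) = lg(x) / lg(2).
probs = [0.4, 0.3, 0.2, 0.1]
H_detour = (1.0 / math.log10(2)) * sum(p * math.log10(1.0 / p) for p in probs)
H_direct = sum(p * math.log2(1.0 / p) for p in probs)
```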
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp;&lt;br /&gt;
Now there are certain symmetries between the symbol probabilities: &lt;br /&gt;
[[File:Inf_T_1_1_S5_vers2.png|frame|Entropy of binary source and quaternary source]]&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = p_{\rm D} = p \hspace{0.05cm},\hspace{0.4cm}p_{\rm B} = p_{\rm C} = 0.5 - p \hspace{0.05cm},\hspace{0.3cm}{\rm with} \hspace{0.15cm}0 \le p \le 0.5 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
In this case, the binary entropy function can be used to calculate the entropy:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm quat} = 2 \cdot p \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{\hspace{0.1cm}p\hspace{0.1cm} } + 2 \cdot (0.5-p) \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.5-p}$$&lt;br /&gt;
$$\Rightarrow \hspace{0.3cm} H_{\rm quat} = 1 + H_{\rm bin}(2p) \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The graphic shows as a function of&amp;amp;nbsp; $p$&lt;br /&gt;
*the entropy of the quaternary source (blue) &lt;br /&gt;
*in comparison to the entropy course of the binary source (red). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the quaternary source only the abscissa&amp;amp;nbsp; $0 ≤ p ≤ 0.5$&amp;amp;nbsp; is allowed. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
You can see from the blue curve for the quaternary source:&lt;br /&gt;
*The maximum entropy&amp;amp;nbsp; $H_\text{max} = 2 \; \rm bit/symbol$&amp;amp;nbsp; results for&amp;amp;nbsp; $p = 0.25$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; equally probable symbols: &amp;amp;nbsp; $p_{\rm A} = p_{\rm B} = p_{\rm C} = p_{\rm D} = 0.25$.&lt;br /&gt;
*With&amp;amp;nbsp; $p = 0$&amp;amp;nbsp; or&amp;amp;nbsp; $p = 0.5$, the quaternary source degenerates to a binary source with&amp;amp;nbsp; $p_{\rm B} = p_{\rm C} = 0.5$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm A} = p_{\rm D} = 0$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; entropy&amp;amp;nbsp; $H = 1 \; \rm bit/symbol$.&lt;br /&gt;
*The source with&amp;amp;nbsp; $p_{\rm A} = p_{\rm D} = 0.1$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = p_{\rm C} = 0.4$&amp;amp;nbsp; has the following characteristics (each with the pseudo unit &amp;quot;bit/symbol&amp;quot;):&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp; &amp;amp;nbsp; &#039;&#039;&#039;(1)&#039;&#039;&#039; &amp;amp;nbsp; Entropy: &amp;amp;nbsp; $H = 1 + H_{\rm bin} (2p) =1 + H_{\rm bin} (0.2) = 1.722,$&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp; &amp;amp;nbsp; &#039;&#039;&#039;(2)&#039;&#039;&#039; &amp;amp;nbsp; Redundancy: &amp;amp;nbsp; ${\rm \Delta }H = {\rm log_2}\hspace{0.1cm} M - H =2- 1.722= 0.278,$&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp; &amp;amp;nbsp; &#039;&#039;&#039;(3)&#039;&#039;&#039; &amp;amp;nbsp; Relative redundancy: &amp;amp;nbsp; $r ={\rm \Delta }H/({\rm log_2}\hspace{0.1cm} M) = 0.139\hspace{0.05cm}.$&lt;br /&gt;
&lt;br /&gt;
*The redundancy of the quaternary source with&amp;amp;nbsp; $p = 0.1$&amp;amp;nbsp; is equal to&amp;amp;nbsp; $ΔH = 0.278 \; \rm bit/symbol$&amp;amp;nbsp; and thus exactly the same as the redundancy of the binary source with&amp;amp;nbsp; $p = 0.2$.}}&lt;br /&gt;
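The relationship $H_{\rm quat} = 1 + H_{\rm bin}(2p)$ used in Example 3, as well as the quoted characteristics, can be checked with a sketch (illustrative function names):

```python
import math

def h_bin(p):
    """Binary entropy function in bit."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h_quat(p):
    """Entropy of the symmetric quaternary source p_A = p_D = p, p_B = p_C = 0.5 - p."""
    probs = [p, 0.5 - p, 0.5 - p, p]
    return sum(q * math.log2(1.0 / q) for q in probs if q > 0)

# Check: H_quat = 1 + H_bin(2p); maximum 2 bit/symbol at p = 0.25.
assert abs(h_quat(0.1) - (1 + h_bin(0.2))) < 1e-9  # about 1.722 bit/symbol
assert abs(h_quat(0.25) - 2.0) < 1e-12
```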
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Exercises for the chapter ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:1.1 Wetterentropie|Exercise 1.1: Wetterentropie]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.1Z Binäre Entropiefunktion|Exercise 1.1Z: Binäre Entropiefunktion]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.2 Entropie von Ternärquellen|Exercise 1.2: Entropie von Ternärquellen]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==List of sources==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Memoryless_Sources&amp;diff=35016</id>
		<title>Information Theory/Discrete Memoryless Sources</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Memoryless_Sources&amp;diff=35016"/>
		<updated>2020-10-27T22:48:24Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{FirstPage}}&lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Entropie wertdiskreter Nachrichtenquellen&lt;br /&gt;
|Vorherige Seite=&lt;br /&gt;
|Nächste Seite=Nachrichtenquellen mit Gedächtnis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== # OVERVIEW OF THE FIRST MAIN CHAPTER # ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This first chapter describes the calculation and the meaning of entropy.&amp;amp;nbsp; According to Shannon's definition of information, entropy is a measure of the mean uncertainty about the outcome of a statistical event, or of the uncertainty in the measurement of a stochastic quantity.&amp;amp;nbsp; Somewhat casually expressed, the entropy of a random quantity quantifies its &amp;quot;randomness&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
The following topics are discussed in detail:&lt;br /&gt;
&lt;br /&gt;
*the &#039;&#039;decision content&#039;&#039;&amp;amp;nbsp; and the &#039;&#039;entropy&#039;&#039;&amp;amp;nbsp; of a memoryless message source,&lt;br /&gt;
*the &#039;&#039;binary entropy function&#039;&#039;&amp;amp;nbsp; and its application to &#039;&#039;non-binary sources&#039;&#039;,&lt;br /&gt;
*the entropy calculation for &#039;&#039;sources with memory&#039;&#039;&amp;amp;nbsp; and suitable approximations,&lt;br /&gt;
*the peculiarities of &#039;&#039;Markov sources&#039;&#039;&amp;amp;nbsp; regarding the entropy calculation,&lt;br /&gt;
*the procedure for sources with a large number of symbols, for example &#039;&#039;natural texts&#039;&#039;,&lt;br /&gt;
*the &#039;&#039;entropy estimates&#039;&#039;&amp;amp;nbsp; according to Shannon and Küpfmüller.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Further information on the topic, as well as exercises, simulations and programming exercises, can be found in the experiment &amp;quot;Value Discrete Information Theory&amp;quot; of the practical course &amp;quot;Simulation Digitaler Übertragungssysteme&amp;quot; (English: Simulation of Digital Transmission Systems).&amp;amp;nbsp; This (former) LNT course at the TU Munich is based on&lt;br /&gt;
&lt;br /&gt;
*the Windows program&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Programme/WDIT.zip WDIT] &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link points to the ZIP version of the program and &lt;br /&gt;
*the associated&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Texte/Wertdiskrete_Informationstheorie.pdf lab manual]  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link refers to the PDF version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Model and requirements == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We consider a discrete-value message source&amp;amp;nbsp; $\rm Q$, which emits a sequence&amp;amp;nbsp; $ \langle q_ν \rangle$&amp;amp;nbsp; of symbols. &lt;br /&gt;
*The run variable is &amp;amp;nbsp;$ν = 1$, ... , $N$, where&amp;amp;nbsp; $N$&amp;amp;nbsp; should be &amp;quot;sufficiently large&amp;quot;. &lt;br /&gt;
*Each individual source symbol &amp;amp;nbsp;$q_ν$&amp;amp;nbsp; comes from a symbol set&amp;amp;nbsp; $\{q_μ \}$&amp;amp;nbsp; with&amp;amp;nbsp; $μ = 1$, ... , $M$, where&amp;amp;nbsp; $M$&amp;amp;nbsp; denotes the symbol range:&lt;br /&gt;
 &lt;br /&gt;
:$$q_{\nu} \in \left \{ q_{\mu}  \right \}, \hspace{0.25cm}{\rm with}\hspace{0.25cm} \nu = 1, \hspace{0.05cm} \text{ ...}\hspace{0.05cm} , N\hspace{0.25cm}{\rm and}\hspace{0.25cm}\mu = 1,\hspace{0.05cm} \text{ ...}\hspace{0.05cm} , M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The figure shows a quaternary message source&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; with the alphabet&amp;amp;nbsp; $\rm \{A, \ B, \ C, \ D\}$&amp;amp;nbsp; and an exemplary sequence of length&amp;amp;nbsp; $N = 100$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2227_Inf_T_1_1_S1a_neu.png|frame|Memoryless Quaternary Message Source]]&lt;br /&gt;
&lt;br /&gt;
The following requirements apply:&lt;br /&gt;
*The quaternary message source is fully described by the&amp;amp;nbsp; $M = 4$&amp;amp;nbsp; symbol probabilities&amp;amp;nbsp; $p_μ$.&amp;amp;nbsp; In general:&lt;br /&gt;
:$$\sum_{\mu = 1}^M \hspace{0.1cm}p_{\mu} = 1 \hspace{0.05cm}.$$&lt;br /&gt;
*The message source is memoryless, i.e., the individual sequence elements are&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Statistical Dependence and Independence#General_definition_of_statistical_dependence|statistically independent of each other]]:&lt;br /&gt;
:$${\rm Pr} \left (q_{\nu} = q_{\mu} \right ) = {\rm Pr} \left (q_{\nu} = q_{\mu} \hspace{0.03cm} | \hspace{0.03cm} q_{\nu -1}, q_{\nu -2}, \hspace{0.05cm} \text{ ...}\hspace{0.05cm}\right ) \hspace{0.05cm}.$$&lt;br /&gt;
*Since the alphabet consists of symbols&amp;amp;nbsp; (and not of random variables)&amp;amp;nbsp;, the specification of&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Expected_Values_and_Moments|expected values]]&amp;amp;nbsp; (linear mean, quadratic mean, dispersion, etc.) is not possible here, but also not necessary from an information-theoretical point of view.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These properties will now be illustrated with an example.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_1_S1b_vers2.png|right|frame|Relative frequencies as a function of&amp;amp;nbsp; $N$]]&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
The symbol probabilities of a quaternary source are: &lt;br /&gt;
:$$p_{\rm A} = 0.4 \hspace{0.05cm},\hspace{0.2cm}p_{\rm B} = 0.3 \hspace{0.05cm},\hspace{0.2cm}p_{\rm C} = 0.2 \hspace{0.05cm},\hspace{0.2cm} &lt;br /&gt;
p_{\rm D} = 0.1\hspace{0.05cm}.$$&lt;br /&gt;
For an infinitely long sequence&amp;amp;nbsp; $(N \to \infty)$ &lt;br /&gt;
*the&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/From_Random_Experiment_to_Random_Variable#Bernoulli&#039;s_Law_of_Large_Numbers|relative frequencies]]&amp;amp;nbsp; $h_{\rm A}$,&amp;amp;nbsp; $h_{\rm B}$,&amp;amp;nbsp; $h_{\rm C}$,&amp;amp;nbsp; $h_{\rm D}$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; a-posteriori parameters &lt;br /&gt;
*are identical to the&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Some_Basic_Definitions#Event_and_Event_set|probabilities]]&amp;amp;nbsp; $p_{\rm A}$,&amp;amp;nbsp; $p_{\rm B}$,&amp;amp;nbsp; $p_{\rm C}$,&amp;amp;nbsp; $p_{\rm D}$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; a-priori parameters. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For smaller&amp;amp;nbsp; $N$&amp;amp;nbsp; deviations may occur, as the adjacent table (the result of a simulation) shows. &lt;br /&gt;
&lt;br /&gt;
*In the graphic above an exemplary sequence is shown with&amp;amp;nbsp; $N = 100$&amp;amp;nbsp; symbols. &lt;br /&gt;
*Since the set elements are the symbols&amp;amp;nbsp; $\rm A$,&amp;amp;nbsp; $\rm B$,&amp;amp;nbsp; $\rm C$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm D$, no mean values can be given. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, if you replace the symbols with numerical values, for example&amp;amp;nbsp; $\rm A \Rightarrow 1$, &amp;amp;nbsp; $\rm B \Rightarrow 2$, &amp;amp;nbsp; $\rm C \Rightarrow 3$, &amp;amp;nbsp; $\rm D \Rightarrow 4$, then you obtain &amp;lt;br&amp;gt; &amp;amp;nbsp; &amp;amp;nbsp; by time averaging &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; overline notation &amp;amp;nbsp; &amp;amp;nbsp; or &amp;amp;nbsp; &amp;amp;nbsp; by ensemble averaging &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; expected value&lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Moments of a Discrete Random Variable#Linear_Average_-_Direct_Component|linear average]] :&lt;br /&gt;
:$$m_1 = \overline { q_{\nu} } = {\rm E} \big [ q_{\mu} \big ] = 0.4 \cdot 1 + 0.3 \cdot 2 + 0.2 \cdot 3 + 0.1 \cdot 4&lt;br /&gt;
= 2 \hspace{0.05cm},$$ &lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Moments of a Discrete Random Variable#Square_mean_.E2.80.93_Variance_.E2.80.93_Scattering |square mean]]:&lt;br /&gt;
:$$m_2 = \overline { q_{\nu}^{\hspace{0.05cm}2}  } = {\rm E} \big [ q_{\mu}^{\hspace{0.05cm}2} \big ] = 0.4 \cdot 1^2 + 0.3 \cdot 2^2 + 0.2 \cdot 3^2 + 0.1 \cdot 4^2&lt;br /&gt;
= 5 \hspace{0.05cm},$$&lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Expected_Values_and_Moments#Some_often_used_Central_Moments|standard deviation]] (scattering) according to the &amp;quot;Theorem of Steiner&amp;quot;:&lt;br /&gt;
:$$\sigma = \sqrt {m_2 - m_1^2} = \sqrt {5 - 2^2} = 1 \hspace{0.05cm}.$$}}	&lt;br /&gt;
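The three moments computed above can be checked numerically; a minimal Python sketch, assuming the symbol-to-value mapping&amp;amp;nbsp; $\rm A \Rightarrow 1$, ... , $\rm D \Rightarrow 4$&amp;amp;nbsp; from the text:&lt;br /&gt;

```python
# Moments of the quaternary source after mapping A => 1, ..., D => 4,
# with the probabilities p_A = 0.4, p_B = 0.3, p_C = 0.2, p_D = 0.1.
values = [1, 2, 3, 4]
probs  = [0.4, 0.3, 0.2, 0.1]

m1 = sum(p * q for p, q in zip(probs, values))      # linear average
m2 = sum(p * q**2 for p, q in zip(probs, values))   # square mean
sigma = (m2 - m1**2) ** 0.5                         # Steiner's theorem

print(m1, m2, sigma)   # 2.0 5.0 1.0
```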
&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
==Decision content - Message content==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[https://de.wikipedia.org/wiki/Claude_Shannon Claude Elwood Shannon]&amp;amp;nbsp; defined in 1948 in the standard work of information theory&amp;amp;nbsp; [Sha48]&amp;lt;ref name=&#039;Sha48&#039;&amp;gt;Shannon, C.E.: A Mathematical Theory of Communication. In: Bell Syst. Techn. J. 27 (1948), pp. 379-423 and pp. 623-656.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; the concept of information as &amp;quot;decrease of uncertainty about the occurrence of a statistical event&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
Let us make a mental experiment with&amp;amp;nbsp; $M$&amp;amp;nbsp; possible results, which are all equally probable: &amp;amp;nbsp; $p_1 = p_2 = \hspace{0.05cm} \text{ ...}\hspace{0.05cm} = p_M = 1/M \hspace{0.05cm}.$ &lt;br /&gt;
&lt;br /&gt;
Under this assumption applies:&lt;br /&gt;
*If&amp;amp;nbsp; $M = 1$, then each individual trial yields the same result and there is therefore no uncertainty about the outcome.&lt;br /&gt;
*On the other hand, an observer gains information from an experiment with&amp;amp;nbsp; $M = 2$, for example the &amp;quot;coin toss&amp;quot; with the set of events&amp;amp;nbsp; $\big \{\rm \boldsymbol{\rm Z}, \rm \boldsymbol{\rm W} \big \}$&amp;amp;nbsp; and the probabilities&amp;amp;nbsp; $p_{\rm Z} = p_{\rm W} = 0.5$; the uncertainty regarding&amp;amp;nbsp; $\rm Z$ &amp;amp;nbsp;resp.&amp;amp;nbsp; $\rm W$&amp;amp;nbsp; is resolved.&lt;br /&gt;
*In the experiment &amp;quot;dice&amp;quot;&amp;amp;nbsp; $(M = 6)$&amp;amp;nbsp; and even more so in roulette&amp;amp;nbsp; $(M = 37)$&amp;amp;nbsp; the information gained is even more significant for the observer than in the &amp;quot;coin toss&amp;quot; when he learns which number was thrown or on which number the ball landed.&lt;br /&gt;
*Finally, note that the experiment&amp;amp;nbsp; &amp;quot;triple coin toss&amp;quot;&amp;amp;nbsp; with the&amp;amp;nbsp; $M = 8$&amp;amp;nbsp; possible results&amp;amp;nbsp; $\rm ZZZ$,&amp;amp;nbsp; $\rm ZZW$,&amp;amp;nbsp; $\rm ZWZ$,&amp;amp;nbsp; $\rm ZWW$,&amp;amp;nbsp; $\rm WZZ$,&amp;amp;nbsp; $\rm WZW$,&amp;amp;nbsp; $\rm WWZ$,&amp;amp;nbsp; $\rm WWW$&amp;amp;nbsp; provides three times as much information as the single coin toss&amp;amp;nbsp; $(M = 2)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following definition fulfills all the requirements listed here for a quantitative information measure for equally probable events, which is characterized solely by the symbol range&amp;amp;nbsp; $M$.&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp; The&amp;amp;nbsp; &#039;&#039;&#039;decision content&#039;&#039;&#039; &amp;amp;nbsp; of a message source depends only on the symbol range&amp;amp;nbsp; $M$&amp;amp;nbsp; and results in&lt;br /&gt;
 &lt;br /&gt;
:$$H_0 = {\rm log}\hspace{0.1cm}M = {\rm log}_2\hspace{0.1cm}M \hspace{0.15cm} {\rm (in \ &amp;quot;bit&amp;quot;)}&lt;br /&gt;
= {\rm ln}\hspace{0.1cm}M \hspace{0.15cm}\text {(in &amp;quot;nat&amp;quot;)}&lt;br /&gt;
= {\rm lg}\hspace{0.1cm}M \hspace{0.15cm}\text {(in &amp;quot;Hartley&amp;quot;)}\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*The term&amp;amp;nbsp; &#039;&#039;message content&#039;&#039; is also commonly used for this. &lt;br /&gt;
*Since&amp;amp;nbsp; $H_0$&amp;amp;nbsp; indicates the maximum value of the&amp;amp;nbsp; [[Information_Theory/Sources with Memory#Information_Content_and_Entropy|entropy]]&amp;amp;nbsp; $H$, the short notation&amp;amp;nbsp; $H_\text{max}$&amp;amp;nbsp; is also used in this tutorial. }}&lt;br /&gt;
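As a quick numerical check, the decision content can be evaluated in all three units with a short Python sketch (not part of the original course software):&lt;br /&gt;

```python
import math

# Decision content H0 = log(M) of a source with symbol range M,
# expressed in the three units "bit", "nat" and "Hartley".
def decision_content(M):
    return {
        "bit":     math.log2(M),    # binary logarithm
        "nat":     math.log(M),     # natural logarithm
        "Hartley": math.log10(M),   # decimal logarithm
    }

# Triple coin toss: M = 8 yields exactly three times the content of M = 2.
print(decision_content(8)["bit"], 3 * decision_content(2)["bit"])   # 3.0 3.0
```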
&lt;br /&gt;
&lt;br /&gt;
Please note our nomenclature:&lt;br /&gt;
*The logarithm will be called &amp;quot;log&amp;quot; in the following, independent of the base. &lt;br /&gt;
*The relations mentioned above are fulfilled due to the following properties:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm log}\hspace{0.1cm}1 = 0 \hspace{0.05cm},\hspace{0.2cm}&lt;br /&gt;
{\rm log}\hspace{0.1cm}37 &amp;gt; {\rm log}\hspace{0.1cm}6 &amp;gt; {\rm log}\hspace{0.1cm}2\hspace{0.05cm},\hspace{0.2cm}&lt;br /&gt;
{\rm log}\hspace{0.1cm}M^k = k \cdot {\rm log}\hspace{0.1cm}M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
* Usually we use the logarithm to the base&amp;amp;nbsp; $2$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;Logarithm dualis&#039;&#039;&amp;amp;nbsp; $\rm (ld)$, where the pseudo unit &amp;quot;bit&amp;quot;, more precisely:&amp;amp;nbsp; &amp;quot;bit/symbol&amp;quot;, is then added:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm ld}\hspace{0.1cm}M = {\rm log_2}\hspace{0.1cm}M = \frac{{\rm lg}\hspace{0.1cm}M}{{\rm lg}\hspace{0.1cm}2}&lt;br /&gt;
= \frac{{\rm ln}\hspace{0.1cm}M}{{\rm ln}\hspace{0.1cm}2} &lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*In addition, you can find in the literature further definitions, which are based on the natural logarithm&amp;amp;nbsp; $\rm (ln)$&amp;amp;nbsp; or the decimal logarithm&amp;amp;nbsp; $\rm (lg)$.&lt;br /&gt;
 &lt;br /&gt;
==Information content and entropy ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We now waive the previous requirement that all&amp;amp;nbsp; $M$&amp;amp;nbsp; possible results of an experiment are equally probable.&amp;amp;nbsp; In order to keep the notation as compact as possible, we define for this page only:&lt;br /&gt;
 &lt;br /&gt;
:$$p_1 &amp;gt; p_2 &amp;gt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;gt; p_\mu &amp;gt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;gt; p_{M-1} &amp;gt; p_M\hspace{0.05cm},\hspace{0.4cm}\sum_{\mu = 1}^M p_{\mu} = 1 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
We now consider the &#039;&#039;information content&#039;&#039;&amp;amp;nbsp; of the individual symbols, where we denote the &amp;quot;logarithm dualis&amp;quot; with $\log_2$:&lt;br /&gt;
 &lt;br /&gt;
:$$I_\mu = {\rm log_2}\hspace{0.1cm}\frac{1}{p_\mu}= -\hspace{0.05cm}{\rm log_2}\hspace{0.1cm}{p_\mu}&lt;br /&gt;
\hspace{0.5cm}\text{(unit: bit or bit/symbol)}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
You can see:&lt;br /&gt;
*Because of&amp;amp;nbsp; $p_μ ≤ 1$&amp;amp;nbsp; the information content is never negative.&amp;amp;nbsp; In the limiting case&amp;amp;nbsp; $p_μ \to 1$,&amp;amp;nbsp; $I_μ \to 0$. &lt;br /&gt;
*However, for&amp;amp;nbsp; $I_μ = 0$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; $p_μ = 1$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; $M = 1$&amp;amp;nbsp; the decision content is also&amp;amp;nbsp; $H_0 = 0$.&lt;br /&gt;
*For decreasing probabilities&amp;amp;nbsp; $p_μ$&amp;amp;nbsp; the information content increases continuously:&lt;br /&gt;
 &lt;br /&gt;
:$$I_1 &amp;lt; I_2 &amp;lt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;lt; I_\mu &amp;lt;\hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;lt; I_{M-1} &amp;lt; I_M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &#039;&#039;&#039;The more improbable an event is, the greater is its information content&#039;&#039;&#039;.&amp;amp;nbsp; This fact is also found in daily life:&lt;br /&gt;
*&amp;quot;6 right ones&amp;quot; in the lottery are more likely to be noticed than &amp;quot;3 right ones&amp;quot; or no win at all.&lt;br /&gt;
*A tsunami in Asia also dominates the news in Germany for weeks as opposed to the almost standard Deutsche Bahn delays.&lt;br /&gt;
*A series of defeats of Bayern Munich leads to huge headlines in contrast to a winning series.&amp;amp;nbsp; With 1860 Munich exactly the opposite is the case.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, the information content of a single symbol (or event) is of limited interest.&amp;amp;nbsp; In contrast, &lt;br /&gt;
*ensemble averaging over all possible symbols&amp;amp;nbsp; $q_μ$, &amp;amp;nbsp;or&amp;amp;nbsp; &lt;br /&gt;
*time averaging over all elements of the sequence&amp;amp;nbsp; $\langle q_ν \rangle$,&lt;br /&gt;
&lt;br /&gt;
yields one of the central quantities of information theory. &lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp; The&amp;amp;nbsp; &#039;&#039;&#039;Entropy&#039;&#039;&#039;&amp;amp;nbsp; $H$&amp;amp;nbsp; of a source indicates the &#039;&#039;mean information content of all symbols&#039;&#039;&amp;amp;nbsp;:&lt;br /&gt;
 &lt;br /&gt;
:$$H = \overline{I_\nu} = {\rm E}\hspace{0.01cm}[I_\mu] = \sum_{\mu = 1}^M p_{\mu} \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{p_\mu}=&lt;br /&gt;
 -\sum_{\mu = 1}^M p_{\mu} \cdot{\rm log_2}\hspace{0.1cm}{p_\mu} \hspace{0.5cm}\text{(unit: bit, more precisely: bit/symbol)} &lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The overline again denotes time averaging, and&amp;amp;nbsp; $\rm E[\text{...}]$&amp;amp;nbsp; denotes ensemble averaging.}}&lt;br /&gt;
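The definition translates directly into a few lines of Python; here it is applied, as an illustrative check, to the quaternary source with the probabilities&amp;amp;nbsp; $0.4$,&amp;amp;nbsp; $0.3$,&amp;amp;nbsp; $0.2$,&amp;amp;nbsp; $0.1$&amp;amp;nbsp; used throughout this chapter:&lt;br /&gt;

```python
import math

# Entropy H as the expected information content E[I_mu],
# with I_mu = log2(1/p_mu); terms with p = 0 contribute nothing.
def entropy(probs):
    assert abs(sum(probs) - 1.0) < 1e-9          # probabilities must sum to 1
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(round(entropy([0.4, 0.3, 0.2, 0.1]), 3))   # 1.846  (bit/symbol)
print(entropy([0.25, 0.25, 0.25, 0.25]))         # 2.0 -> equally probable case
```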
&lt;br /&gt;
&lt;br /&gt;
Entropy is among other things a measure for&lt;br /&gt;
*the mean uncertainty about the outcome of a statistical event,&lt;br /&gt;
*the &amp;quot;randomness&amp;quot; of this event,&amp;amp;nbsp; and&lt;br /&gt;
*the average information content of a random variable.	 &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
==Binary entropy function ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
At first we restrict ourselves to the special case&amp;amp;nbsp; $M = 2$&amp;amp;nbsp; and consider a binary source that emits the two symbols&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$. &amp;amp;nbsp; The occurrence probabilities are &amp;amp;nbsp; $p_{\rm A} = p$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 1 - p$.&lt;br /&gt;
&lt;br /&gt;
For the entropy of this binary source applies: &lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm bin} (p) = p \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{\hspace{0.1cm}p\hspace{0.1cm}} + (1-p) \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{1-p} \hspace{0.5cm}\text{(unit: bit or bit/symbol)}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The function&amp;amp;nbsp; $H_\text{bin}(p)$&amp;amp;nbsp; is called the&amp;amp;nbsp; &#039;&#039;&#039;binary entropy function&#039;&#039;&#039;.&amp;amp;nbsp; The entropy of a source with a larger symbol range&amp;amp;nbsp; $M$&amp;amp;nbsp; can often be expressed using&amp;amp;nbsp; $H_\text{bin}(p)$.&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
The figure shows the binary entropy function for the values&amp;amp;nbsp; $0 ≤ p ≤ 1$&amp;amp;nbsp; of the symbol probability of&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; $($or also of&amp;amp;nbsp; $\rm B)$.&amp;amp;nbsp; You can see&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_1_S4_vers2.png|frame|Binary entropy function as function of&amp;amp;nbsp; $p$|right]]&lt;br /&gt;
*The maximum value&amp;amp;nbsp; $H_\text{max} = 1\; \rm bit$&amp;amp;nbsp; results for&amp;amp;nbsp; $p = 0.5$, thus for equally probable binary symbols.&amp;amp;nbsp; Then &amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$&amp;amp;nbsp; contribute the same amount to entropy.&lt;br /&gt;
* $H_\text{bin}(p)$&amp;amp;nbsp; is symmetrical about&amp;amp;nbsp; $p = 0.5$.&amp;amp;nbsp; A source with&amp;amp;nbsp; $p_{\rm A} = 0.1$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 0.9$&amp;amp;nbsp; has the same entropy&amp;amp;nbsp; $H = 0.469 \; \rm bit$&amp;amp;nbsp; as a source with&amp;amp;nbsp; $p_{\rm A} = 0.9$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 0.1$.&lt;br /&gt;
*The difference&amp;amp;nbsp; $ΔH = H_\text{max} - H$ gives&amp;amp;nbsp; the&amp;amp;nbsp; &#039;&#039;redundancy&#039;&#039;&amp;amp;nbsp; of the source and&amp;amp;nbsp; $r = ΔH/H_\text{max}$&amp;amp;nbsp; the&amp;amp;nbsp; &#039;&#039;relative redundancy&#039;&#039;. &amp;amp;nbsp; In the example,&amp;amp;nbsp; $ΔH = 0.531\; \rm bit$&amp;amp;nbsp; and&amp;amp;nbsp; $r = 53.1 \rm \%$.&lt;br /&gt;
*For&amp;amp;nbsp; $p = 0$&amp;amp;nbsp; this results in&amp;amp;nbsp; $H = 0$, since the symbol sequence &amp;amp;nbsp;$\rm B \ B \ B \text{...}$&amp;amp;nbsp; can be predicted with certainty. &amp;amp;nbsp; Effectively, the symbol range is then only&amp;amp;nbsp; $M = 1$.&amp;amp;nbsp; The same applies to&amp;amp;nbsp; $p = 1$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; symbol sequence &amp;amp;nbsp;$\rm A \ A \ A \text{...}$.&lt;br /&gt;
*$H_\text{bin}(p)$&amp;amp;nbsp; is always a&amp;amp;nbsp; &#039;&#039;concave function&#039;&#039;, since its second derivative with respect to the parameter&amp;amp;nbsp; $p$&amp;amp;nbsp; is negative for all values of&amp;amp;nbsp; $p$: &lt;br /&gt;
:$$\frac{ {\rm d}^2H_{\rm bin} (p)}{ {\rm d}\,p^2} = \frac{- 1}{ {\rm ln}(2) \cdot p \cdot (1-p)}&amp;lt; 0&lt;br /&gt;
\hspace{0.05cm}.$$}}&lt;br /&gt;
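The properties listed in Example 2 (maximum at&amp;amp;nbsp; $p = 0.5$, symmetry, edge values) can be verified numerically; a minimal Python sketch:&lt;br /&gt;

```python
import math

# Binary entropy function H_bin(p) in bit/symbol; the limits
# p -> 0 and p -> 1 give H_bin = 0, since p*log2(1/p) -> 0.
def h_bin(p):
    return sum(-q * math.log2(q) for q in (p, 1 - p) if q > 0)

print(h_bin(0.5))              # 1.0  (maximum, equally probable symbols)
print(round(h_bin(0.1), 3))    # 0.469
print(round(h_bin(0.9), 3))    # 0.469 (symmetry about p = 0.5)
print(h_bin(0.0), h_bin(1.0))  # 0.0 0.0
```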
&lt;br /&gt;
==Message sources with a larger symbol range==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In the&amp;amp;nbsp; [[Information_Theory/Sources with Memory#Model_and_Prerequisites|first section]]&amp;amp;nbsp; of this chapter we considered a quaternary message source&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; with the symbol probabilities&amp;amp;nbsp; $p_{\rm A} = 0.4$, &amp;amp;nbsp; $p_{\rm B} = 0.3$, &amp;amp;nbsp; $p_{\rm C} = 0.2$ &amp;amp;nbsp; and&amp;amp;nbsp; $ p_{\rm D} = 0.1$.&amp;amp;nbsp; This source has the following entropy:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm quat} = 0.4 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.4} + 0.3 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.2}+ 0.1 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.1}.$$&lt;br /&gt;
&lt;br /&gt;
For numerical calculation, the detour via the decimal logarithm&amp;amp;nbsp; $\lg \ x = {\rm log}_{10} \ x$&amp;amp;nbsp; is often necessary, since the &#039;&#039;logarithm dualis&#039;&#039;&amp;amp;nbsp; $ {\rm log}_2 \ x$&amp;amp;nbsp; is mostly not found on pocket calculators:&lt;br /&gt;
&lt;br /&gt;
:$$H_{\rm quat}=\frac{1}{{\rm lg}\hspace{0.1cm}2} \cdot \left [ 0.4 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.4} + 0.3 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.3} + 0.2 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.2} + 0.1 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.1} \right ] = 1.846\,{\rm bit}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp;&lt;br /&gt;
Now there are certain symmetries between the symbol probabilities: &lt;br /&gt;
[[File:Inf_T_1_1_S5_vers2.png|frame|Entropy of binary source and quaternary source]]&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = p_{\rm D} = p \hspace{0.05cm},\hspace{0.4cm}p_{\rm B} = p_{\rm C} = 0.5 - p \hspace{0.05cm},\hspace{0.3cm}{\rm with} \hspace{0.15cm}0 \le p \le 0.5 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
In this case, the binary entropy function can be used to calculate the entropy:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm quat} = 2 \cdot p \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{\hspace{0.1cm}p\hspace{0.1cm} } + 2 \cdot (0.5-p) \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.5-p}$$&lt;br /&gt;
$$\Rightarrow \hspace{0.3cm} H_{\rm quat} = 1 + H_{\rm bin}(2p) \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The graphic shows as a function of&amp;amp;nbsp; $p$&lt;br /&gt;
*the entropy of the quaternary source (blue) &lt;br /&gt;
*in comparison to the entropy course of the binary source (red). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the quaternary source only abscissa values&amp;amp;nbsp; $0 ≤ p ≤ 0.5$&amp;amp;nbsp; are permissible. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
You can see from the blue curve for the quaternary source:&lt;br /&gt;
*The maximum entropy&amp;amp;nbsp; $H_\text{max} = 2 \; \rm bit/symbol$&amp;amp;nbsp; results for&amp;amp;nbsp; $p = 0.25$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; equally probable symbols: &amp;amp;nbsp; $p_{\rm A} = p_{\rm B} = p_{\rm C} = p_{\rm D} = 0.25$.&lt;br /&gt;
*With&amp;amp;nbsp; $p = 0$&amp;amp;nbsp; resp.&amp;amp;nbsp; $p = 0.5$&amp;amp;nbsp; the quaternary source degenerates to a binary source with&amp;amp;nbsp; $p_{\rm B} = p_{\rm C} = 0.5$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm A} = p_{\rm D} = 0$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; entropy&amp;amp;nbsp; $H = 1 \; \rm bit/symbol$.&lt;br /&gt;
*The source with&amp;amp;nbsp; $p_{\rm A} = p_{\rm D} = 0.1$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = p_{\rm C} = 0.4$&amp;amp;nbsp; has the following characteristics (each with the pseudo unit &amp;quot;bit/symbol&amp;quot;):&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp; &amp;amp;nbsp; &#039;&#039;&#039;(1)&#039;&#039;&#039; &amp;amp;nbsp; entropy: &amp;amp;nbsp; $H = 1 + H_{\rm bin} (2p) =1 + H_{\rm bin} (0.2) = 1.722,$&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp; &amp;amp;nbsp; &#039;&#039;&#039;(2)&#039;&#039;&#039; &amp;amp;nbsp; Redundancy: &amp;amp;nbsp; ${\rm \Delta }H = {\rm log_2}\hspace{0.1cm} M - H =2- 1.722= 0.278,$&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp; &amp;amp;nbsp; &#039;&#039;&#039;(3)&#039;&#039;&#039; &amp;amp;nbsp; relative redundancy: &amp;amp;nbsp; $r ={\rm \Delta }H/({\rm log_2}\hspace{0.1cm} M) = 0.139\hspace{0.05cm}.$&lt;br /&gt;
&lt;br /&gt;
*The redundancy of the quaternary source with&amp;amp;nbsp; $p = 0.1$&amp;amp;nbsp; is equal to&amp;amp;nbsp; $ΔH = 0.278 \; \rm bit/symbol$&amp;amp;nbsp; and thus exactly the same as the redundancy of the binary source with&amp;amp;nbsp; $p = 0.2$.}}&lt;br /&gt;
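The relation&amp;amp;nbsp; $H_{\rm quat} = 1 + H_{\rm bin}(2p)$&amp;amp;nbsp; derived in Example 3 can be checked numerically for arbitrary&amp;amp;nbsp; $p$; a short Python sketch:&lt;br /&gt;

```python
import math

def h_bin(p):
    """Binary entropy function in bit/symbol."""
    return sum(-q * math.log2(q) for q in (p, 1 - p) if q > 0)

def h_quat(p):
    """Entropy of the quaternary source with p_A = p_D = p, p_B = p_C = 0.5 - p."""
    probs = (p, 0.5 - p, 0.5 - p, p)
    return sum(-q * math.log2(q) for q in probs if q > 0)

# Both expressions agree; for p = 0.1 each gives 1.722 bit/symbol,
# matching characteristic (1) above.
for p in (0.0, 0.1, 0.25, 0.5):
    print(p, round(h_quat(p), 3), round(1 + h_bin(2 * p), 3))
```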
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Exercises for chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:1.1 Wetterentropie|Exercise 1.1: Weather Entropy]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.1Z Binäre Entropiefunktion|Exercise 1.1Z: Binary Entropy Function]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.2 Entropie von Ternärquellen|Exercise 1.2: Entropy of Ternary Sources]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Memoryless_Sources&amp;diff=35015</id>
		<title>Information Theory/Discrete Memoryless Sources</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Memoryless_Sources&amp;diff=35015"/>
		<updated>2020-10-27T22:03:07Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{FirstPage}}&lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Entropie wertdiskreter Nachrichtenquellen&lt;br /&gt;
|Vorherige Seite=&lt;br /&gt;
|Nächste Seite=Nachrichtenquellen mit Gedächtnis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== # OVERVIEW OF THE FIRST MAIN CHAPTER # ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This first chapter describes the calculation and the meaning of entropy.&amp;amp;nbsp; According to the Shannonian information definition, entropy is a measure of the mean uncertainty about the outcome of a statistical event or the uncertainty in the measurement of a stochastic quantity.&amp;amp;nbsp; Somewhat casually expressed, the entropy of a random quantity quantifies its &amp;quot;randomness&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
In detail are discussed:&lt;br /&gt;
&lt;br /&gt;
*the &#039;&#039;decision content&#039;&#039;&amp;amp;nbsp; and the &#039;&#039;entropy&#039;&#039;&amp;amp;nbsp; of a memoryless message source,&lt;br /&gt;
*the &#039;&#039;binary entropy function&#039;&#039;&amp;amp;nbsp; and its application to &#039;&#039;non-binary sources&#039;&#039;,&lt;br /&gt;
*the entropy calculation for &#039;&#039;sources with memory&#039;&#039;&amp;amp;nbsp; and suitable approximations,&lt;br /&gt;
*the peculiarities of &#039;&#039;Markov sources&#039;&#039;&amp;amp;nbsp; regarding the entropy calculation,&lt;br /&gt;
*the procedure for sources with a large number of symbols, for example &#039;&#039;natural texts&#039;&#039;,&lt;br /&gt;
*the &#039;&#039;entropy estimates&#039;&#039;&amp;amp;nbsp; according to Shannon and Küpfmüller.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Further information on the topic as well as exercises, simulations and programming exercises can be found in the experiment &amp;quot;Value Discrete Information Theory&amp;quot; of the practical course &amp;quot;Simulation Digitaler Übertragungssysteme&amp;quot; (English: Simulation of Digital Transmission Systems).&amp;amp;nbsp; This (former) LNT course at the TU Munich is based on&lt;br /&gt;
&lt;br /&gt;
*the Windows program&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Programme/WDIT.zip WDIT] &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link points to the ZIP version of the program and &lt;br /&gt;
*the associated&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Texte/Wertdiskrete_Informationstheorie.pdf lab manual]  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link refers to the PDF version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Model and requirements == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We consider a value-discrete message source&amp;amp;nbsp; $\rm Q$, which emits a sequence&amp;amp;nbsp; $ \langle q_ν \rangle$&amp;amp;nbsp; of symbols. &lt;br /&gt;
*The run variable is &amp;amp;nbsp;$ν = 1$, ... , $N$, where&amp;amp;nbsp; $N$&amp;amp;nbsp; should be &amp;quot;sufficiently large&amp;quot;. &lt;br /&gt;
*Each individual source symbol &amp;amp;nbsp;$q_ν$&amp;amp;nbsp; comes from a symbol set&amp;amp;nbsp; $\{q_μ \}$&amp;amp;nbsp; with&amp;amp;nbsp; $μ = 1$, ... , $M$, where&amp;amp;nbsp; $M$&amp;amp;nbsp; denotes the symbol range:&lt;br /&gt;
 &lt;br /&gt;
:$$q_{\nu} \in \left \{ q_{\mu}  \right \}, \hspace{0.25cm}{\rm with}\hspace{0.25cm} \nu = 1, \hspace{0.05cm} \text{ ...}\hspace{0.05cm} , N\hspace{0.25cm}{\rm and}\hspace{0.25cm}\mu = 1,\hspace{0.05cm} \text{ ...}\hspace{0.05cm} , M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The figure shows a quaternary message source&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; with the alphabet&amp;amp;nbsp; $\rm \{A, \ B, \ C, \ D\}$&amp;amp;nbsp; and an exemplary sequence of length&amp;amp;nbsp; $N = 100$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2227_Inf_T_1_1_S1a_neu.png|frame|Memoryless Quaternary Message Source]]&lt;br /&gt;
&lt;br /&gt;
The following requirements apply:&lt;br /&gt;
*The quaternary message source is fully described by the&amp;amp;nbsp; $M = 4$&amp;amp;nbsp; symbol probabilities&amp;amp;nbsp; $p_μ$.&amp;amp;nbsp; In general:&lt;br /&gt;
:$$\sum_{\mu = 1}^M \hspace{0.1cm}p_{\mu} = 1 \hspace{0.05cm}.$$&lt;br /&gt;
*The message source is memoryless, i.e., the individual sequence elements are&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Statistical Dependence and Independence#General_definition_of_statistical_dependence|statistically independent of each other]]:&lt;br /&gt;
:$${\rm Pr} \left (q_{\nu} = q_{\mu} \right ) = {\rm Pr} \left (q_{\nu} = q_{\mu} \hspace{0.03cm} | \hspace{0.03cm} q_{\nu -1}, q_{\nu -2}, \hspace{0.05cm} \text{ ...}\hspace{0.05cm}\right ) \hspace{0.05cm}.$$&lt;br /&gt;
*Since the alphabet consists of symbols&amp;amp;nbsp; (and not of random variables)&amp;amp;nbsp;, the specification of&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Expected_Values_and_Moments|expected values]]&amp;amp;nbsp; (linear mean, quadratic mean, dispersion, etc.) is not possible here, but also not necessary from an information-theoretical point of view.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These properties will now be illustrated with an example.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_1_S1b_vers2.png|right|frame|Relative frequencies as a function of&amp;amp;nbsp; $N$]]&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
For the symbol probabilities of a quaternary source applies: &lt;br /&gt;
:$$p_{\rm A} = 0.4 \hspace{0.05cm},\hspace{0.2cm}p_{\rm B} = 0.3 \hspace{0.05cm},\hspace{0.2cm}p_{\rm C} = 0.2 \hspace{0.05cm},\hspace{0.2cm} &lt;br /&gt;
p_{\rm D} = 0.1\hspace{0.05cm}.$$&lt;br /&gt;
For an infinitely long sequence&amp;amp;nbsp; $(N \to \infty)$ &lt;br /&gt;
*the&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/From_Random_Experiment_to_Random_Variable#Bernoulli&#039;s_Law_of_Large_Numbers|relative frequencies]]&amp;amp;nbsp; $h_{\rm A}$,&amp;amp;nbsp; $h_{\rm B}$,&amp;amp;nbsp; $h_{\rm C}$,&amp;amp;nbsp; $h_{\rm D}$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; a-posteriori parameters &lt;br /&gt;
*are identical to the&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Some_Basic_Definitions#Event_and_Event_set|probabilities]]&amp;amp;nbsp; $p_{\rm A}$,&amp;amp;nbsp; $p_{\rm B}$,&amp;amp;nbsp; $p_{\rm C}$,&amp;amp;nbsp; $p_{\rm D}$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; a-priori parameters. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With smaller&amp;amp;nbsp; $N$&amp;amp;nbsp; deviations may occur, as the adjacent table (result of a simulation) shows. &lt;br /&gt;
&lt;br /&gt;
*In the graphic above an exemplary sequence is shown with&amp;amp;nbsp; $N = 100$&amp;amp;nbsp; symbols. &lt;br /&gt;
*Because the set elements&amp;amp;nbsp; $\rm A$,&amp;amp;nbsp; $\rm B$,&amp;amp;nbsp; $\rm C$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm D$&amp;amp;nbsp; are abstract symbols, no mean values can be specified. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, if you replace the symbols with numerical values, for example&amp;amp;nbsp; $\rm A \Rightarrow 1$, &amp;amp;nbsp; $\rm B \Rightarrow 2$, &amp;amp;nbsp; $\rm C \Rightarrow 3$, &amp;amp;nbsp; $\rm D \Rightarrow 4$, then you obtain &amp;lt;br&amp;gt; &amp;amp;nbsp; &amp;amp;nbsp; time averaging &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; overline &amp;amp;nbsp; &amp;amp;nbsp; or &amp;amp;nbsp; &amp;amp;nbsp; ensemble averaging &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; expectation&lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Moments of a Discrete Random Variable#Linear_Average_-_Direct_Component|linear average]] :&lt;br /&gt;
:$$m_1 = \overline { q_{\nu} } = {\rm E} \big [ q_{\mu} \big ] = 0.4 \cdot 1 + 0.3 \cdot 2 + 0.2 \cdot 3 + 0.1 \cdot 4&lt;br /&gt;
= 2 \hspace{0.05cm},$$ &lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Moments of a Discrete Random Variable#Square_mean_.E2.80.93_Variance_.E2.80.93_Scattering |square mean]]:&lt;br /&gt;
:$$m_2 = \overline { q_{\nu}^{\hspace{0.05cm}2}  } = {\rm E} \big [ q_{\mu}^{\hspace{0.05cm}2} \big ] = 0.4 \cdot 1^2 + 0.3 \cdot 2^2 + 0.2 \cdot 3^2 + 0.1 \cdot 4^2&lt;br /&gt;
= 5 \hspace{0.05cm},$$&lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Expected_Values_and_Moments#Some_often_used_Central_Moments|standard deviation]] (scattering) according to the &amp;quot;Theorem of Steiner&amp;quot;:&lt;br /&gt;
:$$\sigma = \sqrt {m_2 - m_1^2} = \sqrt {5 - 2^2} = 1 \hspace{0.05cm}.$$}}	&lt;br /&gt;
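The convergence of the relative frequencies&amp;amp;nbsp; $h$&amp;amp;nbsp; toward the probabilities&amp;amp;nbsp; $p$&amp;amp;nbsp; described in Example 1 can be reproduced with a small simulation (a sketch; seed and sequence lengths are arbitrary choices):&lt;br /&gt;

```python
import random

# Memoryless quaternary source: each symbol is drawn independently
# with p_A = 0.4, p_B = 0.3, p_C = 0.2, p_D = 0.1.
symbols = ["A", "B", "C", "D"]
probs   = [0.4, 0.3, 0.2, 0.1]

random.seed(1)                      # arbitrary seed for reproducibility
for N in (100, 10_000, 1_000_000):
    seq = random.choices(symbols, weights=probs, k=N)
    h = {s: seq.count(s) / N for s in symbols}
    print(N, {s: round(v, 3) for s, v in h.items()})
```

By Bernoulli's law of large numbers, the deviations&amp;amp;nbsp; $|h - p|$&amp;amp;nbsp; shrink roughly like&amp;amp;nbsp; $1/\sqrt{N}$.&lt;br /&gt;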
&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
==Decision content - Message content==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[https://de.wikipedia.org/wiki/Claude_Shannon Claude Elwood Shannon]&amp;amp;nbsp; defined in 1948 in the standard work of information theory&amp;amp;nbsp; [Sha48]&amp;lt;ref name=&#039;Sha48&#039;&amp;gt;Shannon, C.E.: A Mathematical Theory of Communication. In: Bell Syst. Techn. J. 27 (1948), pp. 379-423 and pp. 623-656.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; the concept of information as &amp;quot;decrease of uncertainty about the occurrence of a statistical event&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
Let us make a mental experiment with&amp;amp;nbsp; $M$&amp;amp;nbsp; possible results, which are all equally probable: &amp;amp;nbsp; $p_1 = p_2 = \hspace{0.05cm} \text{ ...}\hspace{0.05cm} = p_M = 1/M \hspace{0.05cm}.$ &lt;br /&gt;
&lt;br /&gt;
Under this assumption applies:&lt;br /&gt;
*If&amp;amp;nbsp; $M = 1$, then each individual trial yields the same result and there is therefore no uncertainty about the outcome.&lt;br /&gt;
*On the other hand, an observer gains information from an experiment with&amp;amp;nbsp; $M = 2$, for example the &amp;quot;coin toss&amp;quot; with the set of events&amp;amp;nbsp; $\big \{\rm \boldsymbol{\rm Z}, \rm \boldsymbol{\rm W} \big \}$&amp;amp;nbsp; and the probabilities&amp;amp;nbsp; $p_{\rm Z} = p_{\rm W} = 0.5$; the uncertainty regarding&amp;amp;nbsp; $\rm Z$ &amp;amp;nbsp;resp.&amp;amp;nbsp; $\rm W$&amp;amp;nbsp; is resolved.&lt;br /&gt;
*In the experiment &amp;quot;dice&amp;quot;&amp;amp;nbsp; $(M = 6)$&amp;amp;nbsp; and even more so in roulette&amp;amp;nbsp; $(M = 37)$&amp;amp;nbsp; the information gained is even more significant for the observer than in the &amp;quot;coin toss&amp;quot; when he learns which number was thrown or on which number the ball landed.&lt;br /&gt;
*Finally, note that the experiment&amp;amp;nbsp; &amp;quot;triple coin toss&amp;quot;&amp;amp;nbsp; with the&amp;amp;nbsp; $M = 8$&amp;amp;nbsp; possible results&amp;amp;nbsp; $\rm ZZZ$,&amp;amp;nbsp; $\rm ZZW$,&amp;amp;nbsp; $\rm ZWZ$,&amp;amp;nbsp; $\rm ZWW$,&amp;amp;nbsp; $\rm WZZ$,&amp;amp;nbsp; $\rm WZW$,&amp;amp;nbsp; $\rm WWZ$,&amp;amp;nbsp; $\rm WWW$&amp;amp;nbsp; provides three times as much information as the single coin toss&amp;amp;nbsp; $(M = 2)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following definition fulfills all the requirements listed here for a quantitative information measure for equally probable events, which is characterized solely by the symbol range&amp;amp;nbsp; $M$.&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp; The&amp;amp;nbsp; &#039;&#039;&#039;decision content&#039;&#039;&#039; &amp;amp;nbsp; of a message source depends only on the symbol range&amp;amp;nbsp; $M$&amp;amp;nbsp; and results in&lt;br /&gt;
 &lt;br /&gt;
:$$H_0 = {\rm log}\hspace{0.1cm}M = {\rm log}_2\hspace{0.1cm}M \hspace{0.15cm} {\rm (in \ &amp;quot;bit&amp;quot;)}&lt;br /&gt;
= {\rm ln}\hspace{0.1cm}M \hspace{0.15cm}\text {(in &amp;quot;nat&amp;quot;)}&lt;br /&gt;
= {\rm lg}\hspace{0.1cm}M \hspace{0.15cm}\text {(in &amp;quot;Hartley&amp;quot;)}\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*The term&amp;amp;nbsp; &#039;&#039;message content&#039;&#039; is also commonly used for this. &lt;br /&gt;
*Since&amp;amp;nbsp; $H_0$&amp;amp;nbsp; indicates the maximum value of the&amp;amp;nbsp; [[Information_Theory/Sources with Memory#Information_Content_and_Entropy|entropy]]&amp;amp;nbsp; $H$, the short notation&amp;amp;nbsp; $H_\text{max}$&amp;amp;nbsp; is also used in this tutorial. }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Please note our nomenclature:&lt;br /&gt;
*The logarithm will be called &amp;quot;log&amp;quot; in the following, independent of the base. &lt;br /&gt;
*The relations mentioned above are fulfilled due to the following properties:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm log}\hspace{0.1cm}1 = 0 \hspace{0.05cm},\hspace{0.2cm}&lt;br /&gt;
{\rm log}\hspace{0.1cm}37 &amp;gt; {\rm log}\hspace{0.1cm}6 &amp;gt; {\rm log}\hspace{0.1cm}2\hspace{0.05cm},\hspace{0.2cm}&lt;br /&gt;
{\rm log}\hspace{0.1cm}M^k = k \cdot {\rm log}\hspace{0.1cm}M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
* Usually we use the logarithm to the base&amp;amp;nbsp; $2$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;Logarithm dualis&#039;&#039;&amp;amp;nbsp; $\rm (ld)$, where the pseudo unit &amp;quot;bit&amp;quot;, more precisely:&amp;amp;nbsp; &amp;quot;bit/symbol&amp;quot;, is then added:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm ld}\hspace{0.1cm}M = {\rm log_2}\hspace{0.1cm}M = \frac{{\rm lg}\hspace{0.1cm}M}{{\rm lg}\hspace{0.1cm}2}&lt;br /&gt;
= \frac{{\rm ln}\hspace{0.1cm}M}{{\rm ln}\hspace{0.1cm}2} &lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*In addition, some definitions found in the literature are based on the natural logarithm&amp;amp;nbsp; $\rm (ln)$&amp;amp;nbsp; or the decimal logarithm&amp;amp;nbsp; $\rm (lg)$.&lt;br /&gt;
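The three logarithm bases above differ only by a constant factor, so the base-2 value can always be obtained via the base-10 (or natural) logarithm. A minimal Python sketch (not part of the original course material) illustrating the conversions for the symbol range&amp;amp;nbsp; $M = 4$:&lt;br /&gt;

```python
import math

M = 4  # symbol range of the quaternary source used in this chapter

# decision content H0 = log M, expressed in the three pseudo-units
H0_bit = math.log2(M)       # base 2  ... in "bit"
H0_nat = math.log(M)        # base e  ... in "nat"
H0_hartley = math.log10(M)  # base 10 ... in "Hartley"

# "logarithmus dualis" via the base-10 detour:  ld M = lg M / lg 2
ld_M = math.log10(M) / math.log10(2)

print(H0_bit, H0_nat, H0_hartley, ld_M)
```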
 &lt;br /&gt;
==Information Content and Entropy ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We now drop the previous assumption that all&amp;amp;nbsp; $M$&amp;amp;nbsp; possible results of an experiment are equally probable.&amp;amp;nbsp; With a view to the most compact notation possible, we merely specify for this page:&lt;br /&gt;
 &lt;br /&gt;
:$$p_1 &amp;gt; p_2 &amp;gt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;gt; p_\mu &amp;gt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm}  &amp;gt; p_{M-1} &amp;gt; p_M\hspace{0.05cm},\hspace{0.4cm}\sum_{\mu = 1}^M p_{\mu}  = 1 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
We now consider the &#039;&#039;information content&#039;&#039;&amp;amp;nbsp; of the individual symbols, denoting the &amp;quot;logarithmus dualis&amp;quot; by $\log_2$:&lt;br /&gt;
 &lt;br /&gt;
:$$I_\mu = {\rm log_2}\hspace{0.1cm}\frac{1}{p_\mu}= -\hspace{0.05cm}{\rm log_2}\hspace{0.1cm}{p_\mu}&lt;br /&gt;
\hspace{0.5cm}\text{(unit: bit or bit/symbol)}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
One can see:&lt;br /&gt;
*Because of&amp;amp;nbsp; $p_μ ≤ 1$&amp;amp;nbsp; the information content is never negative.&amp;amp;nbsp; In the limiting case&amp;amp;nbsp; $p_μ  \to  1$,&amp;amp;nbsp; $I_μ  \to  0$. &lt;br /&gt;
*However, for&amp;amp;nbsp; $I_μ = 0$  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp;  $p_μ = 1$  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp;  $M = 1$&amp;amp;nbsp; the decision content&amp;amp;nbsp; $H_0 = 0$ as well.&lt;br /&gt;
*With decreasing probabilities&amp;amp;nbsp; $p_μ$&amp;amp;nbsp; the information content increases continuously:&lt;br /&gt;
 &lt;br /&gt;
:$$I_1 &amp;lt; I_2 &amp;lt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;lt; I_\mu &amp;lt;\hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;lt; I_{M-1} &amp;lt; I_M \hspace{0.05cm}.$$&lt;br /&gt;
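The strictly increasing information contents of the ordered probabilities can be checked numerically. A short sketch, not part of the original course material, using the probabilities of the quaternary source from this chapter as an example:&lt;br /&gt;

```python
import math

# symbol probabilities ordered as p1 ... pM (quaternary source of this chapter)
p = [0.4, 0.3, 0.2, 0.1]

# information content I_mu = -log2(p_mu) in bit/symbol
I = [-math.log2(pm) for pm in p]

# decreasing probabilities give increasing information contents
assert I == sorted(I)
print([round(i, 3) for i in I])  # [1.322, 1.737, 2.322, 3.322]
```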
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &#039;&#039;&#039;The less probable an event is, the larger its information content&#039;&#039;&#039;.&amp;amp;nbsp; This fact can also be observed in everyday life:&lt;br /&gt;
*&amp;quot;6 correct&amp;quot; in the lottery is certainly noticed more than &amp;quot;3 correct&amp;quot; or no win at all.&lt;br /&gt;
*A tsunami in Asia dominates the news in Germany for weeks, in contrast to the almost routine delays of Deutsche Bahn.&lt;br /&gt;
*A losing streak of Bayern München produces huge headlines, in contrast to a winning streak.&amp;amp;nbsp; With 1860 München it is exactly the opposite.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The information content of a single symbol (or event), however, is not very interesting.&amp;amp;nbsp; In contrast, one obtains &lt;br /&gt;
*by ensemble averaging over all possible symbols&amp;amp;nbsp; $q_μ$, &amp;amp;nbsp;or&amp;amp;nbsp; &lt;br /&gt;
*by time averaging over all elements of the sequence&amp;amp;nbsp; $\langle q_ν \rangle$,&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
one of the central quantities of information theory. &lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp;  The&amp;amp;nbsp; &#039;&#039;&#039;entropy&#039;&#039;&#039;&amp;amp;nbsp; $H$&amp;amp;nbsp; of a source indicates the &#039;&#039;mean information content of all symbols&#039;&#039;:&lt;br /&gt;
 &lt;br /&gt;
:$$H = \overline{I_\nu} = {\rm E}\hspace{0.01cm}[I_\mu] = \sum_{\mu = 1}^M p_{\mu} \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{p_\mu}=&lt;br /&gt;
 -\sum_{\mu = 1}^M p_{\mu} \cdot{\rm log_2}\hspace{0.1cm}{p_\mu} \hspace{0.5cm}\text{(unit:   bit, more precisely:   bit/symbol)} &lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The overline again denotes a time average and&amp;amp;nbsp; $\rm E[\text{...}]$&amp;amp;nbsp; an ensemble average.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Among other things, the entropy is a measure of&lt;br /&gt;
*the mean uncertainty about the outcome of a statistical event,&lt;br /&gt;
*the &amp;quot;randomness&amp;quot; of this event,&amp;amp;nbsp; and&lt;br /&gt;
*the mean information content of a random variable.	 &lt;br /&gt;
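The entropy definition above translates directly into a few lines of code. A minimal sketch, not part of the original course material, computing $H$ as the expected value of the information content for the quaternary probabilities $0.4$, $0.3$, $0.2$, $0.1$ used in this chapter:&lt;br /&gt;

```python
import math

def entropy(probs):
    """Mean information content H in bit/symbol; terms with p = 0 contribute 0."""
    return sum(pm * math.log2(1.0 / pm) for pm in probs if pm)

H = entropy([0.4, 0.3, 0.2, 0.1])
print(round(H, 3))  # 1.846
```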
&lt;br /&gt;
==Binary Entropy Function ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We first restrict ourselves to the special case&amp;amp;nbsp; $M = 2$&amp;amp;nbsp; and consider a binary source that emits the two symbols&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$.&amp;amp;nbsp; Let the occurrence probabilities be &amp;amp;nbsp; $p_{\rm A} = p$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 1 - p$.&lt;br /&gt;
&lt;br /&gt;
For the entropy of this binary source:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm bin} (p) =  p \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{\hspace{0.1cm}p\hspace{0.1cm}} + (1-p) \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{1-p} \hspace{0.5cm}\text{(unit: bit or bit/symbol)}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The function&amp;amp;nbsp; $H_\text{bin}(p)$&amp;amp;nbsp; is called the&amp;amp;nbsp; &#039;&#039;&#039;binary entropy function&#039;&#039;&#039;.&amp;amp;nbsp; The entropy of a source with larger symbol range&amp;amp;nbsp; $M$&amp;amp;nbsp; can often be expressed using&amp;amp;nbsp; $H_\text{bin}(p)$.&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
The graphic shows the binary entropy function for the values&amp;amp;nbsp; $0 ≤ p ≤ 1$&amp;amp;nbsp; of the symbol probability of&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; $($or of&amp;amp;nbsp; $\rm B)$.&amp;amp;nbsp; One can see:&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_1_S4_vers2.png|frame|Binary entropy function as a function of&amp;amp;nbsp; $p$|right]]&lt;br /&gt;
*The maximum value&amp;amp;nbsp; $H_\text{max} = 1\; \rm  bit$&amp;amp;nbsp; results for&amp;amp;nbsp; $p = 0.5$, i.e. for equally probable binary symbols.&amp;amp;nbsp; Then&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$&amp;amp;nbsp; each contribute the same amount to the entropy.&lt;br /&gt;
* $H_\text{bin}(p)$&amp;amp;nbsp; is symmetric about&amp;amp;nbsp; $p = 0.5$.&amp;amp;nbsp; A source with&amp;amp;nbsp; $p_{\rm A} = 0.1$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 0.9$&amp;amp;nbsp; has the same entropy&amp;amp;nbsp;  $H = 0.469 \; \rm   bit$&amp;amp;nbsp; as a source with&amp;amp;nbsp; $p_{\rm A} = 0.9$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 0.1$.&lt;br /&gt;
*The difference&amp;amp;nbsp; $ΔH = H_\text{max} - H$&amp;amp;nbsp; indicates the&amp;amp;nbsp; &#039;&#039;redundancy&#039;&#039;&amp;amp;nbsp; of the source and&amp;amp;nbsp; $r = ΔH/H_\text{max}$&amp;amp;nbsp; the&amp;amp;nbsp; &#039;&#039;relative redundancy&#039;&#039;.&amp;amp;nbsp; In the example, &amp;amp;nbsp; $ΔH = 0.531\; \rm  bit$&amp;amp;nbsp; and&amp;amp;nbsp; $r = 53.1 \rm \%$.&lt;br /&gt;
*For&amp;amp;nbsp; $p = 0$,&amp;amp;nbsp; $H = 0$, since here the symbol sequence &amp;amp;nbsp;$\rm B \ B \ B \text{...}$&amp;amp;nbsp; can be predicted with certainty.&amp;amp;nbsp; The symbol range is then actually only&amp;amp;nbsp; $M = 1$.&amp;amp;nbsp; The same holds for&amp;amp;nbsp; $p = 1$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; symbol sequence &amp;amp;nbsp;$\rm A \ A \ A \text{...}$.&lt;br /&gt;
*$H_\text{bin}(p)$&amp;amp;nbsp; is always a&amp;amp;nbsp; &#039;&#039;concave function&#039;&#039;, since its second derivative with respect to the parameter&amp;amp;nbsp; $p$&amp;amp;nbsp; is negative for all values of&amp;amp;nbsp; $p$: &lt;br /&gt;
:$$\frac{ {\rm d}^2H_{\rm bin} (p)}{ {\rm d}\,p^2} =  \frac{- 1}{ {\rm ln}(2) \cdot p \cdot (1-p)}&amp;lt; 0&lt;br /&gt;
\hspace{0.05cm}.$$}}&lt;br /&gt;
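The properties listed in the example above (maximum of one bit at $p = 0.5$, symmetry, concavity) can be verified numerically. A small sketch, not part of the original course material:&lt;br /&gt;

```python
import math

def H_bin(p):
    """Binary entropy function in bit/symbol, with H_bin(0) = H_bin(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1.0 / p) + (1.0 - p) * math.log2(1.0 / (1.0 - p))

# maximum 1 bit at p = 0.5, symmetry about p = 0.5
assert math.isclose(H_bin(0.5), 1.0)
assert math.isclose(H_bin(0.1), H_bin(0.9))  # both approx. 0.469 bit

# the second derivative -1/(ln(2)*p*(1-p)) is negative on (0, 1), hence concave
d2 = lambda p: -1.0 / (math.log(2.0) * p * (1.0 - p))
assert math.copysign(1.0, d2(0.25)) == -1.0
```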
&lt;br /&gt;
==Message Sources with Larger Symbol Range==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In the&amp;amp;nbsp; [[Information_Theory/Gedächtnislose_Nachrichtenquellen#Modell_und_Voraussetzungen|first section]]&amp;amp;nbsp; of this chapter we considered a quaternary message source&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; with the symbol probabilities&amp;amp;nbsp; $p_{\rm A} = 0.4$, &amp;amp;nbsp; $p_{\rm B} = 0.3$, &amp;amp;nbsp; $p_{\rm C} = 0.2$ &amp;amp;nbsp; and&amp;amp;nbsp;  $ p_{\rm D} = 0.1$.&amp;amp;nbsp; This source has the following entropy:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm quat} = 0.4 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.4} + 0.3 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.2}+ 0.1 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.1}.$$&lt;br /&gt;
&lt;br /&gt;
For the numerical calculation, the detour via the decimal logarithm&amp;amp;nbsp; $\lg \ x = {\rm log}_{10} \ x$&amp;amp;nbsp; is often useful, since the &#039;&#039;logarithmus dualis&#039;&#039;&amp;amp;nbsp; $ {\rm log}_2 \ x$&amp;amp;nbsp; is usually not found on pocket calculators.&lt;br /&gt;
&lt;br /&gt;
:$$H_{\rm quat}=\frac{1}{{\rm lg}\hspace{0.1cm}2} \cdot \left [ 0.4 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.4} + 0.3 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.3} + 0.2 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.2}+ 0.1 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.1} \right ] = 1.846\,{\rm bit}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
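The pocket-calculator detour via the decimal logarithm can be reproduced directly. A minimal sketch, not part of the original course material, assuming the same four probabilities:&lt;br /&gt;

```python
import math

# probabilities of the quaternary source considered in the first section
p = [0.4, 0.3, 0.2, 0.1]

# detour via the decimal logarithm:  H = (1 / lg 2) * sum of p * lg(1/p)
H = sum(pm * math.log10(1.0 / pm) for pm in p) / math.log10(2.0)
print(round(H, 3))  # 1.846
```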
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp;&lt;br /&gt;
Now there are certain symmetries between the individual symbol probabilities: &lt;br /&gt;
[[File:Inf_T_1_1_S5_vers2.png|frame|Entropy of binary source and quaternary source]]&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = p_{\rm D} = p \hspace{0.05cm},\hspace{0.4cm}p_{\rm B} = p_{\rm C} = 0.5 - p \hspace{0.05cm},\hspace{0.3cm}\text{with} \hspace{0.15cm}0 \le p \le 0.5 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
In this case the binary entropy function can be used for the entropy calculation:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm quat} =  2 \cdot p \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{\hspace{0.1cm}p\hspace{0.1cm} } + 2 \cdot (0.5-p) \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.5-p}$$&lt;br /&gt;
:$$\Rightarrow \hspace{0.3cm} H_{\rm quat} =   1 + H_{\rm bin}(2p) \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The graphic shows, as a function of&amp;amp;nbsp; $p$,&lt;br /&gt;
*the entropy curve of the quaternary source (blue) &lt;br /&gt;
*compared with the entropy curve of the binary source (red). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the quaternary source, only the abscissa range&amp;amp;nbsp;  $0 ≤ p ≤ 0.5$&amp;amp;nbsp; is permissible. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
From the blue curve for the quaternary source one can see:&lt;br /&gt;
*The maximum entropy&amp;amp;nbsp; $H_\text{max} = 2 \; \rm bit/symbol$&amp;amp;nbsp; results for&amp;amp;nbsp; $p = 0.25$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; equally probable symbols: &amp;amp;nbsp; $p_{\rm A} = p_{\rm B} = p_{\rm C} = p_{\rm D} = 0.25$.&lt;br /&gt;
*With&amp;amp;nbsp; $p = 0$&amp;amp;nbsp; or&amp;amp;nbsp; $p = 0.5$&amp;amp;nbsp; the quaternary source degenerates into a binary source with&amp;amp;nbsp; $p_{\rm B} = p_{\rm C} = 0.5$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm A} = p_{\rm D} = 0$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; entropy&amp;amp;nbsp; $H = 1 \; \rm bit/symbol$.&lt;br /&gt;
*The source with&amp;amp;nbsp; $p_{\rm A} = p_{\rm D} = 0.1$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = p_{\rm C} = 0.4$&amp;amp;nbsp; has the following characteristics (each with the pseudo-unit &amp;quot;bit/symbol&amp;quot;):&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp;   &amp;amp;nbsp; &#039;&#039;&#039;(1)&#039;&#039;&#039; &amp;amp;nbsp; entropy: &amp;amp;nbsp; $H = 1 + H_{\rm bin} (2p) =1 + H_{\rm bin} (0.2) = 1.722,$&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp;   &amp;amp;nbsp; &#039;&#039;&#039;(2)&#039;&#039;&#039; &amp;amp;nbsp; redundancy: &amp;amp;nbsp; ${\rm \Delta }H = {\rm log_2}\hspace{0.1cm} M - H =2- 1.722= 0.278,$&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp;   &amp;amp;nbsp; &#039;&#039;&#039;(3)&#039;&#039;&#039; &amp;amp;nbsp; relative redundancy: &amp;amp;nbsp; $r ={\rm \Delta }H/({\rm log_2}\hspace{0.1cm} M) = 0.139\hspace{0.05cm}.$&lt;br /&gt;
&lt;br /&gt;
*The redundancy of the quaternary source with&amp;amp;nbsp; $p = 0.1$&amp;amp;nbsp; equals&amp;amp;nbsp; $ΔH = 0.278 \; \rm bit/symbol$&amp;amp;nbsp; and is thus exactly as large as the redundancy of the binary source with&amp;amp;nbsp; $p = 0.2$.}}&lt;br /&gt;
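The relation $H_{\rm quat} = 1 + H_{\rm bin}(2p)$ from the example above, and the characteristic values for $p = 0.1$, can be checked numerically. A sketch, not part of the original course material:&lt;br /&gt;

```python
import math

def H_bin(p):
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1.0 / p) + (1.0 - p) * math.log2(1.0 / (1.0 - p))

def H_quat(p):
    """Entropy of the symmetric quaternary source p_A = p_D = p, p_B = p_C = 0.5 - p."""
    return sum(pm * math.log2(1.0 / pm) for pm in (p, p, 0.5 - p, 0.5 - p) if pm)

# H_quat = 1 + H_bin(2p) holds on the whole admissible range
for p in (0.05, 0.1, 0.2, 0.25):
    assert math.isclose(H_quat(p), 1.0 + H_bin(2.0 * p))

# p = 0.1: entropy 1.722, redundancy 0.278, relative redundancy 0.139
H = H_quat(0.1)
print(round(H, 3), round(2.0 - H, 3), round((2.0 - H) / 2.0, 3))  # 1.722 0.278 0.139
```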
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Exercises for the Chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:1.1 Wetterentropie|Exercise 1.1: Weather Entropy]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.1Z Binäre Entropiefunktion|Exercise 1.1Z: Binary Entropy Function]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.2 Entropie von Ternärquellen|Exercise 1.2: Entropy of Ternary Sources]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Memoryless_Sources&amp;diff=35014</id>
		<title>Information Theory/Discrete Memoryless Sources</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Memoryless_Sources&amp;diff=35014"/>
		<updated>2020-10-27T22:02:18Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{FirstPage}}&lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Entropie wertdiskreter Nachrichtenquellen&lt;br /&gt;
|Vorherige Seite=&lt;br /&gt;
|Nächste Seite=Nachrichtenquellen mit Gedächtnis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== # OVERVIEW OF THE FIRST MAIN CHAPTER # ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This first chapter describes the calculation and the meaning of entropy.&amp;amp;nbsp; According to the Shannonian information definition, entropy is a measure of the mean uncertainty about the outcome of a statistical event or the uncertainty in the measurement of a stochastic quantity.&amp;amp;nbsp; Somewhat casually expressed, the entropy of a random quantity quantifies its &amp;quot;randomness&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
In detail are discussed:&lt;br /&gt;
&lt;br /&gt;
*the &#039;&#039;decision content&#039;&#039;&amp;amp;nbsp; and the &#039;&#039;entropy&#039;&#039;&amp;amp;nbsp; of a memoryless message source,&lt;br /&gt;
*the &#039;&#039;binary entropy function&#039;&#039;&amp;amp;nbsp; and its application to &#039;&#039;non-binary sources&#039;&#039;,&lt;br /&gt;
*the entropy calculation for &#039;&#039;sources with memory&#039;&#039;&amp;amp;nbsp; and suitable approximations,&lt;br /&gt;
*the peculiarities of &#039;&#039;Markov sources&#039;&#039;&amp;amp;nbsp; with regard to the entropy calculation,&lt;br /&gt;
*the procedure for sources with a large number of symbols, for example &#039;&#039;natural texts&#039;&#039;,&lt;br /&gt;
*the &#039;&#039;entropy estimates&#039;&#039;&amp;amp;nbsp; according to Shannon and Küpfmüller.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Further information on the topic, as well as exercises, simulations and programming exercises, can be found in the experiment &amp;quot;Discrete-Value Information Theory&amp;quot; of the practical course &amp;quot;Simulation Digitaler Übertragungssysteme&amp;quot; (in English: Simulation of Digital Transmission Systems).&amp;amp;nbsp; This (former) LNT course at the TU Munich is based on&lt;br /&gt;
&lt;br /&gt;
*the Windows program&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Programme/WDIT.zip WDIT] &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link points to the ZIP version of the program and &lt;br /&gt;
*the associated&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Texte/Wertdiskrete_Informationstheorie.pdf lab manual]  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link refers to the PDF version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Model and requirements == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We consider a discrete-value message source&amp;amp;nbsp; $\rm Q$, which emits a sequence&amp;amp;nbsp; $ \langle q_ν \rangle$&amp;amp;nbsp; of symbols. &lt;br /&gt;
*The run variable is &amp;amp;nbsp;$ν = 1$, ... , $N$, where&amp;amp;nbsp; $N$&amp;amp;nbsp; should be &amp;quot;sufficiently large&amp;quot;. &lt;br /&gt;
*Each individual source symbol &amp;amp;nbsp;$q_ν$&amp;amp;nbsp; comes from a symbol set&amp;amp;nbsp; $\{q_μ \}$&amp;amp;nbsp; with&amp;amp;nbsp; $μ = 1$, ... , $M$, where&amp;amp;nbsp; $M$&amp;amp;nbsp; denotes the symbol range:&lt;br /&gt;
 &lt;br /&gt;
:$$q_{\nu} \in \left \{ q_{\mu}  \right \}, \hspace{0.25cm}{\rm with}\hspace{0.25cm} \nu = 1, \hspace{0.05cm} \text{ ...}\hspace{0.05cm} , N\hspace{0.25cm}{\rm and}\hspace{0.25cm}\mu = 1,\hspace{0.05cm} \text{ ...}\hspace{0.05cm} , M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The figure shows a quaternary message source&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; with the alphabet&amp;amp;nbsp; $\rm \{A, \ B, \ C, \ D\}$&amp;amp;nbsp; and an exemplary sequence of length&amp;amp;nbsp; $N = 100$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2227_Inf_T_1_1_S1a_new.png|frame|Memoryless Quaternary Message Source]]&lt;br /&gt;
&lt;br /&gt;
The following requirements apply:&lt;br /&gt;
*The quaternary message source is fully described by the&amp;amp;nbsp; $M = 4$&amp;amp;nbsp; symbol probabilities&amp;amp;nbsp; $p_μ$.&amp;amp;nbsp; In general:&lt;br /&gt;
:$$\sum_{\mu = 1}^M \hspace{0.1cm}p_{\mu} = 1 \hspace{0.05cm}.$$&lt;br /&gt;
*The message source is memoryless, i.e., the individual sequence elements are&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Statistical Dependence and Independence#General_definition_of_statistical_dependence|statistically independent of each other]]:&lt;br /&gt;
:$${\rm Pr} \left (q_{\nu} = q_{\mu} \right ) = {\rm Pr} \left (q_{\nu} = q_{\mu} \hspace{0.03cm} | \hspace{0.03cm} q_{\nu -1}, q_{\nu -2}, \hspace{0.05cm} \text{ ...}\hspace{0.05cm}\right ) \hspace{0.05cm}.$$&lt;br /&gt;
*Since the alphabet consists of symbols&amp;amp;nbsp; (and not of random variables), the specification of&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Expected_Values_and_Moments|expected values]]&amp;amp;nbsp; (linear mean, second moment, standard deviation, etc.) is not possible here, nor is it necessary from an information-theoretical point of view.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These properties will now be illustrated with an example.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_1_S1b_vers2.png|right|frame|Relative frequencies as a function of&amp;amp;nbsp; $N$]]&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
For the symbol probabilities of a quaternary source applies: &lt;br /&gt;
:$$p_{\rm A} = 0.4 \hspace{0.05cm},\hspace{0.2cm}p_{\rm B} = 0.3 \hspace{0.05cm},\hspace{0.2cm}p_{\rm C} = 0.2 \hspace{0.05cm},\hspace{0.2cm} &lt;br /&gt;
p_{\rm D} = 0.1\hspace{0.05cm}.$$&lt;br /&gt;
For an infinitely long sequence&amp;amp;nbsp; $(N \to \infty)$ &lt;br /&gt;
*the&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/From_Random_Experiment_to_Random_Variable#Bernoulli&#039;s_Law_of_Large_Numbers|relative frequencies]]&amp;amp;nbsp; $h_{\rm A}$,&amp;amp;nbsp; $h_{\rm B}$,&amp;amp;nbsp; $h_{\rm C}$,&amp;amp;nbsp; $h_{\rm D}$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; a-posteriori parameters &lt;br /&gt;
*would be identical to the&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Some_Basic_Definitions#Event_and_Event_set|probabilities]]&amp;amp;nbsp; $p_{\rm A}$,&amp;amp;nbsp; $p_{\rm B}$,&amp;amp;nbsp; $p_{\rm C}$,&amp;amp;nbsp; $p_{\rm D}$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; a-priori parameters. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For smaller&amp;amp;nbsp; $N$,&amp;amp;nbsp; deviations may occur, as the adjacent table (the result of a simulation) shows. &lt;br /&gt;
&lt;br /&gt;
*The graphic above shows an exemplary sequence with&amp;amp;nbsp; $N = 100$&amp;amp;nbsp; symbols. &lt;br /&gt;
*Since the set elements&amp;amp;nbsp; $\rm A$,&amp;amp;nbsp; $\rm B$,&amp;amp;nbsp; $\rm C$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm D$&amp;amp;nbsp; are symbols, no mean values can be specified. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, if the symbols are replaced by numerical values, for example&amp;amp;nbsp; $\rm A \Rightarrow 1$, &amp;amp;nbsp; $\rm B \Rightarrow 2$, &amp;amp;nbsp; $\rm C \Rightarrow 3$, &amp;amp;nbsp; $\rm D \Rightarrow 4$, then one obtains &amp;lt;br&amp;gt; &amp;amp;nbsp; &amp;amp;nbsp; by time averaging &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; overline &amp;amp;nbsp; &amp;amp;nbsp; or &amp;amp;nbsp; &amp;amp;nbsp; by ensemble averaging &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; expected value&lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Moments of a Discrete Random Variable#Linear_Average_-_Direct_Component|linear average]] :&lt;br /&gt;
:$$m_1 = \overline { q_{\nu} } = {\rm E} \big [ q_{\mu} \big ] = 0.4 \cdot 1 + 0.3 \cdot 2 + 0.2 \cdot 3 + 0.1 \cdot 4&lt;br /&gt;
= 2 \hspace{0.05cm},$$ &lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Moments of a Discrete Random Variable#Square_mean_.E2.80.93_Variance_.E2.80.93_Scattering |second moment]]:&lt;br /&gt;
:$$m_2 = \overline { q_{\nu}^{\hspace{0.05cm}2}  } = {\rm E} \big [ q_{\mu}^{\hspace{0.05cm}2} \big ] = 0.4 \cdot 1^2 + 0.3 \cdot 2^2 + 0.2 \cdot 3^2 + 0.1 \cdot 4^2&lt;br /&gt;
= 5 \hspace{0.05cm},$$&lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Expected_Values_and_Moments#Some_often_used_Central_Moments|standard deviation]] according to &amp;quot;Steiner&#039;s theorem&amp;quot;:&lt;br /&gt;
:$$\sigma = \sqrt {m_2 - m_1^2} = \sqrt {5 - 2^2} = 1 \hspace{0.05cm}.$$}}	&lt;br /&gt;
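The moment calculations of the example above can be reproduced with the numeric mapping of the symbols. A minimal sketch, not part of the original course material:&lt;br /&gt;

```python
import math

# mapping A to 1, B to 2, C to 3, D to 4 turns the symbols into numbers
prob = {1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1}

m1 = sum(p * q for q, p in prob.items())       # linear mean
m2 = sum(p * q ** 2 for q, p in prob.items())  # second moment
sigma = math.sqrt(m2 - m1 ** 2)                # Steiner's theorem

print(round(m1, 3), round(m2, 3), round(sigma, 3))  # 2.0 5.0 1.0
```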
&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
==Decision content - Message content==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[https://de.wikipedia.org/wiki/Claude_Shannon Claude Elwood Shannon]&amp;amp;nbsp; defined in 1948 in the standard work of information theory&amp;amp;nbsp; [Sha48]&amp;lt;ref name=&#039;Sha48&#039;&amp;gt;Shannon, C.E.: A Mathematical Theory of Communication. In: Bell Syst. Techn. J. 27 (1948), pp. 379-423 and pp. 623-656.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; the concept of information as &amp;quot;decrease of uncertainty about the occurrence of a statistical event&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
Let us conduct a thought experiment with&amp;amp;nbsp; $M$&amp;amp;nbsp; possible results, all equally probable: &amp;amp;nbsp; $p_1 = p_2 = \hspace{0.05cm} \text{ ...}\hspace{0.05cm} = p_M = 1/M \hspace{0.05cm}.$ &lt;br /&gt;
&lt;br /&gt;
Under this assumption:&lt;br /&gt;
*If&amp;amp;nbsp; $M = 1$, each individual trial yields the same result, and therefore there is no uncertainty about the outcome.&lt;br /&gt;
*On the other hand, in an experiment with&amp;amp;nbsp; $M = 2$, for example the &amp;quot;coin toss&amp;quot; with the event set&amp;amp;nbsp; $\big \{\rm \boldsymbol{\rm Z}, \rm \boldsymbol{\rm W} \big \}$&amp;amp;nbsp; and the probabilities&amp;amp;nbsp; $p_{\rm Z} = p_{\rm W} = 0.5$, an observer gains information; the uncertainty regarding&amp;amp;nbsp; $\rm Z$ &amp;amp;nbsp;or&amp;amp;nbsp; $\rm W$&amp;amp;nbsp; is resolved.&lt;br /&gt;
*In the experiment &amp;quot;dice&amp;quot;&amp;amp;nbsp; $(M = 6)$,&amp;amp;nbsp; and even more so in roulette&amp;amp;nbsp; $(M = 37)$,&amp;amp;nbsp; the information gained is even more significant for the observer than in the &amp;quot;coin toss&amp;quot;, once he learns which number was rolled or which ball fell.&lt;br /&gt;
*Finally, consider that the experiment&amp;amp;nbsp; &amp;quot;triple coin toss&amp;quot;&amp;amp;nbsp; with the&amp;amp;nbsp; $M = 8$&amp;amp;nbsp; possible results&amp;amp;nbsp; $\rm ZZZ$,&amp;amp;nbsp; $\rm ZZW$,&amp;amp;nbsp; $\rm ZWZ$,&amp;amp;nbsp; $\rm ZWW$,&amp;amp;nbsp; $\rm WZZ$,&amp;amp;nbsp; $\rm WZW$,&amp;amp;nbsp; $\rm WWZ$,&amp;amp;nbsp; $\rm WWW$&amp;amp;nbsp; provides three times as much information as the single coin toss&amp;amp;nbsp; $(M = 2)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following definition fulfills all the requirements listed here for a quantitative information measure for equally probable events; it depends only on the symbol range&amp;amp;nbsp; $M$.&lt;br /&gt;
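The requirements of the thought experiment (no uncertainty for a single possible result, three times the information for the triple coin toss) are exactly what the logarithm provides. A minimal check, not part of the original course material:&lt;br /&gt;

```python
import math

# M = 1: a single possible result means zero uncertainty
assert math.log2(1) == 0.0

# triple coin toss: M = 2**3 = 8 possible results carry three times
# the information of a single coin toss (M = 2)
assert math.log2(8) == 3 * math.log2(2)
print(math.log2(8))  # 3.0
```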
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp; The&amp;amp;nbsp; &#039;&#039;&#039;decision content&#039;&#039;&#039; &amp;amp;nbsp; of a message source depends only on the symbol range&amp;amp;nbsp; $M$&amp;amp;nbsp; and results in&lt;br /&gt;
 &lt;br /&gt;
:$$H_0 = {\rm log}\hspace{0.1cm}M = {\rm log}_2\hspace{0.1cm}M \hspace{0.15cm} \text{(in &amp;quot;bit&amp;quot;)}&lt;br /&gt;
= {\rm ln}\hspace{0.1cm}M \hspace{0.15cm}\text {(in &amp;quot;nat&amp;quot;)}&lt;br /&gt;
= {\rm lg}\hspace{0.1cm}M \hspace{0.15cm}\text {(in &amp;quot;Hartley&amp;quot;)}\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*The term&amp;amp;nbsp; &#039;&#039;message content&#039;&#039; is also commonly used for this. &lt;br /&gt;
*Since&amp;amp;nbsp; $H_0$&amp;amp;nbsp; indicates the maximum value of the&amp;amp;nbsp; [[Information_Theory/Sources with Memory#Information_Content_and_Entropy|entropy]]&amp;amp;nbsp; $H$,&amp;amp;nbsp; the short notation&amp;amp;nbsp; $H_\text{max}$&amp;amp;nbsp; is also used in this tutorial. }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Please note our nomenclature:&lt;br /&gt;
*The logarithm will be called &amp;quot;log&amp;quot; in the following, independent of the base. &lt;br /&gt;
*The relations mentioned above are fulfilled due to the following properties:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm log}\hspace{0.1cm}1 = 0 \hspace{0.05cm},\hspace{0.2cm}&lt;br /&gt;
{\rm log}\hspace{0.1cm}37 &amp;gt; {\rm log}\hspace{0.1cm}6 &amp;gt; {\rm log}\hspace{0.1cm}2\hspace{0.05cm},\hspace{0.2cm}&lt;br /&gt;
{\rm log}\hspace{0.1cm}M^k = k \cdot {\rm log}\hspace{0.1cm}M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
* Usually we use the logarithm to base&amp;amp;nbsp; $2$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;logarithmus dualis&#039;&#039;&amp;amp;nbsp; $\rm (ld)$; the pseudo-unit &amp;quot;bit&amp;quot;, more precisely: &amp;amp;nbsp; &amp;quot;bit/symbol&amp;quot;, is then added:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm ld}\hspace{0.1cm}M = {\rm log_2}\hspace{0.1cm}M = \frac{{\rm lg}\hspace{0.1cm}M}{{\rm lg}\hspace{0.1cm}2}&lt;br /&gt;
= \frac{{\rm ln}\hspace{0.1cm}M}{{\rm ln}\hspace{0.1cm}2} &lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*In addition, some definitions found in the literature are based on the natural logarithm&amp;amp;nbsp; $\rm (ln)$&amp;amp;nbsp; or the decimal logarithm&amp;amp;nbsp; $\rm (lg)$.&lt;br /&gt;
 &lt;br /&gt;
==Information Content and Entropy ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We now drop the previous assumption that all&amp;amp;nbsp; $M$&amp;amp;nbsp; possible results of an experiment are equally probable.&amp;amp;nbsp; With a view to the most compact notation possible, we merely specify for this page:&lt;br /&gt;
 &lt;br /&gt;
:$$p_1 &amp;gt; p_2 &amp;gt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;gt; p_\mu &amp;gt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm}  &amp;gt; p_{M-1} &amp;gt; p_M\hspace{0.05cm},\hspace{0.4cm}\sum_{\mu = 1}^M p_{\mu}  = 1 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
We now consider the &#039;&#039;information content&#039;&#039;&amp;amp;nbsp; of the individual symbols, denoting the &amp;quot;logarithmus dualis&amp;quot; by $\log_2$:&lt;br /&gt;
 &lt;br /&gt;
:$$I_\mu = {\rm log_2}\hspace{0.1cm}\frac{1}{p_\mu}= -\hspace{0.05cm}{\rm log_2}\hspace{0.1cm}{p_\mu}&lt;br /&gt;
\hspace{0.5cm}\text{(unit: bit or bit/symbol)}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
One can see:&lt;br /&gt;
*Because of&amp;amp;nbsp; $p_μ ≤ 1$&amp;amp;nbsp; the information content is never negative.&amp;amp;nbsp; In the limiting case&amp;amp;nbsp; $p_μ  \to  1$,&amp;amp;nbsp; $I_μ  \to  0$. &lt;br /&gt;
*However, for&amp;amp;nbsp; $I_μ = 0$  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp;  $p_μ = 1$  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp;  $M = 1$&amp;amp;nbsp; the decision content&amp;amp;nbsp; $H_0 = 0$ as well.&lt;br /&gt;
*With decreasing probabilities&amp;amp;nbsp; $p_μ$&amp;amp;nbsp; the information content increases continuously:&lt;br /&gt;
 &lt;br /&gt;
:$$I_1 &amp;lt; I_2 &amp;lt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;lt; I_\mu &amp;lt;\hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;lt; I_{M-1} &amp;lt; I_M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &#039;&#039;&#039;The less probable an event is, the larger its information content&#039;&#039;&#039;.&amp;amp;nbsp; This fact can also be observed in everyday life:&lt;br /&gt;
*&amp;quot;6 correct&amp;quot; in the lottery is certainly noticed more than &amp;quot;3 correct&amp;quot; or no win at all.&lt;br /&gt;
*A tsunami in Asia dominates the news in Germany for weeks, in contrast to the almost routine delays of Deutsche Bahn.&lt;br /&gt;
*A losing streak of Bayern München produces huge headlines, in contrast to a winning streak.&amp;amp;nbsp; With 1860 München it is exactly the opposite.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The information content of a single symbol (or event), however, is not very interesting.&amp;amp;nbsp; In contrast, one obtains &lt;br /&gt;
*by ensemble averaging over all possible symbols&amp;amp;nbsp; $q_μ$, &amp;amp;nbsp;or&amp;amp;nbsp; &lt;br /&gt;
*by time averaging over all elements of the sequence&amp;amp;nbsp; $\langle q_ν \rangle$,&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
one of the central quantities of information theory. &lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp;  The&amp;amp;nbsp; &#039;&#039;&#039;entropy&#039;&#039;&#039;&amp;amp;nbsp; $H$&amp;amp;nbsp; of a source indicates the &#039;&#039;mean information content of all symbols&#039;&#039;:&lt;br /&gt;
 &lt;br /&gt;
:$$H = \overline{I_\nu} = {\rm E}\hspace{0.01cm}[I_\mu] = \sum_{\mu = 1}^M p_{\mu} \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{p_\mu}=&lt;br /&gt;
 -\sum_{\mu = 1}^M p_{\mu} \cdot{\rm log_2}\hspace{0.1cm}{p_\mu} \hspace{0.5cm}\text{(unit:   bit, more precisely:   bit/symbol)} &lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The overline again denotes a time average and&amp;amp;nbsp; $\rm E[\text{...}]$&amp;amp;nbsp; an ensemble average.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Among other things, the entropy is a measure of&lt;br /&gt;
*the mean uncertainty about the outcome of a statistical event,&lt;br /&gt;
*the &amp;quot;randomness&amp;quot; of this event,&amp;amp;nbsp; and&lt;br /&gt;
*the mean information content of a random variable.	 &lt;br /&gt;
&lt;br /&gt;
==Binary Entropy Function ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We first restrict ourselves to the special case&amp;amp;nbsp; $M = 2$&amp;amp;nbsp; and consider a binary source that emits the two symbols&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$.&amp;amp;nbsp; Let the occurrence probabilities be &amp;amp;nbsp; $p_{\rm A} = p$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 1 - p$.&lt;br /&gt;
&lt;br /&gt;
Für die Entropie dieser Binärquelle gilt:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm bin} (p) =  p \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{\hspace{0.1cm}p\hspace{0.1cm}} + (1-p) \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{1-p} \hspace{0.5cm}{\rm (Einheit\hspace{-0.15cm}: \hspace{0.15cm}bit\hspace{0.15cm}oder\hspace{0.15cm}bit/Symbol)}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
Man nennt die Funktion&amp;amp;nbsp; $H_\text{bin}(p)$&amp;amp;nbsp; die&amp;amp;nbsp; &#039;&#039;&#039;binäre Entropiefunktion&#039;&#039;&#039;.&amp;amp;nbsp; Die Entropie einer Quelle mit größerem Symbolumfang&amp;amp;nbsp; $M$&amp;amp;nbsp; lässt sich häufig unter Verwendung von&amp;amp;nbsp; $H_\text{bin}(p)$&amp;amp;nbsp; ausdrücken.&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
The graph shows the binary entropy function for values&amp;amp;nbsp; $0 ≤ p ≤ 1$&amp;amp;nbsp; of the symbol probability of&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; $($or of&amp;amp;nbsp; $\rm B)$.&amp;amp;nbsp; One can see:&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_1_S4_vers2.png|frame|Binary entropy function as a function of&amp;amp;nbsp; $p$|right]]&lt;br /&gt;
*The maximum value&amp;amp;nbsp; $H_\text{max} = 1\; \rm  bit$&amp;amp;nbsp; results for&amp;amp;nbsp; $p = 0.5$, i.e. for equally probable binary symbols.&amp;amp;nbsp; Then&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$&amp;amp;nbsp; each make the same contribution to the entropy.&lt;br /&gt;
* $H_\text{bin}(p)$&amp;amp;nbsp; is symmetric about&amp;amp;nbsp; $p = 0.5$.&amp;amp;nbsp; A source with&amp;amp;nbsp; $p_{\rm A} = 0.1$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 0.9$&amp;amp;nbsp; has the same entropy&amp;amp;nbsp;  $H = 0.469 \; \rm   bit$&amp;amp;nbsp; as a source with&amp;amp;nbsp; $p_{\rm A} = 0.9$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 0.1$.&lt;br /&gt;
*The difference&amp;amp;nbsp; $ΔH = H_\text{max} - H$&amp;amp;nbsp; indicates the&amp;amp;nbsp; &#039;&#039;redundancy&#039;&#039;&amp;amp;nbsp; of the source and&amp;amp;nbsp; $r = ΔH/H_\text{max}$&amp;amp;nbsp; the&amp;amp;nbsp; &#039;&#039;relative redundancy&#039;&#039;.&amp;amp;nbsp; In the example,&amp;amp;nbsp; $ΔH = 0.531\; \rm  bit$&amp;amp;nbsp; and&amp;amp;nbsp; $r = 53.1 \rm \%$, respectively.&lt;br /&gt;
*For&amp;amp;nbsp; $p = 0$&amp;amp;nbsp; the result is&amp;amp;nbsp; $H = 0$, since here the symbol sequence &amp;amp;nbsp;$\rm B \ B \ B \text{...}$&amp;amp;nbsp; can be predicted with certainty.&amp;amp;nbsp; Strictly speaking, the symbol range is then only&amp;amp;nbsp; $M = 1$.&amp;amp;nbsp; The same applies for&amp;amp;nbsp; $p = 1$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; symbol sequence &amp;amp;nbsp;$\rm A \ A \ A \text{...}$.&lt;br /&gt;
*$H_\text{bin}(p)$&amp;amp;nbsp; is always a&amp;amp;nbsp; &#039;&#039;concave function&#039;&#039;, since its second derivative with respect to the parameter&amp;amp;nbsp; $p$&amp;amp;nbsp; is negative for all values of&amp;amp;nbsp; $p$: &lt;br /&gt;
:$$\frac{ {\rm d}^2H_{\rm bin} (p)}{ {\rm d}\,p^2} =  \frac{- 1}{ {\rm ln}(2) \cdot p \cdot (1-p)}&amp;lt; 0&lt;br /&gt;
\hspace{0.05cm}.$$}}&lt;br /&gt;
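The binary entropy function and the properties listed above (maximum at&amp;amp;nbsp; $p = 0.5$,&amp;amp;nbsp; symmetry,&amp;amp;nbsp; $H = 0$&amp;amp;nbsp; at the edges) can be checked with a short Python sketch; this is our own illustration, and the helper name `h_bin` is an assumption:&lt;br /&gt;

```python
import math

def h_bin(p):
    """Binary entropy function H_bin(p) in bit; H_bin(0) = H_bin(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0  # limiting value: p * log2(1/p) tends to 0
    return p * math.log2(1.0 / p) + (1.0 - p) * math.log2(1.0 / (1.0 - p))

print(round(h_bin(0.5), 3))  # maximum: 1.0 bit
print(round(h_bin(0.1), 3))  # 0.469 bit, same as h_bin(0.9) by symmetry
```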
&lt;br /&gt;
==Message sources with larger symbol range==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In the&amp;amp;nbsp; [[Information_Theory/Gedächtnislose_Nachrichtenquellen#Modell_und_Voraussetzungen|first section]]&amp;amp;nbsp; of this chapter we considered a quaternary message source&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; with the symbol probabilities&amp;amp;nbsp; $p_{\rm A} = 0.4$, &amp;amp;nbsp; $p_{\rm B} = 0.3$, &amp;amp;nbsp; $p_{\rm C} = 0.2$ &amp;amp;nbsp; and&amp;amp;nbsp;  $ p_{\rm D} = 0.1$.&amp;amp;nbsp; This source has the following entropy:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm quat} = 0.4 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.4} + 0.3 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.2}+ 0.1 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.1}.$$&lt;br /&gt;
&lt;br /&gt;
For numerical evaluation, the detour via the base-10 logarithm&amp;amp;nbsp; $\lg \ x = {\rm log}_{10} \ x$&amp;amp;nbsp; is often useful, since the &#039;&#039;logarithmus dualis&#039;&#039;&amp;amp;nbsp; $ {\rm log}_2 \ x$&amp;amp;nbsp; is usually not found on pocket calculators.&lt;br /&gt;
&lt;br /&gt;
:$$H_{\rm quat}=\frac{1}{{\rm lg}\hspace{0.1cm}2} \cdot \left [ 0.4 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.4} + 0.3 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.3} + 0.2 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.2}+ 0.1 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.1} \right ] = 1.846\,{\rm bit}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
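The detour via the base-10 logarithm can be reproduced directly in Python (our own illustration, not part of the original text), using&amp;amp;nbsp; ${\rm log}_2 \ x = \lg x / \lg 2$:&lt;br /&gt;

```python
import math

probs = [0.4, 0.3, 0.2, 0.1]  # quaternary source of this chapter

# detour via the base-10 logarithm: log2(x) = lg(x) / lg(2)
H = sum(p * math.log10(1.0 / p) for p in probs) / math.log10(2.0)

print(round(H, 3))  # 1.846 bit
```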
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp;&lt;br /&gt;
Now there are certain symmetries between the individual symbol probabilities: &lt;br /&gt;
[[File:Inf_T_1_1_S5_vers2.png|frame|Entropy of binary source and quaternary source]]&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = p_{\rm D} = p \hspace{0.05cm},\hspace{0.4cm}p_{\rm B} = p_{\rm C} = 0.5 - p \hspace{0.05cm},\hspace{0.3cm}{\rm with} \hspace{0.15cm}0 \le p \le 0.5 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
In this case, the binary entropy function can be used for the entropy calculation:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm quat} =  2 \cdot p \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{\hspace{0.1cm}p\hspace{0.1cm} } + 2 \cdot (0.5-p) \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.5-p}$$&lt;br /&gt;
:$$\Rightarrow \hspace{0.3cm} H_{\rm quat} =   1 + H_{\rm bin}(2p) \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The graph shows, as a function of&amp;amp;nbsp; $p$,&lt;br /&gt;
*the entropy curve of the quaternary source (blue) &lt;br /&gt;
*compared to the entropy curve of the binary source (red). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the quaternary source, only the abscissa range&amp;amp;nbsp;  $0 ≤ p ≤ 0.5$&amp;amp;nbsp; is permissible. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The blue curve for the quaternary source shows:&lt;br /&gt;
*The maximum entropy&amp;amp;nbsp; $H_\text{max} = 2 \; \rm bit/symbol$&amp;amp;nbsp; results for&amp;amp;nbsp; $p = 0.25$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; equally probable symbols: &amp;amp;nbsp; $p_{\rm A} = p_{\rm B} = p_{\rm C} = p_{\rm D} = 0.25$.&lt;br /&gt;
*For&amp;amp;nbsp; $p = 0$&amp;amp;nbsp; or&amp;amp;nbsp; $p = 0.5$&amp;amp;nbsp; the quaternary source degenerates into a binary source with&amp;amp;nbsp; $p_{\rm B} = p_{\rm C} = 0.5$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm A} = p_{\rm D} = 0$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; entropy&amp;amp;nbsp; $H = 1 \; \rm bit/symbol$.&lt;br /&gt;
*The source with&amp;amp;nbsp; $p_{\rm A} = p_{\rm D} = 0.1$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = p_{\rm C} = 0.4$&amp;amp;nbsp; has the following characteristics (each with the pseudo-unit &amp;quot;bit/symbol&amp;quot;):&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp;   &amp;amp;nbsp; &#039;&#039;&#039;(1)&#039;&#039;&#039; &amp;amp;nbsp; entropy: &amp;amp;nbsp; $H = 1 + H_{\rm bin} (2p) =1 + H_{\rm bin} (0.2) = 1.722,$&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp;   &amp;amp;nbsp; &#039;&#039;&#039;(2)&#039;&#039;&#039; &amp;amp;nbsp; redundancy: &amp;amp;nbsp; ${\rm \Delta }H = {\rm log_2}\hspace{0.1cm} M - H =2- 1.722= 0.278,$&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp;   &amp;amp;nbsp; &#039;&#039;&#039;(3)&#039;&#039;&#039; &amp;amp;nbsp; relative redundancy: &amp;amp;nbsp; $r ={\rm \Delta }H/({\rm log_2}\hspace{0.1cm} M) = 0.139\hspace{0.05cm}.$&lt;br /&gt;
&lt;br /&gt;
*The redundancy of the quaternary source with&amp;amp;nbsp; $p = 0.1$&amp;amp;nbsp; is&amp;amp;nbsp; $ΔH = 0.278 \; \rm bit/symbol$&amp;amp;nbsp; and thus exactly as large as the redundancy of the binary source with&amp;amp;nbsp; $p = 0.2$.}}&lt;br /&gt;
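The relation&amp;amp;nbsp; $H_{\rm quat} = 1 + H_{\rm bin}(2p)$&amp;amp;nbsp; used above can be verified numerically. A small Python sketch (our own illustration; the function names `h_bin` and `h_quat` are assumptions):&lt;br /&gt;

```python
import math

def h_bin(p):
    """Binary entropy function in bit."""
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1.0 / p) + (1.0 - p) * math.log2(1.0 / (1.0 - p))

def h_quat(p):
    """Entropy of the quaternary source with p_A = p_D = p, p_B = p_C = 0.5 - p."""
    return sum(q * math.log2(1.0 / q) for q in (p, p, 0.5 - p, 0.5 - p) if q)

for p in (0.1, 0.25, 0.4):
    # both columns agree: H_quat(p) = 1 + H_bin(2p)
    print(round(h_quat(p), 3), round(1.0 + h_bin(2.0 * p), 3))
```

For&amp;amp;nbsp; $p = 0.1$&amp;amp;nbsp; both expressions give the value&amp;amp;nbsp; $1.722$&amp;amp;nbsp; stated in the example.&lt;br /&gt;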
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Exercises for the chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:1.1 Wetterentropie|Aufgabe 1.1: Wetterentropie]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.1Z Binäre Entropiefunktion|Aufgabe 1.1Z: Binäre Entropiefunktion]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.2 Entropie von Ternärquellen|Aufgabe 1.2: Entropie von Ternärquellen]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Memoryless_Sources&amp;diff=35013</id>
		<title>Information Theory/Discrete Memoryless Sources</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory/Discrete_Memoryless_Sources&amp;diff=35013"/>
		<updated>2020-10-27T21:55:06Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{FirstPage}}&lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=Entropie wertdiskreter Nachrichtenquellen&lt;br /&gt;
|Vorherige Seite=&lt;br /&gt;
|Nächste Seite=Nachrichtenquellen mit Gedächtnis&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== # OVERVIEW OF THE FIRST MAIN CHAPTER # ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This first chapter describes the calculation and the meaning of entropy.&amp;amp;nbsp; According to the Shannonian information definition, entropy is a measure of the mean uncertainty about the outcome of a statistical event or the uncertainty in the measurement of a stochastic quantity.&amp;amp;nbsp; Somewhat casually expressed, the entropy of a random quantity quantifies its &amp;quot;randomness&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
In detail are discussed:&lt;br /&gt;
&lt;br /&gt;
*the &#039;&#039;decision content&#039;&#039;&amp;amp;nbsp; and the &#039;&#039;entropy&#039;&#039;&amp;amp;nbsp; of a memoryless message source,&lt;br /&gt;
*the &#039;&#039;binary entropy function&#039;&#039;&amp;amp;nbsp; and its application to &#039;&#039;non-binary sources&#039;&#039;,&lt;br /&gt;
*the entropy calculation for &#039;&#039;sources with memory&#039;&#039;&amp;amp;nbsp; and suitable approximations,&lt;br /&gt;
*the peculiarities of &#039;&#039;Markov sources&#039;&#039;&amp;amp;nbsp; regarding the entropy calculation,&lt;br /&gt;
*the procedure for sources with a large number of symbols, for example &#039;&#039;natural texts&#039;&#039;,&lt;br /&gt;
*the &#039;&#039;entropy estimates&#039;&#039;&amp;amp;nbsp; according to Shannon and Küpfmüller.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Further information on the topic, as well as exercises, simulations and programming exercises, can be found in the experiment &amp;quot;Value Discrete Information Theory&amp;quot; of the practical course &amp;quot;Simulation Digitaler Übertragungssysteme&amp;quot; (English: Simulation of Digital Transmission Systems).&amp;amp;nbsp; This (former) LNT course at the TU Munich is based on&lt;br /&gt;
&lt;br /&gt;
*the Windows program&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Programme/WDIT.zip WDIT] &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link points to the ZIP version of the program and &lt;br /&gt;
*the associated&amp;amp;nbsp; [http://en.lntwww.de/downloads/Sonstiges/Texte/Wertdiskrete_Informationstheorie.pdf lab manual]  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the link refers to the PDF version.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Model and requirements == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We consider a discrete-value message source&amp;amp;nbsp; $\rm Q$, which emits a sequence&amp;amp;nbsp; $ \langle q_ν \rangle$&amp;amp;nbsp; of symbols. &lt;br /&gt;
*The run variable is &amp;amp;nbsp;$ν = 1$, ... , $N$, where&amp;amp;nbsp; $N$&amp;amp;nbsp; should be &amp;quot;sufficiently large&amp;quot;. &lt;br /&gt;
*Each individual source symbol &amp;amp;nbsp;$q_ν$&amp;amp;nbsp; comes from a symbol set&amp;amp;nbsp; $\{q_μ \}$&amp;amp;nbsp; with&amp;amp;nbsp; $μ = 1$, ... , $M$, where&amp;amp;nbsp; $M$&amp;amp;nbsp; denotes the symbol range:&lt;br /&gt;
 &lt;br /&gt;
:$$q_{\nu} \in \left \{ q_{\mu}  \right \}, \hspace{0.25cm}{\rm with}\hspace{0.25cm} \nu = 1, \hspace{0.05cm} \text{ ...}\hspace{0.05cm} , N\hspace{0.25cm}{\rm and}\hspace{0.25cm}\mu = 1,\hspace{0.05cm} \text{ ...}\hspace{0.05cm} , M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The figure shows a quaternary message source&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; with the alphabet&amp;amp;nbsp; $\rm \{A, \ B, \ C, \ D\}$&amp;amp;nbsp; and an exemplary sequence of length&amp;amp;nbsp; $N = 100$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2227__Inf_T_1_1_S1a_new.png|frame|Memoryless Quaternary Message Source]]&lt;br /&gt;
&lt;br /&gt;
The following requirements apply:&lt;br /&gt;
*The quaternary message source is fully described by the&amp;amp;nbsp; $M = 4$&amp;amp;nbsp; symbol probabilities&amp;amp;nbsp; $p_μ$.&amp;amp;nbsp; In general:&lt;br /&gt;
:$$\sum_{\mu = 1}^M \hspace{0.1cm}p_{\mu} = 1 \hspace{0.05cm}.$$&lt;br /&gt;
*The message source is memoryless, i.e., the individual sequence elements are&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Statistical Dependence and Independence#General_definition_of_statistical_dependence|statistically independent of each other]]:&lt;br /&gt;
:$${\rm Pr} \left (q_{\nu} = q_{\mu} \right ) = {\rm Pr} \left (q_{\nu} = q_{\mu} \hspace{0.03cm} | \hspace{0.03cm} q_{\nu -1}, q_{\nu -2}, \hspace{0.05cm} \text{ ...}\hspace{0.05cm}\right ) \hspace{0.05cm}.$$&lt;br /&gt;
*Since the alphabet consists of symbols&amp;amp;nbsp; (and not of random variables), the specification of&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Expected_Values_and_Moments|expected values]]&amp;amp;nbsp; (linear mean, quadratic mean, standard deviation, etc.) is not possible here, nor is it necessary from an information-theoretical point of view.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These properties will now be illustrated with an example.&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_1_S1b_vers2.png|right|frame|Relative frequencies as a function of&amp;amp;nbsp; $N$]]&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
The following applies to the symbol probabilities of a quaternary source: &lt;br /&gt;
:$$p_{\rm A} = 0.4 \hspace{0.05cm},\hspace{0.2cm}p_{\rm B} = 0.3 \hspace{0.05cm},\hspace{0.2cm}p_{\rm C} = 0.2 \hspace{0.05cm},\hspace{0.2cm} &lt;br /&gt;
p_{\rm D} = 0.1\hspace{0.05cm}.$$&lt;br /&gt;
For an infinitely long sequence&amp;amp;nbsp; $(N \to \infty)$, &lt;br /&gt;
*the&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/From_Random_Experiment_to_Random_Size#Bernoulli&#039;s_Law_of_Large_Numbers|relative frequencies]]&amp;amp;nbsp; $h_{\rm A}$,&amp;amp;nbsp; $h_{\rm B}$,&amp;amp;nbsp; $h_{\rm C}$,&amp;amp;nbsp; $h_{\rm D}$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; a-posteriori parameters &lt;br /&gt;
*would be identical to the&amp;amp;nbsp; [[Theory_of_Stochastic_Signals/Some_basic_definitions#Event_and_Event_set|probabilities]]&amp;amp;nbsp; $p_{\rm A}$,&amp;amp;nbsp; $p_{\rm B}$,&amp;amp;nbsp; $p_{\rm C}$,&amp;amp;nbsp; $p_{\rm D}$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; a-priori parameters. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For smaller&amp;amp;nbsp; $N$,&amp;amp;nbsp; deviations may occur, as the adjacent table (the result of a simulation) shows. &lt;br /&gt;
&lt;br /&gt;
*The graphic above shows an exemplary sequence with&amp;amp;nbsp; $N = 100$&amp;amp;nbsp; symbols. &lt;br /&gt;
*Because the set elements are&amp;amp;nbsp; $\rm A$,&amp;amp;nbsp; $\rm B$,&amp;amp;nbsp; $\rm C$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm D$,&amp;amp;nbsp; no mean values can be specified. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, if you replace the symbols with numerical values, for example&amp;amp;nbsp; $\rm A \Rightarrow 1$, &amp;amp;nbsp; $\rm B \Rightarrow 2$, &amp;amp;nbsp; $\rm C \Rightarrow 3$, &amp;amp;nbsp; $\rm D \Rightarrow 4$, then you obtain &amp;lt;br&amp;gt; &amp;amp;nbsp; &amp;amp;nbsp; by time averaging &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; overline &amp;amp;nbsp; &amp;amp;nbsp; or &amp;amp;nbsp; &amp;amp;nbsp; by ensemble averaging &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; expected value formation&lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Moments of a Discrete Random Variable#Linear_Average_-_Direct_Component|linear mean]]:&lt;br /&gt;
:$$m_1 = \overline { q_{\nu} } = {\rm E} \big [ q_{\mu} \big ] = 0.4 \cdot 1 + 0.3 \cdot 2 + 0.2 \cdot 3 + 0.1 \cdot 4&lt;br /&gt;
= 2 \hspace{0.05cm},$$ &lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Moments of a Discrete Random Variable#Square_mean_.E2.80.93_Variance_.E2.80.93_Scattering |quadratic mean]]:&lt;br /&gt;
:$$m_2 = \overline { q_{\nu}^{\hspace{0.05cm}2}  } = {\rm E} \big [ q_{\mu}^{\hspace{0.05cm}2} \big ] = 0.4 \cdot 1^2 + 0.3 \cdot 2^2 + 0.2 \cdot 3^2 + 0.1 \cdot 4^2&lt;br /&gt;
= 5 \hspace{0.05cm},$$&lt;br /&gt;
*for the [[Theory_of_Stochastic_Signals/Expected_Values_and_Moments#Some_often_used_Central_Moments|standard deviation]] (square root of the variance) according to Steiner&#039;s theorem:&lt;br /&gt;
:$$\sigma = \sqrt {m_2 - m_1^2} = \sqrt {5 - 2^2} = 1 \hspace{0.05cm}.$$}}	&lt;br /&gt;
:$$\sigma = \sqrt {m_2 - m_1^2} = \sqrt {5 - 2^2} = 1 \hspace{0.05cm}.$$}}	&lt;br /&gt;
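The three averages computed in the example above can be reproduced with a few lines of Python (our own illustration; the symbol-to-value mapping&amp;amp;nbsp; $\rm A \Rightarrow 1$, ... , $\rm D \Rightarrow 4$&amp;amp;nbsp; follows the text):&lt;br /&gt;

```python
# map the symbols to numerical values: A -> 1, B -> 2, C -> 3, D -> 4
values = {1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1}

m1 = sum(v * p for v, p in values.items())     # linear mean
m2 = sum(v**2 * p for v, p in values.items())  # quadratic mean
sigma = (m2 - m1**2) ** 0.5                    # Steiner's theorem

print(round(m1, 6), round(m2, 6), round(sigma, 6))  # 2.0 5.0 1.0
```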
&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
==Decision content - Message content==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[https://de.wikipedia.org/wiki/Claude_Shannon Claude Elwood Shannon]&amp;amp;nbsp; defined the concept of information in 1948 in the standard work of information theory&amp;amp;nbsp; [Sha48]&amp;lt;ref name=&#039;Sha48&#039;&amp;gt;Shannon, C.E.: A Mathematical Theory of Communication. In: Bell Syst. Techn. J. 27 (1948), pp. 379-423 and pp. 623-656.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; as the &amp;quot;decrease of uncertainty about the occurrence of a statistical event&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
Let us carry out a thought experiment with&amp;amp;nbsp; $M$&amp;amp;nbsp; possible outcomes, all of which are equally probable: &amp;amp;nbsp; $p_1 = p_2 = \hspace{0.05cm} \text{ ...}\hspace{0.05cm} = p_M = 1/M \hspace{0.05cm}.$ &lt;br /&gt;
&lt;br /&gt;
Under this assumption:&lt;br /&gt;
*If&amp;amp;nbsp; $M = 1$, then every single trial yields the same result, and there is therefore no uncertainty about the outcome.&lt;br /&gt;
*On the other hand, an observer of an experiment with&amp;amp;nbsp; $M = 2$, for example the &amp;quot;coin toss&amp;quot; with the event set&amp;amp;nbsp; $\big \{\boldsymbol{\rm Z}\rm (ahl), \boldsymbol{\rm W}\rm (appen) \big \}$&amp;amp;nbsp; and the probabilities&amp;amp;nbsp; $p_{\rm Z} = p_{\rm W} = 0.5$, does gain information; the uncertainty regarding&amp;amp;nbsp; $\rm Z$ &amp;amp;nbsp;resp.&amp;amp;nbsp; $\rm W$&amp;amp;nbsp; is resolved.&lt;br /&gt;
*In the experiment &amp;quot;dice&amp;quot;&amp;amp;nbsp; $(M = 6)$&amp;amp;nbsp; and even more so in roulette&amp;amp;nbsp; $(M = 37)$, the information gained is even more significant for the observer than in the &amp;quot;coin toss&amp;quot; when he learns which number was rolled or which ball fell.&lt;br /&gt;
*Finally, it should be noted that the experiment&amp;amp;nbsp; &amp;quot;triple coin toss&amp;quot;&amp;amp;nbsp; with the&amp;amp;nbsp; $M = 8$&amp;amp;nbsp; possible outcomes&amp;amp;nbsp; $\rm ZZZ$,&amp;amp;nbsp; $\rm ZZW$,&amp;amp;nbsp; $\rm ZWZ$,&amp;amp;nbsp; $\rm ZWW$,&amp;amp;nbsp; $\rm WZZ$,&amp;amp;nbsp; $\rm WZW$,&amp;amp;nbsp; $\rm WWZ$,&amp;amp;nbsp; $\rm WWW$&amp;amp;nbsp; provides three times the information of the single coin toss&amp;amp;nbsp; $(M = 2)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following definition fulfills all the requirements listed here for a quantitative information measure for equally probable events, determined solely by the symbol range&amp;amp;nbsp; $M$.&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp; The&amp;amp;nbsp; &#039;&#039;&#039;decision content&#039;&#039;&#039; &amp;amp;nbsp; of a message source depends only on the symbol range&amp;amp;nbsp; $M$&amp;amp;nbsp; and is given by&lt;br /&gt;
 &lt;br /&gt;
:$$H_0 = {\rm log}\hspace{0.1cm}M = {\rm log}_2\hspace{0.1cm}M \hspace{0.15cm} {\rm (in \ &amp;quot;bit&amp;quot;)}&lt;br /&gt;
= {\rm ln}\hspace{0.1cm}M \hspace{0.15cm}\text {(in &amp;quot;nat&amp;quot;)}&lt;br /&gt;
= {\rm lg}\hspace{0.1cm}M \hspace{0.15cm}\text {(in &amp;quot;Hartley&amp;quot;)}\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*The term&amp;amp;nbsp; &#039;&#039;message content&#039;&#039; is also commonly used for this. &lt;br /&gt;
*Since&amp;amp;nbsp; $H_0$&amp;amp;nbsp; indicates the maximum value of the&amp;amp;nbsp; [[Information_Theory/Memory_Message_Sources#Information_content_and_Entropy|entropy]]&amp;amp;nbsp; $H$, our tutorial also uses&amp;amp;nbsp; $H_\text{max}$&amp;amp;nbsp; as a short notation. }}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Please note our nomenclature:&lt;br /&gt;
*The logarithm will be called &amp;quot;log&amp;quot; in the following, independent of the base. &lt;br /&gt;
*The relations mentioned above are fulfilled due to the following properties:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm log}\hspace{0.1cm}1 = 0 \hspace{0.05cm},\hspace{0.2cm}&lt;br /&gt;
{\rm log}\hspace{0.1cm}37 &amp;gt; {\rm log}\hspace{0.1cm}6 &amp;gt; {\rm log}\hspace{0.1cm}2\hspace{0.05cm},\hspace{0.2cm}&lt;br /&gt;
{\rm log}\hspace{0.1cm}M^k = k \cdot {\rm log}\hspace{0.1cm}M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
* Usually we use the logarithm to the base&amp;amp;nbsp; $2$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;logarithmus dualis&#039;&#039;&amp;amp;nbsp; $\rm (ld)$, to which the pseudo-unit &amp;quot;bit&amp;quot;, more precisely:&amp;amp;nbsp; &amp;quot;bit/symbol&amp;quot;, is then added:&lt;br /&gt;
 &lt;br /&gt;
:$${\rm ld}\hspace{0.1cm}M = {\rm log_2}\hspace{0.1cm}M = \frac{{\rm lg}\hspace{0.1cm}M}{{\rm lg}\hspace{0.1cm}2}&lt;br /&gt;
= \frac{{\rm ln}\hspace{0.1cm}M}{{\rm ln}\hspace{0.1cm}2} &lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
*In addition, the literature contains some further definitions based on the natural logarithm&amp;amp;nbsp; $\rm (ln)$&amp;amp;nbsp; or the base-10 logarithm&amp;amp;nbsp; $\rm (lg)$.&lt;br /&gt;
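The decision content in the three pseudo-units can be evaluated with Python's standard logarithms (our own illustration, assuming the quaternary source with&amp;amp;nbsp; $M = 4$):&lt;br /&gt;

```python
import math

M = 4  # symbol range of the quaternary source

H0_bit = math.log2(M)       # decision content in "bit"
H0_nat = math.log(M)        # in "nat" (natural logarithm)
H0_hartley = math.log10(M)  # in "Hartley" (base-10 logarithm)

print(H0_bit, round(H0_nat, 4), round(H0_hartley, 4))  # 2.0 1.3863 0.6021
```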
 &lt;br /&gt;
==Information content and entropy ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We now drop the previous assumption that all&amp;amp;nbsp; $M$&amp;amp;nbsp; possible outcomes of an experiment are equally probable.&amp;amp;nbsp; With a view to the most compact notation possible, for this page we merely specify:&lt;br /&gt;
 &lt;br /&gt;
:$$p_1 &amp;gt; p_2 &amp;gt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;gt; p_\mu &amp;gt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm}  &amp;gt; p_{M-1} &amp;gt; p_M\hspace{0.05cm},\hspace{0.4cm}\sum_{\mu = 1}^M p_{\mu}  = 1 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
We now consider the &#039;&#039;information content&#039;&#039;&amp;amp;nbsp; of the individual symbols, denoting the &amp;quot;logarithmus dualis&amp;quot; by $\log_2$:&lt;br /&gt;
 &lt;br /&gt;
:$$I_\mu = {\rm log_2}\hspace{0.1cm}\frac{1}{p_\mu}= -\hspace{0.05cm}{\rm log_2}\hspace{0.1cm}{p_\mu}&lt;br /&gt;
\hspace{0.5cm}{\rm (unit\hspace{-0.15cm}: \hspace{0.15cm}bit\hspace{0.15cm}or\hspace{0.15cm}bit/symbol)}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
One can see:&lt;br /&gt;
*Because of&amp;amp;nbsp; $p_μ ≤ 1$,&amp;amp;nbsp; the information content is never negative.&amp;amp;nbsp; In the limiting case&amp;amp;nbsp; $p_μ  \to  1$,&amp;amp;nbsp; $I_μ  \to  0$. &lt;br /&gt;
*However, for&amp;amp;nbsp; $I_μ = 0$  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp;  $p_μ = 1$  &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp;  $M = 1$,&amp;amp;nbsp; the decision content is also&amp;amp;nbsp; $H_0 = 0$.&lt;br /&gt;
*With decreasing probabilities&amp;amp;nbsp; $p_μ$,&amp;amp;nbsp; the information content increases continuously:&lt;br /&gt;
 &lt;br /&gt;
:$$I_1 &amp;lt; I_2 &amp;lt; \hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;lt; I_\mu &amp;lt;\hspace{0.05cm} \text{ ...}\hspace{0.05cm} &amp;lt; I_{M-1} &amp;lt; I_M \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &#039;&#039;&#039;The less probable an event is, the greater its information content&#039;&#039;&#039;.&amp;amp;nbsp; This fact can also be observed in everyday life:&lt;br /&gt;
*&amp;quot;Six correct&amp;quot; in the lottery is certainly noticed more than &amp;quot;three correct&amp;quot; or no win at all.&lt;br /&gt;
*A tsunami in Asia dominates the news in Germany for weeks, in contrast to the almost routine delays of Deutsche Bahn.&lt;br /&gt;
*A losing streak of Bayern München produces huge headlines, in contrast to a winning streak.&amp;amp;nbsp; For 1860 München, exactly the opposite is true.}}&lt;br /&gt;
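The information content of the individual symbols can be evaluated numerically. A minimal Python sketch (our own illustration) for the quaternary source of this chapter:&lt;br /&gt;

```python
import math

# symbol probabilities in descending order, as assumed on this page
probs = [0.4, 0.3, 0.2, 0.1]
info = [math.log2(1.0 / p) for p in probs]  # I_mu in bit/symbol

print([round(i, 3) for i in info])  # [1.322, 1.737, 2.322, 3.322]
assert info == sorted(info)  # the rarer the symbol, the larger I_mu
```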
&lt;br /&gt;
&lt;br /&gt;
However, the information content of a single symbol (or event) is not very interesting.&amp;amp;nbsp; In contrast, one obtains &lt;br /&gt;
*by ensemble averaging over all possible symbols&amp;amp;nbsp; $q_μ$ &amp;amp;nbsp;or&amp;amp;nbsp; &lt;br /&gt;
*by time averaging over all elements of the sequence&amp;amp;nbsp; $\langle q_ν \rangle$&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
one of the central quantities of information theory. &lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp;  The&amp;amp;nbsp; &#039;&#039;&#039;entropy&#039;&#039;&#039;&amp;amp;nbsp; $H$&amp;amp;nbsp; of a source indicates the&amp;amp;nbsp; &#039;&#039;mean information content of all symbols&#039;&#039;:&lt;br /&gt;
 &lt;br /&gt;
:$$H = \overline{I_\nu} = {\rm E}\hspace{0.01cm}[I_\mu] = \sum_{\mu = 1}^M p_{\mu} \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{p_\mu}=&lt;br /&gt;
 -\sum_{\mu = 1}^M p_{\mu} \cdot{\rm log_2}\hspace{0.1cm}{p_\mu} \hspace{0.5cm}\text{(unit:   bit, more precisely:   bit/symbol)} &lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The overline again denotes a time average and&amp;amp;nbsp; $\rm E[\text{...}]$&amp;amp;nbsp; an ensemble average.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Among other things, the entropy is a measure of&lt;br /&gt;
*the mean uncertainty about the outcome of a statistical event,&lt;br /&gt;
*the &amp;quot;randomness&amp;quot; of this event,&amp;amp;nbsp; as well as&lt;br /&gt;
*the mean information content of a random variable.	 &lt;br /&gt;
&lt;br /&gt;
==Binary entropy function  ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We first restrict ourselves to the special case&amp;amp;nbsp; $M = 2$&amp;amp;nbsp; and consider a binary source that emits the two symbols&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$.&amp;amp;nbsp; Let the occurrence probabilities be &amp;amp;nbsp; $p_{\rm A} = p$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 1 - p$.&lt;br /&gt;
&lt;br /&gt;
The entropy of this binary source is:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm bin} (p) =  p \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{\hspace{0.1cm}p\hspace{0.1cm}} + (1-p) \cdot {\rm log_2}\hspace{0.1cm}\frac{1}{1-p} \hspace{0.5cm}{\rm (unit\hspace{-0.15cm}: \hspace{0.15cm}bit\hspace{0.15cm}or\hspace{0.15cm}bit/symbol)}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The function&amp;amp;nbsp; $H_\text{bin}(p)$&amp;amp;nbsp; is called the&amp;amp;nbsp; &#039;&#039;&#039;binary entropy function&#039;&#039;&#039;.&amp;amp;nbsp; The entropy of a source with larger symbol range&amp;amp;nbsp; $M$&amp;amp;nbsp; can often be expressed using&amp;amp;nbsp; $H_\text{bin}(p)$.&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
The graph shows the binary entropy function for values&amp;amp;nbsp; $0 ≤ p ≤ 1$&amp;amp;nbsp; of the symbol probability of&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; $($or of&amp;amp;nbsp; $\rm B)$.&amp;amp;nbsp; One can see:&lt;br /&gt;
&lt;br /&gt;
[[File:Inf_T_1_1_S4_vers2.png|frame|Binary entropy function as a function of&amp;amp;nbsp; $p$|right]]&lt;br /&gt;
*The maximum value&amp;amp;nbsp; $H_\text{max} = 1\; \rm  bit$&amp;amp;nbsp; results for&amp;amp;nbsp; $p = 0.5$, i.e. for equally probable binary symbols.&amp;amp;nbsp; Then&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm B$&amp;amp;nbsp; each make the same contribution to the entropy.&lt;br /&gt;
* $H_\text{bin}(p)$&amp;amp;nbsp; is symmetric about&amp;amp;nbsp; $p = 0.5$.&amp;amp;nbsp; A source with&amp;amp;nbsp; $p_{\rm A} = 0.1$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 0.9$&amp;amp;nbsp; has the same entropy&amp;amp;nbsp;  $H = 0.469 \; \rm   bit$&amp;amp;nbsp; as a source with&amp;amp;nbsp; $p_{\rm A} = 0.9$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = 0.1$.&lt;br /&gt;
*The difference&amp;amp;nbsp; $ΔH = H_\text{max} - H$&amp;amp;nbsp; indicates the&amp;amp;nbsp; &#039;&#039;redundancy&#039;&#039;&amp;amp;nbsp; of the source and&amp;amp;nbsp; $r = ΔH/H_\text{max}$&amp;amp;nbsp; the&amp;amp;nbsp; &#039;&#039;relative redundancy&#039;&#039;.&amp;amp;nbsp; In the example,&amp;amp;nbsp; $ΔH = 0.531\; \rm  bit$&amp;amp;nbsp; and&amp;amp;nbsp; $r = 53.1 \rm \%$, respectively.&lt;br /&gt;
*For&amp;amp;nbsp; $p = 0$&amp;amp;nbsp; the result is&amp;amp;nbsp; $H = 0$, since here the symbol sequence &amp;amp;nbsp;$\rm B \ B \ B \text{...}$&amp;amp;nbsp; can be predicted with certainty.&amp;amp;nbsp; Strictly speaking, the symbol range is then only&amp;amp;nbsp; $M = 1$.&amp;amp;nbsp; The same applies for&amp;amp;nbsp; $p = 1$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; symbol sequence &amp;amp;nbsp;$\rm A \ A \ A \text{...}$.&lt;br /&gt;
*$H_\text{bin}(p)$&amp;amp;nbsp; is always a&amp;amp;nbsp; &#039;&#039;concave function&#039;&#039;, since its second derivative with respect to the parameter&amp;amp;nbsp; $p$&amp;amp;nbsp; is negative for all values of&amp;amp;nbsp; $p$: &lt;br /&gt;
:$$\frac{ {\rm d}^2H_{\rm bin} (p)}{ {\rm d}\,p^2} =  \frac{- 1}{ {\rm ln}(2) \cdot p \cdot (1-p)}&amp;lt; 0&lt;br /&gt;
\hspace{0.05cm}.$$}}&lt;br /&gt;
&lt;br /&gt;
==Message sources with larger symbol range==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In the&amp;amp;nbsp; [[Information_Theory/Gedächtnislose_Nachrichtenquellen#Modell_und_Voraussetzungen|first section]]&amp;amp;nbsp; of this chapter we considered a quaternary message source&amp;amp;nbsp; $(M = 4)$&amp;amp;nbsp; with the symbol probabilities&amp;amp;nbsp; $p_{\rm A} = 0.4$, &amp;amp;nbsp; $p_{\rm B} = 0.3$, &amp;amp;nbsp; $p_{\rm C} = 0.2$ &amp;amp;nbsp; and&amp;amp;nbsp;  $ p_{\rm D} = 0.1$.&amp;amp;nbsp; This source has the following entropy:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm quat} = 0.4 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.4} + 0.3 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.3} + 0.2 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.2}+ 0.1 \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.1}.$$&lt;br /&gt;
&lt;br /&gt;
For numerical evaluation, the detour via the base-10 logarithm&amp;amp;nbsp; $\lg \ x = {\rm log}_{10} \ x$&amp;amp;nbsp; is often useful, since the &#039;&#039;logarithmus dualis&#039;&#039;&amp;amp;nbsp; $ {\rm log}_2 \ x$&amp;amp;nbsp; is usually not found on pocket calculators.&lt;br /&gt;
&lt;br /&gt;
:$$H_{\rm quat}=\frac{1}{{\rm lg}\hspace{0.1cm}2} \cdot \left [ 0.4 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.4} + 0.3 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.3} + 0.2 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.2}+ 0.1 \cdot {\rm lg}\hspace{0.1cm}\frac{1}{0.1} \right ] = 1.846\,{\rm bit}&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Beispiel 3:}$&amp;amp;nbsp;&lt;br /&gt;
Nun bestehen zwischen den einzelnen Symbolwahrscheinlichkeiten gewisse Symmetrien: &lt;br /&gt;
[[File:Inf_T_1_1_S5_vers2.png|frame|Entropie von Binärquelle und Quaternärquelle]]&lt;br /&gt;
 &lt;br /&gt;
:$$p_{\rm A} = p_{\rm D} = p \hspace{0.05cm},\hspace{0.4cm}p_{\rm B} = p_{\rm C} = 0.5 - p \hspace{0.05cm},\hspace{0.3cm}{\rm mit} \hspace{0.15cm}0 \le p \le 0.5 \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
In this case, the binary entropy function can be used to calculate the entropy:&lt;br /&gt;
 &lt;br /&gt;
:$$H_{\rm quat} =  2 \cdot p \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{\hspace{0.1cm}p\hspace{0.1cm} } + 2 \cdot (0.5-p) \cdot {\rm log}_2\hspace{0.1cm}\frac{1}{0.5-p}$$&lt;br /&gt;
:$$\Rightarrow \hspace{0.3cm} H_{\rm quat} =   1 + H_{\rm bin}(2p) \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The graph shows, as a function of&amp;amp;nbsp; $p$,&lt;br /&gt;
*the entropy curve of the quaternary source (blue) &lt;br /&gt;
*compared with the entropy curve of the binary source (red). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the quaternary source, only the abscissa range&amp;amp;nbsp;  $0 ≤ p ≤ 0.5$&amp;amp;nbsp; is permissible. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The blue curve for the quaternary source shows:&lt;br /&gt;
*The maximum entropy&amp;amp;nbsp; $H_\text{max} = 2 \; \rm bit/symbol$&amp;amp;nbsp; results for&amp;amp;nbsp; $p = 0.25$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; equiprobable symbols: &amp;amp;nbsp; $p_{\rm A} = p_{\rm B} = p_{\rm C} = p_{\rm D} = 0.25$.&lt;br /&gt;
*With&amp;amp;nbsp; $p = 0$&amp;amp;nbsp; or&amp;amp;nbsp; $p = 0.5$,&amp;amp;nbsp; the quaternary source degenerates into a binary source with&amp;amp;nbsp; $p_{\rm B} = p_{\rm C} = 0.5$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm A} = p_{\rm D} = 0$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; entropy&amp;amp;nbsp; $H = 1 \; \rm bit/symbol$.&lt;br /&gt;
*The source with&amp;amp;nbsp; $p_{\rm A} = p_{\rm D} = 0.1$&amp;amp;nbsp; and&amp;amp;nbsp; $p_{\rm B} = p_{\rm C} = 0.4$&amp;amp;nbsp; has the following characteristic values (each with the pseudo-unit „bit/symbol”):&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp;   &amp;amp;nbsp; &#039;&#039;&#039;(1)&#039;&#039;&#039; &amp;amp;nbsp; Entropy: &amp;amp;nbsp; $H = 1 + H_{\rm bin} (2p) =1 + H_{\rm bin} (0.2) = 1.722,$&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp;   &amp;amp;nbsp; &#039;&#039;&#039;(2)&#039;&#039;&#039; &amp;amp;nbsp; Redundancy: &amp;amp;nbsp; ${\rm \Delta }H = {\rm log_2}\hspace{0.1cm} M - H =2- 1.722= 0.278,$&lt;br /&gt;
&lt;br /&gt;
: &amp;amp;nbsp;   &amp;amp;nbsp; &#039;&#039;&#039;(3)&#039;&#039;&#039; &amp;amp;nbsp; relative redundancy: &amp;amp;nbsp; $r ={\rm \Delta }H/({\rm log_2}\hspace{0.1cm} M) = 0.139\hspace{0.05cm}.$&lt;br /&gt;
&lt;br /&gt;
*The redundancy of the quaternary source with&amp;amp;nbsp; $p = 0.1$&amp;amp;nbsp; is&amp;amp;nbsp; $ΔH = 0.278 \; \rm bit/symbol$&amp;amp;nbsp; and thus exactly as large as the redundancy of the binary source with&amp;amp;nbsp; $p = 0.2$.}}&lt;br /&gt;
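The characteristic values of this example can be reproduced with a short sketch, assuming the symmetric probability assignment given above (the function and variable names are chosen for illustration):

```python
import math

def H_bin(p):
    """Binary entropy function in bit."""
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

# Symmetric quaternary source: p_A = p_D = p, p_B = p_C = 0.5 - p
p = 0.1
H_quat = 1 + H_bin(2 * p)   # entropy in bit/symbol
delta_H = 2 - H_quat        # redundancy, since log2(M) = 2 for M = 4
r = delta_H / 2             # relative redundancy

print(round(H_quat, 3), round(delta_H, 3), round(r, 3))  # 1.722 0.278 0.139
```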
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Exercises for the chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:1.1 Wetterentropie|Exercise 1.1: Weather Entropy]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.1Z Binäre Entropiefunktion|Exercise 1.1Z: Binary Entropy Function]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:1.2 Entropie von Ternärquellen|Exercise 1.2: Entropy of Ternary Sources]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==List of sources==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory&amp;diff=35008</id>
		<title>Information Theory</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory&amp;diff=35008"/>
		<updated>2020-10-26T23:18:25Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since the early beginnings of communications as an engineering discipline, many engineers and mathematicians have sought to find a quantitative measure of &lt;br /&gt;
*the $\rm Information$&amp;amp;nbsp; (in general: &amp;quot;the knowledge of something&amp;quot;) contained in a&amp;amp;nbsp; $\rm message$&amp;amp;nbsp; (here we understand &amp;quot;a collection of symbols and/or states&amp;quot;). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The (abstract) information is communicated by the (concrete) message and can be seen as an interpretation of a message. &lt;br /&gt;
&lt;br /&gt;
[https://de.wikipedia.org/wiki/Claude_Shannon Claude Elwood Shannon]&amp;amp;nbsp; succeeded in 1948 in establishing a consistent theory of the information content of messages, which was revolutionary in its time and created a new, still highly topical field of science:&amp;amp;nbsp; the theory named after him&amp;amp;nbsp; $\text{Shannon&#039;s Information Theory}$.&lt;br /&gt;
&lt;br /&gt;
The course material corresponds to a&amp;amp;nbsp; $\text{lecture with two semester hours per week (SWS) and one SWS exercise}$.&lt;br /&gt;
&lt;br /&gt;
Here is a table of contents based on the&amp;amp;nbsp; $\text{four main chapters}$&amp;amp;nbsp; with a total of&amp;amp;nbsp; $\text{13 individual chapters}$.  &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
===Contents===&lt;br /&gt;
{{Collapsible-Kopf}}&lt;br /&gt;
{{Collapse1| header=Entropy of Discrete Sources&lt;br /&gt;
| submenu= &lt;br /&gt;
*[[/Gedächtnislose Nachrichtenquellen/]]&lt;br /&gt;
*[[/Nachrichtenquellen mit Gedächtnis/]]&lt;br /&gt;
*[[/Natürliche wertdiskrete Nachrichtenquellen/]]&lt;br /&gt;
}}&lt;br /&gt;
{{Collapse2 | header=Source Coding - Data Compression&lt;br /&gt;
|submenu=&lt;br /&gt;
*[[/Allgemeine Beschreibung/]]&lt;br /&gt;
*[[/Komprimierung nach Lempel, Ziv und Welch/]]&lt;br /&gt;
*[[/Entropiecodierung nach Huffman/]]&lt;br /&gt;
*[[/Weitere Quellencodierverfahren/]]&lt;br /&gt;
}}&lt;br /&gt;
{{Collapse3 | header=Mutual Information Between Two Discrete Random Variables&lt;br /&gt;
|submenu=&lt;br /&gt;
*[[/Einige Vorbemerkungen zu zweidimensionalen Zufallsgrößen/]]&lt;br /&gt;
*[[/Verschiedene Entropien zweidimensionaler Zufallsgrößen/]]&lt;br /&gt;
*[[/Anwendung auf die Digitalsignalübertragung/]]&lt;br /&gt;
}}&lt;br /&gt;
{{Collapse4 | header=Information Theory for Continuous Random Variables&lt;br /&gt;
|submenu=&lt;br /&gt;
*[[/Differentielle Entropie/]]&lt;br /&gt;
*[[/AWGN–Kanalkapazität bei wertkontinuierlichem Eingang/]]&lt;br /&gt;
*[[/AWGN–Kanalkapazität bei wertdiskretem Eingang/]]&lt;br /&gt;
}}&lt;br /&gt;
{{Collapsible-Fuß}}&lt;br /&gt;
&lt;br /&gt;
In addition to these theory pages, we also offer Exercises and multimedia modules that could help to clarify the teaching material:&lt;br /&gt;
*[https://en.lntwww.de/Kategorie:Aufgaben_zu_Informationstheorie $\text{Exercises}$]&lt;br /&gt;
*[[LNTwww:Lernvideos_zu_Informationstheorie|$\text{Learning videos}$]]&lt;br /&gt;
*[[LNTwww:HTML5-Applets_zu_Informationstheorie|$\text{redesigned applets}$]], based on HTML5, also executable on smartphones&lt;br /&gt;
*[[LNTwww:SWF-Applets_zu_Informationstheorie|$\text{former Applets}$]], based on SWF, executable only under WINDOWS with &#039;&#039;Adobe Flash Player&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
$\text{More links:}$&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
$(1)$&amp;amp;nbsp; &amp;amp;nbsp; [[LNTwww:Literaturempfehlung_zu_Informationstheorie|$\text{Recommended literature for the book}$]]&lt;br /&gt;
&lt;br /&gt;
$(2)$&amp;amp;nbsp; &amp;amp;nbsp; [[LNTwww:Weitere_Hinweise_zum_Buch_Informationstheorie|$\text{General notes about the book}$]] &amp;amp;nbsp; (Authors,&amp;amp;nbsp; other participants,&amp;amp;nbsp; materials as a starting point for the book,&amp;amp;nbsp; list of sources)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
__NOEDITSECTION__&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory&amp;diff=35007</id>
		<title>Information Theory</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Information_Theory&amp;diff=35007"/>
		<updated>2020-10-26T23:17:34Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Since the early beginnings of communications as an engineering discipline, many engineers and mathematicians have sought to find a quantitative measure of &lt;br /&gt;
*the $\rm Information$&amp;amp;nbsp; (in general: &amp;quot;the knowledge of something&amp;quot;) contained in a&amp;amp;nbsp; $\rm message$&amp;amp;nbsp; (here we understand &amp;quot;a collection of symbols and/or states&amp;quot;). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The (abstract) information is communicated by the (concrete) message and can be seen as an interpretation of a message. &lt;br /&gt;
&lt;br /&gt;
[https://de.wikipedia.org/wiki/Claude_Shannon Claude Elwood Shannon]&amp;amp;nbsp; succeeded in 1948 in establishing a consistent theory of the information content of messages, which was revolutionary in its time and created a new, still highly topical field of science:&amp;amp;nbsp; the theory named after him&amp;amp;nbsp; $\text{Shannon&#039;s Information Theory}$.&lt;br /&gt;
&lt;br /&gt;
The course material corresponds to a&amp;amp;nbsp; $\text{lecture with two semester hours per week (SWS) and one SWS exercise}$.&lt;br /&gt;
&lt;br /&gt;
Here is a table of contents based on the&amp;amp;nbsp; $\text{four main chapters}$&amp;amp;nbsp; with a total of&amp;amp;nbsp; $\text{13 individual chapters}$.  &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
===Contents===&lt;br /&gt;
{{Collapsible-Kopf}}&lt;br /&gt;
{{Collapse1| header=Entropy of Discrete Sources&lt;br /&gt;
| submenu= &lt;br /&gt;
*[[/Gedächtnislose Nachrichtenquellen/]]&lt;br /&gt;
*[[/Nachrichtenquellen mit Gedächtnis/]]&lt;br /&gt;
*[[/Natürliche wertdiskrete Nachrichtenquellen/]]&lt;br /&gt;
}}&lt;br /&gt;
{{Collapse2 | header=Source Coding - Data Compression&lt;br /&gt;
|submenu=&lt;br /&gt;
*[[/Allgemeine Beschreibung/]]&lt;br /&gt;
*[[/Komprimierung nach Lempel, Ziv und Welch/]]&lt;br /&gt;
*[[/Entropiecodierung nach Huffman/]]&lt;br /&gt;
*[[/Weitere Quellencodierverfahren/]]&lt;br /&gt;
}}&lt;br /&gt;
{{Collapse3 | header=Mutual Information Between Two Discrete Random Variables&lt;br /&gt;
|submenu=&lt;br /&gt;
*[[/Einige Vorbemerkungen zu zweidimensionalen Zufallsgrößen/]]&lt;br /&gt;
*[[/Verschiedene Entropien zweidimensionaler Zufallsgrößen/]]&lt;br /&gt;
*[[/Anwendung auf die Digitalsignalübertragung/]]&lt;br /&gt;
}}&lt;br /&gt;
{{Collapse4 | header=Information Theory for Continuous Random Variables&lt;br /&gt;
|submenu=&lt;br /&gt;
*[[/Differentielle Entropie/]]&lt;br /&gt;
*[[/AWGN–Kanalkapazität bei wertkontinuierlichem Eingang/]]&lt;br /&gt;
*[[/AWGN–Kanalkapazität bei wertdiskretem Eingang/]]&lt;br /&gt;
}}&lt;br /&gt;
{{Collapsible-Fuß}}&lt;br /&gt;
&lt;br /&gt;
In addition to these theory pages, we also offer Exercises and multimedia modules that could help to clarify the teaching material:&lt;br /&gt;
*[https://en.lntwww.de/Kategorie:Aufgaben_zu_Informationstheorie $\text{Exercises}$]&lt;br /&gt;
*[[LNTwww:Lernvideos_zu_Informationstheorie|$\text{Learning videos}$]]&lt;br /&gt;
*[[LNTwww:HTML5-Applets_zu_Informationstheorie|$\text{redesigned applets}$]], based on HTML5, also executable on smartphones&lt;br /&gt;
*[[LNTwww:SWF-Applets_zu_Informationstheorie|$\text{former Applets}$]], based on SWF, executable only under WINDOWS with &#039;&#039;Adobe Flash Player&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
$\text{More links:}$&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
$(1)$&amp;amp;nbsp; &amp;amp;nbsp; [[LNTwww:Literaturempfehlung_zu_Informationstheorie|$\text{Recommended literature for the book}$]]&lt;br /&gt;
&lt;br /&gt;
$(2)$&amp;amp;nbsp; &amp;amp;nbsp; [[LNTwww:Weitere_Hinweise_zum_Buch_Informationstheorie|$\text{General notes about the book}$]] &amp;amp;nbsp; (Authors,&amp;amp;nbsp; other participants,&amp;amp;nbsp; materials as a starting point for the book,&amp;amp;nbsp; list of sources)&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
__NOTOC__&lt;br /&gt;
__NOEDITSECTION__&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/LTE-Advanced_-_a_Further_Development_of_LTE&amp;diff=35003</id>
		<title>Mobile Communications/LTE-Advanced - a Further Development of LTE</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/LTE-Advanced_-_a_Further_Development_of_LTE&amp;diff=35003"/>
		<updated>2020-10-19T07:04:54Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; {{LastPage}}&lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=LTE – Long Term Evolution&lt;br /&gt;
|Vorherige Seite=Physical Layer for LTE&lt;br /&gt;
|Nächste Seite=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== How fast is LTE really? ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Consumers are accustomed to being able to use (at least to a large extent) the speed offered by established cable-based services such as&amp;amp;nbsp; [[Examples_of_Communication_Systems/General_Description_of_DSL|DSL]]&amp;amp;nbsp; (&amp;lt;i&amp;gt;Digital Subscriber Line&amp;lt;/i&amp;gt;&amp;amp;nbsp;).&lt;br /&gt;
*But what is the situation with LTE?&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*What data rates can the individual LTE&amp;amp;ndash;user actually reach?&lt;br /&gt;
&lt;br /&gt;
It is much more difficult for the providers of mobile radio systems to provide concrete data rate information, since many influences that are difficult to predict have to be taken into account for a radio connection.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As already described in the chapter&amp;amp;nbsp; [[Mobile_Communications/Technical Innovations of LTE# Multiple Antenna Systems|Technical Innovations of LTE]],&amp;amp;nbsp; according to the planning for 2011 data rates of up to 326 Mbit/s are possible in the LTE&amp;amp;ndash;downlink and approx. 86 Mbit/s in the uplink. These figures are only maximum achievable values. In reality, however, the speed is determined by a variety of factors. In the following we refer to the downlink, see&amp;amp;nbsp; [Gut10]&amp;lt;ref name=&#039;Gut10&#039;&amp;gt;Gutt, E.: &#039;&#039;LTE - a new dimension of mobile broadband use.&#039;&#039; [http://www.ltemobile.de/uploads/media/LTE_Einfuehrung_V1.pdf PDF document on the Internet], 2010.&amp;lt;/ref&amp;gt;:&lt;br /&gt;
*Since LTE is a so-called&amp;amp;nbsp; &amp;lt;i&amp;gt;Shared Medium&amp;lt;/i&amp;gt;&amp;amp;nbsp;, all users of a cell have to share the entire data rate. Note that voice transmission or normal use of the Internet generates less traffic than, for example,&amp;amp;nbsp; &amp;lt;i&amp;gt;Filesharing&amp;lt;/i&amp;gt;&amp;amp;nbsp; or similar.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The faster a user moves, the lower the available data rate will be. An elementary component of the LTE&amp;amp;ndash;specification is that for mobility up to 15 km/h the highest data rates are guaranteed and up to 300 km/h at least still &amp;quot;good functionality&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The highest data rate is achieved in close proximity to the base station. The further away a user is from the base station, the lower the data rate assigned to him, which can be explained by switching from 64&amp;amp;ndash;QAM or 16&amp;amp;ndash;QAM to 4&amp;amp;ndash;QAM (QPSK), among other things.&lt;br /&gt;
&lt;br /&gt;
*Shielding by walls and buildings or sources of interference of any kind limit the achievable data rate enormously. A line-of-sight connection between receiver and base station would be optimal, a scenario that is rather unusual.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reality in summer 2011 was as follows: &amp;amp;nbsp;LTE is already available in some countries (at least for testing purposes). In addition to LTE&amp;amp;ndash;pioneer Sweden, these include the USA and Germany. In various tests, download&amp;amp;ndash;speeds of between 5 and 12 Mbit/s were achieved, and in very good conditions up to 40 Mbit/s. Details can be found in the Internet article&amp;amp;nbsp; [Gol11]&amp;lt;ref name=&#039;Gol11&#039;&amp;gt;Goldman, D.: &#039;&#039;AT&amp;amp;T launching &#039;new&#039; new 4G network.&#039;&#039;  [http://money.cnn.com/2011/05/25/technology/att_4g_lte/index.htm Document on the Internet], 2011.&amp;lt;/ref&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Moreover, the 2011 LTE&amp;amp;ndash;network did not seem ready to replace the established wired Internet connections due to excessive delay times and the resulting occasional connection interruptions. However, the development in this area progressed with giant strides, so that this information from summer 2011 was not relevant for very long.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Some system improvements through LTE-Advanced==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
While the first LTE&amp;amp;ndash;systems corresponding to Release 8 of December 2008 slowly came onto the market in summer 2011, the successor was already on the doorstep. The Release 10 of the &amp;quot;3GPP&amp;quot; completed in June 2011 is &#039;&#039;Long Term Evolution&amp;amp;ndash;Advanced&#039;&#039;, or in short &amp;amp;nbsp;$\rm LTE-A$. It is the first technology to meet the requirements of the ITU (&amp;lt;i&amp;gt;International Telecommunication Union&amp;lt;/i&amp;gt;) for a 4G&amp;amp;ndash;standard. A summary of these requirements, also called &#039;&#039;IMT&amp;amp;ndash;Advanced&#039;&#039;, can be found in great detail in an [http://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-M.2134-2008-PDF-E.pdf ITU&amp;amp;ndash;Article (PDF).]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Without claiming to be exhaustive, some of the features of LTE&amp;amp;ndash;Advanced are mentioned here:&lt;br /&gt;
*The data rate should be up to 1 Gbit/s with little movement of the user and up to 100 Mbit/s with fast movement. In order to achieve these high data rates, some new technical specifications have been made, which will be briefly discussed here.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*LTE&amp;amp;ndash;Advanced supports bandwidths up to 100 MHz maximum, while the LTE&amp;amp;ndash;specification (after Release 8) provides only 20 MHz. The FDD&amp;amp;ndash;spectra no longer have to be divided symmetrically between uplink and downlink. For example, a higher channel bandwidth can be used for the downlink than for the uplink, which corresponds to the normal use of the mobile Internet with a smartphone.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*In the uplink of LTE&amp;amp;ndash;Advanced &amp;amp;nbsp; [[Mobile_Communications/The_Application_of_OFDMA_and_SC-FDMA_in_LTE#Functionality_of_SC.E2.80.93FDMA|SC&amp;amp;ndash;FDMA]]&amp;amp;nbsp; is also used. Since the 3GPP&amp;amp;ndash;Consortium was not satisfied with the SC&amp;amp;ndash;FDMA transmission in LTE, some essential improvements in the process were developed.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Another interesting novelty is the introduction of so-called &amp;quot;Relay Nodes&amp;quot;. Such a&amp;amp;nbsp; &amp;lt;i&amp;gt;Relay Node&amp;lt;/i&amp;gt;&amp;amp;nbsp; (RN) is placed at the edge of a cell to provide better transmission quality at the boundaries of a cell and thus increase the range of the cell.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:P ID2295 LTE T 4 5 S2 v1.png|right|frame|Functionality of Relay Nodes]]&lt;br /&gt;
A&amp;amp;nbsp; &amp;lt;i&amp;gt;Relay Node&amp;lt;/i&amp;gt;&amp;amp;nbsp; looks like a normal base station for a terminal device (&amp;lt;i&amp;gt;eNodeB&amp;lt;/i&amp;gt;). However, it only has to supply a relatively small area of operation and therefore does not need to be connected to the backbone in a complicated way. In most cases a relay node is connected to the next base station via directional radio.&lt;br /&gt;
&lt;br /&gt;
In this way, high data rates and good transmission quality without interruptions are guaranteed without great effort. By increasing the physical proximity to the base stations, the reception quality in buildings is also improved.&lt;br /&gt;
&lt;br /&gt;
Another feature added to LTE&amp;amp;ndash;A is known as&amp;amp;nbsp; &amp;lt;i&amp;gt;Coordinated Multiple Point Transmission and Reception&amp;lt;/i&amp;gt;&amp;amp;nbsp; (CoMP). This is an attempt to reduce the disturbing influence of intercell interference. With intelligent scheduling across several base stations, it is even possible to make intercell interference usable. The information for a terminal device is available at two adjacent base stations and can be transmitted simultaneously. Details on CoMP&amp;amp;ndash;technology can be found, for example, in the internet article&amp;amp;nbsp; [Wan13]&amp;lt;ref name=&#039;Wan13&#039;&amp;gt;Wannstrom, J.: &#039;&#039;LTE&amp;amp;ndash;Advanced&#039;&#039;.  [http://www.3gpp.org/technologies/keywords-acronyms/97-lte-advanced Document on the Internet, 2011]&amp;lt;/ref&amp;gt;&amp;amp;nbsp; from 3gpp.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Intermediate status of 2011:}$&amp;amp;nbsp;&lt;br /&gt;
*Thanks to the above measures in combination with many other improvements, primarily the introduction of 4&amp;amp;times;4&amp;amp;ndash;MIMO for the uplink and 8&amp;amp;times;8&amp;amp;ndash;MIMO in the downlink, it is possible to significantly increase the spectral efficiency (i.e. the transferable flow of information in one Hertz bandwidth within one second) of LTE&amp;amp;ndash;A compared to LTE, namely in the &amp;lt;i&amp;gt;downlink&amp;lt;/i&amp;gt; from 15 bit/s/Hz&amp;amp;nbsp; to &amp;amp;nbsp;$\text{30 bit/s/Hz}$&amp;amp;nbsp; and in the &amp;lt;i&amp;gt;uplink&amp;lt;/i&amp;gt; from 3.75 bit/s/Hz to &amp;amp;nbsp;$\text{15 bit/s/Hz}$.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Of course, backwards compatibility with the previous LTE standard and previous mobile phone systems must also be guaranteed. Also with a UMTS&amp;amp;ndash;cell phone one should be able to dial into a LTE&amp;amp;ndash;network, even if one cannot use the LTE specific features.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*At the beginning of June 2011 the first tests of LTE&amp;amp;ndash;Advanced were conducted. Sweden, which has already set up the first commercial LTE&amp;amp;ndash;network, once again took the lead. Ericsson demonstrated for the first time a test system with practical, commercially available terminals and began commercial use of LTE&amp;amp;ndash;Advanced in 2013. &lt;br /&gt;
*In a YouTube&amp;amp;ndash;video, an LTE&amp;amp;ndash;test can be seen in a moving minibus, in which data rates of over 900 Mbit/s in the downlink and 300 Mbit/s in the uplink were achieved.}}&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standards in competition with LTE or LTE-Advanced ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In addition to the LTE specified by the 3GPP&amp;amp;ndash;consortium, there are other standards that are intended to serve the purpose of fast mobile data transmission. The two most important ones are briefly discussed here:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;cdma2000&#039;&#039;&#039; (or &#039;&#039;IS&amp;amp;ndash;2000&#039;&#039;&amp;amp;nbsp;) and its further development &#039;&#039;&#039;UMB&#039;&#039;&#039; (&amp;lt;i&amp;gt;Ultra Mobile Broadband&amp;lt;/i&amp;gt;&amp;amp;nbsp;):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is a third-generation mobile communications standard that was specified and further developed by&amp;amp;nbsp; [http://www.3gpp2.org/ 3GPP2]&amp;amp;nbsp; (&amp;lt;i&amp;gt;Third Generation Partnership Project 2&amp;lt;/i&amp;gt;&amp;amp;nbsp;). Further information on&amp;amp;nbsp; &#039;&#039;cdma2000&#039;&#039;&amp;amp;nbsp; can be found in the section&amp;amp;nbsp; [[Mobile_Communications/Characteristics_of_UMTS#The_IMT.E2.80.932000.E2.80.93Standard|IMT&amp;amp;ndash;2000&amp;amp;ndash;Standard]]&amp;amp;nbsp; of the book &amp;quot;Examples of communication systems&amp;quot;. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Far less is known about the further development of this standard than about LTE. It is worth mentioning that for&amp;amp;nbsp; &#039;&#039;cdma2000&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;UMB&#039;&#039;&amp;amp;nbsp; there is a substandard specified exclusively for data transmission. The Cologne telecommunications provider&amp;amp;nbsp; &amp;lt;i&amp;gt;NetCologne&amp;lt;/i&amp;gt;&amp;amp;nbsp; has been offering mobile Internet in the 450 MHz range on this basis since 2011. Furthermore, cdma2000 is insignificant in Germany.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Note:&amp;lt;/i&amp;gt; &amp;amp;nbsp; The &amp;quot;3GPP2&amp;quot; was founded almost at the same time as the almost identically named&amp;amp;nbsp; [http://www.3gpp.org/ 3GPP]&amp;amp;nbsp; in December 1998, obviously due to ideological differences.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;WiMAX&#039;&#039;&#039; (&amp;lt;i&amp;gt;Worldwide Interoperability for Microwave Access&amp;lt;/i&amp;gt;&amp;amp;nbsp;):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This term refers to a wireless transmission technology based on the&amp;amp;nbsp; &#039;&#039;IEEE&amp;amp;ndash;Standard 802.16&#039;&#039;&amp;amp;nbsp;. It belongs to the family of the 802&amp;amp;ndash;standards like WLAN (802.11) and Ethernet (802.3). There are two different sub-specifications to WiMAX, namely&lt;br /&gt;
*one for operating a static connection that does not allow handover, and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*one for the mobile operation, which is to compete with UMTS and LTE.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The potential of the static WiMAX&amp;amp;ndash;connections lies mainly in the long range with nevertheless comparatively high data rate. For this reason, static WiMAX was initially traded as DSL&amp;amp;ndash;alternative for thinly populated areas. For example, with a Line of Sight (LoS) connection between transmitter and receiver over 15 kilometers, about 4.5 Mbit/s are possible. In urban areas without line of sight, WiMAX still has a range of about 600 meters, a much better value than the 100 meters typically offered by WLAN.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the moment (2011) we are also working on a further development called &amp;quot;WiMAX2&amp;quot;. According to the initiators, WiMAX2 in the mobile version is a 4G&amp;amp;ndash;standard which, just like LTE&amp;amp;ndash;Advanced, can achieve data rates of up to 1 Gbit/s. WiMAX2 is to be implemented in practice by the end of 2011. It remains to be seen whether it will work on this date and with the predicted data rate.&lt;br /&gt;
&lt;br /&gt;
In Germany, WiMAX does not play a major role (at present), since both the German government in its broadband offensive and all major mobile phone operators have declared&amp;amp;nbsp; &amp;lt;i&amp;gt;Long Term Evolution&amp;lt;/i&amp;gt;&amp;amp;nbsp; (LTE or LTE&amp;amp;ndash;A) to be the future of mobile data transmission.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Milestones in the development of LTE and LTE-Advanced ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Finally, a brief overview of some milestones in the development towards LTE from the perspective of 2011:&lt;br /&gt;
*&#039;&#039;&#039;2004&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; The Japanese telecommunications company&amp;amp;nbsp; [https://www.nttdocomo.co.jp/english/index.html NTT DoCoMo]&amp;amp;nbsp; proposes LTE as the new international mobile communications standard.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;09/2006&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; Nokia Siemens Networks (NSN) presents together with&amp;amp;nbsp; [http://www.nomor.de/ Nomor Research]&amp;amp;nbsp; for the first time an emulator of an LTE&amp;amp;ndash;network. For demonstration purposes, a HD&amp;amp;ndash;video is transmitted and two users play an interactive online game.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;02/2007&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; At the&amp;amp;nbsp; &amp;lt;i&amp;gt;3GSM World Congress&amp;lt;/i&amp;gt;, the world&#039;s largest mobile phone trade fair, the Swedish company&amp;amp;nbsp; [https://www.ericsson.com/ Ericsson]&amp;amp;nbsp; demonstrates an LTE&amp;amp;ndash;system with 144 Mbit/s. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;04/2008&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; [https://de.wikipedia.org/wiki/NTT_DOCOMO DoCoMo]&amp;amp;nbsp; demonstrates an LTE data rate of 250 Mbit/s. Almost simultaneously, &amp;amp;nbsp; [https://de.wikipedia.org/wiki/Nortel Nortel Networks Corp.]&amp;amp;nbsp; (Canada) achieves a data rate of at least 50 Mbit/s at a vehicle speed of 100 km/h.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;10/2008&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; Test of the first working LTE&amp;amp;ndash;modem by Ericsson in Stockholm. This date is the starting point for the commercial use of LTE.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;12/2008&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; Completion of Release 8 of 3GPP, synonymous with LTE. The company&amp;amp;nbsp; [http://www.lg.com/de LG Electronics]&amp;amp;nbsp; develops the first LTE&amp;amp;ndash;chip for cell phones.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;03/2009&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; At the CeBIT in Hanover, Germany,&amp;amp;nbsp; [https://www.t-mobile.de/ T&amp;amp;ndash;Mobile]&amp;amp;nbsp; demonstrates video conferencing and online games from a moving car. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;12/2009&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; The world&#039;s first commercial LTE&amp;amp;ndash;network starts in downtown Stockholm, only 14 months after the start of the test phase.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;04/2010&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; 3GPP begins with the specification of Release 10, synonymous with LTE&amp;amp;ndash;A.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;05/2010&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; The LTE&amp;amp;ndash;frequency auction in Germany ends. At 4.4 billion euros, the proceeds are significantly lower than experts had expected and politicians had hoped for. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;08/2010&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; T-Mobile builds Germany&#039;s first commercially usable LTE&amp;amp;ndash;base station in Kyritz. For functioning operation, suitable terminals are still missing.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;12/2010&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; In Germany, the first major pilot tests are running on the networks of Telekom,&amp;amp;nbsp; [https://www.o2online.de/ O2]&amp;amp;nbsp; and&amp;amp;nbsp; [http://www.vodafone.de/ Vodafone]. In the meantime, corresponding LTE&amp;amp;ndash;routers are available.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;02/2011&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; In South Korea the first successful tests with the successor LTE&amp;amp;ndash;Advanced are being conducted.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;03/2011&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; The 3GPP Release 10 is completed.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;06/2011&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; Launch of the first German LTE&amp;amp;ndash;network in Cologne. By mid-2012, Deutsche Telekom will roll out the LTE&amp;amp;ndash;network across a wide area in 100 additional cities.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Exercise to chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:Exercise 4.5: LTE vs LTE-Advanced]]&lt;br /&gt;
&lt;br /&gt;
==List of Sources==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/LTE-Advanced_-_a_Further_Development_of_LTE&amp;diff=35002</id>
		<title>Mobile Communications/LTE-Advanced - a Further Development of LTE</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/LTE-Advanced_-_a_Further_Development_of_LTE&amp;diff=35002"/>
		<updated>2020-10-18T23:57:33Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; {{LastPage}}&lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=LTE – Long Term Evolution&lt;br /&gt;
|Vorherige Seite=Physical Layer for LTE&lt;br /&gt;
|Nächste Seite=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== How fast is LTE really? ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Consumers are accustomed to being able to use (at least to a large extent) the speed offered by established cable-based services such as&amp;amp;nbsp; [[Examples_of_Communication_Systems/General_Description_of_DSL|DSL]]&amp;amp;nbsp; (&amp;lt;i&amp;gt;Digital Subscriber Line&amp;lt;/i&amp;gt;&amp;amp;nbsp;).&lt;br /&gt;
*But what is the situation with LTE?&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*What data rates can the individual LTE&amp;amp;ndash;user actually reach?&lt;br /&gt;
&lt;br /&gt;
It is much more difficult for the providers of mobile radio systems to provide concrete data rate information, since many influences that are difficult to predict have to be taken into account for a radio connection.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As already described in the chapter&amp;amp;nbsp; [[Mobile_Communications/Technical Innovations of LTE# Multiple Antenna Systems|Technical Innovations of LTE]],&amp;amp;nbsp; according to the planning for 2011 data rates of up to 326 Mbit/s are possible in the LTE&amp;amp;ndash;downlink and approx. 86 Mbit/s in the uplink. These figures are only the maximum achievable values. In reality, however, the speed is determined by a variety of factors. In the following we refer to the downlink, see&amp;amp;nbsp; [Gut10]&amp;lt;ref name=&#039;Gut10&#039;&amp;gt;Gutt, E.: &#039;&#039;LTE - a new dimension of mobile broadband use.&#039;&#039; [http://www.ltemobile.de/uploads/media/LTE_Einfuehrung_V1.pdf PDF document on the Internet], 2010.&amp;lt;/ref&amp;gt;:&lt;br /&gt;
*Since LTE is a so-called&amp;amp;nbsp; &amp;lt;i&amp;gt;Shared Medium&amp;lt;/i&amp;gt;&amp;amp;nbsp;, all users of a cell have to share the entire data rate. Note that voice transmission or normal use of the Internet generates less traffic than, for example,&amp;amp;nbsp; &amp;lt;i&amp;gt;Filesharing&amp;lt;/i&amp;gt;&amp;amp;nbsp; or similar.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The faster a user moves, the lower the available data rate will be. An elementary component of the LTE&amp;amp;ndash;specification is that for mobility up to 15 km/h the highest data rates are guaranteed and up to 300 km/h at least still &amp;quot;good functionality&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The highest data rate is achieved in close proximity to the base station. The further away a user is from the base station, the lower the data rate assigned to him, which can be explained by switching from 64&amp;amp;ndash;QAM or 16&amp;amp;ndash;QAM to 4&amp;amp;ndash;QAM (QPSK), among other things.&lt;br /&gt;
&lt;br /&gt;
*Shielding by walls and buildings or sources of interference of any kind limit the achievable data rate enormously. A Line of Sight connection between receiver and base station would be optimal, but this scenario is rather unusual.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
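The drop in data rate with growing distance, caused by the fallback from 64&amp;amp;ndash;QAM via 16&amp;amp;ndash;QAM to 4&amp;amp;ndash;QAM mentioned above, can be made concrete: an M&amp;amp;ndash;ary QAM symbol carries log&amp;lt;sub&amp;gt;2&amp;lt;/sub&amp;gt;(M) bits. A minimal Python sketch of this relationship (illustrative only, not part of the LTE specification):&lt;br /&gt;

```python
import math

# Bits carried per modulation symbol: log2(M) for an M-ary QAM constellation.
def bits_per_symbol(m: int) -> int:
    return int(math.log2(m))

# LTE falls back from 64-QAM via 16-QAM to 4-QAM (QPSK) as the distance
# to the base station grows; the raw bit rate shrinks proportionally.
for m in (64, 16, 4):
    print(f"{m}-QAM carries {bits_per_symbol(m)} bit/symbol")
```

So moving from 64&amp;amp;ndash;QAM to QPSK alone cuts the raw bit rate per subcarrier by a factor of three, before channel coding and scheduling effects are even considered.&lt;br /&gt;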
The reality in summer 2011 was as follows: &amp;amp;nbsp;LTE was already available in some countries (at least for testing purposes). In addition to LTE&amp;amp;ndash;pioneer Sweden, these included the USA and Germany. In various tests, download speeds of between 5 and 12 Mbit/s were achieved, and under very good conditions up to 40 Mbit/s. Details can be found in the Internet article&amp;amp;nbsp; [Gol11]&amp;lt;ref name=&#039;Gol11&#039;&amp;gt;Goldman, D.: &#039;&#039;AT&amp;amp;T launching &#039;new&#039; 4G network.&#039;&#039; [http://money.cnn.com/2011/05/25/technology/att_4g_lte/index.htm Article on the Internet], 2011.&amp;lt;/ref&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Moreover, the 2011 LTE&amp;amp;ndash;network did not seem ready to replace the established wired Internet connections due to excessive delay times and the resulting occasional connection interruptions. However, the development in this area progressed with giant strides, so that this information from summer 2011 was not relevant for very long.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Some system improvements through LTE-Advanced==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
While the first LTE&amp;amp;ndash;systems corresponding to Release 8 of December 2008 slowly came onto the market in summer 2011, the successor was already on the doorstep. Release 10 of the &amp;quot;3GPP&amp;quot;, completed in June 2011, is &#039;&#039;Long Term Evolution&amp;amp;ndash;Advanced&#039;&#039;, or &amp;amp;nbsp;$\rm LTE-A$&amp;amp;nbsp; for short. It is the first technology to meet the requirements of the ITU (&amp;lt;i&amp;gt;International Telecommunication Union&amp;lt;/i&amp;gt;) for a 4G&amp;amp;ndash;standard. A summary of these requirements, also called &#039;&#039;IMT&amp;amp;ndash;Advanced&#039;&#039;, can be found in great detail in an [http://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-M.2134-2008-PDF-E.pdf ITU&amp;amp;ndash;article (PDF)].&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Without claiming to be exhaustive, some of the features of LTE&amp;amp;ndash;Advanced are mentioned here:&lt;br /&gt;
*The data rate should be up to 1 Gbit/s with little movement of the user and up to 100 Mbit/s with fast movement. In order to achieve these high data rates, some new technical specifications have been made, which will be briefly discussed here.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*LTE&amp;amp;ndash;Advanced supports bandwidths of up to 100 MHz, while the LTE&amp;amp;ndash;specification (according to Release 8) provides only 20 MHz. The FDD&amp;amp;ndash;spectra no longer have to be divided symmetrically between uplink and downlink. For example, a higher channel bandwidth can be used for the downlink than for the uplink, which corresponds to the normal use of the mobile Internet with a smartphone.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*In the uplink of LTE&amp;amp;ndash;Advanced &amp;amp;nbsp; [[Mobile_Communications/The_Application_of_OFDMA_and_SC-FDMA_in_LTE#Functionality_of_SC.E2.80.93FDMA|SC&amp;amp;ndash;FDMA]]&amp;amp;nbsp; is also used. Since the 3GPP&amp;amp;ndash;Consortium was not satisfied with the SC&amp;amp;ndash;FDMA transmission in LTE, some essential improvements in the process were developed.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Another interesting novelty is the introduction of so-called &amp;quot;Relay Nodes&amp;quot;. Such a&amp;amp;nbsp; &amp;lt;i&amp;gt;Relay Node&amp;lt;/i&amp;gt;&amp;amp;nbsp; (RN) is placed at the edge of a cell to provide better transmission quality at the boundaries of a cell and thus increase the range of the cell.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:P ID2295 LTE T 4 5 S2 v1.png|right|frame|Functionality of Relay Nodes]]&lt;br /&gt;
To a terminal device, a&amp;amp;nbsp; &amp;lt;i&amp;gt;Relay Node&amp;lt;/i&amp;gt;&amp;amp;nbsp; looks like a normal base station (&amp;lt;i&amp;gt;eNodeB&amp;lt;/i&amp;gt;). However, it only has to supply a relatively small service area and therefore does not need a complicated connection to the backbone. In most cases a relay node is connected to the nearest base station via directional radio.&lt;br /&gt;
&lt;br /&gt;
In this way, high data rates and good transmission quality without interruptions are guaranteed without great effort. By increasing the physical proximity to the base stations, the reception quality in buildings is also improved.&lt;br /&gt;
&lt;br /&gt;
Another feature added to LTE&amp;amp;ndash;A is known as&amp;amp;nbsp; &amp;lt;i&amp;gt;Coordinated Multiple Point Transmission and Reception&amp;lt;/i&amp;gt;&amp;amp;nbsp; (CoMP). This is an attempt to reduce the disturbing influence of intercell interference. With intelligent scheduling across several base stations, it is even possible to make intercell interference usable. The information for a terminal device is available at two adjacent base stations and can be transmitted simultaneously. Details on CoMP&amp;amp;ndash;technology can be found, for example, in the internet article&amp;amp;nbsp; [Wan13]&amp;lt;ref name=&#039;Wan13&#039;&amp;gt;Wannstrom, J.: &#039;&#039;LTE&amp;amp;ndash;Advanced&#039;&#039;. [http://www.3gpp.org/technologies/keywords-acronyms/97-lte-advanced Document on the Internet], 2011.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; from 3GPP.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Intermediate status of 2011:}$&lt;br /&gt;
*Thanks to the above measures in combination with many other improvements, primarily the introduction of 4&amp;amp;times;4&amp;amp;ndash;MIMO in the uplink and 8&amp;amp;times;8&amp;amp;ndash;MIMO in the downlink, it is possible to significantly increase the spectral efficiency (i.e. the transferable flow of information per Hertz of bandwidth within one second) of LTE&amp;amp;ndash;A compared to LTE, namely in the &amp;lt;i&amp;gt;downlink&amp;lt;/i&amp;gt; from &amp;amp;nbsp;$\text{15 bit/s/Hz}$&amp;amp;nbsp; to &amp;amp;nbsp;$\text{30 bit/s/Hz}$&amp;amp;nbsp; and in the &amp;lt;i&amp;gt;uplink&amp;lt;/i&amp;gt; from &amp;amp;nbsp;$\text{3.75 bit/s/Hz}$&amp;amp;nbsp; to &amp;amp;nbsp;$\text{15 bit/s/Hz}$.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Of course, backwards compatibility with the previous LTE standard and previous mobile phone systems must also be guaranteed. Also with a UMTS&amp;amp;ndash;cell phone one should be able to dial into a LTE&amp;amp;ndash;network, even if one cannot use the LTE specific features.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*At the beginning of June 2011 the first tests of LTE&amp;amp;ndash;Advanced were conducted. Sweden, which had already set up the first commercial LTE&amp;amp;ndash;network, once again took the lead. Ericsson demonstrated for the first time a test system with practical, commercially available terminals and began commercial use of LTE&amp;amp;ndash;Advanced in 2013. &lt;br /&gt;
*In a YouTube&amp;amp;ndash;video, an LTE&amp;amp;ndash;test can be seen in a moving minibus, in which data rates of over 900 Mbit/s in the downlink and 300 Mbit/s in the uplink were achieved.}}&amp;lt;br&amp;gt;&lt;br /&gt;
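The spectral efficiency figures quoted in the box translate into peak data rates via rate = efficiency &amp;amp;times; bandwidth. A rough Python check using the LTE&amp;amp;ndash;A maximum bandwidth of 100 MHz (these are theoretical maxima, well above the 1 Gbit/s IMT&amp;amp;ndash;Advanced target):&lt;br /&gt;

```python
# Peak data rate [bit/s] = spectral efficiency [bit/s/Hz] * bandwidth [Hz].
# Figures from the text: LTE-A downlink 30 bit/s/Hz, uplink 15 bit/s/Hz,
# maximum aggregated bandwidth 100 MHz (theoretical maximum values).
def peak_rate_bits(eff_bit_s_hz: float, bandwidth_hz: float) -> float:
    return eff_bit_s_hz * bandwidth_hz

bw = 100e6  # 100 MHz
print(f"Downlink: {peak_rate_bits(30, bw) / 1e9:.1f} Gbit/s")
print(f"Uplink:   {peak_rate_bits(15, bw) / 1e9:.1f} Gbit/s")
```

Real throughput stays far below these products, since they ignore coding overhead, scheduling, interference and the cell-sharing effects discussed earlier in this chapter.&lt;br /&gt;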
&lt;br /&gt;
== Standards in competition with LTE or LTE-Advanced ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In addition to the LTE specified by the 3GPP&amp;amp;ndash;consortium, there are other standards that are intended to serve the purpose of fast mobile data transmission. The two most important ones are briefly discussed here:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;cdma2000&#039;&#039;&#039; (or &#039;&#039;IS&amp;amp;ndash;2000&#039;&#039;&amp;amp;nbsp;) and its further development &#039;&#039;&#039;UMB&#039;&#039;&#039; (&amp;lt;i&amp;gt;Ultra Mobile Broadband&amp;lt;/i&amp;gt;&amp;amp;nbsp;):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is a third-generation mobile communications standard that was specified and further developed by&amp;amp;nbsp; [http://www.3gpp2.org/ 3GPP2]&amp;amp;nbsp; (&amp;lt;i&amp;gt;Third Generation Partnership Project 2&amp;lt;/i&amp;gt;&amp;amp;nbsp;). Further information on&amp;amp;nbsp; &#039;&#039;cdma2000&#039;&#039;&amp;amp;nbsp; can be found in the section&amp;amp;nbsp; [[Mobile_Communications/The_Characteristics_of_UMTS#The_IMT.E2.80.932000.E2.80.93Standard|IMT&amp;amp;ndash;2000&amp;amp;ndash;Standard]]&amp;amp;nbsp; of the book &amp;quot;Examples of communication systems&amp;quot;. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Far less is known about the further development of this standard than about LTE. It is worth mentioning that for&amp;amp;nbsp; &#039;&#039;cdma2000&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;UMB&#039;&#039;&amp;amp;nbsp; there is a substandard specified exclusively for data transmission. The Cologne telecommunications provider&amp;amp;nbsp; &amp;lt;i&amp;gt;NetCologne&amp;lt;/i&amp;gt;&amp;amp;nbsp; has been offering mobile Internet in the 450 MHz range on this basis since 2011. Apart from this, cdma2000 is insignificant in Germany.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Note:&amp;lt;/i&amp;gt; &amp;amp;nbsp; The &amp;quot;3GPP2&amp;quot; was founded almost at the same time as the almost identically named&amp;amp;nbsp; [http://www.3gpp.org/ 3GPP]&amp;amp;nbsp; in December 1998, apparently due to ideological differences.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;WiMAX&#039;&#039;&#039; (&amp;lt;i&amp;gt;Worldwide Interoperability for Microwave Access&amp;lt;/i&amp;gt;&amp;amp;nbsp;):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This term refers to a wireless transmission technology based on the&amp;amp;nbsp; &#039;&#039;IEEE&amp;amp;ndash;Standard 802.16&#039;&#039;&amp;amp;nbsp;. It belongs to the family of the 802&amp;amp;ndash;standards like WLAN (802.11) and Ethernet (802.3). There are two different sub-specifications to WiMAX, namely&lt;br /&gt;
*one for operating a static connection that does not allow handover, and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*one for the mobile operation, which is to compete with UMTS and LTE.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The potential of the static WiMAX&amp;amp;ndash;connections lies mainly in their long range at a nevertheless comparatively high data rate. For this reason, static WiMAX was initially touted as a DSL&amp;amp;ndash;alternative for sparsely populated areas. For example, with a Line of Sight (LoS) connection between transmitter and receiver over 15 kilometers, about 4.5 Mbit/s are possible. In urban areas without line of sight, WiMAX still has a range of about 600 meters, a much better value than the 100 meters typically offered by WLAN.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the moment (2011), a further development called &amp;quot;WiMAX2&amp;quot; is also being worked on. According to the initiators, WiMAX2 in its mobile version is a 4G&amp;amp;ndash;standard which, just like LTE&amp;amp;ndash;Advanced, can achieve data rates of up to 1 Gbit/s. WiMAX2 is to be implemented in practice by the end of 2011. It remains to be seen whether this schedule and the predicted data rate will be met.&lt;br /&gt;
&lt;br /&gt;
In Germany, WiMAX does not play a major role (at present), since both the German government in its broadband offensive and all major mobile phone operators have declared&amp;amp;nbsp; &amp;lt;i&amp;gt;Long Term Evolution&amp;lt;/i&amp;gt;&amp;amp;nbsp; (LTE or LTE&amp;amp;ndash;A) to be the future of mobile data transmission.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Milestones in the development of LTE and LTE-Advanced ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Finally, a brief overview of some milestones in the development towards LTE from the perspective of 2011:&lt;br /&gt;
*&#039;&#039;&#039;2004&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; The Japanese telecommunications company&amp;amp;nbsp; [https://www.nttdocomo.co.jp/english/index.html NTT DoCoMo]&amp;amp;nbsp; proposes LTE as the new international mobile communications standard.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;09/2006&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; Nokia Siemens Networks (NSN) presents together with&amp;amp;nbsp; [http://www.nomor.de/ Nomor Research]&amp;amp;nbsp; for the first time an emulator of an LTE&amp;amp;ndash;network. For demonstration purposes, a HD&amp;amp;ndash;video is transmitted and two users play an interactive online game.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;02/2007&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; At the&amp;amp;nbsp; &amp;lt;i&amp;gt;3GSM World Congress&amp;lt;/i&amp;gt;, the world&#039;s largest mobile phone trade fair, the Swedish company&amp;amp;nbsp; [https://www.ericsson.com/ Ericsson]&amp;amp;nbsp; demonstrates an LTE&amp;amp;ndash;system with 144 Mbit/s. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;04/2008&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; [https://de.wikipedia.org/wiki/NTT_DOCOMO DoCoMo]&amp;amp;nbsp; demonstrates an LTE data rate of 250 Mbit/s. Almost simultaneously,&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Nortel Nortel Networks Corp.]&amp;amp;nbsp; (Canada) achieves a data rate of at least 50 Mbit/s at a vehicle speed of 100 km/h.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;10/2008&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; Test of the first working LTE&amp;amp;ndash;modem by Ericsson in Stockholm. This date is the starting point for the commercial use of LTE.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;12/2008&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; Completion of Release 8 of 3GPP, synonymous with LTE. The company&amp;amp;nbsp; [http://www.lg.com/de LG Electronics]&amp;amp;nbsp; develops the first LTE&amp;amp;ndash;chip for cell phones.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;03/2009&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; At the CeBIT in Hanover, Germany,&amp;amp;nbsp; [https://www.t-mobile.de/ T&amp;amp;ndash;Mobile]&amp;amp;nbsp; demonstrates video conferencing and online games from a moving car. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;12/2009&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; The world&#039;s first commercial LTE&amp;amp;ndash;network starts in downtown Stockholm, only 14 months after the start of the test phase.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;04/2010&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; 3GPP begins with the specification of Release 10, synonymous with LTE&amp;amp;ndash;A.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;05/2010&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; The LTE&amp;amp;ndash;frequency auction in Germany ends. At 4.4 billion euros, the proceeds are significantly lower than experts had expected and politicians had hoped for. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;08/2010&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; T-Mobile is building Germany&#039;s first commercially usable LTE&amp;amp;ndash;base station in Kyritz. For a functioning operation, suitable terminals are still missing.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;12/2010&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; In Germany, the first major pilot tests are running on the networks of Telekom,&amp;amp;nbsp; [https://www.o2online.de/ O2]&amp;amp;nbsp; and&amp;amp;nbsp; [http://www.vodafone.de/ Vodafone]. In the meantime, corresponding LTE&amp;amp;ndash;routers are available.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;02/2011&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; In South Korea the first successful tests with the successor LTE&amp;amp;ndash;Advanced are being conducted.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;03/2011&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; The 3GPP Release 10 is completed.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;06/2011&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; Launch of the first German LTE&amp;amp;ndash;network in Cologne. By mid-2012, Deutsche Telekom will ensure that the LTE&amp;amp;ndash;network is rolled out across a wide area in 100 additional cities.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Exercise to chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:Exercise 4.5: LTE vs LTE-Advanced]]&lt;br /&gt;
&lt;br /&gt;
==List of Sources==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/LTE-Advanced_-_a_Further_Development_of_LTE&amp;diff=35001</id>
		<title>Mobile Communications/LTE-Advanced - a Further Development of LTE</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/LTE-Advanced_-_a_Further_Development_of_LTE&amp;diff=35001"/>
		<updated>2020-10-18T23:57:00Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; {{LastPage}}&lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=LTE – Long Term Evolution&lt;br /&gt;
|Vorherige Seite=Physical Layer for LTE&lt;br /&gt;
|Nächste Seite=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== How fast is LTE really? ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Consumers are accustomed to being able to use (at least to a large extent) the speed offered by established cable-based services such as&amp;amp;nbsp; [[Examples_of_Communication_Systems/General_Description_of_DSL|DSL]]&amp;amp;nbsp; (&amp;lt;i&amp;gt;Digital Subscriber Line&amp;lt;/i&amp;gt;&amp;amp;nbsp;).&lt;br /&gt;
*But what is the situation with LTE?&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*What data rates can the individual LTE&amp;amp;ndash;user actually reach?&lt;br /&gt;
&lt;br /&gt;
It is much more difficult for the providers of mobile radio systems to provide concrete data rate information, since many influences that are difficult to predict have to be taken into account for a radio connection.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As already described in the chapter&amp;amp;nbsp; [[Mobile_Communications/Technical Innovations of LTE# Multiple Antenna Systems|Technical Innovations of LTE]],&amp;amp;nbsp; according to the planning for 2011 data rates of up to 326 Mbit/s are possible in the LTE&amp;amp;ndash;downlink and approx. 86 Mbit/s in the uplink. These figures are only the maximum achievable values. In reality, however, the speed is determined by a variety of factors. In the following we refer to the downlink, see&amp;amp;nbsp; [Gut10]&amp;lt;ref name=&#039;Gut10&#039;&amp;gt;Gutt, E.: &#039;&#039;LTE - a new dimension of mobile broadband use.&#039;&#039; [http://www.ltemobile.de/uploads/media/LTE_Einfuehrung_V1.pdf PDF document on the Internet], 2010.&amp;lt;/ref&amp;gt;:&lt;br /&gt;
*Since LTE is a so-called&amp;amp;nbsp; &amp;lt;i&amp;gt;Shared Medium&amp;lt;/i&amp;gt;&amp;amp;nbsp;, all users of a cell have to share the entire data rate. Note that voice transmission or normal use of the Internet generates less traffic than, for example,&amp;amp;nbsp; &amp;lt;i&amp;gt;Filesharing&amp;lt;/i&amp;gt;&amp;amp;nbsp; or similar.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The faster a user moves, the lower the available data rate will be. An elementary component of the LTE&amp;amp;ndash;specification is that for mobility up to 15 km/h the highest data rates are guaranteed and up to 300 km/h at least still &amp;quot;good functionality&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The highest data rate is achieved in close proximity to the base station. The further away a user is from the base station, the lower the data rate assigned to him, which can be explained by switching from 64&amp;amp;ndash;QAM or 16&amp;amp;ndash;QAM to 4&amp;amp;ndash;QAM (QPSK), among other things.&lt;br /&gt;
&lt;br /&gt;
*Shielding by walls and buildings or sources of interference of any kind limit the achievable data rate enormously. A Line of Sight connection between receiver and base station would be optimal, but this scenario is rather unusual.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reality in summer 2011 was as follows: &amp;amp;nbsp;LTE was already available in some countries (at least for testing purposes). In addition to LTE&amp;amp;ndash;pioneer Sweden, these included the USA and Germany. In various tests, download speeds of between 5 and 12 Mbit/s were achieved, and under very good conditions up to 40 Mbit/s. Details can be found in the Internet article&amp;amp;nbsp; [Gol11]&amp;lt;ref name=&#039;Gol11&#039;&amp;gt;Goldman, D.: &#039;&#039;AT&amp;amp;T launching &#039;new&#039; 4G network.&#039;&#039; [http://money.cnn.com/2011/05/25/technology/att_4g_lte/index.htm Article on the Internet], 2011.&amp;lt;/ref&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Moreover, the 2011 LTE&amp;amp;ndash;network did not seem ready to replace the established wired Internet connections due to excessive delay times and the resulting occasional connection interruptions. However, the development in this area progressed with giant strides, so that this information from summer 2011 was not relevant for very long.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Some system improvements through LTE-Advanced==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
While the first LTE&amp;amp;ndash;systems corresponding to Release 8 of December 2008 slowly came onto the market in summer 2011, the successor was already on the doorstep. Release 10 of the &amp;quot;3GPP&amp;quot;, completed in June 2011, is &#039;&#039;Long Term Evolution&amp;amp;ndash;Advanced&#039;&#039;, or &amp;amp;nbsp;$\rm LTE-A$&amp;amp;nbsp; for short. It is the first technology to meet the requirements of the ITU (&amp;lt;i&amp;gt;International Telecommunication Union&amp;lt;/i&amp;gt;) for a 4G&amp;amp;ndash;standard. A summary of these requirements, also called &#039;&#039;IMT&amp;amp;ndash;Advanced&#039;&#039;, can be found in great detail in an [http://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-M.2134-2008-PDF-E.pdf ITU&amp;amp;ndash;article (PDF)].&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Without claiming to be exhaustive, some of the features of LTE&amp;amp;ndash;Advanced are mentioned here:&lt;br /&gt;
*The data rate should be up to 1 Gbit/s with little movement of the user and up to 100 Mbit/s with fast movement. In order to achieve these high data rates, some new technical specifications have been made, which will be briefly discussed here.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*LTE&amp;amp;ndash;Advanced supports bandwidths of up to 100 MHz, while the LTE&amp;amp;ndash;specification (according to Release 8) provides only 20 MHz. The FDD&amp;amp;ndash;spectra no longer have to be divided symmetrically between uplink and downlink. For example, a higher channel bandwidth can be used for the downlink than for the uplink, which corresponds to the normal use of the mobile Internet with a smartphone.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*In the uplink of LTE&amp;amp;ndash;Advanced &amp;amp;nbsp; [[Mobile_Communications/The_Application_of_OFDMA_and_SC-FDMA_in_LTE#Functionality_of_SC.E2.80.93FDMA|SC&amp;amp;ndash;FDMA]]&amp;amp;nbsp; is also used. Since the 3GPP&amp;amp;ndash;Consortium was not satisfied with the SC&amp;amp;ndash;FDMA transmission in LTE, some essential improvements in the process were developed.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Another interesting novelty is the introduction of so-called &amp;quot;Relay Nodes&amp;quot;. Such a&amp;amp;nbsp; &amp;lt;i&amp;gt;Relay Node&amp;lt;/i&amp;gt;&amp;amp;nbsp; (RN) is placed at the edge of a cell to provide better transmission quality at the boundaries of a cell and thus increase the range of the cell.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:P ID2295 LTE T 4 5 S2 v1.png|right|frame|Functionality of Relay Nodes]]&lt;br /&gt;
To a terminal device, a&amp;amp;nbsp; &amp;lt;i&amp;gt;Relay Node&amp;lt;/i&amp;gt;&amp;amp;nbsp; looks like a normal base station (&amp;lt;i&amp;gt;eNodeB&amp;lt;/i&amp;gt;). However, it only has to supply a relatively small service area and therefore does not need a complicated connection to the backbone. In most cases a relay node is connected to the nearest base station via directional radio.&lt;br /&gt;
&lt;br /&gt;
In this way, high data rates and good transmission quality without interruptions are guaranteed without great effort. By increasing the physical proximity to the base stations, the reception quality in buildings is also improved.&lt;br /&gt;
&lt;br /&gt;
Another feature added to LTE&amp;amp;ndash;A is known as&amp;amp;nbsp; &amp;lt;i&amp;gt;Coordinated Multiple Point Transmission and Reception&amp;lt;/i&amp;gt;&amp;amp;nbsp; (CoMP). This is an attempt to reduce the disturbing influence of intercell interference. With intelligent scheduling across several base stations, it is even possible to make intercell interference usable. The information for a terminal device is available at two adjacent base stations and can be transmitted simultaneously. Details on CoMP&amp;amp;ndash;technology can be found, for example, in the internet article&amp;amp;nbsp; [Wan13]&amp;lt;ref name=&#039;Wan13&#039;&amp;gt;Wannstrom, J.: &#039;&#039;LTE&amp;amp;ndash;Advanced&#039;&#039;. [http://www.3gpp.org/technologies/keywords-acronyms/97-lte-advanced Document on the Internet], 2011.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; from 3GPP.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Intermediate status of 2011:}$&lt;br /&gt;
*Thanks to the above measures in combination with many other improvements, primarily the introduction of 4&amp;amp;times;4&amp;amp;ndash;MIMO in the uplink and 8&amp;amp;times;8&amp;amp;ndash;MIMO in the downlink, it is possible to significantly increase the spectral efficiency (i.e. the transferable flow of information per Hertz of bandwidth within one second) of LTE&amp;amp;ndash;A compared to LTE, namely in the &amp;lt;i&amp;gt;downlink&amp;lt;/i&amp;gt; from &amp;amp;nbsp;$\text{15 bit/s/Hz}$&amp;amp;nbsp; to &amp;amp;nbsp;$\text{30 bit/s/Hz}$&amp;amp;nbsp; and in the &amp;lt;i&amp;gt;uplink&amp;lt;/i&amp;gt; from &amp;amp;nbsp;$\text{3.75 bit/s/Hz}$&amp;amp;nbsp; to &amp;amp;nbsp;$\text{15 bit/s/Hz}$.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Of course, backwards compatibility with the previous LTE standard and previous mobile phone systems must also be guaranteed. Also with a UMTS&amp;amp;ndash;cell phone one should be able to dial into a LTE&amp;amp;ndash;network, even if one cannot use the LTE specific features.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*At the beginning of June 2011 the first tests of LTE&amp;amp;ndash;Advanced were conducted. Sweden, which had already set up the first commercial LTE&amp;amp;ndash;network, once again took the lead. Ericsson demonstrated for the first time a test system with practical, commercially available terminals and began commercial use of LTE&amp;amp;ndash;Advanced in 2013. &lt;br /&gt;
*In a YouTube&amp;amp;ndash;video, an LTE&amp;amp;ndash;test can be seen in a moving minibus, in which data rates of over 900 Mbit/s in the downlink and 300 Mbit/s in the uplink were achieved.}}&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Standards in competition with LTE or LTE-Advanced ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In addition to the LTE specified by the 3GPP&amp;amp;ndash;consortium, there are other standards that are intended to serve the purpose of fast mobile data transmission. The two most important ones are briefly discussed here:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;cdma2000&#039;&#039;&#039; (or &#039;&#039;IS&amp;amp;ndash;2000&#039;&#039;&amp;amp;nbsp;) and its further development &#039;&#039;&#039;UMB&#039;&#039;&#039; (&amp;lt;i&amp;gt;Ultra Mobile Broadband&amp;lt;/i&amp;gt;&amp;amp;nbsp;):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is a third-generation mobile communications standard that was specified and further developed by&amp;amp;nbsp; [http://www.3gpp2.org/ 3GPP2]&amp;amp;nbsp; (&amp;lt;i&amp;gt;Third Generation Partnership Project 2&amp;lt;/i&amp;gt;&amp;amp;nbsp;). Further information on&amp;amp;nbsp; &#039;&#039;cdma2000&#039;&#039;&amp;amp;nbsp; can be found in the section&amp;amp;nbsp; [[Mobile_Communications/The_Characteristics_of_UMTS#The_IMT.E2.80.932000.E2.80.93Standard|IMT&amp;amp;ndash;2000&amp;amp;ndash;Standard]]&amp;amp;nbsp; of the book &amp;quot;Examples of Communication Systems&amp;quot;. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Far less is known about the further development of this standard than about LTE. It is worth mentioning that for&amp;amp;nbsp; &#039;&#039;cdma2000&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;UMB&#039;&#039;&amp;amp;nbsp; there is a substandard specified exclusively for data transmission. The Cologne telecommunications provider&amp;amp;nbsp; &amp;lt;i&amp;gt;NetCologne&amp;lt;/i&amp;gt;&amp;amp;nbsp; has been offering mobile Internet in the 450 MHz range on this basis since 2011. Apart from that, cdma2000 is insignificant in Germany.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Note:&amp;lt;/i&amp;gt; &amp;amp;nbsp; The &amp;quot;3GPP2&amp;quot; was founded in December 1998, at almost the same time as the nearly identically named&amp;amp;nbsp; [http://www.3gpp.org/ 3GPP], apparently due to ideological differences.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;WiMAX&#039;&#039;&#039; (&amp;lt;i&amp;gt;Worldwide Interoperability for Microwave Access&amp;lt;/i&amp;gt;&amp;amp;nbsp;):&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This term refers to a wireless transmission technology based on the&amp;amp;nbsp; &#039;&#039;IEEE&amp;amp;ndash;standard 802.16&#039;&#039;. It belongs to the family of 802&amp;amp;ndash;standards like WLAN (802.11) and Ethernet (802.3). There are two different sub-specifications of WiMAX, namely&lt;br /&gt;
*one for operating a static connection that does not allow handover, and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*one for the mobile operation, which is to compete with UMTS and LTE.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The potential of static WiMAX&amp;amp;ndash;connections lies mainly in the long range combined with a comparatively high data rate. For this reason, static WiMAX was initially considered a DSL&amp;amp;ndash;alternative for thinly populated areas. For example, with a line-of-sight (LoS) connection between transmitter and receiver over 15 kilometers, about 4.5 Mbit/s are possible. In urban areas without line of sight, WiMAX still has a range of about 600 meters, a much better value than the 100 meters typically offered by WLAN.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the moment (2011), work is also underway on a further development called &amp;quot;WiMAX2&amp;quot;. According to the initiators, WiMAX2 in its mobile version is a 4G&amp;amp;ndash;standard which, just like LTE&amp;amp;ndash;Advanced, can achieve data rates of up to 1 Gbit/s. WiMAX2 is to be deployed in practice by the end of 2011. It remains to be seen whether this date and the predicted data rate will be met.&lt;br /&gt;
&lt;br /&gt;
In Germany, WiMAX does not (at present) play a major role, since both the German government in its broadband initiative and all major mobile network operators have declared&amp;amp;nbsp; &amp;lt;i&amp;gt;Long Term Evolution&amp;lt;/i&amp;gt;&amp;amp;nbsp; (LTE or LTE&amp;amp;ndash;A) to be the future of mobile data transmission.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Milestones in the development of LTE and LTE-Advanced ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Finally, a brief overview of some milestones in the development towards LTE from the perspective of 2011:&lt;br /&gt;
*&#039;&#039;&#039;2004&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; The Japanese telecommunications company&amp;amp;nbsp; [https://www.nttdocomo.co.jp/english/index.html NTT DoCoMo]&amp;amp;nbsp; proposes LTE as the new international mobile communications standard.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;09/2006&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; Nokia Siemens Networks (NSN) presents together with&amp;amp;nbsp; [http://www.nomor.de/ Nomor Research]&amp;amp;nbsp; for the first time an emulator of an LTE&amp;amp;ndash;network. For demonstration purposes, a HD&amp;amp;ndash;video is transmitted and two users play an interactive online game.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;02/2007&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; At the&amp;amp;nbsp; &amp;lt;i&amp;gt;3GSM World Congress&amp;lt;/i&amp;gt;, the world&#039;s largest mobile phone trade fair, the Swedish company&amp;amp;nbsp; [https://www.ericsson.com/ Ericsson]&amp;amp;nbsp; will demonstrate an LTE&amp;amp;ndash;system with 144 Mbit/s. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;04/2008&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; [https://de.wikipedia.org/wiki/NTT_DOCOMO DoCoMo]&amp;amp;nbsp; demonstrates an LTE data rate of 250 Mbit/s. Almost simultaneously, &amp;amp;nbsp; [https://de.wikipedia.org/wiki/Nortel Nortel Networks Corp.]&amp;amp;nbsp; (Canada) achieves a data rate of at least 50 Mbit/s at a vehicle speed of 100 km/h.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;10/2008&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; Test of the first working LTE&amp;amp;ndash;modem by Ericsson in Stockholm. This date is the starting point for the commercial use of LTE.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;12/2008&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; Completion of Release 8 of 3GPP, synonymous with LTE. The company&amp;amp;nbsp; [http://www.lg.com/de LG Electronics]&amp;amp;nbsp; develops the first LTE&amp;amp;ndash;chip for cell phones.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;03/2009&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; At the CeBIT in Hanover, Germany,&amp;amp;nbsp; [https://www.t-mobile.de/ T&amp;amp;ndash;Mobile]&amp;amp;nbsp; demonstrates video conferencing and online gaming from a moving car. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;12/2009&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; The world&#039;s first commercial LTE&amp;amp;ndash;network starts in downtown Stockholm, only 14 months after the start of the test phase.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;04/2010&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; 3GPP begins with the specification of Release 10, synonymous with LTE&amp;amp;ndash;A.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;05/2010&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; The LTE&amp;amp;ndash;frequency auction in Germany ends. At 4.4 billion euros, the proceeds are significantly lower than experts had expected and politicians had hoped for. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;08/2010&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; T-Mobile builds Germany&#039;s first commercially usable LTE&amp;amp;ndash;base station in Kyritz. For a functioning operation, suitable terminals are still missing.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;12/2010&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; In Germany, the first major pilot tests are running on the networks of Telekom,&amp;amp;nbsp; [https://www.o2online.de/ O2]&amp;amp;nbsp; and&amp;amp;nbsp; [http://www.vodafone.de/ Vodafone]. In the meantime, corresponding LTE&amp;amp;ndash;routers are available.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;02/2011&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; In South Korea the first successful tests with the successor LTE&amp;amp;ndash;Advanced are being conducted.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;03/2011&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; The 3GPP Release 10 is completed.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;06/2011&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;nbsp; Launch of the first German LTE&amp;amp;ndash;network in Cologne. By mid-2012, Deutsche Telekom intends to roll out the LTE&amp;amp;ndash;network across a wide area in 100 additional cities.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Exercise to chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:Exercise 4.5: LTE vs LTE-Advanced]]&lt;br /&gt;
&lt;br /&gt;
==List of Sources==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/Technical_Innovations_of_LTE&amp;diff=35000</id>
		<title>Mobile Communications/Technical Innovations of LTE</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/Technical_Innovations_of_LTE&amp;diff=35000"/>
		<updated>2020-10-18T18:00:58Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=LTE – Long Term Evolution&lt;br /&gt;
|Vorherige Seite=General Information on the LTE Mobile Communications Standard&lt;br /&gt;
|Nächste Seite=The Application of OFDMA and SC-FDMA in LTE&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== For voice transmission with LTE ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Unlike previous mobile phone standards, LTE only supports &#039;&#039;packet-oriented transmission&#039;&#039;. For voice transmission, however, a connection-oriented transmission with fixed reservation of resources would be better suited, since the &amp;quot;fragmented transmission&amp;quot; of the packet-oriented method is relatively complicated for voice.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem of integrating voice transmission was one of the major challenges in the development of LTE, as voice transmission remains the largest source of revenue for network operators. There were a number of approaches, as can be seen in the Internet article &amp;amp;nbsp; [Gut10]&amp;lt;ref name=&#039;Gut10&#039;&amp;gt;Gutt, E.: &#039;&#039;LTE - a new dimension of mobile broadband use&#039;&#039;. [http://www.ltemobile.de/uploads/media/LTE_Einfuehrung_V1.pdf PDF document on the Internet], 2010.&amp;lt;/ref&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(1)&#039;&#039;&#039; &amp;amp;nbsp; A very simple and obvious method is&amp;amp;nbsp; &amp;lt;i&amp;gt;Circuit Switched Fallback&amp;lt;/i&amp;gt;&amp;amp;nbsp; (&#039;&#039;&#039;CSFB&#039;&#039;&#039;). Here a circuit-switched connection is used for the voice transmission. The principle is:&lt;br /&gt;
*The terminal device logs on to the LTE&amp;amp;ndash;network and in parallel also to a GSM&amp;amp;ndash; or UMTS&amp;amp;ndash;network. When an incoming call is received, the terminal device receives a message from the&amp;amp;nbsp; &amp;lt;i&amp;gt;Mobile Management Entity&amp;lt;/i&amp;gt;&amp;amp;nbsp; (MME, the control node in the LTE&amp;amp;ndash;network for user authentication), whereupon a circuit-switched connection via the GSM&amp;amp;ndash; or UMTS&amp;amp;ndash;network is established.&lt;br /&gt;
*A disadvantage of this solution (actually it is a &amp;quot;problem concealment&amp;quot;) is the greatly delayed connection establishment. In addition, CSFB prevents the complete conversion of the network to LTE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(2)&#039;&#039;&#039; &amp;amp;nbsp; Another possibility for integrating voice into a packet-oriented transmission system is offered by&amp;amp;nbsp; &amp;lt;i&amp;gt;Voice over LTE via GAN&amp;lt;/i&amp;gt;&amp;amp;nbsp; (&#039;&#039;&#039;VoLGA&#039;&#039;&#039;), which is based on the&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Generic_Access_Network Generic Access Network]&amp;amp;nbsp; (GAN) developed by&amp;amp;nbsp; [[Mobile_Communications/General Information on the LTE Mobile Communications Standard#3GPP_. E2.80.93_Third_Generation_Partnership_Project| 3GPP]]. In brief, the principle can be described as follows:&lt;br /&gt;
* GAN enables circuit-switched services via a packet-oriented network (IP&amp;amp;ndash;network), for example WLAN&amp;amp;nbsp; (&amp;lt;i&amp;gt;Wireless Local Area Network&amp;lt;/i&amp;gt;). With compatible end devices, one can register in the GSM&amp;amp;ndash;network over a WLAN&amp;amp;ndash;connection and use circuit-switched services. VoLGA uses this functionality by replacing WLAN with LTE.&lt;br /&gt;
* The fast implementation of VoLGA is advantageous, as no lengthy new development or changes to the core network are necessary. However, a so-called&amp;amp;nbsp; &amp;lt;i&amp;gt;VoLGA Access Network Controller&amp;lt;/i&amp;gt;&amp;amp;nbsp; (VANC) must be added to the network as hardware. This takes care of the communication between the end device and the&amp;amp;nbsp; &amp;lt;i&amp;gt;Mobile Management Entity&amp;lt;/i&amp;gt;&amp;amp;nbsp; or the core network.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though VoLGA, unlike CSFB, does not need to use a GSM&amp;amp;ndash; or UMTS&amp;amp;ndash;network for voice connections, it was regarded by the majority of the mobile communications community as an (unsatisfactory) bridge technology because of its limited user-friendliness. T&amp;amp;ndash;Mobile was long a proponent of the VoLGA&amp;amp;ndash;technology, but stopped further development in February 2011.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the following we describe a better solution proposal. Keywords are&amp;amp;nbsp; &amp;lt;i&amp;gt;IP Multimedia Subsystem&amp;lt;/i&amp;gt;&amp;amp;nbsp; (IMS) and&amp;amp;nbsp; &amp;lt;i&amp;gt;Voice over LTE&amp;lt;/i&amp;gt;&amp;amp;nbsp; (VoLTE). The operators in Germany switched to this technology relatively late: &amp;amp;nbsp; Vodafone and O2 Telefonica at the beginning of 2015, Telekom at the beginning of 2016. &lt;br /&gt;
&lt;br /&gt;
This is also the reason why the switch to LTE in Germany (and in Europe in general) was slower than in the USA.  Many customers did not want to pay the higher prices for LTE as long as there was no well-functioning solution for integrating voice transmission.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== VoLTE - Voice over LTE ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
From today&#039;s point of view (2016), the most promising, and in part already established, approach to integrating voice services into the LTE&amp;amp;ndash;network is&amp;amp;nbsp; &amp;lt;i&amp;gt;Voice over LTE&amp;lt;/i&amp;gt;&amp;amp;nbsp; &amp;amp;ndash; in short: &#039;&#039;&#039;VoLTE&#039;&#039;&#039;. This standard, officially adopted by the&amp;amp;nbsp; [http://www.gsma.com/aboutus/ GSMA],&amp;amp;nbsp; the worldwide industry association of more than 800 mobile network operators and over 200 manufacturers of cell phones and network infrastructure, is exclusively IP&amp;amp;ndash;packet-oriented and is based on the&amp;amp;nbsp; &amp;lt;i&amp;gt;IP Multimedia Subsystem&amp;lt;/i&amp;gt;&amp;amp;nbsp; (&#039;&#039;&#039;IMS&#039;&#039;&#039;), which was already defined in UMTS&amp;amp;ndash;Release 9 in 2010. The technical facts about IMS are:&lt;br /&gt;
*The IMS&amp;amp;ndash;basic protocol is the&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Session_Initiation_Protocol Session Initiation Protocol]&amp;amp;nbsp; (SIP), known from&amp;amp;nbsp; &amp;lt;i&amp;gt;Voice over IP&amp;lt;/i&amp;gt;.  This is a network protocol that can be used to establish and control connections between two users.&lt;br /&gt;
* This protocol enables the development of a completely (for data &amp;lt;u&amp;gt;and&amp;lt;/u&amp;gt; voice) IP-based network and is therefore future-proof.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reason why the introduction of VoLTE was delayed by four years compared to the establishment of LTE in data traffic is the difficult interaction of &amp;quot;4G&amp;quot; with the older predecessor standards&amp;amp;nbsp; GSM&amp;amp;nbsp; (&amp;quot;2G&amp;quot;) and&amp;amp;nbsp; UMTS&amp;amp;nbsp; (&amp;quot;3G&amp;quot;). Here is an example:&lt;br /&gt;
*If a mobile phone user leaves his LTE&amp;amp;ndash;cell and switches to an area without 4G&amp;amp;ndash;coverage, an immediate switch to the next best standard (3G) must be made.&lt;br /&gt;
&lt;br /&gt;
*Voice is transmitted there in a technically completely different way: no longer in many small data packets &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; &amp;quot;packet-switched&amp;quot;, but sequentially in the logical and physical channels reserved specifically for the user &amp;amp;nbsp; &amp;amp;#8658;&amp;amp;nbsp; &amp;quot;circuit-switched&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*This switchover must be so fast and seamless that the end customer does not notice anything. And it must work for all mobile phone standards and technologies.&lt;br /&gt;
&lt;br /&gt;
According to all the experts, VoLTE will have a positive impact on mobile telephony in the same way that LTE has driven the mobile Internet forward since 2011. Key benefits for users are:&lt;br /&gt;
*A&amp;amp;nbsp; &amp;lt;i&amp;gt;higher voice quality&amp;lt;/i&amp;gt;, as VoLTE uses&amp;amp;nbsp; [[Examples_of_Communication_Systems/Nachrichtentechnische_Aspekte_von_UMTS#Verbesserungen_bez.C3.BCglich_Sprachcodierung| AMR&amp;amp;ndash;Wideband codecs]]&amp;amp;nbsp; at 12.65 or 23.85 kbit/s. Furthermore, the VoLTE&amp;amp;ndash;data packets are prioritized for the lowest possible latencies.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*An enormously&amp;amp;nbsp; &amp;lt;i&amp;gt;accelerated connection setup&amp;lt;/i&amp;gt; within one or two seconds, whereas with&amp;amp;nbsp; &amp;lt;i&amp;gt;Circuit Switched Fallback&amp;lt;/i&amp;gt; (CSFB) it takes an unpleasantly long time to establish a connection.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*A&amp;amp;nbsp; &amp;lt;i&amp;gt;low battery consumption&amp;lt;/i&amp;gt;, significantly lower than &amp;quot;2G&amp;quot; and &amp;quot;3G&amp;quot;, associated with a longer battery life. Also in comparison to the usual VoIP&amp;amp;ndash;services the power consumption is up to 40% lower.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the provider&#039;s point of view, the following advantages result:&lt;br /&gt;
*A&amp;amp;nbsp; &amp;lt;i&amp;gt;better spectral efficiency&amp;lt;/i&amp;gt;: &amp;amp;nbsp; Twice as many calls are possible in the same frequency band as with &amp;quot;3G&amp;quot;. In other words: &amp;amp;nbsp; more capacity is available for data services for the same number of calls.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*An easy implementation of&amp;amp;nbsp; [https://de.ryte.com/wiki/Rich_Media Rich Media Services]&amp;amp;nbsp; (RCS), for example for video telephony or future applications that can be used to attract new customers.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*A&amp;amp;nbsp; &amp;lt;i&amp;gt;better acceptance&amp;lt;/i&amp;gt;&amp;amp;nbsp; of the higher provisioning costs by LTE&amp;amp;ndash;customers if telephony no longer has to be handed off to a &amp;quot;lower-value&amp;quot; network like &amp;quot;2G&amp;quot; or &amp;quot;3G&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Bandwidth flexibility ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
LTE can be adapted to frequency bands of different widths with relatively little effort by using&amp;amp;nbsp; [[Modulation_Methods/Allgemeine_Beschreibung_von_OFDM#Das_Prinzip_von_OFDM_.E2.80.93_Systembetrachtung_im_Zeitbereich_.281.29|OFDM]]&amp;amp;nbsp; (&amp;quot;Orthogonal Frequency Division Multiplex&amp;quot;). This fact is an important feature for various reasons, see&amp;amp;nbsp; [Mey10]&amp;lt;ref name=&#039;Mey10&#039;&amp;gt;Meyer, M.: &#039;&#039;Siebenmeilenfunk.&#039;&#039; c&#039;t 2010, issue 25, 2010.&amp;lt;/ref&amp;gt;, especially for network operators:&lt;br /&gt;
*The frequency bands for LTE may vary in size depending on the legal requirements in different countries. The outcome of the state-specific auctions of LTE&amp;amp;ndash;frequencies (separated into FDD and TDD) has also influenced the width of the spectrum.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Often LTE is operated in the &amp;quot;frequency&amp;amp;ndash;neighborhood&amp;quot; of established radio transmission systems that are expected to be switched off soon. If demand increases, LTE can be gradually expanded into the frequency range that becomes available.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*One example is the migration of television channels after digitization: &amp;amp;nbsp; A part of the LTE&amp;amp;ndash;network will be located in the VHF&amp;amp;ndash;frequency range around 800 MHz, which has now been freed up, see&amp;amp;nbsp; [[Mobile_Communications/General Information on the LTE Mobile Communications Standard#LTE_Frequency_Band_Splitting|Frequency_Band_Splitting Graphic]].&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Actually, the bandwidth could be selected with a granularity as fine as 15 kHz (corresponding to one OFDMA&amp;amp;ndash;subcarrier). However, since this would produce unnecessary overhead, a duration of&amp;amp;nbsp; &#039;&#039;&#039;one millisecond&#039;&#039;&#039;&amp;amp;nbsp; and a bandwidth of&amp;amp;nbsp; &#039;&#039;&#039;180 kHz&#039;&#039;&#039;&amp;amp;nbsp; have been specified as the smallest addressable LTE&amp;amp;ndash;resource. Such a block corresponds to twelve subcarriers (180 kHz divided by 15 kHz).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In order to keep the complexity and effort of hardware standardization as low as possible, a whole range of permissible bandwidths between 1.4 MHz and 20 MHz has been agreed upon. The following list &amp;amp;ndash; taken from&amp;amp;nbsp; [Ges08]&amp;lt;ref name=&#039;Ges08&#039;&amp;gt;Gessner, C.: &#039;&#039;UMTS Long Term Evolution (LTE): Technology Introduction.&#039;&#039; Rohde&amp;amp;Schwarz, 2008.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; &amp;amp;ndash; specifies the standardized bandwidths, the number of available blocks and the &amp;quot;overhead&amp;quot;:&lt;br /&gt;
*6 available blocks in the bandwidth 1.4 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead approx. 22.8%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*15 available blocks in the bandwidth 3 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead about 10%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*25 available blocks in the bandwidth 5 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead approx. 10%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*50 available blocks in the bandwidth 10 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead approx. 10%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*75 available blocks in the bandwidth 15 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead about 10%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*100 available blocks in the bandwidth 20 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead about 10%.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since otherwise some LTE&amp;amp;ndash;specific functions would not work, at least six blocks must be provided. &lt;br /&gt;
*The relative overhead is comparatively high at small channel bandwidth (1.4 MHz): &amp;amp;nbsp; (1.4 &amp;amp;ndash; 6 &amp;amp;middot; 0.18)/1.4 &amp;amp;asymp; 22.8%. &lt;br /&gt;
*From a bandwidth of 3 MHz the relative overhead is constant 10%. &lt;br /&gt;
*It also applies that all end devices must support the maximum bandwidth of 20 MHz&amp;amp;nbsp; [Ges08]&amp;lt;ref name=&#039;Ges08&#039;&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
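The block counts and overhead figures listed above follow directly from the 180 kHz resource-block width. A minimal sketch that recomputes them (the constant and function names are our own):

```python
# Sketch (derived from the values in the text): one LTE resource block
# covers 12 subcarriers x 15 kHz = 180 kHz.
RB_KHZ = 180.0

# standardized channel bandwidth in MHz -> number of available resource blocks
BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def relative_overhead(bw_mhz, n_blocks):
    """Fraction of the channel bandwidth not covered by resource blocks."""
    used_mhz = n_blocks * RB_KHZ / 1000.0
    return (bw_mhz - used_mhz) / bw_mhz

for bw, n in BLOCKS.items():
    pct = 100.0 * relative_overhead(bw, n)
    print(f"{bw:>4} MHz: {n:>3} blocks, overhead {pct:.1f} %")
```

Running this reproduces the list: roughly 22.9 % overhead for the narrow 1.4 MHz channel and a constant 10 % from 3 MHz upward.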
&lt;br /&gt;
== FDD, TDD and half duplex method==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:EN_Mob_T_4_2_S3a.png|right|frame|transmission scheme for FDD (top) or TDD (bottom)|class=fit]]&lt;br /&gt;
Another important innovation of LTE is the half&amp;amp;ndash;duplex&amp;amp;ndash;procedure, which is a mixture of the two&amp;amp;nbsp; [[Examples_of_Communication_Systems/General_Description_of_UMTS#Full Duplex Procedure|duplex procedures]]&amp;amp;nbsp; already known from UMTS:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Frequency Division Duplex&#039;&#039;&#039;&amp;amp;nbsp; (FDD), and&amp;lt;br&amp;gt;&lt;br /&gt;
*&#039;&#039;&#039;Time Division Duplex&#039;&#039;&#039;&amp;amp;nbsp; (TDD) .&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Such duplexing is necessary to ensure that uplink and downlink are clearly separated from each other and that transmission runs smoothly. The diagram illustrates the difference between FDD&amp;amp;ndash; and TDD&amp;amp;ndash;based transmission.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using the FDD and TDD methods, LTE can be operated in paired and unpaired frequency ranges.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The two methods present opposing requirements:&lt;br /&gt;
*FDD requires a paired spectrum, i.e. one frequency band for transmission from the base station to the terminal (downlink) and one for transmission in the opposite direction (uplink). Downlink and uplink can be used at the same time.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*TDD was designed for unpaired spectra. Now only one band is needed for uplink and downlink. However, transmitter and receiver must now alternate during transmission. The main problem of TDD is the required synchronicity of the networks.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the graphic above the differences between FDD and TDD can be seen. In TDD a &#039;&#039;Guard Period&#039;&#039; has to be inserted when changing from downlink to uplink (or vice versa) to avoid an overlapping of the signals.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Although FDD is likely to be used more in practice (and FDD&amp;amp;ndash;frequencies were also much more expensive for the providers), there are several reasons for TDD:&lt;br /&gt;
*Frequencies are a rare and expensive commodity, as the 2010 auction has shown.  But TDD needs only half of the frequency bandwidth.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The TDD technique allows different modes, which determine how much time should be used for downlink or uplink and can be adjusted to individual requirements.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the actual innovation, the &#039;&#039;&#039;Half&amp;amp;ndash;Duplex&amp;amp;ndash;Method&#039;&#039;&#039;, you need a paired spectrum as with FDD (see second graphic):&lt;br /&gt;
[[File:P ID2276 Mob T 4 2 S4b v1.png|right|frame|Transmission scheme for half-duplex|class=fit]] &lt;br /&gt;
*Base station transmitter and receiver still alternate as with TDD.  Each terminal device can either transmit or receive at a given time.&lt;br /&gt;
*Through a second connection to another end device with swapped downlink/uplink&amp;amp;ndash;raster, the entire available bandwidth can still be fully used.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The main advantage of the half&amp;amp;ndash;duplex&amp;amp;ndash;process is that the use of the TDD&amp;amp;ndash;concept reduces the demands on the end devices and thus allows them to be produced at a lower cost.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that this aspect was of great importance in the standardization can also be seen in the use of OFDMA in the downlink and of SC&amp;amp;ndash;FDMA in the uplink: &lt;br /&gt;
*This results in a longer battery life of the end devices and allows the use of cheaper components. &lt;br /&gt;
*More about this can be found in chapter&amp;amp;nbsp; [[Mobile_Communications/The_Application_of_OFDMA_and_SC-FDMA_in_LTE | The Application of OFDMA and SC-FDMA in LTE]].&lt;br /&gt;
&lt;br /&gt;
== Multiple Antenna Systems==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If a radio system uses several transmitting and receiving antennas, one speaks of&amp;amp;nbsp; &#039;&#039;&#039;Multiple Input Multiple Output&#039;&#039;&#039;&amp;amp;nbsp; (MIMO). This is not an LTE&amp;amp;ndash;specific development. WLAN, for example, also uses this technology. &lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp; The principle of multi-antenna systems is illustrated in the following figure using the example of 2&amp;amp;times;2&amp;amp;ndash;MIMO (two transmitting and two receiving antennas).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_Mob_T_4_2_S3b.png|right|frame|The difference between SISO and MIMO|class=fit]]&lt;br /&gt;
The new thing about LTE is not the actual use of&amp;amp;nbsp; &amp;lt;i&amp;gt;Multiple Input Multiple Output&amp;lt;/i&amp;gt;, but the particularly intensive one, namely 2&amp;amp;times;2&amp;amp;ndash;MIMO in the uplink and maximum 4&amp;amp;times;4&amp;amp;ndash;MIMO in the downlink. &lt;br /&gt;
&lt;br /&gt;
In the successor&amp;amp;nbsp; [[Mobile_Communications/LTE-Advanced - a Further Development of LTE|LTE&amp;amp;ndash;Advanced]]&amp;amp;nbsp; the use of MIMO is even more pronounced, namely &amp;quot;4&amp;amp;times;4&amp;quot; in the uplink and &amp;quot;8&amp;amp;times;8&amp;quot; in the opposite direction.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A MIMO&amp;amp;ndash;system has advantages compared to&amp;amp;nbsp; &#039;&#039;Single Input Single Output&#039;&#039;&amp;amp;nbsp; (SISO, only one transmitting and one receiving antenna). A distinction is made between several gains depending on the channel:&lt;br /&gt;
*&amp;lt;b&amp;gt;Power gain&amp;lt;/b&amp;gt;&amp;amp;nbsp; according to the number of receiving antennas: &amp;amp;nbsp; &amp;lt;br&amp;gt;If the radio signals arriving via several antennas are combined in a suitable way&amp;amp;nbsp; ([https://en.wikipedia.org/wiki/Maximal-ratio_combining Maximal-ratio Combining]), the reception power is increased and the radio connection is improved. By doubling the number of antennas, a power gain of at most 3 dB is achieved.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*&amp;lt;b&amp;gt;Diversity gain&amp;lt;/b&amp;gt; through spatial diversity&amp;amp;nbsp; ([https://en.wikipedia.org/wiki/Antenna_diversity Spatial Diversity]): If several spatially separated receiving antennas are used in an environment with strong multipath propagation, the fading at the individual antennas is largely independent of each other, and the probability that all antennas are affected by fading at the same time is very low.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;b&amp;gt;Data rate gain&amp;lt;/b&amp;gt;: &amp;amp;nbsp; &amp;lt;br&amp;gt; MIMO is particularly efficient in an environment with strong multipath propagation, especially when transmitter and receiver have no direct line of sight and transmission takes place via reflections. Tripling the number of antennas at transmitter and receiver results in approximately twice the data rate.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
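The "3 dB by doubling the antennas" figure in the power-gain item is simply 10 log10(2). A short sketch (assuming ideal maximal-ratio combining; the function name is our own):

```python
import math

# Sketch (assumes ideal maximal-ratio combining): the maximum power gain
# grows with the number of receiving antennas as 10*log10(N_rx) dB.
def power_gain_db(n_rx):
    return 10.0 * math.log10(n_rx)

print(f"2 antennas: {power_gain_db(2):.1f} dB")   # doubling: about 3 dB
print(f"4 antennas: {power_gain_db(4):.1f} dB")   # doubling twice: about 6 dB
```

In practice the achievable gain stays below this ideal value, since the combining is never perfect.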
&lt;br /&gt;
However, it is not possible for all advantages to occur simultaneously. Depending on the nature of the channel, it can also happen that one does not even have the choice of which advantage one wants to use.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition to the MIMO systems there are also the following intermediate stages:&lt;br /&gt;
*MISO&amp;amp;ndash;Systems&amp;amp;nbsp; (only one receiving antenna, therefore no power gain is possible), and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*SIMO&amp;amp;ndash;Systems&amp;amp;nbsp; (only one transmitting antenna, only small diversity gain).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp; The term &amp;quot;MIMO&amp;quot; summarizes multi-antenna techniques with different properties, each of which can be useful in certain situations. The following description is based on the four diagrams shown here.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_Mob_T_4_2_S5b.png|center|frame|Four multi-antenna procedures with different properties|class=fit]]&lt;br /&gt;
&lt;br /&gt;
*If the mostly independent channels of a MIMO&amp;amp;ndash;system are assigned to a single user (top left diagram), one speaks of&amp;amp;nbsp; &#039;&#039;&#039;Single&amp;amp;ndash;User MIMO&#039;&#039;&#039;. With 2&amp;amp;times;2&amp;amp;ndash;MIMO, the data rate is doubled compared to SISO&amp;amp;ndash;operation, and with four transmitting and four receiving antennas the data rate can be doubled again under good channel conditions.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::LTE allows at most 4&amp;amp;times;4&amp;amp;ndash;MIMO, and only in the downlink. Due to the complexity of multi-antenna systems, only laptops with LTE&amp;amp;ndash;modems can be used as receivers (end devices) for 4&amp;amp;times;4&amp;amp;ndash;MIMO. For a cell phone, use is generally limited to 2&amp;amp;times;2&amp;amp;ndash;MIMO.&lt;br /&gt;
&lt;br /&gt;
*Contrary to Single&amp;amp;ndash;User MIMO, the goal with the&amp;amp;nbsp; &#039;&#039;&#039;Multi&amp;amp;ndash;User MIMO&#039;&#039;&#039;&amp;amp;nbsp; is not the maximum data rate for a receiver, but the maximization of the number of end devices that can use the network simultaneously (top right diagram). This involves transmitting different data streams to different users. This is particularly useful in places with high demand, such as airports or soccer stadiums.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Multi-antenna operation is not only used to maximize the number of users or data rate, but in the event of poor transmission conditions, multiple antennas can also combine their power to transmit data to a single user to improve the quality of reception. One then speaks of&amp;amp;nbsp; &#039;&#039;&#039;Beamforming&#039;&#039;&#039; &amp;amp;nbsp; (diagram below left), which also increases the range of a transmitting station.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The fourth possibility is&amp;amp;nbsp; &#039;&#039;&#039;antenna diversity&#039;&#039;&#039; &amp;amp;nbsp; (diagram below right). This increases the redundancy (regarding system design) and makes the transmission more robust against interferences. A simple example: &amp;amp;nbsp; There are four channels that all transmit the same data. If one channel fails, there are still three channels for information transport.}}&lt;br /&gt;
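The redundancy argument in the last bullet can be quantified: if the four channels fade independently with probability p each, all of them fail simultaneously only with probability p&amp;amp;#8308;. A small sketch under this independence assumption (names are illustrative, not from the text):

```python
def all_branches_fade(p_branch: float, num_branches: int) -> float:
    """Probability that every branch fades at the same time, assuming the
    fading on the spatially separated antennas is statistically independent
    (an idealization; real antennas are partially correlated)."""
    return p_branch ** num_branches

# Four redundant channels, each failing 10 % of the time:
p_outage = all_branches_fade(0.1, 4)   # 0.1**4 = 1e-4
```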
&lt;br /&gt;
&lt;br /&gt;
== System Architecture==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The LTE&amp;amp;ndash;architecture enables a transmission system based entirely on the IP&amp;amp;ndash;protocol. In order to achieve this goal, the system architecture specified for UMTS not only had to be changed in detail, but in some cases completely redesigned. In the process, other IP-based technologies such as&amp;amp;nbsp; &#039;&#039;mobile WiMAX&#039;&#039;&amp;amp;nbsp; or&amp;amp;nbsp; &#039;&#039;WLAN&#039;&#039;&amp;amp;nbsp; were also integrated in order to be able to switch to these networks without any problems.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In UMTS&amp;amp;ndash;networks (left graphic), the&amp;amp;nbsp; &amp;lt;i&amp;gt;Radio Network Controller&amp;lt;/i&amp;gt;&amp;amp;nbsp; (RNC) is inserted between a base station (NodeB) and the core network, which is mainly responsible for switching between different cells and which can lead to latency times of up to 100 milliseconds.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_Mob_T_4_2_S6.png|center|frame|System Architecture for UMTS (UTRAN) and LTE (EUTRAN)|class=fit]]&lt;br /&gt;
&lt;br /&gt;
The redesign of the base stations (&amp;quot;eNodeB&amp;quot; instead of &amp;quot;NodeB&amp;quot;) and the interface &amp;quot;X2&amp;quot; are the decisive further developments from UMTS towards LTE. The graphic on the right illustrates in particular the reduction in complexity compared to UMTS that goes hand in hand with the new technology (left graphic). &lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;&#039;LTE&amp;amp;ndash;system architecture&#039;&#039;&#039; &amp;amp;nbsp; can be divided into two major areas:&lt;br /&gt;
*the LTE&amp;amp;ndash;core network&amp;amp;nbsp; &amp;lt;i&amp;gt;Evolved Packet Core&amp;lt;/i&amp;gt;&amp;amp;nbsp; (EPC),&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*the air interface&amp;amp;nbsp; &amp;lt;i&amp;gt;Evolved UMTS Terrestrial Radio Access Network&amp;lt;/i&amp;gt;&amp;amp;nbsp; (EUTRAN), a further development of&amp;amp;nbsp; [[Examples_of_Communication_Systems/UMTS Network Architecture#Access Level_Architecture_of_the_Access Level|&amp;lt;i&amp;gt;UMTS Terrestrial Radio Access Network&amp;lt;/i&amp;gt;]]&amp;amp;nbsp; (UTRAN).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
EUTRAN transmits the data between the terminal and the LTE&amp;amp;ndash;base station (&amp;quot;eNodeB&amp;quot;); the base stations are connected to the core network via the so-called S1&amp;amp;ndash;interface with two connections, one for the transmission of user data and a second for the transmission of signalling data.  You can see from the above graphic:&lt;br /&gt;
*The base stations are connected not only to the EPC but also to the neighboring base stations. These connections (X2&amp;amp;ndash;interfaces) have the effect that as few packets as possible are lost when the terminal device moves from the vicinity of one base station towards another.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*For this purpose, the base station whose service area the user is just leaving can pass on any cached data directly and quickly to the &amp;quot;new&amp;quot; base station. This ensures (largely) continuous transmission.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The functionality of the RNC is partly transferred to the base station and partly to the&amp;amp;nbsp; &amp;lt;i&amp;gt;Mobility Management Entity&amp;lt;/i&amp;gt;&amp;amp;nbsp; (MME) in the core network. This reduction of interfaces significantly shortens the signal transit time in the network and reduces the handover time to 20 milliseconds.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The LTE&amp;amp;ndash;system architecture is also designed so that future&amp;amp;nbsp; &amp;lt;i&amp;gt;Inter&amp;amp;ndash;NodeB&amp;amp;ndash;procedures&amp;lt;/i&amp;gt;&amp;amp;nbsp; (such as&amp;amp;nbsp; &amp;lt;i&amp;gt;Soft&amp;amp;ndash;Handover&amp;lt;/i&amp;gt;&amp;amp;nbsp; or&amp;amp;nbsp; &amp;lt;i&amp;gt;Cooperative Interference Cancellation&amp;lt;/i&amp;gt;) can be easily integrated.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== LTE&amp;amp;ndash;Core network: Backbone and Backhaul ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The LTE&amp;amp;ndash;core network&amp;amp;nbsp; &amp;lt;i&amp;gt;Evolved Packet Core&amp;lt;/i&amp;gt;&amp;amp;nbsp; (EPC) of a network operator, in technical language the&amp;amp;nbsp; &amp;lt;i&amp;gt;Backbone&amp;lt;/i&amp;gt;, consists of various network components. The EPC is connected to the base stations via the&amp;amp;nbsp; &amp;lt;i&amp;gt;Backhaul&amp;lt;/i&amp;gt;. This term denotes the connection of an outlying, usually hierarchically subordinate network node to a central network node.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Currently, the&amp;amp;nbsp; &amp;lt;i&amp;gt;Backhaul&amp;lt;/i&amp;gt;&amp;amp;nbsp; consists mainly of directional radio and so-called E1&amp;amp;ndash;lines. These are copper lines and allow a throughput of about 2 Mbit/s. For GSM&amp;amp;ndash; and UMTS&amp;amp;ndash;networks these connections were still sufficient; for the more ambitiously conceived&amp;amp;nbsp; [[Examples_of_Communication_Systems/Further Developments_of_UMTS#High.E2.80.93Speed_Downlink_Packet_Access| HSDPA]]&amp;amp;nbsp;, however, such data rates are no longer adequate. For LTE such a&amp;amp;nbsp; &amp;lt;i&amp;gt;Backhaul&amp;lt;/i&amp;gt;&amp;amp;nbsp; is completely unusable:&lt;br /&gt;
*The slow cable network would slow down the fast wireless connections; overall, there would be no increase in speed.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Due to the low capacities of the lines with E1&amp;amp;ndash;standard, an expansion with further lines of the same construction would not be economical.&lt;br /&gt;
&lt;br /&gt;
In the course of the introduction of LTE, the&amp;amp;nbsp; &amp;lt;i&amp;gt;Backhaul&amp;lt;/i&amp;gt;&amp;amp;nbsp; had to be redesigned. It was important to keep future-proofing in mind, since the next generation&amp;amp;nbsp; &#039;&#039;LTE&amp;amp;ndash;Advanced&#039;&#039;&amp;amp;nbsp; was already being specified before the introduction. If one believes the experts&#039; prediction of a&amp;amp;nbsp; &#039;&#039;Moore&#039;s Law&#039;&#039; for mobile phone bandwidths, the most important factor for future-proofing is the expensive installation of new, better cables.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the purely packet-oriented transmission technology, the Ethernet&amp;amp;ndash;standard, which is also IP&amp;amp;ndash;based and realized with the help of optical fibers, is suitable for the LTE&amp;amp;ndash;Backhaul. In 2009, the company Fujitsu presented in the study&amp;amp;nbsp; [Fuj09]&amp;lt;ref name=&#039;Fuj09&#039;&amp;gt;Fujitsu Network Communications Inc.: &#039;&#039;4G Impacts to Mobile Backhaul.&#039;&#039; [http://www.fujitsu.com/downloads/TEL/fnc/whitepapers/4Gimpacts.pdf PDF Internet document].&amp;lt;/ref&amp;gt;&amp;amp;nbsp; the thesis that the current infrastructure will continue to play an important role in the LTE&amp;amp;ndash;Backhaul for the next ten to fifteen years.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are two approaches for the generation change to an Ethernet&amp;amp;ndash;based&amp;amp;nbsp; &amp;lt;i&amp;gt;Backhaul&amp;lt;/i&amp;gt;:&lt;br /&gt;
*the parallel operation of the lines with E1 and Ethernet&amp;amp;ndash;standard,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*the immediate migration to an Ethernet-based&amp;amp;nbsp; &amp;lt;i&amp;gt;Backhaul&amp;lt;/i&amp;gt;.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The former would have the advantage that the network operators could continue to run voice traffic over the old lines and would only have to handle bandwidth-intensive data traffic over the more powerful lines. &lt;br /&gt;
&lt;br /&gt;
The second option raises some technical problems:&lt;br /&gt;
*The services previously transported through the slow E1-standard lines would have to be switched immediately to a packet-based procedure.&lt;br /&gt;
&lt;br /&gt;
*Ethernet does not offer (unlike the current standard) any&amp;amp;nbsp; &amp;lt;i&amp;gt;End&amp;amp;ndash;to&amp;amp;ndash;End&amp;amp;ndash;Synchronization&amp;lt;/i&amp;gt;, which can lead to severe delays or even service interruptions when changing radio cells, and thus to a severe loss of service quality. &lt;br /&gt;
* However, in the concept&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Synchronous_Ethernet Synchronous Ethernet]&amp;amp;nbsp; (SyncE), the Cisco company has already made suggestions as to how synchronization could be realized.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For conurbations, a direct conversion of the backhaul would certainly be worthwhile, as relatively few new cables would have to be laid for a comparatively high number of new users. &lt;br /&gt;
&lt;br /&gt;
In rural areas, by contrast, major excavation work would quickly result in high costs. However, this is exactly the area which must be covered first, according to the&amp;amp;nbsp; [[Mobile_Communications/General Information on the LTE Mobile Communications Standard#LTE frequency band splitting|agreement reached]]&amp;amp;nbsp; between the federal government and the (German) mobile phone operators. Here, the mostly already existing microwave radio links would have to be (and probably will be) upgraded to high data rates.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Exercises for Chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:Exercise 4.2: FDD, TDD and Half-Duplex]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Exercise 4.2Z: MIMO Applications in LTE]]&lt;br /&gt;
&lt;br /&gt;
==List of Sources==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/Technical_Innovations_of_LTE&amp;diff=34999</id>
		<title>Mobile Communications/Technical Innovations of LTE</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/Technical_Innovations_of_LTE&amp;diff=34999"/>
		<updated>2020-10-18T17:57:37Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=LTE – Long Term Evolution&lt;br /&gt;
|Vorherige Seite=General Information on the LTE Mobile Communications Standard&lt;br /&gt;
|Nächste Seite=The Application of OFDMA and SC-FDMA in LTE&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Voice transmission with LTE ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Unlike previous mobile phone standards, LTE only supports &#039;&#039;packet-oriented transmission&#039;&#039;. For voice transmission, however, a connection-oriented transmission with fixed reservation of resources would be better, since a &amp;quot;fragmented transmission&amp;quot;, as is the case with the packet-oriented method, is relatively complicated.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem of integrating voice transmission methods was one of the major challenges in the development of LTE, as voice transmission remains the largest source of revenue for network operators. There were a number of approaches, as can be seen in the internet article &amp;amp;nbsp; [Gut10]&amp;lt;ref name=&#039;Gut10&#039;&amp;gt;Gutt, E.: &#039;&#039;LTE - a new dimension of mobile broadband use&#039;&#039;. [http://www.ltemobile.de/uploads/media/LTE_Einfuehrung_V1.pdf PDF document on the Internet], 2010.&amp;lt;/ref&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(1)&#039;&#039;&#039; &amp;amp;nbsp; A very simple and obvious method is&amp;amp;nbsp; &amp;lt;i&amp;gt;Circuit Switched Fallback&amp;lt;/i&amp;gt;&amp;amp;nbsp; (&#039;&#039;&#039;CSFB&#039;&#039;&#039;). Here a circuit-switched transmission is used for the voice connection. The principle is:&lt;br /&gt;
*The terminal device logs on to the LTE&amp;amp;ndash;network and in parallel also to a GSM&amp;amp;ndash; or UMTS&amp;amp;ndash;network. When an incoming call is received, the terminal device receives a message from the&amp;amp;nbsp; &amp;lt;i&amp;gt;Mobile Management Entity&amp;lt;/i&amp;gt;&amp;amp;nbsp; (MME, control node in the LTE&amp;amp;ndash;network for user authentication), whereupon a circuit-switched connection via the GSM&amp;amp;ndash; or the UMTS&amp;amp;ndash;network is established.&lt;br /&gt;
*A disadvantage of this solution (actually it is a &amp;quot;problem concealment&amp;quot;) is the greatly delayed connection establishment. In addition, CSFB prevents the complete conversion of the network to LTE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(2)&#039;&#039;&#039; &amp;amp;nbsp; Another possibility for the integration of voice in a packet-oriented transmission system is offered by&amp;amp;nbsp; &amp;lt;i&amp;gt;Voice over LTE via GAN&amp;lt;/i&amp;gt;&amp;amp;nbsp; (&#039;&#039;&#039;VoLGA&#039;&#039;&#039;), which is based on the&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Generic_Access_Network Generic Access Network]&amp;amp;nbsp; (GAN) developed by&amp;amp;nbsp; [[Mobile_Communications/General Information on the LTE Mobile Communications Standard#3GPP_. E2.80.93_Third_Generation_Partnership_Project| 3GPP]]. In brief, the principle can be described as follows:&lt;br /&gt;
* GAN enables circuit-switched services via a packet-oriented network (IP&amp;amp;ndash;network), for example WLAN&amp;amp;nbsp; (&amp;lt;i&amp;gt;Wireless Local Area Network&amp;lt;/i&amp;gt;). With compatible end devices, one can register in the GSM&amp;amp;ndash;network over a WLAN&amp;amp;ndash;connection and use circuit-switched services. VoLGA uses this functionality by replacing WLAN with LTE.&lt;br /&gt;
* The fast implementation of VoLGA is advantageous, as no lengthy new development or changes to the core network are necessary. However, a so-called&amp;amp;nbsp; &amp;lt;i&amp;gt;VoLGA Access Network Controller&amp;lt;/i&amp;gt;&amp;amp;nbsp; (VANC) must be added to the network as hardware. This takes care of the communication between the end device and the&amp;amp;nbsp; &amp;lt;i&amp;gt;Mobile Management Entity&amp;lt;/i&amp;gt;&amp;amp;nbsp; or the core network.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though VoLGA, unlike CSFB, does not need to fall back on a GSM&amp;amp;ndash; or UMTS&amp;amp;ndash;network for voice connections, the majority of the mobile communications community regarded it, despite its user-friendliness, as an (unsatisfactory) bridge technology. T&amp;amp;ndash;Mobile was long a proponent of the VoLGA&amp;amp;ndash;technology, but also stopped further development in February 2011.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the following we describe a better solution proposal. Keywords are&amp;amp;nbsp; &amp;lt;i&amp;gt;IP Multimedia Subsystem&amp;lt;/i&amp;gt;&amp;amp;nbsp; (IMS) and&amp;amp;nbsp; &amp;lt;i&amp;gt;Voice over LTE&amp;lt;/i&amp;gt;&amp;amp;nbsp; (VoLTE). The operators in Germany switched to this technology relatively late: &amp;amp;nbsp; Vodafone and O2 Telefonica at the beginning of 2015, Telekom at the beginning of 2016. &lt;br /&gt;
&lt;br /&gt;
This is also the reason why the switch to LTE in Germany (and in Europe in general) was slower than in the USA.  Many customers did not want to pay the higher prices for LTE as long as there was no well-functioning solution for integrating voice transmission.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== VoLTE - Voice over LTE ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
From today&#039;s point of view (2016), the most promising approach to integrating voice services, some of which are already established, into the LTE&amp;amp;ndash;network is&amp;amp;nbsp; &amp;lt;i&amp;gt;Voice over LTE&amp;lt;/i&amp;gt;&amp;amp;nbsp; &amp;amp;ndash; in short: &#039;&#039;&#039;VoLTE&#039;&#039;&#039;. This standard, officially adopted by the&amp;amp;nbsp; [http://www.gsma.com/aboutus/ GSMA],&amp;amp;nbsp; the worldwide industry association of more than 800 mobile network operators and over 200 manufacturers of cell phones and network infrastructure, is exclusively IP&amp;amp;ndash;packet-oriented and is based on the&amp;amp;nbsp; &amp;lt;i&amp;gt;IP Multimedia Subsystem&amp;lt;/i&amp;gt;&amp;amp;nbsp; (&#039;&#039;&#039;IMS&#039;&#039;&#039;), which was already defined in the UMTS&amp;amp;ndash;Release 9 in 2010. The technical facts about IMS are:&lt;br /&gt;
*The basic IMS&amp;amp;ndash;protocol is the&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Session_Initiation_Protocol Session Initiation Protocol]&amp;amp;nbsp; (SIP), known from&amp;amp;nbsp; &amp;lt;i&amp;gt;Voice over IP&amp;lt;/i&amp;gt;.  This is a network protocol that can be used to establish and control connections between two users.&lt;br /&gt;
* This protocol enables the development of a completely (for data &amp;lt;u&amp;gt;and&amp;lt;/u&amp;gt; voice) IP-based network and is therefore future-proof.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reason why the introduction of VoLTE has been delayed by four years compared to LTE&amp;amp;ndash;establishment in data traffic is due to the difficult interaction of &amp;quot;4G&amp;quot; with the older predecessor standards&amp;amp;nbsp; GSM&amp;amp;nbsp; (&amp;quot;2G&amp;quot;) and&amp;amp;nbsp; UMTS&amp;amp;nbsp; (&amp;quot;3G&amp;quot;). Here is an example:&lt;br /&gt;
*If a mobile phone user leaves his LTE&amp;amp;ndash;cell and switches to an area without 4G&amp;amp;ndash;coverage, an immediate switch to the next best standard (3G) must be made.&lt;br /&gt;
&lt;br /&gt;
*Voice is transmitted there in a technically completely different way: no longer in many small data packets &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; &amp;quot;packet-switched&amp;quot;, but sequentially in the logical and physical channels reserved specifically for the user &amp;amp;nbsp; &amp;amp;#8658;&amp;amp;nbsp; &amp;quot;circuit-switched&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*This switchover must be so fast and seamless that the end customer does not notice anything. And it must work for all mobile phone standards and technologies.&lt;br /&gt;
&lt;br /&gt;
According to all the experts, VoLTE will have a positive impact on mobile telephony in the same way that LTE has driven the mobile Internet forward since 2011. Key benefits for users are:&lt;br /&gt;
*A&amp;amp;nbsp; &amp;lt;i&amp;gt;higher voice quality&amp;lt;/i&amp;gt;, as VoLTE uses&amp;amp;nbsp; [[Examples_of_Communication_Systems/Nachrichtentechnische_Aspekte_von_UMTS#Verbesserungen_bez.C3.BCglich_Sprachcodierung| AMR&amp;amp;ndash;Wideband Codecs]]&amp;amp;nbsp; at 12.65 or 23.85 kbit/s. Furthermore, the VoLTE&amp;amp;ndash;data packets are prioritized for the lowest possible latencies.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*An enormously&amp;amp;nbsp; &amp;lt;i&amp;gt;accelerated connection setup&amp;lt;/i&amp;gt; within one or two seconds, whereas with&amp;amp;nbsp; &amp;lt;i&amp;gt;Circuit Switched Fallback&amp;lt;/i&amp;gt; (CSFB) it takes an unpleasantly long time to establish a connection.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*A&amp;amp;nbsp; &amp;lt;i&amp;gt;low battery consumption&amp;lt;/i&amp;gt;, significantly lower than &amp;quot;2G&amp;quot; and &amp;quot;3G&amp;quot;, associated with a longer battery life. Also in comparison to the usual VoIP&amp;amp;ndash;services the power consumption is up to 40% lower.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the provider&#039;s point of view, the following advantages result:&lt;br /&gt;
*A&amp;amp;nbsp; &amp;lt;i&amp;gt;better spectral efficiency&amp;lt;/i&amp;gt;: &amp;amp;nbsp; Twice as many calls are possible in the same frequency band as with &amp;quot;3G&amp;quot;. In other words: &amp;amp;nbsp; More capacity is available for data services for the same number of calls.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*An easy implementation of&amp;amp;nbsp; [https://de.ryte.com/wiki/Rich_Media Rich Media Services]&amp;amp;nbsp; (RCS), for example for video telephony or future applications that can be used to attract new customers.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*A&amp;amp;nbsp; &amp;lt;i&amp;gt;better acceptance&amp;lt;/i&amp;gt;&amp;amp;nbsp; of the higher provisioning costs by LTE&amp;amp;ndash;customers if telephony no longer has to be offloaded to a &amp;quot;lower-value&amp;quot; network like &amp;quot;2G&amp;quot; or &amp;quot;3G&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Bandwidth flexibility ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
LTE can be adapted to frequency bands of different widths with relatively little effort by using&amp;amp;nbsp; [[Modulation_Methods/Allgemeine_Beschreibung_von_OFDM#Das_Prinzip_von_OFDM_.E2.80.93_Systembetrachtung_im_Zeitbereich_.281.29|OFDM]]&amp;amp;nbsp; (&amp;quot;Orthogonal Frequency Division Multiplex&amp;quot;). This fact is an important feature for various reasons, see&amp;amp;nbsp; [Mey10]&amp;lt;ref name=&#039;Mey10&#039;&amp;gt;Meyer, M.: &#039;&#039;Siebenmeilenfunk.&#039;&#039; c&#039;t 2010, issue 25, 2010.&amp;lt;/ref&amp;gt;, especially for network operators:&lt;br /&gt;
*The frequency bands for LTE may vary in size depending on the legal requirements in different countries. The outcome of the state-specific auctions of LTE&amp;amp;ndash;frequencies (separated into FDD and TDD) has also influenced the width of the spectrum.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Often LTE is operated in the &amp;quot;frequency neighborhood&amp;quot; of established radio transmission systems, which are expected to be switched off soon. If demand increases, LTE can be gradually expanded into the frequency range that then becomes available.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*For example, the migration of television channels after digitalization: &amp;amp;nbsp; A part of the LTE&amp;amp;ndash;network will be located in the VHF&amp;amp;ndash;frequency range around 800 MHz, which has now been freed up, see&amp;amp;nbsp; [[Mobile_Communications/General Information on the LTE Mobile Communications Standard#LTE_Frequency_Band_Splitting|Frequency_Band_Splitting Graphic]].&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Actually the bandwidths could be selected with a granularity of down to 15 kHz (corresponding to one OFDMA&amp;amp;ndash;subcarrier). However, since this would produce unnecessary overhead, a duration of&amp;amp;nbsp; &#039;&#039;&#039;one millisecond&#039;&#039;&#039;&amp;amp;nbsp; and a bandwidth of&amp;amp;nbsp; &#039;&#039;&#039;180 kHz&#039;&#039;&#039;&amp;amp;nbsp; have been specified as the smallest addressable LTE&amp;amp;ndash;resource. Such a block corresponds to twelve subcarriers (180 kHz divided by 15 kHz).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In order to keep the complexity and effort of hardware standardization as low as possible, a whole range of permissible bandwidths between 1.4 MHz and 20 MHz has been agreed upon. The following list &amp;amp;ndash; taken from&amp;amp;nbsp; [Ges08]&amp;lt;ref name=&#039;Ges08&#039;&amp;gt;Gessner, C.: &#039;&#039;UMTS Long Term Evolution (LTE): Technology Introduction.&#039;&#039; Rohde&amp;amp;Schwarz, 2008.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; &amp;amp;ndash; specifies the standardized bandwidths, the number of available blocks and the &amp;quot;overhead&amp;quot;:&lt;br /&gt;
*6 available blocks in the bandwidth 1.4 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead approx. 22.8%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*15 available blocks in the bandwidth 3 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead about 10%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*25 available blocks in the bandwidth 5 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead approx. 10%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*50 available blocks in the bandwidth 10 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead approx. 10%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*75 available blocks in the bandwidth 15 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead about 10%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*100 available blocks in the bandwidth 20 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead about 10%.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since otherwise some LTE&amp;amp;ndash;specific functions would not work, at least six blocks must be provided. &lt;br /&gt;
*The relative overhead is comparatively high at small channel bandwidth (1.4 MHz): &amp;amp;nbsp; (1.4 &amp;amp;ndash; 6 &amp;amp;middot; 0.18)/1.4 &amp;amp;asymp; 22.8%. &lt;br /&gt;
*From a bandwidth of 3 MHz the relative overhead is constant 10%. &lt;br /&gt;
*It also applies that all end devices must support the maximum bandwidth of 20 MHz&amp;amp;nbsp; [Ges08]&amp;lt;ref name=&#039;Ges08&#039;&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
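The block counts and overhead percentages listed above follow directly from the 180 kHz resource-block width. A short sketch reproducing the figures (the bandwidth/block pairs are taken from the list above; the helper name is ours):

```python
def relative_overhead(bandwidth_mhz: float, num_blocks: int,
                      block_khz: float = 180.0) -> float:
    """Fraction of the channel bandwidth not covered by resource blocks."""
    used_mhz = num_blocks * block_khz / 1000.0
    return 1.0 - used_mhz / bandwidth_mhz

# One resource block spans twelve 15 kHz subcarriers:
subcarriers_per_block = 180 // 15  # = 12

# Standardized LTE bandwidths (MHz) and their resource-block counts:
lte_configs = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}
overheads = {bw: relative_overhead(bw, n) for bw, n in lte_configs.items()}
# 1.4 MHz -> about 22.9 % overhead; every wider bandwidth -> 10 %.
```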
&lt;br /&gt;
== FDD, TDD and half duplex method==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:EN_Mob_T_4_2_S3a.png|right|frame|transmission scheme for FDD (top) or TDD (bottom)|class=fit]]&lt;br /&gt;
Another important innovation of LTE is the half&amp;amp;ndash;duplex&amp;amp;ndash;procedure, which is a mixture of the two&amp;amp;nbsp; [[Examples_of_Communication_Systems/General_Description_of_UMTS#Full Duplex Procedure|duplex procedures]]&amp;amp;nbsp; already known from UMTS:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Frequency Division Duplex&#039;&#039;&#039;&amp;amp;nbsp; (FDD), and&amp;lt;br&amp;gt;&lt;br /&gt;
*&#039;&#039;&#039;Time Division Duplex&#039;&#039;&#039;&amp;amp;nbsp; (TDD) .&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Such duplexing is necessary to ensure that uplink and downlink are clearly separated from each other and that transmission runs smoothly. The diagram illustrates the difference between FDD&amp;amp;ndash; and TDD&amp;amp;ndash;based transmission.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using the FDD and TDD methods, LTE can be operated in paired and unpaired frequency ranges.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The two methods present opposing requirements:&lt;br /&gt;
*FDD requires a paired spectrum, i.e. one frequency band for transmission from the base station to the terminal (downlink) and one for transmission in the opposite direction (uplink). Downlink and uplink can be used at the same time.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*TDD was designed for unpaired spectra. Now only one band is needed for uplink and downlink. However, transmitter and receiver must now alternate during transmission. The main problem of TDD is the required synchronicity of the networks.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the graphic above the differences between FDD and TDD can be seen. In TDD a &#039;&#039;Guard Period&#039;&#039; has to be inserted when changing from downlink to uplink (or vice versa) to avoid an overlapping of the signals.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Although FDD is likely to be used more in practice (and FDD&amp;amp;ndash;frequencies were also much more expensive for the providers), there are several reasons for TDD:&lt;br /&gt;
*Frequencies are a rare and expensive commodity, as the 2010 auction has shown.  But TDD needs only half of the frequency bandwidth.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The TDD technique allows different modes, which determine how much time should be used for downlink or uplink and can be adjusted to individual requirements.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the actual innovation, the &#039;&#039;&#039;Half&amp;amp;ndash;Duplex&amp;amp;ndash;Method&#039;&#039;&#039;, you need a paired spectrum as with FDD (see second graphic):&lt;br /&gt;
[[File:P ID2276 Mob T 4 2 S4b v1.png|right|frame|Transmission scheme for half-duplex|class=fit]] &lt;br /&gt;
*Base station transmitter and receiver still alternate, as with TDD.  Each terminal device can either transmit or receive at a given time.&lt;br /&gt;
*Through a second connection to another end device with swapped downlink/uplink&amp;amp;ndash;raster, the entire available bandwidth can still be fully used.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The main advantage of the half&amp;amp;ndash;duplex&amp;amp;ndash;process is that the use of the TDD&amp;amp;ndash;concept reduces the demands on the end devices and thus allows them to be produced at a lower cost.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that this aspect was of great importance in the standardization can also be seen in the use of OFDMA in the downlink and of SC&amp;amp;ndash;FDMA in the uplink: &lt;br /&gt;
*This results in a longer battery life of the end devices and allows the use of cheaper components. &lt;br /&gt;
*More about this can be found in chapter&amp;amp;nbsp; [[Mobile_Communications/The_Application_of_OFDMA_and_SC-FDMA_in_LTE | The Application of OFDMA and SC-FDMA in LTE]].&lt;br /&gt;
&lt;br /&gt;
== Multiple Antenna Systems==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If a radio system uses several transmitting and receiving antennas, one speaks of&amp;amp;nbsp; &#039;&#039;&#039;Multiple Input Multiple Output&#039;&#039;&#039;&amp;amp;nbsp; (MIMO). This is not an LTE&amp;amp;ndash;specific development. WLAN, for example, also uses this technology. &lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp; The principle of multi-antenna systems is illustrated in the following figure using the example of 2&amp;amp;times;2&amp;amp;ndash;MIMO (two transmitting and two receiving antennas).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_Mob_T_4_2_S3b.png|right|frame|The difference between SISO and MIMO|class=fit]]&lt;br /&gt;
The new thing about LTE is not the use of&amp;amp;nbsp; &amp;lt;i&amp;gt;Multiple Input Multiple Output&amp;lt;/i&amp;gt;&amp;amp;nbsp; as such, but its particularly intensive use, namely 2&amp;amp;times;2&amp;amp;ndash;MIMO in the uplink and up to 4&amp;amp;times;4&amp;amp;ndash;MIMO in the downlink. &lt;br /&gt;
&lt;br /&gt;
In the successor&amp;amp;nbsp; [[Mobile_Communications/LTE%E2%80%93Advanced_%E2%80%93_a_Further_Development_of_LTE|LTE&amp;amp;ndash;Advanced]]&amp;amp;nbsp; the use of MIMO is even more pronounced, namely &amp;quot;4&amp;amp;times;4&amp;quot; in the uplink and &amp;quot;8&amp;amp;times;8&amp;quot; in the opposite direction.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A MIMO&amp;amp;ndash;system has advantages compared to&amp;amp;nbsp; &#039;&#039;Single Input Single Output&#039;&#039;&amp;amp;nbsp; (SISO, only one transmitting and one receiving antenna). A distinction is made between several gains depending on the channel:&lt;br /&gt;
*&amp;lt;b&amp;gt;Power gain&amp;lt;/b&amp;gt;&amp;amp;nbsp; according to the number of receiving antennas: &amp;amp;nbsp; &amp;lt;br&amp;gt;If the radio signals arriving via several antennas are combined in a suitable way&amp;amp;nbsp; ([https://en.wikipedia.org/wiki/Maximal-ratio_combining Maximal-ratio Combining]), the reception power is increased and the radio connection is improved. By doubling the antennas, a power gain of at most 3 dB is achieved.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;b&amp;gt;Diversity gain&amp;lt;/b&amp;gt; through spatial diversity (&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Antenna_diversity Spatial Diversity]): If several spatially separated receiving antennas are used in an environment with strong multipath propagation, the fading at the individual antennas is mostly independent from each other and the probability that all antennas are affected by fading at the same time is very low.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;b&amp;gt;Data rate gain&amp;lt;/b&amp;gt;: &amp;amp;nbsp; &amp;lt;br&amp;gt; The efficiency of MIMO increases especially in an environment with strong multipath propagation, in particular when transmitter and receiver have no direct line of sight and transmission takes place via reflections. Tripling the number of antennas at transmitter and receiver results in approximately twice the data rate.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
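The first two gains above can be illustrated numerically. The following Python sketch (all values illustrative, not from the LTE specification) computes the maximal-ratio-combining array gain and the joint fade probability of independent antennas:

```python
import math

def mrc_gain_db(n_rx: int) -> float:
    """Array (power) gain of maximal-ratio combining with n_rx receive
    antennas, assuming equal average SNR on every branch."""
    return 10 * math.log10(n_rx)

def outage_probability(p_single: float, n_rx: int) -> float:
    """Probability that all n_rx antennas are in a fade at the same time,
    assuming independent fading per antenna."""
    return p_single ** n_rx

# Doubling the antennas yields at most 10*log10(2), i.e. about 3 dB.
print(f"MRC gain with 2 antennas: {mrc_gain_db(2):.2f} dB")
# With an assumed 10 % single-antenna fade probability, two independent
# antennas are both in a fade only 1 % of the time.
print(f"Joint outage, 2 antennas: {outage_probability(0.10, 2):.2%}")
```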
&lt;br /&gt;
However, it is not possible for all advantages to occur simultaneously. Depending on the nature of the channel, it can also happen that one does not even have the choice of which advantage one wants to use.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In addition to the MIMO systems there are also the following intermediate stages:&lt;br /&gt;
*MISO&amp;amp;ndash;Systems&amp;amp;nbsp; (only one receiving antenna, therefore no power gain is possible), and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*SIMO&amp;amp;ndash;Systems&amp;amp;nbsp; (only one transmitting antenna, only small diversity gain).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp; The term &amp;quot;MIMO&amp;quot; summarizes multi-antenna techniques with different properties, each of which can be useful in certain situations. The following description is based on the four diagrams shown here.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_Mob_T_4_2_S5b.png|center|frame|Four multi-antenna procedures with different properties|class=fit]]&lt;br /&gt;
&lt;br /&gt;
*If the mostly independent channels of a MIMO&amp;amp;ndash;system are assigned to a single user (top left diagram), one speaks of&amp;amp;nbsp; &#039;&#039;&#039;Single&amp;amp;ndash;User MIMO&#039;&#039;&#039;. With 2&amp;amp;times;2&amp;amp;ndash;MIMO, the data rate is doubled compared to SISO&amp;amp;ndash;operation, and with four transmitting and four receiving antennas the data rate can be doubled again under good channel conditions.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::LTE allows a maximum of 4&amp;amp;times;4&amp;amp;ndash;MIMO, but only in the downlink. Due to the complexity of multi-antenna systems, only laptops with LTE&amp;amp;ndash;modems can be used as receivers (end devices) for 4&amp;amp;times;4&amp;amp;ndash;MIMO. For a cell phone, MIMO use is generally limited to 2&amp;amp;times;2&amp;amp;ndash;MIMO.&lt;br /&gt;
&lt;br /&gt;
*Contrary to Single&amp;amp;ndash;User MIMO, the goal with the&amp;amp;nbsp; &#039;&#039;&#039;Multi&amp;amp;ndash;User MIMO&#039;&#039;&#039;&amp;amp;nbsp; is not the maximum data rate for a receiver, but the maximization of the number of end devices that can use the network simultaneously (top right diagram). This involves transmitting different data streams to different users. This is particularly useful in places with high demand, such as airports or soccer stadiums.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Multi-antenna operation is not only used to maximize the number of users or data rate, but in the event of poor transmission conditions, multiple antennas can also combine their power to transmit data to a single user to improve the quality of reception. One then speaks of&amp;amp;nbsp; &#039;&#039;&#039;Beamforming&#039;&#039;&#039; &amp;amp;nbsp; (diagram below left), which also increases the range of a transmitting station.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The fourth possibility is&amp;amp;nbsp; &#039;&#039;&#039;antenna diversity&#039;&#039;&#039; &amp;amp;nbsp; (diagram below right). This increases the redundancy (regarding system design) and makes the transmission more robust against interferences. A simple example: &amp;amp;nbsp; There are four channels that all transmit the same data. If one channel fails, there are still three channels for information transport.}}&lt;br /&gt;
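The single-user MIMO rate scaling described in the example can be sketched as a small calculation; the 50 Mbit/s SISO baseline is a hypothetical value for illustration, not an LTE figure:

```python
def mimo_peak_rate(siso_rate_mbps: float, n_tx: int, n_rx: int) -> float:
    """Idealized peak-rate scaling for single-user MIMO: under good
    channel conditions the number of parallel spatial streams is limited
    by min(n_tx, n_rx), and the rate scales with that number."""
    return siso_rate_mbps * min(n_tx, n_rx)

# Hypothetical 50 Mbit/s SISO baseline:
print(mimo_peak_rate(50.0, 2, 2))  # 2x2 doubles the rate -> 100.0
print(mimo_peak_rate(50.0, 4, 4))  # 4x4 doubles it again -> 200.0
```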
&lt;br /&gt;
&lt;br /&gt;
== System Architecture==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The LTE&amp;amp;ndash;architecture enables a transmission system based entirely on the IP&amp;amp;ndash;protocol. In order to achieve this goal, the system architecture specified for UMTS not only had to be changed in detail, but in some cases completely redesigned. In the process, other IP-based technologies such as&amp;amp;nbsp; &#039;&#039;mobile WiMAX&#039;&#039;&amp;amp;nbsp; or&amp;amp;nbsp; &#039;&#039;WLAN&#039;&#039;&amp;amp;nbsp; were also integrated in order to be able to switch to these networks without any problems.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In UMTS&amp;amp;ndash;networks (left graphic), the&amp;amp;nbsp; &amp;lt;i&amp;gt;Radio Network Controller&amp;lt;/i&amp;gt;&amp;amp;nbsp; (RNC) is inserted between a base station (NodeB) and the core network, which is mainly responsible for switching between different cells and which can lead to latency times of up to 100 milliseconds.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_Mob_T_4_2_S6.png|center|frame|System Architecture for UMTS (UTRAN) and LTE (EUTRAN)|class=fit]]&lt;br /&gt;
&lt;br /&gt;
The redesign of the base stations (&amp;quot;eNodeB&amp;quot; instead of &amp;quot;NodeB&amp;quot;) and the interface &amp;quot;X2&amp;quot; are the decisive further developments from UMTS towards LTE. The graphic on the right illustrates in particular the reduction in complexity compared to UMTS that goes hand in hand with the new technology (left graphic). &lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;&#039;LTE&amp;amp;ndash;system architecture&#039;&#039;&#039; &amp;amp;nbsp; can be divided into two major areas:&lt;br /&gt;
*the LTE&amp;amp;ndash;core network&amp;amp;nbsp; &amp;lt;i&amp;gt;Evolved Packet Core&amp;lt;/i&amp;gt;&amp;amp;nbsp; (EPC),&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*the air interface&amp;amp;nbsp; &amp;lt;i&amp;gt;Evolved UMTS Terrestrial Radio Access Network&amp;lt;/i&amp;gt;&amp;amp;nbsp; (EUTRAN), a further development of&amp;amp;nbsp; [[Examples_of_Communication_Systems/UMTS Network Architecture#Access Level_Architecture_of_the_Access Level|&amp;lt;i&amp;gt;UMTS Terrestrial Radio Access Network&amp;lt;/i&amp;gt;]]&amp;amp;nbsp; (UTRAN).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
EUTRAN transmits the data between the terminal and the LTE&amp;amp;ndash;base station (&amp;quot;eNodeB&amp;quot;) via the so-called S1&amp;amp;ndash;interface with two connections, one for the transmission of user data and a second for the transmission of signalling data.  You can see from the above graphic:&lt;br /&gt;
*The base stations are connected not only to the EPC but also to the neighboring base stations. These connections (X2&amp;amp;ndash;interfaces) have the effect that as few packets as possible are lost when the terminal device moves from the vicinity of one base station towards another.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*For this purpose, the base station whose service area the user is just leaving can pass on any cached data directly and quickly to the &amp;quot;new&amp;quot; base station. This ensures (largely) continuous transmission.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The functionality of the RNC is partly transferred to the base station and partly to the&amp;amp;nbsp; &amp;lt;i&amp;gt;Mobility Management Entity&amp;lt;/i&amp;gt;&amp;amp;nbsp; (MME) in the core network. This reduction of the interfaces significantly shortens the signal throughput time in the network and the handover to 20 milliseconds.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The LTE&amp;amp;ndash;system architecture is also designed so that future&amp;amp;nbsp; &amp;lt;i&amp;gt;Inter&amp;amp;ndash;NodeB&amp;amp;ndash;procedures&amp;lt;/i&amp;gt;&amp;amp;nbsp; (such as&amp;amp;nbsp; &amp;lt;i&amp;gt;Soft&amp;amp;ndash;Handover&amp;lt;/i&amp;gt;&amp;amp;nbsp; or&amp;amp;nbsp; &amp;lt;i&amp;gt;Cooperative Interference Cancellation&amp;lt;/i&amp;gt;) can be easily integrated.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== LTE&amp;amp;ndash;Core network: Backbone and Backhaul ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The LTE&amp;amp;ndash;core network&amp;amp;nbsp; &amp;lt;i&amp;gt;Evolved Packet Core&amp;lt;/i&amp;gt;&amp;amp;nbsp; (EPC) of a network operator, in technical language the&amp;amp;nbsp; &amp;lt;i&amp;gt;Backbone&amp;lt;/i&amp;gt;, consists of various network components. The EPC is connected to the base stations via the&amp;amp;nbsp; &amp;lt;i&amp;gt;Backhaul&amp;lt;/i&amp;gt;. This term denotes the connection of an outlying, usually hierarchically subordinate network node to a central network node.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Currently, the&amp;amp;nbsp; &amp;lt;i&amp;gt;Backhaul&amp;lt;/i&amp;gt;&amp;amp;nbsp; consists mainly of directional radio and so-called E1&amp;amp;ndash;lines. These are copper lines and allow a throughput of about 2 Mbit/s. For GSM&amp;amp;ndash; and UMTS&amp;amp;ndash;networks these connections were still sufficient; however, for&amp;amp;nbsp; [[Examples_of_Communication_Systems/Further Developments_of_UMTS#High.E2.80.93Speed_Downlink_Packet_Access| HSDPA]], conceived on a large scale, such data rates are no longer adequate. For LTE such a&amp;amp;nbsp; &amp;lt;i&amp;gt;Backhaul&amp;lt;/i&amp;gt;&amp;amp;nbsp; is completely unusable:&lt;br /&gt;
*The slow cable network would slow down the fast wireless connections; overall, there would be no increase in speed.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Due to the low capacities of the lines with E1&amp;amp;ndash;standard, an expansion with further lines of the same construction would not be economical.&lt;br /&gt;
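A quick back-of-the-envelope calculation shows why E1 lines cannot economically carry LTE traffic; the 100 Mbit/s cell load below is a hypothetical figure for illustration (the nominal E1 capacity is 2.048 Mbit/s):

```python
import math

E1_MBPS = 2.048  # nominal capacity of one E1 line

def e1_lines_needed(load_mbps: float) -> int:
    """Number of parallel E1 lines needed to carry a given backhaul load."""
    return math.ceil(load_mbps / E1_MBPS)

# A hypothetical 100 Mbit/s LTE cell would already need dozens of
# parallel E1 lines, which is why this backhaul does not scale:
print(e1_lines_needed(100))  # 49
```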
&lt;br /&gt;
In the course of the introduction of LTE, the&amp;amp;nbsp; &amp;lt;i&amp;gt;Backhaul&amp;lt;/i&amp;gt;&amp;amp;nbsp; had to be redesigned. It was important to keep an eye on future-proofing, since the next generation&amp;amp;nbsp; &#039;&#039;LTE&amp;amp;ndash;Advanced&#039;&#039;&amp;amp;nbsp; had already been defined before the introduction. If one believes the experts&#039; prediction of a&amp;amp;nbsp; &#039;&#039;Moore&#039;s Law&#039;&#039; for mobile phone bandwidths, the most important factor for future-proofing is the expensive installation of new, better cables.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the purely packet-oriented transmission technology, the Ethernet&amp;amp;ndash;standard, which is also IP&amp;amp;ndash;based and realized with the help of optical fibers, is suitable for the LTE&amp;amp;ndash;Backhaul. In 2009, the company Fujitsu presented in the study&amp;amp;nbsp; [Fuj09]&amp;lt;ref name=&#039;Fuj09&#039;&amp;gt;Fujitsu Network Communications Inc.: &#039;&#039;4G Impacts to Mobile Backhaul.&#039;&#039; [http://www.fujitsu.com/downloads/TEL/fnc/whitepapers/4Gimpacts.pdf PDF Internet document].&amp;lt;/ref&amp;gt;&amp;amp;nbsp; the thesis that the current infrastructure will continue to play an important role in the LTE&amp;amp;ndash;Backhaul for the next ten to fifteen years.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are two approaches for the generational change to an Ethernet&amp;amp;ndash;based&amp;amp;nbsp; &amp;lt;i&amp;gt;Backhaul&amp;lt;/i&amp;gt;:&lt;br /&gt;
*the parallel operation of the lines with E1 and Ethernet&amp;amp;ndash;standard,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*the immediate migration to an Ethernet-based&amp;amp;nbsp; &amp;lt;i&amp;gt;Backhaul&amp;lt;/i&amp;gt;.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The former would have the advantage that the network operators could continue to run voice traffic over the old lines and would only have to handle bandwidth-intensive data traffic over the more powerful lines. &lt;br /&gt;
&lt;br /&gt;
The second option raises some technical problems:&lt;br /&gt;
*The services previously transported through the slow E1-standard lines would have to be switched immediately to a packet-based procedure.&lt;br /&gt;
&lt;br /&gt;
*Ethernet does not offer (unlike the current standard) any&amp;amp;nbsp; &amp;lt;i&amp;gt;End&amp;amp;ndash;to&amp;amp;ndash;End&amp;amp;ndash;Synchronization&amp;lt;/i&amp;gt;, which can lead to severe delays or even service interruptions when changing radio cells, and thus to a considerable loss of service quality. &lt;br /&gt;
* However, in the concept&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Synchronous_Ethernet Synchronous Ethernet]&amp;amp;nbsp; (SyncE), the Cisco company has already made suggestions as to how synchronization could be realized.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For conurbations, a direct conversion of the backhaul would certainly be worthwhile, as relatively few new cables would have to be laid for a comparatively high number of new users. &lt;br /&gt;
&lt;br /&gt;
In rural areas, however, major excavation work would quickly result in high costs. However, this is exactly the area which must be covered first, according to the&amp;amp;nbsp; [[Mobile_Communications/General Information on the LTE Mobile Communications Standard#LTE frequency band splitting|agreement reached]]&amp;amp;nbsp; between the federal government and the (German) mobile phone operators. Here, the mostly existing microwave radio link would have to be (and probably will be) extended to high data rates.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Exercises for Chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:Exercise 4.2: FDD, TDD and Half-Duplex]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Exercise 4.2Z: MIMO Applications in LTE]]&lt;br /&gt;
&lt;br /&gt;
==List of Sources==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/Physical_Layer_for_LTE&amp;diff=34998</id>
		<title>Mobile Communications/Physical Layer for LTE</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/Physical_Layer_for_LTE&amp;diff=34998"/>
		<updated>2020-10-18T16:56:05Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=LTE – Long Term Evolution&lt;br /&gt;
|Vorherige Seite=The Application of OFDMA and SC-FDMA in LTE&lt;br /&gt;
|Nächste Seite=LTE-Advanced - a Further Development of LTE&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== General Description==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The &amp;amp;nbsp; &amp;lt;i&amp;gt;Physical Layer&amp;lt;/i&amp;gt; is the lowest layer in the OSI&amp;amp;ndash;layer model of the&amp;amp;nbsp; &amp;lt;i&amp;gt;International Organization for Standardization&amp;lt;/i&amp;gt;&amp;amp;nbsp; (ISO) and is also called the&amp;amp;nbsp; &amp;lt;i&amp;gt;bit transmission layer&amp;lt;/i&amp;gt;. It describes the physical transmission of bit sequences in LTE and the operation of the various channels according to the 3GPP&amp;amp;ndash;specification. All specifications are valid for&amp;amp;nbsp; &#039;&#039;Frequency Division Duplex&#039;&#039;&amp;amp;nbsp; (FDD) as well as for&amp;amp;nbsp; &#039;&#039;Time Division Duplex&#039;&#039;&amp;amp;nbsp; (TDD).&lt;br /&gt;
&lt;br /&gt;
[[File:EN_LTE_T_4_4_S1b_v1.png|right|frame|Protocol Architecture for LTE|class=fit]]&lt;br /&gt;
The diagram shows the layers of the LTE&amp;amp;ndash;protocol architecture. The communication between the individual layers takes place via three different types of channels:&lt;br /&gt;
*Logical channels,&amp;lt;br&amp;gt;&lt;br /&gt;
*Transport channels,&amp;lt;br&amp;gt;&lt;br /&gt;
*physical channels. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This chapter deals with the communication between sender and receiver in the lowest, red highlighted&amp;amp;nbsp; &amp;lt;i&amp;gt;physical layer&amp;lt;/i&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Basically it should be noted:&lt;br /&gt;
*Exactly like the Internet, LTE uses exclusively packet-based transmission, i.e. without specifically assigning resources to a single user.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The design of the LTE&amp;amp;ndash;physical layer is therefore characterized by the principle of dynamically allocated network resources.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The physical layer plays a key role in the efficient allocation and utilization of available system resources.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
According to this graphic the physical layer communicates with &lt;br /&gt;
*the block&amp;amp;nbsp; &amp;lt;i&amp;gt;Medium Access Control&amp;lt;/i&amp;gt;&amp;amp;nbsp; (MAC) and exchanges information about the users and the regulation or control of the network via so-called transport channels,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*the block&amp;amp;nbsp; &amp;lt;i&amp;gt;Radio Resource Control&amp;lt;/i&amp;gt;&amp;amp;nbsp; (RRC), where control commands and measurements are continuously exchanged to adapt the transmission to the channel quality.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The complexity of the LTE&amp;amp;ndash;transmission is indicated by the following diagram, which has been adopted directly from the&amp;amp;nbsp; &amp;lt;i&amp;gt;European Telecommunications Standards Institute&amp;lt;/i&amp;gt;&amp;amp;nbsp; (ETSI). It shows the communication between the individual layers (channels) and applies exclusively to the downlink.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_LTE_T_4_4_S1.png|right|frame|Communication between the individual layers in the LTE downlink|class=fit]]&lt;br /&gt;
&lt;br /&gt;
*On the following pages we will take a closer look at the physical layer and the physical channels. We distinguish between uplink and downlink, but we will limit ourselves to the essentials. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*In reality, the individual channels take over a number of other functions, but their description would go beyond the scope of this tutorial. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*If you are interested, you can find a detailed description in&amp;amp;nbsp; [HT09]&amp;lt;ref name= &#039;HT09&#039;&amp;gt;Holma, H.; Toskala, A.: &#039;&#039;LTE for UMTS - OFDMA and SC-FDMA Based Radio Access.&#039;&#039; Wiley &amp;amp; Sons, 2009.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
== Physical channels in uplink==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
LTE uses the multiple access method [[Mobile_Communications/The_Application_of_OFDMA_and_SC-FDMA_in_LTE#Functionality_of_SC.E2.80.93FDMA|SC&amp;amp;ndash;FDMA]] in the uplink transmission from the terminal device to the base station. Accordingly, the following physical channels exist in the 3GPP&amp;amp;ndash;specification:&lt;br /&gt;
*&amp;lt;i&amp;gt;Physical Uplink Shared Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp; (PUSCH),&amp;lt;br&amp;gt;&lt;br /&gt;
*&amp;lt;i&amp;gt;Physical Random Access Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp; (PRACH),&amp;lt;br&amp;gt;&lt;br /&gt;
*&amp;lt;i&amp;gt;Physical Uplink Control Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp; (PUCCH).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The user data are transmitted in the physical channel&amp;amp;nbsp; &#039;&#039;&#039;PUSCH&#039;&#039;&#039;. The transmission speed depends on how much bandwidth is available to the user at that moment. The transmission is based on dynamically allocated resources in the time and frequency domain with a resolution of one millisecond or 180 kHz. This allocation is performed by the&amp;amp;nbsp; [[Mobile_Communications/Physical_Layer_for_LTE# Scheduling for LTE| Scheduler]]&amp;amp;nbsp; in the base station (&amp;lt;i&amp;gt;eNodeB&amp;lt;/i&amp;gt;).  A terminal device cannot transmit any data without instructions from the base station.&amp;lt;br&amp;gt;&lt;br /&gt;
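The 1 ms / 180 kHz scheduling grid can be sketched as a small calculation. Note that this counts raw 180 kHz blocks only; the real standard reserves guard bands at the band edges, so for example a 10 MHz carrier has 50 usable resource blocks rather than the raw 55:

```python
RB_BANDWIDTH_KHZ = 180   # frequency resolution of one resource block
SUBFRAME_MS = 1          # time resolution of the scheduler

def raw_resource_blocks(bandwidth_mhz: float) -> int:
    """Number of 180 kHz blocks that fit into a carrier, ignoring the
    guard bands the real standard reserves at the band edges."""
    return int(bandwidth_mhz * 1000 // RB_BANDWIDTH_KHZ)

# A 10 MHz carrier fits 55 raw 180 kHz blocks; LTE actually defines
# 50 usable resource blocks at 10 MHz because of guard bands.
print(raw_resource_blocks(10))  # 55
```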
&lt;br /&gt;
The exception is the use of the physical channel&amp;amp;nbsp; &#039;&#039;&#039;PRACH&#039;&#039;&#039;, the only channel in the LTE&amp;amp;ndash;uplink with non&amp;amp;ndash;synchronized transmission. The function of this channel is the request for permission to send data via one of the other two physical channels. By sending a&amp;amp;nbsp; &amp;lt;i&amp;gt;Cyclic Prefix&amp;lt;/i&amp;gt;&amp;amp;nbsp; and a signature on the PRACH, the terminal and base station are synchronized and are thus ready for further transmissions.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The third uplink&amp;amp;ndash;channel&amp;amp;nbsp; &#039;&#039;&#039;PUCCH&#039;&#039;&#039;&amp;amp;nbsp; is used exclusively for the transmission of control signals. These include &lt;br /&gt;
*positive and negative acknowledgements of receipt (ACK/NACK),&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Requests for repeated transmission (in case of NACK), and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*the exchange of channel quality information between the terminal and the base station.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If, in addition to the control data, user data is sent from the terminal to the base station at the same time, such control signals can also be transmitted via the PUSCH. If no user data is to be transmitted, PUCCH is used instead.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A simultaneous use of PUSCH and PUCCH is not possible due to restrictions of the transmission method SC&amp;amp;ndash;FDMA. If only one&amp;amp;nbsp; &amp;lt;i&amp;gt;Shared Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp; had been selected for all control information, one would have had to choose between&lt;br /&gt;
*intermittent problems with user data transmission, or&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*permanently too few resources for the control information.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The information about the channel quality is obtained by means of so-called reference symbols. This information is then reported as indicators of the channel quality, namely&lt;br /&gt;
*as&amp;amp;nbsp; &amp;lt;i&amp;gt;Channel Quality Indicator&amp;lt;/i&amp;gt;&amp;amp;nbsp; (CQI), and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*as&amp;amp;nbsp; &amp;lt;i&amp;gt;Rank Indicator&amp;lt;/i&amp;gt;&amp;amp;nbsp; (RI).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A detailed explanation of the quality guarantee can be found, for example, in&amp;amp;nbsp; [HR09]&amp;lt;ref name=&#039;HR09&#039;&amp;gt;Homayounfar, K.; Rohani, B.: &#039;&#039;CQI Measurement and Reporting in LTE: A New Framework.&#039;&#039; &lt;br /&gt;
IEICE Technical Report, Vol. 108, No. 445, 2009.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; and&amp;amp;nbsp; [HT09]&amp;lt;ref name=&#039;HT09&#039;&amp;gt;&amp;lt;/ref&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp; The reference symbols or channel quality information are distributed in the PUSCH according to the following graphic. &lt;br /&gt;
&lt;br /&gt;
[[File:EN_LTE_T_4_4_S2.png|center|frame|Distribution of reference symbols and user data in PUSCH|class=fit]]&lt;br /&gt;
&lt;br /&gt;
This describes the arrangement of the useful information and the signaling data in a &amp;quot;virtual&amp;quot; subcarrier.&lt;br /&gt;
*Virtual because SC&amp;amp;ndash;FDMA does not have subcarriers like OFDMA.&lt;br /&gt;
&lt;br /&gt;
*The reference symbols are necessary to estimate the channel quality.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*This information is also transferred as&amp;amp;nbsp; &amp;lt;i&amp;gt;Channel Quality Indicator&amp;lt;/i&amp;gt;&amp;amp;nbsp; (CQI)&amp;amp;nbsp; or as&amp;amp;nbsp; &amp;lt;i&amp;gt;Rank Indicator&amp;lt;/i&amp;gt;&amp;amp;nbsp; (RI)&amp;amp;nbsp; via the PUSCH.}}&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Physical channels in downlink==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In contrast to the uplink, LTE uses the multiple access method&amp;amp;nbsp; [[Mobile_Communications/The_Application_of_OFDMA_and_SC-FDMA_in_LTE#Differences_between_OFDMA_and_SC.E2.80.93FDMA|OFDMA]] in the downlink, i.e. during transmission from the base station to the terminal. Accordingly, the 3GPP&amp;amp;ndash;Consortium specified the following physical channels for this purpose:&lt;br /&gt;
*&amp;lt;i&amp;gt;Physical Downlink Shared Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp; (PDSCH),&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;i&amp;gt;Physical Downlink Control Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp; (PDCCH),&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;i&amp;gt;Physical Control Format Indicator Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp; (PCFICH),&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;i&amp;gt;Physical Hybrid ARQ Indicator Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp; (PHICH),&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;i&amp;gt;Physical Broadcast Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp; (PBCH),&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;i&amp;gt;Physical Multicast Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp; (PMCH).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The user data are transmitted via the&amp;amp;nbsp; &#039;&#039;&#039;PDSCH&#039;&#039;&#039;. The resource allocation is done both in the time domain (with a resolution of one millisecond) and in the frequency domain (resolution: &amp;amp;nbsp;180 kHz). Due to the use of OFDMA as transmission method, the individual data rate of each user depends on the number of assigned resource blocks (180 kHz each). An &amp;lt;i&amp;gt;eNodeB&amp;lt;/i&amp;gt; allocates the resources according to the channel quality of each individual user.&amp;lt;br&amp;gt;&lt;br /&gt;
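The dependence of the per-user rate on the number of assigned resource blocks can be sketched as follows; the bits-per-block value is an illustrative assumption, not a value from the specification:

```python
def user_rate_mbps(n_rb: int, bits_per_rb_per_subframe: int) -> float:
    """Per-user rate from the number of assigned 180 kHz resource blocks.
    bits_per_rb_per_subframe depends on the modulation and coding scheme
    the eNodeB selects from the reported channel quality (illustrative)."""
    # bits per 1 ms subframe -> bit/ms == kbit/s; divide by 1000 for Mbit/s
    return n_rb * bits_per_rb_per_subframe / 1000.0

# 50 assigned blocks at an assumed 700 bits per block and subframe:
print(user_rate_mbps(50, 700))  # 35.0 (Mbit/s)
```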
&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;&#039;PDCCH&#039;&#039;&#039;&amp;amp;nbsp; contains all information regarding the allocation of resource blocks or bandwidth for both the uplink and the downlink. A terminal device thereby receives information about how many resources are available.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_LTE_T_4_4_S3.png|right|frame|Division between PDCCH and PDSCH in LTE downlink]]&lt;br /&gt;
The diagram shows an example of the division between the channels PDCCH and PDSCH:&lt;br /&gt;
*The PDCCH can occupy up to four symbols per subframe (in the graphic: two).&amp;lt;br&amp;gt;&lt;br /&gt;
*This leaves twelve symbols per subframe for the user data (i.e. for the channel PDSCH).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Via the channel&amp;amp;nbsp; &#039;&#039;&#039;PCFICH&#039;&#039;&#039;&amp;amp;nbsp; the terminal device is informed how many symbols are assigned to the control information of the PDCCH. The purpose of this dynamic division between control and user data is as follows:&lt;br /&gt;
*On the one hand, many users can be supported in this way, each with a low data rate. This scenario requires more signaling, which means that in this case the PDCCH has to contain three or four symbols.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*On the other hand, the overhead caused by PDCCH can be reduced by assigning a high data rate to only a few concurrent users.&lt;br /&gt;
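The dynamic split between PDCCH and PDSCH can be expressed as a simple overhead calculation, assuming the normal cyclic prefix with 14 OFDM symbols per subframe (two slots of seven symbols each):

```python
SYMBOLS_PER_SUBFRAME = 14  # normal cyclic prefix: 2 slots x 7 OFDM symbols

def pdcch_overhead(control_symbols: int) -> float:
    """Fraction of a subframe consumed by the PDCCH when it occupies
    1..4 of the 14 OFDM symbols (the split is signalled via the PCFICH)."""
    if not 1 <= control_symbols <= 4:
        raise ValueError("PDCCH occupies between 1 and 4 symbols")
    return control_symbols / SYMBOLS_PER_SUBFRAME

# Many low-rate users need more control symbols and hence more overhead:
for n in (1, 2, 3, 4):
    print(f"{n} control symbol(s): {pdcch_overhead(n):.1%} overhead")
```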
&lt;br /&gt;
&lt;br /&gt;
[[File:EN_LTE_T_4_4_S3b.png|left|frame|Distribution of reference symbols in the downlink|class=fit]]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In addition to the PDCCH, reference symbols are also required in the downlink to estimate the channel quality and calculate the&amp;amp;nbsp; &amp;lt;i&amp;gt;Channel Quality Indicator&amp;lt;/i&amp;gt;&amp;amp;nbsp; (CQI)&amp;amp;nbsp;. These reference symbols are distributed over the subcarriers (different frequencies) or symbols (different times) as shown in the adjacent figure.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
Regarding the other physical channels of the LTE&amp;amp;ndash;downlink, the following should be noted:&lt;br /&gt;
*The only purpose of the downlink&amp;amp;ndash;channel&amp;amp;nbsp; &#039;&#039;&#039;PHICH&#039;&#039;&#039;&amp;amp;nbsp; (&amp;lt;i&amp;gt;Physical Hybrid ARQ Indicator Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp;) is to signal whether a packet sent in the uplink has arrived.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*On the broadcast&amp;amp;ndash;channel&amp;amp;nbsp; &#039;&#039;&#039;PBCH&#039;&#039;&#039;&amp;amp;nbsp; (&amp;lt;i&amp;gt;Physical Broadcast Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp;) the base stations send system information with operating parameters as well as synchronization signals, which are required for registration in the network, to all mobile terminals in the radio cell approximately every 40 milliseconds.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The multicast&amp;amp;ndash;channel&amp;amp;nbsp; &#039;&#039;&#039;PMCH&#039;&#039;&#039;&amp;amp;nbsp; (&amp;lt;i&amp;gt;Physical Multicast Channel&amp;lt;/i&amp;gt;&amp;amp;nbsp;) has a similar purpose: through this channel, information for so-called multicast&amp;amp;ndash;transmissions is sent to several receivers simultaneously. This could be, for example, mobile television via LTE, which is planned for a future release, or something similar.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Processes on the physical layer==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
By &amp;quot;processes in the physical layer&amp;quot; one understands the various methods and procedures used in the bit transmission layer. These include, among others:&lt;br /&gt;
*&amp;lt;i&amp;gt;Timing Advance&amp;lt;/i&amp;gt;,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;i&amp;gt;Paging&amp;lt;/i&amp;gt;,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;i&amp;gt;Random Access&amp;lt;/i&amp;gt;,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;i&amp;gt;Channel Feedback Reporting&amp;lt;/i&amp;gt;,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;i&amp;gt;Power Control&amp;lt;/i&amp;gt;,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;i&amp;gt;Hybrid Automatic Repeat Request&amp;lt;/i&amp;gt;.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A complete list with the corresponding description can be found in&amp;amp;nbsp; [HT09]&amp;lt;ref name=&#039;HT09&#039;&amp;gt;&amp;lt;/ref&amp;gt;. Only the last two procedures will be discussed in more detail here.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Power control with LTE==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
By&amp;amp;nbsp; &amp;lt;i&amp;gt;Power Control&amp;lt;/i&amp;gt;&amp;amp;nbsp; one generally understands the control of the transmission power with the following goals:&lt;br /&gt;
*to improve the transmission quality,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*to increase the network capacity, and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*to reduce the power consumption.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With regard to the last point, the standardization of LTE had to take two requirements into account:&lt;br /&gt;
*On the one hand, the power consumption in the end devices was to be minimized in order to guarantee longer battery runtimes for them.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*On the other hand, it should be avoided that the base stations have to provide too much power.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With LTE,&amp;amp;nbsp; &amp;lt;i&amp;gt;Power Control&amp;lt;/i&amp;gt;&amp;amp;nbsp; is only applied in the uplink, and it is a rather &amp;quot;slow&amp;quot; power control. This means that the procedure specified for LTE does not have to react as quickly as, for example, in UMTS (&amp;lt;i&amp;gt;W&amp;amp;ndash;CDMA&amp;lt;/i&amp;gt;&amp;amp;nbsp;). The reason is that with the orthogonal carrier system &amp;quot;SC&amp;amp;ndash;FDMA&amp;quot; the so-called&amp;amp;nbsp; [[Examples_of_Communication_Systems/Nachrichtentechnische_Aspekte_von_UMTS#Near.E2.80.93Far.E2.80.93Effekt| Near&amp;amp;ndash;Far&amp;amp;ndash;Problem]] does not exist.&lt;br /&gt;
&lt;br /&gt;
*To be precise, for LTE the &amp;lt;i&amp;gt;Power Control&amp;lt;/i&amp;gt;&amp;amp;nbsp; does not control the absolute power, but the spectral power density, i.e. the power per bandwidth.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Instead of trying to smooth power peaks by temporarily reducing the transmission power, power peaks can also be used to increase the data rate for a short time.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All in all, LTE&amp;amp;ndash;power control is intended to find the optimum balance between the lowest possible power and at the same time interference that is still acceptable for the transmission quality (QoS). This is specifically achieved by estimating the loss during transmission and by calculating a correction factor according to the current site characteristics. The statements made here are largely taken from&amp;amp;nbsp; [DFJ08]&amp;lt;ref name =&#039;DFJ08&#039;&amp;gt;Dahlman, E., Furuskär A., Jading Y., Lindström M., Parkvall, S.: &#039;&#039;Key Features of the LTE Radio Interface.&#039;&#039; Ericsson Review No. 2, 2008.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Hybrid Automatic Repeat Request ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Every communication system needs a scheme for retransmitting data lost due to transmission errors in order to ensure sufficient transmission quality. In LTE,&amp;amp;nbsp; &amp;lt;i&amp;gt;Hybrid Automatic Repeat Request&amp;lt;/i&amp;gt;&amp;amp;nbsp; (HARQ)&amp;amp;nbsp; was specified for this purpose. This procedure is also used in&amp;amp;nbsp; [[Examples_of_Communication_Systems/Weiterentwicklungen_von_UMTS#HARQ.E2.80.93Verfahren_und_Node_B_Scheduling| UMTS]]&amp;amp;nbsp; in a similar form.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The procedure, based on the&amp;amp;nbsp; &amp;lt;i&amp;gt;stop&amp;amp;ndash;and&amp;amp;ndash;wait&amp;lt;/i&amp;gt;&amp;amp;nbsp;technique, is as follows:&lt;br /&gt;
*After a terminal device has received a packet from the base station, it is decoded and feedback is sent via the&amp;amp;nbsp; [[Mobile_Communications/Physical_Layer_for_LTE#Physical channels in uplink| PUCCH]].&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*In case of a failed transmission (feedback: &amp;amp;nbsp;&amp;quot;NACK&amp;quot;), the packet is resent. Only if the transmission was successful (feedback: &amp;amp;nbsp;&amp;quot;ACK&amp;quot;) is the next packet sent.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In order to ensure continuous data transfer despite the stop&amp;amp;ndash;and&amp;amp;ndash;wait&amp;amp;ndash;procedure, LTE requires several simultaneous HARQ&amp;amp;ndash;processes. In LTE, eight parallel processes are used both in the uplink and in the downlink.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp; The graphic illustrates how it works with eight simultaneous HARQ&amp;amp;ndash;processes:&lt;br /&gt;
&lt;br /&gt;
[[File:EN_LTE_T_4_4_S4a.png|center|frame|HARQ in LTE with eight simultaneous processes|class=fit]] &lt;br /&gt;
*In this example, the first process fails in the first attempt to transfer packet 1. &lt;br /&gt;
*The receiver tells this &amp;quot;Fail&amp;quot; to the sender by a &amp;quot;NACK&amp;quot;. &lt;br /&gt;
*In contrast, the second parallel process is successful with its first packet: &amp;amp;nbsp; &amp;quot;Pass&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
*In the next step (i.e. after the other seven HARQ&amp;amp;ndash;processes have sent), the first HARQ process retransmits its last packet due to the acknowledgement &amp;quot;NACK&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The second process sends a new packet due to the acknowledgement &amp;quot;ACK&amp;quot; now.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The other processes, which were ignored in this example, proceed in the same way.}}&amp;lt;br&amp;gt;&lt;br /&gt;
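The round-robin interplay of the eight stop-and-wait processes described above can be sketched in a few lines of Python. This is a simplified illustration, not the 3GPP state machine; the `outcome` callback and the packet numbering are assumptions for the example:

```python
# Simplified sketch of eight parallel stop-and-wait HARQ processes
# served round-robin; "outcome" models the ACK/NACK feedback.

def harq_round_robin(outcome, n_processes=8, n_rounds=2):
    """outcome(proc, packet, attempt) -> True (ACK) / False (NACK).
    Returns a log of (round, process, packet) transmissions."""
    pkt = [0] * n_processes        # packet currently sent per process
    attempt = [0] * n_processes    # retransmission counter per process
    log = []
    for rnd in range(n_rounds):
        for p in range(n_processes):
            log.append((rnd, p, pkt[p]))
            if outcome(p, pkt[p], attempt[p]):   # "ACK": next packet
                pkt[p] += 1
                attempt[p] = 0
            else:                                # "NACK": retransmit
                attempt[p] += 1
    return log

# As in the figure: process 0 fails its first attempt, all others pass.
def outcome(proc, packet, attempt):
    return not (proc == 0 and packet == 0 and attempt == 0)

log = harq_round_robin(outcome)
# Second round: process 0 repeats packet 0, process 1 sends packet 1.
round1 = [(p, pkt) for rnd, p, pkt in log if rnd == 1][:2]
print(round1)   # [(0, 0), (1, 1)]
```

The retransmission occurs one full round later, which is exactly why eight interleaved processes are needed to keep the link busy despite the stop-and-wait principle.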
&lt;br /&gt;
== Modulation for LTE ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
LTE uses the modulation method&amp;amp;nbsp; [[Modulation_Methods/Quadratur%E2%80%93Amplitudenmodulation#Allgemeine_Beschreibung_und_Signalraumzuordnung|Quadrature&amp;amp;ndash;Amplitude Modulation]]. Different variants are available in the uplink as well as in the downlink, namely&lt;br /&gt;
&lt;br /&gt;
*4&amp;amp;ndash;QAM (identical to QPSK) &amp;amp;nbsp;&amp;amp;#8658;&amp;amp;nbsp; 2 bits per symbol,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*16&amp;amp;ndash;QAM &amp;amp;nbsp;&amp;amp;#8658;&amp;amp;nbsp; 4 bits per symbol,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*64&amp;amp;ndash;QAM &amp;amp;nbsp;&amp;amp;#8658;&amp;amp;nbsp; 6 bits per symbol.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The signal space constellations of these variants are shown in the following graphic.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;Note:&amp;lt;/i&amp;gt; &amp;amp;nbsp; QAM is not an LTE&amp;amp;ndash;specific development, but is also used in many already established wired transmission methods, such as those of [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_DSL|DSL]] (&#039;&#039;Digital Subscriber Line&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
[[File:P ID2290 LTE T 4 4 S5a v2.png|center|frame|possible QAM signal space constellations in LTE|class=fit]]&lt;br /&gt;
&lt;br /&gt;
Depending on the environmental conditions and distance to the base station, the&amp;amp;nbsp; [[Mobile_Communications/Bit Transmission Layer_at_LTE#Scheduling_for_LTE|Scheduler]]&amp;amp;nbsp; selects the appropriate QAM&amp;amp;ndash;method (see figure):&lt;br /&gt;
[[File:P ID2291 LTE T 4 4 S5b v1.png|right|frame|Modulation method, depending on distance from base station]]&lt;br /&gt;
*64&amp;amp;ndash;QAM allows the best data rates, but is also the most susceptible to transmission interference and is therefore only used near the base stations.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The weaker the connection, the simpler the modulation method must be, which also reduces the spectral efficiency (in bit/s per Hertz).&amp;lt;br&amp;gt;&lt;br /&gt;
*Very robust is 4&amp;amp;ndash;QAM with only two bits per symbol (one each for the real and imaginary part). It can therefore be used for much larger distances than, for example, 16&amp;amp;ndash;QAM.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Due to the identical signal space constellation, 4&amp;amp;ndash;QAM is often called&amp;amp;nbsp; &amp;lt;i&amp;gt;Quaternary Phase Shift Keying&amp;lt;/i&amp;gt;&amp;amp;nbsp; (QPSK). The four signal space points are arranged in a square pattern (QAM&amp;amp;ndash;principle), but they also lie on a circle (characteristic of PSK).&amp;lt;br&amp;gt;&lt;br /&gt;
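The square constellations discussed in this section can be generated generically. The following Python sketch uses unnormalized amplitude levels (an assumption for illustration) and also checks the QPSK property of 4&ndash;QAM:

```python
# Generate the square 4-/16-/64-QAM grids (unnormalized levels).
def qam_constellation(bits_per_symbol):
    """Return the 2^b points of a square QAM grid (b even)."""
    m = 2 ** (bits_per_symbol // 2)               # levels per axis
    levels = [2 * i - (m - 1) for i in range(m)]  # ..., -3, -1, 1, 3, ...
    return [complex(re, im) for re in levels for im in levels]

for b in (2, 4, 6):                               # 4-, 16-, 64-QAM
    print(f"{2 ** b}-QAM: {len(qam_constellation(b))} points, {b} bits/symbol")

# 4-QAM property from the text: the four points form a square AND all
# lie on one circle, hence the alternative name QPSK.
radii = {round(abs(p), 9) for p in qam_constellation(2)}
print(len(radii) == 1)   # True
```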
&lt;br /&gt;
&lt;br /&gt;
[[File:EN_LTE_T_4_4_S5c.png|left|frame|throughput depending on SNR|class=fit]]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The left graphic from [MG08]&amp;lt;ref name=&#039;MG08&#039;&amp;gt;Myung, H.; Goodman, D.: &#039;&#039;Single Carrier FDMA - A New Air Interface for Long Term Evolution&#039;&#039;. West Sussex: John Wiley &amp;amp; Sons, 2008.&amp;lt;/ref&amp;gt; gives the following facts:&lt;br /&gt;
*With 4&amp;amp;ndash;QAM or QPSK (two bit/symbol), a throughput of almost one Mbit/s is achieved in the LTE&amp;amp;ndash;uplink under the assumptions made in&amp;amp;nbsp; [MG08].&lt;br /&gt;
&lt;br /&gt;
*Only above a certain signal strength (&amp;lt;i&amp;gt;Signal&amp;amp;ndash;to&amp;amp;ndash;Noise Ratio&amp;lt;/i&amp;gt;, SNR) is a higher-level QAM used, for example 16&amp;amp;ndash;QAM (4 bit/symbol) or 64&amp;amp;ndash;QAM (6 bit/symbol).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*If the SNR is sufficiently large, increasing the number of stages will lead to better results regarding the data throughput.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
It should be noted that the low-rate QPSK (4&amp;amp;ndash;QAM) is always used in the control channels, since this information&lt;br /&gt;
*on the one hand does not require high data rates due to its small size, and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*on the other hand should be received (almost) error-free due to its importance.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An exception is the channel&amp;amp;nbsp; [[Mobile_Communications/Physical Layer for LTE#Physical channels in uplink| PUSCH]]&amp;amp;nbsp; in the uplink, which transmits both user data and control data. For this reason, the same modulation type is used here for both signals.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Scheduling for LTE ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
All LTE&amp;amp;ndash;base stations contain a scheduler that must strike a balance between&lt;br /&gt;
*the highest possible total transfer rate and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*a sufficiently good Quality of Service (QoS).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A QoS&amp;amp;ndash;criterion is, for example, the&amp;amp;nbsp; &amp;lt;i&amp;gt;packet delay duration&amp;lt;/i&amp;gt;. The scheduler uses algorithms to optimize this overall situation.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Scheduling is necessary to ensure a fair distribution of resources. A concrete example is that a user who currently has a poor channel and therefore low efficiency must still be allocated sufficient resources, otherwise the desired (and guaranteed) transmission quality cannot be maintained.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The scheduler controls the selection of the modulation method and the subcarrier&amp;amp;ndash;mapping. The functionality of the scheduler is illustrated by the following graphic for the uplink. Similar statements apply to the downlink.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_LTE_T_4_4_S6.png|center|frame|Functionality of the scheduler in the LTE uplink|class=fit]]&lt;br /&gt;
&lt;br /&gt;
{{BlueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; Based on [SABM06]&amp;lt;ref name =&#039;SABM06&#039;&amp;gt;Schmidt, M.; Ahn, N.; Braun, V.; Mayer, H.P.: &#039;&#039;Performance of QoS and Channel-aware Packet Scheduling for LTE Downlink.&#039;&#039;  Alcatel-Lucent, 2006. &amp;lt;/ref&amp;gt;, [WGM07]&amp;lt;ref name =&#039;WGM07&#039;&amp;gt;Wang, X.; Giannakis, G.B.; Marques, A.G.: &#039;&#039;A Unified Approach to QoS - Guaranteed Scheduling or Channel-Adaptive Wireless Networks.&#039;&#039; Proceedings of the IEEE, Vol. 95, No. 12, Dec. 2007.&amp;lt;/ref&amp;gt; and [MG08]&amp;lt;ref name =&#039;MG08&#039;&amp;gt;&amp;lt;/ref&amp;gt; should be noted in summary:&lt;br /&gt;
*Scheduler algorithms are often very complicated due to the many optimization criteria, parameters and possible scenarios. Therefore, the design is usually based on an optimal system in which each base station knows the channel transmission functions sufficiently well at all times and transmission delays are unproblematic.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*From these boundary conditions, different approaches are created with the help of mathematical analyses&amp;amp;nbsp; [WGM07]&amp;lt;ref name =&#039;WGM07&#039;&amp;gt;&amp;lt;/ref&amp;gt;, whose effectiveness can only be verified by practical tests. A detailed description of such tests can be found in&amp;amp;nbsp; [MG08]&amp;lt;ref name =&#039;MG08&#039;&amp;gt;&amp;lt;/ref&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*In principle, the overall transmission rate can be increased by channel-dependent scheduling (exploiting frequency selectivity), but this involves a large overhead, since test signals must be sent over the entire bandwidth. The information has to be distributed to all end devices if the full optimization potential is to be exploited.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*In various tests, the clear and significant advantages (doubling of throughput) of channel-based scheduling were shown, but also the expected losses with faster moving users. More about this in the recommended document&amp;amp;nbsp; [SABM06]&amp;lt;ref name =&#039;SABM06&#039;&amp;gt;&amp;lt;/ref&amp;gt;.}}&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to its many advantages, scheduling is an integral part of LTE&amp;amp;ndash;Release 8 as specified by 3GPP.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Exercises to chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:Exercise 4.4: Modulation in LTE]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Exercise 4.4Z: Physical Channels in LTE]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==List of Sources==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/The_Application_of_OFDMA_and_SC-FDMA_in_LTE&amp;diff=34997</id>
		<title>Mobile Communications/The Application of OFDMA and SC-FDMA in LTE</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/The_Application_of_OFDMA_and_SC-FDMA_in_LTE&amp;diff=34997"/>
		<updated>2020-10-18T16:52:42Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=LTE – Long Term Evolution&lt;br /&gt;
|Vorherige Seite=Technical Innovations of LTE&lt;br /&gt;
|Nächste Seite=Physical Layer for LTE&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== General information on LTE transmission technology ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In contrast to its predecessor&amp;amp;nbsp; [[Mobile_Communications/Characteristics_of_UMTS|UMTS]],&amp;amp;nbsp;  &amp;lt;i&amp;gt;Long Term Evolution&amp;lt;/i&amp;gt;&amp;amp;nbsp; (LTE) uses a variant of the OFDM concept, also used by&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Wireless_Local_Area_Network WLAN],&amp;amp;nbsp; to systematically divide the transmission resources. The multiple access method&amp;amp;nbsp; [[Modulation_Methods/Allgemeine_Beschreibung_von_OFDM#Das_Prinzip_von_OFDM_.E2.80.93_Systembetrachtung_im_Zeitbereich_.281.29| OFDM]]&amp;amp;nbsp; can protect the system against intermittent transmission disturbances, just like the UMTS technology&amp;amp;nbsp; [[Examples_of_Communication_Systems/Telecommunication_Aspects_of_UMTS#Application_of_CDMA.E2.80.93Procedure_in_UMTS| CDMA]].&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In principle, it would have been possible to adapt and expand the technologies of the second and third mobile communications generations so that they also meet the specifications required for the fourth generation. However, the rapidly increasing complexity of CDMA when receiving signals over multiple paths made such a technical implementation appear impractical.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_Mob_T_4_3_S1.png|center|frame|difference between OFDM and CDMA|class=fit]]&lt;br /&gt;
&lt;br /&gt;
The highly abstracted graphic shows the distribution of the complete bandwidth for individual subcarriers and explains the difference between CDMA (UMTS) and OFDM (LTE). &lt;br /&gt;
*In contrast to CDMA, OFDM has many subcarriers, typically even several hundred, with a bandwidth of only a few kilohertz each. &lt;br /&gt;
*To achieve this, the data stream is split and each of the many subcarriers is modulated individually with only a small bandwidth.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
LTE uses OFDMA, an OFDM-based transmission technology. Among the reasons for this are&amp;amp;nbsp; [HT09]&amp;lt;ref name=&#039;HT09&#039;&amp;gt;Holma, H.; Toskala, A.: &#039;&#039;LTE for UMTS - OFDMA and SC-FDMA Based Radio Access.&#039;&#039; Wiley &amp;amp; Sons, 2009.&amp;lt;/ref&amp;gt;:&lt;br /&gt;
*high performance in frequency-selective channels,&amp;lt;br&amp;gt;&lt;br /&gt;
*the low complexity in the receiver,&amp;lt;br&amp;gt;&lt;br /&gt;
*good spectral properties and bandwidth flexibility, and&amp;lt;br&amp;gt;&lt;br /&gt;
*compatibility with the latest receiver&amp;amp;ndash; and multi-antenna technologies.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On the next page the differences between the multiple access methods OFDM and OFDMA are briefly explained.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Similarities and differences of OFDM and OFDMA ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The principle of&amp;amp;nbsp; &amp;lt;i&amp;gt;Orthogonal Frequency Division Multiplexing&amp;lt;/i&amp;gt;&amp;amp;nbsp; (OFDM) is explained in detail in the chapter&amp;amp;nbsp; [[Examples_of_Communication_Systems/General Description of DSL#Motivation_for_xDSL|Motivation for xDSL]]&amp;amp;nbsp; of the book &amp;quot;Modulation methods&amp;quot;. The diagram above shows the frequency assignment for OFDM: &amp;amp;nbsp; OFDM splits the available frequency band into a large number of narrow-band subcarriers. It is important to note:&lt;br /&gt;
*To ensure that the individual subcarriers exhibit as little intercarrier&amp;amp;ndash;interference as possible, their frequencies are selected so that they are orthogonal to each other.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*This means: &amp;amp;nbsp; At the center frequency of each subcarrier, all other carriers have no spectral components. The goal is to select the currently most favorable resources for each user in order to obtain an overall optimal result.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*In concrete terms, this also means that the available resources are allocated to the user who can currently do the most with them, adapted to the respective network situation. &lt;br /&gt;
*For this purpose, the base station measures the quality of the downlink connection to the terminal device with the help of reference symbols.&amp;lt;br&amp;gt;&lt;br /&gt;
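The orthogonality property described above can be checked numerically: over one symbol duration, subcarriers at integer multiples of the spacing have a vanishing inner product. A pure-Python sketch with an assumed number of samples per symbol:

```python
import cmath

N = 64   # samples per OFDM symbol (illustrative value)

def subcarrier(k):
    """One symbol period of the k-th subcarrier, sampled N times."""
    return [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

def inner(a, b):
    """Normalized inner product of two sampled signals."""
    return sum(x * y.conjugate() for x, y in zip(a, b)) / N

# Same subcarrier: inner product 1; distinct subcarriers: (numerically) 0.
print(cmath.isclose(inner(subcarrier(3), subcarrier(3)), 1))                # True
print(cmath.isclose(inner(subcarrier(3), subcarrier(5)), 0, abs_tol=1e-9))  # True
```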
&lt;br /&gt;
[[File:EN_Mob_T_4_3_S2.png|center|frame|division of data blocks by frequency and time for OFDM and OFDMA|class=fit]]&lt;br /&gt;
&lt;br /&gt;
The lower diagram shows the allocation at&amp;amp;nbsp; &amp;lt;i&amp;gt;Orthogonal Frequency Division Multiple Access&amp;lt;/i&amp;gt;&amp;amp;nbsp; (OFDMA). You can see:&lt;br /&gt;
*With OFDMA, resource allocation in response to channel fluctuations is not limited to the time domain as with OFDM; the frequency domain is also optimally included.&lt;br /&gt;
&lt;br /&gt;
*Thus the OFDMA&amp;amp;ndash;resource allocation is better adapted to the external circumstances than with OFDM. &lt;br /&gt;
*In order to make optimum use of this flexibility, however, coordination between the base station (&amp;lt;i&amp;gt;eNodeB&amp;lt;/i&amp;gt;) and the terminal equipment is necessary. More on this in chapter&amp;amp;nbsp; [[Examples_of_Communication_Systems/General Description of DSL|General Description of DSL]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Differences between OFDMA and SC-FDMA==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are transmission methods such as&amp;amp;nbsp; &lt;br /&gt;
[https://de.wikipedia.org/wiki/WiMAX WiMAX], which use OFDMA in both directions. The LTE specification by the 3GPP consortium, on the other hand, specifies:&lt;br /&gt;
*In&amp;amp;nbsp; &#039;&#039;&#039;Downlink&#039;&#039;&#039;&amp;amp;nbsp; (transmission from the base station to the terminal)&amp;amp;nbsp; &#039;&#039;&#039;OFDMA&#039;&#039;&#039;&amp;amp;nbsp; is used.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*In&amp;amp;nbsp; &#039;&#039;&#039;Uplink&#039;&#039;&#039;&amp;amp;nbsp; (transmission from terminal to base station) &amp;amp;nbsp; &#039;&#039;&#039;SC&amp;amp;ndash;FDMA&#039;&#039;&#039;&amp;amp;nbsp; (&amp;lt;i&amp;gt;Single Carrier Frequency Division Multiple Access&amp;lt;/i&amp;gt;&amp;amp;nbsp;) is used.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:EN_Mob_T_4_3_S3.png|center|frame|Sender and Receiver Structure of a SC-FDMA System|class=fit]]&lt;br /&gt;
&lt;br /&gt;
From the graphic you can see that the two systems &amp;quot;SC&amp;amp;ndash;FDMA&amp;quot; and &amp;quot;OFDMA&amp;quot; are very similar. Or in other words: &amp;amp;nbsp; SC&amp;amp;ndash;FDMA is based on OFDMA (or vice versa).&lt;br /&gt;
*If you omit the components highlighted in red&amp;amp;nbsp; ${\rm DFT} \ (K)$&amp;amp;nbsp; and&amp;amp;nbsp; ${\rm IDFT} \ (K)$&amp;amp;nbsp; from SC&amp;amp;ndash;FDMA, you get the OFDMA&amp;amp;ndash;System.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The other blocks stand for Serial/Parallel&amp;amp;ndash;Converter (S/P), Parallel/Serial&amp;amp;ndash;Converter (P/S), D/A&amp;amp;ndash;Converter, A/D&amp;amp;ndash;Converter as well as Add/Remove Prefix.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The signal generation for SC&amp;amp;ndash;FDMA works similarly to OFDMA, but with small changes that are important for mobile radio:&lt;br /&gt;
*The main difference is the additional&amp;amp;nbsp; [[Signal_Representation/Discrete_Fourier_Transform_(DFT)#Argumente_f.C3.BCr_die_diskrete_Realisierung_der_FT|discrete Fourier-Transformation]]&amp;amp;nbsp; (DFT).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*This has to be done on the transmit side directly after the serial/parallel&amp;amp;ndash;conversion.&lt;br /&gt;
&lt;br /&gt;
*Thus, it is no longer a multi-carrier procedure, but a single-carrier&amp;amp;ndash;FDMA&amp;amp;ndash;variant.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*One speaks of &amp;quot;DFT&amp;amp;ndash;spread OFDM&amp;quot; because of the necessary DFT/IDFT&amp;amp;ndash;operations.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let us summarize these statements briefly: &lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{SC&amp;amp;ndash;FDMA is different from OFDMA}$&amp;amp;nbsp; in the following points&amp;amp;nbsp; [see also Internet article&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Single-carrier_FDMA Single-carrier FDMA]&amp;amp;nbsp; (in Wikipedia) and&amp;amp;nbsp; &lt;br /&gt;
[http://www.rfwireless-world.com/Articles/difference-between-SC-FDMA-and-OFDMA.html Difference between SC-FDMA and OFDMA]&amp;amp;nbsp; (from &#039;&#039;RF Wireless World&#039;&#039;)]:&lt;br /&gt;
 &lt;br /&gt;
*With SC&amp;amp;ndash;FDMA, the data symbols are sent in a group of simultaneously transmitted subcarriers instead of sending each symbol from a single orthogonal subcarrier as with OFDMA. &lt;br /&gt;
*This subcarrier group can then be considered a separate frequency band that transmits the data sequentially. This is where the name &amp;quot;Single Carrier FDMA&amp;quot; comes from.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*While with OFDMA the data symbols directly create the different subcarriers, with SC&amp;amp;ndash;FDMA they first pass a discrete Fourier transformation (DFT). Thus the data symbols are first transformed from the time domain into the frequency domain before they pass through the OFDM&amp;amp;ndash;procedure. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One can also describe the difference between OFDMA and SC&amp;amp;ndash;FDMA in such a way:&lt;br /&gt;
*In an OFDMA&amp;amp;ndash;transmission, each orthogonal subcarrier only contains the information of a single signal.&lt;br /&gt;
&lt;br /&gt;
*In contrast, with SC&amp;amp;ndash;FDMA, each individual subcarrier contains information about all signals transmitted in this period.}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This difference and the quasi&amp;amp;ndash;sequential transmission with SC&amp;amp;ndash;FDMA can be seen particularly well from the following diagram. This is taken from a PDF document from&amp;amp;nbsp; [http://www.keysight.com/main/application.jspx?cc=DE&amp;amp;lc=ger&amp;amp;ckey=1174746&amp;amp;nid=-34867.0.00&amp;amp;id=1174746 Agilent&amp;amp;ndash;3GPP.]&lt;br /&gt;
&lt;br /&gt;
[[File:P ID2301 Mob T 4 3 S3b v1.png|center|frame|Frequency band splitting for OFDMA and SC-FDMA|class=fit]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Functionality of SC-FDMA==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Now the SC&amp;amp;ndash;FDMA&amp;amp;ndash;transmission process shall be examined in more detail. The information for this comes largely from&amp;amp;nbsp; [MG08]&amp;lt;ref name=&#039;MG08&#039;&amp;gt;Myung, H.; Goodman, D.: &#039;&#039;Single Carrier FDMA – A New Air Interface for Long Term Evolution&#039;&#039;. West Sussex: John Wiley &amp;amp; Sons, 2008.&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The purpose and function of the&amp;amp;nbsp; &amp;lt;i&amp;gt;Cyclic Prefix&amp;lt;/i&amp;gt;&amp;amp;nbsp; is not discussed in detail here. The reasons are the same as for OFDM and can be read in the section&amp;amp;nbsp; [[Modulation_Methods/Implementation of OFDM Systems#Cyclic Prefix|Cyclic Prefix]]&amp;amp;nbsp; of the book &amp;quot;Modulation Methods&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The following description refers to the SC&amp;amp;ndash;FDMA&amp;amp;ndash;Sender shown here. Note that with LTE the modulation is adapted to the channel quality: &lt;br /&gt;
*In highly noisy channels 4&amp;amp;ndash;QAM (&amp;lt;i&amp;gt;Quadrature Amplitude Modulation&amp;lt;/i&amp;gt; with only four signal space points) is used.&lt;br /&gt;
* Under better conditions, the system then switches to a higher-level QAM, up to 64&amp;amp;ndash;QAM. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P ID2304 Mob T 4 3 S4a v3.png|center|frame|Considered SC-FDMA transmitter|class=fit]]&lt;br /&gt;
&lt;br /&gt;
The following also applies:&lt;br /&gt;
*An input data block consists of&amp;amp;nbsp; $K$&amp;amp;nbsp; complex modulation symbols&amp;amp;nbsp; $x_\nu$, which are generated at a rate of&amp;amp;nbsp; $R_{\rm Q}\ \big[\rm symbols/s \big]$. The discrete Fourier transform (DFT) generates&amp;amp;nbsp; $K$&amp;amp;nbsp; symbols&amp;amp;nbsp; $X_\mu$&amp;amp;nbsp; in the frequency domain, which are modulated onto&amp;amp;nbsp; $K$&amp;amp;nbsp; of a total of&amp;amp;nbsp; $N$&amp;amp;nbsp; orthogonal subcarriers:  &lt;br /&gt;
::&amp;lt;math&amp;gt;X_\mu  =  \sum_{\nu = 0 }^{K-1}&lt;br /&gt;
  x_\nu \cdot  {\rm e}^{-{\rm j} \hspace{0.05cm}\cdot \hspace{0.05cm} { 2 \pi \hspace{0.05cm}\cdot \hspace{0.05cm} \nu &lt;br /&gt;
 \hspace{0.05cm}\cdot \hspace{0.05cm} \mu }/{K}} \hspace{0.05cm},&amp;lt;/math&amp;gt;&lt;br /&gt;
*The subcarriers are distributed over a larger bandwidth of&amp;amp;nbsp; $B_{\rm K} = N \cdot f_0$,&amp;amp;nbsp; where&amp;amp;nbsp; $f_0 = 15 \ \rm kHz$&amp;amp;nbsp; is the smallest addressable bandwidth for LTE. Unused channels are shown as dashed lines in the example graphic.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The channel transmission rate is&amp;amp;nbsp; $R_{\rm C} = J \cdot R_{\rm Q}$&amp;amp;nbsp; with spreading factor&amp;amp;nbsp; $J = N/K$. This SC&amp;amp;ndash;FDMA&amp;amp;ndash;system could simultaneously process&amp;amp;nbsp; $J$&amp;amp;nbsp; orthogonal input signals. In the case of LTE, for example, the values are&amp;amp;nbsp; $K = 12$&amp;amp;nbsp; (smallest addressable block) and&amp;amp;nbsp; $N = 1024$. $J$&amp;amp;nbsp; thus also indicates the number of terminal devices that can be simultaneously connected to this base station.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The so-called&amp;amp;nbsp; &amp;lt;i&amp;gt;subcarrier&amp;amp;ndash;mapping&amp;lt;/i&amp;gt;&amp;amp;nbsp; assigns the symbols generated by the DFT to the available subcarriers; the symbols are thereby &amp;quot;mapped&amp;quot; to a certain bandwidth. For example,&amp;amp;nbsp; $K = 12$&amp;amp;nbsp; symbols map to the range&amp;amp;nbsp; $0 \ \text{...} \ 180 \ \rm kHz$&amp;amp;nbsp; or to the range&amp;amp;nbsp; $180 \ \rm kHz \ \text{...} \ 360 \ \rm kHz$.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The IDFT (highlighted in blue above) transforms the output values&amp;amp;nbsp; $Y_\mu$&amp;amp;nbsp; from the frequency domain into their time-domain representation&amp;amp;nbsp; $y_\nu$. These samples are then transformed by the parallel/serial&amp;amp;ndash;converter into a sequence suitable for transmission.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
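The DFT given in the first bullet can be written down directly, together with its inverse; $K$ and the 4&ndash;QAM input symbols below are assumed example values:

```python
import cmath

def dft(x):
    """X_mu = sum_nu x_nu * exp(-j*2*pi*nu*mu/K), as in the formula above."""
    K = len(x)
    return [sum(x[nu] * cmath.exp(-2j * cmath.pi * nu * mu / K)
                for nu in range(K)) for mu in range(K)]

def idft(X):
    """Inverse transform back to the time domain."""
    K = len(X)
    return [sum(X[mu] * cmath.exp(2j * cmath.pi * nu * mu / K)
                for mu in range(K)) / K for nu in range(K)]

x = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]   # K = 4 modulation symbols (4-QAM)
X = dft(x)                               # frequency-domain symbols X_mu
x_back = idft(X)                         # IDFT recovers the input block
print(all(cmath.isclose(a, b, abs_tol=1e-9) for a, b in zip(x, x_back)))  # True
```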
&lt;br /&gt;
&lt;br /&gt;
== Different approaches for the subcarrier&amp;amp;ndash;Mapping==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The following figure illustrates three types of&amp;amp;nbsp; &amp;lt;i&amp;gt;Subcarrier&amp;amp;ndash;Mapping&amp;lt;/i&amp;gt;. To simplify the representation, we will limit ourselves here to the (very small) parameter values&amp;amp;nbsp; $K = 4$&amp;amp;nbsp; and&amp;amp;nbsp; $N = 12$.&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:EN_Mob_T_4_3_S4b.png|center|frame|various methods of subcarrier mapping|class=fit]]&lt;br /&gt;
&amp;lt;b&amp;gt;DFDMA&amp;lt;/b&amp;gt;&amp;amp;nbsp; or &amp;amp;nbsp;&amp;lt;i&amp;gt;Distributed Mapping&amp;lt;/i&amp;gt;: &amp;lt;br&amp;gt; Here the modulation symbols are distributed over a certain range of the available channel bandwidth.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;IFDMA&amp;lt;/b&amp;gt;&amp;amp;nbsp; or&amp;amp;nbsp; &amp;lt;i&amp;gt;Interleaved FDMA&amp;lt;/i&amp;gt;: &amp;lt;br&amp;gt;A special form of DFDMA in which the modulation symbols are distributed over the entire bandwidth at equal spacing.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;LFDMA&amp;lt;/b&amp;gt;&amp;amp;nbsp; or&amp;amp;nbsp; &amp;lt;i&amp;gt;Localized Mapping&amp;lt;/i&amp;gt;: &amp;lt;br&amp;gt;The &amp;amp;nbsp;$K$&amp;amp;nbsp; modulation symbols are assigned directly to adjacent subcarriers. This corresponds to the current 3GPP&amp;amp;ndash;specification.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
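The three mapping variants can be sketched as subcarrier index sets. A minimal Python sketch for the figure's values K = 4 and N = 12; the DFDMA spacing chosen here is an arbitrary illustrative assumption, since distributed mapping only fixes that the symbols are spread over part of the band:

```python
K, N = 4, 12              # symbols per block, total number of subcarriers
J = N // K                # spreading factor J = 3

# LFDMA ("localized"): the K symbols occupy adjacent subcarriers
lfdma = list(range(K))                  # [0, 1, 2, 3]

# IFDMA ("interleaved"): equal spacing J over the entire bandwidth
ifdma = list(range(0, N, J))            # [0, 3, 6, 9]

# DFDMA ("distributed"): spread over part of the band; spacing 2 starting
# at subcarrier 0 is an arbitrary illustrative choice
dfdma = list(range(0, 2 * K, 2))        # [0, 2, 4, 6]

print(lfdma, ifdma, dfdma)
```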
It can be shown that with SC&amp;amp;ndash;FDMA the transmitter does not have to carry out the following steps individually:&lt;br /&gt;
*Discrete Fourier Transformation (DFT),&amp;lt;br&amp;gt;&lt;br /&gt;
*Subcarrier&amp;amp;ndash;Mapping, and&amp;lt;br&amp;gt;&lt;br /&gt;
*Inverse discrete Fourier transform (IDFT) or inverse fast Fourier transform (IFFT).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Instead, these three operations can be realized together as one single linear operation. The complete and mathematically complex derivation can be found for example in&amp;amp;nbsp; [MG08]&amp;lt;ref name=&#039;MG08&#039;&amp;gt;Myung, H.; Goodman, D.: &#039;&#039;Single Carrier FDMA – A New Air Interface for Long Term Evolution&#039;&#039;. West Sussex: John Wiley &amp;amp; Sons, 2008.&amp;lt;/ref&amp;gt;. Each element&amp;amp;nbsp; $y_\nu$&amp;amp;nbsp; of the output sequence is then representable by a weighted sum of the input sequence elements&amp;amp;nbsp; $x_\nu$&amp;amp;nbsp; where the weights are complex-valued.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Hence, instead of the comparatively complicated Fourier transform, the operation is reduced&lt;br /&gt;
*to a multiplication with a complex number, and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*to the&amp;amp;nbsp; $J$&amp;amp;ndash;fold repetition of the input sequence&amp;amp;nbsp; $\langle x_\nu \rangle $.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
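This simplification can be checked numerically for the interleaved mapping: the chain DFT → subcarrier mapping → IDFT collapses to a J-fold repetition of the input block, scaled by 1/J. A small NumPy sketch (the QPSK input block is an arbitrary example):

```python
import numpy as np

K, J = 4, 3
N = J * K
x = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])   # one block of QPSK symbols

X = np.fft.fft(x)                 # DFT of the input block
Y = np.zeros(N, dtype=complex)
Y[::J] = X                        # IFDMA: interleaved subcarrier mapping
y = np.fft.ifft(Y)                # IDFT back into the time domain

# the whole chain collapses to the J-fold repetition of x, scaled by 1/J
assert np.allclose(y, np.tile(x, J) / J)
```

For localized mapping the result is not a pure repetition; there each output sample is the weighted sum of the input symbols with complex weights, in line with the representation above.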
In&amp;amp;nbsp; [[Aufgaben:Exercise 4.3: Subcarrier Mapping|Exercise 4.3]]&amp;amp;nbsp; the (transmit-side)&amp;amp;nbsp; &amp;lt;i&amp;gt;Subcarrier&amp;amp;ndash;Mapping&amp;lt;/i&amp;gt;&amp;amp;nbsp; is considered with more realistic values for&amp;amp;nbsp; $K$&amp;amp;nbsp; and&amp;amp;nbsp; $N$&amp;amp;nbsp; and its differences to the&amp;amp;nbsp; &amp;lt;i&amp;gt;Subcarrier&amp;amp;ndash;Demapping&amp;lt;/i&amp;gt;&amp;amp;nbsp; (at the receiver) are pointed out.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Advantages of SC-FDMA over OFDMA==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The decisive advantage of SC&amp;amp;ndash;FDMA over OFDMA is its lower&amp;amp;nbsp; &amp;lt;i&amp;gt;Peak&amp;amp;ndash;to&amp;amp;ndash;Average Power&amp;amp;ndash;Ratio&amp;lt;/i&amp;gt;&amp;amp;nbsp; $\rm (PAPR)$ due to its single-carrier structure. This is the ratio of current peak power&amp;amp;nbsp; $P_{\rm max}$&amp;amp;nbsp; to average power&amp;amp;nbsp; $P_{\rm S}$.  $\rm PAPR$&amp;amp;nbsp; can also be expressed by the&amp;amp;nbsp; [[Digital Signal Transmission/Optimization of Baseband Transmission Systems#Systemoptimierung_bei_Spitzenwertbegrenzung|Crest&amp;amp;ndash;factor]]&amp;amp;nbsp; (quotient of the signal amplitudes). However, the two quantities are not identical.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:P ID2308 Mob T 4 3 S5a v2.png|right|frame|(complementary)&amp;amp;nbsp; $\rm PAPR$&amp;amp;ndash; distribution function for OFDM]]&lt;br /&gt;
&lt;br /&gt;
The graphic from the Internet&amp;amp;ndash;Document&amp;amp;nbsp; [Wu09]&amp;lt;ref name =&#039;Wu09&#039;&amp;gt;Wu, B.: &#039;&#039;Analyzing WiMAX Modulation Quality.&#039;&#039; [http://mwrf.com/Articles/Print.cfm?Ad=1&amp;amp;ArticleID=22022 PDF-Internet document,] 2009.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; shows in double&amp;amp;ndash;logarithmic representation the probability that with 64&amp;amp;ndash;QAM&amp;amp;ndash;OFDM the instantaneous power exceeds the average power&amp;amp;nbsp; $P_{\rm S}$&amp;amp;nbsp; by a given amount. You can see:&lt;br /&gt;
*The probability of large &amp;quot;outliers&amp;quot; is small. For example, the average power is only exceeded in&amp;amp;nbsp; $0.1\%$&amp;amp;nbsp; of time by more than&amp;amp;nbsp; $\text{10 dB}$&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; marked in red.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Even if such high power peaks are very rare, they still pose a problem for the transmitter&#039;s power amplifier.&lt;br /&gt;
&lt;br /&gt;
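The shape of such a complementary distribution function can be reproduced by a small Monte-Carlo simulation. This sketch uses illustrative parameters (N = 64 subcarriers, 2000 random 64-QAM OFDM symbols) and is not the measurement from [Wu09]:

```python
import numpy as np

rng = np.random.default_rng(1)
N, runs = 64, 2000
# 64-QAM amplitude levels per quadrature component, unit average symbol power
levels = np.array([-7, -5, -3, -1, 1, 3, 5, 7]) / np.sqrt(42)

papr_db = np.empty(runs)
for i in range(runs):
    sym = rng.choice(levels, N) + 1j * rng.choice(levels, N)
    s = np.fft.ifft(sym)                    # one OFDM symbol in the time domain
    papr = np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2)
    papr_db[i] = 10 * np.log10(papr)

# complementary distribution: fraction of symbols exceeding a dB threshold
for threshold in (6, 8, 10):
    print(threshold, "dB:", np.mean(papr_db > threshold))
```

As in the figure, the exceedance probability falls off steeply with increasing threshold.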
The power amplifier should be operated in its linear range, since otherwise the signal is distorted. Non-linear operation causes in particular&lt;br /&gt;
*Intercarrier&amp;amp;ndash;Interference within the signal,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Interference with adjacent channels due to spectral broadening.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, OFDM requires the amplifier to operate at a lower power level than its peak power most of the time, which can drastically reduce its efficiency.&lt;br /&gt;
&lt;br /&gt;
*Because SC&amp;amp;ndash;FDMA can be regarded as a quasi single&amp;amp;ndash;carrier transmission method, its $\rm PAPR$&amp;amp;nbsp; is lower than that of OFDMA. &lt;br /&gt;
*In addition, a so-called&amp;amp;nbsp; &amp;lt;i&amp;gt;pulse&amp;amp;ndash;shaping&amp;lt;/i&amp;gt;&amp;amp;ndash;filter can be used, which reduces the&amp;amp;nbsp; $\rm PAPR$&amp;amp;nbsp; further.&lt;br /&gt;
&lt;br /&gt;
The lower&amp;amp;nbsp; $\rm PAPR$&amp;amp;nbsp; is the main reason why in LTE&amp;amp;ndash;Uplink SC&amp;amp;ndash;FDMA is used and not OFDMA.&lt;br /&gt;
*A low&amp;amp;nbsp; $\rm PAPR$&amp;amp;nbsp; means longer battery life, an extremely important criterion for mobile phones/smartphones. &lt;br /&gt;
*At the same time, SC&amp;amp;ndash;FDMA offers similar performance and complexity to OFDMA. &lt;br /&gt;
*Since a long battery life is less important for the downlink, OFDMA is used here.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp; We consider an OFDM&amp;amp;ndash;system with&amp;amp;nbsp; $N$&amp;amp;nbsp; carriers, all with the same signal amplitude&amp;amp;nbsp; $A$. A highly simplified calculation (with the same proportionality factor in both cases) yields:&lt;br /&gt;
&lt;br /&gt;
*the maximum signal power is proportional to&amp;amp;nbsp; $(N \cdot A)^2$, and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*the average signal power is proportional to&amp;amp;nbsp; $N \cdot A^2$ .&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This results in the&amp;amp;nbsp; &amp;lt;i&amp;gt;Peak&amp;amp;ndash;to&amp;amp;ndash;Average Power&amp;amp;ndash;Ratio&amp;lt;/i&amp;gt;&amp;amp;nbsp; ${\rm PAPR} = N$, since it is the quotient of these two powers. Even with only two carriers this gives&amp;amp;nbsp; ${\rm PAPR} = 2$,&amp;amp;nbsp; which corresponds to&amp;amp;nbsp; $\text{3 dB}$.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*So even with only two carriers the amplifier must always operate &amp;amp;nbsp; $\text{3 dB}$&amp;amp;nbsp; below the maximum power to avoid signal distortion in case of signal peaks. &lt;br /&gt;
*As will be shown below,&amp;amp;nbsp; $\text{3 dB}$&amp;amp;nbsp; already means a decrease in efficiency to&amp;amp;nbsp; $85\%$.}}&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
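The result PAPR = N of Example 1 can be verified numerically for sums of equal-amplitude complex exponential carriers; a minimal sketch:

```python
import numpy as np

A = 1.0
t = np.arange(0, 1, 1 / 4096)        # one signal period, densely sampled
for N in (2, 4, 8):
    # sum of N equal-amplitude complex carriers at integer multiples of 1 Hz
    s = sum(A * np.exp(2j * np.pi * k * t) for k in range(N))
    p = np.abs(s) ** 2
    papr = p.max() / p.mean()        # peak (N*A)^2, average N*A^2  ->  PAPR = N
    print(N, round(papr, 2))
```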
&lt;br /&gt;
The&amp;amp;nbsp; &amp;lt;i&amp;gt;Peak&amp;amp;ndash;to&amp;amp;ndash;Average Power&amp;amp;ndash;Ratio&amp;lt;/i&amp;gt;&amp;amp;nbsp; $\rm (PAPR)$&amp;amp;nbsp; is directly related to the&amp;amp;nbsp; &#039;&#039;transmit amplifier efficiency&#039;&#039;. Maximum efficiency is achieved when the amplifier can operate in the vicinity of the saturation limit.&lt;br /&gt;
 &lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp; The graphic shows an example of an amplifier&#039;s characteristic curve, i.e. the output power plotted against the input power.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_Mob_T_4_3_S5b.png|right|frame|Decrease in amplifier efficiency &amp;lt;br&amp;gt;with increasing &amp;quot;Back-off&amp;quot;]]&lt;br /&gt;
*At&amp;amp;nbsp; $\rm PAPR = 1$&amp;amp;nbsp; $(\text{0 dB})$&amp;amp;nbsp; one could set the average power&amp;amp;nbsp; $P_{\rm S}$&amp;amp;nbsp; equal to the allowed peak power&amp;amp;nbsp; $P_{\rm max}$. According to the characteristic curve&amp;amp;nbsp; $P_{\rm out}/P_{\rm in}$&amp;amp;nbsp; the amplifier efficiency would be (in this example)&amp;amp;nbsp; $95\%$.&amp;lt;br&amp;gt;&lt;br /&gt;
*Nevertheless, for large&amp;amp;nbsp; $\rm PAPR$&amp;amp;nbsp; the amplifier must be operated below the saturation limit to avoid too much signal distortion.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are some numerical examples:&lt;br /&gt;
*At&amp;amp;nbsp; $\rm PAPR = 2$,&amp;amp;nbsp; according to the rough calculation above, the average transmit power would have to be chosen&amp;amp;nbsp; $\text{3 dB}$&amp;amp;nbsp; lower than the allowed peak power, so that&amp;amp;nbsp; $P_{\rm max}$&amp;amp;nbsp; is not exceeded at any time. The efficiency would then decrease to&amp;amp;nbsp; $85\%$.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*A back&amp;amp;ndash;off of&amp;amp;nbsp; $\text{3 dB}$&amp;amp;nbsp; is usually not sufficient; in practice, values between&amp;amp;nbsp; $\text{5 dB}$&amp;amp;nbsp; and&amp;amp;nbsp; $\text{8 dB}$&amp;amp;nbsp; are used, see&amp;amp;nbsp;[Hin08]&amp;lt;ref name=&#039;Hin08&#039;&amp;gt;Hindelang, T.: &#039;&#039;Mobile Communications. Lecture Manuscript.&#039;&#039; Chair of Communications Engineering, TU Munich, 2008.&amp;lt;/ref&amp;gt;. According to the above curve, at a back&amp;amp;ndash;off of&amp;amp;nbsp; $\text{5 dB}$&amp;amp;nbsp; the efficiency already drops to only&amp;amp;nbsp; $70\%$&amp;amp;nbsp; (system&amp;amp;nbsp; $\rm S1$, green line).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*With system&amp;amp;nbsp; $\rm S2$&amp;amp;nbsp; (back&amp;amp;ndash;off&amp;amp;nbsp; $\text{8 dB}$) all signal peaks up to&amp;amp;nbsp; $\text{8 dB}$&amp;amp;nbsp; above the average power can be transmitted by the amplifier without distortion, but the amplifier efficiency is then only&amp;amp;nbsp; $40\%$. As can be seen in the first graphic on this page, strong distortions still occur about&amp;amp;nbsp; $2\%$ of the time.&lt;br /&gt;
&lt;br /&gt;
*If the average transmit power is&amp;amp;nbsp; $P_{\rm S} = 100\, \rm mW$, then with&amp;amp;nbsp; $\rm PAPR = 9 \ \text{(8 dB)}$&amp;amp;nbsp; the amplifier must work up to&amp;amp;nbsp; $P_{\rm max} = 900\, \rm mW$&amp;amp;nbsp; without distortion; with&amp;amp;nbsp; $\rm PAPR = 2 \ \text{(3 dB)}$,&amp;amp;nbsp; on the other hand, only up to&amp;amp;nbsp; $200 \, \rm mW$. The difference between the two amplifiers is an enormous cost factor.}}&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
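The dB values used in these examples follow from the relation factor = 10^(dB/10); a quick numerical check:

```python
import math

def db_to_factor(db):
    """Convert a dB value to the corresponding linear power factor."""
    return 10 ** (db / 10)

def factor_to_db(factor):
    """Convert a linear power factor to dB."""
    return 10 * math.log10(factor)

print(round(factor_to_db(2), 1))   # PAPR = 2 corresponds to about 3 dB
print(round(db_to_factor(5), 2))   # 5 dB back-off: power factor of about 3.16
print(round(db_to_factor(8), 2))   # 8 dB back-off: power factor of about 6.31
```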
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; Based on this information we can summarize:&lt;br /&gt;
*OFDM with a large back&amp;amp;ndash;off in the uplink would lead to problems, namely extremely short battery life of the mobile devices. Therefore, SC&amp;amp;ndash;FDMA is used in the LTE&amp;amp;ndash;uplink.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*In addition, the complexity of SC&amp;amp;ndash;FDMA is generally lower than that of other methods, which means cheaper terminals&amp;amp;nbsp; [MLG06]&amp;lt;ref name=&#039;MLG06&#039;&amp;gt;Myung, H.; Lim, J.; Goodman, D.: &#039;&#039;Single Carrier FDMA for Uplink Wireless Transmission.&#039;&#039; IEEE Vehicular Technology Magazine, Vol. 1, No. 3, 2006.&amp;lt;/ref&amp;gt;. If the CDMA used in UMTS were extended to the 4G&amp;amp;ndash;standard, the receiver complexity would increase significantly due to the high frequency diversity in the channel&amp;amp;nbsp; [IXIA09]&amp;lt;ref name=&#039;IXIA09&#039;&amp;gt;&#039;&#039;SC-FDMA - Single Carrier FDMA in LTE.&#039;&#039; (PDF document on the Internet), 2009.&amp;lt;/ref&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*However, the frequency domain equalization with SC&amp;amp;ndash;FDMA is more complicated than with OFDMA. This is the main reason why SC&amp;amp;ndash;FDMA is only used in the uplink: &amp;amp;nbsp; the more complex equalizers then only have to be installed in the base stations and not in the terminals.}}&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Exercises for the Chapter ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:Exercise 4.3: Subcarrier Mapping]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Exercise 4.3Z: Multiple-Access Methods in LTE]]&lt;br /&gt;
&lt;br /&gt;
==List of Sources==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/Technical_Innovations_of_LTE&amp;diff=34996</id>
		<title>Mobile Communications/Technical Innovations of LTE</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Mobile_Communications/Technical_Innovations_of_LTE&amp;diff=34996"/>
		<updated>2020-10-17T20:36:02Z</updated>

		<summary type="html">&lt;p&gt;Rosa: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=LTE – Long Term Evolution&lt;br /&gt;
|Vorherige Seite=General Information on the LTE Mobile Communications Standard&lt;br /&gt;
|Nächste Seite=The Application of OFDMA and SC-FDMA in LTE&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Voice transmission with LTE ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Unlike previous mobile phone standards, LTE only supports &#039;&#039;packet-oriented transmission&#039;&#039;. For voice transmission, however, a connection-oriented transmission with fixed reservation of resources would be better, since a &amp;quot;fragmented transmission&amp;quot;, as is the case with the packet-oriented method, is relatively complicated.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem of integrating voice transmission methods was one of the major challenges in the development of LTE, as voice transmission remains the largest source of revenue for network operators. There were a number of approaches, as it can be seen in the internet article &amp;amp;nbsp; [Gut10]&amp;lt;ref name=&#039;Gut10&#039;&amp;gt;Gutt, E.: &#039;&#039;LTE - a new dimension of mobile broadband use&#039;&#039;. [http://www.ltemobile.de/uploads/media/LTE_Einfuehrung_V1.pdf PDF document on the Internet], 2010.&amp;lt;/ref&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(1)&#039;&#039;&#039; &amp;amp;nbsp; A very simple and obvious method is&amp;amp;nbsp; &amp;lt;i&amp;gt;Circuit Switched Fallback&amp;lt;/i&amp;gt;&amp;amp;nbsp; (&#039;&#039;&#039;CSFB&#039;&#039;&#039;). Here a circuit-switched transmission is used for the voice connection. The principle is:&lt;br /&gt;
*The terminal device logs on to the LTE&amp;amp;ndash;network and in parallel also to a GSM&amp;amp;ndash; or UMTS&amp;amp;ndash;network. When an incoming call is received, the terminal device receives a message from the&amp;amp;nbsp; &amp;lt;i&amp;gt;Mobile Management Entity&amp;lt;/i&amp;gt;&amp;amp;nbsp; (MME, control node in the LTE&amp;amp;ndash;network for user&amp;amp;ndash;authentication), whereupon a circuit-switched connection via the GSM&amp;amp;ndash; or the UMTS&amp;amp;ndash;network is established.&lt;br /&gt;
*A disadvantage of this solution (actually it is a &amp;quot;problem concealment&amp;quot;) is the greatly delayed connection establishment. In addition, CSFB prevents the complete conversion of the network to LTE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(2)&#039;&#039;&#039; &amp;amp;nbsp; Another possibility for the integration of voice in a packet-oriented transmission system is offered by&amp;amp;nbsp; &amp;lt;i&amp;gt;Voice over LTE via GAN&amp;lt;/i&amp;gt;&amp;amp;nbsp; (&#039;&#039;&#039;VoLGA&#039;&#039;&#039;), which is based on the&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Generic_Access_Network Generic Access Network]&amp;amp;nbsp; (GAN) developed by&amp;amp;nbsp; [[Mobile_Communications/General Information on the LTE Mobile Communications Standard#3GPP_. E2.80.93_Third_Generation_Partnership_Project| 3GPP]]. In brief, the principle can be described as follows:&lt;br /&gt;
* GAN enables circuit-switched services via a packet-oriented network (IP&amp;amp;ndash;network), for example WLAN&amp;amp;nbsp; (&amp;lt;i&amp;gt;Wireless Local Area Network&amp;lt;/i&amp;gt;). With compatible end devices, one can register in the GSM&amp;amp;ndash;network over a WLAN&amp;amp;ndash;connection and use circuit-switched services. VoLGA uses this functionality by replacing WLAN with LTE.&lt;br /&gt;
* The fast implementation of VoLGA is advantageous, as no lengthy new development or changes to the core network are necessary. However, a so-called&amp;amp;nbsp; &amp;lt;i&amp;gt;VoLGA Access Network Controller&amp;lt;/i&amp;gt;&amp;amp;nbsp; (VANC) must be added to the network as hardware. This takes care of the communication between the end device and the&amp;amp;nbsp; &amp;lt;i&amp;gt;Mobile Management Entity&amp;lt;/i&amp;gt;&amp;amp;nbsp; or the core network.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even though VoLGA, unlike CSFB, does not need a GSM&amp;amp;ndash; or UMTS&amp;amp;ndash;network for voice connections, it was regarded by the majority of the mobile communications community as an (unsatisfactory) bridge technology. T&amp;amp;ndash;Mobile was long a proponent of the VoLGA&amp;amp;ndash;technology, but stopped further development in February 2011.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the following we describe a better solution proposal. Keywords are&amp;amp;nbsp; &amp;lt;i&amp;gt;IP Multimedia Subsystem&amp;lt;/i&amp;gt;&amp;amp;nbsp; (IMS) and&amp;amp;nbsp; &amp;lt;i&amp;gt;Voice over LTE&amp;lt;/i&amp;gt;&amp;amp;nbsp; (VoLTE). The operators in Germany switched to this technology relatively late: &amp;amp;nbsp; Vodafone and O2 Telefonica at the beginning of 2015, Telekom at the beginning of 2016. &lt;br /&gt;
&lt;br /&gt;
This is also the reason why the switch to LTE in Germany (and in Europe in general) was slower than in the USA.  Many customers did not want to pay the higher prices for LTE as long as there was no well-functioning solution for integrating voice transmission.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== VoLTE - Voice over LTE ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
From today&#039;s point of view (2016), the most promising &amp;amp;ndash; and in part already established &amp;amp;ndash; approach to integrating voice services into the LTE&amp;amp;ndash;network is&amp;amp;nbsp; &amp;lt;i&amp;gt;Voice over LTE&amp;lt;/i&amp;gt;&amp;amp;nbsp; &amp;amp;ndash; in short: &#039;&#039;&#039;VoLTE&#039;&#039;&#039;. This standard, officially adopted by the&amp;amp;nbsp; [http://www.gsma.com/aboutus/ GSMA],&amp;amp;nbsp; the worldwide industry association of more than 800 mobile network operators and over 200 manufacturers of cell phones and network infrastructure, is exclusively IP&amp;amp;ndash;packet-oriented and is based on the&amp;amp;nbsp; &amp;lt;i&amp;gt;IP Multimedia Subsystem&amp;lt;/i&amp;gt;&amp;amp;nbsp; (&#039;&#039;&#039;IMS&#039;&#039;&#039;), which was already defined in the UMTS&amp;amp;ndash;Release 9 in 2010. The technical facts about IMS are:&lt;br /&gt;
*The IMS basic protocol is the&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Session_Initiation_Protocol Session Initiation Protocol]&amp;amp;nbsp; (SIP), known from&amp;amp;nbsp; &amp;lt;i&amp;gt;Voice over IP&amp;lt;/i&amp;gt;. This is a network protocol that can be used to establish and control connections between two users.&lt;br /&gt;
* This protocol enables the development of a completely (for data &amp;lt;u&amp;gt;and&amp;lt;/u&amp;gt; voice) IP-based network and is therefore future-proof.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The reason why the introduction of VoLTE was delayed by four years compared to the establishment of LTE for data traffic is the difficult interaction of &amp;quot;4G&amp;quot; with the older predecessor standards&amp;amp;nbsp; GSM&amp;amp;nbsp; (&amp;quot;2G&amp;quot;) and&amp;amp;nbsp; UMTS&amp;amp;nbsp; (&amp;quot;3G&amp;quot;). Here is an example:&lt;br /&gt;
*If a mobile phone user leaves his LTE&amp;amp;ndash;cell and switches to an area without 4G&amp;amp;ndash;coverage, an immediate switch to the next best standard (3G) must be made.&lt;br /&gt;
&lt;br /&gt;
*Voice is transmitted here in a technically completely different way: no longer in many small data packets &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; &amp;quot;packet-switched&amp;quot;, but sequentially in the logical and physical channels reserved especially for the user &amp;amp;nbsp; &amp;amp;#8658;&amp;amp;nbsp; &amp;quot;circuit-switched&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*This switch must be so fast and smooth that the end customer does not notice anything, and it must work for all mobile phone standards and technologies.&lt;br /&gt;
&lt;br /&gt;
According to all the experts, VoLTE will have a positive impact on mobile telephony in the same way that LTE has driven the mobile Internet forward since 2011. Key benefits for users are:&lt;br /&gt;
*A&amp;amp;nbsp; &amp;lt;i&amp;gt;higher voice quality&amp;lt;/i&amp;gt;, as VoLTE uses&amp;amp;nbsp; [[Examples_of_Communication_Systems/Nachrichtentechnische_Aspekte_von_UMTS#Verbesserungen_bez.C3.BCglich_Sprachcodierung| AMR&amp;amp;ndash;Wideband Codecs]]&amp;amp;nbsp; at 12.65 or 23.85 kbit/s. Furthermore, the VoLTE&amp;amp;ndash;data packets are prioritized for the lowest possible latencies.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*An enormously&amp;amp;nbsp; &amp;lt;i&amp;gt;accelerated connection setup&amp;lt;/i&amp;gt; within one or two seconds, whereas with&amp;amp;nbsp; &amp;lt;i&amp;gt;Circuit Switched Fallback&amp;lt;/i&amp;gt; (CSFB) it takes an unpleasantly long time to establish a connection.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*A&amp;amp;nbsp; &amp;lt;i&amp;gt;low battery consumption&amp;lt;/i&amp;gt;, significantly lower than with &amp;quot;2G&amp;quot; and &amp;quot;3G&amp;quot;, and thus a longer battery life. Compared to the usual VoIP&amp;amp;ndash;services, too, the power consumption is up to 40% lower.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the provider&#039;s point of view, the following advantages result:&lt;br /&gt;
*A&amp;amp;nbsp; &amp;lt;i&amp;gt;better spectral efficiency&amp;lt;/i&amp;gt;: &amp;amp;nbsp; Twice as many calls are possible in the same frequency band as with &amp;quot;3G&amp;quot;. In other words: &amp;amp;nbsp; More capacity is available for data services for the same number of calls.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*An easy implementation of&amp;amp;nbsp; [https://de.ryte.com/wiki/Rich_Media Rich Media Services]&amp;amp;nbsp; (RCS), for example for video telephony or future applications that can be used to attract new customers.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*A&amp;amp;nbsp; &amp;lt;i&amp;gt;better acceptance&amp;lt;/i&amp;gt;&amp;amp;nbsp; of the higher provisioning costs by LTE&amp;amp;ndash;customers if calls no longer have to be handed off to a &amp;quot;lower-value&amp;quot; network like &amp;quot;2G&amp;quot; or &amp;quot;3G&amp;quot; for telephony.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Bandwidth flexibility ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
LTE can be adapted to frequency bands of different widths with relatively little effort by using&amp;amp;nbsp; [[Modulation_Methods/Allgemeine_Beschreibung_von_OFDM#Das_Prinzip_von_OFDM_.E2.80.93_Systembetrachtung_im_Zeitbereich_.281.29|OFDM]]&amp;amp;nbsp; (&amp;quot;Orthogonal Frequency Division Multiplex&amp;quot;). This fact is an important feature for various reasons, see&amp;amp;nbsp; [Mey10]&amp;lt;ref name=&#039;Mey10&#039;&amp;gt;Meyer, M.: &#039;&#039;Siebenmeilenfunk.&#039;&#039; c&#039;t 2010, issue 25, 2010.&amp;lt;/ref&amp;gt;, especially for network operators:&lt;br /&gt;
*The frequency bands for LTE may vary in size depending on the legal requirements in different countries. The outcome of the state-specific auctions of LTE&amp;amp;ndash;frequencies (separated into FDD and TDD) has also influenced the width of the spectrum.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Often LTE is operated in the &amp;quot;frequency&amp;amp;ndash;neighborhood&amp;quot; of established radio transmission systems that are expected to be switched off soon. If demand increases, LTE can be gradually expanded into the frequency range that becomes available.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*For example, the migration of television channels after digitalization: &amp;amp;nbsp; A part of the LTE&amp;amp;ndash;network will be located in the UHF&amp;amp;ndash;frequency range around 800 MHz, which has now been freed up, see&amp;amp;nbsp; [[Mobile_Communications/General Information on the LTE Mobile Communications Standard#LTE_Frequency_Band_Splitting|Frequency_Band_Splitting Graphic]].&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*In principle, the bandwidth could be selected with a granularity as fine as 15 kHz (corresponding to one OFDMA&amp;amp;ndash;subcarrier). However, since this would produce unnecessary overhead, a duration of&amp;amp;nbsp; &#039;&#039;&#039;one millisecond&#039;&#039;&#039;&amp;amp;nbsp; and a bandwidth of&amp;amp;nbsp; &#039;&#039;&#039;180 kHz&#039;&#039;&#039;&amp;amp;nbsp; have been specified as the smallest addressable LTE&amp;amp;ndash;resource. Such a block corresponds to twelve subcarriers (180 kHz divided by 15 kHz).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In order to keep the complexity and effort of hardware standardization as low as possible, a whole range of permissible bandwidths between 1.4 MHz and 20 MHz has been agreed upon. The following list &amp;amp;ndash; taken from&amp;amp;nbsp; [Ges08]&amp;lt;ref name=&#039;Ges08&#039;&amp;gt;Gessner, C.: &#039;&#039;UMTS Long Term Evolution (LTE): Technology Introduction.&#039;&#039; Rohde&amp;amp;Schwarz, 2008.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; &amp;amp;ndash; specifies the standardized bandwidths, the number of available blocks and the &amp;quot;overhead&amp;quot;:&lt;br /&gt;
*6 available blocks in the bandwidth 1.4 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead approx. 22.8%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*15 available blocks in the bandwidth 3 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead about 10%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*25 available blocks in the bandwidth 5 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead approx. 10%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*50 available blocks in the bandwidth 10 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead approx. 10%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*75 available blocks in the bandwidth 15 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead about 10%,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*100 available blocks in the bandwidth 20 MHz &amp;amp;nbsp; &amp;amp;#8658; &amp;amp;nbsp; relative overhead about 10%.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since otherwise some LTE&amp;amp;ndash;specific functions would not work, at least six blocks must be provided. &lt;br /&gt;
*The relative overhead is comparatively high at small channel bandwidth (1.4 MHz): &amp;amp;nbsp; (1.4 &amp;amp;ndash; 6 &amp;amp;middot; 0.18)/1.4 &amp;amp;asymp; 22.8%. &lt;br /&gt;
*From a bandwidth of 3 MHz the relative overhead is constant 10%. &lt;br /&gt;
*All end devices must also support the maximum bandwidth of 20 MHz&amp;amp;nbsp; [Ges08]&amp;lt;ref name=&#039;Ges08&#039;&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
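The listed overhead values follow directly from the block arithmetic, since each block occupies 180 kHz. A short check of the list above:

```python
# standardized LTE bandwidths (MHz) mapped to their number of 180-kHz blocks
configs = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

for bw, blocks in configs.items():
    used = blocks * 0.18              # spectrum occupied by the blocks, in MHz
    overhead = (bw - used) / bw       # relative overhead
    print(f"{bw} MHz: {overhead:.1%}")
```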
&lt;br /&gt;
== FDD, TDD and half-duplex method ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:EN_Mob_T_4_2_S3a.png|right|frame|transmission scheme for FDD (top) or TDD (bottom)|class=fit]]&lt;br /&gt;
Another important innovation of LTE is the half&amp;amp;ndash;duplex procedure, which is a mixture of the two&amp;amp;nbsp; [[Examples_of_Communication_Systems/General_Description_of_UMTS#Full Duplex Procedure|duplex procedures]]&amp;amp;nbsp; already known from UMTS:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Frequency Division Duplex&#039;&#039;&#039;&amp;amp;nbsp; (FDD), and&amp;lt;br&amp;gt;&lt;br /&gt;
*&#039;&#039;&#039;Time Division Duplex&#039;&#039;&#039;&amp;amp;nbsp; (TDD) .&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Such duplexing is necessary to ensure that uplink and downlink are clearly separated from each other and that transmission runs smoothly. The diagram illustrates the difference between FDD&amp;amp;ndash; and TDD&amp;amp;ndash;based transmission.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using the FDD and TDD methods, LTE can be operated in paired and unpaired frequency ranges.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The two methods present opposing requirements:&lt;br /&gt;
*FDD requires a paired spectrum, i.e. one frequency band for transmission from the base station to the terminal (downlink) and one for transmission in the opposite direction (uplink). Downlink and uplink can be used at the same time.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*TDD was designed for unpaired spectra. Here only one band is needed for uplink and downlink; however, transmitter and receiver must alternate in time. The main problem of TDD is the required synchronicity of the networks.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the graphic above the differences between FDD and TDD can be seen. In TDD a &#039;&#039;Guard Period&#039;&#039; has to be inserted when changing from downlink to uplink (or vice versa) to avoid an overlapping of the signals.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Although FDD is likely to be used more in practice (and FDD&amp;amp;ndash;frequencies were also much more expensive for the providers), there are several reasons for TDD:&lt;br /&gt;
*Frequencies are a rare and expensive commodity, as the 2010 auction has shown.  But TDD needs only half of the frequency bandwidth.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The TDD technique allows different modes, which determine how much time should be used for downlink or uplink and can be adjusted to individual requirements.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the actual innovation, the &#039;&#039;&#039;Half&amp;amp;ndash;Duplex&amp;amp;ndash;Method&#039;&#039;&#039;, you need a paired spectrum as with FDD (see second graphic):&lt;br /&gt;
[[File:P ID2276 Mob T 4 2 S4b v1.png|right|frame|Transmission scheme for half-duplex|class=fit]] &lt;br /&gt;
*As with TDD, transmission and reception still alternate in time: each terminal device can either transmit or receive at a given time.&lt;br /&gt;
*Through a second connection to another end device with swapped downlink/uplink&amp;amp;ndash;raster, the entire available bandwidth can still be fully used.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The main advantage of the half&amp;amp;ndash;duplex&amp;amp;ndash;process is that the use of the TDD&amp;amp;ndash;concept reduces the demands on the end devices and thus allows them to be produced at a lower cost.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that this aspect was of great importance in the standardization can also be seen in the use of OFDMA in the downlink and of SC&amp;amp;ndash;FDMA in the uplink: &lt;br /&gt;
*This results in a longer battery life of the end devices and allows the use of cheaper components. &lt;br /&gt;
*More about this can be found in chapter&amp;amp;nbsp; [[Mobile_Communications/The_Application_of_OFDMA_and_SC-FDMA_in_LTE | The Application of OFDMA and SC-FDMA in LTE]].&lt;br /&gt;
&lt;br /&gt;
== Multiple Antenna Systems==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If a radio system uses several transmitting and receiving antennas, one speaks of&amp;amp;nbsp; &#039;&#039;&#039;Multiple Input Multiple Output&#039;&#039;&#039;&amp;amp;nbsp; (MIMO). This is not an LTE&amp;amp;ndash;specific development. WLAN, for example, also uses this technology. &lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp; The principle of multi-antenna systems is illustrated in the following figure using the example of 2&amp;amp;times;2&amp;amp;ndash;MIMO (two transmitting and two receiving antennas).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_Mob_T_4_2_S3b.png|right|frame|The difference between SISO and MIMO|class=fit]]&lt;br /&gt;
The new thing about LTE is not the use of&amp;amp;nbsp; &amp;lt;i&amp;gt;Multiple Input Multiple Output&amp;lt;/i&amp;gt;&amp;amp;nbsp; as such, but its particularly intensive use, namely 2&amp;amp;times;2&amp;amp;ndash;MIMO in the uplink and up to 4&amp;amp;times;4&amp;amp;ndash;MIMO in the downlink. &lt;br /&gt;
&lt;br /&gt;
In the successor&amp;amp;nbsp; [[Mobile_Communications/LTE%E2%80%93Advanced_%E2%80%93_a_Further_Development_of_LTE|LTE&amp;amp;ndash;Advanced]]&amp;amp;nbsp; the use of MIMO is even more pronounced, namely &amp;quot;4&amp;amp;times;4&amp;quot; in the uplink and &amp;quot;8&amp;amp;times;8&amp;quot; in the opposite direction.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A MIMO&amp;amp;ndash;system has advantages compared to&amp;amp;nbsp; &#039;&#039;Single Input Single Output&#039;&#039;&amp;amp;nbsp; (SISO, only one transmitting and one receiving antenna). A distinction is made between several gains depending on the channel:&lt;br /&gt;
*&amp;lt;b&amp;gt;Power gain&amp;lt;/b&amp;gt;&amp;amp;nbsp; according to the number of receiving antennas: &amp;amp;nbsp; &amp;lt;br&amp;gt;If the radio signals arriving via several antennas are combined in a suitable way&amp;amp;nbsp; ([https://en.wikipedia.org/wiki/Maximal-ratio_combining Maximal-ratio Combining]), the reception power is increased and the radio connection is improved. Doubling the number of antennas yields a power gain of at most 3 dB.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;b&amp;gt;Diversity gain&amp;lt;/b&amp;gt; through spatial diversity (&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Antenna_diversity Spatial Diversity]): If several spatially separated receiving antennas are used in an environment with strong multipath propagation, the fading at the individual antennas is mostly independent from each other and the probability that all antennas are affected by fading at the same time is very low.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*&amp;lt;b&amp;gt;Data rate gain&amp;lt;/b&amp;gt;: &amp;amp;nbsp; &amp;lt;br&amp;gt; MIMO is particularly efficient in environments with strong multipath propagation, above all when transmitter and receiver do not have a direct line of sight and the transmission is done via reflections. Tripling the number of antennas at transmitter and receiver results in approximately twice the data rate.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, it is not possible for all advantages to occur simultaneously. Depending on the nature of the channel, it can also happen that one does not even have the choice of which advantage one wants to use.&amp;lt;br&amp;gt;&lt;br /&gt;
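The orders of magnitude of these gains can be checked with a short calculation. The sketch below computes the ideal maximal-ratio-combining power gain of 10·lg(N) dB for N receiving antennas, and the outage probability under independent fading; the fading probability p is an assumed, illustrative value:

```python
import math

# Power gain of maximal-ratio combining with n_rx receiving antennas:
# in the ideal case the received powers add, so the gain is 10*lg(n_rx) dB.
def mrc_gain_db(n_rx):
    return 10.0 * math.log10(n_rx)

print(f"2 antennas: {mrc_gain_db(2):.2f} dB gain")   # at most about 3 dB
print(f"4 antennas: {mrc_gain_db(4):.2f} dB gain")   # at most about 6 dB

# Diversity gain: if each antenna fades independently with an assumed
# probability p, all antennas fade simultaneously with probability p**n.
p = 0.1
for n in (1, 2, 4):
    print(f"{n} antenna(s): outage probability {p**n:.4f}")
```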
&lt;br /&gt;
In addition to the MIMO systems there are also the following intermediate stages:&lt;br /&gt;
*MISO&amp;amp;ndash;Systems&amp;amp;nbsp; (only one receiving antenna, therefore no power gain is possible), and&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*SIMO&amp;amp;ndash;Systems&amp;amp;nbsp; (only one transmitting antenna, only small diversity gain).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp; The term &amp;quot;MIMO&amp;quot; summarizes multi-antenna techniques with different properties, each of which can be useful in certain situations. The following description is based on the four diagrams shown here.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_Mob_T_4_2_S5b.png|center|frame|Four multi-antenna procedures with different properties|class=fit]]&lt;br /&gt;
&lt;br /&gt;
*If the mostly independent channels of a MIMO&amp;amp;ndash;system are assigned to a single user (top left diagram), one speaks of&amp;amp;nbsp; &#039;&#039;&#039;Single&amp;amp;ndash;User MIMO&#039;&#039;&#039;. With 2&amp;amp;times;2&amp;amp;ndash;MIMO, the data rate is doubled compared to SISO&amp;amp;ndash;operation, and with four transmitting and receiving antennas each, the data rate can be doubled again under good channel conditions.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
::LTE allows maximum 4&amp;amp;times;4&amp;amp;ndash;MIMO, but only in the downlink. Due to the complexity of multi-antenna systems, only laptops with LTE&amp;amp;ndash;modems can be used as receivers (end devices) for 4&amp;amp;times;4&amp;amp;ndash;MIMO. For a cell phone, MIMO use is generally limited to 2&amp;amp;times;2.&lt;br /&gt;
&lt;br /&gt;
*In contrast to Single&amp;amp;ndash;User MIMO, the goal of&amp;amp;nbsp; &#039;&#039;&#039;Multi&amp;amp;ndash;User MIMO&#039;&#039;&#039;&amp;amp;nbsp; is not the maximum data rate for one receiver, but maximizing the number of end devices that can use the network simultaneously (top right diagram). Different data streams are transmitted to different users. This is particularly useful in places with high demand, such as airports or soccer stadiums.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Multi-antenna operation is not only used to maximize the number of users or data rate, but in the event of poor transmission conditions, multiple antennas can also combine their power to transmit data to a single user to improve the quality of reception. One then speaks of&amp;amp;nbsp; &#039;&#039;&#039;Beamforming&#039;&#039;&#039; &amp;amp;nbsp; (diagram below left), which also increases the range of a transmitting station.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The fourth possibility is&amp;amp;nbsp; &#039;&#039;&#039;antenna diversity&#039;&#039;&#039; &amp;amp;nbsp; (diagram below right). This increases the redundancy (in terms of system design) and makes the transmission more robust against interference. A simple example: &amp;amp;nbsp; There are four channels that all transmit the same data. If one channel fails, there are still three channels for information transport.}}&lt;br /&gt;
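The redundancy argument from the antenna-diversity example can be quantified: if four channels carry the same data and each fails independently with probability p, the transmission only breaks down when all four fail at once. A small sketch; p is an assumed, illustrative value:

```python
# Redundancy through antenna diversity: four channels carry the same data.
# The link fails only if all channels fail at the same time.
# p_fail is an assumed, illustrative per-channel failure probability.
p_fail = 0.05
channels = 4

p_total_outage = p_fail ** channels           # all four channels fail
p_still_working = 1.0 - p_total_outage        # data still gets through

print(f"single channel outage:  {p_fail}")
print(f"outage with 4 channels: {p_total_outage:.8f}")
```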
&lt;br /&gt;
&lt;br /&gt;
== System Architecture==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The LTE&amp;amp;ndash;architecture enables a transmission system based entirely on the IP&amp;amp;ndash;protocol. To achieve this goal, the system architecture specified for UMTS had to be not only modified in detail, but in part completely redesigned. Other IP&amp;amp;ndash;based technologies such as&amp;amp;nbsp; &#039;&#039;mobile WiMAX&#039;&#039;&amp;amp;nbsp; or&amp;amp;nbsp; &#039;&#039;WLAN&#039;&#039;&amp;amp;nbsp; were also integrated, so that a change into these networks is possible without problems.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In UMTS&amp;amp;ndash;networks (left graphic), the&amp;amp;nbsp; &amp;lt;i&amp;gt;Radio Network Controller&amp;lt;/i&amp;gt;&amp;amp;nbsp; (RNC) is still interposed between a base station (NodeB) and the core network; it is mainly responsible for the change between different cells and can lead to latencies of up to 100 milliseconds.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:EN_Mob_T_4_2_S6.png|center|frame|System architecture for UMTS (UTRAN) and LTE (EUTRAN)|class=fit]]&lt;br /&gt;
&lt;br /&gt;
The redesign of the base stations (&amp;quot;eNodeB&amp;quot; instead of &amp;quot;NodeB&amp;quot;) and the &amp;quot;X2&amp;quot; interface are the decisive further developments from UMTS towards LTE. The right graphic illustrates in particular the reduction in complexity compared to UMTS (left graphic) that came with the new technology. &lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;&#039;LTE&amp;amp;ndash;system architecture&#039;&#039;&#039;&amp;amp;nbsp; can be divided into two large areas:&lt;br /&gt;
*the LTE&amp;amp;ndash;core network&amp;amp;nbsp; &amp;lt;i&amp;gt;Evolved Packet Core&amp;lt;/i&amp;gt;&amp;amp;nbsp; (EPC),&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*the air interface&amp;amp;nbsp; &amp;lt;i&amp;gt;Evolved UMTS Terrestrial Radio Access Network&amp;lt;/i&amp;gt;&amp;amp;nbsp; (EUTRAN) &amp;amp;ndash; a further development of&amp;amp;nbsp; [[Examples_of_Communication_Systems/UMTS%E2%80%93Netzarchitektur#Architektur_der_Zugangsebene|&amp;lt;i&amp;gt;UMTS Terrestrial Radio Access Network&amp;lt;/i&amp;gt;]]&amp;amp;nbsp; (UTRAN).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
EUTRAN transmits the data between the terminal device and the LTE&amp;amp;ndash;base station (&amp;quot;eNodeB&amp;quot;) via the so-called S1&amp;amp;ndash;interface with two connections, one for the transmission of user data and a second one for the transmission of signaling data.  From the graphic above one can see:&lt;br /&gt;
*Besides the EPC, the base stations are also connected to the neighboring base stations. These connections (X2&amp;amp;ndash;interfaces) ensure that as few packets as possible are lost when the terminal device moves out of the area of one base station towards another.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*For this purpose, the base station whose coverage area the user is just leaving can pass on any still buffered data directly and quickly to the &amp;quot;new&amp;quot; base station. This ensures a (largely) continuous transmission.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The functionality of the RNC is transferred partly to the base station and partly to the&amp;amp;nbsp; &amp;lt;i&amp;gt;Mobility Management Entity&amp;lt;/i&amp;gt;&amp;amp;nbsp; (MME) in the core network. This reduction of interfaces shortens the signal propagation time in the network and reduces the handover time significantly, to 20 milliseconds.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*The LTE&amp;amp;ndash;system architecture is also designed such that future&amp;amp;nbsp; &amp;lt;i&amp;gt;inter&amp;amp;ndash;NodeB procedures&amp;lt;/i&amp;gt;&amp;amp;nbsp; (such as&amp;amp;nbsp; &amp;lt;i&amp;gt;soft handover&amp;lt;/i&amp;gt;&amp;amp;nbsp; or&amp;amp;nbsp; &amp;lt;i&amp;gt;cooperative interference cancellation&amp;lt;/i&amp;gt;) can be integrated easily.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== LTE&amp;amp;ndash;Core Network:  Backbone and Backhaul ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The LTE&amp;amp;ndash;core network&amp;amp;nbsp; &amp;lt;i&amp;gt;Evolved Packet Core&amp;lt;/i&amp;gt;&amp;amp;nbsp; (EPC) of a network operator &amp;amp;ndash; in technical jargon the&amp;amp;nbsp; &amp;lt;i&amp;gt;backbone&amp;lt;/i&amp;gt;&amp;amp;nbsp; &amp;amp;ndash; consists of various network components. The EPC is connected to the base stations via the&amp;amp;nbsp; &amp;lt;i&amp;gt;backhaul&amp;lt;/i&amp;gt;. This term denotes the connection of an upstream, usually hierarchically subordinate network node to a central network node.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At present the&amp;amp;nbsp; &amp;lt;i&amp;gt;backhaul&amp;lt;/i&amp;gt;&amp;amp;nbsp; consists largely of directional radio links and so-called E1&amp;amp;ndash;lines. These are copper lines and allow a throughput of approx. 2 Mbit/s. For GSM&amp;amp;ndash; and UMTS&amp;amp;ndash;networks these connections were still sufficient, but already for large-scale&amp;amp;nbsp; [[Examples_of_Communication_Systems/Weiterentwicklungen_von_UMTS#High.E2.80.93Speed_Downlink_Packet_Access| HSDPA]]&amp;amp;nbsp; such data rates are no longer enough. For LTE such a&amp;amp;nbsp; &amp;lt;i&amp;gt;backhaul&amp;lt;/i&amp;gt;&amp;amp;nbsp; is completely unusable:&lt;br /&gt;
*The slow cable network would slow down the fast radio links; overall, no increase in speed would be noticeable.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Due to the low capacity of the lines based on the E1&amp;amp;ndash;standard, an expansion with further identical lines would not be economical either.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the course of the LTE&amp;amp;ndash;introduction, the&amp;amp;nbsp; &amp;lt;i&amp;gt;backhaul&amp;lt;/i&amp;gt;&amp;amp;nbsp; therefore had to be redesigned. It was important to keep future-proofing in mind, as the next generation&amp;amp;nbsp; &#039;&#039;LTE&amp;amp;ndash;Advanced&#039;&#039;&amp;amp;nbsp; was already about to be introduced. If one believes the&amp;amp;nbsp; &amp;lt;i&amp;gt;Moore&#039;s Law&amp;lt;/i&amp;gt;&amp;amp;nbsp; for mobile radio bandwidths propagated by experts, the expensive laying of better cables is the most important factor for future-proofing.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Due to the purely packet-oriented transmission technology, the likewise IP&amp;amp;ndash;based Ethernet standard, realized by means of optical fibers, is the obvious choice for the LTE&amp;amp;ndash;backhaul. In its 2009 study&amp;amp;nbsp;  [Fuj09]&amp;lt;ref name=&#039;Fuj09&#039;&amp;gt;Fujitsu Network Communications Inc.: &#039;&#039;4G Impacts to Mobile Backhaul.&#039;&#039; [http://www.fujitsu.com/downloads/TEL/fnc/whitepapers/4Gimpacts.pdf PDF internet document].&amp;lt;/ref&amp;gt;&amp;amp;nbsp; the Fujitsu company also put forward the thesis that the current infrastructure will still play an important role for the LTE&amp;amp;ndash;backhaul for the next ten to fifteen years.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are two approaches for the generation change towards an Ethernet&amp;amp;ndash;based&amp;amp;nbsp; &amp;lt;i&amp;gt;backhaul&amp;lt;/i&amp;gt;:&lt;br /&gt;
*parallel operation of the lines with E1 and Ethernet standard,&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*immediate migration to a&amp;amp;nbsp; &amp;lt;i&amp;gt;backhaul&amp;lt;/i&amp;gt;&amp;amp;nbsp; based on Ethernet.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first approach would have the advantage that the network operators could continue to route voice traffic via the old lines and would only have to handle the bandwidth-intensive data traffic via the more powerful lines. &lt;br /&gt;
&lt;br /&gt;
The second option raises some technical problems:&lt;br /&gt;
*The services previously transported via the slow E1&amp;amp;ndash;standard lines would have to be converted immediately to a packet-based procedure.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Unlike the current standard, Ethernet so far offers no&amp;amp;nbsp; &amp;lt;i&amp;gt;end&amp;amp;ndash;to&amp;amp;ndash;end synchronization&amp;lt;/i&amp;gt;, which can lead to severe delays and even service interruptions when changing radio cells &amp;amp;ndash; an enormous loss of service quality. &lt;br /&gt;
*In the concept&amp;amp;nbsp; [https://en.wikipedia.org/wiki/Synchronous_Ethernet Synchronous Ethernet]&amp;amp;nbsp; (SyncE), however, the Cisco company has already made proposals on how this synchronization could be realized.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For metropolitan areas a direct conversion of the backhaul would certainly be worthwhile, since only relatively few new cables would have to be laid for a comparatively high number of new users. &lt;br /&gt;
&lt;br /&gt;
In rural areas, however, larger excavation work would quickly lead to high costs. But this is exactly the area that, according to the&amp;amp;nbsp; [[Mobile_Communications/Allgemeines_zum_Mobilfunkstandard_LTE#LTE.E2.80.93Frequenzbandaufteilung|agreement reached]]&amp;amp;nbsp; between the Federal Government and the (German) mobile network operators, must be covered first. Here the mostly existing directional radio links would have to be (and probably will be) extended to high data rates.&amp;lt;br&amp;gt;&lt;br /&gt;
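The mismatch between E1 lines and LTE rates described above is easy to quantify: an E1 line carries about 2 Mbit/s, so bundling enough of them for a single LTE base station quickly becomes uneconomical. A minimal sketch; the assumed cell peak rate is an illustrative example value, not a specification number:

```python
import math

# How many 2 Mbit/s E1 copper lines would one LTE base station need?
# The assumed cell peak rate is an illustrative value, not a spec number.
e1_rate_mbps = 2.0
lte_cell_peak_mbps = 100.0

lines_needed = math.ceil(lte_cell_peak_mbps / e1_rate_mbps)
print(f"E1 lines needed for {lte_cell_peak_mbps:.0f} Mbit/s: {lines_needed}")
```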
&lt;br /&gt;
==Exercises for the Chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:4.2 FDD, TDD und Halb–Duplex|Exercise 4.2: FDD, TDD and Half–Duplex]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Aufgabe_4.2Z:_MIMO–Anwendungen_bei_LTE|Exercise 4.2Z: MIMO Applications for LTE]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Weiterentwicklungen_von_UMTS&amp;diff=34995</id>
		<title>Examples of Communication Systems/Weiterentwicklungen von UMTS</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Weiterentwicklungen_von_UMTS&amp;diff=34995"/>
		<updated>2020-10-13T15:40:19Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Weiterentwicklungen von UMTS to Examples of Communication Systems/Further Developments of UMTS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Examples of Communication Systems/Further Developments of UMTS]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Further_Developments_of_UMTS&amp;diff=34994</id>
		<title>Examples of Communication Systems/Further Developments of UMTS</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Further_Developments_of_UMTS&amp;diff=34994"/>
		<updated>2020-10-13T15:40:19Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Weiterentwicklungen von UMTS to Examples of Communication Systems/Further Developments of UMTS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{LastPage}} &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=UMTS – Universal Mobile Telecommunications System&lt;br /&gt;
|Vorherige Seite=Nachrichtentechnische Aspekte von UMTS&lt;br /&gt;
|Nächste Seite=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==High–Speed Downlink Packet Access==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In order to meet the increasing demand for higher data rates in mobile communications and to guarantee ever better quality of service, the UMTS–Release 99 standard has been further developed in five phases up to the present day (2008). The graphic shows the individual development phases over time.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID3115__Bei_T_4_4_S1_v1.png|right|frame|Further development of UMTS between 2000 and 2008]]&lt;br /&gt;
&lt;br /&gt;
The most important further developments were&lt;br /&gt;
*UMTS Release 5 with&amp;amp;nbsp; &#039;&#039;&#039;HSDPA&#039;&#039;&#039;&amp;amp;nbsp; and&lt;br /&gt;
*UMTS Release 6 with&amp;amp;nbsp; &#039;&#039;&#039;HSUPA&#039;&#039;&#039;.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
For these two standards, the focus was above all on increasing the data rates provided for downlink and uplink as well as on greater bandwidth efficiency and cell capacity. Together, HSDPA and HSUPA form the&amp;amp;nbsp; &#039;&#039;&#039;HSPA standard&#039;&#039;&#039;.&lt;br /&gt;
*In 2002, &#039;&#039;High–Speed Downlink Packet Access&#039;&#039; – abbreviated&amp;amp;nbsp; &#039;&#039;&#039;HSDPA&#039;&#039;&#039;&amp;amp;nbsp; – was specified with UMTS Release 5 and introduced in 2006, in order to increase data rate and throughput compared to the original UMTS standard and to shorten response times for packet-switched transmission.&lt;br /&gt;
*In HSDPA, the data rates provided are between&amp;amp;nbsp; $\text{500 kbit/s}$&amp;amp;nbsp; and&amp;amp;nbsp; $\text{3.6 Mbit/s}$&amp;amp;nbsp; – theoretically even up to&amp;amp;nbsp; $\text{14.4 Mbit/s}$. Compared to the data rate of UMTS R’99&amp;amp;nbsp; $\text{(14.4 kbit/s}$&amp;amp;nbsp; to&amp;amp;nbsp; $\text{2 Mbit/s)}$&amp;amp;nbsp; these values represent a doubling to quadrupling.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1561__Bei_T_4_4_S2_v2.png|right|frame|Features of HSDPA]]&lt;br /&gt;
The following technical procedures contribute to the increased performance of HSDPA compared to UMTS. These features are compiled in the diagram:&lt;br /&gt;
*introduction of an additional shared channel:&amp;amp;nbsp; &#039;&#039;&#039;HS–PDSCH&#039;&#039;&#039;,&lt;br /&gt;
*use of the&amp;amp;nbsp; &#039;&#039;&#039;Hybrid–ARQ&#039;&#039;&#039; procedure,&lt;br /&gt;
*minimization of the&amp;amp;nbsp; &#039;&#039;&#039;delay times&#039;&#039;&#039;,&lt;br /&gt;
*introduction of a&amp;amp;nbsp; &#039;&#039;&#039;Node B scheduling&#039;&#039;&#039;,&lt;br /&gt;
*use of&amp;amp;nbsp; &#039;&#039;&#039;adaptive&#039;&#039;&#039; modulation, coding and transmission rate.&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==Additional Channels in HSDPA==  	 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;High–Speed Physical Downlink Shared Channel&#039;&#039;&amp;amp;nbsp; – short designation&amp;amp;nbsp; &#039;&#039;&#039;HS–PDSCH&#039;&#039;&#039;&amp;amp;nbsp; – is a high-speed channel used for the transmission of subscriber data. It combines the properties of a shared and of a dedicated channel:&lt;br /&gt;
*In the downlink, one or more channels can be used by several subscribers simultaneously. This allows the simultaneous transmission of the same data to different subscribers as well as a significant increase of the transmission rate by bundling several channels of this kind.&lt;br /&gt;
*In each HS–PDSCH the spreading factor is&amp;amp;nbsp; $J = 16$. This means that theoretically up to fifteen such channels can be used simultaneously in one cell. In practice, however, only between five and ten channels are used, since the remaining channels are needed for the operation of other services.&lt;br /&gt;
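The two bullet points above can be sketched numerically: with spreading factor J = 16, one of the codes is not available for user data, leaving at most J − 1 = 15 bundleable high-speed channels, of which five to ten are used in practice. The per-code rate below is assumed purely for illustration and is not a standardized value:

```python
# Code bundling with spreading factor J = 16.
# One of the 16 codes is not available for user data, so at most
# J - 1 = 15 high-speed channels can be bundled in a cell.
J = 16
max_codes = J - 1

# Assumed illustrative user data rate per code (not a standardized value):
rate_per_code_kbps = 240.0

for used in (5, 10, max_codes):
    print(f"{used:2d} codes bundled: {used * rate_per_code_kbps:.0f} kbit/s")
```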
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1552__Bei_T_4_4_S3_v1.png|center|frame|Transport channels, logical channels and physical channels in HSPA]]&lt;br /&gt;
&lt;br /&gt;
The resource allocation for the&amp;amp;nbsp; &#039;&#039;High–Speed Downlink Shared Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;HS–DSCH&#039;&#039;&#039;)&amp;amp;nbsp; takes place via so-called&amp;amp;nbsp; &#039;&#039;High–Speed Shared Control Channels&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;HS–SCCH&#039;&#039;&#039;). A receiver must therefore be able to receive and decode up to four such channels simultaneously.&lt;br /&gt;
&lt;br /&gt;
*In addition to the channels presented above, a&amp;amp;nbsp; &#039;&#039;Dedicated Physical Control Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;DPCCH&#039;&#039;&#039;) is used for the transmission of control data in the uplink and a&amp;amp;nbsp; &#039;&#039;Dedicated Control Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;DCCH&#039;&#039;&#039;)&amp;amp;nbsp; for the localization procedure in downlink and uplink. &lt;br /&gt;
*A&amp;amp;nbsp; &#039;&#039;Dedicated Traffic Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;DTCH&#039;&#039;&#039;)&amp;amp;nbsp; is responsible for the transmission of IP user data in the uplink direction.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==HARQ Procedure and &#039;&#039;Node B Scheduling&#039;&#039;  ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Further characteristics of HSDPA are the reduction of the packet round-trip time&amp;amp;nbsp; (&#039;&#039;Round–Trip Delay&#039;&#039;, RTD)&amp;amp;nbsp; and the use of the HARQ procedure:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;packet round-trip time&#039;&#039;&#039;&amp;amp;nbsp; was reduced by HSDPA to&amp;amp;nbsp; $\text{70 ms}$&amp;amp;nbsp; (compared to&amp;amp;nbsp; $\text{160 ... 200 ms}$&amp;amp;nbsp; for UMTS R’99), which is of great importance for some applications&amp;amp;nbsp; (for example web browsing). This reduction was achieved by decreasing the transport block length to approx.&amp;amp;nbsp; $2$&amp;amp;nbsp; milliseconds (previously it had been&amp;amp;nbsp; $\text{10 ms}$&amp;amp;nbsp; or&amp;amp;nbsp; $\text{20 ms}$).&lt;br /&gt;
*In each &amp;quot;Node B&amp;quot; a&amp;amp;nbsp; &#039;&#039;Hybrid Automatic Repeat Request&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;HARQ&#039;&#039;&#039;)&amp;amp;nbsp; mechanism was implemented in order to minimize the transmission delays. This mechanism prevents the retransmission of faulty blocks from causing significant delays. Such delays can be interpreted by the TCP protocol as congestion, which then leads to further delays.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Using the HARQ mechanism and with transport block lengths of&amp;amp;nbsp; $\text{2 ms}$, the transmission delays in HSDPA are less than&amp;amp;nbsp; $\text{10 ms}$. This is a decisive improvement compared to UMTS, where an error detection (combined with a retransmission) takes approx.&amp;amp;nbsp; $\text{90 ms}$.&lt;br /&gt;
&lt;br /&gt;
With the HARQ procedure, the detection of an error or of no error&amp;amp;nbsp; (&#039;&#039;Acknowledgement&#039;&#039;, ACK/NACK)&amp;amp;nbsp; is acknowledged for each individual transport frame. This procedure is called&amp;amp;nbsp; &#039;&#039;&#039;Stop and Wait&#039;&#039;&#039;&amp;amp;nbsp; (SAW).&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1553__Bei_T_4_4_S4a_v1.png|right|frame|Increase of the data rate by HARQ]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;  &lt;br /&gt;
The graphic shows the achievable data rate as a function of the quotient&amp;amp;nbsp; $E_{\rm B}/N_0$&amp;amp;nbsp; (in dB). &lt;br /&gt;
&lt;br /&gt;
*One recognizes decisive improvements due to the HARQ mechanism, especially for small values of&amp;amp;nbsp; $E_{\rm B}/N_0$. &lt;br /&gt;
*In contrast, HARQ does not further increase the data rate if&amp;amp;nbsp; $10 · \lg \ E_{\rm B}/N_0 &amp;gt; 2 \ \rm dB$&amp;amp;nbsp; holds.}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The following graphic illustrates the&amp;amp;nbsp; &#039;&#039;&#039;functioning of the HARQ procedure&#039;&#039;&#039;. The following steps are to be distinguished:&lt;br /&gt;
*Before transmission, the base station informs the receiver of an upcoming transmission by means of the channel&amp;amp;nbsp; &#039;&#039;&#039;HS–SCCH&#039;&#039;&#039;, where an&amp;amp;nbsp; &#039;&#039;&#039;HS–SCCH&#039;&#039;&#039; frame has three time slots.&lt;br /&gt;
*The control data arrive at the receiver and are evaluated immediately after the arrival of the first&amp;amp;nbsp; &#039;&#039;&#039;SCCH&#039;&#039;&#039; time slot. &lt;br /&gt;
*The data transmission on the&amp;amp;nbsp; &#039;&#039;&#039;HS–PDSCH&#039;&#039;&#039;&amp;amp;nbsp; starts as soon as the subscriber has received the first two time slots of the control data block.&lt;br /&gt;
*Within&amp;amp;nbsp; $\text{5 ms}$&amp;amp;nbsp; after receiving a data frame, the receiver must have decoded the entire frame and checked it for errors.&lt;br /&gt;
*If the transmission is error-free, a positive acknowledgement&amp;amp;nbsp; (&#039;&#039;&#039;ACK&#039;&#039;&#039;)&amp;amp;nbsp; is sent in the uplink direction, otherwise a&amp;amp;nbsp; &#039;&#039;Non Acknowledgement&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;NACK&#039;&#039;&#039;)&amp;amp;nbsp; is sent.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1554__Bei_T_4_4_S4b_v1.png|center|frame|On the HARQ procedure]]&lt;br /&gt;
&lt;br /&gt;
Since HARQ only sends a new frame once the acknowledgement of the frames already transmitted is available, the receiver must be able to manage up to eight HARQ processes. This guarantees the correct order and thus the correct processing of the data in the higher layers.&lt;br /&gt;
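The need for several parallel HARQ processes follows directly from the stop-and-wait principle: while one process waits for its ACK/NACK, the other processes keep the channel filled. A simplified sketch; the acknowledgement round-trip time used here is an assumed, illustrative value:

```python
import math

# Stop-and-wait HARQ: one process sends a 2 ms frame, then waits for
# the ACK/NACK. With a round-trip time of rtt_ms, the channel can only
# stay continuously busy if enough processes run in parallel.
tti_ms = 2.0        # transport block length in HSDPA
rtt_ms = 16.0       # assumed illustrative acknowledgement round-trip time

processes_needed = math.ceil(rtt_ms / tti_ms)
utilization_single = tti_ms / rtt_ms

print(f"processes for a full pipeline: {processes_needed}")
print(f"channel utilization with only 1 process: {utilization_single:.3f}")
```

With these assumed numbers, eight parallel processes are exactly enough to keep the channel continuously occupied, which matches the up to eight HARQ processes mentioned above.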
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Also worth mentioning:}$&amp;amp;nbsp;  &lt;br /&gt;
&lt;br /&gt;
In addition to HARQ, a&amp;amp;nbsp; &#039;&#039;&#039;Node B scheduling&#039;&#039;&#039;&amp;amp;nbsp; was introduced in&amp;amp;nbsp; &#039;&#039;UMTS Release 5&#039;&#039;&amp;amp;nbsp; in order to be able to react quickly to changes in the transmission conditions of individual subscribers (for example due to fading). &lt;br /&gt;
*This scheduling decides which frames are assigned to which transmission channel.&lt;br /&gt;
*Priorities are assigned during scheduling. A frame is only sent when it has the highest priority, which means that it will be received correctly with the greatest probability. &lt;br /&gt;
*This scheduling makes better use of the available bandwidth and significantly increases the cell capacity.}}&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
==Adaptive Modulation, Coding and Transmission Rate==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In HSDPA the signals are&amp;amp;nbsp; &#039;&#039;adaptively modulated&#039;&#039;. This means:&lt;br /&gt;
*Under good transmission conditions a higher-level modulation&amp;amp;nbsp; (16–QAM or 64–QAM)&amp;amp;nbsp; is used.&lt;br /&gt;
*Under poorer conditions the system switches to&amp;amp;nbsp; &#039;&#039;Quaternary Phase Shift Keying&#039;&#039;&amp;amp;nbsp; (QPSK) or 4–QAM.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition to the modulation, the coding as well as the number of&amp;amp;nbsp; &#039;&#039;&#039;HS–DSCH&#039;&#039;&#039; channels used simultaneously by one subscriber can be changed flexibly and quickly &amp;amp;nbsp;$($every&amp;amp;nbsp; $\text{2 ms)}$&amp;amp;nbsp; depending on the channel quality. Despite the simultaneous use of adaptive modulation and adaptive coding, the power is always kept constant.&lt;br /&gt;
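Adaptive modulation and coding can be sketched as a simple lookup: depending on the reported channel quality, a modulation (bits per symbol) and a code rate are chosen. The SNR thresholds and code rates below are assumed purely for illustration and are not the standardized HSDPA CQI tables:

```python
# Adaptive modulation and coding as a simple lookup.
# The SNR thresholds and code rates are illustrative assumptions,
# not the standardized HSDPA CQI tables.
def select_mcs(snr_db):
    # (minimum SNR in dB, scheme name, bits per symbol, code rate)
    table = [
        (18.0, "64-QAM", 6, 3 / 4),
        (12.0, "16-QAM", 4, 3 / 4),
        ( 6.0, "16-QAM", 4, 1 / 2),
        ( 0.0, "QPSK",   2, 1 / 2),
    ]
    for threshold, name, bits, rate in table:
        if snr_db >= threshold:
            return name, bits, rate
    return "QPSK", 2, 1 / 3   # most robust fallback scheme

for snr in (20.0, 10.0, -3.0):
    name, bits, rate = select_mcs(snr)
    print(f"SNR {snr:5.1f} dB: {name}, {bits * rate:.2f} info bits/symbol")
```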
&lt;br /&gt;
[[File:P_ID1558__Bei_T_4_4_S5_v1.png|right|frame|Adaptive Modulation und Codierung in HSDPA]]&lt;br /&gt;
&lt;br /&gt;
Die Leistungsregelung läuft in HSDPA unterschiedlich zu UMTS R’99 ab:&lt;br /&gt;
*Die Sendeleistung wird stets an die Signalqualität angepasst, während die Bandbreite möglichst konstant gehalten werden sollte.&lt;br /&gt;
*Nur falls die Leistung nicht mehr erhöht werden kann, wird der Spreizfaktor vergrößert und damit die Datenrate heruntergesetzt.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Die maximal erreichbare Datenrate hängt vorwiegend von der&amp;amp;nbsp; &#039;&#039;Leistungsfähigkeit des Empfängers&#039;&#039;&amp;amp;nbsp; und vom&amp;amp;nbsp; &#039;&#039;Transportformat und den Ressourcenkombinationen&#039;&#039;&amp;amp;nbsp; (TFRC) ab.&lt;br /&gt;
&lt;br /&gt;
In der Tabelle sind verschiedene Parameterkombinationen für Modulation und Coderate sowie die daraus resultierenden Bitraten angegeben. Der &#039;&#039;Overhead&#039;&#039; ist hier nicht berücksichtigt.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
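Die in einer solchen Tabelle angegebenen Bitraten lassen sich überschlägig aus Chiprate, Spreizfaktor, Bits pro Symbol, Coderate und Anzahl der parallelen Codes nachrechnen. Die folgende Python-Skizze ist nur eine Plausibilitätsrechnung; die Zahlenwerte (Chiprate 3.84 Mchip/s, Spreizfaktor 16, 15 Codes) sind als Annahmen zu lesen und nicht der Tabelle entnommen:

```python
# Überschlägige HSDPA-Bitrate; die Standardwerte (Chiprate, Spreizfaktor)
# sind hier Annahmen und nicht der Tabelle im Text entnommen.
def hsdpa_bitrate(bits_pro_symbol, coderate, n_codes,
                  chiprate=3.84e6, spreizfaktor=16):
    symbolrate = chiprate / spreizfaktor   # Symbole pro Sekunde und Code
    return symbolrate * bits_pro_symbol * coderate * n_codes

# 16-QAM (4 bit/Symbol), Coderate 3/4, 15 parallele Codes:
print(hsdpa_bitrate(4, 0.75, 15) / 1e6)    # 10.8 (Mbit/s, ohne Overhead)
```

Mit 16-QAM und Coderate 3/4 ergibt sich so rund 10.8 Mbit/s (ohne Overhead).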
==High–Speed Uplink Packet Access==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Seit UMTS R’99 wurden die Spezifikationen für den Uplink nicht mehr weiterentwickelt, obwohl bidirektionale symmetrische Anwendungen immer mehr an Bedeutung gewonnen haben und immer größere Anforderungen an die Übertragungsgeschwindigkeiten gestellt wurden. Die Datenraten betrugen bis zur Einführung von Release 6 zwischen&amp;amp;nbsp; $\text{64 kbit/s}$&amp;amp;nbsp; und&amp;amp;nbsp; $\text{128 kbit/s}$, bei idealen Bedingungen bis zu&amp;amp;nbsp; $\text{384 kbit/s}$.&lt;br /&gt;
&lt;br /&gt;
Mit dem UMTS Release 6 wurde 2004&amp;amp;nbsp; &#039;&#039;High-Speed Uplink Packet Access&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;HSUPA&#039;&#039;&#039;)&amp;amp;nbsp; definiert und 2007 eingeführt. Dadurch wurden die Datenraten auf der Aufwärtsstrecke erheblich gesteigert. Diese betragen theoretisch bis zu&amp;amp;nbsp; $\text{5.8 Mbit/s}$. In der Praxis werden – unter Berücksichtigung der gleichzeitigen Übertragung für mehrere Nutzer und der Empfängerkapazität – immerhin Übertragungsraten bis ca.&amp;amp;nbsp; $\text{800 kbit/s}$&amp;amp;nbsp; erreicht.&lt;br /&gt;
&lt;br /&gt;
Die wesentliche Verbesserung durch HSUPA ist auf die Einführung eines zusätzlichen Aufwärtskanals zurückzuführen, des&amp;amp;nbsp; &#039;&#039;Enhanced Dedicated Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;E-DCH&#039;&#039;&#039;). Dieser minimiert unter anderem in den dedizierten Uplink–Kanälen den Einfluss von Anwendungen mit stark unterschiedlichem und teilweise sehr intensivem Datenaufkommen&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Bursty Traffic&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
Obwohl der&amp;amp;nbsp; &#039;&#039;&#039;E–DCH&#039;&#039;&#039;&amp;amp;nbsp; ein dedizierter Transportkanal ist, garantiert er dem Teilnehmer keine feste Bandbreite in Aufwärtsrichtung, wie es bei UMTS R’99 der Fall ist. Diese flexible und effiziente Zuteilung der Bandbreite in Abhängigkeit von den Kanalbedingungen erlaubt eine wesentliche Steigerung der Zellenkapazität.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID3116__Bei_T_4_4_S6_v1.png|right|frame|HSUPA im Überblick]]&lt;br /&gt;
Neben dem neuen Transportkanal&amp;amp;nbsp; (&#039;&#039;&#039;E–DCH&#039;&#039;&#039;)&amp;amp;nbsp; wurden auch im Uplink&amp;amp;nbsp; (&#039;&#039;&#039;HSUPA&#039;&#039;&#039;)&amp;amp;nbsp; analog zum Downlink&amp;amp;nbsp; (&#039;&#039;&#039;HSDPA&#039;&#039;&#039;)&amp;amp;nbsp; zusätzlich folgende Verfahren eingeführt:&lt;br /&gt;
*&#039;&#039;Node B Scheduling&#039;&#039;,&lt;br /&gt;
*&#039;&#039;Hybrid Automatic Repeat Request&#039;&#039;&amp;amp;nbsp; (HARQ).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Die Verwendung von HSUPA im Uplink ist nur dann sinnvoll, wenn es mit HSDPA im Downlink kombiniert wird. Ihr Zusammenwirken steigert die Leistungsfähigkeit des Gesamtsystems signifikant.&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==UTRAN Long Term Evolution==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:P_ID1559__Bei_T_4_4_S7_v1.png|right|frame|Von&amp;amp;nbsp; &#039;&#039;&#039;UMTS&#039;&#039;&#039;&amp;amp;nbsp; zu&amp;amp;nbsp; &#039;&#039;&#039;LTE&#039;&#039;&#039;]]&lt;br /&gt;
&#039;&#039;Long Term Evolution&#039;&#039;&amp;amp;nbsp; $\rm (LTE)$&amp;amp;nbsp; stellt ein Mobilfunksystem der vierten Generation dar, das von der&amp;amp;nbsp; [http://www.3gpp.org/ 3gpp]&amp;amp;nbsp; parallel zu den unterschiedlichen Weiterentwicklungsphasen von UMTS entworfen und standardisiert wurde, um den stetig wachsenden Anforderungen an zukünftige Mobilfunksysteme gerecht zu werden. Dieses System wird auch als&amp;amp;nbsp; &#039;&#039;High Speed OFDM Packet Access&#039;&#039;&amp;amp;nbsp; $\rm (HSOPA)$&amp;amp;nbsp;  bezeichnet.&lt;br /&gt;
&lt;br /&gt;
Das Schaubild fasst die Entwicklung der Mobilfunksysteme aus der Sicht des Jahres 2011 zusammen.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
LTE wurde als zukunftsweisende Alternative zu den Mobilfunksystemen der dritten Generation entwickelt. Die Grundzüge von LTE wurden 2004 definiert, konkrete Anforderungen wurden aber erst 2006 erstellt. Erste Systeme begannen 2011 mit dem Betrieb.&lt;br /&gt;
&lt;br /&gt;
Nachfolgend sind einige Merkmale von UTRAN–LTE stichpunktartig und kommentarlos aufgelistet:&lt;br /&gt;
*Die für GSM und UMTS zugewiesenen&amp;amp;nbsp; &#039;&#039;Frequenzbereiche&#039;&#039;&amp;amp;nbsp; werden weiterhin verwendet. Es ist eine Erweiterung in den Bereich um&amp;amp;nbsp; $\text{2600 MHz}$&amp;amp;nbsp; geplant.&lt;br /&gt;
*Es können zwischen&amp;amp;nbsp; $200$&amp;amp;nbsp; und&amp;amp;nbsp; $400$&amp;amp;nbsp; aktive Teilnehmer gleichzeitig versorgt werden, was eine Steigerung der &amp;amp;nbsp;&#039;&#039;Zellenkapazität&#039;&#039;&amp;amp;nbsp; gegenüber UMTS um den Faktor&amp;amp;nbsp;$2$&amp;amp;nbsp; bis&amp;amp;nbsp;$3$&amp;amp;nbsp; bedeutet.&lt;br /&gt;
*Die Reichweite beträgt&amp;amp;nbsp; $\text{5 km}$&amp;amp;nbsp;  (bei optimaler Güte) bis zu&amp;amp;nbsp; $\text{100 km}$&amp;amp;nbsp; (mit reduzierter Qualität). Die &#039;&#039;maximalen Datenraten&#039;&#039; werden mit&amp;amp;nbsp; $\text{100 Mbit/s}$&amp;amp;nbsp; im Downlink und&amp;amp;nbsp; $\text{50 Mbit/s}$&amp;amp;nbsp; im Uplink angegeben.&lt;br /&gt;
*Die &#039;&#039;Verzögerungszeiten&#039;&#039; werden auf weniger als&amp;amp;nbsp; $\text{5 ms}$&amp;amp;nbsp; bei größeren Bandbreitenzuweisungen und auf&amp;amp;nbsp; $\text{10 ms}$&amp;amp;nbsp; bei kleineren Bandbreitenzuweisungen herabgesetzt.&lt;br /&gt;
*Die Bandbreiten können mit&amp;amp;nbsp; $\text{1.25 MHz}$,&amp;amp;nbsp; $\text{2.5 MHz}$,&amp;amp;nbsp; $\text{5 MHz}$,&amp;amp;nbsp; $\text{10 MHz}$,&amp;amp;nbsp; $\text{15 MHz}$&amp;amp;nbsp; und&amp;amp;nbsp; $\text{20 MHz}$&amp;amp;nbsp; in einem sehr weiten Bereich flexibel zugewiesen werden.&lt;br /&gt;
*Die verwendeten Vielfachzugriffsverfahren sind&amp;amp;nbsp; &#039;&#039;Orthogonal Frequency Division Multiple Access&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;OFDMA&#039;&#039;&#039;)&amp;amp;nbsp; im Downlink und&amp;amp;nbsp; &#039;&#039;Single Carrier Frequency Division Multiple Access&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;SC–FDMA&#039;&#039;&#039;)&amp;amp;nbsp; im Uplink.&lt;br /&gt;
*Trotz dieser vielfachen Neuerungen ist Kompatibilität zu den Mobilfunksystemen vorheriger Generationen gegeben und ein nahtloser Übergang zu diesen möglich.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Eine detaillierte Beschreibung von LTE finden Sie im vierten Hauptkapitel des Buches&amp;amp;nbsp; [[Mobile Kommunikation]]. Dieses entstand allerdings ebenfalls schon 2011, also kurz nach Einführung. Für neuere Artikel verweisen wir auf das Internet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
== Aufgabe zum Kapitel== &lt;br /&gt;
&amp;lt;br&amp;gt; 	 &lt;br /&gt;
[[Aufgaben:4.8_HSDPA_und_HSUPA|Aufgabe 4.8: HSDPA und HSUPA]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Nachrichtentechnische_Aspekte_von_UMTS&amp;diff=34993</id>
		<title>Examples of Communication Systems/Nachrichtentechnische Aspekte von UMTS</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Nachrichtentechnische_Aspekte_von_UMTS&amp;diff=34993"/>
		<updated>2020-10-13T15:40:06Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Nachrichtentechnische Aspekte von UMTS to Examples of Communication Systems/Telecommunications Aspects of UMTS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Examples of Communication Systems/Telecommunications Aspects of UMTS]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Telecommunications_Aspects_of_UMTS&amp;diff=34992</id>
		<title>Examples of Communication Systems/Telecommunications Aspects of UMTS</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Telecommunications_Aspects_of_UMTS&amp;diff=34992"/>
		<updated>2020-10-13T15:40:06Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Nachrichtentechnische Aspekte von UMTS to Examples of Communication Systems/Telecommunications Aspects of UMTS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=UMTS – Universal Mobile Telecommunications System&lt;br /&gt;
|Vorherige Seite=UMTS–Netzarchitektur&lt;br /&gt;
|Nächste Seite=Weiterentwicklungen von UMTS&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Verbesserungen bezüglich Sprachcodierung == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Im Kapitel&amp;amp;nbsp; [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_GSM|GSM]]&amp;amp;nbsp; (&#039;&#039;Global System for Mobile Communications&#039;&#039;)&amp;amp;nbsp; dieses Buches wurden bereits mehrere Sprachcodecs ausführlich beschrieben:&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Zur Erinnerung:}$&amp;amp;nbsp;  &lt;br /&gt;
Ein Sprachcodec dient zur Reduzierung der Datenrate eines digitalisierten Sprach– oder Musiksignals. &lt;br /&gt;
*Dabei wird Redundanz und Irrelevanz aus dem Originalsignal entfernt. &lt;br /&gt;
*Das Kunstwort &amp;amp;bdquo;Codec&amp;amp;rdquo; weist darauf hin, dass die gleiche Funktionseinheit sowohl zur Codierung wie auch zur Decodierung verwendet wird.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Unter anderem wurde der&amp;amp;nbsp; [[Examples_of_Communication_Systems/Sprachcodierung#Adaptive_Multi.E2.80.93Rate_Codec|Adaptive Multi-Rate Codec]]&amp;amp;nbsp; (AMR) vorgestellt, der im Frequenzbereich von&amp;amp;nbsp; $\text{300 Hz}$&amp;amp;nbsp; bis&amp;amp;nbsp; $\text{3400 Hz}$&amp;amp;nbsp; ein dynamisches Umschalten zwischen acht verschiedenen Modi (Einzelcodecs) unterschiedlicher Datenrate im Bereich von&amp;amp;nbsp; $\text{4.75 kbit/s}$&amp;amp;nbsp;  bis&amp;amp;nbsp; $\text{12.2 kbit/s}$&amp;amp;nbsp; erlaubt und auf&amp;amp;nbsp; [[Examples_of_Communication_Systems/Sprachcodierung#Algebraic_Code_Excited_Linear_Prediction|ACELP]]&amp;amp;nbsp; (&#039;&#039;Algebraic Code Excited Linear Prediction&#039;&#039;) basiert.&lt;br /&gt;
&lt;br /&gt;
Auch in UMTS Release 99 und UMTS Release 4 werden diese AMR–Codecs unterstützt. Sie erlauben im Vergleich zu den früheren Sprachcodecs (&#039;&#039;Full–Rate, Half–Rate&#039;&#039; und &#039;&#039;Enhanced Full–Rate Vocoder&#039;&#039;)&lt;br /&gt;
*eine Unabhängigkeit von den Kanalbedingungen und der Netzauslastung,&lt;br /&gt;
*die Möglichkeit, die Datenraten an die Bedingungen anzupassen,&lt;br /&gt;
*einen verbesserten flexiblen Fehlerschutz bei stärkerer Funkstörung, und&lt;br /&gt;
*dadurch insgesamt eine bessere Sprachqualität.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Im Jahre 2001 wurde vom 3gpp–Forum&amp;amp;nbsp; (&#039;&#039;3rd Generation Partnership Project&#039;&#039;)&amp;amp;nbsp; und der &#039;&#039;International Telecommunication Union&#039;&#039;&amp;amp;nbsp; (ITU)&amp;amp;nbsp; für das UMTS Release 5 der neue Sprachcodec&amp;amp;nbsp; &#039;&#039;&#039;Wideband–AMR&#039;&#039;&#039;&amp;amp;nbsp; spezifiziert. Dieser ist eine Weiterentwicklung des AMR und bietet&lt;br /&gt;
[[File:P_ID1532__Bei_T_4_3_S1.png|right|frame|Zusammenstellung der Wideband–AMR–Modi]]&lt;br /&gt;
*eine erweiterte Bandbreite von&amp;amp;nbsp; $\text{50 Hz}$&amp;amp;nbsp; bis&amp;amp;nbsp; $\text{7 kHz}$&amp;amp;nbsp; $($Abtastfrequenz&amp;amp;nbsp; $\text{16 kHz})$,&lt;br /&gt;
*insgesamt neun Modi zwischen&amp;amp;nbsp; $\text{6.6 kbit/s}$&amp;amp;nbsp; und&amp;amp;nbsp; $\text{23.85 kbit/s}$&amp;amp;nbsp;  (wovon aber nur fünf Modi genutzt werden), und&lt;br /&gt;
*eine verbesserte Sprachqualität und einen besseren (natürlicheren) Klang.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
Die Tabelle gibt eine Übersicht über die verschiedenen Modi und deren Bitumfang. Sie können sich die Qualität dieser Sprachcodierverfahren bei Sprache und Musik mit dem interaktiven SWF&amp;amp;ndash;Applet&amp;amp;nbsp; [[Applets:Qualität_verschiedener_Sprach–Codecs_(Applet)|Qualität verschiedener Sprach–Codecs]]&amp;amp;nbsp; verdeutlichen (&#039;&#039;Hinweis:&#039;&#039;&amp;amp;nbsp; Nur für &amp;amp;bdquo;Windows&amp;amp;rdquo; geeignet! &amp;amp;nbsp; Adobe Flashplayer erforderlich!).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Anmerkung&#039;&#039;: &amp;amp;nbsp; Die untere Grenzfrequenz von Wideband-AMR ist zwar mit&amp;amp;nbsp; $\text{50 Hz}$&amp;amp;nbsp; spezifiziert, aber auf Grund verwendeter Vorfilter ist diese meist – und auch in der Audio–Demo – auf&amp;amp;nbsp; $\text{200 Hz}$&amp;amp;nbsp; angehoben, um die Störanfälligkeit zu reduzieren und die Kenndaten von Handy–Lautsprechern und –Mikrofonen zu berücksichtigen.&lt;br /&gt;
 	 &lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Einige Merkmale von Wideband–AMR}$&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
*Die Sprachdaten werden an den Codec als PCM–codierte Sprache mit&amp;amp;nbsp; $16000$&amp;amp;nbsp; Abtastwerten pro Sekunde geliefert. Die Sprachcodierung erfolgt in Blöcken von&amp;amp;nbsp; $\text{20 ms}$&amp;amp;nbsp; und die Datenrate wird alle&amp;amp;nbsp; $\text{20 ms}$&amp;amp;nbsp; angepasst.&lt;br /&gt;
*Das Frequenzband&amp;amp;nbsp; $\text{(50 Hz}$&amp;amp;nbsp; bis&amp;amp;nbsp; $\text{7000 Hz})$&amp;amp;nbsp; wird in zwei Teilbänder unterteilt, die unterschiedlich codiert werden, um mehr Bit den subjektiv wichtigen Frequenzen zuweisen zu können. Das obere Band&amp;amp;nbsp; $\text{(6400 Hz}$&amp;amp;nbsp; bis&amp;amp;nbsp; $\text{7000 Hz})$&amp;amp;nbsp; wird nur im höchsten Modus $($mit&amp;amp;nbsp; $\text{23.85 kbit/s)}$&amp;amp;nbsp; übertragen. In allen anderen Modi werden bei der Codierung nur die Frequenzen&amp;amp;nbsp; $\text{50 Hz}$&amp;amp;nbsp; bis&amp;amp;nbsp; $\text{6400 Hz}$&amp;amp;nbsp; berücksichtigt.&lt;br /&gt;
*Wideband–AMR unterstützt&amp;amp;nbsp; &#039;&#039;Discontinuous Transmission&#039;&#039;&amp;amp;nbsp; (DTX). Dieses Feature bedeutet, dass die Übertragung bei Sprachpausen angehalten wird, wodurch sowohl der Energieverbrauch der Mobilstation als auch die Gesamtinterferenz an der Luftschnittstelle gesenkt werden. Dieses Verfahren ist auch unter dem Namen&amp;amp;nbsp; &#039;&#039;Source–Controlled Rate&#039;&#039;&amp;amp;nbsp; (SCR) bekannt.&lt;br /&gt;
*Die&amp;amp;nbsp; &#039;&#039;Voice Activity Detection&#039;&#039;&amp;amp;nbsp; (VAD) ermittelt, ob gerade gesprochen wird oder nicht und fügt auch bei kürzeren Sprachpausen einen SID–Rahmen&amp;amp;nbsp; (&#039;&#039;Silence Descriptor&#039;&#039;)&amp;amp;nbsp; ein. Dem Teilnehmer wird das Gefühl einer kontinuierlichen Verbindung suggeriert, indem der Decoder während Sprachpausen synthetisch erzeugtes Hintergrundgeräusch&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Comfort Noise&#039;&#039;)&amp;amp;nbsp; einfügt.}}&lt;br /&gt;
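Die Rahmenbildung des Wideband-AMR lässt sich mit den genannten Zahlenwerten leicht nachvollziehen. Die folgende Python-Skizze rechnet Abtastwerte und Bit pro 20-ms-Block nach; die beiden Beispielmodi (6.6 und 23.85 kbit/s) stammen aus dem Text:

```python
# Rahmen-Arithmetik von Wideband-AMR mit den Zahlenwerten aus dem Text:
# 16000 Abtastwerte pro Sekunde, Sprachblöcke von 20 ms.
abtastrate = 16000
rahmendauer = 0.020                      # 20 ms

abtastwerte_pro_rahmen = round(abtastrate * rahmendauer)
print(abtastwerte_pro_rahmen)            # 320 Abtastwerte pro Block

# Bit pro 20-ms-Rahmen für den niedrigsten und den höchsten Modus:
for rate in (6.6e3, 23.85e3):
    print(round(rate * rahmendauer))     # 132 bzw. 477
```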
&lt;br /&gt;
&lt;br /&gt;
==Anwendung des CDMA–Verfahrens bei UMTS==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
UMTS verwendet das Vielfachzugriffsverfahren&amp;amp;nbsp; &#039;&#039;Direct Sequence Code Division Multiple Access&#039;&#039;&amp;amp;nbsp; $\rm (DS–CDMA)$, das bereits im Kapitel [[Modulation_Methods/PN–Modulation#Blockschaltbild_und_.C3.A4quivalentes_Tiefpass.E2.80.93Modell|PN–Modulation]]&amp;amp;nbsp; des Buches „Modulationsverfahren” besprochen wurde.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1533__Bei_T_4_3_S2c_v1.png|center|frame|CDMA–Übertragungssystem für zwei Teilnehmer]]&lt;br /&gt;
&lt;br /&gt;
Hier folgt eine kurze Zusammenfassung dieses Verfahrens entsprechend der Grafik, die ein solches System im äquivalenten Tiefpassbereich und stark vereinfacht beschreibt:&lt;br /&gt;
*Die beiden Datensignale&amp;amp;nbsp; $q_1(t)$&amp;amp;nbsp; und&amp;amp;nbsp; $q_2(t)$&amp;amp;nbsp; sollen den gleichen Kanal nutzen, ohne sich gegenseitig zu stören. Die Bitdauer beträgt jeweils&amp;amp;nbsp; $T_{\rm B}$.&lt;br /&gt;
*Jedes der Datensignale wird mit einem zugeordneten Spreizcode –&amp;amp;nbsp; $c_1(t)$&amp;amp;nbsp; bzw.&amp;amp;nbsp; $c_2(t)$&amp;amp;nbsp; – multipliziert.&lt;br /&gt;
*Es wird das Summensignal&amp;amp;nbsp; $s(t) = q_1(t) · c_1(t) + q_2(t) · c_2(t)$&amp;amp;nbsp; gebildet und übertragen.&lt;br /&gt;
*Beim Empfänger werden die gleichen Spreizcodes&amp;amp;nbsp; $c_1(t)$&amp;amp;nbsp; bzw.&amp;amp;nbsp; $c_2(t)$&amp;amp;nbsp; nochmals aufmultipliziert und damit die Signale wieder voneinander getrennt.&lt;br /&gt;
*Unter der Voraussetzung, dass die Spreizcodes orthogonal sind und dass das AWGN–Rauschen klein ist, gilt für die beiden rekonstruierten Signale am Empfängerausgang:&lt;br /&gt;
:$$v_1(t) = q_1(t) \ \text{und} \ v_2(t) = q_2(t).$$&lt;br /&gt;
*Bei AWGN–Rauschsignal&amp;amp;nbsp; $n(t)$&amp;amp;nbsp; und orthogonalen Spreizfolgen wird dabei die Fehlerwahrscheinlichkeit durch andere Teilnehmer nicht verändert.&lt;br /&gt;
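Die oben beschriebenen Schritte – Spreizen, Summieren, Entspreizen durch Korrelation – lassen sich in wenigen Zeilen nachstellen. Die folgende Python-Skizze ist stark vereinfacht (rausch- und kanalfrei, Spreizfaktor $J = 4$, frei gewählte Beispiel-Bitfolgen):

```python
# Minimalbeispiel zu DS-CDMA für zwei Teilnehmer (stark vereinfacht,
# rausch- und kanalfrei, J = 4 Chips pro Bit): Spreizen, Summieren,
# Entspreizen durch Korrelation mit dem jeweils eigenen Code.
c1 = [+1, +1, +1, +1]            # orthogonale Spreizcodes (Walsh-artig)
c2 = [+1, -1, +1, -1]

def spreizen(bits, code):
    return [b * c for b in bits for c in code]

def entspreizen(chips, code):
    J = len(code)
    bits = []
    for i in range(0, len(chips), J):
        korr = sum(chips[i + j] * code[j] for j in range(J))
        bits.append(+1 if korr > 0 else -1)
    return bits

q1, q2 = [+1, -1, +1], [-1, -1, +1]      # Beispiel-Datenfolgen
s = [a + b for a, b in zip(spreizen(q1, c1), spreizen(q2, c2))]
print(entspreizen(s, c1))   # [1, -1, 1]   -> q1 fehlerfrei rekonstruiert
print(entspreizen(s, c2))   # [-1, -1, 1]  -> q2 fehlerfrei rekonstruiert
```

Da die beiden Codes orthogonal sind, hebt sich der jeweils andere Teilnehmer bei der Korrelation vollständig heraus.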
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Beispiel 1:}$&amp;amp;nbsp;  &lt;br /&gt;
Die Grafik zeigt oben drei Datenbit&amp;amp;nbsp; $(+1, -1, +1)$&amp;amp;nbsp; des rechteckförmigen Quellensignals&amp;amp;nbsp; $q_1(t)$&amp;amp;nbsp; von Teilnehmer &#039;&#039;&#039;1&#039;&#039;&#039;, jeweils mit der Symboldauer&amp;amp;nbsp; $T_{\rm B}$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1534__Bei_T_4_3_S2a_v1.png|right|frame|Signale bei &#039;&#039;Direct–Sequence&#039;&#039; Bandspreizung]] &lt;br /&gt;
*Die Symboldauer&amp;amp;nbsp; $T_{\rm C}$&amp;amp;nbsp; des Spreizcodes&amp;amp;nbsp; $c_1(t)$&amp;amp;nbsp; – die man auch&amp;amp;nbsp; &#039;&#039;&#039;Chipdauer&#039;&#039;&#039;&amp;amp;nbsp; nennt – ist um den Faktor&amp;amp;nbsp; $4$&amp;amp;nbsp; kleiner. &lt;br /&gt;
*Durch die Multiplikation&amp;amp;nbsp; $s_1(t) = q_1(t) · c_1(t)$&amp;amp;nbsp; entsteht ein Chipstrom der Länge&amp;amp;nbsp; $12 · T_{\rm C}$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Weiter erkennt man aus dieser Darstellung, dass das Signal&amp;amp;nbsp; $s_1(t)$&amp;amp;nbsp; höherfrequenter ist als&amp;amp;nbsp; $q_1(t)$. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Deshalb spricht man auch von&amp;amp;nbsp; &#039;&#039;&#039;Bandspreizung&#039;&#039;&#039;&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Spread Spectrum&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
*Der CDMA–Empfänger macht diese wieder rückgängig, was als&amp;amp;nbsp; &#039;&#039;&#039;Bandstauchung&#039;&#039;&#039;&amp;amp;nbsp; bezeichnet wird.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Zusammenfassend kann man sagen:}$ &amp;amp;nbsp;  &lt;br /&gt;
Durch die Anwendung von&amp;amp;nbsp; $\rm DS–CDMA$&amp;amp;nbsp; auf eine Nutzbitfolge&lt;br /&gt;
*vergrößert sich deren Bandbreite um den&amp;amp;nbsp; &#039;&#039;Spreizfaktor&#039;&#039;&amp;amp;nbsp; $J = T_{\rm B}/T_{\rm C}$ &amp;amp;ndash; dieser ist gleich der Anzahl der&amp;amp;nbsp; &#039;&#039;Chips pro Bit&#039;&#039;;&lt;br /&gt;
*ist die Chiprate&amp;amp;nbsp; $R_{\rm C}$&amp;amp;nbsp; um den Faktor&amp;amp;nbsp; $J$&amp;amp;nbsp; größer als die Bitrate&amp;amp;nbsp; $R_{\rm B}$;&lt;br /&gt;
*ist die Bandbreite des gesamten CDMA–Signals um den Faktor&amp;amp;nbsp; $J$&amp;amp;nbsp; größer als die Bandbreite jedes einzelnen Nutzers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Das heißt:&amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; $\text{Bei UMTS steht jedem Teilnehmer die gesamte Bandbreite über die gesamte Sendedauer zur Verfügung}$. &lt;br /&gt;
&lt;br /&gt;
Erinnern wir uns:&amp;amp;nbsp; Bei GSM werden als Vielfachzugriffsverfahren sowohl&amp;amp;nbsp; &#039;&#039;Frequency Division Multiple Access&#039;&#039;&amp;amp;nbsp; als auch&amp;amp;nbsp; &#039;&#039;Time Division Multiple Access&#039;&#039;&amp;amp;nbsp; verwendet.&lt;br /&gt;
*Hier verfügt jeder Teilnehmer nur über ein begrenztes Frequenzband (FDMA), und&lt;br /&gt;
*er hat nur innerhalb von Zeitschlitzen Zugriff auf den Kanal (TDMA).}}&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==Spreizcodes und Verwürfelung bei UMTS==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Die Spreizcodes für UMTS sollen&lt;br /&gt;
*zueinander orthogonal sein, um eine gegenseitige Beeinflussung der Teilnehmer zu vermeiden,&lt;br /&gt;
*eine flexible Realisierung unterschiedlicher Spreizfaktoren&amp;amp;nbsp; $J$&amp;amp;nbsp; ermöglichen.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Der hier dargelegte Sachverhalt wird auch durch das interaktive Applet&amp;amp;nbsp; [[Applets:OVSF-Codes_(Applet)|OVSF–Codes]]&amp;amp;nbsp; verdeutlicht.&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Beispiel 2:}$&amp;amp;nbsp;  &lt;br /&gt;
Ein Beispiel hierfür sind die&amp;amp;nbsp; &#039;&#039;&#039;Codes mit variablem Spreizfaktor&#039;&#039;&#039;&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Orthogonal Variable Spreading Factor&#039;&#039;, &#039;&#039;&#039;OVSF&#039;&#039;&#039;), die Codes der Längen&amp;amp;nbsp; $J = 4$&amp;amp;nbsp; bis&amp;amp;nbsp; $J = 512$&amp;amp;nbsp; bereitstellen. Diese können, wie in der Grafik zu sehen ist, mit Hilfe eines Codebaums erstellt werden. Dabei entstehen bei jeder Verzweigung aus einem Code&amp;amp;nbsp; $C$&amp;amp;nbsp; zwei neue Codes&amp;amp;nbsp; $(+C \ +\hspace{-0.1cm}C)$&amp;amp;nbsp; und&amp;amp;nbsp; $(+C \ -\hspace{-0.1cm}C)$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1535__Bei_T_4_3_S3c_v1.png|right|frame|Schaubild zur OVSF–Codefamilie]]&lt;br /&gt;
&lt;br /&gt;
Anzumerken ist, dass weder ein Vorgänger noch ein Nachfolger eines bereits verwendeten Codes benutzt werden darf.  &lt;br /&gt;
*Im gezeichneten Beispiel könnten also acht Spreizcodes mit Spreizfaktor&amp;amp;nbsp; $J = 8$&amp;amp;nbsp; verwendet werden. &lt;br /&gt;
*Auch die vier gelb hinterlegten Codes – einmal mit&amp;amp;nbsp; $J = 2$, einmal mit&amp;amp;nbsp; $J = 4$&amp;amp;nbsp; und zweimal mit&amp;amp;nbsp; $J = 8$&amp;amp;nbsp; – sind möglich. &lt;br /&gt;
*Die unteren vier Codes mit dem Spreizfaktor&amp;amp;nbsp; $J = 8$&amp;amp;nbsp; können aber nicht herangezogen werden, da diese alle mit &amp;amp;bdquo;$+1 \ –\hspace{-0.1cm}1$&amp;amp;rdquo; beginnen, was bereits durch den OVSF–Code mit Spreizfaktor&amp;amp;nbsp; $J = 2$&amp;amp;nbsp; belegt ist.}} &lt;br /&gt;
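Die im Beispiel beschriebene Verzweigungsregel – aus jedem Code $C$ entstehen $(+C\ +C)$ und $(+C\ -C)$ – lässt sich direkt programmieren. Die folgende Python-Skizze erzeugt die Schicht mit Spreizfaktor $J = 8$ und prüft die paarweise Orthogonalität:

```python
# Erzeugung der OVSF-Codes nach der im Text beschriebenen Regel:
# aus jedem Code C entstehen (+C +C) und (+C -C).
def ovsf_schicht(codes):
    naechste = []
    for c in codes:
        naechste.append(c + c)                  # (+C +C)
        naechste.append(c + [-x for x in c])    # (+C -C)
    return naechste

codes = [[+1]]
for _ in range(3):              # drei Verzweigungen -> Spreizfaktor J = 8
    codes = ovsf_schicht(codes)

# Alle acht Codes einer Schicht sind paarweise orthogonal:
for i in range(8):
    for k in range(i + 1, 8):
        assert sum(a * b for a, b in zip(codes[i], codes[k])) == 0
print(len(codes), len(codes[0]))   # 8 8
```

Die so erzeugten Codes einer Schicht entsprechen den Zeilen einer Walsh–Hadamard-Matrix.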
&lt;br /&gt;
&lt;br /&gt;
Um mehr Spreizcodes zu erhalten und so mehr Teilnehmer versorgen zu können, wird nach der Bandspreizung mit&amp;amp;nbsp; $c(t)$&amp;amp;nbsp; die Folge mit&amp;amp;nbsp; $w(t)$&amp;amp;nbsp; chipweise nochmals verwürfelt, ohne dass eine weitere Spreizung stattfindet. Der&amp;amp;nbsp; &#039;&#039;&#039;Verwürfelungscode&#039;&#039;&#039; $w(t)$&amp;amp;nbsp; hat die gleiche Länge und dieselbe Rate wie&amp;amp;nbsp; $c(t)$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1536__Bei_T_4_3_S3a_v1.png|left|frame|Zusätzliche Verwürfelung nach Spreizung]]&lt;br /&gt;
&lt;br /&gt;
Durch die Verwürfelung&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Scrambling&#039;&#039;)&amp;amp;nbsp; verlieren die Codes ihre vollständige Orthogonalität; man nennt sie &#039;&#039;quasi–orthogonal&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
*Bei diesen Codes ist zwar die&amp;amp;nbsp; [[Modulation_Methods/Spreizfolgen_für_CDMA#Eigenschaften_der_Korrelationsfunktionen|Kreuzkorrelationsfunktion]]&amp;amp;nbsp; (KKF) zwischen unterschiedlichen Spreizcodes ungleich Null. &lt;br /&gt;
*Sie zeichnen sich aber durch eine ausgeprägte&amp;amp;nbsp; [[Modulation_Methods/Spreizfolgen_für_CDMA#Eigenschaften_der_Korrelationsfunktionen|Autokorrelationsfunktion]]&amp;amp;nbsp; um den Nullpunkt aus, was die Detektion am Empfänger erleichtert.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1538__Bei_T_4_3_S3d_v3.png|right|frame|Typische Spreiz– und Verwürfelungscodes für UMTS]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Die Verwendung quasi–orthogonaler Codes ist sinnvoll, da die Menge an orthogonalen Codes begrenzt ist und durch die Verwürfelung verschiedene Teilnehmer auch gleiche Spreizcodes verwenden können.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Die Tabelle fasst einige Daten der Spreiz– und Verwürfelungscodes zusammen.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
[[File:P_ID1537__Bei_T_4_3_S3b_v2.png|right|frame|Generator zur Erzeugung von Goldcodes]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Beispiel 3:}$&amp;amp;nbsp;  &lt;br /&gt;
Bei  UMTS werden für die Verwürfelung so genannte&amp;amp;nbsp; &#039;&#039;&#039;Goldcodes&#039;&#039;&#039;&amp;amp;nbsp; verwendet. Die Grafik aus&amp;amp;nbsp; [3gpp]&amp;lt;ref&amp;gt;3gpp Group: &#039;&#039;UMTS Release 6 – Technical Specification&#039;&#039; 25.213 V6.4.0., Sept. 2005.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; zeigt das Blockschaltbild zur schaltungstechnischen Erzeugung solcher Codes. &lt;br /&gt;
*Dabei werden zunächst zwei unterschiedliche Pseudonoise–Folgen gleicher Länge&amp;amp;nbsp; $($hier:&amp;amp;nbsp; $N = 18)$&amp;amp;nbsp; mit Hilfe von Schieberegistern parallel erzeugt und mit&amp;amp;nbsp; &#039;&#039;Exklusiv–Oder–Gattern&#039;&#039;&amp;amp;nbsp; bitweise addiert.&lt;br /&gt;
*Im Uplink hat jede Mobilstation einen eigenen Verwürfelungscode; die Trennung der einzelnen Teilnehmer erfolgt somit über diese Codes.&lt;br /&gt;
*Dagegen hat im Downlink jedes Versorgungsgebiet eines &amp;amp;bdquo;Node B&amp;amp;rdquo; einen gemeinsamen Verwürfelungscode.}}&lt;br /&gt;
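Das Prinzip des Blockschaltbilds – zwei parallel laufende Schieberegisterfolgen, bitweise XOR-verknüpft – zeigt die folgende Python-Skizze. Die verwendeten Rückkopplungspolynome (Grad 5 statt der bei UMTS verwendeten Grad-18-Register) sind nur illustrative Annahmen und nicht die Polynome aus 3GPP TS 25.213:

```python
# Prinzip der Gold-Code-Erzeugung: zwei LFSR-Folgen (m-Folgen) werden
# bitweise XOR-verknüpft. Die Polynome sind hier nur Beispielwerte.
def lfsr(taps, seed, n):
    reg = list(seed)                 # Fibonacci-LFSR, neuestes Bit vorne
    out = []
    for _ in range(n):
        out.append(reg[-1])          # ältestes Bit ist das Ausgangsbit
        rueck = 0
        for t in taps:
            rueck ^= reg[t - 1]      # Modulo-2-Summe der Abgriffe
        reg = [rueck] + reg[:-1]
    return out

n = 2**5 - 1                                   # Periode 31
a = lfsr((5, 2), [1, 0, 0, 0, 0], n)           # m-Folge 1
b = lfsr((5, 4, 3, 2), [1, 0, 0, 0, 0], n)     # m-Folge 2
gold = [x ^ y for x, y in zip(a, b)]           # bitweise Exklusiv-Oder
print(gold[:10])
```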
&lt;br /&gt;
&lt;br /&gt;
==Kanalcodierung  bei UMTS==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Ebenso wie bei GSM erfahren EFR– und AMR–codierte Sprachdaten im UMTS einen zweistufigen Fehlerschutz, bestehend aus&lt;br /&gt;
*Bildung von CRC–Prüfbits&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Cyclic Redundancy Check&#039;&#039;), und&lt;br /&gt;
*anschließender Faltungscodierung&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Convolutional Coding&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Diese Verfahren unterscheiden sich jedoch von denjenigen bei GSM durch eine größere Flexibilität, da sie bei UMTS unterschiedliche Datenraten berücksichtigen müssen.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1539__Bei_T_4_3_S4a_v1.png|right|frame|Einfügen von CRC&amp;amp;ndash; und Tailbits bei UMTS]]&lt;br /&gt;
Für die&amp;amp;nbsp; &#039;&#039;&#039;Fehlererkennung&#039;&#039;&#039;&amp;amp;nbsp; mittels CRC werden je nach Größe des Transportblockes&amp;amp;nbsp; $\text{(10 ms}$&amp;amp;nbsp; oder&amp;amp;nbsp; $\text{20 ms})$&amp;amp;nbsp; acht, zwölf, sechzehn oder &amp;amp;nbsp;$24$&amp;amp;nbsp; CRC–Bit gebildet und an diesen angehängt. &lt;br /&gt;
*Am Ende eines jeden Rahmens werden außerdem acht Tailbits eingefügt, die Synchronisationszwecken dienen. &lt;br /&gt;
*Die Grafik zeigt einen beispielhaften Transportblock des &#039;&#039;&#039;DCH&#039;&#039;&#039;–Kanals mit &amp;amp;nbsp;$164$&amp;amp;nbsp; Nutzdatenbits, an den &amp;amp;nbsp;$16$&amp;amp;nbsp; CRC–Prüfbits und acht Tailbits angehängt werden.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
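Das Schieberegister-Prinzip der CRC-Bildung (Polynomdivision modulo 2) und die beschriebene Rahmenbildung $164 + 16 + 8 = 188$ Bit zeigt die folgende Python-Skizze. Das Generatorpolynom ist hier nur ein Beispiel (CRC-CCITT) und nicht zwingend das der UMTS-Spezifikation; auch der Dateninhalt ist frei gewählt:

```python
# CRC-Prüfbits per bitweiser Polynomdivision (Schieberegister-Prinzip).
def crc_bits(daten, poly_bits):
    grad = len(poly_bits) - 1
    reg = daten + [0] * grad          # Nachricht um 'grad' Nullen verlängern
    for i in range(len(daten)):
        if reg[i]:                    # führende 1 -> Polynom abziehen (XOR)
            for j, p in enumerate(poly_bits):
                reg[i + j] ^= p
    return reg[-grad:]                # Divisionsrest = Prüfbits

# Beispielpolynom Grad 16 (CCITT: x^16 + x^12 + x^5 + 1):
poly = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
daten = [1, 0, 1] * 54 + [1, 1]       # 164 Nutzdatenbits (Beispielinhalt)
rahmen = daten + crc_bits(daten, poly) + [0] * 8    # + 8 Tailbits
print(len(daten), len(rahmen))        # 164 188
```

Hängt man die Prüfbits an, ist der Gesamtblock ohne Rest durch das Generatorpolynom teilbar; genau das prüft der Empfänger.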
&lt;br /&gt;
Für die&amp;amp;nbsp;  &#039;&#039;&#039;Fehlerkorrektur&#039;&#039;&#039;&amp;amp;nbsp; kommen bei UMTS – je nach Datenrate – zwei verschiedene Verfahren zum Einsatz:&lt;br /&gt;
*Bei niedrigen Datenraten werden wie bei GSM&amp;amp;nbsp; [[Kanalcodierung/Grundlagen_der_Faltungscodierung|Faltungscodes]]&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Convolutional Codes&#039;&#039;)&amp;amp;nbsp; mit den Coderaten&amp;amp;nbsp; $R = 1/2$&amp;amp;nbsp; oder&amp;amp;nbsp; $R = 1/3$&amp;amp;nbsp; verwendet. Diese werden mit acht Speicherelementen eines rückgekoppelten Schieberegisters&amp;amp;nbsp; $(256$&amp;amp;nbsp; Zustände$)$ erzeugt. Der Codiergewinn beträgt mit der Coderate&amp;amp;nbsp; $R = 1/3$&amp;amp;nbsp; und bei niedrigen Fehlerraten ca.&amp;amp;nbsp; $4.5$&amp;amp;nbsp; bis&amp;amp;nbsp; $6$&amp;amp;nbsp; dB.&lt;br /&gt;
*Bei höheren Datenraten verwendet man&amp;amp;nbsp; [[Kanalcodierung/Grundlegendes_zu_den_Turbocodes|Turbo–Codes]]&amp;amp;nbsp; der Rate&amp;amp;nbsp; $R = 1/3$. Das Schieberegister besteht hier aus drei Speicherzellen, die insgesamt acht Zustände annehmen können. Der Gewinn der Turbo–Codes ist gegenüber Faltungscodes um&amp;amp;nbsp; $2$&amp;amp;nbsp; bis&amp;amp;nbsp; $3$&amp;amp;nbsp; dB größer und abhängig von der Anzahl der Iterationen. Sie benötigen dafür einen Prozessor mit hoher Rechenleistung und es kann zu relativ großen Verzögerungen kommen.&lt;br /&gt;
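Einen Faltungscodierer der Rate $R = 1/3$ mit acht Speicherelementen (256 Zustände) skizziert der folgende Python-Code. Die Generatorpolynome (oktal 557, 663, 711) entsprechen nach unserem Verständnis 3GPP TS 25.212, sind hier aber als Annahme zu lesen:

```python
# Faltungscodierer der Rate R = 1/3 mit acht Speicherzellen (256 Zustände).
GEN = (0o557, 0o663, 0o711)    # je 9 Koeffizienten (Constraint-Länge 9)

def faltungscode(bits, gen=GEN, K=9):
    reg = 0                                  # Schieberegister-Inhalt
    out = []
    for b in bits + [0] * (K - 1):           # Terminierung mit Tailbits
        reg = ((reg << 1) | b) & ((1 << K) - 1)
        for g in gen:
            # Modulo-2-Summe der angezapften Registerstellen:
            out.append(bin(reg & g).count("1") % 2)
    return out

info = [1, 0, 1, 1]
code = faltungscode(info)
print(len(code))    # 3 * (4 + 8) = 36 codierte Bit
```

Pro Informationsbit entstehen drei Codebits; die acht Tailbits führen das Register in den Nullzustand zurück.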
&lt;br /&gt;
&lt;br /&gt;
Nach der Kanalcodierung werden die Daten wie bei GSM einem&amp;amp;nbsp; [[Examples_of_Communication_Systems/Gesamtes_GSM–Übertragungssystem#Interleaving_bei_Sprachsignalen|Interleaver]]&amp;amp;nbsp; zugeführt, um empfangsseitig die durch Fading entstandenen Bündelfehler auflösen zu können. Schließlich werden zur&amp;amp;nbsp; &#039;&#039;Ratenanpassung&#039;&#039;&amp;amp;nbsp; der entstandenen Daten an den physikalischen Kanal einzelne Bit nach einem vorgegebenen Algorithmus entfernt&amp;amp;nbsp; (&#039;&#039;Puncturing&#039;&#039;)&amp;amp;nbsp; oder wiederholt&amp;amp;nbsp; (&#039;&#039;Repetition&#039;&#039;).&lt;br /&gt;
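Das Prinzip der Ratenanpassung – gleichmäßig verteiltes Entfernen (Puncturing) bzw. Wiederholen (Repetition) einzelner Bit – zeigt die folgende Python-Skizze. Der einfache Auswahl-Algorithmus ist hier nur eine Illustration, nicht der Algorithmus aus der UMTS-Spezifikation:

```python
# Prinzip der Ratenanpassung: überzählige Bit werden gleichmäßig verteilt
# entfernt (Puncturing) bzw. fehlende wiederholt (Repetition).
def ratenanpassung(bits, ziel):
    n = len(bits)
    if ziel <= n:    # Puncturing: n - ziel Bit gleichmäßig entfernen
        indizes = [round(i * n / ziel) for i in range(ziel)]
    else:            # Repetition: Bit gleichmäßig verteilt wiederholen
        indizes = [i * n // ziel for i in range(ziel)]
    return [bits[min(i, n - 1)] for i in indizes]

bits = list(range(12))
print(ratenanpassung(bits, 9))    # 3 Bit punktiert
print(ratenanpassung(bits, 16))   # 4 Bit wiederholt
```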
&lt;br /&gt;
[[File:P_ID1540__Bei_T_4_3_S4b_v1.png|right|frame|Fehlerkorrekturmechanismen bei UMTS]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Beispiel 4:}$&amp;amp;nbsp;  &lt;br /&gt;
Die Grafik zeigt zunächst die Zunahme der Bits durch einen Faltungs– oder Turbocode der Rate&amp;amp;nbsp; $R =1/3$, wobei aus dem&amp;amp;nbsp; $188$–Bit–Zeitrahmen (nach der CRC–Prüfsumme und den Tailbits) ein&amp;amp;nbsp; $564$–Bit–Rahmen entsteht.&lt;br /&gt;
*Danach folgt eine erste externe Verschachtelung und dann eine zweite interne Verschachtelung. &lt;br /&gt;
*Nach dieser wird der Zeitrahmen in vier Unterrahmen mit jeweils&amp;amp;nbsp; $141$&amp;amp;nbsp; Bit aufgeteilt und diese werden anschließend durch eine Ratenanpassung an den physikalischen Kanal angepasst.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Frequenzgänge und Impulsformung bei UMTS==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:UMTS_Bild_1.png|right|frame|Blockschaltbild des optimalen Nyquistentzerrers bei idealem Kanal|class=fit]]&lt;br /&gt;
In diesem Abschnitt gehen wir von folgendem Blockschaltbild eines Binärsystems bei idealem Kanal aus &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; $H_{\rm K}(f) = 1$.&lt;br /&gt;
&lt;br /&gt;
Insbesondere gelte:&lt;br /&gt;
&lt;br /&gt;
*Das&amp;amp;nbsp; &#039;&#039;Sendeimpulsfilter&#039;&#039;&amp;amp;nbsp; wandelt die binären Daten&amp;amp;nbsp; $\{0, \ 1\}$&amp;amp;nbsp; in physikalische Signale um. Das Filter wird beschrieben durch den Frequenzgang&amp;amp;nbsp; $H_{\rm S}(f)$, der formgleich mit dem Spektrum eines einzelnen Sendeimpulses ist.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Bei UMTS ist das Empfangsfilter&amp;amp;nbsp; $H_{\rm E}(f) = H_{\rm S}(f)$&amp;amp;nbsp; an den Sender angepasst&amp;amp;nbsp; (&#039;&#039;Matched–Filter&#039;&#039;)&amp;amp;nbsp; und der Gesamtfrequenzgang&amp;amp;nbsp; $H(f) = H_{\rm S}(f) · H_{\rm E}(f)$&amp;amp;nbsp; erfüllt das&amp;amp;nbsp; [[Digitalsignalübertragung/Eigenschaften_von_Nyquistsystemen#Erstes_Nyquistkriterium_im_Frequenzbereich|erste Nyquistkriterium]]:&lt;br /&gt;
:$$ H(f) = H_{\rm CRO}(f)  =   \left\{ \begin{array}{c}    1 \\  0 \\  \cos^2 \left( \frac {\pi \cdot (|f| - f_1)}{2 \cdot (f_2 - f_1)} \right)\end{array} \right.\quad&lt;br /&gt;
\begin{array}{*{1}c} {\rm{f\ddot{u}r}} \\ {\rm{f\ddot{u}r}}\\  {\rm sonst }\hspace{0.05cm}.  \end{array}&lt;br /&gt;
\begin{array}{*{20}c} |f| \le f_1,  \\ |f| \ge f_2,\\   \\\end{array}$$&lt;br /&gt;
 &lt;br /&gt;
Das bedeutet: &amp;amp;nbsp; Zeitlich aufeinander folgende Impulse stören sich nicht gegenseitig  &amp;amp;nbsp; ⇒  &amp;amp;nbsp; es treten keine&amp;amp;nbsp; [[Digitalsignalübertragung/Ursachen_und_Auswirkungen_von_Impulsinterferenzen|Impulsinterferenzen]]&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Intersymbol Interference&#039;&#039;, ISI) auf. Die zugehörige Zeitfunktion lautet:&lt;br /&gt;
&lt;br /&gt;
:$$h(t) = h_{\rm CRO}(t) ={\rm si}(\pi \cdot t/ T_{\rm C}) \cdot \frac{\cos(r \cdot \pi t/T_{\rm C})}{1- (2r \cdot  t/T_{\rm C})^2}. $$&lt;br /&gt;
 &lt;br /&gt;
*„CRO” steht hierbei für&amp;amp;nbsp; [[Linear_and_Time_Invariant_Systems/Einige_systemtheoretische_Tiefpassfunktionen#Cosinus-Rolloff-Tiefpass|Cosinus–Rolloff]]&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Raised Cosine&#039;&#039;). &lt;br /&gt;
*Die Summe&amp;amp;nbsp; $f_1 + f_2$&amp;amp;nbsp; ist gleich dem Kehrwert der Chipdauer&amp;amp;nbsp; $T_{\rm C} = 260 \ \rm ns$, also gleich&amp;amp;nbsp; $3.84 \ \rm MHz$. &lt;br /&gt;
*Der&amp;amp;nbsp; &#039;&#039;Rolloff–Faktor&#039;&#039;&amp;amp;nbsp; (wir bleiben bei der in&amp;amp;nbsp; $\rm LNTwww$&amp;amp;nbsp; gewählten Bezeichnung&amp;amp;nbsp; $r$, im UMTS–Standard wird hierfür&amp;amp;nbsp; $\alpha$&amp;amp;nbsp; verwendet)&lt;br /&gt;
&lt;br /&gt;
:$$r =  \frac{f_2 - f_1}{f_2 + f_1} $$&lt;br /&gt;
 &lt;br /&gt;
:wurde bei UMTS zu&amp;amp;nbsp; $r = 0.22$&amp;amp;nbsp; festgelegt. Die beiden Eckfrequenzen sind somit&lt;br /&gt;
&lt;br /&gt;
:$$f_1 = {1}/(2 T_{\rm C}) \cdot (1-r) \approx 1.5\,{\rm MHz}, \hspace{0.2cm}&lt;br /&gt;
f_2 ={1}/(2 T_{\rm C})  \cdot (1+r) \approx 2.35\,{\rm MHz}.$$&lt;br /&gt;
 &lt;br /&gt;
*Die erforderliche Bandbreite beträgt&amp;amp;nbsp; $B = 2 · f_2 = 4.7 \ \rm MHz$. Für jeden UMTS–Kanal steht somit mit&amp;amp;nbsp; $5 \ \rm MHz$&amp;amp;nbsp; ausreichend Bandbreite zur Verfügung.&lt;br /&gt;
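The corner frequencies, the bandwidth, and the Nyquist zero crossings stated above can be reproduced numerically; a minimal sketch (function and variable names are ours, not from the UMTS standard):

```python
import math

T_C = 260e-9   # chip duration in seconds (reciprocal of 3.84 MHz)
r = 0.22       # UMTS rolloff factor

f1 = (1 - r) / (2 * T_C)   # lower corner frequency
f2 = (1 + r) / (2 * T_C)   # upper corner frequency
B = 2 * f2                 # required bandwidth

def h_cro(t, T=T_C, r=r):
    """Raised-cosine impulse response h_CRO(t) as given in the text."""
    if t == 0:
        return 1.0
    x = math.pi * t / T
    denom = 1.0 - (2.0 * r * t / T) ** 2
    if abs(denom) < 1e-12:  # removable singularity at t = T/(2r)
        return (math.pi / 4.0) * math.sin(x) / x
    return (math.sin(x) / x) * math.cos(r * x) / denom
```

The zero crossings of $h_{\rm CRO}(t)$ at integer multiples of $T_{\rm C}$ confirm the first Nyquist criterion numerically.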
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1547__Bei_T_4_3_S5b_v1.png|right|frame|Cosine rolloff spectrum and impulse response]]&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp;  The graphic shows &lt;br /&gt;
*on the left the (normalized) Nyquist spectrum&amp;amp;nbsp; $H(f)$, and &lt;br /&gt;
*on the right the corresponding Nyquist pulse&amp;amp;nbsp; $h(t)$, whose zero crossings are equidistant with spacing&amp;amp;nbsp; $T_{\rm C}$. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
$\text{Please note:}$&lt;br /&gt;
* The transmit filter&amp;amp;nbsp; $H_{\rm S}(f)$&amp;amp;nbsp; and the matched filter&amp;amp;nbsp; $H_{\rm E}(f)$&amp;amp;nbsp; are each&amp;amp;nbsp;  [[Digitalsignalübertragung/Optimierung_der_Basisbandübertragungssysteme#Wurzel.E2.80.93Nyquist.E2.80.93Systeme|root cosine rolloff]]&amp;amp;nbsp; shaped (English:&amp;amp;nbsp; &#039;&#039;root raised cosine&#039;&#039;). Only the product&amp;amp;nbsp; $H(f) = H_{\rm S}(f) · H_{\rm E}(f)$&amp;amp;nbsp; yields the cosine rolloff.&lt;br /&gt;
*This also means: &amp;amp;nbsp; The impulse responses&amp;amp;nbsp; $h_{\rm S}(t)$&amp;amp;nbsp; and&amp;amp;nbsp; $h_{\rm E}(t)$&amp;amp;nbsp;  do not satisfy the first Nyquist condition on their own. Only the combination of the two (convolution in the time domain) leads to the desired equidistant zero crossings.}}&lt;br /&gt;
&lt;br /&gt;
==Modulation methods in UMTS== &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;&#039;modulation methods&#039;&#039;&#039;&amp;amp;nbsp; used in UMTS can be summarized as follows:&lt;br /&gt;
*In the downward direction&amp;amp;nbsp; (&#039;&#039;downlink&#039;&#039;),&amp;amp;nbsp; &#039;&#039;Quaternary Phase Shift Keying&#039;&#039;&amp;amp;nbsp; (QPSK) is used for modulation &amp;amp;ndash; both for &#039;&#039;FDD&#039;&#039;&amp;amp;nbsp; and for &#039;&#039;TDD&#039;&#039;. Here user data (DPDCH channel) and control data (DPCCH channel) are time-multiplexed.&lt;br /&gt;
*With &#039;&#039;TDD&#039;&#039;,&amp;amp;nbsp; the signal in the upward direction&amp;amp;nbsp; (&#039;&#039;uplink&#039;&#039;)&amp;amp;nbsp; is likewise QPSK-modulated, but not with &#039;&#039;FDD&#039;&#039;. There a&amp;amp;nbsp; &#039;&#039;&#039;twofold binary PSK&#039;&#039;&#039;&amp;amp;nbsp; (English:&amp;amp;nbsp; &#039;&#039;dual-channel BPSK&#039;&#039;) is used instead.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With&amp;amp;nbsp; &#039;&#039;dual-channel BPSK&#039;&#039;,&amp;amp;nbsp; the QPSK signal space is also used, but different channels are transmitted in the &#039;&#039;inphase&#039;&#039; and &#039;&#039;quadrature components&#039;&#039;. Two chips are therefore transmitted per modulation step. The gross chip rate is thus twice the modulation rate of&amp;amp;nbsp; $3.84$ Mchip per second.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1548__Bei_T_4_3_S5c_v1.png|right|frame|Twofold BPSK (&#039;&#039;Binary Phase Shift Keying&#039;&#039;)]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 5:}$&amp;amp;nbsp;  &lt;br /&gt;
The graphic shows this I/Q multiplexing method, as it is also called, in the equivalent low-pass range:&lt;br /&gt;
*The spread user data of the DPDCH channel are modulated onto the inphase component and the spread control data of the DPCCH channel onto the quadrature component, and transmitted.&lt;br /&gt;
*After modulation, the quadrature component is weighted with the square root of the power ratio&amp;amp;nbsp; $G$&amp;amp;nbsp; between the two channels, in order to minimize the influence of the power difference between&amp;amp;nbsp; $I$&amp;amp;nbsp; and&amp;amp;nbsp; $Q$.&lt;br /&gt;
*Finally, the complex sum signal&amp;amp;nbsp; $(I +{\rm  j} · Q)$&amp;amp;nbsp; is multiplied by a likewise complex scrambling code.}}&lt;br /&gt;
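The three steps of the example above can be sketched in a few lines; a toy model in the equivalent low-pass range (the function name and the unit-magnitude scrambling chips are our illustration, not the actual UMTS code generators):

```python
import cmath

def dual_channel_bpsk(dpdch_chips, dpcch_chips, G, scramble):
    """Map the +/-1 DPDCH chips onto I and the +/-1 DPCCH chips onto Q,
    weight Q with sqrt(G), then multiply by the complex scrambling code."""
    g = G ** 0.5
    return [(ci + 1j * g * cq) * s
            for ci, cq, s in zip(dpdch_chips, dpcch_chips, scramble)]
```

Since the scrambling chips have unit magnitude, multiplying by their complex conjugate undoes the scrambling, and the signs of the real and imaginary parts then recover the two chip streams.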
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp;  One advantage of twofold BPSK modulation is the possibility of using&amp;amp;nbsp; &#039;&#039;&#039;power-saving amplifiers&#039;&#039;&#039;.&lt;br /&gt;
*Time multiplexing of user and control data as in the&amp;amp;nbsp; &#039;&#039;downlink&#039;&#039;&amp;amp;nbsp; is, however, not possible in the&amp;amp;nbsp; &#039;&#039;uplink&#039;&#039;. &lt;br /&gt;
*One reason for this is the use of&amp;amp;nbsp; &#039;&#039;Discontinuous Transmission&#039;&#039;&amp;amp;nbsp; (DTX) and the timing restrictions associated with it.}}&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
==Single-user receiver==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The task of a CDMA receiver is to separate and reconstruct the transmitted data of the individual users from the sum of the spread data streams. A distinction is made between&amp;amp;nbsp;  &#039;&#039;single-user&#039;&#039;&amp;amp;nbsp; receivers and&amp;amp;nbsp;  &#039;&#039;multi-user&#039;&#039;&amp;amp;nbsp; receivers.&lt;br /&gt;
&lt;br /&gt;
In the UMTS downlink a&amp;amp;nbsp; &#039;&#039;single-user&#039;&#039;&amp;amp;nbsp; receiver is always used, since joint detection of all users in the mobile station would be too complex because of the large number of active users, the length of the scrambling codes, and the asynchronous operation.&lt;br /&gt;
&lt;br /&gt;
Such a receiver consists of a bank of independent correlators. Each of the&amp;amp;nbsp; $J$&amp;amp;nbsp; correlators belongs to one specific spreading sequence. The correlation is usually implemented in software in a so-called&amp;amp;nbsp; &#039;&#039;correlator bank&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
At the correlator output one obtains the sum of&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;autocorrelation function&#039;&#039;&amp;amp;nbsp; of the spreading code and&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;cross-correlation function&#039;&#039;&amp;amp;nbsp; of all other users with the spreading code of the user under consideration.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1549__Bei_T_4_3_S6a_v1.png|right|frame|Single-user receiver with matched filter]]&lt;br /&gt;
The graphic shows the simplest realization of such a receiver with a matched filter.&lt;br /&gt;
*The received signal&amp;amp;nbsp; $r(t)$&amp;amp;nbsp; is first multiplied by the spreading code&amp;amp;nbsp; $c(t)$&amp;amp;nbsp; of the user under consideration, which is referred to as&amp;amp;nbsp; &#039;&#039;band compression&#039;&#039;&amp;amp;nbsp; or&amp;amp;nbsp; &#039;&#039;despreading&#039;&#039;&amp;amp;nbsp; (yellow highlight).&lt;br /&gt;
*This is followed by convolution with the impulse response of the matched filter&amp;amp;nbsp; (&#039;&#039;root raised cosine&#039;&#039;) to maximize the SNR, and by sampling at the bit clock&amp;amp;nbsp; $(T_{\rm B})$.&lt;br /&gt;
*Finally, the threshold decision delivers the sink signal&amp;amp;nbsp; $v(t)$&amp;amp;nbsp; and thus the data bits of the user under consideration.&lt;br /&gt;
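Despreading by correlation, as in the receiver described above, can be illustrated with a toy example: an idealized chip-level model without noise, pulse shaping, or scrambling (all names are ours):

```python
def spread(bits, code):
    """Spread +/-1 data bits with a +/-1 spreading code of length J."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate: multiply the chips by the code, sum over each bit
    interval, and decide by the sign (a bare-bones single-user detector)."""
    J = len(code)
    out = []
    for k in range(0, len(chips), J):
        corr = sum(x * c for x, c in zip(chips[k:k + J], code))
        out.append(1 if corr >= 0 else -1)
    return out
```

With mutually orthogonal codes, the cross-correlation term of the other user vanishes and both data streams are recovered from the sum signal.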
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Please note:}$&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
For the AWGN channel, band spreading at the transmitter and the matched band compression at the receiver have no influence on the bit error probability, because&amp;amp;nbsp; $c(t)^2 = 1$. As shown in&amp;amp;nbsp; [[Aufgaben:Aufgabe_4.5:_PN-Modulation| Exercise 4.5]], the following holds with band spreading/compression for the optimal receiver as well, independently of the spreading factor&amp;amp;nbsp; $J$:&lt;br /&gt;
&lt;br /&gt;
:$$p_{\rm B} =  {\rm Q} \left( \hspace{-0.05cm} \sqrt { {2 \cdot E_{\rm B} }/{N_{\rm 0} } } \hspace{0.05cm} \right )\hspace{0.05cm}.  $$&lt;br /&gt;
 &lt;br /&gt;
This result can be justified as follows:  &amp;lt;br&amp;gt;The statistical properties of white noise&amp;amp;nbsp; $n(t)$&amp;amp;nbsp; are not changed by multiplication with the&amp;amp;nbsp; $±1$ signal&amp;amp;nbsp; $c(t)$.}}&lt;br /&gt;
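The bit error probability above can be evaluated directly; a small sketch using the standard relation ${\rm Q}(x) = \tfrac{1}{2}\,{\rm erfc}(x/\sqrt{2})$:

```python
import math

def Q(x):
    """Complementary Gaussian distribution function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bit_error_probability(EbN0_dB):
    """p_B = Q(sqrt(2*E_B/N_0)); on the AWGN channel this is
    independent of the spreading factor J."""
    ebn0 = 10.0 ** (EbN0_dB / 10.0)
    return Q(math.sqrt(2.0 * ebn0))
```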
&lt;br /&gt;
==RAKE receiver==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another receiver for single-user detection is the&amp;amp;nbsp;  &#039;&#039;&#039;RAKE receiver&#039;&#039;&#039;, which leads to significant improvements on a multipath channel. The graphic shows its structure for a two-path channel with&amp;amp;nbsp;&lt;br /&gt;
[[File:P_ID1560__Bei_T_4_3_S6b_v1.png|right|frame|Structure of the RAKE receiver (representation in the equivalent low-pass range)]]&lt;br /&gt;
*a direct path with coefficient&amp;amp;nbsp; $h_0$&amp;amp;nbsp; and delay time&amp;amp;nbsp; $τ_0$,&lt;br /&gt;
*an echo with coefficient&amp;amp;nbsp; $h_1$&amp;amp;nbsp; and delay time&amp;amp;nbsp; $τ_1$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For simplicity, the coefficients&amp;amp;nbsp; $h_0$&amp;amp;nbsp; and&amp;amp;nbsp; $h_1$&amp;amp;nbsp; are assumed to be real here. Because of the representation in the equivalent low-pass range, they could also be complex.&lt;br /&gt;
&lt;br /&gt;
The task of the RAKE receiver is to concentrate the signal energies of all paths&amp;amp;nbsp; (only two in this example)&amp;amp;nbsp; onto a single point in time. It thus works like a garden rake, from which it takes its name.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
If a Dirac impulse is applied to the channel input at time&amp;amp;nbsp; $t = 0$, there are three Dirac impulses at the output of the RAKE receiver: &lt;br /&gt;
:$$ s(t) = \delta(t) \hspace{0.3cm}\Rightarrow\hspace{0.3cm}&lt;br /&gt;
y(t) = h_0 \cdot h_1 \cdot \delta(t - 2\tau_0) +  (h_0^2 + h_1^2) \cdot \delta(t - \tau_0 -  \tau_1)+&lt;br /&gt;
h_0 \cdot h_1 \cdot \delta(t - 2\tau_1) .$$&lt;br /&gt;
  &lt;br /&gt;
*The signal energy is concentrated at time&amp;amp;nbsp; $τ_0 + τ_1$. Of the four paths in total, two contribute to it (middle term). &lt;br /&gt;
*The Dirac functions at&amp;amp;nbsp; $2τ_0$&amp;amp;nbsp; and&amp;amp;nbsp; $2τ_1$&amp;amp;nbsp; do cause intersymbol interference, but their weights are clearly smaller than those of the main path.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 6:}$&amp;amp;nbsp;  &lt;br /&gt;
With the channel parameters&amp;amp;nbsp; $h_0 = 0.8$&amp;amp;nbsp; and&amp;amp;nbsp; $h_1 = 0.6$,&amp;amp;nbsp; the main path $($with weight&amp;amp;nbsp; $h_0)$&amp;amp;nbsp; contains only&amp;amp;nbsp; $0.8^2/(0.8^2 + 0.6^2) = 64\%$&amp;amp;nbsp; of the total signal energy. With a RAKE receiver and the same weights, the above equation reads:&lt;br /&gt;
 &lt;br /&gt;
:$$ y(t) = 0.48  \cdot \delta(t - 2\tau_0) +  1.0 \cdot \delta(t - \tau_0 -  \tau_1)+&lt;br /&gt;
0.48 \cdot \delta(t - 2\tau_1) .$$&lt;br /&gt;
&lt;br /&gt;
The share of the main path in the total energy is now&amp;amp;nbsp; ${1^2}/{(1^2 + 0.48^2 + 0.48^2)} ≈ 68\%.$}}&lt;br /&gt;
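The numbers in the example above can be checked directly; a minimal sketch of the two-path RAKE output (names are ours):

```python
def rake_output_weights(h0, h1):
    """Weights of the three Diracs at the RAKE output for a two-path
    channel with real coefficients, as assumed in the text."""
    return {"2*tau0": h0 * h1,
            "tau0+tau1": h0 ** 2 + h1 ** 2,
            "2*tau1": h0 * h1}

def main_path_energy_fraction(weights):
    """Fraction of the total output energy in the combined main path."""
    main = weights["tau0+tau1"] ** 2
    total = sum(w ** 2 for w in weights.values())
    return main / total
```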
&lt;br /&gt;
&lt;br /&gt;
RAKE receivers are preferred for implementation in mobile devices, but their performance is limited when there are many active users. For a multipath channel with many&amp;amp;nbsp; $(M)$&amp;amp;nbsp; paths, the RAKE also has&amp;amp;nbsp; $M$&amp;amp;nbsp; fingers. The&amp;amp;nbsp; &#039;&#039;main finger&#039;&#039;&amp;amp;nbsp; – also called the &#039;&#039;searcher&#039;&#039;&amp;amp;nbsp; – is responsible for identifying and classifying the individual paths of the multipath propagation. It searches for the strongest paths and assigns them, together with their control information, to other fingers. The time and frequency synchronization of all fingers is continuously compared with the control data of the received signal.&lt;br /&gt;
&lt;br /&gt;
==Multi-user receiver ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
With a single-user receiver, only the data signal of one user is decided, while all other user signals are treated as additional noise. The error rate of such a detector becomes very large, however, when there is strong&amp;amp;nbsp; &#039;&#039;intra-cell interference&#039;&#039;&amp;amp;nbsp; (many active users in the radio cell under consideration) or&amp;amp;nbsp; &#039;&#039;inter-cell interference&#039;&#039;&amp;amp;nbsp; (strongly interfering users in neighboring cells).&lt;br /&gt;
&lt;br /&gt;
In contrast,&amp;amp;nbsp; &#039;&#039;&#039;multi-user receivers&#039;&#039;&#039;&amp;amp;nbsp; make a joint decision for all active users. Their properties can be summarized as follows:&lt;br /&gt;
*A multi-user receiver does not treat the interference of other users as noise but also uses the information contained in the interference signals for detection.&lt;br /&gt;
*The receiver is complex to realize and its algorithms are extremely computation-intensive. It contains a very large correlator bank followed by a joint detector.&lt;br /&gt;
*The multi-user receiver must know the spreading codes of all active users. This requirement rules out its use in the UMTS downlink (i.e. at the mobile station). The base stations, on the other hand, know all user-specific spreading codes a priori, so that multi-user detection is actually applied in the uplink.&lt;br /&gt;
*Some detection algorithms additionally require knowledge of other signal parameters such as energies and delay times. The joint detector – the heart of the receiver – is responsible for applying the appropriate detection algorithm. Examples of multi-user detection are&amp;amp;nbsp; &#039;&#039;decorrelating detection&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;interference cancellation&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Near-far effect==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The near-far effect is exclusively a problem of the uplink, i.e. the transmission from mobile users to a base station. We consider a scenario with two users at different distances from the base station according to the following graphic, which can be interpreted as follows:&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2498__Mob_T_3_2_S3b_neu_v3.png|right|frame|Scenarios for the near-far effect]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*If both mobile stations transmit with the same power, the received power of the red user&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; at the base station is significantly smaller than that of the blue user&amp;amp;nbsp; $\rm B$ because of the path loss (left scenario). In large macrocells the difference can amount to up to&amp;amp;nbsp; $100 \ \rm dB$. The red signal is thus largely masked by the blue one.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The near-far effect can largely be avoided if the more distant user&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; transmits with higher power than user&amp;amp;nbsp; $\rm B$, as indicated in the right scenario. At the base station, the received powers of the two mobile stations are then (almost) equal.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&#039;&#039;Note&#039;&#039;: &amp;amp;nbsp; In an idealized system&amp;amp;nbsp; (single-path channel, ideal A/D converters, fully linear amplifiers), the transmitted data of the users are orthogonal to each other, and the users could be detected individually even with very different received powers. This statement applies to UMTS&amp;amp;nbsp; (multiple access method:&amp;amp;nbsp; CDMA)&amp;amp;nbsp; just as it does to the 2G system GSM&amp;amp;nbsp; (FDMA/TDMA)&amp;amp;nbsp; and to the 4G system LTE&amp;amp;nbsp; (TDMA/OFDMA).&lt;br /&gt;
&lt;br /&gt;
In reality, however, orthogonality is not always given, for the following reasons:&lt;br /&gt;
*different reception paths &amp;amp;nbsp; ⇒  &amp;amp;nbsp; multipath channel,&lt;br /&gt;
*non-ideal properties of the spreading and scrambling codes in CDMA,&lt;br /&gt;
*asynchronism of the users in the time domain&amp;amp;nbsp; (basic propagation delay of the paths),&lt;br /&gt;
* asynchronism of the users in the frequency domain&amp;amp;nbsp; (non-ideal oscillators and Doppler shift due to the mobility of the users).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Consequently, the users are no longer orthogonal to each other, and the signal-to-interference ratio of the user to be detected with respect to the other users is not arbitrarily high. With&amp;amp;nbsp; [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_GSM|GSM]]&amp;amp;nbsp; and&amp;amp;nbsp; [[Mobile_Communications/Allgemeines_zum_Mobilfunkstandard_LTE|LTE]]&amp;amp;nbsp; one can assume signal-to-interference ratios of&amp;amp;nbsp; $25 \ \rm dB$&amp;amp;nbsp; and more, with UMTS (CDMA), however, only about&amp;amp;nbsp; $15 \ \rm dB$, and with high-rate data transmission rather somewhat less.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==Carrier-to-interference power ratio (CIR)== 	&lt;br /&gt;
&lt;br /&gt;
In general,&amp;amp;nbsp; &#039;&#039;&#039;capacity&#039;&#039;&#039;&amp;amp;nbsp; is understood as the number of transmission channels available per cell. However, since in UMTS – unlike in GSM – the number of users is not strictly limited, no fixed capacity can be stated here.&lt;br /&gt;
*With perfect codes the users do not interfere with each other. The maximum number of users is then determined solely by the spreading factor&amp;amp;nbsp; $J$&amp;amp;nbsp; and the available number of mutually orthogonal codes, which is likewise limited.&lt;br /&gt;
*Closer to practice are non-perfect, only quasi-orthogonal codes. Here the „capacity” of a radio cell is determined mainly by the resulting interference, or the&amp;amp;nbsp; &#039;&#039;carrier-to-interference power ratio&#039;&#039;&amp;amp;nbsp; (English:&amp;amp;nbsp; &#039;&#039;Carrier-to-Interference Ratio&#039;&#039;, CIR).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1542__Bei_T_4_3_S8_v1.png|right|frame|&#039;&#039;Carrier-to-Interference Ratio&#039;&#039;&amp;amp;nbsp; (&amp;amp;bdquo;CIR&amp;amp;rdquo;) as a function of the number of users]]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;As can be seen from this graphic, the &amp;amp;bdquo;CIR&amp;amp;rdquo; depends directly on the number of active users: the more users are active, the more interference power arises and the smaller the &amp;amp;bdquo;CIR&amp;amp;rdquo; becomes. &lt;br /&gt;
&lt;br /&gt;
Furthermore, this criterion, which is decisive for UMTS, also depends on the following quantities:&lt;br /&gt;
*the topology and the user behavior (requested services),&lt;br /&gt;
*the spreading factor&amp;amp;nbsp; $J$&amp;amp;nbsp; and the orthogonality of the spreading code used.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
There are two possibilities for limiting the disturbing influence of the interference power on the transmission quality:&lt;br /&gt;
*&#039;&#039;&#039;Cell breathing&#039;&#039;&#039;:&amp;amp;nbsp; If the number of active users in UMTS increases significantly, the cell radius is reduced and&amp;amp;nbsp; (because of the now smaller number of users in the cell)&amp;amp;nbsp; the current interference power also decreases. A less loaded neighboring cell then steps in to serve the users at the edge of the shrunken cell.&lt;br /&gt;
*&#039;&#039;&#039;Power control&#039;&#039;&#039;:&amp;amp;nbsp; If the total interference power within a radio cell exceeds a given limit, the transmit power of all users is reduced accordingly and/or the data rate is lowered, which results in poorer transmission quality for everyone. More on this on the next page.&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
==Power and power control in UMTS==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The ratio between the signal power and the interference power is used as the controlled variable for power control in UMTS. There are differences between the FDD and TDD modes.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1543__Bei_T_4_3_S10a_v1.png|right|frame|Power control in FDD mode]]&lt;br /&gt;
Let us look more closely at&amp;amp;nbsp; &#039;&#039;power control in FDD mode&#039;&#039;. In the graphic one recognizes two different control loops:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;inner control loop&#039;&#039;&#039;&amp;amp;nbsp; controls the transmit power on the basis of time slots, with one power command transmitted in each time slot. The power of the transmitter is determined and adjusted with the aid of the CIR estimates in the receiver and the specifications of the&amp;amp;nbsp; &#039;&#039;Radio Network Controller&#039;&#039;&amp;amp;nbsp; (RNC) from the outer control loop.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;outer control loop&#039;&#039;&#039;&amp;amp;nbsp; operates on the basis of frames of $10$ milliseconds duration. It is realized in the RNC and is responsible for determining the setpoint for the inner control loop.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The FDD power control proceeds as follows:&lt;br /&gt;
*The RNC specifies a setpoint for the carrier-to-interference ratio (CIR setpoint).&lt;br /&gt;
*The receiver estimates the actual CIR value and generates control commands for the transmitter.&lt;br /&gt;
*The transmitter changes its transmit power according to these control commands.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The principle of&amp;amp;nbsp; &#039;&#039;power control in TDD mode&#039;&#039;&amp;amp;nbsp; is similar to the control presented above for FDD mode; in the downlink the two are in fact practically identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; &lt;br /&gt;
&#039;&#039;&#039;TDD power control&#039;&#039;&#039;&amp;amp;nbsp; is much slower and therefore also less precise than with&amp;amp;nbsp; &#039;&#039;&#039;FDD&#039;&#039;&#039;. In this case, however, fast power control is not even possible, since each user has only a fraction of the time frame available.}}&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
==Link budget == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
When planning UMTS networks, the calculation of the link budget is an important step. Knowledge of the link budget is required both for dimensioning the coverage areas and for determining the capacity and the quality-of-service requirements. &lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;&#039;aim of the link budget&#039;&#039;&#039;&amp;amp;nbsp; is the calculation of the&amp;amp;nbsp; &#039;&#039;&#039;maximum cell size&#039;&#039;&#039;&amp;amp;nbsp; taking the following criteria into account:&lt;br /&gt;
*type and data rate of the services,&lt;br /&gt;
*topology of the environment,&lt;br /&gt;
*system configuration (location and power of the base stations, handover gain),&lt;br /&gt;
*service requirements (availability),&lt;br /&gt;
*type of mobile station (speed, power),&lt;br /&gt;
*financial and economic aspects.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1545__Bei_T_4_4_S9.png|right|frame|Budget for a voice transmission channel]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 7:}$&amp;amp;nbsp;  &lt;br /&gt;
The calculation of the link budget is illustrated using the example of a voice transmission channel in the UMTS downlink. Regarding the exemplary numerical values, note:&lt;br /&gt;
*The transmit power is&amp;amp;nbsp;  $P_{\rm S}  =19 \ \rm dBm$, which corresponds to approx.&amp;amp;nbsp; $79 \ \rm mW$. The antenna loss of&amp;amp;nbsp; $2\ \rm  dB$&amp;amp;nbsp; is already taken into account here.&lt;br /&gt;
*The noise power&amp;amp;nbsp; $P_{\rm R} = 5 · 10^{-11} \ \rm mW$&amp;amp;nbsp; is the product of the UMTS bandwidth and the noise power density &amp;amp;nbsp; &amp;lt;br&amp;gt;&amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; $P_{\rm R} = -103 \ \rm dBm $.&lt;br /&gt;
*The interference power is&amp;amp;nbsp; $P_{\rm I} = –99\ \rm  dBm$,&amp;amp;nbsp; corresponding to&amp;amp;nbsp; $1.25 · 10^{-10} \ \rm mW$. &lt;br /&gt;
*The total disturbance power is thus&amp;amp;nbsp; $P_{\rm R+I} = P_{\rm R} + P_{\rm I} = 1.75 · 10^{-10} \ \rm mW$&amp;amp;nbsp; &amp;lt;br&amp;gt;&amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; $P_{\rm R+I} =- 97.5\ \rm  dBm$.&lt;br /&gt;
*The antenna sensitivity is&amp;amp;nbsp; $-97.5 - 27 + 5 - 17 + 3.5 = - 133  \ \rm dBm$. A large negative value is „good” here.&lt;br /&gt;
*The maximum permissible path loss should be as large as possible. In the example one obtains&amp;amp;nbsp; $19 - (-133) = 152 \ \rm  dB$.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;link budget&#039;&#039;&#039;&amp;amp;nbsp; includes the margin for fading and the handover gain and amounts to&amp;amp;nbsp; $140 \ \rm  dB$&amp;amp;nbsp; in the example.&lt;br /&gt;
*The &#039;&#039;&#039;maximum cell radius&#039;&#039;&#039; can be determined from the link budget with an [https://en.wikipedia.org/wiki/Path_loss empirical formula] by Okumura-Hata. It holds:&lt;br /&gt;
:$$ {r}\ [{\rm km}] = 10^{({\rm LinkBudget}- 137)/35}= 10^{0.0857}\approx 1.22 . $$&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Notes&#039;&#039;: &amp;amp;nbsp;&lt;br /&gt;
*The unit&amp;amp;nbsp; $\rm dB$&amp;amp;nbsp; denotes a logarithmic power ratio; for absolute powers referred to&amp;amp;nbsp; $1 \ \rm W$&amp;amp;nbsp; one writes&amp;amp;nbsp; $\rm dBW$.&lt;br /&gt;
* In contrast,&amp;amp;nbsp; $\rm dBm$&amp;amp;nbsp; refers to the power&amp;amp;nbsp; $1 \ \rm mW$.}}&lt;br /&gt;
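The arithmetic of the example above can be reproduced in a few lines. Note that noise and interference powers must be added in the linear domain, not in dBm (variable names are ours):

```python
import math

def dbm(p_mw):
    """Convert a power in mW to dBm."""
    return 10.0 * math.log10(p_mw)

P_R = 5e-11                  # noise power in mW, about -103 dBm
P_I = 10.0 ** (-99 / 10.0)   # interference power, -99 dBm, in mW
P_total = P_R + P_I          # total disturbance power (linear sum!)

P_S_dBm = 19                                 # transmit power
sensitivity_dBm = -97.5 - 27 + 5 - 17 + 3.5  # antenna sensitivity chain
max_path_loss = P_S_dBm - sensitivity_dBm    # 19 - (-133) = 152 dB

def cell_radius_km(link_budget_db):
    """Empirical Okumura-Hata-type formula quoted in the example."""
    return 10.0 ** ((link_budget_db - 137) / 35)
```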
&lt;br /&gt;
==UMTS radio resource management == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The central task of&amp;amp;nbsp; &#039;&#039;&#039;radio resource management&#039;&#039;&#039;&amp;amp;nbsp; (RRM) is the dynamic adaptation of the radio transmission parameters to the current situation&amp;amp;nbsp; (fading, movement of the mobile station, load, etc.)&amp;amp;nbsp; with the aim of&lt;br /&gt;
[[File:P_ID1546__Bei_T_4_3_S11_v1.png|right|frame|Radio Resource Management in UMTS]]&lt;br /&gt;
*increasing the transmission and user capacities,&lt;br /&gt;
*improving the individual transmission quality, and&lt;br /&gt;
*using the available radio resources economically.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The most important RRM mechanisms compiled in the diagram are explained below.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Transmit power control&#039;&#039;&#039;  &lt;br /&gt;
&amp;lt;br&amp;gt;&#039;&#039;Radio Resource Management&#039;&#039;&amp;amp;nbsp; tries to keep the received power, and thus the carrier-to-interference ratio (CIR), constant at the receiver, or at least to prevent it from falling below a given limit. &lt;br /&gt;
&lt;br /&gt;
One example of the necessity of power control is the&amp;amp;nbsp; [[Examples_of_Communication_Systems/Nachrichtentechnische_Aspekte_von_UMTS#Near.E2.80.93Far.E2.80.93Effekt|near-far effect]], which, as is well known, can lead to a dropped connection.&lt;br /&gt;
&lt;br /&gt;
The step size of the power control is&amp;amp;nbsp; $1 \ \rm dB$&amp;amp;nbsp; or&amp;amp;nbsp; $2 \ \rm dB$; the rate of the control commands is&amp;amp;nbsp; $1500$&amp;amp;nbsp; commands per second.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Data rate control&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;br&amp;gt;UMTS allows a trade-off between data rate and transmission quality, which can be realized via the choice of the spreading factor. Doubling the spreading factor corresponds to halving the data rate and improves the quality by&amp;amp;nbsp; $3\ \rm dB$&amp;amp;nbsp; (spreading gain).&lt;br /&gt;
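The 3 dB step per doubling of the spreading factor follows from the common definition of the spreading gain as $10 · \lg J$; a minimal sketch (the function name is ours):

```python
import math

def spreading_gain_db(J):
    """Spreading gain in dB for spreading factor J."""
    return 10.0 * math.log10(J)
```

Doubling $J$ always adds $10 · \lg 2 ≈ 3.01 \ \rm dB$, at the price of half the data rate.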
&lt;br /&gt;
&#039;&#039;&#039;Admission control&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;br&amp;gt;To avoid overload situations in the overall network, before a new connection is set up it is checked whether the necessary resources are available. Otherwise the new connection is rejected. This check is realized by estimating the transmit power distribution after admission of the new connection.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Load control&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;br&amp;gt;This becomes active if an overload occurs despite admission control. In this case a handover to another &amp;amp;bdquo;Node B&amp;amp;rdquo; is initiated and – if this is not possible – the data rates of certain users are reduced.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Handover&#039;&#039;&#039; &lt;br /&gt;
&amp;lt;br&amp;gt;Finally, radio resource management is also responsible for the handover, in order to guarantee connections without interruption. The assignment of the mobile stations to the individual radio cells is based on CIR measurements.&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
==Exercises for the chapter == 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:Aufgabe_4.5:_Pseudo_Noise-Modulation|Aufgabe 4.5: Pseudo Noise-Modulation]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Aufgabe_4.5Z:_Zur_Bandspreizung_bei_UMTS|Aufgabe 4.5Z: Zur Bandspreizung bei UMTS]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Aufgabe_4.6:_OVSF-Codes|Aufgabe 4.6: OVSF-Codes]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Aufgabe_4.7:_Zum_RAKE-Empfänger|Aufgabe 4.7: Zum RAKE-Empfänger]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/UMTS%E2%80%93Netzarchitektur&amp;diff=34991</id>
		<title>Examples of Communication Systems/UMTS–Netzarchitektur</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/UMTS%E2%80%93Netzarchitektur&amp;diff=34991"/>
		<updated>2020-10-13T15:39:54Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/UMTS–Netzarchitektur to Examples of Communication Systems/UMTS Network Architecture&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Examples of Communication Systems/UMTS Network Architecture]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/UMTS_Network_Architecture&amp;diff=34990</id>
		<title>Examples of Communication Systems/UMTS Network Architecture</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/UMTS_Network_Architecture&amp;diff=34990"/>
		<updated>2020-10-13T15:39:54Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/UMTS–Netzarchitektur to Examples of Communication Systems/UMTS Network Architecture&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=UMTS – Universal Mobile Telecommunications System&lt;br /&gt;
|Vorherige Seite=Allgemeine Beschreibung von UMTS&lt;br /&gt;
|Nächste Seite=Nachrichtentechnische Aspekte von UMTS&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Basic units of the system architecture  ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In the architecture of UMTS networks, four basic logical units are distinguished. The interaction of these units makes it possible to operate and manage the overall network.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1511__Bei_T_4_2_S1_v1.png|center|frame|Basic units of the UMTS system architecture]]&lt;br /&gt;
&lt;br /&gt;
The graphic shows:&lt;br /&gt;
*$\rm Universal \ Subscriber  \ Identity  \ Module  \ (USIM)$&amp;amp;nbsp; – The USIM is a removable IC card containing radio information as well as information for the unique identification and authentication of the subscriber. It differs from the conventional SIM card by extended security functions, a larger memory capacity and an integrated microprocessor for executing programs.&lt;br /&gt;
&lt;br /&gt;
*$\rm Mobile \  Equipment  \ (ME)$&amp;amp;nbsp; – Equipped with a USIM card, the UMTS terminal provides both the radio interface for data transmission and the operating elements for the user. It differs from the common GSM mobile station by extended functionality, multimedia applications as well as more complex and more diverse services. The terms&amp;amp;nbsp; &#039;&#039;User Equipment&#039;&#039;&amp;amp;nbsp; (UE) and&amp;amp;nbsp; &#039;&#039;Terminal Equipment&#039;&#039;&amp;amp;nbsp; (TE) are also frequently used.&lt;br /&gt;
*$\rm Radio \  Access  \ Network  \ (RAN)$&amp;amp;nbsp;  – This refers to the fixed network infrastructure of UMTS, which is responsible for the radio transmission and the tasks associated with it. The RAN contains the base stations&amp;amp;nbsp; (&#039;&#039;Node B&#039;&#039;)&amp;amp;nbsp; and the control nodes&amp;amp;nbsp; (&#039;&#039;Radio Network Controller&#039;&#039; – RNC), which connect the RAN and the&amp;amp;nbsp; &#039;&#039;Core Network&#039;&#039;.&lt;br /&gt;
*$\rm Core  \ Network  \ (CN)$&amp;amp;nbsp; – This represents the wide area network and is responsible for the data transport. It contains switching facilities (SGSN, GGSN) to external networks and databases for mobility and subscriber management (HLR, VLR). The&amp;amp;nbsp; &#039;&#039;Core Network&#039;&#039;&amp;amp;nbsp; also contains the network management facilities&amp;amp;nbsp; (&#039;&#039;Operation and Maintenance Center&#039;&#039; – OMC) required for managing the overall network.&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==Domains and interfaces == 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:P_ID1512__Bei_T_4_2_S2_v1.png|right|frame|Basic units of the UMTS system architecture]]&lt;br /&gt;
The units of the UMTS network listed on the last page are grouped into so-called&amp;amp;nbsp; &#039;&#039;&#039;domains&#039;&#039;&#039;. &lt;br /&gt;
&lt;br /&gt;
These are functional blocks that serve the standardisation and the analysis of the functional units and interfaces within the UMTS network.&lt;br /&gt;
&lt;br /&gt;
Two main categories of domains are distinguished, namely&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;User Equipment Domain&#039;&#039;, and&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;Infrastructure Domain&#039;&#039;.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The&amp;amp;nbsp; $\rm User \ Equipment \ Domain$&amp;amp;nbsp; contains all functions that enable access to the UMTS network, for example encryption functions for the transmission of data over the radio interface. This domain can be subdivided into two domains:&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;&#039;USIM Domain&#039;&#039;&#039;&amp;amp;nbsp; – the SIM card is part of this domain;&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;&#039;Mobile Equipment Domain&#039;&#039;&#039;&amp;amp;nbsp; – it contains all functions available in a terminal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These two domains are connected via the&amp;amp;nbsp; &#039;&#039;Cu interface&#039;&#039;, which comprises the electrical and physical specifications as well as the protocol stack between USIM card and terminal. This allows USIM cards of different network operators to be used with all terminals.&lt;br /&gt;
&lt;br /&gt;
Another important interface is the&amp;amp;nbsp; &#039;&#039;Uu interface&#039;&#039;, which establishes the radio link between the mobile station and the&amp;amp;nbsp; &#039;&#039;Infrastructure Domain&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; $\rm Infrastructure \ Domain$&amp;amp;nbsp; is divided into the following two domains:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Access Network Domain&#039;&#039;&#039;&amp;amp;nbsp; combines all base stations – called &amp;quot;Node B&amp;quot; in UMTS – and the functions of the&amp;amp;nbsp; &#039;&#039;Radio Access Network&#039;&#039;&amp;amp;nbsp; (RAN).&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Core Network Domain&#039;&#039;&#039;&amp;amp;nbsp; is responsible for transmitting and transporting the user data with as few errors as possible.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These two domains are connected via an&amp;amp;nbsp; &#039;&#039;Iu interface&#039;&#039;. This interface is responsible for switching data between the&amp;amp;nbsp; &#039;&#039;Access Network&#039;&#039;&amp;amp;nbsp; and the&amp;amp;nbsp; &#039;&#039;Core Network&#039;&#039;&amp;amp;nbsp; and represents the separation between the transport plane and the radio network plane.&lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;Core Network Domain&#039;&#039;&amp;amp;nbsp; can in turn be divided into three subdomains:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;Serving Network Domain&#039;&#039;&amp;amp;nbsp; contains all functions and information required for access to the UMTS network.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;Home Network Domain&#039;&#039;&amp;amp;nbsp; contains all functionalities carried out in the home network of a (roaming) subscriber.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;Transit Network Domain&#039;&#039;&amp;amp;nbsp; is a so-called transit network. It only comes into play when database queries have to be carried out in the home network of the subscriber and the&amp;amp;nbsp; &#039;&#039;Serving Network&#039;&#039;&amp;amp;nbsp; is not directly connected to the&amp;amp;nbsp; &#039;&#039;Home Network&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Architecture of the access plane == 	 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
UMTS networks support both circuit switching and packet switching:&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Distinguishing features:}$&amp;amp;nbsp;&lt;br /&gt;
*With&amp;amp;nbsp; &#039;&#039;&#039;circuit switching&#039;&#039;&#039;&amp;amp;nbsp; (CS), the radio channel is assigned to the two communication partners for the entire duration of the connection, until all information has been transmitted. Only then is the channel released.&lt;br /&gt;
&lt;br /&gt;
*With&amp;amp;nbsp; &#039;&#039;&#039;packet switching&#039;&#039;&#039;&amp;amp;nbsp; (PS), the subscribers cannot use the channel exclusively; instead, the data stream is divided in the transmitter into small data packets, each with the destination address in the header, and only then sent. The channel is shared by several subscribers.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1514__Bei_T_4_2_S3_v1.png|right|frame|Structure of a UMTS network]]&lt;br /&gt;
&lt;br /&gt;
The two modes can also be found in the access plane of the UMTS network in the&amp;amp;nbsp; &#039;&#039;Core Network&#039;&#039;&amp;amp;nbsp; (CN), which is shown on the right. &lt;br /&gt;
&lt;br /&gt;
The access plane can be divided into two main blocks:&lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; $\rm UMTS \ Terrestrial  \ Radio  \ Access  \ Network  \ (UTRAN)$&amp;amp;nbsp; ensures the radio transmission of data between the transport plane and the radio network plane. &lt;br /&gt;
&lt;br /&gt;
The UTRAN comprises the base stations and the control nodes, whose functions are listed below:&lt;br /&gt;
*A&amp;amp;nbsp; &#039;&#039;&#039;Node B&#039;&#039;&#039;&amp;amp;nbsp; – as a UMTS base station is usually called – comprises the antenna system as well as the CDMA receiver and is directly connected to the ME radio interfaces. Its tasks include data rate adaptation, data and channel (de)coding, interleaving as well as modulation and demodulation. Each &amp;quot;Node B&amp;quot; can serve one or more cells.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Radio Network Controller&#039;&#039;&#039;&amp;amp;nbsp; (RNC) is responsible for controlling the base stations. Within the cells it is also responsible for call admission control, encryption and decryption, ATM switching, channel assignment, handover and power control.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The&amp;amp;nbsp; $\rm Core  \ Network  \  (CN)$&amp;amp;nbsp; is responsible for switching the data&amp;amp;nbsp; (both &amp;amp;nbsp;&#039;&#039;circuit-switched&#039;&#039;&amp;amp;nbsp; and &amp;amp;nbsp;&#039;&#039;packet-switched&#039;&#039;)&amp;amp;nbsp; within the UMTS network. &lt;br /&gt;
&lt;br /&gt;
For&amp;amp;nbsp; &#039;&#039;circuit switching&#039;&#039;, it contains the following hardware and software components:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Mobile Services Switching Center&#039;&#039;&#039;&amp;amp;nbsp; (MSC) is responsible for the routing of calls, localisation, authentication, handover and the encryption of subscriber data.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Home Location Register&#039;&#039;&#039;&amp;amp;nbsp; (HLR) contains all subscriber data, such as the tariff model, the telephone number and the associated service-specific authorisations and keys.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Visitor Location Register&#039;&#039;&#039;&amp;amp;nbsp; (VLR) contains location information about locally registered users and copies of their data records from the HLR. These data are dynamic:&amp;amp;nbsp; as soon as a subscriber changes location, this information is updated.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For&amp;amp;nbsp; &#039;&#039;packet-switched transmission&#039;&#039;&amp;amp;nbsp; there are the following facilities and registers:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Serving GPRS Support Node&#039;&#039;&#039;&amp;amp;nbsp; (SGSN) takes the place of MSC and VLR; it is responsible for routing and authentication and keeps a local copy of the subscriber information.&lt;br /&gt;
*At the&amp;amp;nbsp; &#039;&#039;&#039;Gateway GPRS Support Node&#039;&#039;&#039;&amp;amp;nbsp; (GGSN) there are gateways to other packet data networks, for example the Internet. Incoming packets are filtered by an integrated firewall and forwarded to the appropriate SGSN.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;GPRS Register&#039;&#039;&#039;&amp;amp;nbsp; (GR) is part of the&amp;amp;nbsp; &#039;&#039;Home Location Register&#039;&#039;&amp;amp;nbsp; (HLR) and contains additional information required for packet-switched transmission.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Physical channels  ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Physical channels serve the communication on the physical layer of the radio interface and are processed within a base station (&amp;quot;Node B&amp;quot;). A distinction is made between&amp;amp;nbsp; &#039;&#039;dedicated physical channels&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;shared physical channels&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1515__Bei_T_4_2_S4a_v1.png|right|frame|Structure of the dedicated physical channels]]&lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; $\rm dedicated \  physical  \  channels$&amp;amp;nbsp; are permanently assigned to individual communication partners. These include:&lt;br /&gt;
*&#039;&#039;Dedicated Physical Data Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;DPDCH&#039;&#039;&#039;)&amp;amp;nbsp; – This is a unidirectional uplink channel that transports user and signalling data from higher layers.&lt;br /&gt;
*&#039;&#039;Dedicated Physical Control Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;DPCCH&#039;&#039;&#039;)&amp;amp;nbsp; – This control channel contains physical layer information for controlling the transmission, power control commands and transport format indicators, to name just a few examples.&lt;br /&gt;
*&#039;&#039;Dedicated Physical Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;DPCH&#039;&#039;&#039;)&amp;amp;nbsp; – This channel comprises the &#039;&#039;&#039;DPDCH&#039;&#039;&#039; and the &#039;&#039;&#039;DPCCH&#039;&#039;&#039; in the downlink and has a length of&amp;amp;nbsp; $2560$&amp;amp;nbsp; chips.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graphic shows the structure of the &#039;&#039;&#039;DPDCH&#039;&#039;&#039; (blue), the &#039;&#039;&#039;DPCCH&#039;&#039;&#039; (red) and the enveloping &#039;&#039;&#039;DPCH&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
*In the &#039;&#039;&#039;DPCH&#039;&#039;&#039;, exactly&amp;amp;nbsp; $15 · 2560 = 38400$&amp;amp;nbsp; chips are transmitted in&amp;amp;nbsp; $10 \ \rm ms$, which results in a chip rate of&amp;amp;nbsp; $3.84 \ \rm Mchip/s$.&lt;br /&gt;
*The user data in the &#039;&#039;&#039;DPDCH&#039;&#039;&#039; are split up, and per time slot – depending on the spreading factor&amp;amp;nbsp; $J$&amp;amp;nbsp; – between&amp;amp;nbsp; $10$&amp;amp;nbsp; bits&amp;amp;nbsp; $($if&amp;amp;nbsp;   $J = 256 )$&amp;amp;nbsp; and&amp;amp;nbsp; $640$&amp;amp;nbsp; bits&amp;amp;nbsp; $($if&amp;amp;nbsp;   $J = 4)$&amp;amp;nbsp; are transmitted. &lt;br /&gt;
*In the &#039;&#039;&#039;DPCCH&#039;&#039;&#039;, ten control bits per time slot are transmitted uniformly.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
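The slot and frame arithmetic above can be checked with a short side calculation (a minimal sketch of my own, not part of the article; the constants 2560 chips per slot and 15 slots per 10 ms frame are taken from the text, and the function name `dpdch_bits_per_slot` is hypothetical):

```python
# Sketch: chip-rate and bits-per-slot arithmetic for the UMTS DPCH,
# using only the constants quoted in the text above.

CHIPS_PER_SLOT = 2560     # DPCH slot length in chips
SLOTS_PER_FRAME = 15      # slots per 10 ms radio frame
FRAME_DURATION_S = 10e-3  # frame duration: 10 ms

chips_per_frame = SLOTS_PER_FRAME * CHIPS_PER_SLOT  # 15 * 2560 = 38400
chip_rate = chips_per_frame / FRAME_DURATION_S      # 3.84 Mchip/s

def dpdch_bits_per_slot(spreading_factor):
    # Each data bit is spread over J chips, so 2560 chips carry 2560/J bits.
    return CHIPS_PER_SLOT // spreading_factor

print(chip_rate)                 # 3840000.0
print(dpdch_bits_per_slot(256))  # 10
print(dpdch_bits_per_slot(4))    # 640
```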
The table lists the&amp;amp;nbsp; $\rm shared \  physical  \  channels$&amp;amp;nbsp; used by all subscribers. &lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1516__Bei_T_4_2_S4b.png|right|frame|Shared channels in UMTS]]&lt;br /&gt;
The properties of some selected channels are described below:&lt;br /&gt;
*The &#039;&#039;&#039;CCPCH&#039;&#039;&#039; is a downlink channel with two subchannels. The &#039;&#039;&#039;P–CCPCH&#039;&#039;&#039; contains data that are necessary for operation within a radio cell, while the &#039;&#039;&#039;S–CCPCH&#039;&#039;&#039; contains data responsible for the paging procedure and for the transport of control data.&lt;br /&gt;
*The &#039;&#039;&#039;PDSCH&#039;&#039;&#039; and the &#039;&#039;&#039;PUSCH&#039;&#039;&#039; are shared channels that can transport both user data and control data. The former is responsible solely for the downlink, the latter for the uplink.&lt;br /&gt;
*The &#039;&#039;&#039;PRACH&#039;&#039;&#039; controls the message transmission of the random access channel &#039;&#039;&#039;RACH&#039;&#039;&#039;, while the &#039;&#039;&#039;PCPCH&#039;&#039;&#039; is responsible for the transport of data packets according to the CDMA/CD method.	&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following channels are responsible for the control and synchronisation of the overall system:&lt;br /&gt;
* The &#039;&#039;&#039;CPICH&#039;&#039;&#039; determines the affiliation of the mobile station to a base station. &lt;br /&gt;
*The &#039;&#039;&#039;SCH&#039;&#039;&#039; serves for cell search and synchronisation of the mobile station.&lt;br /&gt;
*The &#039;&#039;&#039;AICH&#039;&#039;&#039; checks and determines the availability of the system. &lt;br /&gt;
*The &#039;&#039;&#039;PICH&#039;&#039;&#039; is responsible for paging during subscriber localisation.&lt;br /&gt;
 	 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Logical channels  ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The logical channels are located in the MAC (&#039;&#039;Medium Access Control&#039;&#039;) reference layer and are characterised by the type of the transmitted data. &lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1517__Bei_T_4_2_S5.png|right|frame|Logical channels in UMTS]] &lt;br /&gt;
The logical channels compiled in the table can be divided into two classes, namely into&lt;br /&gt;
&lt;br /&gt;
*control channels&amp;amp;nbsp; (&#039;&#039;Control Channels&#039;&#039;):&lt;br /&gt;
:Via the&amp;amp;nbsp; &#039;&#039;&#039;control channels&#039;&#039;&#039;&amp;amp;nbsp; (with the suffix &#039;&#039;&#039;CCH&#039;&#039;&#039;)&amp;amp;nbsp; both control information&amp;amp;nbsp; (&#039;&#039;&#039;BCCH&#039;&#039;&#039;)&amp;amp;nbsp; and paging information&amp;amp;nbsp; (&#039;&#039;&#039;PCCH&#039;&#039;&#039;)&amp;amp;nbsp; are transported. Subscriber-specific signalling data&amp;amp;nbsp; (&#039;&#039;&#039;DCCH&#039;&#039;&#039;)&amp;amp;nbsp; or transport information between the user terminals and the UTRAN&amp;amp;nbsp; (&#039;&#039;&#039;CCCH&#039;&#039;&#039;)&amp;amp;nbsp; can also be exchanged via them. &lt;br /&gt;
*traffic channels&amp;amp;nbsp; (&#039;&#039;Traffic Channels&#039;&#039;):&lt;br /&gt;
:Via the&amp;amp;nbsp; &#039;&#039;&#039;traffic channels&#039;&#039;&#039;&amp;amp;nbsp;  (with the suffix&amp;amp;nbsp; &#039;&#039;&#039;TCH&#039;&#039;&#039;)&amp;amp;nbsp; subscriber information is exchanged. While the&amp;amp;nbsp; &#039;&#039;&#039;DTCH&#039;&#039;&#039;&amp;amp;nbsp; can be assigned individually to a mobile subscriber for user data transport, a&amp;amp;nbsp; &#039;&#039;&#039;CTCH&#039;&#039;&#039;&amp;amp;nbsp; is mainly assigned to all subscribers or to a predefined group of subscribers.&lt;br /&gt;
== Transport channels  ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Transport channels are located in the physical layer of the&amp;amp;nbsp; [https://de.wikipedia.org/wiki/OSI-Modell ISO/OSI layer model]. They&lt;br /&gt;
*are characterised by the parameters of the data transmission (e.g. the data rate),&lt;br /&gt;
*guarantee the desired requirements with regard to the error protection mechanisms, and&lt;br /&gt;
*define the type of data transmission, the &amp;quot;HOW&amp;quot;, so to speak.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Two classes of transport channels are distinguished, namely dedicated and shared transport channels.&lt;br /&gt;
&lt;br /&gt;
The class of&amp;amp;nbsp; $\rm dedicated \ transport \ channels$&amp;amp;nbsp; (&#039;&#039;Dedicated Transport Channels&#039;&#039; – &#039;&#039;&#039;DTCH&#039;&#039;&#039;) includes the&amp;amp;nbsp; &#039;&#039;Dedicated Channels&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;DCH&#039;&#039;&#039;), which are permanently assigned to subscribers. &lt;br /&gt;
*A&amp;amp;nbsp; &#039;&#039;&#039;DCH&#039;&#039;&#039;&amp;amp;nbsp; transports both user data and control data (handover data, measurement data, ...) to the higher layers, where they are then interpreted and processed.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; $\rm shared \  transport \ channels$&amp;amp;nbsp; (&#039;&#039;Common Transport Channels&#039;&#039;&amp;amp;nbsp; – &#039;&#039;&#039;CTCH&#039;&#039;&#039;)&amp;amp;nbsp; include, for example:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;Broadcast Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;BCH&#039;&#039;&#039;)&amp;amp;nbsp; is a downlink channel that distributes network operator-specific data of the radio cell&amp;amp;nbsp; (for example:&amp;amp;nbsp; &#039;&#039;Access Random Codes&#039;&#039;&amp;amp;nbsp; for signalling a connection setup)&amp;amp;nbsp; to the subscribers. Characteristic are its relatively high power and low data rate $($only&amp;amp;nbsp; $\text{3.4 kbit/s)}$, to allow all users reception with as few errors as possible and a high processing gain.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;Forward Access Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;FACH&#039;&#039;&#039;)&amp;amp;nbsp; is a downlink channel responsible for the transport of control data. A cell can contain several FACH channels, where one of the channels must have a low data rate so that all users can evaluate its data.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;Random Access Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;RACH&#039;&#039;&#039;)&amp;amp;nbsp; is a unidirectional uplink channel. With it, a subscriber can express the wish to establish a radio connection. In addition, small amounts of data can also be transmitted via it.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;Common Packet Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;CPCH&#039;&#039;&#039;)&amp;amp;nbsp; is a unidirectional uplink data channel for packet-oriented services and an extension of the RACH channel.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;Paging Channel&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;PCH&#039;&#039;&#039;)&amp;amp;nbsp; is a unidirectional downlink channel for localising a subscriber, with data for the paging procedure.&lt;br /&gt;
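The high processing gain mentioned for the BCH can be quantified with a quick side calculation (my own sketch, not from the article; it assumes the UMTS chip rate of 3.84 Mchip/s from the earlier section together with the 3.4 kbit/s BCH data rate quoted above):

```python
import math

# Sketch: processing gain of the BCH as the ratio of chip rate to bit rate.
chip_rate = 3.84e6  # chips per second (UMTS chip rate)
bch_rate = 3.4e3    # bits per second (BCH data rate quoted in the text)

processing_gain = chip_rate / bch_rate                 # about 1129
processing_gain_db = 10 * math.log10(processing_gain)  # about 30.5 dB
print(round(processing_gain), round(processing_gain_db, 1))
```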
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1523__Bei_T_4_2_S6_v1.png|right|frame|Connection setup in UMTS]] &lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
The graphic is intended to illustrate the interaction between the transport channels &amp;amp;nbsp;&#039;&#039;&#039;RACH&#039;&#039;&#039;&amp;amp;nbsp; and &amp;amp;nbsp;&#039;&#039;&#039;FACH&#039;&#039;&#039;&amp;amp;nbsp; and the logical channels &amp;amp;nbsp;&#039;&#039;&#039;CCCH&#039;&#039;&#039;&amp;amp;nbsp; and &amp;amp;nbsp;&#039;&#039;&#039;DCCH&#039;&#039;&#039;&amp;amp;nbsp; during a simple connection setup.&lt;br /&gt;
&lt;br /&gt;
Some explanations of this diagram:&lt;br /&gt;
*A mobile subscriber&amp;amp;nbsp; (&#039;&#039;Mobile Equipment&#039;&#039;, ME)&amp;amp;nbsp; expresses the wish to establish a connection. First, a connection request is sent via the UTRAN to the&amp;amp;nbsp; &#039;&#039;Radio Network Controller&#039;&#039;&amp;amp;nbsp; (RNC) with the help of the logical channel&amp;amp;nbsp;   &#039;&#039;&#039;CCCH&#039;&#039;&#039;&amp;amp;nbsp; and the transport channel&amp;amp;nbsp;   &#039;&#039;&#039;RACH&#039;&#039;&#039;.&lt;br /&gt;
*For this, the&amp;amp;nbsp; &#039;&#039;&#039;RRC&#039;&#039;&#039; protocol&amp;amp;nbsp; (&#039;&#039;Radio Resource Control&#039;&#039;)&amp;amp;nbsp; is used, whose task is to ensure the signalling between the subscriber and UTRAN/RNC.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;Radio Network Controller&#039;&#039;&amp;amp;nbsp; (RNC) answers this request via the transport channel&amp;amp;nbsp;    &#039;&#039;&#039;FACH&#039;&#039;&#039;. The control data required for the connection setup are thereby sent to the subscriber.&lt;br /&gt;
*Only then is the connection actually established with the help of the logical channel&amp;amp;nbsp;  &#039;&#039;&#039;DCCH&#039;&#039;&#039;.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Communication within the ISO/OSI layer model==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The communication between the different layers of the ISO/OSI model is ensured by the logical, physical and transport channels presented on the last pages. &lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1518__Bei_T_4_2_S7a_87.png|right|frame|Mapping of the channels in UMTS]] &lt;br /&gt;
The graphic on the right shows the structure both for the uplink direction and for the downlink direction.&lt;br /&gt;
&lt;br /&gt;
To guarantee the functionality and the data exchange within the overall model, these channels must be mapped onto each other according to the graphic:&lt;br /&gt;
*First, the logical channel is mapped onto the transport channel,&lt;br /&gt;
*then the transport channel is mapped onto a physical channel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1519__Bei_T_4_2_S7b_v1.png|left|frame|Section of the ISO/OSI layer model]] &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The lower (left) graphic is intended to give an overall overview of the structure of the three lowest layers of the ISO/OSI model and to convey the interactions of the different channel types.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
== Cellular architecture of UMTS == 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To enable a comprehensive network with low transmission power and sufficient frequency economy, radio cells are set up in UMTS just as in GSM. The radio cells in the UMTS network&amp;amp;nbsp; $($carrier frequency around&amp;amp;nbsp; $\text{2 GHz)}$&amp;amp;nbsp; are significantly smaller than in GSM&amp;amp;nbsp; $($carrier frequency around&amp;amp;nbsp; $\text{900 MHz)}$, since at the same transmission power the range of radio signals decreases with increasing frequency.&lt;br /&gt;
&lt;br /&gt;
The graphic shows the&amp;amp;nbsp; &#039;&#039;&#039;cell structure&#039;&#039;&#039;&amp;amp;nbsp; of UMTS. One recognises a hierarchical structure and three types of radio cells:&lt;br /&gt;
[[File:P_ID1520__Bei_T_4_2_S8a_60.png|right|frame|Cell structure in UMTS]] &lt;br /&gt;
*&#039;&#039;&#039;Macro cells&#039;&#039;&#039;&amp;amp;nbsp; are the largest cells, with a diameter of four to six kilometres. They allow relatively fast movements. For example, a speed of up to&amp;amp;nbsp; $500\ \rm  km/h$&amp;amp;nbsp; is permissible if the data rate is&amp;amp;nbsp; $144 \ \rm  kbit/s$. A macro cell may overlay a large number of micro and pico cells.&lt;br /&gt;
*&#039;&#039;&#039;Micro cells&#039;&#039;&#039;&amp;amp;nbsp; are significantly smaller than macro cells, with a diameter of one to two kilometres. They allow higher data rates up to&amp;amp;nbsp; $384 \ \rm  kbit/s$, but only slower speeds of movement. For example, at the maximum data rate the maximum permissible speed is only&amp;amp;nbsp;  $120\ \rm   km/h$. A micro cell overlays no, one, or a large number of pico cells.&lt;br /&gt;
*&#039;&#039;&#039;Pico cells&#039;&#039;&#039;&amp;amp;nbsp; serve only very small areas with a diameter of about&amp;amp;nbsp; $100$&amp;amp;nbsp; metres, but with a very high data volume. They are used in highly frequented places such as airports, stadiums, etc. Theoretically, data rates up to&amp;amp;nbsp; $2\ \rm    Mbit/s$&amp;amp;nbsp; are permissible.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
Since UMTS uses&amp;amp;nbsp; [[Modulationsverfahren/Aufgaben_und_Klassifizierung#FDMA.2C_TDMA_und_CDMA|Code Division Multiple Access]]&amp;amp;nbsp; (CDMA) as the multiple access method, all subscribers use the same frequency channel. This results in a relatively high interference power and a very low&amp;amp;nbsp; &#039;&#039;carrier-to-interference ratio&#039;&#039;&amp;amp;nbsp; (CIR). This is at least significantly smaller than in&amp;amp;nbsp; [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_GSM|GSM]], which is based on FDMA and TDMA.&lt;br /&gt;
&lt;br /&gt;
A low CIR can significantly impair the transmission quality, namely when the signals of different subscribers superimpose destructively, which leads to a loss of information.&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Two types of interference are distinguished:}$&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
*$\rm Intracell \ interference$&amp;amp;nbsp; arises from the use of the same frequency channel by several subscribers within the same cell.&lt;br /&gt;
*$\rm Intercell \ interference$&amp;amp;nbsp; occurs when subscribers of different cells use the same frequency channel.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1521__Bei_T_4_2_S10_v1.png|right|frame|Intercell vs. intracell interference]] &lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
The graphic illustrates both types of cell interference. &lt;br /&gt;
*In the left cell, &#039;&#039;&amp;lt;u&amp;gt;intra&amp;lt;/u&amp;gt;cell interference&#039;&#039; occurs if the two frequencies&amp;amp;nbsp; $f_1$&amp;amp;nbsp; and&amp;amp;nbsp; $f_2$&amp;amp;nbsp; are identical.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*In contrast, there is &#039;&#039;&amp;lt;u&amp;gt;inter&amp;lt;/u&amp;gt;cell interference&#039;&#039; if the same frequencies are used in the two radio cells on the right&amp;amp;nbsp; $(f_3 = f_4)$. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Intracell interference is usually more serious than intercell interference because of the small distance of the intracell interferers; that is, it causes a significantly smaller&amp;amp;nbsp; &#039;&#039;carrier-to-interference ratio&#039;&#039; (CIR).}}&lt;br /&gt;
&lt;br /&gt;
== Was versteht man unter Zellatmung? == 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Um den Einfluss der Interferenzleistung auf die Übertragungsqualität zu begrenzen, wird bei UMTS die so genannte&amp;amp;nbsp; $\rm Zellatmung$&amp;amp;nbsp; eingesetzt. Diese lässt sich wie folgt beschreiben:&lt;br /&gt;
*Nimmt die Anzahl der aktiven Teilnehmer und damit die aktuelle Interferenzleistung zu, so wird der Zellenradius verkleinert.&lt;br /&gt;
*Da nun weniger Teilnehmer in der Zelle senden, wird damit auch der störende Einfluss der Zellinterferenz geringer.&lt;br /&gt;
*Für die Versorgung der am Rande einer ausgelasteten Zelle stehenden Teilnehmer springt dann die weniger belastete Nachbarzelle ein.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Eine Alternative zur Zellatmung ist, dass man die Gesamtsendeleistung innerhalb der Zelle verringert, was allerdings auch eine Reduzierung der Sende– und damit auch der Empfangsqualität bedeutet.&lt;br /&gt;
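Das Prinzip der Zellatmung kann man mit einer stark vereinfachten, rein hypothetischen Kennlinie skizzieren: Mit steigender Zahl aktiver Teilnehmer schrumpft der Zellenradius. Die konkrete Form der Kennlinie (hier linear, halber Radius bei Volllast) ist eine frei gewählte Annahme, kein Bestandteil des UMTS-Standards:

```python
def zellradius_km(r_max_km, aktive_teilnehmer, max_teilnehmer):
    """Hypothetisches Zellatmungs-Modell: Der Radius sinkt linear mit
    der Auslastung, bei Volllast auf den halben Maximalradius."""
    auslastung = min(aktive_teilnehmer / max_teilnehmer, 1.0)
    return r_max_km * (1.0 - 0.5 * auslastung)
```

Teilnehmer am Rand, die aus der geschrumpften Zelle herausfallen, müssten in diesem Modell von der weniger belasteten Nachbarzelle übernommen werden.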
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Beispiel 3:}$&amp;amp;nbsp;&lt;br /&gt;
In der Grafik erkennt man, dass die Anzahl der aktiven Teilnehmer (pro Flächeneinheit) im Versorgungsgebiet von links nach rechts zunimmt.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1522__Bei_T_4_2_S8b_v1.png|right|frame|Zur Verdeutlichung von &amp;amp;bdquo;Zellatmung&amp;amp;rdquo; bei UMTS]] &lt;br /&gt;
&lt;br /&gt;
*Lässt man die Zellengröße gleich, so gibt es in der Zelle mehr aktive Teilnehmer als vorher und dementsprechend nimmt die Qualität aufgrund der Intrazellinterferenzen deutlich ab.&lt;br /&gt;
*Verkleinert man dagegen die Zellengröße im gleichen Maße, wie die Teilnehmerzahl zunimmt, so sind in einer Zelle nicht mehr Teilnehmer aktiv als vorher (nach dieser Skizze:&amp;amp;nbsp; sieben) und die Qualität bleibt (in etwa) erhalten.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Handover in UMTS == 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Um den Übergang zwischen verschiedenen Zellen für Mobilfunkteilnehmer möglichst unterbrechungsfrei erscheinen zu lassen, wird bei leitungsvermittelten UMTS–Diensten – wie auch bei GSM – ein Handover eingesetzt. Man unterscheidet bei UMTS zwei Arten:&lt;br /&gt;
*$\rm Hard \ Handover$: &amp;amp;nbsp; Hierbei wird zu einem bestimmten Zeitpunkt die Verbindung hart zu einem anderen &amp;amp;bdquo;Node B&amp;amp;rdquo; umgeschaltet. Diese Art von Handover geschieht im TDD–Modus während des Umschaltens zwischen Sender und Empfänger.&lt;br /&gt;
*$\rm Soft \ Handover$: &amp;amp;nbsp; Dabei kann ein Mobiltelefon mit bis zu drei Basisstationen kommunizieren. Die Übergabe eines Teilnehmers von einem &amp;amp;bdquo;Node B&amp;amp;rdquo; zu einem anderen erfolgt allmählich, bis der Teilnehmer diesen Bereich endgültig verlässt. Man spricht in diesem Zusammenhang von&amp;amp;nbsp; &#039;&#039;Makrodiversität&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Die&amp;amp;nbsp; &#039;&#039;Downlink–Daten&#039;&#039;&amp;amp;nbsp; werden im&amp;amp;nbsp; &#039;&#039;Radio Network Controller&#039;&#039;&amp;amp;nbsp; (RNC) aufgeteilt&amp;amp;nbsp; (&#039;&#039;Splitting&#039;&#039;), über die beteiligten Basisstationen ausgestrahlt und in der Mobilstation wieder zusammengesetzt&amp;amp;nbsp; (&#039;&#039;Rake Processing&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
Im&amp;amp;nbsp; &#039;&#039;Uplink&#039;&#039;&amp;amp;nbsp; werden hingegen die gesendeten Daten von allen beteiligten Basisstationen empfangen. Die Zusammenlegung der Daten&amp;amp;nbsp; (&#039;&#039;Soft Combining&#039;&#039;)&amp;amp;nbsp; findet im RNC statt. Dieser leitet anschließend die Daten an das&amp;amp;nbsp; &#039;&#039;Core Network&#039;&#039;&amp;amp;nbsp; (CN) weiter.&lt;br /&gt;
&lt;br /&gt;
Man unterscheidet bei&amp;amp;nbsp; &#039;&#039;Soft Handover&#039;&#039;&amp;amp;nbsp; drei Sonderfälle:&lt;br /&gt;
*Bei&amp;amp;nbsp; &#039;&#039;&#039;Softer Handover&#039;&#039;&#039;&amp;amp;nbsp; wird ein Teilnehmer über verschiedene Pfade der gleichen Basisstation versorgt. &lt;br /&gt;
*Dagegen geschieht bei&amp;amp;nbsp; &#039;&#039;&#039;Intra–RNC Handover&#039;&#039;&#039;&amp;amp;nbsp; die Versorgung der Teilnehmer über zwei verschiedene Basisstationen, die an denselben RNC angeschlossen sind. &amp;lt;br&amp;gt;Das&amp;amp;nbsp; &#039;&#039;Combining und Splitting&#039;&#039;&amp;amp;nbsp; der Daten findet in dem gemeinsamen RNC statt.&lt;br /&gt;
*Ist der Teilnehmer in einem Gebiet, das von zwei benachbarten&amp;amp;nbsp; &#039;&#039;Radio Network Controllern&#039;&#039;&amp;amp;nbsp; verwaltet wird, so liegt &#039;&#039;&#039;Inter–RNC Handover&#039;&#039;&#039; vor. &lt;br /&gt;
**Der erste RNC &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; &#039;&#039;Serving RNC&#039;&#039;&amp;amp;nbsp; (SRNC) übernimmt die Kommunikation mit dem&amp;amp;nbsp; &#039;&#039;Core Network&#039;&#039;&amp;amp;nbsp; und ist für&amp;amp;nbsp; &#039;&#039;Combining und Splitting&#039;&#039;&amp;amp;nbsp; verantwortlich. &lt;br /&gt;
**Der zweite RNC &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; &#039;&#039;Drift RNC&#039;&#039;&amp;amp;nbsp; (DRNC) übernimmt die Kommunikation mit dem&amp;amp;nbsp; SRNC&amp;amp;nbsp; und mit dem von ihm verwalteten &amp;amp;bdquo;Node B&amp;amp;rdquo;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1524__Bei_T_4_1_S10.png|right|frame|Zur Verdeutlichung verschiedener Handover&amp;amp;ndash;Strategien]] &lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Beispiel 4:}$&amp;amp;nbsp;&lt;br /&gt;
Wir gehen von folgendem Szenario aus. Das Fahrzeug startet bei&amp;amp;nbsp; $\rm A$,  bewegt sich nach rechts und passiert verschiedene Basisstationen, die jeweils mit einem&amp;amp;nbsp; &#039;&#039;Radio Network Controller&#039;&#039;&amp;amp;nbsp; (RNC) verbunden sind. Die Buchstaben markieren verschiedene Fahrzeugpositionen.&lt;br /&gt;
&lt;br /&gt;
* Bei den Positionen&amp;amp;nbsp;  $\rm A$,&amp;amp;nbsp; $\rm C$,&amp;amp;nbsp; $\rm E$,&amp;amp;nbsp; $\rm G$,&amp;amp;nbsp; $\rm I$&amp;amp;nbsp; und&amp;amp;nbsp; $\rm K$&amp;amp;nbsp; gibt es stets nur eine RNC–Verbindung, also auch&amp;amp;nbsp; &#039;&#039;kein Handover&#039;&#039;.&lt;br /&gt;
* Bei&amp;amp;nbsp; $\rm B$,&amp;amp;nbsp; $\rm F$&amp;amp;nbsp; und&amp;amp;nbsp; $\rm J$&amp;amp;nbsp; ist das Fahrzeug mit zwei Basisstationen des gleichen  RNC in Kontakt &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; &#039;&#039;Intra–RNC Handover&#039;&#039;.&lt;br /&gt;
*Bei&amp;amp;nbsp; $\rm D$&amp;amp;nbsp; und&amp;amp;nbsp; $\rm H$&amp;amp;nbsp; ist das Fahrzeug mit zwei Basisstationen zweier  RNCs in Kontakt &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; &#039;&#039;Inter–RNC Handover&#039;&#039;.&lt;br /&gt;
*Voraussetzung hierfür ist allerdings, dass die Koordination der beiden RNCs durch das&amp;amp;nbsp; &#039;&#039;Core Network&#039;&#039;&amp;amp;nbsp; (CN) funktioniert. Ansonsten: &amp;amp;nbsp; &#039;&#039;Hard Handover&#039;&#039;.}}&lt;br /&gt;
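Die Zuordnung der Fahrzeugpositionen zu den Handover-Typen aus Beispiel 4 lässt sich als kleine Entscheidungslogik nachbilden (illustrative Skizze; die Datenstruktur aus (Node-B, RNC)-Paaren ist eine frei gewählte Annahme):

```python
def handover_typ(verbindungen, cn_koordination=True):
    """Klassifiziert den Handover-Typ.

    verbindungen:     Liste von (Node-B, RNC)-Paaren der aktiven Funkverbindungen
    cn_koordination:  True, falls das Core Network die RNCs koordinieren kann
    """
    if len(verbindungen) == 1:
        return "kein Handover"
    rncs = set(rnc for _, rnc in verbindungen)
    if len(rncs) == 1:
        return "Intra-RNC Handover"
    # Mehrere RNCs beteiligt: Soft Handover nur bei funktionierender
    # Koordination durch das Core Network, sonst harte Umschaltung
    if cn_koordination:
        return "Inter-RNC Handover"
    return "Hard Handover"
```

Position $\rm B$ entspräche also etwa dem Aufruf mit zwei Basisstationen desselben RNC, Position $\rm D$ dem Aufruf mit Basisstationen zweier verschiedener RNCs.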
&lt;br /&gt;
&lt;br /&gt;
==IP–basierte Netze == 	 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Mit dem UMTS Release 5 wurden unter anderem&amp;amp;nbsp; &#039;&#039;&#039;IP–basierte Netze&#039;&#039;&#039;&amp;amp;nbsp; (&#039;&#039;IP Core Networks&#039;&#039;)&amp;amp;nbsp; eingeführt. &lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1525__Bei_T_4_2_S9_v1.png|right|frame|Netzarchitektur von UMTS &amp;amp;ndash; Release 5]] &lt;br /&gt;
*Dabei werden sowohl die Nutzdaten als auch die Kontrolldaten über ein internes IP–Netz übertragen. &lt;br /&gt;
*Das bedeutet, dass sowohl leitungsvermittelte Dienste als auch paketvermittelte Dienste auf der Basis von IP–Protokollen erbracht werden.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Die Grafik zeigt diese Netzarchitektur in schematischer Weise. Im Vergleich zur ursprünglichen UMTS–Netzarchitektur (Release 99) wurde das Netz um folgende Knoten ergänzt:&lt;br /&gt;
*Das&amp;amp;nbsp; &#039;&#039;Media Gateway&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;MGW&#039;&#039;&#039;)&amp;amp;nbsp; ist für die Wiedergewinnung der in&amp;amp;nbsp; &#039;&#039;Voice–over–IP&#039;&#039;&amp;amp;nbsp; (VoIP) konvertierten Sprachpakete in herkömmliche Sprachdaten verantwortlich.&lt;br /&gt;
*Der&amp;amp;nbsp; &#039;&#039;Home Subscriber Server&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;HSS&#039;&#039;&#039;)&amp;amp;nbsp; fasst die aus dem&amp;amp;nbsp; &#039;&#039;UMTS Release 99&#039;&#039;&amp;amp;nbsp; bekannten Register&amp;amp;nbsp; &#039;&#039;&#039;HLR&#039;&#039;&#039;&amp;amp;nbsp; und&amp;amp;nbsp; &#039;&#039;&#039;VLR&#039;&#039;&#039;&amp;amp;nbsp; zusammen.&lt;br /&gt;
*Der&amp;amp;nbsp; &#039;&#039;Call State Control Function&#039;&#039;&amp;amp;nbsp;  (&#039;&#039;&#039;CSCF&#039;&#039;&#039;)–Knoten ist für die gesamte Steuerung des IP–Netzes in&amp;amp;nbsp; &#039;&#039;UMTS Release 5&#039;&#039;&amp;amp;nbsp; zuständig und stellt die Kommunikation zwischen CSCF–Knoten und Teilnehmer über das&amp;amp;nbsp; &#039;&#039;Session Initiation Protocol&#039;&#039;&amp;amp;nbsp; (SIP) her.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Es spricht vieles für den Einsatz einer solchen IP–basierten Netzarchitektur, da diese eine Reihe von Verbesserungen bereitstellt. &lt;br /&gt;
&lt;br /&gt;
Wesentliche&amp;amp;nbsp; &#039;&#039;&#039;Vorteile&#039;&#039;&#039;&amp;amp;nbsp; von IP–Netzen sind:&lt;br /&gt;
*eine zukunftsweisende Alternative zur jetzigen Auslegung,&lt;br /&gt;
*eine preiswerte Routing–Technologie  &amp;amp;nbsp; ⇒ &amp;amp;nbsp; große Einsparungen bei der Vermittlungstechnik,&lt;br /&gt;
*große Flexibilität bei der Einführung neuer Dienste, und&lt;br /&gt;
*eine leichte Implementierung von Netzüberwachungstechniken.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Entscheidende&amp;amp;nbsp; &#039;&#039;&#039;Nachteile&#039;&#039;&#039;&amp;amp;nbsp; dieser Architektur sind derzeit (2011) allerdings auch:&lt;br /&gt;
*die mühsame Integration der Infrastruktur der zweiten Mobilfunkgeneration,&lt;br /&gt;
*die Notwendigkeit von Übergangsknoten zur Konvertierung der Daten in so genannten Gateways, und&lt;br /&gt;
*das Fehlen eines eindeutigen und zuverlässigen Sicherheitskonzeptes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Aufgaben zum Kapitel == 	&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
[[Aufgaben:Aufgabe_4.3:_UMTS–Zugangsebene|Aufgabe 4.3: UMTS–Zugangsebene]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Aufgabe_4.4:_Zellulare_UMTS-Architektur|Aufgabe 4.4: Zellulare UMTS-Architektur]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_UMTS&amp;diff=34989</id>
		<title>Examples of Communication Systems/Allgemeine Beschreibung von UMTS</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_UMTS&amp;diff=34989"/>
		<updated>2020-10-13T15:39:41Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Allgemeine Beschreibung von UMTS to Examples of Communication Systems/General Description of UMTS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Examples of Communication Systems/General Description of UMTS]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/General_Description_of_UMTS&amp;diff=34988</id>
		<title>Examples of Communication Systems/General Description of UMTS</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/General_Description_of_UMTS&amp;diff=34988"/>
		<updated>2020-10-13T15:39:41Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Allgemeine Beschreibung von UMTS to Examples of Communication Systems/General Description of UMTS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=UMTS – Universal Mobile Telecommunications System&lt;br /&gt;
|Vorherige Seite=Weiterentwicklungen des GSM&lt;br /&gt;
|Nächste Seite=UMTS–Netzarchitektur&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== # ÜBERBLICK ZUM VIERTEN HAUPTKAPITEL # ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
$\rm U$niversal $\rm M$obile $\rm T$elecommunications $\rm S$ystem (UMTS) ist ein Mobilfunksystem der dritten Generation, das bei seiner Einführung eine zukunftsweisende Alternative zu den bis dahin verwendeten Mobilfunksystemen darstellen sollte. Es bietet im Vergleich zu GSM nicht nur eine hochwertigere Sprachqualität, sondern dank seiner schnelleren und paketvermittelten Übertragung auch eine Vielfalt erweiterter Dienste und Funktionalitäten.&lt;br /&gt;
&lt;br /&gt;
UMTS wurde Ende der 1990er Jahre im Rahmen einer Zusammenarbeit zwischen der&amp;amp;nbsp; &#039;&#039;International Telecommunication Union&#039;&#039;&amp;amp;nbsp; (ITU) und dem 3GPP–Forum (&#039;&#039;3rd Generation Partnership Project&#039;&#039;) standardisiert und ist seit 2004 in Europa kommerziell verfügbar. In Deutschland wurden bis Ende 2007 mehr als 10 Millionen Nutzer registriert. Weltweit nutzen derzeit (2011) um die 200 Millionen Teilnehmer UMTS oder ähnliche Mobilfunksysteme der dritten Generation.&lt;br /&gt;
&lt;br /&gt;
Dieses Kapitel beinhaltet im Einzelnen:&lt;br /&gt;
&lt;br /&gt;
*UMTS als Mobilfunksystem der dritten Generation,&lt;br /&gt;
*die Dienste und Sicherheitsaspekte in UMTS,&lt;br /&gt;
*die UMTS–Netzarchitektur,&lt;br /&gt;
*die physikalischen, logischen und Transportkanäle sowie deren Interaktionen,&lt;br /&gt;
*die zellulare Architektur in UMTS und deren Mechanismen,&lt;br /&gt;
*die in UMTS verwendete Sprach– und Kanalcodierung,&lt;br /&gt;
*die Bandspreizung und CDMA als Basis von UMTS,&lt;br /&gt;
*die Funkressourcenverwaltung und Leistungsregelung in UMTS–Netzen,&lt;br /&gt;
*die Weiterentwicklungen von UMTS wie HSDPA und HSUPA,&lt;br /&gt;
*ein Ausblick auf Long Term Evolution (LTE), das Schlagwort der vierten Generation. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Dem 4G–Standard&amp;amp;nbsp; [[Mobile_Communications/Allgemeines_zum_Mobilfunkstandard_LTE|Long Term Evolution]]&amp;amp;nbsp; (LTE) ist im Buch „Mobile Kommunikation” ein eigenes Kapitel gewidmet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Anforderungen an Mobilfunksysteme der dritten Generation==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Die wichtigste Motivation zur Entwicklung von&amp;amp;nbsp; &#039;&#039;Mobilfunksystemen der dritten Generation&#039;&#039;&amp;amp;nbsp; war die Erkenntnis, dass die Systeme der zweiten Generation den Bandbreitenbedarf zur Nutzung multimedialer Dienste nicht zufrieden stellen konnten. &lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1481__Bei_T_4_1_S1_v2.png|right|frame|Entwicklung der Mobilfunksysteme von 1995 bis 2006]]&lt;br /&gt;
Die Grafik zeigt die Entwicklung der Mobilfunksysteme seit 1995 hinsichtlich Leistungsfähigkeit bzw. Datenübertragungsrate aus Sicht des Jahres 2007:&lt;br /&gt;
* Die  angegebenen Datenraten  für HSUPA $($Uplink, bis&amp;amp;nbsp; $\text{3 Mbit/s)}$&amp;amp;nbsp; und HSDPA $($Downlink, bis&amp;amp;nbsp; $\text{7 Mbit/s)}$&amp;amp;nbsp; waren für 2006/2007 realistisch.&lt;br /&gt;
*In den Spezifikationen wurden dagegen für den Uplink&amp;amp;nbsp; $\text{5.8 Mbit/s}$ und für den Downlink&amp;amp;nbsp; $\text{14.4 Mbit/s}$ (also deutlich höhere Maximalwerte) genannt, die in der Praxis aber wohl nicht erreichbar sein werden.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
Die Mobilfunksysteme der dritten Generation sollten über eine größere Bandbreite als das damals bereits etablierte&amp;amp;nbsp; [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_GSM|GSM]]&amp;amp;nbsp; verfügen und genügend Reserve an Leistungsfähigkeit aufweisen, um auch bei den stetig wachsenden Anforderungen eine hohe Dienstgüte gewährleisten zu können.&lt;br /&gt;
&lt;br /&gt;
Bei der Entwicklung der Systeme der dritten Generation hat die&amp;amp;nbsp; &#039;&#039;International Telecommunication Union&#039;&#039;&amp;amp;nbsp; (ITU) eine wichtige Rolle gespielt. Sie hat unter anderem einen Anforderungskatalog erstellt, der ihre Eigenschaften festlegte. Dieser Anforderungskatalog umfasst folgende Rahmenbedingungen:&lt;br /&gt;
*Hohe Datenraten von&amp;amp;nbsp; $\text{144 kbit/s}$  (Standard) bis&amp;amp;nbsp; $\text{2 Mbit/s}$&amp;amp;nbsp; (In-door),&lt;br /&gt;
*symmetrische und asymmetrische Datenübertragung (IP–Dienste),&lt;br /&gt;
*leitungsvermittelte (&#039;&#039;circuit–switched&#039;&#039;) und paketvermittelte (&#039;&#039;packet–switched&#039;&#039;) Übertragung,&lt;br /&gt;
*hohe Sprachqualität und hohe Spektraleffizienz,&lt;br /&gt;
*nahtloser Übergang von und zu Systemen der zweiten Generation,&lt;br /&gt;
*globale Erreichbarkeit und Verbreitung,&lt;br /&gt;
*Anwendungen unabhängig vom verwendeten Netz (&#039;&#039;Virtual Home Environment&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==Der IMT–2000–Standard==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Im Jahre 1992 wurde von der&amp;amp;nbsp; &#039;&#039;International Telecommunication Union&#039;&#039;&amp;amp;nbsp; (ITU) der Standard&amp;amp;nbsp; $\rm IMT\hspace{-0.1cm}-\hspace{-0.1cm}2000$&amp;amp;nbsp; $($&#039;&#039;International Mobile Telecommunications at&#039;&#039;&amp;amp;nbsp; 2000 MHz$)$&amp;amp;nbsp; ins Leben gerufen, der die genannten Anforderungen ermöglichen sollte. Dieser umfasst eine Reihe verschiedener Mobilfunksysteme der dritten Generation, die im Laufe der Standardisierung aneinander angenähert wurden, um die Entwicklung von gemeinsamen Endgeräten für alle diese Standards zu ermöglichen.&lt;br /&gt;
&lt;br /&gt;
Um national unterschiedliche Vorarbeiten zu berücksichtigen und den Netzbetreibern die Möglichkeit zu geben, die bereits bestehenden Netzarchitekturen zum Teil weiter zu verwenden, beinhaltet IMT–2000 mehrere Einzelstandards. Diese lassen sich grob in vier Kategorien einteilen:&lt;br /&gt;
[[File:P_ID1482__Bei_T_4_1_S2_v1.png|right|frame|Die  &amp;amp;bdquo;IMT–Familie&amp;amp;rdquo; &amp;amp;ndash; ein Überblick]]&lt;br /&gt;
*$\rm W–CDMA$: &amp;amp;nbsp;  Dazu zählt man die FDD-Komponente des europäischen UMTS–Standards sowie das amerikanische cdma2000–System.&lt;br /&gt;
*$\rm TD–CDMA$: &amp;amp;nbsp;  Zu dieser Gruppe zählt die TDD–Komponente von UMTS sowie das chinesische TD–SCDMA, das mittlerweile in den UMTS–TDD–Standard integriert ist.&lt;br /&gt;
*$\rm TDMA$: &amp;amp;nbsp;  Eine Weiterentwicklung des GSM–Ablegers EDGE und des amerikanischen Pendants UWC–136, auch bekannt als DS–AMPS.&lt;br /&gt;
*$\rm FD–TDMA$: &amp;amp;nbsp;  Die Weiterentwicklung des europäischen Schnurlos–Telefonie–Standards DECT (&#039;&#039;Digital Enhanced Cordless Telecommunication&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Im Folgenden konzentrieren wir uns auf das in Europa entwickelte System&amp;amp;nbsp;  $\rm UMTS$&amp;amp;nbsp;  (&#039;&#039;Universal Mobile Telecommunications System&#039;&#039;), das die beiden erstgenannten Standards&amp;amp;nbsp; $\rm W–CDMA$&amp;amp;nbsp; und&amp;amp;nbsp; $\rm TD–CDMA$&amp;amp;nbsp; der Systemfamilie IMT–2000 unterstützt.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==Historische Entwicklung von UMTS == 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Es folgen einige Daten zur historischen Entwicklung von UMTS und der verwendeten Techniken. Weitere Informationen finden Sie zum Beispiel unter diesem [http://www.umtsworld.com/umts/history.htm Internet–Link].&lt;br /&gt;
*&#039;&#039;&#039;1940–1950&#039;&#039;&#039; &amp;amp;nbsp;  Erste militärische Anwendungen von Signalspreizverfahren.&lt;br /&gt;
*&#039;&#039;&#039;1949&#039;&#039;&#039; &amp;amp;nbsp; 	Erste Grundzüge des CDMA–Verfahrens durch C. E. Shannon und J. R. Pierce.&lt;br /&gt;
*&#039;&#039;&#039;1970&#039;&#039;&#039;	&amp;amp;nbsp; 	Verschiedene CDMA–Entwicklungen für militärische Systeme, z.B. GPS.&lt;br /&gt;
*&#039;&#039;&#039;1989–1992&#039;&#039;&#039;	&amp;amp;nbsp; 	Grundlagenforschung zu den Eigenschaften zukünftiger Mobilfunksysteme im Rahmen des EU–Programms RACE–1 &amp;lt;br&amp;gt;(&#039;&#039;Research, Analysis, Communication, Evaluation&#039;&#039;).&lt;br /&gt;
*&#039;&#039;&#039;1992&#039;&#039;&#039;	&amp;amp;nbsp; 	Erste Überlegungen zum Standard IMT–2000 durch die ITU.&lt;br /&gt;
*&#039;&#039;&#039;1992–1995&#039;&#039;&#039;	&amp;amp;nbsp; 	EU–Programm RACE–2 mit dem Schwerpunkt „Entwicklung von Systemkonzepten” – basierend auf den Ergebnissen von RACE–1.&lt;br /&gt;
*&#039;&#039;&#039;1996&#039;&#039;&#039;	&amp;amp;nbsp; 	Gründung des UMTS–Forums in Zürich – Umbenennung des geplanten europäischen Standards „W–CDMA” in „UMTS”.&lt;br /&gt;
*&#039;&#039;&#039;1998&#039;&#039;&#039;	&amp;amp;nbsp; 	Übernahme der beiden Modi &#039;&#039;W–CDMA&#039;&#039; und &#039;&#039;TD–CDMA&#039;&#039; in den UMTS–Standard auf der ETSI–SMG–Sitzung in Paris.&lt;br /&gt;
*&#039;&#039;&#039;1998&#039;&#039;&#039;	&amp;amp;nbsp; 	Gründung des &#039;&#039;3gpp&#039;&#039;–Forums (&#039;&#039;3rd Generation Partnership Project&#039;&#039;) durch die Gremien ETSI–SMG, T1P1, ARIB TTC und TTA.&lt;br /&gt;
*&#039;&#039;&#039;1999&#039;&#039;&#039;	&amp;amp;nbsp; 	Verabschiedung des Standards UMTS–R99 (Release 1999) durch die ETSI. Dieser gilt als Basis für die ersten verfügbaren UMTS–Endgeräte.&lt;br /&gt;
*&#039;&#039;&#039;2001	&#039;&#039;&#039;&amp;amp;nbsp; 	Verabschiedung der Release 4 als Weiterentwicklung von UMTS–R99: &#039;&#039;Quality of Service&#039;&#039; (QoS) wird nun an der Funkschnittstelle und im Festnetz unterstützt.&lt;br /&gt;
*&#039;&#039;&#039;2001	&#039;&#039;&#039;&amp;amp;nbsp; 	Erstes kommerzielles UMTS–Netz des norwegischen Unternehmens TELENOR.&lt;br /&gt;
*&#039;&#039;&#039;2002	&#039;&#039;&#039;&amp;amp;nbsp; 	Verabschiedung der UMTS Release 5: &amp;amp;nbsp; Die an das GSM–Festnetz angelehnte Architektur wird durch ein vollständig IP–basiertes Festnetz ersetzt.&lt;br /&gt;
*&#039;&#039;&#039;2002	&#039;&#039;&#039;&amp;amp;nbsp; 	Erste UMTS–Sprach– und Datenverbindung von Nortel Networks und Qualcomm. Diese Firmen gelten als Vorreiter bei der Umsetzung der UMTS–Technologie.&lt;br /&gt;
*&#039;&#039;&#039;2004&#039;&#039;&#039;	&amp;amp;nbsp; 	Verabschiedung der UMTS Release 6. Dieser Standard bietet dem Nutzer einen verbesserten QoS und dem Anbieter eine effektivere Ressourcenverwaltung.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1483__Bei_T_4_1_S3_v2_90.png|right|frame|Zur historischen Entwicklung von UMTS]]&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Zusammenfassung der historischen Entwicklung von UMTS}$&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Auch bei UMTS ist zwischen den ersten Konzeptüberlegungen und der endgültigen Einführung mehr als ein Jahrzehnt vergangen. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Dies war ähnlich wie bei der Einführung anderer Kommunikationssysteme, wie zum Beispiel bei&lt;br /&gt;
*[[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_ISDN|ISDN]]&amp;amp;nbsp; (&#039;&#039;Integrated Services Digital Network&#039;&#039;),&lt;br /&gt;
*[[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_DSL|DSL]]&amp;amp;nbsp; (&#039;&#039;Digital Subscriber Line&#039;&#039;),&lt;br /&gt;
*[[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_GSM|GSM]]&amp;amp;nbsp; (&#039;&#039;Global System for Mobile Communications&#039;&#039;).}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
==Frequenzspektren für UMTS==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Zuständig für die Zuweisung von Bandbreiten und Frequenzbändern der Kommunikationssysteme ist die&amp;amp;nbsp; &#039;&#039;International Telecommunication Union&#039;&#039;&amp;amp;nbsp; (ITU). Insbesondere bei UMTS gibt es aber Abweichungen zwischen den europäischen und den ITU–Frequenzzuweisungen, da manche Frequenzbänder in manchen Ländern schon von anderen Mobilfunksystemen belegt waren. Die Grafik zeigt die europäische (unten) sowie die ITU–Frequenzbelegung (oben).&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1526__Bei_T_4_1_S4_v1.png|right|frame|UMTS&amp;amp;ndash;Frequenzspektren]]&lt;br /&gt;
Hierbei bedeuten:&lt;br /&gt;
*$\rm GSM \ 1800$ – Frequenzband für den Downlink des GSM 1800,&lt;br /&gt;
*$\rm SAT$ – satellitengestützte Systeme &amp;lt;br&amp;gt;(jeweils&amp;amp;nbsp; $\text{30 MHz}$&amp;amp;nbsp; für Uplink und Downlink),&lt;br /&gt;
*$\rm DECT$ – &#039;&#039;Digital Enhanced Cordless Telecommunications&#039;&#039; &amp;lt;br&amp;gt;(Schnurlostelefon–Standard),&lt;br /&gt;
*$\rm UTRA–FDD$ – &#039;&#039;UMTS Terrestrial Radio Access–Frequency Division Duplex&#039;&#039;,&lt;br /&gt;
*$\rm UTRA–TDD$ – &#039;&#039;UMTS Terrestrial Radio Access–Time Division Duplex&#039;&#039;.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
$\rm UTRA–FDD$&amp;amp;nbsp; &amp;amp;ndash; oder kurz&amp;amp;nbsp; $\rm FDD$&amp;amp;nbsp; &amp;amp;ndash;  besteht aus zwölf gepaarten Uplink– und Downlink–Frequenzbändern zu je&amp;amp;nbsp; $\text{5 MHz}$&amp;amp;nbsp; Bandbreite. Die Frequenzbänder liegen in Europa &lt;br /&gt;
*zwischen&amp;amp;nbsp; $\text{1920 MHz}$&amp;amp;nbsp; und&amp;amp;nbsp; $\text{1980 MHz}$&amp;amp;nbsp; im Uplink, sowie &lt;br /&gt;
*zwischen&amp;amp;nbsp; $\text{2110 MHz}$&amp;amp;nbsp; und&amp;amp;nbsp; $\text{2170 MHz}$&amp;amp;nbsp; im Downlink.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Dagegen besteht&amp;amp;nbsp; $\rm UTRA–TDD$&amp;amp;nbsp;  &amp;amp;ndash;  oder kurz $\rm TDD$ &amp;amp;ndash;  aus fünf Frequenzbändern zu je&amp;amp;nbsp; $\text{5 MHz}$&amp;amp;nbsp; Bandbreite, in denen mittels Zeitmultiplex sowohl Uplink– als auch Downlink–Daten übertragen werden sollen. &lt;br /&gt;
*Für TDD sind die Frequenzen zwischen&amp;amp;nbsp; $\text{1900 MHz}$&amp;amp;nbsp; und&amp;amp;nbsp; $\text{1920 MHz}$&amp;amp;nbsp; (vier Kanäle) und zwischen&amp;amp;nbsp; $\text{2020 MHz}$&amp;amp;nbsp; und&amp;amp;nbsp; $\text{2025 MHz}$&amp;amp;nbsp; (ein Kanal) reserviert. &lt;br /&gt;
*Das Band zwischen&amp;amp;nbsp; $\text{2010 MHz}$&amp;amp;nbsp; und&amp;amp;nbsp; $\text{2020 MHz}$&amp;amp;nbsp; wurde noch nicht lizenziert und wird deswegen in Deutschland ebenfalls noch nicht genutzt.&lt;br /&gt;
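Die genannte europäische Frequenzaufteilung lässt sich als kleine Nachschlagefunktion zusammenfassen (vereinfachte Skizze: ganzzahlige MHz-Werte, Bandgrenzen ohne Schutzabstände; Funktionsname und Rückgabetexte sind frei gewählt):

```python
def utra_band(f_mhz):
    """Ordnet eine ganzzahlige Frequenz in MHz den europaeischen
    UTRA-Baendern zu (vereinfachte Tabelle nach obigen Angaben)."""
    if f_mhz in range(1920, 1980):
        return "UTRA-FDD Uplink"
    if f_mhz in range(2110, 2170):
        return "UTRA-FDD Downlink"
    if f_mhz in range(1900, 1920) or f_mhz in range(2020, 2025):
        return "UTRA-TDD"
    return "kein UTRA-Band"
```

Das noch nicht lizenzierte Band zwischen 2010 MHz und 2020 MHz fällt in dieser Skizze bewusst in die Kategorie „kein UTRA-Band”.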
&lt;br /&gt;
 	 &lt;br /&gt;
== Vollduplexverfahren == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Um die beiden Übertragungsrichtungen Uplink und Downlink zu trennen, werden in UMTS zwei unterschiedliche Betriebsmodi unterstützt. Man unterscheidet:&lt;br /&gt;
*&#039;&#039;UMTS Terrestrial Radio Access Frequency Division Duplex&#039;&#039;&amp;amp;nbsp; (UTRA–FDD),&lt;br /&gt;
*&#039;&#039;UMTS Terrestrial Radio Access Time Division Duplex&#039;&#039;&amp;amp;nbsp; (UTRA–TDD).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Der wesentliche Unterschied zwischen diesen beiden Modi zeigt sich vor allem in der physikalischen Ebene des Protokollstapels. Die beiden Verfahren unterscheiden sich dabei sowohl in ihren Duplex– als auch in ihren Vielfachzugriffsverfahren.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1527__Bei_T_4_1_S3a_v1.png|center|frame|UTRA–FDD–Modus in UMTS]]&lt;br /&gt;
&lt;br /&gt;
Im&amp;amp;nbsp; $\rm UTRA–FDD$–Modus werden – wie in obiger Grafik zu sehen – die Uplink– und Downlink–Daten gleichzeitig auf unterschiedlichen, aber korrespondierenden gepaarten Frequenzblöcken zu je&amp;amp;nbsp; $\text{5 MHz}$&amp;amp;nbsp; übertragen. Dabei ist zu beachten:&lt;br /&gt;
*Daten verschiedener Teilnehmer werden auf dem gleichen Frequenzband gesendet und empfangen. &lt;br /&gt;
*Die Verwendung von verschiedenen CDMA–Spreizcodes ermöglicht die Trennung der jeweiligen Teilnehmerdaten.&lt;br /&gt;
*Es wird außerdem das&amp;amp;nbsp; &#039;&#039;TDMA–Verfahren&#039;&#039;&amp;amp;nbsp; verwendet, um periodische Funktionen wie zum Beispiel die Leistungssteuerung zu realisieren.&lt;br /&gt;
*Das&amp;amp;nbsp; &#039;&#039;FDMA–Verfahren&#039;&#039;&amp;amp;nbsp; kann zusätzlich zu CDMA und TDMA genutzt werden, wenn der Netzbetreiber über mehr als einen Frequenzkanal verfügt.&lt;br /&gt;
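Das Grundprinzip der Teilnehmertrennung über Spreizcodes lässt sich an einem Minimalbeispiel zeigen (illustrative Skizze mit kurzen, orthogonalen ±1-Folgen; reale UMTS-Spreizcodes und Spreizfaktoren sind hier nicht nachgebildet):

```python
def spreizen(bits, code):
    """Spreizung: jedes Datenbit (+1/-1) wird chipweise mit der
    Spreizfolge (+1/-1-Chips) multipliziert."""
    return [b * c for b in bits for c in code]

def entspreizen(chips, code):
    """Entspreizung durch Korrelation mit derselben Spreizfolge.
    Bei orthogonalen Codes ergibt die normierte Korrelation
    wieder die gesendeten Datenbits +1/-1."""
    n = len(code)
    bits = []
    for i in range(0, len(chips), n):
        korr = sum(ch * c for ch, c in zip(chips[i:i + n], code))
        bits.append(round(korr / n))
    return bits
```

Überlagert man die Chipfolgen zweier Teilnehmer mit zueinander orthogonalen Codes, so lässt sich aus dem Summensignal durch Korrelation mit dem jeweiligen Code jedes Datenbit wieder trennen.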
&lt;br /&gt;
&lt;br /&gt;
Der FDD–Modus wird nur in Europa und meist nur bei symmetrischen Diensten verwendet, deren Bandbreitenanforderungen im Uplink und im Downlink etwa gleich sind. Dies ist zum Beispiel bei der Sprachkommunikation oder der Videotelefonie der Fall.&lt;br /&gt;
 	 &lt;br /&gt;
&lt;br /&gt;
Im&amp;amp;nbsp; $\rm UTRA–TDD$–Modus werden Uplink– und Downlink–Daten im gleichen Frequenzband übertragen. Dabei werden Uplink und Downlink zeitlich getrennt, wie die folgende Grafik zeigt. Weiterhin gilt:&lt;br /&gt;
*Der Umschaltzeitpunkt (&#039;&#039;Switching Point&#039;&#039;) kann abhängig vom Datenvolumenverhältnis zwischen Uplink und Downlink flexibel gewählt werden.&lt;br /&gt;
*Die Teilnehmer werden beim TDD–Modus sowohl durch den Spreizcode&amp;amp;nbsp; (wie bei FDD)&amp;amp;nbsp; als auch durch den Zeitschlitz gekennzeichnet.&lt;br /&gt;
*Verfügt der Netzbetreiber über mehrere Frequenzkanäle, so kann wie bei FDD  zusätzlich zu CDMA und TDMA noch FDMA zum Einsatz kommen.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1528__Bei_T_4_1_S3b_v1.png|center|frame|UTRA–TDD–Modus in UMTS]]&lt;br /&gt;
&lt;br /&gt;
Der TDD–Modus wird derzeit in Europa noch nicht genutzt und wird nach seiner Einführung hauptsächlich bei asymmetrischen Diensten (zum Beispiel: &amp;amp;nbsp; Downloads oder Surfen im Internet) eingesetzt werden, bei denen sich die Datenvolumina von Downlink und Uplink deutlich unterscheiden.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Eigenschaften des UMTS-Funkkanals==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Im UMTS–Funkkanal treten neben Interferenzen durch andere Teilnehmer und Rauschen zusätzlich eine Reihe unvorhersehbarer, störender und verzerrender Effekte auf, die sich zudem über der Zeit verändern. &lt;br /&gt;
&lt;br /&gt;
Bedingt durch Reflexionen sowie Streuungen und Beugungen an Objekten erfährt das gesendete Signal eine&amp;amp;nbsp; &#039;&#039;&#039;Mehrwegeausbreitung&#039;&#039;&#039;&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Multipath Propagation&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1529__Bei_T_4_1_S5a_v2.png|right|frame|Szenario mit Mehrwegeausbreitung]] &lt;br /&gt;
*Dabei erreicht das Signal den Empfänger nicht nur über den direkten Pfad, sondern über mehrere Wege mit unterschiedlichen Laufzeiten und unterschiedlich gedämpft.&lt;br /&gt;
*Die Mehrwegeausbreitung wird von der Umgebung beeinflusst, zusätzlich aber auch von einer möglichen Bewegung des Teilnehmers, wie in der Grafik durch die Bewegungsgeschwindigkeit&amp;amp;nbsp; $v$&amp;amp;nbsp; angedeutet ist.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
Der&amp;amp;nbsp; &#039;&#039;&#039;Pfadverlust&#039;&#039;&#039;&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Path–Loss&#039;&#039;) geht auf Ausbreitungseigenschaften elektromagnetischer Wellen zurück – siehe Seite&amp;amp;nbsp; [[Mobile_Communications/Distanzabhängige_Dämpfung_und_Abschattung#Gebr.C3.A4uchliches_Pfadverlustmodell|Gebräuchliches Pfadverlustmodell]]&amp;amp;nbsp; im Buch &amp;amp;bdquo;Mobile Kommunikation&amp;amp;rdquo;. Für die Untersuchung dieses Dämpfungsphänomens gehen wir von einem vereinfachten Pfadverlustmodell aus. Dieses besagt:&lt;br /&gt;
*Die Empfangsleistung eines Funksignals fällt mit der Entfernung&amp;amp;nbsp; $d$&amp;amp;nbsp; proportional zu&amp;amp;nbsp; $d^{–γ}$, wobei der Exponent&amp;amp;nbsp; $γ$&amp;amp;nbsp; vom Ausbreitungsmedium abhängt.&lt;br /&gt;
*Unter Berücksichtigung von konstruktiven oder destruktiven Bodenreflexionen nimmt die Konstante&amp;amp;nbsp; $γ$&amp;amp;nbsp; unterhalb des&amp;amp;nbsp; &amp;amp;bdquo;Break Points&amp;amp;rdquo;&amp;amp;nbsp; $d_0$&amp;amp;nbsp; Werte zwischen&amp;amp;nbsp; $2$&amp;amp;nbsp; und&amp;amp;nbsp; $3$&amp;amp;nbsp; an.&lt;br /&gt;
*Oberhalb dieses charakteristischen Punktes verstärken sich die Reflexionseffekte und die Ausbreitungskonstante&amp;amp;nbsp; $γ$&amp;amp;nbsp; wächst auf Werte zwischen&amp;amp;nbsp; $3.5$&amp;amp;nbsp; und&amp;amp;nbsp; $4$&amp;amp;nbsp; an.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1530__Bei_T_4_1_S5b_v1.png|right|frame|Pfadverlust (Dämpfung) abhängig von der Entfernung]]&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Beispiel 1:}$&amp;amp;nbsp;&lt;br /&gt;
Rechts dargestellt ist der Pfadverlust (in dB) in Abhängigkeit von der Entfernung&amp;amp;nbsp; $d$. Bei diesem Beispiel ist die Konstante&amp;amp;nbsp; $α_0 = 10^{–5}$&amp;amp;nbsp; $($also&amp;amp;nbsp; $50 \ \rm dB)$&amp;amp;nbsp; gesetzt und der Break Point liegt bei&amp;amp;nbsp; $d_0 = \rm 100 \ m$.&lt;br /&gt;
*Im linken Bereich&amp;amp;nbsp;  $(d \ll d_0)$&amp;amp;nbsp; gilt&amp;amp;nbsp; $γ \approx 2$.&lt;br /&gt;
*Für&amp;amp;nbsp; $d \gg d_0$&amp;amp;nbsp; ist dagegen&amp;amp;nbsp; $γ \approx 4$. &lt;br /&gt;
*Im Bereich um&amp;amp;nbsp; $d  = d_0$&amp;amp;nbsp; steigt die Ausbreitungskonstante koninuierlich von&amp;amp;nbsp;   $γ = 2$&amp;amp;nbsp; auf&amp;amp;nbsp; $γ = 4$&amp;amp;nbsp; an.}}&lt;br /&gt;
	 	 &lt;br /&gt;
&lt;br /&gt;
==Frequency-selective and time-selective fading==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
An essential property of the mobile radio channel is the so-called&amp;amp;nbsp; &#039;&#039;&#039;fading&#039;&#039;&#039;. It is caused by time-variant signal shadowing and by possible movements of the mobile subscriber.&lt;br /&gt;
&lt;br /&gt;
This type of signal degradation is treated in detail in the book&amp;amp;nbsp; [[Mobile Kommunikation|Mobile Communications]]. &lt;br /&gt;
&lt;br /&gt;
Here only a brief summary follows. One distinguishes on the one hand:&lt;br /&gt;
*&#039;&#039;fast fading&#039;&#039;&amp;amp;nbsp; (also called&amp;amp;nbsp; &#039;&#039;short term fading&#039;&#039;)&amp;amp;nbsp; with short-term drops of the received power in the microsecond range,&lt;br /&gt;
*&#039;&#039;slow fading&#039;&#039;&amp;amp;nbsp; (&#039;&#039;long term fading&#039;&#039;), i.e. only slow changes, usually in the range of seconds.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Fast fading&#039;&#039;&amp;amp;nbsp; mainly affects systems with a large symbol duration, i.e. a small bandwidth. Since the bandwidth of UMTS is much larger than that of GSM, this system is less susceptible to fast fading.&lt;br /&gt;
&lt;br /&gt;
Furthermore, fading can also be classified as follows:&lt;br /&gt;
*&#039;&#039;&#039;Frequency-selective fading&#039;&#039;&#039;&amp;amp;nbsp; is caused by multipath propagation over paths with different delay times. As a consequence, different frequency components are attenuated differently by the power transfer function&amp;amp;nbsp; $|H_{\rm K}(f)|^2$&amp;amp;nbsp; of the channel.&lt;br /&gt;
*&#039;&#039;&#039;Time-selective fading&#039;&#039;&#039;&amp;amp;nbsp; arises from a relative motion between transmitter and receiver. Depending on the direction of motion&amp;amp;nbsp; (towards or away from the transmitter)&amp;amp;nbsp; this causes frequency shifts, which are physically described by the&amp;amp;nbsp; &#039;&#039;Doppler effect&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The fading properties&amp;amp;nbsp; &amp;quot;frequency-selective&amp;quot;&amp;amp;nbsp; and&amp;amp;nbsp; &amp;quot;time-selective&amp;quot;&amp;amp;nbsp; will now be explained in somewhat more detail; in particular, we show under which conditions which of these fading types is to be expected. At this point we also refer to the two interactive applets&lt;br /&gt;
:: [[Applets:Mehrwegeausbreitung_und_Frequenzselektivität_(Applet)|Multipath propagation and frequency selectivity]]&amp;amp;nbsp; and&lt;br /&gt;
:: [[Applets:Zur_Verdeutlichung_des_Dopplereffekts_(Applet)|Illustration of the Doppler effect]].&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Characteristics of frequency-selective fading:}$&amp;amp;nbsp; &lt;br /&gt;
*Due to the reception of different scattered components with different delay times, a&amp;amp;nbsp; &#039;&#039;&#039;multipath spread&#039;&#039;&#039;&amp;amp;nbsp; (&#039;&#039;delay spread&#039;&#039;)&amp;amp;nbsp; $T_{\rm V}$&amp;amp;nbsp; arises, defined as the difference between the maximum and the minimum delay time. Its reciprocal approximately gives the coherence bandwidth&amp;amp;nbsp; $B_{\rm K}$.&lt;br /&gt;
*One speaks of&amp;amp;nbsp; &#039;&#039;frequency-selective fading&#039;&#039;&amp;amp;nbsp; when the coherence bandwidth&amp;amp;nbsp; $B_{\rm K}$&amp;amp;nbsp; is much smaller than the signal bandwidth&amp;amp;nbsp; $B_{\rm S}$. As a consequence, different frequency components are attenuated differently by the channel, which results in linear distortions.}}&lt;br /&gt;
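The criterion just stated can be phrased as a small helper. Note that "much smaller" has no sharp definition; the factor of 10 used here to model it, as well as the numerical values in the example, are assumptions for illustration only.

```python
def coherence_bandwidth(delay_spread_s):
    """Approximate coherence bandwidth B_K as the reciprocal of the delay spread T_V."""
    return 1.0 / delay_spread_s

def is_frequency_selective(signal_bandwidth_hz, delay_spread_s, factor=10.0):
    """Channel counts as frequency-selective if B_K << B_S.

    'Much smaller' is modeled here with an assumed factor of 10.
    """
    return coherence_bandwidth(delay_spread_s) * factor < signal_bandwidth_hz

# Assumed example: a 5 µs delay spread gives B_K = 200 kHz.
# A broadband 5 MHz signal then experiences frequency-selective fading,
# while a narrowband 200 kHz signal does not:
print(is_frequency_selective(5e6, 5e-6), is_frequency_selective(200e3, 5e-6))
```

This also illustrates why wideband systems such as UMTS see frequency-selective channels far more often than narrowband systems such as GSM.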
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Characteristics of time-selective fading:}$&amp;amp;nbsp; &lt;br /&gt;
*With time-selective fading, a so-called&amp;amp;nbsp; &#039;&#039;&#039;Doppler spread&#039;&#039;&#039;&amp;amp;nbsp; $B_{\rm D}$&amp;amp;nbsp; arises, defined as the difference between the maximum and the minimum occurring Doppler frequency. &lt;br /&gt;
*Its reciprocal is called the&amp;amp;nbsp; &#039;&#039;correlation duration&#039;&#039;&amp;amp;nbsp; $T_{\rm D} = {1}/{B_{\rm D} }$. In some references this quantity is also called the&amp;amp;nbsp; &#039;&#039;coherence time&#039;&#039;. In UMTS,&amp;amp;nbsp; &#039;&#039;time-selective fading&#039;&#039;&amp;amp;nbsp; always occurs when the correlation duration&amp;amp;nbsp; $T_{\rm D}$&amp;amp;nbsp; is much smaller than the chip duration&amp;amp;nbsp; $T_{\rm C}$.}} &lt;br /&gt;
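The quantities of this definition can be computed directly. The vehicle speed and carrier frequency below are assumed example values; the factor 2 reflects that the Doppler shifts range from $-f_\text{D,max}$ to $+f_\text{D,max}$.

```python
def doppler_spread(v_mps, carrier_hz, c=3e8):
    """Doppler spread B_D = 2 * f_D,max, with maximum shift f_D,max = (v/c) * f_c.

    The factor 2 covers the two-sided range of Doppler shifts
    (motion towards vs. away from the transmitter).
    """
    return 2.0 * v_mps / c * carrier_hz

def correlation_duration(b_d_hz):
    """Correlation duration (coherence time) T_D = 1 / B_D."""
    return 1.0 / b_d_hz

# Assumed example: 120 km/h at a 2 GHz carrier.
b_d = doppler_spread(120 / 3.6, 2e9)   # ~444 Hz Doppler spread
t_d = correlation_duration(b_d)        # ~2.25 ms correlation duration
print(round(b_d, 1), round(t_d * 1e3, 2))
```

Since such a correlation duration is still far larger than the UMTS chip duration of a fraction of a microsecond, the criterion $T_{\rm D} \ll T_{\rm C}$ shows immediately that this scenario is not time-selective on the chip level.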
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
The left graph illustrates the difference between frequency-selective and non-frequency-selective fading:&lt;br /&gt;
[[File:P_ID1531__Bei_T_4_1_S7_v1.png|right|frame|The red curves illustrate frequency-selective and time-selective fading]] &lt;br /&gt;
*Shown is the power transfer function&amp;amp;nbsp; $\vert H_{\rm K}(f, t)\vert ^2$&amp;amp;nbsp; of the channel at a fixed time&amp;amp;nbsp; $t$. &lt;br /&gt;
*While the blue line marks non-frequency-selective fading at&amp;amp;nbsp; $-5 \ \rm dB$, the red curve in the left graph shows an example of frequency-selective fading.&lt;br /&gt;
*Different frequency components are attenuated to different degrees.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The right graph schematically shows time-selective fading: &lt;br /&gt;
*Plotted here is the power transfer function&amp;amp;nbsp; $\vert H_{\rm K}(f, t) \vert ^2$&amp;amp;nbsp; of the channel for a fixed frequency&amp;amp;nbsp; $f$. &lt;br /&gt;
*The blue curve applies to non-time-selective fading: &amp;amp;nbsp;at every time the signal is attenuated by&amp;amp;nbsp; $5 \ \rm dB$.}}&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
==UMTS services == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The introduction of UMTS aimed, among other things, at expanding and diversifying the mobile communication services offered. In addition to the classical services (voice transmission, messaging, etc.), a UMTS-capable terminal must support a number of more complex multimedia applications and functions.&lt;br /&gt;
&lt;br /&gt;
Depending on the application, these services can be divided into six main categories:&lt;br /&gt;
*&#039;&#039;Information&#039;&#039;&amp;amp;nbsp;: &amp;amp;nbsp; internet surfing (&#039;&#039;info–on–demand&#039;&#039;), online print media,&lt;br /&gt;
*&#039;&#039;Communication&#039;&#039;&amp;amp;nbsp;:  &amp;amp;nbsp; video and audio conferencing, fax, ISDN, messaging,&lt;br /&gt;
*&#039;&#039;Entertainment&#039;&#039;&amp;amp;nbsp;:  &amp;amp;nbsp; mobile TV, mobile radio, video–on–demand, online gaming,&lt;br /&gt;
*&#039;&#039;Business&#039;&#039;&amp;amp;nbsp;: &amp;amp;nbsp;  interactive shopping, e–commerce,&lt;br /&gt;
*&#039;&#039;Technical services&#039;&#039;&amp;amp;nbsp;:  &amp;amp;nbsp; online support, distribution services (voice and data),&lt;br /&gt;
*&#039;&#039;Medical services&#039;&#039;&amp;amp;nbsp;:  &amp;amp;nbsp; telemedicine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1484__Bei_T_4_1_S9_v1.png|right|frame|Overview and classification of the UMTS services]]&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp; In the figure, the UMTS services are classified according to different characteristics: &lt;br /&gt;
#by data rate in the vertical direction,&lt;br /&gt;
#by type of connection (bidirectional, unidirectional, broadcast) in the horizontal direction.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Notes:&#039;&#039; &lt;br /&gt;
*The height of a box indicates (roughly) the range of the required data rate.&lt;br /&gt;
*The width approximately indicates the data volume.}}&lt;br /&gt;
 	 &lt;br /&gt;
&lt;br /&gt;
==Security aspects == 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The security features in UMTS networks are based on the same principles as in GSM. However, some GSM security functions were removed, replaced or extended. As a result, &lt;br /&gt;
*the encryption algorithms became more robust, &lt;br /&gt;
*the authentication algorithms stricter, and &lt;br /&gt;
*the criteria for subscriber confidentiality tighter.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The essential security standards adopted from GSM for UMTS are:&lt;br /&gt;
#&amp;amp;nbsp;authentication of the subscriber,&lt;br /&gt;
#&amp;amp;nbsp;confidentiality of the subscriber identity,&lt;br /&gt;
#&amp;amp;nbsp;encryption of the radio interface.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition to these, further security measures were introduced in UMTS:&lt;br /&gt;
#&amp;amp;nbsp;mutual authentication, to prevent the use of false base stations,&lt;br /&gt;
#&amp;amp;nbsp;encryption of the connection between a base station and its associated control node,&lt;br /&gt;
#&amp;amp;nbsp;encryption and authentication of the security data during transmission,&lt;br /&gt;
#&amp;amp;nbsp;mechanisms for updating the security features.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The security measures listed above can be classified according to the graph. &lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1486__Bei_T_4_1_S10_v1.png|right|frame|Overview of the UMTS security measures]]&lt;br /&gt;
One distinguishes security concepts for&lt;br /&gt;
*secure&amp;amp;nbsp; &#039;&#039;&#039;network access&#039;&#039;&#039;&amp;amp;nbsp; (&#039;&#039;Network Access Security&#039;&#039;)&amp;amp;nbsp; for every subscriber,&lt;br /&gt;
&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;&#039;network domain&#039;&#039;&#039;&amp;amp;nbsp; (&#039;&#039;Network Domain Security&#039;&#039;): a secure exchange of control data between the nodes within the network domain is ensured,&lt;br /&gt;
&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;&#039;user domain&#039;&#039;&#039;&amp;amp;nbsp; (&#039;&#039;User Domain Security&#039;&#039;): access to the terminals is secured,&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;&#039;application domain&#039;&#039;&#039;&amp;amp;nbsp; (&#039;&#039;Application Domain Security&#039;&#039;): the secure exchange between applications on the subscriber terminals and the network providers is guaranteed.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The UMTS subscriber can recognize at any time which of these security measures are in operation and which of them are required for certain services. In this context one speaks of the &#039;&#039;visibility&#039;&#039; and &#039;&#039;configurability&#039;&#039; of security.&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==Exercises for the chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;	 &lt;br /&gt;
[[Aufgaben:Aufgabe_4.1:_Verschiedene_Duplexverfahren_bei_UMTS|Exercise 4.1: Different Duplex Methods for UMTS]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Aufgabe_4.2:_Grundlegendes_zum_UMTS-Funkkanal|Exercise 4.2: Basics of the UMTS Radio Channel]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Weiterentwicklungen_des_GSM&amp;diff=34987</id>
		<title>Examples of Communication Systems/Weiterentwicklungen des GSM</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Weiterentwicklungen_des_GSM&amp;diff=34987"/>
		<updated>2020-10-13T15:39:26Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Weiterentwicklungen des GSM to Examples of Communication Systems/Further Developments of the GSM&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Examples of Communication Systems/Further Developments of the GSM]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Further_Developments_of_the_GSM&amp;diff=34986</id>
		<title>Examples of Communication Systems/Further Developments of the GSM</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Further_Developments_of_the_GSM&amp;diff=34986"/>
		<updated>2020-10-13T15:39:26Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Weiterentwicklungen des GSM to Examples of Communication Systems/Further Developments of the GSM&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=GSM – Global System for Mobile Communications&lt;br /&gt;
|Vorherige Seite=Gesamtes GSM–Übertragungssystem&lt;br /&gt;
|Nächste Seite=Allgemeine Beschreibung von UMTS&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==The different generations of GSM==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
GSM was originally conceived and developed as a pan-European mobile radio network, above all for telephone calls and fax. Data transmission at a constant low rate was secondary.&lt;br /&gt;
As shown in the figure, the GSM standard was further developed in several phases, enabling new services.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1234__Bei_T_3_5_S1_v3.png|right|frame|The different generations of GSM]]&lt;br /&gt;
&lt;br /&gt;
The graph from&amp;amp;nbsp; [EVB01]&amp;lt;ref name=&#039;EVB01&#039;&amp;gt;Eberspächer, J.; Vögel, H.J.; Bettstetter, C.: &#039;&#039;Global System for Mobile Communication&#039;&#039;. 3. Auflage. Stuttgart: Teubner, 2001.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; shows the further developments of GSM:&lt;br /&gt;
*The&amp;amp;nbsp; $\rm GSM$&amp;amp;nbsp; system described so far in the third main chapter is limited to the first two generations. &amp;amp;nbsp;$\rm Phase \ 1$&amp;amp;nbsp; contained only basic teleservices and a few supplementary services, which all network operators of the time were obliged to offer at the market launch of GSM in 1991.&lt;br /&gt;
*Already the standardization of&amp;amp;nbsp; $\rm Phase \ 2$&amp;amp;nbsp; in the years 1995 to 1997 included first further developments of the GSM standard. The supplementary services known from&amp;amp;nbsp; [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_ISDN|ISDN]]&amp;amp;nbsp; were thereby gradually made available for GSM as well and complemented by some new features, for example&amp;amp;nbsp; &#039;&#039;call waiting&#039;&#039;&amp;amp;nbsp; or&amp;amp;nbsp; &#039;&#039;hold&#039;&#039;.&lt;br /&gt;
*In the years 1997 to 2000, new data services with higher data rates were developed, for example &lt;br /&gt;
::  [[Examples_of_Communication_Systems/Weiterentwicklungen_des_GSM#High_Speed_Circuit.E2.80.93Switched_Data_.28HSCSD.29|High Speed Circuit–Switched Data]]&amp;amp;nbsp; (HSCSD), &lt;br /&gt;
::  [[Examples_of_Communication_Systems/Weiterentwicklungen_des_GSM#General_Packet_Radio_Service_.28GPRS.29|General Packet Radio Service]]&amp;amp;nbsp; (GPRS), and &lt;br /&gt;
::  [[Examples_of_Communication_Systems/Weiterentwicklungen_des_GSM#Enhanced_Data_Rates_for_GSM_Evolution|Enhanced Data Rates for GSM Evolution]]&amp;amp;nbsp; (EDGE).&lt;br /&gt;
:These newer data services are assigned to&amp;amp;nbsp; $\rm Phase \ 2+$&amp;amp;nbsp; (or generation 2.5) and are highlighted in green in the graph.&lt;br /&gt;
*The third mobile radio generation includes, among others,&amp;amp;nbsp;  [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_UMTS|UMTS]]&amp;amp;nbsp; (&#039;&#039;Universal Mobile Telecommunications System&#039;&#039;). This standard enabled significantly higher data transmission rates than were possible with the GSM standard. It is treated in depth in the fourth main chapter of this book. In the graph above, this third-generation system is highlighted in red.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The innovations of&amp;amp;nbsp; $\rm Phase \ 2+$&amp;amp;nbsp; affect almost all aspects of GSM, from radio transmission to connection control. The new data services made possible by them are explained in more detail in the following sections.&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==High Speed Circuit–Switched Data (HSCSD)==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:P_ID1235__Bei_T_3_5_S3_v1.png|right|frame|Bundling of several time slots with HSCSD]]&lt;br /&gt;
With the GSM data transmission standard&amp;amp;nbsp; $\rm High \ Speed \ Circuit–Switched \  Data$&amp;amp;nbsp; (HSCSD), introduced in 1999, improved channel coding allowed the net data rate per connection to be increased from&amp;amp;nbsp; $9.6 \ \rm kbit/s$&amp;amp;nbsp; to&amp;amp;nbsp; $14.4 \ \rm kbit/s$, provided the transmission conditions permitted it. &lt;br /&gt;
&lt;br /&gt;
By bundling several adjacent time slots, the data rate could be increased even further. &lt;br /&gt;
&lt;br /&gt;
The data rate depends on how many channels the network operator provides for bundling and on how many channels the HSCSD mobile phone can process.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The graph explains the principle of bundling several time slots:&lt;br /&gt;
&lt;br /&gt;
*Each of the eight physical channels (time slots) of a frame offers at most&amp;amp;nbsp; $14.4 \ \rm kbit/s$&amp;amp;nbsp; for data communication. HSCSD enables channel bundling by combining several time slots, as is also used with ISDN. In this context one speaks of&amp;amp;nbsp; &#039;&#039;multislot capability&#039;&#039;.&lt;br /&gt;
*Combining all eight channels would thus yield&amp;amp;nbsp; $\rm 8 · 14.4 \ kbit/s = 115.2 \ kbit/s$. However, since the connection between the&amp;amp;nbsp; &#039;&#039;Base Station Controller&#039;&#039;&amp;amp;nbsp; (BSC) and the&amp;amp;nbsp; &#039;&#039;Mobile Switching Center&#039;&#039;&amp;amp;nbsp; (MSC) is limited to&amp;amp;nbsp; $64 \ \rm kbit/s$, bundling is restricted to four time slots, which yields the maximum transmission rate of&amp;amp;nbsp; $57.6 \ \rm kbit/s$.&lt;br /&gt;
*An advantage of the HSCSD technique over the packet-oriented GPRS (see next section) is the circuit-switched data transmission. This is particularly beneficial for applications that require a constant bandwidth, since the transmission channel does not have to be shared with anyone. Examples are video and image transmission.&lt;br /&gt;
*A disadvantage, however, is the higher transmission cost due to the occupation of several channels, which are then no longer available to other mobile subscribers. In a radio cell with high channel utilization the network operator may therefore block the bundling of several channels.&lt;br /&gt;
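The slot-bundling arithmetic above (8 · 14.4 = 115.2 kbit/s in theory, capped at four slots by the 64 kbit/s BSC/MSC link) can be checked with a short sketch; the function name and its parameterization are ours.

```python
def hscsd_rate_kbit(slots, per_slot_kbit=14.4, a_link_limit_kbit=64.0):
    """Net HSCSD rate for a given number of bundled time slots (1..8).

    The number of usable slots is capped so that the bundled rate still
    fits below the 64 kbit/s limit of the BSC-to-MSC connection,
    which works out to four slots in practice.
    """
    usable = min(slots, int(a_link_limit_kbit // per_slot_kbit))
    return usable * per_slot_kbit

# Even requesting all eight slots yields only the four-slot maximum:
print(round(hscsd_rate_kbit(8), 1))  # -> 57.6
```

The cap (`64 // 14.4 = 4` slots) is exactly why the text quotes 57.6 kbit/s rather than 115.2 kbit/s as the practical maximum.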
&lt;br /&gt;
	 &lt;br /&gt;
==General Packet Radio Service (GPRS)==  	 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
With the GSM extension&amp;amp;nbsp; $\rm General \ Packet \ Radio \ Service$&amp;amp;nbsp; (GPRS), packet-oriented data transmission became possible for the first time in 2000. GPRS supports a large number of protocols (Internet Protocol, X.25, Datex–P, etc.) and allows the mobile subscriber to communicate with external data networks (the internet or company intranets). GPRS was an important intermediate step in the evolution of cellular mobile radio networks towards the third generation and the mobile internet.&lt;br /&gt;
&lt;br /&gt;
A GPRS user benefits from shorter access times and a higher data rate $($up to&amp;amp;nbsp; $21.4 \ \rm kbit/s)$&amp;amp;nbsp; compared to conventional GSM&amp;amp;nbsp;  $(9.6 \ \rm kbit/s)$&amp;amp;nbsp; and HSCSD&amp;amp;nbsp; $(14.4 \ \rm kbit/s)$. With GPRS, charges are based not on the connection duration but on the amount of data actually transmitted. Therefore, unlike with HSCSD, a radio channel does not have to be permanently reserved for one user.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1236__Bei_T_3_5_S3_v3.png|right|frame|GPRS system architecture]]&lt;br /&gt;
&lt;br /&gt;
The introduction of GPRS required some modifications and additions to the GSM network, which are summarized in the graph &amp;quot;GPRS system architecture&amp;quot; from&amp;amp;nbsp; [BVE99]&amp;lt;ref name =&#039;BVE99&#039;&amp;gt;Bettstetter, C.; Vögel, H.J.; Eberspächer, J.: &#039;&#039;GSM Phase 2+ General Packet Radio Service GPRS: Architecture, Protocols, and Air Interface&#039;&#039;. In: IEEE Communications Surveys &amp;amp; Tutorials, Vol. 2 (1999) No. 3, S. 2-14.&amp;lt;/ref&amp;gt;: &lt;br /&gt;
*Blue lines represent user and signaling data. &lt;br /&gt;
*The orange dotted connections indicate signaling data.&lt;br /&gt;
*&#039;&#039;&#039;Gb&#039;&#039;&#039;, &#039;&#039;&#039;Gc&#039;&#039;&#039;, &#039;&#039;&#039;Gd&#039;&#039;&#039;, etc. denote GPRS interfaces. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To integrate GPRS into the existing GSM system architecture, the latter is extended by a new class of network nodes. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The additional GPRS components, highlighted by red circles in the graph, are explained here only briefly:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;GPRS Support Nodes&#039;&#039;&#039;&amp;amp;nbsp; (GSN) are responsible for the transmission and the routing of the data packets between the mobile stations and the external packet-switched data networks. One distinguishes between the SGSN and the GGSN, which communicate with each other via an IP-based GPRS backbone network.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Serving GPRS Support Node&#039;&#039;&#039;&amp;amp;nbsp; (SGSN) is responsible for mobility management and performs a function for the packet data services similar to that of the&amp;amp;nbsp; &#039;&#039;Mobile Switching Center&#039;&#039;&amp;amp;nbsp; (MSC) for the circuit-switched voice signals.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Gateway GPRS Support Node&#039;&#039;&#039;&amp;amp;nbsp; (GGSN) is the interface to external packet-oriented data networks. It converts the GPRS packets coming from the SGSN into the corresponding protocol&amp;amp;nbsp; (IP, X.25, ...)&amp;amp;nbsp; and sends them to the&amp;amp;nbsp; &#039;&#039;&#039;Packet Data Network&#039;&#039;&#039;&amp;amp;nbsp; (PDN).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;GPRS air interface&#039;&#039;&#039;  	&lt;br /&gt;
&lt;br /&gt;
When switched on, a GPRS mobile phone first performs the &amp;quot;cell selection&amp;quot; procedure by searching for a frequency channel with GPRS data. If such a channel is found, then depending on the device class the phone must either be set to GPRS services manually, or it can switch between GPRS and GSM automatically and dynamically. One distinguishes:&lt;br /&gt;
*Devices of class&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; can handle GPRS data services and GSM transmission services simultaneously; the channel resources are monitored in parallel, both packet-switched and circuit-switched.&lt;br /&gt;
*With class&amp;amp;nbsp; $\rm B$, the signaling channels of GSM and GPRS are monitored simultaneously as long as no service is active. Parallel GSM/GPRS operation is not possible, however.&lt;br /&gt;
*With class&amp;amp;nbsp; $\rm C$, the subscriber must decide in advance whether to use the phone for GSM or GPRS, since the signaling channels can no longer be monitored simultaneously.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To convert the GSM radio interface to packet-oriented GPRS operation, the logical channels had to be extended. Logical GPRS channels can be recognized by a leading &amp;quot;P&amp;quot;, which indicates the packet-oriented mode. For almost every logical GSM channel there is a corresponding GPRS equivalent:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;Packet Data Traffic Channel&#039;&#039;&amp;amp;nbsp; (PDTCH) is used in GPRS as the&amp;amp;nbsp; &#039;&#039;&#039;traffic channel&#039;&#039;&#039;&amp;amp;nbsp; for user data transfer. The corresponding GSM channel is called TCH.&lt;br /&gt;
*As with GSM, the&amp;amp;nbsp; &#039;&#039;&#039;signaling channels&#039;&#039;&#039;&amp;amp;nbsp; are divided into the&amp;amp;nbsp; &#039;&#039;Packet Broadcast Control Channel&#039;&#039;&amp;amp;nbsp; (PBCCH), the&amp;amp;nbsp; &#039;&#039;Packet Common Control Channel&#039;&#039;&amp;amp;nbsp; (PCCCH) and the&amp;amp;nbsp; &#039;&#039;Packet Dedicated Control Channel&#039;&#039;&amp;amp;nbsp; (PDCCH).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
GPRS enables subscribers to exchange data with public data networks and, like GSM, uses GMSK modulation and the FDMA/TDMA combination with eight time slots per TDMA frame. The following differences arise:&lt;br /&gt;
*In the GSM standard, each active mobile station is assigned exactly one time slot of a TDMA frame. This physical channel is reserved for the mobile station for the entire duration of a call, both in the uplink and in the downlink.&lt;br /&gt;
*With GPRS, up to eight time slots can be combined to increase the rate. Furthermore, uplink and downlink are assigned separately. The physical channels are reserved only for the duration of the transmission of data packets and are then released again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
&#039;&#039;&#039;GPRS channel coding&#039;&#039;&#039; 	&lt;br /&gt;
&lt;br /&gt;
In contrast to conventional GSM $($with the data rate&amp;amp;nbsp; $9.6 \ \rm kbit/s)$, four possible&amp;amp;nbsp; &#039;&#039;coding schemes&#039;&#039;&amp;amp;nbsp; are defined for GPRS, which can be used depending on the reception quality:&lt;br /&gt;
*coding scheme 1 $(\rm CS–1)$&amp;amp;nbsp; with&amp;amp;nbsp; $9.05 \ \rm kbit/s$&amp;amp;nbsp; (181 bits per 20 ms),&lt;br /&gt;
*coding scheme 2 $(\rm CS–2)$&amp;amp;nbsp; with&amp;amp;nbsp; $13.4 \ \rm kbit/s$&amp;amp;nbsp; (268 bits per 20 ms),&lt;br /&gt;
*coding scheme 3 $(\rm CS–3)$&amp;amp;nbsp; with&amp;amp;nbsp; $15.6 \ \rm kbit/s$&amp;amp;nbsp; (312 bits per 20 ms),&lt;br /&gt;
*coding scheme 4 $(\rm CS–4)$&amp;amp;nbsp; with&amp;amp;nbsp; $21.4 \ \rm kbit/s$&amp;amp;nbsp; (428 bits per 20 ms).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The smallest possible data rate is thus&amp;amp;nbsp; $9.05 \ \rm kbit/s$&amp;amp;nbsp; ($\rm CS–1$, one time slot); the maximum is currently (2007)&amp;amp;nbsp; $171.2 \ \rm kbit/s$&amp;amp;nbsp; ($\rm CS–4$, eight time slots). This theoretical speed is not reached in practice, however, since most current GPRS mobile phones only support a net data rate of at most&amp;amp;nbsp; $13.4 \ \rm kbit/s$&amp;amp;nbsp; ($\rm CS–2$). The graph and the following explanations refer to this combination.&lt;br /&gt;
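The rates quoted for the four coding schemes follow directly from the bit counts per 20 ms radio block (50 blocks per second), as this short check shows:

```python
# Payload bits per 20 ms radio block for the four GPRS coding schemes:
cs_bits_per_20ms = {"CS-1": 181, "CS-2": 268, "CS-3": 312, "CS-4": 428}

for name, bits in cs_bits_per_20ms.items():
    rate_kbit = bits * 50 / 1000   # 50 blocks of 20 ms per second
    print(f"{name}: {rate_kbit} kbit/s")

# Theoretical GPRS maximum: eight time slots, each carrying CS-4
print(8 * 428 * 50 / 1000)  # -> 171.2
```

This reproduces the 9.05, 13.4, 15.6 and 21.4 kbit/s per slot stated above, and the 171.2 kbit/s eight-slot maximum.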
&lt;br /&gt;
[[File:P_ID1237__Bei_T_3_5_S5_v1a.png|center|frame|On the channel coding in GPRS]]&lt;br /&gt;
&lt;br /&gt;
*The&amp;amp;nbsp; $268$&amp;amp;nbsp; information bits are first supplemented by six precoded bits of the&amp;amp;nbsp; &#039;&#039;Uplink State Flag&#039;&#039;&amp;amp;nbsp; (USF), &amp;amp;nbsp;$16$&amp;amp;nbsp; parity bits of the so-called&amp;amp;nbsp; &#039;&#039;Block Check Sequence&#039;&#039;&amp;amp;nbsp; (BCS) and four tail bits&amp;amp;nbsp; $(0000)$. The latter are necessary for terminating the convolutional code.&lt;br /&gt;
*For channel coding, the convolutional code of code rate&amp;amp;nbsp; $R_{\rm C} = 1/2$&amp;amp;nbsp; known from GSM is used. It doubles the total of&amp;amp;nbsp; $294$ bits&amp;amp;nbsp; to&amp;amp;nbsp; $588$&amp;amp;nbsp; bits and thus provides sufficient protection against transmission errors.&lt;br /&gt;
*Subsequently,&amp;amp;nbsp; $132$&amp;amp;nbsp; of these&amp;amp;nbsp; $588$&amp;amp;nbsp; bits are punctured, so that finally a code word of length&amp;amp;nbsp; $456$&amp;amp;nbsp; bits $($bit rate&amp;amp;nbsp; $22.8 \ \rm kbit/s)$&amp;amp;nbsp; results. This gives a resulting code rate (convolutional encoder including puncturing) of&amp;amp;nbsp; $294/456 ≈ 65\%$.&lt;br /&gt;
*After channel coding, the code words are fed to a block interleaver of depth&amp;amp;nbsp; $4$. The interleaving scheme is identical for all four coding schemes.&lt;br /&gt;
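The bit bookkeeping of the CS-2 coding chain described above can be verified in a few lines:

```python
# Bit bookkeeping of the CS-2 coding chain (numbers taken from the text):
info_bits = 268                               # payload of a 20 ms radio block
usf, bcs, tail = 6, 16, 4                     # uplink state flag, block check sequence, tail bits
encoder_input = info_bits + usf + bcs + tail  # 294 bits enter the encoder
encoded = 2 * encoder_input                   # rate-1/2 convolutional code -> 588 bits
punctured = encoded - 132                     # puncturing removes 132 bits -> 456-bit code word
code_rate = encoder_input / punctured         # overall rate incl. puncturing, ~0.645

print(encoder_input, encoded, punctured, round(code_rate, 3))
# -> 294 588 456 0.645
```

The 456-bit code word per 20 ms block corresponds exactly to the gross bit rate of 22.8 kbit/s stated above.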
 &lt;br /&gt;
 	&lt;br /&gt;
 	 &lt;br /&gt;
== Enhanced Data Rates for GSM Evolution == 	 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The last GSM extension,&amp;amp;nbsp; $\rm Enhanced \ Data \ Rates \ for \ GSM–Evolution$&amp;amp;nbsp; (EDGE), aims at increasing the data transmission rate in GSM networks and uses, in addition to&amp;amp;nbsp; &#039;&#039;Gaussian Minimum Shift Keying&#039;&#039;&amp;amp;nbsp; (GMSK),&amp;amp;nbsp; &#039;&#039;8–Phase Shift Keying&#039;&#039;&amp;amp;nbsp; (8–PSK) as an additional modulation method: &lt;br /&gt;
*With 8–PSK there are eight different symbols (with GMSK only two), which differ in their phase positions at multiples of&amp;amp;nbsp; $45^\circ$. &lt;br /&gt;
*This means that three data bits can be transmitted with each symbol, which increases the data rate by a factor of&amp;amp;nbsp; $3$&amp;amp;nbsp; compared to GPRS.&lt;br /&gt;
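The 8-PSK symbol set just described can be sketched as follows. The natural-binary bit-to-symbol assignment used here is for illustration only; the mapping actually standardized for EDGE differs, but the symbol set (eight unit-magnitude phases at multiples of 45°) is the same.

```python
import cmath, math

def psk8_symbol(bits):
    """Map a group of three bits to one of eight phases at multiples of 45 degrees.

    Uses a natural-binary index for illustration; EDGE standardizes a
    different bit-to-symbol mapping.
    """
    assert len(bits) == 3
    index = bits[0] * 4 + bits[1] * 2 + bits[2]
    return cmath.exp(1j * index * math.pi / 4)

# Three bits per symbol, hence triple the rate of a binary scheme:
s = psk8_symbol([0, 1, 1])                     # index 3 -> phase 135 degrees
print(round(math.degrees(cmath.phase(s))))     # -> 135
```

Since all eight symbols lie on the unit circle, the information is carried purely in the phase, which is what makes the factor-of-three rate gain over the two-symbol GMSK alphabet possible.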
&lt;br /&gt;
&lt;br /&gt;
With the definition of EDGE, HSCSD becomes&amp;amp;nbsp; &#039;&#039;Enhanced Circuit Switched Data&#039;&#039;&amp;amp;nbsp; (E–CSD) and GPRS becomes&amp;amp;nbsp; &#039;&#039;Enhanced GPRS&#039;&#039;&amp;amp;nbsp; (E–GPRS). However, T-Mobile is currently (2007) the only German network operator offering EDGE in its network.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1238__Bei_T_3_5_S6a_v1.png|right|frame|&#039;&#039;Normal burst&#039;&#039;&amp;amp;nbsp; of EDGE and E–GPRS]]&lt;br /&gt;
&lt;br /&gt;
The graph shows the&amp;amp;nbsp; &#039;&#039;normal burst&#039;&#039;&amp;amp;nbsp; of EDGE and E–GPRS. The following differences from the GSM&amp;amp;nbsp; &#039;&#039;normal burst&#039;&#039;&amp;amp;nbsp; can be seen:&lt;br /&gt;
*With EDGE, the&amp;amp;nbsp; &#039;&#039;normal burst&#039;&#039;&amp;amp;nbsp; consists of&amp;amp;nbsp; $468.75$&amp;amp;nbsp; bits instead of the&amp;amp;nbsp; $156.25$&amp;amp;nbsp; bits of GSM, from which the tripling of the data rate is evident.&lt;br /&gt;
*As with GSM, there are two&amp;amp;nbsp; &#039;&#039;stealing flags&#039;&#039;. Tail bits, training sequence and&amp;amp;nbsp; &#039;&#039;guard period&#039;&#039;&amp;amp;nbsp; are each tripled. This leaves&amp;amp;nbsp; $57 · 3 + 2 = 173$ bits for each data field.&lt;br /&gt;
*Thus, with E–GPRS,&amp;amp;nbsp; $346$&amp;amp;nbsp; bits of channel-coded data&amp;amp;nbsp; $($code rate&amp;amp;nbsp; $R_{\rm C} =1/2)$&amp;amp;nbsp; are transmitted per&amp;amp;nbsp; $576.9\  \rm &amp;amp;micro; s$&amp;amp;nbsp; burst, which corresponds to a net data rate of about&amp;amp;nbsp; $60 \  \rm kbit/s$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Modulation and Coding Schemes bei E–GPRS&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
With E–GPRS there are nine&amp;amp;nbsp; &#039;&#039;Modulation and Coding Schemes&#039;&#039;&amp;amp;nbsp; (MCS) selectable by the operator, which differ in the channel coding and modulation methods used.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1239__Bei_T_3_5_S6c.png|center|frame|Table of the&amp;amp;nbsp; &#039;&#039;Modulation and Coding Schemes&#039;&#039;&amp;amp;nbsp; for E–GPRS]]&lt;br /&gt;
&lt;br /&gt;
The table shows the possible schemes of E–GPRS. From it one can see:&lt;br /&gt;
*The first four schemes use, like GSM/GPRS, the modulation method GMSK with one bit of information per channel access, while&amp;amp;nbsp; $\rm MCS–5$, ... ,&amp;amp;nbsp; $\rm MCS–9$&amp;amp;nbsp; use an eight-level phase modulation (8–PSK), so that three bits are transmitted per symbol.&lt;br /&gt;
*The smaller the code rate, the larger the added redundancy and thus the data reliability. In particular, between&amp;amp;nbsp; $\rm MCS–4$&amp;amp;nbsp; $(R_{\rm C} = 1)$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm MCS–5$&amp;amp;nbsp; $(R_{\rm C} = 0.37)$&amp;amp;nbsp; the code rate decreases significantly because of the more favorable modulation method, despite a higher net data rate (last column).&lt;br /&gt;
*According to the table, the most complex mode&amp;amp;nbsp; $\rm MCS–9$&amp;amp;nbsp; offers a data rate of&amp;amp;nbsp; $59.2 \  \rm kbit/s$&amp;amp;nbsp; and theoretically allows the simultaneous occupancy of eight time slots, which would mean a maximum net data rate of&amp;amp;nbsp;  $473.6 \  \rm kbit/s$. However, this mode&amp;amp;nbsp; $($with $R_{\rm C} = 1)$&amp;amp;nbsp; is only applicable under extremely good conditions, and eight time slots are rarely available anyway.&lt;br /&gt;
*With&amp;amp;nbsp; $\rm MCS–8$&amp;amp;nbsp; and seven time slots one can already reach&amp;amp;nbsp;  $380.8 \  \rm kbit/s$&amp;amp;nbsp; and is thus in the order of magnitude of the&amp;amp;nbsp; &#039;&#039;&#039;Universal Mobile Telecommunications System&#039;&#039;&#039;&amp;amp;nbsp; (UMTS), the best-known standard of the third mobile radio generation, which currently offers&amp;amp;nbsp;  $384 \  \rm kbit/s$.&lt;br /&gt;
*EDGE uses the same frequencies as GSM, which makes this technology particularly interesting for operators with an existing GSM infrastructure who did not acquire one of the expensive UMTS licenses in 2000 and still want to offer a sufficiently high data rate.&lt;br /&gt;
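The multislot arithmetic above can be checked with a few lines; the per-slot rate of MCS-8 is inferred from the quoted 380.8 kbit/s over seven slots, the MCS-9 rate is taken from the text.

```python
# Net data rates per time slot in kbit/s, as given (or implied) in the text:
# MCS-9 is stated as 59.2 kbit/s; MCS-8's 54.4 follows from 380.8 / 7.
MCS_RATE = {"MCS-8": 54.4, "MCS-9": 59.2}

def multislot_rate(scheme, slots):
    """Maximum net data rate (kbit/s) when several time slots are bundled."""
    return MCS_RATE[scheme] * slots

peak = multislot_rate("MCS-9", 8)       # about 473.6 kbit/s with eight slots
seven_slots = multislot_rate("MCS-8", 7)  # about 380.8 kbit/s with seven slots
```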
&lt;br /&gt;
&lt;br /&gt;
The UMTS system is described in detail in the following fourth main chapter.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==Exercise for the chapter== &lt;br /&gt;
&amp;lt;br&amp;gt; 	 &lt;br /&gt;
[[Aufgaben:Aufgabe_3.8:_General_Packet_Radio_Service|Exercise 3.8: General Packet Radio Service]]&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Gesamtes_GSM%E2%80%93%C3%9Cbertragungssystem&amp;diff=34985</id>
		<title>Examples of Communication Systems/Gesamtes GSM–Übertragungssystem</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Gesamtes_GSM%E2%80%93%C3%9Cbertragungssystem&amp;diff=34985"/>
		<updated>2020-10-13T15:39:12Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Gesamtes GSM–Übertragungssystem to Examples of Communication Systems/Entire GSM Transmission System&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Examples of Communication Systems/Entire GSM Transmission System]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Entire_GSM_Transmission_System&amp;diff=34984</id>
		<title>Examples of Communication Systems/Entire GSM Transmission System</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Entire_GSM_Transmission_System&amp;diff=34984"/>
		<updated>2020-10-13T15:39:12Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Gesamtes GSM–Übertragungssystem to Examples of Communication Systems/Entire GSM Transmission System&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=GSM – Global System for Mobile Communications&lt;br /&gt;
|Vorherige Seite=Sprachcodierung&lt;br /&gt;
|Nächste Seite=Weiterentwicklungen des GSM&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Components of voice and data transmission==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Below you see the block diagram of the transmitter side of the GSM transmission system, which is suitable &lt;br /&gt;
*both for digitized voice signals &amp;amp;nbsp;$($sampling rate:&amp;amp;nbsp; $8 \ \rm kHz$,&amp;amp;nbsp; quantization:&amp;amp;nbsp; $13$ bits &amp;amp;nbsp; ⇒ &amp;amp;nbsp; data rate:&amp;amp;nbsp; $104 \ \rm kbit/s)$ &lt;br /&gt;
*and for&amp;amp;nbsp; $9.6 \ \rm kbit/s$&amp;amp;nbsp; data signals. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The components for voice transmission are shown in blue, those for data in red, and common blocks in green.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1216__Bei_T_3_4_S1_v1.png|center|frame|Components of voice and data transmission in GSM]]&lt;br /&gt;
&lt;br /&gt;
Here is a brief description of the individual components:&lt;br /&gt;
*Voice signals are compressed by the speech coding from&amp;amp;nbsp; $104 \ \rm kbit/s$&amp;amp;nbsp; to&amp;amp;nbsp; $13 \ \rm kbit/s$&amp;amp;nbsp; – i.e. by a factor of&amp;amp;nbsp; $8$. The bit rate given in the graphic applies to the full-rate codec, which delivers exactly&amp;amp;nbsp; $260$&amp;amp;nbsp; bits per speech frame $($duration&amp;amp;nbsp; $T_{\rm R} = 20\ \rm  ms)$.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;AMR codec&#039;&#039;&#039;&amp;amp;nbsp; delivers&amp;amp;nbsp; $12.2 \ \rm kbit/s$&amp;amp;nbsp; in its highest mode&amp;amp;nbsp; $(244$&amp;amp;nbsp; bits per speech frame$)$. However, the speech codec must additionally transmit information about the current mode, so that the data rate before channel coding is likewise&amp;amp;nbsp; $13 \ \rm kbit/s$.&lt;br /&gt;
*The task of the&amp;amp;nbsp; &#039;&#039;&#039;voice activity detection&#039;&#039;&#039;&amp;amp;nbsp; (drawn dashed) is to decide whether the current speech frame actually contains a voice signal or only a speech pause, during which the power of the transmit amplifier should be reduced.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;channel coding&#039;&#039;&#039;&amp;amp;nbsp; adds redundancy again to enable error correction at the receiver. The channel coder outputs&amp;amp;nbsp; $456$&amp;amp;nbsp; bits per speech frame, which results in the data rate&amp;amp;nbsp; $22.8 \ \rm kbit/s$. The more important bits are given special protection.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;interleaver&#039;&#039;&#039;&amp;amp;nbsp; scrambles the resulting bit sequence to reduce the influence of burst errors. The&amp;amp;nbsp; $456$&amp;amp;nbsp; input bits are distributed over four time frames of&amp;amp;nbsp; $114$&amp;amp;nbsp; bits each. Two consecutive bits are thus always transmitted in two different bursts.&lt;br /&gt;
*A&amp;amp;nbsp; &#039;&#039;&#039;data channel&#039;&#039;&#039;&amp;amp;nbsp; – marked red in the figure – differs from a voice channel (marked blue) only in the different input rate&amp;amp;nbsp; $(9.6 \ \rm kbit/s$&amp;amp;nbsp; instead of&amp;amp;nbsp; $104 \ \rm kbit/s)$&amp;amp;nbsp; and in the use of a second, outer channel coder instead of the speech coder.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The components highlighted in green apply equally to voice and data transmission. The first common system component in the block diagram of the GSM transmitter is the&amp;amp;nbsp; &#039;&#039;&#039;encryption&#039;&#039;&#039;, which is intended to prevent unauthorized parties from gaining access to the data.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are two fundamentally different encryption methods:&lt;br /&gt;
*&#039;&#039;&#039;Symmetric encryption&#039;&#039;&#039;:&amp;amp;nbsp; This uses a single secret key, which serves both for encrypting the messages at the transmitter and for decrypting them at the receiver. The key must be generated before the communication and exchanged between the communication partners over a secure channel. The advantage of this method, used in conventional GSM, is that it works very fast.&lt;br /&gt;
*&#039;&#039;&#039;Asymmetric encryption&#039;&#039;&#039;:&amp;amp;nbsp; This method uses two independent but matching asymmetric keys. It is not possible to compute one key from the other. The&amp;amp;nbsp; „public key”&amp;amp;nbsp; is publicly accessible and serves for encryption. The&amp;amp;nbsp; „private key”&amp;amp;nbsp; is secret and is used for decryption. In contrast to symmetric methods, asymmetric methods are considerably slower, but in return offer higher security.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The second green block is the&amp;amp;nbsp; &#039;&#039;&#039;burst formation&#039;&#039;&#039;, of which there are several burst types. In the&amp;amp;nbsp; &#039;&#039;normal burst&#039;&#039;, the&amp;amp;nbsp; $114$&amp;amp;nbsp; coded, interleaved and encrypted bits are mapped to&amp;amp;nbsp; $156.25$&amp;amp;nbsp; bits by adding the&amp;amp;nbsp; &#039;&#039;guard period&#039;&#039;, signaling bits, etc. These are transmitted within a time slot of duration&amp;amp;nbsp; $T_{\rm Z} = 576.9 \ \rm &amp;amp;micro; s$&amp;amp;nbsp; using the&amp;amp;nbsp; &#039;&#039;modulation method&#039;&#039;&amp;amp;nbsp; &amp;amp;bdquo;GMSK&amp;amp;rdquo;. This results in the gross data rate&amp;amp;nbsp; $270.833  \ \rm kbit/s$.&lt;br /&gt;
&lt;br /&gt;
At the receiver, the following blocks exist in reverse order: &lt;br /&gt;
*demodulation, &lt;br /&gt;
*burst decomposition, &lt;br /&gt;
*decryption, &lt;br /&gt;
*de-interleaving, &lt;br /&gt;
*channel decoding,&lt;br /&gt;
*speech decoding. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
On the next pages, all blocks of the above transmission scheme are presented in detail.&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
==Coding of voice signals==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Uncoded radio data transmission leads to bit error rates in the percent range. With&amp;amp;nbsp; [[Kanalcodierung]]&amp;amp;nbsp; (&#039;&#039;channel coding&#039;&#039;), however, some transmission errors can be detected or even corrected at the receiver. The bit error rate can thus be reduced to values smaller than&amp;amp;nbsp;  $10^{-5}$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1219__Bei_T_3_4_S2_v1.png|right|frame|On the coding of voice signals in GSM]]&lt;br /&gt;
&lt;br /&gt;
First we consider the GSM channel coding for the voice channels, assuming the&amp;amp;nbsp; [[Examples_of_Communication_Systems/Sprachcodierung#GSM_Fullrate_Vocoder_.E2.80.93_Vollraten.E2.80.93Codec|full-rate codec]]&amp;amp;nbsp; as the speech coder. The channel coding of a speech frame of&amp;amp;nbsp; $20\ \rm  ms$&amp;amp;nbsp; duration takes place in four consecutive steps according to the graphic.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
From the description in the chapter&amp;amp;nbsp; [[Examples_of_Communication_Systems/Sprachcodierung|Voice Coding]]&amp;amp;nbsp; it can be seen that not all&amp;amp;nbsp; $260$&amp;amp;nbsp; bits have the same influence on the subjectively perceived speech quality. &lt;br /&gt;
*Therefore the data are divided into three classes according to their importance: &amp;amp;nbsp; The&amp;amp;nbsp; $50$&amp;amp;nbsp; most important bits form&amp;amp;nbsp; &#039;&#039;&#039;class 1a&#039;&#039;&#039;, another&amp;amp;nbsp; $132$&amp;amp;nbsp; are assigned to&amp;amp;nbsp; &#039;&#039;&#039;class 1b&#039;&#039;&#039;, and the remaining&amp;amp;nbsp; $78$&amp;amp;nbsp; bits make up the rather unimportant&amp;amp;nbsp; &#039;&#039;&#039;class 2&#039;&#039;&#039;.&lt;br /&gt;
*In the next step, a three-bit&amp;amp;nbsp; [[Examples_of_Communication_Systems/Verfahren_zur_Senkung_der_Bitfehlerrate_bei_DSL#Cyclic_Redundancy_Check|Cyclic Redundancy Check]]&amp;amp;nbsp; (CRC) checksum is computed for the&amp;amp;nbsp; $50$&amp;amp;nbsp; particularly important class 1a bits using a feedback shift register. The generator polynomial for this CRC check is:&lt;br /&gt;
:$$G_{\rm CRC}(D) = D^3 + D +1\hspace{0.05cm}. $$&lt;br /&gt;
 &lt;br /&gt;
*Subsequently, four (yellow)&amp;amp;nbsp; &#039;&#039;tail bits&#039;&#039;&amp;amp;nbsp; „0000”&amp;amp;nbsp; are appended to the total of&amp;amp;nbsp; $185$&amp;amp;nbsp; bits of classes 1a and 1b, including the three (red) CRC parity bits. These four bits initialize the four memory registers of the subsequent convolutional coder with&amp;amp;nbsp; $0$&amp;amp;nbsp; each, so that a defined state can be assumed for every speech frame.&lt;br /&gt;
*The convolutional code with code rate&amp;amp;nbsp; $R_{\rm C} = 1/2$&amp;amp;nbsp; doubles these&amp;amp;nbsp; $189$&amp;amp;nbsp; most important bits to&amp;amp;nbsp; $378$&amp;amp;nbsp; bits and thus protects them significantly against transmission errors. Afterwards, the&amp;amp;nbsp; $78$&amp;amp;nbsp; bits of the less important class 2 are appended without protection.&lt;br /&gt;
*In this way, exactly&amp;amp;nbsp; $456$&amp;amp;nbsp; bits result per&amp;amp;nbsp; $20 \ \rm ms$&amp;amp;nbsp; speech frame after channel coding. This corresponds to a (coded) data rate of&amp;amp;nbsp; $22.8\ \rm  kbit/s$&amp;amp;nbsp; compared to&amp;amp;nbsp; $13\ \rm  kbit/s$&amp;amp;nbsp; after speech coding. The effective channel coding rate is thus&amp;amp;nbsp; $260/456 = 57\%$.&lt;br /&gt;
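The CRC step above can be sketched with a small shift-register division for $G_{\rm CRC}(D) = D^3 + D + 1$; the all-zero register initialization and most-significant-bit-first processing order are simplifying assumptions.

```python
def crc3(bits):
    """CRC parity bits for the generator polynomial G(D) = D^3 + D + 1.

    A 3-bit feedback shift register divides message(D) * D^3 by G(D);
    the final register content is the parity (remainder).
    Zero initialization and MSB-first order are simplifying assumptions."""
    reg = 0
    for b in bits:
        fb = b ^ ((reg >> 2) & 1)   # feedback: input XOR the D^3 tap
        reg = (reg << 1) & 0b111    # shift the 3-bit register
        if fb:
            reg ^= 0b011            # XOR in the lower terms D + 1 of G(D)
    return [(reg >> 2) & 1, (reg >> 1) & 1, reg & 1]

# Appending the parity to the message makes the remainder zero,
# which is exactly what the receiver-side CRC test exploits.
class_1a = [1, 0, 1, 1, 0] * 10          # 50 class-1a bits (dummy data)
parity = crc3(class_1a)                  # three parity bits
```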
&lt;br /&gt;
 	 &lt;br /&gt;
==Interleaving of voice signals==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The result of the convolutional decoding depends not only on the frequency of the transmission errors, but also on their distribution. To achieve good correction results, the channel should have no memory, but should deliver statistically independent bit errors as far as possible.&lt;br /&gt;
&lt;br /&gt;
In mobile radio systems, however, transmission errors usually occur in blocks&amp;amp;nbsp; (&#039;&#039;error bursts&#039;&#039;). By using the interleaving technique, such burst errors are distributed evenly over several bursts and their effects are thus mitigated.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1226__Bei_T_3_4_S2b_v1.png|center|frame|Interleaving of GSM voice signals]]&lt;br /&gt;
&lt;br /&gt;
For a voice channel, the interleaver works as follows:&lt;br /&gt;
*The&amp;amp;nbsp; $456$&amp;amp;nbsp; input bits per speech frame are divided into four blocks of&amp;amp;nbsp; $114$&amp;amp;nbsp; bits each according to a fixed algorithm. For the&amp;amp;nbsp; $n$–th speech frame we denote these by&amp;amp;nbsp; $A_n$,&amp;amp;nbsp; $B_n$,&amp;amp;nbsp; $C_n$&amp;amp;nbsp; and&amp;amp;nbsp; $D_n$. The index&amp;amp;nbsp; $n-1$&amp;amp;nbsp; denotes the previous frame and&amp;amp;nbsp; $n+1$&amp;amp;nbsp; the following one.&lt;br /&gt;
*The block&amp;amp;nbsp; $A_n$&amp;amp;nbsp; is further divided into two sub-blocks&amp;amp;nbsp; $A_{{\rm g},\hspace{0.05cm}n}$&amp;amp;nbsp; and&amp;amp;nbsp; $A_{{\rm u},\hspace{0.05cm}n}$&amp;amp;nbsp; of&amp;amp;nbsp; $57$&amp;amp;nbsp; bits each, where&amp;amp;nbsp; $A_{{\rm g},\hspace{0.05cm}n}$&amp;amp;nbsp; denotes only the even bit positions and&amp;amp;nbsp; $A_{{\rm u},\hspace{0.05cm}n}$&amp;amp;nbsp; the odd bit positions of&amp;amp;nbsp; $A_n$. In the graphic,&amp;amp;nbsp; $A_{{\rm g},\hspace{0.05cm}n}$&amp;amp;nbsp; and&amp;amp;nbsp; $A_{{\rm u},\hspace{0.05cm}n}$&amp;amp;nbsp; can be recognized by the red and blue backgrounds, respectively.&lt;br /&gt;
*The sub-block&amp;amp;nbsp; $A_{{\rm g},\hspace{0.05cm}n}$&amp;amp;nbsp; of the&amp;amp;nbsp; $n$–th speech frame is combined with the block&amp;amp;nbsp; $A_{{\rm u},\hspace{0.05cm}n-1}$&amp;amp;nbsp; of the previous frame and yields the&amp;amp;nbsp; $114$&amp;amp;nbsp; payload bits of a&amp;amp;nbsp; &#039;&#039;normal burst&#039;&#039;:&amp;amp;nbsp; $\left (A_{{\rm g},\hspace{0.05cm}n}, A_{{\rm u},\hspace{0.05cm}n-1}\right )$.&amp;amp;nbsp; The same applies to the next three bursts:&amp;amp;nbsp; $\left (B_{{\rm g},\hspace{0.05cm}n}, B_{{\rm u},\hspace{0.05cm}n-1}\right )$,&amp;amp;nbsp; $\left (C_{{\rm g},\hspace{0.05cm}n}, C_{{\rm u},\hspace{0.05cm}n-1}\right )$,&amp;amp;nbsp; $\left (D_{{\rm g},\hspace{0.05cm}n}, D_{{\rm u},\hspace{0.05cm}n-1}\right )$.&lt;br /&gt;
*In the same way, the odd sub-blocks of the&amp;amp;nbsp; $n$–th speech frame are interleaved with the even sub-blocks of the following frame:&amp;amp;nbsp; $\left (A_{{\rm g},\hspace{0.05cm}n+1}, A_{{\rm u},\hspace{0.05cm}n}\right )$, ... ,&amp;amp;nbsp; $\left (D_{{\rm g},\hspace{0.05cm}n+1}, D_{{\rm u},\hspace{0.05cm}n}\right )$.&lt;br /&gt;
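The scheme above can be sketched in a few lines; the exact in-block bit permutation of GSM is omitted, so this is a simplified model of the even/odd pairing across frames.

```python
def interleave(frame_prev, frame_curr):
    """Simplified sketch of GSM block-diagonal interleaving.

    Each 456-bit speech frame is split into four 114-bit blocks A..D.
    The even-position half (57 bits) of the current frame's block is
    combined with the odd-position half of the previous frame's block
    to form the 114 payload bits of one normal burst.  The exact
    position permutation of the standard is omitted here."""
    assert len(frame_prev) == len(frame_curr) == 456
    bursts = []
    for k in range(4):                           # blocks A, B, C, D
        block_curr = frame_curr[114 * k: 114 * (k + 1)]
        block_prev = frame_prev[114 * k: 114 * (k + 1)]
        even = block_curr[0::2]                  # 57 even positions
        odd = block_prev[1::2]                   # 57 odd positions
        bursts.append(even + odd)                # 114 bits per burst
    return bursts
```

Because each burst mixes two different frames, a burst error hits at most half of any one frame's block, which is what makes the de-interleaved errors look isolated.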
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp; The type of scrambling described here is called&amp;amp;nbsp; &#039;&#039;block-diagonal interleaving&#039;&#039;, here specifically of degree&amp;amp;nbsp; $8$: &lt;br /&gt;
*It reduces the susceptibility to burst errors. &lt;br /&gt;
*Two consecutive bits of a data block are never sent directly one after the other. &lt;br /&gt;
*Multi-bit errors appear isolated after the de-interleaver and can thus be corrected more effectively.}}&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==Coding and interleaving of data signals == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For GSM data transmission, each subscriber has only a net data rate of&amp;amp;nbsp; $9.6\ \rm  kbit/s$&amp;amp;nbsp; available. Two methods are used for error protection:&lt;br /&gt;
*&#039;&#039;&#039;Forward error correction&#039;&#039;&#039;&amp;amp;nbsp; (FEC)&amp;amp;nbsp; is realized on the physical layer by applying convolutional codes.&lt;br /&gt;
*&#039;&#039;&#039;Automatic repeat request&#039;&#039;&#039;&amp;amp;nbsp; (ARQ); here, defective and uncorrectable packets are requested again on the data link layer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1221__Bei_T_3_4_S4_v1.png|center|frame|Illustration of coding and interleaving of data signals]]&lt;br /&gt;
&lt;br /&gt;
The graphic illustrates channel coding and interleaving for the data channel with&amp;amp;nbsp; $9.6\ \rm  kbit/s$, which, in contrast to the channel coding of the voice channel&amp;amp;nbsp; $($with bit error rate&amp;amp;nbsp; $10^{-5}$ ... $10^{-6})$, allows a nearly error-free reconstruction of the data:&lt;br /&gt;
&lt;br /&gt;
*The data bit rate of&amp;amp;nbsp; $9.6\ \rm  kbit/s$&amp;amp;nbsp; is first increased by&amp;amp;nbsp; $25\%$&amp;amp;nbsp; to&amp;amp;nbsp; $12\ \rm  kbit/s$&amp;amp;nbsp; in the&amp;amp;nbsp; &#039;&#039;terminal equipment&#039;&#039;&amp;amp;nbsp; of the mobile station by a non-GSM-specific channel coding, in order to enable error detection in circuit-switched networks.&lt;br /&gt;
*In data transmission all bits are of equal importance, so that, in contrast to the coding of the voice channel, there are no classes. The&amp;amp;nbsp; $240$&amp;amp;nbsp; bits per&amp;amp;nbsp; $20 \ \rm ms$&amp;amp;nbsp; time frame are combined with four tail bits&amp;amp;nbsp; $0000$&amp;amp;nbsp; into a single data frame.&lt;br /&gt;
*These&amp;amp;nbsp; $244$&amp;amp;nbsp; bits are, as with the voice channels, doubled to&amp;amp;nbsp; $488$&amp;amp;nbsp; bits by a convolutional coder of rate&amp;amp;nbsp; $1/2$. For each incoming bit, two code symbols are generated, for example according to the generator polynomials&amp;amp;nbsp;  $G_0(D) = 1 + D^3 + D^4$&amp;amp;nbsp; and&amp;amp;nbsp; $G_1(D) = 1 + D + D^3 + D^4$:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1227__Bei_T_3_4_S4b_v1.png|center|frame|Rate&amp;amp;nbsp; $1/2$&amp;amp;nbsp; convolutional coder used in GSM]]&lt;br /&gt;
&lt;br /&gt;
*The subsequent interleaver – just like a &amp;amp;bdquo;voice interleaver&amp;amp;rdquo; – expects only&amp;amp;nbsp; $456$&amp;amp;nbsp; bits per frame as input. Therefore,&amp;amp;nbsp; $32$&amp;amp;nbsp; bits at the positions&amp;amp;nbsp; $15 · j - 4 \ ( j = 1$, ... ,$ 32 )$&amp;amp;nbsp; are removed from the&amp;amp;nbsp; $488$&amp;amp;nbsp; bits at the output of the convolutional coder („puncturing”).&lt;br /&gt;
*Since data transmission is less time-critical than voice transmission, a higher interleaving degree is chosen here. The&amp;amp;nbsp; $456$&amp;amp;nbsp; bits are distributed over up to&amp;amp;nbsp; $24$&amp;amp;nbsp; interleaver blocks of&amp;amp;nbsp; $19$&amp;amp;nbsp; bits each, which would not be possible for voice services for reasons of real-time transmission.&lt;br /&gt;
*Afterwards the&amp;amp;nbsp; $456$&amp;amp;nbsp; bits are divided among four consecutive&amp;amp;nbsp; &#039;&#039;normal bursts&#039;&#039;&amp;amp;nbsp; and sent. When packing into the bursts, groupings of even and odd bits are again formed, similar to the interleaving in the voice channel.&lt;br /&gt;
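The encoding and puncturing steps above can be sketched as follows; the shift-register layout and the 1-based position counting for puncturing are assumptions consistent with the text, not a verified reproduction of the standard.

```python
def conv_encode(info_bits):
    """Rate-1/2 convolutional encoder with the generator polynomials
    from the text: G0(D) = 1 + D^3 + D^4, G1(D) = 1 + D + D^3 + D^4."""
    state = [0, 0, 0, 0]                         # delays D, D^2, D^3, D^4
    out = []
    for b in info_bits:
        c0 = b ^ state[2] ^ state[3]             # taps 1, D^3, D^4
        c1 = b ^ state[0] ^ state[2] ^ state[3]  # taps 1, D, D^3, D^4
        out += [c0, c1]
        state = [b] + state[:3]                  # shift the register
    return out

def puncture(coded):
    """Remove the 32 bits at positions 15*j - 4 (j = 1 ... 32), reducing
    488 coded bits to the 456 bits the interleaver expects.  The 1-based
    position counting is an assumption."""
    drop = {15 * j - 4 for j in range(1, 33)}
    return [c for i, c in enumerate(coded, start=1) if i not in drop]

# 240 data bits + 4 tail bits -> 488 coded bits -> 456 bits after puncturing
coded = conv_encode([0] * 240 + [0, 0, 0, 0])
punctured = puncture(coded)
```

The four zero tail bits flush the register, so every frame's trellis ends in the all-zero state, which is what the decoder relies on.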
 	 &lt;br /&gt;
&lt;br /&gt;
==Receiver side of the GSM&amp;amp;ndash;link – Decoding == 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The GSM receiver (highlighted in yellow) comprises the GMSK demodulation, the burst decomposition, the decryption, the de-interleaving as well as the channel and speech decoding.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1223__Bei_T_3_4_S6_v1.png|center|frame|Receiver-side data processing in GSM]]&lt;br /&gt;
&lt;br /&gt;
Regarding the last two blocks in the above figure, note:&lt;br /&gt;
*The decoding method is not prescribed by the GSM specification but is left to the individual network operators. The performance depends on the algorithm used for error correction.&lt;br /&gt;
*For example, in the decoding method&amp;amp;nbsp; &#039;&#039;Maximum Likelihood Sequence Estimation&#039;&#039;&amp;amp;nbsp; (MLSE), the most probable bit sequence is determined using the Viterbi algorithm or a MAP receiver&amp;amp;nbsp; (&#039;&#039;Maximum A-posteriori Probability&#039;&#039;).&lt;br /&gt;
*After the error correction, the&amp;amp;nbsp; &#039;&#039;Cyclic Redundancy Check&#039;&#039;&amp;amp;nbsp; (CRC) is performed, where for the full-rate codec the degree of the CRC generator polynomial used is&amp;amp;nbsp; $G= 3$. Thus all error patterns up to weight&amp;amp;nbsp; $3$&amp;amp;nbsp; and all burst errors up to length $4$ are detected.&lt;br /&gt;
*Based on the CRC, the usability of each speech frame is decided. If the test result is positive, the voice signals are synthesized in the subsequent speech decoder from the speech parameters&amp;amp;nbsp; $(260$ bits per frame$)$.&lt;br /&gt;
*If frames are lost, the parameters of earlier frames recognized as correct are used for interpolation &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; &#039;&#039;error concealment&#039;&#039;. If several incorrect speech frames occur in a row, the output power is continuously reduced down to muting.&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
== Exercise for the chapter==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:3.7_Komponenten_des_GSM–Systems|Exercise 3.7: Components of the GSM system]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Sprachcodierung&amp;diff=34983</id>
		<title>Examples of Communication Systems/Sprachcodierung</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Sprachcodierung&amp;diff=34983"/>
		<updated>2020-10-13T15:38:57Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Sprachcodierung to Examples of Communication Systems/Voice Coding&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Examples of Communication Systems/Voice Coding]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Speech_Coding&amp;diff=34982</id>
		<title>Examples of Communication Systems/Speech Coding</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Speech_Coding&amp;diff=34982"/>
		<updated>2020-10-13T15:38:57Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Sprachcodierung to Examples of Communication Systems/Voice Coding&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=GSM – Global System for Mobile Communications&lt;br /&gt;
|Vorherige Seite=Funkschnittstelle&lt;br /&gt;
|Nächste Seite=Gesamtes GSM–Übertragungssystem&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Different speech coding methods==  	 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Each GSM subscriber has at most the net data rate&amp;amp;nbsp; $\text{22.8 kbit/s}$&amp;amp;nbsp; available, while the ISDN fixed network works with a data rate of&amp;amp;nbsp; $\text{64 kbit/s}$&amp;amp;nbsp; (with &amp;amp;nbsp;$8$&amp;amp;nbsp; bit quantization)&amp;amp;nbsp; or&amp;amp;nbsp; $\text{104 kbit/s}$&amp;amp;nbsp; (with&amp;amp;nbsp;$13$&amp;amp;nbsp;bit quantization). The task of speech coding in GSM is to limit the amount of data for voice signal transmission to&amp;amp;nbsp; $\text{22.8 kbit/s}$&amp;amp;nbsp; while reproducing the voice signal as well as possible at the receiver side. The functions of the GSM coder and the GSM decoder are usually combined in one functional unit, referred to as the &amp;amp;bdquo;codec&amp;amp;rdquo;.&lt;br /&gt;
&lt;br /&gt;
Various signal processing methods are used for speech coding and decoding:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;GSM Fullrate Vocoder&#039;&#039;&#039;&amp;amp;nbsp; (full-rate speech codec)&amp;amp;nbsp; was standardized in 1991 from a combination of three compression methods for the GSM radio channel. It is based on&amp;amp;nbsp; &#039;&#039;Linear Predictive Coding&#039;&#039;&amp;amp;nbsp; (LPC) in combination with&amp;amp;nbsp; &#039;&#039;Long Term Prediction&#039;&#039;&amp;amp;nbsp; (LTP) and&amp;amp;nbsp; &#039;&#039;Regular Pulse Excitation&#039;&#039;&amp;amp;nbsp; (RPE).&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;GSM Halfrate Vocoder&#039;&#039;&#039;&amp;amp;nbsp; (half-rate speech codec)&amp;amp;nbsp; was introduced in 1994 and offers the possibility of transmitting speech at nearly the same quality in half a traffic channel $($data rate&amp;amp;nbsp; $\text{11.4 kbit/s})$.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Enhanced Fullrate Vocoder&#039;&#039;&#039;&amp;amp;nbsp; (EFR codec) was standardized and implemented in 1995, originally for the North American DCS1900 network. The EFR codec offers better speech quality than the conventional full-rate codec.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Adaptive Multi–Rate Codec&#039;&#039;&#039;&amp;amp;nbsp; (AMR codec) is the newest speech codec for GSM. It was standardized in 1997 and in 1999 was also prescribed by the&amp;amp;nbsp; &#039;&#039;Third Generation Partnership Project&#039;&#039;&amp;amp;nbsp; (3GPP) as the standard speech codec for third-generation mobile radio systems such as UMTS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can assess the quality of these speech coding methods for speech and music with the interactive applet&amp;amp;nbsp; [[Applets:Qualität_verschiedener_Sprach–Codecs_(Applet)|Qualität verschiedener Sprach–Codecs ]]&amp;amp;nbsp; (quality of different speech codecs). This audio animation also takes into account the&amp;amp;nbsp; [https://de.wikipedia.org/wiki/Adaptive_Multi-Rate wideband AMR], which was developed and standardized for UMTS in 2007. &lt;br /&gt;
&lt;br /&gt;
In contrast to conventional AMR, where the voice signal is band-limited to the frequency range from&amp;amp;nbsp; $\text{300 Hz}$&amp;amp;nbsp;  to &amp;amp;nbsp; $\text{3.4 kHz}$, WB–AMR assumes a wideband signal &amp;amp;nbsp; $\text{(50 Hz – 7 kHz)}$. It is therefore also suitable for music signals.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==GSM Fullrate Vocoder – Full-Rate Codec==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
In the&amp;amp;nbsp; &#039;&#039;&#039;GSM full-rate codec&#039;&#039;&#039;&amp;amp;nbsp; (&#039;&#039;Full Rate Vocoder&#039;&#039;), the analog voice signal in the frequency range between&amp;amp;nbsp; $300 \ \rm Hz$&amp;amp;nbsp; and&amp;amp;nbsp; $3400 \ \rm Hz$&amp;amp;nbsp; is first sampled at&amp;amp;nbsp; $8 \ \rm kHz$&amp;amp;nbsp; and then linearly quantized with&amp;amp;nbsp; $13$&amp;amp;nbsp; bits (&#039;&#039;&#039;A/D conversion&#039;&#039;&#039;), which results in a data rate of&amp;amp;nbsp; $104 \ \rm kbit/s$. &lt;br /&gt;
[[File:P_ID1203__Bei_T_3_2_S2_v3.png|right|frame|LPC, LTP and RPE parameters of the GSM full-rate codec]]&lt;br /&gt;
With this method, speech coding takes place in four steps:&lt;br /&gt;
*the preprocessing,&lt;br /&gt;
*the setting of the short-term analysis filter&amp;amp;nbsp; (&#039;&#039;Linear Predictive Coding&#039;&#039;, LPC),&lt;br /&gt;
*the control of the long-term analysis filter&amp;amp;nbsp; (&#039;&#039;Long Term Prediction&#039;&#039;, LTP), and&lt;br /&gt;
*the coding of the residual signal by a sequence of pulses&amp;amp;nbsp; (&#039;&#039;Regular Pulse Excitation&#039;&#039;, RPE).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the graphic,&amp;amp;nbsp; $s(n)$&amp;amp;nbsp; denotes the voice signal, sampled at intervals of&amp;amp;nbsp; $T_{\rm A} = 125\ \rm &amp;amp;micro; s$&amp;amp;nbsp; and quantized, after the continuously performed preprocessing, in which&lt;br /&gt;
*the digitized microphone signal is freed from any DC component (offset) in order to avoid a disturbing whistling tone of about&amp;amp;nbsp; $2.6 \ \rm  kHz$&amp;amp;nbsp; during decoding when the higher frequency components are recovered, and&lt;br /&gt;
*additionally the higher spectral components of&amp;amp;nbsp; $s(n)$&amp;amp;nbsp; are boosted in order to improve the computational accuracy and effectiveness of the subsequent LPC analysis.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The table shows the&amp;amp;nbsp; $76$&amp;amp;nbsp; parameters&amp;amp;nbsp; $(260$ bits$)$&amp;amp;nbsp; of the functional units LPC, LTP and RPE. The meaning of the individual quantities is described in detail on the following pages.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1218__Bei_T_3_2_Sb2_v3.png|center|frame|Table of the full rate codec parameters]]&lt;br /&gt;
&lt;br /&gt;
All processing steps (LPC, LTP, RPE) are carried out in blocks of&amp;amp;nbsp; $20 \ \rm ms$&amp;amp;nbsp; duration over&amp;amp;nbsp; $160$&amp;amp;nbsp; samples of the preprocessed speech signal, which are referred to as a&amp;amp;nbsp; &#039;&#039;&#039;GSM speech frame&#039;&#039;&#039;. &lt;br /&gt;
*The full rate codec generates a total of&amp;amp;nbsp; $260$ bits&amp;amp;nbsp; per speech frame, which corresponds to a data rate of&amp;amp;nbsp; $13  \ \rm kbit/s$. &lt;br /&gt;
*This amounts to compressing the speech signal by a factor of&amp;amp;nbsp; $8$&amp;amp;nbsp; $(104  \ \rm kbit/s$ relative to $13  \ \rm kbit/s)$.&lt;br /&gt;
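The frame and rate figures above can be checked with a short calculation (a sketch; the split of the 260 bits into 36 LPC, 36 LTP and 188 RPE bits follows from the parameter table):

```python
# GSM full rate codec: bit budget per 20 ms speech frame
FRAME_MS = 20          # frame duration in ms
SAMPLE_RATE = 8000     # sampling rate in Hz
QUANT_BITS = 13        # linear quantization per sample

bits_lpc, bits_ltp, bits_rpe = 36, 36, 188        # per frame (see table)
bits_per_frame = bits_lpc + bits_ltp + bits_rpe   # 260

coded_rate = bits_per_frame * 1000 / FRAME_MS     # bit/s after coding
pcm_rate = SAMPLE_RATE * QUANT_BITS               # bit/s of the raw PCM signal

print(bits_per_frame)            # 260
print(coded_rate)                # 13000.0  ->  13 kbit/s
print(pcm_rate / coded_rate)     # 8.0      ->  compression factor
```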
&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
==Linear Predictive Coding – Short-Term Prediction==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:P_ID1206__Bei_T_3_2_S3_v1.png|right|frame|Building blocks of the GSM short-term prediction (LPC)]]&lt;br /&gt;
The block&amp;amp;nbsp; &#039;&#039;&#039;Linear Predictive Coding&#039;&#039;&#039;&amp;amp;nbsp; (LPC) performs a short-term prediction, that is, the statistical dependencies between the samples are determined over a short range of about one millisecond. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
A brief description of the LPC block diagram follows:&lt;br /&gt;
*First, the temporally unbounded signal&amp;amp;nbsp; $s(n)$&amp;amp;nbsp; is segmented into intervals&amp;amp;nbsp; $s_{\rm R}(n)$&amp;amp;nbsp; of&amp;amp;nbsp; $20 \ \rm ms$ duration&amp;amp;nbsp; $(160$ samples$)$. By convention, the running variable within such a speech frame takes the values&amp;amp;nbsp; $n = 1$, ... , $160$.&lt;br /&gt;
*In the first step of the&amp;amp;nbsp; &#039;&#039;&#039;LPC analysis&#039;&#039;&#039;, dependencies between the samples are quantified by the autocorrelation coefficients with indices&amp;amp;nbsp; $0 ≤ k ≤ 8$:&lt;br /&gt;
:$$φ_{\rm s}(k) = \text{E}\big [s_{\rm R}(n) · s_{\rm R}(n + k)\big ].$$ &lt;br /&gt;
*From these nine ACF values, eight reflection coefficients&amp;amp;nbsp; $r_{k}$&amp;amp;nbsp; are computed by means of the so-called&amp;amp;nbsp; &#039;&#039;Schur recursion&#039;&#039;; they serve as the basis for setting the coefficients of the LPC analysis filter for the current frame.&lt;br /&gt;
*The coefficients&amp;amp;nbsp; $r_{k}$&amp;amp;nbsp; take values between&amp;amp;nbsp; $±1$. Even small changes of the&amp;amp;nbsp; $r_{k}$&amp;amp;nbsp; near the edges of their value range cause large changes in the speech coding. The eight reflection values&amp;amp;nbsp; $r_{k}$&amp;amp;nbsp; are therefore represented logarithmically &amp;amp;nbsp; ⇒  &amp;amp;nbsp; &#039;&#039;&#039;LAR parameters&#039;&#039;&#039; (&#039;&#039;Log Area Ratio&#039;&#039;):&lt;br /&gt;
:$${\rm LAR}(k) = \ln \ \frac{1-r_k}{1+r_k}, \hspace{1cm} k = 1,\hspace{0.05cm} \text{...}\hspace{0.05cm} , 8.$$ &lt;br /&gt;
 &lt;br /&gt;
*The eight LAR parameters are then quantized with different numbers of bits according to their subjective importance, coded and provided for transmission. The first two parameters are represented with six bits each, the next two with five bits each, $\rm LAR(5)$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm LAR(6)$&amp;amp;nbsp; with four bits each, and the last two &amp;amp;ndash; &amp;amp;nbsp; $\rm LAR(7)$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm LAR(8)$ &amp;amp;ndash; &amp;amp;nbsp; with three bits each.&lt;br /&gt;
*With error-free transmission, the original signal&amp;amp;nbsp; $s(n)$&amp;amp;nbsp; can be completely reconstructed at the receiver from the eight LPC parameters&amp;amp;nbsp; (a total of&amp;amp;nbsp;  $36$&amp;amp;nbsp; bits)&amp;amp;nbsp; using the corresponding LPC synthesis filter, apart from the unavoidable additional quantization errors caused by the digital representation of the LAR coefficients.&lt;br /&gt;
*Furthermore, the LPC filter is used to obtain the prediction error signal&amp;amp;nbsp; $e_{\rm LPC}(n)$, which is at the same time the input signal for the subsequent long-term prediction. The LPC filter is non-recursive and has only a short memory of about one millisecond.&lt;br /&gt;
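The LAR mapping defined above can be illustrated in a few lines (the coefficient values used here are hypothetical examples; the actual GSM quantizer tables are not reproduced):

```python
import math

def lar(r):
    """Log Area Ratio of a reflection coefficient r with magnitude strictly below 1."""
    return math.log((1.0 - r) / (1.0 + r))

# The mapping stretches the edges of the value range: the same step of
# 0.1 in r produces a much larger LAR step near the edge than near zero,
# which is exactly why the sensitive edge region is coded logarithmically.
step_at_edge = abs(lar(0.9) - lar(0.8))
step_at_center = abs(lar(0.1) - lar(0.0))
print(round(step_at_edge, 3))     # 0.747
print(round(step_at_center, 3))   # 0.201
```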
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
The figure from&amp;amp;nbsp; [Kai05]&amp;lt;ref name =&#039;Kai05&#039;&amp;gt;Kaindl, M.: &#039;&#039;Kanalcodierung für Sprache und Daten in GSM-Systemen&#039;&#039;. Dissertation. Lehrstuhl für Nachrichtentechnik, TU München. VDI Fortschritt-Berichte, Reihe 10, Nr. 764, 2005.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; shows at the top a section of the speech signal&amp;amp;nbsp; $s(n)$&amp;amp;nbsp; and its time–frequency representation. The LPC prediction error signal&amp;amp;nbsp; $e_{\rm LPC}(n)$&amp;amp;nbsp; is shown below.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1207__Bei_T_3_2_S3b_v3.png|right|frame|LPC prediction error signal in GSM (time&amp;amp;ndash;frequency representation)]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;These pictures reveal&lt;br /&gt;
*the smaller amplitude of&amp;amp;nbsp; $e_{\rm LPC}(n)$&amp;amp;nbsp; compared to&amp;amp;nbsp; $s(n)$,&lt;br /&gt;
*the significantly reduced dynamic range, and&lt;br /&gt;
*the flatter spectrum of the remaining signal.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Long Term Prediction ==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Long Term Prediction&#039;&#039;&#039;&amp;amp;nbsp; (LTP) exploits the property of the speech signal that it also contains periodic structures (voiced segments). This fact is used to reduce the redundancy present in the signal.&lt;br /&gt;
[[File:P_ID1208__Bei_T_3_2_S4_v1.png|right|frame|Building blocks of the GSM long-term prediction (LTP)]] &lt;br /&gt;
*The long-term prediction (LTP analysis and filtering) is carried out four times per speech frame, i.e. every&amp;amp;nbsp; $5 \ \rm ms$. &lt;br /&gt;
*The four subblocks consist of&amp;amp;nbsp; $40$ samples each and are numbered&amp;amp;nbsp; $i = 1$, ... , $4$.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
A brief description according to the LTP block diagram above follows – see&amp;amp;nbsp; [Kai05]&amp;lt;ref name =&#039;Kai05&#039;&amp;gt;Kaindl, M.: &#039;&#039;Kanalcodierung für Sprache und Daten in GSM-Systemen&#039;&#039;. Dissertation. Lehrstuhl für Nachrichtentechnik, TU München. VDI Fortschritt-Berichte, Reihe 10, Nr. 764, 2005.&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
*The input signal is the output signal&amp;amp;nbsp; $e_{\rm LPC}(n)$&amp;amp;nbsp; of the short-term prediction. After segmentation into four subblocks, the signals are denoted by&amp;amp;nbsp; $e_i(l)$, where in each case&amp;amp;nbsp; $l = 1, 2$, ... , $40$&amp;amp;nbsp; holds.&lt;br /&gt;
&lt;br /&gt;
*For this analysis, the cross-correlation function&amp;amp;nbsp; $φ_{ee\hspace{0.03cm}&#039;,\hspace{0.05cm}i}(k)$&amp;amp;nbsp; of the current subblock&amp;amp;nbsp; $i$&amp;amp;nbsp; of the LPC prediction error signal&amp;amp;nbsp; $e_i(l)$&amp;amp;nbsp; with the reconstructed LPC residual signal&amp;amp;nbsp; $e\hspace{0.03cm}&#039;_i(l)$&amp;amp;nbsp; from the three previous subframes is computed. The memory of this LTP predictor is between&amp;amp;nbsp; $5 \ \rm ms$&amp;amp;nbsp; and&amp;amp;nbsp; $15 \ \rm ms$&amp;amp;nbsp; and is thus considerably longer than that of the LPC predictor&amp;amp;nbsp; $(1 \ \rm ms)$.&lt;br /&gt;
* $e\hspace{0.03cm}&#039;_i(l)$&amp;amp;nbsp; is the sum of the LTP filter output signal&amp;amp;nbsp; $y_i(l)$&amp;amp;nbsp; and the correction signal&amp;amp;nbsp; $e_{\rm RPE,\hspace{0.05cm}i}(l)$, which is provided by the following component&amp;amp;nbsp; (&#039;&#039;Regular Pulse Excitation&#039;&#039;)&amp;amp;nbsp; for the&amp;amp;nbsp; $i$–th subblock.&lt;br /&gt;
*The value of&amp;amp;nbsp; $k$&amp;amp;nbsp; for which the cross-correlation function&amp;amp;nbsp; $φ_{ee\hspace{0.03cm}&#039;,\hspace{0.05cm}i}(k)$&amp;amp;nbsp; becomes maximal determines the optimal LTP delay&amp;amp;nbsp; $N(i)$&amp;amp;nbsp; for each subblock&amp;amp;nbsp; $i$. The delays&amp;amp;nbsp; $N(1)$&amp;amp;nbsp; to&amp;amp;nbsp; $N(4)$&amp;amp;nbsp; are each quantized with seven bits and provided for transmission.&lt;br /&gt;
*The gain factor&amp;amp;nbsp; $G(i)$&amp;amp;nbsp; associated with&amp;amp;nbsp; $N(i)$&amp;amp;nbsp; – also called the&amp;amp;nbsp; &#039;&#039;LTP gain&#039;&#039;&amp;amp;nbsp; – is determined such that the subblock found at position&amp;amp;nbsp; $N(i)$, after multiplication by&amp;amp;nbsp; $G(i)$, best matches the current subframe&amp;amp;nbsp; $e_i(l)$. The gain factors&amp;amp;nbsp; $G(1)$&amp;amp;nbsp; to&amp;amp;nbsp; $G(4)$&amp;amp;nbsp; are each quantized with two bits and, together with&amp;amp;nbsp; $N(1)$, ..., $N(4)$, yield the&amp;amp;nbsp; $36$&amp;amp;nbsp; bits for the eight LTP parameters.&lt;br /&gt;
*The signal&amp;amp;nbsp; $y_i(l)$&amp;amp;nbsp; after LTP analysis and filtering is an estimate of the LPC signal&amp;amp;nbsp; $e_i(l)$&amp;amp;nbsp; in the&amp;amp;nbsp; $i$–th subblock. The difference between the two yields the LTP residual signal&amp;amp;nbsp; $e_{\rm LTP,\hspace{0.05cm}i}(l)$, which is passed on to the next functional unit, &amp;quot;RPE&amp;quot;.&lt;br /&gt;
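The lag search described in these bullets can be sketched as follows (a simplified illustration without GSM quantization; the signal, window lengths and lag range are example assumptions):

```python
import random

def ltp_search(e, history, lag_min=40, lag_max=120):
    """Return the LTP delay N and gain G for subblock e, given the
    reconstructed residual of the previous subframes (history)."""
    def corr(lag):
        past = history[len(history) - lag:][:len(e)]
        return sum(a * b for a, b in zip(e, past))

    lag = max(range(lag_min, lag_max + 1), key=corr)   # delay N maximizes the cross-correlation
    past = history[len(history) - lag:][:len(e)]
    energy = sum(p * p for p in past)
    gain = corr(lag) / energy if energy else 0.0       # gain G scales the found subblock
    return lag, gain

# toy check: the current subblock is an exact copy of the history 47
# samples back, so the search should report lag 47 and gain 1.0
random.seed(1)
history = [random.uniform(-1.0, 1.0) for _ in range(120)]
e = history[73:113]
print(ltp_search(e, history))
```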
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
The figure from&amp;amp;nbsp; [Kai05]&amp;lt;ref name =&#039;Kai05&#039;&amp;gt;Kaindl, M.: &#039;&#039;Kanalcodierung für Sprache und Daten in GSM-Systemen&#039;&#039;. Dissertation. Lehrstuhl für Nachrichtentechnik, TU München. VDI Fortschritt-Berichte, Reihe 10, Nr. 764, 2005.&amp;lt;/ref&amp;gt;&amp;amp;nbsp;  shows &lt;br /&gt;
*at the top the LPC prediction error signal&amp;amp;nbsp; $e_{\rm LPC}(n)$&amp;amp;nbsp; – which is at the same time the LTP input signal, &lt;br /&gt;
*at the bottom the residual error signal&amp;amp;nbsp; $e_{\rm LTP}(n)$&amp;amp;nbsp; after the long-term prediction. &lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1209__Bei_T_3_2_S4b_v2.png|right|frame|LTP prediction error signal in GSM (time&amp;amp;ndash;frequency representation)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only one subblock is considered here. Therefore the same letter&amp;amp;nbsp; $n$&amp;amp;nbsp; is used for the discrete time in both LPC and LTP.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These plots reveal &lt;br /&gt;
*the smaller amplitudes of&amp;amp;nbsp; $e_{\rm LTP}(n)$&amp;amp;nbsp; compared to&amp;amp;nbsp; $e_{\rm LPC}(n)$&amp;amp;nbsp; and &lt;br /&gt;
*the significantly reduced dynamic range of&amp;amp;nbsp; $e_{\rm LTP}(n)$, &lt;br /&gt;
*especially in periodic, i.e. voiced, segments. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The long-term prediction also reduces the prediction error signal in the frequency domain.}}&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==Regular Pulse Excitation – RPE Coding == 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
After LPC and LTP filtering, the signal is already redundancy-reduced, that is, it requires a lower bit rate than the sampled speech signal&amp;amp;nbsp; $s(n)$. &lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1210__Bei_T_3_2_S5_v2.png|right|frame|Building blocks of Regular Pulse Excitation (RPE) in GSM]] &lt;br /&gt;
*The subsequent functional unit&amp;amp;nbsp; &#039;&#039;&#039;Regular Pulse Excitation&#039;&#039;&#039;&amp;amp;nbsp; (RPE) now further reduces the irrelevance.&lt;br /&gt;
*This means: &amp;amp;nbsp; signal components that are less important for the subjective listening impression are removed.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
Regarding this block diagram, note:&lt;br /&gt;
*The RPE coding is carried out for each&amp;amp;nbsp; $5 \ \rm ms$&amp;amp;nbsp; subframe&amp;amp;nbsp; $(40$ samples$)$. This is indicated here by the index&amp;amp;nbsp; $i$&amp;amp;nbsp; in the input signal&amp;amp;nbsp; $e_{{\rm LTP},\hspace{0.03cm}i}(l)$, where&amp;amp;nbsp; $i = 1, 2, 3, 4$&amp;amp;nbsp; again numbers the individual subblocks.&lt;br /&gt;
*In the first step, the LTP prediction error signal&amp;amp;nbsp; $e_{{\rm LTP}, \hspace{0.03cm}i}(l)$&amp;amp;nbsp; is band-limited by a low-pass filter to about one third of the original bandwidth – i.e. to&amp;amp;nbsp; $1.3 \ \rm  kHz$. This allows, in a second step, a reduction of the sampling rate by roughly a factor of&amp;amp;nbsp; $3$.&lt;br /&gt;
*The output signal&amp;amp;nbsp; $x_i(l)$&amp;amp;nbsp; with&amp;amp;nbsp; $l = 1$, ... , $40$&amp;amp;nbsp; is thus decomposed by subsampling into four subsequences&amp;amp;nbsp; $x_{m, \hspace{0.03cm} i}(j)$&amp;amp;nbsp; with&amp;amp;nbsp; $m = 1$, ... , $4$&amp;amp;nbsp; and&amp;amp;nbsp; $j = 1$, ... , $13$. This split is illustrated in the figure.&lt;br /&gt;
*The subsequences&amp;amp;nbsp; $x_{m,\hspace{0.03cm} i}(j)$&amp;amp;nbsp; contain the following samples of the signal&amp;amp;nbsp; $x_i(l)$:&lt;br /&gt;
**$m = 1$:  &amp;amp;nbsp;    $l = 1, \ 4, \ 7$, ... , $34, \ 37$ (red dots),&lt;br /&gt;
**$m = 2$:   &amp;amp;nbsp;   $l = 2, \ 5, \ 8$, ... , $35, \ 38$ (green dots),&lt;br /&gt;
**$m = 3$:   &amp;amp;nbsp;   $l = 3, \ 6, \ 9$, ... , $36, \ 39$ (blue dots),&lt;br /&gt;
**$m = 4$:   &amp;amp;nbsp;   $l = 4, \ 7, \ 10$, ... , $37, \ 40$ $($also red, largely identical to&amp;amp;nbsp; $m = 1)$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*For each subblock&amp;amp;nbsp; $i$, the block&amp;amp;nbsp; &#039;&#039;RPE Grid Selection&#039;&#039;&amp;amp;nbsp; selects the subsequence&amp;amp;nbsp; $x_{m,\hspace{0.03cm}i}(j)$&amp;amp;nbsp; with the highest energy; the index&amp;amp;nbsp; $M_i$&amp;amp;nbsp; of this&amp;amp;nbsp; &#039;&#039;optimal sequence&#039;&#039;&amp;amp;nbsp; is quantized with two bits and transmitted as&amp;amp;nbsp; $\mathbf{M}(i)$. In total, the four RPE subsequence indices&amp;amp;nbsp; $\mathbf{M}(1)$, ... ,&amp;amp;nbsp; $\mathbf{M}(4)$&amp;amp;nbsp; thus require eight bits.&lt;br /&gt;
*From the optimal subsequence for subblock&amp;amp;nbsp; $i$&amp;amp;nbsp; $($with index&amp;amp;nbsp; $M_i)$, the&amp;amp;nbsp; &#039;&#039;maximum magnitude&#039;&#039;&amp;amp;nbsp; $x_{\rm max,\hspace{0.03cm}i}$&amp;amp;nbsp; is determined; this value is logarithmically quantized with six bits and provided for transmission as&amp;amp;nbsp; $\mathbf{x_{\rm max}}(i)$. In total, the four RPE block amplitudes require&amp;amp;nbsp; $24$&amp;amp;nbsp; bits.&lt;br /&gt;
*In addition, for each subblock&amp;amp;nbsp; $i$&amp;amp;nbsp; the optimal subsequence is normalized to&amp;amp;nbsp; $x_{{\rm max},\hspace{0.03cm}i}$. The resulting&amp;amp;nbsp; $13$&amp;amp;nbsp; samples are then quantized with three bits each and transmitted coded as&amp;amp;nbsp; $\mathbf{X}_j(i)$. These&amp;amp;nbsp; $4 · 13 · 3 = 156$&amp;amp;nbsp; bits describe the so-called&amp;amp;nbsp; &#039;&#039;&#039;RPE pulses&#039;&#039;&#039;.&lt;br /&gt;
*These RPE parameters are then locally decoded again and fed back as the signal&amp;amp;nbsp; $e_{{\rm RPE},\hspace{0.03cm}i}(l)$&amp;amp;nbsp; to the LTP synthesis filter in the previous subblock, from which, together with the LTP estimate&amp;amp;nbsp; $y_i(l)$, the signal&amp;amp;nbsp; $e\hspace{0.03cm}&#039;_i(l)$&amp;amp;nbsp; is generated (see&amp;amp;nbsp; [[Examples_of_Communication_Systems/Sprachcodierung#Long_Term_Prediction_.E2.80.93_Langzeitpr.C3.A4diktion|LTP figure]]).&lt;br /&gt;
*By inserting two zero values between each pair of transmitted RPE samples, the baseband from zero to&amp;amp;nbsp; $1300 \ \rm Hz$&amp;amp;nbsp; is approximately folded into the range from&amp;amp;nbsp; $1300 \ \rm Hz$&amp;amp;nbsp; to&amp;amp;nbsp; $2600 \ \rm Hz$&amp;amp;nbsp; in inverted position and from&amp;amp;nbsp; $2600 \ \rm Hz$&amp;amp;nbsp; to&amp;amp;nbsp; $3900 \ \rm Hz$&amp;amp;nbsp; in normal position. This is why the DC removal in the preprocessing is necessary: otherwise, the described folding operation would produce an annoying whistling tone at&amp;amp;nbsp; $2.6 \ \rm kHz$.&lt;br /&gt;
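The decimation and grid selection described above can be sketched in a few lines (simplified: no low-pass filtering or quantization, and 0-based list indices stand for l = 1 ... 40):

```python
def rpe_grid_select(x):
    """x: 40 samples (index 0 corresponds to l = 1).
    Returns the best grid M (1..4) and its 13-sample subsequence.
    Grid m = 1 keeps l = 1, 4, ..., 37; grid m = 2 keeps l = 2, 5, ..., 38; etc."""
    grids = [x[m:m + 39:3] for m in range(4)]              # four interleaved subsequences
    energies = [sum(v * v for v in g) for g in grids]      # energy per subsequence
    M = max(range(4), key=energies.__getitem__)            # highest-energy grid
    return M + 1, grids[M]

# toy subblock: all energy on l = 2, 5, 8, which belong to grid m = 2
x = [0.0] * 40
x[1] = x[4] = x[7] = 1.0
M, seq = rpe_grid_select(x)
print(M, len(seq))    # 2 13
```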
	 &lt;br /&gt;
&lt;br /&gt;
==Halfrate Vocoder and Enhanced Fullrate Codec==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
After the standardization of the full rate codec in 1991, subsequent work focused on developing new speech codecs with two specific goals, namely&lt;br /&gt;
*better utilization of the bandwidth available in GSM systems, and&lt;br /&gt;
*improvement of the speech quality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This development can be summarized as follows:&lt;br /&gt;
*By 1994, a new method had been developed with the&amp;amp;nbsp; &#039;&#039;&#039;Halfrate Vocoder&#039;&#039;&#039;&amp;amp;nbsp; (half rate codec). It has a data rate of&amp;amp;nbsp; $5.6 \ \rm kbit/s$&amp;amp;nbsp; and thus offers the possibility of transmitting speech in half a traffic channel at approximately the same quality, so that two calls can be handled simultaneously in one time slot. However, mobile network operators only used the half rate codec when a radio cell was overloaded; by now it no longer plays any role.&lt;br /&gt;
*To further improve the speech quality, the&amp;amp;nbsp; &#039;&#039;&#039;Enhanced Fullrate Codec&#039;&#039;&#039;&amp;amp;nbsp; (EFR codec) was introduced in 1995. This speech coding method – originally developed for the US-American DCS1900 network – is a full rate codec with the (somewhat lower) data rate of&amp;amp;nbsp; $12.2 \ \rm kbit/s$. Its use must of course be supported by the mobile phone.&lt;br /&gt;
*Instead of the RPE–LTP compression (&#039;&#039;Regular Pulse Excitation – Long Term Prediction&#039;&#039;) of the conventional full rate codec, this further development additionally applies&amp;amp;nbsp; &#039;&#039;&#039;Algebraic Code Excitation Linear Prediction&#039;&#039;&#039;&amp;amp;nbsp; (ACELP), which offers significantly better speech quality as well as improved error detection and concealment. More details can be found two pages ahead.&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
==Adaptive Multi–Rate Codec==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The GSM codecs described so far always operate at a fixed data rate with respect to speech and channel coding, independently of the channel conditions and the network load. In 1997, a new adaptive speech coding method for mobile radio systems was developed and shortly afterwards standardized by the &#039;&#039;European Telecommunications Standards Institute&#039;&#039; (ETSI) based on proposals by Ericsson, Nokia and Siemens. The Institute for Communications Engineering at TU München, which provides this tutorial $\rm LNTwww$, was decisively involved in the research work for the system proposal of Siemens AG. More details can be found in&amp;amp;nbsp; [Hin02]&amp;lt;ref name =&#039;Hin02&#039;&amp;gt;Hindelang, T.: &#039;&#039;Source-Controlled Channel Decoding and Decoding for Mobile Communications&#039;&#039;. Dissertation. Lehrstuhl für Nachrichtentechnik, TU München. VDI Fortschritt-Berichte, Reihe 10, Nr. 695, 2002.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;&#039;Adaptive Multi–Rate Codec&#039;&#039;&#039;&amp;amp;nbsp; – AMR for short – has the following properties:&lt;br /&gt;
*It adapts flexibly to the current channel conditions and to the network load by operating either in full rate mode (higher speech quality) or in half rate mode (lower data rate). There are also a number of intermediate levels.&lt;br /&gt;
*It offers improved speech quality in both the full rate and the half rate traffic channel, which is due to the flexible division of the available gross channel data rate between speech and channel coding.&lt;br /&gt;
*It is more robust against channel errors than the codecs from the early days of mobile radio, particularly when used in the full rate traffic channel.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The AMR codec provides&amp;amp;nbsp; &#039;&#039;&#039;eight different modes&#039;&#039;&#039;&amp;amp;nbsp; with data rates between&amp;amp;nbsp; $12.2 \ \rm kbit/s$&amp;amp;nbsp; $(244$&amp;amp;nbsp; bits per frame of&amp;amp;nbsp; $20  \ \rm ms)$&amp;amp;nbsp; and&amp;amp;nbsp; $4.75  \ \rm kbit/s$&amp;amp;nbsp; $(95$ bits per frame$)$. Three modes play a prominent role, namely&lt;br /&gt;
* $12.2 \ \rm kbit/s$&amp;amp;nbsp; – the enhanced GSM full rate codec (EFR codec),&lt;br /&gt;
* $7.4 \ \rm kbit/s$&amp;amp;nbsp; – speech compression according to the US-American standard IS–641, and&lt;br /&gt;
* $6.7 \ \rm kbit/s$&amp;amp;nbsp; – the EFR speech transmission of the Japanese PDC mobile radio standard.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following descriptions mostly refer to the mode with&amp;amp;nbsp; $12.2 \ \rm kbit/s$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID3113__Bei_T_3_3_S8c_v1.png|right|frame|Compilation of the AMR parameters]] &lt;br /&gt;
&lt;br /&gt;
*All predecessors of the AMR are based on minimizing the prediction error signal by forward prediction in the sub-steps LPC, LTP and RPE. &lt;br /&gt;
*In contrast, the AMR codec uses backward prediction according to the principle of &amp;quot;analysis by synthesis&amp;quot;. This coding principle is also called&amp;amp;nbsp; &#039;&#039;&#039;Algebraic Code Excited Linear Prediction&#039;&#039;&#039;&amp;amp;nbsp; (ACELP).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The table compiles the parameters of the Adaptive Multi–Rate Codec for two modes:&lt;br /&gt;
*&amp;amp;nbsp; $244$&amp;amp;nbsp; bits per&amp;amp;nbsp; $20 \ \rm  ms$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp;  mode&amp;amp;nbsp; $12.2 \ \rm kbit/s$, &lt;br /&gt;
*&amp;amp;nbsp; $95$&amp;amp;nbsp; bits per&amp;amp;nbsp; $20 \ \rm  ms$ &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; mode&amp;amp;nbsp; $4.75 \ \rm kbit/s$.&lt;br /&gt;
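The two mode figures in this list are consistent, as a one-line conversion from bits per 20 ms frame to kbit/s shows:

```python
FRAME_MS = 20   # AMR frame duration in ms

def rate_kbps(bits_per_frame):
    # bits per frame divided by the frame duration in ms gives kbit/s
    return bits_per_frame / FRAME_MS

print(rate_kbps(244))   # 12.2
print(rate_kbps(95))    # 4.75
```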
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
== Algebraic Code Excited Linear Prediction==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The figure shows the&amp;amp;nbsp; &#039;&#039;&#039;AMR codec&#039;&#039;&#039;&amp;amp;nbsp; based on&amp;amp;nbsp; &#039;&#039;&#039;ACELP&#039;&#039;&#039;. A brief description of the principle follows; a detailed account can be found, for example, in&amp;amp;nbsp; [Kai05]&amp;lt;ref name =&#039;Kai05&#039;&amp;gt;Kaindl, M.: &#039;&#039;Kanalcodierung für Sprache und Daten in GSM-Systemen&#039;&#039;. Dissertation. Lehrstuhl für Nachrichtentechnik, TU München. VDI Fortschritt-Berichte, Reihe 10, Nr. 764, 2005.&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1212__Bei_T_3_2_S8_v1.png|center|frame|Algebraic Code Excited Linear Prediction &amp;amp;ndash; principle]]&lt;br /&gt;
&lt;br /&gt;
*The speech signal&amp;amp;nbsp; $s(n)$, sampled at&amp;amp;nbsp; $8 \ \rm kHz$&amp;amp;nbsp; and quantized with&amp;amp;nbsp; $13$&amp;amp;nbsp; bits as in the GSM full rate speech codec, is segmented before further processing into frames&amp;amp;nbsp; $s_{\rm R}(n)$&amp;amp;nbsp; with&amp;amp;nbsp; $n = 1$, ... , $160$&amp;amp;nbsp; and into subblocks&amp;amp;nbsp; $s_i(l)$&amp;amp;nbsp; with&amp;amp;nbsp; $i = 1, 2, 3, 4$&amp;amp;nbsp; and&amp;amp;nbsp; $l = 1$, ... , $40$.&lt;br /&gt;
*The LPC coefficients are computed in the block highlighted in red once per frame, i.e. every&amp;amp;nbsp; $20 \ \rm ms$&amp;amp;nbsp; corresponding to&amp;amp;nbsp; $160$&amp;amp;nbsp; samples, since within this short time span the spectral envelope of the speech signal&amp;amp;nbsp; $s_{\rm R}(n)$&amp;amp;nbsp; can be regarded as constant.&lt;br /&gt;
*For the LPC analysis, a filter&amp;amp;nbsp; $A(z)$&amp;amp;nbsp; of order&amp;amp;nbsp; $10$&amp;amp;nbsp; is usually chosen. In the highest-rate mode at&amp;amp;nbsp; $12.2 \ \rm kbit/s$, the current coefficients&amp;amp;nbsp; $a_k \ ( k = 1$, ... , $10)$&amp;amp;nbsp; of the short-term prediction are quantized every&amp;amp;nbsp; $10\ \rm  ms$, coded, and provided for transmission at point&amp;amp;nbsp; &#039;&#039;&#039;1&#039;&#039;&#039;&amp;amp;nbsp; highlighted in yellow.&lt;br /&gt;
*The further steps of the AMR are carried out every&amp;amp;nbsp; $5 \ \rm ms$, corresponding to the&amp;amp;nbsp; $40$&amp;amp;nbsp; samples of the signals&amp;amp;nbsp; $s_i(l)$. The long-term prediction (LTP) – outlined in blue in the figure – is realized here as an adaptive codebook containing the samples of the preceding subblocks.&lt;br /&gt;
*For the long-term prediction (LTP), the gain&amp;amp;nbsp; $G_{\rm FCB}$&amp;amp;nbsp; of the&amp;amp;nbsp; &#039;&#039;Fixed Code Book&#039;&#039;&amp;amp;nbsp; (FCB) is first set to zero, so that a sequence of&amp;amp;nbsp; $40$&amp;amp;nbsp; samples from the adaptive codebook is applied to the input&amp;amp;nbsp; $u_i(l)$&amp;amp;nbsp; of the vocal tract filter&amp;amp;nbsp; $A(z)^{–1}$&amp;amp;nbsp; defined by the LPC. The index&amp;amp;nbsp; $i$&amp;amp;nbsp; denotes the subblock under consideration.&lt;br /&gt;
*By varying the two LTP parameters&amp;amp;nbsp; $N_{{\rm LTP},\hspace{0.05cm}i}$&amp;amp;nbsp; and&amp;amp;nbsp; $G_{{\rm LTP},\hspace{0.05cm}i}$, the goal for this&amp;amp;nbsp; $i$–th subblock is to minimize the mean square value – i.e. the average power – of the weighted error signal&amp;amp;nbsp; $w_i(l)$.&lt;br /&gt;
*The error signal&amp;amp;nbsp; $w_i(l)$&amp;amp;nbsp; equals the difference between the current speech frame&amp;amp;nbsp; $s_i(l)$&amp;amp;nbsp; and the output signal&amp;amp;nbsp; $y_i(l)$&amp;amp;nbsp; of the so-called vocal tract filter when excited with&amp;amp;nbsp; $u_i(l)$, taking into account the weighting filter&amp;amp;nbsp; $W(z)$&amp;amp;nbsp; that adapts to the spectral properties of human hearing.&lt;br /&gt;
*In other words: &amp;amp;nbsp; $W(z)$&amp;amp;nbsp; removes those spectral components in the signal&amp;amp;nbsp; $e_i(l)$&amp;amp;nbsp; that an &amp;quot;average&amp;quot; ear does not perceive. In the&amp;amp;nbsp; $12.2 \ \rm kbit/s$&amp;amp;nbsp; mode, one uses&amp;amp;nbsp; $W(z) = A(z/γ_1)/A(z/γ_2)$&amp;amp;nbsp; with constant factors&amp;amp;nbsp; $γ_1 = 0.9$&amp;amp;nbsp; and&amp;amp;nbsp; $γ_2 = 0.6$.&lt;br /&gt;
*For each subblock,&amp;amp;nbsp; $N_{{\rm LTP},\hspace{0.05cm}i}$&amp;amp;nbsp; denotes the best possible LTP delay which, together with the LTP gain&amp;amp;nbsp; $G_{{\rm LTP},\hspace{0.05cm}i}$, minimizes the squared error&amp;amp;nbsp; $\text{E}[w_i(l)^2]$&amp;amp;nbsp; after averaging over&amp;amp;nbsp; $l = 1$, ... , $40$. Dashed lines indicate control paths for the iterative optimization.&lt;br /&gt;
*This procedure is referred to as&amp;amp;nbsp; &#039;&#039;&#039;analysis by synthesis&#039;&#039;&#039;. After a sufficiently large number of iterations, the subblock&amp;amp;nbsp; $u_i(l)$&amp;amp;nbsp; is entered into the adaptive codebook. The LTP parameters&amp;amp;nbsp; $N_{{\rm LTP},\hspace{0.05cm}i}$&amp;amp;nbsp; and&amp;amp;nbsp; $G_{{\rm LTP},\hspace{0.05cm}i}$&amp;amp;nbsp; obtained are coded and provided for transmission.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Fixed Code Book &amp;amp;ndash; FCB==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:P_ID1214__Bei_T_3_2_S8b_v1.png|right|frame|Track layout of the ACELP speech codec]]&lt;br /&gt;
After the best adaptive excitation has been determined, the search for the best entry in the fixed codebook (&#039;&#039;Fixed Code Book&#039;&#039;, FCB) follows. &lt;br /&gt;
*This provides the most important information about the speech signal. &lt;br /&gt;
*For example, in the&amp;amp;nbsp; $12.2 \ \rm kbit/s$&amp;amp;nbsp; mode,&amp;amp;nbsp; $40$&amp;amp;nbsp; bits per subblock are derived from it.&lt;br /&gt;
* Thus, in each frame of&amp;amp;nbsp; $20$&amp;amp;nbsp; milliseconds,&amp;amp;nbsp; $160/244 ≈ 65\%$&amp;amp;nbsp; of the coding goes back to the block outlined in green in the figure on the previous page.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The principle can be described with reference to the figure in a few bullet points:&lt;br /&gt;
*In the fixed codebook, each entry denotes a pulse sequence in which exactly&amp;amp;nbsp; $10$&amp;amp;nbsp; of the&amp;amp;nbsp; $40$&amp;amp;nbsp; positions are occupied by&amp;amp;nbsp; $+1$&amp;amp;nbsp; or&amp;amp;nbsp; $-1$. According to the figure, this is achieved by five tracks of eight positions each, of which exactly two take the values&amp;amp;nbsp; $±1$&amp;amp;nbsp; and all others are zero.&lt;br /&gt;
*A red circle in the figure above&amp;amp;nbsp; $($at positions&amp;amp;nbsp; $2,\ 11,\ 26,\ 30,\ 38)$&amp;amp;nbsp; marks a&amp;amp;nbsp; $+1$&amp;amp;nbsp; and a blue one a&amp;amp;nbsp; $-1$&amp;amp;nbsp; $($in the example at&amp;amp;nbsp; $13,\ 17,\ 19,\ 24,\ 35)$. In each track, the two occupied positions are coded with only three bits each (since there are only eight possible positions).&lt;br /&gt;
*One further bit is used for the sign; it defines the sign of the first-named pulse. If the position of the second pulse is larger than that of the first, the second pulse has the same sign as the first, otherwise the opposite sign.&lt;br /&gt;
*In the first track of the example above there are positive pulses at position&amp;amp;nbsp; $2 \ (010)$&amp;amp;nbsp; and position&amp;amp;nbsp; $5 \ (101)$, where position counting starts at&amp;amp;nbsp; $0$. This track is thus characterized by the positions&amp;amp;nbsp; $010$&amp;amp;nbsp; and&amp;amp;nbsp; $101$&amp;amp;nbsp; as well as the sign&amp;amp;nbsp; $1$&amp;amp;nbsp; (positive).&lt;br /&gt;
*The identifier of the second track reads: &amp;amp;nbsp; positions&amp;amp;nbsp; $011$&amp;amp;nbsp; and&amp;amp;nbsp; $000$, sign&amp;amp;nbsp; $0$. Since here the pulses at positions&amp;amp;nbsp; $0$&amp;amp;nbsp; and&amp;amp;nbsp; $3$&amp;amp;nbsp; have different signs,&amp;amp;nbsp; $011$&amp;amp;nbsp; comes before&amp;amp;nbsp; $000$. The sign $0$ &amp;amp;nbsp; ⇒  &amp;amp;nbsp; negative refers to the pulse at the first-named position&amp;amp;nbsp; $3$.&lt;br /&gt;
*Each pulse sequence – consisting of&amp;amp;nbsp; $40$&amp;amp;nbsp; pulses, of which, however,&amp;amp;nbsp; $30$&amp;amp;nbsp; have the weight &amp;amp;bdquo;zero&amp;amp;rdquo; – yields a stochastic, noise-like acoustic signal which, after amplification with&amp;amp;nbsp; $G_{{\rm LTP},\hspace{0.05cm}i}$&amp;amp;nbsp; and shaping by the LPC vocal tract filter&amp;amp;nbsp; $A(z)^{–1}$, approximates the speech frame&amp;amp;nbsp; $s_i(l)$.&lt;br /&gt;
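The sign convention described in these bullets can be made concrete with a small decoder (an illustrative reconstruction of the rule as stated here, not the normative GSM bit layout):

```python
def decode_track(pos_first, pos_second, sign_bit):
    """Decode one 8-position FCB track into its 8 sample values.

    pos_first and pos_second are the two 3-bit positions (0..7); sign_bit
    refers to the pulse at pos_first. If pos_second is larger than
    pos_first, the second pulse gets the same sign, otherwise the
    opposite one, as described in the text."""
    s_first = 1 if sign_bit == 1 else -1
    s_second = s_first if pos_second > pos_first else -s_first
    track = [0] * 8
    track[pos_first] += s_first     # += also covers coinciding positions
    track[pos_second] += s_second
    return track

# first track of the example: positions 0b010 = 2 and 0b101 = 5, sign 1
print(decode_track(2, 5, 1))   # +1 at positions 2 and 5
# second track: positions 0b011 = 3 and 0b000 = 0, sign 0
print(decode_track(3, 0, 0))   # -1 at position 3, +1 at position 0
```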
&lt;br /&gt;
 	 &lt;br /&gt;
== Exercises for the chapter ==  	&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
[[Aufgaben:Aufgabe_3.5:_GSM–Vollraten–Sprachcodec|Exercise 3.5: GSM Full-Rate Speech Codec]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Aufgabe_3.6:_Adaptive_Multi–Rate_Codec|Exercise 3.6: Adaptive Multi-Rate Codec]]&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Funkschnittstelle&amp;diff=34981</id>
		<title>Examples of Communication Systems/Funkschnittstelle</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Funkschnittstelle&amp;diff=34981"/>
		<updated>2020-10-13T15:38:43Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Funkschnittstelle to Examples of Communication Systems/Radio Interface&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Examples of Communication Systems/Radio Interface]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Radio_Interface&amp;diff=34980</id>
		<title>Examples of Communication Systems/Radio Interface</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Radio_Interface&amp;diff=34980"/>
		<updated>2020-10-13T15:38:43Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Funkschnittstelle to Examples of Communication Systems/Radio Interface&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=GSM – Global System for Mobile Communications&lt;br /&gt;
|Vorherige Seite=Allgemeine Beschreibung von GSM&lt;br /&gt;
|Nächste Seite=Sprachcodierung&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Logical channels of GSM  ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Decisive for the proper operation of the GSM network and for the information exchange between mobile and base station is the&amp;amp;nbsp; &#039;&#039;&#039;radio interface&#039;&#039;&#039;. It is also called the &amp;quot;air interface&amp;quot; or &amp;quot;physical layer&amp;quot; and defines all physical channels of the GSM system as well as their assignment to the logical channels. The radio interface is furthermore responsible for additional functions such as the&amp;amp;nbsp; &#039;&#039;Radio Subsystem Link Control&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Let us begin with the&amp;amp;nbsp; &#039;&#039;logical channels&#039;&#039;. These can occupy either an entire physical channel or only part of a physical channel, and they fall into two categories:&lt;br /&gt;
*&#039;&#039;&#039;Traffic channels&#039;&#039;&#039;&amp;amp;nbsp; (German:&amp;amp;nbsp; &amp;quot;Verkehrskanäle&amp;quot;)&amp;amp;nbsp; are used exclusively for the transmission of user data streams such as speech, fax and data. These channels are designed for both directions – that is&lt;br /&gt;
&lt;br /&gt;
:: &#039;&#039;Mobile Station&#039;&#039; (MS) &amp;amp;nbsp; ⇔ &amp;amp;nbsp; &#039;&#039;Base Station Subsystem&#039;&#039; (BSS) &lt;br /&gt;
&lt;br /&gt;
:and can be occupied either by one full-rate traffic channel&amp;amp;nbsp; $\text{(13 kbit/s)}$&amp;amp;nbsp; or by two half-rate channels&amp;amp;nbsp; $\text{(5.6 kbit/s each)}$.&lt;br /&gt;
*&#039;&#039;&#039;Control channels&#039;&#039;&#039;&amp;amp;nbsp; (German:&amp;amp;nbsp; &amp;quot;Signalisierungskanäle&amp;quot;)&amp;amp;nbsp; supply all active mobile stations via the radio interface with packet-oriented signaling, so that messages can be received from the&amp;amp;nbsp; &#039;&#039;Base Transceiver Station&#039;&#039;&amp;amp;nbsp; (BTS) or sent to the BTS at any time.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1192__Bei_T_3_2_S1.png|right|frame|Overview of the logical channels of GSM]]&lt;br /&gt;
&amp;lt;br&amp;gt;The table lists the logical channels of GSM. &lt;br /&gt;
*These differ from the logical ISDN channels by an additional &amp;quot;m&amp;quot; for &amp;quot;mobile&amp;quot;. &lt;br /&gt;
*For example, the &amp;quot;Bm channel&amp;quot; is comparable to the B channel of ISDN.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
==Uplink and downlink parameters ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The logical channels are mapped onto&amp;amp;nbsp; &#039;&#039;physical channels&#039;&#039;, which describe all physical aspects of the data transport:&lt;br /&gt;
*the frequency ranges for&amp;amp;nbsp; &#039;&#039;&#039;uplink&#039;&#039;&#039;&amp;amp;nbsp; (radio link mobile station &amp;amp;nbsp; &amp;amp;rarr; &amp;amp;nbsp; base station) and&amp;amp;nbsp;  &#039;&#039;&#039;downlink&#039;&#039;&#039;&amp;amp;nbsp; (radio link base station &amp;amp;nbsp; &amp;amp;rarr; &amp;amp;nbsp; mobile station),&lt;br /&gt;
*the split between&amp;amp;nbsp; &#039;&#039;Time Division Multiple Access&#039;&#039;&amp;amp;nbsp; (TDMA) and&amp;amp;nbsp; &#039;&#039;Frequency Division Multiple Access&#039;&#039;&amp;amp;nbsp; (FDMA),&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;burst structure&#039;&#039;, i.e. the occupancy of a TDMA time slot for the various applications (user and signaling data, synchronization marks, etc.),&lt;br /&gt;
*the modulation method&amp;amp;nbsp; &#039;&#039;Gaussian Minimum Shift Keying&#039;&#039;&amp;amp;nbsp; (GMSK), a variant of&amp;amp;nbsp; &#039;&#039;Continuous Phase Frequency Shift Keying&#039;&#039;&amp;amp;nbsp; (CP–FSK) with high bandwidth efficiency.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following table shows the frequency ranges of the standardized GSM systems. So that no intermodulation interference arises between the two directions, a guard band – the so-called&amp;amp;nbsp; &#039;&#039;duplex spacing&#039;&#039; – lies between the uplink and downlink bands.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1193__Bei_T_3_2_S2.png|center|frame|Frequency ranges of the standardized GSM systems]]&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
In the&amp;amp;nbsp; $\text{GSM 900}$&amp;amp;nbsp; system&amp;amp;nbsp; (in Germany:&amp;amp;nbsp; D network)&amp;amp;nbsp; the uplink starts at&amp;amp;nbsp; $\text{890 MHz}$&amp;amp;nbsp;  and the downlink at&amp;amp;nbsp; $\text{935 MHz}$. &lt;br /&gt;
*The duplex spacing is thus&amp;amp;nbsp; $\text{45 MHz}$. &lt;br /&gt;
*Both the uplink and the downlink have a bandwidth of&amp;amp;nbsp; $\text{25 MHz}$. &lt;br /&gt;
*Subtracting the guard bands of&amp;amp;nbsp; $\text{100 kHz}$&amp;amp;nbsp; at each of the two band edges leaves&amp;amp;nbsp; $\text{24.8 MHz}$, which are divided into&amp;amp;nbsp; $124$&amp;amp;nbsp; FDMA channels of&amp;amp;nbsp; $\text{200 kHz}$&amp;amp;nbsp; each.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The $\text{DCS band}$&amp;amp;nbsp; (E network) around&amp;amp;nbsp; $\text{1800 MHz}$&amp;amp;nbsp; has a duplex spacing of&amp;amp;nbsp; $\text{95 MHz}$&amp;amp;nbsp; and a bandwidth of&amp;amp;nbsp; $\text{75 MHz}$ in each direction. &lt;br /&gt;
*Taking the guard bands into account, this yields&amp;amp;nbsp; $374$&amp;amp;nbsp; FDMA channels of&amp;amp;nbsp; $\text{200 kHz}$ each.}}&lt;br /&gt;
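The channel counts in this example follow directly from the band arithmetic; a minimal sketch (the function name is my own):

```python
def fdma_channels(band_mhz, guard_khz=100.0, channel_khz=200.0):
    # Usable bandwidth after removing the guard band at both band edges
    usable_khz = band_mhz * 1000.0 - 2.0 * guard_khz
    return int(usable_khz / channel_khz)
```

For the 25 MHz GSM 900 band this gives 124 channels, for the 75 MHz DCS band 374 channels, as stated above.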
 	 &lt;br /&gt;
&lt;br /&gt;
== Realization of FDMA and TDMA==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:P_ID1194__Bei_T_3_2_S3_v1.png|right|frame|Interplay of FDMA and TDMA in GSM]]&lt;br /&gt;
The GSM system uses two multiple-access methods in parallel:&lt;br /&gt;
*frequency multiplexing&amp;amp;nbsp; (&#039;&#039;Frequency Division Multiple Access&#039;&#039;, FDMA)&amp;amp;nbsp; and&lt;br /&gt;
*time multiplexing&amp;amp;nbsp; (&#039;&#039;Time Division Multiple Access&#039;&#039;, TDMA).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The diagram and the description apply to the&amp;amp;nbsp; $\text{GSM 900}$ system, known in Germany as the D network. &lt;br /&gt;
&lt;br /&gt;
Comparable statements hold for the other GSM systems. &lt;br /&gt;
&lt;br /&gt;
We also refer here to the section&amp;amp;nbsp; [[Examples_of_Communication_Systems/Funkschnittstelle#GSM.E2.80.93Rahmenstruktur|GSM frame structure]]&amp;amp;nbsp; and to&amp;amp;nbsp; [[Aufgaben:Aufgabe_3.3:_GSM–Rahmenstruktur|Exercise 3.3]].&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
*In both uplink and downlink, the signaling and traffic data are transmitted in parallel in&amp;amp;nbsp; $124$&amp;amp;nbsp; frequency channels, labeled &amp;quot;RFCH1&amp;quot; to &amp;quot;RFCH124&amp;quot;.&lt;br /&gt;
*The center frequency of uplink channel&amp;amp;nbsp; $n$&amp;amp;nbsp; lies at&amp;amp;nbsp; $890 \ {\rm MHz} + n · 0.2 \ {\rm MHz} \ \ ( n = 1$, ... , $124)$.&amp;amp;nbsp; At the upper and lower ends of the&amp;amp;nbsp; $25 \ {\rm MHz}$ band there are guard ranges of&amp;amp;nbsp; $100 \ {\rm kHz}$ each.&lt;br /&gt;
*Downlink channel&amp;amp;nbsp; $n$&amp;amp;nbsp; lies the duplex spacing of&amp;amp;nbsp; $45 \ {\rm MHz}$&amp;amp;nbsp; above uplink channel&amp;amp;nbsp; $n$, at&amp;amp;nbsp; $935 \ {\rm MHz} + n · 0.2 \ {\rm MHz}$. The channels are labeled in the same way as those of the uplink.&lt;br /&gt;
*Each cell is assigned a set of frequencies by&amp;amp;nbsp; &#039;&#039;Cell Allocation&#039;&#039;&amp;amp;nbsp; (CA). Adjacent cells use different frequencies. A subset of the CA is reserved for logical channels. The remaining channels are assigned to a mobile station as its&amp;amp;nbsp; &#039;&#039;Mobile Allocation&#039;&#039;&amp;amp;nbsp; (MA).&lt;br /&gt;
*The latter is used, for example, for frequency hopping&amp;amp;nbsp; (&#039;&#039;Frequency Hopping&#039;&#039;), in which the data are sent over varying frequency channels. This makes the transmission more robust against channel fluctuations. The frequency change usually takes place packet by packet.&lt;br /&gt;
*The individual GSM frequency channels are further subdivided by time multiplexing (TDMA). Each FDMA channel is periodically divided into so-called&amp;amp;nbsp; &#039;&#039;TDMA frames&#039;&#039;, each of which in turn comprises eight time slots.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;time slots&#039;&#039;&amp;amp;nbsp; (TDMA channels) are assigned cyclically to the individual users and each contain a so-called&amp;amp;nbsp;  &#039;&#039;burst&#039;&#039;&amp;amp;nbsp; of&amp;amp;nbsp; $156.25$&amp;amp;nbsp; bit periods. In every TDMA frame, each GSM user has exactly one of the eight time slots at his disposal.&lt;br /&gt;
*The TDMA frames of the uplink are sent with a delay of three time slots relative to those of the downlink. This has the advantage that the same hardware of a mobile station can be used both for transmitting and for receiving a message.&lt;br /&gt;
*The duration of one time slot is&amp;amp;nbsp; $T_{\rm Z} ≈ 577 \ \rm &amp;amp;micro; s$, that of a TDMA frame&amp;amp;nbsp; $4.615 \ \rm ms$. These values follow from the GSM frame structure. A total of&amp;amp;nbsp; $26$&amp;amp;nbsp; TDMA frames are combined into a so-called multiframe of duration&amp;amp;nbsp; $120 \ \rm ms$:&lt;br /&gt;
:$$T_{\rm Z} = \frac{120\,{\rm ms}}{8 \cdot 26} \approx 576.9\,{\rm &amp;amp;micro; s }\hspace{0.05cm}. $$&lt;br /&gt;
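The slot and frame durations quoted above can be checked numerically (variable names are my own):

```python
MULTIFRAME_MS = 120.0        # duration of a 26-frame multiframe in ms
FRAMES_PER_MULTIFRAME = 26
SLOTS_PER_FRAME = 8

frame_ms = MULTIFRAME_MS / FRAMES_PER_MULTIFRAME   # one TDMA frame ≈ 4.615 ms
slot_us = frame_ms / SLOTS_PER_FRAME * 1000.0      # one time slot ≈ 576.9 µs
```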
  	 &lt;br /&gt;
&lt;br /&gt;
==The different burst types in GSM ==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
As just shown, a&amp;amp;nbsp; &#039;&#039;burst&#039;&#039;&amp;amp;nbsp; contains&amp;amp;nbsp; $156.25$&amp;amp;nbsp; bits and has the duration&amp;amp;nbsp; $T_{\rm Z} ≈ 577 \ \rm &amp;amp;micro; s$. &lt;br /&gt;
[[File:P_ID1195__Bei_T_3_2_S4_v1.png|right|frame|The different burst types in GSM]]&lt;br /&gt;
*From this, the bit duration is obtained as&amp;amp;nbsp; $T_{\rm B} ≈ 3.69 \  \rm &amp;amp;micro; s$. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*To avoid bursts overlapping because of different propagation delays between mobile and base station, a&amp;amp;nbsp; &#039;&#039;&#039;guard period&#039;&#039;&#039;&amp;amp;nbsp; (GP) is inserted at the end of each burst. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*This guard interval amounts to&amp;amp;nbsp; $8.25$&amp;amp;nbsp; bit durations, i.e.&amp;amp;nbsp; $8.25 · 3.69 \ {\rm &amp;amp;micro; s} ≈ 30.5 \  {\rm &amp;amp;micro; s}$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Five different types of bursts are distinguished, as the above figure shows:&lt;br /&gt;
*Normal Burst (NB),&lt;br /&gt;
*Frequency Correction Burst  (FB),&lt;br /&gt;
*Synchronization Burst  (SB),&lt;br /&gt;
*Dummy Burst  (DB),&lt;br /&gt;
*Access Burst (AB).&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;normal burst&#039;&#039;&#039;&amp;amp;nbsp; is used to transmit data of traffic and signaling channels. The error-protection-coded user data (blue, two times&amp;amp;nbsp; $57$&amp;amp;nbsp; bits), together with three tail bits at each end (red; during this time the transmit power is regulated), two signaling bits (green) and&amp;amp;nbsp; $26$&amp;amp;nbsp; bits for the training sequence (yellow, required for channel estimation and synchronization), add up to&amp;amp;nbsp; $148$&amp;amp;nbsp; bits. To this is added the guard period of&amp;amp;nbsp; $8.25$ bits&amp;amp;nbsp; (gray).&lt;br /&gt;
&lt;br /&gt;
::The two (green) signaling bits – also called &#039;&#039;stealing flags&#039;&#039; – indicate whether the burst carries only user data or high-priority signaling information, which must always be transmitted without delay. With the aid of the &#039;&#039;training sequence&#039;&#039; the channel can be estimated, which is a prerequisite for using an equalizer to reduce intersymbol interference.&lt;br /&gt;
&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;frequency correction burst&#039;&#039;&#039;&amp;amp;nbsp; is used for the frequency synchronization of a mobile station. All bits except the tail bits and the guard period are set to logical &amp;quot;$0$&amp;quot;. The repeated transmission of such a burst on the&amp;amp;nbsp; &#039;&#039;Frequency Correction Channel&#039;&#039;&amp;amp;nbsp; (FCCH) corresponds to an unmodulated carrier signal of frequency&amp;amp;nbsp; $f_{\rm T} + Δf_{\rm A}$&amp;amp;nbsp; (carrier frequency plus frequency deviation). This value results from the fact that the modulation method&amp;amp;nbsp; [[Examples_of_Communication_Systems/Funkschnittstelle#Gaussian_Minimum_Shift_Keying_.28GMSK.29|Gaussian Minimum Shift Keying]]&amp;amp;nbsp; is a special case of FSK.&lt;br /&gt;
&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;synchronization burst&#039;&#039;&#039;&amp;amp;nbsp; transmits information with which a mobile station synchronizes itself in time with the BTS. Besides a long midamble of&amp;amp;nbsp; $64$&amp;amp;nbsp; bits, the synchronization burst contains the TDMA frame number and the&amp;amp;nbsp; &#039;&#039;Base Transceiver Station Identity Code&#039;&#039;&amp;amp;nbsp; (BSIC). When such a burst is transmitted repeatedly, one speaks of the&amp;amp;nbsp; &#039;&#039;Synchronization Channel&#039;&#039;&amp;amp;nbsp; (SCH).&lt;br /&gt;
&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;dummy burst&#039;&#039;&#039;&amp;amp;nbsp; (DB) is transmitted by each&amp;amp;nbsp; &#039;&#039;Base Transceiver Station&#039;&#039;&amp;amp;nbsp; (BTS) on a frequency specifically allocated to it&amp;amp;nbsp; (&#039;&#039;Cell Allocation&#039;&#039;)&amp;amp;nbsp; whenever there are no other bursts to send. This ensures that a mobile station can perform power measurements at any time.&lt;br /&gt;
&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;access burst&#039;&#039;&#039;&amp;amp;nbsp; is used for random multiple access on the&amp;amp;nbsp; &#039;&#039;Random Access Channel&#039;&#039;&amp;amp;nbsp; (RACH). To keep the probability of collisions on the RACH small, the access burst has a considerably longer&amp;amp;nbsp; &#039;&#039;guard period&#039;&#039;&amp;amp;nbsp; of&amp;amp;nbsp; $68.25$&amp;amp;nbsp; bit durations than the other bursts.&lt;br /&gt;
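The normal-burst bit budget and the guard-period durations from the bullets above can be verified with simple arithmetic (a sketch; all names are my own):

```python
# Normal burst: 2·57 data bits, 2·3 tail bits, 2 stealing flags, 26 training bits
normal_burst_bits = 2 * 57 + 2 * 3 + 2 + 26     # plus the 8.25-bit guard period

bit_us = 576.9 / 156.25            # bit duration ≈ 3.69 µs
guard_normal_us = 8.25 * bit_us    # guard period of most bursts ≈ 30.5 µs
guard_access_us = 68.25 * bit_us   # much longer guard of the access burst
```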
&lt;br /&gt;
	 &lt;br /&gt;
==GSM frame structure == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The GSM frame structure maps the logical channels onto physical channels. A distinction is made between&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;&#039;mapping in frequency&#039;&#039;&#039;, based on&amp;amp;nbsp; &#039;&#039;Cell Allocation&#039;&#039;&amp;amp;nbsp; (CA), &#039;&#039;Mobile Allocation&#039;&#039;&amp;amp;nbsp; (MA), the TDMA frame number (FN) and the rules for (optional)&amp;amp;nbsp; &#039;&#039;Frequency Hopping&#039;&#039;,&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;&#039;mapping in time&#039;&#039;&#039;, in which the TDMA frames of eight time slots each, used for transmitting the bursts, are combined into multiframes, superframes and hyperframes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1196__Bei_T_3_2_S5_v1.png|center|frame|The GSM frame structure ]]&lt;br /&gt;
&lt;br /&gt;
According to this figure, the following statements hold:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Multiframes&#039;&#039;&#039;&amp;amp;nbsp; are used for mapping logical channels onto physical channels. Two kinds are distinguished: those with&amp;amp;nbsp; $26$&amp;amp;nbsp; TDMA frames and a cycle duration of&amp;amp;nbsp; $120 \ \rm ms$, and those with&amp;amp;nbsp; $51$&amp;amp;nbsp; TDMA frames and a duration of&amp;amp;nbsp; $235.4  \ \rm ms$.&lt;br /&gt;
&lt;br /&gt;
*The bursts of the traffic channels (TCH) and of the associated control channels (SACCH, FACCH) are transmitted in&amp;amp;nbsp; $26$&amp;amp;nbsp; consecutive TDMA frames, whereby only one time slot per TDMA frame is considered for the respective multiframe.&lt;br /&gt;
*Of the gross data rate per user&amp;amp;nbsp; $\text{(approx. 33.9 kbit/s)}$, &amp;amp;nbsp; $\text{9.2 kbit/s}$&amp;amp;nbsp; are reserved for synchronization, signaling and the &#039;&#039;guard period&#039;&#039;, and&amp;amp;nbsp; $\text{1.9 kbit/s}$&amp;amp;nbsp; for SACCH and IDLE. With the multiframe structure of&amp;amp;nbsp; $26$&amp;amp;nbsp; frames, the (coded and encrypted) user data thus occupy only&amp;amp;nbsp; $\text{22.8 kbit/s}$.&lt;br /&gt;
*The multiframe structure with&amp;amp;nbsp; $51$&amp;amp;nbsp; frames (right half of the figure) serves to multiplex several logical channels onto one physical channel. All data of the signaling channels (except FACCH and SACCH) are transmitted in 51 consecutive TDMA frames.&lt;br /&gt;
*A&amp;amp;nbsp; &#039;&#039;&#039;superframe&#039;&#039;&#039;&amp;amp;nbsp; consists of&amp;amp;nbsp; $1326$&amp;amp;nbsp; consecutive TDMA frames&amp;amp;nbsp; $(51$&amp;amp;nbsp; multiframes of&amp;amp;nbsp; $26$&amp;amp;nbsp; TDMA frames each, or equivalently&amp;amp;nbsp; $26$&amp;amp;nbsp; multiframes of&amp;amp;nbsp; $51$&amp;amp;nbsp; TDMA frames each$)$ and lasts approx.&amp;amp;nbsp; $6.12$&amp;amp;nbsp; seconds.&lt;br /&gt;
&lt;br /&gt;
*A&amp;amp;nbsp; &#039;&#039;&#039;hyperframe&#039;&#039;&#039;&amp;amp;nbsp; combines&amp;amp;nbsp; $2048$&amp;amp;nbsp; superframes&amp;amp;nbsp; (i.e.&amp;amp;nbsp; $2\hspace{0.08cm}715\hspace{0.08cm}648$&amp;amp;nbsp; TDMA frames)&amp;amp;nbsp; and, with its long cycle duration, is used for synchronizing the encryption of the user data. This duration amounts to&amp;amp;nbsp; $\text{3 hours, 28 minutes and 53.760 seconds}.$ &lt;br /&gt;
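The superframe and hyperframe durations given above follow from the 4.615 ms TDMA frame; a short numerical check (variable names are my own):

```python
frames_per_superframe = 1326                       # 51·26 = 26·51 TDMA frames
hyper_frames = 2048 * frames_per_superframe        # 2 715 648 TDMA frames

superframe_s = frames_per_superframe * 120 / 26 / 1000   # ≈ 6.12 s
hyperframe_s = hyper_frames * 120 / 26 / 1000            # ≈ 12 533.76 s

hours, rem = divmod(hyperframe_s, 3600)
minutes, seconds = divmod(rem, 60)                 # → 3 h, 28 min, 53.76 s
```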
	 	&lt;br /&gt;
 &lt;br /&gt;
==Modulation in GSM systems==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
According to the statements of the last sections, $156.25$&amp;amp;nbsp; bits must be transmitted per time slot&amp;amp;nbsp; $(0.5769 \ \rm ms)$&amp;amp;nbsp; in one frequency channel. This corresponds to a total bit rate (for eight TDMA users, including channel coding, signaling and synchronization information, etc.) of&amp;amp;nbsp; $R_{\rm ges} = 270\hspace{0.08cm}833 \ \rm bit/s$. &lt;br /&gt;
&lt;br /&gt;
In GSM, a bandwidth of&amp;amp;nbsp; $B = 200  \ \rm kHz$&amp;amp;nbsp; is available for this bit rate. A modulation method with a bandwidth efficiency of at least&amp;amp;nbsp; $β =R_{\rm ges}/B = 1.35$&amp;amp;nbsp; is therefore required.&lt;br /&gt;
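The gross bit rate and the required bandwidth efficiency can be recomputed from the slot parameters (a sketch; names are my own):

```python
slot_s = 120 / 26 / 8 / 1000       # exact time-slot duration ≈ 0.5769 ms
r_total = 156.25 / slot_s          # gross bit rate ≈ 270 833 bit/s
beta = r_total / 200e3             # required bandwidth efficiency ≈ 1.35
```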
&lt;br /&gt;
The modulation method used in GSM mobile radio is&amp;amp;nbsp; &#039;&#039;&#039;Gaussian Minimum Shift Keying&#039;&#039;&#039;&amp;amp;nbsp; (GMSK). It was already treated in detail in the chapter&amp;amp;nbsp; [[Modulationsverfahren/Nichtlineare_digitale_Modulation#GMSK_.E2.80.93_Gaussian_Minimum_Shift_Keying|Nonlinear Digital Modulation]]&amp;amp;nbsp; of the book &amp;quot;Modulation Methods&amp;quot;. A brief, bullet-point description follows here:&lt;br /&gt;
*GMSK is a modified form of&amp;amp;nbsp; &#039;&#039;&#039;Frequency Shift Keying&#039;&#039;&#039;&amp;amp;nbsp; (FSK). The latter results when a&amp;amp;nbsp; [[Modulationsverfahren/Frequenzmodulation_(FM)#Realisierung_eines_FM.E2.80.93Modulators|frequency modulator]]&amp;amp;nbsp; is driven with a binary bipolar rectangular input signal.&lt;br /&gt;
*Within each symbol duration&amp;amp;nbsp; $T$, such an FSK signal&amp;amp;nbsp; $s(t)$&amp;amp;nbsp; contains only one single instantaneous frequency&amp;amp;nbsp; $f_{\rm A}(t) = \rm const.$ If the (normalized) input signal equals&amp;amp;nbsp; $+1$, then&amp;amp;nbsp; $f_{\rm A}(t)$&amp;amp;nbsp; is the sum of the carrier frequency&amp;amp;nbsp; $f_{\rm T}$&amp;amp;nbsp; and the frequency deviation&amp;amp;nbsp; $Δf_{\rm A}$. Correspondingly, for the amplitude value&amp;amp;nbsp; $-1$:  &amp;amp;nbsp;  $f_{\rm A}(t) = f_{\rm T} - Δf_{\rm A}$.&lt;br /&gt;
*To allow simple demodulation, the two signals with the frequencies&amp;amp;nbsp; $f_{\rm T} ± Δf_{\rm A}$&amp;amp;nbsp; should be orthogonal to each other within the symbol duration&amp;amp;nbsp; $T$. Consequently, the following must hold:&lt;br /&gt;
&lt;br /&gt;
:$$\int^{T} _{0} \cos \big(2 \pi t \cdot (f_{\rm T}+ \Delta f_{\rm A} )\big)\cdot \cos \big(2 \pi t \cdot (f_{\rm T}- \Delta f_{\rm A} )\big)\,{\rm&lt;br /&gt;
 d}t= 0\hspace{0.05cm}. $$&lt;br /&gt;
 &lt;br /&gt;
:From this, the requirement for the&amp;amp;nbsp; &#039;&#039;&#039;frequency deviation&#039;&#039;&#039;&amp;amp;nbsp; follows:&lt;br /&gt;
&lt;br /&gt;
:$$\Delta f_{\rm A} = \frac{k}{4 \cdot T}\hspace{0.2cm}{\rm&lt;br /&gt;
 with}\hspace{0.2cm}k = 1, 2, 3, \text{...}$$&lt;br /&gt;
 &lt;br /&gt;
*Since for FSK systems the&amp;amp;nbsp; &#039;&#039;&#039;modulation index&#039;&#039;&#039;&amp;amp;nbsp; is defined as&amp;amp;nbsp; $h = 2 · Δf_{\rm A} · T$, it follows that&amp;amp;nbsp; $h = k/2$. The smallest value satisfying the orthogonality condition is thus&amp;amp;nbsp; $h_{\rm min} = 0.5$.&lt;br /&gt;
*An FSK system with&amp;amp;nbsp; $h = 0.5$, i.e.&amp;amp;nbsp; $Δf_{\rm A}$ = ${1}/{4T}$, is called&amp;amp;nbsp; [[Modulationsverfahren/Nichtlineare_digitale_Modulation#MSK_.E2.80.93_Minimum_Shift_Keying|Minimum Shift Keying]]&amp;amp;nbsp; – MSK for short. It is used in all GSM systems, since a modulation index larger than&amp;amp;nbsp; $h = 0.5$&amp;amp;nbsp; would require a considerably larger bandwidth.&lt;br /&gt;
*A very narrow spectrum, however, only results if phase jumps at the symbol boundaries are avoided by adapting the phase. MSK therefore belongs to the&amp;amp;nbsp; &#039;&#039;Continuous Phase Frequency Shift Keying&#039;&#039; methods (&#039;&#039;&#039;CP–FSK&#039;&#039;&#039;, see next section).&lt;br /&gt;
*In addition, a lowpass filter with Gaussian characteristic is inserted before the frequency modulator, which reduces the GSM bandwidth even further. This type of modulation is called&amp;amp;nbsp; [[Examples_of_Communication_Systems/Funkschnittstelle#Gaussian_Minimum_Shift_Keying_.28GMSK.29|Gaussian Minimum Shift Keying]] (&#039;&#039;&#039;GMSK&#039;&#039;&#039;).&lt;br /&gt;
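The orthogonality condition above can be checked numerically for the MSK case $h = 0.5$. Frequencies are given in units of $1/T$; the concrete values are illustrative, and the Riemann-sum integrator is my own helper:

```python
import math

def corr(f1, f2, T=1.0, n=100000):
    # Midpoint Riemann sum of the integral of cos(2π f1 t)·cos(2π f2 t) over 0..T
    dt = T / n
    return dt * sum(math.cos(2 * math.pi * f1 * (k + 0.5) * dt) *
                    math.cos(2 * math.pi * f2 * (k + 0.5) * dt)
                    for k in range(n))

fT = 4.0            # carrier frequency in units of 1/T (illustrative)
dfA = 0.25          # MSK deviation Δf_A = 1/(4T)  ⇒  h = 0.5
val = corr(fT + dfA, fT - dfA)
```

For these values the correlation integral vanishes, confirming that the two signal frequencies are orthogonal over one symbol.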
 	 &lt;br /&gt;
&lt;br /&gt;
==Continuous phase adaptation in FSK  ==	 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Starting from the rectangular signal&amp;amp;nbsp; $q(t)$&amp;amp;nbsp; and the carrier frequency&amp;amp;nbsp; $f_{\rm T} = 4/T$, we consider the FSK signals&amp;amp;nbsp; $s_{\rm A}(t)$, ... ,&amp;amp;nbsp; $s_{\rm D}(t)$&amp;amp;nbsp; for different frequency deviations&amp;amp;nbsp; $Δf_{\rm A}$  &amp;amp;nbsp; ⇒  &amp;amp;nbsp; modulation index $h = 2 · Δf_{\rm A} · T$. &lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1197__Bei_T_3_2_S7_v2.png|center|frame|Example signals for continuous phase adaptation]]&lt;br /&gt;
&lt;br /&gt;
Regarding the signal waveforms, note (we also refer to the interactive applet&amp;amp;nbsp; [[Applets:Frequency_Shift_Keying_%26_Continuous_Phase_Modulation|Frequency Shift Keying &amp;amp; Continuous Phase Modulation]]):&lt;br /&gt;
*The signal&amp;amp;nbsp; $s_{\rm A}(t)$&amp;amp;nbsp; results with&amp;amp;nbsp; $Δf_{\rm A} = 1/T$ &amp;amp;nbsp;  ⇒ &amp;amp;nbsp; modulation index&amp;amp;nbsp; $h = 2$. One recognizes the higher frequency&amp;amp;nbsp; $f_1 = 5/T$&amp;amp;nbsp; $($for $a_ν = +1)$&amp;amp;nbsp; compared with the frequency&amp;amp;nbsp; $f_2 = 3/T$ &amp;amp;nbsp;$($for $a_ν = -1)$.&lt;br /&gt;
*With&amp;amp;nbsp; $Δf_{\rm A} = 0.5/T$&amp;amp;nbsp; $($signal&amp;amp;nbsp; $s_{\rm B}(t)$,&amp;amp;nbsp; $h = 1)$,&amp;amp;nbsp; $f_1 = 4.5/T$&amp;amp;nbsp; and&amp;amp;nbsp; $f_2 = 3.5/T$ hold. A phase jump of&amp;amp;nbsp; $π$&amp;amp;nbsp; occurs at every symbol boundary unless a phase adaptation is carried out as with the signal&amp;amp;nbsp; $s_{\rm C}(t)$.&lt;br /&gt;
*For&amp;amp;nbsp; $s_{\rm C}(t)$, in the range&amp;amp;nbsp; $0$ ... $T$&amp;amp;nbsp; the coefficient&amp;amp;nbsp; $a_1 = +1$&amp;amp;nbsp; is represented by&amp;amp;nbsp; $\cos(2π·f_1·t)$, while the likewise positive coefficient&amp;amp;nbsp; $a_2 = +1$&amp;amp;nbsp; in the range&amp;amp;nbsp; $T$ ... $2T$&amp;amp;nbsp; leads to the signal&amp;amp;nbsp; $-\cos(2π·f_1\hspace{0.01cm}·\hspace{0.01cm}(t-T))$. This adaptation thus avoids phase jumps.&lt;br /&gt;
*The signal&amp;amp;nbsp; $s_{\rm D}(t)$&amp;amp;nbsp; describes the MSK signal&amp;amp;nbsp; $($frequency deviation&amp;amp;nbsp; $Δf_{\rm A} = 0.25/T$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; modulation index&amp;amp;nbsp; $h = 0.5)$, likewise with phase adaptation. Here, depending on the preceding symbols, four different initial phases are possible at each symbol boundary.&lt;br /&gt;
*For&amp;amp;nbsp; $\rm GSM \ 900$&amp;amp;nbsp; the carrier frequency is&amp;amp;nbsp; $f_{\rm T} = 900\ \rm  MHz$&amp;amp;nbsp; and the symbol duration is&amp;amp;nbsp; $T ≈ 3.7 \ \rm &amp;amp;micro; s$. With the modulation index&amp;amp;nbsp; $h = 0.5$&amp;amp;nbsp; one obtains&amp;amp;nbsp; $Δf_{\rm A} ≈ 68 \ \rm kHz$. The two frequencies&amp;amp;nbsp; $f_1 = 900.068\ \rm  MHz$&amp;amp;nbsp; and&amp;amp;nbsp; $f_2 = 899.932 \ \rm   MHz$&amp;amp;nbsp; thus lie very close together.&lt;br /&gt;
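The GSM 900 numbers in the last bullet follow from $h = 0.5$; a short check (variable names are my own):

```python
T = 3.69e-6                  # GSM symbol duration in seconds
h = 0.5                      # MSK modulation index
dfA = h / (2 * T)            # frequency deviation Δf_A ≈ 68 kHz
f1 = 900e6 + dfA             # ≈ 900.068 MHz
f2 = 900e6 - dfA             # ≈ 899.932 MHz
```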
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
== Minimum Shift Keying (MSK) == 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The diagram shows the model for generating MSK modulation and typical signal waveforms. &lt;br /&gt;
&lt;br /&gt;
[[File:P_ID2190__Bei_T_3_2_S6_v1.png|center|frame|Block diagram for generating MSK and corresponding signal waveforms]]&lt;br /&gt;
One recognizes&lt;br /&gt;
&lt;br /&gt;
*at point &amp;amp;nbsp;&#039;&#039;&#039;(1)&#039;&#039;&#039;&amp;amp;nbsp; the digital source signal, consisting of a sequence of Dirac impulses spaced&amp;amp;nbsp; $T$&amp;amp;nbsp; apart, weighted with the amplitude coefficients&amp;amp;nbsp; $a_ν ∈ \{-1, +1\}$:&lt;br /&gt;
&lt;br /&gt;
:$$q_\delta(t) = \sum_{\nu = - \infty}^{+\infty}a_{ \nu} \cdot \delta (t - \nu&lt;br /&gt;
\cdot T)\hspace{0.05cm}; $$&lt;br /&gt;
 &lt;br /&gt;
*at point &amp;amp;nbsp;&#039;&#039;&#039;(2)&#039;&#039;&#039;&amp;amp;nbsp; the rectangular signal&amp;amp;nbsp; $q_{\rm R}(t)$&amp;amp;nbsp; after convolution with the rectangular pulse&amp;amp;nbsp; $g(t)$&amp;amp;nbsp; of duration&amp;amp;nbsp; $T$&amp;amp;nbsp; and height&amp;amp;nbsp; $1/T$&amp;amp;nbsp; (the amplitude was chosen this way for compatibility with later sections):&lt;br /&gt;
&lt;br /&gt;
:$$q_{\rm R}(t) = \sum_{\nu = - \infty}^{+\infty}a_{ \nu} \cdot g (t - \nu&lt;br /&gt;
\cdot T)\hspace{0.05cm}; $$&lt;br /&gt;
 &lt;br /&gt;
*the frequency modulator, which according to the description in the chapter&amp;amp;nbsp; [[Modulationsverfahren/Frequenzmodulation_(FM)#Signalverl.C3.A4ufe_bei_Frequenzmodulation|Signal waveforms in frequency modulation]]&amp;amp;nbsp; can be realized as an integrator followed by a phase modulator. For the signal at point &amp;amp;nbsp;&#039;&#039;&#039;(3)&#039;&#039;&#039;&amp;amp;nbsp; the following then holds:&lt;br /&gt;
&lt;br /&gt;
:$$\phi(t) =  \frac{\pi}{2}\cdot  \int_{0}^{t}&lt;br /&gt;
q_{\rm R}(\tau)\hspace{0.1cm} {\rm d}\tau \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The phase values at multiples of the symbol duration&amp;amp;nbsp; $T$&amp;amp;nbsp; are multiples of&amp;amp;nbsp; $π/2 \ (90^\circ)$, where the modulation index&amp;amp;nbsp; $h = 0.5$&amp;amp;nbsp; valid for MSK is taken into account. The phase trajectory between these points is linear. This yields the MSK signal at point &amp;amp;nbsp;&#039;&#039;&#039;(4)&#039;&#039;&#039;&amp;amp;nbsp; of the block diagram as&lt;br /&gt;
&lt;br /&gt;
:$$s(t)  =  s_0 \cdot \cos \big(2 \pi  f_{\rm T}  \hspace{0.05cm}t +&lt;br /&gt;
 \phi(t)\big) =   s_0 \cdot \cos \big(2 \pi \cdot t \cdot (f_{\rm T}+a_{ \nu} \cdot {\rm \Delta}f_{\rm A} )\big) \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
The realization of&amp;amp;nbsp; &#039;&#039;Minimum Shift Keying&#039;&#039;&amp;amp;nbsp; (MSK)  by a special variant of&amp;amp;nbsp; &#039;&#039;Offset QPSK&#039;&#039;&amp;amp;nbsp; is illustrated by the interactive applet&amp;amp;nbsp; [[Applets:QPSK_und_Offset-QPSK_(Applet)|QPSK und Offset–QPSK]].&lt;br /&gt;
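The phase integration described above – a linear advance of $±π/2$ per symbol – can be sketched as follows (an illustrative helper, evaluating $ϕ$ only at the symbol boundaries):

```python
import math

def msk_phase(symbols):
    # The integrator advances φ by +π/2 per symbol for a_ν = +1
    # and by -π/2 for a_ν = -1 (rectangular frequency pulse, h = 0.5).
    phi = [0.0]
    for a in symbols:
        phi.append(phi[-1] + a * math.pi / 2)
    return phi
```

For example, the sequence $+1, +1, -1$ yields the boundary phases $0, π/2, π, π/2$; all values stay on the $π/2$ grid, as stated above.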
&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
&lt;br /&gt;
==Gaussian Minimum Shift Keying (GMSK)==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
An advantage of MSK over other modulation types is its lower bandwidth requirement. Minor modifications towards&amp;amp;nbsp; [[Modulationsverfahren/Nichtlineare_digitale_Modulation#GMSK_.E2.80.93_Gaussian_Minimum_Shift_Keying|Gaussian Minimum Shift Keying]]&amp;amp;nbsp; – abbreviated GMSK – yield an even narrower spectrum.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1748__Mod_T_4_4_S9_neu.png |center|frame|  Block diagram for generating GMSK and corresponding signal waveforms]]&lt;br /&gt;
&lt;br /&gt;
The block diagram reveals the following differences from MSK (we refer to the interactive applet&amp;amp;nbsp; [[Applets:Frequency_Shift_Keying_%26_Continuous_Phase_Modulation|Frequency Shift Keying &amp;amp; Continuous Phase Modulation]]):&lt;br /&gt;
*The frequency pulse&amp;amp;nbsp; $g(t)$&amp;amp;nbsp; is no longer rectangular like the pulse&amp;amp;nbsp; $g_{\rm R}(t)$, but has flatter edges. Consequently, the phase trajectory at point &amp;amp;nbsp;&#039;&#039;&#039;(3)&#039;&#039;&#039;&amp;amp;nbsp; is also smoother than with the MSK method (see previous section), where&amp;amp;nbsp; $ϕ(t)$&amp;amp;nbsp; rises or falls linearly symbol by symbol.&lt;br /&gt;
*These gentler phase transitions are achieved in GMSK by a&amp;amp;nbsp; &#039;&#039;&#039;Gaussian lowpass filter&#039;&#039;&#039;&amp;amp;nbsp; with the frequency response and impulse response&lt;br /&gt;
&lt;br /&gt;
:$$H_{\rm G}(f) = {\rm e}^{-\pi \hspace{0.05cm} \cdot \hspace{0.05cm} \big({f}/(2 \hspace{0.05cm} \cdot \hspace{0.05cm} f_{\rm G})\big)^2} \hspace{0.2cm}\bullet\!\!-\!\!\!-\!\!\!-\!\!\circ\, \hspace{0.2cm}&lt;br /&gt;
 h_{\rm G}(t) = 2 f_{\rm G} \cdot {\rm e}^{-\pi\hspace{0.05cm} \cdot \hspace{0.05cm} (2 \hspace{0.05cm} \cdot \hspace{0.05cm} f_{\rm G}\hspace{0.05cm} \cdot \hspace{0.05cm} t)^2}\hspace{0.05cm}.$$&lt;br /&gt;
 &lt;br /&gt;
*In GSM, the 3dB cutoff frequency is fixed at&amp;amp;nbsp; $f_{\rm 3dB} = 0.3/T$. As shown in&amp;amp;nbsp; [[Aufgaben:3.4_GMSK–Modulation|Exercise 3.4]], the system-theoretic cutoff frequency is therefore: &lt;br /&gt;
:$$f_{\rm G} ≈ 1.5 · f_{\rm 3dB} = 0.45/T.$$&lt;br /&gt;
*The resulting frequency pulse&amp;amp;nbsp; $g(t)$&amp;amp;nbsp; at point &amp;amp;nbsp;&#039;&#039;&#039;(2)&#039;&#039;&#039;&amp;amp;nbsp; of the block diagram follows from the convolution of the rectangular pulse&amp;amp;nbsp; $g_{\rm R}(t)$&amp;amp;nbsp; with the impulse response&amp;amp;nbsp; $h_{\rm G}(t)$&amp;amp;nbsp; of the Gaussian low-pass:&lt;br /&gt;
&lt;br /&gt;
:$$g(t) =  g_{\rm R}(t) \star h_{\rm G}(t)\hspace{0.05cm}. $$&lt;br /&gt;
 &lt;br /&gt;
*The GMSK-modulated signal&amp;amp;nbsp; $s(t)$&amp;amp;nbsp; no longer has a constant frequency in each section (per symbol duration). &amp;lt;br&amp;gt;However, this difference from MSK is hard to recognize from the signal waveform at point&amp;amp;nbsp; &#039;&#039;&#039;(4)&#039;&#039;&#039; &amp;amp;nbsp;of the block diagram.&lt;br /&gt;
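The steps above (Gaussian low-pass with $f_{\rm G} = 0.45/T$, convolution with the rectangular pulse) can be sketched with a simple discretization. This is an illustrative numerical check (not part of the original text), assuming the normalization $T = 1$ and a grid of 0.01 per sample:

```python
import math

T = 1.0
f_G = 0.45 / T              # system-theoretic cutoff frequency ~ 1.5 * f_3dB
dt = 0.01 * T               # time resolution of the discretization
ts = [k * dt - 5 * T for k in range(int(10 * T / dt) + 1)]

def h_G(t):
    """Impulse response of the Gaussian low-pass: 2*f_G*exp(-pi*(2*f_G*t)^2)."""
    return 2 * f_G * math.exp(-math.pi * (2 * f_G * t) ** 2)

def g_R(t):
    """Rectangular frequency pulse of duration T."""
    return 1.0 if abs(t) <= T / 2 else 0.0

# frequency pulse g(t) = g_R(t) convolved with h_G(t)  (discrete convolution)
g = [dt * sum(g_R(tau) * h_G(t - tau) for tau in ts) for t in ts]

area_hG = dt * sum(h_G(t) for t in ts)   # ~1: the Gaussian low-pass has unit DC gain
area_g = dt * sum(g)                     # ~T: the convolution preserves the pulse area
```

The check illustrates why the phase still advances by $±π/2$ per symbol on average: the Gaussian low-pass has unit DC gain, so the area of $g(t)$ equals the area of $g_{\rm R}(t)$, while the smoothed pulse itself is flatter and wider than the rectangle.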
&lt;br /&gt;
==Advantages and Disadvantages of GMSK==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The most important characteristics of the modulation method&amp;amp;nbsp; &#039;&#039;Gaussian Minimum Shift Keying&#039;&#039;&amp;amp;nbsp; are summarized here. &lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=  &lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp;&lt;br /&gt;
The main advantage of GMSK is its very low bandwidth requirement.}} &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1200__Bei_T_3_2_S10_v2.png|right|frame|Power-spectral densities of QPSK and MSK]]&lt;br /&gt;
The following graph was taken from the book&amp;amp;nbsp; [Kam04]&amp;lt;ref name =&#039;Kam04&#039;&amp;gt;Kammeyer, K.D.: &#039;&#039;Nachrichtenübertragung&#039;&#039;. Stuttgart: B.G. Teubner, 4th edition, 2004.&amp;lt;/ref&amp;gt;.  &lt;br /&gt;
*The left graph shows the logarithmic power-spectral density&amp;amp;nbsp; $10 · \text{lg} \ {\it Φ}_s(f)/{\it Φ}_0$&amp;amp;nbsp; of&amp;amp;nbsp; &#039;&#039;Minimum Shift Keying&#039;&#039;&amp;amp;nbsp; (MSK) compared with&amp;amp;nbsp; &#039;&#039;Quaternary Phase Shift Keying&#039;&#039;&amp;amp;nbsp; (QPSK), where&amp;amp;nbsp; ${\it Φ}_0$&amp;amp;nbsp; was chosen „suitably”. &lt;br /&gt;
*The abscissa shows the normalized frequency&amp;amp;nbsp; $f · T_{\rm B}$. For MSK the bit duration&amp;amp;nbsp; $T_{\rm B}$&amp;amp;nbsp; equals the symbol duration&amp;amp;nbsp; $T$, while for QPSK&amp;amp;nbsp; $T_{\rm B} = T/2$&amp;amp;nbsp; holds. &lt;br /&gt;
*In the right diagram, which refers exclusively to&amp;amp;nbsp; (G)MSK, the abscissa could also be labeled&amp;amp;nbsp; $f · T$. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
The left plot shows: &lt;br /&gt;
*The first zero of the power-spectral density (PSD) occurs for QPSK (dashed curve) at the normalized abscissa value&amp;amp;nbsp; $f · T_{\rm B} = 0.5$, but for MSK only at&amp;amp;nbsp; $f · T_{\rm B} = 0.75$.&lt;br /&gt;
*Beyond that, however, the PSD of MSK falls off much faster than the asymptotic&amp;amp;nbsp; $f^{-2}$&amp;amp;nbsp; decay of QPSK. &lt;br /&gt;
*Note that the spectral shaping of MSK is based on a cosine pulse, while that of QPSK is based on a rectangular pulse.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The right plot shows the influence of the Gaussian pulse shaping of GMSK on the power-spectral density&amp;amp;nbsp; ${\it Φ}_s(f)$, with the normalized 3dB cutoff frequency as parameter.&lt;br /&gt;
*The smaller&amp;amp;nbsp; $f_{\rm 3\ dB}$&amp;amp;nbsp; is, the narrower the power-spectral density. The GSM standard specifies&amp;amp;nbsp; $f_{\rm 3\ dB} · T = 0.3$. This value already reduces the bandwidth decisively, which leads to lower &amp;amp;nbsp;&#039;&#039;adjacent-channel interference&#039;&#039;.&lt;br /&gt;
*On the other hand, with this cutoff frequency&amp;amp;nbsp; &#039;&#039;intersymbol interference&#039;&#039;&amp;amp;nbsp; already has a serious effect. The eye opening is smaller than&amp;amp;nbsp; $50\%$&amp;amp;nbsp; and suitable equalization must be provided.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Furthermore, it should be noted:&lt;br /&gt;
*Binary FSK – even with continuous phase adaptation – is in general a nonlinear modulation method. Coherent demodulation is therefore actually not possible.&lt;br /&gt;
*An exception is MSK as the special case for the modulation index&amp;amp;nbsp; $h = 0.5$, which can be realized linearly as&amp;amp;nbsp; &#039;&#039;Offset–QPSK&#039;&#039;&amp;amp;nbsp; and can thus also be demodulated coherently.&lt;br /&gt;
*Ignoring intersymbol interference, the&amp;amp;nbsp; &#039;&#039;bit error probability&#039;&#039;&amp;amp;nbsp; is&lt;br /&gt;
:$$p_{\rm B} = {\rm Q} \left(\sqrt{{E_{\rm B}}/{N_0}}\hspace{0.09cm}\right) =&lt;br /&gt;
{1}/{2}\cdot{\rm erfc} \left(\sqrt{{E_{\rm B}}/{2N_0}}\hspace{0.09cm}\right)&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
 &lt;br /&gt;
*Compared to QPSK, this amounts to a degradation of&amp;amp;nbsp; $3\ \rm dB$. The interactive applet&amp;amp;nbsp; [[Applets:Komplementäre_Gaußsche_Fehlerfunktionen|Komplementäre Gaußsche Fehlerfunktionen]]&amp;amp;nbsp; provides the numerical values of the functions&amp;amp;nbsp; ${\rm Q}(x)$&amp;amp;nbsp; and&amp;amp;nbsp; $1/2 \cdot {\rm erfc}(x)$&amp;amp;nbsp; used here. &lt;br /&gt;
*An advantage of GMSK over QPSK is that the envelope remains constant despite the spectral shaping of the basic pulse. Nonlinearities on the channel therefore play a much smaller role than with other modulation methods. This permits simple and inexpensive power amplifiers, lower power consumption, and thus longer operating times for battery-powered devices.&lt;br /&gt;
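The equivalence of the two forms of the bit error probability given above, ${\rm Q}(x) = 1/2 \cdot {\rm erfc}(x/\sqrt{2})$, can be verified numerically; this is a minimal sketch, not part of the original text:

```python
import math

def Q(x):
    """Gaussian tail function, expressed via the complementary error function:
    Q(x) = 1/2 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def p_B(eb_n0):
    """Bit error probability p_B = Q(sqrt(Eb/N0)),
    ignoring intersymbol interference."""
    return Q(math.sqrt(eb_n0))

# the two notations in the text give identical values:
for eb_n0 in (1.0, 4.0, 9.0):
    alt = 0.5 * math.erfc(math.sqrt(eb_n0 / 2))
    assert abs(p_B(eb_n0) - alt) < 1e-15
```

Substituting $x = \sqrt{E_{\rm B}/N_0}$ into ${\rm Q}(x) = 1/2 \cdot {\rm erfc}(x/\sqrt{2})$ directly gives the second form $1/2 \cdot {\rm erfc}\big(\sqrt{E_{\rm B}/(2N_0)}\big)$ of the equation above.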
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==Radio Subsystem Link Control==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another function of the radio interface is the control of the radio link. The so-called&amp;amp;nbsp; &#039;&#039;Radio Subsystem Link Control&#039;&#039;&amp;amp;nbsp; performs the following tasks:&lt;br /&gt;
&lt;br /&gt;
It is responsible for measuring the reception quality. During an established traffic or signaling connection, the mobile station measures the channel at regular intervals with respect to received field strength and bit error rate &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;&#039;Quality Monitoring&#039;&#039;&#039;. These values are transmitted to the base station in a measurement report via the signaling channel SACCH and used there for power control and handover.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Power Control&#039;&#039;&#039;&amp;amp;nbsp; is required so that all mobile stations radiate only the minimum necessary power. The transmit power can be controlled adaptively in steps of&amp;amp;nbsp; $2 \ \rm dB$&amp;amp;nbsp; between&amp;amp;nbsp; $43 \ \rm dBm$&amp;amp;nbsp; $\text{(level 0:}$&amp;amp;nbsp; $20\ \rm  W$)&amp;amp;nbsp; and&amp;amp;nbsp; $13 \ \rm dBm$&amp;amp;nbsp; $\text{(level 15:}$&amp;amp;nbsp; $20\ \rm  mW$). The transmit power of the base stations is also adjusted in steps of&amp;amp;nbsp; $2 \ \rm dB$&amp;amp;nbsp; in order to achieve optimum network capacity. An exception is the BCCH carrier, which is transmitted at constant power so that the mobile stations can carry out comparative measurements of neighboring BCCH carriers.&lt;br /&gt;
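The power levels quoted above can be reproduced with the usual dBm conversion $P = 10^{p_{\rm dBm}/10}\,\rm mW$; the following is an illustrative sketch, not part of the original text:

```python
def dbm_to_watt(p_dbm):
    """Convert a power level in dBm to watts (0 dBm = 1 mW)."""
    return 10 ** (p_dbm / 10) * 1e-3

# 16 control levels in 2 dB steps: level 0 -> 43 dBm, level 15 -> 13 dBm
level_dbm = {level: 43 - 2 * level for level in range(16)}

p_max = dbm_to_watt(level_dbm[0])    # ~20 W
p_min = dbm_to_watt(level_dbm[15])   # ~20 mW
```

This confirms the span given in the text: level 0 corresponds to roughly 20 W and level 15 to roughly 20 mW, a dynamic range of 30 dB in 15 steps of 2 dB each.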
&lt;br /&gt;
[[File:P_ID3110__Bei_T_3_2_S11_v2.png|right|frame|Adaptive Frame Alignment]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Adaptive Frame Alignment&#039;&#039;&#039;&amp;amp;nbsp; – i.e. adaptive frame synchronization – serves to avoid collisions between uplink and downlink data, which the mobile station is to send and receive offset by three time slots. This is shown in the adjacent graph.&lt;br /&gt;
&lt;br /&gt;
The middle area with the yellow background shows the downlink, where the data arrive at the mobile station (MS) later by the time&amp;amp;nbsp; $T_{\rm R}$&amp;amp;nbsp; (&#039;&#039;Round Trip Delay Time&#039;&#039;)&amp;amp;nbsp; than they were sent by the&amp;amp;nbsp; &#039;&#039;Base Transceiver Station&#039;&#039;&amp;amp;nbsp; (BTS) (green marking).&lt;br /&gt;
&lt;br /&gt;
The upper area shows the uplink without &#039;&#039;Timing Advance&#039;&#039;. &lt;br /&gt;
*The MS starts transmitting exactly three time slots after reception (blue marking). &lt;br /&gt;
*Due to the delays in the downlink and uplink, time slot&amp;amp;nbsp; $0$&amp;amp;nbsp; does not reach the BTS at the required time&amp;amp;nbsp; $3T_{\rm Z}$, but later by&amp;amp;nbsp; $2T_{\rm R}$&amp;amp;nbsp; (red marking). &lt;br /&gt;
*With &#039;&#039;Timing Advance&#039;&#039; in the uplink (lower sketch), this delay is already compensated by the mobile station: the data are sent earlier by the time&amp;amp;nbsp; $T_{\rm A} = 2T_{\rm R}$&amp;amp;nbsp; and thus arrive exactly time-synchronously at the BTS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the&amp;amp;nbsp; &#039;&#039;Timing Advance&#039;&#039;, the levels $0 – 63$ are available, where each level corresponds to one bit duration&amp;amp;nbsp; $T_{\rm B}$. &lt;br /&gt;
&lt;br /&gt;
*The maximum&amp;amp;nbsp; &#039;&#039;Timing Advance&#039;&#039;&amp;amp;nbsp; is thus&amp;amp;nbsp; $\rm 63 · 3.7 \ &amp;amp;micro; s ≈ 233 \ &amp;amp;micro;s$, so the maximum permissible propagation delay in one direction is&amp;amp;nbsp; $T_{\rm R} ≈ 116\ {\rm  &amp;amp;micro; s}$. &lt;br /&gt;
*From this, the permitted cell radius of GSM (distance between BTS and MS) can be calculated:&amp;amp;nbsp; &lt;br /&gt;
:$$116\ \rm  &amp;amp;micro; s · 3 · 10^8 \ m/s ≈ 35 \ km.$$ &lt;br /&gt;
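The cell radius calculation above can be retraced step by step; the bit duration of about $3.7 \ \rm µs$ per timing advance level is taken from the text (a minimal sketch, not part of the original):

```python
T_B = 3.7e-6            # GSM bit duration in seconds (one timing advance level)
c = 3e8                 # propagation speed in m/s

t_a_max = 63 * T_B              # maximum timing advance, ~233 microseconds
t_r_max = t_a_max / 2           # maximum one-way delay, ~116 microseconds
cell_radius = t_r_max * c       # ~35 km, as stated in the text

print(round(cell_radius / 1e3, 1), "km")
```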
&lt;br /&gt;
	 	 &lt;br /&gt;
==Exercises for the Chapter==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Aufgaben:3.3_GSM–Rahmenstruktur|Exercise 3.3: GSM Frame Structure]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:3.3Z_GSM_900_und_GSM_1800|Exercise 3.3Z: GSM 900 and GSM 1800]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:3.4_GMSK–Modulation|Exercise 3.4: GMSK Modulation]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:Aufgabe_3.4Z:_FSK_mit_kontinuierlicher_Phase|Exercise 3.4Z: FSK with Continuous Phase]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_GSM&amp;diff=34979</id>
		<title>Examples of Communication Systems/Allgemeine Beschreibung von GSM</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_GSM&amp;diff=34979"/>
		<updated>2020-10-13T15:38:30Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Allgemeine Beschreibung von GSM to Examples of Communication Systems/General Description of GSM&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Examples of Communication Systems/General Description of GSM]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/General_Description_of_GSM&amp;diff=34978</id>
		<title>Examples of Communication Systems/General Description of GSM</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/General_Description_of_GSM&amp;diff=34978"/>
		<updated>2020-10-13T15:38:30Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Allgemeine Beschreibung von GSM to Examples of Communication Systems/General Description of GSM&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=GSM – Global System for Mobile Communications&lt;br /&gt;
|Vorherige Seite=Verfahren zur Senkung der Bitfehlerrate bei DSL&lt;br /&gt;
|Nächste Seite=Funkschnittstelle&lt;br /&gt;
}}&lt;br /&gt;
== # OVERVIEW OF THE THIRD MAIN CHAPTER # ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The currently (2011) leading mobile communications standard worldwide is GSM – &#039;&#039;Global System for Mobile Communications&#039;&#039;. It was developed at the end of the 1980s and operates fully digitally. In 2011 it is used in more than 200 countries, mainly for telephony via mobile phone, but also for short messages (SMS) and for mobile circuit-switched and packet-switched data transmission (HSCSD, GPRS, EDGE).&lt;br /&gt;
&lt;br /&gt;
This chapter covers in detail:&lt;br /&gt;
*the general description of GSM with important definitions of terms,&lt;br /&gt;
*the GSM radio interface and its logical and physical channels,&lt;br /&gt;
*the most important speech coding methods for data compression,&lt;br /&gt;
*the overall GSM transmission model for speech and data transmission,&lt;br /&gt;
*the channel coding used in GSM with&amp;amp;nbsp; &#039;&#039;interleaving&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;encryption&#039;&#039;, and&lt;br /&gt;
*the further developments of GSM such as HSCSD, GPRS, and EDGE. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Origin and History of GSM ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The GSM standard was introduced around 1990 with the aim of offering a uniform pan-European mobile telephone system and network. Data transmission was initially not the focus, but has since been continuously improved in terms of data rate by additional specifications.&lt;br /&gt;
&lt;br /&gt;
Some dates on the historical development of GSM:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1982&#039;&#039;&#039;  &amp;amp;nbsp; At the „Conférence Européenne des Postes et Télécommunications” (CEPT), the&amp;amp;nbsp; &#039;&#039;Groupe Spécial Mobile&#039;&#039;&amp;amp;nbsp; – abbreviated GSM – is established.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1987&#039;&#039;&#039;  &amp;amp;nbsp; A cooperation between $17$ future operators from $15$ European countries is formed and work on the GSM specification begins.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1990&#039;&#039;&#039;  &amp;amp;nbsp; Phase 1 of the GSM 900 specification (for 900 MHz) is completed, and the adaptation for the DCS 1800 system&amp;amp;nbsp; (&#039;&#039;Digital Cellular System&#039;&#039;)&amp;amp;nbsp; around the frequency 1.8 GHz begins.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1992&#039;&#039;&#039;  &amp;amp;nbsp; Most European GSM network operators start commercial operation, initially with voice services only. By the end of 1992, 13 networks in seven countries are already „on air”.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1995&#039;&#039;&#039;  &amp;amp;nbsp; Phase 2 of GSM standardization begins. It includes data, SMS roaming, fax, and adaptations for GSM/PCS 1900, which goes on air in the USA in the same year.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1999&#039;&#039;&#039;  &amp;amp;nbsp; With the introduction of WAP&amp;amp;nbsp; (&#039;&#039;Wireless Application Protocol&#039;&#039;), it becomes possible for the first time to deliver Internet content and other interactive services to mobile devices.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2000&#039;&#039;&#039;  &amp;amp;nbsp; The GPRS extension&amp;amp;nbsp; (&#039;&#039;General Packet Radio Service&#039;&#039;)&amp;amp;nbsp; also improves and simplifies wireless access to packet-switched data networks such as the IP and X.25 protocols.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2000&#039;&#039;&#039;  &amp;amp;nbsp; With Phase 2+, EDGE&amp;amp;nbsp; (&#039;&#039;Enhanced Data Rates for GSM Evolution&#039;&#039;)&amp;amp;nbsp; is introduced at the same time, increasing the data rate by a factor of about $3$ compared to GPRS.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2006&#039;&#039;&#039;  &amp;amp;nbsp; By 2006, the number of network operators in $213$ countries/territories worldwide has risen to $147$, serving more than two billion subscribers. In Germany alone there were already more than $70$ million GSM mobile phones at the end of 2005.&lt;br /&gt;
&lt;br /&gt;
The GSM standards currently (2011) in use are:&lt;br /&gt;
*$\text{GSM 900}$: &amp;amp;nbsp; frequency range around 900 MHz (D networks, in Germany TD1, Vodafone D2),&lt;br /&gt;
*$\text{GSM/DCS 1800}$: &amp;amp;nbsp; frequency range around 1800 MHz (E networks, in Germany all operators),&lt;br /&gt;
*$\text{GSM/PCS 1900}$: &amp;amp;nbsp; frequency range around 1900 MHz (used mainly in the USA).&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==Cellular Structure of GSM ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
A characteristic of GSM is its&amp;amp;nbsp; &#039;&#039;&#039;cellular network structure&#039;&#039;&#039;, which for simple calculations is often described in idealized form by hexagons, as in the left graph. A coverage area can then be served completely with one base station per cell, provided the range of the base station is at least as large as the cell radius.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1180__Bei_T_3_1_S2_v1.png|center|frame|Idealized and realistic GSM cell structure]]&lt;br /&gt;
&lt;br /&gt;
This cellular structure has the following consequences for the GSM system:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;cell radius&#039;&#039;&amp;amp;nbsp; must be chosen smaller, the higher the carrier frequency. In the D network&amp;amp;nbsp; $(f_{\rm T} ≈ 900 \ \rm MHz)$&amp;amp;nbsp; the maximum cell radius is about&amp;amp;nbsp; $35 \ \rm  km$; in the E network it is significantly smaller at&amp;amp;nbsp; $8 \ \rm  km$&amp;amp;nbsp; due to the higher frequency&amp;amp;nbsp; $(f_{\rm T} ≈ 1800 \ \rm MHz)$.&lt;br /&gt;
*A mobile subscriber moving through the area will cross different cells and thus be in contact with different base stations. A non-negligible problem is the so-called&amp;amp;nbsp; &#039;&#039;handover&#039;&#039;&amp;amp;nbsp; when a cell boundary is crossed during a call.&lt;br /&gt;
*If the same carrier frequency is used in all cells, excessive ranges can lead to&amp;amp;nbsp; &#039;&#039;inter-cell interference&#039;&#039;. Therefore, different frequencies are often used in neighboring cells. In the example above, three different frequencies are used, indicated by the colors white, yellow, and blue. This example is based on a &#039;&#039;reuse factor&#039;&#039; of $3$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The right graph shows a more realistic cell layout with cells of different sizes, depending on subscriber density and terrain topology. &lt;br /&gt;
*It also shows that the base station does not always have to be located at the center of the cell. &lt;br /&gt;
*The colors „white” and „red” have no special meaning here.&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==GSM System Architecture and Network Components==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
GSM is a hierarchically structured system of various network components. It has two main parts: the&amp;amp;nbsp; &#039;&#039;mobile stations&#039;&#039;&amp;amp;nbsp; (MS, mobile subscribers) and the&amp;amp;nbsp; &#039;&#039;permanently installed GSM network&#039;&#039;. Each mobile station essentially consists of two units:&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;&#039;Mobile Equipment&#039;&#039;&#039;&amp;amp;nbsp; (ME): Each ME is assigned a unique number, the so-called&amp;amp;nbsp;  &#039;&#039;International Mobile Equipment Identity&#039;&#039;&amp;amp;nbsp; (IMEI).&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;&#039;Subscriber Identity Module&#039;&#039;&#039;&amp;amp;nbsp; (SIM): This is a small processor and memory, protected by a PIN, responsible for the assignment of the user data and for authentication.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1181__Bei_T_3_1_S3_v1.png|right|frame|GSM system architecture and network components]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The graph shows the structure of a so-called&amp;amp;nbsp; &#039;&#039;Public Land Mobile Network&#039;&#039;&amp;amp;nbsp; (PLMN) of GSM, i.e. the GSM system architecture. It is designed for voice transmission, but is also suitable for data transmission to a limited extent. &lt;br /&gt;
&lt;br /&gt;
From this graph one can see:&lt;br /&gt;
*The mobile station (MS) communicates by radio with the nearest&amp;amp;nbsp; &#039;&#039;&#039;Base Transceiver Station&#039;&#039;&#039;&amp;amp;nbsp; (BTS).&lt;br /&gt;
*Several BTS are grouped together by area and subordinated as a unit to a&amp;amp;nbsp; &#039;&#039;&#039;Base Station Controller&#039;&#039;&#039;&amp;amp;nbsp; (BSC).&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Base Station Subsystem&#039;&#039;&#039;&amp;amp;nbsp; (BSS) consists of a large number of BTS and several BSC. In the graph, such a BSS is outlined with a dashed blue border.&lt;br /&gt;
*Each BSC is in turn connected to a&amp;amp;nbsp; &#039;&#039;&#039;Mobile Switching Center&#039;&#039;&#039;&amp;amp;nbsp; (MSC), whose function is comparable to that of a switching node in the fixed network.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The permanently installed GSM infrastructure can be divided into three subnetworks:&lt;br /&gt;
*the &#039;&#039;&#039;Base Station Subsystem&#039;&#039;&#039; (BSS, radio network) &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; see the next page for details,&lt;br /&gt;
*the &#039;&#039;&#039;Switching and Management Subsystem&#039;&#039;&#039; (SMSS, mobile switching network) &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; see the page after next for details, and&lt;br /&gt;
*the &#039;&#039;&#039;Operation and Maintenance Subsystem&#039;&#039;&#039; (OMSS, operation and maintenance). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The OMSS handles subscriber setup, verification of authorizations, blocking of devices, billing, maintenance of the network components, and control of the traffic flow. It includes the following components:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Operation and Maintenance Center&#039;&#039;&#039;&amp;amp;nbsp; (OMC) – outlined in green – monitors part of the entire mobile network and triggers the control functions of the network. It is subdivided into the components&amp;amp;nbsp; &#039;&#039;&#039;OMC-B&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; monitoring of the&amp;amp;nbsp; &#039;&#039;Base Station Controllers&#039;&#039;&amp;amp;nbsp; (BSC), and&amp;amp;nbsp; &#039;&#039;&#039;OMC-S&#039;&#039;&#039;&amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; control of the&amp;amp;nbsp; &#039;&#039;Mobile Switching Centers&#039;&#039;&amp;amp;nbsp; (MSC).&lt;br /&gt;
*Network control can also be centralized in a&amp;amp;nbsp; &#039;&#039;&#039;Network Management Center&#039;&#039;&#039;&amp;amp;nbsp; (NMC), which is superordinate to the OMCs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Other important functions/tasks of the&amp;amp;nbsp; &#039;&#039;Operation and Maintenance Center&#039;&#039;&amp;amp;nbsp; (OMC) are the administration of commercial operation, network configuration, security management, and all maintenance work regarding hardware and software.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Base Station Subsystem (BSS) == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The left part of the following graph shows a&amp;amp;nbsp; &#039;&#039;&#039;Base Station Subsystem&#039;&#039;&#039;, abbreviated BSS. Such a radio network consists of the following network components:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Base Transceiver Station&#039;&#039;&#039;&amp;amp;nbsp; (BTS) provides at least one radio channel each for user traffic and for signaling. Besides the RF part (transmitting and receiving equipment), it contains some components for signal and protocol processing. One or more antennas are connected to the BTS, usually covering a 120° sector.&lt;br /&gt;
*To keep the base station units (BTS) small, the essential control and protocol intelligence is often shifted to the&amp;amp;nbsp; &#039;&#039;&#039;Base Station Controller&#039;&#039;&#039;&amp;amp;nbsp; (BSC). Several BTS may well be controlled by a common BSC.&lt;br /&gt;
*Before the speech signal is handed over to the switching system, the&amp;amp;nbsp; &#039;&#039;&#039;Transcoding &amp;amp; Rate Adaption Unit&#039;&#039;&#039;&amp;amp;nbsp; (TRAU) converts the rate of the GSM speech signal from&amp;amp;nbsp; $\text{13 kbit/s}$ to&amp;amp;nbsp; $\text{64 kbit/s}$. The TRAU also handles the rate adaptation for the data services.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1189__Bei_T_3_1_S4_v1.png|right|frame|GSM:&amp;amp;nbsp; &#039;&#039;Base Station Subsystem&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
Each BTS is assigned various parameters, namely:&lt;br /&gt;
*One or more radio cells are combined into a&amp;amp;nbsp; &#039;&#039;Location Area&#039;&#039;&amp;amp;nbsp; (LA). Each LA receives its own identification number – the so-called&amp;amp;nbsp; &#039;&#039;Location Area Identifier&#039;&#039;&amp;amp;nbsp; (LAI) – which the base station broadcasts regularly on the&amp;amp;nbsp; &#039;&#039;Broadcast Control Channel&#039;&#039;&amp;amp;nbsp; (BCCH).&lt;br /&gt;
*This allows each mobile station to determine its current location via the LAI. When the&amp;amp;nbsp; &#039;&#039;Location Area&#039;&#039;&amp;amp;nbsp; changes, the mobile station requests a&amp;amp;nbsp; &#039;&#039;Location Update&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Further parameters of the&amp;amp;nbsp; &#039;&#039;Base Station Subsystem&#039;&#039;&amp;amp;nbsp; include:&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;Cell Allocation&#039;&#039;&amp;amp;nbsp; (CA): &amp;lt;br&amp;gt;assignment of a set of frequencies to a BTS,&lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;Cell Identifier&#039;&#039;&amp;amp;nbsp; (CI): &amp;lt;br&amp;gt;identification of the individual cells within an LA, &lt;br /&gt;
*the&amp;amp;nbsp; &#039;&#039;Base Transceiver Station Identity Code&#039;&#039;&amp;amp;nbsp; (BSIC): &amp;lt;br&amp;gt;identifier of the base station.&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
==Switching and Management Subsystem (SMSS)  ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The&amp;amp;nbsp; &#039;&#039;&#039;Switching and Management Subsystem&#039;&#039;&#039;&amp;amp;nbsp; (SMSS, mobile switching network) consists of the mobile switching centers (MSC and GMSC) and various databases (VLR, HLR, AUC, EIR, etc.), as the following graph from &amp;amp;nbsp;[BVE99]&amp;lt;ref name =&#039;BVE99&#039;&amp;gt;Bettstetter, C.; Vögel, H.J.; Eberspächer, J.: &#039;&#039;GSM Phase 2+ General Packet Radio Service GPRS: Architecture, Protocols, and Air Interface&#039;&#039;. In: IEEE Communications Surveys &amp;amp; Tutorials, Vol. 2 (1999) No. 3, pp. 2-14.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; shows.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1184__Bei_T_3_1_S5_v5.png|right|frame|GSM:&amp;amp;nbsp; &#039;&#039;Switching and Management Subsystem&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Regarding this diagram, it should be noted:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Mobile Switching Center&#039;&#039;&#039;&amp;amp;nbsp; (MSC) performs the same switching functions as a fixed-network switching node, e.g. routing and signal path switching. In addition, however, an MSC must also take the mobility of the subscribers into account (location registration, handover on cell change, and more).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Gateway Mobile Switching Center&#039;&#039;&#039;&amp;amp;nbsp; (GMSC) is responsible for the connection between the fixed network – for example ISDN – and the mobile network. If, for example, a mobile subscriber is called from the fixed network, the GMSC determines the responsible MSC in the HLR (see below) and forwards the call.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
MSC and GMSC have access to various databases:&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Home Location Register&#039;&#039;&#039;&amp;amp;nbsp; (HLR)&amp;amp;nbsp; is a central register for the subscriber data in a PLMN. It contains permanent data, but also temporary data needed for routing calls to its own mobile subscribers.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Visitor Location Register&#039;&#039;&#039;&amp;amp;nbsp; (VLR)&amp;amp;nbsp; stores the data of all mobile stations currently located in the administrative area of the associated MSC, including subscribers of other network operators.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Authentication Center&#039;&#039;&#039;&amp;amp;nbsp; (AUC) is responsible for storing confidential data and keys.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;Equipment Identity Register&#039;&#039;&#039;&amp;amp;nbsp; (EIR)&amp;amp;nbsp; stores the serial numbers&amp;amp;nbsp; (&#039;&#039;International Mobile Station Equipment Identity&#039;&#039;, IMEI)&amp;amp;nbsp; of the terminals.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zwischen den Datenbanken (VLR, HLR, AUC, etc.) zweier an einer Sprachverbindung beteiligten Mobilvermittlungszentren gibt es einen ständigen Datenabgleich. Hierzu erforderlich sind verschiedene Kennzeichnungen für alle Teilnehmer, zum Beispiel:&lt;br /&gt;
*Die&amp;amp;nbsp; &#039;&#039;Mobile Station Roaming Number&#039;&#039;&amp;amp;nbsp; (MSRN) ist eine temporäre, aufenthaltsabhängige ISDN-Nummer. Sie wird jeder Mobilstation vom lokal zuständigen VLR zugewiesen und vom HLR auf Anfrage an das GMSC weitergeleitet. Damit werden Rufe zu einer Mobilstation geroutet.&lt;br /&gt;
*Die&amp;amp;nbsp; &#039;&#039;Temporary Mobile Subscriber Identity&#039;&#039;&amp;amp;nbsp; (TMSI) ist eine weitere Kennnummer, die nur im Gebiet des VLR gültig ist und anstelle der&amp;amp;nbsp; &#039;&#039;International Mobile Subscriber Identity&#039;&#039;&amp;amp;nbsp; (IMSI) zur Adressierung einer Mobilstation verwendet wird. &lt;br /&gt;
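Der beschriebene Ablauf – das VLR weist eine temporäre MSRN zu, das HLR kennt für jede IMSI das momentan zuständige VLR und liefert die MSRN auf Anfrage an das GMSC – lässt sich als minimale Python-Skizze andeuten. Alle Namen, Nummern und Datenstrukturen darin sind hypothetisch und dienen nur der Illustration des Prinzips:&lt;br /&gt;

```python
# Minimale Skizze des Ruf-Routings über HLR und VLR (hypothetische Daten).
# Das VLR weist der Mobilstation eine temporäre MSRN zu; das HLR kennt für
# jede IMSI das aktuell zuständige VLR und liefert die MSRN an das GMSC.

vlr_a = {"msrn_pool": ["+49-170-0001", "+49-170-0002"], "assigned": {}}

def vlr_assign_msrn(vlr, imsi):
    """VLR: temporäre, aufenthaltsabhängige MSRN zuweisen."""
    msrn = vlr["msrn_pool"].pop(0)
    vlr["assigned"][imsi] = msrn
    return msrn

# HLR: permanenter Eintrag, welches VLR momentan zuständig ist
hlr = {"262011234567890": vlr_a}

def gmsc_route_call(hlr, imsi):
    """GMSC: MSRN beim HLR erfragen und den Ruf damit zum MSC routen."""
    vlr = hlr[imsi]
    return vlr_assign_msrn(vlr, imsi)

msrn = gmsc_route_call(hlr, "262011234567890")
```

Die Skizze zeigt nur die Rollenverteilung der Register; die tatsächlichen Signalisierungsprotokolle (MAP über SS7) sind deutlich komplexer.&lt;br /&gt;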
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=  &lt;br /&gt;
$\text{Beispiel 1:}$&amp;amp;nbsp;&lt;br /&gt;
Wir betrachten das Mobilfunknetz eines Betreibers&amp;amp;nbsp; $\rm A$, dessen Kunde der Teilnehmer&amp;amp;nbsp; &#039;&#039;&#039;1&#039;&#039;&#039;&amp;amp;nbsp; ist. &lt;br /&gt;
*Das&amp;amp;nbsp; &#039;&#039;Visitor Location Register&#039;&#039;&amp;amp;nbsp; von Betreiber&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; – abgekürzt VLR(A) – enthält Informationen zum genauen Aufenthalt (In welcher Zelle? Welches BTS?) aller Teilnehmer. &lt;br /&gt;
*Für diesen Teilnehmer&amp;amp;nbsp; &#039;&#039;&#039;1&#039;&#039;&#039;&amp;amp;nbsp; stimmt der Eintrag im&amp;amp;nbsp; &#039;&#039;Home Location Register&#039;&#039;&amp;amp;nbsp; HLR(A) mit VLR(A) überein. So erkennt Betreiber&amp;amp;nbsp; $\rm A$, dass Teilnehmer&amp;amp;nbsp; &#039;&#039;&#039;1&#039;&#039;&#039;&amp;amp;nbsp; sein Kunde ist, und es wird eine Verbindung hergestellt.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Der Teilnehmer&amp;amp;nbsp; &#039;&#039;&#039;2&#039;&#039;&#039;&amp;amp;nbsp; ist Kunde eines anderen Betreibers&amp;amp;nbsp; $\rm B$, der sich momentan per „Roaming“ im Netz&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; befindet. &lt;br /&gt;
*Das&amp;amp;nbsp; &#039;&#039;Visitor Location Register&#039;&#039;&amp;amp;nbsp; von Betreiber&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; – abgekürzt VLR(A) – enthält Informationen zum genauen Aufenthalt des fremden Teilnehmers&amp;amp;nbsp; &#039;&#039;&#039;2&#039;&#039;&#039;&amp;amp;nbsp; und eine Kopie von HLR(B) des Betreibers&amp;amp;nbsp; $\rm B$. &lt;br /&gt;
*Der Betreiber&amp;amp;nbsp; $\rm A$&amp;amp;nbsp; erkennt so diesen fremden Kunden und erteilt ihm die Freigabe für Roaming in seinem Netz&amp;amp;nbsp; $\rm A$. Voraussetzung ist allerdings, dass zwischen den Netzbetreibern ein Roaming–Vertrag besteht.}}&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==Von GSM bereitgestellte Dienste  == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Die GSM-Dienste sind in die drei Kategorien aufgeteilt:&lt;br /&gt;
*&#039;&#039;&#039;Bearer Services&#039;&#039;&#039;&amp;amp;nbsp; – Trägerdienste,&lt;br /&gt;
*&#039;&#039;&#039;Teleservices&#039;&#039;&#039;&amp;amp;nbsp; – Tele(matik)dienste,&lt;br /&gt;
*&#039;&#039;&#039;Supplementary Services&#039;&#039;&#039;&amp;amp;nbsp; – Zusatzdienste.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Träger– und Teledienste fasst man unter dem Oberbegriff „Telekommunikationsdienste” zusammen. Deshalb muss jedes&amp;amp;nbsp; &#039;&#039;Public Land Mobile Network&#039;&#039;&amp;amp;nbsp; (PLMN) die entsprechende Festnetz–Infrastruktur und eine Netzübergangsvermittlungsfunktion&amp;amp;nbsp; (&#039;&#039;Interworking Function&#039;&#039;, IWF)&amp;amp;nbsp; zur Verfügung stellen.&lt;br /&gt;
&lt;br /&gt;
Die&amp;amp;nbsp; &#039;&#039;&#039;Trägerdienste&#039;&#039;&#039;&amp;amp;nbsp; sind für die Datenübertragung grundlegend. Sie stellen die notwendigen technischen Einrichtungen zum gesicherten Transport der Nutzdaten bereit. Zu den reinen Transportdiensten gehören:&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1185__Bei_T_3_1_S6_v1.png|right|frame|Klassifizierung der GSM&amp;amp;ndash;Dienste]]&lt;br /&gt;
*synchrone leitungsvermittelte Datenübertragung&amp;amp;nbsp; &amp;lt;br&amp;gt;(mit 2400, 4800 oder 9600 bit/s),&lt;br /&gt;
*asynchrone leitungsvermittelte Datenübertragung&amp;amp;nbsp; &amp;lt;br&amp;gt;(mit 300 oder 1200 bit/s),&lt;br /&gt;
*synchrone paketvermittelte Datenübertragung&amp;amp;nbsp; &amp;lt;br&amp;gt;(mit 2400, 4800 oder 9600 bit/s),&lt;br /&gt;
*asynchrone paketvermittelte Datenübertragung&amp;amp;nbsp; &amp;lt;br&amp;gt;(mit 300 oder 9600 bit/s).&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
Die Trägerdienste werden dazu noch in zwei verschiedene Modi unterteilt:&lt;br /&gt;
*Im so genannten&amp;amp;nbsp; &#039;&#039;transparenten Modus&#039;&#039;&amp;amp;nbsp; besteht eine durch Vorwärtsfehlerkorrektur gesicherte Verbindung zwischen Endgerät und MSC. Dieser Modus ist durch eine konstante Bitrate, eine konstante Übertragungsverzögerung und – abhängig vom jeweiligen Kanalzustand – eine schwankende Bitfehlerhäufigkeit gekennzeichnet.&lt;br /&gt;
*Dagegen basiert der&amp;amp;nbsp; &#039;&#039;nichttransparente Modus&#039;&#039;&amp;amp;nbsp; auf dem&amp;amp;nbsp; &#039;&#039;Radio Link Protocol&#039;&#039;&amp;amp;nbsp; (RLP). Durch ein zusätzliches ARQ–Verfahren&amp;amp;nbsp; (&#039;&#039;Automatic Repeat Request&#039;&#039;)&amp;amp;nbsp;  werden Blöcke mit zu vielen Bitfehlern zur Wiederübertragung angefordert, so dass die Netto–Bitrate und die Verzögerung stark von den Übertragungsbedingungen abhängen.&lt;br /&gt;
&lt;br /&gt;
==Die Teledienste von GSM  == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Die zweite Kategorie der GSM-Dienste sind&amp;amp;nbsp; &#039;&#039;&#039;Teledienste&#039;&#039;&#039;. Diese sind Ende-zu-Ende-Dienste, für die in der Regel keine Netzübergangsumsetzung&amp;amp;nbsp; (&#039;&#039;Interworking Function&#039;&#039;, IWF)&amp;amp;nbsp; erforderlich ist. In obiger Grafik bezeichnet „MS–TE“ das Terminal–Equipment der Mobilstation.&lt;br /&gt;
&lt;br /&gt;
Die wichtigsten Teledienste sind:&lt;br /&gt;
*der&amp;amp;nbsp; &#039;&#039;&#039;Telefondienst&#039;&#039;&#039;. Dieser Basisdienst für die Übertragung digital–codierter Sprachsignale benutzt eine bidirektionale sowie symmetrische Punkt-zu-Punkt-Verbindung und bietet so genannte „Services” an, wie zum Beispiel Anrufumleitung, Anrufsperre und geschlossene Benutzergruppen;&lt;br /&gt;
*der&amp;amp;nbsp; &#039;&#039;&#039;Faxdienst&#039;&#039;&#039;, der zur Übertragung der Daten einen transparenten Trägerdienst nutzt;&lt;br /&gt;
*der&amp;amp;nbsp; &#039;&#039;&#039;Kurznachrichtendienst&#039;&#039;&#039; (englisch:&amp;amp;nbsp; &#039;&#039;Short Message Service&#039;&#039;, SMS), der von GSM seit 1996 bereitgestellt wird. Hiermit können Nachrichten mit einem verbindungslosen paketvermittelten Protokoll von oder zu einer Mobilstation übertragen werden. Hierzu muss ein Netzbetreiber ein Dienstzentrum&amp;amp;nbsp; (&#039;&#039;Service Center&#039;&#039;)&amp;amp;nbsp; einrichten.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Man unterscheidet zwei Typen von Kurznachrichten:&lt;br /&gt;
*&#039;&#039;&#039;Punkt-zu-Punkt-Nachrichten&#039;&#039;&#039;&amp;amp;nbsp; zwischen den Mobilstationen und einer Vermittlungsstelle mit einer maximalen Länge von 160 alphanumerischen Zeichen,&lt;br /&gt;
*&#039;&#039;&#039;Short Message Service Cell Broadcast&#039;&#039;&#039;&amp;amp;nbsp; (SMSCB). Diese Nachrichten werden nur in einem begrenzten, regionalen Gebiet ausgestrahlt und können von der Mobilstation nur im Ruhezustand empfangen werden. Die Länge ist auf 93 Zeichen beschränkt.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Die&amp;amp;nbsp; &#039;&#039;&#039;Zusatzdienste&#039;&#039;&#039;&amp;amp;nbsp; als dritte Kategorie der GSM–Dienste modifizieren und ergänzen die Funktionalität eines GSM–Telekommunikationsdienstes. GSM der Phase 1 bietet die gleichen Zusatzdienste an wie&amp;amp;nbsp; [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_ISDN|ISDN]], beispielsweise Anrufanzeige, Rufumleitung&amp;amp;nbsp; (&#039;&#039;Call Forwarding&#039;&#039;)&amp;amp;nbsp; und Rufnummernsperre&amp;amp;nbsp; (&#039;&#039;Call Restriction&#039;&#039;).&lt;br /&gt;
&lt;br /&gt;
Neuere GSM–Dienste der Phase 2+ sind:&lt;br /&gt;
*[[Examples_of_Communication_Systems/Weiterentwicklungen_des_GSM#High_Speed_Circuit.E2.80.93Switched_Data_.28HSCSD.29|High Speed Circuit-Switched Data]]&amp;amp;nbsp; (HSCSD, Leitungsdatendienst),&lt;br /&gt;
*[[Examples_of_Communication_Systems/Weiterentwicklungen_des_GSM#General_Packet_Radio_Service_.28GPRS.29|General Packet Radio Service]]&amp;amp;nbsp; (GPRS, Paketdatendienst), sowie&lt;br /&gt;
*[[Examples_of_Communication_Systems/Weiterentwicklungen_des_GSM#Enhanced_Data_Rates_for_GSM_Evolution|Enhanced Data Rates for GSM Evolution]]&amp;amp;nbsp; (EDGE, höherratige Datenübertragung).&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==Aufgaben zum Kapitel== 	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[3.1_GSM–Netzkomponenten|Aufgabe 3.1: GSM&amp;amp;ndash;Netzkomponenten]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgaben:3.2_GSM–Dienste|Aufgabe 3.2: GSM–Dienste]]&lt;br /&gt;
&lt;br /&gt;
==Quellenverzeichnis==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Verfahren_zur_Senkung_der_Bitfehlerrate_bei_DSL&amp;diff=34977</id>
		<title>Examples of Communication Systems/Verfahren zur Senkung der Bitfehlerrate bei DSL</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Verfahren_zur_Senkung_der_Bitfehlerrate_bei_DSL&amp;diff=34977"/>
		<updated>2020-10-13T15:38:10Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Verfahren zur Senkung der Bitfehlerrate bei DSL to Examples of Communication Systems/Methods to Reduce the Bit Error Rate in DSL&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Examples of Communication Systems/Methods to Reduce the Bit Error Rate in DSL]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Methods_to_Reduce_the_Bit_Error_Rate_in_DSL&amp;diff=34976</id>
		<title>Examples of Communication Systems/Methods to Reduce the Bit Error Rate in DSL</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/Methods_to_Reduce_the_Bit_Error_Rate_in_DSL&amp;diff=34976"/>
		<updated>2020-10-13T15:38:09Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/Verfahren zur Senkung der Bitfehlerrate bei DSL to Examples of Communication Systems/Methods to Reduce the Bit Error Rate in DSL&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=DSL – Digital Subscriber Line&lt;br /&gt;
|Vorherige Seite=xDSL als Übertragungstechnik&lt;br /&gt;
|Nächste Seite=Allgemeine Beschreibung von GSM&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Übertragungseigenschaften von Kupferkabeln  ==	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Wie schon im Kapitel&amp;amp;nbsp; [[Examples_of_Communication_Systems/Allgemeine_Beschreibung_von_DSL|Allgemeine  Beschreibung von DSL]]&amp;amp;nbsp; erwähnt, sind im Telefonleitungsnetz der Deutschen Telekom vorwiegend Kupfer–Doppeladern mit einem Durchmesser von&amp;amp;nbsp; $\text{0.4 mm}$&amp;amp;nbsp; verlegt. Der Teilnehmeranschlussbereich&amp;amp;nbsp; $\rm (TAL)$&amp;amp;nbsp; – häufig auch als „Last Mile” bezeichnet – ist in drei Segmente gegliedert: &lt;br /&gt;
*das Hauptkabel, &lt;br /&gt;
*das Verzweigungskabel, &lt;br /&gt;
*das Hausanschlusskabel. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Die Leitungslänge beträgt im Durchschnitt weniger als vier Kilometer. In den Städten ist die Kupferleitung in&amp;amp;nbsp; $90\%$&amp;amp;nbsp; aller Fälle kürzer als&amp;amp;nbsp; $\text{2.8 km}$.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1954__Bei_T_2_4_S1a_v1.png|center|frame|Aufbau des Teilnehmeranschlussbereichs]]&lt;br /&gt;
&lt;br /&gt;
Die hier besprochenen&amp;amp;nbsp; $\rm xDSL$–Varianten wurden speziell für den Einsatz auf solchen symmetrischen Kupfer–Doppeladern im Kabelverbund entwickelt. Um die technischen Anforderungen an die xDSL–Systeme besser verstehen zu können, muss ein genauer Blick auf die Übertragungseigenschaften und Störungen auf den Leiterpaaren gerichtet werden. Dieses Thema wurde schon im vierten Hauptkapitel  &amp;amp;nbsp;&#039;&#039;Eigenschaften elektrischer Leitungen&#039;&#039;&amp;amp;nbsp; des Buches&amp;amp;nbsp; [[Lineare zeitinvariante Systeme]]&amp;amp;nbsp; ausführlich behandelt und wird deshalb hier nur kurz anhand des&amp;amp;nbsp; [[Linear_and_Time_Invariant_Systems/Einige_Ergebnisse_der_Leitungstheorie#Wellenwiderstand_und_Reflexionen|Ersatzschaltbildes]]&amp;amp;nbsp; zusammengefasst:&lt;br /&gt;
*Die Leitungsübertragungseigenschaften werden durch den&amp;amp;nbsp; &#039;&#039;Wellenwiderstand&#039;&#039;&amp;amp;nbsp; $Z_{\rm W}(f)$&amp;amp;nbsp; und das Übertragungsmaß&amp;amp;nbsp; $γ(f)$&amp;amp;nbsp; vollständig charakterisiert. Beide Größen sind im allgemeinen komplex.&lt;br /&gt;
*Das&amp;amp;nbsp; &#039;&#039;Dämpfungsmaß&#039;&#039;&amp;amp;nbsp; $α(f)$&amp;amp;nbsp; ist der Realteil des Übertragungsmaßes und beschreibt die Dämpfung der sich entlang der Leitung ausbreitenden Welle; $α(f)$&amp;amp;nbsp; ist eine gerade Funktion der Frequenz.&lt;br /&gt;
*Der ungerade Imaginärteil&amp;amp;nbsp; $β(f)$&amp;amp;nbsp; des komplexen Übertragungsmaßes heißt&amp;amp;nbsp; &#039;&#039;Phasenmaß&#039;&#039;&amp;amp;nbsp; und gibt die Phasendrehung der Signalwelle entlang der Leitung an.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1955__Bei_T_2_4_S1b_v1.png|right|frame|Dämpfungsmaß von Kupfer–Doppeladern]] &lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Beispiel 1:}$&amp;amp;nbsp; Wir betrachten beispielhaft das rechts dargestellte Dämpfungsmaß, das auf empirische Untersuchungen der Deutschen Telekom zurückgeht. &lt;br /&gt;
&lt;br /&gt;
Die Kurven ergaben sich durch Mittelung über eine große Anzahl gemessener Leitungen von einem Kilometer Länge im Frequenzbereich bis&amp;amp;nbsp; $\text{30 MHz}$. Man erkennt:&lt;br /&gt;
* Das Dämpfungsmaß&amp;amp;nbsp; $α(f)$&amp;amp;nbsp; steigt etwa proportional mit der Wurzel der Frequenz an und wird mit steigendem Leiterdurchmesser&amp;amp;nbsp; $d$&amp;amp;nbsp; geringer.&lt;br /&gt;
* Die Dämpfungsfunktion&amp;amp;nbsp; $a(f)$&amp;amp;nbsp; steigt linear mit der Kabellänge&amp;amp;nbsp; $l$&amp;amp;nbsp; an: &lt;br /&gt;
:$$a(f) = α(f) · l.$$&lt;br /&gt;
&lt;br /&gt;
Beachten Sie den Unterschied zwischen &lt;br /&gt;
*&amp;amp;bdquo;$a$&amp;amp;rdquo; (für die Dämpfungsfunktion) und &lt;br /&gt;
*&amp;amp;bdquo;$\alpha$&amp;amp;rdquo; (für das Dämpfungsmaß, bezogen auf die Länge).}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Für den Leitungsdurchmesser&amp;amp;nbsp; $\text{0.4 mm}$&amp;amp;nbsp; wurde in&amp;amp;nbsp; [PW95]&amp;lt;ref name =&#039;PW95&#039;&amp;gt;Pollakowski, M.; Wellhausen, H.W.: &#039;&#039;Eigenschaften symmetrischer Ortsanschlusskabel im Frequenzbereich bis 30 MHz&#039;&#039;. Mitteilung aus dem Forschungs- und Technologiezentrum der Deutschen Telekom AG, Darmstadt, Verlag für Wissenschaft und Leben Georg Heidecker, 1995.&amp;lt;/ref&amp;gt;&amp;amp;nbsp; eine empirische Näherungsformel für das Dämpfungsmaß angegeben:&lt;br /&gt;
&lt;br /&gt;
:$$\alpha(f) =  \left [ 5.1 + 14.3 \cdot \left (\frac{f}{\rm 1\,MHz}\right )^{0.59} \right ] \frac{\rm dB}{\rm km}&lt;br /&gt;
 \hspace{0.05cm}.$$&lt;br /&gt;
 &lt;br /&gt;
Wertet man diese Gleichung aus, so können folgende beispielhafte Werte genannt werden:&lt;br /&gt;
*Die Dämpfung&amp;amp;nbsp; $a(f)$&amp;amp;nbsp; einer Kupfer–Doppelader der Länge&amp;amp;nbsp; $l = 1 \ \rm km$&amp;amp;nbsp; mit Durchmesser&amp;amp;nbsp; $0.4 \ \rm mm$&amp;amp;nbsp; beträgt für die Signalfrequenz&amp;amp;nbsp; $10\ \rm  MHz$&amp;amp;nbsp; etwas mehr als&amp;amp;nbsp; $60\ \rm  dB$. &lt;br /&gt;
*Bei doppelter Frequenz&amp;amp;nbsp; $(20 \ \rm  MHz)$&amp;amp;nbsp; steigt der Dämpfungswert auf über&amp;amp;nbsp; $90 \ \rm  dB$. Es zeigt sich, dass die Dämpfung nicht exakt mit der Wurzel der Frequenz ansteigt, wie es bei alleiniger Betrachtung des Skin–Effekts der Fall wäre, da auch verschiedene andere Effekte zur Dämpfung beitragen.&lt;br /&gt;
*Wird die Kabellänge auf&amp;amp;nbsp; $l = 2 \ \rm  km$&amp;amp;nbsp; verdoppelt, so erreicht die Dämpfung einen Wert von mehr als&amp;amp;nbsp; $120 \ \rm  dB$&amp;amp;nbsp; $($bei&amp;amp;nbsp; $10 \ \rm  MHz)$,  was einem Amplitudendämpfungsfaktor kleiner als&amp;amp;nbsp; $10^{-6}$&amp;amp;nbsp; entspricht.&lt;br /&gt;
*Durch die Frequenzabhängigkeit von&amp;amp;nbsp; $α(f)$&amp;amp;nbsp; und&amp;amp;nbsp; $β(f)$&amp;amp;nbsp; kommt es sowohl zu&amp;amp;nbsp; &#039;&#039;Intersymbolinterferenzen&#039;&#039;&amp;amp;nbsp; $\rm (ISI)$&amp;amp;nbsp; als auch zu&amp;amp;nbsp; &#039;&#039;Inter&amp;amp;ndash;Carrier&amp;amp;ndash;Interferenzen&#039;&#039;&amp;amp;nbsp; $\rm (ICI)$. Vorzusehen ist also bei xDSL eine geeignete Entzerrung.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Im Kapitel&amp;amp;nbsp; [[Linear_and_Time_Invariant_Systems/Eigenschaften_von_Kupfer–Doppeladern|Eigenschaften von Kupfer–Doppeladern]]&amp;amp;nbsp; des Buches &amp;amp;bdquo;Lineare zeitinvariante Systeme&amp;amp;rdquo; wird diese Thematik ausführlich behandelt. Wir verweisen auf die beiden interaktiven Applets&amp;amp;nbsp; [[Applets:Dämpfung_von_Kupferkabeln|Dämpfung von Kupferkabeln]]&amp;amp;nbsp; und&amp;amp;nbsp; [[Applets:Zeitverhalten_von_Kupferkabeln|Zeitverhalten von Kupferkabeln]].&lt;br /&gt;
 	 &lt;br /&gt;
==Störungen bei der Übertragung  ==	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Jedes Nachrichtensystem wird durch Rauschen beeinflusst, das meist in erster Linie aus dem thermischen Widerstandsrauschen resultiert. Zusätzlich sind bei einer Zweidrahtleitung noch zu beachten:&lt;br /&gt;
*&#039;&#039;&#039;Reflexionen&#039;&#039;&#039;: &amp;amp;nbsp; Durch die gegenläufige Welle wird die Dämpfung eines Leitungspaares erhöht, was im&amp;amp;nbsp; [[Linear_and_Time_Invariant_Systems/Einige_Ergebnisse_der_Leitungstheorie#Einfluss_von_Reflexionen_.E2.80.93_Betriebsd.C3.A4mpfung|Betriebsdämpfungsmaß]]&amp;amp;nbsp;  der Leitung berücksichtigt wird. Um solche Reflexionen zu verhindern, müsste der Abschlusswiderstand&amp;amp;nbsp;  $Z_{\rm E}(f)$&amp;amp;nbsp;  identisch mit dem (komplexen und frequenzabhängigen) Wellenwiderstand&amp;amp;nbsp;  $Z_{\rm W}(f)$&amp;amp;nbsp;  gewählt werden. Dies ist in der Praxis schwierig. Deshalb werden die Abschlusswiderstände reell und konstant gewählt und die daraus resultierenden Reflexionen – wenn möglich – mit technischen Mitteln bekämpft.&lt;br /&gt;
*&#039;&#039;&#039;Nebensprechen&#039;&#039;&#039;: &amp;amp;nbsp; Dies ist die dominante Störung bei leitungsgebundener Übertragung. Nebensprechen entsteht, wenn es durch induktive und kapazitive Kopplungen zwischen benachbarten Adern eines Kabelbündels zu gegenseitigen Beeinflussungen bei der Signalübertragung kommt.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1956__Bei_T_2_4_S2a_v1.png|right|frame|Zum Entstehen von Nebensprechen]]&lt;br /&gt;
Beim Nebensprechen unterscheidet man zwischen zwei Typen (siehe Grafik):&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Nahnebensprechen&#039;&#039;&#039;&amp;amp;nbsp;  (englisch:&amp;amp;nbsp;  &#039;&#039;Near End Crosstalk&#039;&#039;, NEXT): Der störende Sender und der gestörte Empfänger befinden sich auf der gleichen Seite des Kabels.&lt;br /&gt;
*&#039;&#039;&#039;Fernnebensprechen&#039;&#039;&#039;&amp;amp;nbsp;  (englisch:&amp;amp;nbsp;  &#039;&#039;Far End Crosstalk&#039;&#039;, FEXT): Der störende Sender und der gestörte Empfänger befinden sich auf den gegenüberliegenden Seiten des Kabels.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Das Fernnebensprechen nimmt mit zunehmender Leitungslänge aufgrund der Dämpfung stark ab, so dass auch bei DSL das Nahnebensprechen dominant ist. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Fazit:}$&amp;amp;nbsp; &lt;br /&gt;
Zusammenfassend lässt sich sagen:&lt;br /&gt;
*Mit steigender Frequenz und abnehmendem Abstand zwischen den Leiterpaaren – wie innerhalb eines Sternvierers – nimmt das Nahnebensprechen zu. Weniger kritisch ist es, wenn sich die Adern in verschiedenen Grundbündeln befinden.&lt;br /&gt;
*Je nach eingesetzter Verseiltechnik, Abschirmung und Fertigungsgenauigkeit des Kabels tritt dieser Effekt unterschiedlich stark auf. Die Leitungslänge spielt dagegen bei Nahnebensprechen keine Rolle: &amp;amp;nbsp; Der eigene Sender wird durch das Kabel nicht gedämpft.&lt;br /&gt;
*Durch geschickte Belegung  kann man  das Nebensprechen signifikant reduzieren, zum Beispiel, indem man benachbarte Doppeladern mit verschiedenen Diensten belegt, die unterschiedliche und möglichst wenig überlappende Frequenzbänder nutzen.}}&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==Signal&amp;amp;ndash;zu&amp;amp;ndash;Rausch–Verhältnis, Reichweite und Übertragungsrate == 	 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Zur Bewertung der Qualität eines Übertragungssystems wird meist das Signal–zu–Rausch–Verhältnis&amp;amp;nbsp; (&#039;&#039;Signal–to–Noise Ratio&#039;&#039;, SNR) vor dem Entscheider herangezogen. Dieses ist auch ein Maß für die zu erwartende Bitfehlerrate (BER). &lt;br /&gt;
*Signal und Rauschen im gleichen Frequenzband verringern das SNR und führen zu einer höheren Bitfehlerrate oder – bei vorgegebener Bitfehlerrate – zu einer niedrigeren Übertragungsrate.&lt;br /&gt;
*Die Zusammenhänge zwischen Sendeleistung, Kanalgüte (Kabeldämpfung und Störleistung) sowie erreichbarer Übertragungsrate können sehr gut durch Shannons Kanalkapazitätsformel verdeutlicht werden:&lt;br /&gt;
&lt;br /&gt;
:$$C \left [ \frac{\rm bit}{\rm Symbol} \right ] =  \frac {1}{2} \cdot \log_2 \left ( 1 + \frac{P_{\rm E}}{P_{\rm N}} \right )=&lt;br /&gt;
 \frac {1}{2} \cdot \log_2 \left ( 1 + \frac{\alpha_{\rm K}^2 \cdot P_{\rm S}}{P_{\rm N}} \right ) \hspace{0.05cm}.$$&lt;br /&gt;
 &lt;br /&gt;
Die&amp;amp;nbsp; &#039;&#039;&#039;Kanalkapazität&#039;&#039;&#039;&amp;amp;nbsp; $C$&amp;amp;nbsp; bezeichnet die maximale Übertragungsbitrate, mit der bei idealen Voraussetzungen (unter anderem die bestmögliche Codierung mit unendlicher Blocklänge) übertragen werden kann &amp;amp;nbsp; ⇒ &amp;amp;nbsp;  &#039;&#039;Kanalcodierungstheorem&#039;&#039;. Näheres hierzu finden Sie im vierten Hauptkapitel &#039;&#039;Wertkontinuierliche Informationstheorie&#039;&#039;&amp;amp;nbsp; des Buches&amp;amp;nbsp; [[Informationstheorie]].&lt;br /&gt;
&lt;br /&gt;
Wir gehen davon aus, dass die Bandbreite durch die xDSL–Variante festliegt und dass Nahnebensprechen die dominante Störung ist. Dann kann die Übertragungsrate durch folgende Maßnahmen verbessert werden:&lt;br /&gt;
*Man vergrößert bei gegebener Sendeleistung&amp;amp;nbsp; $P_{\rm S}$&amp;amp;nbsp; und gegebenem Medium (zum Beispiel: &amp;amp;nbsp; Kupfer–Doppeladern mit 0.4 mm Durchmesser) die zur Demodulation nutzbare Empfangsleistung&amp;amp;nbsp; $P_{\rm E}$&amp;amp;nbsp; nur durch eine kürzere Leitungslänge.&lt;br /&gt;
*Man vermindert die Störleistung&amp;amp;nbsp; $P_{\rm N}$, was bei gegebener Bandbreite&amp;amp;nbsp; $B$&amp;amp;nbsp; durch eine erhöhte Nebensprechdämpfung zu erreichen wäre, die wiederum auch vom Übertragungsverfahren auf den benachbarten Leitungspaaren abhängt.&lt;br /&gt;
*Eine Erhöhung der Sendeleistung&amp;amp;nbsp; $P_{\rm S}$&amp;amp;nbsp; wäre hier nicht zielführend, da sich eine größere Sendeleistung gleichzeitig ungünstig auf das Nebensprechen auswirkt. Diese Maßnahme wäre nur bei einem AWGN–Kanal&amp;amp;nbsp; (Beispiel:&amp;amp;nbsp; Koaxialkabel) erfolgreich.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Diese Auflistung zeigt, dass bei xDSL ein direkter Zusammenhang zwischen Reichweite (Leitungslänge), Übertragungsrate und eingesetztem Übertragungsverfahren besteht. Aus der folgenden Grafik, die sich auf Messungen mit 1–DA–xDSL–Verfahren und 0.4 mm–Kupferkabeln bei Versuchssystemen mit realitätsnahen Störbedingungen bezieht, erkennt man deutlich diese Abhängigkeiten.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1957__Bei_T_2_4_S3a_v1.png|right|frame|Reichweite und Gesamtbitrate bei ADSL und VDSL]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Beispiel 2:}$&amp;amp;nbsp; &lt;br /&gt;
Die Grafik zeigt für einige ADSL– und VDSL–Varianten &lt;br /&gt;
*die Reichweite (maximale Kabellänge)&amp;amp;nbsp; $l_{\rm max}$&amp;amp;nbsp; und &lt;br /&gt;
*die Gesamtübertragungsrate&amp;amp;nbsp; $R_{\rm ges}$&amp;amp;nbsp; von Upstream (erste Angabe) &amp;lt;br&amp;gt;und Downstream (zweite Angabe). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Die Gesamtübertragungsrate liegt bei den betrachteten Systemen zwischen&amp;amp;nbsp; $2.2 \ \rm Mbit/s$&amp;amp;nbsp; und&amp;amp;nbsp; $53\ \rm  Mbit/s$. Die Reichweite bezieht sich hier auf eine Kupferdoppelader mit 0.4 mm Durchmesser.&lt;br /&gt;
&lt;br /&gt;
Die Tendenz der Messwerte ist in dieser Grafik als durchgezogene (blaue) Kurve eingezeichnet und kann als grobe Näherung folgendermaßen formuliert werden:&lt;br /&gt;
&lt;br /&gt;
:$$l_{\rm max}\,{\rm \big [in}\,\,{\rm km \big ] } =  \frac {20}{4 + R_{\rm ges}\,{\rm \big [in}\,\,{\rm Mbit/s \big ] } }  \hspace{0.05cm}.$$&lt;br /&gt;
 &lt;br /&gt;
Man erkennt, dass sich die Reichweite aller derzeitigen Systeme (etwa zwischen einem halben und dreieinhalb Kilometer Leitungslänge) von dieser Faustformel um maximal $±25\%$ unterscheidet (gestrichelte Kurven).}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Beispiel 3:}$&amp;amp;nbsp; &lt;br /&gt;
Im unteren Diagramm sind die Gesamtdatenübertragungsraten von ADSL2+ und VDSL(2) als Funktion der Leitungslänge dargestellt, wobei sich die (unterschiedlich) roten Kurven auf den Downstream und die beiden blauen Kurven auf den Upstream beziehen. Zugrunde liegt ein „worst-case”–Störszenario mit folgenden Randbedingungen:&lt;br /&gt;
[[File:P_ID1965__Bei_T_2_4_S3b_v1.png|right|frame|Übertragungsraten und Kabellängen bei ADSL2+ und VDSL(2)]]&lt;br /&gt;
*Kabelbündel mit 50 Kupferdoppeladern (0.4 mm Durchmesser), PE–isoliert,&lt;br /&gt;
*Ziel–Symbolfehlerrate $10^{-7}$, $6 \ \text{dB}$ Margin (Reserve–SNR, um Ziel–Datenrate zu erreichen),&lt;br /&gt;
*gleichzeitiger Betrieb folgender Übertragungsverfahren: &lt;br /&gt;
**25 mal ADSL2+ über ISDN, &lt;br /&gt;
**14 mal ISDN, viermal SHDSL (1 Mbit/s), &lt;br /&gt;
**je fünfmal SHDSL (2 Mbit/s) und VDSL2 Bandplan 998, sowie &lt;br /&gt;
**zweimal HDSL.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Man erkennt aus dieser Darstellung: &lt;br /&gt;
*Bei kurzen Leitungslängen sind die erzielbaren Übertragungsraten bei  VDSL(2) deutlich höher als bei  ADSL2+. &lt;br /&gt;
*Ab einer Leitungslänge von etwa 1800 Metern ist dagegen ADSL2+ deutlich besser als VDSL(2). &lt;br /&gt;
*Dies ist darauf zurückzuführen, dass VDSL(2) in den unteren Frequenzbändern mit deutlich niedrigerer Sendeleistung arbeitet, um benachbarte Übertragungssysteme weniger zu stören. &lt;br /&gt;
*Mit zunehmender Leitungslänge werden die frequenzmäßig höher angesiedelten Subkanäle wegen der zunehmenden Dämpfung zur Datenübertragung unbrauchbar, was den Absturz der Datenrate erklärt.}}&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
==DSL–Fehlerkorrekturmaßnahmen im Überblick==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Um die Bitfehlerrate der xDSL–Systeme zu senken, wurden in den Spezifikationen eine Reihe von Verfahren geschickt miteinander kombiniert, um den zwei häufigsten Fehlerursachen entgegen zu wirken:&lt;br /&gt;
*Übertragungsfehler aufgrund von Impuls– und Nebensprechstörungen auf der Leitung: &amp;amp;nbsp; &amp;lt;br&amp;gt;Besonders bei hohen Datenraten liegen benachbarte Symbole im QAM–Signalraum eng beieinander, was die Fehlerwahrscheinlichkeit signifikant erhöht.&lt;br /&gt;
*Abschneiden von Signalspitzen aufgrund mangelnder Dynamik der Sendeverstärker (&#039;&#039;Clipping&#039;&#039;): &amp;amp;nbsp; &amp;lt;br&amp;gt;Dieses Abschneiden entspricht ebenfalls einer Impulsstörung und wirkt als zusätzliche farbige Rauschbelastung, die das SNR merkbar verschlechtert.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1959__Bei_T_2_4_S4_v1.png|right|frame|Vollständiges DSL/DMT-System]]&lt;br /&gt;
Beim DMT–Verfahren sind für Fehlerkorrekturmaßnahmen in den Signalprozessoren zwei Pfade realisiert. Die Bitzuordnung zu diesen Pfaden übernimmt ein Multiplexer mit Sync–Kontrolle.&lt;br /&gt;
*Beim&amp;amp;nbsp;  &#039;&#039;&#039;Fast–Path&#039;&#039;&#039;&amp;amp;nbsp;  setzt man auf geringe Wartezeiten (&#039;&#039;Latency&#039;&#039;). &lt;br /&gt;
*Beim&amp;amp;nbsp;  &#039;&#039;&#039;Interleaved–Path&#039;&#039;&#039;&amp;amp;nbsp;  stehen niedrige Bitfehlerraten im Vordergrund. Hier ist die Latenz aufgrund des Einsatzes eines Interleavers größer.&lt;br /&gt;
*Eine duale Latenz bedeutet die gleichzeitige Verwendung beider Pfade. Die &#039;&#039;ADSL Transceiver Units&#039;&#039;&amp;amp;nbsp;  müssen zumindest im Downstream  eine duale Latenz unterstützen.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
Auf den restlichen Kapitelseiten werden für beide Pfade die Fehlerschutzverfahren erörtert. &lt;br /&gt;
&amp;lt;br&amp;gt;(Bei anderen Modulationsverfahren sind die beschriebenen Fehlerschutzmaßnahmen prinzipiell gleich, im Detail jedoch verschieden).&lt;br /&gt;
*Die Übertragungskette beginnt mit dem&amp;amp;nbsp; &#039;&#039;Cyclic Redundancy Check&#039;&#039;&amp;amp;nbsp; (CRC), der eine Prüfsumme über einen Überrahmen bildet, die beim Empfänger ausgewertet wird. &lt;br /&gt;
*Aufgabe des Scramblers ist es, lange Folgen von Einsen und Nullen umzuwandeln, um häufigere Signalwechsel zu erzeugen.&lt;br /&gt;
*Danach folgt die Vorwärtsfehlerkorrektur&amp;amp;nbsp; (&#039;&#039;Forward Error Correction&#039;&#039;, FEC), um empfangsseitig Bytefehler erkennen und eventuell sogar korrigieren zu können. &amp;lt;br&amp;gt;Standard ist bei xDSL eine Reed–Solomon–Codierung, oft kommt zusätzlich die Trellis–Codierung zum Einsatz.&lt;br /&gt;
*Aufgabe des&amp;amp;nbsp; &#039;&#039;Interleavers&#039;&#039;&amp;amp;nbsp; ist es, die empfangenen Codeworte über einen größeren Zeitbereich zu verteilen, um eventuell auftretende Übertragungsstörungen ebenfalls auf mehrere Codeworte zu verteilen und damit die Chancen einer Rekonstruktion zu erhöhen.&lt;br /&gt;
*Nach dem Durchlaufen der einzelnen Bitsicherungsverfahren werden die Datenströme von Fast– und Interleaved–Pfad im&amp;amp;nbsp; &#039;&#039;Tone Ordering&#039;&#039;&amp;amp;nbsp; zusammengeführt und bearbeitet. Hier werden auch die Bits den Trägerfrequenzen (Bins) zugewiesen.&lt;br /&gt;
*Außerdem werden im DMT-Sender nach der IDFT ein Schutzintervall und ein zyklisches Präfix eingefügt, das im DMT–Empfänger wieder entfernt wird. Dies stellt bei verzerrendem Kanal eine sehr einfache Realisierung der Signalentzerrung im Frequenzbereich dar.&lt;br /&gt;
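Die Wirkung des Interleavers lässt sich an einem einfachen Block-Interleaver skizzieren: zeilenweise einschreiben, spaltenweise auslesen. (Bei ADSL ist tatsächlich ein Faltungs-Interleaver spezifiziert; die folgende Skizze zeigt nur das Grundprinzip der Fehlerspreizung.)&lt;br /&gt;

```python
def block_interleave(data, rows, cols):
    """Grundprinzip: zeilenweise einschreiben, spaltenweise auslesen."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(data, rows, cols):
    # Umkehrung: spaltenweise einschreiben, zeilenweise auslesen,
    # d.h. Interleaving mit vertauschten Dimensionen
    return block_interleave(data, cols, rows)

# Ein Bündelfehler aus mehreren aufeinanderfolgenden Symbolen im Kanal
# verteilt sich nach dem De-Interleaving auf mehrere Codeworte (Zeilen),
# so dass der Reed-Solomon-Decoder pro Codewort nur wenige Fehler sieht.
```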
 &lt;br /&gt;
 &lt;br /&gt;
==Cyclic Redundancy Check==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Die&amp;amp;nbsp; &#039;&#039;zyklische Redundanzprüfung&#039;&#039;&amp;amp;nbsp; (englisch:&amp;amp;nbsp; &#039;&#039;Cyclic Redundancy Check&#039;&#039;, CRC) ist ein einfaches Verfahren auf Bitebene, um die Unversehrtheit der Daten bei der Übertragung oder der Duplizierung zu überprüfen. Das CRC–Prinzip wurde bereits im&amp;amp;nbsp; [[Examples_of_Communication_Systems/ISDN–Primärmultiplexanschluss#Rahmensynchronisation|ISDN–Kapitel]]&amp;amp;nbsp; im Detail beschrieben. &lt;br /&gt;
&lt;br /&gt;
Hier folgt eine kurze Zusammenfassung, wobei die bei den xDSL–Spezifikationen verwendete Nomenklatur verwendet wird:&lt;br /&gt;
*Vor der Datenübertragung wird für einen Datenblock&amp;amp;nbsp; $D(x)$&amp;amp;nbsp; mit&amp;amp;nbsp; $k$&amp;amp;nbsp; Bit &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; $d_0$, ... , $d_{k-1}$&amp;amp;nbsp; ein Prüfwert&amp;amp;nbsp; $C(x)$&amp;amp;nbsp; mit acht Bit gebildet und an die ursprüngliche Datenfolge angehängt. Die Variable&amp;amp;nbsp; $x$&amp;amp;nbsp; bezeichnet hierbei einen Verzögerungsoperator.&lt;br /&gt;
*$C(x)$&amp;amp;nbsp; ergibt sich als der Divisionsrest der Polynomdivision von&amp;amp;nbsp; $D(x)$&amp;amp;nbsp; durch das Prüfpolynom&amp;amp;nbsp; $G(x)$. Diese Operation wird durch Modulo–2–Gleichungen beschrieben:&lt;br /&gt;
:$$D(x) = d_0 \cdot x^{k-1} + d_1 \cdot x^{k-2} +  ...  + d_{k-2} \cdot x + d_{k-1}\hspace{0.05cm},$$&lt;br /&gt;
:$$G(x) =  x^8 + x^4 + x^3 + x^2 + 1 \hspace{0.05cm},$$&lt;br /&gt;
:$$C(x) = D(x) \cdot x^8 \,\,{\rm mod }\,\, G(x) = c_0 \cdot x^7 + c_1 \cdot x^6 +  \text{...}  + c_6 \cdot x + c_7&lt;br /&gt;
\hspace{0.05cm}.$$&lt;br /&gt;
 &lt;br /&gt;
*Beim Empfänger wird nach dem gleichen Verfahren erneut ein CRC–Wert gebildet und mit dem übermittelten Prüfwert verglichen. Sind beide ungleich, so liegt mindestens ein Bitfehler vor.&lt;br /&gt;
*Auf diese Weise können Bitfehler erkannt werden, sofern sie nicht zu gehäuft auftreten. In der ADSL–Praxis ist das CRC–Verfahren zur Bitfehlererkennung ausreichend.&lt;br /&gt;
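Die beschriebene Modulo–2–Division lässt sich als Schieberegister in wenigen Zeilen nachbilden. Das folgende Python–Snippet ist eine illustrative Minimalskizze (Funktionsname und Bitdarstellung sind frei gewählt und nicht Teil der xDSL–Spezifikation):

```python
def crc8_adsl(bits):
    # Divisionsrest von D(x) * x^8 modulo G(x) = x^8 + x^4 + x^3 + x^2 + 1;
    # das Generatorpolynom entspricht ohne die fuehrende 1 dem Bitmuster 0x1D.
    reg = 0
    for b in bits:                 # Datenbits d_0, d_1, ... (MSB zuerst)
        top = reg // 128           # hoechstes Registerbit
        reg = (reg * 2) % 256      # Register um eine Stelle schieben
        if top ^ b:                # Rueckkopplung = ein Divisionsschritt
            reg = reg ^ 0x1D
    return reg                     # Pruefwert C(x) als 8-Bit-Zahl
```

Hängt man die acht Prüfbits an den Datenblock an und lässt die Schaltung erneut durchlaufen, ergibt sich bei fehlerfreier Übertragung der Rest null – genau das prüft der Empfänger.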
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1968__Bei_T_2_4_S5_v1.png|center|frame|CRC&amp;amp;ndash;Prüfwertbildung bei ADSL]]&lt;br /&gt;
&lt;br /&gt;
Die Grafik zeigt eine beispielhafte Schaltung – realisierbar in Hardware oder Software – zur CRC–Prüfwertbildung mit dem für ADSL spezifizierten Generatorpolynom&amp;amp;nbsp; $G(x)$:&lt;br /&gt;
*Der zu prüfende Datenblock wird von links in die Schaltung eingebracht, der Ausgang rückgekoppelt und mit den Stellen des Generatorpolynoms&amp;amp;nbsp; $G(x)$&amp;amp;nbsp; exklusiv–oder–verknüpft. Nach Durchlauf des gesamten Datenblocks enthalten die Speicherelemente den CRC–Prüfwert&amp;amp;nbsp; $C(x)$.&lt;br /&gt;
*Anzumerken ist in diesem Zusammenhang, dass bei ADSL die Daten in so genannte Superframes (zu je 68 Rahmen) aufgespaltet werden. Jeder Rahmen beinhaltet Daten aus dem Fast– und Interleaved–Pfad. Zusätzlich werden Verwaltungs– und Synchronisations–Bits in spezifischen Rahmen übertragen.&lt;br /&gt;
*Pro ADSL–Superframe und pro Pfad werden acht CRC–Bits gebildet und als&amp;amp;nbsp; &#039;&#039;Fast Byte&#039;&#039;&amp;amp;nbsp; bzw.&amp;amp;nbsp; &#039;&#039;Sync Byte&#039;&#039;&amp;amp;nbsp; als erstes Byte von Rahmen&amp;amp;nbsp; $0$&amp;amp;nbsp; des nächsten Superframes übertragen.&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==Scrambler und De–Scrambler==  &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Aufgabe des Scramblers ist es, lange Folgen von Einsen und Nullen so umzuwandeln, dass häufige Symbolwechsel erfolgen. &lt;br /&gt;
*Eine mögliche Realisierung stellt eine Schieberegisterschaltung mit rückgeführten Exklusiv–Oder–verknüpften Zweigen dar. &lt;br /&gt;
*Um beim Empfänger die ursprüngliche Binärfolge herzustellen, muss dort ein spiegelbildlich selbstsynchronisierender De–Scrambler verwendet werden.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Die Grafik zeigt links ein Beispiel eines bei DSL tatsächlich eingesetzten Scramblers mit 23 Speicherelementen. Rechts ist der zugehörige De–Scrambler dargestellt.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1960__Bei_T_2_4_S6a.png|center|frame|Scrambler und De–Scrambler bei  DSL/DMT-System]]&lt;br /&gt;
&lt;br /&gt;
Das sendeseitige Schieberegister wird mit einem beliebigen Startwert geladen, der keinen weiteren Einfluss auf die Funktion der Schaltung hat. Bezeichnet man mit&amp;amp;nbsp; $e_n$&amp;amp;nbsp; die Bits der binären Eingangsfolge und mit&amp;amp;nbsp; $a_n$&amp;amp;nbsp; die Bits am Ausgang, so gilt folgender Zusammenhang:&lt;br /&gt;
&lt;br /&gt;
:$$a_n =  e_n \oplus a_{n- 18}\oplus a_{n- 23}\hspace{0.05cm}.$$&lt;br /&gt;
 &lt;br /&gt;
Im Beispiel besteht die Eingangsfolge aus 80 aufeinander folgenden Einsen (linke obere graue Hinterlegung), die bitweise in den Scrambler geschoben werden. Die Ausgangsbitfolge weist dann – wie gewünscht – häufige Null–Eins–Wechsel auf.&lt;br /&gt;
&lt;br /&gt;
Der De–Scrambler (rechts dargestellt) kann zu jedem beliebigen Zeitpunkt gestartet werden. Am Ausgangsdatenstrom erkennt man,&lt;br /&gt;
*dass der De–Scrambler zunächst einige (bis zu maximal 23) fehlerhafte Bits ausgibt,&lt;br /&gt;
*sich dann aber automatisch synchronisiert und&lt;br /&gt;
*anschließend die ursprüngliche Bitfolge (nur Einsen) fehlerfrei zurückgewinnt.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Für dieses Beispiel wurde die Bitübertragung zwar als fehlerfrei angenommen, doch auch der De–Scrambler kann mit einem beliebigen Startwert geladen werden. Zwischen beiden Schaltungen ist also keine Synchronisierung erforderlich.&lt;br /&gt;
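Die Rekursion&amp;amp;nbsp; $a_n = e_n \oplus a_{n-18} \oplus a_{n-23}$&amp;amp;nbsp; und der spiegelbildliche De–Scrambler lassen sich direkt nachprogrammieren. Die folgende Python–Skizze (Namen frei gewählt) zeigt insbesondere die Selbstsynchronisation: Trotz unterschiedlicher Startwerte sind nach spätestens 23 Bits alle weiteren Bits korrekt.

```python
def scramble(e_bits, state):
    # state: die letzten 23 Ausgangsbits, state[-1] ist a_(n-1)
    out = []
    for e in e_bits:
        a = e ^ state[-18] ^ state[-23]   # a_n = e_n xor a_(n-18) xor a_(n-23)
        out.append(a)
        state = state[1:] + [a]
    return out

def descramble(a_bits, state):
    # spiegelbildlich und selbstsynchronisierend:
    # das Register wird mit den Empfangsbits gefuellt
    out = []
    for a in a_bits:
        out.append(a ^ state[-18] ^ state[-23])
        state = state[1:] + [a]
    return out
```

Füttert man wie im Beispiel 80 Einsen in den Scrambler und startet den De–Scrambler mit einem anderen Registerinhalt, sind höchstens die ersten 23 Ausgangsbits falsch.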
&lt;br /&gt;
&lt;br /&gt;
	 	 &lt;br /&gt;
==Vorwärtsfehlerkorrektur==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Zur Vorwärtsfehlerkorrektur (&#039;&#039;Forward Error Correction&#039;&#039;,&amp;amp;nbsp; FEC) wird bei allen xDSL–Varianten ein&amp;amp;nbsp; [[Kanalcodierung/Definition_und_Eigenschaften_von_Reed–Solomon–Codes|Reed–Solomon–Code]]&amp;amp;nbsp; (RS–Codierung) verwendet. Bei manchen Systemen – beispielsweise bei ADSL der Deutschen Telekom – wurde als zusätzliche Fehlerschutzmaßnahme &#039;&#039;Trellis Code Modulation&#039;&#039;&amp;amp;nbsp; (TCM) verbindlich festgelegt, auch wenn diese von den internationalen Gremien nur als „optional” spezifiziert wurde.&lt;br /&gt;
&lt;br /&gt;
Beide Verfahren werden im Buch&amp;amp;nbsp; [[Kanalcodierung]]&amp;amp;nbsp; ausführlich behandelt. Hier folgt eine kurze Zusammenfassung der Reed–Solomon–Codierung im Hinblick auf die Anwendung bei DSL:&lt;br /&gt;
*Mit der Reed–Solomon–Codierung werden Redundanz&amp;amp;ndash;Bytes für fest vereinbarte Stützstellen des Nutzdatenpolynoms generiert. Bei systematischer RS–Codierung wird ähnlich dem CRC–Verfahren ein Prüfwert berechnet und an den zu schützenden Datenblock angehängt.&lt;br /&gt;
*Die Daten werden jedoch nicht mehr bitweise, sondern byteweise verarbeitet. Demzufolge werden arithmetische Operationen nicht mehr im Galois–Feld&amp;amp;nbsp; $\rm GF( 2 )$&amp;amp;nbsp; ausgeführt, sondern in&amp;amp;nbsp; $\rm GF(2^8)$.&lt;br /&gt;
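Zur Veranschaulichung der byteweisen Arithmetik: In&amp;amp;nbsp; $\rm GF(2^8)$&amp;amp;nbsp; ist die Addition ein bitweises XOR, die Multiplikation eine Polynommultiplikation modulo eines Feldpolynoms vom Grad&amp;amp;nbsp; $8$. Die folgende Python–Skizze verwendet das für byteorientierte Reed–Solomon–Codes übliche Polynom&amp;amp;nbsp; $x^8+x^4+x^3+x^2+1$&amp;amp;nbsp; (Bitmuster 0x11D) – eine Annahme zur Illustration, keine Angabe aus der xDSL–Spezifikation:

```python
def gf256_mult(a, b):
    # Multiplikation in GF(2^8): Polynommultiplikation modulo
    # x^8 + x^4 + x^3 + x^2 + 1 (0x11D); die Addition ist XOR.
    result = 0
    while b:
        if b % 2:                 # niederwertigstes Bit von b gesetzt?
            result = result ^ a
        b = b // 2
        a = a * 2                 # a mit x multiplizieren ...
        if a // 256:              # ... und bei Grad 8 reduzieren
            a = (a % 256) ^ 0x1D
    return result
```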
&lt;br /&gt;
&lt;br /&gt;
Die Reed–Solomon–Prüfziffer lässt sich auch als Divisionsrest einer Polynomdivision ermitteln, bei xDSL mit folgenden Parametern:&lt;br /&gt;
*Anzahl&amp;amp;nbsp; $S$&amp;amp;nbsp; der zu überwachenden DMT–Symbole pro Reed–Solomon–Codewort&amp;amp;nbsp; $(S \ge 1$&amp;amp;nbsp; für den Fast–Puffer,&amp;amp;nbsp; $S =2^0$, ... , $2^4$&amp;amp;nbsp; für den Interleaved–Puffer$)$,&lt;br /&gt;
*Anzahl&amp;amp;nbsp; $K$&amp;amp;nbsp; der Nutzdatenbytes in den&amp;amp;nbsp; $S$&amp;amp;nbsp; DMT–Symbolen, definiert als Polynom&amp;amp;nbsp; $B(x)$&amp;amp;nbsp; vom Grad&amp;amp;nbsp; $K$, wobei das „B” auf Bytes hinweist,&lt;br /&gt;
*Anzahl&amp;amp;nbsp; $R$&amp;amp;nbsp; der RS–Prüfbytes&amp;amp;nbsp; $($gerade Zahl zwischen $2$ und $16)$ pro Prüfwert (&amp;amp;bdquo;Fast&amp;amp;rdquo; oder &amp;amp;bdquo;Interleaved&amp;amp;rdquo;),&lt;br /&gt;
*Summe&amp;amp;nbsp; $N = K + R$&amp;amp;nbsp; der Nutzdatenbytes und Prüfbytes des Reed–Solomon–Codewortes.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Die Besonderheiten der Reed–Solomon–Codierung bei xDSL werden hier ohne weitere Kommentierung angegeben:&lt;br /&gt;
*Bei xDSL muss die Anzahl&amp;amp;nbsp; $R$&amp;amp;nbsp; der Prüfbytes ein ganzzahliges Vielfaches der Symbolanzahl&amp;amp;nbsp; $S$&amp;amp;nbsp; sein, damit diese im Nutzdatenpolynom gleichmäßig verteilt werden können.&lt;br /&gt;
*Die so genannten&amp;amp;nbsp;  [https://de.wikipedia.org/wiki/MDS-Code MDS–Codes]&amp;amp;nbsp; (&#039;&#039;Maximum Distance Separable&#039;&#039;) – eine Unterklasse der RS–Codes – erlauben die Korrektur von&amp;amp;nbsp; $R/2$&amp;amp;nbsp; verfälschten Nutzdatenbytes.&lt;br /&gt;
*Aus dem für die DMT–Systeme gewählten Reed–Solomon–Code ergibt sich als Einschränkung eine maximale Codewortlänge von&amp;amp;nbsp; $2^8-1 = 255$&amp;amp;nbsp; Byte entsprechend $2040$ Bit.&lt;br /&gt;
*Die Redundanz der Reed–Solomon–Codes kann bei ungünstigen Codeparametern eine beachtliche Datenmenge erzeugen, wodurch die Nettoübertragungsrate erheblich geschmälert wird.&lt;br /&gt;
*Es empfiehlt sich eine sinnvolle Aufteilung der Datenübertragungsmenge&amp;amp;nbsp; (&#039;&#039;Bruttodatenrate&#039;&#039;)&amp;amp;nbsp; in Nutzdaten&amp;amp;nbsp; (Nettodatenrate, &#039;&#039;Payload&#039;&#039;)&amp;amp;nbsp; und Fehlerschutzdaten&amp;amp;nbsp; (&#039;&#039;Overhead&#039;&#039;).&lt;br /&gt;
*Die Reed–Solomon–Codierung erzielt einen hohen Codiergewinn. Ein System ohne Codierung müsste für die gleiche Bitfehlerrate ein um&amp;amp;nbsp; $3 \ \rm dB$ größeres SNR aufweisen.&lt;br /&gt;
*Durch die&amp;amp;nbsp; &#039;&#039;Trellis–codierte Modulation&#039;&#039;&amp;amp;nbsp; (TCM) in Verbindung mit den anderen Fehlerschutzmaßnahmen fällt der Codiergewinn höchst unterschiedlich aus; er bewegt sich zwischen&amp;amp;nbsp; $0 \ \rm dB$&amp;amp;nbsp; und&amp;amp;nbsp; $6 \ \rm dB$.&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==Interleaving und De–Interleaving==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Gemeinsame Aufgabe von Interleaver (beim Sender) und De–Interleaver (beim Empfänger) ist es, die Reed–Solomon–Codewörter über einen größeren Zeitbereich zu verteilen, um eventuell auftretende Übertragungsfehler auf mehrere Codeworte zu verteilen und damit die Chance einer korrekten Decodierung zu erhöhen.&lt;br /&gt;
&lt;br /&gt;
Das Interleaving ist durch den Parameter $D$ (&amp;amp;bdquo;Tiefe&amp;amp;rdquo;) charakterisiert, der Werte zwischen&amp;amp;nbsp; $2^0$&amp;amp;nbsp; und&amp;amp;nbsp; $2^9$&amp;amp;nbsp; annehmen kann. &lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1961__Bei_T_2_4_S8a_v1.png|right|frame|Zum DSL–Interleaving mit&amp;amp;nbsp; $D = 2$]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Beispiel 4:}$&amp;amp;nbsp; Die Grafik verdeutlicht das Prinzip anhand der Reed–Solomon–Codeworte&amp;amp;nbsp; $A$,&amp;amp;nbsp; $B$,&amp;amp;nbsp; $C$&amp;amp;nbsp; mit jeweils fünf Byte sowie der Interleaver–Tiefe&amp;amp;nbsp; $D = 2$.&lt;br /&gt;
&lt;br /&gt;
Jedes Byte&amp;amp;nbsp; $B_i$&amp;amp;nbsp; des mittleren Reed–Solomon–Codewortes&amp;amp;nbsp; $B$&amp;amp;nbsp; wird um&amp;amp;nbsp; $V_i = (D - 1) · i$&amp;amp;nbsp; Bytes verzögert und es werden zwei Interleaver–Blöcke gebildet:&lt;br /&gt;
*Im ersten Block sind die Bytes&amp;amp;nbsp; $B_0$,&amp;amp;nbsp; $B_1$&amp;amp;nbsp; und&amp;amp;nbsp; $B_2$&amp;amp;nbsp; zusammen mit den Bytes&amp;amp;nbsp; $A_3$&amp;amp;nbsp; und&amp;amp;nbsp; $A_4$&amp;amp;nbsp; des vorherigen Codewortes zusammengefasst. &lt;br /&gt;
*Der zweite Block beinhaltet die Bytes&amp;amp;nbsp; $B_3$&amp;amp;nbsp; und&amp;amp;nbsp; $B_4$&amp;amp;nbsp; zusammen mit den Bytes&amp;amp;nbsp; $C_0$,&amp;amp;nbsp; $C_1$&amp;amp;nbsp; und&amp;amp;nbsp; $C_2$&amp;amp;nbsp; des nachfolgenden Codewortes.}} &lt;br /&gt;
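Die Verzögerungsregel&amp;amp;nbsp; $V_i = (D-1) \cdot i$&amp;amp;nbsp; aus Beispiel 4 lässt sich kompakt nachrechnen. Die folgende Python–Skizze (hypothetische Hilfsfunktion) bildet ab, welches Byte (Codewort, Index) an welcher Position des Ausgangsstroms landet:

```python
def interleave_positions(num_words, n_bytes, depth):
    # Byte i jedes Codewortes m wird um (depth - 1) * i Positionen verzoegert
    out = {}
    for m in range(num_words):
        for i in range(n_bytes):
            out[m * n_bytes + i + (depth - 1) * i] = (m, i)
    return out
```

Für&amp;amp;nbsp; $D = 2$&amp;amp;nbsp; und fünf Byte pro Codewort ergeben sich genau die beiden im Beispiel genannten Blöcke: erst&amp;amp;nbsp; $B_0, A_3, B_1, A_4, B_2$, dann&amp;amp;nbsp; $C_0, B_3, C_1, B_4, C_2$.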
&lt;br /&gt;
&lt;br /&gt;
Diese &amp;amp;bdquo;Verwürfelung&amp;amp;rdquo; hat folgende Vorteile (vorausgesetzt,&amp;amp;nbsp; $D$ ist hinreichend groß):&lt;br /&gt;
*Die Fehlerkorrekturmöglichkeiten des Reed–Solomon–Codes werden verbessert.&lt;br /&gt;
*Die Nutzdatenrate bleibt gleich, wird also nicht vermindert (Redundanzfreiheit).&lt;br /&gt;
*Bei Störungen müssen nicht ganze Pakete auf Protokollebene wiederholt werden.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Nachteilig ist, dass es mit zunehmender Interleaver–Tiefe&amp;amp;nbsp; $D$&amp;amp;nbsp; zu merklichen Verzögerungszeiten (in der Größenordnung von Millisekunden) kommen kann, was für Echtzeitanwendungen große Probleme bereitet. Interleaving mit geringer Tiefe ist allerdings  nur bei genügend hohem Signal–zu–Rausch–Abstand sinnvoll.&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Beispiel 5:}$&amp;amp;nbsp; &lt;br /&gt;
Ein Beispiel für die Vorteile von Interleaver/De–Interleaver bei Vorhandensein von Bündelfehlern zeigt die untere Grafik:&lt;br /&gt;
&lt;br /&gt;
*In der ersten Zeile sind die Bytefolgen nach der Reed–Solomon–Codierung dargestellt, wobei jedes Codewort beispielhaft aus sieben Bytes besteht.&lt;br /&gt;
*In der mittleren Zeile werden die Datenbytes durch das Interleaving mit&amp;amp;nbsp; $D = 3$&amp;amp;nbsp; verschoben, sodass zwischen&amp;amp;nbsp; $C_i$&amp;amp;nbsp; und&amp;amp;nbsp; $C_{i+1}$&amp;amp;nbsp; zwei fremde Bytes liegen und das Codewort auf drei Blöcke verteilt wird.&lt;br /&gt;
*Es sei nun angenommen, dass während der Übertragung durch eine Impulsstörung drei aufeinander folgende Bytes in einem einzigen Datenblock verfälscht wurden.&lt;br /&gt;
*Nach dem De–Interleaver ist die ursprüngliche Bytefolge der Reed–Solomon–Codewörter wieder hergestellt, wobei die drei fehlerhaften Bytes auf drei unabhängige Codewörter verteilt sind.&lt;br /&gt;
*Wurden bei der Reed–Solomon–Codierung jeweils zwei Redundanzbytes eingefügt, so lassen sich die nun separierten Byteverfälschungen vollständig korrigieren.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1962__Bei_T_2_4_S8b_v1.png|center|frame|Zum DSL–Interleaving mit&amp;amp;nbsp; $D = 3$]]}}&lt;br /&gt;
	 &lt;br /&gt;
&lt;br /&gt;
==Gain Scaling und Tone Ordering==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Eine besonders vorteilhafte Eigenschaft von DMT ist die Möglichkeit, die Subkanäle (englisch:&amp;amp;nbsp; &#039;&#039;Bins&#039;&#039;) individuell an die vorliegende Kanalcharakteristik anzupassen und eventuell &amp;amp;bdquo;Bins&amp;amp;rdquo; mit ungünstigem SNR ganz abzuschalten. Dabei wird wie folgt vorgegangen:&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1963__Bei_T_2_4_S9_v1.png|right|frame|Bit-Bin-Zuordnung anhand des SNR]]&lt;br /&gt;
*Vor dem Start der Übertragung – und eventuell auch dynamisch während des Betriebs – wird vom DMT–Modem für jeden &amp;amp;bdquo;Bin&amp;amp;rdquo; die Kanalcharakteristik gemessen und entsprechend dem SNR individuell die maximale Übertragungsrate festgelegt (siehe Grafik).&lt;br /&gt;
*Während der Initialisierung tauschen die&amp;amp;nbsp; &#039;&#039;ADSL Transceiver Units&#039;&#039;&amp;amp;nbsp; Bin–Informationen aus, zum Beispiel die jeweiligen „Bits/Bin” und die erforderliche Sendeleistung (&#039;&#039;Gain&#039;&#039;). Dabei sendet die&amp;amp;nbsp; $\rm ATU–C$&amp;amp;nbsp; Informationen über den Upstream und die&amp;amp;nbsp; $\rm ATU–R$&amp;amp;nbsp; Informationen über den Downstream.&lt;br /&gt;
*Diese Mitteilung hat die Form&amp;amp;nbsp; $\{b_i, g_i\}$,&amp;amp;nbsp; wobei&amp;amp;nbsp; $b_i$&amp;amp;nbsp; (4 Bit) die Größe der Konstellation angibt. Für den Upstream gilt der Index&amp;amp;nbsp; $i = 1$, ... , $31$&amp;amp;nbsp; und für den Downstream&amp;amp;nbsp; $i = 1$, ... , $255$.&lt;br /&gt;
*Der Gain&amp;amp;nbsp; $g_i$&amp;amp;nbsp; ist eine Festkommazahl mit zwölf Bit. Beispielsweise steht&amp;amp;nbsp; $g_i = 001.010000000$ für den Dezimalwert&amp;amp;nbsp; $1 + 1/4 =1.25$. Dieser gibt an, dass die Signalleistung von Kanal&amp;amp;nbsp; $i$&amp;amp;nbsp; um&amp;amp;nbsp; $1.94 \ \rm dB$&amp;amp;nbsp; höher sein muss als die Leistung des während der Kanalanalyse gesendeten Testsignals.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
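Das Zahlenbeispiel lässt sich nachrechnen:&amp;amp;nbsp; $g_i$&amp;amp;nbsp; ist eine Festkommazahl mit drei Vorkomma– und neun Nachkommabits. Interpretiert man&amp;amp;nbsp; $g_i$&amp;amp;nbsp; als Amplitudenfaktor, so entspricht dies einer Leistungsänderung um&amp;amp;nbsp; $20 \cdot \lg g_i$&amp;amp;nbsp; in dB. Eine kleine Python–Skizze (Funktionsname frei gewählt):

```python
import math

def gain_to_db(bits12):
    # 12-Bit-Festkommazahl, z.B. "001.010000000":
    # 3 Vorkomma- und 9 Nachkommastellen
    value = int(bits12.replace(".", ""), 2) / 2 ** 9
    # g skaliert die Amplitude, die Leistung steigt also um 20*lg(g) dB
    return value, 20 * math.log10(value)
```

Für&amp;amp;nbsp; $g_i = 001.010000000$&amp;amp;nbsp; ergibt sich der Wert&amp;amp;nbsp; $1.25$&amp;amp;nbsp; und damit die im Text genannten&amp;amp;nbsp; $1.94 \ \rm dB$.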
Beim gleichzeitigen Betrieb des Fast– und des Interleaved–Pfades (siehe&amp;amp;nbsp; [[Examples_of_Communication_Systems/Verfahren_zur_Senkung_der_Bitfehlerrate_bei_DSL#DSL.E2.80.93Fehlerkorrekturma.C3.9Fnahmen_im_.C3.9Cberblick|Grafik]]&amp;amp;nbsp; auf der Seite &amp;amp;bdquo;DSL&amp;amp;ndash;Fehlerkorrekturmaßnahmen&amp;amp;rdquo;) kann durch eine optimierte Trägerfrequenzbelegung (&#039;&#039;Tone Ordering&#039;&#039;) die Bitfehlerrate weiter gesenkt werden. Hintergrund dieser Maßnahme ist wieder das &#039;&#039;Clipping&#039;&#039;&amp;amp;nbsp; (Abschneiden von Spannungsspitzen), wodurch das SNR insgesamt verschlechtert wird. Dieses Verfahren beruht auf folgenden Regeln:&lt;br /&gt;
*Bins mit dichter Konstellation (viele Bits/Bin &amp;amp;nbsp; ⇒ &amp;amp;nbsp; größere Verfälschungswahrscheinlichkeit) werden dem Interleaved–Zweig zugeordnet, da dieser durch den zusätzlichen Interleaver per se zuverlässiger ist. Entsprechend werden die Subkanäle mit niederwertiger Belegung (wenige Bits/Bin) für den Fast–Datenpuffer reserviert.&lt;br /&gt;
*Gesendet werden dann neue Tabellen für Upstream und Downstream, in denen die Bins nicht mehr nach dem Index geordnet sind, sondern entsprechend den Bits/Bin–Verhältnissen. Anhand dieser neuen Tabelle ist es für die&amp;amp;nbsp; $\rm ATU–C$&amp;amp;nbsp; bzw.&amp;amp;nbsp; $\rm ATU–R$&amp;amp;nbsp; möglich, die Bit–Extraktion erfolgreich durchzuführen.&lt;br /&gt;
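Die erste Regel – dichte Konstellationen in den Interleaved–Pfad, niederwertige Belegungen in den Fast–Pfad – entspricht im Kern einer Sortierung der Bins nach ihren Bits/Bin. Eine hypothetische Minimalskizze in Python (Namen und Aufteilung frei gewählt):

```python
def tone_ordering(bits_per_bin, fast_count):
    # Bins aufsteigend nach Bits/Bin sortieren: die ersten fast_count
    # (wenige Bits/Bin) zum Fast-Pfad, der Rest (dichte Konstellationen)
    # zum per Interleaver besser geschuetzten Interleaved-Pfad
    order = sorted(range(len(bits_per_bin)), key=lambda i: bits_per_bin[i])
    return order[:fast_count], order[fast_count:]
```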
&lt;br /&gt;
 	 &lt;br /&gt;
==Einfügen von Guard–Intervall und zyklischem Präfix == 	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Im Kapitel&amp;amp;nbsp; [[Modulationsverfahren/Realisierung_von_OFDM-Systemen#Guard.E2.80.93L.C3.BCcke_zur_Verminderung_der_Impulsinterferenzen| Realisierung von OFDM&amp;amp;ndash;Systemen]]&amp;amp;nbsp; des Buches &amp;amp;bdquo;Modulationsverfahren&amp;amp;rdquo; wurde bereits gezeigt, dass durch die Einfügung eines Schutzabstandes – man bezeichnet diesen auch als&amp;amp;nbsp; &#039;&#039;Guard–Intervall&#039;&#039;&amp;amp;nbsp; oder&amp;amp;nbsp; &#039;&#039;Guard–Lücke&#039;&#039; – die Bitfehlerrate bei Vorhandensein von linearen Kanalverzerrungen entscheidend verbessert werden kann.&lt;br /&gt;
&lt;br /&gt;
Wir gehen davon aus, dass sich die Kabelimpulsantwort&amp;amp;nbsp; $h_{\rm K}(t)$&amp;amp;nbsp; über die Zeitdauer&amp;amp;nbsp; $T_{\rm K}$&amp;amp;nbsp; erstreckt. Ideal wäre&amp;amp;nbsp; $h_{\rm K}(t) = δ(t)$&amp;amp;nbsp; und dementsprechend eine unendlich kurze Ausdehnung: &amp;amp;nbsp;  $T_{\rm K} = 0$. Bei verzerrendem Kanal&amp;amp;nbsp; $(T_{\rm K} &amp;gt; 0 )$&amp;amp;nbsp; gilt:&lt;br /&gt;
*Durch Einfügung eines &#039;&#039;Guard–Intervalls&#039;&#039;&amp;amp;nbsp; der Dauer&amp;amp;nbsp;  $T_{\rm G}$&amp;amp;nbsp; lassen sich &#039;&#039;Intersymbolinterferenzen&#039;&#039;&amp;amp;nbsp; zwischen den einzelnen DSL–Rahmen vermeiden, solange&amp;amp;nbsp; $T_{\rm G} \ge T_{\rm K}$&amp;amp;nbsp; gilt. Diese Maßnahme führt allerdings zu einem Ratenverlust um den Faktor&amp;amp;nbsp; $T/(T +  T_{\rm G})$&amp;amp;nbsp; mit der Symboldauer&amp;amp;nbsp; $T = {1}/{f_0}$.&lt;br /&gt;
*Damit gibt es aber immer noch &#039;&#039;Inter–Carrier–Interferenzen&#039;&#039;&amp;amp;nbsp; zwischen den einzelnen Subträgern innerhalb des gleichen Rahmens, das heißt, die&amp;amp;nbsp; [[Modulationsverfahren/Allgemeine_Beschreibung_von_OFDM#Systembetrachtung_im_Frequenzbereich_bei_kausalem_Grundimpuls|DMT–Einzelspektren]]&amp;amp;nbsp; sind nicht mehr&amp;amp;nbsp; $\rm si$–förmig und es kommt zu einer De–Orthogonalisierung.&lt;br /&gt;
*Durch ein&amp;amp;nbsp; [[Modulationsverfahren/Realisierung_von_OFDM-Systemen#Zyklisches_Pr.C3.A4fix|zyklisches Präfix]]&amp;amp;nbsp; lässt sich auch dieser störende Effekt vermeiden. Dabei erweitert man den Sendevektor&amp;amp;nbsp; $\mathbf{s}$&amp;amp;nbsp; nach vorne um die letzten&amp;amp;nbsp; $L$&amp;amp;nbsp; Abtastwerte des IDFT–Ausgangs, wobei der Minimalwert für $L$ durch die Dauer&amp;amp;nbsp;  $T_{\rm K}$&amp;amp;nbsp; der Kabelimpulsantwort vorgegeben ist.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Beispiel 6:}$&amp;amp;nbsp; &lt;br /&gt;
Die Grafik zeigt diese Maßnahme beim DSL/DMT–Verfahren, für das der Parameter&amp;amp;nbsp; $L = 32$&amp;amp;nbsp; festgelegt wurde. &lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1964__Bei_T_2_4_S10a_v1.png|right|frame|DMT&amp;amp;ndash;Sendesignal mit zyklischem Präfix]]&lt;br /&gt;
*Die Abtastwerte&amp;amp;nbsp; $s_{480}$ , ... , $s_{511}$&amp;amp;nbsp; werden als Präfix&amp;amp;nbsp; $(s_{-32}$ , ... , $s_{-1})$&amp;amp;nbsp; zum IDFT–Ausgangsvektor&amp;amp;nbsp; $(s_0$ , ... , $s_{511})$&amp;amp;nbsp; hinzugefügt.&lt;br /&gt;
*Das Sendesignal&amp;amp;nbsp; $s(t)$&amp;amp;nbsp; hat nun statt der Symboldauer&amp;amp;nbsp; $T ≈ 232 \ {\rm &amp;amp;micro; s}$&amp;amp;nbsp; die resultierende Dauer&amp;amp;nbsp; $T +  T_{\rm G} = 1.0625 \cdot T ≈ 246 \ {\rm &amp;amp;micro; s}$. Dadurch wird die Rate um den Faktor&amp;amp;nbsp; $0.94$&amp;amp;nbsp; verringert.&lt;br /&gt;
*Bei der empfangsseitigen Auswertung beschränkt man sich auf den Zeitbereich von&amp;amp;nbsp; $0$&amp;amp;nbsp; bis&amp;amp;nbsp; $T$. In diesem Zeitintervall ist der störende Einfluss der Impulsantwort bereits abgeklungen und die Subkanäle sind – ebenso wie bei idealem Kanal – zueinander orthogonal. &lt;br /&gt;
*Die Abtastwerte&amp;amp;nbsp; $s_{-32}$ , ... , $s_{-1}$&amp;amp;nbsp; werden am Empfänger verworfen – eine recht einfache Realisierung der Signalentzerrung.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Die letzte Grafik dieses Kapitels zeigt das gesamte DMT–Übertragungssystem, allerdings ohne die vorne beschriebenen&amp;amp;nbsp; [[Examples_of_Communication_Systems/Verfahren_zur_Senkung_der_Bitfehlerrate_bei_DSL#DSL.E2.80.93Fehlerkorrekturma.C3.9Fnahmen_im_.C3.9Cberblick| Fehlerschutzmaßnahmen]]. Man erkennt:&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1967__Bei_T_2_4_S10_v1.png|right|frame|DMT&amp;amp;ndash;System mit zyklischem Präfix]]&lt;br /&gt;
*Im Block „Addiere zyklisches Präfix” werden die Abtastwerte&amp;amp;nbsp; $s_{480}$, ... , $s_{511}$&amp;amp;nbsp; als&amp;amp;nbsp; $s_{-32}$, ... , $s_{-1}$&amp;amp;nbsp; hinzugefügt. Das Sendesignal&amp;amp;nbsp; $s(t)$&amp;amp;nbsp; hat somit den im&amp;amp;nbsp; $\text{Beispiel 6}$&amp;amp;nbsp; gezeigten Verlauf.&lt;br /&gt;
*Das Empfangsignal&amp;amp;nbsp; $r(t)$&amp;amp;nbsp; ergibt sich aus der Faltung von&amp;amp;nbsp; $s(t)$&amp;amp;nbsp; mit&amp;amp;nbsp; $h_{\rm K}(t)$. Nach A/D–Wandlung und Entfernen des zyklischen Präfix erhält man die Eingangswerte&amp;amp;nbsp; $r_0$, ... ,&amp;amp;nbsp;$ r_{511}$&amp;amp;nbsp; für die DFT.&lt;br /&gt;
*Die (komplexen) Ausgangswerte&amp;amp;nbsp; $D_k\hspace{0.01cm}&#039;$&amp;amp;nbsp; der DFT hängen nur vom jeweiligen (komplexen) Datenwert&amp;amp;nbsp; $D_k$&amp;amp;nbsp; ab. Unabhängig von anderen Daten&amp;amp;nbsp; $D_κ (κ ≠ k)$&amp;amp;nbsp; gilt mit dem Rauschwert&amp;amp;nbsp; $n_k\hspace{0.01cm}&#039;$:&lt;br /&gt;
&lt;br /&gt;
:$${D}_k\hspace{0.01cm}&#039; = \alpha_k \cdot {D}_k + {n}_k\hspace{0.01cm}&#039;, \hspace{0.2cm}\alpha_k = H_{\rm K}( f = f_k)&lt;br /&gt;
\hspace{0.05cm}. $$&lt;br /&gt;
 &lt;br /&gt;
*Jeder Träger&amp;amp;nbsp; $D_k$&amp;amp;nbsp; wird durch einen eigenen (komplexen) Faktor&amp;amp;nbsp; $α_k$, der nur vom Kanal abhängt, in seiner Amplitude und Phase verändert. Der Frequenzbereichsentzerrer hat nur die Aufgabe, den Koeffizienten&amp;amp;nbsp; $D_k\hspace{0.01cm}&#039;$&amp;amp;nbsp; mit dem inversen Wert&amp;amp;nbsp; ${1}/{α_k}$&amp;amp;nbsp; zu multiplizieren. Man erhält schließlich:&lt;br /&gt;
 &lt;br /&gt;
:$$ \hat{D}_k = {D}_k + {n}_k \hspace{0.05cm}.$$&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
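Der beschriebene Zusammenhang – das zyklische Präfix macht aus der linearen Faltung mit&amp;amp;nbsp; $h_{\rm K}(t)$&amp;amp;nbsp; eine zyklische, so dass jeder Träger nur mit&amp;amp;nbsp; $\alpha_k = H_{\rm K}(f_k)$&amp;amp;nbsp; gewichtet wird – lässt sich numerisch nachvollziehen. Die folgende Python–Skizze verwendet zur Übersichtlichkeit&amp;amp;nbsp; $N = 8$&amp;amp;nbsp; Träger und&amp;amp;nbsp; $L = 3$&amp;amp;nbsp; (bei ADSL:&amp;amp;nbsp; $512$&amp;amp;nbsp; bzw.&amp;amp;nbsp; $32$); Kanalimpulsantwort und Trägerwerte sind frei angenommen:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

N, L = 8, 3
D = [1, -1, 1j, 0.5, -1j, 1, -0.5, -1]         # komplexe Traegerwerte D_k
s = idft(D)                                    # IDFT-Ausgang s_0 ... s_7
tx = s[-L:] + s                                # zyklisches Praefix voranstellen
h = [0.8, 0.3, 0.1]                            # angenommene Kanalimpulsantwort
rx = [sum(h[j] * tx[n - j] for j in range(len(h)) if n - j >= 0)
      for n in range(len(tx))]                 # lineare Faltung mit h_K
r = rx[L:L + N]                                # Praefix am Empfaenger verwerfen
H = dft(h + [0] * (N - len(h)))                # alpha_k = H_K(f = f_k)
D_hat = [Dks / H[k] for k, Dks in enumerate(dft(r))]   # Ein-Tap-Entzerrung 1/alpha_k
```

Ohne Rauschen liefert die Ein–Tap–Entzerrung mit&amp;amp;nbsp; $1/\alpha_k$&amp;amp;nbsp; exakt&amp;amp;nbsp; $\hat{D}_k = D_k$&amp;amp;nbsp; zurück.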
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Fazit:}$&amp;amp;nbsp; &lt;br /&gt;
*Diese einfache Realisierungsmöglichkeit der vollständigen Entzerrung des stark verzerrenden Kabelfrequenzgangs war eines der entscheidenden Kriterien dafür, dass sich bei&amp;amp;nbsp; $\rm xDSL$&amp;amp;nbsp; das&amp;amp;nbsp; $\rm DMT$–Verfahren gegenüber&amp;amp;nbsp; $\rm QAM$&amp;amp;nbsp; und&amp;amp;nbsp; $\rm CAP$&amp;amp;nbsp; durchgesetzt hat. &lt;br /&gt;
*Meist findet direkt nach der A/D–Wandlung zusätzlich noch eine Vorentzerrung im Zeitbereich statt, um auch die Intersymbolinterferenzen zwischen benachbarten Rahmen zu vermeiden.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Aufgaben zum Kapitel==&lt;br /&gt;
&amp;lt;br&amp;gt;  	 &lt;br /&gt;
[[Aufgabe_2.5:_DSL–Fehlersicherungsmaßnahmen|Aufgabe 2.5: DSL–Fehlersicherungsmaßnahmen ]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgabe_2.5Z:_Reichweite_und_Bitrate_bei_ADSL|Aufgabe 2.5Z: Reichweite und Bitrate bei ADSL]]&lt;br /&gt;
&lt;br /&gt;
[[Aufgabe_2.6:_Zyklisches_Präfix|Aufgabe 2.6: Zyklisches Präfix]]&lt;br /&gt;
==Quellenverzeichnis==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/xDSL_als_%C3%9Cbertragungstechnik&amp;diff=34975</id>
		<title>Examples of Communication Systems/xDSL als Übertragungstechnik</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/xDSL_als_%C3%9Cbertragungstechnik&amp;diff=34975"/>
		<updated>2020-10-13T15:37:55Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/xDSL als Übertragungstechnik to Examples of Communication Systems/xDSL as Transmission Technology&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Examples of Communication Systems/xDSL as Transmission Technology]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/xDSL_as_Transmission_Technology&amp;diff=34974</id>
		<title>Examples of Communication Systems/xDSL as Transmission Technology</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/xDSL_as_Transmission_Technology&amp;diff=34974"/>
		<updated>2020-10-13T15:37:55Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/xDSL als Übertragungstechnik to Examples of Communication Systems/xDSL as Transmission Technology&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &lt;br /&gt;
{{Header&lt;br /&gt;
|Untermenü=DSL – Digital Subscriber Line&lt;br /&gt;
|Vorherige Seite=xDSL–Systeme&lt;br /&gt;
|Nächste Seite=Verfahren zur Senkung der Bitfehlerrate bei DSL&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Mögliche Bandbreitenbelegungen für xDSL==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Die xDSL–Spezifikationen lassen den Betreibern viele Freiheiten hinsichtlich der Belegung. &lt;br /&gt;
&lt;br /&gt;
Zur notwendigen Richtungstrennung der xDSL–Signalübertragung nach&lt;br /&gt;
*Abwärtsrichtung vom Anbieter zum Kunden (&#039;&#039;Downstream&#039;&#039; mit möglichst hoher Datenrate),&lt;br /&gt;
*Aufwärtsrichtung vom Kunden zum Anbieter (&#039;&#039;Upstream&#039;&#039; mit meist niedrigerer Datenrate)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
wurden hierfür zwei Varianten standardisiert:&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Definition:}$&amp;amp;nbsp;&lt;br /&gt;
*Beim&amp;amp;nbsp; &#039;&#039;&#039;Frequenzgetrenntlageverfahren&#039;&#039;&#039;&amp;amp;nbsp; werden die Datenströme für die beiden Richtungen in zwei voneinander getrennten Frequenzbändern übertragen mit dem Vorteil, dass zur Trennung der Übertragungsrichtungen ein einfaches Filter genügt, was die technische Realisierung vereinfacht.&lt;br /&gt;
*Beim&amp;amp;nbsp; &#039;&#039;&#039;Frequenzgleichlageverfahren&#039;&#039;&#039;&amp;amp;nbsp; überlagern sich in einem bestimmten Teil die Spektren von Upstream und Downstream. Die Trennung erfolgt hier mit Hilfe einer Echokompensationsschaltung. Vorteile des Verfahrens sind der geringere Bandbreitenbedarf bei höheren (und damit stärker gedämpften) Frequenzen sowie eine größere Reichweite.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1936__Bei_T_2_3_S1_v1.png|right|frame|Frequenzgetrennt&amp;amp;ndash; und Frequenzgleichlageverfahren]]&lt;br /&gt;
Die Grafik stellt diese beiden Möglichkeiten vergleichend gegenüber.&lt;br /&gt;
&lt;br /&gt;
Grundsätzlich überlassen die Spezifikationen den Entwicklern/Betreibern die Entscheidung,&lt;br /&gt;
*xDSL alleine auf der Teilnehmeranschlussleitung zu betreiben, oder&lt;br /&gt;
*einen Mischbetrieb von xDSL mit den Telefondiensten POTS&amp;amp;nbsp; (&#039;&#039;Plain Old Telephone Service&#039;&#039;)&amp;amp;nbsp; oder ISDN&amp;amp;nbsp; (&#039;&#039;Integrated Services Digital Network&#039;&#039;)&amp;amp;nbsp; zu ermöglichen, &lt;br /&gt;
*und somit den von den beiden Telefondiensten belegten unteren Frequenzbereich für xDSL auszuschließen oder auch zu belegen.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt; &lt;br /&gt;
==ADSL–Bandbreitenbelegung in Deutschland==  	 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Wegen der technisch deutlich einfacheren Realisierbarkeit fiel in Deutschland für ADSL und ADSL2+ die Entscheidung zugunsten&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1937__Bei_T_2_3_S2_v1.png|right|frame|ADSL–Bandbreitenbelegung in Deutschland]]&lt;br /&gt;
*des Frequenzgetrenntlageverfahrens,&lt;br /&gt;
*der generellen Reservierung des unteren Frequenzbereichs für ISDN.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Das Frequenzgleichlageverfahren wird zwar teilweise noch verwendet, aber eher selten.&lt;br /&gt;
&lt;br /&gt;
Bei den Übertragungsverfahren&lt;br /&gt;
* [[Modulation_Methods/Quadratur–Amplitudenmodulation|QAM]]&amp;amp;nbsp; (Quadratur–Amplitudenmodulation)  und&lt;br /&gt;
*[[Examples_of_Communication_Systems/xDSL_als_Übertragungstechnik#Carrierless_Amplitude_Phase_Modulation_.28CAP.29|CAP]]&amp;amp;nbsp; (&#039;&#039;Carrierless Amplitude Phase Modulation&#039;&#039;)  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
wird die für DSL verfügbare Bandbreite nicht weiter zerlegt. &lt;br /&gt;
&lt;br /&gt;
Dagegen werden beim Mehrträgerverfahren&amp;amp;nbsp;  [[Modulation_Methods/Weitere_OFDM–Anwendungen#Eine_Kurzbeschreibung_von_DSL_.E2.80.93_Digital_Subscriber_Line|DMT]]&amp;amp;nbsp; (&#039;&#039;Discrete Multitone Transmission&#039;&#039;)  der Aufwärtskanal und der Abwärtskanal in&amp;amp;nbsp; $N_{\rm Up}$&amp;amp;nbsp; bzw.&amp;amp;nbsp; $N_{\rm Down}$&amp;amp;nbsp; Subkanäle&amp;amp;nbsp; (englisch: &#039;&#039;Bins&#039;&#039;) zu je 4.3125 kHz aufgeteilt.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
Außerdem ist zu obiger Grafik anzumerken:&lt;br /&gt;
*Telefondienste (POTS bzw. ISDN) und xDSL liegen in verschiedenen Frequenzbändern, was die gegenseitigen Störungen im Bündelkabel minimiert. Das signalstärkere ISDN stört somit nicht das parallel laufende xDSL und umgekehrt.&lt;br /&gt;
*The lower frequency range up to $\text{120 kHz}$&amp;amp;nbsp; was reserved for ISDN (optionally POTS). This value results from the first zero of the&amp;amp;nbsp; [[Digitalsignalübertragung/Blockweise_Codierung_mit_4B3T-Codes#AKF_und_LDS_der_4B3T.E2.80.93Codes|ISDN spectrum with 4B3T coding]]. Above $\text{120 kHz}$&amp;amp;nbsp; the ISDN spectrum is completely suppressed.&lt;br /&gt;
*To separate the telephone and xDSL signals, a&amp;amp;nbsp; [[Examples_of_Communication_Systems/xDSL–Systeme#Komponenten_eines_DSL.E2.80.93Internetzugangs|splitter]]&amp;amp;nbsp; is used at both ends of the two-wire line; it contains one low-pass and one high-pass filter and also accounts for the following frequency gap up to $\text{138 kHz}$.&lt;br /&gt;
*After this gap comes the ADSL upstream band from $\text{138 kHz}$&amp;amp;nbsp; to&amp;amp;nbsp;$\text{276 kHz}$. This bandwidth allows the transmission of&amp;amp;nbsp; $N_{\rm Up} = 32$&amp;amp;nbsp; subcarriers of&amp;amp;nbsp;$\text{4.3125 kHz}$ each. This value results from the frame transmission rate.&lt;br /&gt;
*With ADSL, the subsequent downstream range extends up to $\text{1104 kHz}$, so that&amp;amp;nbsp; $N_{\rm Down} = 256$&amp;amp;nbsp; subcarriers can be realized. With xDSL, upstream and downstream are separated by a band-pass filter in the modem.&lt;br /&gt;
*However, the first $64$ subcarriers&amp;amp;nbsp; $($corresponding to $\text{276 kHz)}$&amp;amp;nbsp; must not be occupied. With the&amp;amp;nbsp; [[Examples_of_Communication_Systems/xDSL_als_Übertragungstechnik#M.C3.B6gliche_Bandbreitenbelegungen_f.C3.BCr_xDSL|frequency overlay method]],&amp;amp;nbsp; only $32$ subcarriers would have to be left unused, although it must be taken into account that separating the upstream and downstream directions then requires a more complex implementation.&lt;br /&gt;
*With ADSL2+ the system bandwidth equals $\text{2208 kHz}$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $N_{\rm Down} = 512$&amp;amp;nbsp; subcarriers. The number of bins to be left unused is unchanged compared to ADSL. Considering that two bins are occupied by control functions (for example, for synchronizing transmitter and receiver), $190$&amp;amp;nbsp; (ADSL) or $446$&amp;amp;nbsp; (ADSL2+) downstream channels remain for users.&lt;br /&gt;
*For xDSL, however, the ISDN reservation prescribed in Germany has the consequence that the low frequencies, which are by far the least attenuated on a copper line and would therefore actually be best suited, cannot be used.&lt;br /&gt;
*The frequency allocation further shows that the downstream subchannels (higher frequencies) are attenuated more strongly than the upstream subchannels and consequently exhibit a smaller signal-to-noise ratio (SNR).&lt;br /&gt;
*The decision&amp;amp;nbsp; „upstream below downstream”&amp;amp;nbsp; is due to the fact that the failure of downstream channels has only a comparatively small effect on the transmission rate. In the upstream, such a failure would be felt much more strongly in percentage terms.&lt;br /&gt;
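The carrier counts quoted in the list above follow directly from the DMT subcarrier spacing of $\text{4.3125 kHz}$. A minimal arithmetic sketch (all values taken from the text; the variable names are our own):

```python
# Sketch (not a spec): deriving the ADSL carrier counts quoted above
# from the DMT subcarrier spacing f0 = 4.3125 kHz.
F0 = 4.3125  # subcarrier spacing in kHz

upstream = (276 - 138) / F0    # width of the upstream band in bins
downstream_total = 1104 / F0   # bins up to the ADSL band edge
reserved = 64                  # bins blocked for ISDN and upstream
control = 2                    # bins used for control/synchronization

print(int(upstream))                               # 32 upstream subcarriers
print(int(downstream_total))                       # 256 bins in total
print(int(downstream_total) - reserved - control)  # 190 user downstream channels
```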
&lt;br /&gt;
&lt;br /&gt;
==VDSL(2) bandwidth allocation==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For VDSL(2), the ITU has defined several profiles. For the systems deployed in Germany, the frequency band allocation shown in the graphic, according to the ITU VDSL(2) plan 998b – profile 17a (Annex B), applies at the time of writing this chapter (2010). The (slightly) lighter coloring at the higher frequencies is meant to indicate that these channels are attenuated more strongly.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1939__Bei_T_2_3_S3_v1.png|center|frame|VDSL(2) bandwidth allocation in Germany]]&lt;br /&gt;
&lt;br /&gt;
Without claiming completeness, this allocation plan can be characterized as follows:&lt;br /&gt;
*To achieve higher bit rates, eight times as many bins are used here as with ADSL2+. The system bandwidth is thus&amp;amp;nbsp; $8 · \text{2208 kHz = 17664 kHz}$, enabling transmission rates of up to $\text{100 Mbit/s}$&amp;amp;nbsp; (depending on cable length and quality).&lt;br /&gt;
*Here, too, the frequency bands for the upstream subchannels are always located at the lower frequencies, since the larger cable attenuation (which increases with frequency) has a larger percentage impact on the total bit rate in the upstream than in the downstream.&lt;br /&gt;
*VDSL(2) systems always use the so-called&amp;amp;nbsp; [[Examples_of_Communication_Systems/xDSL_als_Übertragungstechnik#M.C3.B6gliche_Bandbreitenbelegungen_f.C3.BCr_xDSL|frequency separation method]]. An overlap of the upstream and downstream frequency bands is categorically ruled out in the ITU specification for VDSL(2).&lt;br /&gt;
*In the German VDSL systems, the lowest frequencies are again reserved for ISDN. They are followed by alternating ranges for upstream and downstream. The stated band limits show that the upstream ranges are narrower than the downstream ranges.&lt;br /&gt;
*One recognizes an alternating arrangement of upstream and downstream ranges. One reason for this is to avoid, in such a wide spectrum, one direction (for example, downstream) being assigned only strongly attenuated (i.e. high) frequencies.&lt;br /&gt;
*The VDSL(2) specification provides allocation plans for system bandwidths up to&amp;amp;nbsp; $\text{30 MHz}$&amp;amp;nbsp; (according to profile 30a), which are intended to enable transmission rates up to about&amp;amp;nbsp; $\text{200 Mbit/s}$&amp;amp;nbsp; over short distances. For this, the bandwidth of the individual subchannels is doubled compared to ADSL, to&amp;amp;nbsp; $\text{8.625 kHz}$.&lt;br /&gt;
*All allocation plans are provided with various masks for the power spectral density, in order to limit the maximum transmit power and thus the interference to neighboring systems in the cable bundle (crosstalk).&lt;br /&gt;
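The bandwidth figures in the list can be checked with a line of arithmetic; a sketch using only the constants quoted in the text:

```python
# Arithmetic check of the VDSL(2) figures quoted above.
ADSL2P_BW = 2208.0     # ADSL2+ system bandwidth in kHz
print(8 * ADSL2P_BW)   # profile 17a system bandwidth: 17664.0 kHz
print(2 * 4.3125)      # profile 30a doubles the ADSL subcarrier spacing: 8.625 kHz
```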
&lt;br /&gt;
	 &lt;br /&gt;
==Overview of transmission methods==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
At the beginning of the various standardization procedures for the individual xDSL variants, different transmission methods were specified as the basis:&lt;br /&gt;
*[[Modulation_Methods/Pulscodemodulation|Pulse code modulation (PCM)]]&amp;amp;nbsp; for ISDN, as well as&amp;amp;nbsp;&#039;&#039;Trellis Coded Pulse Amplitude Modulation&#039;&#039;&amp;amp;nbsp; for HDSL2 and SHDSL/SDSL,&lt;br /&gt;
* [[Modulation_Methods/Quadratur–Amplitudenmodulation|Quadrature amplitude modulation (QAM)]]&amp;amp;nbsp; for QAM–ADSL and QAM–VDSL,&lt;br /&gt;
* [[Examples_of_Communication_Systems/xDSL_als_Übertragungstechnik#Carrierless_Amplitude_Phase_Modulation_.28CAP.29|Carrierless Amplitude Phase Modulation (CAP)]]&amp;amp;nbsp; for CAP–HDSL and CAP–ADSL,&lt;br /&gt;
*[[Modulation_Methods/Weitere_OFDM–Anwendungen#Eine_Kurzbeschreibung_von_DSL_.E2.80.93_Digital_Subscriber_Line|Discrete Multitone Transmission (DMT)]]&amp;amp;nbsp; for ADSL, ADSL2, ADSL2+, VDSL and VDSL2.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As the market increasingly demanded higher transmission rates, with the associated requirements, two suitable main methods emerged, namely&amp;amp;nbsp; $\rm QAM/CAP$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm DMT$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Since the manufacturers could not agree on a common standard between 1997 and 2003, partly for patent-law reasons (in this context one even speaks of &#039;&#039;Line Code Wars&#039;&#039;), the two competing methods coexisted for a long time. At the so-called DSL Olympics in 2003, the decision was finally made in favor of DMT,&lt;br /&gt;
*partly because of its somewhat better &amp;amp;bdquo;performance&amp;amp;rdquo; in general,&lt;br /&gt;
*but in particular because of its higher robustness against narrowband interference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The second argument played a major role especially in the USA (many overhead telephone lines and the associated problems with coupled-in radio signals).&lt;br /&gt;
&lt;br /&gt;
The xDSL variants predominantly offered in Germany today (2010), ADSL2(+) and VDSL(2), are all based on the&amp;amp;nbsp;&#039;&#039;Discrete Multitone Transmission&#039;&#039;&amp;amp;nbsp;method, although the individual subcarriers may well be occupied with QAM signals.&lt;br /&gt;
&lt;br /&gt;
First, however, the systems&amp;amp;nbsp; $\rm xDSL–QAM$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm xDSL–CAP$&amp;amp;nbsp; will be considered very briefly.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==Fundamentals of quadrature amplitude modulation==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The graphic shows the reference model for ADSL–QAM; here we deal only with the red function blocks&amp;amp;nbsp; &#039;&#039;QAM modulator&#039;&#039;&amp;amp;nbsp; and&amp;amp;nbsp; &#039;&#039;QAM demodulator&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1940__Bei_T_2_3_S5_v2.png|center|frame|Reference model for ADSL–QAM]]&lt;br /&gt;
&lt;br /&gt;
The carrier frequency&amp;amp;nbsp; $f_{\rm T}$&amp;amp;nbsp; always lies within the specified upstream or downstream band of the respective xDSL variant. Like the signal space size (between four and $256$ signal space points) and the symbol rate, it is determined by channel measurements during the initialization of the transmission.&lt;br /&gt;
&lt;br /&gt;
The following symbol rates $($in&amp;amp;nbsp; ${\rm kBaud} = 1000 \ \rm symbols/s)$&amp;amp;nbsp; were specified for ADSL–QAM:&lt;br /&gt;
[[File:P_ID1941__Bei_T_2_3_S5a_neu_v2.png|right|frame|Model of quadrature amplitude modulation]]&lt;br /&gt;
*$20$, $40$, $84$, $100$, $120$, $136$&amp;amp;nbsp; in the upstream,&lt;br /&gt;
*$40$, $126$, $160$, $252$, $336$, $504$, $806.4$, $1008$&amp;amp;nbsp; in the downstream.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The principle has already been described in detail in the chapter&amp;amp;nbsp; [[Modulation_Methods/Quadratur–Amplitudenmodulation|Quadrature amplitude modulation]]&amp;amp;nbsp; of the book "Modulation Methods". &lt;br /&gt;
&lt;br /&gt;
Here, only a brief summary follows, based on the graphic below.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
*QAM is a single-carrier modulation method around the carrier frequency&amp;amp;nbsp; $f_{\rm T}$. First, a block-wise serial/parallel conversion of the bit stream and the signal space mapping take place.&lt;br /&gt;
*From each group of&amp;amp;nbsp; $b$&amp;amp;nbsp; binary symbols, two multilevel amplitude coefficients&amp;amp;nbsp; $a_{{\rm I}n}$&amp;amp;nbsp; and&amp;amp;nbsp; $a_{{\rm Q}n}$&amp;amp;nbsp; are derived (in-phase and quadrature components), where each of the two coefficients can take on one of&amp;amp;nbsp; $M = 2^{b/2}$&amp;amp;nbsp; possible amplitude values.&lt;br /&gt;
*The example considered in the graphic applies to&amp;amp;nbsp; $\text{16–QAM}$&amp;amp;nbsp; with&amp;amp;nbsp; $b = M = 4$&amp;amp;nbsp; and accordingly $16$&amp;amp;nbsp; signal space points. For a&amp;amp;nbsp; $\text{256–QAM}$,&amp;amp;nbsp; $b = 8$&amp;amp;nbsp; and&amp;amp;nbsp; $M = 16$&amp;amp;nbsp; would hold&amp;amp;nbsp; $(2^b = M^2 = 256)$.&lt;br /&gt;
*The coefficients&amp;amp;nbsp; $a_{{\rm I}n}$&amp;amp;nbsp; and&amp;amp;nbsp; $a_{{\rm Q}n}$&amp;amp;nbsp; are each impressed as weights on a Dirac comb. For pulse shaping, a cosine roll-off filter is usually used (because of its small bandwidth). With the basic transmission pulse&amp;amp;nbsp; $g_s(t)$, the following then holds in the two branches of the block diagram:&lt;br /&gt;
:$$ s_{\rm I}(t) = \sum_{n = - \infty}^{+\infty}a_{\rm I\hspace{0.03cm}\it n} \cdot g_s (t - n \cdot T)\hspace{0.05cm},\hspace{0.5cm} s_{\rm Q}(t) = \sum_{n = - \infty}^{+\infty}a_{\rm Q\hspace{0.03cm}\it n} \cdot g_s (t - n \cdot T)\hspace{0.05cm}.$$&lt;br /&gt;
*It should also be noted that, because of the redundancy-free conversion to a higher-level code, the symbol duration&amp;amp;nbsp; $T$&amp;amp;nbsp; of these signals is larger than the bit duration&amp;amp;nbsp; $T_{\rm B}$&amp;amp;nbsp; of the binary input sequence by the factor&amp;amp;nbsp; $b$. In the example shown (16–QAM),&amp;amp;nbsp; $T = 4 · T_{\rm B}$.&lt;br /&gt;
*The&amp;amp;nbsp; &#039;&#039;&#039;QAM transmit signal&#039;&#039;&#039;&amp;amp;nbsp; $s(t)$&amp;amp;nbsp; is then the sum of the two partial signals multiplied by cosine and minus-sine, respectively (a band limitation may follow in order to prevent interference with adjacent bands, as indicated in the lower graphic):&lt;br /&gt;
:$$s(t) = s_{\rm I}(t)  \cdot \cos (2 \pi f_{\rm T}\,t) - s_{\rm Q}(t)  \cdot \sin (2 \pi f_{\rm T}\,t) \hspace{0.05cm}. $$ &lt;br /&gt;
*Because of the orthogonality of cosine and (minus) sine, the two branches&amp;amp;nbsp; $(\rm I$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm Q)$&amp;amp;nbsp; can be regarded as two completely separate&amp;amp;nbsp; [[Digitalsignalübertragung/Trägerfrequenzsysteme_mit_kohärenter_Demodulation#M.E2.80.93stufiges_Amplitude_Shift_Keying_.28M.E2.80.93ASK.29|$M$–level ASK systems]]&amp;amp;nbsp; that do not interfere with each other as long as all components are optimally designed.&lt;br /&gt;
*At the same time this means: &amp;amp;nbsp; Compared to&amp;amp;nbsp; [[Digitalsignalübertragung/Trägerfrequenzsysteme_mit_kohärenter_Demodulation#Binary_Phase_Shift_Keying_.28BPSK.29|&#039;&#039;Binary Phase Shift Keying&#039;&#039;]]&amp;amp;nbsp; (BPSK: modulation with cosine or sine only), quadrature amplitude modulation doubles the data rate at unchanged quality.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID3120__Bei_T_2_3_S5b_v1.png|right|frame|Quadrature amplitude modulation as band-pass and low-pass model]]&lt;br /&gt;
The last graphic shows &lt;br /&gt;
*the band-pass model at the top, &lt;br /&gt;
*the equivalent low-pass model at the bottom. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the latter, the in-phase and quadrature coefficients are combined into the complex amplitude coefficient &lt;br /&gt;
:$$a_n = a_{\text{I}n} + {\rm j} · a_{\text{Q}n}$$&lt;br /&gt;
and, in addition, the band-pass signal&amp;amp;nbsp; $s(t)$&amp;amp;nbsp; is replaced by the equivalent low-pass signal &lt;br /&gt;
:$$s_{\rm TP}(t) = s_{\rm I}(t) + {\rm j} · s_{\rm Q}(t).$$&lt;br /&gt;
&lt;br /&gt;
The representation of the QAM transmitter and the QAM receiver is the subject of the Flash animation&amp;amp;nbsp; [[Applets:Prinzip_der_Quadratur-Amplitudenmodulation_(Applet)|Principle of quadrature amplitude modulation]].&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Conclusion:}$&amp;amp;nbsp;&lt;br /&gt;
*With an increasing number of bits&amp;amp;nbsp; $b$, and thus a larger number of defined symbols&amp;amp;nbsp; $(M^2)$, the bandwidth efficiency increases, but so does the signal-processing effort. &lt;br /&gt;
*Furthermore, it must be taken into account that a dense QAM constellation is appropriate only if the channel is sufficiently good.}}&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
==Possible QAM signal space constellations==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Using three examples, we now consider possible arrangements of the signal space points in quadrature amplitude modulation.&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 1:}$&amp;amp;nbsp;&lt;br /&gt;
An important QAM parameter is the number of bits&amp;amp;nbsp; $b$&amp;amp;nbsp; that are processed into the amplitude coefficient pair&amp;amp;nbsp; $(a_{\rm I}, a_{\rm Q})$. Here&amp;amp;nbsp; $b$&amp;amp;nbsp; is always even.&lt;br /&gt;
[[File:P_ID1943__Bei_T_2_3_S6a_v1.png|right|frame|Signal space constellations $\rm 4\hspace{0.05cm}&amp;amp;ndash;\hspace{-0.02cm}QAM$ and $\rm 16\hspace{0.05cm}&amp;amp;ndash;\hspace{-0.02cm}QAM$]] &lt;br /&gt;
&lt;br /&gt;
If&amp;amp;nbsp; $b = 2$, both&amp;amp;nbsp; $a_{\rm I}$&amp;amp;nbsp; and&amp;amp;nbsp; $a_{\rm Q}$&amp;amp;nbsp; can only take the values&amp;amp;nbsp; $±1$, which yields the&amp;amp;nbsp; $\rm 4\hspace{0.05cm}&amp;amp;ndash;\hspace{-0.02cm}QAM$&amp;amp;nbsp; according to the constellation on the left. &lt;br /&gt;
&lt;br /&gt;
According to an ITU recommendation, the following mapping applies:&lt;br /&gt;
:$$q_1 = 0, \ q_0 = 0 \, \Leftrightarrow \,a_{\rm I} = +1, \ a_{\rm Q} = +1,$$&lt;br /&gt;
:$$q_1 = 0, \ q_0 = 1 \, \Leftrightarrow \, a_{\rm I} = +1, \ a_{\rm Q} = -1,$$&lt;br /&gt;
:$$q_1 = 1, \ q_0 = 0 \, \Leftrightarrow \,a_{\rm I} = -1, \ a_{\rm Q} = +1,$$&lt;br /&gt;
:$$q_1 = 1, \ q_0 = 1 \, \Leftrightarrow \, a_{\rm I} = -1, \ a_{\rm Q} = -1.$$&lt;br /&gt;
  &lt;br /&gt;
The point marked in yellow,&amp;amp;nbsp; &#039;&#039;&#039;10&#039;&#039;&#039; &amp;amp;nbsp;$(a_{\rm I} = -1, \ a_{\rm Q} = 1)$,&amp;amp;nbsp; thus stands for&amp;amp;nbsp; $q_1 = 1$&amp;amp;nbsp; and&amp;amp;nbsp; $q_0 = 0$.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With&amp;amp;nbsp; $b = 4$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $M = 2^{b/2} = 4$&amp;amp;nbsp; one arrives at the&amp;amp;nbsp; $\rm 16\hspace{0.05cm}&amp;amp;ndash;\hspace{-0.02cm}QAM$ according to the diagram on the right, with the possible amplitude coefficients&amp;amp;nbsp; &lt;br /&gt;
:$$a_{\rm I} ∈ \{±3, ±1\}, \ \ a_{\rm Q} ∈ \{±3, ±1\}.$$ &lt;br /&gt;
&lt;br /&gt;
The mapping can be determined with the aid of the auxiliary graph given at the bottom left, as the following numerical examples illustrate.&lt;br /&gt;
 &lt;br /&gt;
$\rm (A)$&amp;amp;nbsp; $q_3 = 1, \ q_2 = 0, \ q_1 = 1,\ q_0 = 1$ (yellow marking): &lt;br /&gt;
*The two most significant bits&amp;amp;nbsp; (&#039;&#039;Most Significant Bit&#039;&#039;, MSB)&amp;amp;nbsp; &#039;&#039;&#039;10&#039;&#039;&#039;&amp;amp;nbsp; determine, according to the&amp;amp;nbsp; $\rm 4-QAM$ diagram, the quadrant in which the symbol lies. &lt;br /&gt;
*The two least significant bits&amp;amp;nbsp; (&#039;&#039;&#039;11&#039;&#039;&#039;)&amp;amp;nbsp; determine, together with the auxiliary graph, the point within the quadrant. The result is&amp;amp;nbsp; $a_{\rm I} = -1$,&amp;amp;nbsp; $a_{\rm Q} = +3$.&lt;br /&gt;
$\rm (B)$&amp;amp;nbsp;  $q_3 = 0, \ q_2 = 1, \ q_1 = 1,\ q_0 = 0$ (green marking):&lt;br /&gt;
* The two most significant bits&amp;amp;nbsp; (&#039;&#039;Most Significant Bit&#039;&#039;, MSB)&amp;amp;nbsp; &#039;&#039;&#039;01&#039;&#039;&#039; here point to the fourth quadrant.&lt;br /&gt;
*The two least significant bits&amp;amp;nbsp; (&#039;&#039;&#039;10&#039;&#039;&#039;)&amp;amp;nbsp; point to the green point in the fourth quadrant: &amp;amp;nbsp; $a_{\rm I} = -3, \ a_{\rm Q} = -3$.}}&lt;br /&gt;
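The ITU 4-QAM assignment listed in this example (the quadrant rule applied to the two MSBs) can be written as a two-line helper; the function name is our own:

```python
# ITU 4-QAM assignment from Example 1: (q1, q0) -> (a_I, a_Q).
# The sign of a_I follows q1, the sign of a_Q follows q0 (0 -> +1, 1 -> -1).
def qam4_map(q1: int, q0: int) -> tuple:
    return (+1 if q1 == 0 else -1, +1 if q0 == 0 else -1)

print(qam4_map(1, 0))   # yellow point '10': (-1, +1)
```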
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 2:}$&amp;amp;nbsp;&lt;br /&gt;
[[File:P_ID1943__Bei_T_2_3_S6a_v1.png|right|frame|Signal space constellations $\rm 4\hspace{0.05cm}&amp;amp;ndash;\hspace{-0.02cm}QAM$ and $\rm 16\hspace{0.05cm}&amp;amp;ndash;\hspace{-0.02cm}QAM$]]&lt;br /&gt;
Another way to label the points is the decimal value $D$.&lt;br /&gt;
*The point marked yellow in the&amp;amp;nbsp; $\rm 4\hspace{0.05cm}&amp;amp;ndash;\hspace{-0.02cm}QAM$ diagram is labeled&amp;amp;nbsp; &#039;&#039;&#039;10&#039;&#039;&#039;&amp;amp;nbsp; in binary &amp;amp;nbsp; ⇒ &amp;amp;nbsp; decimal&amp;amp;nbsp; $D = 2$. This point simultaneously marks the quadrant of the&amp;amp;nbsp; $\rm 16\hspace{0.05cm}&amp;amp;ndash;\hspace{-0.02cm}QAM$.&lt;br /&gt;
*The further subdivision follows from the graphic at the bottom left. There, the yellow point carries the label&amp;amp;nbsp; $4D + 3$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp;  &#039;&#039;&#039;11&#039;&#039;&#039;&amp;amp;nbsp; (decimal). Therefore, the upper right point (marked yellow) in the upper left quadrant stands for decimal&amp;amp;nbsp; $11$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; binary &#039;&#039;&#039;1011&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
*For the green point, with&amp;amp;nbsp; $D = 1$&amp;amp;nbsp; the decimal value&amp;amp;nbsp; $4D + 2 = 6$&amp;amp;nbsp; results, which corresponds to the binary representation&amp;amp;nbsp; &#039;&#039;&#039;0110&#039;&#039;&#039;. &lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
Following this scheme, the signal space constellations for&amp;amp;nbsp; $\rm 64\hspace{0.05cm}&amp;amp;ndash;\hspace{-0.02cm}QAM$ &amp;amp;nbsp; ⇒ &amp;amp;nbsp; $(b = 6, \ M = 8)$&amp;amp;nbsp; and&amp;amp;nbsp; $\rm 256-QAM$&amp;amp;nbsp; ⇒ &amp;amp;nbsp; $(b = 8, \  M = 16)$&amp;amp;nbsp; can also be developed; this is covered in detail in&amp;amp;nbsp; [[Aufgabe_2.3:_QAM–Signalraumbelegung|Exercise 2.3]].}}&lt;br /&gt;
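The decimal labeling of Example 2 amounts to refining the quadrant value $D$ with the two least significant bits; a sketch under that assumption (the helper name is ours):

```python
# Decimal labelling rule from Example 2 (assumed general form):
# the quadrant label D (from the 4-QAM MSBs) is refined to 4*D + d
# with the two LSBs d in {0, 1, 2, 3}, giving the 4-bit 16-QAM label.
def label_16qam(D: int, d: int) -> str:
    return format(4 * D + d, '04b')

print(label_16qam(2, 3))   # yellow point: 4*2 + 3 = 11 -> '1011'
print(label_16qam(1, 2))   # green point:  4*1 + 2 = 6  -> '0110'
```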
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1944__Bei_T_2_3_S6_v1.png|right|frame|On the error probability of $\rm 16\hspace{0.05cm}&amp;amp;ndash;\hspace{-0.02cm}QAM$]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 3:}$&amp;amp;nbsp;&lt;br /&gt;
For the&amp;amp;nbsp; $\rm 16\hspace{0.05cm}&amp;amp;ndash;\hspace{-0.02cm}QAM$&amp;amp;nbsp; described above (left graphic, here referred to as the ITU proposal), we now consider the resulting error probability with AWGN noise: &lt;br /&gt;
*It can be assumed that an error leads to a horizontally or vertically adjacent symbol, as indicated for the upper left (green) point. &lt;br /&gt;
*The falsification probability $p$&amp;amp;nbsp; depends on the Euclidean distance between the two points and on the AWGN noise power density&amp;amp;nbsp; $N_0$. &lt;br /&gt;
*A falsification to the more distant blue point, instead of to one of the two adjacent yellow points, is rather unlikely with Gaussian noise.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
All corner points (highlighted in green) can only be falsified in two directions. The inner QAM points (highlighted in blue), on the other hand, have four direct neighbors, and the remaining symbols (highlighted in yellow) have three. The (average) symbol error probability is therefore:&lt;br /&gt;
&lt;br /&gt;
:$$p_{\rm S} =  {1}/{16} \cdot (4 \cdot 2 p + 8 \cdot 3  p +  4 \cdot 4   p) = 3p.$$&lt;br /&gt;
&lt;br /&gt;
To calculate the bit error probability&amp;amp;nbsp; $p_{\rm B}$, it must now be taken into account that with the left constellation a symbol error leads &lt;br /&gt;
*to only one bit error &amp;amp;nbsp;(example: &#039;&#039;&#039;0100&#039;&#039;&#039; &amp;amp;nbsp; ⇒ &amp;amp;nbsp;  &#039;&#039;&#039;0110&#039;&#039;&#039;, within one quadrant), or &lt;br /&gt;
*to two bit errors (example: &#039;&#039;&#039;1111&#039;&#039;&#039; &amp;amp;nbsp; ⇒ &amp;amp;nbsp; &#039;&#039;&#039;0101&#039;&#039;&#039;, between adjacent quadrants). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Calculating&amp;amp;nbsp;  $p_{\rm B} $&amp;amp;nbsp;  therefore involves a certain amount of effort.&lt;br /&gt;
&lt;br /&gt;
With Gray coding (right diagram), on the other hand, each symbol differs from its neighbors in exactly one bit, so every symbol error results in exactly one bit error. Since each individual symbol contains four bits, the (average) bit error probability in this case is:&lt;br /&gt;
&lt;br /&gt;
:$$p_{\rm B} =  p_{\rm S}/4  = 3/4 \cdot p. $$}}&lt;br /&gt;
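The averaging in this example is easy to reproduce numerically; a sketch with an arbitrary demo value for the neighbor falsification probability $p$:

```python
# Numerical check of Example 3: average 16-QAM symbol error probability
# with per-neighbour falsification probability p (demo value).
p = 1e-3   # probability of falsification to ONE adjacent symbol (assumed)

# 4 corner points with 2 neighbours, 8 edge points with 3, 4 inner points with 4:
p_S = (4 * 2 * p + 8 * 3 * p + 4 * 4 * p) / 16
print(p_S / p)        # 3.0, i.e. p_S = 3p

# Gray coding: one symbol error = one bit error, 4 bits per symbol:
p_B = p_S / 4
print(p_B / p)        # 0.75, i.e. p_B = (3/4)·p
```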
&lt;br /&gt;
	 &lt;br /&gt;
==Carrierless Amplitude Phase Modulation (CAP)==  	&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;Carrierless Amplitude Phase Modulation&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;CAP&#039;&#039;&#039;) is a bandwidth-efficient variant of QAM that can be implemented very easily with digital signal processors. The only difference from QAM is that modulation with a carrier signal can be dispensed with.&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1945__Bei_T_2_3_S7a_v1.png|center|frame|Model of &#039;&#039;Carrierless Amplitude Phase Modulation&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
*Instead of the multiplication by cosine and minus-sine, digital filtering is performed here. $g_{\rm I}(t)$&amp;amp;nbsp; and&amp;amp;nbsp; $g_{\rm Q}(t)$&amp;amp;nbsp; are the impulse responses, phase-shifted by&amp;amp;nbsp; $π/2$, of two transversal band-pass filters with identical amplitude characteristics. &lt;br /&gt;
*The two are orthogonal to each other, which means that the integral of the product&amp;amp;nbsp; $g_{\rm I}(t) · g_{\rm Q}(t)$&amp;amp;nbsp; over one symbol duration is zero.&lt;br /&gt;
*The signals&amp;amp;nbsp; $s_{\rm I}(t)$&amp;amp;nbsp; and&amp;amp;nbsp; $s_{\rm Q}(t)$&amp;amp;nbsp; generated in this way are combined, converted into a continuous-time signal by a D/A converter, and the unwanted high-frequency components produced by the D/A conversion are eliminated by a low-pass filter (LP) before transmission.&lt;br /&gt;
*At the receiver, the signal&amp;amp;nbsp; $r(t)$&amp;amp;nbsp; is first converted into a discrete-time signal by an A/D converter; then the in-phase and quadrature symbols&amp;amp;nbsp; $a_{\rm I}$&amp;amp;nbsp; and&amp;amp;nbsp; $a_{\rm Q}$&amp;amp;nbsp; are extracted by two &#039;&#039;finite impulse response&#039;&#039; filters (FIR filters) and downstream decision units.&lt;br /&gt;
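The orthogonality condition stated in the second bullet can be verified numerically. The filter shapes below are assumptions for illustration only (a Hann-shaped envelope times cosine and sine, with an integer number of carrier cycles per symbol), not the CAP filters of any particular standard:

```python
# Numerical check: two band-pass impulse responses with identical amplitude
# characteristic, phase-shifted by pi/2, are orthogonal over one symbol.
import numpy as np

T, fs, fc = 1e-4, 4e6, 200e3                  # symbol duration, sampling rate, centre freq.
t = np.arange(0, T, 1 / fs)
env = 0.5 - 0.5 * np.cos(2 * np.pi * t / T)   # common (Hann) envelope
g_I = env * np.cos(2 * np.pi * fc * t)
g_Q = env * np.sin(2 * np.pi * fc * t)

# the integral of g_I * g_Q over one symbol duration vanishes:
integral = (g_I * g_Q).sum() / fs
print(abs(integral))                          # numerically ~ 0
```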
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1946__Bei_T_2_3_S7b_v1.png|right|frame|Reference model for CAP–ADSL]]&lt;br /&gt;
&lt;br /&gt;
CAP was the de facto standard in the early ADSL specifications until 1996. &lt;br /&gt;
*The frequencies up to&amp;amp;nbsp; $\text{4 kHz}$&amp;amp;nbsp; were reserved for POTS. &lt;br /&gt;
*The upstream channel occupied the frequency range&amp;amp;nbsp; $\text{15 - 160 kHz}$,&lt;br /&gt;
* and the downstream channel the frequencies from&amp;amp;nbsp; $\text{240 kHz}$&amp;amp;nbsp; to&amp;amp;nbsp; $\text{1.5 MHz}$. &lt;br /&gt;
*The graphic shows the reference model.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One problem with CAP is that a „bad channel” has dramatic consequences for the transmission quality. That is why CAP is found today (2010) only in a few HDSL variants.&lt;br /&gt;
&lt;br /&gt;
 	 &lt;br /&gt;
==Fundamentals of DMT – Discrete Multitone Transmission==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;Discrete Multitone Transmission&#039;&#039;&amp;amp;nbsp; (&#039;&#039;&#039;DMT&#039;&#039;&#039;) denotes a multicarrier modulation method that is nearly identical to&amp;amp;nbsp; [[Modulation_Methods/Allgemeine_Beschreibung_von_OFDM|&#039;&#039;Orthogonal Frequency Division Multiplexing&#039;&#039;]]&amp;amp;nbsp; (&#039;&#039;&#039;OFDM&#039;&#039;&#039;). For wireline transmission one usually speaks of „DMT”, for wireless transmission of „OFDM”.&lt;br /&gt;
&lt;br /&gt;
In both cases, the total bandwidth is divided into many narrowband, equidistant subchannels. The individual subcarrier signals&amp;amp;nbsp; $s_k(t)$&amp;amp;nbsp; are each loaded with complex data symbols&amp;amp;nbsp; $D_k$, and the sum of the modulated subcarrier signals is transmitted as the transmit signal&amp;amp;nbsp; $s(t)$.&lt;br /&gt;
&lt;br /&gt;
The graphic illustrates the principle of OFDM and DMT in the frequency domain, partly using the values specified for ADSL/DMT:&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1947__Bei_T_2_3_S8a_v2.png|right|frame|Spectra for OFDM and DMT]]&lt;br /&gt;
*$255$&amp;amp;nbsp; subcarriers with the carrier frequencies&amp;amp;nbsp; $k · f_0$&amp;amp;nbsp;  $(k = 1$, ... , $255)$.&lt;br /&gt;
*Fundamental frequency&amp;amp;nbsp; $f_0 = 4.3125 \ \rm kHz$, since $4000$&amp;amp;nbsp; data frames are transmitted per second. &lt;br /&gt;
*After every $68$ data frames, one synchronization frame is inserted. &lt;br /&gt;
*Because of the cyclic prefix (see chapter&amp;amp;nbsp; [[Examples_of_Communication_Systems/Verfahren_zur_Senkung_der_Bitfehlerrate_bei_DSL#Einf.C3.BCgen_von_Guard.E2.80.93Intervall_und_zyklischem_Pr.C3.A4fix|Inserting the guard interval and cyclic prefix]]),&amp;amp;nbsp; the symbol duration&amp;amp;nbsp; $T = 1/f_0$&amp;amp;nbsp; must also be shortened by the factor&amp;amp;nbsp; $16/17$.&lt;br /&gt;
&amp;lt;br clear=all&amp;gt;&lt;br /&gt;
An essential difference between OFDM and DMT is that&lt;br /&gt;
*with OFDM, the spectrum&amp;amp;nbsp; $S(f)$&amp;amp;nbsp; shown actually describes an equivalent low-pass spectrum&amp;amp;nbsp; $S_{\rm TP}(f)$, and the shift by a carrier frequency&amp;amp;nbsp; $f_{\rm T}$&amp;amp;nbsp; must still be taken into account:&lt;br /&gt;
&lt;br /&gt;
:$$S_{\rm TP}(f ) = \sum_{k = 1}^{255} D_k \cdot \delta (f - k \cdot f_0)\hspace{0.3cm}\Rightarrow \hspace{0.3cm} S(f) = \frac{1}{2} \big [ S_{\rm TP}(f - f_{\rm T}) + S^*_{\rm TP}(-(f + f_{\rm T}))\big ] \hspace{0.05cm},$$&lt;br /&gt;
 &lt;br /&gt;
*with DMT, on the other hand, the components at negative frequencies must also be taken into account, weighted with the complex-conjugate spectral coefficients:&lt;br /&gt;
&lt;br /&gt;
:$$S(f ) = \sum_{k = 1}^{255}  \big [ D_k \cdot \delta (f - k \cdot f_0) + D^*_k \cdot \delta (f + k \cdot f_0) \big ] \hspace{0.05cm}.$$&lt;br /&gt;
  &lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{Please note:}$&amp;amp;nbsp;&lt;br /&gt;
*According to these equations, the complex OFDM signal&amp;amp;nbsp; $s_{\rm OFDM}(t)$&amp;amp;nbsp; consists of&amp;amp;nbsp; $K = 255$&amp;amp;nbsp; complex exponential oscillations. &lt;br /&gt;
*The DMT signal&amp;amp;nbsp; $s_{\rm DMT}(t)$&amp;amp;nbsp; is composed of just as many cosine oscillations with frequencies&amp;amp;nbsp; $k · f_0$&amp;amp;nbsp; (assuming full occupancy). &lt;br /&gt;
*Despite complex coefficients&amp;amp;nbsp; $D_k$, which result from the QAM occupancy of the carriers, the DMT signal is always real because of the complex-conjugate additions at negative frequencies.}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With both OFDM and DMT, however, the transmit signal&amp;amp;nbsp; $s(t)$&amp;amp;nbsp; is limited in time exactly to the symbol duration&amp;amp;nbsp; $T = 1/f_0 ≈ 232 \ {\rm &amp;amp;micro;s}$, which corresponds to a multiplication by a rectangle of duration $T$. In the spectral domain, this corresponds to a convolution with the function&amp;amp;nbsp; $\text{si}(πfT)$:&lt;br /&gt;
*Taking the time limitation into account, each Dirac delta at&amp;amp;nbsp; $k · f_0$&amp;amp;nbsp; thus becomes an si-function at the same position, as shown in the lower diagram.&lt;br /&gt;
*Adjacent subcarrier spectra do overlap on the frequency axis, but exactly at&amp;amp;nbsp; $k · f_0$&amp;amp;nbsp; the coefficients&amp;amp;nbsp; $D_k$&amp;amp;nbsp; can be recognized again, since all other spectra have zeros there.&lt;br /&gt;
*A symmetric rectangle is assumed for the lower graphic. A rectangle between&amp;amp;nbsp; $0$&amp;amp;nbsp; and&amp;amp;nbsp; $T$&amp;amp;nbsp; would additionally produce a phase term; with respect to&amp;amp;nbsp; $|S(f)|$, however, nothing would change.&lt;br /&gt;
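The zero pattern described in the second bullet — each subcarrier's si-shaped spectrum vanishing at every other carrier frequency — can be checked directly (the carrier index and helper name are our own choices):

```python
# The spectrum of a carrier at k*f0, time-limited to T = 1/f0, is an
# si-function with zeros at all OTHER integer multiples of f0.
import numpy as np

def si(x):                      # si(x) = sin(x)/x with si(0) = 1
    return np.sinc(x / np.pi)   # numpy's sinc(u) = sin(pi*u)/(pi*u)

f0 = 4.3125e3                   # ADSL subcarrier spacing in Hz
T = 1 / f0                      # symbol duration
k = 5                           # carrier index (demo value)
vals = [si(np.pi * (m - k) * f0 * T) for m in range(1, 9)]
print([round(float(v), 12) for v in vals])   # 1.0 at m = k, 0.0 elsewhere
```

This is exactly the orthogonality that lets the receiver read off each $D_k$ at $k · f_0$ despite the overlapping spectra.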
&lt;br /&gt;
&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 4:}$&amp;amp;nbsp;&lt;br /&gt;
If one assumes the conditions favorable for the ADSL downstream, namely that&lt;br /&gt;
*$4000$ frames are transmitted per second,&lt;br /&gt;
*all subcarriers are always active&amp;amp;nbsp; $(K = 255)$,&lt;br /&gt;
*each carrier is occupied with a 1024–QAM&amp;amp;nbsp; $(b = 10$; according to the ITU,&amp;amp;nbsp; $8 ≤ b ≤ 15 )$, and&lt;br /&gt;
*ideal conditions prevail, so that the orthogonality visible in the graphic is preserved,&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
then the maximum data (bit) rate is&amp;amp;nbsp; $R_{\rm B,\ max} = 4000 · K · b ≈ 10 \ \rm Mbit/s$. &lt;br /&gt;
&lt;br /&gt;
However, the ADSL downstream is specified at only&amp;amp;nbsp; $2 \ \rm Mbit/s$&amp;amp;nbsp; because of&lt;br /&gt;
*the omission of the $64$ lowest carriers due to ISDN and upstream, &lt;br /&gt;
*the QAM occupancy of the strongly attenuated carriers with fewer than $10$&amp;amp;nbsp; bits, and &lt;br /&gt;
*the allowance for the cyclic prefix, as well as some operational reasons.}}&lt;br /&gt;
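The rate figure in this example is plain arithmetic:

```python
# Maximum ADSL/DMT downstream rate under the ideal assumptions of Example 4.
frames_per_s = 4000
K = 255            # active subcarriers
b = 10             # bits per carrier (1024-QAM)

R_max = frames_per_s * K * b
print(R_max / 1e6)   # 10.2 Mbit/s, i.e. "approx. 10 Mbit/s"
```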
	 &lt;br /&gt;
&lt;br /&gt;
==DMT realization with IDFT/DFT==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:P_ID1949__Bei_T_2_3_S9a_v3.png|right|frame|Complete DMT system]]&lt;br /&gt;
The upper graphic shows the complete DMT system; for now, we concentrate on the two red blocks. The blue blocks are treated in the [[Examples_of_Communication_Systems/Verfahren_zur_Senkung_der_Bitfehlerrate_bei_DSL|next chapter]].&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1951__Bei_T_2_3_S9b_v3.png|left|frame|DMT transmitter and receiver]]&lt;br /&gt;
&amp;lt;br clear =all&amp;gt;&lt;br /&gt;
In simplified form, transmitter and receiver can be represented as in the graphic on the left:&lt;br /&gt;
&lt;br /&gt;
*To carry out the DMT modulation, the transmitter collects a block of input bits in a data buffer, which is to be transmitted as one frame.&lt;br /&gt;
*Per frame, the QAM coder delivers the complex-valued data symbols&amp;amp;nbsp; $D_1$, ... , $D_{255}$, which are extended with&amp;amp;nbsp; $D_0 = D_{256} = 0$&amp;amp;nbsp; and&amp;amp;nbsp; $D_k = D^\star_{512-k} \ (k = 257,$ ... , $511)$&amp;amp;nbsp; to form the vector&amp;amp;nbsp; $\mathbf{D}$&amp;amp;nbsp; of length $512$. &lt;br /&gt;
*As a consequence of&amp;amp;nbsp; [[Signal_Representation/Discrete_Fourier_Transform_(DFT)#Finite_Signaldarstellung|finite signals]],&amp;amp;nbsp; $D_{257}$, ... , $D_{511}$&amp;amp;nbsp; are identical to&amp;amp;nbsp; $D_{–255}$, ... , $D_{–1}$.&lt;br /&gt;
*The spectral samples&amp;amp;nbsp; $\mathbf{D}$&amp;amp;nbsp; are converted by the&amp;amp;nbsp; [[Signal_Representation/Discrete_Fourier_Transform_(DFT)#Inverse_Diskrete_Fouriertransformation|Inverse Discrete Fourier Transform]]&amp;amp;nbsp; (IDFT) into the vector&amp;amp;nbsp; $\mathbf{s}$&amp;amp;nbsp; of time-domain samples, likewise of length $512$. Because of the complex-conjugate occupancy in the spectral domain,&amp;amp;nbsp; $\text{Im}[\mathbf{s}] = 0$.&lt;br /&gt;
&lt;br /&gt;
*After parallel/serial and digital/analog conversion and low-pass filtering of&amp;amp;nbsp; $\text{Re}[\mathbf{s}]$, one obtains the physical, hence real and continuous-time transmitted signal&amp;amp;nbsp; $s(t)$. In the range&amp;amp;nbsp; $0 ≤ t ≤ T$&amp;amp;nbsp; it holds (the factor $2$ arises because two coefficients each contribute to cosine/sine):&lt;br /&gt;
&lt;br /&gt;
:$$s(t) = \sum_{k = 1}^{255}  \big [ 2 \cdot{\rm Re}\{D_k\} \cdot \cos(2\pi \cdot k  f_0 \cdot t ) - 2 \cdot{\rm Im}\{D_k\} \cdot \sin(2\pi \cdot k  f_0 \cdot t )\big ] \hspace{0.05cm}. $$&lt;br /&gt;
 &lt;br /&gt;
*The received signal after transmission over the AWGN channel is&amp;amp;nbsp; $r(t) = s(t) + n(t)$. After A/D and S/P conversion,&amp;amp;nbsp; $r(t)$&amp;amp;nbsp; can be expressed by the (real) vector&amp;amp;nbsp; $\mathbf{r}$. The&amp;amp;nbsp; [[Signal_Representation/Discrete_Fourier_Transform_(DFT)#Von_der_kontinuierlichen_zur_diskreten_Fouriertransformation|Discrete Fourier Transform]]&amp;amp;nbsp; (DFT) then delivers estimates of the transmitted spectral coefficients.&lt;br /&gt;
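The IDFT/DFT chain described above can be sketched numerically. The following is a minimal illustration, assuming Python with numpy; the QPSK-like symbol values and the noise level are purely illustrative, only the length $512$, the zero carriers $D_0 = D_{256} = 0$ and the conjugate-symmetric extension $D_k = D^\star_{512-k}$ follow the text:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512  # IDFT/DFT length per frame

# Illustrative QPSK-like data symbols D_1 ... D_255 (real DMT uses adaptive QAM per carrier)
D = np.zeros(N, dtype=complex)
D[1:256] = (rng.choice([-1.0, 1.0], 255) + 1j * rng.choice([-1.0, 1.0], 255)) / np.sqrt(2)

# D_0 = D_256 = 0 already; conjugate-symmetric extension D_k = conj(D_{512-k}), k = 257 ... 511
D[257:] = np.conj(D[255:0:-1])

# IDFT: the conjugate symmetry of the spectrum makes the time-domain vector s real
s = np.fft.ifft(D)
assert np.max(np.abs(s.imag)) < 1e-12  # Im[s] = 0, as stated in the text

# AWGN channel; the DFT at the receiver yields estimates of the spectral coefficients
r = s.real + 0.001 * rng.standard_normal(N)
D_hat = np.fft.fft(r)
```

With small noise, `D_hat[1:256]` lies close to the transmitted coefficients `D[1:256]`, which is exactly the estimation step the DFT performs at the receiver.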
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:P_ID1952__Bei_T_2_3_S10_v1.png|right|frame|Occupancy of the DMT frequency band with QAM coefficients]]&lt;br /&gt;
{{GraueBox|TEXT=&lt;br /&gt;
$\text{Example 5:}$&amp;amp;nbsp;	 &lt;br /&gt;
As an example, consider the ADSL/DMT downstream. &lt;br /&gt;
*The upper left graph shows the magnitudes&amp;amp;nbsp; $\vert D_k\vert $&amp;amp;nbsp; of the occupied subchannels $64$, ... , $255$. The carriers $0$, ... , $63$ for the frequency range reserved for ISDN and the upstream are set to zero. &lt;br /&gt;
*To the right, the spectral coefficients&amp;amp;nbsp; $D_{64}$, ... , $D_{255}$&amp;amp;nbsp; are shown in the complex plane, where a very large signal space has been chosen.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:P_ID1953__Bei_T_2_3_S10b_v1.png|left|frame|Transmitted signal for the above DMT occupancy]]&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The second (left) graph shows the transmitted signal&amp;amp;nbsp; $s(t)$&amp;amp;nbsp; for the frame duration&amp;amp;nbsp; $T = {1}/{f_0} ≈ 232 \ \rm &amp;amp;micro;s$, which results from low-pass filtering of the IDFT values&amp;amp;nbsp; $s_0$, ... , $s_{511}$. This useful signal looks almost like noise. One can see: &lt;br /&gt;
&lt;br /&gt;
*The main problem of DMT is the unfavorable crest factor &amp;amp;nbsp; &amp;amp;rArr; &amp;amp;nbsp; the ratio of the maximum value&amp;amp;nbsp; $s_{\rm max}$&amp;amp;nbsp; to the rms value&amp;amp;nbsp; $s_{\rm eff}$&amp;amp;nbsp; (square root of the mean power). &lt;br /&gt;
*The large dynamic range visible in the example signal places high demands on the linearity of the amplifiers. &lt;br /&gt;
*If the modulation range is limited, the peaks of&amp;amp;nbsp; $s(t)$&amp;amp;nbsp; are clipped.&lt;br /&gt;
*This acts like an impulse disturbance and represents an additional noise load for the system.}}&lt;br /&gt;
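The crest factor discussed in the example can be reproduced with a short numeric sketch, assuming Python with numpy; the Gaussian coefficient values are illustrative only, while the carrier occupancy $64$, ... , $255$ follows the downstream example:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 512

# Illustrative coefficients on the downstream carriers 64 ... 255 (all other carriers zero)
D = np.zeros(N, dtype=complex)
D[64:256] = rng.standard_normal(192) + 1j * rng.standard_normal(192)
D[257:] = np.conj(D[255:0:-1])  # conjugate-symmetric extension => real time signal

s = np.fft.ifft(D).real
crest = np.abs(s).max() / np.sqrt(np.mean(s**2))  # s_max / s_eff
print(f"crest factor = {crest:.2f}")
```

Because the signal is the sum of many independent subcarriers, it behaves approximately Gaussian, so the peak value exceeds the rms value several times over; this is exactly the clipping problem described in the example.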
&lt;br /&gt;
&lt;br /&gt;
{{BlaueBox|TEXT=&lt;br /&gt;
$\text{In summary:}$&amp;amp;nbsp;	&lt;br /&gt;
*&#039;&#039;Discrete Multitone Transmission&#039;&#039;&amp;amp;nbsp; (DMT) is in principle the parallel realization of many narrowband QAM modems with different carriers and comparatively low data rates. The small bandwidth per subcarrier allows a long symbol duration, thus reduces the influence of intersymbol interference and lowers the development effort for equalization.&lt;br /&gt;
*A major reason for the success of DMT is its technically simple realization. IDFT and DFT are computed in real time with digital signal processors. The vectors have length $512$ (a power of two), so the particularly fast FFT algorithm (&#039;&#039;Fast Fourier Transform&#039;&#039;) can be applied.}}&lt;br /&gt;
&lt;br /&gt;
	 &lt;br /&gt;
==Exercises for the chapter == 	 &lt;br /&gt;
&lt;br /&gt;
[[2.3_QAM–Signalraumbelegung|Exercise 2.3: QAM signal space occupancy]]&lt;br /&gt;
&lt;br /&gt;
[[2.3Z_xDSL–Frequenzband|Exercise 2.3Z: xDSL frequency band]]&lt;br /&gt;
&lt;br /&gt;
[[2.4_DSL/DMT_mit_IDFT/DFT|Exercise 2.4: DSL/DMT with IDFT/DFT]]&lt;br /&gt;
&lt;br /&gt;
[[2.4Z_Wiederholung_zur_IDFT|Exercise 2.4Z: Repetition on the IDFT]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Display}}&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
	<entry>
		<id>https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/XDSL_Systems&amp;diff=34973</id>
		<title>Examples of Communication Systems/XDSL Systems</title>
		<link rel="alternate" type="text/html" href="https://en.lntwww.lnt.ei.tum.de/index.php?title=Examples_of_Communication_Systems/XDSL_Systems&amp;diff=34973"/>
		<updated>2020-10-13T15:37:37Z</updated>

		<summary type="html">&lt;p&gt;Rosa: Rosa moved page Examples of Communication Systems/XDSL Systems to Examples of Communication Systems/xDSL Systems&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Examples of Communication Systems/xDSL Systems]]&lt;/div&gt;</summary>
		<author><name>Rosa</name></author>
	</entry>
</feed>