Monday, December 3, 2012

Data Networks review, chapter 3.

Chapter 3: Delay Models
//-------------------------------------------------------------------------
3.1 introduction
multiplexing: TDM, FDM, statistical multiplexing.
//-------------------------------------------------------------------------
3.2 queuing models - little's theorem
Very general, very important
\[
N = \lambda T
\]
N: average/typical number of customers in the system
T: average delay in the system
$\lambda$: arrival rate.

example1:
P{a server is busy} = $\lambda$ * {average service time}
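A quick simulation sketch of example 1, with hypothetical parameters: by Little's theorem applied to the server alone, the fraction of time the server is busy should equal $\lambda$ times the average service time.

```python
import random

def busy_fraction(lam, mu, n=100_000, seed=1):
    """Simulate a single FIFO server with Poisson arrivals and exponential
    service times; return the fraction of time the server is busy."""
    random.seed(seed)
    t = 0.0       # arrival clock
    depart = 0.0  # time the server next becomes free
    busy = 0.0    # accumulated busy time
    for _ in range(n):
        t += random.expovariate(lam)      # next Poisson arrival
        start = max(t, depart)            # wait if the server is occupied
        service = random.expovariate(mu)
        busy += service
        depart = start + service
    return busy / depart

# Little applied to the server: P{busy} = lambda * (average service time)
print(busy_fraction(lam=0.5, mu=1.0))  # close to 0.5 * 1.0 = 0.5
```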

//-------------------------------------------------------------------------
3.3 M/M/1
TOOL: Markov chain.
main results:
\[
\rho = \frac{\lambda}{\mu} \\
p_0 =  1 - \rho \\
p_n = \rho^n (1-\rho) \\
N = \frac{\rho}{1-\rho} \\
T = \frac{N}{\lambda} \\
W = T - \frac{1}{\mu} \\
N_Q = N - \rho = \lambda W
\]
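The M/M/1 results above chain together through Little's law; a minimal sketch that computes them and lets us sanity-check the identities (numbers are hypothetical):

```python
def mm1(lam, mu):
    """Closed-form M/M/1 quantities (requires rho < 1 for stability)."""
    rho = lam / mu
    assert rho < 1, "unstable queue"
    N = rho / (1 - rho)      # mean number in system
    T = N / lam              # Little: T = N / lambda
    W = T - 1 / mu           # waiting time = total delay - service time
    N_Q = N - rho            # mean number in queue (= lambda * W)
    return dict(rho=rho, N=N, T=T, W=W, N_Q=N_Q)

q = mm1(lam=3.0, mu=4.0)     # rho = 0.75
print(q)                     # N = 3, T = 1, W = 0.75, N_Q = 2.25
```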


A very interesting question to ask is:
"is the occupancy seen by an arrival typical?".
It can be non-typical: for example, with deterministic arrivals every 1 min and a service time of 0.5 min,
the time-average number in system is N = 0.5,
yet the number in system seen by an arrival is always 0.
conclusion:
$a_n$ : occupancy distribution upon arrival
$p_n$ : steady state  occupancy distribution
$d_n$ :  occupancy distribution upon departure.
if the arrival process is Poisson: $a_n = p_n$ (the PASTA property).
if the Markov chain is a birth-death process: $d_n = a_n$.
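Restating the deterministic example in numbers (a sketch; no simulation needed, the system is periodic):

```python
# deterministic arrivals every 1 min, deterministic service of 0.5 min
PERIOD, SERVICE = 1.0, 0.5

# time-average occupancy: the system holds 1 customer for SERVICE out of
# every PERIOD minutes, and 0 otherwise
p = {1: SERVICE / PERIOD, 0: 1.0 - SERVICE / PERIOD}
N = sum(n * prob for n, prob in p.items())
print(N)  # 0.5

# occupancy seen by an arrival: each arrival comes PERIOD minutes after the
# previous one, whose service ended PERIOD - SERVICE minutes earlier, so
# every arrival finds the system empty
a = {0: 1.0}
```

So here $a_n \ne p_n$, which is consistent: the arrival process is not Poisson.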

//-------------------------------------------------------------------------
3.4 M/M/m, M/M/∞, M/M/m/m
M/M/m
Tool: Markov chain; solve for the steady-state distribution $p_n$.
define the queueing probability:
$P_{queueing} = \sum_{n = m}^{\infty} p_n $
known as the Erlang C formula, honoring the Danish engineer A.K. Erlang.

\[
\rho = \frac{\lambda}{m \mu} \\
N_Q = P_Q \frac{\rho}{1-\rho} \\
W =  N_Q / \lambda \\
T = W + \frac{1}{\mu} \\
N  = \lambda T \\
\]
note the connection with M/M/1 here.
explanation: for those who arrive and have to join the queue, the system works like a single superserver of rate $m\mu$. Write down the Markov chain for the number of customers waiting in the queue; it is the same as an M/M/1 system.
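A sketch of the M/M/m chain of quantities above, with the Erlang C probability computed directly from $p_0$; setting m = 1 should recover the M/M/1 results, which is a useful check:

```python
from math import factorial

def mmm_quantities(lam, mu, m):
    """M/M/m: Erlang C queueing probability plus the delay quantities."""
    a = lam / mu                       # offered load in Erlangs
    rho = a / m
    assert rho < 1, "unstable system"
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(m))
                + a**m / (factorial(m) * (1 - rho)))
    P_Q = p0 * a**m / (factorial(m) * (1 - rho))   # Erlang C
    N_Q = P_Q * rho / (1 - rho)
    W = N_Q / lam
    T = W + 1.0 / mu
    return P_Q, N_Q, W, T

# with m = 1 this reduces to M/M/1: P_Q = rho, W = rho/(mu - lam)
print(mmm_quantities(lam=0.5, mu=1.0, m=1))
```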

$M/M/\infty$
This case is very interesting, because:
\[
p_n  = (\frac{\lambda}{\mu})^n \frac{e^{-\frac{\lambda}{\mu}}}{n!} \Rightarrow Poisson(\frac{\lambda}{\mu})
\]
What is system delay? No queuing time !!!! W = 0
\[
T = \frac{1}{\mu} \\
N =  \frac{\lambda}{\mu} \\
N_Q = 0 \\
W = 0
\]
Intuitively: with infinitely many servers, no arrival ever has to wait.
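A numeric sketch of the $M/M/\infty$ distribution (hypothetical rates): the $p_n$ should sum to 1 and have mean $\lambda/\mu$, exactly as a Poisson distribution must.

```python
from math import exp, factorial

def mminf_pn(lam, mu, n):
    """M/M/infinity steady state: p_n is Poisson with mean lambda/mu."""
    a = lam / mu
    return a**n * exp(-a) / factorial(n)

lam, mu = 3.0, 1.0
probs = [mminf_pn(lam, mu, n) for n in range(80)]   # tail beyond 80 is negligible
N = sum(n * p for n, p in enumerate(probs))
print(sum(probs), N)   # ~1, and mean ~3 = lambda/mu
```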


M/M/m/m Loss system
again, our tool is the Markov chain: solve for $p_n$.

actually, there is an easier way: this is a truncated $M/M/\infty$ system. For those customers who are accepted, the system indeed has $\infty$ servers, so everything stays the same except for a normalization constant,
and note the common factor $e^{-\frac{\lambda}{\mu}}$ cancels.

\[
p_0 = \left[\sum_{n = 0}^{m} \left(\frac{\lambda}{\mu}\right)^n \frac{1}{n!}\right]^{-1} \\
p_n = p_0 \left(\frac{\lambda}{\mu}\right)^n \frac{1}{n!}
\]
The most important quantity is the blocking probability (the Erlang B formula): $p_m$.
With this probability, a new arrival sees m customers already in the system and is rejected.
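A sketch of Erlang B two ways: the standard recursion (numerically stable) and the direct $p_m$ expression above; the two must agree.

```python
from math import factorial

def erlang_b(a, m):
    """Blocking probability p_m for M/M/m/m with offered load a = lambda/mu,
    via the recursion B(a, k) = a*B(a, k-1) / (k + a*B(a, k-1)), B(a, 0) = 1."""
    B = 1.0
    for k in range(1, m + 1):
        B = a * B / (k + a * B)
    return B

def erlang_b_direct(a, m):
    """Same quantity straight from the truncated-Poisson p_n above."""
    return (a**m / factorial(m)) / sum(a**n / factorial(n) for n in range(m + 1))

print(erlang_b(5.0, 7), erlang_b_direct(5.0, 7))  # the two should agree
```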


//-------------------------------------------------------------------------
3.5 M/G/1
Most important formula: Pollaczek - Khinchin formula. PK formula.
\[
\bar{X} = \textrm{average service time} \\
\bar{X^2} = \textrm{second moment of the service time} \\
W = \frac{\lambda \bar{X^2}}{2(1-\rho)}
\]
other quantities can be found using Little's law.

example1:
M/M/1
\[
\bar{X} =  \frac{1}{\mu}
\;\Rightarrow\;
E[X^2] = \frac{2}{\mu ^2}
\;\Rightarrow\;
W = \frac{\rho}{\mu-\lambda}
\]
M/D/1
\[
E[X^2] = E[X]^2 = \frac{1}{\mu ^2}
\;\Rightarrow\;
W = \frac{\rho}{2(\mu - \lambda)}
\]
we note M/D/1 has only half the queueing delay of M/M/1.
Folk law: the more random the service, the more the delay.
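The P-K formula and the half-delay observation in a few lines (rates are hypothetical):

```python
def pk_wait(lam, EX, EX2):
    """Pollaczek-Khinchin mean queueing delay for M/G/1."""
    rho = lam * EX
    assert rho < 1, "unstable queue"
    return lam * EX2 / (2 * (1 - rho))

lam, mu = 0.8, 1.0
W_mm1 = pk_wait(lam, 1/mu, 2/mu**2)  # exponential service: E[X^2] = 2/mu^2
W_md1 = pk_wait(lam, 1/mu, 1/mu**2)  # deterministic service: E[X^2] = 1/mu^2
print(W_mm1, W_md1)  # M/D/1 waits exactly half as long as M/M/1
```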

Proof of the P-K formula, following the author of the book.
how long does a new arrival have to wait in queue?
$$W = R + \frac{1}{\mu} N_Q$$
R: mean residual time.
$N_Q$: number of customer in queue seen by arrival.
using Little's law: $N_Q = \lambda W$
\[
W = \frac{R}{1-\rho}
\]
What is R? Let $r(\tau)$ be the residual service time at time $\tau$,
and let $M(t)$ be the number of services completed in the interval $[0, t]$.
$$
\begin{split}
\frac{1}{t} \int_{0}^{t} r(\tau) d\tau &= \frac{1}{t} \sum_{i = 1}^{M(t)} \frac{X_i ^2}{2}\\
&= \frac{M(t)}{t} \sum_{i = 1}^{M(t)} \frac{X_i ^2}{2 M(t)} \\
R &= \lambda \frac{E[X^2]}{2}
\end{split}
$$
by letting $t\rightarrow \infty$

M/G/1 with vacations.
Whenever the system empties, the server goes on vacation (the system is idle a fraction $1-\rho$ of the time), with
$E[V]$: average vacation time.
$E[V^2]$: second moment of the vacation time.
We note that service times and vacation times are almost indistinguishable to waiting customers.
Treat R as the "residual service or vacation time."
The derivation is similar to the proof of PK formula:
$$
\begin{split}
R &= \lim_{t\rightarrow \infty} {\frac{M(t)}{t} \sum_{i = 1}^{M(t)}\frac{X_i ^2}{2M(t)}+ \frac{L(t)}{t} \sum_{i = 1}^{L(t)}\frac{V_i ^2}{2L(t)}} \\
& = \frac{\lambda E[X^2]}{2} + \frac{(1-\rho)E[V^2]}{2E[V]} \\
\frac{L(t)}{t} & \rightarrow \frac{1-\rho}{E[V]}
\end{split}
$$
where $L(t)$ is the number of vacations completed during $[0, t]$.
Basically, the vacations increase $p_0$, however, the system can still be viewed as two subsystems:
1. when the system is working, this gives the first part of the R.
2. w.p. $1-\rho$, the system goes to vacation, leading to the second part of the R.
finally:

$$
W = \frac{\lambda E[X^2]}{2(1-\rho)} + \frac{E[V^2]}{2E[V]}
$$
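The vacation formula is the plain P-K delay plus a residual-vacation term; a small sketch with hypothetical moments:

```python
def pk_wait_vacation(lam, EX, EX2, EV, EV2):
    """M/G/1 with vacations: W = lam*E[X^2]/(2(1-rho)) + E[V^2]/(2 E[V])."""
    rho = lam * EX
    assert rho < 1, "unstable queue"
    return lam * EX2 / (2 * (1 - rho)) + EV2 / (2 * EV)

# hypothetical example: M/M/1 at rho = 0.5 (EX=1, EX2=2) with
# deterministic vacations of length 2 (EV=2, EV2=4)
W = pk_wait_vacation(0.5, 1.0, 2.0, 2.0, 4.0)
print(W)  # base P-K delay 1.0 plus residual vacation 4/(2*2) = 1.0
```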

M/G/1 with reservations.
let's consider only a single user; for the i-th packet:
1. R: residual time until the end of the current transmission, which could be either an actual transmission or reservation-making.
2. N*E[X]: transmission of the packets that arrived before it.
3. V: (same notation as vacation) it must wait a full reservation slot, during which its reservation is made.
The system is designed s.t. before its transmission, there is bound to be a reservation slot.

again
$$
\begin{split}
V &\leftarrow E[V] \\
R &\leftarrow \frac{\lambda E[X^2]}{2}  + \frac{E[V^2]}{2E[V]} (1-\rho) \\
W &= \frac{R+V}{1-\rho} = \frac{\lambda E[X^2]}{2(1-\rho)} + \frac{E[V^2]}{2E[V]} + \frac{E[V]}{1-\rho}
\end{split}
$$
This is a fully gated system, meaning only those packets that arrived prior to the user's preceding reservation interval are transmitted;
therefore every packet has to wait through a full reservation slot.

Exhaustive system: all available packets of a user are transmitted during the corresponding data interval.
Partially gated system: transmit those packets that arrived up to the beginning of the data interval.

Note the exhaustive system is equivalent to the vacation system: only those who arrive during a vacation/reservation interval have to wait.


Priority queueing, M/G/1
non-preemptive: high class CANNOT interrupt low class in service.
preemptive: high class interrupts low class in service, with no resumption for the low class. (What a bully!)
Assuming class 1 has the highest priority, class n the lowest.

non-preemptive is easier:
each class has its own $\lambda_i$ and $\mu_i$.

while class 2 waits, later-arriving class-1 customers will jump the queue!
$$
\begin{split}
W_1 &= R + \frac{N_Q ^1}{\mu_1} \\
N_Q^1 &= \lambda_1 W_1 \\
W_1 &= \frac{R}{1-\rho_1} \\
& \textrm{("class 1 jumps the queue while class 2 waits")} \\
W_2 &= R + \frac{N_Q ^1}{\mu_1} + \frac{N_Q ^2}{\mu_2}  + \frac{\lambda_1 W_2}{\mu_1} \\
W_2 &= \frac{R}{(1-\rho_1)(1- \rho_1 - \rho_2)} \\
& \textrm{("finally")} \\
W_k &= \frac{R}{(1-\rho_1 - \ldots - \rho_{k-1})(1-\rho_1 - \ldots - \rho_{k})} \\
& \text{("R is a weighted sum")} \\
R &= \frac{1}{2} \sum_{i=1}^{n} \lambda_i E[X_i ^2 ]
\end{split}
$$
The calculation of R involves all classes 1 to n, because the system is non-preemptive.
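A sketch of the non-preemptive $W_k$ recursion above; with a single exponential class it must collapse to the M/M/1 result $W = \rho/(\mu-\lambda)$, which makes a convenient test (all numbers hypothetical):

```python
def nonpreemptive_waits(lams, mus, EX2s):
    """Mean queueing delay per class for a non-preemptive M/G/1 priority
    queue (class 1 = highest priority), using the formulas above."""
    R = 0.5 * sum(l * x2 for l, x2 in zip(lams, EX2s))
    rhos = [l / m for l, m in zip(lams, mus)]
    assert sum(rhos) < 1, "unstable system"
    Ws, s = [], 0.0                      # s = rho_1 + ... + rho_{k-1}
    for rho_k in rhos:
        Ws.append(R / ((1 - s) * (1 - s - rho_k)))
        s += rho_k
    return Ws

# single exponential class, lam=0.5, mu=1: reduces to M/M/1, W = 1.0
print(nonpreemptive_waits([0.5], [1.0], [2.0]))
# two equal exponential classes: class 2 waits longer than class 1
print(nonpreemptive_waits([0.25, 0.25], [1.0, 1.0], [2.0, 2.0]))
```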



preemptive system:
a class-k arrival sees the system as an M/G/1 containing only classes 1 to k, ignoring classes (k+1) to n.

$$
\begin{split}
W_k &= \frac{R_k }{1-\rho_1 - \ldots - \rho_k } \\
R_k &= \frac{\sum_{i = 1}^{k} \lambda_i E[X_i ^2]}{2} \\
\end{split}
$$

an explanation for this:
if we consider only the first k classes, the share of service capacity seen by class k is no longer the full server; it is the fraction
$$
\frac{\rho_k}{\rho_1 + \ldots + \rho_k}
$$
therefore the effective service rate becomes:
$$
\mu_k \Rightarrow \mu_k \frac{\rho_k}{\rho_1 + \ldots + \rho_k}
$$
which is a slowdown, since the fraction is less than 1: class k shares the server with the higher-priority classes.
finally.
$$
\begin{split}
W_k &= R_k + N_Q ^k \frac{\rho_1 + \ldots + \rho_k}{\rho_k \mu_k} \\
&= R_k + \lambda_k W_k  \frac{\rho_1 + \ldots + \rho_k}{\rho_k \mu_k} \\
&= R_k + (\rho_1 + \ldots + \rho_k)W_k
\end{split}
$$
This leads to the same result.


//-------------------------------------------------------------------------
3.6 networks of transmission lines.
//-------------------------------------------------------------------------
3.7 time reversibility - Burke's theorem
backward transition probability:
$$
P_{ij}^* =  P(X_m = j \mid X_{m+1} = i) = \frac{p_j P_{ji}}{p_i}
$$
definition:
The Markov chain is time reversible if $P_{ij}^*  = P_{ij}$: forward and backward transitions happen with equal probability.

The chain is time reversible if and only if $p_i P_{ij} = p_j P_{ji}$ (the local balance equations), which holds for birth-death processes, hence for all M/M/* systems.

Burke's theorem:
consider an M/M/* system with arrival rate $\lambda$; suppose the system starts in steady state. Then:
1. the departure process is Poisson with rate $\lambda$.
2. at each time t, the number N in the system is independent of the sequence of departure times prior to t.
Proof: since the chain is time reversible,
1. arrivals and departures are indistinguishable (departures of the forward chain are arrivals of the reversed chain).
(which also proves that for a birth-death process, $d_n = a_n$.)
2. at time t, N is independent of the arrival sequence after t in the reversed chain, i.e. of the departures prior to t in the forward chain.

example: two M/M/1 queues in tandem.
the first queue is M/M/1 by nature.
the departure process of queue 1 is Poisson and independent of the service times of queue 2, therefore queue 2 also behaves like an M/M/1. (queue 2 is not literally an M/M/1; it just happens to behave like one.)
by part 2 of Burke's theorem, at time t, $N_1$ in queue 1 is independent of the departures prior to t, and hence of $N_2$ in queue 2.
$$
P[N_1 = n, N_2 = m] = P[N_1 = n]\, P[N_2 = m]
$$
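The product form above can be checked numerically; a sketch with hypothetical rates, summing the joint probabilities over a large truncated grid:

```python
def tandem_joint(lam, mu1, mu2, n, m):
    """P[N1 = n, N2 = m] for two M/M/1 queues in tandem (product form)."""
    r1, r2 = lam / mu1, lam / mu2
    return (r1**n * (1 - r1)) * (r2**m * (1 - r2))

lam, mu1, mu2 = 1.0, 2.0, 4.0
total = sum(tandem_joint(lam, mu1, mu2, n, m)
            for n in range(200) for m in range(200))
print(total)  # joint probabilities sum to ~1 (truncation error is negligible)
```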


//-------------------------------------------------------------------------
3.8 Networks of queues - Jackson's theorem.
solve linear system:
$$
\lambda_j = r_j + \sum_{i = 1}^{K}\lambda_i P_{ij}
$$
then $\rho_j = \frac{\lambda_j }{\mu_j}$
assuming that $\rho _j <1$, for all j, then
$$
P(n)  = \prod _{j = 1}^{K} {\rho_j ^{n_j}(1-\rho_j)}
$$
as if they were all independent M/M/1 queues.
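A sketch of step one, solving the traffic equations $\lambda_j = r_j + \sum_i \lambda_i P_{ij}$ by fixed-point iteration (which converges for an open network, where every customer eventually leaves); the network below is a hypothetical example:

```python
def internal_rates(r, P, iters=2000):
    """Solve lambda_j = r_j + sum_i lambda_i * P[i][j] by fixed-point
    iteration; r = external arrival rates, P = routing matrix."""
    K = len(r)
    lam = list(r)
    for _ in range(iters):
        lam = [r[j] + sum(lam[i] * P[i][j] for i in range(K))
               for j in range(K)]
    return lam

# hypothetical 2-queue network: external arrivals at queue 1 only,
# queue 1 feeds queue 2, queue 2 feeds back to queue 1 w.p. 0.3
r = [1.0, 0.0]
P = [[0.0, 1.0],
     [0.3, 0.0]]
lam = internal_rates(r, P)
print(lam)  # solves lam1 = 1 + 0.3*lam2, lam2 = lam1, i.e. both = 1/0.7
```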

closed Jackson's theorem
first, for state independent system:
solve the linear system:
$$
\lambda_j = \sum_{i = 1}^{K}\lambda_i P_{ij}
$$
define
$$
\rho_j= \frac{\lambda_j }{\mu_j}
$$
j = 1, ..., K

$$
G(M) = \sum_{n_1+n_2+\ldots+n_K = M} {\rho_1 ^{n_1} \rho_2 ^{n_2} \ldots \rho_K ^{n_K}}
$$

Closed Jackson network:
under the preceding assumption
$$
P(n_1, n_2, \ldots, n_K) = \frac{\rho_1 ^{n_1} \rho_2 ^{n_2} \ldots \rho_K ^{n_K}}{G(M)}
$$
where $G(M)$ can be viewed as a simple normalization constant.

The key is:
1. solve for the internal arrival rates.
2. treat each queue like an independent M/M/1.
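For small networks, $G(M)$ can be computed by brute-force enumeration of all states with $n_1+\ldots+n_K = M$; a sketch (the $\rho_j$ values below are hypothetical):

```python
from itertools import product

def weight(rhos, ns):
    """Unnormalized product rho_1^{n_1} * ... * rho_K^{n_K}."""
    w = 1.0
    for r, n in zip(rhos, ns):
        w *= r**n
    return w

def closed_jackson(rhos, M):
    """Closed Jackson network: dict mapping each state (n_1,...,n_K) with
    n_1+...+n_K = M to its probability; G(M) is the normalization constant."""
    states = [ns for ns in product(range(M + 1), repeat=len(rhos))
              if sum(ns) == M]
    G = sum(weight(rhos, ns) for ns in states)   # normalization constant G(M)
    return {ns: weight(rhos, ns) / G for ns in states}

dist = closed_jackson([0.5, 1.0, 2.0], M=3)
print(sum(dist.values()))  # 1.0 by construction
```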




//-------------------------------------------------------------------------end

