Monday, December 17, 2012

latex and bibtex

1. Create a .bib file.
2. In the LyX file, at the very bottom, Insert --- List/TOC --- BibTeX bibliography.
Note: for the year-author problem, go to Settings --- Biblio... --- Citation style -> Default.

Friday, December 7, 2012

john carmack quotes.

Focused, hard work is the real key to success. Keep your eyes on the goal, and just keep taking the next step towards completing it. If you aren't sure which way to do something, do it both ways and see which works better.

focus is a matter of deciding what things you're not going to do.


Wednesday, December 5, 2012

FSM Verilog project, Two FPGA transmission.

This project implements serial transmission between two FPGAs. The final behavior, after resetting FPGA A and B, is described below.

NOTE: when writing an FSM, we usually use 3 always blocks.

1. The state transition cur_state <= next_state: sequential, using non-blocking (<=) assignments.
2. Computing next_state: this part should be combinational, and the sensitivity list should contain every signal that can change next_state (see the code below for examples). Inside this block, use blocking (=) assignments.
3. What the FSM does in each state: this part is sequential, like a DFF, using <=.

//---------------------Final functionality.
Set the switches on FPGA A and press its push button: B's LEDs should light up according to A's switches.
Set the switches on B and press its button: A's LEDs should light up according to B's switches.

//-------------------data pattern:
uart_tx: 1 when idle; then a start bit (0), 8 data bits, 1 parity bit, and 1 stop bit (high).
//-----------------------transmitter.

`timescale 1ns / 1ps
//////////////////////////////////////////////////////////////////////////////////
// Engineer: Wenshuai Hou
// transmitter: when the button is pressed, it reads the 8-bit value from SW, sends it out serially through uart_tx, then waits up to 15 cycles for rx_done; if no ack arrives, it retransmits.

module transmitter(
input clock_50M,
input reset,
input [7:0] sw,
input rx_done,
input btn,
output reg uart_tx
    );

parameter [1:0] IDLE = 0,
XMIT = 1,
ACK = 2;

reg [1:0] curr, next;
reg [3:0] count, waiting;


// this always block is always sequential, <=
always @ (posedge clock_50M or posedge reset) begin
if(reset == 1) curr <= IDLE;
else curr <= next;
end

//this one determines the next_state, should be combinational.
always @ (curr or btn or count or waiting or rx_done) begin
case(curr)
IDLE:
next = (btn == 1)? XMIT: IDLE;

XMIT:
next = (count == 0)? ACK: XMIT;

ACK: begin
if(rx_done == 1)
next = IDLE;
else if(rx_done == 0 && waiting < 15)
next = ACK;
else
next = XMIT;

end

default: next = IDLE; // cover the unused 2'b11 encoding to avoid latch inference
endcase

end

// this always block determines what the FSM does in each state.
reg [10:0] tx_data;
always @ (posedge clock_50M) begin

case(curr)
IDLE:
begin
uart_tx <= 1;
count <= 10;
waiting <= 0;
tx_data <= {1'b0, sw, (^sw), 1'b1};
end

XMIT:
begin
uart_tx <= tx_data[count];
count <= count - 1;
waiting <= 0;
end

ACK:
begin
uart_tx <= 1;
waiting <= waiting + 1;
count <= 10;
end
endcase

end


endmodule



//-----------------------receiver part 

//////////////////////////////////////////////////////////////////////////////////
// Company: 
// Engineer: Wenshuai Hou 

//////////////////////////////////////////////////////////////////////////////////
module receiver(
input clock_50M, 
input reset,
input uart_rx,  //serial input
output reg tx_done, //ack to the transmitter
output reg [7:0] leds //lit up the LEDS based on the received word.
    );
parameter [1:0] IDLE = 0,
READ = 1,
ACK = 2;
reg [1:0] curr, next;
reg [9:0] rx_data;
reg [3:0] count; 
always @ (posedge clock_50M or posedge reset) begin
if(reset == 1) curr <=  IDLE;
else curr <= next;
end
always @ (curr or uart_rx or count) begin
case(curr)
IDLE: next = (uart_rx == 0)? READ : IDLE;
// a falling edge on uart_rx marks the start bit.
READ: next = (count == 9)? ACK : READ; // go to ACK once reception completes
ACK: next = IDLE;
default: next = IDLE; // cover the unused 2'b11 encoding
endcase
end
always@ (posedge clock_50M) begin
case(curr)
IDLE: begin
count <= 0 ;
tx_done <= 0 ;
end
READ: begin
rx_data[count] <= uart_rx; //save the received values.
count <= count +1;
end
ACK: begin
tx_done <= ((^rx_data[8:0]) == 0); //check for even parity. 
leds <= rx_data[7:0];
end
endcase
end

endmodule



Monday, December 3, 2012

Data Networks review, chapter 4.

chapter 4, multiaccess communication. 
//--------------------------------------------------
4.1 introduction.
different types of networks. 

//--------------------------------------------------
4.2 slotted multiaccess and the aloha system.
Idealized slotted multiaccess model assumptions:
1. slotted system
2. Poisson arrival
3. collision or perfect reception.
4. 0, 1, e immediate feedback.
5. retransmission of collisions.
6a. no buffering: gives a lower bound for delay (no queueing time).
6b. infinitely many nodes: gives an upper bound for delay, because packets that would have shared one node's queue can now interfere with each other and collide.

Slotted ALOHA under assumption 6a.
m nodes, n of them backlogged.
$q_a$: transmission probability for each unbacklogged node.
$q_r$: retransmission probability for each backlogged node.
Using the drift method:
$$
D_n  = (m-n) q_a  - P_{success} \\
P_{success} \approx G(n) e^{-G(n)} \\
G(n) = (m-n) q_a + n q_r
$$
using the approximation $(1-x)^y \approx e^{-xy}$ for small x.
G(n) is the expected number of attempted transmissions in one slot when n nodes are backlogged.
The only case of interest is $q_r > q_a$. When $q_r$ is small enough for stability, it can be shown that the delay exceeds that of TDM.

Assume the number of attempted transmissions in one slot is Poisson(G(n)).
The optimal throughput is obtained at $G(n) = 1 \Rightarrow P_{success} = 1/e$; also $P_{idle} = P(\mathrm{Poisson}(G(n)) = 0) = e^{-G(n)} = 1/e$,
therefore $P(collision) = 1 - 2/e$.
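These numbers are easy to check numerically; a minimal sketch (the function name is mine, not from the book):

```python
import math

def throughput(G):
    """Slotted-ALOHA throughput: P(exactly one attempt) when the number
    of attempts per slot is Poisson(G)."""
    return G * math.exp(-G)

# Scan attempt rates G; the maximum should sit at G = 1 with value 1/e.
best = max((g / 100 for g in range(1, 301)), key=throughput)
print(best, throughput(best))
```

At G = 1 a slot is idle w.p. 1/e and successful w.p. 1/e, leaving 1 - 2/e for collisions.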

Instead of 6a, under 6b we have $G(n) = \lambda + n q_r$: the arrival curve becomes the horizontal line $\lambda$,
meaning:
$$
D_n  = \lambda - P_{success}
$$
the undesirable stable point disappears; but once n passes the unstable equilibrium, n tends to increase without bound, and the throughput drops to 0.



Define stability:
with infinitely many nodes (6b), the system is stable for a given arrival rate if the expected delay is finite.

aloha is unstable.
//--------------------------------------------------
4.3 splitting algorithm
Resolve each collision fully before moving on, exploiting all the feedback information.
example:
2-node collision: E[CRP length] = E[Geo(1/2)] + 1 = 3 slots.

Two improvements:
1. After feedback (e, 0), split the right subset automatically: it is known to hold at least 2 packets, so letting it transmit whole would produce a collision w.p. 1.
2. After feedback (e, e), discard the right half.
Proof: after the first e, split into subsets of sizes $l, r$.
$$
P(r \mid l>1,\, l+r>1) =  P(r \mid l>1) = P(r)
$$
so r has the same distribution as an untested set.

FCFS splitting algorithm.
//--------------------------------------------------end


Data Networks review, chapter 3.

Chapter 3: delay models.
//-------------------------------------------------------------------------
3.1 introduction
multiplexing: TDM, FDM, statistical multiplexing.
//-------------------------------------------------------------------------
3.2 queuing models - little's theorem
Very general, very important
\[
N = \lambda T
\]
N: average/typical number of customers in the system
T: average delay in the system
$\lambda$: arrival rate.

example1:
P{a server is busy} = $\lambda$ * {average service time}

//-------------------------------------------------------------------------
3.3 MM1
TOOL: Markov chain.
main results:
\[
\rho = \frac{\lambda}{\mu} \\
p_0 =  1 - \rho \\
p_n = \rho^n (1-\rho) \\
N = \frac{\rho}{1-\rho} \\
T = \frac{N}{\lambda} \\
W = T - \frac{1}{\mu} \\
N_Q = N - \rho = \lambda W
\]
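The chain of formulas above can be packaged into a small sketch (the function name is mine):

```python
def mm1(lam, mu):
    """Steady-state M/M/1 quantities for arrival rate lam < service rate mu."""
    rho = lam / mu                 # utilization
    N = rho / (1 - rho)            # mean number in system
    T = N / lam                    # Little's law: T = N / lambda
    W = T - 1 / mu                 # mean wait in queue
    N_Q = N - rho                  # mean number in queue (= lam * W)
    return rho, N, T, W, N_Q
```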


A very interesting question to ask is:
"is the queue seen by an arrival typical?"
It can be non-typical. For example, with deterministic arrivals (one per minute) and service time 0.5 min,
the time-average number in system is N = 0.5,
yet the number in system seen by an arrival is always 0.
conclusion:
$a_n$ : occupancy distribution upon arrival
$p_n$ : steady state  occupancy distribution
$d_n$ :  occupancy distribution upon departure.
if the arrival process is Poisson: $a_n = p_n$ (the PASTA property: Poisson Arrivals See Time Averages).
if the chain is a birth-death process: $d_n = a_n$.

//-------------------------------------------------------------------------
3.4 MMm, MMinfty, MMmm
MMm
Use the Markov chain to solve for the steady-state distribution $p_n$.
define the queueing probability:
$P_{queueing} = \sum_{n = m}^{\infty} p_n $
known as the Erlang C formula, honoring the Danish engineer A. K. Erlang.

\[
\rho = \frac{\lambda}{m \mu} \\
N_Q = P_Q \frac{\rho}{1-\rho} \\
W =  N_Q / \lambda \\
T = W + \frac{1}{\mu} \\
N  = \lambda T \\
\]
note the connection with MM1 here.
explanation: for the customers who come and join the queue, the m servers work like one super-server of rate $m\mu$. Writing down the Markov chain for the number of customers waiting in the queue, it is the same as an MM1 system.
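A sketch of the M/M/m chain of formulas (function names are mine); for m = 1 the queueing probability reduces to $\rho$, matching M/M/1:

```python
import math

def erlang_c(lam, mu, m):
    """P(queueing): probability an arrival finds all m servers busy."""
    a = lam / mu                     # offered load in Erlangs
    rho = a / m                      # per-server utilization, must be < 1
    tail = (a ** m / math.factorial(m)) / (1 - rho)   # sum of p_n over n >= m
    head = sum(a ** k / math.factorial(k) for k in range(m))
    return tail / (head + tail)

def mmm_delays(lam, mu, m):
    """W and T from the Erlang C probability, as in the formulas above."""
    rho = lam / (m * mu)
    N_Q = erlang_c(lam, mu, m) * rho / (1 - rho)
    W = N_Q / lam
    T = W + 1 / mu
    return W, T
```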

$M/M/\infty$
This case is very interesting, because:
\[
p_n  = (\frac{\lambda}{\mu})^n \frac{e^{-\frac{\lambda}{\mu}}}{n!} \Rightarrow Poisson(\frac{\lambda}{\mu})
\]
What is the system delay? There is no queueing time at all! W = 0:
\[
T = \frac{1}{\mu} \\
N =  \frac{\lambda}{\mu} \\
N_Q = 0 \\
W = 0
\]
All intuitive.


M/M/m/m loss system
Again, our tool is the Markov chain: solve for $p_n$.

Actually, there is an easier way: this is a truncated $M/M/\infty$. For the customers that are accepted, the system indeed has "infinitely many" servers, so everything stays the same except for a normalization constant
(note the common factor $e^{-\frac{\lambda}{\mu}}$ cancels).

\[
p_0 = [\sum_{n = 0}^{m} (\frac{\lambda}{\mu})^n \frac{1}{n!}]^{-1} \\
p_n = p_0 (\frac{\lambda}{\mu})^n \frac{1}{n!}
\]
The most important quantity is the blocking probability (Erlang B formula): $p_m$.
With this probability a new arrival finds m customers already in the system and is rejected.
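The blocking probability can be computed without large factorials using a standard recursion (a sketch; the function name is mine):

```python
def erlang_b(a, m):
    """Erlang B blocking probability p_m for offered load a = lam/mu and
    m servers, via the recursion B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1))."""
    B = 1.0
    for k in range(1, m + 1):
        B = a * B / (k + a * B)
    return B
```

The recursion is algebraically equivalent to the truncated-Poisson formula above and avoids overflow for large m.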


//-------------------------------------------------------------------------
3.5 MG1
Most important formula: Pollaczek - Khinchin formula. PK formula.
\[
\bar{X} = \textrm{mean service time} \\
\bar{X^2} = \textrm{second moment of the service time} \\
W = \frac{\lambda \bar{X^2}}{2(1-\rho)}
\]
other quantities can be found using Little's law.

example1:
MM1
\[
\bar{X} =  \frac{1}{\mu}
\Rightarrow
E[X^2] = \frac{2}{\mu ^2} \\
W = \frac{\rho}{\mu-\lambda}
\]
MD1
\[
E[X^2] = E[X]^2 = \frac{1}{\mu ^2}
\Rightarrow
W = \frac{\rho}{2(\mu - \lambda)}
\]
we note MD1 has only half the queueing delay of MM1.
Folk law: the more random, the more delay.
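A sketch of the P-K formula with the two service distributions plugged in (names are mine):

```python
def pk_wait(lam, EX, EX2):
    """Pollaczek-Khinchin mean queueing wait: W = lam*E[X^2] / (2*(1-rho))."""
    rho = lam * EX
    return lam * EX2 / (2 * (1 - rho))

lam, mu = 1.0, 2.0
W_mm1 = pk_wait(lam, 1 / mu, 2 / mu ** 2)  # exponential service: E[X^2] = 2/mu^2
W_md1 = pk_wait(lam, 1 / mu, 1 / mu ** 2)  # deterministic service: E[X^2] = 1/mu^2
print(W_mm1, W_md1)
```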

Proof of the PK formula (following the book's authors).
how long does a new arrival have to wait in queue?
$$W = R + \frac{1}{\mu} N_Q$$
R: mean residual time.
$N_Q$: number of customer in queue seen by arrival.
using little law: $N_Q = \lambda * W$
\[
W = \frac{R}{1-\rho}
\]
What is R?
over the interval [0,t], let $M(t)$ be the number of services completed.
$$
\begin{split}
\frac{1}{t} \int_{0}^{t} r(\tau) d\tau &= \frac{1}{t} \sum_{i = 1}^{M(t)} \frac{x_i ^2}{2}\\
&= \frac{M(t)}{t} \sum_{i = 1}^{M(t)} \frac{x_i ^2}{2 M(t)} \\
R &= \lambda \frac{E[X^2]}{2}
\end{split}
$$
by letting $t\rightarrow \infty$

MG1 with vacations.
Whenever a busy period ends (which happens a fraction $1-\rho$ of the time), the server goes on vacation, with
$E[V]$: mean vacation time.
$E[V^2]$: second moment of the vacation time.
We note that, from the queue's viewpoint, service time and vacation time are almost indistinguishable.
Treat R as the "residual service or vacation time."
The derivation is similar to the proof of PK formula:
$$
\begin{split}
R &= \lim_{t\rightarrow \infty} {\frac{M(t)}{t} \sum_{i = 1}^{M(t)}\frac{X_i ^2}{2M(t)}+ \frac{L(t)}{t} \sum_{i = 1}^{L(t)}\frac{V_i ^2}{2L(t)}} \\
& = \frac{\lambda E[X^2]}{2} + \frac{(1-\rho)E[V^2]}{2E[V]} \\
\frac{L(t)}{t} & \rightarrow \frac{1-\rho}{E[V]}
\end{split}
$$
where $L(t)$ is the number of vacation completed during [0, t].
Basically, the vacations soak up the idle time; the system can still be viewed as two parts:
1. while the server is working, giving the first term of R;
2. w.p. $1-\rho$, the server is on vacation, giving the second term of R.
finally:

$$
W = \frac{\lambda E[X^2]}{2(1-\rho)} + \frac{E[V^2]}{2E[V]}
$$

MG1 with reservations.
Consider only a single user; the i-th packet must wait for:
1. R: the residual of the current interval, which may be either an actual transmission or a reservation being made.
2. N*E[X]: the transmissions of the packets already queued ahead of it.
3. V: (denoted like a vacation) a full reservation slot, during which its own reservation is made.
The system is designed s.t. before each transmission there is bound to be a reservation slot.

again
$$
\begin{split}
V &\leftarrow E[V] \\
R &\leftarrow \frac{\lambda E[X^2]}{2}  + \frac{E[V^2]}{2E[V]} (1-\rho) \\
W &= \frac{R+V}{1-\rho} = \frac{\lambda E[X^2]}{2(1-\rho)} + \frac{E[V^2]}{2E[V]} + \frac{E[V]}{1-\rho}
\end{split}
$$
This is the fully gated system, meaning only those packets that arrived prior to the user's preceding reservation interval are transmitted;
therefore every packet has to wait through a full reservation slot.

Exhaustive system: all available packets of a user are transmitted during the corresponding data interval.
partially gated system: transmit those packets arrived up to the beginning of the data interval.

Note the exhaustive system is equivalent to the vacation system, only those who arrive see vacation/reservation have to wait.


Priority queuing MG1
non-preemptive: a high-priority class CANNOT interrupt a low-priority class in service.
preemptive: a high-priority class interrupts a low-priority class in service, with no resumption for the low class. (How rude!)
Assume class 1 has the highest priority and class n the lowest.

the non-preemptive case is easier:
each class has its own $\mu_i$ and $\lambda_i$.

while a class-2 packet waits, later-arriving class-1 packets jump ahead of it in the queue!
$$
\begin{split}
W_1 &= R + \frac{N_Q ^1}{\mu_1} \\
N_Q^1 &= \lambda_1 W_1 \\
W_1 &= \frac{R}{1-\rho_1} \\
&\textrm{("class 1 jumps the queue while class 2 waits")} \\
W_2 &= R + \frac{N_Q ^1}{\mu_1} + \frac{N_Q ^2}{\mu_2}  + \frac{\lambda_1 W_2}{\mu_1} \\
W_2 &= \frac{R}{(1-\rho_1)(1- \rho_1 - \rho_2)} \\
&\textrm{(finally)} \\
W_k &= \frac{R}{(1-\rho_1 - \ldots - \rho_{k-1})(1-\rho_1 - \ldots - \rho_{k})} \\
&\text{(R is a weighted sum over all classes)} \\
R &= \frac{1}{2} \sum_{i=1}^{n} \lambda_i E[X_i ^2 ]
\end{split}
$$
The calculation of R involves all classes 1 through n, because the system is non-preemptive.
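The recursion above can be sketched as follows (a hypothetical helper, class indexed from highest priority):

```python
def nonpreemptive_waits(lams, mus, EX2s):
    """Mean queueing wait per class in a non-preemptive M/G/1 priority queue:
    W_k = R / ((1 - s_{k-1}) * (1 - s_k)), where s_k = rho_1 + ... + rho_k
    and R = (1/2) * sum_i lam_i * E[X_i^2] over ALL classes."""
    R = 0.5 * sum(l * x2 for l, x2 in zip(lams, EX2s))
    waits, s = [], 0.0
    for l, m in zip(lams, mus):
        s_prev, s = s, s + l / m   # cumulative utilization through this class
        waits.append(R / ((1 - s_prev) * (1 - s)))
    return waits
```

With a single exponential class this collapses to the plain M/G/1 result.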



preemptive system:
an arrival of class k sees the system as an M/G/1 containing only classes 1 through k, ignoring classes (k+1) through n.

$$
\begin{split}
W_k &= \frac{R_k }{1-\rho_1 - \ldots - \rho_k } \\
R_k &= \frac{\sum_{i = 1}^{k} \lambda_i E[X_i ^2]}{2} \\
\end{split}
$$

an explanation for this:
if we consider only the first k classes, the load fraction belonging to class k is no longer $\rho_k$ but the normalized share
$$
\frac{\rho_k}{\rho_1 + \ldots + \rho_k}
$$
and the effective service rate of class k is scaled by that share:
$$
\mu_k \Rightarrow \mu_k \frac{\rho_k}{\rho_1 + \ldots + \rho_k}
$$
reflecting that class k gets only part of the server.
finally.
$$
\begin{split}
W_k &= R_k + N_Q ^k \frac{\rho_1 + \ldots + \rho_k}{\rho_k \mu_k} \\
&= R_k + \lambda_k W_k  \frac{\rho_1 + \ldots + \rho_k}{\rho_k \mu_k} \\
&= R_k + (\rho_1 + \ldots + \rho_k)W_k
\end{split}
$$
This leads to the same result.


//-------------------------------------------------------------------------
3.6 networks of transmission lines.
//-------------------------------------------------------------------------
3.7 time reversibility - Burke's theorem
backward transition probability:
$$
P_{ij}^* =  P(X_m = j \mid X_{m+1} = i) = \frac{p_j P_{ji}}{p_i}
$$
definition:
The Markov chain is time reversible if $P_{ij}^*  = P_{ij}$: forward and backward transitions happen with equal probability.

The chain is time reversible if and only if $p_i P_{ij} = p_j P_{ji}$ (the local balance equations), which holds for birth-death processes, hence for all M/M/* systems.

Burke's theorem:
consider an M/M/* system with arrival rate $\lambda$ that starts in steady state. Then:
1. the departure process is Poisson with rate $\lambda$.
2. at each time t, the number N in the system is independent of the sequence of departure times prior to t.
Proof sketch: since the chain is time reversible,
1. arrivals and departures are indistinguishable
(which also proves $d_n = a_n $ for birth-death processes);
2. at time t, N is independent of the arrival sequence after t, which in reversed time is exactly the departure sequence prior to t.

example: two MM1 queues in tandem.
the first queue is MM1 by construction.
the departure process of queue 1 is Poisson and independent of the service times of queue 2, so queue 2 also behaves like an MM1 (it is not literally an MM1; it just happens to behave like one).
by part 2 of Burke's theorem, at time t, N1 in queue 1 is independent of the departures prior to t, and hence of N2 in queue 2.
$$
P[N1 = n, N2 = m] = P[N1 = n] P[N2 = m]
$$


//-------------------------------------------------------------------------
3.8 Networks of queues - Jackson's theorem.
solve linear system:
$$
\lambda_j = r_j + \sum_{i = 1}^{K}\lambda_i P_{ij}
$$
then $\rho_j = \frac{\lambda_j }{\mu_j}$
assuming that $\rho _j <1$, for all j, then
$$
P(n)  = \prod _{j = 1}^{K} {\rho_j ^{n_j}(1-\rho_j)}
$$
as if they were all independent MM1.
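The two steps (solve the traffic equations, then use the product form) can be sketched as follows; I use a fixed-point iteration rather than a direct linear solve, and the names are mine:

```python
def jackson_rates(r, P, iters=500):
    """Total arrival rates lam_j = r_j + sum_i lam_i * P[i][j], found by
    fixed-point iteration (converges for a substochastic routing matrix P)."""
    K = len(r)
    lam = list(r)
    for _ in range(iters):
        lam = [r[j] + sum(lam[i] * P[i][j] for i in range(K)) for j in range(K)]
    return lam

def jackson_prob(n, lam, mu):
    """Product-form P(n_1, ..., n_K): each queue as an independent M/M/1."""
    p = 1.0
    for nj, lj, mj in zip(n, lam, mu):
        rho = lj / mj
        p *= (1 - rho) * rho ** nj
    return p
```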

closed Jackson's theorem
first, for state independent system:
solve the linear system:
$$
\lambda_j = \sum_{i = 1}^{K}\lambda_i P_{ij}
$$
define
$$
\rho_j= \frac{\lambda_j }{\mu_j}
$$
j = 1, ..., K

$$
G(M) = \sum_{n_1+n_2+\ldots+n_K = M} {\rho_1 ^{n_1} \rho_2 ^{n_2} \ldots \rho_K ^{n_K}}
$$

Closed Jackson network:
under the preceding assumption
$$
P(n_1, n_2, \ldots, n_K) = \frac{\rho_1 ^{n_1} \rho_2 ^{n_2} \ldots \rho_K ^{n_K}}{G(M)}
$$
where $G(M)$ can be viewed as a simple normalization constant.

The key is:
1. solve for the internal arrival rates.
2. treat each queue like independent MM1.




//-------------------------------------------------------------------------end


harmonica tabs.

godfather:
  5   -6  7  -7    -6  7   -6  -7  -6  -5    6 
Speak soft-ly love and hold me warm a-gainst your  
  5 
heart, 
5  -6   7    -7    -6   7 -6    -7   -6   -5   5 
I feel your words, the ten-der trem-bling mo-ments 
 -4 
start, 
  -4  -5 -6 -7    -4  -5-6 -6 
We're in a world, our ve-ry own, 
 5   6  -5  -5   5   6 -5  -5   5   -7  7   -6 
Sha-ring a love that on-ly few have ev-er known, 
 -6   -6  6    6      -7   -6 -5   5 
Wine col-ored days, warmed by the sun, 
  5   6   5    -4     -4  -5 -4   5 
Deep vel-vet nights, when we are one, 
  5   -6   7   -7   -6  7  -6  -7   -6 -5   6 
Speak soft-ly love, so no one hears us but the 
 5  
sky, 
 5   -6   7  -7   -6  7     -6   -7   -6 -5  5 
The vows of love, we make, will live, un-til we  
 -4 
die, 
-4  -5  -6  -7    -5 -6  -7  6 
My life is yours and all be-cause, 
 5   6   -5 -5  5   6    -5   -5    5   -7  7  
You came in-to my world with love, so soft-ly, 
 -6 
love. 

Godfather 2. 
S-L-O-W-L-Y and S-O-F-T-L-Y (with lots of feeling) 

8   -8 8   -9   8 8   7 

8   -8 8   -9   8 8   -7 

8   -8 8    9  8 -8 -9  8 

7   -7 7   8   -8 -7   

8   -8 8   -9   8 8    7 

8   -8 8   -9   8 8   -7 

8   -8 8    9   8 -8 -9 8 

7   -7 7   8  -8 -7  

End with: 
6  6  6  6  -6  -7  7 


happy birthday to you 
6 6 -6 6 7 -7
6 6 -6 6 -8 7 
6 6 9 8 7 -7 -6
-9 -9 8 7 -8 7

Sunday, December 2, 2012

mac screenshot shortcuts

I always forget these and have to google them every time, so I'm writing them down here.

Command-Shift-4, then select an area: Take a screenshot of an area and save it as a file on the desktop

Command-Control-Shift-4, then select an area: Take a screenshot of an area and save it to the clipboard

about paper sizes

The common paper sizes are A4, A3, ... but what are their dimensions and aspect ratios?

Folding an A3 sheet in half produces an A4 sheet with the same aspect ratio.
If a sheet has height y and width x, this requires
$$
y = \alpha *x \\
x = \alpha * y/2\\
\Rightarrow
\alpha = \sqrt{2}
$$
Then define the A0 sheet to have area $1 m^2 $ and aspect ratio $\sqrt{2}$; each successive fold in half gives A1, A2, A3, A4, A5, ...
And a paper mill only needs to manufacture and stock a single size, A0.

So clever!
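The derivation above pins down every size in the series; a quick sketch:

```python
import math

def a_size(n):
    """Width and height in meters of A-series paper An: A0 has area 1 m^2,
    aspect ratio sqrt(2), and each fold halves the area."""
    area = 1.0 / 2 ** n
    width = math.sqrt(area / math.sqrt(2))
    return width, width * math.sqrt(2)

w, h = a_size(4)
print(round(w * 1000), round(h * 1000))  # the familiar 210 x 297 mm of A4
```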

how to order pho

pho  -- pronounced as fuh


/////--------------------meats you will see on the cryptic menu
tai - round eye steak: usually raw, thin slices that cook the moment they hit the broth. If you get takeout, it's cooked by the time you get home, and has effectively turned into chin.


chin - well done brisket: same as tai, only cooked.


nam - well done flank: cooked; a bit like dry-sliced beef.

gau - fatty brisket: my favorite, sliced beef that is half fat, half lean.

gan - tendon: beef tendon, also delicious.

sach - tripe: love the beef tripe.

bo vien - meat balls.


-------------------------the simple vocabulary


brisket - beef breast meat
flank - side / belly cut
tendon - beef tendon
tripe - beef stomach lining


---------------------how to eat it

Once you've ordered, just wait; there are usually small and large bowls.

First comes a plate of bean sprouts, basil (Thai basil), green chili peppers, and lime.
Pick the basil leaves into the soup, add the bean sprouts, and squeeze the lime juice in. That green chili is seriously hot...

On the table there is usually also a red sauce and a hoisin sauce; add them as you like. I prefer to squeeze them onto a plate and dip the meat, so they don't change the flavor of the broth.


------------------remember to pay!

latex on blogger: Blogger supports latex.

First create a new post, then switch to html mode and add at the very top:

<script type="text/x-mathjax-config">   MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}}); </script> <script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"> </script>

Then switch back to compose mode. And... that's it!

Let's try it:
 \[P(E) = {n \choose k} p^k (1-p)^{n-k}\]

The latex syntax, a short guide:
ftp://ftp.ams.org/pub/tex/doc/amsmath/short-math-guide.pdf
OR
http://fluidmind.org/articles/typesetting-equations/ (highly recommended)

For those already quite familiar with latex, here is a latex cheat sheet:
http://www.stdout.org/~winston/latex/latexsheet.pdf

Saturday, December 1, 2012

on the minimum of exponentials, and memorylessness

If x1, x2, ..., xn are n iid (independent, identically distributed) exponential random variables, expo(mu), then
min(x1, ..., xn) is also exponential: expo(n*mu). The proof is simple:
P(min < y) = 1 - P(min > y) = 1 - P(all of them > y) = .....

Also keep firmly in mind the memoryless property of the exponential.

Together these lead to some interesting results.

-------------------------a puzzle
You walk into a bank with four counters, currently serving customers a, b, c, d. Their service times are iid expo(1), i.e., one minute each on average. Nobody comes in after you.

What is the probability that you are the last one to leave?
1/4. How neat! First, one of a, b, c, d finishes and leaves, and that is when you step up to be served. From that moment on, by memorylessness, the four people at the counters all face brand-new iid exponential service times! (Whether this matches reality is not my department.)
Among four fresh iid exponentials, each one is the largest with probability 1/4.

How long should you expect to spend?
Expected time, of course. First you wait E[min(a,b,c,d)] = 1/4, since min(a,b,c,d) ~ expo(4).
Then comes your own service time.
Total time = 1/4 + 1 = 5/4.

What is the probability that a is the last one to leave?
Use the total probability theorem:
If a is the first to leave, the probability that a is the last to leave is ~~~~~~~~ 0 (obviously).
If a is not the first to leave, which happens with probability (w.p.) 3/4, then at the moment you step up, a is the last to leave with probability 1/4.
So P(a is the last to leave) = 3/4 * 1/4 = 3/16.

There is also a slick shortcut:
1. a, b, c, d are indistinguishable.
2. Among the five people, you included, someone has to leave last (obvious again).
So take away the probability that you leave last, and split the rest evenly:
(1 - 1/4)/4 = 3/16.

See how much power there is in stating the obvious!
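All three answers are easy to check by simulation (a Monte Carlo sketch; the trial count and seed are mine):

```python
import random

def teller_sim(trials=200_000, seed=1):
    """Four tellers busy with iid exp(1) services; you start when the first
    frees up. Returns (P(you last), P(a last), E[your total time])."""
    rng = random.Random(seed)
    you_last = a_last = 0
    total = 0.0
    for _ in range(trials):
        t = [rng.expovariate(1.0) for _ in range(4)]  # departure times of a, b, c, d
        you = min(t) + rng.expovariate(1.0)           # your wait + your own service
        total += you
        if you > max(t):
            you_last += 1
        if t[0] > max(t[1], t[2], t[3], you):         # teller a's customer is "a"
            a_last += 1
    return you_last / trials, a_last / trials, total / trials

print(teller_sim())  # expect roughly (1/4, 3/16, 5/4)
```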


proving that the sum and difference of two independent Gaussians are independent

x1, x2 are iid Gaussian (mu, sigma^2). Then x1+x2 and x1-x2 are also independent Gaussians.

I had proved this before, starting from the joint density and grinding out the pdfs of the sum and difference. A few days ago someone's offhand remark woke me up:

"For jointly Gaussian variables, uncorrelated is equivalent to independent."

Exactly!

Cov(x1+x2, x1-x2) = E[(x1+x2)(x1-x2)] = E[x1^2] - E[x2^2] = 0. QED.
(The product of the means drops out because E[x1-x2] = 0.)

Some obvious points still worth noting:
1. The sum and difference of independent Gaussians are themselves (jointly) Gaussian.
2. Both the sum and the difference range over (-inf, +inf).


//---------------what if they are not identically distributed?
independent Gaussians
x1 ~ N (mu1, sig1^2)
x2 ~ N (mu2, sig2^2)

Cov(x1+x2, x1-x2) = Var(x1) - Var(x2) = sig1^2 - sig2^2

(Note: here E[(x1+x2)(x1-x2)] = sig1^2 + mu1^2 - sig2^2 - mu2^2 is no longer the covariance, because the means need not cancel.)

If sig1^2 = sig2^2, the covariance is 0, and we are done!
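A quick numerical check of the covariance claim (pure stdlib; the sample size and seed are mine):

```python
import random

def sum_diff_cov(mu1, sig1, mu2, sig2, n=200_000, seed=7):
    """Sample covariance of (x1+x2, x1-x2) for independent Gaussians.
    Theory: Cov = sig1^2 - sig2^2, so independence requires sig1 = sig2."""
    rng = random.Random(seed)
    pairs = [(rng.gauss(mu1, sig1), rng.gauss(mu2, sig2)) for _ in range(n)]
    us = [x1 + x2 for x1, x2 in pairs]
    vs = [x1 - x2 for x1, x2 in pairs]
    mu_u, mu_v = sum(us) / n, sum(vs) / n
    return sum((u - mu_u) * (v - mu_v) for u, v in zip(us, vs)) / n
```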



thoughts after reading Zen and the Art of Motorcycle Maintenance

I finished this book last night with a jumble of feelings I can't quite name. Mostly I was shaken. I went straight to amazon and ordered the English edition, planning to read it properly over winter break and figure out how to apply it to everyday life.

A few reflections:

1.
This book deserves several readings. As the book itself says: don't rush things; a task counts as finished only when you feel it is finished, done well only when your mind is at peace, not simply when the work stops. It's like extending one problem into many: a single problem has plenty of angles worth chewing over. Keep thinking until your mind is at peace, then move on.

2.
What shook me most is that a person's devotion to thinking and to the pursuit of truth could be so obsessive that it drove him mad, mad in the ordinary sense at least. It makes my own days feel like sleepwalking. Better to think oneself mad than to amuse oneself to death.

3.
This book was never really written for an audience, at least not at first. Perhaps all great works begin as an outpouring of the author's feelings. His talking to himself was a way to piece himself back together; he had to cross that dark canyon alone. Later, having found his own truth, he wanted to share it and felt obliged to, so he kept submitting and kept being rejected, 121 times, before the book that shook my soul last night existed.

4.
Some of humanity's key breakthroughs were made by madmen; deep thinking outside the system is what drives reform of the system. Which reminds me of another book: Madness and Civilization. Ordered on amazon right away; now waiting for it to arrive.



To be finished after a second reading.

independent vs. disjoint events in probability

Learning without thinking is labor lost!
ind - independent
mutually exclusive, disjoint - the events cannot co-occur

If two events are disjoint, i.e., they cannot happen at the same time, then they are certainly not independent!
disjoint ---> not ind
ind ---> not disjoint (independent events can co-occur)
(assuming both events have positive probability)

So two events can never be both independent and disjoint; they are either:
1. ind and not disjoint,
2. not ind and disjoint,
or neither.

Applying this to probability calculations:

///------------------------------the obvious basics
If two events A, B are independent: P(AB) = P(A)P(B).
This is best viewed from the Bayes angle: P(A|B) = P(A).

For any two events, P(A or B) = P(A) + P(B) - P(A and B), which can be seen intuitively from a Venn diagram (though that is not a rigorous proof).
Consequently, for two disjoint events: P(A or B) = P(A) + P(B).

///---------------------an example
Consider flipping two coins: TT, HH, TH, HT.

1. ind and not disjoint.
A = first flip is heads = {HH, HT}
B = second flip is heads = {TH, HH}
They share the outcome HH, so they are not disjoint.

2. not ind and disjoint.
There are plenty of these.
A = {HH}
B = {TT}
The two events cannot happen at the same time, so they are not independent.
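The two-coin example can be verified by enumerating the sample space (a small sketch; names are mine):

```python
from fractions import Fraction
from itertools import product

space = list(product("HT", repeat=2))       # HH, HT, TH, TT -- each w.p. 1/4

def prob(event):
    return Fraction(len(event & set(space)), len(space))

A = {w for w in space if w[0] == "H"}       # first flip heads
B = {w for w in space if w[1] == "H"}       # second flip heads
print(prob(A & B) == prob(A) * prob(B))     # independent
print(len(A & B) > 0)                       # but not disjoint (they share HH)

C, D = {("H", "H")}, {("T", "T")}
print(len(C & D) == 0)                      # disjoint
print(prob(C & D) == prob(C) * prob(D))     # but not independent
```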