
6. Markov Chain

State Space
The state space is the set of values a random variable X can take. E.g.: the integers 1 to 6 in a dice experiment, the locations of a random walker, the coordinates of a set of molecules, or the spin configurations of the Ising model.

Markov Process
A stochastic process is a sequence of random variables X0, X1, ..., Xn, ... The process is characterized by the joint probability distribution P(X0, X1, ...). If P(Xn+1 | X0, X1, ..., Xn) = P(Xn+1 | Xn), then it is a Markov process.

Markov Chain
A Markov chain is completely characterized by an initial probability distribution P0(X0) and the transition matrix W(Xn→Xn+1) = P(Xn+1 | Xn).
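As an illustration (not part of the original slides), here is a minimal Python/NumPy sketch of how a chain is specified by an initial distribution P0 and a transition matrix W. The 3-state matrix and the sample_chain helper are made-up examples.

```python
import numpy as np

# A hypothetical 3-state chain: row x of W is the conditional
# distribution P(X_{n+1} | X_n = x), so each row sums to 1.
W = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
P0 = np.array([1.0, 0.0, 0.0])   # initial distribution P0(X0)

rng = np.random.default_rng(0)

def sample_chain(P0, W, n_steps):
    """Draw X0 from P0, then X_{k+1} from row W[X_k] -- the Markov
    property: the next state depends only on the current one."""
    x = rng.choice(len(P0), p=P0)
    path = [x]
    for _ in range(n_steps):
        x = rng.choice(W.shape[1], p=W[x])
        path.append(x)
    return path

print(sample_chain(P0, W, 10))
```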

Thus, the probability that the sequence X0 = a, X1 = b, ..., Xn = n appears is P0(a) W(a→b) W(b→c) ... W(·→n).

Properties of the Transition Matrix
Since W(x→y) = P(y|x) is a conditional probability, we must have W(x→y) ≥ 0. The probability of going anywhere is 1, so Σ_y W(x→y) = 1.

Evolution
Given the current distribution Pn(X), the distribution at the next step, n+1, is obtained from Pn+1(Y) = Σ_X Pn(X) W(X→Y). In matrix form, this is Pn+1 = Pn W.

Chapman-Kolmogorov Equation
We note that the conditional probability of the state after k steps is P(Xk = b | X0 = a) = (W^k)_{ab}. We have Σ_b (W^k)_{ab} (W^s)_{bc} = (W^{k+s})_{ac}, which, in matrix notation, is W^{k+s} = W^k W^s.
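A short sketch, again with a made-up 3-state W, showing the evolution rule P_{n+1} = P_n W, the path probability P0(a) W(a→b) W(b→c), and a numerical check of the Chapman-Kolmogorov relation W^{k+s} = W^k W^s.

```python
import numpy as np
from numpy.linalg import matrix_power

# Same hypothetical 3-state chain as before; any row-stochastic matrix works.
W = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
P0 = np.array([1.0, 0.0, 0.0])

# Evolution: P_{n+1} = P_n W (row vector times matrix).
P1 = P0 @ W
P2 = P1 @ W

# Probability of a particular path a -> b -> c: P0(a) W(a->b) W(b->c).
a, b, c = 0, 1, 2
path_prob = P0[a] * W[a, b] * W[b, c]

# Chapman-Kolmogorov: W^(k+s) = W^k W^s.
k, s = 3, 4
assert np.allclose(matrix_power(W, k + s),
                   matrix_power(W, k) @ matrix_power(W, s))
print(P2, path_prob)
```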

Probability Distribution of States at Step n
Given the probability distribution P0 initially at n = 0, the distribution at step n is Pn = P0 W^n (the n-th matrix power of W).

Example: Random Walker
A drunken walker walks in discrete steps. In each step, he has some probability, say p, of walking to the right and probability 1 − p of walking to the left. He doesn't remember his previous steps.

The Questions
Under what conditions is Pn(X) independent of the time (or step) n and of the initial condition P0, and approaches a limit P(X)?
Given W(X→X'), compute P(X).
Given P(X), how to construct W(X→X')?
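To make the walker's transition matrix finite, the sketch below assumes a walker on a ring of L sites (the periodic boundary is my assumption, not the slides'), and evaluates Pn = P0 W^n directly.

```python
import numpy as np
from numpy.linalg import matrix_power

# Drunken walker on a ring of L sites (periodic boundary assumed here
# only to keep the transition matrix finite).
L, p = 11, 0.5                      # p: probability of stepping right
W = np.zeros((L, L))
for x in range(L):
    W[x, (x + 1) % L] = p           # step right
    W[x, (x - 1) % L] = 1 - p       # step left

P0 = np.zeros(L)
P0[0] = 1.0                         # walker starts at site 0

# Distribution after n steps: P_n = P0 W^n.
for n in (1, 2, 10, 100):
    Pn = P0 @ matrix_power(W, n)
    print(n, np.round(Pn, 3))
```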

Some Definitions: Recurrence and Transience
A state i is recurrent if we visit it an infinite number of times as n → ∞: P(Xn = i for infinitely many n) = 1. For a transient state j, we visit it only a finite number of times as n → ∞.

Irreducible
From any state I to any other state J, there is a nonzero probability of going from I to J after some n steps, i.e., (W^n)_{IJ} > 0 for some n.

Absorbing State
A state that, once the chain is there, cannot move anywhere else.
Closed subset: once the chain is there, there is no escape from the set.

Example
A five-state chain (states 1-5) in which the subset {1, 5} is closed and state 3 is closed/absorbing. It is not irreducible.
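A rough numerical check of irreducibility, using the standard fact that a nonnegative N×N matrix W is irreducible iff (I + W)^(N−1) has no zero entries. The 5-state matrix is an invented example in the spirit of the figure above: with 0-based labels, state 2 is absorbing and {0, 4} is closed.

```python
import numpy as np
from numpy.linalg import matrix_power

def is_irreducible(W):
    """Crude check: W is irreducible iff every pair of states
    communicates, i.e. (I + W)^(N-1) has no zero entries."""
    N = W.shape[0]
    reach = matrix_power(np.eye(N) + W, N - 1)
    return np.all(reach > 0)

# Invented 5-state example: state 2 is absorbing, subset {0, 4} is closed.
W = np.array([[0.5, 0.0, 0.0, 0.0, 0.5],
              [0.2, 0.3, 0.3, 0.2, 0.0],
              [0.0, 0.0, 1.0, 0.0, 0.0],   # absorbing: stays put forever
              [0.0, 0.3, 0.3, 0.4, 0.0],
              [0.6, 0.0, 0.0, 0.0, 0.4]])

print(is_irreducible(W))   # False: the chain cannot leave {0, 4} or {2}
```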

Aperiodic State
A state I is called aperiodic if (W^n)_{II} > 0 for all sufficiently large n. This means that the probability for state I to return to I after n steps is nonzero for all n > n_max.

Invariant or Equilibrium Distribution
If P(y) = Σ_x P(x) W(x→y), we say that the probability distribution P(x) is invariant with respect to the transition matrix W(x→x').

Convergence to Equilibrium
Let W be irreducible and aperiodic, and suppose that W has an invariant distribution p. Then for any initial distribution, P(Xn = j) → p_j as n → ∞, for all j. This theorem tells us when to expect a unique limiting distribution.
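A quick numerical illustration of the theorem, using the same made-up 3-state W as in the earlier sketches: two different initial distributions are pushed through W^200 and land on the same (approximately invariant) distribution.

```python
import numpy as np
from numpy.linalg import matrix_power

# Hypothetical irreducible, aperiodic 3-state chain.
W = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def is_invariant(p, W):
    """A distribution p is invariant when p = p W."""
    return np.allclose(p, p @ W)

# Two very different starting distributions converge to the same limit,
# as promised for an irreducible, aperiodic W.
Wn = matrix_power(W, 200)
for P0 in (np.array([1.0, 0.0, 0.0]), np.array([0.1, 0.1, 0.8])):
    p = P0 @ Wn
    print(np.round(p, 4), is_invariant(p, W))
```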

Limit Distribution
One also has lim_{n→∞} P(Xn = j | X0 = i) = p_j, independent of the initial state i, such that P = P W, with P_j = p_j.

Condition for Approaching Equilibrium
The irreducible and aperiodic conditions can be combined to mean: for all states j and k, (W^n)_{jk} > 0 for sufficiently large n. This is also referred to as ergodic.

Urn Example
There are two urns. Urn A has two balls, urn B has three balls. One draws a ball from each urn and switches them. There are two white balls and three red balls. What are the states, the transition matrix W, and the equilibrium distribution P?

The Transition Matrix
(Diagram: states 1, 2, 3 with transition probabilities 1, 1/6, 1/3, 2/3; the full matrix is reconstructed in the sketch below.) Note that the elements of W^2 are all positive.
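The sketch below encodes one plausible reading of the urn diagram: the matrix is reconstructed from the probabilities quoted above and from the iterates P0 W^n shown on the following slides, so treat it as an educated guess rather than the original figure. It also verifies that every element of W^2 is positive.

```python
import numpy as np
from numpy.linalg import matrix_power

# Reconstructed urn-example matrix (assumed state labels:
# state 1 = urn A holds both white balls, state 2 = one white and one
# red in A, state 3 = both balls in A are red).
W = np.array([[0.0, 1.0, 0.0],
              [1/6, 1/2, 1/3],
              [0.0, 2/3, 1/3]])

print(np.all(matrix_power(W, 2) > 0))   # every element of W^2 is positive
```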

Eigenvalue Problem
Determining P is an eigenvalue problem: P = P W. The solution is P1 = 1/10, P2 = 6/10, P3 = 3/10. What is the physical meaning of the above numbers?

Convergence to the Equilibrium Distribution
Let P0 = (1, 0, 0). Then
P1 = P0 W = (0, 1, 0)
P2 = P1 W = P0 W^2 = (1/6, 1/2, 1/3)
P3 = P2 W = P0 W^3 = (1/12, 23/36, 5/18)
P4 = P3 W = P0 W^4 ≈ (0.106, 0.588, 0.306)
P5 = P4 W = P0 W^5 ≈ (0.098, 0.604, 0.298)
P6 = P5 W = P0 W^6 ≈ (0.1007, 0.5986, 0.3007)
. . .
P∞ = P0 W^∞ = (0.1, 0.6, 0.3)

Time Reversal
Suppose X0, X1, ..., XN is a Markov chain with (irreducible) transition matrix W(X→X') and equilibrium distribution P(X). What transition probability would result in the time-reversed process Y0 = XN, Y1 = XN−1, ..., YN = X0?
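Continuing with the reconstructed urn matrix, this sketch solves P = P W as a left-eigenvector problem and repeats the power iteration from P0 = (1, 0, 0); both give (0.1, 0.6, 0.3).

```python
import numpy as np

# Same reconstructed urn-example matrix as in the previous sketch.
W = np.array([[0.0, 1.0, 0.0],
              [1/6, 1/2, 1/3],
              [0.0, 2/3, 1/3]])

# P = P W means P is a left eigenvector of W with eigenvalue 1.
vals, vecs = np.linalg.eig(W.T)       # right eigenvectors of W^T = left of W
k = np.argmin(np.abs(vals - 1.0))
P = np.real(vecs[:, k])
P = P / P.sum()                       # normalize to a probability vector
print(np.round(P, 4))                 # -> [0.1 0.6 0.3]

# Power iteration from P0 = (1, 0, 0) reproduces the convergence above.
Pn = np.array([1.0, 0.0, 0.0])
for n in range(1, 7):
    Pn = Pn @ W
    print(n, np.round(Pn, 4))
```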

Answer
The new W_R should be such that
P(x) W_R(x→x') = P(x') W(x'→x)   (*)
The original process P(x0, x1, ..., xN) = P(x0) W(x0→x1) W(x1→x2) ... W(xN−1→xN) must be equal to the reversed process P(xN, xN−1, ..., x0) = P(xN) W_R(xN→xN−1) W_R(xN−1→xN−2) ... W_R(x1→x0). Equation (*) guarantees this.

Reversible Markov Chain
A Markov chain is said to be reversible if it satisfies detailed balance:
P(X) W(X→Y) = P(Y) W(Y→X)
Nearly all the Markov chains used in the Monte Carlo method satisfy this condition by construction.

An example of a chain that does not satisfy detailed balance
Three states 1, 2, 3 arranged on a ring: the chain moves one way around the cycle with probability 2/3 and the other way with probability 1/3. The equilibrium distribution is P = (1/3, 1/3, 1/3). The reversed chain has transition matrix W_R = W^T (the transpose of W), and W_R ≠ W.
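A sketch of the time-reversal construction, applied to the 3-state ring above (my reading of the garbled figure: probability 2/3 one way around the cycle, 1/3 the other): W_R is built from (*) and turns out to be W^T, which differs from W, so detailed balance fails.

```python
import numpy as np

# The 3-state ring: probability 2/3 of moving one way around the cycle,
# 1/3 of moving the other way (assumed reading of the figure).
W = np.array([[0.0, 2/3, 1/3],
              [1/3, 0.0, 2/3],
              [2/3, 1/3, 0.0]])
P = np.array([1/3, 1/3, 1/3])        # uniform: W is doubly stochastic

# Time-reversed transition matrix from (*): W_R(x->y) = P(y) W(y->x) / P(x).
WR = (W.T * P) / P[:, None]

print(np.allclose(WR, W.T))          # True: here W_R is just the transpose
print(np.allclose(WR, W))            # False: detailed balance is violated
```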

Realization of Samples in Monte Carlo and Markov Chain
The ...
