Abstract
We provide a short proof of the generalized second Ray-Knight theorem, using a martingale which can be seen (on the positive quadrant) as the Radon–Nikodym derivative of the reversed vertex-reinforced jump process measure with respect to the Markov jump process with the same conductances. Next we show that a variant of this process provides an inversion of that Ray-Knight identity. We give a similar result for the generalized first Ray-Knight theorem.
1 Introduction
Let \(G=(V,E,\sim )\) be a nonoriented connected finite graph without loops, with conductances \((W_e)_{e\in E}\); define, for all x, y \(\in V\), \(W_{x,y}=W_{\{x,y\}}{{\mathbb {1}}}_{x\sim y}\).
Let L and \(\mathcal {E}\) be respectively the associated Markov generator and Dirichlet form defined by, for all \(f\in {{\mathbb R}}^V\),
Let \(x_0\in V\) be a special point that will be fixed throughout the text. Let \(U=V\setminus \{x_0\}\), and let \(P^{G,U}\) be the unique probability on \({{\mathbb R}}^V\) under which \((\varphi _x)_{x\in V}\) is the centered Gaussian field with covariance \(E^{G,U}[\varphi _x\varphi _y]=g_U(x,y)\), where \(g_U\) is the Green function killed outside U, in other words
where \(G_U:=g_U(.,.)\), and \(\delta _0\) is the Dirac at 0 so that the integration is on \((\varphi _x)_{x\in U}\) with \(\varphi _{x_0}=0\).
Let \({{\mathbb P}}_{z_0}\) be the law under which \((X_t)_{t\geqslant 0}\) is a Markov Jump Process with conductances \((W_e)_{e\in E}\) (i.e. jump rates \(W_{ij}\) from i to j \(\in V\)) starting at \(z_0\) at time 0, with right-continuous paths and local times at \(x\in V\), \(t\geqslant 0\),
Let \(\tau _.\) be the right-continuous inverse of \(t\rightarrow \ell _{x_0}(t)\)
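To fix intuition, the Markov jump process, its local times and the inverse local time \(\tau _u\) can be simulated directly. The sketch below is ours (not from the paper), with the graph encoded as a dictionary of conductances `W[i][j]` \(=W_{i,j}\); the simulation stops at \(\tau _u\), the first time the local time at \(x_0\) reaches u.

```python
import random

def simulate_mjp_until_tau_u(W, x0, u, seed=0):
    """Sketch (not from the paper): simulate the Markov jump process with jump
    rate W[i][j] from i to j, started at x0, up to tau_u, the first time the
    local time at x0 reaches u.  Returns the local times ell and tau_u."""
    rng = random.Random(seed)
    ell = {x: 0.0 for x in W}        # local times ell_x(t)
    x, t = x0, 0.0
    while True:
        total = sum(W[x].values())   # holding time at x is Exp(sum_j W[x][j])
        hold = rng.expovariate(total)
        if x == x0 and ell[x0] + hold >= u:
            # the local time at x0 hits u during this holding interval: stop
            t += u - ell[x0]
            ell[x0] = u
            return ell, t
        ell[x] += hold
        t += hold
        r, acc = rng.random() * total, 0.0   # next site chosen prop. to W[x][.]
        for y, w in W[x].items():
            acc += w
            if r <= acc:
                x = y
                break
```

Since the total time elapsed is exactly the sum of the local times, the pair returned satisfies \(\sum _x \ell _x(\tau _u)=\tau _u\) and \(\ell _{x_0}(\tau _u)=u\), as in the definitions above.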
Our first aim is to provide a short proof of the generalized second Ray-Knight theorem.
Theorem 1
(Generalized second Ray-Knight theorem, [10]) For any \(u>0\),
This theorem is due to Eisenbaum, Kaspi, Marcus, Rosen and Shi [10] and is closely related to Dynkin’s isomorphism; see [8] for a first relation between the Ray-Knight theorem and Dynkin’s isomorphism, [17] for an overview of the subject, and [12, 18] for the original papers of Knight and Ray; see [14–16] for related work on the link between Markov loops and the Gaussian Free Field. Note that this result plays a crucial rôle in a recent work of Ding, Lee and Peres [6] on cover times of discrete Markov processes, and in the study of random interlacements, see for instance [20–22].
In Sect. 2, we give our short proof of Theorem 1, independent of any reference to the VRJP. Similar results would hold for the Dynkin and Eisenbaum isomorphism theorems (see Sect. 5, and [17, 22]). We note that there is also a non-symmetric version of Dynkin’s isomorphism [13] (see also [9]), which our technique does not provide in its present form.
In Sect. 3, we explain how the martingale appearing in this proof is related to the vertex-reinforced jump process (VRJP). However, note that the proof does not need any reference to the VRJP.
This short proof in fact yields an identity that corresponds to an inversion of the Ray-Knight identity, proved in Sect. 4. Indeed, Theorem 1 gives an identity in law, but fails to give any information on the law of \((\ell _x(\tau _u), \varphi _x)_{x\in V}\) conditioned on \(\left( \ell _x(\tau _u)+\frac{1}{2}\varphi _x^2\right) _{x\in V}\). We provide below a process that describes this conditional law.
Finally, Sect. 5 yields the equivalent inversion for the generalized first Ray-Knight theorem.
Let \((\Phi _x)_{x\in V}\) be positive real numbers. As before, we fix the special point \(x_0\in V\). We consider the continuous-time process \(({\check{Y}}_s)_{s\geqslant 0}\) with state space V defined as follows. We set
At time s, we consider the Ising model \((\sigma _x)_{x\in V}\) on G with interaction
and with boundary condition \(\sigma _{x_0}=+1\). We denote by
its partition function and by \(\langle \cdot \rangle _s\) its associated expectation, so that for example
The process \(\check{Y}\) is then defined as the jump process which, conditionally on the past at time s, if \({\check{Y}}_s=i\), jumps from i to j at a rate
and stopped at the time
Note, using the positivity of the Ising model (see for instance [23], Proposition 7.1), that \(\langle \sigma _x\rangle _s>0\) for all \(s<S\). Hence the process \(\check{Y}\) is defined up to time S.
We denote by \({{\mathbb P}}_{\Phi ,z}^{\check{Y}}\) the law of \({\check{Y}}\) starting from z and initial condition \(\Phi \), and stopped at time S. (Note that this law also depends on the choice of the “special point” \(x_0\)).
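On a small graph the Ising expectations \(\langle \sigma _x\rangle _s\) entering the jump rates of \(\check{Y}\) can be evaluated by brute-force enumeration. The helper below is our own illustration (not from the paper), assuming the standard ferromagnetic weight \(\exp \big (\sum J_{i,j}\sigma _i\sigma _j\big )\) with boundary condition \(\sigma _{x_0}=+1\).

```python
import itertools, math

def ising_expectation(J, x0, x):
    """Brute-force <sigma_x> for the Ising model with interaction J[(i, j)]
    and boundary condition sigma_{x0} = +1, by summing the ferromagnetic
    weight exp(sum_{(i,j)} J_ij * sigma_i * sigma_j) over all configurations."""
    sites = sorted({i for e in J for i in e})
    free = [i for i in sites if i != x0]       # x0 is pinned to +1
    Z, num = 0.0, 0.0
    for spins in itertools.product([+1, -1], repeat=len(free)):
        sigma = dict(zip(free, spins))
        sigma[x0] = +1
        w = math.exp(sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items()))
        Z += w                                  # partition function
        num += w * sigma[x]
    return num / Z
```

With positive interactions this gives \(\langle \sigma _x\rangle >0\), consistent with the positivity used above to define \(\check{Y}\) up to time S; with all interactions zero, a free spin is symmetric and its expectation vanishes.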
Lemma 1
Starting from any point \(z\in V\), the process \({\check{Y}}\) ends at \(x_0\), i.e.
This process provides an inversion to the second Ray-Knight identity, as stated in the following theorem.
Theorem 2
Let \(\ell \), \(\varphi \), \(\tau _u\) be as in Theorem 1 and set
Under \({{\mathbb P}}_{x_0}\otimes P^{G,U}\), we have
where \(\check{L}(S)\) is distributed under \({{\mathbb P}}_{\Phi ,x_0}^{\check{Y}}\) and, conditionally on \(\check{L}(S)\), \(\sigma \) is distributed according to the distribution of the Ising model with interaction \(J_{i,j}(S)=W_{i,j}\check{L}_i(S)\check{L}_j(S)\) with boundary condition \(\sigma _{x_0}=+1\).
Remark 1
Once \(\varphi \) is known, then obviously \(\ell (\tau _u)=(\Phi ^2-\varphi ^2)/2\) is also known: in other words, Theorem 2 is equivalent to the more precise identity
where \(\check{L}(S)\) and \(\sigma \) are distributed as in the statement of Theorem 2.
The proof of Theorem 2 is given in Sect. 4. Theorem 2 is a consequence of a more precise statement, cf Theorem 5, which gives the law of \((X_s)_{s\leqslant \tau _u}\) conditionally on \(\Phi \).
2 A new Proof of Theorem 1
Let G be a positive measurable test function. Letting \(d\varphi :=\delta (\varphi _{x_0})\prod _{x\in U}d\varphi _x\) and \(C=(2\pi )^{-\frac{|U|}{2}}(\det G_U)^{-1/2}\) be the normalizing constant of the Gaussian free field, we get
where in the last equality we decompose the integral according to the possible signs of \(\varphi \) so that the sum is on \(\sigma \in \{+1, -1\}^U\) with \(\sigma _{x_0}=+1\). In the following we simply write \(\sum _\sigma \) for this sum.
The strategy is now to make the change of variables \(\Phi =\sqrt{2\ell (\tau _u)+\varphi ^2}\). Given \(\ell =(\ell _i(t))_{i\in V, t\in {{\mathbb R}}_+}\), let
We first make the change of variables
which can be inverted, on the sign sector \(\sigma \), by \(\varphi _i = \sigma _i\sqrt{\Phi _i^2-2\ell _i(\tau _u)}\). This yields, letting \(d\Phi :=\delta _{\sqrt{2u}}(\Phi _{x_0})\prod _{x\in U}d\Phi _x\),
Note that the Jacobian is taken over all coordinates but \(x_0\) (\(\Phi _{x_0}=\sqrt{2u}\)); if \(\Phi \in {D}_u\), then
Given \(\Phi \in {{\mathbb R}}_+^V\) such that \(\Phi _{x_0}=\sqrt{2u}\), we define for all \(i\in V\),
so that in (2.2) we have \(\varphi =\Phi (\tau _u)\). An important remark is that
Finally, we define for a configuration of signs \(\sigma \in \{+1,-1\}^V\) with \(\sigma _{x_0}=+1\),
Lemma 2
For any \(\Phi \in {{\mathbb R}}^V\) and \(\sigma \in \{-1,1\}^V\), the process \((M^{\sigma \Phi }_{t\wedge T})_{t\geqslant 0}\) is a uniformly integrable martingale.
Proof
Consider the Markov process \((\ell (t), X(t))\), which obviously has generator \( {\tilde{L}}(g)(\ell ,x)= (\frac{\partial }{\partial \ell _x} +L)g(\ell ,x). \) Let f be the function defined by \( f(\ell ,x)=\left( \prod _{y\ne x} \sigma _y \sqrt{\Phi _y^2-2\ell _y}\right) ^{-1} \). Note that, if \(t<T\),
since \(f(\ell ,x)\) does not depend on \(\ell _x\). Therefore, for \(t<T\),
which implies that \(M^{\sigma \Phi }_{t\wedge T}\) is a martingale, using for instance Lemma 3.2 in [11], pp. 174–175.
The condition that f is bounded in that result is not satisfied, but the proof remains true, noting that the following integrability conditions hold (see Problem 22 of Chapter 2, p.92 in [11]): first, using that \(({\tilde{L}}f/f)(\ell (t),X(t))\geqslant -\mathsf {Cst}(W)\) and \((|Lf|/f)(\ell (t),X(t))\leqslant \mathsf {Cst}(W)/\Phi _{X_t}(t)\), we deduce
Second, \(f(\ell (t),X(t))\) can be upper bounded by an integrable random variable, uniformly in t. Indeed, let us consider the extension of the process \((X_t)\) to \(t\in {{\mathbb R}}\), with the convention that the local times at all sites are 0 at time 0.
For all \(j\in V\), let \(s_j\) be the (possibly negative) local time at j, at the last jump from that site before reaching a local time \(\Phi _j^2/2\) at that site. Let, for all \(j\in V\),
Then the random variables \(m_j\), \(j\in V\), are independent exponential random variables with parameters \(W_j=\sum _{k\sim j}W_{jk}\), since the sequence of local times of jumps from j is a Poisson point process with intensity \(W_j\).
Now, for all \(t\in {{\mathbb R}}_+\), \(j\ne X_t\), \(\Phi _j(t\wedge T)\geqslant \sqrt{2m_j}\), so that \(f(\ell (t),X(t))\leqslant \mathsf {Cst}(\Phi )\prod _{j\in V}m_j^{-1/2}\).
This enables us to conclude, since \(\prod _{j\in V}m_j^{-1/2}\) is integrable, which also implies uniform integrability of \(M_{t\wedge T}\), using that \(|M_{t\wedge T}|\leqslant \mathsf {Cst}(W,\Phi ,t) f(\ell (t),X(t)).\) \(\square \)
Let us now consider the process
Lemma 3
For all \(x\ne x_0\), we have
Proof
Let \(\sigma ^x\) be the spin-flip of \(\sigma \) at x: \(\sigma ^x=\epsilon ^x \sigma \) with \(\epsilon ^x_y=-1\) if \(y=x\) and 1 if \(y\ne x\). If \(x\ne x_0\) and \(X_T=x\), then
Indeed, since \(\Phi _x(T)= 0\), then \(\sigma ^x\Phi (T)= {\sigma \Phi }(T)\), and the minus sign comes from the numerator of the product term in (2.5). By symmetry, the left-hand side of (2.8) is equal to
\(\square \)
It follows from Lemmas 2 and 3, by the optional stopping theorem (using the uniform integrability of \(M_{t\wedge T}\)), that
which concludes the proof of Theorem 1.
3 Link with vertex-reinforced jump process
The aim of this section is to point out a link between the Ray-Knight identity and a reversed version of the Vertex-Reinforced Jump Process (VRJP).
It is organized as follows. In Sect. 3.1 we compute the Radon–Nikodym derivative of the VRJP, which is similar to the martingale M in Sect. 2: the computation can be used in particular to provide a direct proof of exchangeability of the VRJP. In Sect. 3.2 we introduce a time-reversed version of the VRJP, i.e. where the process subtracts rather than adds local time at the site where it stays (see Definition 2). Then we show in Theorem 4 that its Radon–Nikodym derivative is the martingale \(M^\sigma \) in Sect. 2 with positive spins \(\sigma \equiv +1\). Note that the “magnetized” reversed VRJP, defined in Sect. 1 and related to the inversion of Ray-Knight in Theorem 2, involves instead the sum \(N^\Phi \), cf (2.7), of all the martingales \(M^{\sigma \Phi }\).
3.1 The vertex-reinforced jump process and its Radon–Nikodym derivative
Definition 1
Given positive conductances on the edges of the graph \((W_e)_{e\in E}\) and initial positive local times \((\varphi _i)_{i\in V}\), the vertex-reinforced jump process (VRJP) is a continuous-time process \((Y_t)_{t\geqslant 0}\) on V, starting at time 0 at some vertex \(z\in V\) and such that, if Y is at a vertex \(i\in V\) at time t, then, conditionally on \((Y_s, s\leqslant t)\), the process jumps to a neighbour j of i at rate \(W_{i,j}L_j(t)\), where
The vertex-reinforced jump process was initially proposed by Werner in 2000, first studied by Davis and Volkov [4, 5], then by Collevecchio [2, 3], Basdevant and Singh [1], and Sabot and Tarrès [19].
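A useful observation for simulation (our remark, not from the paper): while Y sits at i, only \(\ell _i\) grows, so the rates \(W_{i,j}L_j(t)\) towards the neighbours of i are constant on each holding interval, and the VRJP of Definition 1 can be generated with plain exponential clocks. A minimal sketch, with hypothetical helper names:

```python
import random

def simulate_vrjp(W, phi, z, n_jumps, seed=1):
    """Sketch of the VRJP of Definition 1: from site i, jump to a neighbour j
    at rate W[i][j] * (phi[j] + ell[j]), where ell[j] is the local time at j.
    While Y sits at i only ell[i] grows, so the rates out of i are constant."""
    rng = random.Random(seed)
    ell = {x: 0.0 for x in phi}
    path, x, t = [z], z, 0.0
    for _ in range(n_jumps):
        rates = {j: w * (phi[j] + ell[j]) for j, w in W[x].items()}
        total = sum(rates.values())
        hold = rng.expovariate(total)      # exponential holding time at x
        ell[x] += hold
        t += hold
        r, acc = rng.random() * total, 0.0
        for j, w in rates.items():         # next site chosen prop. to the rates
            acc += w
            if r <= acc:
                x = j
                break
        path.append(x)
    return path, ell, t
```

The reinforcement is visible in the rates: a site whose local time has grown attracts the walker more strongly on later visits.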
Let D be the increasing functional
define the time-changed VRJP
and let, for all \(i\in V\) and \(t\geqslant 0\), \(\ell ^Z_i(t)\) be the local time of Z at time t.
Lemma 4
The inverse functional \(D^{-1}\) is given by
Conditionally on the past at time t, the process Z jumps from \(Z_t=i\) to a neighbour j at rate
Proof
The proof is elementary and already appears in [19, Section 4.3, Proof of Theorem 2 ii)] in a slightly modified version, but we include it here for completeness. First note that, for all \(i\in V\),
since
Hence
which yields the expression for \(D^{-1}\). It remains to prove the last assertion:
\(\square \)
Let \({{\mathbb P}}_{x_0,t}\) (resp. \({{\mathbb P}}_{\varphi , x_0,t}^{Z}\)) be the distribution, starting from \(x_0\) and on the time interval [0, t], of the Markov jump process with conductances \((W_e)_{e\in E}\) (resp. of the time-changed VRJP \((Z_s)_{s\in [0,t]}\) with conductances \((W_e)_{e\in E}\) and initial positive local times \((\varphi _i)_{i\in V}\)).
Theorem 3
The law of the time-changed VRJP Z on the interval [0, t] is absolutely continuous with respect to the law of the MJP X with rates \(W_{i,j}\), with Radon–Nikodym derivative given by
where \(\ell _j(t)\) is the local time of X at time t and site j defined in (1.1).
Proof
In the proof, we write \(\ell \) for the local time of both Z and X, since we consider Z and X on the canonical space with different probabilities. Let, for all \(\psi \in {{\mathbb R}}^V\), \(i\in V\), \(t\geqslant 0\),
First note that the probability, for the time-changed VRJP Z, of holding at a site \(v\in V\) on a time interval \([t_1,t_2]\) is
Second, conditionally on \((Z_u, u\leqslant t)\), the probability that Z jumps from \(Z_t=i\) to j in the time interval \([t,t+dt]\) is
Therefore the probability that, at time t, Z has followed a path \(Z_0=x_0\), \(x_1\), \(\ldots \), \(Z_t=x_n\) with jump times respectively in \([t_i,t_i+dt_i]\), \(i=1\ldots n\), where \(t_0=0<t_1<\cdots <t_n<t=t_{n+1}\), is
where we use that \(G_{x_i}(t_i)=G_{x_i}(t_{i+1})\), since Z stays at site \(x_i\) on the time interval \([t_i,t_{i+1}]\).
On the other hand, the probability that, at time t, X has followed the same path with jump times in the same intervals is
which concludes the proof. \(\square \)
Note that Theorem 3 can be used to show exchangeability of the VRJP, and provides a martingale for the Markov Jump Process, similar to \(M_t^\Phi \) in (2.5).
Recall that it is shown in [19] that the time-changed VRJP \((Z_t)_{t\geqslant 0}\) is a mixture of Markov jump processes, i.e. that there exist random variables \((U_i)_{i\in V}\in \mathcal {H}_0:=\{\sum _{i\in V} u_i=0\}\), distributed according to the supersymmetric hyperbolic sigma model with parameters \((W_{ij}\varphi _i\varphi _j)_{\{i,j\}\in E}\) (see Section 6 of [19] and [7]), such that, conditionally on \((U_i)_{i\in V}\), \(Z_t\) is a Markov jump process starting from z, with jump rate from i to j
In particular, the discrete time process corresponding to the VRJP observed at jump times is exchangeable, and is a mixture of reversible Markov chains with conductances \(W_{i,j}e^{U_i+U_j}\).
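The reversibility claim can be verified mechanically: with conductances \(c_{ij}=W_{i,j}e^{U_i+U_j}\), the chain with transition probabilities \(p_{ij}=c_{ij}/\sum _k c_{ik}\) satisfies detailed balance for \(\pi _i\propto \sum _k c_{ik}\), since \(\pi _ip_{ij}\) is proportional to the symmetric quantity \(c_{ij}\). A small numerical check (the function name and the sample values of U, chosen in \(\mathcal {H}_0\), are ours for illustration):

```python
import math

def detailed_balance_gap(W, U):
    """Maximum of |pi_i p_ij - pi_j p_ji| over directed edges, for the chain
    with conductances c_ij = W[i][j] * exp(U[i] + U[j]) and stationary measure
    pi_i proportional to c_i = sum_j c_ij (zero iff detailed balance holds)."""
    c = {i: {j: w * math.exp(U[i] + U[j]) for j, w in W[i].items()} for i in W}
    ci = {i: sum(c[i].values()) for i in W}    # total conductance at i
    Z = sum(ci.values())
    gap = 0.0
    for i in W:
        for j in W[i]:
            pij = c[i][j] / ci[i]
            pji = c[j][i] / ci[j]
            gap = max(gap, abs(ci[i] / Z * pij - ci[j] / Z * pji))
    return gap
```

Since \(\pi _ip_{ij}=c_{ij}/Z=c_{ji}/Z=\pi _jp_{ji}\), the gap vanishes up to rounding for any symmetric W and any field U.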
3.2 The reversed VRJP and its Radon–Nikodym derivative
Definition 2
Given positive conductances on the edges of the graph \((W_e)_{e\in E}\) and initial positive local times \((\Phi _i)_{i\in V}\), the reversed vertex-reinforced jump process (RVRJP) is a continuous-time process \((\tilde{Y}_t)_{0\leqslant t\leqslant \tilde{S}}\), starting at time 0 at some vertex \(i_0\in V\), such that, if \(\tilde{Y}\) is at a vertex \(i\in V\) at time t, then, conditionally on \((\tilde{Y}_s, s\leqslant t)\), the process jumps to a neighbour j of i at rate \(W_{i,j}\tilde{L}_j(t)\), where
defined up until the stopping time \({\tilde{S}}\) at which one of the local times hits 0, i.e.
Similarly as for Y, let us define the increasing functional
define the time-changed reversed VRJP
and let, for all \(i\in V\) and \(t\geqslant 0\), \(\ell _i^{\tilde{Z}}(t)\) be the local time of \(\tilde{Z}\) at time t.
Then, similarly as in Lemma 4, conditionally on the past at time t, \(\tilde{Z}\) jumps from \(\tilde{Z}_t=i\) to a neighbour j at rate
and \(\tilde{Z}\) stops at time
Let \({{\mathbb P}}_{\Phi ,x_0,t}^{\tilde{Z}}\) be the distribution of \((\tilde{Z}_t)_{t\geqslant 0}\) on the time interval \([0,t\wedge \tilde{T}]\), starting from \(x_0\) and initial condition \(\Phi \).
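As for the VRJP, only the local time at the current site moves, so the rates of the reversed process \(\tilde{Y}\) of Definition 2 are constant between jumps, and the dynamics can be simulated with exponential clocks, stopping when the local time at the current site is exhausted. A sketch (ours, under the rate convention \(W_{i,j}\tilde{L}_j(t)\) of Definition 2):

```python
import random

def simulate_rvrjp(W, Phi, z, seed=2):
    """Sketch of the reversed VRJP of Definition 2: from site i, jump to a
    neighbour j at rate W[i][j] * L[j], where L[j] = Phi[j] minus the local
    time at j.  Only L at the current site decreases (at unit speed), so the
    rates are constant on each holding interval; the process stops at the
    first time the local time at the current site hits 0."""
    rng = random.Random(seed)
    L = dict(Phi)                  # remaining local times
    path, x = [z], z
    while True:
        rates = {j: w * L[j] for j, w in W[x].items()}
        total = sum(rates.values())
        hold = rng.expovariate(total) if total > 0 else float("inf")
        if hold >= L[x]:           # local time at x exhausted before next jump
            L[x] = 0.0
            return path, L         # stopped, at site path[-1]
        L[x] -= hold
        r, acc = rng.random() * total, 0.0
        for j, w in rates.items():
            acc += w
            if r <= acc:
                x = j
                break
        path.append(x)
```

By construction the walk dies exactly at the site whose remaining local time reaches 0, mirroring the stopping time \(\tilde{S}\).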
An easy adaptation of the proof of Theorem 3 shows
Theorem 4
The law of the time-reversed VRJP \(\tilde{Z}\) on the interval \([0,t\wedge \tilde{T}]\) is absolutely continuous with respect to the law of the MJP X with rates \(W_{i,j}\), with Radon–Nikodym derivative given by
where \(\ell _j(t)\) (resp. T) is the local time of X at time t and site j (resp. the stopping time) defined in (1.1) (resp. in (2.3)).
Hence, the Radon–Nikodym derivative of the time-reversed VRJP with respect to the MJP is the martingale that appears in the proof of Theorem 1, more precisely
with the notations of Sect. 2. Note that this Radon–Nikodym derivative involves the martingale M with positive spins \(\sigma \equiv +1\). The “magnetized” reversed VRJP, defined in Sect. 1 and related to the inversion of Ray-Knight in Theorem 2, involves the sum of all the martingales \(M^{\sigma \Phi }\): this is the purpose of the next section.
4 Proof of Lemma 1 and Theorem 2
The proofs of Lemma 1 and Theorem 2 rely on a time change of the process \(\check{Y}\), which is in fact the same time change as the one appearing in Sect. 3 for \({\tilde{Y}}\): let us define
define the time-changed process
and let, for all \(i\in V\) and \(t\geqslant 0\), \({\ell }^{\check{Z}}_i(t)\) be the local time of \(\check{Z}\) at time t.
Then, similarly to Lemma 4, conditionally on the past at time t, \(\check{Z}\) jumps from \(\check{Z}_t=i\) to a neighbour j at rate
where we write \(\langle \cdot \rangle _{(t)}\) for \(\langle \cdot \rangle _{D^{-1}(t)}\) according to the notation of Sect. 1: more precisely, \(\langle \cdot \rangle _{(t)}\) is the expectation for the Ising model with interaction
since the vectors of local times \({\ell }^{\check{Z}}\) and \(\check{L}\) are related by the formula
Clearly, this process is well defined up to time
Lemma 1 states that \(\check{Z}_{\check{T}}=x_0\).
We denote by \({{\mathbb P}}^{\check{Z}}_{\Phi , z}\) the law of the process \(\check{Z}\) starting from the initial condition \(\Phi \) and initial state z up to the time \(\check{T}\) (as for \(\check{Y}\) this law depends on the choice of \(x_0\)).
We now prove a more precise version of Theorem 2, giving a description of the conditional law of the full process.
Theorem 5
With the notations of Theorem 2, under \({{\mathbb P}}_{x_0}\left( \cdot | \Phi \right) \), \(\left( (X_t)_{t\in [0, \tau _u]}, \varphi \right) \) has the law of \(\left( ({\check{Z}}(t))_{t\in [0,T]}, \sigma \Phi (T)\right) \) where \(\check{Z}\) is distributed under \({{\mathbb P}}_{\Phi , x_0}^{\check{Z}}\) and \(\sigma \) is distributed according to the Ising model with interaction \(W_{i,j}\Phi _i(T)\Phi _j(T)\).
We will adopt the following notation
Recall that \(M_t^{\varphi }\), \(N_t^\Phi \) and T are the processes (starting with the initial conditions \(\varphi \) and \(\Phi \)) and stopping times defined respectively in (2.5), (2.7) and (2.3), as a function of the path of the Markov process X up to time t. The proof of Theorem 5 is based on the following lemma.
Lemma 5
We have:
(i) For all \(t\leqslant T\),
$$\begin{aligned} N^{\Phi }_t =e^{\sum _{i\in V}W_i(\ell _i(t)-{1\over 2}\Phi _i^2)} F(D^{-1}(t))\langle \sigma _{X_t}\rangle _{(t)}\left( \frac{\prod _{j\ne x_0} \Phi _j(0)}{\prod _{j\ne X_{t}}\Phi _j(t)}\right) , \end{aligned}$$where \(F(D^{-1}(t))\) (resp. \(\langle \cdot \rangle _{(t)}\)) corresponds to the partition function (resp. distribution) of the Ising model with interaction \(J_{i,j}(D^{-1}(t))=W_{i,j} \Phi _i(t)\Phi _j(t)\), and \(W_i=\sum _{j\sim i} W_{i,j}\).
(ii) \( N^\Phi _{T}=0 \) if \(X_{T}\ne x_0\).
(iii) Under \({{\mathbb P}}_{z}\) (the law of the MJP \((X_t)\) starting from z), \(N^\Phi _{t\wedge T}\) is a positive martingale; more precisely, \({N^\Phi _{t\wedge T}/N_0^\Phi }\) is the Radon–Nikodym derivative of the measure \({{\mathbb P}}_{\Phi ,z}^{\check{Z}}\) with respect to the law of the MJP X starting from z and stopped at time T.
Proof of Lemma 5
(i) We expand the squares in the energy term, which yields
and the statement follows easily.
(ii) Same argument as in Lemma 3. This can also be seen from the expression in (i): in this case all the interactions between x and its neighbors vanish, since \(J_{x,y}(T)= 0\), using \(\Phi _x(T)=0\). This implies that the pinning \(\sigma _{x_0}=+1\) has no effect on the spin \(\sigma _x\), and therefore, by symmetry, that \(\langle \sigma _x\rangle _{(T)}=0\) if \(X_T=x\ne x_0\).
(iii) The fact that \(N^\Phi _t\) is a martingale follows directly from the martingale property of the \(M^{\sigma \Phi }_t\), cf Lemma 2. It is also a consequence of the Radon–Nikodym property proved below. The fact that \(N^\Phi _t\) is positive follows from the positive correlations in the Ising model: \(\langle \sigma _x\rangle _{(t)}=\langle \sigma _{x_0}\sigma _x\rangle _{(t)}\geqslant 0\), see for instance [23].
The beginning of the proof follows the same line of ideas as in the proof of Theorem 3. Similarly, we set
so that
First note that the probability, for the time-changed process \(\check{Z}\), of holding at a site \(v\in V\) on a time interval \([t_1,t_2]\) is
Second, conditionally on \((\check{Z}_u, u\leqslant t)\), the probability that \(\check{Z}\) jumps from \(\check{Z}_t=i\) to j in the time interval \([t,t+dt]\) is
Therefore the probability that, at time t, \(\check{Z}\) has followed a path \(\check{Z}_0=x_0\), \(x_1\), \(\ldots \), \(\check{Z}_t=x_n\) with jump times respectively in \([t_i,t_i+dt_i]\), \(i=1\ldots n\), where \(t_0=0<t_1<\cdots <t_n<t=t_{n+1}\), with \(t\leqslant \check{T}\), is
where we use that \(\check{G}_{x_{i-1}}(t_{i-1})=\check{G}_{x_{i-1}}(t_{i})\), since \(\check{Z}\) stays at site \(x_{i-1}\) on the time interval \([t_{i-1},t_{i}]\). We now use that
Finally, set
and,
We have \(\langle \sigma _{\check{Z}_t}\rangle _{(t)}=K(t)/H(t)\), so that
Now, since
we have that
These identities imply that the probability that, at time t, \(\check{Z}\) has followed a path \(\check{Z}_0=x_0\), \(x_1\), \(\ldots \), \(\check{Z}_t=x_n\) with jump times respectively in \([t_i,t_i+dt_i]\), \(i=1\ldots n\), where \(t_0=0<t_1<\cdots <t_n<t=t_{n+1}\), with \(t\leqslant \check{T}\), is
where in the last equality we used Lemma 5, (i). Finally, the probability that, at time t, the Markov jump process X has followed the same path with jump times in the same intervals is
This says exactly that the Radon–Nikodym derivative of \(\check{Z}_{t\wedge \check{T}}\) under \( {{\mathbb P}}^{\check{Z}}_{\Phi }\) with respect to the law \({{\mathbb P}}\) of the Markov jump process is \({N^\Phi (t\wedge \check{T})\over N^\Phi (0)}\). \(\square \)
Proof of Lemma 1
By (ii) and (iii) of Lemma 5, we have, by the optional stopping theorem,
\(\square \)
Proof of Theorem 5
Let \(\psi ((X_t)_{t\in [0,\tau _u]}, \varphi )\mathop {=}\limits ^{notation}\psi (X, \varphi )\) and \(G(\Phi )\) be test functions. We are interested in the following expectation
where, as in the proof of Theorem 1, C is the normalizing constant of the Gaussian free field. Recall that \(\Phi = \sqrt{ \varphi ^2+2\ell (\tau _u)}\) and set \(\sigma = \mathsf {sign}(\varphi )\). As in the proof of Theorem 1 we change variables to \(\Phi \). Following the computation at the beginning of the proof of Theorem 1 up to Eq. (2.6), we deduce that (4.3) is equal to
If \(X_T=x_0\) then, using that \(\sigma _{X_T}=\sigma _{x_0}=1\) and the expansion in the proof of Lemma 5 (i), we deduce that
and, therefore,
This implies that
where in the last equality we used Lemma 5 (ii)–(iii). Since
it follows that \(CN_0^\Phi \) is the density of \(\Phi \), since by Theorem 1 we have \(\Phi \mathop {=}\limits ^{law} |\sqrt{2u}+\varphi |\) where \(\varphi \) has the law of the Gaussian free field \(P^{G,U}\). This exactly means that
\(\square \)
Proof of Theorem 2
From Theorem 5, we know that conditionally on \(\Phi \), \((\ell ,\varphi )\) has the law of \((\ell (T), \sqrt{\Phi ^2-2\ell (T)})\), where \(\ell (T)\) is the local time of \(\check{Z}\) under \({{\mathbb P}}^{\check{Z}}_{\Phi ,x_0}\). If we change back to the process \(\check{Y}\) we have, using (4.1),
hence, \({{\mathcal {L}}}((\ell ,\varphi )| \Phi )\) is the law of \(({1\over 2}(\Phi ^2-\check{L}^2(S)), \check{L}(S))\) for initial conditions \((\Phi , x_0)\). \(\square \)
5 Inversion of the generalized first Ray-Knight theorem
We use the same notation as in the first section. The generalized first Ray-Knight theorem concerns the local time of the Markov jump process starting at a point \(z_0\ne x_0\), stopped at its first hitting time of \(x_0\). Denote by
the first hitting time of \(x_0\).
Theorem 6
For any \(z_0\in V\) and any \(s> 0\),
Remark 2
This theorem is in general stated for \(s\ne 0\), but obviously we do not lose generality by restricting to \(s>0\).
This formally means that for any test function g,
Note that the measure \((1+{\varphi _{z_0}\over s})P^{G,U}\) has mass 1 (since \(\varphi _{z_0}\) is centered) but is not positive. In fact, since the integrand depends only on \(\vert \varphi _x+s\vert \), \(x\in V\), everything can be written in terms of a positive measure. Indeed, if \(\sigma _x=\mathsf {sign}(\varphi _x+s)\), then conditionally on \(\vert \varphi _x +s\vert \), \(x\in V\), \(\sigma \) has the law of an Ising model with interaction \(J_{i,j} = W_{i,j} \vert \varphi _i +s\vert \vert \varphi _j +s\vert \) and boundary condition \(\sigma _{x_0}= +1 \). This implies that the right-hand side of (5.1) can be written equivalently as
where \(\langle \sigma _{z_0}\rangle \) denotes the expectation of \(\sigma _{z_0}\) with respect to the Ising model described above. Since \(\sigma _{x_0}=+1\), we have that \({\langle \sigma _{z_0}\rangle \over s}\geqslant 0\), and \({\langle \sigma _{z_0}\rangle \over s}\vert s+ \varphi _{z_0}\vert d P^{G,U}\) is a probability measure.
We now give a counterpart of Theorem 2 for the generalized first Ray-Knight theorem. Consider the process \(\check{Y}\) defined in Sect. 1, starting from a point \(z_0\). Denote by \(\check{H}_{x_0}\) the first hitting time of \(x_0\) by the process \(\check{Y}\).
Obviously, Lemma 1 implies the following Lemma 6.
Lemma 6
Almost surely \(\check{H}_{x_0}\leqslant S\), where S is defined in (1.2).
Theorem 7
With the notation of Theorem 6, let
Under \({{\mathbb P}}_{z_0}\otimes P^{G,U}\), we have
where \(\check{L}(\check{H}_{x_0})\) is distributed under \({{\mathbb P}}^{\check{Z}}_{\Phi , z_0}\) and, conditionally on \(\check{L}(\check{H}_{x_0})\), \(\sigma \) is distributed according to the distribution of the Ising model with interaction \(J_{i,j}(\check{H}_{x_0})=W_{i,j}\check{L}_i(\check{H}_{x_0})\check{L}_j(\check{H}_{x_0})\) and boundary condition \(\sigma _{x_0}=+1\).
Similarly as for the generalized second Ray-Knight theorem, Theorem 7 is a consequence of the following more precise result. Let us consider, as in Sect. 4, the time-changed version \(\check{Z}\) of the process \(\check{Y}\).
Theorem 8
With the notation of Theorem 7, under \({{\mathbb P}}_{z_0}\left( \cdot | \Phi \right) \), \(\left( (X_t)_{t\in [0, H_{x_0}]}, \varphi +s\right) \) has the law of \(\left( ({\check{Z}}(t))_{t\in [0,\check{H}_{x_0}]}, \sigma \Phi (\check{H}_{x_0})\right) \) where \(\check{Z}\) is distributed under \({{\mathbb P}}_{\Phi , z_0}^{\check{Z}}\), \(\check{H}_{x_0}\) is the first hitting time of \(x_0\) by \(\check{Z}\), and \(\sigma \) is distributed according to the Ising model with interaction \(W_{i,j}\Phi _i(\check{H}_{x_0})\Phi _j(\check{H}_{x_0})\) and boundary condition \(\sigma _{x_0}=+1\).
Proof
We only sketch the proof since it is very similar to the proof of Theorem 5. Let \(\psi ((X_t)_{t\in [0, H_{x_0}]}, \varphi +s)\mathop {=}\limits ^{notation}\psi (X, \varphi +s)\) and \(G(\Phi )\) be positive test functions. We are interested in the following expectation
where, as in the proof of Theorem 1, C is the normalizing constant of the Gaussian free field. Recall that \(\Phi = \sqrt{ (\varphi +s)^2+2\ell (H_{x_0})}\), set \(\sigma = \mathsf {sign}(\varphi +s)\) and define
As in the proof of Theorem 1 we change variables to \(\Phi \). An easy adaptation of the computation in the proof of Theorem 1 up to Eq. (2.6) yields that (5.2) is equal to
As in the proof of Theorem 5 we have that, if \(X_{T'}=x_0\), then
This implies that
using in the last equality an easy adaptation of Lemma 5 (ii)–(iii) for time \(T'\). Now
which implies that \(CN_0^\Phi \) is the density of \(\Phi \), since by Theorem 6 we have \(\Phi \mathop {=}\limits ^{law} |\varphi +s|\) under \((1+{\varphi _{z_0}\over s}) P^{G,U}\). This exactly means that
\(\square \)
References
Basdevant, A.-L., Singh, A.: Continuous time vertex reinforced jump processes on Galton–Watson trees. Preprint (2010). Available at http://arxiv.org/abs/1005.3607
Collevecchio, A.: On the transience of processes defined on Galton–Watson trees. Ann. Probab. 34(3), 870–878 (2006)
Collevecchio, A.: Limit theorems for vertex-reinforced jump processes on regular trees. Electron. J. Probab. 14(66), 1936–1962 (2009)
Davis, B., Volkov, S.: Continuous time vertex-reinforced jump processes. Probab. Theory Relat. Fields 123(2), 281–300 (2002)
Davis, B., Volkov, S.: Vertex-reinforced jump processes on trees and finite graphs. Probab. Theory Relat. Fields 128(1), 42–62 (2004)
Ding, J., Lee, J.R., Peres, Y.: Cover times, blanket times, and majorizing measures. Ann. of Math. (2) 175(3), 1409–1471 (2012)
Disertori, M., Spencer, T., Zirnbauer, M.R.: Quasi-diffusion in a 3D supersymmetric hyperbolic sigma model. Commun. Math. Phys. 300(2), 435–486 (2010)
Eisenbaum, N.: Dynkin’s isomorphism theorem and the Ray-Knight theorems. Probab. Theory Relat. Fields 99(2), 321–335 (1994)
Eisenbaum, N., Kaspi, H.: On permanental processes. Stoch. Process. Appl. 119(5), 1401–1415 (2009)
Eisenbaum, N., Kaspi, H., Marcus, M.B., Rosen, J., Shi, Z.: A Ray-Knight theorem for symmetric Markov processes. Ann. Probab. 28(4), 1781–1796 (2000)
Ethier, S.N., Kurtz, T.G.: Markov Processes: Characterization and Convergence. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons Inc., New York (1986)
Knight, F.B.: Random walks and a sojourn density process of Brownian motion. Trans. Am. Math. Soc. 109, 56–86 (1963)
Le Jan, Y.: Dynkin’s isomorphism without symmetry. Preprint (2006). Available at http://arxiv.org/pdf/math/0610571.pdf
Le Jan, Y.: Markov loops and renormalization. Ann. Probab. 38(3), 1280–1319 (2010)
Le Jan, Y.: Markov Paths, Loops and Fields. Lecture Notes in Mathematics, vol. 2026. Springer, Heidelberg (2011). Lectures from the 38th Probability Summer School held in Saint-Flour, 2008
Lupu, T.: From loop clusters and random interlacement to the free field. Preprint (2014). Available at http://arxiv.org/abs/1402.0298
Marcus, M.B., Rosen, J.: Markov Processes, Gaussian Processes, and Local Times. Cambridge Studies in Advanced Mathematics, vol. 100. Cambridge University Press, Cambridge (2006)
Ray, D.: Sojourn times of diffusion processes. Ill. J. Math. 7, 615–630 (1963)
Sabot, C., Tarrès, P.: Edge-reinforced random walk, vertex-reinforced jump process and the supersymmetric hyperbolic sigma model. Preprint (2012). Available at http://arxiv.org/abs/1111.3991
Sznitman, A.-S.: An isomorphism theorem for random interlacements. Electron. Commun. Probab. 17(9), 9 (2012)
Sznitman, A.-S.: Random interlacements and the Gaussian free field. Ann. Probab. 40(6), 2400–2438 (2012)
Sznitman, A.-S.: Topics in Occupation Times and Gaussian Free Fields. Zurich Lectures in Advanced Mathematics. Eur. Math. Soc. (EMS), Zürich (2012)
Werner, W.: Percolation et modèle d’Ising. Cours Spécialisés, vol. 16. Société Mathématique de France, Paris (2009)
Acknowledgments
We are grateful to Alain-Sol Sznitman and Jay Rosen for several useful comments on a first version of the manuscript. We thank also Yuval Peres for interesting discussions.
This work was partly supported by the ANR project MEMEMO2, and the LABEX MILYON.
The first author is grateful to DMA, ENS, for his hospitality and financial support while part of this work was done.
Sabot, C., Tarrès, P.: Inverting Ray-Knight identity. Probab. Theory Relat. Fields 165, 559–580 (2016). https://doi.org/10.1007/s00440-015-0640-x