1 Introduction

Let \(G=(V,E,\sim )\) be a nonoriented connected finite graph without loops, with conductances \((W_e)_{e\in E}\); define, for all \(x,y\in V\), \(W_{x,y}=W_{\{x,y\}}{{\mathbb {1}}}_{x\sim y}\).

Let L and \(\mathcal {E}\) be respectively the associated Markov generator and Dirichlet form, defined by, for all \(f\in {{\mathbb R}}^V\),

$$\begin{aligned} Lf(x)&=\sum _{y\in V}W_{x,y}(f(y)-f(x)),\\ \mathcal {E}(f,f)&=\frac{1}{2}\sum _{x,y\in V}W_{x,y}(f(x)-f(y))^2. \end{aligned}$$
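
To fix ideas, here is a minimal numerical sketch of these two objects in Python (the 4-cycle with unit conductances is a hypothetical choice of ours, not taken from the text); it also checks the summation-by-parts identity \(\mathcal {E}(f,f)=-\sum _{x\in V}f(x)Lf(x)\), valid since W is symmetric.

```python
import numpy as np

# Hypothetical example: a 4-cycle with unit conductances W_{x,y}.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

def generator(W):
    """Markov generator: Lf(x) = sum_y W_{x,y} (f(y) - f(x))."""
    return W - np.diag(W.sum(axis=1))

def dirichlet(W, f):
    """Dirichlet form: E(f,f) = (1/2) sum_{x,y} W_{x,y} (f(x)-f(y))^2."""
    diff = f[:, None] - f[None, :]
    return 0.5 * (W * diff ** 2).sum()

f = np.array([0.3, -1.2, 0.7, 0.1])
# Summation by parts on a finite graph: E(f,f) = -<f, Lf>.
assert np.isclose(dirichlet(W, f), -f @ (generator(W) @ f))
```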

Let \(x_0\in V\) be a special point that will be fixed throughout the text. Let \(U=V\setminus \{x_0\}\), and let \(P^{G,U}\) be the unique probability on \({{\mathbb R}}^V\) under which \((\varphi _x)_{x\in V}\) is the centered Gaussian field with covariance \(E^{G,U}[\varphi _x\varphi _y]=g_U(x,y)\), where \(g_U\) is the Green function killed outside U, in other words

$$\begin{aligned} P^{G,U}=\frac{1}{(2\pi )^{\frac{|U|}{2}}\sqrt{\det G_U}}\exp \left\{ -\frac{1}{2}\mathcal {E}(\varphi ,\varphi )\right\} \delta _0(\varphi _{x_0})\prod _{x\in U}d\varphi _x, \end{aligned}$$

where \(G_U:=(g_U(x,y))_{x,y\in U}\) denotes the Green function viewed as a matrix, and \(\delta _0\) is the Dirac mass at 0, so that the integration is on \((\varphi _x)_{x\in U}\) with \(\varphi _{x_0}=0\).
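
A short sketch (same hypothetical 4-cycle as above) that samples \(P^{G,U}\) directly from the definition: the density above shows that the precision matrix of \((\varphi _x)_{x\in U}\) is \(-L\) restricted to \(U\times U\), whose inverse is \(G_U\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-cycle, unit conductances, special point x0 = 0.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x0, n = 0, W.shape[0]
U = [i for i in range(n) if i != x0]

# Precision matrix of (phi_x)_{x in U}: minus the generator on U x U.
A = (np.diag(W.sum(axis=1)) - W)[np.ix_(U, U)]
G_U = np.linalg.inv(A)              # Green function killed outside U

def sample_gff():
    """One sample of the field, pinned at phi_{x0} = 0."""
    z = rng.standard_normal(len(U))
    phi = np.zeros(n)
    # If A = C C^T (Cholesky), then C^{-T} z has covariance A^{-1} = G_U.
    phi[U] = np.linalg.solve(np.linalg.cholesky(A).T, z)
    return phi

# Empirical covariance approaches G_U.
samples = np.array([sample_gff() for _ in range(20000)])
print(np.allclose(np.cov(samples[:, U].T), G_U, atol=0.05))
```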

Let \({{\mathbb P}}_{z_0}\) be the law under which \((X_t)_{t\geqslant 0}\) is a Markov jump process (MJP) with conductances \((W_e)_{e\in E}\) (i.e. jump rate \(W_{i,j}\) from i to j, for \(i,j\in V\)), starting at \(z_0\) at time 0, with right-continuous paths and local times defined, for \(x\in V\) and \(t\geqslant 0\), by

$$\begin{aligned} \ell _x(t)=\int _0^t {{\mathbb {1}}}_{\{X_u=x\}}\,du. \end{aligned}$$
(1.1)

Let \(\tau _\cdot \) be the right-continuous inverse of \(t\mapsto \ell _{x_0}(t)\):

$$\begin{aligned} \tau _u=\inf \{t\geqslant 0;\,\ell _{x_0}(t)>u\},\, \text { for }u\geqslant 0. \end{aligned}$$
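
The pair \((\ell (\tau _u),X)\) is equally simple to simulate; the sketch below (hypothetical 4-cycle again) runs the jump process from \(x_0\) and returns the local time field at time \(\tau _u\), at which \(\ell _{x_0}=u\) and the process sits at \(x_0\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 4-cycle, unit conductances, special point x0 = 0.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x0 = 0

def local_times_at_tau(u):
    """Run the Markov jump process from x0 until its local time at x0
    exceeds u; return the field (ell_x(tau_u))_{x in V}."""
    n = W.shape[0]
    ell = np.zeros(n)
    x = x0
    while True:
        hold = rng.exponential(1.0 / W[x].sum())     # holding time at x
        if x == x0 and ell[x0] + hold > u:
            ell[x0] = u                              # stop exactly at tau_u
            return ell
        ell[x] += hold
        x = rng.choice(n, p=W[x] / W[x].sum())       # jump to a neighbour

print(local_times_at_tau(u=1.0))
```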

Our first aim is to provide a short proof of the generalized second Ray-Knight theorem.

Theorem 1

(Generalized second Ray-Knight theorem, [10]) For any \(u>0\),

$$\begin{aligned}&\left( \ell _x(\tau _u)+\frac{1}{2}\varphi _x^2\right) _{x\in V} \text { under }{{\mathbb P}}_{x_0}\otimes P^{G,U}, \text { has the same law as}\\&\left( \frac{1}{2}(\varphi _x+\sqrt{2u})^2\right) _{x\in V}\text { under }P^{G,U}. \end{aligned}$$
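
The identity can be checked by Monte Carlo on a small graph by combining the two samplers sketched above; for brevity, the sketch below only compares the means of the two fields (any test function could be used instead).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 4-cycle, unit conductances, x0 = 0, u = 1.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x0, n, u = 0, 4, 1.0
U = [i for i in range(n) if i != x0]
C = np.linalg.cholesky((np.diag(W.sum(axis=1)) - W)[np.ix_(U, U)])

def gff():
    phi = np.zeros(n)
    phi[U] = np.linalg.solve(C.T, rng.standard_normal(n - 1))
    return phi

def ell_tau_u():
    ell, x = np.zeros(n), x0
    while True:
        hold = rng.exponential(1.0 / W[x].sum())
        if x == x0 and ell[x0] + hold > u:
            ell[x0] = u
            return ell
        ell[x] += hold
        x = rng.choice(n, p=W[x] / W[x].sum())

N = 20000
lhs = np.mean([ell_tau_u() + 0.5 * gff() ** 2 for _ in range(N)], axis=0)
rhs = np.mean([0.5 * (gff() + np.sqrt(2 * u)) ** 2 for _ in range(N)], axis=0)
print(lhs, rhs)   # the two mean vectors agree up to Monte Carlo error
```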

This theorem is due to Eisenbaum, Kaspi, Marcus, Rosen and Shi [10] and is closely related to Dynkin’s isomorphism; see [8] for a first relation between the Ray-Knight theorems and Dynkin’s isomorphism, [17] for an overview of the subject, and [12, 18] for the original papers of Knight and Ray; see [14–16] for related work on the link between Markov loops and the Gaussian Free Field. Note that this result plays a crucial rôle in a recent work of Ding, Lee and Peres [6] on cover times of discrete Markov processes, and in the study of random interlacements, see for instance [20–22].

In Sect. 2, we give our short proof of Theorem 1, independent of any reference to the VRJP. Similar results would hold for the Dynkin and Eisenbaum isomorphism theorems (see Sect. 5, and [17, 22]). We note that there is also a non-symmetric version of Dynkin’s isomorphism [13] (see also [9]), which our technique cannot provide as it stands.

In Sect. 3, we explain how the martingale appearing in this proof is related to the vertex-reinforced jump process (VRJP); we stress, however, that the proof itself does not need any reference to the VRJP.

This short proof in fact yields an identity that corresponds to an inversion of the Ray-Knight identity, proved in Sect. 4. Indeed, Theorem 1 gives an identity in law, but fails to give any information on the law of \((\ell _x(\tau _u), \varphi _x)_{x\in V}\) conditioned on \(\left( \ell _x(\tau _u)+\frac{1}{2}\varphi _x^2\right) _{x\in V}\). We provide below a process that describes this conditional law.

Finally, Sect. 5 gives the analogous inversion for the generalized first Ray-Knight theorem.

Let \((\Phi _x)_{x\in V}\) be positive reals. As before, we fix the special point \(x_0\in V\). We consider the continuous-time process \(({\check{Y}}_s)_{s\geqslant 0}\) with state space V defined as follows. We set

$$\begin{aligned} \check{L}_i(s)= \Phi _i-\int _{0}^s {{\mathbb {1}}}_{\check{Y}_u=i} du. \end{aligned}$$

At time s, we consider the Ising model \((\sigma _x)_{x\in V}\) on G with interaction

$$\begin{aligned} J_{i,j}(s)= W_{i,j}\check{L}_i(s) \check{L}_j(s), \end{aligned}$$

and with boundary condition \(\sigma _{x_0}=+1\). We denote by

$$\begin{aligned} F(s)=\sum _{\sigma \in \{-1,+1\}^{V{\setminus }\{x_0\}}} e^{\sum _{\{i,j\}\in E} J_{i,j}(s) \sigma _i \sigma _j} \end{aligned}$$

its partition function and by \(\langle \cdot \rangle _s\) its associated expectation, so that for example

$$\begin{aligned} \langle \sigma _x\rangle _s = {1\over F(s)}\sum _{\sigma \in \{-1,+1\}^{V{\setminus }\{x_0\}}} \sigma _x e^{\sum _{\{i,j\}\in E} J_{i,j}(s) \sigma _i \sigma _j}>0 \end{aligned}$$

The process \(\check{Y}\) is then defined as the jump process which, conditioned on the past at time s, if \(\check{Y}_s=i\), jumps from i to j at a rate

$$\begin{aligned} W_{i,j} \check{L}_j(s) {\langle \sigma _j\rangle _s\over \langle \sigma _i\rangle _s}, \end{aligned}$$

and is stopped at the time

$$\begin{aligned} S=\sup \{s\geqslant 0, \; \check{L}_i(s)> 0 \hbox { for all } i\}. \end{aligned}$$
(1.2)

Note, using the positivity of the Ising model (see for instance [23], Proposition 7.1), that \(\langle \sigma _x\rangle _s>0\) for all \(s<S\). Hence the process \(\check{Y}\) is well defined up to time S.
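
The process \(\check{Y}\) can be simulated, at least approximately. The sketch below uses a naive Euler discretization of the time-inhomogeneous jump rates (the step dt and the 4-cycle are hypothetical choices of ours), with the Ising expectations computed by brute force, which is feasible only for very small V; in accordance with Lemma 1 below, the returned final site should always be \(x_0\).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

# Hypothetical 4-cycle with unit conductances; special point x0 = 0.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x0, n = 0, 4

def magnetizations(L):
    """Brute-force <sigma_x> for couplings J_ij = W_ij L_i L_j with
    boundary condition sigma_{x0} = +1."""
    J = W * np.outer(L, L)
    num, Z = np.zeros(n), 0.0
    for spins in product([-1, 1], repeat=n - 1):
        s = np.insert(np.array(spins, dtype=float), x0, 1.0)
        w = np.exp(0.5 * s @ J @ s)          # each edge counted once
        num, Z = num + w * s, Z + w
    return num / Z

def run_Y_check(Phi, dt=1e-3):
    """Euler-discretized sketch of the process: jump rates
    W_{ij} L_j(s) <sigma_j>_s / <sigma_i>_s, local time subtracted at
    the current site, stopped when a local time reaches 0."""
    L, i = Phi.astype(float).copy(), x0
    while L[i] > dt:
        m = magnetizations(L)
        rates = W[i] * L * m / m[i]
        L[i] -= dt                           # local time runs down at i
        if rng.random() < rates.sum() * dt:  # approx. one jump per step
            i = rng.choice(n, p=rates / rates.sum())
    return i

print(run_Y_check(Phi=np.array([1.0, 1.2, 0.8, 1.1])))  # expected: 0 (= x0)
```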

We denote by \({{\mathbb P}}_{\Phi ,z}^{\check{Y}}\) the law of \({\check{Y}}\) starting from z and initial condition \(\Phi \), and stopped at time S. (Note that this law also depends on the choice of the “special point” \(x_0\)).

Lemma 1

Starting from any point \(z\in V\), the process \({\check{Y}}\) ends at \(x_0\), i.e.

$$\begin{aligned} S<\infty \quad \hbox { and }\quad {\check{Y}}_S=x_0, \quad {{\mathbb P}}_{\Phi ,z}^{\check{Y}} \hbox { a.s.} \end{aligned}$$

This process provides an inversion to the second Ray-Knight identity, as stated in the following theorem.

Theorem 2

Let \(\ell \), \(\varphi \), \(\tau _u\) be as in Theorem 1 and set

$$\begin{aligned} \Phi _x = \sqrt{ \varphi _x^2+ 2 \ell _x(\tau _u)}. \end{aligned}$$

Under \({{\mathbb P}}_{x_0}\otimes P^{G,U}\), we have

$$\begin{aligned} {{\mathcal {L}}}\left( \varphi | \Phi \right) \mathop {=}\limits ^{law} (\sigma \check{L}(S)), \end{aligned}$$

where \(\check{L}(S)\) is distributed under \({{\mathbb P}}_{\Phi ,x_0}^{\check{Y}}\) and, conditionally on \(\check{L}(S)\), \(\sigma \) is distributed according to the distribution of the Ising model with interaction \(J_{i,j}(S)=W_{i,j}\check{L}_i(S)\check{L}_j(S)\) with boundary condition \(\sigma _{x_0}=+1\).

Remark 1

Once \(\varphi \) is known, \(\ell (\tau _u)=(\Phi ^2-\varphi ^2)/2\) is obviously also known: in other words, Theorem 2 is equivalent to the more precise identity

$$\begin{aligned} {{\mathcal {L}}}\left( (\ell (\tau _u), \varphi ) | \Phi \right) \mathop {=}\limits ^{law} \left( {1\over 2}(\Phi ^2-\check{L}^2(S)), \sigma \check{L}(S)\right) , \end{aligned}$$

where \(\check{L}(S)\) and \(\sigma \) are distributed as in the statement of Theorem 2.

The proof of Theorem 2 is given in Sect. 4. Theorem 2 is a consequence of a more precise statement, cf. Theorem 5, which gives the law of \((X_s)_{s\leqslant \tau _u}\) conditionally on \(\Phi \).

2 A new proof of Theorem 1

Let G be a positive measurable test function. Letting \(d\varphi :=\delta _0(\varphi _{x_0})\prod _{x\in U}d\varphi _x\) and \(C=(2\pi )^{-\frac{|U|}{2}}(\det G_U)^{-1/2}\) be the normalizing constant of the Gaussian free field, we get

$$\begin{aligned}&\mathbb {E}_{x_0}\otimes P^{G,U}\left[ G\left( \ell _x(\tau _u)+\frac{1}{2}\varphi _x^2\right) _{x\in V}\right] \nonumber \\&\quad =C\,\,\mathbb {E}_{x_0}\left[ \int _{{{\mathbb R}}^U} G\left( \ell _x(\tau _u)+\frac{1}{2}\varphi _x^2\right) \exp \left( -{1\over 2}\mathcal {E}(\varphi ,\varphi )\right) d\varphi \right] \nonumber \\&\quad =C\,\,\mathbb {E}_{x_0}\left[ \sum _{\sigma \in \{+1,-1\}^V, \sigma _{x_0}=+1} \int _{{{\mathbb R}}_+^U} G\left( \ell _x(\tau _u)+\frac{1}{2}\varphi _x^2\right) \exp \left( -{1\over 2}\mathcal {E}(\sigma \varphi ,\sigma \varphi )\right) d\varphi \right] \end{aligned}$$
(2.1)

where in the last equality we decompose the integral according to the possible signs of \(\varphi \), so that the sum is over \(\sigma \in \{+1, -1\}^V\) with \(\sigma _{x_0}=+1\). In the following we simply write \(\sum _\sigma \) for this sum.

The strategy is now to make the change of variables \(\Phi =\sqrt{2\ell (\tau _u)+\varphi ^2}\). Given \(\ell =(\ell _i(t))_{i\in V, t\in {{\mathbb R}}_+}\), let

$$\begin{aligned} D_{u}:=\{\Phi \in {{\mathbb R}}_+^V: \,\,\Phi _{x_0}=\sqrt{2u}, \,\, \Phi _x^2/2\geqslant \ell _x(\tau _u)\text { for all } x\in V{\setminus }\{x_0\}\} \end{aligned}$$

We first make the change of variables

$$\begin{aligned}&T_{u}: {{\mathbb R}}_+^V\cap \{\varphi _{x_0}=0\}\rightarrow \,\,D_{u}\\&\quad \varphi \mapsto \,\,\Phi =T_u(\varphi )=\left( \sqrt{2\ell _i(\tau _u)+\varphi _i^2}\right) _{i\in V}, \end{aligned}$$

which can be inverted by \(\varphi _i = \sqrt{\Phi _i^2-2\ell _i(\tau _u)}\). This yields, letting \(d\Phi :=\delta _{\sqrt{2u}}(\Phi _{x_0})\prod _{x\in U}d\Phi _x\),

$$\begin{aligned}&\mathbb {E}_{x_0}\otimes P^{G,U}\left[ G\left( \ell _x(\tau _u)+\frac{1}{2}\varphi _x^2\right) _{x\in V}\right] \nonumber \\&\quad =C\,\,\mathbb {E}_{x_0}\left[ \int _{{{\mathbb R}}_+^U}\sum _{\sigma } G\left( \left( \frac{\Phi _x^2}{2}\right) \right) \exp \left( -{1\over 2}\mathcal {E}(\sigma \varphi ,\sigma \varphi )\right) |\mathsf {Jac}(T_{u}^{-1})(\Phi )|{{\mathbb {1}}}_{\Phi \in {D}_u}d\Phi \right] \nonumber \\&\quad =C\,\,\mathbb {E}_{x_0}\left[ \int _{{{\mathbb R}}_+^U}\sum _{\sigma } G\left( \left( \frac{\Phi _x^2}{2}\right) \right) \exp \left( -{1\over 2}\mathcal {E}(\sigma \varphi ,\sigma \varphi )\right) \left( \prod _{x\in U}\frac{\Phi _x}{\varphi _x}\right) {{\mathbb {1}}}_{\Phi \in {D}_u}d\Phi \right] . \end{aligned}$$
(2.2)

Note that the Jacobian is taken over all coordinates but \(x_0\) (\(\Phi _{x_0}=\sqrt{2u}\)); if \(\Phi \in {D}_u\), then

$$\begin{aligned} |\mathsf {Jac}(T_{u}^{-1})(\Phi )| =|\mathsf {Jac}(\Phi \mapsto \varphi )(\Phi )|=\prod _{x\in U}\frac{\Phi _x}{\varphi _x}. \end{aligned}$$
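
In detail, \(T_u^{-1}\) acts coordinate-wise on the coordinates in U, so its Jacobian matrix is diagonal with entries

$$\begin{aligned} \frac{\partial \varphi _x}{\partial \Phi _x}=\frac{\partial }{\partial \Phi _x}\sqrt{\Phi _x^2-2\ell _x(\tau _u)}=\frac{\Phi _x}{\sqrt{\Phi _x^2-2\ell _x(\tau _u)}}=\frac{\Phi _x}{\varphi _x},\quad x\in U, \end{aligned}$$

and its determinant is the product of these entries.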

Given \(\Phi \in {{\mathbb R}}_+^V\) such that \(\Phi _{x_0}=\sqrt{2u}\), we define for all \(i\in V\),

$$\begin{aligned}&T=\inf \left\{ t\geqslant 0;\,\ell _{i}(t)=\frac{1}{2}\Phi _i^2\text { for some }i\in V\right\} \nonumber \\&\quad \Phi _i(t)= \sqrt{\Phi _i^2-2\ell _i(t)}, \quad t\leqslant T \end{aligned}$$
(2.3)

so that in (2.2) we have \(\varphi =\Phi (\tau _u)\). An important remark is that

$$\begin{aligned} \Phi \in {D}_u\iff X_T=x_0\quad \iff T=\tau _u. \end{aligned}$$
(2.4)

Finally, we define for a configuration of signs \(\sigma \in \{+1,-1\}^V\) with \(\sigma _{x_0}=+1\),

$$\begin{aligned} M^{{\sigma \Phi }}_t\!=\!\exp \left\{ -\frac{1}{2}\mathcal {E}({\sigma \Phi }(t),{\sigma \Phi }(t))\right\} \frac{\prod _{j\ne x_0}\sigma _j\Phi _j(0)}{\prod _{j\ne X_{t}}\sigma _j\Phi _j(t)}. \end{aligned}$$
(2.5)

From (2.2)–(2.4), we deduce

$$\begin{aligned} \mathbb {E}_{x_0}\otimes P^{G,U}\left[ G\left( \ell (\tau _u)\!+\!\frac{1}{2}\varphi ^2\right) \right] =C\,\,\int _{{{\mathbb R}}_+^U}G\left( \frac{1}{2}\Phi ^2\right) \mathbb {E}_{x_0}\left[ \sum _{\sigma } M^{{\sigma \Phi }}_T{{\mathbb {1}}}_{\{X_T=x_0\}}\right] d\Phi . \end{aligned}$$
(2.6)

Lemma 2

For any \(\Phi \in {{\mathbb R}}^V\) and \(\sigma \in \{-1,1\}^V\), the process \((M^{\sigma \Phi }_{t\wedge T})_{t\geqslant 0}\) is a uniformly integrable martingale.

Proof

Consider the Markov process \((\ell (t), X(t))\), which obviously has generator \( {\tilde{L}}(g)(\ell ,x)= (\frac{\partial }{\partial \ell _x} +L)g(\ell ,x). \) Let f be the function defined by \( f(\ell ,x)=\left( \prod _{y\ne x} \sigma _y \sqrt{\Phi _y^2-2\ell _y}\right) ^{-1} \). Note that, if \(t<T\),

$$\begin{aligned} \frac{d}{d t}\mathcal {E}({\sigma \Phi }(t),{\sigma \Phi }(t))=\frac{2}{({\sigma \Phi })_{X_t}(t)}L({\sigma \Phi }(t))(X_t)=2{Lf\over f}(\ell (t),X_t)=2{{\tilde{L}}f\over f}(\ell (t),X_t), \end{aligned}$$

since \(f(\ell ,x)\) does not depend on \(\ell _x\). Therefore, for \(t<T\),

$$\begin{aligned} {M^{\sigma \Phi }_t\over M^{\sigma \Phi }_0}= \frac{f(\ell (t),X(t))}{f(0,x_0)}e^{-\int _0^t {{\tilde{L}}f\over f}(\ell (s),X(s)) ds} \end{aligned}$$

which implies that \(M^{\sigma \Phi }_{t\wedge T}\) is a martingale, using for instance Lemma 3.2 in [11], pp. 174–175.

The condition that f be bounded in that result is not satisfied here, but the proof carries over, noting that the following integrability conditions hold (see Problem 22 of Chapter 2, p. 92 in [11]): first, using that \(({\tilde{L}}f/f)(\ell (t),X(t))\geqslant -\mathsf {Cst}(W)\) and \((|Lf|/f)(\ell (t),X(t))\leqslant \mathsf {Cst}(W,\Phi )/\Phi _{X_t}(t)\), we deduce

$$\begin{aligned}&\int _0^t {{\tilde{L}}f\over f}(\ell (s),X_s)\exp \left( -\int _0^s{{\tilde{L}}f\over f}(\ell (u),X_u)\,du\right) \,ds \leqslant \mathsf {Cst}(W,\Phi )\sum _{i\in V}\int _{\Phi _i(t)}^{\Phi _i}\frac{ds}{\sqrt{s}}\\&\quad \leqslant \mathsf {Cst}(\Phi ,W,|V|). \end{aligned}$$

Second, \(f(\ell (t),X(t))\) can be upper bounded by an integrable random variable, uniformly in t. Indeed, let us consider the extension \((X_t)_{t\in {{\mathbb R}}}\) of the process to negative times, with the convention that the local times at all sites are 0 at time 0.

For all \(j\in V\), let \(s_j\) be the (possibly negative) local time at j, at the last jump from that site before reaching a local time \(\Phi _j^2/2\) at that site. Let, for all \(j\in V\),

$$\begin{aligned} m_j=\Phi _j^2/2-s_j. \end{aligned}$$

Then the random variables \(m_j\), \(j\in V\), are independent and exponentially distributed with parameters \(W_j=\sum _{k\sim j}W_{jk}\), since the sequence of local times of jumps from j is a Poisson point process with intensity \(W_j\).

Now, for all \(t\in {{\mathbb R}}_+\), \(j\ne X_t\), \(\Phi _j(t\wedge T)\geqslant \sqrt{2m_j}\), so that \(f(\ell (t),X(t))\leqslant \mathsf {Cst}(\Phi )\prod _{j\in V}m_j^{-1/2}\).

This enables us to conclude, since \(\prod _{j\in V}m_j^{-1/2}\) is integrable, which also implies uniform integrability of \(M_{t\wedge T}\), using that \(|M_{t\wedge T}|\leqslant \mathsf {Cst}(W,\Phi ,t) f(\ell (t),X(t)).\) \(\square \)

Let us now consider the process

$$\begin{aligned} N_t^{\Phi }&=\sum _{\sigma \in \{-1,+1\}^{V{\setminus }\{x_0\}}} M_t^{\sigma \Phi }\nonumber \\&=\sum _{\sigma \in \{-1,+1\}^{V{\setminus }\{x_0\}}} \exp \left\{ -\frac{1}{2}\mathcal {E}(\sigma \Phi (t),\sigma \Phi (t))\right\} \frac{\prod _{j\ne x_0}\sigma _j \Phi _j(0)}{\prod _{j\ne X_{t}}\sigma _j\Phi _j(t)}. \end{aligned}$$
(2.7)

Lemma 3

For all \(x\ne x_0\), we have

$$\begin{aligned} N_T^{\Phi }{{\mathbb {1}}}_{\{X_T=x \}}=0 \end{aligned}$$
(2.8)

Proof

Let \(\sigma ^x\) be the spin-flip of \(\sigma \) at \(x\): \(\sigma ^x=\epsilon ^x \sigma \) with \(\epsilon ^x_y=-1\) if \(y=x\) and 1 if \(y\ne x\). If \(x\ne x_0\) and \(X_T=x\), then

$$\begin{aligned} M^{\sigma ^x\Phi }_T=-M^{\sigma \Phi }_T. \end{aligned}$$

Indeed, since \(\Phi _x(T)= 0\), we have \(\sigma ^x\Phi (T)= {\sigma \Phi }(T)\), and the minus sign comes from the numerator of the product term in (2.5). By symmetry, the left-hand side of (2.8) is equal to

$$\begin{aligned} {1\over 2}\left( \sum _{\sigma \in \{-1,+1\}^{V\setminus \{x_0\}}}\left( M^{\sigma \Phi }_T+M^{\sigma ^x\Phi }_T\right) \right) {{\mathbb {1}}}_{\{X_T=x \}}=0. \end{aligned}$$

\(\square \)

It follows from Lemmas 2 and 3, by the optional stopping theorem (using the uniform integrability of \(M_{t\wedge T}\)), that

$$\begin{aligned}&\mathbb {E}_{x_0}\otimes P^{G,U}\left[ G\left( \ell _x(\tau _u)+\frac{1}{2}\varphi _x^2\right) _{x\in V}\right] \\&\quad = C\,\,\int _{{{\mathbb R}}_+^U} G\left( \left( \frac{1}{2}\Phi _x^2\right) _{x\in V}\right) \mathbb {E}_{x_0}\left[ N^\Phi _T{{\mathbb {1}}}_{\{X_T=x_0\}}\right] d\Phi \\&\quad =C\,\,\int _{{{{\mathbb R}}_+^U}} G\left( \frac{1}{2}\Phi ^2\right) \mathbb {E}_{x_0}[N^\Phi _T]d\Phi =C\,\,\int _{{{\mathbb R}}_+^{U}} G\left( \frac{1}{2}\Phi ^2\right) N^\Phi _0\,d\Phi \\&\quad =C\,\,\int _{{{\mathbb R}}_+^{U}} G\left( \frac{1}{2}\Phi ^2\right) \left( \sum _{\sigma } \exp \left\{ -\frac{1}{2}\mathcal {E}({\sigma \Phi },{\sigma \Phi })\right\} \right) d\Phi \\&\quad =C\,\,\int _{{{\mathbb R}}^{U}} G\left( \frac{1}{2}\Phi ^2\right) \exp \left\{ -\frac{1}{2}\mathcal {E}(\Phi ,\Phi )\right\} d\Phi , \end{aligned}$$

which concludes the proof of Theorem 1: since \(\mathcal {E}\) is invariant under the addition of constants, the change of variables \(\Phi =\varphi +\sqrt{2u}\) identifies the last integral with \(P^{G,U}\left[ G\left( \frac{1}{2}(\varphi +\sqrt{2u})^2\right) \right] \).

3 Link with vertex-reinforced jump process

The aim of this section is to point out a link between the Ray-Knight identity and a reversed version of the Vertex-Reinforced Jump Process (VRJP).

It is organized as follows. In Sect. 3.1 we compute the Radon–Nikodym derivative of the VRJP, which is similar to the martingale M in Sect. 2: the computation can be used in particular to provide a direct proof of exchangeability of the VRJP. In Sect. 3.2 we introduce a time-reversed version of the VRJP, i.e. where the process subtracts rather than adds local time at the site where it stays (see Definition 2). Then we show in Theorem 4 that its Radon–Nikodym derivative is the martingale \(M^{\sigma \Phi }\) in Sect. 2 with positive spins \(\sigma \equiv +1\). Note that the “magnetized” reversed VRJP, defined in Sect. 1 and related to the inversion of Ray-Knight in Theorem 2, involves instead the sum \(N^\Phi \), cf (2.7), of all the martingales \(M^{\sigma \Phi }\).

3.1 The vertex-reinforced jump process and its Radon–Nikodym derivative

Definition 1

Given positive conductances on the edges of the graph \((W_e)_{e\in E}\) and initial positive local times \((\varphi _i)_{i\in V}\), the vertex-reinforced jump process (VRJP) is a continuous-time process \((Y_t)_{t\geqslant 0}\) on V, starting at time 0 at some vertex \(z\in V\) and such that, if Y is at a vertex \(i\in V\) at time t, then, conditionally on \((Y_s, s\leqslant t)\), the process jumps to a neighbour j of i at rate \(W_{i,j}L_j(t)\), where

$$\begin{aligned} L_j(t):=\varphi _j+\int _0^t {{\mathbb {1}}}_{\{Y_s=j\}}\,ds. \end{aligned}$$
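
Note that the VRJP can be simulated exactly: while Y sits at i, only \(L_i\) grows, and \(L_i\) does not enter the jump rates out of i, so the sojourn at i is exponential with constant rate \(\sum _{j}W_{i,j}L_j\). A sketch (with a hypothetical 4-cycle):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 4-cycle with unit conductances.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

def vrjp(phi, z=0, n_jumps=10000):
    """Exact simulation of the VRJP: during a sojourn at i the rates
    W_{ij} L_j are constant (only L_i grows)."""
    L = phi.astype(float).copy()
    i, visits = z, np.zeros(len(phi))
    for _ in range(n_jumps):
        rates = W[i] * L
        L[i] += rng.exponential(1.0 / rates.sum())  # sojourn adds to L_i
        i = rng.choice(len(phi), p=rates / rates.sum())
        visits[i] += 1
    return L, visits

L, visits = vrjp(phi=np.ones(4))
print(visits)  # reinforcement: sites with larger L attract more jumps
```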

The vertex-reinforced jump process was initially proposed by Werner in 2000, and first studied by Davis and Volkov [4, 5], then Collevecchio [2, 3], Basdevant and Singh [1], and Sabot and Tarrès [19].

Let D be the increasing functional

$$\begin{aligned} D(s)=\frac{1}{2}\sum _{i\in V} (L_i^2(s)-\varphi _i^2), \end{aligned}$$

define the time-changed VRJP

$$\begin{aligned} Z_t= Y_{ D^{-1}(t)}, \end{aligned}$$

and let, for all \(i\in V\) and \(t\geqslant 0\), \(\ell ^Z_i(t)\) be the local time of Z at time t.

Lemma 4

The inverse functional \(D^{-1}\) is given by

$$\begin{aligned} D^{-1}(t)=\sum _{i\in V}(\sqrt{\varphi _i^2+2\ell ^Z_i(t)}-\varphi _i). \end{aligned}$$

Conditionally on the past at time t, the process Z jumps from \(Z_t=i\) to a neighbour j at rate

$$\begin{aligned} W_{i,j}\sqrt{\frac{\varphi _j^2+2\ell ^Z_j(t)}{\varphi _i^2+2\ell ^Z_i(t)}}. \end{aligned}$$

Proof

The proof is elementary and already appears, in a slightly modified version, in [19, Section 4.3, Proof of Theorem 2 ii)], but we include it here for completeness. First note that, for all \(i\in V\),

$$\begin{aligned} \ell ^Z_i(D(s))=(L_i^2(s)-\varphi _i^2)/2, \end{aligned}$$
(3.1)

since

$$\begin{aligned} (\ell ^Z_i(D(s)))'= D'(s){{\mathbb {1}}}_{\{Z_{D(s)}=i\}}={L_{Y_s}(s)} {{\mathbb {1}}}_{\{Y_s=i\}}. \end{aligned}$$

Hence

$$\begin{aligned} (D^{-1})'(t)={1\over D'(D^{-1}(t))} ={1\over L_{Z_t} (D^{-1}(t))} =\frac{1}{\sqrt{\varphi _{Z_t}^2+2\ell ^Z_{Z_t}(t)}}, \end{aligned}$$

which yields the expression for \(D^{-1}\). It remains to prove the last assertion:

$$\begin{aligned} {{\mathbb P}}(Z_{t+dt}=j | {{\mathcal {F}}}_t)&={{\mathbb P}}(Y_{D^{-1}(t+dt)}=j | {{\mathcal {F}}}_t)\\&=W_{Z_t,j}(D^{-1})'(t) L_{j}(D^{-1}(t)) dt = W_{Z_t,j}\sqrt{\frac{\varphi _j^2+2\ell ^Z_j(t)}{\varphi _{Z_t}^2+2\ell ^Z_{Z_t}(t)}} dt. \end{aligned}$$

\(\square \)

Let \({{\mathbb P}}_{x_0,t}\) (resp. \({{\mathbb P}}_{\varphi , x_0,t}^{Z}\)) be the distribution, starting from \(x_0\) and on the time interval [0, t], of the Markov jump process with conductances \((W_e)_{e\in E}\) (resp. of the time-changed VRJP \((Z_s)_{s\in [0,t]}\) with conductances \((W_e)_{e\in E}\) and initial positive local times \((\varphi _i)_{i\in V}\)).

Theorem 3

The law of the time-changed VRJP Z on the interval [0, t] is absolutely continuous with respect to the law of the MJP X with rates \(W_{i,j}\), with Radon–Nikodym derivative given by

$$\begin{aligned} {d {{\mathbb P}}_{\varphi ,x_0,t}^{Z}\over d{{\mathbb P}}_{x_0,t}}= {e^{{1\over 2}\left( \mathcal {E}(\sqrt{\varphi ^2+2\ell (t)}, \sqrt{\varphi ^2+2\ell (t)}) -\mathcal {E}(\varphi ,\varphi )\right) }}{\prod _{j\ne x_0}\varphi _{j}\over \prod _{j\ne X_t}\sqrt{\varphi _j^2+2\ell _j(t)}}, \end{aligned}$$

where \(\ell _j(t)\) is the local time of X at time t and site j defined in (1.1).

Proof

In the proof, we write \(\ell \) for the local time of both Z and X, since we consider Z and X on the canonical space with different probabilities. Let, for all \(\psi \in {{\mathbb R}}^V\), \(i\in V\), \(t\geqslant 0\),

$$\begin{aligned} F(\psi )=\sum _{\{i,j\}\in E}W_{ij}\psi _i\psi _j,\quad G_i(t)=\prod _{j\ne i}(\varphi _j^2+2\ell _j(t))^{-1/2}. \end{aligned}$$

First note that the probability, for the time-changed VRJP Z, of holding at a site \(v\in V\) on a time interval \([t_1,t_2]\) is

$$\begin{aligned} \exp \left( -\int _{t_1}^{t_2}\sum _{j\sim Z_t}W_{Z_t,j}\frac{\sqrt{\varphi _j^2+2\ell _j(t)}}{\sqrt{\varphi _{Z_t}^2+2\ell _{Z_t}(t)}} dt\right) =\exp \left( -\int _{t_1}^{t_2}d\left( F(\sqrt{\varphi ^2+2\ell (t)})\right) \right) . \end{aligned}$$

Second, conditionally on \((Z_u, u\leqslant t)\), the probability that Z jumps from \(Z_t=i\) to j in the time interval \([t,t+dt]\) is

$$\begin{aligned} W_{ij}\sqrt{\frac{\varphi _j^2+2\ell _j(t)}{\varphi _{i}^2+2\ell _{i}(t)}}\,dt =W_{ij}\frac{G_j(t)}{G_i(t)}\,dt. \end{aligned}$$

Therefore the probability that, at time t, Z has followed a path \(Z_0=x_0\), \(x_1\), \(\ldots \), \(Z_t=x_n\) with jump times respectively in \([t_i,t_i+dt_i]\), \(i=1\ldots n\), where \(t_0=0<t_1<\cdots <t_n<t=t_{n+1}\), is

$$\begin{aligned}&\exp \left( F(\varphi )-F(\sqrt{\varphi ^2+2\ell (t)})\right) \prod _{i=1}^{n}W_{x_{i-1}x_i}\frac{G_{x_i}(t_i)}{G_{x_{i-1}}(t_i)}\,dt_i\\&\quad =\exp \left( F(\varphi )-F(\sqrt{\varphi ^2+2\ell (t)})\right) \frac{G_{X_t}(t)}{G_{x_0}(0)} \prod _{i=1}^{n}W_{x_{i-1}x_i}\,dt_i, \end{aligned}$$

where we use that \(G_{x_i}(t_i)=G_{x_i}(t_{i+1})\), since Z stays at site \(x_i\) on the time interval \([t_i,t_{i+1}]\).

On the other hand, the probability that, at time t, X has followed the same path with jump times in the same intervals is

$$\begin{aligned} \exp \left( -\sum _{i,j: j\sim i}W_{ij}\ell _i(t)\right) \prod _{i=1}^{n}W_{x_{i-1}x_i}\,dt_i, \end{aligned}$$

which concludes the proof. \(\square \)

Note that Theorem 3 can be used to show exchangeability of the VRJP, and provides a martingale for the Markov jump process, similar to \(M_t^\Phi \) in (2.5).
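
For instance, the Radon–Nikodym formula can be tested numerically: simulating the MJP up to a fixed time t and averaging the derivative of Theorem 3 along the path should return 1. A sketch, with hypothetical graph, horizon and initial local times:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 4-cycle, unit conductances, x0 = 0, horizon t = 1.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x0, n, t = 0, 4, 1.0
phi = np.array([1.0, 0.7, 1.3, 0.9])       # initial local times (assumed)

def energy(psi):
    d = psi[:, None] - psi[None, :]
    return 0.5 * (W * d ** 2).sum()

def rn_weight():
    """Simulate the MJP up to time t, then evaluate the Radon-Nikodym
    derivative of Theorem 3 along the realized path."""
    ell, x, s = np.zeros(n), x0, 0.0
    while True:
        hold = rng.exponential(1.0 / W[x].sum())
        if s + hold > t:
            ell[x] += t - s
            break
        ell[x] += hold
        s += hold
        x = rng.choice(n, p=W[x] / W[x].sum())
    psi = np.sqrt(phi ** 2 + 2 * ell)
    prod = np.prod([phi[j] for j in range(n) if j != x0])
    prod /= np.prod([psi[j] for j in range(n) if j != x])
    return np.exp(0.5 * (energy(psi) - energy(phi))) * prod

print(np.mean([rn_weight() for _ in range(20000)]))  # close to 1
```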

Recall that it is shown in [19] that the time-changed VRJP \((Z_t)_{t\geqslant 0}\) is a mixture of Markov jump processes, i.e. that there exist random variables \((U_i)_{i\in V}\in \mathcal {H}_0:=\{u\in {{\mathbb R}}^V:\,\sum _{i\in V} u_i=0\}\), distributed according to the supersymmetric hyperbolic sigma model with parameters \((W_{ij}\varphi _i\varphi _j)_{\{i,j\}\in E}\) (see Section 6 of [19] and [7]), such that, conditionally on \((U_i)_{i\in V}\), \(Z_t\) is a Markov jump process starting from z, with jump rate from i to j

$$\begin{aligned} W_{i,j}e^{U_j-U_i}. \end{aligned}$$

In particular, the discrete time process corresponding to the VRJP observed at jump times is exchangeable, and is a mixture of reversible Markov chains with conductances \(W_{i,j}e^{U_i+U_j}\).

3.2 The reversed VRJP and its Radon–Nikodym derivative

Definition 2

Given positive conductances on the edges of the graph \((W_e)_{e\in E}\) and initial positive local times \((\Phi _i)_{i\in V}\), the reversed vertex-reinforced jump process (RVRJP) is a continuous-time process \((\tilde{Y}_t)_{0\leqslant t\leqslant \tilde{S}}\), starting at time 0 at some vertex \(i_0\in V\) and such that, if \(\tilde{Y}\) is at a vertex \(i\in V\) at time t, then, conditionally on \((\tilde{Y}_s, s\leqslant t)\), the process jumps to a neighbour j of i at rate \(W_{i,j}\tilde{L}_j(t)\), where

$$\begin{aligned} \tilde{L}_j(t):=\Phi _j-\int _0^t {{\mathbb {1}}}_{\{\tilde{Y}_s=j\}}\,ds, \end{aligned}$$

defined up until the stopping time \({\tilde{S}}\) where one of the local times hits 0, i.e.

$$\begin{aligned} \tilde{S}=\inf \{t\geqslant 0: \tilde{L}_j(t)=0\text { for some }j\}. \end{aligned}$$
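
Like the VRJP, the RVRJP can be simulated exactly: during a sojourn at i only \(\tilde{L}_i\) decreases, and \(\tilde{L}_i\) does not enter the rates out of i, so the sojourn is exponential with constant rate, truncated when \(\tilde{L}_i\) reaches 0. A sketch (hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical 4-cycle with unit conductances.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

def reversed_vrjp(Phi, i0=0):
    """Exact simulation of the RVRJP: the rates W_{ij} L~_j are constant
    during a sojourn at i, which ends early if L~_i reaches 0 before the
    exponential clock rings (that is the time S~)."""
    L, i = Phi.astype(float).copy(), i0
    while True:
        hold = rng.exponential(1.0 / (W[i] * L).sum())
        if hold >= L[i]:          # L~_i hits 0: stop at S~
            L[i] = 0.0
            return i, L           # final site and local-time field
        L[i] -= hold
        rates = W[i] * L          # W[i, i] = 0, so L[i] plays no role
        i = rng.choice(len(Phi), p=rates / rates.sum())

site, L = reversed_vrjp(Phi=np.array([1.0, 1.2, 0.8, 1.1]))
print(site, L)
```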

Similarly as for Y, let us define the increasing functional

$$\begin{aligned} \tilde{D}(s)=\frac{1}{2}\sum _{i\in V} (\Phi _i^2-\tilde{L}_i^2(s)), \end{aligned}$$

define the time-changed reversed VRJP

$$\begin{aligned} \tilde{Z}_t= \tilde{Y}_{ \tilde{D}^{-1}(t)}, \end{aligned}$$

and let, for all \(i\in V\) and \(t\geqslant 0\), \(\ell _i^{\tilde{Z}}(t)\) be the local time of \(\tilde{Z}\) at time t.

Then, similarly as in Lemma 4, conditionally on the past at time t, \(\tilde{Z}\) jumps from \(\tilde{Z}_t=i\) to a neighbour j at rate

$$\begin{aligned} W_{i,j} \sqrt{\frac{\Phi _j^2-2{\ell }^{\tilde{Z}}_j(t)}{\Phi _i^2-2\ell ^{\tilde{Z}}_i(t)}}, \end{aligned}$$

and \(\tilde{Z}\) stops at time

$$\begin{aligned} \tilde{T}=\tilde{D}(\tilde{S})=\inf \left\{ t\geqslant 0;\,\ell ^{\tilde{Z}}_i(t)=\frac{1}{2}\Phi _i^2\text { for some }i\in V\right\} . \end{aligned}$$

Let \({{\mathbb P}}_{\Phi ,x_0,t}^{\tilde{Z}}\) be the distribution of \((\tilde{Z}_t)_{t\geqslant 0}\) on the time interval \([0,t\wedge \tilde{T}]\), starting from \(x_0\) and initial condition \(\Phi \).

An easy adaptation of the proof of Theorem 3 shows

Theorem 4

The law of the time-reversed VRJP \(\tilde{Z}\) on the interval \([0,t\wedge \tilde{T}]\) is absolutely continuous with respect to the law of the MJP X with rates \(W_{i,j}\), with Radon–Nikodym derivative given by

$$\begin{aligned} {d {{\mathbb P}}_{\Phi ,x_0,t}^{\tilde{Z}}\over d{{\mathbb P}}_{x_0,t}}= {e^{-{1\over 2}\left( \mathcal {E}(\sqrt{\Phi ^2-2\ell (t\wedge T)}, \sqrt{\Phi ^2-2\ell (t\wedge T)}) -\mathcal {E}(\Phi ,\Phi )\right) }}{\prod _{j\ne x_0}\Phi _j\over \prod _{j\ne X_{t\wedge T}}\sqrt{\Phi _j^2-2\ell _j(t\wedge T)}}, \end{aligned}$$

where \(\ell _j(t)\) (resp. T) is the local time of X at time t and site j (resp. the stopping time) defined in (1.1) (resp. in (2.3)).

Hence, the Radon–Nikodym derivative of the time-reversed VRJP with respect to the MJP is the martingale that appears in the proof of Theorem 1, more precisely

$$\begin{aligned} {d {{\mathbb P}}_{\Phi ,x_0,t}^{\tilde{Z}}\over d{{\mathbb P}}_{x_0,t}}= \frac{M^\Phi _{t\wedge T}}{M^\Phi _0}, \end{aligned}$$

with the notations of Sect. 2. Note that this Radon–Nikodym derivative involves the martingale M with positive spins \(\sigma \equiv +1\). The “magnetized” reversed VRJP, defined in Sect. 1 and related to the inversion of Ray-Knight in Theorem 2, involves the sum of all the martingales \(M^{\sigma \Phi }\): this is the purpose of the next section.

4 Proof of Lemma 1 and Theorem 2

The proofs of Lemma 1 and Theorem 2 rely on a time change of the process \(\check{Y}\), which is in fact the same time change as the one appearing in Sect. 3 for \({\tilde{Y}}\): let us define

$$\begin{aligned} \check{D}(s)=\frac{1}{2}\sum _{i\in V} (\Phi _i^2-{\check{L}}_i^2(s)), \end{aligned}$$

define the time-changed process

$$\begin{aligned} {\check{Z}}_t= {\check{Y}}_{ {\check{D}}^{-1}(t)}, \end{aligned}$$

and let, for all \(i\in V\) and \(t\geqslant 0\), \({\ell }^{\check{Z}}_i(t)\) be the local time of \(\check{Z}\) at time t.

Then, similarly to Lemma 4, conditionally on the past at time t, \(\check{Z}\) jumps from \(\check{Z}_t=i\) to a neighbour j at rate

$$\begin{aligned} W_{i,j} \sqrt{\frac{\Phi _j^2-2{\ell }^{\check{Z}}_j(t)}{\Phi _i^2-2{\ell }^{\check{Z}}_i(t)}} {\langle \sigma _j\rangle _{(t)}\over \langle \sigma _i\rangle _{(t)}}, \end{aligned}$$

where we write \(\langle \cdot \rangle _{(t)}\) for \(\langle \cdot \rangle _{\check{D}^{-1}(t)}\) according to the notation of Sect. 1: more precisely, \(\langle \cdot \rangle _{(t)}\) is the expectation for the Ising model with interaction

$$\begin{aligned} J_{i,j}(\check{D}^{-1}(t))= W_{i,j}\sqrt{{\Phi _i^2-2{\ell }^{\check{Z}}_i(t)}}\sqrt{{\Phi _j^2-2{\ell }^{\check{Z}}_j(t)}}, \end{aligned}$$

since the vectors of local times \({\ell }^{\check{Z}}\) and \(\check{L}\) are related by the formula

$$\begin{aligned} {\ell }^{\check{Z}}(t)= {1\over 2}\left( \Phi ^2-{\check{L}}^2(\check{D}^{-1}(t))\right) . \end{aligned}$$
(4.1)

Clearly, this process is well defined up to time

$$\begin{aligned} \check{T}=\check{D}(S)=\inf \left\{ t\geqslant 0;\,{\ell }^{\check{Z}}_{i}(t)=\frac{1}{2}\Phi _i^2\text { for some }i\in V\right\} . \end{aligned}$$

Lemma 1 states that \(\check{Z}_{\check{T}}=x_0\).

We denote by \({{\mathbb P}}^{\check{Z}}_{\Phi , z}\) the law of the process \(\check{Z}\) starting from the initial condition \(\Phi \) and initial state z up to the time \(\check{T}\) (as for \(\check{Y}\) this law depends on the choice of \(x_0\)).

We now prove a more precise version of Theorem 2, giving a description of the conditional law of the full process.

Theorem 5

With the notations of Theorem 2, under \({{\mathbb P}}_{x_0}\left( \cdot | \Phi \right) \), \(\left( (X_t)_{t\in [0, \tau _u]}, \varphi \right) \) has the law of \(\left( ({\check{Z}}_t)_{t\in [0,T]}, \sigma \Phi (T)\right) \), where \(\check{Z}\) is distributed under \({{\mathbb P}}_{\Phi , x_0}^{\check{Z}}\) and \(\sigma \) is distributed according to the Ising model with interaction \(W_{i,j}\Phi _i(T)\Phi _j(T)\) and boundary condition \(\sigma _{x_0}=+1\).

We will adopt the following notation

$$\begin{aligned} \Phi _i(t)=\sqrt{\Phi _i^2-2\ell _i^{\check{Z}}(t)}={\check{L}}_i(\check{D}^{-1}(t)). \end{aligned}$$
(4.2)

Recall that \(M_t^{\sigma \Phi }\), \(N_t^\Phi \) and T are the processes (starting with the initial condition \(\Phi \)) and stopping time defined respectively in (2.5), (2.7) and (2.3), as functions of the path of the Markov process X up to time t. The proof of Theorem 5 is based on the following lemma.

Lemma 5

We have:

(i):

For all \(t\leqslant T\),

$$\begin{aligned} N^{\Phi }_t =e^{\sum _{i\in V}W_i(\ell _i(t)-{1\over 2}\Phi _i^2)} F(\check{D}^{-1}(t))\langle \sigma _{X_t}\rangle _{(t)}\left( \frac{\prod _{j\ne x_0} \Phi _j(0)}{\prod _{j\ne X_{t}}\Phi _j(t)}\right) , \end{aligned}$$

where \(F(\check{D}^{-1}(t))\) (resp. \(\langle \cdot \rangle _{(t)}\)) corresponds to the partition function (resp. distribution) of the Ising model with interaction \(J_{i,j}(\check{D}^{-1}(t))=W_{i,j} \Phi _i(t)\Phi _j(t)\), and \(W_i=\sum _{j\sim i} W_{i,j}\).

(ii):

\( N^\Phi _{T}=0 \) if \(X_{T}\ne x_0\).

(iii):

Under \({{\mathbb P}}_{z}\) (the law of the MJP \((X_t)\) starting from z), \(N^\Phi _{t\wedge T}\) is a positive martingale; more precisely, \({N^\Phi _{t\wedge T}/N_0^\Phi }\) is the Radon–Nikodym derivative of the measure \({{\mathbb P}}_{\Phi ,z}^{\check{Z}}\) with respect to the law of the MJP X starting from z and stopped at time T.

Proof of Lemma 5

(i) We expand the squares in the energy term, which yields

$$\begin{aligned} {1\over 2}\mathcal {E}(\sigma \Phi (t),\sigma \Phi (t))= -\sum _{\{i,j\}\in E} W_{i,j} \Phi _i(t)\Phi _j(t)\sigma _i\sigma _j+ {1\over 2}\sum _{i\in V} W_i(\Phi _i^2-2{\ell }_i(t)), \end{aligned}$$

and the statement follows easily.

(ii) Same argument as in Lemma 3. This can also be seen from the expression in (i): in this case all the interactions between x and its neighbours vanish, since \(J_{x,y}(T)= 0\) by \(\Phi _x(T)=0\). This implies that the pinning \(\sigma _{x_0}=+1\) has no effect on the spin \(\sigma _x\), and therefore, by symmetry, that \(\langle \sigma _x\rangle _{(T)}=0\) if \(X_T=x\ne x_0\).

(iii) The fact that \(N^\Phi _t\) is a martingale follows directly from the martingale property of the \(M^{\sigma \Phi }_t\), cf Lemma 2. It is also a consequence of the Radon–Nikodym property proved below. The fact that \(N^\Phi _t\) is positive follows from the positive correlation in the Ising model: \(\langle \sigma _x\rangle _{(t)}=\langle \sigma _{x_0}\sigma _x\rangle _{(t)}\geqslant 0\), see for instance [23].

The beginning of the proof follows the same line of ideas as in the proof of Theorem 3. Similarly, we set

$$\begin{aligned} \check{G}_i(t)= \prod _{j\ne i} {1\over \Phi _j(t)}= \prod _{j\ne i} {1\over \sqrt{{\Phi _j^2-2{\ell }^{\check{Z}}_j(t)}}}, \end{aligned}$$

so that

$$\begin{aligned} {\Phi _j(t)\over \Phi _i(t)}={\check{G}_j(t)\over \check{G}_i(t)}. \end{aligned}$$

First note that the probability, for the time-changed process \(\check{Z}\), of holding at a site \(v\in V\) on a time interval \([t_1,t_2]\) is

$$\begin{aligned} \exp \left( -\int _{t_1}^{t_2}\sum _{j\sim \check{Z}_u}W_{\check{Z}_u,j}\frac{\Phi _j(u)\langle \sigma _j\rangle _{(u)}}{\Phi _{\check{Z}_u}(u)\langle \sigma _{\check{Z}_u}\rangle _{(u)}} du\right) . \end{aligned}$$

Second, conditionally on \((\check{Z}_u, u\leqslant t)\), the probability that \(\check{Z}\) jumps from \(\check{Z}_t=i\) to j in the time interval \([t,t+dt]\) is

$$\begin{aligned} W_{ij}\frac{\Phi _j(t)\langle \sigma _j\rangle _{(t)}}{\Phi _i(t)\langle \sigma _i\rangle _{(t)}} \,dt. \end{aligned}$$

Therefore the probability that, at time t, \(\check{Z}\) has followed a path \(\check{Z}_0=x_0\), \(x_1\), \(\ldots \), \(\check{Z}_t=x_n\) with jump times respectively in \([t_i,t_i+dt_i]\), \(i=1\ldots n\), where \(t_0=0<t_1<\cdots <t_n<t=t_{n+1}\), with \(t\leqslant \check{T}\), is

$$\begin{aligned}&\exp \left( -\int _{0}^{t}\sum _{j\sim \check{Z}_u}W_{\check{Z}_u,j}\frac{\Phi _j(u)\langle \sigma _j\rangle _{(u)}}{\Phi _{\check{Z}_u}(u)\langle \sigma _{\check{Z}_u}\rangle _{(u)}} du \right) \prod _{i=1}^{n}W_{x_{i-1}x_i}\frac{\Phi _{x_i}(t_i) \langle \sigma _{x_i}\rangle _{(t_i)} }{\Phi _{x_{i-1}}(t_i)\langle \sigma _{x_{i-1}}\rangle _{(t_i)}}\,dt_i\\&\quad = \exp \left( -\int _{0}^{t}\sum _{j\sim \check{Z}_u}W_{\check{Z}_u,j}\frac{\Phi _j(u)\langle \sigma _j\rangle _{(u)}}{\Phi _{\check{Z}_u}(u)\langle \sigma _{\check{Z}_u}\rangle _{(u)}} du \right) \prod _{i=1}^{n}W_{x_{i-1}x_i}\frac{\check{G}_{x_i}(t_i)}{\check{G}_{x_{i-1}}(t_i)}\frac{ \langle \sigma _{x_i}\rangle _{(t_i)} }{\langle \sigma _{x_{i-1}}\rangle _{(t_i)}}\,dt_i\\&\quad = \exp \left( -\int _{0}^{t}\sum _{j\sim \check{Z}_u}W_{\check{Z}_u,j}\frac{\Phi _j(u)\langle \sigma _j\rangle _{(u)}}{\Phi _{\check{Z}_u}(u)\langle \sigma _{\check{Z}_u}\rangle _{(u)}} du \right) \frac{\check{G}_{\check{Z}_t}(t)}{\check{G}_{x_0}(0)}\prod _{i=1}^{n}W_{x_{i-1}x_i}\frac{\langle \sigma _{x_i}\rangle _{(t_i)} }{\langle \sigma _{x_{i-1}}\rangle _{(t_i)}}\,dt_i \end{aligned}$$

where we use that \(\check{G}_{x_{i-1}}(t_{i-1})=\check{G}_{x_{i-1}}(t_{i})\), since \(\check{Z}\) stays at site \(x_{i-1}\) on the time interval \([t_{i-1},t_{i}]\). We now use that

$$\begin{aligned} \prod _{i=1}^{n}\frac{\langle \sigma _{x_i}\rangle _{(t_i)} }{\langle \sigma _{x_{i-1}}\rangle _{(t_i)}} =&\frac{\langle \sigma _{\check{Z}_t}\rangle _{(t)}}{\langle \sigma _{x_0}\rangle _{(0)}} \prod _{i=1}^{n+1}\frac{\langle \sigma _{x_{i-1}}\rangle _{(t_{i-1})} }{\langle \sigma _{x_{i-1}}\rangle _{(t_i)}} \\ =&\langle \sigma _{\check{Z}_t}\rangle _{(t)} \exp \left( -\int _0^t \frac{\frac{\partial }{\partial u} \langle \sigma _{\check{Z}_u}\rangle _{(u)}}{\langle \sigma _{\check{Z}_u}\rangle _{(u)}} du\right) . \end{aligned}$$

Finally, set

$$\begin{aligned} H(t)=F(\check{D}^{-1}(t))= \sum _{\sigma \in \{-1,+1\}^{V\setminus \{x_0\}}} \exp \left\{ \sum _{\{i,j\}\in E} W_{i,j}\Phi _i(t)\Phi _j(t)\sigma _i\sigma _j\right\} \end{aligned}$$

and,

$$\begin{aligned} K(t)= \sum _{\sigma \in \{-1,+1\}^{V\setminus \{x_0\}}} \exp \left\{ \sum _{\{i,j\}\in E} W_{i,j} \Phi _i(t)\Phi _j(t)\sigma _i\sigma _j \right\} \sigma _{\check{Z}_t}. \end{aligned}$$

We have \(\langle \sigma _{\check{Z}_t}\rangle _{(t)}=K(t)/H(t)\), so that

$$\begin{aligned} \frac{\frac{\partial }{\partial u} \langle \sigma _{\check{Z}_u}\rangle _{(u)}}{\langle \sigma _{\check{Z}_u}\rangle _{(u)}} = \frac{\frac{\partial }{\partial u} K(u)}{K(u)}- \frac{\frac{\partial }{\partial u} H(u)}{H(u)}. \end{aligned}$$

Now, since

$$\begin{aligned} \frac{\partial }{\partial u} \left\{ \sum _{\{i,j\}\in E} W_{i,j} \Phi _i(u)\Phi _j(u)\sigma _i\sigma _j\right\} = -\sum _{j\sim \check{Z}_u} W_{\check{Z}_u,j} \frac{\Phi _j(u)}{\Phi _{\check{Z}_u}(u)}\sigma _{\check{Z}_u}\sigma _j, \end{aligned}$$

we have that

$$\begin{aligned} \frac{\frac{\partial }{\partial u} K(u)}{K(u)}= -\sum _{j\sim \check{Z}_u} W_{\check{Z}_u,j}\,{\Phi _j(u)\over \Phi _{\check{Z}_u}(u)} {\langle \sigma _{j}\rangle _{(u)}\over \langle \sigma _{\check{Z}_u}\rangle _{(u)}}. \end{aligned}$$

Since \(\frac{\partial }{\partial u}K(u)/K(u)\) is exactly minus the total jump rate of \(\check{Z}\) at time u, the two exponential factors combine into \(\exp \left( \int _0^t \frac{\frac{\partial }{\partial u}H(u)}{H(u)}\,du\right) =H(t)/H(0)\). These identities imply that the probability that, at time t, \(\check{Z}\) has followed a path \(\check{Z}_0=x_0\), \(x_1\), \(\ldots \), \(\check{Z}_t=x_n\) with jump times respectively in \([t_i,t_i+dt_i]\), \(i=1\ldots n\), where \(t_0=0<t_1<\cdots <t_n<t=t_{n+1}\), with \(t\leqslant \check{T}\), is

$$\begin{aligned} {H(t)\over H(0)}\,\langle \sigma _{\check{Z}_t}\rangle _{(t)}\,{\check{G}_{\check{Z}_t}(t)\over \check{G}_{x_0}(0)}\prod _{i=1}^{n}W_{x_{i-1}x_i}\,dt_i ={N^\Phi _t\over N^\Phi _0}\, e^{-\sum _{i\in V}W_i\ell _i(t)}\prod _{i=1}^{n}W_{x_{i-1}x_i}\,dt_i, \end{aligned}$$

where in the last equality we used Lemma 5, (i). Finally, the probability that, at time t, the Markov jump process X has followed the same path with jump times in the same intervals is

$$\begin{aligned} \exp \left( -\sum _{i\in V}W_{i}\ell _i(t)\right) \prod _{i=1}^{n}W_{x_{i-1}x_i}\,dt_i. \end{aligned}$$

This exactly tells that the Radon–Nikodym derivative of \((\check{Z}_{t\wedge \check{T}})\) under \( {{\mathbb P}}^{\check{Z}}_{\Phi ,x_0}\) with respect to the law \({{\mathbb P}}_{x_0}\) of the Markov jump process is \({N^\Phi (t\wedge \check{T})\over N^\Phi (0)}\). \(\square \)

Proof of Lemma 1

By (ii) and (iii) of Lemma 5, we have, by the optional stopping theorem,

$$\begin{aligned} {{\mathbb E}}^{\check{Z}}_{\Phi , x_0}\left( {{\mathbb {1}}}_{\{{\check{Y}}_{S}\ne x_0\}}\right)&= {{\mathbb E}}^{\check{Z}}_{\Phi , x_0}\left( {{\mathbb {1}}}_{\{{\check{Z}}_{\check{T}}\ne x_0\}}\right) \\&= {{\mathbb E}}_{x_0}\left( N_T^\Phi {{\mathbb {1}}}_{\{X_T\ne x_0\}}/N_0^\Phi \right) =0. \end{aligned}$$

\(\square \)

Proof of Theorem 5

Let \(\psi ((X_t)_{t\in [0,\tau _u]}, \varphi )\mathop {=}\limits ^{notation}\psi (X, \varphi )\) and \(G(\Phi )\) be test functions. We are interested in the following expectation

$$\begin{aligned} {{\mathbb E}}_{x_0}\otimes P^{G,U}\left( \psi (X,\varphi ) G(\Phi ) \right) ={{\mathbb E}}_{x_0}\left( \int _{{{\mathbb R}}^{V\setminus \{x_0\}}} \psi (X,\varphi ) G(\Phi ) Ce^{-{1\over 2}\mathcal {E}(\varphi , \varphi )} d\varphi \right) , \end{aligned}$$
(4.3)

where, as in the proof of Theorem 1, C is the normalizing constant of the Gaussian free field. Recall that \(\Phi = \sqrt{ \varphi ^2+2\ell (\tau _u)}\) and set \(\sigma = \mathsf {sign}(\varphi )\). As in the proof of Theorem 1 we change variables to \(\Phi \). Following the computation at the beginning of the proof of Theorem 1 up to Eq. (2.6), we deduce that (4.3) is equal to

$$\begin{aligned} C\int _{{{\mathbb R}}_+^{V{\setminus }\{x_0\}}} G(\Phi ) {{\mathbb E}}_{x_0}\left( \sum _{\sigma } \psi (X,\sigma \Phi (T)) M_T^{\sigma \Phi } {{\mathbb {1}}}_{\{X_T=x_0\}}\right) d\Phi \end{aligned}$$
(4.4)

If \(X_T=x_0\) then, using that \(\sigma _{X_T}=\sigma _{x_0}=1\) and the expansion in the proof of Lemma 5 (i), we deduce that

$$\begin{aligned} M_T^{\sigma \Phi }=N_T^{\Phi }\frac{e^{\sum _{\{i,j\}\in E} W_{i,j} \Phi _i(T)\Phi _j(T) \sigma _i\sigma _j}}{F(\check{D}^{-1}(T))} \end{aligned}$$

and, therefore,

$$\begin{aligned} \sum _{\sigma } \psi (X,\sigma \Phi (T)) M_T^{\sigma \Phi }&= N_T^{\Phi }{1\over F(\check{D}^{-1}(T))}\sum _{\sigma } \psi (X,\sigma \Phi (T)) e^{\sum _{\{i,j\}\in E} W_{i,j} \Phi _i(T)\Phi _j(T) \sigma _i\sigma _j}\\&= N_T^{\Phi } \langle \psi (X,\sigma \Phi (T))\rangle _{(T)}. \end{aligned}$$

This implies that

$$\begin{aligned} (4.4)&= C\int _{{{\mathbb R}}_+^{V\setminus \{x_0\}}} G(\Phi ) {{\mathbb E}}_{x_0}\left( \langle \psi (X,\sigma \Phi (T))\rangle _{(T)}N_T^\Phi {{\mathbb {1}}}_{\{X_T=x_0\}}\right) d\Phi \\&= C\int _{{{\mathbb R}}_+^{V\setminus \{x_0\}}} G(\Phi ) N_0^\Phi {{\mathbb E}}^{\check{Z}}_{\Phi ,x_0}\left( \langle \psi (\check{Z},\sigma \Phi (T))\rangle _{(T)} \right) d\Phi \end{aligned}$$

where in the last equality we used Lemma 5 (ii)–(iii). Since

$$\begin{aligned} N_0^\Phi =\sum _{\sigma \in \{-1,+1\}^{V\setminus \{x_0\}}} \exp \left\{ -\frac{1}{2}\mathcal {E}(\sigma \Phi ,\sigma \Phi )\right\} , \end{aligned}$$

it implies that \(CN_0^\Phi \) is the density of \(\Phi \), since by Theorem 1 we have \(\Phi \mathop {=}\limits ^{law} |\sqrt{2u}+\varphi |\) where \(\varphi \) has the law of the Gaussian free field \(P^{G,U}\). This exactly means that

$$\begin{aligned} {{\mathbb E}}_{x_0}\otimes P^{G,U}\left( \psi (X,\varphi ) | \Phi \right) = {{\mathbb E}}^{\check{Z}}_{\Phi ,x_0}\left( \langle \psi (\check{Z},\sigma \Phi (T))\rangle _{(T)}\right) . \end{aligned}$$

\(\square \)

Proof of Theorem 2

From Theorem 5, we know that, conditionally on \(\Phi \), \((\ell (\tau _u),\varphi )\) has the law of \((\ell (T), \sigma \sqrt{\Phi ^2-2\ell (T)})\), where \(\ell (T)\) is the local time of \(\check{Z}\) under \({{\mathbb P}}^{\check{Z}}_{\Phi ,x_0}\) and \(\sigma \) is the Ising field appearing in Theorem 5. If we change back to the process \(\check{Y}\) we have, using (4.1),

$$\begin{aligned} \check{L}(S)=\sqrt{\Phi ^2-2\ell (T)}, \end{aligned}$$

hence \({{\mathcal {L}}}((\ell (\tau _u),\varphi )| \Phi )\) is the law of \(({1\over 2}(\Phi ^2-\check{L}^2(S)), \sigma \check{L}(S))\) for initial conditions \((\Phi , x_0)\). \(\square \)

5 Inversion of the generalized first Ray-Knight theorem

We use the same notation as in the first section. The generalized first Ray-Knight theorem concerns the local time of the Markov jump process starting at a point \(z_0\ne x_0\), stopped at its first hitting time of \(x_0\). Denote by

$$\begin{aligned} H_{x_0}=\inf \{t\geqslant 0, \quad X_t=x_0\}, \end{aligned}$$

the first hitting time of \(x_0\).

Theorem 6

For any \(z_0\in V\) and any \(s> 0\),

$$\begin{aligned}&\left( \ell _x(H_{x_0})+\frac{1}{2}(\varphi _x+s)^2\right) _{x\in V} \text { under }{{\mathbb P}}_{z_0}\otimes P^{G,U}, \text { has the same ``law'' as}\\&\left( \frac{1}{2}(\varphi _x+s)^2\right) _{x\in V}\text { under }(1+{\varphi _{z_0}\over s})P^{G,U}. \end{aligned}$$

Remark 2

This theorem is in general stated for \(s\ne 0\), but obviously we do not lose generality by restricting to \(s>0\).

This formally means that for any test function g,

$$\begin{aligned}&\int g \left( (\ell _x(H_{x_0})+\frac{1}{2}(\varphi _x+s)^2)_{x\in V} \right) d{{\mathbb P}}_{z_0}\otimes P^{G,U}\nonumber \\&\quad = \int g \left( \left( \frac{1}{2}(\varphi _x+s)^2\right) _{x\in V} \right) \left( 1+{\varphi _{z_0}\over s}\right) dP^{G,U}. \end{aligned}$$
(5.1)

Note that the measure \((1+{\varphi _{z_0}\over s})P^{G,U}\) has total mass 1 (since \(\varphi _{z_0}\) is centered) but is not positive. In fact, since the integrand depends only on \(\vert \varphi _x+s\vert \), \(x\in V\), everything can be written in terms of a positive measure. Indeed, if \(\sigma _x=\mathsf {sign}(\varphi _x+s)\), then conditionally on \(\vert \varphi _x +s\vert \), \(x\in V\), \(\sigma \) has the law of an Ising model with interaction \(J_{i,j} = W_{i,j} \vert \varphi _i +s\vert \vert \varphi _j +s\vert \) and boundary condition \(\sigma _{x_0}= +1 \). This implies that the right hand side of (5.1) can be written equivalently as

$$\begin{aligned} \int g \left( \left( \frac{1}{2}(\varphi _x+s)^2\right) _{x\in V} \right) {\langle \sigma _{z_0}\rangle \over s}\vert s+ \varphi _{z_0}\vert d P^{G,U} \end{aligned}$$

where \(\langle \sigma _{z_0}\rangle \) denotes the expectation of \(\sigma _{z_0}\) with respect to the Ising model described above. Since \(\sigma _{x_0}=+1\), we have that \({\langle \sigma _{z_0}\rangle \over s}\geqslant 0\), and \({\langle \sigma _{z_0}\rangle \over s}\vert s+ \varphi _{z_0}\vert d P^{G,U}\) is a probability measure.
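
This conditional Ising description can be checked by Monte Carlo: sampling \(\varphi \) from \(P^{G,U}\), the two representations of the right-hand side of (5.1) should agree for any test function g. A sketch, with hypothetical choices of graph, \(z_0\), s and g:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)

# Hypothetical 4-cycle, unit conductances, x0 = 0, z0 = 2, s = 0.5.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x0, z0, s, n = 0, 2, 0.5, 4
U = [i for i in range(n) if i != x0]
C = np.linalg.cholesky((np.diag(W.sum(axis=1)) - W)[np.ix_(U, U)])

def gff():
    phi = np.zeros(n)
    phi[U] = np.linalg.solve(C.T, rng.standard_normal(n - 1))
    return phi

def mag_z0(a):
    """<sigma_{z0}> for couplings W_ij |phi_i+s||phi_j+s| = W_ij a_i a_j,
    boundary condition sigma_{x0} = +1 (brute force, tiny V only)."""
    J = W * np.outer(a, a)
    num = Z = 0.0
    for spins in product([-1, 1], repeat=n - 1):
        sig = np.insert(np.array(spins, dtype=float), x0, 1.0)
        w = np.exp(0.5 * sig @ J @ sig)
        num, Z = num + w * sig[z0], Z + w
    return num / Z

def g(a):                         # an arbitrary test function of |phi+s|
    return np.exp(-a.sum())

lhs = rhs = 0.0
N = 20000
for _ in range(N):
    phi = gff()
    a = np.abs(phi + s)
    lhs += (1 + phi[z0] / s) * g(a)
    rhs += mag_z0(a) * a[z0] / s * g(a)
print(lhs / N, rhs / N)           # the two estimates agree
```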

We now give a counterpart of Theorem 2 for the generalized first Ray-Knight theorem. Consider the process \(\check{Y}\) defined in Sect. 1, starting from a point \(z_0\). Denote by \(\check{H}_{x_0}\) the first hitting time of \(x_0\) by the process \(\check{Y}\).

Obviously, Lemma 1 implies the following Lemma 6.

Lemma 6

Almost surely \(\check{H}_{x_0}\leqslant S\), where S is defined in (1.2).

Theorem 7

With the notation of Theorem 6, let

$$\begin{aligned} \Phi _z=\sqrt{2\ell _z(H_{x_0})+(\varphi _z+s)^2}. \end{aligned}$$

Under \({{\mathbb P}}_{z_0}\otimes P^{G,U}\), we have

$$\begin{aligned} {{\mathcal {L}}}\left( \varphi +s | \Phi \right) \mathop {=}\limits ^{law} (\sigma \check{L}(\check{H}_{x_0})), \end{aligned}$$

where \(\check{L}(\check{H}_{x_0})\) is distributed under \({{\mathbb P}}^{\check{Y}}_{\Phi , z_0}\) and, conditionally on \(\check{L}(\check{H}_{x_0})\), \(\sigma \) is distributed according to the distribution of the Ising model with interaction \(J_{i,j}(\check{H}_{x_0})=W_{i,j}\check{L}_i(\check{H}_{x_0})\check{L}_j(\check{H}_{x_0})\) and boundary condition \(\sigma _{x_0}=+1\).

Similarly as for the generalized second Ray-Knight theorem, Theorem 7 is a consequence of the following more precise result. Let us consider, as in Sect. 4, the time-changed version \(\check{Z}\) of the process \(\check{Y}\).

Theorem 8

With the notation of Theorem 7, under \({{\mathbb P}}_{z_0}\left( \cdot | \Phi \right) \), \(\left( (X_t)_{t\in [0, H_{x_0}]}, \varphi +s\right) \) has the law of \(\left( ({\check{Z}}_t)_{t\in [0,\check{H}_{x_0}]}, \sigma \Phi (\check{H}_{x_0})\right) \), where \(\check{Z}\) is distributed under \({{\mathbb P}}_{\Phi , z_0}^{\check{Z}}\), \(\check{H}_{x_0}\) is the first hitting time of \(x_0\) by \(\check{Z}\), and \(\sigma \) is distributed according to the Ising model with interaction \(W_{i,j}\Phi _i(\check{H}_{x_0})\Phi _j(\check{H}_{x_0})\) and boundary condition \(\sigma _{x_0}=+1\).

Proof

We only sketch the proof since it is very similar to the proof of Theorem 5. Let \(\psi ((X_t)_{t\in [0, H_{x_0}]}, \varphi +s)\mathop {=}\limits ^{notation}\psi (X, \varphi +s)\) and \(G(\Phi )\) be positive test functions. We are interested in the following expectation

$$\begin{aligned}&{{\mathbb E}}_{z_0}\otimes P^{G,U}\left( \psi (X,\varphi +s) G(\Phi ) \right) \nonumber \\&\quad ={{\mathbb E}}_{z_0}\left( \int _{{{\mathbb R}}^{V{\setminus }\{x_0\}}} \psi (X,\varphi +s) G(\Phi ) Ce^{-{1\over 2}\mathcal {E}(\varphi , \varphi )} d\varphi \right) \end{aligned}$$
(5.2)

where, as in the proof of Theorem 1, C is the normalizing constant of the Gaussian free field. Recall that \(\Phi = \sqrt{ (\varphi +s)^2+2\ell (H_{x_0})}\), set \(\sigma = \mathsf {sign}(\varphi +s)\) and define

$$\begin{aligned} T'=T\wedge H_{x_0}, \end{aligned}$$

where T is the stopping time defined in (2.3).

As in the proof of Theorem 1 we change variables to \(\Phi \). An easy adaptation of the computation in the proof of Theorem 1 up to Eq. (2.6) yields that (5.2) is equal to

$$\begin{aligned} C\int _{{{\mathbb R}}_+^{V\setminus \{x_0\}}} G(\Phi ) {{\mathbb E}}_{z_0}\left( \sum _{\sigma } \psi (X,\sigma \Phi (T')) M_{T'}^{\sigma \Phi } {{\mathbb {1}}}_{\{X_{T'}=x_0\}}\right) d\Phi . \end{aligned}$$
(5.3)

As in the proof of Theorem 5 we have that, if \(X_{T'}=x_0\), then

$$\begin{aligned} \sum _{\sigma } \psi (X,\sigma \Phi (T')) M_{T'}^{\sigma \Phi }&= N_{T'}^{\Phi }{1\over F(\check{D}^{-1}(T'))}\sum _{\sigma } \psi (X,\sigma \Phi (T')) e^{\sum _{\{i,j\}\in E} W_{i,j} \Phi _i(T')\Phi _j(T') \sigma _i\sigma _j} \\&= N_{T'}^{\Phi } \langle \psi (X,\sigma \Phi (T'))\rangle _{(T')}. \end{aligned}$$

This implies that

$$\begin{aligned} (5.3)&= C\int _{{{\mathbb R}}_+^{V{\setminus }\{x_0\}}} G(\Phi ) {{\mathbb E}}_{z_0}\left( \langle \psi (X,\sigma \Phi (T'))\rangle _{(T')}N_{T'}^\Phi {{\mathbb {1}}}_{\{X_{T'}=x_0\}}\right) d\Phi \nonumber \\&= C\int _{{{\mathbb R}}_+^{V{\setminus }\{x_0\}}} G(\Phi ) N_0^\Phi {{\mathbb E}}^{\check{Z}}_{\Phi ,z_0}\left( \langle \psi (\check{Z},\sigma \Phi (T'))\rangle _{(T')} \right) d\Phi , \end{aligned}$$

using in the last equality an easy adaptation of Lemma 5 (ii)–(iii) for time \(T'\). Now, since \(X_0=z_0\) and \(\Phi _{x_0}=s\),

$$\begin{aligned} N_0^\Phi =\sum _{\sigma \in \{-1,+1\}^{V{\setminus }\{x_0\}}}\left( {\sigma _{z_0}\Phi _{z_0}\over s}\right) \exp \left\{ -\frac{1}{2}\mathcal {E}(\sigma \Phi ,\sigma \Phi )\right\} , \end{aligned}$$

which implies that \(CN_0^\Phi \) is the density of \(\Phi \), since by Theorem 6 we have \(\Phi \mathop {=}\limits ^{law} |\varphi +s|\) under \((1+{\varphi _{z_0}\over s}) P^{G,U}\). This exactly means that

$$\begin{aligned} {{\mathbb E}}_{z_0}\otimes P^{G,U}\left( \psi (X,\varphi +s) | \Phi \right) = {{\mathbb E}}^{\check{Z}}_{\Phi ,z_0}\left( \langle \psi (\check{Z},\sigma \Phi (T'))\rangle _{(T')}\right) . \end{aligned}$$

\(\square \)