
# The non-Markovian nature of turbulence 8: Almost-Markovian models and theories


Previously, in my post of 10 November 2022, I mentioned, purely for completeness, the work of Phythian [1], who presented a self-consistent theory that led to the DIA. The importance of this for Kraichnan was that it also led to a model representation of the DIA and in turn to the development of what he called ‘almost-Markovian’ theories. Some further discussion of this topic can be found in Section 6.3.2 of the book [2], but here we will concentrate on the general class of almost-Markovian models and theories. My concern here is to draw a distinction between their use of ‘Markovian’, which refers to evolution in time, and my use in this series of posts, which refers to interactions in wavenumber.

This class consists of the Eddy-damped, Quasi-normal, Markovian (EDQNM) model of Orszag in 1970 [3], the test-field model of Kraichnan in 1971 [4], the modified LET theory of McComb and Kiyani in 2005 [5], and the theory of Bos and Bertoglio in 2006 [6]. Here we follow the example of Kraichnan who described a theory which relied on a specific assumption that involved the introduction of an adjustable constant as a model. In order to illustrate what is going on in this kind of approach, I will discuss the EDQNM in some detail, as follows.

We begin with the quasi-normal expression for the transfer spectrum $T(k)$ from the Lin equation. This is found to be: \begin{eqnarray}T(k,t) & & =8\pi^2\int d^{3}j\,L\left(\mathbf{k},\mathbf{j}\right)\int_{0}^{t}ds\,R_0\left(k;t,s\right)R_0\left(j;t,s\right)R_0\left(\left|\mathbf{k}-\mathbf{j}\right|;t,s\right) \nonumber \\& \times &\left[C\left(j,s\right)C\left(\left|\mathbf{k}-\mathbf{j}\right|,s\right)-C\left(k,s\right)C\left(\left|\mathbf{k}-\mathbf{j}\right|,s\right)\right],\label{KWE2} \end{eqnarray} where the viscous response function is given by $R_0(k;t,t')=\exp[-\nu k^2 (t-t')],$ and the coefficient $L(\mathbf{k,j})$ is defined as: $$L(\mathbf{k,j}) = -2M_{\alpha\beta\gamma}(\mathbf{k})M_{\beta\alpha\delta}(\mathbf{j})P_{\gamma\delta}(\mathbf{k-j}),\label{lkj1}$$ and can be evaluated in terms of three scalar variables as $$L(\mathbf{k,j}) = -\frac{\left[\mu\left(k^{2}+j^{2}\right)-kj\left(1+2\mu^{2}\right)\right]\left(1-\mu^{2}\right)kj}{k^{2}+j^{2}-2kj\mu},\label{lkj2}$$ where $\mu$ is the cosine of the angle between the vectors $\mathbf{k}$ and $\mathbf{j}$. For further discussion and details see Appendix C of the book [7].
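As a quick sanity check on the scalar form of $L(\mathbf{k,j})$, the short sketch below (Python with NumPy; the function name `L_coeff` is ours) verifies two properties that follow directly from the formula: symmetry under interchange of $k$ and $j$, and vanishing for collinear wavevectors ($\mu=\pm 1$), for which the nonlinear coupling gives no transfer.

```python
import numpy as np

def L_coeff(k, j, mu):
    """Scalar form of the coefficient L(k,j) for wavenumber magnitudes k, j
    and mu = cosine of the angle between the two wavevectors."""
    num = (mu * (k**2 + j**2) - k * j * (1.0 + 2.0 * mu**2)) * (1.0 - mu**2) * k * j
    den = k**2 + j**2 - 2.0 * k * j * mu
    return -num / den

# Basic checks: symmetry under k <-> j, and vanishing for collinear wavevectors.
k, j, mu = 1.3, 0.7, 0.4
assert np.isclose(L_coeff(k, j, mu), L_coeff(j, k, mu))
assert np.isclose(L_coeff(k, j, 1.0), 0.0)   # mu = +1
assert np.isclose(L_coeff(k, j, -1.0), 0.0)  # mu = -1
```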

Now Orszag argued that the failure of QN was basically due to the use of the viscous response function, when in fact one would expect the turbulence interactions to contribute to the response function. Accordingly he proposed a modified response function: $$R(k;t,t')=\exp[-\omega(k)(t-t')],$$ where $\omega(k)$ is a renormalized inverse modal response time. One may note that this is now becoming the same form as that of the Edwards transfer spectrum, but that it is also ad hoc and thus there is the freedom to choose $\omega(k)$. After some experimentation using dimensional analysis, Orszag chose the form: $$\omega(k)=\nu k^2 + g\left[\int_0^k dj\, j^2 E(j)\right]^{1/2},$$ where the constant $g$ is chosen to give the correct (i.e. experimental) result for the Kolmogorov spectrum. This is the eddy-damped part of the model, so replacing $R_0$ by $R$ gives us the EDQN.
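To see the scaling implied by this choice, the sketch below (parameter values are illustrative placeholders, not Orszag's) evaluates $\omega(k)$ numerically for a pure Kolmogorov spectrum $E(k)=\alpha\varepsilon^{2/3}k^{-5/3}$ and checks that the eddy-damping part reduces to $g\sqrt{3\alpha/4}\,\varepsilon^{1/3}k^{2/3}$, i.e. the inverse timescale of an eddy of size $1/k$.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameters (alpha = Kolmogorov constant, eps = dissipation rate,
# g = the adjustable constant, nu = viscosity); values are placeholders.
alpha, eps, g, nu = 1.5, 1.0, 0.36, 1e-4

def E(j):
    return alpha * eps**(2/3) * j**(-5/3)  # inertial-range (K41) spectrum

def omega(k):
    integral, _ = quad(lambda j: j**2 * E(j), 0.0, k)
    return nu * k**2 + g * np.sqrt(integral)

# For a pure K41 spectrum: integral = (3/4) alpha eps^(2/3) k^(4/3), so
# omega(k) - nu k^2 = g sqrt(3 alpha / 4) eps^(1/3) k^(2/3).
for k in (1.0, 10.0, 100.0):
    exact = g * np.sqrt(0.75 * alpha) * eps**(1/3) * k**(2/3)
    assert np.isclose(omega(k) - nu * k**2, exact, rtol=1e-6)
```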

Even with the introduction of the damping term, the EDQN model can still lead to negative spectra. This was cured by introducing the Markovian step with respect to time. This rested on the assumption that the characteristic time $[\omega(k) +\omega(j) + \omega(|\mathbf{k-j}|)]^{-1}$ is negligible compared to the evolution time of the products of covariances in the expression for $T(k)$. The equation for the transfer spectrum was Markovianised by replacing the time integral by a memory function $D(k,j;t)$, thus: $$T(k,t) =8\pi^2\int d^{3}j\,L\left(\mathbf{k},\mathbf{j}\right) D(k,j;t)\left[C\left(j,t\right)C\left(\left|\mathbf{k}-\mathbf{j}\right|,t\right)-C\left(k,t\right)C\left(\left|\mathbf{k}-\mathbf{j}\right|,t\right)\right],$$ where the memory function is given by $$D(k,j;t)= \int_0^t ds \, \exp\left[-\left(\omega(k)+\omega(j)+\omega(|\mathbf{k-j}|)\right)(t-s)\right].$$ This is now the EDQNM model.
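The time integral of the decaying exponential $\exp[-(\omega(k)+\omega(j)+\omega(|\mathbf{k-j}|))(t-s)]$ can be done in closed form. The sketch below checks the resulting expression $D=(1-e^{-\Omega t})/\Omega$, with $\Omega=\omega(k)+\omega(j)+\omega(|\mathbf{k-j}|)$, against direct quadrature, and illustrates the Markovian limit $D\to 1/\Omega$ for $t\gg 1/\Omega$.

```python
import numpy as np
from scipy.integrate import quad

def memory_function(Omega, t):
    """Closed form of D = int_0^t exp[-Omega (t - s)] ds, where
    Omega = omega(k) + omega(j) + omega(|k-j|)."""
    return (1.0 - np.exp(-Omega * t)) / Omega

# Check against direct numerical integration for illustrative values.
Omega, t = 2.5, 1.7
numeric, _ = quad(lambda s: np.exp(-Omega * (t - s)), 0.0, t)
assert np.isclose(memory_function(Omega, t), numeric)

# Markovian limit: for t >> 1/Omega, D tends to the constant 1/Omega.
assert np.isclose(memory_function(Omega, 100.0), 1.0 / Omega)
```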

When applied to the stationary case, this result for $T(k)$ is identical to the Edwards result, as given in the post of 3 November 2022; but there are crucial differences. The function $\omega(k)$ in the Edwards theory arises from a Markovian theory with respect to wavenumber interactions and is accordingly related to $T(k)$, thus giving the second equation of the closure. In contrast, the function $\omega(k)$ in EDQNM is fixed independently of the transfer spectrum by means of dimensional analysis and accordingly is not Markovian in the sense of the Edwards SCF. It is important to distinguish between the two kinds of Markovianisation.

In our next post, we will conclude this series of posts by discussing how these considerations affect the application of closures to large-eddy simulation.

[1] R. Phythian. Self-consistent perturbation series for stationary homogeneous turbulence. J. Phys. A, 2:181, 1969.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[3] S. A. Orszag. Analytical theories of turbulence. J. Fluid Mech., 41:363, 1970.
[4] R. H. Kraichnan. An almost-Markovian Galilean-invariant turbulence model. J. Fluid Mech., 47:513, 1971.
[5] W. D. McComb and K. Kiyani. Eulerian spectral closures for isotropic turbulence using a time-ordered fluctuation-dissipation relation. Phys. Rev. E, 72:16309-16312, 2005.
[6] W. J. T. Bos and J.-P. Bertoglio. A single-time, two-point closure based on fluid particle displacements. Phys. Fluids, 18:031706, 2006.
[7] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.

# The non-Markovian nature of turbulence 7: non-Markovian closures and the LET theory in particular.


We can sum up the situation regarding the failure of the pioneering closures as follows. Their form of the transfer spectrum $T(k)$, with its division into input and output parts, with the latter being proportional to the amount of energy in mode $k$, is only valid for Markov processes, so it is incompatible with the nature of turbulence, which is non-Markovian. It is also incompatible with the phenomenology of turbulence, where the entire $T(k)$ acts as input (or output), depending on the value of $k$, as I pointed out in 1974 [1]. It is worth noting that the first measurement of $T(k)$ was made by Uberoi in 1963 [2], so turbulence phenomenology was in its infancy at the time the first closures were being developed. In later years, numerical experiments based on high-resolution direct numerical simulations did not bear out the Markovian picture. In particular, we note the investigation by Kuczaj et al [3]. This is in fact the basic flaw in Kraichnan’s DIA and also the SCF theories of Edwards and Herring: the fault lies not in the covariance equations but in the relationship of the response function to them.

As mentioned in the first blog in the present series (posted on 13 October), a response to this problem took, and continues to take, the form of an extension of the DIA approach to Lagrangian coordinates. A consideration of these theories would take us too far away from our present objective, although it should be mentioned that they are non-Markovian, in that they are not expressible as master equations. Instead we will concentrate on the LET theory, which exposes the underlying physics of the turbulence energy transfer process.

The LET theory was introduced with the hypothesis that $\omega(k)$ is determined by the entire $T(k)$, not just part of it, and can be defined by a local energy balance [1]. It was extended to the two-time case [4] in 1978; and, less heuristically, in subsequent papers by McComb and co-workers: see [5] for a review. Essentially, the two-time LET theory comprises the DIA covariance equations plus the generalized fluctuation-response relation. It may be compared to Herring’s two-time SCF [6] which comprises the DIA response equation, single-time covariance equation and the generalized fluctuation-response equation. It may also be compared directly to DIA in terms of response equations. However, for our present purposes, we will go back to the simplest case, and show how LET arose in relation to the Edwards SCF.

It was argued by McComb [1] that a correct assignment of the system response in terms of $T(k)$ (i.e. ‘correct’ in the sense of agreeing with the turbulence phenomenology of energy transfer) could lead to a response function which was compatible with K41. This was found to be the case and, citing the form given in [1], we may write for the turbulence viscosity $\nu_T(k)$: $$\nu_T(k)= k^{-2}\int_{j\geq k}d^3 j \frac{L(\mathbf{k,j})C(|\mathbf{k-j}|)[C(k)-C(j)]}{C(k)[\omega(k)+\omega(j)+\omega(|\mathbf{k-j}|)]},\label{let-visc}$$ where $\omega(k) = \nu_T(k) k^2$. The lower limit on the integral with respect to $j$ arises when we consider the flux through mode $k$. It was used in [1] to justify wavenumber expansions leading to differential forms, but is not needed here and can be omitted. The interesting point here is made by rewriting this in terms of the Edwards dynamical friction $r(k)$. From equation (5) in the post on 3 November, rewritten as $\omega(k)=\nu k^2 + \nu_T(k)k^2 =\nu k^2 + r(k),$ we may rewrite (\ref{let-visc}) as: $$\nu_T(k)= r(k) - k^{-2}\int d^3 j \frac{L(\mathbf{k,j})C(|\mathbf{k-j}|)C(j)}{C(k)[\omega(k)+\omega(j)+ \omega(|\mathbf{k-j}|)]}. \label{let-visc-rk}$$

It was shown [1] that the second term in the LET response equation cancelled the divergence in $r(k)$ in the limit of infinite Reynolds number. Hence the term which destroys the Markovian nature of the renormalized perturbation theory is the term which makes the theory compatible with the Kolmogorov $-5/3$ spectrum.

In the next post we will consider the subject of almost-Markovian models, where the term refers to the integrals over time rather than to the energy transfer through wavenumber.

[1] W. D. McComb. A local energy transfer theory of isotropic turbulence. J. Phys. A, 7(5): 632, 1974.
[2] M. S. Uberoi. Energy transfer in isotropic turbulence. Phys. Fluids, 6:1048, 1963.
[3] Arkadiusz K. Kuczaj, Bernard J. Geurts, and W. David McComb. Nonlocal modulation of the energy cascade in broadband-forced turbulence. Phys. Rev. E, 74:16306-16313, 2006.
[4] W. D. McComb. A theory of time dependent, isotropic turbulence. J. Phys. A: Math. Gen., 11(3):613, 1978.
[5] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[6] J. R. Herring. Self-consistent field approach to nonstationary turbulence. Phys. Fluids, 9:2106, 1966.

# The non-Markovian nature of turbulence 6: the assumptions that led to the Edwards theory being Markovian.

Turbulence theories are usually referred to by acronyms, e.g. DIA, SCF, ALHDIA, and so on. Here SCF is Herring’s theory and, to avoid confusion, Herring and Kraichnan referred to the Edwards SCF as EDW [1]. Later on, when I came to write my first book on turbulence [2], I referred to it as EFP, standing for ‘Edwards-Fokker-Planck’ theory. This seemed appropriate, as Edwards was guided by the theory of Brownian motion. But it did not occur to me at the time that the significance of this was that his theory was Markovian with respect to interactions in wavenumber space; nor indeed that a Markovian form was the common denominator in all three of the pioneering Eulerian theories. In recent years, it did occur to me that it was not necessary to be so prescriptive; and if one took a less constrained approach the result was a non-Markovian theory, in fact the LET theory.

Following Edwards [3], we define a model system in terms of a Gaussian distribution $P_0[\mathbf{u}]$, which is chosen such that it is normalised to unity and recovers the exact covariance. That is: $$\int \mathcal{D}\mathbf{u} \ P_0[\mathbf{u}] = 1,$$ and $$\int \mathcal{D}\mathbf{u} \ P_0[\mathbf{u}]\ u_\alpha(\mathbf{k},t) u_\beta(\mathbf{k'},t') = \langle u_\alpha(\mathbf{k},t) u_\beta(\mathbf{k'},t')\rangle=\delta(\mathbf{k}+\mathbf{k'})C_{\alpha\beta}(\mathbf{k};t,t') \ ,$$ respectively. Then one solves the Liouville equation for the exact probability distribution in terms of a perturbation series with $P_0[\mathbf{u}]$ as the zero-order term. We will not go into further details here, as we just want to understand how the Edwards theory was constrained to give a Markovian form.

Equations (1) and (2) introduce the two-time covariance. However, in order to explain the Edwards theory, we will consider the single-time case. Also, for the sake of simplicity, we will employ the reduced notation of Herring, as used extensively by Leslie [4] and others (see [2]). In this notation we represent the velocity field by $X_i$, where the index is a combined wave-vector and cartesian tensor index (i.e. our $\mathbf{k}$ and $\alpha$). Accordingly, we introduce the Edwards-Fokker-Planck operator as the sum of single-mode operators, in the form: $$L_{EFP} = -\omega_i \frac{\partial}{\partial X_i}\left(X_i + \phi_i \frac{\partial}{\partial X_i}\right), \label{efp}$$ where $\omega_i$ is a renormalized eddy decay rate and $\phi_i$ is the covariance of the velocity field, such that $$\phi_i =\int_{-\infty}^\infty X_i^2 P(X_i)dX_i,$$ and $P$ is the exact distribution. Then it is readily verified that the model equation: $$L_{EFP}P^{(F)} = 0$$ has the Gaussian solution $$P^{(F)} = \frac{e^{-X_i^2/2\phi_i}}{(2\pi\phi_i)^\frac{1}{2}}.$$
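The verification can also be done symbolically. Assuming the single-mode form of the operator, the following sympy sketch confirms that the bracket $X P + \phi\,\partial P/\partial X$ vanishes identically for the Gaussian, so that $L_{EFP}P^{(F)}=0$.

```python
import sympy as sp

x, phi, w = sp.symbols('x phi omega', positive=True)

# Gaussian solution of the single-mode Edwards-Fokker-Planck equation.
P = sp.exp(-x**2 / (2 * phi)) / sp.sqrt(2 * sp.pi * phi)

# L_EFP P = -omega d/dx (x P + phi dP/dx); the inner bracket already vanishes,
# since phi dP/dx = -x P for the Gaussian.
inner = x * P + phi * sp.diff(P, x)
assert sp.simplify(inner) == 0
assert sp.simplify(-w * sp.diff(inner, x)) == 0
```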

However, it is important to note, and is also readily verified, that a more general form of the operator $L_0$, which is given by $$L_0 = H(X_i)\left[X_i + \phi_i \frac{\partial}{\partial X_i}\right],$$ where $H(X_i)$ is an arbitrarily chosen, well-behaved function, also yields the same Gaussian solution for the zero-order equation: $$L_0P_0 =0.\label{base-op}$$ Hence at this stage the operator $L_0$ is not fully determined. Edwards was guided by an analogy with the theory of Brownian motion and in effect made the choice $$H(X_i) = -\omega_i \frac{\partial}{\partial X_i},$$ in order to generate a base operator which could be inverted in terms of an eigenfunction expansion of Hermite polynomials. In this process, the $\{\omega_i\}$ appeared as eigenvalues.

It is this specific choice, which over-determines the basic operator, that constrained the Edwards theory to be Markovian. More recently it was found that a more minimalist choice, allied to a two-time representation, leads formally to the LET theory [5]. We will consider a more physical basis for the LET theory in the next post.

[1] J. R. Herring and R. H. Kraichnan. Comparison of some approximations for isotropic turbulence. In Statistical Models and Turbulence, Lecture Notes in Physics, volume 12, page 148. Springer, Berlin, 1972.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[3] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[4] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.
[5] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.

# The non-Markovian nature of turbulence 5: implications for Kraichnan’s DIA and Herring’s SCF


Having shown that the Edwards theory is Markovian, our present task is to show that Kraichnan’s DIA and Herring’s SCF are closely related to the Edwards theory.
However, we should first note that, in the case of the DIA, one can see its Markovian nature by considering its prediction for $T(k,t)$, and this was pointed out by no less a person than Kraichnan himself in 1959 [1]. We may quote the relevant passage as follows:

‘The net flow is the resultant of these absorption and
emission terms. It will be noticed that in contrast to the
absorption term, the emission terms are proportional to $E(k)$. This
indicates that the energy exchange acts to maintain equilibrium. If the
spectrum level were suddenly raised to much higher than the equilibrium
value in a narrow neighbourhood of $k$, the emission terms would be
greatly increased while the absorption term would be little affected,
thus energy would be drained from the neighbourhood and equilibrium
re-established. The structure of the emission and absorption terms is
such that we may expect the energy flow to be from strongly to weakly
excited modes, in accord with general statistical mechanical principles.’

Note that the absorption term is what Edwards would call the input to mode $k$ from all other modes, while the emission term is the loss from mode $k$.

Kraichnan’s argument here is essentially a more elaborate version of that due to Edwards, and presents what is very much a Markovian picture of turbulence energy transfer. But, in later years, numerical experiments based on high-resolution direct numerical simulations did not bear that picture out. In particular, we note the investigation by Kuczaj et al [2].

Going back to the relationships between theories, in 1964 Kraichnan [3] showed that if the time-correlation and response functions were assumed to take exponential forms (with the same decay parameter $\omega(k,t)$), then the DIA reduced to the Edwards theory, although with only two $\omega$s in the denominator of the equation for $\omega$, rather than the three such parameters found in the Edwards case: see equations (4) and (5) in the previous blog. Thus the arguments used to demonstrate the Markovian nature of the Edwards theory do not actually work for the single-time stationary form of DIA. See also [4], Section 6.2.6. All we establish by this procedure is that the theories are cognate: that is, they have identical equations for the energy spectrum and similar equations for the response function.

Herring’s SCF has been discussed at some length in Section 6.3 of the book [4]. In time-independent form, it is identical to the DIA with assumed exponential time-dependences. The relationship between the two theories can also be demonstrated for the two-time case. The case for the SCF being classified as Markovian seems strong to me. However, there is some additional evidence from other self-consistent field theories. Balescu and Senatorski [5] actually formulated the problem in terms of a master equation and then treated it perturbatively. Summation of certain classes of diagrams led to the recovery of Herring’s SCF. For completeness, we should also mention the work of Phythian [6], whose self-consistent method resembled those of Edwards and Herring. However, his introduction of an infinitesimal response function, like that of DIA, meant that his theory ended up re-deriving the DIA equations.

In the next post we will examine the question of how the Edwards theory came to be Markovian. In particular, we will answer the question: what were the relevant assumptions made by Edwards?

[1] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[2] Arkadiusz K. Kuczaj, Bernard J. Geurts, and W. David McComb. Nonlocal modulation of the energy cascade in broadband-forced turbulence. Phys. Rev. E, 74:16306-16313, 2006.
[3] R. H. Kraichnan. Approximations for steady-state isotropic turbulence. Phys. Fluids, 7(8):1163-1168, 1964.
[4] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[5] R. Balescu and A. Senatorski. A new approach to the theory of fully developed turbulence. Ann. Phys. (NY), 58:587, 1970.
[6] R. Phythian. Self-consistent perturbation series for stationary homogeneous turbulence. J. Phys. A, 2:181, 1969.

# The non-Markovian nature of turbulence 4: the Edwards energy balance as a Master Equation.


In this post we will rely on the book [1] for background material and further details. We begin with the well-known Lin equation for the energy spectrum in freely decaying turbulence, thus: $$\label{Lin}\frac{\partial E(k,t)}{\partial t}= T(k,t) - 2\nu k^2 E(k,t),$$ where $T(k,t)$ is the transfer spectrum: see [1] for further details. This equation is the Fourier transform of the better-known Karman-Howarth equation (KHE) in real space. But, although the KHE is a local energy balance in the separation of measuring points $r$, the Lin equation is not actually a local energy balance in wavenumber space, since the transfer spectrum depends on an integral of the triple moment over all wavenumbers. For explicit forms, see [1].

The energy spectrum is defined in terms of the covariance in wavenumber space (or spectral density) by the well-known relation: $$\label{spect}E(k,t)= 4\pi k^2C(k,t),$$ but in theory it is more usual to work in terms of the latter quantity, and accordingly we transform (\ref{Lin}) into $$\label{covlin}\frac{\partial C(k,t)}{\partial t}= \frac{T(k,t)}{4\pi k^2} - 2\nu k^2 C(k,t).$$
The Edwards statistical closure for the transfer spectrum may be written as:$$\label{sfecov}\frac{T(k,t)}{4\pi k^2}=2\int d^3 j\frac{ L(k,j)C(|\mathbf{k}-\mathbf{j}|,t)\{C(j,t)-C(k,t)\}}{\omega(k,t)+\omega(j,t)+\omega(|\mathbf{k}-\mathbf{j}|,t)},$$ where $L(k,j)=L(j,k)$ and the inverse modal response time is given by: $$\label{sferesponse}\omega(k,t)=\nu k^2 + \int d^3 j\frac{ L(k,j)C(|\mathbf{k}-\mathbf{j}|,t)}{\omega(k,t)+\omega(j,t)+\omega(|\mathbf{k}-\mathbf{j}|,t)}.$$ This controls the loss of energy from mode $k$, while the term giving the gain to mode $k$ from all the other modes takes the form: $$\label{sfegain}S(k,t)=2\int d^3 j\frac{ L(k,j)C(|\mathbf{k}-\mathbf{j}|,t)C(j,t)}{\omega(k,t)+\omega(j,t)+\omega(|\mathbf{k}-\mathbf{j}|,t)}.$$ Then, the Edwards form for the transfer spectral density may be written as:$$\label{sfetrans}\frac{T(k,t)}{4\pi k^2}=S(k,t)-2\omega(k,t)C(k,t),$$ and from (\ref{covlin}) the Edwards theory gives the Lin equation as:$$\label{sfecovlin}\frac{\partial C(k,t)}{\partial t}=S(k,t)-2\omega(k,t)C(k,t).$$

Our next step is to compare this to the Master Equation and for simplicity we will consider a quantum system which can exist in any one of a large number of discrete states $|i\rangle$, where $i$ is a positive integer. The relevant equation is the Fermi master equation (see Section 9.1.2 of the book [2]), which may be written as: $$\label{fermi}\frac{d p_i}{d t}= \sum_{j}\nu_{ij} p_j – \left\{\sum_j \nu_{ij}\right\}p_i,$$ where: $p_i$ is the probability of the system being in state $|i\rangle$; $\nu_{ij}$ is the conditional probability per unit time of the system jumping from state $|i\rangle$ to state $|j\rangle$; and the principle of jump rate symmetry gives us $\nu_{ij}=\nu_{ji}$.
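As a concrete illustration of the gain-loss structure (the jump rates below are arbitrary random values), the following sketch integrates the Fermi master equation for a small number of states with symmetric rates, confirming that total probability is conserved and that the system relaxes to equipartition, $p_i = 1/n$.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n = 5

# Symmetric jump rates nu_ij = nu_ji (jump rate symmetry), zero on the diagonal.
nu = rng.random((n, n))
nu = 0.5 * (nu + nu.T)
np.fill_diagonal(nu, 0.0)

def master(t, p):
    # dp_i/dt = sum_j nu_ij p_j - (sum_j nu_ij) p_i
    return nu @ p - nu.sum(axis=1) * p

p0 = np.zeros(n); p0[0] = 1.0  # start with all probability in state 0
sol = solve_ivp(master, (0.0, 200.0), p0, rtol=1e-10, atol=1e-12)
p_final = sol.y[:, -1]

assert np.isclose(p_final.sum(), 1.0)                # probability is conserved
assert np.allclose(p_final, 1.0 / n, atol=1e-4)      # symmetric rates -> equipartition
```

Note that the loss term is proportional to $p_i$, the probability of the state under consideration: exactly the structure that the Edwards transfer spectrum shares.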

Either from this comparison with the Fermi master equation, or from comparison with other master equations such as the Boltzmann equation, it is clear that the Edwards theory of turbulence is a Markovian approximation to turbulence, which is itself non-Markovian. The two questions which now arise are: first, what are the implications for the other closures due to Kraichnan and to Herring? And, secondly, how did the Markovian nature of the Edwards theory come about? These will be dealt with in the next two posts.

[1] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[2] W. David McComb. Study Notes for Statistical Physics: A concise, unified overview of the subject. Bookboon, 2014. (Free to download from Bookboon.com)

# The non-Markovian nature of turbulence 3: the Master Equation.


In the previous post we established that the ‘loss’ term in the transport equation depends on the number of particles in the state currently being studied. This followed straightforwardly from our consideration of hard-sphere collisions. Now we want to establish that this is a general consequence of a Markov process, of which the problem of $N$ hard spheres is a particular example.

We follow the treatment given in pages 162-163 of the book [1] and consider the case of Brownian motion, as this is relevant to the Edwards self-consistent field theory of turbulence. We again consider a multipoint joint probability distribution and now consider a continuous variable $X$ which takes on specific values $x_1$ at time $t_1$, $x_2$ at time $t_2$, and in general $x_n$ at time $t_n$; thus: $f_n(x_1,t_1; x_2,t_2; \dots x_n,t_n).$ We then introduce the conditional probability density $p(x_1,t_1|x_2,t_2),$ which is the probability density that $X=x_2$ at $t=t_2$, given that $X$ had the value $X=x_1$ when $t=t_1\leq t_2$. It is defined by the identity: $$f_1(x_1,t_1)p(x_1,t_1|x_2,t_2)=f_2(x_1,t_1; x_2,t_2).\label{pdef}$$

From this equation (see [1]), we can obtain a general relationship between the single-particle probabilities at different times as: $$f_1(x_2,t_2)=\int p(x_1,t_1|x_2,t_2)f_1(x_1,t_1)dx_1. \label{pprop}$$

Next we formally introduce the concept of a Markov process. We now define this in terms of the conditional probabilities. If: $$p(x_1,t_1;x_2,t_2; \dots x_{n-1},t_{n-1}|x_n,t_n)=p(x_{n-1},t_{n-1}|x_n,t_n),\label{markdef}$$ then the current step depends only on the immediately preceding step, and not on any other preceding steps. Under these circumstances the process is said to be Markovian.

It follows that the entire hierarchy of probability distributions can be constructed from the single-particle distribution $f_1(x_1,t_1)$ and the transition probability $p(x_1,t_1|x_2,t_2)$. The latter quantity can be shown to satisfy the Chapman-Kolmogorov equation: $$p(x_1,t_1|x_3,t_3)=\int p(x_1,t_1|x_2,t_2)p(x_2,t_2|x_3,t_3) dx_2,\label{ck}$$ indicating the transitive property of the transition probability.
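The Chapman-Kolmogorov equation can be checked directly for the simplest example, Brownian motion, whose transition density is a Gaussian in the displacement with variance $2D(t_2-t_1)$ (the diffusivity $D$ and the specific numbers below are illustrative):

```python
import numpy as np

def wiener_kernel(x1, x2, dt, D=1.0):
    """Transition density of Brownian motion: Gaussian in (x2 - x1)
    with variance 2 D dt."""
    var = 2.0 * D * dt
    return np.exp(-(x2 - x1)**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# Chapman-Kolmogorov: integrating over the intermediate point x2 at time t2
# must reproduce the direct kernel from t1 to t3.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
x1, x3 = 0.3, -0.8
dt12, dt23 = 0.5, 1.2

lhs = wiener_kernel(x1, x3, dt12 + dt23)
rhs = np.sum(wiener_kernel(x1, x, dt12) * wiener_kernel(x, x3, dt23)) * dx
assert np.isclose(lhs, rhs, rtol=1e-6)
```

The convolution of two Gaussians is again a Gaussian with the variances added, which is exactly the transitive property expressed by the equation.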

It is of interest to consider two specific cases.

First, for a chain which has small steps between events, the integral relation (\ref{ck}) can be turned into a differential equation by expanding the time dependences to first order in Taylor series. Putting $f_1 = f$ for simplicity, we may obtain: $$\frac{\partial f(x_2,t_2)}{\partial t}=\int\, dx_1\left\{W(x_1,x_2)f(x_1,t)-W(x_2,x_1)f(x_2,t)\right\}, \label{me}$$ where $W(x_1,x_2)$ is the rate per unit time at which transitions from state $x_1$ to state $x_2$ take place. This is known as the master equation.

Secondly, if $X$ is a continuum variable, we can further derive the Fokker-Planck equation as: $$\frac{\partial f(x,t)}{\partial t}= \frac{\partial [A(x)f(x,t)]}{\partial x} + \frac{1}{2}\frac{\partial^2[B(x)f(x,t)]}{\partial x^2}. \label{fp}$$ This equation describes a random walk with diffusivity $B(x)$ and friction damping $A(x)$. A discussion of this equation as applied to Brownian motion may be found on pages 163-164 of [1] but we will not pursue that here.
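A minimal numerical illustration: for linear friction $A(x)=Ax$ and constant diffusivity $B$, the stationary solution of the Fokker-Planck equation is a Gaussian with variance $B/2A$, and this can be checked by simulating the corresponding Langevin equation (the parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Langevin equation corresponding to the Fokker-Planck equation with linear
# friction A(x) = A*x and constant diffusivity B:  dx = -A x dt + sqrt(B) dW.
A, B = 2.0, 0.5
dt, n_steps, n_paths = 1e-3, 5000, 20000

x = np.zeros(n_paths)
for _ in range(n_steps):
    x += -A * x * dt + np.sqrt(B * dt) * rng.standard_normal(n_paths)

# Stationary solution of the FP equation is Gaussian with variance B / (2A).
assert np.isclose(x.var(), B / (2.0 * A), rtol=0.1)
```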

In the next post we will discuss the Edwards theory of turbulence (and by extension the other pioneering theories of Kraichnan and of Herring) in the context of the present work.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press 1990.

# The non-Markovian nature of turbulence 2: The influence of the kinetic equation of statistical physics.


The pioneering theories of turbulence which we discussed in the previous post were formulated by theoretical physicists who were undoubtedly influenced by their background in statistical physics. In this post we will look at one particular aspect of this, the Boltzmann equation; and in the next post we will consider the idea of Markov processes more explicitly.

For many people, a Markov process is associated with the concept of a random walk, where the current step depends only on the previous one and memory effects are unimportant. However, for our present purposes, we will need the more general formulation as developed in the context of the kinetic equations of statistical mechanics. A reasonably full treatment of this topic may be found in chapter four of the book [1], along with some more general references. Here we will only need a brief summary, as follows.

We begin with a system of $N$ particles satisfying Hamilton’s equations (e.g. a gas in a box). We take this to be spatially homogeneous, so that distributions depend only on velocities and not on positions. Conservation of probability implies the exact Liouville equation for the $N$-particle distribution function $f_N$, but in practice we would like to have the single-particle distribution $f_1(u,t)$. If we integrate out independent variables progressively, this leads to a statistical hierarchy of governing equations, in which each reduced distribution depends on the previous member of the hierarchy: a closure problem!

The hierarchy terminates with an equation for the single-point distribution $f_1$ in terms of the two-particle distribution $f_2$. This is known as the kinetic equation. The kinetic equation for $f_1(x,u,t)$ may be written as: $$\frac{\partial f_1}{\partial t} + (u.\nabla) f_1 =\{\mbox{Term involving}\, f_2\},$$ where $x$ is the position of a single particle, $u$ is its velocity, and $\nabla$ is the gradient operator with respect to the variable $x$. If we follow Boltzmann and model the gas molecules as hard spheres, then we can assume that the right hand side of the equation is entirely due to collisions. Accordingly, we may write the kinetic equation as: $$\frac{\partial f}{\partial t} = \left(\frac{\partial f}{\partial t}\right)_{collisions},$$ where the convective term vanishes because of the previously assumed homogeneity. Also, we drop the subscript ‘1’ as we will only be working with the single-particle distribution.

Now let us consider the basic physics of the collisions. We assume that three-body collisions are unlikely and restrict our attention to the two-body case. Assume we have a collision in which a particle with velocity $u$ collides with another particle moving with velocity $v$, resulting in two particles with velocities $u'$ and $v'$. Evidently this represents a loss of one particle from the set of particles with velocity $u$. Conversely, the inverse two-body collision can result in the gain of one particle to the state $u$. Hence we may interpret the right hand side of (2) as: $$\left(\frac{\partial f}{\partial t}\right)_{collisions} = \mbox{Rate of gain to state}\,u\,-\mbox{Rate of loss from state}\,u.$$

The right hand side can be calculated using elementary scattering theory, along with the assumption of molecular chaos or Stosszahlansatz, in the form $f_2=f_1f_1$; with the result that equation (1) becomes: $$\frac{\partial f(u,t)}{\partial t} = \int dv \int d\omega \,\sigma_d\,|u-v| \{f(u',t)f(v',t) - f(v,t)f(u,t)\},$$ where $\sigma_d$ is the differential scattering cross-section, the integral with respect to $\omega$ is over scattering angles, and the integral with respect to $v$ stands for integration over all dummy velocity variables.
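The key structural point, that the loss from state $u$ carries a factor $f(u,t)$, can be made explicit with a toy discrete-velocity version of the loss term (the kernel values below are random placeholders, not a physical cross-section):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
f = rng.random(n)  # occupancies f(u) of n discrete velocity states

# Symmetric collision kernel (for hard spheres it would depend on |u - v|);
# the zero diagonal excludes self-collisions.
k = rng.random((n, n))
k = 0.5 * (k + k.T)
np.fill_diagonal(k, 0.0)

def loss_rate(f, u):
    """Loss from state u: each collision needs a partner, so the rate is
    f(u) times a weighted sum over the partner occupancies."""
    return f[u] * np.dot(k[u], f)

# The feature emphasised in the text: the loss from state u is proportional
# to the occupancy f(u) of that state.
u = 2
f_doubled = f.copy()
f_doubled[u] *= 2.0
assert np.isclose(loss_rate(f_doubled, u), 2.0 * loss_rate(f, u))
```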

This is the Boltzmann equation and its key feature from our present point of view is that the rate of loss of particles from the state $u$ depends on the number in that state, as given by $f(u,t)$. We will develop this further in the next post as being a general characteristic of Markovian theories. Of course the present treatment is rather sketchy, but a pedagogic discussion can be found in the book [2], which is free to download from Bookboon.com.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press 1990.
[2] W. David McComb. Study Notes for Statistical Physics: A concise, unified overview of the subject. Bookboon, 2014.

# The non-Markovian nature of turbulence 1: A puzzling aspect of the pioneering two-point closures.

The non-Markovian nature of turbulence 1: A puzzling aspect of the pioneering two-point closures.

When I began my postgraduate research on turbulence in 1966, the field had just gone through a very exciting phase of new developments. But there was a snag. These exciting new theories which seemed so promising were not quite correct. They had been found to be incompatible with the Kolmogorov spectrum.

This realisation had come about in stages. When Kraichnan published his pioneering paper in 1959 [1], he carried out an approximate analysis and concluded that his new theory (the direct interaction approximation, or DIA as it is universally known) predicted an inertial range energy spectrum proportional to $k^{-3/2}$. He also concluded that the experimental results available at the time were not sufficiently accurate to distinguish between his result and the well-known Kolmogorov $k^{-5/3}$ form. However, this situation had changed by 1962, with the publication of the remarkable results of Grant et al [2], which exhibited a clear $-5/3$ power law over many decades of wavenumber range.

In 1964, Edwards published a self-consistent field theory which, unlike Kraichnan’s DIA, was restricted to single-time correlations [3]. This too turned out to be incompatible with the Kolmogorov spectrum [4]. Edwards attributed the problem to an infra-red divergence in the limit of infinite Reynolds number which, although a different explanation from Kraichnan’s, at least also suggested that the problem was associated with low wavenumber behaviour. In 1965, Herring published a self-consistent field theory [5], which was comparable to that of Edwards, although the equation for the renormalized viscosity differed slightly, but not sufficiently to eliminate the infra-red divergence. In passing, I would note that Herring’s self-consistent field method was more general than that of Edwards, and that is a point which I will refer to in later posts in the present series. Also, for completeness, I should mention that Herring later extended his theory to the two-time case and this was found to be closely related to the DIA of Kraichnan [6].

Kraichnan, in a series of papers, responded to this situation by developing variants of his method in Lagrangian coordinates (later on, in collaboration with Herring); and further Lagrangian methods were introduced by Kaneda, by Kida & Goto, and most recently by Okumura. My own approach began in 1974 with a correction of the Edwards theory, which introduced the local energy transfer (LET) theory and retained the Eulerian coordinate system. All of these theories are compatible with the Kolmogorov spectrum.

My point now is really one of taxonomy, although it is nonetheless fundamental. How should we classify the theories in order to distinguish between those which are compatible with Kolmogorov and those which are not? In my 1990 book [7], I resorted to the pragmatic classification: Theories of the first kind and Theories of the second kind; along with a nod to a popular film title! Actually, in recent times, the answer to this question has become apparent, along with the realisation that it has been hiding in plain sight all this time. The clue lies in the Edwards theory and that is the aspect that we shall develop in this series of posts.

The discussion above does not do justice to everything that was going on in this field in the 1960/70s. For instance, I could have mentioned the formalism of Wyld and the well-known EDQNM. Discussions of these, and many more, will be found in my book cited above as [7]. Also, the most recent significant research papers in this field are McComb & Yoffe [8] in 2017 and Okamura [9] in 2018.

[1] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[2] H. L. Grant, R. W. Stewart, and A. Moilliet. Turbulence spectra from a tidal channel. J. Fluid Mech., 12:241-268, 1962.
[3] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[4] S. F. Edwards. Turbulence in hydrodynamics and plasma physics. In Proc. Int. Conf. on Plasma Physics, Trieste, page 595. IAEA, 1965.
[5] J. R. Herring. Self-consistent field approach to turbulence theory. Phys. Fluids, 8:2219, 1965.
[6] J. R. Herring. Self-consistent field approach to nonstationary turbulence. Phys. Fluids, 9:2106, 1966.
[7] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[8] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[9] Makoto Okamura. Closure model for homogeneous isotropic turbulence in the Lagrangian specification of the flow field. J. Fluid Mech., 841:133, 2018.

# Work in progress.

Work in progress.

In my blog of 13 August 2020 I posted a ‘to-do list’ that dated from November 2009. None of these jobs ever got done, because other jobs cropped up which had greater priority. I’m fairly confident that this won’t happen with my current ‘to-do list’ as I see all these jobs as very important and, in a sense, as rounding off my lifetime’s work in turbulence. The list follows below:

[A] Extension of my 2009 analysis of the Kolmogorov spectrum for the stationary case [1] to the case of free decay. It has become increasingly clear in recent years that there are non-trivial differences between stationary isotropic turbulence and freely decaying isotropic turbulence (and grid-generated turbulence is something else again!). As this analysis expresses the pre-factor (i.e. the Kolmogorov constant) in terms of an average over the phases of the system, it is of interest to see whether the peculiarities of free decay affect the pre-factor or the power law (or indeed both).

[B] Turbulent inertial transfer as a non-Markovian stochastic process and the implications for statistical closures. In 1974 [2] I diagnosed the failure of the Edwards single-time theory (and by extension Kraichnan’s two-time DIA) as being due to their dividing the transfer spectrum into input and output. The basis of my local energy transfer (LET) theory was to recognise that at some wavenumbers the entire transfer spectrum behaved as an input while at other wavenumbers it behaved as an output. Subsequently I extended the LET theory to the two-time case by heuristic methods and this formulation was developed by myself and others over many years. Then in 2017 [3, 4] I extended the general self-consistent field method of Sam Edwards to the two-time case and re-derived the LET in a more formal way. However, the puzzle was this: why did the Edwards procedure give the wrong answer for the single-time case, but not for the two-time case? I realised at the time (i.e. in 2017) that Edwards had over-determined his base distribution and that his base operator was of unnecessarily high order (see [4]), but it was only recently that the penny dropped and I realised that by specifying the Fokker-Planck operator, Sam had effectively made a Markovian approximation. This needs to be written up in detail in the hope of throwing some light on the behaviour of statistical closure theories, and that is my most urgent task. Please note that the letter ‘M’ in EDQNM refers to the fact that it is Markovian in time.

[C] Characteristic decay times of the two-time, two-point Eulerian correlation function and the implications for closures. This is a very old topic which still receives attention: for instance, see [5, 6]. I have intended to get to grips with this for many years, as I have some concerns about the way that it is applied to statistical closures, beginning with the work of Kraichnan on DIA. One suspicion that I have is that the form of scaling is different in the stationary and freely-decaying cases; but I have not seen this point mentioned in the literature.

[D] Reconsideration of renormalization methods in the light of the transient behaviour of the Euler equation. I have posted five blogs with remarks on this topic, beginning on the 19 May 2022. My intention now is to combine these remarks into some more or less coherent analysis, as I believe they support my long-held suspicion (more suspicion!) that there are problems with the way in which stirring forces are used in formulating perturbation theories of the Navier-Stokes equations. Of course it is natural to study a dynamical system subject to a random force, but in the case of turbulence the force creates the system as well as sustaining it against dissipation.

This programme should keep me pretty busy so I don’t expect to post blogs over the next month or two. However, by the autumn I hope to return to at least intermittent postings.

[1] David McComb. Scale-invariance and the inertial-range spectrum in three-dimensional stationary, isotropic turbulence. J. Phys. A: Math. Theor., 42:125501, 2009.
[2] W. D. McComb. A local energy transfer theory of isotropic turbulence. J.Phys.A, 7(5):632, 1974.
[3] David McComb. A fluctuation-relaxation relation for homogeneous, isotropic turbulence. J. Phys. A: Math. Theor., 42:175501, 2009.
[4] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[5] G. He, G. Jin, and Y. Yang. Space-time correlations and dynamic coupling in turbulent flows. Ann. Rev. Fluid Mech., 49:51, 2017.
[6] A. Gorbunova, G. Balarac, L. Canet, G. Eyink, and V. Rossetto. Spatio-temporal correlations in three-dimensional homogeneous and isotropic turbulence. Phys. Fluids, 33:045114, 2021.

# Turbulence renormalization and the Euler equation: 5

Turbulence renormalization and the Euler equation: 5

In the preceding posts we have discussed the fact that the Euler equation can behave like the Navier-Stokes equation (NSE) as a transient for sufficiently short times [1], [2]. It has been found that spectra at lower wavenumbers are very similar to those of turbulence, and there appears to be a transfer of energy to the ‘thermal’ modes at higher wavenumbers. This raises some rather intriguing questions about the general class of renormalized perturbation theories, which are often interpreted as renormalizing the fluid viscosity. As these theories are broadly in good qualitative and quantitative agreement with the observed behaviour of the NSE, they should also be in good agreement with the spectrally-truncated Euler equation, which of course is inviscid. So in this case there is nothing to renormalize!

In effect this latter point has already been demonstrated, in that [1] was based on direct numerical simulation of the Euler equation and [2] used the EDQNM model with the viscosity set equal to zero. So this raises doubts about the concept of a renormalized fluid viscosity as an interpretation of the two-point statistical closure theories. As indicated at the end of the previous post, it may be helpful to consider a case where the renormalization of the fluid viscosity is central to the method and therefore unambiguous. This is provided by the application of the renormalization group (RG) to turbulence. A background discussion of this method may be found in [3] and a schematic outline was given in my blog post of 7 May 2020. Here we will just summarise a few points.

Consider isotropic turbulence with wavenumber modes in the range $0\leq k\leq k_{max}$. The basic idea is to average out the modes with $k_1\leq k \leq k_{max}$, while keeping those modes with $0\leq k\leq k_1$ constant. It should be emphasised that such an average is a \emph{conditional} average: it is not the same as the usual ensemble or time average. Once calculated, this average can be added to the molecular viscosity in order to represent the effect of the eliminated modes by an effective viscosity acting on the retained modes. Then the variables are all scaled (Kolmogorov scaling) on the increased viscosity; the process is repeated for a new cut-off wavenumber $k_2 < k_1$; and so on, until the effective viscosity ceases to change. The result is a scale-dependent renormalized viscosity.
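The logic of this iteration — eliminate a shell, absorb its effect into an increased viscosity, rescale, repeat until nothing changes — can be illustrated with a deliberately schematic toy recursion. In the Python sketch below, the map, the shell ratio $h$ and the coupling $A$ are all invented for illustration (they are not the actual RG recursion relations); the exponent $4/3$ merely mimics Kolmogorov rescaling. The point is simply that such an iteration converges to a fixed point:

```python
import math

# Toy recursion for a scaled effective viscosity nu_n (illustrative only):
#   nu_{n+1} = h**(4/3) * (nu_n + A / nu_n)
# where h < 1 is the shell ratio and A is an assumed coupling constant.
h, A = 0.7, 0.1
c = h ** (4.0 / 3.0)

nu = 1.0                 # arbitrary starting value
for n in range(100):
    nu = c * (nu + A / nu)

# Analytic fixed point of this toy map: nu* = sqrt(c * A / (1 - c)).
nu_star = math.sqrt(c * A / (1.0 - c))
print(nu, nu_star)
```

Convergence of the iterated map to its fixed point is the analogue of the effective viscosity ceasing to change.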

Now this appears to round off my series of posts on this topic quite well. There is no viscosity in the Euler equation and so we do not have a starting point for RG. It is as simple as that. Any attempt to represent the energy sink provided by the equilibrium modes by an effective viscosity still does not appear to provide a starting point for RG. On the other hand, in RG, unlike in the so-called renormalized perturbation theories, there is no question about the fact that the kinematic viscosity of the fluid is renormalized.

My overall conclusion is a rather vague and open-ended one. Namely, that it would be interesting to consider all the renormalization approaches to turbulence very much in the context of how they look when applied to the Euler equation as well as the NSE, and I hope to make this the subject of further work. Lastly, before finishing I should enter a caveat about RG and also correct a typographical error.

\emph{Caveat}: The choice of the wavenumber $k_{max}$ is crucial. The pioneering applications of RG to random fluid motion chose it to be small enough to exclude the turbulence cascade and found a trivial fixed point as $k \rightarrow 0$. This choice rendered the conditional average trivial, as it restricted the formulation to perturbation theory using Gaussian averages, and of course Gaussian distributions factorize. Unfortunately many supposed applications to NSE turbulence also treated the conditional average as trivial. In fact one must choose $k_{max}$ to be large enough to capture all the dissipation, at least to a good approximation.

\emph{Correction}: The last word of the first paragraph of my post on 19 May 2022 should have been ‘viscosity’, not ‘velocity’. The correction has been made online.

[1] Cyril Cichowlas, Pauline Bonatti, Fabrice Debbasch, and Marc Brachet. Effective Dissipation and Turbulence in Spectrally Truncated Euler Flows. Phys. Rev. Lett., 95:264502, 2005.
[2] W. J. T. Bos and J.-P. Bertoglio. Dynamics of spectrally truncated inviscid turbulence. Phys. Fluids, 18:071701, 2006.
[3] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.

# Turbulence renormalization and the Euler equation: 4

Turbulence renormalization and the Euler equation: 4

In the previous post we mentioned that Kraichnan’s DIA theory [1] and Wyld’s [2] diagrammatic formalism both depended on the use of an externally applied stirring force to define the response function. This is also true of the later functional formalism of Martin, Siggia and Rose [3]. Both formalisms agree with DIA at second order, and a general discussion of these matters can be found in references [4] and [5]. We also pointed out that this use of applied forces poses problems for the Euler equation. This is because the absence of viscosity means that the kinetic energy will increase without limit. That of course is the reason why numerical studies are limited to the spectrally truncated Euler equation, where the modes are restricted to $0\leq k\leq k_{max}$. So the fact that the Euler equation can behave like the Navier-Stokes equation (NSE) in a transient way not only raises questions about the interpretation of renormalized perturbation theory (RPT) as a renormalization of the molecular viscosity, it also raises doubts about the use of external forces to develop the RPT in the first place.

In the investigations of Cichowlas \emph{et al} [6] and Bos and Bertoglio [7], as discussed in the first of this series of posts on 19 May, the system was given a finite amount of energy which it then redistributed among the modes. For modes with $k\leq k_{th}$, an NSE-like cascade was observed, with a Kolmogorov spectrum; while for $k\geq k_{th}$ the $k^2$ equipartition spectrum was observed. Obviously, in the absence of viscosity the total energy is constant and the system must move to equipartition for all values of wavenumber. Thus the value of $k_{th}$ separating the two forms of behaviour must, it is reasonable to assume, tend to zero in a finite time.

If we applied stirring forces to the spectrally truncated Euler equation, such that they constituted an energy input at low modes at a rate $\varepsilon_W$, then in the absence of viscosity this could be balanced by a form of dissipation to the equipartition modes, where the energy contained in these modes is given by $$E_{th}(t)=\int_{k_{th}}^{k_{max}}\,E(k,t) \,dk,$$ and the dissipation rate by $$\varepsilon(t)= dE_{th}(t)/dt,$$ as discussed in reference [6]. Evidently as time goes on, $k_{th}$ will decrease to some minimum value, which would be determined by the peakiness of the input spectrum near the origin, and after that the total energy would increase without limit.
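These two relations are straightforward to check numerically. In the sketch below (Python) we take a model spectrum whose equipartition part grows linearly in time, $E(k,t) = c\,t\,k^{2}$ on $[k_{th}, k_{max}]$, so that $E_{th}(t)$ and $\varepsilon(t)=dE_{th}(t)/dt$ are known in closed form; all the numerical values are assumed purely for illustration:

```python
import numpy as np

k_th, k_max, c = 10.0, 100.0, 2.0e-3     # assumed model parameters
k = np.linspace(k_th, k_max, 2001)

def E_th(t):
    """Thermal energy: trapezoidal quadrature of the model spectrum
    E(k, t) = c * t * k**2 over the equipartition range [k_th, k_max]."""
    E_k = c * t * k**2
    return 0.5 * np.sum((E_k[1:] + E_k[:-1]) * np.diff(k))

# Dissipation rate as a centred finite difference of E_th(t).
t, dt = 1.0, 1.0e-3
eps_numeric = (E_th(t + dt) - E_th(t - dt)) / (2.0 * dt)

# For this model spectrum, eps = c * (k_max**3 - k_th**3) / 3 exactly.
eps_exact = c * (k_max**3 - k_th**3) / 3.0
print(eps_numeric, eps_exact)
```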

The only way one could maintain a quasi-NSE form of behaviour in the presence of an input term would be by increasing the value of $k_{max}$ and ultimately taking $k_{max} = \infty$. This naturally rules out numerical simulation, but possibly some form of limit could be investigated numerically, rather as the infinite Reynolds number limit can be established in numerical simulations. Cichowlas \emph{et al} [6] introduced an analogue of the Kolmogorov dissipation wavenumber $k_d$ such that $$k_d \sim \left(\frac{\varepsilon}{E_{th}^{3/2}}\right)^{1/4}k_{max}^{3/4}.$$ This raises the possibility that taking the limit $k_{max} \rightarrow \infty$ would correspond to the infinite Reynolds number limit, which is $\nu \rightarrow 0$ such that $\varepsilon_W = \mbox{constant}$, leading to $k_d \rightarrow \infty$.

I will extend the discussion to the use of the Renormalization Group (RG) in the next post. In the meantime, for the sake of completeness, I should mention that there is a school of activity in which RPTs are derived in Lagrangian coordinates. The latest developments in this area, along with a good discussion of its relationship to Eulerian theories, can be found in the paper by Okamura [8].

[1] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[2] H. W. Wyld Jr. Formulation of the theory of turbulence in an incompressible fluid. Ann.Phys, 14:143, 1961.
[3] P. C. Martin, E. D. Siggia, and H. A. Rose. Statistical Dynamics of Classical Systems. Phys. Rev. A, 8(1):423-437, 1973.
[4] A. Berera, M. Salewski, and W. D. McComb. Eulerian Field-Theoretic Closure Formalisms for Fluid Turbulence. Phys. Rev. E, 87:013007-1-25, 2013.
[5] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[6] Cyril Cichowlas, Pauline Bonatti, Fabrice Debbasch, and Marc Brachet. Effective Dissipation and Turbulence in Spectrally Truncated Euler Flows. Phys. Rev. Lett., 95:264502, 2005.
[7] W. J. T. Bos and J.-P. Bertoglio. Dynamics of spectrally truncated inviscid turbulence. Phys. Fluids, 18:071701, 2006.
[8] Makoto Okamura. Closure model for homogeneous isotropic turbulence in the Lagrangian specification of the flow field. J. Fluid Mech., 841:133, 2018.

# Turbulence renormalization and the Euler equation: 3

Turbulence renormalization and the Euler equation: 3

In the previous post we saw that the mean-field and self-consistent assumptions/approximations are separate operations, although often referred to in the literature as if they could be used interchangeably. We also saw that the screened potential in a cloud of electrons could be interpreted as a Coulomb potential due to a renormalized charge. This type of interpretation was not immediately obvious for the magnetic case and indeed a much more elaborate statistical field theoretic approach would be needed to identify an analogous procedure in this case. It will be helpful to keep these thoughts in mind as we consider the theoretical approach to turbulence by Kraichnan in his DIA theory [1] in 1959. The other two key theories we shall consider are the diagrammatic method of Wyld [2] and the self-consistent field method of Edwards [3]. In what follows, we will adopt a simplified notation. Fuller details may be found in the books [4] or [5].

Kraichnan considered an infinitesimal fluctuation $\delta f(k,t)$ in the driving forces producing a fluctuation in the velocity field $\delta u(k,t)$. He then differentiated the NSE with respect to $f$ to obtain a governing equation for $\delta u$, with exact solution: $\delta u(k,t) = \int_{-\infty}^t \hat{G}(k;t,t') \delta f(k,t')\,dt',$ where $\hat{G}$ is the infinitesimal response function. In this work Kraichnan made use of a mean-field assumption, viz. $\langle \hat{G}(t,t')u(t)u(t')\rangle = \langle\hat{G}(t,t')\rangle \langle u(t)u(t')\rangle = G(t,t') \langle u(t)u(t')\rangle,$ where $G$ is the response function that is used for the subsequent perturbation theory.

For perturbation theory, a book-keeping parameter $\lambda$ (ultimately set equal to unity) is introduced to multiply the nonlinear term and $G$ is expanded in powers of $\lambda$, thus: $G(t,t')= G_0(t,t')+\lambda G_1(t,t') + \lambda^2 G_2(t,t') + \dots$ For the zero-order term, we set the nonlinear term in the Navier-Stokes equation (NSE) equal to zero and the exact solution is: $G_0(k,t-t') = \exp[-\nu k^2(t-t')],\quad\mbox{for}\quad t\geq t';$ where we have now introduced stationarity. This is the viscous response function. So the technique is to calculate an approximation to the exact response function by means of partial summations of the perturbation series to all orders. This can be thought of as renormalizing the viscosity, and that interpretation emerges more clearly in the diagrammatic method of Wyld [2].
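As a quick consistency check, the viscous response function can be computed numerically: with the nonlinear term set to zero, $G_0$ satisfies $\partial G_0/\partial t = -\nu k^{2} G_0$ with $G_0 = 1$ at $t = t'$. The Python sketch below (with assumed values of $\nu$ and $k$) integrates this equation and recovers the exponential form:

```python
import math

nu, k = 1.0e-3, 5.0            # assumed viscosity and wavenumber
dt, nsteps = 1.0e-4, 10000     # integrate over t - t' = 1.0

# Solve dG/dt = -nu * k**2 * G with G = 1 at t = t', using midpoint (RK2) steps.
G = 1.0
for _ in range(nsteps):
    s1 = -nu * k**2 * G
    s2 = -nu * k**2 * (G + 0.5 * dt * s1)
    G += dt * s2

# Compare with G_0(k, t - t') = exp(-nu * k**2 * (t - t')).
G_exact = math.exp(-nu * k**2 * 1.0)
print(G, G_exact)
```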

The work of Wyld is a very straightforward analysis of the closure problem using conventional perturbation theory and a field-theoretic approach. It has received criticism and comment over the years but the underlying problems are procedural and are readily addressed [6]. From our point of view the pedagogic aspects of his formalism are attractive and it is beyond dispute that at second-order of renormalized perturbation theory his results verify those of Kraichnan. This is an important point as Wyld’s method does not involve a mean-field approximation.

At this stage it is clear that these two approaches cannot be directly applied to the Euler equation as there is no viscosity, and indeed the idea of forcing it would raise questions which we will not explore here. The interesting point here is that the Edwards self-consistent method does not rely explicitly on viscosity; nor, in the absence of viscosity, does it require stirring forces. Essentially it involves a self-consistent solution of the Liouville equation for the probability distribution of the velocities and, as it was applied to the forced NSE, it actually does involve both viscosity and stirring. Indeed it is known to be cognate with both the Kraichnan and the Wyld theories [4], [5]. Hence, like them it can be interpreted in terms of a renormalization of the viscosity.

These three theories, and other related theories, are all Markovian with respect to wavenumber (as opposed to time). The exception is the Local Energy Transfer (LET) theory [7], which does not divide the nonlinear energy transfer spectrum into input and output parts. Recently it has been found that the application of the Edwards self-consistent field method to the case of two-time correlations leads to a non-Markovian (in wavenumber) theory which has the response function $R(t,t')$ determined by: $R(t,t') = \left\langle u(t)\tilde{f}(t')\right\rangle_{0},$ where $\tilde{f}(t)$ is a quasi-entropic force derived from the base distribution and the subscript 0 denotes an average against that distribution. As pointed out in [8], the tilde distinguishes the quasi-entropic force from the stirring force $f$. Edwards showed that $\langle uf\rangle$ was the rate of doing work by the stirring forces on the velocity field, whereas the new quantity $\langle u\tilde{f}\rangle$ determines the two-time response. It would seem that the LET theory can be applied directly to the Euler equation and this is something I hope to report on in the near future.

[1] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[2] H. W. Wyld Jr. Formulation of the theory of turbulence in an incompressible fluid. Ann. Phys, 14:143, 1961.
[3] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[4] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.
[5] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[6] A. Berera, M. Salewski, and W. D. McComb. Eulerian Field-Theoretic Closure Formalisms for Fluid Turbulence. Phys. Rev. E, 87:013007-1-25, 2013.
[7] W. D. McComb. A local energy transfer theory of isotropic turbulence. J. Phys. A, 7(5):632, 1974.
[8] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.

# Turbulence renormalization and the Euler equation: 2

Turbulence renormalization and the Euler equation: 2

In the early 1970s, my former PhD supervisor Sam Edwards asked me to be the external examiner for one of his current students. It was only a few years since I had been on the receiving end of this process so naturally I approached the task in a merciful way! Anyway, if memory serves, the thesis was about a statistical theory of surface roughness and it cited various papers that applied methods of theoretical physics to practical engineering problems such as properties of polymer solutions, stochastic behaviour of structures and (of course) turbulence. To me this crystallized a problem that was then troubling me. If you regarded yourself as belonging to this approach (and I did), what would you call it? The absence of a recognisable generic title when filling in research grant applications or other statements about one’s research seemed to be a handicap.

Ultimately I decided on the term renormalization methods but the term renormalization did not really come into general use, even in physics, until the success of renormalization group (or RG) in the early 1980s. Actually, the common element in these problems is that one is dealing with systems where the degrees of freedom interact with each other. So, another possible title would be many-body theory. We can also expect to observe collective behaviour, which is another possible label. We will begin by looking briefly at two pioneering theories in condensed matter physics, as comparing and contrasting these will be helpful when we go on to the theory of turbulence.

We begin with the Weiss theory of ferromagnetism, which dates from 1907 (see Section 3.2 of [1]), in which a piece of magnetic material was pictured as being made up from tiny magnets at the molecular level. This predates quantum theory and nowadays we would think in terms of lattice spins. There are two steps in the theory. First there is the mean field approximation. Weiss considered the effect of an applied magnetic field $B$ producing a magnetization $M$ in the specimen, and argued that the tendency of spins to line up spontaneously would lead to a molecular field $B_m$, giving an effective field $B_E$ such that: $B_E = B + B_m.$ This is the mean-field approximation.

Then Weiss made the assumption $B_m\propto M.$ This is the self-consistent approximation. Combining the two, and writing the magnetization as a fraction of its saturation value $M_\infty$, an updated treatment gives: $\frac{M}{M_\infty}= \tanh\left[\frac{JZ}{kT}\frac{M}{M_\infty}\right],$ where $J$ is the strength of interaction between spins, $Z$ is the number of nearest neighbours of any one spin, $k$ is the Boltzmann constant and $T$ is the absolute temperature. This expression can be solved graphically for the value of the critical temperature $T_C$: see [1].
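Besides the graphical method, the self-consistency condition can be solved by simple iteration. The Python sketch below (the starting value and iteration count are arbitrary choices) iterates $m = \tanh(\theta m)$ for the reduced magnetization $m = M/M_\infty$, with $\theta = JZ/kT$: for $\theta > 1$ (i.e. $T < T_C = JZ/k$) a spontaneous magnetization appears, while for $\theta < 1$ only the trivial solution survives:

```python
import math

def magnetization(theta, m0=1.0, iters=500):
    """Iterate the Weiss self-consistency condition m = tanh(theta * m),
    where theta = J*Z/(k*T) = T_C / T and m = M / M_infinity."""
    m = m0
    for _ in range(iters):
        m = math.tanh(theta * m)
    return m

m_below = magnetization(1.5)   # T < T_C: spontaneous magnetization
m_above = magnetization(0.5)   # T > T_C: only the trivial solution m = 0
print(m_below, m_above)
```

The iteration converges because the map is a contraction near each stable solution; the change of behaviour at $\theta = 1$ locates the critical temperature.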

Our second theory dates from 1922 and considers electrons (in an electrolyte, say), evaluating the effect of all the other electrons on the potential due to any one electron. For any one electron in isolation, we have the Coulomb potential, thus: $V(r)\sim \frac{e}{r},$ where $e$ is the electronic charge and $r$ is distance from the electron. This theory too has mean-field and self-consistent steps (see [1] for details) and leads to the so-called screened potential, $V_s(r) \sim \frac{e \exp[-r/l_D]}{r},$ where $l_D$ is the Debye length and depends on the electronic charge and the number density of electrons. This potential falls off much faster than the Coulomb form and is interpreted in terms of the screening effect of the cloud of electrons round the one that we are considering.

However, we can interpret it as a form of charge renormalization, in which the free-field charge $e$ is replaced by a charge which has been renormalized by the interactions with the other electrons, or: $e \rightarrow e \times \exp[-r/l_D].$ Note that the renormalized charge depends on $r$, and this type of scale dependence is absolutely characteristic of renormalized quantities. In the next blog post we will discuss statistical theories of turbulence in terms of what we have learned here. For the sake of completeness, we should also mention here that the idea of an ‘effective’ or ‘apparent’ or ‘turbulence’ viscosity was introduced in 1877 by Boussinesq. For details, see the book by Hinze [2]. This may possibly be the first recognition of a renormalization process.
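The scale dependence of the renormalized charge is easy to visualise. The Python sketch below (arbitrary units; the values of $e$ and $l_D$ are assumed) compares the bare Coulomb potential with the screened form and reads off the effective charge $e\exp[-r/l_D]$ at several separations:

```python
import math

e, l_D = 1.0, 0.5                      # charge and Debye length, arbitrary units

def V_coulomb(r):
    """Bare Coulomb potential of an isolated electron."""
    return e / r

def V_screened(r):
    """Screened (Debye) potential due to the surrounding electron cloud."""
    return e * math.exp(-r / l_D) / r

# The renormalized, scale-dependent charge is the ratio of the two potentials:
# e_eff(r) = e * exp(-r / l_D), which falls off with increasing r.
for r in (0.1, 0.5, 1.0, 2.0):
    e_eff = e * V_screened(r) / V_coulomb(r)
    print(r, e_eff)
```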

[1] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.
[2] J. O. Hinze. Turbulence. McGraw-Hill, New York, 1st edition, 1959. (2nd edition, 1975).

# Turbulence renormalization and the Euler equation: 1

Turbulence renormalization and the Euler equation: 1

The term renormalization comes from particle physics but the concept originated in condensed matter physics, and indeed could be said to originate in the study of turbulence in the late 19th and early 20th centuries. It has become a dominant theme in statistical theories of turbulence since the mid-20th century, and a very simple summary of this can be found in my post of 16 April 2020, which includes the sentence: ‘In the case of turbulence, it is probably quite widely recognized nowadays that an effective viscosity may be interpreted as a renormalization of the fluid kinematic viscosity.’ Some further discussion (along with references) may be found in my posts of 30 April and 7 May 2020, but the point that concerns me here is: how can renormalization apply to the Euler equation, when its relationship to the Navier-Stokes equation (NSE) corresponds to zero viscosity?

It is well known that a randomly excited and spectrally truncated Euler system corresponds to an equilibrium ensemble in statistical mechanics. This means that it must exhibit energy equipartition at long times (depending on initial conditions), with the spectral energy density $C(k) = A$, where $A$ is constant, and therefore an energy spectrum of the form $E(k) \sim A k^2$. Indeed this was demonstrated as long ago as 1964 by Kraichnan in the course of testing his DIA statistical closure [1]. However, in 1993, She and Jackson studied a constrained Euler system in the context of reducing the number of degrees of freedom needed to describe Navier-Stokes turbulence [2]. This involves an Euler equation restricted to wavenumber modes $k_{min}\leq k \leq k_{max}$ which is embedded in a forced NSE system, with nonlinear transfer of energy in from the forced modes with $k<k_{min}$ and nonlinear dissipation of energy to modes with $k>k_{max}$, where viscous dissipation is present. This is a very interesting paper which I mention here for completeness and hope to return to at some later time. For the moment I want to concentrate on two rather simpler, but still important, studies of the incompressible Euler equation [3], [4].
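The equipartition form $E(k) \sim A k^2$ is simply mode counting: if every Fourier mode carries the same energy $A$, the energy in a spherical shell in wavenumber space is proportional to the number of modes it contains. The Python sketch below (the lattice size is assumed purely for illustration) counts the modes of an integer wavevector lattice within radius $k$; the cumulative count grows as $k^3$, so the shell energy $E(k) = A\,dN/dk \propto k^2$:

```python
import numpy as np

N = 16                                    # lattice half-width (assumed)
n = np.arange(-N, N + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
kmag = np.sqrt(nx**2 + ny**2 + nz**2).ravel()

# Cumulative number of modes with |k| <= R; for a 3-D lattice this grows
# as (4*pi/3) R^3, so equal modal energy A gives E(k) = A dN/dk ~ A k^2.
radii = np.arange(4, 15)
counts = np.array([np.sum(kmag <= R) for R in radii])

# Log-log slope of N(R): close to 3 for radii well inside the lattice.
slope = np.polyfit(np.log(radii), np.log(counts), 1)[0]
print(slope)
```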

In physical terms, it is well known that the presence of the viscous term in the NSE, with its $k^2$ weighting, breaks the symmetry of the nonlinear term common to both equations, and ensures a mean flux of energy in the direction of increasing wavenumber. This symmetry can also be broken by adopting as initial condition some energy spectrum which does not correspond to the equipartition solution. The resulting evolution of a system peaked initially near, but not at, the origin is shown in [1], along with a good discussion of the behaviour of the Euler equation as related to the NSE. Evidently the Euler equation may behave like the NSE as a transient, ultimately tending to equipartition. This behaviour has been studied by Cichowlas et al [3], using direct numerical simulation; and by Bos and Bertoglio [4], using the EDQNM spectral closure. They both find long-lived transients in which at smaller wavenumbers there is a Kolmogorov-type $k^{-5/3}$ spectrum and at higher wavenumbers an equipartition $k^2$ spectrum. In both cases, the equipartition range is seen as acting as a sink, and hence giving rise to an effect like molecular viscosity.

In [4], as in [2], there is some consideration of the relevance to large-eddy simulation, but it should be noted that in both investigations the explicit scales are not subject to molecular viscosity or its analogue. For the sake of contrast, we note that the operational treatment of NSE turbulence by Young and McComb [5] provides a subgrid sink for explicit modes which are themselves governed by the NSE. It may not be a huge difference in practice, but it is important to be precise about these matters.

However, in the present context the really interesting aspect of [3] and [4] is that, in the absence of viscosity, they obtain the sort of turbulent spectrum which may be interpreted in terms of an effective turbulent viscosity, and hence in terms of self-renormalization. In the next post, we will examine this further, beginning with a more detailed look at what is meant by the term renormalization.

[1] R. H. Kraichnan. Decay of isotropic turbulence in the Direct-Interaction Approximation. Phys. Fluids, 7(7):1030-1048, 1964.
[2] Z.-S. She and E. Jackson. Constrained Euler System for Navier-Stokes Turbulence. Phys. Rev. Lett., 70:1255, 1993.
[3] Cyril Cichowlas, Pauline Bonaïti, Fabrice Debbasch, and Marc Brachet. Effective Dissipation and Turbulence in Spectrally Truncated Euler Flows. Phys. Rev. Lett., 95:264502, 2005.
[4] W. J. T. Bos and J.-P. Bertoglio. Dynamics of spectrally truncated inviscid turbulence. Phys. Fluids, 18:071701, 2006.
[5] A. J. Young and W. D. McComb. Effective viscosity due to local turbulence interactions near the cutoff wavenumber in a constrained numerical simulation. J. Phys. A, 33:133-139, 2000.

# Alternative formulations for statistical theories: 2.

Alternative formulations for statistical theories: 2.

Carrying on from my previous post, I thought it would be interesting to look at the effect of the different formulations on statistical closure theories. In order to keep matters as simple as possible, I am restricting my attention to single-time theories and their forms for the transfer spectrum $T(k,t)$ as it occurs in the Lin equation (see page 56 in [1]). For instance, the form for this due to Edwards [2] may be written in terms of the spectral energy density $C(k,t)$ (or spectral covariance) as: $$T(k,t) = 4\pi k^{2}\int d^{3}j\, L(\mathbf{k},\mathbf{j})D(k,j,|\mathbf{k}-\mathbf{j}|)C( |\mathbf{k}-\mathbf{j}|,t)[C(j,t)-C(k,t)],$$where $$D(k,j,|\mathbf{k}-\mathbf{j}|) = \frac{1}{\omega(k,t)+\omega(j,t)+\omega(|\mathbf{k}-\mathbf{j}|,t)},$$and $\omega(k,t)$ is the inverse modal response time. The geometric factor $L(\mathbf{k},\mathbf{j})$ is given by:$$L(\mathbf{k},\mathbf{j}) = [\mu(k^{2}+j^{2})-kj(1+2\mu^{2})]\frac{(1-\mu^{2})kj}{k^{2}+j^{2}-2kj\mu},$$ where $\mu$ is the cosine of the angle between $\mathbf{k}$ and $\mathbf{j}$, and can be seen by inspection to have the symmetry:$$L(\mathbf{k},\mathbf{j}) = L(\mathbf{j},\mathbf{k}).$$From this it follows, again by inspection, that the integral of the transfer spectrum vanishes, as it must to conserve energy.
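The conservation property is also easy to check numerically. The sketch below uses assumed model forms for $\omega(k)$ and $C(k)$ (illustrative choices, not taken from the Edwards theory) and verifies that the symmetric kernel, multiplied by the antisymmetric factor $[C(j,t)-C(k,t)]$, integrates to zero:

```python
import math

def L_factor(k, j, mu):
    # geometric factor L(k, j), written in terms of k, j and mu
    return ((mu * (k * k + j * j) - k * j * (1 + 2 * mu * mu))
            * (1 - mu * mu) * k * j / (k * k + j * j - 2 * k * j * mu))

omega = lambda k: k ** (2.0 / 3.0)   # model inverse response time (assumed)
C = lambda k: math.exp(-k * k)       # model spectral energy density (assumed)

ks = [0.1 + 0.1 * i for i in range(60)]       # one grid for both k and j
mus = [-0.99 + 0.02 * i for i in range(100)]  # mu away from the ends
dk, dmu = 0.1, 0.02

# Integrate T(k) over k, with d^3j = 2*pi*j^2 dj dmu, giving an 8*pi^2 prefactor
total, scale = 0.0, 0.0
for k in ks:
    for j in ks:
        for mu in mus:
            kj = math.sqrt(k * k + j * j - 2 * k * j * mu)   # |k - j|
            D = 1.0 / (omega(k) + omega(j) + omega(kj))
            w = 8 * math.pi ** 2 * k * k * j * j * L_factor(k, j, mu) * D * C(kj)
            total += w * (C(j) - C(k)) * dk * dk * dmu
            scale += abs(w * C(j)) * dk * dk * dmu
print(abs(total) / scale)   # vanishes up to floating-point rounding
```

Because the kernel is symmetric under $k \leftrightarrow j$ and the quadrature grid is the same for both variables, the cancellation is exact up to rounding error.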

Edwards derived this as a self-consistent mean-field solution to the Liouville equation that is associated with the Navier-Stokes equation, and specialised it to the stationary case. Later, Orszag [3] derived a similar form by modifying the quasi-normality theory to obtain a closure called the eddy-damped quasi-normal Markovian (or EDQNM) model. Although physically motivated, this was an ad hoc procedure and involved an adjustable constant. For this reason it is strictly regarded as a model rather than a theory. As this closure is much used for practical applications, we write it in terms of the energy spectrum $E(k,t)=4\pi k^2 C(k,t)$ as:$$T(k,t) = \int _{p+q=k} D(k,p,q)(xy+z^{3}) E(q,t)[E(p,t)pk^{2}-E(k,t)p^{3}]\frac{dpdq}{pq},$$where $$D(k,p,q) = \frac{1}{\eta(k,t)+\eta(p,t)+\eta(q,t)},$$and $\eta(k,t)$ is the inverse modal response time (equivalent to $\omega(k,t)$ in the Edwards theory, but determined in a different way). Also, $(xy+z^{3})$ is a geometric factor, where $x$, $y$ and $z$ are the cosines of the angles of the triangle subtended, respectively, by $k$, $p$ and $q$.

My point here is that Orszag, like many others, followed Kraichnan rather than Edwards, and it is clear that you cannot deduce the conservation properties of this formulation by inspection. I should emphasise that the formulation can be shown to be conservative. But it is, in my opinion, much more demanding and complicated than the Edwards form, as I found out at the beginning of my postgraduate research, when I felt obliged to plough my way through it. At one point, Kraichnan acknowledged a personal communication from someone who had drawn his attention to an obscure trigonometrical identity which had proved crucial for his method. Ultimately I found the same identity in one of my old school textbooks [5]. The authors, both masters at Harrow School, had shown some prescience, as they noted that this identity was useful for applications!

During the first part of my research, I had to evaluate integrals which relied on the cancellation of pairs of terms which were separately divergent at the origin in wavenumber. At the time I felt that Kraichnan’s way of handling the three scalar wavenumbers would have been helpful, but I managed it nonetheless in the Edwards formulation. Later on I was to find out, as mentioned in the previous blog, that there were in fact snags to Kraichnan’s method too.

In 1990 [4] I wrote about the widespread use of EDQNM in applications. What was true then is probably much more the case today. It seems a pity that someone does not break ranks and employ this useful model closure in the Edwards formulation, rather than make ad hoc corrections afterwards for the case of wavenumber triangles with one very small side.

[1] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[2] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[3] S. A. Orszag. Analytical theories of turbulence. J. Fluid Mech., 41:363, 1970.
[4] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[5] A. W. Siddons and R. T. Hughes. Trigonometry: Part 2 Algebraic Trigonometry. Cambridge University Press, 1928.

# Alternative formulations for statistical theories: 1.

Alternative formulations for statistical theories: 1.

In the spectral representation of turbulence it is well known that interactions in wavenumber space involve triads of wave vectors, with the members of each triad combining to form a triangle. It is perhaps less well known that the way in which this constraint is handled can have practical consequences. This was brought home to me in 1984, when we published our first calculations of the Local Energy Transfer (LET) theory [1].

Our goal was to compare the LET predictions of freely decaying isotropic turbulence with those of Kraichnan’s DIA, as first reported in 1964 [2]. With this in mind, we set out to calculate both DIA and LET under identical conditions; and also to compare our calculations of DIA with those of Kraichnan, in order to provide a benchmark. We applied the Edwards formulation [3] of the equations to both theories; but, apart from that, in order to ensure strict comparability, we used exactly the same numerical methods as Kraichnan. Also, three of our initial spectral forms were the same as his, although we also introduced a fourth form to meet the suggestions of experimentalists when comparing with experimental results.

Reference should be made to [1] for details, but predictions of both theories were in line with experimental and numerical results in the field, with LET tending to give greater rates of energy transfer (and higher values of the evolved skewness factor) than DIA, which was assumed to be connected with its compatibility with the Kolmogorov spectrum. However, our calculation of the DIA value of the skewness was about 4% larger than Herring and Kraichnan found [4], which could only be explained by the different mathematical formulation.

Let us consider the two different ways of handling the wavenumber constraint, as follows.

Kraichnan’s notation involved the three wave vectors $\mathbf{k}$, $\mathbf{p}$, and $\mathbf{q}$; and used the identity: $$\int d^3p\int d^3q \,\delta(\mathbf{k}-\mathbf{p}-\mathbf{q})f(k,p,q)=\int_{p+q=k}dpdq\frac{2\pi pq}{k}f(k,p,q),$$ where the constraint is expressed by the Dirac delta function and $f(k,p,q)$ is some relevant function. Note that the domain of integration is in the $(p,q)$ plane, such that the condition $p+q=k$ is always satisfied.

Edwards [3] used a more conventional notation of $\mathbf{k}$, $\mathbf{j}$, and $\mathbf{l}$; and followed a more conventional route of simply integrating over one of the dummy wave vectors in order to eliminate the delta function, thus: $$\int d^3j\int d^3l \,\delta(\mathbf{k}-\mathbf{j}-\mathbf{l})f(k,j,l)=\int_0^\infty 2\pi j^2 dj\int_{-1}^{1}d\mu f(k,j,|\mathbf{k}-\mathbf{j}|),$$where $\mu = \cos \theta_{kj}$ and $\theta_{kj}$ is the angle between the vectors $\mathbf{k}$ and $\mathbf{j}$.

Of course the two formulations are mathematically equivalent. Where differences arise is in the way they handle rounding and truncation errors in numerical procedures. It was pointed out by Kraichnan [2] that corrections had to be made when triangles took the extreme form of having one side very much smaller than the other two. If this problem can lead to an error of about 4%, then it is worth investigating further. I will enlarge on this matter in my next post.
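The equivalence is straightforward to check numerically. Here is a sketch (the test function, cutoffs, and midpoint-rule grids are all assumed for illustration) comparing the two reductions at $k=1$ with $f = e^{-(p^2+q^2)}$:

```python
import math

k = 1.0
f = lambda a, b: math.exp(-(a * a + b * b))   # test function f(k,p,q), k fixed

# Kraichnan: int_{p+q=k} dp dq (2*pi*p*q/k) f, over the band |p-q| <= k <= p+q
h = 0.01
N = 600   # integrate p, q (and later j) up to 6, where f is negligible
kra = 0.0
for i in range(N):
    p = (i + 0.5) * h
    for m in range(N):
        q = (m + 0.5) * h
        if abs(p - q) <= k <= p + q:
            kra += 2 * math.pi * p * q / k * f(p, q) * h * h

# Edwards: int_0^inf 2*pi*j^2 dj int_{-1}^{1} dmu f(j, |k-j|)
hm = 0.005
M = 400
edw = 0.0
for i in range(N):
    j = (i + 0.5) * h
    inner = 0.0
    for m in range(M):
        mu = -1.0 + (m + 0.5) * hm
        l = math.sqrt(k * k + j * j - 2.0 * k * j * mu)   # |k - j|
        inner += f(j, l) * hm
    edw += 2 * math.pi * j * j * inner * h
print(kra, edw)   # the two reductions agree
```

The change of variable $l^2 = k^2 + j^2 - 2kj\mu$ maps the Edwards $\mu$-integral onto Kraichnan’s $(p,q)$ band, which is why the two numbers coincide up to quadrature error.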

[1] W. D. McComb and V. Shanmugasundaram. Numerical calculations of decaying isotropic turbulence using the LET theory. J. Fluid Mech., 143:95-123, 1984.
[2] R. H. Kraichnan. Decay of isotropic turbulence in the Direct-Interaction Approximation. Phys. Fluids, 7(7):1030-1048, 1964.
[3] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[4] J. R. Herring and R. H. Kraichnan. Comparison of some approximations for isotropic turbulence. In Statistical Models and Turbulence, Lecture Notes in Physics, volume 12, page 148. Springer, Berlin, 1972.

# From minus five thirds in wavenumber to plus two-thirds in real space.

From $k^{-5/3}$ to $x^{2/3}$.

From time to time, I have remarked that all the controversy about Kolmogorov’s (1941) theory arises because his real-space derivation is rather imprecise. A rigorous derivation relies on a wavenumber-space treatment; and then, in principle, one could derive the two-thirds law for the second-order structure function from Fourier transformation of the minus five-thirds law for the energy spectrum. However, the fractional powers can seem rather daunting and when I was starting out I was fortunate to find a neat way of dealing with this in the book by Hinze [1].

We will work with $E_1(k_1)$, the energy spectrum of longitudinal velocity fluctuations, and $f(x_1)$, the longitudinal correlation coefficient. Hinze [1] cites Taylor [2] as the source of the cosine-Fourier transform relationship between these two quantities, thus:$$U^2 f(x_1) = \int_0^\infty\, dk_1\,E_1(k_1) \cos(k_1x_1),$$and $$E_1(k_1) =\frac{2U^2}{\pi}\int_0^\infty\, dx_1\, f(x_1) \cos(k_1x_1),$$ where $U$ is the root mean square velocity.

In general, the power laws only apply in the inertial range, which means that we need to restrict the limits of the integrations. However, Hinze obtained a form which allows one to work with the definite limits given above, and reference should be made to page 198 of the first edition of his book [1] for the expression: $$U^2\left[1-f(x_1)\right] = C \int_0^\infty\,dk_1\,k_1^{-5/3}\left[1-\cos(k_1x_1)\right],\label{hinze}$$ where $C$ is a universal constant.

The trick he employed to evaluate the right hand side is to make the change of variables:$$y= k_1x_1 \quad \mbox{hence} \quad dk_1 =\frac{dy}{x_1}.$$ With this substitution, the right hand side of equation (\ref{hinze}) becomes: $$\mbox{RHS of (3)} = C x_1^{2/3}\,\int_0^\infty \,dy\, y^{-5/3}\,[1-\cos y].$$ Integration by parts then leads to: $$\int_0^\infty \,dy\, y^{-5/3}\,[1-\cos y]=\frac{3}{2}\int_0^\infty\, dy\, y^{-2/3}\, \sin y =\frac{3}{4}\Gamma(1/3),$$ where $\Gamma$ is the gamma function. Thus $U^2[1-f(x_1)] \propto x_1^{2/3}$, which is the two-thirds law. Note that I have omitted any time dependence for sake of simplicity, but of course this is easily added.
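The value of the final integral can be verified numerically. The sketch below (standard library only; the segment-and-average scheme is just one convenient way of taming the oscillatory tail) evaluates $\frac{3}{2}\int_0^\infty dy\, y^{-2/3}\sin y$ and compares it with $\frac{3}{4}\Gamma(1/3)$:

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson's rule on [a, b]; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def fourier_integral(f, nseg=200):
    # int_0^inf f(y) sin(y) dy as an alternating series over half-periods,
    # accelerated by repeatedly averaging the partial sums
    g = lambda y: f(y) * math.sin(y)
    terms = [simpson(g, n * math.pi, (n + 1) * math.pi) for n in range(nseg)]
    sums = [sum(terms[:k + 1]) for k in range(nseg)]
    for _ in range(40):
        sums = [(sums[i] + sums[i + 1]) / 2 for i in range(len(sums) - 1)]
    return sums[-1]

# (3/2) * int_0^inf y^(-2/3) sin(y) dy should equal (3/4) * Gamma(1/3)
val = 1.5 * fourier_integral(lambda y: y ** (-2.0 / 3.0) if y > 0 else 0.0)
exact = 0.75 * math.gamma(1.0 / 3.0)
print(val, exact)
```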

[1] J. O. Hinze. Turbulence. McGraw-Hill, New York, 1st edition, 1959. (2nd edition, 1975).
[2] G. I. Taylor. Statistical theory of turbulence. Proc. R. Soc., London, Ser.A, 151:421, 1935.

# Compatibility of temporal spectra with Kolmogorov (1941) and with random sweeping

Compatibility of temporal spectra with Kolmogorov (1941) and with random sweeping.

I previously wrote about temporal frequency spectra, in the context of the Taylor hypothesis and a uniform convection velocity of $U_c$, in my post of 25 February 2021. At the time, I said that I would return to the more difficult question of what happens when there is no uniform convection velocity present. I also said that this would not necessarily be next week, so at least I was right about that.

As in the earlier post, we consider a turbulent velocity field $u(x,t)$ which is stationary and homogeneous with rms value $U$. This time we just consider the dimensions of the temporal frequency spectrum $E(\omega)$. We use the angular frequency $\omega = 2\pi n$, where $n$ is the frequency in Hertz, in order to be consistent with the usual definition of wavenumber $k$. Integrating the spectrum, we have the condition: $$\int_0^\infty E(\omega) d\omega = U^2,$$which gives us the dimensions: $$\mbox{Dimensions of}\; E(\omega)d\omega = L^2 T^{-2};$$ or velocity squared.

For many years, the literature relating to the wavenumber-frequency correlation $C(k,\omega)$ has been dominated by the question: is decorrelation due to random sweeping effects, which would mean that the characteristic time is the sweeping timescale $(Uk)^{-1}$; or is it characterised by the Kolmogorov timescale $(\varepsilon^{1/3}k^{2/3})^{-1}$? A recent article [1] makes a typical point about the consequences for the frequency spectrum of the dominance of the sweeping effect: ‘… the frequency energy spectrum of Eulerian velocities exhibits a $\omega^{-5/3}$ decay, instead of the $\omega^{-2}$ expected from K41 scaling’. Which is counter-intuitive at first sight! As we saw in my blog of 26/02/21, for the case of uniform convection $\omega^{-5/3}$ is associated with K41.

Let us begin by clearing up the latter point. The authors of [1] cite the book by Monin and Yaglom, but I was unable to find it. (I mean the reference, not the book, which is quite conspicuous on my bookshelves. I think that anyone giving a reference to a book should cite the page number. Sometimes I do that and sometimes I forget!) In any case, it is easy enough to work out. From equation (2) we have the dimensions of $E(\omega)$ as $L^{2}T^{-1}$. From the K41 approach we can write for the inertial range: $$E(\omega) \sim \varepsilon^{n}\omega^{m} \sim \varepsilon \omega^{-2},$$ where the indices $n=1$ and $m=-2$ follow from dimensional consistency, fixing the dependence on $\varepsilon$ first.
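The exponent matching can be written out as elementary bookkeeping on the $(L, T)$ dimension exponents; a trivial sketch:

```python
# Represent each quantity by its (L, T) dimension exponents and solve
# eps^n * omega^m = E(omega) for the indices n and m.
eps = (2, -3)     # dissipation rate:   L^2 T^-3
omega = (0, -1)   # angular frequency:  T^-1
E_w = (2, -1)     # frequency spectrum: L^2 T^-1, since E(w) dw = L^2 T^-2

n = E_w[0] / eps[0]                    # match the powers of L
m = (E_w[1] - n * eps[1]) / omega[1]   # then match the powers of T
print(n, m)   # -> 1.0 -2.0
```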

The interest in random convective sweeping mainly stems from Kraichnan’s analysis of his direct-interaction approximation (DIA), dating back to 1959. A general discussion of this will be found in the book [2], but we can take a shortcut by noting that Kraichnan obtained an approximate solution for the response function $G(k,\tau)$ of his theory (see page 219 of [2]) as: $$G(k,\tau)=\frac{\exp(-\nu k^2\tau)J_1(2Uk\tau)}{Uk\tau},$$ where $\tau = t-t'$, $\nu$ is the kinematic viscosity, and $J_1$ is a Bessel function of the first kind. The interesting thing about this is that the K41 characteristic time for the inertial range does not appear. Also, in the inertial range, the exponential factor can be set equal to unity, and the decay is determined by the sweeping time $(Uk)^{-1}$.

Corresponding to this solution for the inertial range, the energy spectrum takes the form: $$E(k) \sim (\varepsilon U)^{1/2}k^{-3/2},$$ as given by equation (6.50) in [2]. As is well known, this $-3/2$ law is sufficiently different from the observed form, which is generally compatible with the K41 $-5/3$ wavenumber spectrum, to be regarded as incorrect. We can obtain the frequency spectrum corresponding to the random sweeping hypothesis by simply replacing the convective velocity $U_c$, as used in Taylor’s hypothesis, by the rms velocity $U$. From equation (8) of the earlier blog, we have: $$E(\omega) \sim (\varepsilon U_c)^{2/3}\omega^{-5/3} \rightarrow (\varepsilon U)^{2/3}\omega^{-5/3} , \quad \mbox{when} \quad U_c \rightarrow U.$$
This result is rather paradoxical, to say the least. In order to get a $-5/3$ dependence on frequency, we have to have a $-3/2$ dependence on wavenumber! It is many years since I looked into this and, in view of the continuing interest in the subject, I have begun to re-examine it. For the moment, I would make just one observation.

Invoking Taylor’s expression for the dissipation rate, which is: $\varepsilon = C_\varepsilon U^3/L$, where $L$ is the integral lengthscale (not to be confused with the symbol for the length dimension) and $C_\varepsilon$ asymptotes to a constant value for Taylor-Reynolds numbers $R_\lambda \sim 100$ [3], we may examine the relationship between the random sweeping and K41 timescales. Substituting for the rms velocity, we have: $$\tau_{sweep} =(Uk)^{-1}\sim (\varepsilon^{1/3}L^{1/3}k)^{-1}.$$ Then, putting $k\sim 1/L \equiv k_L$, we obtain:$$\tau_{sweep}\sim (\varepsilon^{1/3}k_L^{2/3})^{-1} = \tau_{K41}(k_L).$$ So the random sweeping timescale becomes equal to the K41 timescale for wavenumbers in the energy-containing range. Just to make things more puzzling!
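The same comparison can be made at a general wavenumber. With $U \sim (\varepsilon L)^{1/3}$, the ratio of the two timescales is $$\frac{\tau_{sweep}(k)}{\tau_{K41}(k)} = \frac{\varepsilon^{1/3}k^{2/3}}{Uk} \sim \frac{\varepsilon^{1/3}k^{2/3}}{(\varepsilon L)^{1/3}k} = (kL)^{-1/3},$$ so the two coincide at $k \sim k_L$, while deep in the inertial range, where $kL \gg 1$, the sweeping timescale is the shorter of the two.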

[1] A. Gorbunova, G. Balarac, L. Canet, G. Eyink, and V. Rossetto. Spatiotemporal correlations in three-dimensional homogeneous and isotropic turbulence. Phys. Fluids, 33:045114, 2021.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[3] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.

# The question of notation.

The question of notation.

In recent years, when I specify the velocity field for turbulence, I invariably add a word of explanation about my use of Greek letters for Cartesian tensor indices. I point out that these Greek indices should not be confused with those used in four-space for four-dimensional tensors, as encountered in Einstein’s Relativity. I think that I began to do this round about the time I retired in 2006, and at the same time began looking at problems in phenomenology. Previously I had just followed Sam Edwards, who had been my PhD supervisor, because it seemed such a very good idea. By reserving Greek letters for indices, one could use letters like $k$, $j$, $l$, $p$ $\dots$ for wavenumbers, which reduced the number of primed or multiply-primed variables needed in perturbation theory.

Presumably it had occurred to me that a different audience might not be familiar with this convention, or perhaps some referee rejected a paper because he didn’t know what Greek letters were [1]? In any case, it was only recently that it occurred to me that Kolmogorov actually uses this convention too. In fact in the paper that I refer to as Kolmogorov 41A [2], one finds the first sentence: ‘We shall denote by $u_{\alpha} (P) = u_{\alpha} (x_1,x_2,x_3), \quad \alpha=1,\,2,\,3,$ the components of the velocity at the moment $t$ at the point with rectangular Cartesian coordinates $x_1,x_2,x_3$.’ So in future, I could say ‘as used by Kolmogorov’.

Kolmogorov also introduced the second-order and third-order longitudinal structure functions as $B_{dd}(r,t)$ and $B_{ddd}(r,t)$ (the latter appearing in K41B [3]), and others followed similar schemes, with the number of subscript $d$s increasing with order. This was potentially clumsy, and when experimentalists became able to measure high-order moments in the 1970s, they resorted to the notation $S_n(r,t)$. That is, $S$ for ‘structure function’ and integer $n$ for order, which is nicely compact.

During the sixties, statistical turbulence theories used a variety of notations. Unfortunately, for some people a quest for an original approach to a well known problem can begin with a new notation. On one occasion, I remember thinking that I didn’t even know how to pronounce the strange symbol that one optimistic theorist had used for the vertex operator of the Navier-Stokes equation. That was back in the early 1970s and it is still somewhere in my office filing cabinets. I don’t think I missed anything significant by not reading it!

Notational changes should be undertaken with caution. During the late 1990s I was just about the only person working on statistical closure theory (at least, in Eulerian coordinates) and I decided to adopt an emerging convention in dynamical systems theory. That is, I decided to represent all correlations by $C$ and response tensors by $R$.

The only other change I made was to change the symbol for the transverse projection operator to Kraichnan’s use of $P$, from Edwards’s use of $D$. The result is, in my view, a notationally more elegant formalism; and perhaps if people again start taking an interest in renormalized perturbation theories and renormalization group, this would get them off to a good start.

However, there can be more to a formalism than just the notation. The true distinction between the two really lies in the formulation. Starting with the basic vector triad $\mathbf{k},\mathbf{j},\mathbf{l}$, Edwards used the triangle condition to eliminate the third vector as $\mathbf{l}=\mathbf{k}-\mathbf{j}$. This was done by others, but in the context of the statistical theories virtually everyone followed Kraichnan’s much more complicated approach, in which he retained the three scalar magnitudes and imposed on all sums/integrals the constraint that they should always add up to a triangle. The resulting formulation is more opaque, more difficult to compute, and does not permit symmetries to be deduced by simple inspection. Yet for some reason virtually everyone follows it, most obviously in the use of EDQNM as a model for applications. A concise account of the two different formalisms can be found in Section 3.5 of the book [4].

[1] Just joking! I’ve never had a paper rejected for that reason, but some rejections over the years have not been a great deal more sensible.
[2] A. N. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. C. R. Acad. Sci. URSS, 30:301, 1941.
[3] A. N. Kolmogorov. Dissipation of energy in locally isotropic turbulence. C. R. Acad. Sci. URSS, 32:16, 1941.
[4] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.

# The last post of the weekly Blogs … but intermittently hereafter!

The last post of the weekly Blogs … but intermittently hereafter!
I posted the first of these blogs on 6 February 2020, just as the pandemic was getting under way. Since then (and slightly to my surprise) I have managed to post a blog every week. In the case of holidays, I wrote an appropriate number of blogs in advance and scheduled them to post at the appropriate times.

I wouldn’t like to say that I couldn’t have managed this without the pandemic, but it has certainly cut down on my other ways of spending time. And, for that matter energy, because in normal times I play badminton twice a week. At the same time, I should acquit myself of taking advantage of the pandemic. A year or so earlier, I had set up a WordPress site on servers installed on my own computer, and had constructed a somewhat satirical blog dealing with the mythical activities at a fictitious university. This gave me practice, and when the University of Edinburgh advertised a blogging support service for academic staff, I was ready to go. As I hesitated on the brink, a very helpful young person in Information Services gave me the requisite push and my blogging career was fairly launched.

Now that I have completed two years, I am finding the regular weekly deadline rather onerous. To be blunt, it is getting in the way of jobs that need a longer-term approach. At the same time, there are still many things that I wish to blog about. So from now on, I intend to blog only when I can do so without losing momentum at some other task.
What I hope you will do is fill in the little form that you will find beside any blog, with your email address. This will ensure that in future you will receive a notification of each blog as it is posted. I realise that people can be hesitant about putting their name on a list. I can be hesitant myself, one worry being that it may be difficult to get off again. In this case nothing could be simpler. When you receive your notification email, you just have to click on a link and you will be automatically removed. I have checked this and can assure you that it works.

To end on a positive note, I intend to produce a book which will literally just consist of the individual posts organised into chapters corresponding to the months. I think there is sufficient material contained in them to make an index helpful and, in the case of the ebook version, it will also be searchable. I am very conscious of the need for these things, as at the moment I have to rely on my memory to be sure that I’m not repeating myself!

# From ‘wavenumber murder’ to wavenumber muddle?

From ‘wavenumber murder’ to wavenumber muddle?
In my post of 20 February 2020, I told of the referee who described my use of Fourier transformation as ‘the usual wavenumber murder’. I speculated that the situation had improved over the years due to the use of pseudo-spectral methods in direct numerical simulation, although I was able to quote a more recent example where a referee rejected a paper because he wasn’t comfortable with the idea that structure functions could be evaluated from the corresponding spectra.

However, while it is good to see a growing use of spectral methods, at the same time there are differences between the $x$-space and $k$-space pictures, and this can be confusing. Essentially, the phenomenology of fluid dynamicists has been based on the energy conservation equation in real-space, mostly using structure functions; whereas theorists have worked with the energy balance in wavenumber space as a closure problem for renormalization methods. This separation of activities has gone on over many decades.

For the purpose of this post, I want to look again at the Kolmogorov-Obukhov (1941) theory in $x$-space and $k$-space. Kolmogorov worked in real space and it is convenient to denote his two different derivations of inertial range forms as K41A [1] and K41B [2]. We will concentrate on the second of these, where he derived the well-known ‘4/5’ law for $S_3(r)$, from the Kármán-Howarth equation (KHE). We have quoted this previously and it may be obtained from the book [3] as: $$\varepsilon =-\frac{3}{4}\frac{\partial S_2}{\partial t}+\frac{1}{4r^4}\frac{\partial (r^4 S_3)}{\partial r} +\frac{3\nu}{2r^4}\frac{\partial}{\partial r}\left(r^4\frac{\partial S_2}{\partial r}\right),$$ and all the symbols have their usual meanings.

In order to solve this equation for $S_3$, Kolmogorov neglected both the time-derivative of $S_2$ and the viscous term, and thus obtained a de facto closure. In the case of stationary turbulence the first step is exact, but for decaying turbulence it is an approximation for the inertial range, which Kolmogorov called local stationarity. Later, Batchelor referred to this as equilibrium [4], which is rather unfortunate, as turbulence is the archetypal non-equilibrium problem. In fact Batchelor was carrying on Taylor’s idea that the Fourier modes acted as mechanical degrees of freedom and so could be treated by the methods of statistical mechanics. As the classical canon of solved problems in statistical mechanics is limited to thermal equilibrium (normally referred to simply as equilibrium), Batchelor was arguing that Taylor’s approach would be valid for the inertial range. In fact it is not, because the modes are strongly coupled, and that problem too lies outside the classical canon.
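For completeness, the closure step can be displayed explicitly: dropping the time-derivative and viscous terms in the equation above leaves $$\frac{1}{4r^{4}}\frac{\partial (r^{4}S_{3})}{\partial r} = \varepsilon,$$ which integrates, with the constant of integration fixed by regularity at the origin, to $S_{3} = \frac{4}{5}\varepsilon r$: the ‘4/5’ law, in the sign convention of the equation as written here.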

In any case, the neglect of the time-derivative of $S_2$ is a key step, and its justification in time-dependent flows poses a problem. More recently, McComb and Fairhurst [5] showed that the neglect of this term cannot be an exact step, and also cannot be justified by appeal to large Reynolds numbers or by restriction to any particular range of values of $r$: in the limit of large Reynolds numbers the term tends to a constant, so its neglect must be justified by either measurement or numerical simulation.

The situation is really quite different in wavenumber space. Here we have the Lin equation, which is the Fourier transform of the KHE and takes its simplest form as: $$\left(\frac{\partial}{\partial t} + 2\nu k^2\right)E(k,t) = T(k,t),$$ where $$T(k,t)= \int_0^\infty dj\, S(k,j;t),$$ and $S(k,j;t)$ can be expressed in terms of the third-order moment $C_{\alpha\beta\gamma}(\mathbf{j},\mathbf{k-j},\mathbf{-k};t)$.

One immediate difference is that the KHE is purely local in the variable $r$, whereas the Lin equation is non-local in wavenumber. In fact all Fourier modes are coupled together. We can define the inter-mode energy flux as: $$\Pi (\kappa,t) = \int_\kappa^\infty dk\,T(k,t) = -\int_0^\kappa dk \, T(k,t).$$The criterion for an inertial range of wavenumbers is that the condition $\Pi = \varepsilon$ should hold and this is nowadays referred to as scale invariance. It does not apply in any way to the situation in real space and it has no connection with the concept of local stationarity which was renamed equilibrium by Batchelor.
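The two expressions for $\Pi(\kappa,t)$ are consistent because the antisymmetry of $S(k,j;t)$ under $k \leftrightarrow j$ makes $T(k,t)$ integrate to zero. A sketch with an assumed model kernel (the forms of $g$ and $h$ below are arbitrary illustrative choices, not taken from any theory):

```python
import math

# Build an antisymmetric model kernel S(k,j) = g(k)h(j) - g(j)h(k), so that
# T(k) = int_0^inf S(k,j) dj integrates to zero over all k, and the two
# expressions for the flux Pi(kappa) must then agree.
g = lambda k: k * k * math.exp(-k)
h = lambda k: math.exp(-0.5 * k)
S = lambda k, j: g(k) * h(j) - g(j) * h(k)   # antisymmetric by construction

dk = 0.02
grid = [(i + 0.5) * dk for i in range(800)]          # wavenumbers 0..16
T = [sum(S(k, j) for j in grid) * dk for k in grid]  # transfer spectrum

i_kappa = 250                                        # kappa = 5.0
Pi_high = sum(T[i_kappa:]) * dk                      # int_kappa^inf T dk
Pi_low = -sum(T[:i_kappa]) * dk                      # -int_0^kappa T dk
print(Pi_high, Pi_low)   # the two fluxes agree
```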

Lastly, the interpretation of the time-derivative term in wavenumber space is quite different from that in real space. We may see this by rearranging the Lin equation as: $$-T(k,t) = I(k,t) - 2\nu k^2 E(k,t), \quad \mbox{where} \quad I(k,t) = -\frac{\partial E(k,t)}{\partial t}.\label{diff}$$Evidently for free decay the input term $I(k,t)$ is positive, and this is actually how Uberoi [6] made the first measurements of the transfer spectrum in grid turbulence. He measured the input term and the viscous term, and used equation (\ref{diff}) to evaluate $T(k,t)$.

McComb and Fairhurst [5] pointed out that the constant value of the time-derivative term in the limit of infinite Reynolds numbers in $r$-space Fourier transforms to a delta function at the origin in $k$-space. In other words, this amounts to a derivation of the form postulated by Edwards [7] (following Batchelor [4]), in which the transfer spectrum is given in terms of the Dirac delta function by:$$-T(k,t) = \varepsilon\, \delta(k) -\varepsilon\, \delta(k - \infty),$$ in the limit of infinite Reynolds numbers, although the Edwards form was for the stationary case.

This of course is a very extreme situation. The key point to note is that, while the time-derivative of $S_2$ poses a problem for local stationarity in $r$-space, the time-derivative of $E(k,t)$ poses no problem for scale invariance in $k$-space. This is why the $-5/3$ spectrum is so widely observed.

[1] A. N. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. C. R. Acad. Sci. URSS, 30:301, 1941.
[2] A. N. Kolmogorov. Dissipation of energy in locally isotropic turbulence. C. R. Acad. Sci. URSS, 32:16, 1941.
[3] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[4] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 1st edition, 1953.
[5] W. D. McComb and R. B. Fairhurst. The dimensionless dissipation rate and the Kolmogorov (1941) hypothesis of local stationarity in freely decaying isotropic turbulence. J. Math. Phys., 59:073103, 2018.
[6] M. S. Uberoi. Energy transfer in isotropic turbulence. Phys. Fluids, 6:1048, 1963.
[7] S. F. Edwards. Turbulence in hydrodynamics and plasma physics. In Proc. Int. Conf. on Plasma Physics, Trieste, page 595. IAEA, 1965.

# Is the concept of an energy cascade a help or a hindrance?


In his 1947 exegesis of Kolmogorov’s theory, Batchelor [1] explained the underlying idea of a transfer of energy from large eddies to progressively smaller eddies, until the (local) Reynolds number becomes too small for new eddies to form. He pointed out that the situation had been summarized in a rhyme which he believed was due to L. F. Richardson (no reference given) and which is very well known as:

Big whirls have little whirls, Which feed on their velocity,

And little whirls have lesser whirls, And so on to viscosity!

Incidentally he misquoted ‘whirls’ as ‘whorls’, and since then most people seem to have followed suit.

In his discussion Batchelor sometimes followed Kolmogorov, and referred to ‘pulsations’ while at other times he used the more usual ‘eddies’. This variation seems to actually underline the lack of precision of the concept; although, despite this, it is intuitively appealing.

The term ‘cascade’, with its connotations of a stepwise process, and indeed of localness, is also appealing. According to Eyink and Sreenivasan [2] it was first used by Onsager [3]; but it is his earlier use of the concept of energy transfer in wavenumber space [4] that is truly significant. Obukhov had already obtained the inertial-range spectrum corresponding to Kolmogorov’s result for the second-order structure function, but this involved the introduction of an ad hoc eddy viscosity [5]. In [4], Onsager essentially pointed out that the energy flux through modes must be constant in the inertial range. This is the property that is often referred to as scale invariance.

The physics of turbulent energy transfer and dissipation can readily be deduced from the equation of motion in wavenumber space; but it is interesting to put oneself in the position of Richardson, looking at (one imagines) snapshots of flow visualizations and creating his mental picture of a cascade of eddies. The equation of motion in real space would have given some limited help perhaps. Evidently the nonlinear term had to be responsible for the creation of new, smaller eddies; and it was known that this term conserved energy. Also, one could deduce that the viscous term would be more significant at the smallest scales. Nevertheless, it was a remarkable achievement to summarise the essence of turbulence in this very persuasive way. So what are its disadvantages?

The first disadvantage, in my view, is that it focuses attention on a single snapshot of the turbulence. Or, in statistical terms, on a single realization. This leads to people drawing conclusions that require a single realization (e.g. the importance of internal intermittency). However, we must always bear in mind that we need average quantities, and to get to them we actually have to take an average. So, if we average our single snapshot of a flow visualization by taking many such snapshots and constructing a form of ensemble average, the result is a blur! For instance, the recent paper by Yoffe and McComb [6] shows how internal intermittency disappears under ensemble averaging.

Paradoxically, my choice for second disadvantage is that I have concluded that the term ‘cascade’ is unhelpful when applied to the inter-mode energy flux. And this, I may say, is despite the fact that I have spent a working lifetime doing just that! In principle, every wavenumber is coupled to every other wavenumber by the nonlinear term. So we can see the attraction of having some sort of cascade or idea of localness. Indeed, in the 1980s/90s there was quite a lot of attention given, using numerical simulations, to the relative importance of different triads of wavenumbers for energy transfer. Now I am not disparaging that work in any way, but it is very complicated and should not distract us from the essential fact: the flux of energy through a wavenumber $\kappa$, from all other wavenumbers less than $\kappa$, is constant for all values of $\kappa$ in the inertial range. This fact is all the ‘localness’ that we need for the Obukhov-Onsager energy spectrum.

Lastly, it should be understood that the cascade in real space is spread out in space and time. That is, if we distinguish between scale and position by introducing relative and centroid coordinates, thus: $r= (x-x’) \quad \mbox{and}\quad R=(x+x’)/2,$ then in order to observe a cascade through scale $r$ we have to change the position of observation $R$ with time. In contrast, the flux through a mode with wavenumber $\kappa$ takes place at a single value of $R$. It is for this reason that the flux in wavenumber space cannot be applied to the cascade in real space.

Still, the term ‘cascade’, in the context of wavenumber space, is so embedded in the general consciousness (including my own!) that there is little possibility of making a change.

[1] G. K. Batchelor. Kolmogorov’s theory of locally isotropic turbulence. Proc. Camb. Philos. Soc., 43:533, 1947.
[2] G. L. Eyink and K. R. Sreenivasan. Onsager and the Theory of Hydrodynamic Turbulence. Rev. Mod. Phys., 78:87, 2006.
[3] L. Onsager. Statistical Hydrodynamics. Nuovo Cim. Suppl., 6:279, 1949.
[4] L. Onsager. The Distribution of Energy in Turbulence. Phys. Rev., 68:281, 1945. (Abstract only).
[5] A. M. Obukhov. On the distribution of energy in the spectrum of turbulent flow. C.R. Acad. Sci. U.R.S.S, 32:19, 1941.
[6] S. R. Yoffe and W. D. McComb. Does intermittency affect the inertial transfer rate in stationary isotropic turbulence? arXiv:2107.09112v1[physics.flu-dyn], 2021.

# Chaos and Complexity.

In the previous blog we discussed the growth of interest in deterministic chaos in low-dimensional dynamical systems, and the way in which it impinged on turbulence theory. Altogether, it seemed like a paradigm shift, in that we learned that only quantum effects were truly random, and that all classical effects were deterministic. If one knew the initial conditions of a classical dynamical system then one could, in principle, predict its entire evolution in time. If! Anyway, in those days we began to refer to the turbulent velocity field as being chaotic rather than random; but I suspect that the majority are now back to random.

However, such ideas also arose in the late 19th Century, as part of the invention of Statistical Mechanics, with Boltzmann’s assumption of molecular chaos. This was to the effect that molecular motion was uncorrelated immediately before and after a two-body collision. It was made in the context of a gas modelled as $N$ particles in a box (where $N$ is of the order of Avogadro’s number), and the motion of the particles is governed by Hamilton’s equations. The system can be specified by the $N$-particle distribution (or density) which is the solution of Liouville’s equation. Although an exact formulation, this theory is contrary to experience because the entropy is found to be constant in time, which contradicts the second law of thermodynamics. This result is a well-known paradox and it was resolved by Boltzmann in his famous $H$-theorem.

Boltzmann wrote the entropy $S$ in terms of a measure $H$ of the information about the system, thus: $S=-k_B H \quad \mbox{where} \quad H=\int \, du \, f(u,t) \,\ln f(u,t),$ in which $u$ is the molecular speed and $f(u,t)$ is the single-particle distribution of molecular speeds. In obtaining the equation for $f$, Boltzmann had to overcome a closure problem (much as in turbulence!) and his principle of molecular chaos justified the factorization of the two-particle function into the product of two single-particle functions. So Boltzmann's $H$-theorem is that $H$ decreases with time, meaning that the entropy increases.
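As an illustrative sketch (using a one-dimensional Gaussian in place of the Maxwellian speed distribution, purely for simplicity), one can check numerically that a broader distribution has a smaller $H$, and hence a larger entropy $S=-k_B H$:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal-rule integral of samples y over grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def H(f, u):
    """Boltzmann's H-functional, H = int du f ln(f)."""
    g = np.where(f > 0.0, f * np.log(np.where(f > 0.0, f, 1.0)), 0.0)
    return trapz(g, u)

def gaussian(u, sigma):
    return np.exp(-u**2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)

u = np.linspace(-50.0, 50.0, 200_001)

H_narrow = H(gaussian(u, 1.0), u)   # analytically -0.5*ln(2*pi*e*sigma^2)
H_broad = H(gaussian(u, 3.0), u)

# The broader distribution carries less information: H is smaller,
# so the entropy S = -k_B * H is larger.
assert H_broad < H_narrow
assert abs(H_narrow + 0.5 * np.log(2.0 * np.pi * np.e)) < 1e-6
```

For a Gaussian of width $\sigma$ the exact value is $H=-\tfrac{1}{2}\ln(2\pi e\sigma^2)$, which the numerical integral reproduces.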

Although this is not so well known, the paradox was also resolved by Gibbs, albeit in a more fundamental way. He showed that if a small amount of the information was lost from $H$, then it was no longer invariant and would increase with time. In his case, this was achieved by coarse-graining the exact Liouville distribution function so that it was no longer exact, but it could equally well be achieved in practice by a slight deficiency in our specification of the initial conditions (position and momentum) of each of the $N$ particles. In fact, to put it bluntly, it would be difficult (or perhaps impossible) to specify the initial state of a system of order $10^{20}$ degrees of freedom with sufficient accuracy to avoid the system entropy increasing with time.

The point to be taken from all this is that Hamilton’s equations, although themselves reversible in time, can nevertheless describe a real system which has properties that are not reversible in time. The answer lies in the complexity of the system.
This applies just as much to the quantum form of Hamilton’s equations. Recently there has been an international discussion (by virtual means) of the Einstein-Podolsky-Rosen paradox, which asserts that quantum mechanics is not a complete theory. I read some of the contributions to this, but was not impressed. In particular the suggestion was put forward that the time-reversal symmetry of the basic quantum equations of motion, ruled out their ability to describe a real world which undergoes irreversible changes; something that is generally referred to as ‘time’s arrow’. But of course the same applies to the classical form of the equations, and one must bear in mind that one has to take not only large-$N$ limits but also continuum limits to describe the real world.

There are lessons here, at least in principle, for turbulence theorists too, and I have given specific instances such as the irrelevance of intermittency or the incorrectness of the Onsager conjecture (when judged by the physics). No doubt I will give more in the months to come. Background material for this blog can be found in the lecture notes [1], which can be downloaded free of charge.

[1] W. David McComb. Study Notes for Statistical Physics: A concise, unified overview of the subject. Bookboon, 2014. (Free download of pdf from Bookboon.com)

# Fashions in turbulence theory.


Back in the 1980s, fractals were all the rage. They were going to solve everything, and turbulence was no exception. The only thing that I can remember from their use in microscopic physics is that the idea was applied to the problem of diffusion-limited aggregation, and I've no memory of how successful they were (or were not). In turbulence they were a hot topic for solving the supposed problem of intermittency, and there was a rash of papers decorated with esoteric mathematical terms. This could be regarded as 'Merely corroborative detail, intended to give artistic verisimilitude to an otherwise bald and unconvincing narrative' [1]. When these proved inadequate, the next step was multifractals, which rather underlined the fact that this approach was at best a phenomenology, rather than a fundamental theory. And that activity too seems to have died away.

Another fashion of the 1970s/80s was the idea of deterministic chaos. This began around 1963 with the Lorenz system, a set of simple differential equations intended to model atmospheric convection. These equations were readily computed, and it was established that their solutions were sensitive to small changes in initial conditions. With the growing availability of desktop computers in the following decades, low-dimensional dynamical systems of this kind provided a popular playground for mathematicians and we all began to hear about Lorenz attractors, strange attractors, and the butterfly effect. Just to make contact with the previous fashion, the phase-space portraits of these systems were often found to have a fractal structure!
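The sensitivity is easy to reproduce. A minimal integration of the Lorenz equations at the standard parameters ($\sigma=10$, $\rho=28$, $\beta=8/3$; the step size and initial conditions below are arbitrary choices for illustration) shows two initially near-identical trajectories diverging by many orders of magnitude:

```python
import numpy as np

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz equations."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Two trajectories whose initial conditions differ by one part in 10^8.
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])
dt, steps = 0.01, 3000   # integrate to t = 30

for _ in range(steps):
    a, b = lorenz_step(a, dt), lorenz_step(b, dt)

separation = np.linalg.norm(a - b)
# Sensitive dependence: the initial difference of 1e-8 has been
# amplified enormously (though it stays bounded by the attractor size).
assert separation > 1e-3
```
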

In 1990, a reviewer of my first book [2] rebuked me for saying so little about chaos and asserted that it would be a dominant feature of turbulence theory in the future. Well, thirty years on and we are still waiting for it. The problem with this prediction is that turbulence, in contrast to the low-dimensional models studied by the chaos enthusiasts, involves large numbers of degrees of freedom; and these are all coupled together. As a consequence, the average behaviour of a turbulent fluid is really quite insensitive to fine details of its initial conditions. In reality that butterfly can flap its wings as much as it likes, but it isn’t going to cause a storm.

In fairness, although we have gone back to using our older language of ‘random’ rather than ‘chaotic’ when studying turbulence, the fact remains that deterministic chaos is actually a very useful concept. This is particularly so when taken in the context of complexity, and that will be the subject of our next post.

[1] W. S. Gilbert, The Mikado, Act 2, 1885.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.

# Summary of the Kolmogorov-Obukhov (1941) theory: overview.


In the last three posts we have summarised various aspects of the Kolmogorov-Obukhov (1941) theory. When considering this theory, the following things need to be borne in mind.

[a] Whether we are working in $x$-space or $k$-space matters. See my posts of 8 April and 15 April 2021 for a concise general discussion.
[b] In $x$-space the equation of motion (NSE) simply presents us with the problem of an insoluble, nonlinear partial differential equation.
[c] In $k$-space the NSE presents a problem in statistical physics and in itself tells us much about the transfer and dissipation of turbulent kinetic energy.
[d] The Karman-Howarth equation is a local energy balance that holds for any particular value of the distance $r$ between two measuring points.
[e] There is no energy flux between different values of $r$; or, alternatively, through scale.
[f] The energy flux $\Pi(k)$ is derived from the Lin equation (i.e. in wavenumber space) and cannot be applied in $x$-space.
[g] The maximum value of the energy flux, $\Pi_{max}=\varepsilon_T$ (say), is a number, not a function, and can be used (like the dissipation $\varepsilon$) in both $k$-space and $x$-space.
[h] It also matters whether the isotropic turbulence we are considering is stationary or decaying in time.
[i] If the turbulence is decaying in time, then K41B relies on Kolmogorov's hypothesis of local stationarity. It has been pointed out in a previous post (Part 2 of the present series) that this hypothesis cannot hold, either by restriction to a range of scales or in the limit of infinite Reynolds numbers [1]. See also the supplemental material for [2].
[j] In $k$-space this is not a problem and the $k^{-5/3}$ spectrum can still be expected [1], as of course is found in practice.
[k] If the turbulence is stationary, then K41B is exact for a range of wavenumbers at sufficiently large Reynolds numbers. The extent of this inertial range increases with increasing Reynolds number.

I have not said anything in this series about the concept of intermittency corrections or anomalous exponents. This topic has been dealt with in various blogs and soon will be again.

[1] W. D. McComb and R. B. Fairhurst. The dimensionless dissipation rate and the Kolmogorov (1941) hypothesis of local stationarity in freely decaying isotropic turbulence. J. Math. Phys., 59:073103, 2018.
[2] W. David McComb, Arjun Berera, Matthew Salewski, and Sam R. Yoffe. Taylor’s (1935) dissipation surrogate reinterpreted. Phys. Fluids, 22:61704, 2010.

# Summary of the Kolmogorov-Obukhov (1941) theory. Part 3: Obukhov’s theory in k-space.


Obukhov is regarded as having begun the treatment of the problem in wavenumber space. In [1] he referred to an earlier paper by Kolmogorov for the spectral decomposition of the velocity field in one dimension and pointed out that the three-dimensional case is carried out similarly by multiple Fourier integrals. He employed the Fourier-Stieltjes integral but fortunately this usage did not survive. For many decades the standard Fourier transform has been employed in this field.

[a] Obukhov’s paper [1] was published between K41A and K41B, and was described by Batchelor as ‘to some extent anticipating the work of Kolmogorov’. He worked with the energy balance in $k$-space and, influenced by Prandtl’s work, introduced an ad hoc closure based on an effective viscosity.

[b] The derivation of the ‘-5/3’ law for the energy spectrum seems to have been due to Onsager [2]. He argued that Kolmogorov’s similarity principles in $x$-space would imply an invariant flux (equal to the dissipation) through those wavenumbers where the viscosity could be neglected. Dimensional analysis then led to $E(k) \sim \varepsilon^{2/3}k^{-5/3}$.
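For completeness, the dimensional bookkeeping behind this step (standard, and not a quotation from [2]) runs as follows. Since $\int_0^\infty E(k)\,dk$ is the energy per unit mass, $$[E(k)] = L^{3}T^{-2}, \qquad [\varepsilon] = L^{2}T^{-3}, \qquad [k] = L^{-1}.$$ Writing $E(k) = C\,\varepsilon^{a}k^{b}$ and matching powers of time gives $-2=-3a$, hence $a=2/3$; matching powers of length then gives $3 = 2a - b$, hence $b = 2a - 3 = -5/3$, recovering $E(k) = C\,\varepsilon^{2/3}k^{-5/3}$.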

[c] As mentioned in the previous post (points [c] and [d]), Batchelor discussed both K41A and K41B in his paper [3], but did not include K41B in his book [4]. Also, in his book [4], he discussed K41A entirely in wavenumber space. The reasons for this change to a somewhat revisionist approach can only be guessed at, but there may be a clue in his book. On page 29, first paragraph, he says: ‘Fourier analysis of the velocity field provides us with an extremely valuable analytical tool and one that is well-nigh indispensable for the interpretation of equilibrium or similarity hypotheses.’ (The emphasis is mine.)

[d] This is a very strong statement, and of course the reference is to Kolmogorov’s theory. There is also the fact that K41B is not easily translated into $k$-space. Others followed suit, and Hinze [5] actually gave the impression of quoting from K41A but used the word ‘wavenumber’, which does not in fact occur in that work. By the time I began work as a postgraduate student in 1966, the use of spectral methods had become universal in both experiment and theory.

[e] There does not appear to be any $k$-space ad hoc closure of the Lin equation to parallel K41B (i.e. the derivation of the ‘4/5’ law); but, for the specific case of stationary turbulence, I have put forward a treatment which uses the infinite Reynolds number limit to eliminate the energy spectrum, while retaining its effect through the dissipation rate [6]. It is based on the scale invariance of the inertial flux, thus: $$\Pi(\kappa)=-\int_0^{\kappa}dk\,T(k) = \varepsilon,$$ which of course can be written in terms of the triple-moment of the velocity field. As the velocity field in $k$-space is complex, we can write it in terms of amplitude and phase. Accordingly, $$u_{\alpha}(\mathbf{k}) = V(\kappa)\psi_{\alpha}(k'),$$ where $V(\kappa)$ is the root-mean-square velocity, $k'=k/\kappa$ and $\psi$ represents phase effects. The result is: $$V(\kappa)=B^{-1/3}\varepsilon^{1/3}\kappa^{-10/3},$$ where $B$ is a constant determined by an integral over the triple-moment of the phases of the system. The Kolmogorov spectral constant is then found to be $4\pi\, B^{-2/3}$.

[f] Of course a statistical closure, such as the LET theory, is needed to evaluate the expression for $B$. Nevertheless, it is of interest to note that this theory provides an answer to Kraichnan’s interpretation of Landau’s criticism of K41A [7]. Namely, that the dependence of an average (i.e. the spectrum) on the two-thirds power of an average (i.e. the term involving the dissipation) destroys the linearity of the averaging process. In fact, the minus two-thirds power of the average in the form of $B^{-2/3}$ cancels the dependence associated with the dissipation.

[1] A. M. Obukhov. On the distribution of energy in the spectrum of turbulent flow. C.R. Acad. Sci. U.R.S.S, 32:19, 1941.
[2] L. Onsager. The Distribution of Energy in Turbulence. Phys. Rev., 68:281, 1945. (Abstract only.)
[3] G. K. Batchelor. Kolmogoroff’s theory of locally isotropic turbulence. Proc. Camb. Philos. Soc., 43:533, 1947.
[4] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 1st edition, 1953.
[5] J. O. Hinze. Turbulence. McGraw-Hill, New York, 2nd edition, 1975. (First edition in 1959.)
[6] David McComb. Scale-invariance and the inertial-range spectrum in three-dimensional stationary, isotropic turbulence. J. Phys. A: Math. Theor., 42:125501, 2009.
[7] R. H. Kraichnan. On Kolmogorov’s inertial-range theories. J. Fluid Mech., 62:305, 1974.

# Summary of the Kolmogorov-Obukhov (1941) theory. Part 2: Kolmogorov’s theory in x-space.

Kolmogorov worked in $x$-space and his two relevant papers are cited below as [1] (often referred to as K41A) and [2] (K41B). We may make a pointwise summary of this work, along with more recent developments as follows.

[a] In K41A, Kolmogorov introduced the concepts of local homogeneity and local isotropy, as applying in a restricted range of scales and in a restricted volume of space. He also seems to have introduced what we now call the structure functions, allowing the introduction of scale through the correlations of velocity differences taken between two points separated by a distance $r$. He used Richardson’s concept of a cascade of eddies, in an intuitive way, to introduce the idea of an inertial sub-range, and then used dimensional analysis to deduce that (in modern notation) $S_2 \sim \varepsilon^{2/3}r^{2/3}$.

[b] In K41B, he used an ad hoc closure of the Karman-Howarth equation (KHE) to argue that $S_3 = 0.8 \varepsilon r$ in the inertial range of values of $r$: the well-known ‘four-fifths law’. He further assumed that the skewness factor was constant and found that this led to the K41A result for $S_2$. The closure was based on the fact that the term explicit in $S_2$ would vanish as the viscosity tended to zero, whereas its effect could still be retained in the dissipation rate.

[c] In 1947, Batchelor [3] provided an exegesis of both these theories. In the case of K41A, this was only partial, but he did make it clear that K41A relied (at least implicitly) on Richardson’s idea of the ‘eddy cascade’. He also pointed out that K41B could not be readily extended to higher-order equations in the statistical hierarchy, because of the presence of the pressure term with its long-range properties in the higher-order equations.

[d] Moffatt [4] credited this paper by Batchelor with bringing Kolmogorov’s work to the Western world. He also, in effect, expressed surprise that Batchelor did not include his re-derivation of K41B in his book. This is a very interesting point; and, in my view, is not unconnected to the fact that Batchelor discussed K41A almost entirely in wavenumber space in his book. I will return to this later.

[e] In 2002, Lundgren [5] re-derived the K41B result by expanding the dimensionless structure functions in powers of the inverse Reynolds number. By demanding that the expansions matched asymptotically in an overlap region between outer and inner scaling regimes, he was also able to recover the K41A result without the need to make an additional assumption about the constancy of the skewness.

[f] More recently, McComb and Fairhurst [6] used the asymptotic expansion of the dimensionless structure functions to test Kolmogorov’s hypothesis of local stationarity and concluded that it could not be true. They found that the time-derivative must give rise to a constant term; which, however small, violates the K41B derivation of the four-fifths law. Nevertheless, they noted that in wavenumber space, this term (which plays the part of an input to the KHE) will appear as a Dirac delta function at the origin, and hence does not violate the derivation of the minus five-thirds law in $k$-space. We will extend this idea further in the next post.

[1] A. N. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. C. R. Acad. Sci. URSS, 30:301, 1941. (K41A)
[2] A. N. Kolmogorov. Dissipation of energy in locally isotropic turbulence. C. R. Acad. Sci. URSS, 32:16, 1941. (K41B)
[3] G. K. Batchelor. Kolmogoroff’s theory of locally isotropic turbulence. Proc. Camb. Philos. Soc., 43:533, 1947.
[4] H. K. Moffatt. G. K. Batchelor and the Homogenization of Turbulence. Annu. Rev. Fluid Mech., 34:19-35, 2002.
[5] Thomas S. Lundgren. Kolmogorov two-thirds law by matched asymptotic expansion. Phys. Fluids, 14:638, 2002.
[6] W. D. McComb and R. B. Fairhurst. The dimensionless dissipation rate and the Kolmogorov (1941) hypothesis of local stationarity in freely decaying isotropic turbulence. J. Math. Phys., 59:073103, 2018.

# Summary of Kolmogorov-Obukhov (1941) theory. Part 1: some preliminaries in x-space and k-space.


Discussions of the Kolmogorov-Obukhov theory often touch on the question: can the two-thirds law, or alternatively the minus five-thirds law, be derived from the equations of motion (NSE)? And the answer is almost always: ‘no, they can’t’! Yet virtually every aspect of this theory is based on what can be readily deduced from the NSE, and indeed has so been deduced, many years ago. So our preliminary to the actual summary is to consider what we know from the NSE, in both $x$-space and $k$-space. As another preliminary, all the notation is standard and can be found in the two books cited below as references.

We begin with the familiar NSE, consisting of the equation of motion, $$\frac{\partial u_{\alpha}}{\partial t} + \frac{\partial (u_{\alpha}u_\beta)}{\partial x_\beta} =-\frac{1}{\rho}\frac{\partial p}{\partial x_\alpha} + \nu \nabla^2 u_\alpha,$$which expresses conservation of momentum and is local, in that it gives the relationship between the various terms at one point in space; and the incompressibility condition $$\frac{\partial u_\beta}{\partial x_\beta} = 0.$$ It is well known that taking these two equations together allows us to eliminate the pressure by solving a Poisson-type equation. The result is an expression for the pressure which is an integral over the entire velocity field: see equations (2.3) and (2.9) in [1].
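In detail (cf. equations (2.3) and (2.9) of [1]): taking the divergence of the momentum equation and using the incompressibility condition kills the time-derivative and viscous terms, leaving a Poisson equation for the pressure, $$\nabla^2 p = -\rho\,\frac{\partial^2 (u_\alpha u_\beta)}{\partial x_\alpha \partial x_\beta},$$ whose solution by the Green's function of the Laplacian is $$p(\mathbf{x}) = \frac{\rho}{4\pi}\int d^3x'\, \frac{1}{|\mathbf{x}-\mathbf{x}'|}\,\frac{\partial^2 (u_\alpha u_\beta)}{\partial x'_\alpha \partial x'_\beta},$$ which makes the non-local dependence of the pressure on the entire velocity field explicit.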

In $k$-space we may write the Fourier-transformed version of (1) as: $$\frac{\partial u_\alpha(\mathbf{k},t)}{\partial t} + i k_\beta\int d^3 j\, u_\alpha(\mathbf{k-j},t)u_\beta(\mathbf{j},t) = -\frac{i k_\alpha}{\rho}\, p(\mathbf{k},t) -\nu k^2 u_\alpha (\mathbf{k},t).$$ The derivation can be found in Section 2.4 of [2]. Also, the discrete Fourier-series version (i.e. in a finite box) is equation (2.37) in [2].

The crucial point here is that the modes $\mathbf{u}(\mathbf{k},t)$ form a complete set of degrees of freedom and that each mode is coupled to every other mode by the non-linear term. So this is not just a problem in statistical physics, it is an example of the many-body problem.

Note that (1) gives no hint of the cascade, but (3) does. All modes are coupled together and, if there were no viscosity present, this would lead to equipartition, as the conservative non-linear term merely shares out energy among the modes. The viscous term is symmetry-breaking due to the factor $k^2$ which increases the dissipation as the wavenumber increases. This prevents equipartition and leads to a cascade from low to high wavenumbers. All of this becomes even clearer when we multiply the equation of motion by the velocity and average. We then obtain the energy-balance equations in both $x$-space and $k$-space.

We begin in real space with the Karman-Howarth equation (KHE). This can be written in various forms (see Section 3.10.1 in [2]), and here we write it in terms of the structure functions for the case of free decay: $$\varepsilon =-\frac{3}{4}\frac{\partial S_2}{\partial t}+\frac{1}{4r^4}\frac{\partial (r^4 S_3)}{\partial r} +\frac{3\nu}{2r^4}\frac{\partial}{\partial r}\left(r^4\frac{\partial S_2}{\partial r}\right).$$ Note that the pressure does not appear, as a correlation of the form $\langle up \rangle$ cannot contribute in an isotropic field; and that strictly the left-hand side should be the decay rate $\varepsilon_D$, but it is usual to replace this by the dissipation, as the two are equal in free decay. Full details of the derivation can be found in Section 3.10 of [2].
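A quick symbolic check of the inertial-range balance: dropping the time-derivative and viscous terms, the remaining $S_3$ term balances the dissipation exactly when $S_3 = \frac{4}{5}\varepsilon r$ (the four-fifths law in the sign convention of this form of the KHE). A sketch using sympy:

```python
import sympy as sp

r, eps = sp.symbols('r varepsilon', positive=True)

# Inertial-range balance of the KHE above: drop the time-derivative and
# viscous terms, keeping only the S_3 term against the dissipation.
S3 = sp.Rational(4, 5) * eps * r

lhs = sp.diff(r**4 * S3, r) / (4 * r**4)
assert sp.simplify(lhs - eps) == 0   # the S_3 term alone yields epsilon
```
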

For our present purposes, we should emphasise two points. First, this is one equation for two dependent variables and so requires a statistical closure in order to solve for one of the two. In other words, it is an instance of the notorious statistical closure problem. Second, it is local in the variable $r$ and does not couple different scales together. It holds for any value of $r$ but is an energy balance locally at any chosen value of $r$.

The Lin equation is the Fourier transform of the KHE. It can be derived directly in $k$-space from the NSE (see Section 3.2.1 in [2]): $$\left(\frac{\partial}{\partial t} + 2\nu k^2\right)E(k,t) = T(k,t).$$ Here $T(k,t)$ is called the transfer spectrum, and can be written as: $$T(k,t)= \int_0^\infty dj\, S(k,j;t),$$ where $S(k,j;t)$ is the transfer spectral density and can be expressed in terms of the third-order moment $C_{\alpha\beta\gamma}(\mathbf{j},\mathbf{k-j},\mathbf{-k};t)$.

Unlike the KHE, which is purely local in its independent variable, the Lin equation is non-local in wavenumber. We can define its associated inter-mode energy flux as: $$\Pi (\kappa,t) = \int_\kappa^\infty dk\,T(k,t) = -\int_0^\kappa dk \, T(k,t).$$

We have now laid a basis for a summary of the Kolmogorov-Obukhov theory, and one point should have emerged clearly: the energy cascade is well defined in wavenumber space. It is not defined at all in the context of energy conservation in real space. It can only exist there as an intuitive phenomenon which is extended in space and time.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.

# The importance of terminology: stationarity or equilibrium?

When I began my post-graduate research in 1966, I found that I immediately had to get used to a new terminology. For instance, concepts like homogeneity and isotropy were a definite novelty. In physics one takes these for granted and they are never mentioned. Indeed the opposite is the case, and the occasional instance of inhomogeneity is encountered: I recall that one experiment relied on an inhomogeneity in the magnetic field. Also, in relativity one learns that a light source can only be isotropic in its co-moving frame. In any other frame, in motion relative to it, the source must appear anisotropic, as shown by Lorentz transformation. For the purposes of turbulence theory (and the theory of soft matter), exactly the same consideration must apply to Galilean transformation. Although, to be realistic, Galilean transformations are actually of little value in these fields, as they are normally satisfied trivially [1].

Then there was the transition from statistical physics to, more generally, the subject of statistics. The Maxwell-Boltzmann distribution was replaced by the normal or Gaussian distribution; and, in the case of turbulence, there was the additional complication of a non-Gaussian distribution, with flatness and skewness factors looming large. (I should mention as an aside that the above does not apply to quantum field theory which is pretty much entirely based on the Gaussian distribution.)

Perhaps the most surprising change was from the concept of equilibrium to one of stationarity. In physics, equilibrium means thermal equilibrium. Of course, other examples of equilibrium are sometimes referred to as special cases. For instance, a body may be in equilibrium under forces. But such references are always in context; and the term equilibrium, when used without qualification of this kind, always means thermal equilibrium. So any real fluid flow is a non-equilibrium process, and turbulence is usually classed as far from equilibrium. Indeed, physicists normally seem to regard turbulence as being the archetypal non-equilibrium process.

Unsurprisingly, the term has only rarely been used in turbulence. I can think of references to the approximate balance between production and dissipation near the wall in pipe flow being referred to as equilibrium; but, apart from that, all that comes to mind is Batchelor’s use of the term in connection with the Kolmogorov (1941) theory [2]. This was never widely used by theorists but recently there has been some usage of the term, so I think that it is worth taking a look at what it is; and, more importantly, what it is not.

Batchelor was carrying on the idea of Taylor, that describing homogeneous turbulence in the Fourier representation allowed the topic to be regarded as a part of statistical physics. He argued that the concept of local stationarity that Kolmogorov had introduced could be regarded as local equilibrium, in analogy with thermal equilibrium. The key word here is ‘local’. If we consider a flow that is globally stationary (as nowadays we can, because we have computer simulations), then clearly it would be nonsensical to describe such a flow as being in equilibrium.

However, recently Batchelor’s concept of local equilibrium has been mis-interpreted as being the same as the condition for the existence of an inertial range of wavenumbers, where the flux through wavenumber becomes equal to the dissipation rate. It is important to understand that this concept is not a part of Kolmogorov’s $x$-space theory but is part of the Obukhov-Onsager $k$-space theory. In contrast, the concept of local stationarity can be applied to either picture; but in my view is best avoided altogether.

I will say no more about this topic here, as I intend to develop it over the next few weeks. In particular, I think it would be helpful to make a pointwise summary of Kolmogorov-Obukhov theory, emphasising the differences between $x$-space and $k$-space forms, clarifying the historical position and indicating some significant and more recent developments.

[1] W. D. McComb. Galilean invariance and vertex renormalization. Phys. Rev. E, 71:37301, 2005.
[2] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 2nd edition, 1971.

# Turbulence in a box.

Turbulence in a box.
When the turbulence theories of Kraichnan, Edwards, Herring, and so on, began attracting attention in the 1960s, they also attracted attention to the underlying ideas of homogeneity, isotropy, and Fourier analysis of the equations of motion. These must have seemed very exotic notions to the fluid dynamicists and engineers who worked on single-point models of the closure problem posed by the Reynolds equation, particularly as the theoretical physicists putting forward these new theories tended to write in the language of the relatively new topic of quantum field theory, or possibly the even newer statistical field theory. In fact, the only aspect of this new approach that some people working in the field were apparently able to grasp was that the turbulence was in a box, rather than in a pipe or wake or shear layer.

I became aware of this situation when submitting papers in the early 1970s, when I encountered referees who would begin their report with: ‘the author invokes the turbulence in a box concept’. This seemed to me to have ominous overtones. I mean, why comment on it? No one working in the field did: it was taken as quite natural by the theorists. However, in due course it invariably turned out that the referee didn’t think that my paper should be published. Reason? Apparently just the unfamiliarity of the approach. Later on, with the subject of turbulence theory having reached an impasse, they clearly felt quite confident in turning it down. I have written before on my experiences of this kind of refereeing (see, for example, my post of 20 Feb 2020).

Another example of turbulence in a box is the direct numerical simulation of isotropic turbulence, where the Navier-Stokes equations are discretised in a cubical box in terms of a discrete Fourier transform of the velocity field. Since Orszag and Patterson’s pioneering development of the pseudo-spectral method [1] in 1972, the simulation of isotropic turbulence has grown in parallel with the growth of computers; and, in the last few decades, it has become quite an everyday activity in turbulence research. So, now we might expect box turbulence to take its place alongside pipe turbulence, jet turbulence and so on, in the jargon of the subject?

In fact this doesn’t seem to have happened. However, less than twenty years ago, a paper appeared which referred to simulation in a periodic box [2], and since then I have seen references to this in microscopic physics, where the simulations are of molecular systems. I’m not sure why the nature of the box is worth mentioning. It is, after all, a commonplace fact of Fourier analysis, that representation of a non-periodic function in a finite interval requires an assumption of periodic behaviour outside the interval. Much stranger than this is that I am now seeing references to periodic turbulence as, apparently, denoting isotropic turbulence that has been simulated in a periodic box. This does not seem helpful! To most people in the field, periodic turbulence means turbulence that is modulated periodically in time or space. That is, the sort of turbulence that might be found in rotating machinery or perhaps a coherent structure [3]. We have to hope that this usage does not catch on.

[1] S. A. Orszag and G. S. Patterson. Numerical simulation of three-dimensional homogeneous isotropic turbulence. Phys. Rev. Lett., 28:76, 1972.
[2] Y. Kaneda, T. Ishihara, M. Yokokawa, K. Itakura, and A. Uno. Energy dissipation and energy spectrum in high resolution direct numerical simulations of turbulence in a periodic box. Phys. Fluids, 15:L21, 2003.
[3] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.

# Large-scale resolution and finite-size effects.

Large-scale resolution and finite-size effects.
This post arises out of the one on local isotropy posted on 21 October 2021; and in particular relates to the comment posted by Alex Liberzon on the need to choose the size of volume $G$ within which Kolmogorov’s assumptions of localness may hold. In fact, as is so often the case, this resolves itself into a practical matter and raises the question of large-scale resolution in both experiment and numerical simulation.

In recent years there has been growing awareness of the need to fully resolve all scales in simulations of isotropic turbulence, with the emphasis initially being on the resolution of the small scales. In my post of 28 October 2021, I presented results from reference [1] showing that compensating for viscous effects and the effects of forcing on the third-order structure function $S_3(r)$ could account for the differences between the four-fifths law and the DNS data at all scales. In this work, the small-scale resolution had been judged adequate using the criteria established by McComb et al [2].

However, in [1], we noted that large-scale resolution had only recently received attention in the literature. We ensured that the ratio of box size to integral length-scale (i.e. $L_{box}/L$) was always greater than four. This choice involved the usual trade-off between resolution requirements and the magnitude of the Reynolds number achieved, but the results shown in our post of 28 October would indicate that this criterion for large-scale resolution was perfectly adequate. That could suggest that taking $G\sim (4L)^3$ might be a satisfactory criterion. Nevertheless, I think it would be beneficial if someone were to carry out a more systematic investigation of this, in the same way as reference [1] did for the small-scale resolution.

Some attempts have been made at doing this in experimental work on grid turbulence: see the discussion on pages 219-220 in reference [3], but it clearly is a subject that deserves more attention. As a final point, we should note that this topic can be seen as being related to finite-size effects which are nowadays of general interest in microscopic systems, because there the theory actually relies on the system size being infinite. I suppose that we have a similar problem in turbulence in that the derivation of the solenoidal Navier-Stokes equation requires an infinitely large system, as does the use of the Fourier transform.

[1] W. D. McComb, S. R. Yoffe, M. F. Linkmann, and A. Berera. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence. Phys. Rev. E, 90:053010, 2014.
[2] W. D. McComb, A. Hunter, and C. Johnston. Conditional mode-elimination and the subgrid-modelling problem for isotropic turbulence. Phys. Fluids, 13:2030, 2001.
[3] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.

# The second-order structure function corrected for systematic error.

The second-order structure function corrected for systematic error.

In last week’s post, we discussed the corrections to the third-order structure function $S_3(r)$ arising from forcing and viscous effects, as established by McComb et al [1]. This week we return to that reference in order to consider the effect of systematic error on the second-order structure function, $S_2(r)$. We begin with some general definitions.

The longitudinal structure function of order $n$ is defined by:$$S_n(r) = \left\langle \delta u^n_L(r) \right\rangle,$$ where $\delta u_L(r)$ is the longitudinal velocity difference over a distance $r$. From purely dimensional arguments we may write: $$S_n(r) = C_n \varepsilon^{n/3}\,r^{n/3},$$ where the $C_n$ are dimensionless constants.
However, as is well known, measured values imply $S_n(r)\sim r^{\zeta_n}$, where the exponents $\zeta_n$ are not equal to the dimensional result, with the one exception: $\zeta_3 = 1$. In fact it is found that $\Delta_n = |n/3 - \zeta_n|$ is nonzero and increases with order $n$.

It is worth pausing to consider a question. Does this imply that the measurements give $S_n(r)=C_n \varepsilon^{\zeta_n}r^{\zeta_n}$? No, it does not. Not only would this give the wrong dimensions; more importantly, the time dimension must be carried entirely by the dissipation rate. Accordingly, we must have: $S_n(r)=C_n \varepsilon^{n/3}r^{\zeta_n}\mathcal{L}^{n/3-\zeta_n}$, where $\mathcal{L}$ is some length scale. Unfortunately for aficionados of intermittency corrections (aka anomalous exponents), the only candidate for this is the size of the system (e.g. $\mathcal{L} = L_{box}$), which leads to unphysical results.
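The dimensional bookkeeping here is easy to mechanise. The following Python sketch tracks (length, time) exponents with exact fractions; the order $n=4$ and the value taken for the measured exponent are purely illustrative assumptions:

```python
from fractions import Fraction as F

# Track dimensions as (length, time) exponents, using exact fractions.
def power(d, p):
    return (d[0] * p, d[1] * p)

def mul(a, b):
    return (a[0] + b[0], a[1] + b[1])

n = 4                        # order of the structure function (illustrative)
zeta = F(128, 100)           # a hypothetical measured exponent, zeta_4 != 4/3

eps = (F(2), F(-3))          # dissipation rate: L^2 T^-3
r = (F(1), F(0))             # separation: L
Lscale = (F(1), F(0))        # system-size length scale: L

lhs = (F(n), F(-n))          # S_n ~ (velocity)^n = L^n T^-n

# S_n = C_n eps^(n/3) r^zeta Lscale^(n/3 - zeta): dimensionally consistent.
good = mul(mul(power(eps, F(n, 3)), power(r, zeta)),
           power(Lscale, F(n, 3) - zeta))

# S_n = C_n eps^zeta r^zeta: wrong dimensions unless zeta = n/3.
bad = mul(power(eps, zeta), power(r, zeta))

print(good == lhs, bad == lhs)   # True False
```

The check confirms that the extra length scale $\mathcal{L}^{n/3-\zeta_n}$ is forced on us once $\zeta_n \neq n/3$.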

Returning to our main theme, the obvious way of measuring the exponent $\zeta_n$ is to make a log-log plot of $S_n$ against $r$, and determine the local slope: $$\zeta_n(r) = d\,\log \,S_n(r)/d\, \log \,r.$$ Then the presence of a plateau would indicate a constant exponent and hence a scaling region. In practice, however, this method has problems. Indeed workers in the field argue that a Taylor-Reynolds number of greater than $R_{\lambda}\sim 500$ is needed for this to work, and of course this is a very high Reynolds number.
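The local-slope diagnostic can be sketched in a few lines of Python. Here a Batchelor-style interpolation formula for $S_2$ stands in for data (the model form and parameter range are assumptions made purely for illustration); the plateau in $\zeta_2(r)$ at large $r$ is what a scaling region looks like:

```python
import numpy as np

# Batchelor-style model for S_2 in dimensionless form (an assumption for
# illustration): S2 ~ r^2 in the dissipation range, ~ r^(2/3) in the
# inertial range.
r = np.logspace(-2, 3, 500)
S2 = r**2 / (1.0 + r**2)**(2.0 / 3.0)

# Local slope: zeta_2(r) = d log S2 / d log r
zeta2 = np.gradient(np.log(S2), np.log(r))

# The slope crosses over from 2 at small r to the 2/3 plateau at large r.
print(zeta2[0], zeta2[-1])
```

With real data the plateau is much harder to identify, which is precisely the difficulty discussed above.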

A popular way of overcoming this difficulty is the method of extended self-similarity (or ESS), which relies on the fact that $S_3$ scales with $\zeta_3 =1$ in the inertial range, indicating that one might replace $r$ by $S_3$ as the independent variable, thus: $$S_n(r) \sim [S_3(r)]^{\zeta_n^{\ast}},\qquad \mbox{where} \qquad \zeta_n^{\ast} = \zeta_n/\zeta_3.$$ In order to overcome problems with odd-order structure functions, this technique was extended by using the modulus of the velocity difference, to introduce generalized structure functions $G_n(r)$, such that: $$G_n(r)=\langle |\delta u_L(r)|^n \rangle\sim r^{\zeta'_n}, \qquad \mbox{with scaling exponents} \quad \zeta'_n.$$ Then, by analogy with the ordinary structure functions, taking $G_3$ with $\zeta'_3 =1$ leads to $$G_n(r) \sim [G_3(r)]^{\Sigma_n}, \qquad\mbox{with} \quad \Sigma_n = \zeta'_n /\zeta'_3 .$$ This technique results in scaling behaviour extending well into the dissipation range, which allows exponents to be more easily extracted from the data. Of course, this extended scaling is in itself an artefact, and that fact should be borne in mind.

There is an alternative to ESS, and that is the pseudospectral method, in which the $S_n$ are obtained from their corresponding spectra by Fourier transformation. This has been used by some workers in the field, and in [1] McComb et al followed their example (see [1] for details) and presented a comparison between this method and ESS. They also applied a standard method for reducing systematic errors to evaluate the exponent of the second-order structure function. This involved considering the ratio $|S_n(r)/S_3(r)|$. In this procedure, an exponent $\Gamma_n$ was defined by $$\left | \frac{S_n(r)}{S_3(r)}\right |\sim r^{\Gamma_n}, \qquad \mbox{where} \quad \Gamma_n= \zeta_n - \zeta_3.$$

Results were obtained only for the case $n=2$ and figures 9 and 10 from [1] are of interest, and are reproduced here. The first of these is the plot of the compensated ratio $(r/\eta)^{1/3}U|S_2(r)/S_3(r)|$ against $r/\eta$, where $\eta$ is the dissipation length scale and $U$ is the rms velocity. This illustrates the way in which the exponents were obtained.

In the second figure, we show the variation of the exponent $\Gamma_2 + 1$ with Reynolds number, compared with the variation of the ESS exponent $\Sigma_2$. It can be seen that the first of these tends towards the K41 value of $2/3$, while the ESS value moves away from the K41 result as the Reynolds number increases.

Both methods rely on the assumption $\zeta_3 =1$, hence $\Gamma_2+1 = \zeta_2$, which is why we plot that quantity. We may note that figures 1 and 2 point clearly to the existence of finite Reynolds number corrections as the cause of the deviation from K41 values. Further details and discussion can be found in reference [1].

[1] W. D. McComb, S. R. Yoffe, M. F. Linkmann, and A. Berera. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence. Phys. Rev. E, 90:053010, 2014.

# Viscous and forcing corrections to Kolmogorov’s ‘4/5’ law.

Viscous and forcing corrections to Kolmogorov’s ‘4/5’ law.

The Kolmogorov ‘4/5’ law for the third-order structure function $S_3(r)$ is widely regarded as the one exact result in turbulence theory. And so it should be: it has a straightforward derivation from the Karman-Howarth equation (KHE), which is an exact energy balance derived from the Navier-Stokes equation. Nevertheless, there is often some confusion around its discussion in the literature. In particular, for stationary isotropic turbulence, there can be confusion about the effects of viscosity (small scales) and forcing (large scales). These aspects have been clarified by McComb et al [1], who used spectral methods to obtain $S_2$ and $S_3$ from a direct numerical simulation of the equations of motion.

If we follow the standard treatment (see [2], Section 4.6.2), we may write: $$S_3(r)= -\frac{4}{5}\varepsilon r + 6\nu\frac{\partial S_2}{\partial r}.$$
In the past, this statement has been criticised because it omits the forcing which must be present in order to sustain a stationary turbulent field. However, it should be borne in mind that this is an entirely local equation; and, if the effect of the forcing is concentrated at the largest scales, then omission of these scales also omits the forcing. We can shed some light on this by reproducing Figure 7 from [1], thus:

The results were taken at a Taylor-Reynolds number $R_{\lambda} = 435.2$, and show how the departure from the ‘4/5’ law at the small scales is due to the viscous effects. Clearly there is a range of values of $r$ where the ‘4/5’ law may be regarded as exact, in the ordinary sense appropriate to experimental work. This range of scales is, of course, the inertial range. Note that $\eta$ is the Kolmogorov length scale.

Presumably the departure from the ‘4/5’ law at the large scales is due to forcing effects, and McComb et al [1] also shed light on this point. They did this by working in spectral space, where stirring forces have been studied since the late 1950s in the context of the statistical theories (e.g. Kraichnan, Edwards, Novikov, Herring: see [3] for details) and are correspondingly well understood. They began with the Lin equation: $$\frac{\partial E(k,t)}{\partial t} = T(k,t) - 2\nu k^2E(k,t) + W(k),$$ where in principle the energy and transfer spectra depend on time, whereas the spectrum of the stirring forces $W(k)$ is taken as independent of time in order to ensure ultimate stationarity. Thus we will drop the time dependences hereafter, as we consider only the stationary case.

We can derive the KHE from this, and the result is the usual KHE plus an input term $I(r)$, defined by: $$I(r) = \frac{3}{r^3}\int_0^r\, dy \,y^2\, W(y),$$ where $W(y)$ is the three-dimensional Fourier transform of the work spectrum $W(k)$. By integrating the KHE (as Kolmogorov did in deriving the ‘4/5’ law) we obtain the form for the third-order structure function $S_3(r)$ as: $$S_3(r)=X(r) + 6\nu\frac{\partial S_2}{\partial r},$$ where $X(r)$ is given in terms of the forcing spectrum by: $$X(r) = -12r\int_0^{\infty}\,dk\, W(k)\,\left[\frac{3\sin kr - 3kr \cos kr-(kr)^2 \sin kr}{(kr)^5}\right].$$
The result of including the effect of forcing is shown in Figure 8 of [1], which is reproduced here below.

These results are taken from the same simulation as above, and now the contributions from viscous and forcing effects can be seen to account for the departure of $S_3$ from the ‘4/5’ law at all scales.

In [1] it is pointed out that $X(r)$ is not a correction to K41, as it has been treated in some previous studies. Instead, it replaces the erroneous use by others of the dissipation rate, and contains all the information about the energy input at all scales. In the limit of $\delta(k)$ forcing, $I(y)= \varepsilon_W = \varepsilon$, such that $X(r) = -4\varepsilon\, r/5$, giving K41 in the infinite Reynolds number limit. Note that $\varepsilon_W$ is the rate of doing work by the stirring forces. Further details may be found in [1].
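The $\delta(k)$ limit just quoted can be checked numerically: the kernel in $X(r)$ tends to $1/15$ as $kr\to 0$, so for forcing confined to a single small wavenumber $k_f$ the K41 form $X(r)=-4\varepsilon_W r/5$ is recovered. A short Python sketch; the delta-function forcing and the parameter values are illustrative assumptions:

```python
import numpy as np

def bracket(x):
    """Kernel [3 sin x - 3x cos x - x^2 sin x] / x^5 appearing in X(r)."""
    return (3.0 * np.sin(x) - 3.0 * x * np.cos(x) - x * x * np.sin(x)) / x**5

# Small-argument limit of the kernel is 1/15 (from the Taylor expansion:
# bracket(x) = 1/15 - x^2/210 + ...).
print(15.0 * bracket(0.01))        # close to 1

# Forcing confined to one small wavenumber k_f, W(k) = eps_W * delta(k - k_f):
# the k-integral collapses and X(r) = -12 r eps_W * bracket(k_f r).
eps_W, k_f, r = 1.0, 0.5, 0.02     # illustrative values with k_f * r << 1
X = -12.0 * r * eps_W * bracket(k_f * r)
print(X / (-0.8 * eps_W * r))      # close to 1: X(r) ~ -(4/5) eps_W r
```

Since $-12/15 = -4/5$, this reproduces the quoted infinite Reynolds number limit.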

[1] W. D. McComb, S. R. Yoffe, M. F. Linkmann, and A. Berera. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence. Phys. Rev. E, 90:053010, 2014.
[2] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[3] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.

# Local isotropy, local homogeneity and local stationarity.

Local isotropy, local homogeneity and local stationarity.

In last week’s post I reiterated the argument that the existence of isotropy implies homogeneity. However, Alex Liberzon commented that there could be inhomogeneous flows that exhibited isotropy on scales that were small compared to the overall size of the flow. This comment has the great merit of drawing attention to the difference between a purely theoretical formulation and one dealing with a real practical situation. In my reply, I mentioned that Kolmogorov had introduced the concept of local isotropy, which supported the view that Alex had put forward. So I thought it would be interesting to look in detail again at what Kolmogorov had actually said. Incidentally, Kolmogorov said it in 1941 but for the convenience of readers I have given the later references, as reprinted in the Proceedings of the Royal Society.

Now, although I like to restrict the problem to purely isotropic turbulence, where it still remains controversial in that many people believe in intermittency corrections or anomalous exponents, Kolmogorov actually put forward a theory of turbulence in general. He argued that a cascade as envisaged by Richardson could lead to a range of scales where the turbulence becomes locally homogeneous. In [1], which I refer to as K41A, he put forward two definitions, which I shall paraphrase rather than quote exactly.

The first of these is as follows: Definition 1. ‘The turbulence is called locally homogeneous in the domain $G$ if the probability distribution of the velocity differences is independent of the origin of coordinates in space, time and velocity, providing that all such points are contained within the domain $G$.’

We should note that this includes homogeneity in time as well as in space. In other words, Kolmogorov was assuming local stationarity as well.

Then his second definition is: Definition 2. ‘The turbulence is called locally isotropic in the domain $G$, if it is homogeneous and if, besides, the distribution laws mentioned in Definition 1 are invariant with respect to rotations and reflections of the original system of coordinate axes $(x_1,\,x_2,\,x_3)$.’

Note that the emphasis is mine.

Kolmogorov then compared his definition of isotropy to that of Taylor, as introduced in 1935. He stated that his definition is narrower, because he also requires local stationarity, but wider in that it applies to the distribution of the velocity differences, and not to the velocities themselves. Later on, when he derived the so-called ‘$4/5$’ law [2], he had already made the assumption that the time-derivative term could be neglected, and simply quoted the Karman-Howarth equation without it: see equation (3) in [2].

The question then arises, how far do these assumptions apply in any real flow? In my post of 11th February 2021, I conjectured that this might be a matter of the macroscopic symmetry of the flow. For instance, the Kolmogorov picture might apply better in plane channel flow than in plane Couette flow. I plan to return to this point some time.

[1] A. N. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. Proc. Roy Soc. Lond., 434:9-13, 1991.
[2] A. N. Kolmogorov. Dissipation of energy in locally isotropic turbulence. Proc. Roy Soc. Lond., 434:15-17, 1991.

# Is isotropy the same as spherical symmetry?

Is isotropy the same as spherical symmetry?

To which you might be tempted to reply: ‘Who ever thought it was?’ Well, I don’t know for sure, but I’ve developed a suspicion that such a misconception may underpin the belief that it is necessary to specify that turbulence is homogeneous as well as isotropic. When I began my career it was widely understood that specifying isotropy was sufficient, as it was generally realised that homogeneity was a necessary condition for isotropy. A statement to this effect could (and can) be found on page 3 of Batchelor’s famous monograph on the subject [1].

I have posted previously on this topic (my second post, actually, on 12 February 2020) and conceded that the acronym HIT, standing of course for ‘homogeneous, isotropic turbulence’, has its attractions. For a start, it’s the shortest possible way of telling people that you are concerned with isotropic turbulence. I’ve used it myself and will probably continue to do so. So I don’t see anything wrong with using it, as such. The problem arises, I think, when some people think that you must use it. In other words, such people apparently believe that there is an inhomogeneous form of isotropic turbulence.

When you think about it that is really quite worrying. I’m not particularly happy about someone, whose understanding is so limited, refereeing one of my papers. Although, to be honest, that could well explain some of the more bizarre referees’ reports over the years! Anyway, let’s examine the idea that there may be some confusion between isotropy and spherical symmetry.

Isotropy just means that a property is independent of orientation. Spherical symmetry sounds quite similar and is probably the more frequently encountered concept for most of us (at least during our formal education). Essentially it means that, relative to some fixed point, a field only varies with distance from the point but not with angle. A familiar example would be a point electric charge in free space. So we might be tempted to visualise isotropy as a form of spherical symmetry, the common element being the independence of orientation.

The problem with doing this, is that the property of isotropy of a medium must apply to any point within it. Whereas, spherical symmetry depends on the existence of a special point which may be taken as the origin of coordinates. But the existence of such a special point would violate spatial homogeneity. So for isotropy to be true, we must have spatial uniformity or homogeneity. I think that one can infer this mathematically from the fact that the only isotropic tensors are (subject to a scalar multiplier) the Kronecker delta $\delta_{ij}$ and the Levi-Civita density $\epsilon_{ijk}$. So any isotropic tensor must have components that are independent of the coordinates of the system.
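That last point can be illustrated numerically: averaging an arbitrary second-rank tensor over random rotations wipes out everything except its isotropic part, $(\mathrm{tr}\,A/3)\,\delta_{ij}$. A Python sketch, in which the tensor and the sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)

A = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 1.0]])    # an arbitrary (anisotropic) tensor

# Average Q A Q^T over Haar-random orthogonal matrices Q. A tensor that is
# invariant under all rotations and reflections can only be a multiple of
# the Kronecker delta, so the average converges to (tr A / 3) * I = 2 * I.
n = 20000
avg = np.zeros((3, 3))
for _ in range(n):
    # QR decomposition of a Gaussian matrix, with a sign fix on the
    # diagonal of R, yields a Haar-distributed orthogonal matrix.
    q, rmat = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(rmat))
    avg += q @ A @ q.T
avg /= n

print(np.round(avg, 2))   # close to 2 * identity
```

Only the delta part survives the averaging, which is the mathematical content of the isotropy argument.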

For this point applied to the cosmos, i.e. that homogeneity is a necessary (but not sufficient) condition for isotropy, see Figure 2 on page 24 of [2]. It seems easier to visualise these matters in terms of the night sky, which is a fairly (if illusorily) static-looking entity. But when we add in a continuum structure and random variations on many length scales, it can be more difficult. We will come back to this particular problem in my next post.

[1] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 2nd edition, 1971.
[2] Steven Weinberg. The first three minutes: a modern view of the origin of the universe. Basic Books, NY, 1993.

# Various kinds of turbulent dissipation?

Various kinds of turbulent dissipation?

The current interest in Onsager’s conjecture (see my blog of 23 September 2021) has sparked my interest in the nature of turbulent dissipation. Essentially a fluid only moves because a force acts on it and does work to maintain it in motion. The effect of viscosity is to convert this kinetic energy of macroscopic motion into random molecular motion, which is perceived as heat. If there is turbulence, this acts to transfer the macroscopic kinetic energy to progressively smaller scales, where the steeper velocity gradients can dissipate it as heat.

This all seems quite straightforward and well understood. However, Onsager’s conjecture, as a matter of physics, is less easily understood. It interprets the infinite Reynolds number limit as being when the continuum nature of the fluid breaks down. It also implies that, when the Reynolds number becomes very large, the Navier-Stokes equation somehow becomes the Euler equation; which, despite its inviscid nature, satisfactorily accounts for the dissipation. It can do this (supposedly) because it has lost its property of conserving energy. In turn, this is supposed to happen because the velocity is no longer a continuous and differentiable field. Of course there does not seem to be any mechanism for turning the dissipated energy into heat, so the thermodynamic aspects of this process look distinctly dodgy.

There are two other cases where macroscopic kinetic energy is not turned into heat.

The first of these is in large-eddy simulation, which has for many years been widely studied for its practical significance. This of course is not a physical situation: it is purely a method of simulating turbulence numerically without being able to resolve all the scales; an introduction can be found in [1]. The central problem is to model the flow of energy to the scales which are too small to be resolved: the so-called subgrid drain. Various models have been studied for the subgrid viscosity, while a novel approach is the operational method of Young and McComb [2]. In the latter, an algorithm is used to feed back energy into the resolved modes, such that the spectral shape is kept constant. In fact this method can be interpreted in terms of an effective subgrid viscosity, which turns out to be very similar to that found when a conventional large-eddy simulation is compared to a fully resolved one. But, so far as I know, no one has considered modelling the temperature rise that would be due to the viscous dissipation in these cases.

The second case is the direct simulation of the Euler equation. Such simulations can only lead to thermal equilibrium, but naturally they must be truncated to a finite number of modes, to avoid having an infinite amount of energy. However, in 2005, some interesting transient behaviour was found in truncated Euler simulations [3] and confirmed the following year by the use of a closure approximation [4]. These simulations may be divided, in terms of their energy spectra, into two spectral ranges: a Kolmogorov range and an equipartition range. A buffer range in between these two is described by Bos and Bertoglio as a ‘quasi-dissipative’ zone, which is another example of non-viscous dissipation. However, it can only exist for a finite time, and ultimately the system must move to thermal equilibrium.

I think it would be interesting to see one of the proponents of Onsager’s conjecture explain the simple physics of how the conjectured situation came about with increasing Reynolds number. All the mathematical expressions you need to do that are available. But I don’t think I will see that any time soon!

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] A. J. Young and W. D. McComb. Effective viscosity due to local turbulence interactions near the cutoff wavenumber in a constrained numerical simulation. J. Phys. A, 33:133-139, 2000.
[3] Cyril Cichowlas, Pauline Bonatti, Fabrice Debbasch, and Marc Brachet. Effective Dissipation and Turbulence in Spectrally Truncated Euler Flows. Phys. Rev. Lett., 95:264502, 2005.
[4] W. J. T. Bos and J.-P. Bertoglio. Dynamics of spectrally truncated inviscid turbulence. Phys. Fluids, 18:071701, 2006.

# Superstitions in turbulence theory 2: that intermittency destroys scale-invariance!

Superstitions in turbulence theory 2: that intermittency destroys scale-invariance!

At the moment I am busy revising a paper (see [1] below) in order to meet the comments of the referees. As is so often the case, Referee 1 is supportive and Referee 2 is hostile. Naturally, Referee 2 writes at great length, so it is really a matter of rebuttal rather than our making changes. It seems clear that he is far from his comfort zone and his comments show that he has comprehensively misunderstood our paper. It also seems to me that he has not actually read certain key parts of the manuscript. For instance, he states: ‘The way how the authors use the word “scale-invariance” should be clarified’ (sic).

This is despite the fact that subsection 3.1 of the paper is titled ‘Scale-invariance of the inertial flux in the infinite Reynolds number limit’ and consists of only three paragraphs. It contains two equations, one of which states the criterion for an inertial range. This is followed by a sentence ending with “… where the fact that the criterion holds over a range of wavenumbers is usually referred to as scale-invariance.” Oh, and as regards ‘how the authors use the word’, we cite a number of references to show that others use the phrase, so we are not alone.

The next thing he says is: ‘We know from experimental evidence (intermittency) that scale invariance is broken in the inertial range.’ This is quite simply nonsense. In this context scale-invariance means that the inertial range is characterised by a constant flux over a range of wavenumbers, and this has been shown in many investigations. In fact there is no way in which intermittency, which is a single-realization characteristic, can affect mean quantities such as the inertial flux, or their properties such as scale-invariance. In a recent paper [2], we showed that the ensemble average of intermittency vanishes. In the first figure below, we use contours of isovorticity to show the progressive effect of averaging over $N=1,\,2,\,5,\,10,\,25$ and $46$ realizations.

The progressive averaging out with increasing number of realizations is evident. While the use of vorticity is more natural, the effect can perhaps be seen more clearly using the Q-criterion, as is done in the next figure.

Both figures are taken from the same stationary DNS of the Navier-Stokes equations. Further details can be found in reference [2].

Over the past three decades there has been an increasing body of evidence to the effect that intermittency does not affect the Kolmogorov spectrum. Any deviations are in fact due to the Kolmogorov conditions not being quite met. Presumably it will take a long time for rational enquiry to defeat superstition in this topic!

[1] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614v2[physics.flu-dyn], 2021.
[2] S. R. Yoffe and W. D. McComb. Does intermittency affect the inertial transfer rate in stationary isotropic turbulence? arXiv:2107.09112v1 [physics.flu-dyn], 2021.

# Superstitions in turbulence theory 1: the infinite Re limit of the Navier-Stokes equation is the Euler equation!


I recently posted blogs about the Onsager conjecture [1]; the need to take limits properly (Onsager didn’t!); and the programme at MSRI Berkeley, which referred to the Euler equation as the infinite Reynolds number limit, in a series of posts from 5 – 19 August just past. A later notification about the MSRI programme no longer made that claim; and I speculated (conjectured?) that this might not be unconnected with the appearance of the paper [2] on the arXiv! Now the Isaac Newton Institute is having a new programme on mathematical aspects of turbulence over the first half of next year, and its theme dwells on how the mathematics underlying ‘the proof of the Onsager conjecture … can bring insights into the dissipative anomaly conjecture, a.k.a. Kolmogorov’s zeroth law of turbulence’.

The idea of a dissipation (or dissipative) anomaly goes back to Onsager’s conjecture [1], made in 1949 when turbulence studies were still in their infancy. Although the alternative expression (i.e. Kolmogorov’s zeroth law) has also been used, I have no idea who formulated it; nor of the reasoning behind it. While Kolmogorov may have formulated laws in statistics (I am indebted to Mr Google for this information!), his contributions to turbulence do not qualify for the description ‘physical laws’. However, an irony about the way in which Onsager came to his conclusion about a dissipative anomaly recently dawned on me, and the point of this post is to share it with you.

Onsager’s starting point was Taylor’s (1935) expression for the turbulent dissipation [3], thus: $$\varepsilon = C_{\varepsilon}(R_L) U^3/L,$$ where $\varepsilon$ is the dissipation rate, $U$ is the root mean square velocity, $L$ is the integral scale, and $C_{\varepsilon}$ is a coefficient which may depend on the Reynolds number $R_L$, formed from the integral scale and the rms velocity. In 1953, Batchelor [4] presented some results suggesting that $C_{\varepsilon}$ tended to a constant with increasing Reynolds number. Nevertheless, this expression was the subject of some debate over the years (although its equivalent for shear flows was widely used in both research and practical applications), until Sreenivasan’s survey papers on grid turbulence [5] in 1984 and on direct numerical simulations [6] in 1998 established the characteristic asymptotic shape of this curve. This work had a seminal effect on the subject, and a general account of work in this area can be found in the book [7].
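As a trivial numerical illustration (the function name and the input values here are invented for the example), the coefficient is just a rearrangement of Taylor's expression:

```python
# Sketch: rearranging Taylor's expression eps = C_eps * U**3 / L
# to extract the dimensionless dissipation coefficient C_eps.
def dissipation_coefficient(eps, U, L):
    """C_eps = eps * L / U**3, from Taylor's (1935) dissipation expression."""
    return eps * L / U**3

# Purely illustrative values, not taken from any real measurement:
C_eps = dissipation_coefficient(eps=0.1, U=0.6, L=1.1)
print(C_eps)
```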

However, it was suggested by McComb et al in 2010 [8] that Taylor’s expression for the dissipation, given above, is actually a surrogate for the peak inertial flux $\Pi_{max}$. See the figure below, which is taken from that paper. It shows from DNS that the group $U^3/L$ behaves like $\Pi_{max}$ at all Reynolds numbers, whereas the behaviour of the dissipation is quite different at low Reynolds numbers.

It was further shown [9], using the Karman-Howarth equation and expanding non-dimensional structure functions in inverse powers of the Reynolds number, that this was the case, with the asymptotic behaviour $C_{\varepsilon} \rightarrow C_{\varepsilon,\infty}$ as $R_L \rightarrow \infty$ corresponding to the onset of the Kolmogorov $4/5$ law.
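Schematically, the expansion referred to takes the form (the symbol used here for the coefficient of the leading correction is mine):

```latex
C_{\varepsilon}(R_L) = C_{\varepsilon,\infty} + \frac{C_{L}}{R_L} + O\!\left(R_L^{-2}\right),
```

so that the approach to the asymptote is controlled by the first inverse power of the Reynolds number.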

In other words, when Onsager deduced from Taylor’s expression that the dissipation did not depend on the viscosity, he was actually deducing that the peak inertial flux did not depend on the viscosity. And indeed it doesn’t!

[1] L. Onsager. Statistical Hydrodynamics. Nuovo Cim. Suppl., 6:279, 1949.
[2] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614v2[physics.flu-dyn], 2021.
(N.B. This paper is presently under revision and will be posted again, possibly with a change of title.)
[3] G. I. Taylor. Statistical theory of turbulence. Proc. R. Soc., London, Ser. A, 151:421, 1935.
[4] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 1st edition, 1953.
[5] K. R. Sreenivasan. On the scaling of the turbulence dissipation rate. Phys. Fluids, 27:1048, 1984.
[6] K. R. Sreenivasan. An update on the energy dissipation rate in isotropic turbulence. Phys. Fluids, 10:528, 1998.
[7] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[8] W. David McComb, Arjun Berera, Matthew Salewski, and Sam R. Yoffe. Taylor’s (1935) dissipation surrogate reinterpreted. Phys. Fluids, 22:61704, 2010.
[9] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.

# Peer review: the role of the referee.

In earlier years I used to get the occasional phone call from George Batchelor, at that time the editor of Journal of Fluid Mechanics, asking for suggestions of new referees on the statistical theory of turbulence. To avoid confusion I should point out that by this I mean the theoretical physics approach to the statistical closure problem, pioneered by Bob Kraichnan and Sam Edwards, and carried on by myself and others. For anyone interested, a review of this subject can be found in reference [1] below.

I didn’t find this easy, as there were then (as now) very few people working on this topic. My suggestion that Sam Edwards, although no longer active in this area, could certainly referee papers, was met with little enthusiasm. He was seen as ‘too kind’ or even as ‘soft-hearted’! I wasn’t surprised by this, as Sam had explained his position on refereeing to me and it amounted to: ‘Unless it is arrant nonsense, it should be published.’ In contrast, the refereeing process of the JFM was notoriously tough and this has been generally true in turbulence research, and remains so to this day. Indeed this is the general perception in the subject, and to quote Sam again, he once referred to ‘the cut-throat nature of refereeing in turbulence’. I suspect it was this perception which put him off continuing in the subject.

I find myself somewhere between the extremes, perhaps because this is a matter of culture and I have been both engineer and physicist. However, while I respect the professionalism of the engineering approach, at the same time I think it can be taken too far. A typical experience for me (and I believe also for many others) is that a technical discussion can be carried on between the authors and individual referees which is never seen by others in the field. In my view these discussions should be published as an appendix to the paper (assuming of course that the paper is actually accepted for publication). I also think that where the authors have a track record there should be a presumption that the paper should be published. In other words, the onus should be on the referee to come up with definite and reasoned objections, as opposed to the vague prejudiced waffle which is so often the case!

Another problem that arises often in the turbulence community, is the desire of some referees to rewrite the paper. Or rather to force the author(s) to rewrite the paper to the referee’s prescription. It is of course legitimate to point out aspects which are less clear than they might be, but it verges on arrogance to tell the author how to do it. Also, with electronic publication now universal the idea of saving paper/printing costs is no longer so relevant. Papers can easily be as long as they need to be.

I have been on the receiving end of this behaviour on occasion, but nothing compared to something I was told recently, when a leading member of the community was forced to modify his paper four times, despite his own judgement that the changes were unnecessary and his protests to that effect to the editor. Someone else I know summed it up as ‘lazy editors and biased referees’. He had come from particle physics, where his papers had generally been published ‘as submitted’, to fluid mechanics (in the context of climatology), where there was invariably a battle over changes required by the referee. Of course I trust it is clear that I am not referring to the minor changes that we should all be happy to make, but to major structural changes which may in the end be no more than one person’s opinion against another’s. For these two individuals it was the failure of the editors to intervene that caused the problems.

So, it really comes down to the editor in the end. It is their job to protect their referees from unfair attack, on the one hand; and to protect their authors from unfair refereeing, on the other. As I have pointed out elsewhere, in practice what breaks this symmetry is that it is more difficult for the editor to get referees than it is to get prospective authors; who, after all, are queuing up to apply!

[1] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.

# Peer review: The role of the author.

I have previously posted on the role of the editor (see my blog on 09/07/2020) and had intended to go on to discuss the role of the referee. However, before doing that it occurred to me that it might be helpful to first discuss the role of the author. Of course probably every journal lays down rules for author and referee alike: but who pays any attention to these? (Just joking! Although, life is short and if you are having to try more than one journal, then the fact that these detailed rules vary from one journal to another can add to the labour involved.) But what I have in mind are the unwritten rules. These are generally taken for granted and perhaps should be spelled out occasionally in order to ensure that everyone is on the same wavelength.

One basic rule for authors is that they should provide some basic introduction to the problem, discuss previous work, and show how their own new work advances the situation. This is very much in our own interest, as it is a key part of demonstrating to our co-workers that our paper is worth reading. However, as I found out at the beginning of my career, this can be a fraught process. For instance, writing the introduction to a paper on the statistical theory of turbulence was perfectly straightforward, but in the case of an attempted theory of drag reduction by additives it turned out to be quite another matter.

My attention was drawn to this problem when I was in the Theoretical Physics Division at Harwell. At first this involved polymer molecules; but, when I looked into it further, I found out that there was a parallel activity based on the use of macroscopic fibres such as wood-pulp or rayon. This latter activity generally seemed to have originated within the relevant industry, and was often carried on without reference to the better known use of polymer additives.

I found the fibre problem more attractive, because it seemed easier to think about a macroscopic fibre as a linear object which could only have two-dimensional interactions with a three-dimensional eddy of comparable size. If one added in the possibility of elastic deformation of the fibre by the fluid, then one could think in terms of a non-Newtonian relationship between stress and rate of strain for the composite fluid which could act as a model for the fibre suspension. On the assumption that the fibres would tend to be aligned (on average) with the mean flow, physical reasoning led to an expression for a nonlinear correction to the usual Newtonian viscosity, which could be further decomposed into the difference between two-dimensional and three-dimensional inertial transfer terms, both of which represented reversals of the usual energy cascade. This theory offered a qualitative explanation of the changes in turbulent intensities which had been observed in fibre suspensions and was published as a letter in Nature [1].

So far so good! The problems arose when I extended this work and submitted it to JFM. All three referees were unanimous in rejecting the paper. Part of the trouble seemed to be that the work was carried out in spectral space. An account of this can be found in my blog of 20/02/2020, including the infamous description of my analysis as ‘the usual wavenumber murder’! But, as was kindly pointed out to me by George Batchelor, the problem was that I was ‘treading on the toes’ of those who worked in this field (i.e. microrheology). This editorial advice was helpful; because, from my background in physics, I knew very little about fluid mechanics and was happily unaware that the subject of microrheology even existed.

Of course, in the spirit of ‘poacher turned gamekeeper’ I ultimately became very keen on making sure that any paper of mine had a proper literature survey. I owe this mainly to my PhD students, who have always been very assiduous in tracking down references, and who have set me a good example in this respect!

Nowadays, in view of the great increase in publications, I tend to take a more tolerant attitude to others who fail to cite relevant papers. But I’m not sure that this is really justified. After all, although we have had a positive explosion of publications in fluid mechanics, most of this is in practical applications. The amount of truly fundamental work is still quite small. And we do have the power of Google to help us find anything that is relevant to what we are currently publishing. I must say that I am rather sceptical about papers that purport to present applications of theoretical physics to turbulence yet do not mention the name ‘Kraichnan’. I suspect them of being fake theories. This is something that I may expand on sometime.

For those who are interested, a further account of developments in the study of drag reduction may be found in my book cited as [2] below.

[1] W. D. McComb. The turbulent dynamics of an elastic fibre suspension: a mechanism for drag reduction. Nature Physical Science, 241(110):117-118, 1973.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.

# The exactness of mathematics and the inexactness of physics.


This post was prompted by something that came up in a previous one (i.e. see my blog on 12 August 2021), where I commented on the fact that an anonymous referee did not know what to make of an asymptotic curve. The obvious conclusion from this curve, for a physicist, was that the system had evolved! There was no point in worrying about the precise value of the Reynolds number. That is a matter of agreeing a criterion if one needs to fix a specific value. But evidently the ratio shown was constant within the resolution limits of the measurements of the system; and this is the key point. Everything in physics comes down to experimental error: any meaningful comparison (theory with experiment, or one experiment with another) is subject to an experimental error which is inherent. Strictly, one should always quote the error, because it is never zero.

In everyday life, there are of course many practical expedients. For instance, radioactivity takes in principle an infinite amount of time to decay completely, so in practice radioisotopes are characterised by their half-life. So the manufacturers of smoke alarms can tell you when to replace your alarm, as they know the half-life of the radioactive source used in it. In acoustics or diffusion processes or electromagnetism, exponential decays are commonplace, and it is usual to introduce a relaxation time or length, corresponding to when/where the quantity of interest has fallen to $1/e$ of its initial value.
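As a small illustration of the half-life convention (the function names and the decay constant below are invented for the example):

```python
import math

# Exponential decay N(t) = N0 * exp(-lam * t); the half-life is
# t_half = ln(2)/lam, and the relaxation (1/e) time is tau = 1/lam.
def half_life(lam):
    return math.log(2) / lam

def remaining_fraction(lam, t):
    return math.exp(-lam * t)

lam = 0.05  # illustrative decay constant (per year, say)
print(half_life(lam))                            # time at which half remains
print(remaining_fraction(lam, half_life(lam)))   # ~0.5 by construction
```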

In fluid mechanics, the concept of a viscous boundary layer on a solid surface is of great utility in reconciling the practical consequences of a flow (such as friction drag) with the elegance and solubility of theoretical hydromechanics. The boundary layer builds up in thickness in the stream-wise direction as vorticity created at the solid surface diffuses outwards. But how do we define that thickness? A reasonable criterion is to choose the point where the velocity in the boundary layer is approximately equal to the free-stream velocity. From my dim memory of teaching this subject several decades ago, a criterion of $u_1(x_2) = 0.99\,U_1$, where $U_1$ is the constant free-stream velocity, was adequate for pedagogic purposes.

An interesting partial exception arises in solid state physics, when dealing with crystal lattices. The establishment of the lattice parameters is of course subject to the usual caveats about experimental error, but for statistical physics lattices are countable systems. So if one is carrying out renormalization group calculations (e.g. see [1]) then one is coarse-graining the description by replacing the unit cell, of side length $a$, by some larger (renormalized) unit cell. In wavenumber (momentum) space, this means we start from a maximum wavenumber $k_{max}=2\pi/a$ and average out a band of wavenumber modes $k_1 \leq k \leq k_0$, where $k_0=k_{max}$. You can see where the countable aspect comes in, and of course the initial wavenumber is precisely defined (although its precise value is subject to the error made in determining the lattice constant).

When extending these ideas to turbulence, the problem of defining the maximum wavenumber is not solved so easily. Originally people (myself included) used the Kolmogorov dissipation wavenumber, but this is not necessarily the maximum excited wavenumber in turbulence. In 1985 I introduced a criterion which was rather like a boundary-layer thickness, adapting the definition of the dissipation rate, thus: $\varepsilon = \int^{\infty}_0 \, 2\nu_0 k^2 E(k) dk \simeq \int^{k_{max}}_0 \, 2\nu_0 k^2 E(k) dk,$ where $\nu_0$ is the molecular viscosity and $E(k)$ is the energy spectrum [2]. When I first started using this, physicists found it odd, because they were used to the more precise lattice case. I should mention for completeness that it is also necessary to use a non-trivial conditional average [3].
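The criterion can be sketched numerically as follows (the model spectrum and the 99% threshold here are my own choices, purely for illustration): choose an effective $k_{max}$ such that the truncated dissipation integral captures almost all of the total.

```python
import math

# Sketch of the 1985 criterion: find k_max such that
#   int_0^{k_max} 2*nu0*k^2*E(k) dk  ~=  fraction * eps,
# where eps is the total dissipation integral.
def effective_kmax(E, nu0, k_upper=50.0, n=20000, fraction=0.99):
    dk = k_upper / n
    f = [2.0 * nu0 * (i * dk) ** 2 * E(i * dk) for i in range(n + 1)]
    # Trapezoidal estimate of the total dissipation:
    total = sum(0.5 * (f[i] + f[i + 1]) * dk for i in range(n))
    running, target = 0.0, fraction * total
    for i in range(n):
        running += 0.5 * (f[i] + f[i + 1]) * dk
        if running >= target:
            return (i + 1) * dk
    return k_upper

# Illustrative model spectrum with an exponential tail:
E = lambda k: k ** 4 * math.exp(-2.0 * k)
print(effective_kmax(E, nu0=1.0))
```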

Recently there has been growing interest in these matters by those who study the philosophy of maths and science. For instance, van Wierst [4] notes that in the theory of critical phenomena, phase transitions require an infinite system, whereas in real life they take place in finite (and sometimes quite small!) systems. She argues that this paradox can be resolved by the introduction of ‘constructive mathematics’, but my view is that it can be adequately resolved by the concept of scale-invariance. Which brings us back to the infinite Reynolds number limit for turbulence. But, for the moment, I have said enough on that topic in previous posts, and will not expand on it here.

[1] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.
[2] W. D. McComb. Application of Renormalization Group methods to the subgrid modelling problem. In U. Schumann and R. Friedrich, editors, Direct and Large Eddy Simulation of Turbulence, pages 67-81. Vieweg, 1986.
[3] W. D. McComb and A. G. Watt. Conditional averaging procedure for the elimination of the small-scale modes from incompressible fluid turbulence at high Reynolds numbers. Phys. Rev. Lett., 65(26):3281-3284, 1990.
[4] Pauline van Wierst. The paradox of phase transitions in the light of constructive mathematics. Synthese, 196:1863, 2019.

# Nightmare on Buccleuch Street.

Staycation post No 4. I will be out of the virtual office until 30 August.

I haven’t been into the university since the pandemic began but recently I dreamt that I was in the university library, in the section where magazines and journals are kept. In this dream, I was sitting at one of the low tables reading a magazine and two much younger men were also sitting there, in a suitably, socially distanced way. As they were unknown to me, I will call them A and B [1]. A was leafing through The Physics of Fluids while B was staring at one particular page of a tabloid newspaper.

After a while, A spoke. ‘Have you seen that interesting article about constraints on the scaling exponents in the inertial range?’

B shakes his head and goes on studying his tabloid. A continues. ‘These guys use Hölder inequalities applied to the structure functions and then to the generalised structure functions; and end up with a condition relating the exponent for $S_2$ to the exponent for $S_3$. Now, if we assume that the exponent for $S_3$ is equal to $1$, then it follows that the exponent for $S_2$ is equal to $2/3$. This is exciting. Most people would agree with the first of these, but not the second.’

B continues to stare at his newspaper and makes no response. With a slight note of desperation in his voice, A goes on. ‘But don’t you see, this could fit in nicely with Lundgren’s matched asymptotic expansions analysis. It could also fit in with that guy’s blog about the K62 correction being unphysical. It looks like old Kolmogorov was right all the time … back in 1941. Aren’t you interested, at all?’

At last B looks up. ‘No, why should I be. I don’t use structure functions or spectra in my work. And you will go on using Kolmogorov scaling as you have always done, because it works. So why are you so excited?’

For a moment A just sits there. Then he gets up and puts the journal back in the rack. He stands in silence for a few moments. Then he says: ‘You know, I keep feeling it’s Thursday.’

For the first time B looks animated. ‘That’s funny so do I. Let’s go and have a drink.’

Exeunt omnes. It was only a dream and obviously couldn’t happen in real life. The paper to which A was referring is cited below as [2].

[1] There is no C in this story. See my post of 9 July 2020.
[2] L. Djenidi, R. A. Antonia, and S. L. Tang. Mathematical constraints on the scaling exponents in the inertial range of fluid turbulence. Phys. Fluids, 33:031703, 2021.

# Why am I so concerned about Onsager’s so-called conjecture?

Staycation post No 3. I will be out of the virtual office until 30 August.

In recent years, Onsager’s (1949) paper on turbulence has been rediscovered and its eccentricities promoted enthusiastically, despite the fact that they are at odds with much well-established research in turbulence, beginning with Batchelor, Kraichnan, Edwards, and so on. In particular, a bizarre notion has taken hold that the Euler equation corresponds to the zero-viscosity limit of the Navier-Stokes equations and can be made dissipative, in defiance of the basic physics, by some mysterious alteration of the mathematics. The previous two posts refer to this.

I have been intending to write about this for some time, but the present paper [1] was prompted by an email that I received late in 2019 from MSRI, Berkeley. This was an advance announcement of a Program: ‘Mathematical problems in fluid dynamics’, to take place in the first half of 2021. I quote from the description as follows:

‘The fundamental equations in this area are the well-known Euler equations for inviscid fluids and the Navier-Stokes equations for the (sic) viscous fluids. Relating the two is the problem of the zero-viscosity limit and its connection to the phenomena of turbulence.’

The second sentence is nonsense and runs counter to all the conventions of fluid dynamics, where it has long been known that the relationship between the two equations is obtained by setting the viscosity equal to zero. The infinite Reynolds number limit, in contrast, is observed as an asymptotic behaviour of the Navier-Stokes equation; which, even at high Reynolds numbers, remains the Navier-Stokes equation.

I was appalled by the thought of young mathematicians being taught such unrepresentative and incorrect material. This is what provided my immediate motivation for writing the present paper. The first version of this paper was put on the arXiv on 12 December 2020.

In January of this year, I received from MSRI the final notification of this program. The wording had changed, and after some unexceptional statements about the equations of motion it read:

‘Open problems and connections to related branches of mathematics will be discussed, including the phenomena of turbulence and the zero-viscosity limit. Both theoretical and numerical aspects of these topics will be considered.’

Perhaps it is just a coincidence that this change should follow the arXiv publication of [1], but at least their statement about their course is no longer manifestly false; although much still depends on what was actually taught. It may be noted that Figure 2 of [1] (also see the previous post) shows the onset of scale invariance and, in effect, the zero-viscosity limit, in a direct numerical simulation at a Taylor-Reynolds number of about one hundred. This is the physical infinite Reynolds number limit as it occurs in real fluids.

Another aspect of the influence of Onsager is the use of the term dissipation anomaly, which is used instead of what some call the dissipation law. If one criticises the term, the mathematicians seem to believe that one is denying the existence of the effect. Not so. At Edinburgh we have worked on establishing the existence of the dissipation law and have elucidated it as arising from the Richardson-Kolmogorov picture [2], [3]. It is a real physical effect and there is nothing anomalous about it.

[1] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614v2[physics.flu-dyn], 2021.
[2] W. David McComb, Arjun Berera, Matthew Salewski, and Sam R. Yoffe. Taylor’s (1935) dissipation surrogate reinterpreted. Phys. Fluids, 22:61704, 2010.

[3] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.

# That’s the giddy limit!

Staycation post No 2. I will be out of the virtual office until 30 August.

The expression above was still in use when I was young, and vestiges of its use linger on even today. It referred, often jocularly, to any behaviour which was deemed unacceptable. Why giddy? I’m afraid that the reference books are silent on that. However, I have encountered examples of mathematical limits which seemed to qualify for the adjective.

Shortly before I retired, I found myself teaching a mathematics course to third-year physics students. The purpose of this course was to try to bring our students up to speed in maths, after the mathematics lecturers had done their best in the previous two years. I suppose that it had a remedial aspect, and at that time the talk was all of the ‘math problem’. One example of a ‘giddy’ limit, which sticks in my mind, arose when I was marking class exam papers. The question asked the students to sketch the function $\mathrm{sinc}\,\nu = \sin \nu / \nu$. This required them to work out its value at $\nu =0$, where of course direct substitution results in an indeterminate form. I need hardly say that they had to use either a Taylor series expansion of $\sin \nu$ or make use of l’Hôpital’s rule to reveal the correct limiting value, which is unity. Or of course they could just sketch it and infer the limiting behaviour by eye.
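The correct limiting behaviour can be made explicit in a few lines (a sketch; the treatment of the origin follows the standard convention):

```python
import math

# The limiting value at the origin: sin(x) = x - x**3/6 + ..., so
# sin(x)/x = 1 - x**2/6 + ... -> 1 as x -> 0 (l'Hopital gives the same).
def sinc(x):
    if x == 0.0:
        return 1.0  # the limit, not an indeterminate '0'
    return math.sin(x) / x

print(sinc(0.0))   # 1.0
print(sinc(1e-4))  # just below 1, as the Taylor series predicts
```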

One person did this beautifully, with all the zeros in the right places and the central peak heading up to the value one on both sides. It was as the $y$-axis was approached that giddiness seemed to set in, and the sketched curve then shot down to zero on both sides. The student then proudly declared it to be an indeterminate form. One which just happened to be zero! This sudden abandonment of all reason was quite baffling, and I never understood it.

However, I recently saw comments by an anonymous referee which seemed to come into a similar category. These were directed at Figure 2 in reference [1], which was intended to demonstrate that the physical infinite Reynolds number limit is determined by the onset of scale-invariance. We show this below. Scale-invariance in this context is defined as the condition that the maximum rate of inertial transfer $\varepsilon_T$ becomes equal to the viscous dissipation $\varepsilon$. As we were originally studying the dependence of the dimensionless dissipation on the Taylor-Reynolds number, we actually plot the ratio $\varepsilon / \varepsilon_T$, which decreases towards unity, and this indicates the onset of scale-invariance.

The referee looked at the figure and asked: how is the onset of scale-invariance defined? Is the onset placed at $R_{\lambda}=50,\,100,\,150$?

This seems to me to verge on the childish. Does he have no familiarity with the intersection between a mathematically asymptotic result and a real physical system? Has he never met viscous boundary layers, exponential decay of sound or other radiation? The answer in all these cases is set by the resolution of the physical measuring system. Once changes are too small to be measurable, then the asymptote has been reached. The curve that we show in the figure, would go on at a constant level no matter how much one increased the Reynolds number.
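The point about measurement resolution can be made concrete (the data points and the tolerance below are invented purely for illustration):

```python
# The asymptote is deemed reached once the ratio eps/eps_T is within the
# measurement resolution (the tolerance) of its limiting value of unity.
def onset_reynolds(ratios, tolerance):
    """ratios: (R_lambda, eps/eps_T) pairs in order of increasing R_lambda."""
    for R, ratio in ratios:
        if abs(ratio - 1.0) <= tolerance:
            return R
    return None  # onset not yet reached at these Reynolds numbers

# Made-up data points with the qualitative shape of the curve in the figure:
data = [(25, 1.40), (50, 1.12), (100, 1.02), (150, 1.005), (200, 1.004)]
print(onset_reynolds(data, tolerance=0.03))  # -> 100 at this resolution
```

A finer resolution (smaller tolerance) simply pushes the nominal onset to a higher Reynolds number; the asymptote itself does not move.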

The lesson to be drawn from this is that there are no further qualitative changes in the system as you increase the Reynolds number, and this is how real fluids behave. In the next blog we will consider the motivation for the research reported in [1].

[1] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614v2[physics.flu-dyn], 2021.

# When is a conjecture not a conjecture?

Staycation post No 1. I will be out of the virtual office until 30 August.

That sounds like the sort of riddle I used to hear in childhood. For instance, when is a door not a door? The answer was: when it’s ajar! [1] Well, at least we all know what a door is, so let us begin with what a conjecture actually is.

According to my dictionary, a conjecture is simply a guess. But in mathematics it is somehow more than that. Essentially, the idea is that mathematicians can be guided by their experience to postulate that something they know to be true under particular circumstances is in fact true under all possible or relevant circumstances. If they can prove it, then their conjecture becomes a theorem.

The question then arises: what is a conjecture in physics? And if you can demonstrate its truth by measurement or reasoned argument, does it become a theory?

Let us take as an example a system such as an electrolyte or plasma containing many charged particles. The particles interact pairwise through the Coulomb potential and as the Coulomb potential is long-range this presents a many-body problem. What happens in practice is that a form of renormalization takes place, and the Coulomb potential due to any one electron is replaced by a potential which falls off more rapidly due to the screening effect of the cloud of particles surrounding it. A very simple introduction to this idea (which is known as the Debye-Huckel theory) can be found in Section 1.2.1 of the book cited as reference [2] below.
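The screening idea can be made concrete with a minimal numerical sketch (my own illustration, not taken from [2]; the units and parameter values are arbitrary): the bare potential $\sim 1/r$ acquires a multiplying factor $e^{-r/\lambda_D}$, where $\lambda_D$ is the Debye length.

```python
import math

def coulomb(r, q=1.0):
    """Bare Coulomb potential of a charge q (arbitrary units)."""
    return q / r

def screened(r, q=1.0, debye_length=1.0):
    """Debye-Hueckel screened potential: the surrounding cloud of
    charges multiplies the bare potential by exp(-r / lambda_D)."""
    return (q / r) * math.exp(-r / debye_length)

# At a separation of 5 Debye lengths the screened potential is
# suppressed by a factor exp(-5) relative to the bare one.
r = 5.0
suppression = screened(r) / coulomb(r)
print(f"suppression at r = 5 lambda_D: {suppression:.4f}")
```

The point of the sketch is only the qualitative one made in the text: the effective, renormalized interaction falls off much faster than the bare long-range one.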

If we take the case of the turbulence cascade, the Fourier wavenumber modes provide the degrees of freedom. Then, instead of pairwise interactions, we have the famous triad interactions, each and every one of which conserves energy. If for simplicity we consider a periodic box, then the mean flux of energy from low wavenumbers to high can be written as the sum of all the individual mean triadic interactions. As in principle all modes are coupled, this is also a many-body problem and one can expect some form of renormalization to take place. In some simple circumstances this can be interpreted as a renormalized viscosity (the effective viscosity) which is very much larger than the molecular viscosity. These ideas date back to the late 19th century and are the earliest example of renormalization (although they did not use this term which came much later on, around the mid-20th century).
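In symbols (a schematic sketch of mine in standard notation, not taken from a specific reference), the mean flux of energy through wavenumber $\kappa$ and the global conservation property read:

```latex
% Schematic: the mean energy flux as an integral over triadic transfers.
% Each triad k + j + l = 0 conserves energy individually.
\begin{equation}
  \Pi(\kappa) = \int_{\kappa}^{\infty} dk\, T(k),
\end{equation}
% where the transfer spectrum T(k) is the sum of the triadic
% interactions; since each triad is conservative, the nonlinear term
% merely redistributes energy among modes, so that globally
\begin{equation}
  \int_{0}^{\infty} dk\, T(k) = 0.
\end{equation}
```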

Now let us consider what happens as we progressively increase the Reynolds number. For the utmost simplicity we will restrict our attention to forced, stationary isotropic turbulence. Then, if we hold the rate of energy input into the system constant and decrease the viscosity progressively, this increases the Reynolds number at constant dissipation rate. It also extends the range of excited wavenumbers in the system. The result is a form of scale-invariance, in which the flux through wavenumbers is independent of wavenumber, leading to the dissipation law: the scaled dissipation rate is independent of the viscosity, as a rigorous asymptotic result [3]. It should perhaps be emphasised that this asymptotic behaviour is the infinite Reynolds number limit; but, from a practical point of view, we find that subsequent variation becomes too small to detect at Taylor-Reynolds numbers of a few hundred, and thereafter may be treated as constant. We will return to this point in the next post, along with an illustration.
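The point about the range of excited wavenumbers can be made quantitative with the Kolmogorov dissipation wavenumber $k_d = (\varepsilon/\nu^3)^{1/4}$. A sketch of mine, with arbitrary illustrative values, showing how $k_d$ grows as the viscosity is reduced at fixed dissipation rate:

```python
# Sketch: at fixed dissipation rate, decreasing the viscosity extends
# the range of excited wavenumbers, since k_d = (eps / nu^3)^(1/4).
# The values of eps and nu below are arbitrary illustrative choices.

def dissipation_wavenumber(eps, nu):
    """Kolmogorov dissipation wavenumber k_d = (eps / nu^3)^(1/4)."""
    return (eps / nu**3) ** 0.25

eps = 1.0  # constant rate of energy input = dissipation rate
for nu in (1e-2, 1e-3, 1e-4):
    print(f"nu = {nu:.0e}  ->  k_d = {dissipation_wavenumber(eps, nu):.1f}")
# Each tenfold decrease in nu raises k_d by a factor 10^(3/4) ~ 5.6.
```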

Meanwhile, back in real space, velocity gradients are becoming steeper as the Reynolds number increases, and this aspect disturbed Onsager [4] (see also the review of this paper in the context of Onsager’s life and work [5]). In fact he concluded that the infinite Reynolds number limit was the same as setting the viscosity equal to zero. In his view, the resulting Euler’s equation could still account for the dissipation in terms of singular behaviour. But, it has to be said that, in the absence of viscosity, there is no transfer of macroscopic kinetic energy into heat (i.e. microscopic kinetic energy). I have seen some references to pseudo-dissipation recently, so there is perhaps a growing awareness that Onsager’s conjecture needs further critical thought.
Onsager’s paper concludes with the sentence: ‘The detailed conservation of energy (i.e. the global conservation law of the nonlinear term) does not imply conservation of the total energy if the total number of steps in the cascade is infinite and the double sum … converges only conditionally.’ The italicised parenthesis is mine as Onsager referred here to one of his equation numbers. However this is merely an unsupported assertion which is incorrect on physical grounds because:
1. The number of steps is never infinite in a real physical flow.
2. The individual interactions are conservative so it is not clear how mere summation can lead to overall non-conservation.
3. The physical process involves a renormalization which means that there is a well-defined physical infinite Reynolds number limit at quite moderate Reynolds numbers.
It is totally unclear to me what mathematical justification there can be for this statement; and discussions of it that I have seen in the literature seem to me to be unsound on physical grounds. I shall return to these points in future blogs.

[1] That is, ‘a jar’, geddit? Oh dear, I suppose I am getting into holiday mood!
[2] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.
[3] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.
[4] L. Onsager. Statistical Hydrodynamics. Nuovo Cim. Suppl., 6:279, 1949.
[5] G. L. Eyink and K. R. Sreenivasan. Onsager and the Theory of Hydrodynamic Turbulence. Rev. Mod. Phys., 78:87, 2006.

# How do we identify the presence of turbulence?


In 1971, when I began as a lecturer in Engineering Science at Edinburgh, my degree in physics provided me with no basis for teaching fluid dynamics. I had met the concept of the convective derivative in statistical mechanics, as part of the derivation of the Liouville equation, and that was about it. And of course the turbulence theory of my PhD was part of what we now call statistical field theory. Towards the end of autumn term, I was due to take over the final-year fluids course, but fortunately a research student who worked as a lab demonstrator for me had previously taken the course and kindly lent me his copy of the lecture notes. However, in my first year, I was never more than one lecture ahead of the students!

This grounding in the subject was reinforced by practical experience, when I began doing experimental work on drag reduction by additives and on particle diffusion. It also allowed me to recover quickly from an initial puzzlement, when I saw a paper in JFM which proposed that the occurrence of streamwise vorticity could be taken as a signal of turbulence in duct flow.

Later on, I learned that this idea could be extended to give a plausible picture of the turbulent bursting process, and a discussion can be found in Section 11.4.3 of my book [1], where the development of $\Lambda$ vortices is illustrated in Fig. 11.1. In the book, this is preceded by a treatment of the boundary layer on a flat plate in Section 1.4, which can help us to understand the basic idea as follows. Suppose we have a fluid moving with constant velocity $U_1$, incident on a flat plate lying in the ($x_1,x_3$) plane with its leading edge at $x_1=0$. Vorticity is generated at this point due to the no-slip boundary condition, and diffuses out normal to the plate in the $x_2$ direction, resulting in a velocity field of the form $u_1(x_2)$, in the boundary layer. We can visualize the sense of the vorticity vector by imagining the effect of a small portion of the fluid becoming solidified. That part nearest the plate will slow down, the ‘solid body’ will rotate, and the spin vector will point in the $x_3$ direction. This is the only component of vorticity in the system.

The occurrence of vorticity in the other two directions must be a consequence of instability and almost certainly begins with vorticity building up in the $x_1$ direction due to edge effects. That is, in practice, the plate must be of finite extent in the cross-stream or $x_3$ direction. A turbulence transition could not occur if the plate (as normally assumed for pedagogic purposes) were of infinite extent. This provides an unequivocal criterion for the occurrence of the transition to turbulence, but there is still the question of when the turbulence is in some sense well-developed. And of course other flows may require other criteria.

The question of whether a flow is turbulent or not became something of an issue in the 1980s/90s, when there was a growing interest in applying Renormalization Group (RG) to turbulence. The pioneering work on applying RG to randomly stirred fluid motion was reported by Forster, Nelson and Stephen [2] in 1976, and you should note from the title of their first paper that the word ‘turbulence’ does not appear. Their work was restricted to showing that there was a fixed point of the RG transformations in the limit of zero wavenumbers (i.e. ‘large wavelengths’).

The main drive in turbulence research is always towards applications, and inevitably pressure developed to seek ways of extending the work of Forster et al. to turbulence. In the process a distinction grew up between ‘stirred fluid motion’ and so-called ‘Navier-Stokes turbulence’. The latter should be described by the spectral energy balance known as the Lin equation, whereas the former just reflects its Gaussian forcing. Nowadays, in physics, the distinction has settled down to ‘stirred hydrodynamics’ and just plain turbulence!

The difficulty of defining turbulence in a concise way remains, but some light can be shed on these earlier controversies by considering a more recent discovery that we made at Edinburgh. This was the result that a dynamical system consisting of the Navier-Stokes equations, forced by the combination of an initial Gaussian field and a negative damping term, will at very low Reynolds numbers become non-turbulent and take the form of a Beltrami flow [3]. In this paper, we emphasised that at early times the transfer spectrum $T(k,t)$ has the behaviour typically found in simulations of isotropic turbulence but at later times tends to zero. At the same time, the energy spectrum $E(k,t)$ tends to a unimodal spectrum at $k=1$. An interesting point is that the fixed point of Forster et al. at $k \rightarrow 0$ is cut off by our lattice, so that we observe a Beltrami flow instead.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] D. Forster, D. R. Nelson, and M. J. Stephen. Long-time tails and the large-eddy behaviour of a randomly stirred fluid. Phys. Rev. Lett., 36(15):867-869, 1976.
[3] W. D. McComb, M. F. Linkmann, A. Berera, S. R. Yoffe, and B. Jankauskas. Self-organization and transition to turbulence in isotropic fluid motion driven by negative damping at low wavenumbers. J. Phys. A Math. Theor., 48:25FT01, 2015.

# Are Kraichnan’s papers difficult to read? Part 2: The DIA.


In 2008, or thereabouts, I took part in a small conference at the Isaac Newton Institute and gave a talk on the LET theory, its relationship to DIA, and how both theories could be understood in terms of their relationship to Quasi-normality. During my talk, I was interrupted by someone in the audience, who said that I was wrong in discussing DIA as if Kraichnan’s perturbation theory was the same as that of Wyld. I disagreed, and we had a short exchange of the kind ‘Yes you did! No, I didn’t!’, and the matter was left unresolved.

Sometime afterwards, I refreshed my memory of these matters and realised that I was wrong. Kraichnan’s seminal paper [1] is not easy to understand, but he was claiming to be introducing a new type of perturbation theory, and that undoubtedly differed from Wyld’s subsequent field-theoretic approach [2]. In his book on the subject, Leslie had simply chickened out and used the Wyld analysis [3]. Many of us had then followed in his tracks, but over the years (decades!) I had simply forgotten that fact. It was salutary to be reminded of it, and I duly said something about it in my later book on turbulence [4].

Again this draws attention to the danger of relying uncritically on secondary sources, but an interesting point emerged. Kraichnan made what was essentially a mean-field approximation in his theory. The fact that Wyld could show that the DIA gave identical results to the same order of truncation of conventional perturbation theory tells us that the mean-field approximation for the response function was justified; because the method of renormalization was the same for both approaches. This is of further interest, in that the recent formal derivation of the local energy-transfer (LET) theory also relies on a mean-field approximation involving the response function [5], although this is defined in a completely different way from that in DIA.

Among the select few who actually have got to grips with the new perturbation theory in [1], are my student Matthew Salewski, who did that as a preliminary to the resolution of the apparent differences between formalisms [6]; and S. Kida who revisited DIA in order to derive a Lagrangian theory e.g. see reference [7].

As regards the question which heads this post, we can leave the last word with the man himself. Kraichnan told me that on one occasion a referee had complained to him: ‘Why are your papers so difficult to read?’ and he had replied: ‘If you think they are hard to read, have you considered how difficult they must be to write?’.

[1] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[2] H. W. Wyld Jr. Formulation of the theory of turbulence in an incompressible fluid. Ann. Phys., 14:143, 1961.
[3] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.
[4] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[5] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[6] A. Berera, M. Salewski, and W. D. McComb. Eulerian Field-Theoretic Closure Formalisms for Fluid Turbulence. Phys. Rev. E, 87:013007-1-25, 2013.
[7] S. Kida and S. Goto. A Lagrangian direct-interaction approximation for homogeneous isotropic turbulence. J. Fluid Mech., 345:307-345, 1997.

# Are Kraichnan’s papers difficult to read? Part 1: Galilean Invariance


When I was first at Edinburgh, in the early 1970s, I gave some informal talks on turbulence theory. One of my colleagues became sufficiently interested to start doing some reading on the subject. Shortly afterwards he came up to me at coffee time and said. ‘Are all Kraichnan’s papers as difficult to understand as this one?’ The paper which he was brandishing at me was Kraichnan’s seminal 1959 paper which launched the direct interaction approximation (DIA) [1]. I had to admit that Kraichnan’s papers were in general pretty difficult to read; and I think that my colleague gave up on the idea. Shortly afterwards, Leslie’s book came out and this was very largely devoted to making Kraichnan’s work more accessible [2]; but I think that was too late for one disillusioned individual.

Recently I was reading a paper (might have been one of Kraichnan’s) and I was brought up short by something like ‘… and the variance takes the form:’ followed by a displayed mathematical expression. So it was rather like one half of an equation, with the other (first) half being in words in the text. So, I found that I had to remember what the variance was in this particular context, and then complete the equation in my mind. If I had been writing this, I would have used a symbol for the variance (even if just its definition as $\langle u^2 \rangle$) and displayed an actual equation. But what this reminded me of was my own diagnosis of the difficulty with Kraichnan’s style. I suspected that he would get tired of always writing in maths, and would feel the need for some variety. The trouble was that sometimes he would put the important bits in words, with a corresponding loss of conciseness and precision. As a result there was a temptation to rely on secondary sources such as Leslie’s book [2] or Orszag’s review article [3]; and I was by no means the only one to succumb to this temptation!

The fact that it could be unwise to do so emerged when we produced a paper on calculations of the LET theory (compared with DIA) and submitted it to the JFM [4]. We discussed the idea of random Galilean invariance (RGI) and argued that its averaging process violated the ergodic principle.

We set out the procedure of random Galilean transformation as follows. Consider a velocity field $\mathbf{u}(\mathbf{x},t)$ in a frame of reference $S$. Suppose that we have a set of reference frames $\{S_0,\,S_1,\,S_2,\, \dots\}$, moving with velocities $\{C_0,\,C_1,\,C_2,\,\dots\}$, where the shift velocities are all constant and the sub-ensemble is defined by the probability distribution $P(C)$ of the shift velocities. In practice, Kraichnan took this to be a normal or Gaussian distribution, and averaged with respect to $C$ as well as with respect to the velocity field.
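The effect of this averaging can be checked numerically on an oscillatory factor $\cos(kC\tau)$, the kind of term that arises in two-time correlations under a constant Galilean shift: for Gaussian $P(C)$ with variance $c^2$, the characteristic function gives the average as $\exp(-k^2c^2\tau^2/2)$. A Monte Carlo sketch of mine, with arbitrary illustrative values:

```python
import math
import random

random.seed(1)

k, c, tau = 2.0, 1.0, 0.5  # wavenumber, rms shift velocity, time difference
n = 200_000

# Average cos(k*C*tau) over a Gaussian sub-ensemble of shift velocities C.
mc_average = sum(math.cos(k * random.gauss(0.0, c) * tau)
                 for _ in range(n)) / n

# Characteristic function of a zero-mean Gaussian with variance c^2.
exact = math.exp(-0.5 * (k * c * tau) ** 2)
print(f"Monte Carlo: {mc_average:.4f}   exact: {exact:.4f}")
```

The Gaussian average thus damps the oscillatory factor exponentially in $(k\tau)^2$, which is why the choice of distribution for the shift velocities matters to the theory.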

However, Kraichnan’s response to our paper was ‘that’s not what I mean by random Galilean transformations’. But he didn’t enlighten us any further on the matter.

Around that time, a new research student started, and I asked him to go through Kraichnan’s papers with the proverbial fine-tooth comb and find out what RGI really was. What he found was that Kraichnan was working with a composite ensemble made up from the members of the turbulent ensemble, each shifted randomly by a constant velocity. So the turbulence ensemble $\{\mathbf{u}^{i}(\mathbf{x},t )\}$, with the superscript $i$ taking integer values, was replaced by a composite ensemble $\{\mathbf{u}^{i}(\mathbf{x},t ) + C_i\}$. This had to be inferred from a brief statement in words in a paper by Kraichnan!

The student then investigated this choice of RGT in conjunction with the derivation of theories and concluded that it was incompatible with the use of renormalized perturbation theory. In other words, Kraichnan was using it as a constraint on the theory, once the theory was actually derived. But in fact the underlying use of the composite ensemble invalidated the actual derivation of the theory. It would be too complicated to go further into this matter here, but a full account can be found in Section 10.4 of my book [5], which references Mark Filipiak’s thesis [6].

This experience illustrates the danger of relying too much on secondary sources, however excellent they may be. I will give another example in my next post but I can round this one off with an anecdote. When I first met Bob Kraichnan he told me that he had been very angered by Leslie’s book. I think that he was unhappy at what he saw as an excessive concentration on his work, and also the fact that Leslie had dedicated the book to him. However, he said that various others had persuaded him that he was wrong to react in this way. I added my own voice to this chorus, pointing out that there was absolutely no doubt of his dominance as the father of modern turbulence theory; and the dedication was no more than a personal expression of admiration on the part of David Leslie.

[1] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[2] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.
[3] S. A. Orszag. Analytical theories of turbulence. J. Fluid Mech., 41:363, 1970.
[4] W. D. McComb, V. Shanmugasundaram, and P. Hutchinson. Velocity derivative skewness and two-time velocity correlations of isotropic turbulence as predicted by the LET theory. J. Fluid Mech., 208:91, 1989.
[5] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[6] M. J. Filipiak. Further assessment of the LET theory. PhD thesis, University of Edinburgh, 1992.

# Hurrah for arXiv.com!

In my previous blog, I referred to my paper with Michael May [1], which failed to be accepted for publication, despite my having tried several journals. I suppose that some of my choices were unrealistic (e.g. Nature) and that I could have tried more. Also, I could have specified referees, which I don’t like doing, but now increasingly suspect that it is prudent to do so. Anyway, I see from ResearchGate that, despite it only being on the arXiv, it continues to receive some attention; and I was pleased to find that it had actually been cited in published work.

It was only recently, when thinking of topics for another blog on peer review, that I remembered that I already had a paper on the arXiv; and it has been cited about a dozen times (although two of those are by me!). This was a paper with one of my students [2] which was presented at the Monte Verita conference in 1998. Naturally I expected it to appear in the conference proceedings, but it received a referee’s report that ran something like this: ‘No doubt the authors have some reasons of their own for doing these things but I am unable to see any interest or value in their work’. So we had to rely on the arXiv publication.

Now, the idea of studying the filtered/partitioned nonlinear term, from the point of view of subgrid modelling and renormalization group, was quite an active field at that time, so the referee was actually revealing his own ignorance. (In fact, I know who it was and someone who knew him personally told me that this is exactly the kind of person he is. Very enthusiastic about his own topic and uninterested in other topics.) This is an extreme deficiency of scholarship, but in my view is not completely untypical of the turbulence community. It is perhaps worth mentioning that one of the results we presented was really quite profound in showing how a subgrid eddy viscosity could represent amplitude effects but not phase effects. Various people working in the field would have had an inkling of this fact, but we actually demonstrated it quantitatively by numerical simulation.

The paper also turned out to have some practical value. Later on, I received a request from someone who was preparing a chapter for inclusion in an encyclopaedia, for permission to reproduce one of our figures. This was published in 2004, and in 2017 a second edition appeared [3]. In 2004 the work was also cited in a specialist article on large-eddy simulation [4], and over the years it has been cited various times in this type of article, most recently in the present year. So, other people saw interest and value in the work, but it didn’t appear in the conference proceedings! The relevant figure appears below.

As a final point, I have sometimes wondered about the status of arXiv publications. An interesting point of view can be found in the book by Roger Penrose [5]. At the beginning of his bibliography he refers favourably to the arXiv, stating that some people actually regard it as a source of eprints, as an alternative to journal publication. He also notes how this can speed up the exchange of ideas, perhaps too much so!

Of course, in his subject, speculative ideas are an everyday fact of life. In turbulence, on the other hand, speculative ideas have little chance of getting past the dour, ‘handbook engineering’ mind-set of so many people in the field. So, let’s all post our speculative ideas on the arXiv, where it is quite easy to find them with the aid of Mr Google.

[1] W. D. McComb and M. Q. May. The effect of Kolmogorov (1962) scaling on the universality of turbulence energy spectra. arXiv:1812.09174[physics.flu- dyn], 2018.
[2] W. D. McComb and A. J. Young. Explicit-Scales Projections of the Partitioned Nonlinear Term in Direct Numerical Simulation of the Navier-Stokes Equation. Presented at 2nd Monte Verita Colloquium on Fundamental Problematic Issues in Turbulence: available at arXiv:physics/9806029 v1, 1998.
[3] T. J. R. Hughes, G. Scovazzi, and L. P. Franca. Multiscale and Stabilized Methods. In E. Stein, R. de Borst, and T. J. R. Hughes, editors, Encyclopedia of Computational Mechanics Second Edition, pages 1-102. Wiley, 2017.
[4] T. J. R. Hughes, G. N. Wells, and A. A. Wray. Energy transfers and spectral eddy viscosity in large-eddy simulations of homogeneous isotropic turbulence: Comparison of dynamic Smagorinsky and multiscale models over a range of discretizations. Phys. Fluids, 16(11):4044-4052, 2004.
[5] Roger Penrose. The Road to Reality. Vintage Books, London, 2005.

# The Kolmogorov (1962) theory: a critical review Part 2


Following on from last week’s post, I would like to make a point that, so far as I know, has not previously been made in the literature of the subject. This is that the energy spectrum is, in the sense of thermodynamics, an intensive quantity, and therefore should not depend on the system size; as opposed to the total kinetic energy (say), which does depend on the size of the system and is therefore extensive.

What applies to the energy spectrum also applies to the second-order structure function. If we now consider equation (1) from the previous blog, which is $$S_2(r)=C(\mathbf{x},t) \varepsilon^{2/3}r^{2/3}(L/r)^{-\mu}, \label{62S2}$$then for isotropic, stationary turbulence, it may be written as: $$S_2(r)=C \varepsilon^{2/3}r^{2/3} (L/r)^{-\mu}.$$ Note that $C$ is constant, as it can no longer depend on the macrostructure.

Of course this still contains the factor $L^{-\mu}$. Now, $L$ is only specified as the external scale in K62, but it is necessarily related to the size of the system. Accordingly, taking the limit of infinite system size amounts to taking the limit of infinite $L$, which is needed in order to have $k=0$ and to be able to carry out Fourier transforms. If we do this, we have three possible outcomes. If $\mu$ is negative, then $S_2 \rightarrow \infty$ as $L \rightarrow \infty$; whereas if $\mu$ is positive, then $S_2$ vanishes in the limit of infinite system size. Hence, in either case, the result is unphysical, both by the standards of continuum mechanics and by those of statistical physics.

However, if $\mu = 0$ then there is no problem. The structure function (and spectrum) exist in the limit of infinite system size. Could this be an argument for K41?
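The three cases can be checked in a few lines (a sketch of mine; the separation $r$ and the values of $L$ and $\mu$ are arbitrary illustrative choices):

```python
# Sketch of the infinite-system-size limit of the K62 factor (L/r)^(-mu)
# at fixed separation r: for negative mu it diverges with L, for
# positive mu it vanishes, and for mu = 0 it is independent of L.

def s2_factor(L, r, mu):
    """The system-size factor (L/r)^(-mu) in the K62 structure function."""
    return (L / r) ** (-mu)

r = 1.0
for mu in (-0.1, 0.1, 0.0):
    values = [s2_factor(L, r, mu) for L in (1e3, 1e6, 1e9)]
    print(f"mu = {mu:+.1f}:", ["%.3g" % v for v in values])
```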

Lastly, we should mention that McComb and May [1] have used a plausible method to estimate values of $L$ and, taking a representative value of $\mu=0.1$, have shown that the inclusion of this factor as in K62 destroys the well-known collapse of spectral data that can be achieved using K41 variables.
We began with the well-known graph in which one-dimensional projections of the energy spectrum for a range of Reynolds numbers are normalized on Kolmogorov variables and plotted against $k’=k/k_d$: see, for example, Figure 2.4 of the book [2], which is shown immediately below this text.

In this work, we referred to $L$ as $L_{ext}$ and we estimated it as follows. From the above graph, we see that the universal behaviour always occurs in the limit $R_\lambda \rightarrow \infty$ with all spectra collapsing to a single curve at $k'= k/k_d =1$. As the Reynolds number increases, each graph flattens off as $k$ decreases and ultimately forms a plateau at low wavenumbers. We argued that one can use the point where this departure takes place, $k'_{ext}$ (say), to estimate the external length scale, thus: $L'_{ext} = 2\pi/k'_{ext}$.
In order to make a comparison, we chose the results for a tidal channel at $R_{\lambda}=2000$ and for grid turbulence at $R_{\lambda}=72$. We show these two spectra, as selected from Fig. 1, on Figure 2 below.

Note that we plot the scaled one-dimensional spectrum $\psi(k’)=\phi(k’)/(\varepsilon \nu^5)^{1/4}$.
In the next figure, we plot these two spectra in compensated form, where we have taken the one-dimensional spectral constant to be $\alpha_{1}=1/2$, on the basis of Figure 2. In this form the $-5/3$ power law appears as a horizontal line at unity. We will return to this aspect later.

In order to assess the effect of including the K62 correction, we estimated $L'_{ext}\sim 50$ for the grid turbulence and $L'_{ext}\sim 2000$ for the tidal channel. In fact the spectra from the tidal channel do not actually peel off from the $-5/3$ line at low $k$, so our estimate is actually a lower bound for this case. This favours K62 in the comparison. We took the value $\mu = 0.1$, as obtained by high-resolution numerical simulation, and the result of including the K62 correction is shown in Figure 4.

It can be seen that including the K62 corrections destroys the collapse of the spectra which, apart from showing a slope of $-\mu = -0.1$ in both cases, are now separated and in a constant ratio of $0.69$. Evidently the universal collapse of spectra in Figure 1 would not be observed if the K62 corrections were in fact correct!
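The constant ratio of $0.69$ follows directly from the two estimated external scales, assuming the K62 factor enters the scaled spectra as $(L'_{ext})^{-\mu}$; a quick check:

```python
# Ratio of the K62 factors for the two flows: with mu = 0.1 the
# tidal-channel spectrum (L_ext ~ 2000) sits below the grid-turbulence
# spectrum (L_ext ~ 50) by the constant factor (2000/50)^(-0.1).
mu = 0.1
L_tidal, L_grid = 2000.0, 50.0

ratio = (L_tidal / L_grid) ** (-mu)
print(f"ratio = {ratio:.2f}")  # -> 0.69
```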
My final point is that one of the unfavourable referees for this paper had a major concern with the fact that the results for grid turbulence did not really show much $-5/3$ behaviour. This is to miss the point. The K41 scaling shows a universal form in the dissipation range, as well as in the inertial range. The inclusion of the K62 correction destroys this, when implemented with plausible estimates for the two parameters.

[1] W. D. McComb and M. Q. May. The effect of Kolmogorov (1962) scaling on the universality of turbulence energy spectra. arXiv:1812.09174[physics.flu-dyn], 2018.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.

# The Kolmogorov (1962) theory: a critical review Part 1

As is well known, Kolmogorov interpreted Landau’s criticism as referring to the small-scale intermittency of the instantaneous dissipation rate. His response was to adopt Obukhov’s proposal to introduce a new dissipation rate which had been averaged over a sphere of radius $r$, and which may be denoted by $\varepsilon_r$. This procedure runs into an immediate fundamental objection.

In K41A (or its wavenumber-space equivalent), the relevant inertial-range quantity for the dimensional analysis is the local (in wavenumber) energy transfer. This is of course equal to the mean dissipation rate by the global conservation of energy. (It is a potent source of confusion that these theories are almost always discussed in terms of the dissipation $\varepsilon$, when the proper inertial-range quantity is the nonlinear transfer of energy $\Pi$. The inertial range is defined by the condition $\Pi_{max} = \varepsilon$.) However, as pointed out by Kraichnan [1], there is no such simple relationship between locally-averaged energy transfer and locally-averaged dissipation.

Although Kolmogorov presented his 1962 theory as ‘A refinement of previous hypotheses …’, it is now generally understood that this is incorrect. In fact it is a radical change of approach. The 1941 theory amounted to a general assumption that a cascade of many steps would lead to scales where the mean properties of turbulence were independent of the conditions of formation (i.e., essentially, of the physical size of the system). Whereas, in 1962, the assumption was, in effect, that the mean properties of turbulence did depend on the physical size of the system. We will return to this point presently, but for the moment we concentrate on the preliminary steps.

The 1941 theory relied on a general assumption with an underlying physical plausibility. In contrast, the 1962 theory involved an arbitrary and specific assumption. This was to the effect that the logarithm of $\varepsilon(\mathbf{x},t)$ has a normal distribution for large $L/r$, where $L$ is referred to as an external scale and is related to the physical size of the system. We describe this as ‘arbitrary’ because no physical justification is offered; but in any case it is certainly specific. Then, arguments were developed that led to a modified expression for the second-order structure function, thus: $$S_2(r)=C(\mathbf{x},t)\varepsilon^{2/3}r^{2/3}(L/r)^{-\mu}, \label{62S2}$$ where $C(\mathbf{x},t)$ depends on the macrostructure of the flow.

In addition, Kolmogorov pointed out that the ‘theorem of constancy of skewness …derived (sic) in Kolmogorov (1941b)’ is replaced by $$S(r) = S_0(L/r)^{3\mu/2},$$ where $S_0$ also depends on the macrostructure.

Equation (\ref{62S2}) is rather clumsy in structure, in the way the prefactor $C$ depends on $\mathbf{x}$. This is because of course we have $\mathbf{r}=\mathbf{x}-\mathbf{x}'$, so clearly $C(\mathbf{x},t)$ also depends on $r$. A better way of tackling this would be to introduce centroid and relative coordinates, $\mathbf{R}$ and $\mathbf{r}$, such that $$\mathbf{R} = (\mathbf{x}+\mathbf{x}')/2; \qquad \mbox{and} \qquad \mathbf{r}= \mathbf{x}-\mathbf{x}'.$$ Then we can re-write the prefactor as $C(\mathbf{R}, r; t)$, where the dependence on the macrostructure is represented by the centroid variable, while the dependence on the relative variable holds out the possibility that the prefactor becomes constant for sufficiently small values of $r$.
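The change to centroid and relative coordinates, and its inverse, can be sketched as follows (the function names are hypothetical, introduced only for illustration):

```python
import numpy as np

def to_centroid_relative(x, x_prime):
    """Map the two measurement points to centroid R and separation r."""
    R = (x + x_prime) / 2
    r = x - x_prime
    return R, r

def from_centroid_relative(R, r):
    """Invert the transformation: recover the original points x and x'."""
    x = R + r / 2
    x_prime = R - r / 2
    return x, x_prime
```

The transformation is linear and trivially invertible, so nothing is lost by the change of variables; its value is purely that macrostructure dependence attaches to $\mathbf{R}$ while small-scale behaviour attaches to $\mathbf{r}$.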

Of course, if we restrict our attention to homogeneous fields, then there can be no dependence of mean quantities on the centroid variable. Accordingly, one should make the replacement: $$C(\mathbf{R}, r; t)=C(r; t),$$ and the additional restriction to stationarity would eliminate the dependence on time. In fact Kraichnan [1] went further and replaced the pre-factor with the constant $C$: see his equation (1.9).

For the sake of completeness, another point worth mentioning at this stage is that the derivation of the ‘4/5’ law is completely unaffected by the ‘refinements’ of K62. This is really rather obvious. The Karman-Howarth equation involves only ensemble-averaged quantities, and the derivation of the ‘4/5’ law requires only the vanishing of the viscous term. This fact was noted by Kolmogorov [2].

[1] R. H. Kraichnan. On Kolmogorov’s inertial-range theories. J. Fluid Mech., 62:305, 1974.
[2] A. N. Kolmogorov. A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number. J. Fluid Mech., 13:82-85, 1962.

# The Landau criticism of K41 and problems with averages

The Landau criticism of K41 and problems with averages

The idea that K41 had some problem with the way that averages were taken has its origins in the famous footnote on page 126 of the book by Landau and Lifshitz [1]. This footnote is notoriously difficult to understand, not least because it is meaningless unless its discussion of ‘the dissipation rate $\varepsilon$’ refers to the instantaneous dissipation rate. Yet $\varepsilon$ is clearly defined in the text above (see the equation immediately before their (33.8)) as being the mean dissipation rate. Nevertheless, the footnote ends with the sentence ‘The result of the averaging therefore cannot be universal’. As their preceding discussion in the footnote makes clear, this lack of universality refers to ‘different flows’: presumably wakes, jets, duct flows, and so on.

We can attempt a degree of deconstruction as follows. We will use our own notation, and to this end we introduce the instantaneous structure function $\hat{S}_2(r,t)$, such that $\langle \hat{S}_2(r,t) \rangle =S_2(r)$. Landau and Lifshitz consider the possibility that $S_2(r)$ could be a universal function in any turbulent flow, for sufficiently small values of $r$ (i.e. the Kolmogorov theory). They then reject this possibility, beginning with the statement:

‘The instantaneous value of $\hat{S}_2(r,t)$ might in principle be expressed as a universal function of the energy dissipation $\varepsilon$ at the instant considered.’

Now this is rather an odd statement. Ignoring the fact that the dissipation is not the relevant quantity for inertial-range behaviour, it is really quite meaningless to discuss the universality of a random variable in terms of its relation to a mean variable (i.e. the dissipation). A discussion of universality requires mean quantities. Otherwise it is impossible to test the statement. The authors have possibly relied on the qualification ‘at the instant considered’. But how would one establish which instant that was for various different flows?

They then go on:

‘When we average these expressions, however, an important part will be played by the law of variation of $\varepsilon$ over times of the order of the periods of the large eddies (of size $\sim L$), and this law is different for different flows.’

This seems a rather dogmatic statement, but it is clearly wrong for the broad (and important) class of stationary flows. In such flows, $\varepsilon$ does not vary with time.

The authors conclude (as we pointed out above) that: ‘The result of the averaging therefore cannot be universal.’ One has to make allowance for possible uncertainties arising in translation, but nevertheless, the latter part of their argument only makes any sort of sense if the dissipation rate is also instantaneous. Such an assumption appears to have been made by Kraichnan [2], who provided an interpretation which does not actually depend on the nature of the averaging process.

In fact Kraichnan worked with the energy spectrum, rather than the structure function, and interpreted Landau’s criticism of K41 as applying to $$E(k) = \alpha\varepsilon^{2/3}k^{-5/3}.\label{6-K41}$$
His interpretation of Landau was that the prefactor $\alpha$ may not be a universal constant because the left-hand side of equation (\ref{6-K41}) is an average, while the right-hand side is the 2/3 power of an average.

Any average involves the taking of a limit. Suppose we consider a time average; then we have $$E(k) = \lim_{T\rightarrow\infty}\frac{1}{T}\int^{T}_{0}\widehat{E}(k,t)dt,$$ where, as usual, the ‘hat’ denotes an instantaneous value. Clearly the statement $$E(k) = \mbox{a constant};$$ or equally the statement $$E(k) = f\equiv\langle\hat{f}\rangle,$$ for some suitable $f$, presents no problem. It is the ‘2/3’ power on the right-hand side of equation (\ref{6-K41}) which means that we are apparently equating the operation of taking a limit to the 2/3 power of taking a limit.
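The gap between averaging and the 2/3 power of an average is easy to exhibit numerically. For any fluctuating dissipation rate, the concavity of $x\mapsto x^{2/3}$ gives $\langle \hat{\varepsilon}^{2/3}\rangle < \langle\hat{\varepsilon}\rangle^{2/3}$ by Jensen's inequality. The sketch below uses a log-normal $\hat{\varepsilon}$, in the spirit of the K62 assumption; the distribution parameters and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-normally distributed instantaneous dissipation, as assumed in K62.
eps_hat = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

mean_then_power = np.mean(eps_hat) ** (2/3)   # <eps>^{2/3}
power_then_mean = np.mean(eps_hat ** (2/3))   # <eps^{2/3}>

# Jensen's inequality for the concave map x -> x^{2/3}:
# taking the average and taking the 2/3 power do not commute,
# so this gap is strictly positive for any fluctuating eps_hat.
gap = mean_then_power - power_then_mean
```

The two operations agree only if $\hat{\varepsilon}$ does not fluctuate at all, which is precisely why the ordering of the average and the 2/3 power matters in equation (\ref{6-K41}).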

However, it has recently been shown [3] that this issue is resolved by noting that the pre-factor $\alpha$ itself involves an average over the phases of the system. It turns out that $\alpha$ depends on an ensemble average to the $-2/3$ power, and this cancels the dependence on the $2/3$ power on the right-hand side of (\ref{6-K41}).

[1] L. D. Landau and E. M. Lifshitz. Fluid Mechanics. Pergamon Press, London, English edition, 1959.
[2] R. H. Kraichnan. On Kolmogorov’s inertial-range theories. J. Fluid Mech., 62:305, 1974.
[3] David McComb. Scale-invariance and the inertial-range spectrum in three-dimensional stationary, isotropic turbulence. J. Phys. A: Math. Theor.,42:125501, 2009.