
Summary of Kolmogorov-Obukhov (1941) theory. Part 1: some preliminaries in $x$-space and $k$-space.

Discussions of the Kolmogorov-Obukhov theory often touch on the question: can the two-thirds law, or alternatively the minus five-thirds law, be derived from the equations of motion (the NSE)? And the answer is almost always: ‘no, they can’t’! Yet virtually every aspect of this theory is based on what can be readily deduced from the NSE, and indeed was so deduced many years ago. So our preliminary here, before the actual summary, is to consider what we know from the NSE, in both $x$-space and $k$-space. As a further preliminary, all the notation is standard and can be found in the two books cited below as references.

We begin with the familiar NSE, consisting of the equation of motion, \begin{equation}\frac{\partial u_{\alpha}}{\partial t} + \frac{\partial (u_{\alpha}u_\beta)}{\partial x_\beta} =-\frac{1}{\rho}\frac{\partial p}{\partial x_\alpha} + \nu \nabla^2 u_\alpha,\end{equation}which expresses conservation of momentum and is local, in that it gives the relationship between the various terms at one point in space; and the incompressibility condition \begin{equation}\frac{\partial u_\beta}{\partial x_\beta} = 0.\end{equation} It is well known that taking these two equations together allows us to eliminate the pressure by solving a Poisson-type equation. The result is an expression for the pressure which is an integral over the entire velocity field: see equations (2.3) and (2.9) in [1].

In $k$-space we may write the Fourier-transformed version of (1) as: \begin{equation}\frac{\partial u_\alpha(\mathbf{k},t)}{\partial t} + i k_\beta\int d^3 j \,u_\alpha(\mathbf{k-j},t)u_\beta(\mathbf{j},t) = -\frac{i k_\alpha}{\rho}\, p(\mathbf{k},t) -\nu k^2 u_\alpha (\mathbf{k},t).\end{equation} The derivation can be found in Section 2.4 of [2]. Also, the discrete Fourier-series version (i.e. in a finite box) is equation (2.37) in [2].

The crucial point here is that the modes $\mathbf{u}(\mathbf{k},t)$ form a complete set of degrees of freedom and that each mode is coupled to every other mode by the non-linear term. So this is not just a problem in statistical physics, it is an example of the many-body problem.

Note that (1) gives no hint of the cascade, but (3) does. All modes are coupled together and, if there were no viscosity present, this would lead to equipartition, as the conservative non-linear term merely shares out energy among the modes. The viscous term is symmetry-breaking due to the factor $k^2$ which increases the dissipation as the wavenumber increases. This prevents equipartition and leads to a cascade from low to high wavenumbers. All of this becomes even clearer when we multiply the equation of motion by the velocity and average. We then obtain the energy-balance equations in both $x$-space and $k$-space.

We begin in real space with the Karman-Howarth equation (KHE). This can be written in various forms (see Section 3.10.1 in [2]); here we write it in terms of the structure functions for the case of free decay: \begin{equation}\varepsilon =-\frac{3}{4}\frac{\partial S_2}{\partial t}-\frac{1}{4r^4}\frac{\partial (r^4 S_3)}{\partial r} +\frac{3\nu}{2r^4}\frac{\partial}{\partial r}\left(r^4\frac{\partial S_2}{\partial r}\right).\end{equation}Note that the pressure does not appear, as a correlation of the form $\langle up \rangle$ vanishes in an isotropic field; and that strictly the left-hand side should be the decay rate $\varepsilon_D$, but it is usual to replace this by the dissipation, as the two are equal in free decay. Full details of the derivation can be found in Section 3.10 of [2].

For our present purposes, we should emphasise two points. First, this is one equation for two dependent variables and so requires a statistical closure in order to solve for one of the two. In other words, it is an instance of the notorious statistical closure problem. Second, it is local in the variable $r$ and does not couple different scales together. It holds for any value of $r$ but is an energy balance locally at any chosen value of $r$.

The Lin equation is the Fourier transform of the KHE. It can be derived directly in $k$-space from the NSE (see Section 3.2.1 in [2]):\begin{equation} \left(\frac{\partial}{\partial t} + 2\nu k^2\right)E(k,t) = T(k,t).\end{equation} Here $T(k,t)$ is called the transfer spectrum, and can be written as: \begin{equation}T(k,t)= \int_0^\infty dj\, S(k,j;t),\end{equation}where $S(k,j;t)$ is the transfer spectral density and can be expressed in terms of the third-order moment $C_{\alpha\beta\gamma}(\mathbf{j},\mathbf{k-j},\mathbf{-k};t)$.

Unlike the KHE, which is purely local in its independent variable, the Lin equation is non-local in wavenumber. We can define its associated inter-mode energy flux as:\begin{equation}\Pi (\kappa,t) = \int_\kappa^\infty dk\,T(k,t) = -\int_0^\kappa dk \, T(k,t).\end{equation}
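To make the relation between the transfer spectrum and the flux concrete, here is a minimal numerical sketch in Python. The model $T(k)$ is invented (not taken from any simulation) and is merely constructed to integrate to zero, as the conservative non-linear term requires; the point is that the two forms of the flux integral above then agree.

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule, to keep the sketch self-contained."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical model transfer spectrum: energy removed at low k and deposited at
# high k, adjusted so that it integrates to zero over the whole range.
k = np.linspace(1e-3, 100.0, 100_000)
T = -np.exp(-(k - 2.0) ** 2) + np.exp(-(k - 20.0) ** 2)   # illustrative only, not DNS data
T -= trapz(T, k) / (k[-1] - k[0])                          # enforce zero net transfer

def flux(kappa):
    """Pi(kappa): inter-mode energy flux, integrating T(k) from kappa upwards."""
    m = k >= kappa
    return trapz(T[m], k[m])

def flux_alt(kappa):
    """Equivalent form: minus the integral of T(k) from 0 to kappa."""
    m = k <= kappa
    return -trapz(T[m], k[m])

for kappa in (1.0, 5.0, 10.0, 50.0):
    print(f"kappa = {kappa:5.1f}   Pi = {flux(kappa):+.5f}   (check: {flux_alt(kappa):+.5f})")
```

In this toy model the flux is positive between the ‘input’ and ‘output’ regions of $T(k)$, which is the $k$-space statement of a cascade from low to high wavenumbers.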

We have now laid a basis for a summary of the Kolmogorov-Obukhov theory, and one point should have emerged clearly: the energy cascade is well defined in wavenumber space. It is not defined at all in the context of energy conservation in real space, where it can exist only as an intuitive picture of a phenomenon that is extended in space and time.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.





The importance of terminology: stationarity or equilibrium?
When I began my post-graduate research in 1966, I found that I immediately had to get used to a new terminology. For instance, concepts like homogeneity and isotropy were a definite novelty. In physics one takes these for granted and they are never mentioned. Indeed, if anything, it is the opposite that gets mentioned, in the occasional instance of inhomogeneity: I recall that one experiment relied on an inhomogeneity in the magnetic field. Also, in relativity one learns that a light source can only be isotropic in its co-moving frame. In any other frame, in motion relative to it, the source must appear anisotropic, as shown by Lorentz transformation. For the purposes of turbulence theory (and the theory of soft matter), exactly the same consideration must apply to Galilean transformation. Although, to be realistic, Galilean transformations are actually of little value in these fields, as they are normally satisfied trivially [1].

Then there was the transition from statistical physics to, more generally, the subject of statistics. The Maxwell-Boltzmann distribution was replaced by the normal or Gaussian distribution; and, in the case of turbulence, there was the additional complication of a non-Gaussian distribution, with flatness and skewness factors looming large. (I should mention as an aside that the above does not apply to quantum field theory which is pretty much entirely based on the Gaussian distribution.)

Perhaps the most surprising change was from the concept of equilibrium to one of stationarity. In physics, equilibrium means thermal equilibrium. Of course, other examples of equilibrium are sometimes referred to as special cases. For instance, a body may be in equilibrium under forces. But such references are always in context; and the term equilibrium, when used without qualification of this kind, always means thermal equilibrium. So any real fluid flow is a non-equilibrium process, and turbulence is usually classed as far from equilibrium. Indeed, physicists normally seem to regard turbulence as being the archetypal non-equilibrium process.

Unsurprisingly, the term has only rarely been used in turbulence. I can think of references to the approximate balance between production and dissipation near the wall in pipe flow being referred to as equilibrium; but, apart from that, all that comes to mind is Batchelor’s use of the term in connection with the Kolmogorov (1941) theory [2]. This was never widely used by theorists but recently there has been some usage of the term, so I think that it is worth taking a look at what it is; and, more importantly, what it is not.

Batchelor was carrying on the idea of Taylor, that describing homogeneous turbulence in the Fourier representation allowed the topic to be regarded as a part of statistical physics. He argued that the concept of local stationarity that Kolmogorov had introduced could be regarded as local equilibrium, in analogy with thermal equilibrium. The key word here is ‘local’. If we consider a flow that is globally stationary (as nowadays we can, because we have computer simulations), then clearly it would be nonsensical to describe such a flow as being in equilibrium.

However, recently Batchelor’s concept of local equilibrium has been mis-interpreted as being the same as the condition for the existence of an inertial range of wavenumbers, where the flux through wavenumber becomes equal to the dissipation rate. It is important to understand that this concept is not a part of Kolmogorov’s $x$-space theory but is part of the Obukhov-Onsager $k$-space theory. In contrast, the concept of local stationarity can be applied to either picture; but in my view is best avoided altogether.

I will say no more about this topic here, as I intend to develop it over the next few weeks. In particular, I think it would be helpful to make a pointwise summary of Kolmogorov-Obukhov theory, emphasising the differences between $x$-space and $k$-space forms, clarifying the historical position and indicating some significant and more recent developments.

[1] W. D. McComb. Galilean invariance and vertex renormalization. Phys. Rev. E, 71:37301, 2005.
[2] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 2nd edition, 1971.





Turbulence in a box.
When the turbulence theories of Kraichnan, Edwards, Herring, and so on, began attracting attention in the 1960s, they also attracted attention to the underlying ideas of homogeneity, isotropy, and Fourier analysis of the equations of motion. These must have seemed very exotic notions to the fluid dynamicists and engineers who worked on single-point models of the closure problem posed by the Reynolds equation, particularly when the theoretical physicists putting forward these new theories had a tendency to write in the language of the relatively new topic of quantum field theory, or possibly the even newer statistical field theory. In fact, the only aspect of this new approach that some people working in the field were apparently able to grasp was the fact that the turbulence was in a box, rather than in a pipe or wake or shear layer.

I became aware of this situation when submitting papers in the early 1970s, when I encountered referees who would begin their report with: ‘the author invokes the turbulence in a box concept’. This seemed to me to have ominous overtones. I mean, why comment on it? No one working in the field did: it was taken as quite natural by the theorists. However, in due course it invariably turned out that the referee didn’t think that my paper should be published. Reason? Apparently just the unfamiliarity of the approach. Later on, with the subject of turbulence theory having reached an impasse, they clearly felt quite confident in turning it down. I have written before on my experiences of this kind of refereeing (see, for example, my post of 20 Feb 2020).

Another example of turbulence in a box is the direct numerical simulation of isotropic turbulence, where the Navier-Stokes equations are discretised in a cubical box in terms of a discrete Fourier transform of the velocity field. Since Orszag and Patterson’s pioneering development of the pseudo-spectral method [1] in 1972, the simulation of isotropic turbulence has grown in parallel with the growth of computers; and, in the last few decades, it has become quite an everyday activity in turbulence research. So might we now expect box turbulence to take its place alongside pipe turbulence, jet turbulence and so on, in the jargon of the subject?

In fact this doesn’t seem to have happened. However, less than twenty years ago, a paper appeared which referred to simulation in a periodic box [2], and since then I have seen references to this in microscopic physics, where the simulations are of molecular systems. I’m not sure why the nature of the box is worth mentioning. It is, after all, a commonplace fact of Fourier analysis, that representation of a non-periodic function in a finite interval requires an assumption of periodic behaviour outside the interval. Much stranger than this is that I am now seeing references to periodic turbulence as, apparently, denoting isotropic turbulence that has been simulated in a periodic box. This does not seem helpful! To most people in the field, periodic turbulence means turbulence that is modulated periodically in time or space. That is, the sort of turbulence that might be found in rotating machinery or perhaps a coherent structure [3]. We have to hope that this usage does not catch on.

[1] S. A. Orszag and G. S. Patterson. Numerical simulation of three-dimensional homogeneous isotropic turbulence. Phys. Rev. Lett., 28:76, 1972.
[2] Y. Kaneda, T. Ishihara, M. Yokokawa, K. Itakura, and A. Uno. Energy dissipation and energy spectrum in high resolution direct numerical simulations of turbulence in a periodic box. Phys. Fluids, 15:L21, 2003.
[3] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.





Large-scale resolution and finite-size effects.
This post arises out of the one on local isotropy posted on 21 October 2021; and in particular relates to the comment posted by Alex Liberzon on the need to choose the size of volume $G$ within which Kolmogorov’s assumptions of localness may hold. In fact, as is so often the case, this resolves itself into a practical matter and raises the question of large-scale resolution in both experiment and numerical simulation.

In recent years there has been growing awareness of the need to fully resolve all scales in simulations of isotropic turbulence, with the emphasis initially being on the resolution of the small scales. In my post of 28 October 2021, I presented results from reference [1] showing that compensating for viscous effects and the effects of forcing on the third-order structure function $S_3(r)$ could account for the differences between the four-fifths law and the DNS data at all scales. In this work, the small-scale resolution had been judged adequate using the criteria established by McComb et al [2].

However, in [1] we noted that large-scale resolution had only recently received attention in the literature. We ensured that the ratio of box size to integral length-scale (i.e. $L_{box}/L$) was always greater than four. This choice involved the usual trade-off between resolution requirements and the magnitude of Reynolds number achieved, but the results shown in our post of 28 October would indicate that this criterion for large-scale resolution was perfectly adequate. That could suggest that taking $G\sim (4L)^3$ might be a satisfactory criterion. Nevertheless, I think it would be beneficial if someone were to carry out a more systematic investigation of this, in the same way as reference [1] did for the small-scale resolution.

Some attempts have been made at doing this in experimental work on grid turbulence: see the discussion on pages 219-220 in reference [3], but it clearly is a subject that deserves more attention. As a final point, we should note that this topic can be seen as being related to finite-size effects which are nowadays of general interest in microscopic systems, because there the theory actually relies on the system size being infinite. I suppose that we have a similar problem in turbulence in that the derivation of the solenoidal Navier-Stokes equation requires an infinitely large system, as does the use of the Fourier transform.

[1] W. D. McComb, S. R. Yoffe, M. F. Linkmann, and A. Berera. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence. Phys. Rev. E, 90:053010, 2014.
[2] W. D. McComb, A. Hunter, and C. Johnston. Conditional mode-elimination and the subgrid-modelling problem for isotropic turbulence. Phys. Fluids, 13:2030, 2001.
[3] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.





The second-order structure function corrected for systematic error.

In last week’s post, we discussed the corrections to the third-order structure function $S_3(r)$ arising from forcing and viscous effects, as established by McComb et al [1]. This week we return to that reference in order to consider the effect of systematic error on the second-order structure function, $S_2(r)$. We begin with some general definitions.

The longitudinal structure function of order $n$ is defined by:\begin{equation} S_n(r) = \left\langle \delta u^n_L(r) \right\rangle, \end{equation} where $\delta u_L(r)$ is the longitudinal velocity difference over a distance $r$. From purely dimensional arguments we may write: \begin{equation} S_n(r) = C_n \varepsilon^{n/3}\,r^{n/3}, \end{equation} where the $C_n$ are dimensionless constants.
However, as is well known, measured values imply $S_n(r)\sim \, r^{\zeta_n}$ where the exponents $\zeta_n$ are not equal to the dimensional result, with the one exception: $\zeta_3 = 1$. In fact it is found that $\Delta_n = |n/3 - \zeta_n|$ is nonzero and increases with order $n$.

It is worth pausing to consider a question. Does this imply that the measurements give $S_n(r)=C_n \varepsilon^{\zeta_n}r^{\zeta_n}$? No, it doesn’t. Not only would this give the wrong dimensions but, more importantly, the time dimension is controlled entirely by the dissipation rate. Accordingly, we must have: $S_n(r)=C_n \varepsilon^{n/3}r^{\zeta_n}\mathcal{L}^{n/3-\zeta_n}$, where $\mathcal{L}$ is some length scale. Unfortunately for aficionados of intermittency corrections (aka anomalous exponents), the only candidate for this is the size of the system (e.g. $\mathcal{L} = L_{box}$), which leads to unphysical results.

Returning to our main theme, the obvious way of measuring the exponent $\zeta_n$ is to make a log-log plot of $S_n$ against $r$, and determine the local slope: \begin{equation} \zeta_n(r) = d\,\log \,S_n(r)/d\, \log \,r.\end{equation} Then the presence of a plateau would indicate a constant exponent and hence a scaling region. In practice, however, this method has problems. Indeed, workers in the field argue that a Taylor-Reynolds number greater than $R_{\lambda}\sim 500$ is needed for this to work, and of course this is a very high Reynolds number.
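The local-slope procedure itself is easily sketched; the following few lines of Python use a synthetic, purely illustrative $S_2(r)$ (not real data), so they demonstrate only the method and not the Reynolds-number difficulty just mentioned.

```python
import numpy as np

# Synthetic stand-in for a measured second-order structure function, with a
# rough "2/3" range; purely illustrative.
r = np.logspace(-2, 2, 400)
S2 = r ** (2.0 / 3.0) / (1.0 + (0.05 / r) ** 2)

# Local exponent zeta_2(r) = d log S2 / d log r; a plateau indicates a scaling range.
zeta2_local = np.gradient(np.log(S2), np.log(r))
for i in range(0, len(r), 80):
    print(f"r = {r[i]:9.4f}   local slope = {zeta2_local[i]:.3f}")
```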

A popular way of overcoming this difficulty is the method of extended self-similarity (or ESS), which relies on the fact that $S_3$ scales with $\zeta_3 =1$ in the inertial range, indicating that one might replace $r$ by $S_3$ as the independent variable, thus: \begin{equation}S_n(r) \sim [S_3(r)]^{\zeta_n^{\ast}},\qquad \mbox{where} \qquad \zeta_n^{\ast} = \zeta_n/\zeta_3.\end{equation} In order to overcome problems with odd-order structure functions, this technique was extended by using the modulus of the velocity difference, to introduce generalized structure functions $G_n(r)$, such that: \begin{equation}G_n(r)=\langle |\delta u_L(r)|^n \rangle\sim r^{\zeta_n’}, \qquad \mbox{with scaling exponents} \quad \zeta’_n. \end{equation} Then, by analogy with the ordinary structure functions, taking $G_3$ with $\zeta’_3 =1$ leads to \begin{equation} G_n(r) \sim [G_3(r)]^{\Sigma_n}, \qquad\mbox{with} \quad \Sigma_n = \zeta’_n /\zeta’_3 . \end{equation} This technique results in scaling behaviour extending well into the dissipation range, which allows exponents to be more easily extracted from the data. Of course, this extended scaling is in itself an artefact, and that fact should be borne in mind.
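As a hedged sketch of the ESS fit (again with invented generalized structure functions and an arbitrary fitting range, purely to show the procedure):

```python
import numpy as np

# Mock generalized structure functions; in practice these come from experiment or DNS.
r = np.logspace(-2, 2, 400)
G3 = r ** 1.0 / (1.0 + (0.05 / r) ** 2)      # stand-in for <|du_L|^3>
G2 = r ** 0.70 / (1.0 + (0.05 / r) ** 1.4)   # stand-in for <|du_L|^2>

# ESS: treat G3 as the independent variable and fit the relative exponent
# Sigma_2 = zeta'_2 / zeta'_3 over a broad range of separations.
m = (r > 0.1) & (r < 10.0)
Sigma_2, _ = np.polyfit(np.log(G3[m]), np.log(G2[m]), 1)
print("ESS relative exponent Sigma_2 ~", round(Sigma_2, 3))
```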

There is an alternative to ESS and that is the pseudospectral method, in which the $S_n$ are obtained from their corresponding spectra by Fourier transformation. This has been used by some workers in the field, and in [1] McComb et al followed their example (see [1] for details) and presented a comparison between this method and ESS. They also applied a standard method for reducing systematic errors to evaluate the exponent of the second-order structure function. This involved considering the ratio $|S_n(r)/S_3(r)|$. In this procedure, an exponent $\Gamma_n$ was defined by \begin{equation}\left | \frac{S_n(r)}{S_3(r)}\right |\sim r^{\Gamma_n}, \qquad \mbox{where} \quad \Gamma_n= \zeta_n - \zeta_3. \end{equation}
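As a toy illustration of that procedure, with made-up power-law data rather than the DNS results of [1]:

```python
import numpy as np

# Fit Gamma_2 from the ratio |S2/S3| and recover zeta_2 as Gamma_2 + 1, on the
# assumption that zeta_3 = 1. The data below are placeholders, not measurements.
r = np.logspace(-1, 1, 200)
S2 = 2.0 * r ** 0.69
S3 = -0.8 * r ** 1.0

Gamma_2, _ = np.polyfit(np.log(r), np.log(np.abs(S2 / S3)), 1)
print("Gamma_2 =", round(Gamma_2, 3), "  zeta_2 = Gamma_2 + 1 =", round(Gamma_2 + 1.0, 3))
```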

Results were obtained only for the case $n=2$ and figures 9 and 10 from [1] are of interest, and are reproduced here. The first of these is the plot of the compensated ratio $(r/\eta)^{1/3}U|S_2(r)/S_3(r)|$ against $r/\eta$, where $\eta$ is the dissipation length scale and $U$ is the rms velocity. This illustrates the way in which the exponents were obtained.

Figure 9 from reference [1].
In the second figure, we show the variation of the exponent $\Gamma_2 + 1$ with Reynolds number, compared with the variation of the ESS exponent $\Sigma_2$. It can be seen that the first of these tends towards the K41 value of $2/3$, while the ESS value moves away from the K41 result as the Reynolds number increases.

Figure 10 from reference [1].
Both methods rely on the assumption $\zeta_3 =1$, hence $\Gamma_2+1 = \zeta_2$, which is why we plot that quantity. We may note that figures 9 and 10 point clearly to the existence of finite-Reynolds-number corrections as the cause of the deviation from K41 values. Further details and discussion can be found in reference [1].

[1] W. D. McComb, S. R. Yoffe, M. F. Linkmann, and A. Berera. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence. Phys. Rev. E, 90:053010, 2014.





Viscous and forcing corrections to Kolmogorov’s ‘4/5’ law.

The Kolmogorov `4/5′ law for the third-order structure function $S_3(r)$ is widely regarded as the one exact result in turbulence theory. And so it should be: it has a straightforward derivation from the Karman-Howarth equation (KHE), which is an exact energy balance derived from the Navier-Stokes equation. Nevertheless, there is often some confusion around its discussion in the literature. In particular, for stationary isotropic turbulence, there can be confusion about the effects of viscosity (small scales) and forcing (large scales). These aspects have been clarified by McComb et al [1], who used spectral methods to obtain $S_2$ and $S_3$ from a direct numerical simulation of the equations of motion.

If we follow the standard treatment (see [2], Section 4.6.2), we may write: \begin{equation} S_3(r)= -\frac{4}{5}\varepsilon r + 6\nu\frac{\partial S_2}{\partial r}.\end{equation}
In the past, this statement has been criticised because it omits the forcing which must be present in order to sustain a stationary turbulent field. However, it should be borne in mind that this is an entirely local equation; and, if the effect of the forcing is concentrated at the largest scales, then omission of these scales also omits the forcing. We can shed some light on this by reproducing Figure 7 from [1], thus:

Variation of the third-order structure function showing the effect of viscous corrections.

The results were taken at a Taylor-Reynolds number $R_{\lambda} = 435.2$, and show how the departure from the `4/5′ law at the small scales is due to the viscous effects. Clearly there is a range of values of $r$ where the `4/5′ law may be regarded as exact, in the ordinary sense appropriate to experimental work. This range of scales is, of course, the inertial range. Note that $\eta$ is the Kolmogorov length scale.

Presumably the departure from the `4/5′ law at the large scales is due to forcing effects, and McComb et al [1] also shed light on this point. They did this by working in spectral space, where stirring forces have been studied since the late 1950s in the context of the statistical theories (e.g. Kraichnan, Edwards, Novikov, Herring: see [3] for details) and are correspondingly well understood. They began with the Lin equation: \begin{equation} \frac{\partial E(k,t)}{\partial t} = T(k,t) - 2\nu k^2E(k,t) + W(k), \end{equation} where in principle the energy and transfer spectra depend on time, whereas the spectrum of the stirring forces $W(k)$ is taken as independent of time in order to ensure ultimate stationarity. Thus we will drop the time dependences hereafter, as we will only consider the stationary case.

We can derive the KHE from this and the result is the usual KHE plus an input term $I(r)$, defined by: \begin{equation}I(r) = \frac{3}{r^3}\int_0^r\, dy \,y^2\, W(y),\end{equation} where $W(y)$ is the three-dimensional Fourier transform of the work spectrum $W(k)$. By integrating the KHE (as Kolmogorov did in deriving the `4/5′ law) we obtain the form for the third-order structure function $S_3(r)$ as: \begin{equation} S_3(r)=X(r) + 6\nu\frac{\partial S_2}{\partial r},\end{equation}where $X(r)$ is given in terms of the forcing spectrum by: \begin{equation} X(r) = -12r\int_0^{\infty}\,dk\, W(k)\,\left[\frac{3\sin kr - 3kr \cos kr-(kr)^2 \sin kr}{(kr)^5}\right].\end{equation}
The result of including the effect of forcing is shown in Figure 8 of [1], which is reproduced below.

Variation of the third-order structure function with scale showing both viscous effects and those due to forcing.

These results are taken from the same simulation as above, and now the contributions from viscous and forcing effects can be seen to account for the departure of $S_3$ from the `4/5′ law at all scales.

In [1] it is pointed out that $X(r)$ is not a correction to K41, as used in other previous studies. Instead, it replaces the erroneous use of the dissipation rate by others, and contains all the information about the energy input at all scales. In the limit of $\delta(k)$ forcing, $I(y)= \varepsilon_W = \varepsilon$, such that $X(r) = -4\varepsilon\, r/5$, giving K41 in the infinite Reynolds number limit. Note that $\varepsilon_W$ is the rate of doing work by the stirring forces. Further details may be found in [1].
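To make the last point concrete, here is a small numerical sketch of $X(r)$: the forcing spectrum is an invented narrow-band form centred at a small wavenumber and normalised so that the work rate is $\varepsilon_W$, with all parameter values chosen purely for illustration. For $kr \ll 1$ the kernel in square brackets tends to $1/15$, so $X(r)$ should approach $-4\varepsilon_W r/5$.

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def bracket(x):
    # Kernel from the expression for X(r); the small-argument limit 1/15 is used
    # below a threshold to avoid cancellation errors.
    out = np.full_like(x, 1.0 / 15.0)
    big = x > 1e-3
    xb = x[big]
    out[big] = (3*np.sin(xb) - 3*xb*np.cos(xb) - xb**2*np.sin(xb)) / xb**5
    return out

# Invented narrow-band forcing spectrum W(k), centred at k_f and normalised to eps_W.
k = np.linspace(1e-4, 1.0, 200_000)
k_f, width, eps_W = 0.05, 0.005, 1.0
W = np.exp(-0.5 * ((k - k_f) / width) ** 2)
W *= eps_W / trapz(W, k)

def X(r):
    return -12.0 * r * trapz(W * bracket(k * r), k)

for r in (0.5, 1.0, 2.0):
    print(f"r = {r}:  X(r) = {X(r):+.4f}   -(4/5) eps_W r = {-0.8 * eps_W * r:+.4f}")
```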

[1] W. D. McComb, S. R. Yoffe, M. F. Linkmann, and A. Berera. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence. Phys. Rev. E, 90:053010, 2014.
[2] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[3] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.





Local isotropy, local homogeneity and local stationarity.

In last week’s post I reiterated the argument that the existence of isotropy implies homogeneity. However, Alex Liberzon commented that there could be inhomogeneous flows that exhibited isotropy on scales that were small compared to the overall size of the flow. This comment has the great merit of drawing attention to the difference between a purely theoretical formulation and one dealing with a real practical situation. In my reply, I mentioned that Kolmogorov had introduced the concept of local isotropy, which supported the view that Alex had put forward. So I thought it would be interesting to look in detail again at what Kolmogorov had actually said. Incidentally, Kolmogorov said it in 1941 but for the convenience of readers I have given the later references, as reprinted in the Proceedings of the Royal Society.

Now, although I like to restrict the problem to purely isotropic turbulence, where it still remains controversial in that many people believe in intermittency corrections or anomalous exponents, Kolmogorov actually put forward a theory of turbulence in general. He argued that a cascade as envisaged by Richardson could lead to a range of scales where the turbulence becomes locally homogeneous. In [1], which I refer to as K41A, he put forward two definitions, which I shall paraphrase rather than quote exactly.

The first of these is as follows: `Definition 1. The turbulence is called locally homogeneous in the domain $G$ if the probability distribution of the velocity differences is independent of the origin of coordinates in space, time and velocity, providing that all such points are contained within the domain $G$.’

We should note that this includes homogeneity in time as well as in space. In other words, Kolmogorov was assuming local stationarity as well.

Then his second definition is: `Definition 2. The turbulence is called locally isotropic in the domain $G$, if it is homogeneous and if, besides, the distribution laws mentioned in Definition 1 are invariant with respect to rotations and reflections of the original system of coordinate axes $(x_1,\,x_2,\,x_3)$.’

Note that the emphasis is mine.

Kolmogorov then compared his definition of isotropy to that of Taylor, as introduced in 1935. He stated that his definition is narrower, because he also requires local stationarity, but wider in that it applies to the distribution of the velocity differences, and not to the velocities themselves. Later on, when he derived the so-called ‘$4/5$’ law [2], he had already made the assumption that the time-derivative term could be neglected, and simply quoted the Karman-Howarth equation without it: see equation (3) in [2].

The question then arises, how far do these assumptions apply in any real flow? In my post of 11th February 2021, I conjectured that this might be a matter of the macroscopic symmetry of the flow. For instance, the Kolmogorov picture might apply better in plane channel flow than in plane Couette flow. I plan to return to this point some time.

[1] A. N. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. Proc. Roy Soc. Lond., 434:9-13, 1991.
[2] A. N. Kolmogorov. Dissipation of energy in locally isotropic turbulence. Proc. Roy Soc. Lond., 434:15-17, 1991.





Is isotropy the same as spherical symmetry?

To which you might be tempted to reply: ‘Who ever thought it was?’ Well, I don’t know for sure, but I’ve developed a suspicion that such a misconception may underpin the belief that it is necessary to specify that turbulence is homogeneous as well as isotropic. When I began my career it was widely understood that specifying isotropy was sufficient, as it was generally realised that homogeneity was a necessary condition for isotropy. A statement to this effect could (and can) be found on page 3 of Batchelor’s famous monograph on the subject [1].

I have posted previously on this topic (my second post, actually, on 12 February 2020) and conceded that the acronym HIT, standing of course for ‘homogeneous, isotropic turbulence’, has its attractions. For a start, it’s the shortest possible way of telling people that you are concerned with isotropic turbulence. I’ve used it myself and will probably continue to do so. So I don’t see anything wrong with using it, as such. The problem arises, I think, when some people think that you must use it. In other words, such people apparently believe that there is an inhomogeneous form of isotropic turbulence.

When you think about it that is really quite worrying. I’m not particularly happy about someone, whose understanding is so limited, refereeing one of my papers. Although, to be honest, that could well explain some of the more bizarre referees’ reports over the years! Anyway, let’s examine the idea that there may be some confusion between isotropy and spherical symmetry.

Isotropy just means that a property is independent of orientation. Spherical symmetry sounds quite similar and is probably the more frequently encountered concept for most of us (at least during our formal education). Essentially it means that, relative to some fixed point, a field only varies with distance from the point but not with angle. A familiar example would be a point electric charge in free space. So we might be tempted to visualise isotropy as a form of spherical symmetry, the common element being the independence of orientation.

The problem with doing this is that the property of isotropy of a medium must apply to any point within it, whereas spherical symmetry depends on the existence of a special point which may be taken as the origin of coordinates. But the existence of such a special point would violate spatial homogeneity. So for isotropy to be true, we must have spatial uniformity or homogeneity. I think that one can infer this mathematically from the fact that the only isotropic tensors are (subject to a scalar multiplier) the Kronecker delta $\delta_{ij}$ and the Levi-Civita density $\epsilon_{ijk}$. So any isotropic tensor must have components that are independent of the coordinates of the system.

For this point applied to the cosmos, i.e. homogeneity is a necessary (but not sufficient) condition for isotropy, see Figure 2 on page 24 of [2]. It seems to be easier to visualise these matters in terms of the night sky, which is a fairly (if illusory) static-looking entity. But when we add in a continuum structure and random variations on many length scales, it can be more difficult. We will come back to this particular problem in my next post.

[1] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 2nd edition, 1971.
[2] Steven Weinberg. The first three minutes: a modern view of the origin of the universe. Basic Books, NY, 1993.





Various kinds of turbulent dissipation?

The current interest in Onsager’s conjecture (see my blog of 23 September 2021) has sparked my interest in the nature of turbulent dissipation. Essentially a fluid only moves because a force acts on it and does work to maintain it in motion. The effect of viscosity is to convert this kinetic energy of macroscopic motion into random molecular motion, which is perceived as heat. If there is turbulence, this acts to transfer the macroscopic kinetic energy to progressively smaller scales, where the steeper velocity gradients can dissipate it as heat.

This all seems quite straightforward and well understood. However, Onsager’s conjecture, as a matter of physics, is less easily understood. It interprets the infinite Reynolds number limit as being when the continuum nature of the fluid breaks down. It also implies that, when the Reynolds number becomes very large, the Navier-Stokes equation somehow becomes the Euler equation; which, despite its inviscid nature, satisfactorily accounts for the dissipation. It can do this (supposedly) because it has lost its property of conserving energy. In turn, this is supposed to happen because the velocity is no longer a continuous and differentiable field. Of course there does not seem to be any mechanism for turning the dissipated energy into heat, so the thermodynamic aspects of this process look distinctly dodgy.

There are two other cases where macroscopic kinetic energy is not turned into heat.

The first of these is in large-eddy simulation, which has for many years been widely studied for its practical significance. This of course is not a physical situation. It is purely a method of simulating turbulence numerically without being able to resolve all the scales: an introduction can be found in [1]. The central problem is to model the flow of energy to the scales which are too small to be resolved: the so-called subgrid drain. Various models have been studied for the subgrid viscosity, while a novel approach is the operational method of Young and McComb [2]. In this latter, an algorithm is used to feed back energy into the resolved modes, such that the spectral shape is kept constant. In fact this method can be interpreted in terms of an effective subgrid viscosity which is very similar to that found in conventional simulations when a large-eddy simulation is compared to a fully resolved one. But, so far as I know, no one has considered modelling the temperature rise that would be due to the viscous dissipation in these cases.

The second case is the direct simulation of the Euler equation. Such simulations can only lead to thermal equilibrium, but naturally the simulations must be truncated to a finite number of modes, to avoid having an infinite amount of energy. However, in 2005, some interesting transient behaviour was found in truncated Euler simulations [3] and confirmed the following year by the use of a closure approximation [4]. These simulations may be divided in terms of their energy spectra into two spectral ranges: a Kolmogorov range and an equipartition range. A buffer range in between these two is described by Bos and Bertoglio as a ‘quasi-dissipative’ zone, which is another example of non-viscous dissipation. However, it can only exist for a finite time and ultimately the system must move to thermal equilibrium.

I think it would be interesting to see one of the proponents of Onsager’s conjecture explain the simple physics of how the conjectured situation came about with increasing Reynolds number. All the mathematical expressions you need to do that are available. But I don’t think I will see that any time soon!

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] A. J. Young and W. D. McComb. Effective viscosity due to local turbulence interactions near the cutoff wavenumber in a constrained numerical simulation. J. Phys. A, 33:133-139, 2000.
[3] Cyril Cichowlas, Pauline Bonatti, Fabrice Debbasch, and Marc Brachet. Effective Dissipation and Turbulence in Spectrally Truncated Euler Flows. Phys. Rev. Lett., 95:264502, 2005.
[4] W. J. T. Bos and J.-P. Bertoglio. Dynamics of spectrally truncated inviscid turbulence. Phys. Fluids, 18:071701, 2006.





Superstitions in turbulence theory 2: that intermittency destroys scale-invariance!

At the moment I am busy revising a paper (see [1] below) in order to meet the comments of the referees. As is so often the case, Referee 1 is supportive and Referee 2 is hostile. Naturally, Referee 2 writes at great length, so it is really a matter of rebuttal rather than our making changes. It seems clear that he is far from his comfort zone and his comments show that he has comprehensively misunderstood our paper. It also seems to me that he has not actually read certain key parts of the manuscript. For instance, he states: ‘The way how the authors use the word “scale-invariance” should be clarified’ (sic).

This is despite the fact that subsection 3.1 of the paper is titled ‘Scale-invariance of the inertial flux in the infinite Reynolds number limit’ and consists of only three paragraphs. It contains two equations, one of which states the criterion for an inertial range. This is followed by a sentence ending with “… where the fact that the criterion holds over a range of wavenumbers is usually referred to as scale-invariance.” Oh, and as regards ‘how the authors use the word’, we cite a number of references to show that others use the phrase, so we are not alone.

The next thing he says is: ‘We know from experimental evidence (intermittency) that scale invariance is broken in the inertial range.’ This is quite simply nonsense. In this context scale-invariance means that the inertial range is characterised by a constant flux over a range of wavenumbers, and this has been shown in many investigations. In fact there is no way in which intermittency, which is a single-realization characteristic, can affect mean quantities such as inertial flux or their properties such as scale-invariance. In a recent paper [2], we have shown that the ensemble average of intermittency vanishes. In the first figure below, we show the effect of using contours of isovorticity and the progressive effect of averaging over $N=1,\,2,\,5,\,10,\,25$ and $46$ realizations.

The effect of ensemble averaging on contours of isovorticity showing how increasing the number of realisations averages out the intermittency.

The effect of the averaging out with increasing number of realizations is evident. While the use of vorticity is more natural, the effect can perhaps be more clearly seen using the Q-criterion, as is done in the next figure.

The same procedure as in the previous figure, this time using the Q-criterion.

Both figures are taken from the same stationary DNS of the Navier-Stokes equations. Further details can be found in reference [2].

Over the past three decades there has been an increasing body of evidence to the effect that intermittency does not affect the Kolmogorov spectrum. Any deviations are in fact due to the Kolmogorov conditions not being quite met. Presumably it will take a long time for rational enquiry to defeat superstition in this topic!

[1] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614v2 [physics.flu-dyn], 2021.
[2] S. R. Yoffe and W. D. McComb. Does intermittency affect the inertial transfer rate in stationary isotropic turbulence? arXiv:2107.09112v1 [physics.flu-dyn], 2021.





Superstitions in turbulence theory 1: the infinite Re limit of the Navier-Stokes equation is the Euler equation!

I recently posted blogs about the Onsager conjecture [1]; the need to take limits properly (Onsager didn’t!); and the programme at MSRI Berkeley, which referred to the Euler equation as the infinite Reynolds number limit, in a series of posts from 5 – 19 August just past. A later notification about the MSRI programme no longer made that claim; and I speculated (conjectured?) that this might not be unconnected from the appearance of the paper [2] on the arXiv! Now the Isaac Newton Institute is having a new programme on mathematical aspects of turbulence over the first half of next year, and their theme dwells on how the mathematics underlying ‘the proof of the Onsager conjecture … can bring insights into the dissipative anomaly conjecture, a.k.a. Kolmogorov’s zeroth law of turbulence’.

The idea of a dissipation (or dissipative) anomaly goes back to Onsager’s conjecture [1], made in 1949 when turbulence studies were still in their infancy. Although the alternative expression (i.e. Kolmogorov’s zeroth law) has also been used, I have no idea who formulated it; nor of the reasoning that lies behind it. While Kolmogorov may have formulated laws in statistics (I am indebted to Mr Google for this information!), his contributions to turbulence do not qualify for the description ‘physical laws’. However, an irony about the way in which Onsager came to his conclusion about a dissipative anomaly recently dawned on me, and the point of this post is to share that with you.

Onsager’s starting point was Taylor’s (1935) expression for the turbulent dissipation [3] thus: \begin{equation}\varepsilon = C_{\varepsilon}(R_L) U^3/L,\end{equation} where $\varepsilon$ is the dissipation rate, $U$ is the root mean square velocity, $L$ is the integral scale, and $C_{\varepsilon}$ is a coefficient which may depend on the Reynolds number $R_L$, which is formed from the integral scale and the rms velocity. In 1953, Batchelor [4] presented some results that suggested $C_{\varepsilon}$ tended to a constant with increasing Reynolds number. Nevertheless, this expression was the subject of some debate over the years (although its equivalent for shear flows was widely used in both research and practical applications), until Sreenivasan’s survey papers on grid turbulence [5] in 1984 and on direct numerical simulations [6] in 1998 established the characteristic asymptotic shape of this curve. This work had a seminal effect on the subject and a general account of work in this area can be found in the book [7].
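Equation (1) is straightforward to evaluate once an energy spectrum is available. The sketch below uses a model spectrum with invented parameters, together with the standard isotropic relations for $U$, $L$ and $\varepsilon$, simply to show how $C_{\varepsilon}$ is formed; it is not intended to reproduce the DNS results discussed next.

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Model energy spectrum (von Karman-like form with an exponential cut-off);
# the parameter values and the viscosity are purely illustrative.
nu = 1.0e-3
k = np.logspace(-2, 3, 4000)
E = k**4 / (1.0 + k**2)**(17.0 / 6.0) * np.exp(-2.0 * k / 300.0)

energy = trapz(E, k)                                 # (3/2) U^2
U = np.sqrt(2.0 * energy / 3.0)                      # rms velocity
L = (3.0 * np.pi / 4.0) * trapz(E / k, k) / energy   # integral length scale
eps = 2.0 * nu * trapz(k**2 * E, k)                  # dissipation rate

print("U =", round(U, 4), " L =", round(L, 4),
      " eps =", round(eps, 4), " C_eps =", round(eps * L / U**3, 4))
```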

However, it was suggested by McComb et al in 2010 [8] that Taylor’s expression for the dissipation (1) is actually a surrogate for the peak inertial flux $\Pi_{max}$. See the figure below, which is taken from that paper. It shows from DNS that the group $U^3/L$ behaves like $\Pi_{max}$ for all Reynolds numbers, whereas the behaviour of the dissipation is quite different at low Reynolds numbers.

Variation of the dissipation rate, the peak inertial flux and the Taylor dissipation surrogate with increasing Reynolds number from direct numerical simulation [8].
It was further shown [9], using the Karman-Howarth equation and expanding non-dimensional structure functions in inverse powers of the Reynolds number, that this was the case, with the asymptotic behaviour $C_{\varepsilon} \rightarrow C_{\varepsilon,\infty}$ as $R_L \rightarrow \infty$ corresponding to the onset of the Kolmogorov $`4/5’$ law.

In other words, when Onsager deduced from Taylor’s expression that the dissipation did not depend on the viscosity, he was actually deducing that the peak inertial flux did not depend on the viscosity. And indeed it doesn’t!

[1] L. Onsager. Statistical Hydrodynamics. Nuovo Cim. Suppl., 6:279, 1949.
[2] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614v2 [physics.flu-dyn], 2021.
(N.B. This paper is presently under revision and will be posted again, possibly with a change of title.)
[3] G. I. Taylor. Statistical theory of turbulence. Proc. R. Soc., London, Ser. A, 151:421, 1935.
[4] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 1st edition, 1953.
[5] K. R. Sreenivasan. On the scaling of the turbulence dissipation rate. Phys. Fluids, 27:1048, 1984.
[6] K. R. Sreenivasan. An update on the energy dissipation rate in isotropic turbulence. Phys. Fluids, 10:528, 1998.
[7] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[8] W. David McComb, Arjun Berera, Matthew Salewski, and Sam R. Yoffe. Taylor’s (1935) dissipation surrogate reinterpreted. Phys. Fluids, 22:61704, 2010.
[9] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.





Peer review: the role of the referee.
In earlier years I used to get the occasional phone call from George Batchelor, at that time the editor of Journal of Fluid Mechanics, asking for suggestions of new referees on the statistical theory of turbulence. To avoid confusion I should point out that by this I mean the theoretical physics approach to the statistical closure problem, pioneered by Bob Kraichnan and Sam Edwards, and carried on by myself and others. For anyone interested, a review of this subject can be found in reference [1] below.

I didn’t find this easy, as there were then (as now) very few people working on this topic. My suggestion that Sam Edwards, although no longer active in this area, could certainly referee papers, was met with little enthusiasm. He was seen as ‘too kind’ or even as ‘soft-hearted’! I wasn’t surprised by this, as Sam had explained his position on refereeing to me and it amounted to: ‘Unless it is arrant nonsense, it should be published.’ In contrast, the refereeing process of the JFM was notoriously tough and this has been generally true in turbulence research, and remains so to this day. Indeed this is the general perception in the subject, and to quote Sam again, he once referred to ‘the cut-throat nature of refereeing in turbulence’. I suspect it was this perception which put him off continuing in the subject.

I find myself somewhere between the extremes, perhaps because this is a matter of culture and I have been both engineer and physicist. However, while I respect the professionalism of the engineering approach, at the same time I think it can be taken too far. A typical experience for me (and I believe also for many others) is that a technical discussion can be carried on between the authors and individual referees which is never seen by others in the field. In my view these discussions should be published as an appendix to the paper (assuming of course that the paper is actually accepted for publication). I also think that where the authors have a track record there should be a presumption that the paper should be published. In other words, the onus should be on the referee to come up with definite and reasoned objections, as opposed to the vague prejudiced waffle which is so often the case!

Another problem that arises often in the turbulence community, is the desire of some referees to rewrite the paper. Or rather to force the author(s) to rewrite the paper to the referee’s prescription. It is of course legitimate to point out aspects which are less clear than they might be, but it verges on arrogance to tell the author how to do it. Also, with electronic publication now universal the idea of saving paper/printing costs is no longer so relevant. Papers can easily be as long as they need to be.

I have been on the receiving end of this behaviour on occasion, but nothing compared to something I was told recently; where a leading member of the community was forced to modify his paper four times despite his own judgement that the changes were unnecessary and his making protests to that effect to the editor. Someone else I know, summed it up as ‘lazy editors and biased referees’. He had come from particle physics, where his papers had generally been published ‘as submitted’, to fluid mechanics (in the context of climatology) where there was invariably a battle over changes being required by the referee. Of course I trust that it is clear that I am not referring to the minor changes that we should all be happy to make, but to major structural changes which may in the end be no more than one person’s opinion against another’s. For these two individuals it was the failure by the editors to intervene that caused the problems.

So, it really comes down to the editor in the end. It is their job to protect their referees from unfair attack, on the one hand; and to protect their authors from unfair refereeing, on the other. As I have pointed out elsewhere, in practice what breaks this symmetry is that it is more difficult for the editor to get referees than it is to get prospective authors; who, after all, are queuing up to apply!

[1] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.





Peer review: The role of the author.
I have previously posted on the role of the editor (see my blog on 09/07/2020) and had intended to go on to discuss the role of the referee. However, before doing that it occurred to me that it might be helpful to first discuss the role of the author. Of course probably every journal lays down rules for author and referee alike: but who pays any attention to these? (Just joking! Although, life is short and if you are having to try more than one journal, then the fact that these detailed rules vary from one journal to another can add to the labour involved.) But what I have in mind are the unwritten rules. These are generally taken for granted and perhaps should be spelled out occasionally in order to ensure that everyone is on the same wavelength.

One basic rule for authors is that they should provide some basic introduction to the problem, discuss previous work and show how their own new work advances the situation. This is very much in our own interest, as it is a key part of demonstrating to our co-workers that our paper is worth reading. However, as I found out at the beginning of my career, this can be a fraught process. For instance, writing the introduction to a paper on the statistical theory of turbulence was perfectly straightforward, but in the case of an attempted theory of drag reduction by additives this turned out to be quite another matter.

My attention was drawn to this problem when I was in the Theoretical Physics Division at Harwell. At first this involved polymer molecules; but, when I looked into it further, I found out that there was a parallel activity based on the use of macroscopic fibres such as wood-pulp or rayon. This latter activity generally seemed to have originated within the relevant industry, and was often carried on without reference to the better known use of polymer additives.

I found the fibre problem more attractive, because it seemed easier to think about a macroscopic fibre as a linear object which could only have two-dimensional interactions with a three-dimensional eddy of comparable size. If one added in the possibility of elastic deformation of the fibre by the fluid, then one could think in terms of a non-Newtonian relationship between stress and rate of strain for the composite fluid which could act as a model for the fibre suspension. On the assumption that the fibres would tend to be aligned (on average) with the mean flow, physical reasoning led to an expression for a nonlinear correction to the usual Newtonian viscosity, which could be further decomposed into the difference between two-dimensional and three-dimensional inertial transfer terms, both of which represented reversals of the usual energy cascade. This theory offered a qualitative explanation of the changes in turbulent intensities which had been observed in fibre suspensions and was published as a letter in Nature [1].

So far so good! The problems arose when I extended this work and submitted it to JFM. All three referees were unanimous in rejecting the paper. Part of the trouble seemed to be that the work was carried out in spectral space. An account of this can be found in my blog of 20/02/2020, including the infamous description of my analysis as ‘the usual wavenumber murder’! But, as was kindly pointed out to me by George Batchelor, the problem was that I was ‘treading on the toes’ of those who worked in this field (i.e. microrheology). This editorial advice was helpful; because, from my background in physics, I knew very little about fluid mechanics and was happily unaware that the subject of microrheology even existed.

Of course, in the spirit of ‘poacher turned gamekeeper’ I ultimately became very keen on making sure that any paper of mine had a proper literature survey. I owe this mainly to my PhD students, who have always been very assiduous in tracking down references, and who have set me a good example in this respect!

Nowadays, in view of the great increase in publications, I tend to take a more tolerant attitude to others who fail to cite relevant papers. But I’m not sure that this is really justified. After all, although we have had a positive explosion of publications in fluid mechanics, most of this is in practical applications. The amount of truly fundamental work is still quite small. And we do have the power of Google to help us find anything that is relevant to what we are currently publishing. I must say that I am rather sceptical about papers that purport to present applications of theoretical physics to turbulence yet do not mention the name ‘Kraichnan’. I suspect them of being fake theories. This is something that I may expand on sometime.

For those who are interested, a further account of developments in the study of drag reduction may be found in my book cited as [2] below.

[1] W. D. McComb. The turbulent dynamics of an elastic fibre suspension: a mechanism for drag reduction. Nature Physical Science, 241(110):117-118, 1973.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.





The exactness of mathematics and the inexactness of physics.

This post was prompted by something that came up in a previous one (i.e. see my blog on 12 August 2021), where I commented on the fact that an anonymous referee did not know what to make of an asymptotic curve. The obvious conclusion from this curve, for a physicist, was that the system had evolved! There was no point in worrying about the precise value of the Reynolds number. That is a matter of agreeing a criterion if one needs to fix a specific value. But evidently the ratio shown was constant within the resolution limits of the measurements of the system; and this is the key point. Everything in physics comes down to experimental error: the only meaningful comparison possible (i.e. theory with experiment or one experiment with another) is subject to experimental error which is inherent. Strictly one should always quote the error, because it is never zero.

In everyday life, there are of course many practical expedients. For instance, radioactivity takes in principle an infinite amount of time to decay completely, so in practice radioisotopes are characterised by their half-life. So the manufacturers of smoke alarms can tell you when to replace your alarm, as they know the half-life of the radioactive source used in it. In acoustics or diffusion processes or electromagnetism, exponential decays are commonplace, and it is usual to introduce a relaxation time or length, corresponding to when/where the quantity of interest has fallen to $1/e$ of its initial value.
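To make the connection explicit (the notation $N_0$ for the initial amount and $\tau$ for the relaxation time is introduced here purely for illustration), the exponential decay law gives \[ N(t) = N_0\, e^{-t/\tau}, \qquad N(t_{1/2}) = \tfrac{1}{2}N_0 \;\Rightarrow\; t_{1/2} = \tau \ln 2 \simeq 0.69\,\tau, \] so the half-life and the $1/e$ relaxation time differ only by a factor of $\ln 2$.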

In fluid mechanics, the concept of a viscous boundary layer on a solid surface is of great utility in reconciling the practical consequences of a flow (such as friction drag) with the elegance and solubility of theoretical hydromechanics. The boundary layer builds up in thickness in the stream-wise direction as vorticity created at the solid surface diffuses outwards. But how do we define that thickness? A reasonable criterion is to choose the point where the velocity in the boundary layer is approximately equal to the free-stream velocity. From my dim memory of teaching this subject several decades ago, a criterion of $u_1(x_2) = 0.99\, U_1$, where $U_1$ is the constant free-stream velocity, was adequate for pedagogic purposes.

An interesting partial exception arises in solid state physics, when dealing with crystal lattices. The establishment of the lattice parameters is of course subject to the usual caveats about experimental error, but for statistical physics lattices are countable systems. So if one is carrying out renormalization group calculations (e.g. see [1]) then one is coarse-graining the description by replacing the unit cell, of side length $a$, by some larger (renormalized) unit cell. In wavenumber (momentum) space, this means we start from a maximum wavenumber $k_{max}=2\pi/a$ and average out a band of wavenumber modes $k_1 \leq k \leq k_0$, where $k_0=k_{max}$. You can see where the countable aspect comes in, and the initial wavenumber is precisely defined (although its precise value is subject to the error made in determining the lattice constant).

When extending these ideas to turbulence, the problem of defining the maximum wavenumber is not solved so easily. Originally people (myself included) used the Kolmogorov dissipation wavenumber, but this is not necessarily the maximum excited wavenumber in turbulence. In 1985 I introduced a criterion which was rather like a boundary-layer thickness, adapting the definition of the dissipation rate, thus: \[\varepsilon = \int^{\infty}_0 \, 2\nu_0 k^2 E(k) dk \simeq \int^{k_{max}}_0 \, 2\nu_0 k^2 E(k) dk,\] where $\nu_0$ is the molecular viscosity and $E(k)$ is the energy spectrum [2]. When I first started using this, physicists found it odd, because they were used to the more precise lattice case. I should mention for completeness that it is also necessary to use a non-trivial conditional average [3].
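As a rough illustration of how a criterion of this kind can be applied in practice (the model spectrum, the parameter values and the 98% threshold below are all my own assumptions, chosen only for the sketch), one can integrate the dissipation spectrum numerically and read off the wavenumber below which almost all of the dissipation occurs:

```python
import numpy as np

# Model energy spectrum: a -5/3 inertial range with an exponential
# dissipation-range cut-off. This model form, and all parameter values,
# are assumptions made purely for this sketch.
eps, nu = 1.0, 1e-4                      # dissipation rate and viscosity
k_d = (eps / nu**3) ** 0.25              # Kolmogorov dissipation wavenumber

def E(k):
    return 1.5 * eps**(2/3) * k**(-5/3) * np.exp(-k / k_d)

k = np.logspace(-1, np.log10(20 * k_d), 20000)
D = 2.0 * nu * k**2 * E(k)                              # dissipation spectrum
cum = np.cumsum(0.5 * (D[1:] + D[:-1]) * np.diff(k))    # running integral
cum /= cum[-1]                                          # fraction of total dissipation

threshold = 0.98                                        # illustrative cut-off fraction
k_max = k[1:][np.searchsorted(cum, threshold)]
print(f"k_d = {k_d:.0f},  k_max = {k_max:.0f} (captures {threshold:.0%} of dissipation)")
```

The point is simply that, in contrast to the lattice case, the choice of threshold fraction is a matter of convention, just as it is for the boundary-layer thickness.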

Recently there has been growing interest in these matters by those who study the philosophy of maths and science. For instance, van Wierst [4] notes that in the theory of critical phenomena, phase transitions require an infinite system, whereas in real life they take place in finite (and sometimes quite small!) systems. She argues that this paradox can be resolved by the introduction of ‘constructive mathematics’, but my view is that it can be adequately resolved by the concept of scale-invariance. Which brings us back to the infinite Reynolds number limit for turbulence. But, for the moment, I have said enough on that topic in previous posts, and will not expand on it here.

[1] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.
[2] W. D. McComb. Application of Renormalization Group methods to the subgrid modelling problem. In U. Schumann and R. Friedrich, editors, Direct and Large Eddy Simulation of Turbulence, pages 67-81. Vieweg, 1986.
[3] W. D. McComb and A. G. Watt. Conditional averaging procedure for the elimination of the small-scale modes from incompressible fluid turbulence at high Reynolds numbers. Phys. Rev. Lett., 65(26):3281-3284, 1990.
[4] Pauline van Wierst. The paradox of phase transitions in the light of constructive mathematics. Synthese, 196:1863, 2019.




Nightmare on Buccleuch Street.

Nightmare on Buccleuch Street.
Staycation post No 4. I will be out of the virtual office until 30 August.

I haven’t been into the university since the pandemic began but recently I dreamt that I was in the university library, in the section where magazines and journals are kept. In this dream, I was sitting at one of the low tables reading a magazine and two much younger men were also sitting there, in a suitably socially-distanced way. As they were unknown to me, I will call them A and B [1]. A was leafing through The Physics of Fluids while B was staring at one particular page of a tabloid newspaper.

After a while, A spoke. ‘Have you seen that interesting article about constraints on the scaling exponents in the inertial range?’

B shakes his head and goes on studying his tabloid. A continues. ‘These guys use Holder inequalities applied to the structure functions and then to the generalised structure functions; and end up with a condition relating the exponent for $S_2$ to the exponent for $S_3$. Now, if we assume that the exponent for $S_3$ is equal to $1$, then it follows that the exponent for $S_2$ is equal to $2/3$. This is exciting. Most people would agree with the first of these, but not the second.’

B continues to stare at his newspaper and makes no response. With a slight note of desperation in his voice, A goes on. ‘But don’t you see, this could fit in nicely with Lundgren’s matched asymptotic expansions analysis. It could also fit in with that guy’s blog about the K62 correction being unphysical. It looks like old Kolmogorov was right all the time … back in 1941. Aren’t you interested, at all?’

At last B looks up. ‘No, why should I be? I don’t use structure functions or spectra in my work. And you will go on using Kolmogorov scaling as you have always done, because it works. So why are you so excited?’

For a moment A just sits there. Then he gets up and puts the journal back in the rack. He stands in silence for a few moments. Then he says: ‘You know, I keep feeling it’s Thursday.’

For the first time B looks animated. ‘That’s funny so do I. Let’s go and have a drink.’

Exeunt omnes. It was only a dream and obviously couldn’t happen in real life. The paper to which A was referring is cited below as [2].

[1] There is no C in this story. See my post of 9 July 2020.
[2] L. Djenidi, R. A. Antonia, and S. L. Tang. Mathematical constraints on the scaling exponents in the inertial range of fluid turbulence. Phys. Fluids, 33:031703, 2021.




Why am I so concerned about Onsager’s so-called conjecture?

Why am I so concerned about Onsager’s so-called conjecture?
Staycation post No 3. I will be out of the virtual office until 30 August.

In recent years, Onsager’s (1949) paper on turbulence has been rediscovered and its eccentricities promoted enthusiastically, despite the fact that they are at odds with much well-established research in turbulence, beginning with Batchelor, Kraichnan, Edwards, and so on. In particular, a bizarre notion has taken hold that the Euler equation corresponds to the zero-viscosity limit of the Navier-Stokes equations and can be made dissipative, in defiance of the basic physics, by some mysterious alteration of the mathematics. The previous two posts refer to this.

I have been intending to write about this for some time, but the present paper [1] was prompted by an email that I received late in 2019 from MSRI, Berkeley. This was an advance announcement of a Program: ‘Mathematical problems in fluid dynamics’, to take place in the first half of 2021. I quote from the description as follows:

‘The fundamental equations in this area are the well-known Euler equations for inviscid fluids and the Navier-Stokes equations for the (sic) viscous fluids. Relating the two is the problem of the zero-viscosity limit and its connection to the phenomena of turbulence.’

The second sentence is nonsense and runs counter to all the conventions of fluid dynamics, where it has long been known that the relationship between the two equations is obtained by setting the viscosity equal to zero. The infinite Reynolds number limit, in contrast, is observed as an asymptotic behaviour of the Navier-Stokes equation; which, even at high Reynolds numbers, remains the Navier-Stokes equation.

I was appalled by the thought of young mathematicians being taught such unrepresentative and incorrect material. This is what provided my immediate motivation for writing the present paper. The first version of this paper was put on the arXiv on 12 December 2020.

In January of this year, I received from MSRI the final notification of this program. The wording had changed, and after some unexceptional statements about the equations of motion it read:

‘Open problems and connections to related branches of mathematics will be discussed, including the phenomena of turbulence and the zero-viscosity limit. Both theoretical and numerical aspects of these topics will be considered.’

Perhaps it is just a coincidence that this change should follow the arXiv publication of [1], but at least their statement about their course is no longer manifestly false; although much still depends on what was actually taught. It may be noted that Figure 2 of [1] (also see the previous post) shows the onset of scale invariance and, in effect, the zero-viscosity limit, in a direct numerical simulation at a Taylor-Reynolds number of about one hundred. This is the physical infinite Reynolds number limit as it occurs in real fluids.

Another aspect of the influence of Onsager is the use of the term dissipation anomaly, which is used instead of what some call the dissipation law. If one criticises the term, the mathematicians seem to believe that one is denying the existence of the effect. Not so. At Edinburgh we have worked on establishing the existence of the dissipation law and have also elucidated it as arising from the Richardson-Kolmogorov picture [2], [3]. It is a real physical effect and there is nothing anomalous about it.

[1] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614v2[physics.flu-dyn], 2021.
[2] W. D. McComb, A. Berera, M. Salewski, and S. R. Yoffe. Taylor’s (1935) dissipation surrogate reinterpreted. Phys. Fluids, 22:61704, 2010.
[3] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.




That’s the giddy limit!

That’s the giddy limit!
Staycation post No 2. I will be out of the virtual office until 30 August.

The expression above was still in use when I was young, and vestiges of its use linger on even today. It referred, often jocularly, to any behaviour which was deemed unacceptable. Why giddy? I’m afraid that the reference books are silent on that. However, I have encountered examples of mathematical limits which seemed to qualify for the adjective.

Shortly before I retired, I found myself teaching a mathematics course to third-year physics students. The purpose of this course was to try to bring our students up to speed in maths, after the mathematics lecturers had done their best in the previous two years. I suppose that it had a remedial aspect, and at that time the talk was all of the ‘math problem’. One example of a ‘giddy’ limit, which sticks in my mind, arose when I was marking class exam papers. The question asked the students to sketch the function $\mathrm{sinc}\,\nu = \sin \nu / \nu$. This required them to work out its value at $\nu =0$, where of course direct substitution results in an indeterminate form. I need hardly say that they had to use either a Taylor series expansion of $\sin\nu$ or make use of l’Hopital’s rule to reveal the correct limiting value which is unity. Or of course they could just sketch it and infer the limiting behaviour by eye.
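For the record, the Taylor series route makes the limiting value transparent: \[ \frac{\sin \nu}{\nu} = \frac{1}{\nu}\left(\nu - \frac{\nu^3}{3!} + \frac{\nu^5}{5!} - \dots \right) = 1 - \frac{\nu^2}{6} + \frac{\nu^4}{120} - \dots \;\rightarrow\; 1 \quad \mbox{as} \quad \nu \rightarrow 0. \]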

One person did this beautifully, with all the zeros in the right places and the central peak heading up to the value one on both sides. It was as the $y$-axis was approached that giddiness seemed to set in, and the sketched curve then shot down to zero on both sides. The student then proudly declared it to be an indeterminate form. One, which just happened to be zero! This sudden abandonment of all reason was quite baffling and I never understood the reason for it.

However, I recently saw comments by an anonymous referee which seemed to come into a similar category. These were directed at Figure 2 in reference [1], which was intended to demonstrate that the physical infinite Reynolds number limit was determined by the onset of scale-invariance. We show this below. Scale-invariance in this context is defined to be when the maximum rate of inertial transfer $\varepsilon_T$ becomes equal to the viscous dissipation $\varepsilon$. As we were originally studying the dependence of the dimensionless dissipation on the Taylor-Reynolds number, we actually plot the ratio $ \varepsilon / \varepsilon_T $, which reduces towards unity, and this indicates the onset of scale-invariance.

Onset of the infinite Reynolds limit in stationary isotropic turbulence.

The referee looked at the figure and asked: how is the onset of scale-invariance defined? Is the onset placed at $R_{\lambda}=50,\,100,\,150$?

This seems to me to verge on the childish. Does he have no familiarity with the intersection between a mathematically asymptotic result and a real physical system? Has he never met viscous boundary layers, exponential decay of sound or other radiation? The answer in all these cases is set by the resolution of the physical measuring system. Once changes are too small to be measurable, then the asymptote has been reached. The curve that we show in the figure would go on at a constant level no matter how much one increased the Reynolds number.

The lesson to be drawn from this is that there are no further qualitative changes in the system as you increase the Reynolds number, and this is how real fluids behave. In the next blog we will consider the motivation for the research reported in [1].

[1] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614v2[physics.flu-dyn], 2021.




When is a conjecture not a conjecture?

When is a conjecture not a conjecture?
Staycation post No 1. I will be out of the virtual office until 30 August.

That sounds like the sort of riddle I used to hear in childhood. For instance, when is a door not a door? The answer was: when it’s ajar! [1] Well, at least we all know what a door is, so let us begin with what a conjecture actually is.

According to my dictionary, a conjecture is simply a guess. But in mathematics it is somehow more than that. Essentially, the idea is that mathematicians can be guided by their experience to postulate that something they know to be true under particular circumstances is in fact true under all possible or relevant circumstances. If they can prove it, then their conjecture becomes a theorem.

The question then arises: what is a conjecture in physics? And if you can demonstrate its truth by measurement or reasoned argument, does it become a theory?

Let us take as an example a system such as an electrolyte or plasma containing many charged particles. The particles interact pairwise through the Coulomb potential and, as the Coulomb potential is long-range, this presents a many-body problem. What happens in practice is that a form of renormalization takes place, and the Coulomb potential due to any one charged particle is replaced by a potential which falls off more rapidly due to the screening effect of the cloud of particles surrounding it. A very simple introduction to this idea (which is known as the Debye-Huckel theory) can be found in Section 1.2.1 of the book cited as reference [2] below.
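For definiteness, the outcome of this renormalization can be written down explicitly (the notation here is generic and chosen only for illustration): the bare Coulomb potential of a charge $q$ is replaced by the screened form \[ \phi(r) = \frac{q}{4\pi\epsilon_0 r}\, e^{-r/\lambda_D}, \] where the Debye length $\lambda_D$ is set by the temperature and the number density of the surrounding charge cloud, so that the interaction is effectively cut off beyond a few Debye lengths.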

If we take the case of the turbulence cascade, the Fourier wavenumber modes provide the degrees of freedom. Then, instead of pairwise interactions, we have the famous triad interactions, each and every one of which conserves energy. If for simplicity we consider a periodic box, then the mean flux of energy from low wavenumbers to high can be written as the sum of all the individual mean triadic interactions. As in principle all modes are coupled, this is also a many-body problem and one can expect some form of renormalization to take place. In some simple circumstances this can be interpreted as a renormalized viscosity (the effective viscosity) which is very much larger than the molecular viscosity. These ideas date back to the late 19th century and are the earliest example of renormalization (although that term itself came much later, around the mid-20th century).
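The late nineteenth-century idea referred to here is essentially the eddy-viscosity hypothesis of Boussinesq: for a simple mean shear flow, the turbulent (Reynolds) stress is modelled by analogy with the viscous stress, but with a renormalized transport coefficient (the notation below is generic, for illustration only): \[ -\langle u_1 u_2 \rangle = \nu_{\mathrm{eff}}\, \frac{\partial U_1}{\partial x_2}, \qquad \nu_{\mathrm{eff}} \gg \nu_0, \] where $U_1$ is the mean velocity, $u_1$ and $u_2$ are fluctuations, and $\nu_{\mathrm{eff}}$ is the effective (or eddy) viscosity.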

Now let us consider what happens as we progressively increase the Reynolds number. For the utmost simplicity we will restrict our attention to forced, stationary isotropic turbulence. Then, if we hold the rate of energy input into the system constant and decrease the viscosity progressively, this increases the Reynolds number at constant dissipation rate. It also increases the largest excited wavenumber of the system. The result is a form of scale-invariance in which the flux through wavenumbers is independent of wavenumber, and hence the dissipation law: the scaled dissipation rate is independent of the viscosity as a rigorous asymptotic result [3]. It should perhaps be emphasised that this asymptotic behaviour is the infinite Reynolds number limit; but, from a practical point of view, we find that subsequent variation becomes too small to detect at Taylor-Reynolds numbers of a few hundred and thereafter may be treated as constant. We will return to this point in the next post, along with an illustration.
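A quick numerical sketch of this thought experiment may be helpful (the relations used are the standard isotropic ones, $\lambda^2 = 15\nu u^2/\varepsilon$, $R_\lambda = u\lambda/\nu$ and $k_d = (\varepsilon/\nu^3)^{1/4}$; the particular parameter values are my own, purely for illustration). Holding $\varepsilon$ and the rms velocity fixed while reducing the viscosity increases both the Taylor-Reynolds number and the highest excited wavenumbers:

```python
import numpy as np

# Standard isotropic relations: lambda^2 = 15 nu u^2 / eps,
# R_lambda = u lambda / nu, k_d = (eps / nu^3)^(1/4).
# Parameter values are chosen purely for illustration.
eps = 0.1     # fixed rate of energy input (= dissipation rate)
u   = 1.0     # rms velocity, taken as fixed by the forcing (an assumption)

for nu in [1e-2, 1e-3, 1e-4, 1e-5]:
    lam   = np.sqrt(15.0 * nu * u**2 / eps)   # Taylor microscale
    R_lam = u * lam / nu                      # Taylor-Reynolds number
    k_d   = (eps / nu**3) ** 0.25             # Kolmogorov dissipation wavenumber
    print(f"nu = {nu:.0e}:  R_lambda = {R_lam:8.1f},  k_d = {k_d:10.1f}")
```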

Meanwhile, back in real space, velocity gradients are becoming steeper as the Reynolds number increases, and this aspect disturbed Onsager [4] (see also the review of this paper in the context of Onsager’s life and work [5]). In fact he concluded that the infinite Reynolds number limit was the same as setting the viscosity equal to zero. In his view, the resulting Euler equation could still account for the dissipation in terms of singular behaviour. But it has to be said that, in the absence of viscosity, there is no transfer of macroscopic kinetic energy into heat (i.e. microscopic kinetic energy). I have seen some references to pseudo-dissipation recently, so there is perhaps a growing awareness that Onsager’s conjecture needs further critical thought.
Onsager’s paper concludes with the sentence: ‘The detailed conservation of energy (i.e. the global conservation law of the nonlinear term) does not imply conservation of the total energy if the total number of steps in the cascade is infinite and the double sum … converges only conditionally.’ The italicised parenthesis is mine as Onsager referred here to one of his equation numbers. However this is merely an unsupported assertion which is incorrect on physical grounds because:
1. The number of steps is never infinite in a real physical flow.
2. The individual interactions are conservative so it is not clear how mere summation can lead to overall non-conservation.
3. The physical process involves a renormalization which means that there is a well-defined physical infinite Reynolds number limit at quite moderate Reynolds numbers.
It is totally unclear to me what mathematical justification there can be for this statement; and discussions of it that I have seen in the literature seem to me to be unsound on physical grounds. I shall return to these points in future blogs.

[1] That is, ‘a jar’, geddit? Oh dear, I suppose I am getting into holiday mood!
[2] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.
[3] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.
[4] L. Onsager. Statistical Hydrodynamics. Nuovo Cim. Suppl., 6:279, 1949.
[5] G. L. Eyink and K. R. Sreenivasan. Onsager and the Theory of Hydrodynamic Turbulence. Rev. Mod. Phys., 87:78, 2006.




How do we identify the presence of turbulence?

How do we identify the presence of turbulence?

In 1971, when I began as a lecturer in Engineering Science at Edinburgh, my degree in physics provided me with no basis for teaching fluid dynamics. I had met the concept of the convective derivative in statistical mechanics, as part of the derivation of the Liouville equation, and that was about it. And of course the turbulence theory of my PhD was part of what we now call statistical field theory. Towards the end of autumn term, I was due to take over the final-year fluids course, but fortunately a research student who worked as a lab demonstrator for me had previously taken the course and kindly lent me his copy of the lecture notes. However, in my first year, I was never more than one lecture ahead of the students!

This grounding in the subject was reinforced by practical experience, when I began doing experimental work on drag reduction by additives and on particle diffusion. It also allowed me to recover quickly from an initial puzzlement, when I saw a paper in JFM which proposed that the occurrence of streamwise vorticity could be taken as a signal of turbulence in duct flow.

Later on, I learned that this idea could be extended to give a plausible picture of the turbulent bursting process, and a discussion can be found in Section 11.4.3 of my book [1], where the development of $\Lambda$ vortices is illustrated in Fig. 11.1. In the book, this is preceded by a treatment of the boundary layer on a flat plate in Section 1.4, which can help us to understand the basic idea as follows. Suppose we have a fluid moving with constant velocity $U_1$, incident on a flat plate lying in the ($x_1,x_3$) plane with its leading edge at $x_1=0$. Vorticity is generated at this point due to the no-slip boundary condition, and diffuses out normal to the plate in the $x_2$ direction, resulting in a velocity field of the form $u_1(x_2)$, in the boundary layer. We can visualize the sense of the vorticity vector by imagining the effect of a small portion of the fluid becoming solidified. That part nearest the plate will slow down, the ‘solid body’ will rotate, and the spin vector will point in the $x_3$ direction. This is the only component of vorticity in the system.
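This can be checked directly from the definition of vorticity. For the idealised boundary-layer field $\mathbf{u} = (u_1(x_2),\,0,\,0)$ (neglecting the small wall-normal velocity, as above), the only non-vanishing component of $\boldsymbol{\omega} = \nabla \times \mathbf{u}$ is \[ \omega_3 = \frac{\partial u_2}{\partial x_1} - \frac{\partial u_1}{\partial x_2} = -\frac{\partial u_1}{\partial x_2}, \] which is directed along the $x_3$ axis, in agreement with the ‘solidified element’ picture.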

The occurrence of vorticity in the other two directions must be a consequence of instability and almost certainly begins with vorticity building up in the $x_1$ direction due to edge effects. That is, in practice, the plate must be of finite extent in the cross-stream or $x_3$ direction. A turbulence transition could not occur if the plate (as normally assumed for pedagogic purposes) were of infinite extent. This provides an unequivocal criterion for the occurrence of the transition to turbulence, but there is still the question of when the turbulence is in some sense well-developed. And of course other flows may require other criteria.

The question of whether a flow is turbulent or not became something of an issue in the 1980s/90s, when there was a growing interest in applying Renormalization Group (RG) to turbulence. The pioneering work on applying RG to randomly stirred fluid motion was reported by Forster, Nelson and Stephen [2] in 1976, and you should note from the title of their first paper that the word ‘turbulence’ does not appear. Their work was restricted to showing that there was a fixed point of the RG transformations in the limit of zero wavenumbers (i.e. ‘large wavelengths’).

The main drive in turbulence research is always towards applications, and inevitably pressure developed to seek ways of extending the work of Forster et al. to turbulence. In the process a distinction grew up between ‘stirred fluid motion’ and so-called ‘Navier-Stokes turbulence’. The latter should be described by the spectral energy balance known as the Lin equation, whereas the former just reflects its Gaussian forcing. Nowadays, in physics, the distinction has settled down to ‘stirred hydrodynamics’ and just plain turbulence!

The difficulty of defining turbulence in a concise way remains, but some light can be shed on these earlier controversies by considering a more recent discovery that we made at Edinburgh. This was the result that a dynamical system consisting of the Navier-Stokes equations forced by the combination of an initial Gaussian field and a negative damping term will, at very low Reynolds numbers, become non-turbulent and take the form of a Beltrami flow [3]. In this paper, we emphasised that at early times the transfer spectrum $T(k,t)$ has the behaviour typically found in simulations of isotropic turbulence but at later times tends to zero. At the same time, the energy spectrum $E(k,t)$ tends to a unimodal spectrum peaked at $k=1$. An interesting point is that the fixed point of Forster et al. at $k \rightarrow 0$ is cut off by our lattice, so that we observe a Beltrami flow instead.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] D. Forster, D. R. Nelson, and M. J. Stephen. Long-time tails and the large-eddy behaviour of a randomly stirred fluid. Phys. Rev. Lett., 36(15):867-869, 1976.
[3] W. D. McComb, M. F. Linkmann, A. Berera, S. R. Yoffe, and B. Jankauskas. Self-organization and transition to turbulence in isotropic fluid motion driven by negative damping at low wavenumbers. J. Phys. A Math. Theor., 48:25FT01, 2015.




Are Kraichnan’s papers difficult to read? Part 2: The DIA.

Are Kraichnan’s papers difficult to read? Part 2: The DIA.

In 2008, or thereabouts, I took part in a small conference at the Isaac Newton Institute and gave a talk on the LET theory, its relationship to DIA, and how both theories could be understood in terms of their relationship to Quasi-normality. During my talk, I was interrupted by someone in the audience, who said that I was wrong in discussing DIA as if Kraichnan’s perturbation theory was the same as that of Wyld. I disagreed, and we had a short exchange of the kind ‘Yes you did! No, I didn’t!’, and the matter was left unresolved.

Sometime afterwards, I refreshed my memory of these matters and realised that I was wrong. Kraichnan’s seminal paper [1] is not easy to understand, but he was claiming to be introducing a new type of perturbation theory, and that undoubtedly differed from Wyld’s subsequent field-theoretic approach [2]. In his book on the subject, Leslie had simply chickened out and used the Wyld analysis [3]. Many of us had then followed in his tracks, but over the years (decades!) I had simply forgotten that fact. It was salutary to be reminded of it, and I duly said something about it in my later book on turbulence [4].

Again this draws attention to the danger of relying uncritically on secondary sources, but an interesting point emerged. Kraichnan made what was essentially a mean-field approximation in his theory. The fact that Wyld could show that the DIA gave identical results to the same order of truncation of conventional perturbation theory tells us that the mean-field approximation for the response function was justified; because the method of renormalization was the same for both approaches. This is of further interest, in that the recent formal derivation of the local energy-transfer (LET) theory also relies on a mean-field approximation involving the response function [5], although this is defined in a completely different way from that in DIA.

Among the select few who have actually got to grips with the new perturbation theory in [1] are my student Matthew Salewski, who did so as a preliminary to the resolution of the apparent differences between formalisms [6]; and S. Kida, who revisited DIA in order to derive a Lagrangian theory: see, for example, reference [7].

As regards the question which heads this post, we can leave the last word with the man himself. Kraichnan told me that on one occasion a referee had complained to him: ‘Why are your papers so difficult to read?’ and he had replied: ‘If you think they are hard to read, have you considered how difficult they must be to write?’.

[1] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[2] H. W. Wyld Jr. Formulation of the theory of turbulence in an incompressible fluid. Ann. Phys., 14:143, 1961.
[3] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.
[4] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[5] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[6] A. Berera, M. Salewski, and W. D. McComb. Eulerian Field-Theoretic Closure Formalisms for Fluid Turbulence. Phys. Rev. E, 87:013007-1-25, 2013.
[7] S. Kida and S. Goto. A Lagrangian direct-interaction approximation for homogeneous isotropic turbulence. J. Fluid Mech., 345:307-345, 1997.




Are Kraichnan’s papers difficult to read? Part 1: Galilean Invariance

Are Kraichnan’s papers difficult to read? Part 1: Galilean Invariance.

When I was first at Edinburgh, in the early 1970s, I gave some informal talks on turbulence theory. One of my colleagues became sufficiently interested to start doing some reading on the subject. Shortly afterwards he came up to me at coffee time and said. ‘Are all Kraichnan’s papers as difficult to understand as this one?’ The paper which he was brandishing at me was Kraichnan’s seminal 1959 paper which launched the direct interaction approximation (DIA) [1]. I had to admit that Kraichnan’s papers were in general pretty difficult to read; and I think that my colleague gave up on the idea. Shortly afterwards, Leslie’s book came out and this was very largely devoted to making Kraichnan’s work more accessible [2]; but I think that was too late for one disillusioned individual.

Recently I was reading a paper (might have been one of Kraichnan’s) and I was brought up short by something like ‘… and the variance takes the form:’ followed by a displayed mathematical expression. So it was rather like one half of an equation, with the other (first) half being in words in the text. So, I found that I had to remember what the variance was in this particular context, and then complete the equation in my mind. If I had been writing this, I would have used a symbol for the variance (even if just its definition as $\langle u^2 \rangle$) and displayed an actual equation. But what this reminded me of was my own diagnosis of the difficulty with Kraichnan’s style. I suspected that he would get tired of always writing in maths, and would feel the need for some variety. The trouble was that sometimes he would put the important bits in words, with a corresponding loss of conciseness and precision. As a result there was a temptation to rely on secondary sources such as Leslie’s book [2] or Orszag’s review article [3]; and I was by no means the only one to succumb to this temptation!

The fact that it could be unwise to do so emerged when we produced a paper on calculations of the LET theory (compared with DIA) and submitted it to the JFM [4]. We discussed the idea of random Galilean invariance (RGI) and argued that its averaging process violated the ergodic principle.

We set out the procedure of random Galilean transformation as follows. Consider a velocity field $\mathbf{u}(\mathbf{x},t)$ in a frame of reference $S$. Suppose that we have a set of reference frames $\{S_0,\,S_1,\,S_2,\, \dots\}$, moving with velocities $\{C_0,\,C_1,\,C_2,\,\dots\}$, where the shift velocities are all constant and the sub-ensemble is defined by the probability distribution $P(C)$ of the shift velocities. In practice, Kraichnan took this to be a normal or Gaussian distribution, and averaged with respect to $C$ as well as with respect to the velocity field.

However, Kraichnan’s response to our paper was ‘that’s not what I mean by random Galilean transformations’. But he didn’t enlighten us any further on the matter.

Around that time, a new research student started, and I asked him to go through Kraichnan’s papers with the proverbial fine-tooth comb and find out what RGI really was. What he found was that Kraichnan was working with a composite ensemble made up from the members of the turbulent ensemble, each shifted randomly by a constant velocity. So the turbulence ensemble $\{\mathbf{u}^{i}(\mathbf{x},t)\}$, with the superscript $i$ taking integer values, was replaced by a composite ensemble $\{\mathbf{u}^{i}(\mathbf{x},t) + C_i\}$. This had to be inferred from a brief statement in words in a paper by Kraichnan!
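A minimal numerical sketch may make the composite ensemble concrete (the one-dimensional toy ‘velocity field’ and the particular Gaussian width are invented here purely for illustration; they are not taken from Kraichnan’s papers):

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 64, 10000                 # grid points per realization, ensemble size
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# Toy 'turbulent' ensemble: random-phase single-mode fields (illustration only)
phases = rng.uniform(0.0, 2.0 * np.pi, size=(M, 1))
u = np.sin(x + phases)                      # shape (M, N)

# One constant random shift C_i per realization, drawn from a Gaussian P(C)
C = rng.normal(loc=0.0, scale=0.5, size=(M, 1))
u_composite = u + C                         # the composite ensemble {u^i + C_i}

# The mean field is unchanged, but the single-point variance is inflated by <C^2>
print("var(u)     :", round(u.var(), 3))
print("var(u + C) :", round(u_composite.var(), 3))
print("<C^2>      :", round((C**2).mean(), 3))
```

The point of the sketch is simply that each realization receives a single constant shift drawn from $P(C)$, rather than each realization being averaged over the shift distribution.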

The student then investigated this choice of RGT in conjunction with the derivation of theories and concluded that it was incompatible with the use of renormalized perturbation theory. In other words, Kraichnan was using it as a constraint on the theory, once the theory had actually been derived. But in fact the underlying use of the composite ensemble invalidated the actual derivation of the theory. It would be too complicated to go further into this matter here, but a full account can be found in Section 10.4 of my book [5], which references Mark Filipiak’s thesis [6].

This experience illustrates the danger of relying too much on secondary sources, however excellent they may be. I will give another example in my next post but I can round this one off with an anecdote. When I first met Bob Kraichnan he told me that he had been very angered by Leslie’s book. I think that he was unhappy at what he saw as an excessive concentration on his work, and also the fact that Leslie had dedicated the book to him. However, he said that various others had persuaded him that he was wrong to react in this way. I added my own voice to this chorus, pointing out that there was absolutely no doubt of his dominance as the father of modern turbulence theory; and the dedication was no more than a personal expression of admiration on the part of David Leslie.

[1] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[2] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.
[3] S. A. Orszag. Analytical theories of turbulence. J. Fluid Mech., 41:363, 1970.
[4] W. D. McComb, V. Shanmugasundaram, and P. Hutchinson. Velocity derivative skewness and two-time velocity correlations of isotropic turbulence as predicted by the LET theory. J. Fluid Mech., 208:91, 1989.
[5] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[6] M. J. Filipiak. Further assessment of the LET theory. PhD thesis, University of Edinburgh, 1992.




Hurrah for arXiv.org!

Hurrah for arXiv.org!
In my previous blog, I referred to my paper with Michael May [1], which failed to be accepted for publication, despite my having tried several journals. I suppose that some of my choices were unrealistic (e.g. Nature) and that I could have tried more. Also, I could have specified referees, which I don’t like doing, but now increasingly suspect that it is prudent to do so. Anyway, I see from ResearchGate that, despite it only being on the arXiv, it continues to receive some attention; and I was pleased to find that it had actually been cited in published work.

It was only recently, when thinking of topics for another blog on peer review, that I remembered that I already had a paper on the arXiv; and it has been cited about a dozen times (although two of those are by me!). This was a paper with one of my students [2] which was presented at the Monte Verita conference in 1998. Naturally I expected it to appear in the conference proceedings, but it received a referee’s report that ran something like this: ‘No doubt the authors have some reasons of their own for doing these things but I am unable to see any interest or value in their work’. So we had to rely on the arXiv publication.

Now, the idea of studying the filtered/partitioned nonlinear term, from the point of view of subgrid modelling and renormalization group, was quite an active field at that time, so the referee was actually revealing his own ignorance. (In fact, I know who it was and someone who knew him personally told me that this is exactly the kind of person he is. Very enthusiastic about his own topic and uninterested in other topics.) This is an extreme deficiency of scholarship, but in my view is not completely untypical of the turbulence community. It is perhaps worth mentioning that one of the results we presented was really quite profound in showing how a subgrid eddy viscosity could represent amplitude effects but not phase effects. Various people working in the field would have had an inkling of this fact, but we actually demonstrated it quantitatively by numerical simulation.

The paper also turned out to have some practical value. Later on, I received a request from someone who was preparing a chapter for inclusion in an encyclopaedia, for permission to reproduce one of our figures. This was published in 2004, and in 2017 a second edition appeared [3]. In 2004 the work was also cited in a specialist article on large-eddy simulation [4], and over the years it has been cited various times in this type of article, most recently in the present year. So, other people saw interest and value in the work, but it didn’t appear in the conference proceedings! The relevant figure appears below.

Figure 15 from reference [2] as reproduced in reference [3].
As a final point, I have sometimes wondered about the status of arXiv publications. An interesting point of view can be found in the book by Roger Penrose [5]. At the beginning of his bibliography he refers favourably to the arXiv, stating that some people actually regard it as a source of eprints, as an alternative to journal publication. He also notes how this can speed up the exchange of ideas, perhaps too much so!

Of course, in his subject, speculative ideas are an everyday fact of life. In turbulence, on the other hand, speculative ideas have little chance of getting past the dour, ‘handbook engineering’ mind-set of so many people in the field. So, let’s all post our speculative ideas on the arXiv, where it is quite easy to find them with the aid of Mr Google.

[1] W. D. McComb and M. Q. May. The effect of Kolmogorov (1962) scaling on the universality of turbulence energy spectra. arXiv:1812.09174[physics.flu- dyn], 2018.
[2] W. D. McComb and A. J. Young. Explicit-Scales Projections of the Partitioned Nonlinear Term in Direct Numerical Simulation of the Navier-Stokes Equation. Presented at 2nd Monte Verita Colloquium on Fundamental Problematic Issues in Turbulence: available at arXiv:physics/9806029 v1, 1998.
[3] T. J. R. Hughes, G. Scovazzi, and L. P. Franca. Multiscale and Stabilized Methods. In E. Stein, R. de Borst, and T. J. R. Hughes, editors, Encyclopedia of Computational Mechanics Second Edition, pages 1-102. Wiley, 2017.
[4] T. J. R. Hughes, G. N. Wells, and A. A. Wray. Energy transfers and spectral eddy viscosity in large-eddy simulations of homogeneous isotropic turbulence: Comparison of dynamic Smagorinsky and multiscale models over a range of discretizations. Phys. Fluids, 16(11):4044-4052, 2004.
[5] Roger Penrose. The Road to Reality. Vintage Books, London, 2005.




The Kolmogorov (1962) theory: a critical review Part 2

The Kolmogorov (1962) theory: a critical review Part 2

Following on from last week’s post, I would like to make a point that, so far as I know, has not previously been made in the literature of the subject. This is that the energy spectrum is (in the sense of thermodynamics) an intensive quantity. Therefore it should not depend on the system size. This is in contrast to the total kinetic energy (say), which does depend on the size of the system and is therefore extensive.

What applies to the energy spectrum also applies to the second-order structure function. If we now consider equation (1) from the previous blog, which is \begin{equation}S_2(r)=C(\mathbf{x},t) \varepsilon^{2/3}r^{2/3}(L/r)^{-\mu}, \label{62S2}\end{equation}then for isotropic, stationary turbulence, it may be written as: \begin{equation}S_2(r)=C \varepsilon^{2/3}r^{2/3} (L/r)^{-\mu}. \end{equation} Note that $C$ is constant, as it can no longer depend on the macrostructure.

Of course this still contains the factor $L^{-\mu}$. Now, $L$ is only specified as the external scale in K62, but it is necessarily related to the size of the system. Accordingly, taking the limit of infinite system size is related to taking the limit of infinite values of $L$, which is needed in order to include the mode $k=0$ and to be able to carry out Fourier transforms. If we do this, we have three possible outcomes. If $\mu$ is negative, then $S_2 \rightarrow \infty$ as $L \rightarrow \infty$, whereas if $\mu$ is positive, then $S_2$ vanishes in the limit of infinite system size. Hence, in either case, the result is unphysical, both by the standards of continuum mechanics and by those of statistical physics.

However, if $\mu = 0$ then there is no problem. The structure function (and spectrum) exist in the limit of infinite system size. Could this be an argument for K41?

Lastly, we should mention that McComb and May [1] have used a plausible method to estimate values of $L$ and, taking a representative value of $\mu=0.1$, have shown that the inclusion of this factor as in K62 destroys the well-known collapse of spectral data that can be achieved using K41 variables.

We began with the well-known graph in which one-dimensional projections of the energy spectrum for a range of Reynolds numbers are normalized on Kolmogorov variables and plotted against $k’=k/k_d$: see, for example, Figure 2.4 of the book [2], which is shown immediately below this text.

 

Figure 1: Measured one-dimensional spectra for a wide range of Reynolds numbers, showing the asymptotic effect of scaling on K41 variables. Reproduced from Figure 2.4 of Reference [2].

 

In this work, we referred to $L$ as $L_{ext}$ and we estimated it as follows. From the above graph, we see that the universal behaviour always occurs in the limit $R_\lambda \rightarrow \infty$ with all spectra collapsing to a single curve at $k’= k/k_d =1$. As the Reynolds number increases, each graph flattens off as $k$ decreases and ultimately forms a plateau at low wavenumbers. We argued that one can use the point where this departure takes place, $k’_{ext}$ (say), to estimate the external length scale, thus: \[L’_{ext} = 2\pi/k’_{ext}.\]
In order to make a comparison, we chose the results for a tidal channel at $R_{\lambda}=2000$ and for grid turbulence at $R_{\lambda}=72$. We show these two spectra, as selected from Fig. 1, on Figure 2 below.

 

Figure 2 from Reference 1.

 

Note that we plot the scaled one-dimensional spectrum $\psi(k’)=\phi(k’)/(\varepsilon \nu^5)^{1/4}$.
In the next figure, we plot these two spectra in compensated form, where we have taken the one-dimensional spectral constant to be $\alpha_{1}=1/2$, on the basis of Figure 2. In this form the $-5/3$ power law appears as a horizontal line at unity. We will return to this aspect later.

 

Figure 3 from Reference 1.

 

In order to assess the effect of including the K62 correction, we estimated $L’_{ext}\sim 50$ for the grid turbulence and $L’_{ext}\sim 2000$ for the tidal channel. In fact the spectra from the tidal channel do not actually peel off from the $-5/3$ line at low $k$, so our estimate is actually a lower bound for this case. This favours K62 in the comparison. We took the value $\mu = 0.1$, as obtained by high-resolution numerical simulation, and the result of including the K62 correction is shown in Figure 4.

 

Figure 4 from Reference 1.

 

It can be seen that including the K62 corrections destroys the collapse of the spectra which, apart from showing a slope of $-\mu = -0.1$ in both cases, are now separated and in a constant ratio of $0.69$. Evidently the universal collapse of spectra in Figure 1 would not be observed if the K62 corrections were in fact correct!
My final point is that one of the unfavourable referees for this paper had a major concern with the fact that the results for grid turbulence did not really show much $-5/3$ behaviour. This is to miss the point. The K41 scaling shows a universal form in the dissipation range, as well as in the inertial range. The inclusion of the K62 correction destroys this, when implemented with plausible estimates for the two parameters.
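For anyone who wishes to check the quoted ratio, it follows directly from the two estimates of $L’_{ext}$ and the assumed value of $\mu$, on the natural assumption that the K62 factor enters the scaled spectra as $(k’ L’_{ext})^{-\mu}$ at fixed $k’$:

```python
# Ratio of the K62 correction factors for the two flows at the same k'
mu = 0.1
L_grid, L_tidal = 50.0, 2000.0            # estimates of L'_ext used above

ratio = (L_tidal / L_grid) ** (-mu)       # tidal-channel factor / grid factor
print(f"{ratio:.2f}")                     # 0.69
```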

[1] W. D. McComb and M. Q. May. The effect of Kolmogorov (1962) scaling on the universality of turbulence energy spectra. arXiv:1812.09174[physics.flu-dyn], 2018.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.




The Kolmogorov (1962) theory: a critical review Part 1

The Kolmogorov (1962) theory: a critical review Part 1
As is well known, Kolmogorov interpreted Landau’s criticism as referring to the small-scale intermittency of the instantaneous dissipation rate. His response was to adopt Obukhov’s proposal to introduce a new dissipation rate which had been averaged over a sphere of radius $r$, and which may be denoted by $\varepsilon_r$. This procedure runs into an immediate fundamental objection.

In K41A (or its wavenumber-space equivalent), the relevant inertial-range quantity for the dimensional analysis is the local (in wavenumber) energy transfer. This is of course equal to the mean dissipation rate by the global conservation of energy. (It is a potent source of confusion that these theories are almost always discussed in terms of the dissipation $\varepsilon$, when the proper inertial-range quantity is the nonlinear transfer of energy $\Pi$. The inertial range is defined by the condition $\Pi_{max} = \varepsilon$.) However, as pointed out by Kraichnan [1], there is no such simple relationship between locally-averaged energy transfer and locally-averaged dissipation.

Although Kolmogorov presented his 1962 theory as `A refinement of previous hypotheses …’, it is now generally understood that this is incorrect. In fact it is a radical change of approach. The 1941 theory amounted to a general assumption that a cascade of many steps would lead to scales where the mean properties of turbulence were independent of the conditions of formation (i.e. of, essentially, the physical size of the system). Whereas, in 1962, the assumption was, in effect, that the mean properties of turbulence did depend on the physical size of the system. We will return to this point presently, but for the moment we concentrate on the preliminary steps.

The 1941 theory relied on a general assumption with an underlying physical plausibility. In contrast, the 1962 theory involved an arbitrary and specific assumption. This was to the effect that the logarithm of $\varepsilon(\mathbf{x},t)$ has a normal distribution for large $L/r$ where $L$ is referred to as an external scale and is related to the physical size of the system. We describe this as `arbitrary’ because no physical justification is offered; but in any case it is certainly specific. Then, arguments were developed that led to a modified expression for the second-order structure function, thus: \begin{equation}S_2(r)=C(\mathbf{x},t)\varepsilon^{2/3}r^{2/3}(L/r)^{-\mu}, \label{62S2}\end{equation} where $C(\mathbf{x},t)$ depends on the macrostructure of the flow.

In addition, Kolmogorov pointed out that `the theorem of constancy of skewness …derived (sic) in Kolmogorov (1941b)’ is replaced by \begin{equation} S(r) = S_0(L/r)^{3\mu/2},\end{equation} where $S_0$ also depends on the macrostructure.

Equation (\ref{62S2}) is rather clumsy in structure, in the way the prefactor $C$ depends on $\mathbf{x}$. This is because of course we have $\mathbf{r}=\mathbf{x}-\mathbf{x’}$, so clearly $C(\mathbf{x},t)$ also depends on $r$. A better way of tackling this would be to introduce centroid and relative coordinates, $\mathbf{R}$ and $\mathbf{r}$, such that \begin{equation}\mathbf{R} = (\mathbf{x}+\mathbf{x’})/2; \qquad \mbox{and} \qquad \mathbf{r}= ( \mathbf{x}-\mathbf{x’}).\end{equation} Then we can re-write the prefactor as $C(\mathbf{R}, r; t)$, where the dependence on the macrostructure is represented by the centroid variable, while the dependence on the relative variable holds out the possibility that the prefactor becomes constant for sufficiently small values of $r$.

Of course, if we restrict our attention to homogeneous fields, then there can be no dependence of mean quantities on the centroid variable. Accordingly, one should make the replacement: \begin{equation}C(\mathbf{R}, r; t)=C(r; t),\end{equation} and the additional restriction to stationarity would eliminate the dependence on time. In fact Kraichnan [1] went further and replaced the pre-factor with the constant $C$: see his equation (1.9).

For the sake of completeness, another point worth mentioning at this stage is that the derivation of the `4/5’ law is completely unaffected by the `refinements’ of K62. This is really rather obvious. The Karman-Howarth equation involves only ensemble-averaged quantities and the derivation of the `4/5’ law requires only the vanishing of the viscous term. This fact was noted by Kolmogorov [2].

[1] R. H. Kraichnan. On Kolmogorov’s inertial-range theories. J. Fluid Mech., 62:305, 1974.
[2] A. N. Kolmogorov. A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number. J. Fluid Mech., 13:82-85, 1962.




The Landau criticism of K41 and problems with averages

The Landau criticism of K41 and problems with averages

The idea that K41 had some problem with the way that averages were taken has its origins in the famous footnote on page 126 of the book by Landau and Lifshitz [1]. This footnote is notoriously difficult to understand; not least because it is meaningless unless its discussion of the `dissipation rate $\varepsilon$’ refers to the instantaneous dissipation rate. Yet $\varepsilon$ is clearly defined in the text above (see the equation immediately before their (33.8)) as being the mean dissipation rate. Nevertheless, the footnote ends with the sentence `The result of the averaging therefore cannot be universal’. As their preceding discussion in the footnote makes clear, this lack of universality refers to ‘different flows’: presumably wakes, jets, duct flows, and so on.

We can attempt a degree of deconstruction as follows. We will use our own notation, and to this end we introduce the instantaneous structure function $\hat{S}_2(r,t)$, such that $\langle \hat{S}_2(r,t) \rangle =S_2(r)$. Landau and Lifshitz consider the possibility that $S_2(r)$ could be a universal function in any turbulent flow, for sufficiently small values of $r$ (i.e. the Kolmogorov theory). They then reject this possibility, beginning with the statement:

`The instantaneous value of $\hat{S}_2(r,t)$ might in principle be expressed as a universal function of the energy dissipation $\varepsilon$ at the instant considered.’

Now this is rather an odd statement. Ignoring the fact that the dissipation is not the relevant quantity for inertial-range behaviour, it is really quite meaningless to discuss the universality of a random variable in terms of its relation to a mean variable (i.e. the dissipation). A discussion of universality requires mean quantities. Otherwise it is impossible to test the statement. The authors have possibly relied on the qualification `at the instant considered’. But how would one establish which instant that was for various different flows?

They then go on:

`When we average these expressions, however, an important part will be played by the law of variation of $\varepsilon$ over times of the order of the periods of the large eddies (of size $\sim L$), and this law is different for different flows.’

This seems a rather dogmatic statement but it is clearly wrong for the broad (and important) class of stationary flows. In such flows, $\varepsilon$ does not vary with time.

The authors conclude (as we pointed out above) that: `The result of the averaging therefore cannot be universal.’ One has to make allowance for possible uncertainties arising in translation, but nevertheless, the latter part of their argument only makes any sort of sense if the dissipation rate is also instantaneous. Such an assumption appears to have been made by Kraichnan [2], who provided an interpretation which does not actually depend on the nature of the averaging process.

In fact Kraichnan worked with the energy spectrum, rather than the structure function, and interpreted Landau’s criticism of K41 as applying to \begin{equation}E(k) = \alpha\varepsilon^{2/3}k^{-5/3}.\label{6-K41}\end{equation}
His interpretation of Landau was that the prefactor $\alpha$ may not be a universal constant because the left-hand side of equation (\ref{6-K41}) is an average, while the right-hand side is the 2/3 power of an average.

Any average involves the taking of a limit. Suppose we consider a time average, then we have \begin{equation} E(k) = \lim_{T\rightarrow\infty}\frac{1}{T}\int^{T}_{0}\widehat{E}(k,t)dt, \end{equation} where as usual the `hat’ denotes an instantaneous value. Clearly the statement \begin{equation}E(k) = \mbox{a constant};\end{equation}or equally the statement, \begin{equation}E(k) = f\equiv\langle\hat{f}\rangle, \end{equation} for some suitable $f$, presents no problem. It is the `2/3’ power on the right-hand side of equation (\ref{6-K41}) which means that we are apparently equating the operation of taking a limit to the 2/3 power of taking a limit.

However, it has recently been shown [3] that this issue is resolved by noting that the pre-factor $\alpha$ itself involves an average over the phases of the system. It turns out that $\alpha$ depends on an ensemble average to the $-2/3$ power and this cancels the dependence on the $2/3$ power on the right hand side of (\ref{6-K41}).

[1] L. D. Landau and E. M. Lifshitz. Fluid Mechanics. Pergamon Press, London, English edition, 1959.
[2] R. H. Kraichnan. On Kolmogorov’s inertial-range theories. J. Fluid Mech., 62:305, 1974.
[3] David McComb. Scale-invariance and the inertial-range spectrum in three-dimensional stationary, isotropic turbulence. J. Phys. A: Math. Theor.,42:125501, 2009.




The Kolmogorov-Obukhov Spectrum.

The Kolmogorov-Obukhov Spectrum.

To lay a foundation for the present piece, we will first consider the joint Kolmogorov-Obukhov picture in more detail. For completeness, we should begin by mentioning that Kolmogorov also used the Karman-Howarth equation, which is the energy balance equation connecting the second- and third-order structure functions, to derive the so-called `$4/5$’ law for the third-order structure function. This procedure amounts to a de facto closure, as the time-derivative is neglected (an exact step in our present case, as we are restricting our attention to stationary turbulence) and the term involving the viscosity vanishes in the limit of infinite Reynolds number. This is often referred to as `the only exact result in turbulence theory’; but increasingly it is being referred to, perhaps more correctly, as `the only asymptotically exact result in turbulence’.

As part of this work, he also assumed that the skewness was constant; and this provided a relationship between the second- and third-order structure functions which recovered the `$2/3$’ law. It is interesting to note that Lundgren used the method of matched asymptotic expansions to obtain both the `$4/5$’ and `$2/3$’ laws, without having to make any assumption about the skewness. This work also offered a way of estimating the extent of the inertial range in real space.

However, the Karman-Howarth equation is local in the independent variables and therefore does not describe an energy cascade. In contrast, the Lin equation (which is just its Fourier transform) shows that all the degrees of freedom in turbulence are coupled together. It takes the form, for the energy spectrum $E(k, t)$, in the presence of an input spectrum $W(k)$: \begin{equation}\frac{\partial E(k,t)}{\partial t} = W(k)+ T(k,t)- 2\nu_{0}k^{2}E(k, t),\label{lin}\end{equation} where $\nu_{0}$ is the kinematic viscosity and the transfer spectrum $T(k,t)$ is given by\begin{eqnarray}T(k,t) & = & 2\pi k^{2}\int d^{3}j\int d^{3}l\,\delta(\mathbf{k}-\mathbf{j}-\mathbf{l})M_{\alpha\beta\gamma}(\mathbf{k})\nonumber \\ & \times & \left\{C_{\beta\gamma\alpha}(\mathbf{j},\mathbf{l},\mathbf{-k};t)-C_{\beta\gamma\alpha}(\mathbf{-j},\mathbf{-l},\mathbf{k};t)\right\},\end{eqnarray}with \begin{equation} M_{\alpha\beta\gamma}(\mathbf{k})=-\frac{i}{2}\left[k_{\beta}P_{\alpha\gamma}(\mathbf{k})+k_{\gamma}P_{\alpha\beta}(\mathbf{k})\right],\label{M}\end{equation} and the projector $P_{\alpha\beta}(\mathbf{k})$ is \begin{equation}P_{\alpha\beta}(\mathbf{k})=\delta_{\alpha\beta}-\frac{k_{\alpha}k_{\beta}}{|\mathbf{k}|^{2}}, \end{equation}where $\delta_{\alpha\beta}$ is the Kronecker delta, and the third-order moment $C_{\beta\gamma\alpha}$ here takes the specific form: \begin{equation} C_{\beta\gamma\alpha}(\mathbf{j},\mathbf{l},\mathbf{-k};t)=\langle u_{\beta}(\mathbf{j},t)u_{\gamma}(\mathbf{l},t)u_{\alpha}(\mathbf{-k},t) \rangle.\end{equation}

At this stage we also define the flux of energy $\Pi(\kappa,t)$ due to inertial transfer through the mode with wavenumber $k=\kappa$. This is given by: \begin{equation}\Pi(\kappa,t) = \int_{\kappa}^{\infty}\,dk\,T(k,t).\end{equation}
Further discussion and details may be found in Section 4.2 of the book [1].
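It is perhaps worth recalling explicitly (a standard property, stated here because it is used below) that the nonlinear term is conservative, so that the transfer spectrum integrates to zero: \begin{equation}\int_0^\infty \, T(k,t)\, dk = 0.\end{equation} It follows that $\Pi(0,t)=0$ and that $\Pi(\kappa,t) \rightarrow 0$ as $\kappa \rightarrow \infty$.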
We now have a rather simple picture. In formulating our problem, the shape of the input spectrum should be chosen to be peaked near the origin, such that higher wavenumbers are driven by inertial transfer, with energy being dissipated locally by the viscosity. Then we can define the rate at which stirring forces do work on the system by: \begin{equation} \int_0^\infty \, W(k)\, dk = \varepsilon_W. \end{equation}

Obukhov’s idea of the constant inertial flux can be expressed as follows. As the Reynolds number is increased, the transfer rate, as given by equation (6), will also increase and must reach a maximum value, which in turn must be equal to the viscous dissipation. Thus we introduce the symbol $\varepsilon_T$ for the maximum inertial flux as: \begin{equation}\varepsilon_T = \Pi_{\mbox{max}},\end{equation} and for stationary turbulence at sufficiently high Reynolds number, we have the limiting condition: \begin{equation}\varepsilon = \varepsilon_T = \varepsilon_W.\end{equation}
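A minimal numerical sketch may help to fix ideas; the model spectra used below are purely hypothetical illustrative forms, and are not taken from the text or from reference [1]. In a stationary state the Lin equation fixes the transfer spectrum as $T(k) = 2\nu_0 k^2 E(k) - W(k)$, so for any assumed pair $E(k)$ and $W(k)$, normalised so that dissipation balances input, we can evaluate $\Pi(\kappa)$ by quadrature and check that its maximum approaches $\varepsilon_W$:

```python
import numpy as np

def integral(y, x):
    """Trapezoidal integral of y over the grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

nu0 = 1.0e-4                      # kinematic viscosity (hypothetical value)
k = np.logspace(-2, 4, 4000)      # wavenumber grid

# Hypothetical input spectrum W(k), peaked near the origin.
W = k**4 * np.exp(-(k / 0.5)**2)
eps_W = integral(W, k)            # rate of doing work by the stirring forces

# Hypothetical model energy spectrum: ~k^4 at low k, a -5/3 range, exponential cut-off.
E = k**(-5.0/3.0) * (k / np.sqrt(k**2 + 0.25))**(17.0/3.0) * np.exp(-k / 2000.0)
E *= eps_W / integral(2.0 * nu0 * k**2 * E, k)   # normalise: dissipation = input

# Stationary Lin equation: 0 = W(k) + T(k) - 2*nu0*k^2*E(k), so T(k) is fixed.
T = 2.0 * nu0 * k**2 * E - W

# Flux through wavenumber kappa: Pi(kappa) = integral of T(k) from kappa to infinity.
cumulative = np.concatenate(([0.0], np.cumsum(0.5 * (T[1:] + T[:-1]) * np.diff(k))))
Pi = integral(T, k) - cumulative

print("eps_W         =", eps_W)
print("maximum of Pi =", Pi.max())  # close to eps_W: input and dissipation well separated
```

In this sketch the plateau in $\Pi(\kappa)$ arises because the input is concentrated at low wavenumbers while the dissipation integrand $2\nu_0 k^2 E(k)$ is concentrated at high wavenumbers; as that separation is increased, which is the high-Reynolds-number limit, the maximum of $\Pi$ approaches $\varepsilon_W$, which is just the limiting condition stated above.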

Thus the loose idea of a local cascade involving eddies in real space is replaced by the precisely formulated concept of scale invariance of the inertial flux in wavenumber space. As is well known, this picture leads directly to the $-5/3$ energy spectrum in the limit of large Reynolds numbers.

[1] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.




Why do we call it ‘The Kolmogorov Spectrum’?

Why do we call it ‘The Kolmogorov Spectrum’?
The Kolmogorov $-5/3$ spectrum continues to be the subject of contentious debate. Despite its great utility in applications and its overwhelming confirmation by experiments, it is still plagued by the idea that it is subject to intermittency corrections. From a fundamental point of view this is difficult to understand, because Kolmogorov’s theory (K41a) was expressed in terms of the mean dissipation, which can hardly be affected by intermittency. Another problem is that Kolmogorov actually derived the $2/3$ law for the structure function. Of course one can derive the spectrum from this result by Fourier transformation; but this is not a completely trivial process and we will discuss it in a future post.

The trouble seems to be that Kolmogorov’s theory, despite its great pioneering importance, was an incomplete and inconsistent theory. It was formulated in real space; where, although the energy transfer process can be loosely visualised from Richardson’s idea of a cascade, the concept of such a cascade is not mathematically well defined. Also, having introduced the inertial range of scales, where the viscosity may be neglected, he characterised this range by the viscous dissipation rate, which is not only inconsistent but incorrect. An additional complication, which undoubtedly plays a part, is that his theory was applied to turbulence in general. The basic idea was that the largest scales would be affected by the nature of the flow, but a stepwise cascade would result in smaller eddies being universal in some sense. That is, they would have much the same statistical properties, despite the different conditions of formation. In order to avoid uncertainties that can arise from this rather general idea, we will restrict our attention to stationary, isotropic turbulence here.

To make a more physical picture we have to follow Obukhov and work in $k$ space with the Fourier transform $\mathbf{u}(\mathbf{k},t)$ of the velocity field $\mathbf{u}(\mathbf{x},t)$. This was introduced by Taylor in order to allow the problem of isotropic turbulence to be formulated as one of statistical mechanics, with the Fourier components acting as the degrees of freedom. In this way, Obukhov identified the conservative, inertial flux of energy through the modes as being the key quantity determining the energy spectrum in the inertial range. It follows that, with the input and dissipation being negligible, the flux must be constant (i.e. independent of wavenumber) in the inertial range, with the extent of the inertial range increasing as the Reynolds number is increased; this was later recognized by Onsager in 1945. Later still, this property became widely known and for many years has been referred to by theoretical physicists as scale invariance. It should be emphasised that the inertial flux is an average quantity, as indeed is the energy spectrum, and any intermittency effects present, which are characteristics of the instantaneous velocity field, will inevitably be averaged out. Of course, in stationary flows the inertial transfer rate is the same as the dissipation rate, but in non-stationary flows it is not.

This is not intended to minimise the importance of Kolmogorov’s pioneering work. It is merely that we would argue that one also needs to consider Obukhov’s theory (also, in 1941), with possibly also a later contribution from Onsager (in 1945), in order to have a complete theoretical picture. In effect this seems to have been the view of the turbulence community from the late 1940s onwards. Discussion of turbulent energy transfer and dissipation in isotropic turbulence was almost entirely in terms of the spectral picture. It was not until the extensive measurements of higher-order structure functions by Anselmet et al. (in 1984) that the real-space picture became of interest, along with the concept of anomalous exponents.

I would argue that we should go back to the term ‘Kolmogorov-Obukhov spectrum’, as indeed was quite often done in earlier years. We will develop this idea in the next post. All source references for this piece will be found in the book [1].

[1] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.




The different roles of the Gaussian pdf in Renormalized Perturbation Theory (RPT) and Self-Consistent Field (SCF) theory.

The different roles of the Gaussian pdf in Renormalized Perturbation Theory (RPT) and Self-Consistent Field (SCF) theory.

In last week’s blog, I discussed the Kraichnan and Wyld approaches to the turbulence closure problem. These field-theoretic approaches are examples of RPTs, while the pioneering theory of Edwards [1] is a self-consistent field theory. An interesting difference between them is the different ways in which they make use of a Gaussian (or normal) base distribution. Any theory is going to begin with a Gaussian distribution, because it is tractable. We know how to express all its moments in terms of the second-order moment. Of course, we also know that it predicts that odd order moments are zero, so some trick must be employed to get it to tell us anything about turbulence.

As we did last week, we begin with the Fourier-transformed solenoidal Navier-Stokes equation (NSE) written in an extremely compressed notation as: \begin{equation} \mathcal{L}_{0,k}u_k = \lambda M_{0,k}u_ju_{k-j},\end{equation} where the linear operator $\mathcal{L}_{0,k} = \partial /\partial t + \nu_0 k^2$, $\nu_0$ is the kinematic viscosity of the fluid, $M_{0,k}$ is the inertial transfer operator which contains the eliminated pressure term, and $\lambda$ is a book-keeping parameter which is used to keep track of terms during an iterative solution.

Now let us consider the closure problem. We multiply equation (1) through by $u_{-k}$ and average, to obtain: \begin{equation} \mathcal{L}_{0,k}\langle u_k u_{-k}\rangle= \lambda M_{0,k}\langle u_ju_{k-j}u_{-k}\rangle,\end{equation} where the angle brackets denote an average. Evidently, if we evaluate the averages here with a Gaussian pdf, the triple moment vanishes (trivially, by symmetry).

Then we set up a perturbation-type approach by expanding the velocity field in powers of $\lambda$ as: \begin{equation} u_k = u^{(0)}_k + \lambda u^{(1)}_k + \lambda^2 u^{(2)}_k + \lambda^3 u^{(3)}_k + \dots, \end{equation} where $u^{(0)}_k$ is a velocity field with a Gaussian distribution. The general procedure has two steps. First, substitute the expansion (3) into the right hand side of equation (1) and calculate the coefficients iteratively in terms of the $u^{(0)}_k$. Secondly, substitute the explicit form of the expansion, now entirely expressed in terms of the $u^{(0)}$ into the right hand side of equation (2), and evaluate the averages to all orders, using the rules for a Gaussian distribution. If we denote the inverse of the linear operator by $\mathcal{L}^{-1}_{0,k} \equiv R_{0,k}$, and the Gaussian zero-order covariance by $\langle u_k u_{-k}\rangle=C_{0,k}$, then the triple moment on the right hand of equation (2) can be written to all orders in products and convolutions of $R_{0,k}$ and $C_{0,k}$.
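To make the structure of this procedure explicit in the compressed notation (a schematic sketch only; the full expressions, with all tensor indices and wavevector convolutions restored, are given in the references), the first iterate is \begin{equation} u^{(1)}_k = R_{0,k}M_{0,k}u^{(0)}_ju^{(0)}_{k-j},\end{equation} so that the lowest-order non-vanishing contribution to the triple moment on the right hand side of equation (2) arises at order $\lambda$ and has the schematic form \begin{equation}\langle u_ju_{k-j}u_{-k}\rangle \simeq \lambda\langle u^{(0)}_ju^{(0)}_{k-j}u^{(1)}_{-k}\rangle + \dots \sim \lambda\, R_{0,k}M_{0,k}C_{0,j}C_{0,k-j},\end{equation} where the Gaussian rule for factorizing the fourth-order moment into products of pairs has been used, and the dots stand for the analogous terms in which $u^{(1)}$ replaces one of the other two factors.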

Kraichnan introduced renormalization in this problem by making the replacements: \[R_{0,k}\rightarrow R_{k} \quad \mbox{and} \quad C_{0,k} \rightarrow C_k,\] to all orders in the perturbation expansion of the triple-moment in (2). This step involves partial summations of the perturbation expansion in different classes of terms.
At this point it is worth noting that what happens here is rather like in a direct-numerical simulation of the NSE. There we begin with a Gaussian initial field. As time goes on, the nonlinear term induces couplings between modes and the system moves to a field which is representative of Navier-Stokes turbulence. Of course the initial distribution is constrained in this case to give the total energy that we require in the simulation. Note that the zero-order field in perturbation theory is in principle present at all times and is not constrained in this way.

In contrast, what Edwards introduced was a perturbation expansion of the probability distribution function of the velocity field, not of the velocity field itself. For this reason, he did not work directly with the NSE but instead used it to derive a Liouville equation for the probability distribution $P[u,t]$. It should be noted that the Liouville equation, although containing the nonlinearity of the velocity field, is nevertheless a linear equation for the pdf. Edwards then expanded $P[u,t]$, the exact pdf, as follows: \begin{equation}P[u,t] = P^{0}[u] + \epsilon P^{1}[u,t] + \epsilon^2 P^{2}[u,t] + \mathcal{O}(\epsilon^3),\end{equation} where $P^{0}[u]$ is a Gaussian distribution. The significant step here is to demand that the zero-order pdf gives the same result for the second-order moment as the exact pdf. That is, \begin{equation}\int \, P^{(0)}[u] \, u_ku_{-k} \mathcal{D}u = \int \, P[u,t] \, u_ku_{-k} \mathcal{D}u \equiv C_k. \end{equation}
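For the sake of concreteness (a minimal sketch, suppressing tensor indices and using a discrete set of modes, as in a finite box), the zero-order distribution can be written in the Gaussian form \begin{equation}P^{0}[u] = \mathcal{N}\exp\left(-\sum_k\frac{u_ku_{-k}}{2C_k}\right),\end{equation} where $\mathcal{N}$ is a normalization constant. The point of the self-consistency requirement is that the covariance $C_k$ appearing in the exponent is not a free parameter: it is fixed by demanding that this Gaussian reproduce the exact second-order moment, as in the preceding equation.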

This is in fact the basis of the self-consistency requirement in the theory. For further details the interested reader should consult either of the books referenced below as [1] and [2]. The Edwards method [3] does not rely on partially summing infinite perturbation series, nor is it like the functional formalisms which are equivalent to such summation procedures. Instead it relies on the fact that the measured pdf in turbulence is not very different from a Gaussian. In this respect, it is encouraging that it gives similar results to the RPTs. This resemblance is heightened in the recent derivation of the LET theory as a two-time SCF [4], thus extending the Edwards method.

[1] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press Oxford, 1973.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[3] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[4] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.




What if anything is wrong with Wyld’s (1962) turbulence formulation?

What if anything is wrong with Wyld’s (1962) turbulence formulation?

When I began my PhD in 1966, I found Wyld’s paper [1] to be one of the easiest to understand. However, one feature of the formalism struck me as odd or incorrect, so I didn’t spend any more time on it. But I had found it very useful in helping me to understand how a theory like Kraichnan’s DIA could work. In short, I thought that it had pedagogic value. Some years later, when I wrote up my first attempt to derive a two-time version of the LET theory [2], I made use of a variant of Wyld’s formalism, albeit with his procedural error corrected. I was surprised by the hostility of the referees towards Wyld’s work, which they said had been subject to later criticism. As is so often the case with referees in this field, they accepted the criticism as utterly damning without, apparently, any critical thought or nuanced reaction of their own.

My aim in this blog is to explain what I noticed about Wyld’s formalism all those years ago, and I shall give only as much of his method as necessary to make this a brief and understandable point. We begin with the Fourier-transformed solenoidal Navier-Stokes equation, written in an extremely compressed notation as: \begin{equation} \mathcal{L}_{0,k}u_k = \lambda M_{0,k}u_ju_{k-j},\end{equation} where the linear operator $\mathcal{L}_{0,k} = \partial /\partial t + \nu_0 k^2$, $\nu_0$ is the kinematic viscosity of the fluid, $M_{0,k}$ is the inertial transfer operator which contains the eliminated pressure term, and $\lambda$ is a book-keeping parameter which is used to keep track of terms during an iterative solution. Properly detailed versions of these equations may be found in either [3] or [4], but these will be sufficient for my present purposes.

Now let us begin with the closure problem. We multiply equation (1) through by $u_{-k}$ and average, to obtain: \begin{equation} \mathcal{L}_{0,k}\langle u_k u_{-k}\rangle= \lambda M_{0,k}\langle u_ju_{k-j}u_{-k}\rangle,\end{equation} where the angle brackets denote an average. Then we set up a perturbation-type approach by expanding the velocity field in powers of $\lambda$ as: \begin{equation} u_k = u^{(0)}_k + \lambda u^{(1)}_k + \lambda^2 u^{(2)}_k + \lambda^3 u^{(3)}_k + \dots, \end{equation} where $u^{(0)}_k$ is a velocity field with a Gaussian distribution.

The general procedure then has two steps. First, substitute the expansion (3) into the right hand side of equation (1) and calculate the coefficients iteratively in terms of the $u^{(0)}_k$. Secondly, substitute the explicit form of the expansion, now entirely expressed in terms of the $u^{(0)}$ into the right hand side of equation (2), and evaluate the averages to all orders, using the rules for a Gaussian distribution. If we denote the inverse of the linear operator by $\mathcal{L}^{-1}_{0,k} \equiv R_{0,k}$, and the Gaussian zero-order covariance by $\langle u_k u_{-k}\rangle=C_{0,k}$, then the triple moment on the right hand of equation (2) can be written to all orders in products and convolutions of $R_{0,k}$ and $C_{0,k}$.

Wyld did not follow this procedure exactly. Instead, he inverted the linear operator on the left hand side of (2), and wrote an expression for the exact covariance $C_k$ as: \begin{equation} \langle u_k u_{-k}\rangle \equiv C_k= R_{0,k}\lambda M_{0,k}\langle u_ju_{k-j}u_{-k}\rangle.\end{equation} Of course, (4) is mathematically equivalent to (2), so does this matter? Well, when we consider renormalization, it does!

Kraichnan introduced renormalization in this problem by making the replacements: \[R_{0,k}\rightarrow R_{k} \quad \mbox{and} \quad C_{0,k} \rightarrow C_k\] to all orders in the perturbation expansion of the triple-moment in (2). When Wyld used diagram methods to show how such a renormalization could come about, by summing subsets of terms to all orders, he in effect also renormalized both the explicit operators $R_{0,k}$ and $M_{0,k}$ on the right hand side of (4). The first of these erroneous steps created the famous double-counting problem, while the second raised questions about vertex renormalization. A full account of this topic and the introduction of `improved Lee-Wyld theory’ can be found in reference [5].

Lastly, for the sake of completeness, I should mention that reference [2] was superseded in 2017 by reference [6], as the derivation of the two-time LET theory.

[1] H. W. Wyld Jr. Formulation of the theory of turbulence in an incompressible fluid. Ann. Phys., 14:143, 1961.
[2] W. D. McComb. A theory of time dependent, isotropic turbulence. J.Phys.A:Math.Gen., 11(3):613, 1978.
[3] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[4] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[5] A. Berera, M. Salewski, and W. D. McComb. Eulerian Field-Theoretic Closure Formalisms for Fluid Turbulence. Phys. Rev. E, 87:013007-1-25, 2013.
[6] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.




Is turbulence research still in its infancy?

Is turbulence research still in its infancy?
Recently I came across the article by Lumley and Yaglom which is cited below as [1]. I think it is new to me but quite possibly I will find it lurking in my filing system when at last I am able to visit my university office again. It is always good to get something gossipy and opinionated to read about turbulence as a welcome relief from all the worthy but demanding research papers! In any case, their Abstract is well worth quoting here:
This field does not appear to have a pyramidal structure, like the best of physics. We have very few great hypotheses. Most of our experiments are exploratory experiments. What does this mean?
They go on to answer their own question: ‘We believe it means that, even after 100 years, turbulence studies are still in their infancy.’

I’m not quite sure what is meant by the phrase ‘pyramidal structure’, but overall the general sense is clear; and really quite persuasive. Indeed, even after a further two decades, which have been marked by an explosive growth in research, this depressing view is still to a considerable extent justified. However, I think that it might be of interest to consider in what ways it is justified and in which ways the comparison with physics may be unfair.

There are of course the unresolved issues of fundamental turbulence theory, but what is more compelling in my view, is the bizarre and muddled nature of some key aspects of the subject. To begin with, there is the Kolmogorov spectrum. Nowadays it is probably well known that Kolmogorov worked in real space and derived the $2/3$ law, from which the $-5/3$ spectrum of course follows by Fourier transformation. Yet beginning with Batchelor’s monograph [2], and for decades thereafter, discussion of the subject was entirely in terms of wavenumber space. A particularly egregious example arises in the book by Hinze [3]. After acknowledging [2], he writes: ‘These considerations have led Kolmogoroff (sic) to make the following hypothesis.’ He then goes on to state the hypothesis (top of page 184 in the first edition) and expresses it in terms of wavenumber. As his statement of the hypothesis is in inverted commas, I assumed that it was a quotation from Kolmogorov’s paper [4], but Kolmogorov nowhere uses the word ‘wavenumber’ in that paper!

This is not in itself a serious matter. But it is symptomatic, and the fact remains that various commentators rely on a real-space treatment to draw conclusions about spectra. For me, the truly astonishing fact is that I have been unable to find an exegesis of Kolmogorov’s original paper anywhere. All treatments are brief and superficial, in contrast to his later paper [5] in which he derived the $4/5$ law. This of course has been widely reviewed and discussed in detail. Which is perhaps not unconnected with the fact that it is very much easier to understand!

There are other schools of thought that one can point to, where the real problem is a failure to realise that the ideas being put forward are unphysical. For instance, the uncritical adoption of Onsager’s pioneering work in which the viscosity is put equal to zero instead of taking the limit of zero viscosity. The result is the unphysical idea of dissipation taking place in the absence of viscosity, which of course it cannot. Absorption of energy by an infinite wavenumber space is not the same as viscous dissipation. At best it might be described as pseudo dissipation. Further discussion of this topic can be found in reference [6].

To round this off, there is Kolmogorov’s 1962 paper, presenting what he described as ‘a refinement of previous hypotheses’. In fact, as is increasingly recognised, it is nothing of the sort. It is instead the wholesale abandonment of previous hypotheses. But I have said that elsewhere. What concerns me here is that the theory is manifestly unphysical. The energy spectrum is (in thermodynamic terms) an intensive quantity. Thus the factor $L^{\mu}$, which is now incorporated into the power-law form, violates the requirement that the spectrum should not depend on the size of the system. In the limit of infinite system size, the energy spectrum must now go to zero if the exponent is negative and to infinity if it is positive. Curiously, no one seems to have commented on this.

Lumley and Yaglom were referring to the problem of achieving a fundamental understanding of turbulence and it is perhaps worth keeping in mind that the great success of physics is based on the happy accident of linearity. On purely taxonomic grounds, turbulence belongs to the class of many-body problems with strong coupling. These are just as intractable in nuclear physics, particle physics, and condensed matter physics as in fluid turbulence. The difference is that these activities are generally pursued in a more scholarly way, with a more collegial atmosphere among the participants. As a previous generation used to say: verb. sap!

[1] J. L. Lumley and A. M. Yaglom. A Century of Turbulence. Flow, Turbulence and Combustion, 66:241, 2001.
[2] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 1st edition, 1953.
[3] J. O. Hinze. Turbulence. McGraw-Hill, New York, 1st edition, 1959.
[4] A. N. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. C. R. Acad. Sci. URSS, 30:301, 1941.
[5] A. N. Kolmogorov. Dissipation of energy in locally isotropic turbulence. C. R. Acad. Sci. URSS, 32:16, 1941.
[6] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614v2[physics.flu-dyn], 2021.




Culture wars: applied scientists versus natural scientists.

Culture wars: applied scientists versus natural scientists.

In my early years at Edinburgh, I attended a seminar on polymer drag reduction; and, as I was walking back with a small group, we were discussing what we had just learned. In response to a comment made by one member of the group, I observed that it made the problem seem horribly complicated. The others nodded in agreement; with the exception of an American who was visiting the Chemical Engineering department. He turned on me and said reprovingly, ‘You mean that it’s beautifully complicated.’ The implication was very much that this problem was a foe worthy of his intellectual steel, so to speak. Well, I wonder how he got on with that?

It struck me at the time as an indication of a different culture. Physicists and mathematicians seem to see beauty in simplicity, even to the point of regarding it as evidence in favour of a particular theory. Do applied scientists and engineers really see beauty in complication? Even engineering structures as different as a bridge, a motor car or a ship are often held to conform to the old engineering adage: if it looks right, it is right! That surely is an appreciation of simplicity of design, is it not?

Nevertheless, the idea that there are different cultures came to me early on in my career. I can remember when I started out in the nuclear power industry, a colleague who was a chemical engineer (this is just coincidence: I haven’t got it in for chemical engineers!) said to me, ‘I don’t see any point in physics as a discipline. What’s the use of it?’ So I pointed out that we both owed our employment to physics and he had to reluctantly concede that perhaps nuclear physics had some point after all! That was in the early 1960s, and since then developments in condensed matter physics have, through the agency of materials science, chemistry and microelectronics, transformed the world that we live in.

Over the years I have heard many comments like that made by engineers about physics but I cannot recall any physicist making a similar comment about engineering. Generally, the attitude that I have picked up is a sort of respectful assumption that the engineer has other skills which generally produce impressive results. Perhaps the difference here is that the physicists are clear about their own ignorance of the details of engineering science whereas engineers tend to assume that what they don’t know doesn’t exist?

Shortly after my first book on turbulence was published [1], I received a letter (yes, not an email!) from the late Stan Corrsin, who commented on it and also sent a copy of a review that he had written of David Leslie’s earlier book [2]. I found his review very interesting because it addressed the problem that seems to be ignored by most people: that when theoretical physicists start tackling turbulence the results should be of interest to engineers but may in fact be unintelligible to them. This is not a matter of not being able to follow the mathematics so much as ‘not sharing assumptions about what is natural or appropriate to do in any given circumstance’. In other words, what I am trying to describe by the word ‘culture’. This is about all I can remember from the review. I may still have it in my office, but that has been off limits to me for more than a year now, and I have been unable to find the review online. One other phrase that I do recall, is that Corrsin said, in effect, that Leslie’s book did help to bridge this gap, but that ‘it was no Rosetta stone’.

Sometimes I think that it is impossible to provide a Rosetta stone for this purpose and it is only when theoretical physicists become tired of staring at their own navels, that we will see a flowering of theory in turbulence and other practical problems. That will happen when they become bored with strings, multiverses, dark matter, quantum gravity and similar fantasy physics.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.




‘A little learning is a dangerous thing!’ (Alexander Pope, 1688-1744)

‘A little learning is a dangerous thing!’ (Alexander Pope, 1688-1744)
I have written about the problems posed by the different cultures to be found in the turbulence community; and in particular about the difficulties faced by some referees when confronted by Fourier methods. My interest in the matter is of course the difficulties faced by the author who dares to use Fourier transforms when he encounters such an individual. In my post on 20 April 2020, I told of the referee who described Fourier analysis as ‘the usual wavenumber murder’. Thinking of this brought back a rather strange incident from the mid-1970s, and it occurs to me that it really underlines my point.

In those days, we used to get visitors from the United States, who would come for a day and ask various people about their work. I seem to recall that they were sponsored by the Office for Naval Research and, as we benefited from a huge flow of NASA reports, stemming from their various programmes, it seemed only fair to send something back.

One particular visitor was a fluid dynamicist who worked on the lubrication of journal bearings. He was known to my colleagues in this area, who told me that he was eminent in that field. So, once he was settled in my office and we had got over the usual preliminaries, he asked me to explain my theoretical research to him. I went to the blackboard and happily began explaining about eliminating the pressure from the Navier-Stokes equation and then how to Fourier transform it.

I hadn’t got very far, when he held up his hand and said, ‘Stop right there! I wouldn’t use Fourier transforms with a nonlinear problem like turbulence.’

I was a little bit taken aback, but my main reaction was that this was a chance for me to learn something, because it was at that time that I was receiving reports from JFM referees which were hostile to the use of Fourier methods.
I didn’t waste time in asking him why. I just asked what he would use instead. His reply astonished me. ‘I would use the Green’s function method.’

In the circumstances I saw no point in continuing and changed the topic to talk about my other work. He seemed quite happy about that. Perhaps it was just a cunning plan to avoid listening to some boring mathematics for an hour or so?
At this stage it will be clear to many people why I did not continue the discussion. But for those who don’t know, there were two points:
A. My visitor was wrong at the most fundamental level. Green’s functions are only applicable to linear problems. For instance, we can eliminate the pressure field from the NSE, because it satisfies a Poisson equation, which is of course linear.
B. As a sort of corollary of awfulness, a standard method of evaluating Green’s functions is by the use of Fourier transforms!
These matters are discussed in detail in Appendix D of reference [1] below.
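For completeness, point A can be illustrated with the standard example already mentioned (a sketch of a well-known result, not a new calculation): taking the divergence of the NSE and using incompressibility gives a Poisson equation for the pressure, \begin{equation}\nabla^2 p = -\rho\,\frac{\partial^2(u_\alpha u_\beta)}{\partial x_\alpha\partial x_\beta},\end{equation} which, being linear in $p$, can be solved using the Green's function of the Laplacian to give \begin{equation}p(\mathbf{x}) = \frac{\rho}{4\pi}\int d^3x'\,\frac{1}{|\mathbf{x}-\mathbf{x}'|}\,\frac{\partial^2(u_\alpha u_\beta)}{\partial x'_\alpha\partial x'_\beta}.\end{equation} And, as in point B, the Green's function $-1/4\pi|\mathbf{x}-\mathbf{x}'|$ is itself most conveniently obtained by Fourier transforming the Poisson equation.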

This line of Alexander Pope’s has passed into the language as a caution against being too authoritative when one is not really an expert. The question of who does more harm, someone who thinks he knows all about Fourier methods, or someone who is frightened of them and behaves in a childish way, is really a moot point.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.




Intermittency, intermittency, intermittency!

Intermittency, intermittency, intermittency!
It is well known that those who are concerned with the sale of property say that the three factors determining the value of a house are: location, location, location. In fact I believe that there is a television programme with that as a title. This trope has passed into the general consciousness; so much so, that a recent prime minister declared his principal objectives in government to be: education, education, education. (Incidentally, I wonder how that worked out?)

My use of the title here is not to suggest that I think that intermittency is the dominant feature of the turbulent velocity field, or indeed of any particular importance, so much as to draw attention to the fact that there are three types of turbulent intermittency. Of course in complicated situations such as in turbomachinery, an anemometer signal can be interrupted by the passage of a rotor, say. That would be a form of intermittency. However, by intermittency, what I have in mind is something intrinsic to the turbulent field and not caused by some external behaviour. I believe that is what most people would mean by it.
For convenience, we may list these different types, as follows:

1. Free surface intermittency. This form of intermittency occurs in flows like wakes and unconfined jets. It arises from the irregular nature of the boundary of the flow. An anemometer positioned at the edge of the flow will sometimes register a turbulent signal and sometimes not. There is also a dynamical problem posed by the interaction between the flow of the wake or jet and the ambient fluid, but that is not something that we will pursue here.

2. The bursting process in pipe flow. This was discovered in the 1960s, when it was found that a short-sample-time autocorrelation could show a near-sinusoidal variation with time, corresponding to a sequence of events in which turbulent energy was generated locally in both space and time. Measurement of the bursting period was helpful in understanding the mechanism of drag reduction by polymer additives.

3. Internal intermittency. This is the apparent inability of the eddying motions of turbulence to fill space, even in isotropic turbulence. Originally it was referred to as dissipation intermittency, and then later on as fine-structure intermittency. In recent years it has been established, by means of high-Reynolds-number simulations, that this inability to fill space is in fact present at all length scales. Thus the growing modern practice is to describe it as internal, which distinguishes it from the two types of intermittency above.

An account of all three types may be found in Section 3.2 of the book [1], although at that time I used the term fine-structure intermittency, in line with other writers of the period. I should also point out that I would no longer give the same prominence to the instantaneous dissipation. I am now clear that the failure to distinguish between this and its mean value, combined with the failure to recognise that the significant quantity in determining the inertial-range spectrum/structure-function is the inertial transfer rate, underpins much of the confusion over the $k^{-5/3}$ (or $r^{2/3}$) result for the inertial range. I have written quite a lot about this matter in recent years and expect to write a great deal more.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.




Does the failure to use spectral methods harm one’s understanding of turbulence?

Does the failure to use spectral methods harm one’s understanding of turbulence?

Vacation post No. 3: I will be out of the virtual office until Monday 19 April.

As described in the previous post, traditional methods of visualising turbulence involve vaguely specified and ill-defined eddying motions whereas Fourier methods lead to a well-defined problem in many-body physics. This seems to be a perfectly straightforward situation; and one might wonder: in what way do fluid dynamicists feel that the Fourier wavenumber space representation is obscuring the physics? Given that they regard a vortex-based picture, however imprecise, as `the physics’, I suspect (a suspicion based on many discussions over the years!) that the problem arises when they try to reconcile the two formulations. Of course, in an intuitive way, one may associate large wavenumbers with small spatial separations. That is, `high k’ corresponds to `small r’ and vice versa. But those attempts, which one sees from time to time, to interpret the $k$-space picture in terms of arbitrarily prescribed vortex motions in real space, seem positively designed to cause confusion. It is important to bear in mind that the Fourier representation reformulates the problem, and you should study it on its own terms, even if you long for vortices!

Does this matter? I think it does. For example, I can point to the strange situation in which (it seems) most fluid dynamicists believe that there should be intermittency corrections to the exponent of Kolmogorov’s $k^{-5/3}$ energy spectrum, whereas it seems that most theoretical physicists (who work in wavenumber space) do not. The hidden point here is that Kolmogorov worked in real space, and derived the $r^{2/3}$ form of the second-order structure function, for an intermediate range of values of $r$ where the form of the input term and the viscous dissipation could both be neglected, thus introducing the inertial range. His theory was inconsistent, in that he then considered the structure function to depend on the dissipation rate, even though this had been excluded from the inertial range. It is this step which gives some credibility to the possibility of intermittency effects, particularly as there may be some doubt about whether the dissipation rate in the theory is the average value or not.

The surprising thing is that, at much the same time, Obukhov worked in $k$-space, and identified the conservative, inertial flux of energy through the modes as being the key quantity determining the energy spectrum in the inertial range. It follows that, with the production and dissipation being negligible in this range of wavenumbers, the flux must be constant (i.e. independent of wavenumber) in the inertial range. This was later recognized by Onsager. Later still, this property became widely known and for many years has been referred to by theoretical physicists as scale invariance. Scale invariance is a general mathematical property and can refer to various things in turbulence research. It simply means that something which might depend on an independent variable, in either real space or wavenumber space, is in fact constant. It should be emphasised that the inertial flux is an average quantity, as indeed is the energy spectrum, and any intermittency must necessarily be averaged out. In fact a modern analysis leading to the $k^{-5/3}$ spectrum would start from the Lin equation. Therefore it is hard to see how internal intermittency, which incidentally is present at all scales, can affect this derivation.




Does the use of spectral methods obscure the physics of turbulence?

Does the use of spectral methods obscure the physics of turbulence?

Vacation post No. 2: I will be out of the virtual office until Monday 19 April.

Recently, someone commenting on one of my early blogs about spectral methods (see the post on 20 February 2020) mentioned that a certain person had said `spectral methods obscure the physics of turbulence’. They asked for my opinion on this statement and I gave a fairly robust and concise reply. However, on reflection, I thought that a more nuanced response might be helpful. As the vast majority of turbulence researchers work in real space, it seems probable that many would share that sentiment, or something very like it.

In fact, I will begin by challenging the second part of the statement. What precisely is meant by the phrase `the physics of turbulence’? In order to answer this question, let us begin by examining the concept of the turbulence problem in both real space and Fourier wavenumber space. Note that in what follows, all dependent variables are understood to be per unit mass of fluid, and we restrict our attention to incompressible fluid motion.

In real space, we have the velocity field $\mathbf{u}(\mathbf{x},t)$, which satisfies the Navier-Stokes equation (NSE). This equation expresses conservation of momentum and is local in $x$. It is also nonlinear and is therefore, in general, insoluble. From it we can derive the Karman-Howarth equation (KHE), which expresses conservation of energy and relates the second-order moment to the third-order moment. This is also local in $x$, and is also insoluble, as it embodies the statistical closure problem of turbulence. If we wish, we can change from moments to structure functions, but the KHE remains local in $r$, the distance between the two measuring points. This formulation gives no hint of a turbulence cascade as it is entirely local in nature.

The situation is radically different in Fourier wavenumber ($k$) space. Here we have a velocity field $\mathbf{u}(\mathbf{k},t)$ which now satisfies the NSE in $k$-space. This is still insoluble, and when we derive the Lin equation from it (or by Fourier transformation of the KHE), this again expresses conservation of energy, and is again subject to the closure problem. However, there is a major difference. As pointed out by Batchelor [1], Taylor introduced the Fourier representation in order to turn turbulence into a problem in statistical physics, with the $\mathbf{u}(\mathbf{k},t)$ playing the part of the degrees of freedom. The nonlinear term takes the form of a convolution in wavenumber space and this couples each degree of freedom to every other. In the absence of viscosity, this process leads to equipartition, rather as in an ideal gas. However, the viscous term is symmetry-breaking, with its factor of $k^2$ skewing its effect to high wavenumbers, so that energy must flow through the modes of the system from low wavenumbers to high. We may complete the picture by injecting energy at low wavenumbers. The result is a physical system which has been discussed in many papers and books and has been studied by theoretical physicists over the decades since the 1950s. In short, Fourier transformation reveals a physical system which is not apparent from the equations of motion in real space.
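As an aside (this is the standard `absolute equilibrium’ result, quoted here for illustration rather than derived in the text): if the inviscid system is truncated at some maximum wavenumber $k_{max}$, equipartition means the same mean energy in each Fourier mode, and since the number of modes in a spherical shell of radius $k$ grows as $k^2$, the equilibrium energy spectrum (neglecting helicity) is \begin{equation}E(k)\propto k^2, \qquad 0\leq k \leq k_{max}.\end{equation} This is quite unlike any observed turbulence spectrum, which underlines the point that it is the symmetry-breaking viscous term, together with low-wavenumber injection, that produces the cascade.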

What, then, do those working in real space mean by the physics of turbulence? Presumably they rely on ideas about vortex motion, as established by flow visualisation; and here the difficulty lies. Richardson put forward the concept of a cascade in terms of `whirls’ (not, incidentally, whorls! [2]); and certainly this has gripped the imagination of generations of workers in the field. In a general, qualitative way it is easy to understand; and one can envisage the transfer of eddying motions from large scales to small scales. But when it comes to a quantitative point of view, the resulting picture is very vague and imprecise. Of course attempts have been made to make it more precise and researchers have considered assemblies of well-defined vortex motions. This is a perfectly reasonable way for fluid dynamicists to go about things, but it involves a considerable element of guesswork. In contrast, Fourier wavenumber space gives a precise representation of the physical system and essentially formulates the basic problem as a statistical field theory.

So, spectral methods actually expose the underlying physics of turbulence, rather than obscuring it. It is my view that those who are not comfortable with them must necessarily have a very restricted and limited understanding of the subject. I shall illustrate that in my next post.

[1] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 2nd edition, 1971.
[2] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.




Stirring forces and the turbulence response.

Stirring forces and the turbulence response.

Vacation post No. I: I will be out of the office until Monday 19 April.

In my previous post, I argued that there seems to be really no justification for regarding the stirring forces that we invoke in isotropic turbulence as mysterious, at least in the context of statistical physics. However, when I was thinking about it, I remembered that Kraichnan had introduced stirring forces in quite a different way from Edwards and it occurred to me that this might be worth looking at again. Edwards had introduced them in order to study stationary turbulence, but in Kraichnan’s case they were central to the basic idea for his turbulence theory. In that way, Kraichnan’s formulation was more in the spirit of dynamical systems theory than of statistical physics.

Following Kraichnan, let us consider the case where the Navier-Stokes equation (NSE) is subject to a random force $f_{\alpha}(\mathbf{k},t)$, where the Greek indices take the usual values of $1,\,2,\,3$ corresponding to Cartesian tensor notation. If the force undergoes a fluctuation \[f_{\alpha}(\mathbf{k},t) \rightarrow f_{\alpha}(\mathbf{k},t) +\delta f_{\alpha}(\mathbf{k},t),\] then we may expect the velocity field to undergo a corresponding fluctuation \[u_{\alpha}(\mathbf{k},t) \rightarrow u_{\alpha}(\mathbf{k},t) +\delta u_{\alpha}(\mathbf{k},t).\] If the increments are small enough, we may neglect terms of second order in the fluctuations and introduce the infinitesimal response function $\hat{R}_{\alpha\beta}(\mathbf{k};t,t')$, such that \[\delta u_{\alpha}(\mathbf{k},t) = \int_{-\infty}^t\,\hat{R}_{\alpha\beta}(\mathbf{k};t,t')\delta f_{\beta}(\mathbf{k},t')\,dt'.\]

Kraichnan linearised the NSE in order to derive a governing equation for the infinitesimal response function. Then he introduced the ensemble-averaged form \[\langle\hat{R}_{\alpha\beta}(\mathbf{k};t,t')\rangle =R_{\alpha\beta}(\mathbf{k};t,t'),\] where \[R_{\alpha\beta}(\mathbf{k};t,t)=1,\] in order to make a statistical closure. The result was the Direct Interaction Approximation (DIA) and it is worth noting in passing that its derivation contains the step $\langle uu\hat{R} \rangle = \langle uu \rangle \langle \hat{R}\rangle$, which makes the theory a mean-field approximation.

The failure of DIA was attributed by Kraichnan to the use of an Eulerian coordinate system and he responded by generalising DIA to what he called Lagrangian-history coordinates, leading to a much more complicated formulation. This step inspired others to develop DIA-type methods in more conventional Lagrangian coordinates. However, the fact remains that the purely Eulerian LET (or local energy transfer) theory does not fail in the same way as DIA. It is worth noting that unsuccessful theories in Eulerian coordinates are invariably Markovian in wavenumber (this should be distinguished from a Markovian property in time).

An alternative explanation for the failure of Markovian theories is that the basic ansatz, in the steps outlined above, may not identify the correct response for turbulence. In dynamical systems the dissipation occurs where the force acts. In turbulence it occurs at a distance in space and time. When the force acts to stir the fluid, the energy is transferred to higher wavenumbers by a conservative process, until it comes into detailed balance with the viscous dissipation. Arguably the system response needs to include some further effect, connecting one velocity mode to another, as happens in the LET theory [1].

In all theories, the direct action of the stirring force is both to create the modes and to populate them with energy. In DIA, the way in which energy is put into the modes (i.e. the input term) can be calculated exactly by renormalized perturbation theory in terms of the ensemble-averaged response function. However, the general closure of the statistical equations for the velocity moments is equivalent to assuming that the same procedure will work for them, and this is really only an assumption. So it may be that it is the turbulence response which is mysterious, and not the stirring forces as such.

General treatments of these matters will be found in the books [2,3]. It should be noted that I’ve used a modern notation for the response function (e.g. see [4]).

[1] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[2] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.
[3] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[4] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.




The mysterious stirring forces

The mysterious stirring forces
In the late 1970s there was an upsurge in interest in the turbulence problem among theoretical physicists. This arose out of the application of renormalization group (RG) methods to the problem of stirred fluid motion. As this problem was restricted to a very low wavenumber cutoff, these approaches had nothing to say about real fluid turbulence. Nevertheless, the work on RG stimulated a lot of speculative discussion, and one paper referred to `the mysterious stirring forces’. I found this rather unsettling, because I had been familiar with the concept of stirring forces from the start of my PhD project in 1966. Why, I wondered, did some people find them mysterious?

As time passed, I came to the conclusion that it was just lack of familiarity on the part of these theorists, although they seemed quite happy to launch into speculation on a subject that they knew very little about. (Well, it was just a conference paper!) So I was left with the feeling that one day it might be worth writing something to debunk this comment. Recently it occurred to me that it would make a good topic for a blog.

The standard form used nowadays for the stirring forces was introduced by Sam Edwards in 1964 and has its roots in the study of Brownian motion, and similar problems involving fluctuations about equilibrium. Let us consider the motion of a colloidal particle under the influence of molecular impacts in a liquid. For simplicity, we specialise to one-dimensional motion with velocity $u$. The particle will experience Stokes drag with coefficient $\eta$, per unit mass. Accordingly, we can use Newton’s second law to write its macroscopic equation of motion as: \begin{equation} \partial u/\partial t =-\eta \, u. \end{equation} At the microscopic level, the particle will experience the individual molecular impacts as a random force $f(t)$, say. So the microscopic equation of motion becomes: \begin{equation}\partial u/\partial t =-\eta \, u + f(t). \end{equation} This equation is known as the Langevin equation. In order to solve it, we need to specify $f$ in terms of a physically plausible model.

We begin by noting that the average effect of the molecular impacts on the colloidal particle must be zero, thus we have: \begin{equation}\langle f(t) \rangle =0. \end{equation} As a result, the average of equation (2) reduces to equation (1), which is consistent. Then, in order to represent the irregular nature of the molecular impacts, we assume that $f(t)$ is only correlated with itself over very short time differences $|t-t'|\leq t_c$, where $t_c$ is the duration of a collision. We can express this in terms of the autocorrelation function $w$ as: \begin{equation} \langle f(t)f(t') \rangle = w(t-t'), \end{equation} and \begin{equation} W(t) = \int_0^t\,w(\tau)\,d\tau, \end{equation} where \begin{equation} W(t)\rightarrow W = \mbox{constant} \quad \mbox{for} \quad t \gg t_c.\end{equation}

We can go on to solve the Langevin equation (2) for the short-time and long-time behaviour of the particle velocity $u(t)$, much as in Taylor’s Lagrangian analysis of turbulent diffusion. We can also derive the fluctuation-dissipation relation: see reference [1] for details.
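As a minimal numerical sketch (with hypothetical parameter values, and anticipating the white-noise choice $w(t-t') = W\delta(t-t')$ discussed in the next paragraph), we can integrate the Langevin equation directly and check the stationary relation $\langle u^2\rangle = W/2\eta$, which is one way of stating the fluctuation-dissipation result for this system:

```python
import numpy as np

# Minimal sketch: Langevin equation du/dt = -eta*u + f(t), with white-noise forcing
# <f(t)f(t')> = W*delta(t - t').  All parameter values are hypothetical.
rng = np.random.default_rng(42)

eta = 2.0          # Stokes drag coefficient per unit mass
W = 0.5            # strength of the noise autocorrelation
dt = 1.0e-3        # time step
nsteps = 500_000   # total time = 500, much longer than the relaxation time 1/eta

u = 0.0
usq_sum, nsamples = 0.0, 0
for n in range(nsteps):
    # Euler-Maruyama step: drag plus a random impulse of variance W*dt
    u += -eta * u * dt + np.sqrt(W * dt) * rng.standard_normal()
    if n > nsteps // 10:          # discard the initial transient before averaging
        usq_sum += u * u
        nsamples += 1

print("measured  <u^2>  =", usq_sum / nsamples)
print("predicted W/2eta =", W / (2.0 * eta))
```

The long-time variance settles to $W/2\eta$, while at short times the velocity simply accumulates the random impulses, much as described above.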

In his self-consistent field theory of turbulence, Edwards drew various analogies with the theory of Brownian motion [2]. In particular, he went further than in equations (4) to (6), and chose the stirring forces to be instantaneously correlated with themselves; or: \begin{equation}w(t-t') = W \delta(t-t'), \end{equation} where $\delta$ is the Dirac delta function. In the study of stochastic dynamical systems, this is known as `white noise forcing’. It allows one to express the rate at which the stirring force does work on the turbulent fluid in terms of the autocorrelation of the stirring forces [3].

It also provides a criterion for the detection of `fake theories’. These are theories which are put out by people with skill in quantum field theory and which purport to be theories of turbulence. Such theories do not engage with the established body of work in the theory of turbulence, nor do they mention how they overcome the problems that have proved to be a stumbling block for legitimate theories. Invariably, they claim that the purpose of the delta function is to maintain Galilean invariance, and clearly do not know what it is actually used for. In fact, the Navier-Stokes equations are trivially Galilean-invariant and adding an external force to them cannot destroy that [4].

[1] W. David McComb. Study Notes for Statistical Physics: A concise, unified overview of the subject. Bookboon, 2014.
[2] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[3] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[4] W. D. McComb. Galilean invariance and vertex renormalization. Phys. Rev. E, 71:37301, 2005.




Is the entropy of turbulence a maximum?

Is the entropy of turbulence a maximum?
In 1969 I published my first paper [1], jointly with my supervisor Sam Edwards, in which we maximised the turbulent entropy, defined in terms of the information content, in order to obtain a prescription for $\omega(k)$, the renormalized decay time for the energy contained in the mode with wavenumber $k$. Of course, in statistical mechanics, one associates the maximum of the entropy with thermal equilibrium. So, in the circumstances, we were very frank about possible problems with this approach, having actually stated in the title that our system was ‘far from equilibrium’. Before we examine this aspect further, it may be of interest to look at the background to the work.

By the mid-nineteen sixties, there had been a number of related theories of turbulence, but the most important were probably Kraichnan’s direct-interaction approximation (DIA) in 1959 and the Edwards self-consistent field theory in 1964. At this time there seems to have been a mixture of excitement and frustration. It had become clear from experiment that the Kolmogorov $-5/3$ power law (or something very close to it) was the correct inertial-range form, and none of the various theories was compatible with it. Kraichnan ultimately concluded that he needed to change to a so-called Lagrangian-history coordinate system, but otherwise could retain all the features of the DIA; whereas Edwards concluded that he needed to find a different way of choosing the response function, which in his case depended on $\omega(k)$. In my view, and irrespective of the merits or otherwise of the ‘maximum entropy’ method, Edwards made the right decision.

When I began my PhD research in 1966, my first job was to work out the turbulent entropy, using Shannon’s definition, in terms of the turbulent probability distribution; and then carry out a functional differentiation with respect to $\omega(k)$, in order to establish the presence of a maximum. What I didn’t know was that Sam had himself carried out this calculation but had got stuck. In order to take the limit of infinite Reynolds numbers, he had to show that his theory was well behaved at three particular points in wavenumber space: $k=0$, $k=\infty$ and $|\mathbf{k}+\mathbf{j}|=0$, where $\mathbf{j}$ is a dummy wavenumber. He had been able to show the first two, but not the third. Not knowing that there was a problem, I soon discovered it, but by means of a trick involving dividing up the range of integration, I managed to show that it was well behaved. However, the prediction of the value of the Kolmogorov constant was not good, and this was not encouraging.

In later years, when I had a lot more experience of both turbulence and statistical physics, I thought more critically about this way of treating turbulence. The maximum entropy method is the canonical way of solving problems in thermal equilibrium where there are only either weak or very local interactions. If we take the para-ferromagnetic transition as an example, we can think of the temperature being reduced and an assembly of molecular magnets (i.e. spins on a lattice) tending to line up as the effective coupling increases. However, this process would be swamped by the imposition of a powerful external magnetic field. Similarly, the molecular diffusion process can be swamped by vigorous stirring. In the case of turbulence, it is possible to study absolute equilibrium ensembles by considering an initially stirred inviscid fluid in a finite system. If we replace the Euler equation by the Navier-Stokes equation, then the effect of the viscosity is symmetry-breaking and the system is dominated by a flow of energy through the modes.

This, of course, is a truism of statistical physics: a system is controlled either by entropy or by energy conservation. In the case of turbulence, it is always the latter. Turbulence is always a driven phenomenon. So while perhaps entropy is actually a maximum with respect to variation of $\omega(k)$, it may be too broad a maximum to allow an accurate determination of $\omega(k)$. Also, it is worth bearing in mind that it is not precisely turbulence, but the statistical theory by which we approximate it, that needs to show the requisite behaviour.

In any case, in 1974 I published my local energy transfer theory of turbulence [2], which is in good accord with the basic physics of the turbulent cascade.

[1] S. F. Edwards and W. D. McComb. Statistical mechanics far from equilibrium. J.Phys.A, 2:157, 1969.
[2] W. D. McComb. A local energy transfer theory of isotropic turbulence. J.Phys.A, 7(5):632, 1974.




Analogies between critical phenomena and turbulence: 2

Analogies between critical phenomena and turbulence: 2

In the previous post, I discussed the misapplication to turbulence of concepts like the relationship between mean-field theory and Renormalization Group in critical phenomena. This week I have the concept of ‘anomalous exponents’ in my sights!

This term appears to be borrowed from the concept of anomalous dimension in the theory of critical phenomena, so we start from a consideration of dimension, bearing in mind that the dimension of the space can be anything from $d=1$ up to $d=\infty$, and is not necessarily an integer. In critical phenomena it is usual to define three different kinds of dimensionality, as follows:

[a] Scale dimension. This is defined as the dimension of a physical quantity as established from the effect of a scaling transformation. Confusingly, this is normally just referred to as dimension.

[b] Normal (canonical) dimension. This is the (scale) dimension as established by simple dimensional analysis.

[c] Anomalous dimension. This is the dimension as established under RG transformation.

In this context, normal dimension is regarded as the naïve dimension and anomalous dimension is regarded as the actual or correct dimension. In turbulence we don’t have dimensionality as a playground, so the merry band of would-be turbulence theorists have extended the concept to the exponents of power-law forms of the moments of the velocity field plotted against order. The Kolmogorov forms (dimensional analysis) are seen as canonical and the actual (i.e. measured) exponents are seen as anomalous. The former are seen as wrong and the latter as correct. Naturally, the true believers in intermittency corrections have seized on this nomenclature as adding something to their case. (Also, see my post of 21 January 2021).

Let us actually apply the concept of scale dimension $d_s$ (say) to the energy spectrum $E(k)$ in three-dimensional turbulence (i.e. $d=3$), using the procedures from critical phenomena (see Section 9.3 of [1]). That is, we express the spectrum in terms of the total energy $E$, thus \[\int\,d^3k\,E(k) = E \quad \mbox{hence} \quad E(k) \sim E\,k^{-3}.\] So, bearing in mind that wavenumber has dimensions of inverse length, it follows that the canonical scale dimension is $d_s = 3$ in $d=3$.

If we now consider the Kolmogorov spectrum based on scale invariance and an inertial transfer rate $\varepsilon_T$, dimensional analysis gives us \[E(k) \sim \varepsilon_T^{2/3}\,k^{-5/3} .\] As this result can also be obtained from an RG transformation, properly formulated for macroscopic fluid turbulence and employing rational approximations (see [2] – [5]), it follows that K41 corresponds to the anomalous dimension $d_E = 5/3$. So much for inept comparisons with critical phenomena.
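As a footnote, the dimensional analysis can be checked mechanically. Here is a minimal sketch in Python (using sympy; the set-up and variable names are mine), which solves for the exponents $a$ and $b$ in $E(k) \sim \varepsilon^a k^b$ by matching powers of length and time:

import sympy as sp

a, b = sp.symbols('a b')
# Dimensions: E(k) ~ L^3 T^{-2}, dissipation rate ~ L^2 T^{-3}, wavenumber k ~ L^{-1}.
length_balance = sp.Eq(2*a - b, 3)   # exponents of length must match
time_balance = sp.Eq(-3*a, -2)       # exponents of time must match
print(sp.solve([length_balance, time_balance], (a, b)))   # {a: 2/3, b: -5/3}

This returns $a = 2/3$ and $b = -5/3$, in agreement with the Kolmogorov form.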

[1] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.
[2] W. D. McComb and A. G. Watt. Conditional averaging procedure for the elimination of the small-scale modes from incompressible fluid turbulence at high Reynolds numbers. Phys. Rev. Lett., 65(26):3281-3284, 1990.
[3] W. D. McComb, W. Roberts, and A. G. Watt. Conditional-averaging procedure for problems with mode-mode coupling. Phys. Rev. A, 45(6):3507-3515, 1992.
[4] W. D. McComb and A. G. Watt. Two-field theory of incompressible-fluid turbulence. Phys. Rev. A, 46(8):4797-4812, 1992.
[5] W. D. McComb. Asymptotic freedom, non-Gaussian perturbation theory, and the application of renormalization group theory to isotropic turbulence. Phys. Rev. E, 73:26303-26307, 2006.




Analogies between critical phenomena and turbulence: 1

Analogies between critical phenomena and turbulence: 1
In the late 1970s, application of Renormalization Group (RG) to stirred fluid motion led to an upwelling of interest among theoretical physicists in the possibility of solving the notorious turbulence problem. I remember reading a conference paper which included some discussion that was rather naïve in tone. For instance, why did turbulence theorists study the energy spectrum rather than something else? Also, rather unsettlingly, there was a reference to the ‘mysterious stirring forces’ (sic): I shall return to that comment in a future post. However, although no turbulence theory emerged from this activity, a way of thinking did, and this found a receptive audience in those members of the turbulence community who believe in intermittency corrections. In my view, one set of views is as unjustified as the other, and I shall now explain why I think this.

To understand how these views came about, we need to consider the background in critical phenomena. During the 1960s, theorists in this area began to use concepts like scaling and self-similarity to derive exact relationships between critical exponents. (In passing, I note that in fluid dynamics these tools had already been in active use for more than half a century!) In this way, the six critical exponents of a typical system could be reduced to just two to be determined. At first the gap was bridged by mean-field theory, but then RG came along and the problem was solved.

It is important to know that RG can be viewed, in some respects, as a correction to mean-field theory. As a result, theorists in this field essentially ended up taking the view: ‘mean-field theory, bad! RG good!’, and this had a tendency to spill over into other areas as a sort of judgement. In general this was the attitude during the 1980s/90s, and few paused to reflect that other phenomena might belong to a different universality class. For instance, should the self-consistent field theory of multi-electron atoms be ruled out, because RG is better than mean-field theory at describing the para-ferromagnetic phase transition? Fortunately, this sort of thinking has presumably died out by now, but it has left an unhelpful residue in turbulence theory.

One form of this is the assertion that the Kolmogorov ‘$-5/3$’ energy spectrum is a mean-field theory, and that an RG calculation would lead to an exponent of the form ‘$-5/3+\mu$’; precisely what the ‘intermittency correction’ enthusiasts had been saying all along! The snag with this is that the derivation of the Kolmogorov spectrum does not rely on a mean-field step, nor indeed on its invariable accompaniment, a self-consistent field step. In fact, this can be a problem in critical phenomena. People tend to refer loosely to mean-field theories, without mentioning that they are also self-consistent theories. Actually in turbulence we have various self-consistent field theories which do not predict the Kolmogorov exponent, and one which does [1].
In my next post, I will develop this topic further. In the meantime, a general background account of these matters may be found in the book cited below as [2].
[1] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[2] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.




Compatibility of temporal spectra with Kolmogorov (1941): the Taylor hypothesis.

Compatibility of temporal spectra with Kolmogorov (1941): the Taylor hypothesis.

Earlier this year I received an enquiry from Alex Liberzon, who was puzzled by the fact that some people plot temporal frequency spectra with a $-5/3$ power law, but he was unable to reconcile the dimensions. This immediately took me back to the 1970s when I was doing experimental work on drag-reduction, and we used to measure frequency spectra and convert them to one-dimensional wavenumber spectra using Taylor’s hypothesis of frozen convection [1]. It turned out that Alex’s question was more complicated than that and I will return to it at the end. But I thought my own treatment of this topic in [1] was terse, to say the least, and that a fuller treatment of it might be of general interest. It also has the advantage of clearing the easier stuff out of the way!

Consider a turbulent velocity field $u(x,t)$ which is stationary and homogeneous with rms value $U$. According to Kolmogorov (1941) [2], the mean square variation in the velocity field over a distance $r$ from a point $x$ is given by:\begin{equation}\langle \Delta u^2_r \rangle \sim (\varepsilon r)^{2/3}.\end{equation} If we now consider the turbulence to be convected by a uniform velocity $U_c$ in the $x$-direction, then the K41 result for the mean square variation in the velocity field over an interval of time $\tau$ at a point $x$ is given by: \begin{equation}\langle \Delta u^2_\tau \rangle \sim (\varepsilon U_c\tau)^{2/3}.\end{equation}The dimensional consistency of the two forms is obvious from inspection.

Next let us examine the dimensions of the temporal and spatial spectra. We will use the angular frequency $\omega = 2\pi f$, where $f $ is the frequency in Hertz, in order to be consistent with the definition of wavenumber $k_1$, where $k_1$ is the component of the wavevector in the direction of $x$. Integrating both forms of the spectrum, we have the condition: \begin{equation} \int^\infty_0 E(\omega) d\omega = \int_0^\infty E_{11}(k_1) dk_1 = U^2. \end{equation} Evidently the dimensions are given by: \begin{equation}\mbox{Dimensions of}\, E(\omega)d\omega = \mbox{Dimensions of}\, E_{11}(k_1) dk_1 = L^2 T^{-2};\end{equation} or velocity squared.

Then we introduce Taylor’s hypothesis in the form: \begin{equation} \frac{\partial}{\partial t} = U_c \frac{\partial}{\partial x}, \quad \mbox{thus} \quad \omega = U_c k_1;\end{equation} and hence: \begin{equation}k_1= \frac{\omega}{U_c} \quad \mbox{and} \quad dk_1 = \frac{d\omega}{U_c}. \end{equation}
The Kolmogorov wavenumber spectrum (in the one-dimensional form that is usually measured) is given by:\begin{equation}E_{11}(k_1) = \alpha_1 \varepsilon^{2/3} k^{-5/3}_1.\end{equation}We should note that $\alpha_1$ is the constant in the one-dimensional spectrum and is related to the three-dimensional constant $\alpha$ by $\alpha_1 = (18/55)\alpha$. Substituting for the wavenumbers from (6) into (7) we find:\begin{equation} E_{11}(k_{1})dk_{1} = \alpha_1 (\varepsilon U_c)^{2/3}\omega^{-5/3} d\omega \equiv E(\omega)d\omega, \end{equation} which is easily shown to have the correct dimensions of velocity squared.
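As a practical illustration of the conversion just described, here is a minimal sketch in Python of how one might turn a measured frequency spectrum into a one-dimensional wavenumber spectrum. The function name and the array conventions are my own assumptions, not taken from any particular data set:

import numpy as np

def frequency_to_wavenumber(freq_hz, E_f, U_c):
    # Convert a one-sided frequency spectrum E(f) (per Hz) into the one-dimensional
    # wavenumber spectrum E11(k1), using Taylor's hypothesis omega = U_c * k1.
    # Energy is conserved under the change of variable: E11(k1) dk1 = E(omega) domega = E(f) df.
    omega = 2.0 * np.pi * np.asarray(freq_hz)    # angular frequency
    E_omega = np.asarray(E_f) / (2.0 * np.pi)    # E(omega) = E(f) * df/domega
    k1 = omega / U_c                             # Taylor's hypothesis
    E11 = E_omega * U_c                          # E11(k1) = E(omega) * domega/dk1
    return k1, E11

A $-5/3$ range in the frequency spectrum then maps directly onto a $-5/3$ range in $E_{11}(k_1)$, since the change of variable is linear.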

After seeing this analysis, Alex came back with: but what about when the field is homogeneous and isotropic, with $U_c=0$? That’s a very good question and takes us into a topic which originated with Kraichnan’s analysis of the failure of DIA in 1964 [1]: the importance of sweeping effects on the decay of the velocity correlation. There are now numerous papers which address this topic and they continue to appear. So it does not give the impression of being settled. From my point of view, this is important in the context of closure approximations; but I understand that the answer to the question of $f^{-5/3}$ or $f^{-2}$ depends on the importance or otherwise of sweeping effects.

I intend to return to this, but not necessarily next week!

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] A. N. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. C. R. Acad. Sci. URSS, 30:301, 1941.




The concept of universality classes in critical phenomena.

The concept of universality classes in critical phenomena.
The universality of the small scales, which is predicted by the Richardson-Kolmogorov picture, is not always observed in practice; and in the previous post I conjectured that departures from this might be accounted for by differences in the spatial symmetry of the large scale flow. To take this idea a step further, I now wonder whether it would be worth exploring how the idea of universality classes could be applied to the turbulent cascade? First, I should explain what universality classes actually are.

In the study of critical phenomena, we are concerned with changes of phase or state which can occur at a critical temperature, which is invariably denoted by $T_c$. For instance, the transition from liquid to gas, or the transition from para- to ferromagnetism. In general, it is found that the thermodynamic variables (e.g. heat capacity, magnetic susceptibility) of a system either tend to zero, or tend to infinity, as the system approaches the critical temperature. Let us represent any such macroscopic variable by $F(T)$ and introduce the reduced temperature $\Theta_c$ by \[\Theta_c = \frac{T-T_c}{T_c}.\] Then, as $T\rightarrow T_c$ and $\Theta_c \rightarrow 0$, we have \[F(\Theta_c) = A \Theta_c^{-n},\] where $A$ is a constant and $n$ is the critical exponent. Obviously the critical exponent will be negative when $F(0)=0$ and positive when $F(0)=\infty$.

Here the constant $A$ and the critical temperature $T_c$ depend on the details of the system at the molecular level and therefore vary from one system to another. These quantities must be determined experimentally. However, in practice it is found that different systems sometimes have the same values of the critical exponents, and that these values depend only on symmetry properties of the microscopic energy function (or Hamiltonian). When this is found to be the case, the systems are said to be in the same universality class.
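In practice, of course, both the exponent and the constant have to be extracted from measurements. As a minimal sketch of the usual procedure (the function and its assumptions are mine; it presumes data taken inside the scaling region with $T > T_c$), one fits a straight line in log-log variables:

import numpy as np

def fit_critical_exponent(T, F, T_c):
    # F ~ A * Theta_c^{-n}, with Theta_c = (T - T_c)/T_c, so that
    # log F = -n * log Theta_c + log A: a straight line in log-log variables.
    theta = (np.asarray(T, dtype=float) - T_c) / T_c
    slope, intercept = np.polyfit(np.log(theta), np.log(np.asarray(F, dtype=float)), 1)
    return -slope, np.exp(intercept)   # estimates of (n, A)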

Accordingly, in my view it would be worth reviewing the different investigations in order to find out if one could organise results for the inertial-range exponent into some kind of universality classes, although allowance should be made for experimental error, which tends to be much greater in fluid dynamics than in microscopic physics. I would be tempted to take a look through my files, but unfortunately I remain cut off from my university office by the pandemic.

Further details about critical phenomena may be found in reference [1] below.

[1] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.




Macroscopic symmetry and microscopic universality.

Macroscopic symmetry and microscopic universality.
The concepts of macroscopic and microscopic are often borrowed, in an unacknowledged way, from physics, in order to think about the fundamentals of turbulence. By that, I mean that there is usually no explicit acknowledgement, nor indeed apparent realization, that the ratio of large scales to small scales is many orders of magnitude smaller in turbulence (which is at all scales actually a macroscopic phenomenon) than it is in microscopic physics.

This idea began with Kolmogorov in 1941, when he employed Richardson’s concept of a cascade of energy from large eddies to small, to argue that, after a sufficiently large number of steps, there could be a range of eddy sizes which were statistically independent of their large-scale progenitors. In passing, it should be noted that the concept of ‘eddy’ can be left rather intuitive, and we could talk equally vaguely about ‘scales’. However, combining the cascade idea with Taylor’s earlier introduction of Fourier modes as the degrees of freedom of a turbulent system leads to a much more satisfactory analogy with statistical physics, with the onset of scale invariance strengthening the analogy to the microscopic theory of critical phenomena. As is well known, that leads to the `$-5/3$' spectrum, which was expected to be universal.

My own view is that it would be good to get it settled that the Kolmogorov spectrum holds for isotropic turbulence. There is still an absence of consensus about that. But the broader claim of universality has been supported by measurements of spectra in a vast variety of flow configurations; although, inevitably there have been instances where it is not supported. So we end up with yet another unresolved issue in turbulence. Is small-scale turbulence universal or not?

In order to consider whether or not the concept of symmetry could assist with this, it may be helpful to think in terms of definite examples. First, let us consider laminar flow in the $x_1$ direction between fixed parallel plates situated at $x_2=\pm a$. The velocity distribution between the plates will be a symmetric function of the variable $x_2$. If now we consider a flow where one plate is moving with respect to the other, and this is the only cause of fluid motion, then we have plane Couette flow and, as is well known, the velocity profile will now be an antisymmetric function of $x_2$. However, the molecular viscosity of the fluid will be unaffected by the different macroscopic symmetries and will be the same in both cases.
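To make the contrast in symmetry explicit, the familiar laminar profiles can be written down (a textbook sketch: plates at $x_2 = \pm a$, and, for the Couette case, plates moving at $\pm U/2$ so that the mid-plane is at rest): \[ \mbox{plane Poiseuille:} \quad u_1(x_2) = \frac{1}{2\mu}\left(-\frac{dp}{dx_1}\right)\left(a^2 - x_2^2\right), \qquad u_1(-x_2) = u_1(x_2); \] \[ \mbox{plane Couette:} \quad u_1(x_2) = \frac{U x_2}{2a}, \qquad u_1(-x_2) = -u_1(x_2). \] The first is an even function of $x_2$ and the second is odd, while the molecular viscosity $\mu$ is, of course, the same in both.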

If we now extend this discussion to the case of turbulent mean velocities and inquire about the behaviour of the effective turbulent viscosity ($\nu_t$, say: for a definition see Section 1.5 of reference [1]), it is clear that this will be very different in the two cases, and arguably that should apply to the cascade process as well.

In isotropic turbulence, the cascade is described by the Lin equation, with the key quantity being the transfer spectrum $T(k)$. Its extension to an inhomogeneous case will bring in a number of transfer spectra, such as $T_{11}$, $T_{12}$ and so on. In order to cope with the dependence on spatial coordinates, the introduction of centroid and relative coordinates that we used in the previous post will prove useful. Recall that we considered a covariance function $C(\mathbf{x},\mathbf{x}')$, leaving the time variables out for simplicity, and introduced the change of variables to centroid and relative coordinates, thus: \[\mathbf{R} = (\mathbf{x} + \mathbf{x}')/2 \qquad \mbox{and} \qquad \mathbf{r} = (\mathbf{x} - \mathbf{x}'). \] In this case one component of the spectral tensor could be written as: $T_{11}(\mathbf{k}, R_2)$, where we have Fourier transformed with respect to the relative coordinate only. Then, at least in the core region of the flow, we could expand out the dependence on the centroid coordinate in a Taylor series. In this way we could separate the wavenumber cascade from spatial effects, such as production and spatial energy transfer.

Ideally one could even use a closure theory: the covariance equation of the DIA has been validated by the LET theory [2] and, although some work has been done on this in the past, a really serious approach would require a lot of bright young people to get involved. Unfortunately, vast numbers of bright young people all over the world are involved in complicated pedagogical exercises in cosmology, particle theory, string theory, quantum gravity and so on, most of which has gone beyond any proper theoretical foundation. Ah well, important but less glamorous problems like turbulence must await their turn.

For completeness, I should emphasise that all flows discussed above are assumed to be incompressible and well-developed.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor. 50:375501, 2017.




Can statistical theory help with turbulence modelling?

Can statistical theory help with turbulence modelling?
When reading the book by Sagaut and Cambon some years ago, I was struck by their balance between fundamentals and applications [1]. This started me thinking, and it appeared to me that I had become ever more concentrated on fundamentals in recent years. In other words, I seemed to epitomize the old saying about scholarship consisting of `learning more and more about less and less’!

It was not always so. I began my career in research and development, which was very practical indeed. Then my employers sent me back to university where I took a degree in theoretical physics, followed by a PhD on the statistical theory of turbulence. Obviously the rot had set in; but, even so, in later years I did quite a lot of experimental work on drag reduction by additives and also turbulent diffusion. At least these topics had a practical orientation. Moreover, I have also used the $k-\varepsilon$ model to carry out calculations on the `jet in crossflow’ problem. This might seem surprising, but it arose quite naturally in the following way.

Around about 1980 I had a call from a colleague in the maths department at Edinburgh. The Iran-Iraq war had recently broken out, and one of his MPhil students came from that part of the world. The student had decided that he would rather take a PhD than go home and be involved in the fighting. Very understandable, but the difficulty was that he needed a more substantial project. At that point he was studying the jet in crossflow problem, using ideal flow methods. My colleague wondered if I could join in as co-supervisor and introduce some turbulence to the project in order to make it more realistic.

Lacking any experience in this field, I happily agreed to join in, and proposed that we use the $k-\varepsilon$ model, which at the time was the best known of the engineering models. We set out on a programme of studying both the model and associated numerical methods, in the process considering a hierarchy of problems of increasing difficulty, until we reached the jet in crossflow.

This was a long time ago, but two things about this PhD supervision remain in my memory. First, the student was a mathematician and had no prior knowledge of numerical computation. This leaves me with an abiding impression that he initially found it very difficult to realise that we did not need to be able to solve an equation in the mathematical sense. Because of this, we had many discussions which appeared to be going well and then ended in frustration. Secondly, once we managed to encourage him to overcome his reluctance and try to use the computer, he proved to be a natural and worked rapidly through our hierarchy of problems, ending up with useful results in a commendably short time. This happened at a time of upheaval for me, when I was moving from the School of Engineering to the School of Physics, so I have only a rather vague memory of how things turned out. I believe that he got his PhD and then went on somewhere in England as a postdoc. Whether the results were published or not, I don’t recall. But the experience left me with an appreciation of the value of a practical engineering model, where my own fundamental work would have been of little assistance. A short discussion of the $k-\varepsilon$ model can be found in Section 3.3.4 of my book, given as reference [2] below.

When considering how statistical theory might help, we should first recognize that it does give rise to a class of models, beginning with the Eddy Damped Quasi-Normal model, which is cognate to the self-consistent field theory of Edwards and has a single adjustable constant. It is, however, restricted to homogeneous turbulence. What we could really do with is something like the $k-\varepsilon$ model: a single-point model, but one which arises in a systematic way from a two-point statistical theory. The value of the latter is that it takes into account spatially (and temporally) nonlocal effects.

The details of the statistical closure theories are complicated, but the basic idea of how one might try to derive single-point engineering models is quite simple. The key quantity is the covariance of two fluctuating velocities at different points (and times) and a theory consists of a closed set of equations to determine the covariance. In general, the covariance tensor is a matrix of nine covariance functions, although symmetry will often reduce that. We will consider just one such function, which we write as $C(\mathbf{x},\mathbf{x}')$, leaving the time variables out for simplicity. We then make the change of variables to centroid and relative coordinates, thus: \[\mathbf{R} = (\mathbf{x} + \mathbf{x}')/2 \qquad \mbox{and} \qquad \mathbf{r} = (\mathbf{x} - \mathbf{x}'). \]

Now, the statistical theories are studied for the homogeneous case in order to simplify the problem. That is, we assume that there is no dependence on the centroid coordinate, and we Fourier transform into wavenumber space with respect to the relative variable. However, the basic derivation and renormalization are not restricted to this case, and we can write down equations for the general case. Then, recognizing that most turbulent shear flows have a smooth dependence on the centroid coordinate, we can envisage expanding in that coordinate, with coefficients obtained as integrals over wavenumber. Finally, setting $\mathbf{x}=\mathbf{x}'$, we could end up with single-point equations, whose coefficients are determined by integrals that arise in the fundamental theory.
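As a schematic of this programme (my own condensed summary, with tensor indices suppressed and time arguments omitted), one might write \[ C(\mathbf{x},\mathbf{x}') = C(\mathbf{r};\mathbf{R}) = \int d^3k \, e^{i\mathbf{k}\cdot\mathbf{r}}\, C(\mathbf{k};\mathbf{R}), \] with the centroid dependence expanded about a reference point $\mathbf{R}_0$, \[ C(\mathbf{k};\mathbf{R}) \simeq C^{(0)}(\mathbf{k}) + (\mathbf{R}-\mathbf{R}_0)\cdot \nabla_{\mathbf{R}}C(\mathbf{k};\mathbf{R})\big|_{\mathbf{R}_0} + \dots, \] so that setting $\mathbf{x}=\mathbf{x}'$ (i.e. $\mathbf{r}=0$) yields single-point quantities such as \[ \tfrac{1}{2}\langle u^2(\mathbf{R})\rangle = \tfrac{1}{2}\int d^3k\, C(\mathbf{k};\mathbf{R}), \] with the coefficients of any resulting single-point model obtained as wavenumber integrals of this kind. This is only a sketch of the bookkeeping, not the derivation itself.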

This would not be a trivial process but, given the huge importance of turbulence calculations in a variety of applications, it is perhaps surprising that it has been so comprehensively neglected. A recent discussion of statistical two-point closures can be found in reference [3]. For completeness, I should mention that a second edition of [1] has appeared and I understand that a third edition is in the pipeline.

[1] P. Sagaut and C. Cambon. Homogeneous Turbulence Dynamics. Cambridge University Press, Cambridge, 2008.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[3] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.




The last post … of the first year!

The last post … of the first year!
A year ago, when I began this blog, few of us can have had any idea of what the year had in store in the form of the coronavirus, now known to us as Covid-19. Over the years, I have sometimes reflected on the very fortunate lives of my generation. I was born at the beginning of World War 2 and it impinged very little on my life or consciousness. In contrast, my grandparents were all adults during WW1, and would have suffered from that; while my parents must have endured fear and anxiety during WW2, but did not pass on any of that to me or my siblings. Basically all that I can remember was the occasional comment about the wonderful things (e.g. unlimited cream or butter) that one could get before the war!

So perhaps the pandemic is our war? Well, for many people it must seem like it; but, for those of us who are retired and have not been touched personally by the fatal consequences of the virus, it really only amounts to a degree of anxiety and some disruption of our lives. In my own case, I have not been able to go to my university office since last February. But this lack of access to my papers and books has merely been an inconvenience; although I do have plans to write a couple of review articles in the coming year and, if I don’t have access to my office, only certain preliminaries will be possible.

In my first post, I referred to a paper of mine which I speculated might be my last as it had bounced from four different journals. I mentioned that I had let my guard down and made some sweeping statements without justifying them in detail. At the time I hadn’t mastered the art or science of incorporating references in my blogs, so I can now remedy the omission and this paper can be found as reference [1] below. So you can judge for yourself. Comments would be welcome. As a foretaste of something that I shall return to: in my view such a paper should have been unnecessary. The point it makes is that K41 scaling is observed for spectra and K62 scaling is not.

Incidentally, my speculation about publishing no more papers turned out to be overly pessimistic: see reference [2] below. There is rather a nice story attached to this, but I won’t go into that at the moment. Suffice it to say that it quite encouraged me and I have to confess that I now have a number of papers at various stages of preparation. At worst their fate when submitted to journals should make interesting anecdotes under the generic title of `peer review’.

To close on an upbeat note, I intend to integrate some of my blogs with the preparation of the two review articles that I have in mind. First, I intend to review the general topic of energy transfer and dissipation. In particular, the existing literature on the subject is unhelpful to the point of being quite bizarre. For instance, I recently read a discussion of the paper known as K41 (see reference [3] below) in which the author purports to quote this paper and in the process uses the word `wavenumber’, when in fact K41 derives the two-thirds law for the second-order structure function (i.e. $S_2(r) \sim r^{2/3}$), and the word wavenumber does not appear in the paper! Moreover, there is not a single exegesis (so far as I know) of K41 in the literature. Given its seminal nature, this is absolutely astonishing. It needs to be put right.

Secondly, I intend to write an article on statistical theories of turbulence, which will be much more accessible to those who are not theoretical physicists, and who balk at the word renormalization. In deciding which words not to use, I shall be guided by the acerbic remarks of the late Philip Saffman, which are to be found in his published lecture notes. Basically, I remain optimistic about this activity.

[1] W. D. McComb and M. Q. May. The effect of Kolmogorov (1962) scaling on the universality of turbulence energy spectra. arXiv:1812.09174[physics.flu-dyn], 2018.
[2] W. D. McComb. A modified Lin equation for the energy balance in isotropic turbulence. Theoretical & Applied Mechanics Letters, 10:377-381, 2020.
[3] A. N. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. C. R. Acad. Sci. URSS, 30:301,1941.




How important are the higher-order moments of the velocity field?

How important are the higher-order moments of the velocity field?

Up until about 1970, fundamental work on turbulence was dominated by the study of the energy spectrum, and most work was carried out in wavenumber space. In 1963 Uberoi measured the time-derivative of the energy spectrum and also the dissipation spectrum, in grid turbulence; and used the Lin equation to obtain the form of the transfer spectrum $T(k)$ [1]. Later on, this work was extended and refined by van Atta and Chen, who obtained the transfer spectrum more directly from the third-order correlation [2]. This seems to have been the peak of experimental interest in spectra, and from then on there was a growing concentration on the behaviour of the moments (strictly speaking, in the form of structure functions) in real space [3], [4].

Introducing the structure function of order $n$ by \[S_n(r) = \langle \delta u_L^n(r) \rangle,\] where $\delta u_L(r)$ is the longitudinal velocity difference taken over a distance $r$, it is well known that, on dimensional grounds, the structure functions are expected to take the form \[S_n=C_n \,(\varepsilon r)^{n/3},\] whereas investigations like [3] and [4] (and many following them over the years) found deviations from this that increased with order $n$. Such results gave increased traction to belief in intermittency corrections and anomalous exponents.
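As an illustration of what is actually measured, here is a minimal sketch in Python of estimating the exponents $\zeta_n$ from a single longitudinal velocity record. The function and its choices are my own; in practice the convergence of the higher-order moments and the choice of fitting range require considerable care:

import numpy as np

def structure_function_exponents(u, dx, orders=(2, 3, 4, 5, 6)):
    # Estimate zeta_n in S_n(r) ~ r^{zeta_n} from a velocity record u(x) sampled
    # on a uniform grid of spacing dx, using a log-log fit over all separations.
    u = np.asarray(u, dtype=float)
    seps = np.arange(1, len(u) // 4)            # separations, in grid points
    r = seps * dx
    zeta = {}
    for n in orders:
        S_n = np.array([np.mean((u[s:] - u[:-s]) ** n) for s in seps])
        slope, _ = np.polyfit(np.log(r), np.log(np.abs(S_n)), 1)
        zeta[n] = slope
    return zeta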

Yet, when one considers it, the moments of a distribution are equivalent to the distribution itself. It is well known that the moments are related to the distribution through its characteristic function, which is its Fourier transform. From the simple example on page 529 of reference [5], we see that the characteristic function can be expanded out in terms of the moments. Hence the distribution can be recovered to any desired order from the infinite set of its moments. Therefore, when one measures moments to some order, one is merely assessing the accuracy with which one has measured the distribution itself. A plot of the measured exponent $\zeta_n$ against order $n$ is no more or less than a plot of systematic experimental error. A glance at the plots of measured distributions in both [3] and [4] will make this point with compelling force, especially when one considers the wings of the distribution.
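For a single velocity increment $\delta u$, the standard relationship referred to here can be written as \[ \phi(\theta) = \langle e^{i\theta\,\delta u}\rangle = \sum_{n=0}^{\infty}\frac{(i\theta)^n}{n!}\,\langle \delta u^n \rangle, \] so that (when the series converges) the full set of moments determines the characteristic function and hence, by inverse Fourier transformation, the distribution itself.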

A brief overview of this topic and a number of more recent references may be found in [6]. Note that in that reference, a standard laboratory method of reducing systematic error was used to measure $\zeta_2$ and showed that it tended towards the canonical value of $2/3$ as the Reynolds number was increased. As a matter of some slight interest, I learnt that method when I was about sixteen years old at school.

[1] M. S. Uberoi. Energy transfer in isotropic turbulence. Phys. Fluids, 6:1048, 1963.
[2] C. W. van Atta and W. Y. Chen. Measurements of spectral energy transfer in grid turbulence. J. Fluid Mech., 38:743-763, 1969.
[3] C. W. van Atta and W. Y. Chen. Structure functions of turbulence in the atmospheric boundary layer over the ocean. J. Fluid Mech., 44:145, 1970.
[4] F. Anselmet, Y. Gagne, E. J. Hopfinger, and R. A. Antonia. High-order velocity structure functions in turbulent shear flows. J. Fluid Mech., 140:63, 1984.
[5] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[6] W. D. McComb, S. R. Yoffe, M. F. Linkmann, and A. Berera. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence. Phys. Rev. E, 90:053010, 2014.




How big is infinity?

How big is infinity?
In physics it is usual to derive theories of macroscopic systems by taking an infinite limit. This could be the continuum limit or the thermodynamic limit. Or, in the theory of critical phenomena, the signal of a nontrivial fixed point is that the correlation length becomes infinite. Of course, what we mean by `infinity’ is actually just a very large number. But the mathematicians do not like this. In reference [1] below, the author states: ‘… statistical-mechanical theories of phase transitions tell us that phase transitions only occur in infinite systems’. She sees this as paradoxical because, as we all know, in everyday life we are surrounded by finite systems undergoing phase transitions. She further believes that the paradox can be resolved by working with constructive mathematics, rather than classical mathematics, which is what we all normally use.

My quotation from [1] is certainly open to deconstruction, and I doubt if many physicists would agree with it. What originally drew my attention to this particular problem is the situation in turbulence theory. As the Reynolds number is increased (or, the viscosity is decreased), the dissipation rate becomes independent of the viscosity. Physicists attribute this to the energy transfer by the nonlinear term in the equation of motion becoming scale-invariant. As the Reynolds number is increased even more, this scale-invariance extends further through wavenumber space, and nothing thereafter changes, either qualitatively or quantitatively. This in practical terms is the infinite Reynolds number limit, and it occurs at quite modest, finite values of the Reynolds number.

However, many mathematicians, harking back to a paper by Onsager [2] in 1949, believe that the infinite Reynolds number limit corresponds to zero viscosity; and, even more bizarrely, that the continuum properties of the fluid break down in this limit. Accordingly, they are driven to finding ways of making the Fourier representation of the inviscid Euler equation dissipative, by destroying its symmetry-based conservation properties. I have discussed this topic in three previous posts on 12, 19 and 26 November; and a paper, at that time in preparation, is now available on the arXiv as [3].

[1] Pauline van Wierst. The paradox of phase transitions in the light of constructive mathematics. Synthese, 196:1863, 2019.
[2] L. Onsager. Statistical Hydrodynamics. Nuovo Cim. Suppl., 6:279, 1949.
[3] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614[physics.flu-dyn], 2020.




My life in wavenumber space

My life in wavenumber space
In September 1966, when I began work on my PhD, I almost immediately began to dwell in wavenumber space. After a brief nod to the real-space equations, I had to learn about Fourier transformation of the velocity field, with the wave-vector $\mathbf{k}$ replacing the position vector $\mathbf{x}$, and the Navier-Stokes equations being changed from real space to wavenumber space. In addition, it was usual in those days to begin with the velocity field in a cubic box and use Fourier series. Then at some stage one would let the box size tend to infinity, and replace summations by integrals. At the same time, the periodic boundary conditions would be replaced by good behaviour at infinity. So far as theoretical work was concerned, I was not to emerge from wavenumber space until around 2006, when I began to take an interest in the phenomenology of turbulence.

This narrowness was not unusual and indeed did not seem particularly narrow at the time. There had been an incursion of theoretical physicists into turbulence from the 1950s onwards; and, for theorists of the time, wavenumber space was just momentum space with Planck’s constant set equal to unity. So everyone working on the statistical theory of turbulence was quite at home in wavenumber space, and it fitted in with what was almost a tradition in turbulence theory, which had begun with Taylor’s introduction of spectral methods in the 1930s and had been carried on in the 1950s by Batchelor’s book in particular. Problems only arose when one’s papers were refereed by those who were not part of this grouping, and who were hostile to spectral methods. But I have written about that in other blogs and it is not what concerns me here, which is something rather more subtle.

The other day I was trying to work something out and was sure that I had done it previously. I’m not keen on doing anything that I, or indeed anyone else, has already done. Hence I was checking back in my notebooks and found what I was looking for dated May 1993. So, that was satisfactory, but it reminded me of why I had done the work originally. During the 1970s/80s, I became increasingly aware of referees who felt that theories predicting the Kolmogorov $-5/3$ law should not be published, because ‘intermittency corrections meant that it wasn’t correct’. It seemed to me that the very structure of renormalization theories was evidence for the correctness of the $-5/3$ law. But as such theories were very largely inaccessible to fluid dynamicists (especially, of course, when they were refereeing them!) I had wondered how one could extract the basic ideas without the full level of complication.

The essential feature, it seemed to me, was the occurrence of scale invariance, in which the inertial flux through wavenumber became constant, independent of wavenumber. Beginning with the velocity field in $k$-space, one could exploit its complex nature to separate out amplitude and phase effects. Then, in the context of the energy balance equation (nowadays increasingly referred to as the Lin equation), one could determine the energy spectrum by power counting, with its prefactor being determined by an average over the phases.
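In schematic terms (this is my own shorthand, not the full argument of [1]), the scale invariance in question is the statement that the inertial flux \[ \Pi(\kappa) = \int_{\kappa}^{\infty} T(k)\,dk = -\int_{0}^{\kappa} T(k)\,dk \] takes the constant value $\varepsilon$ over a range of wavenumbers $\kappa$; power counting in the energy-balance equation then fixes the $k$-dependence of $E(k)$ in that range, with the prefactor left as an average over the phases.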

I wrote this up and submitted it to PRL sometime in 1993. The response was interesting. It was rejected with a report that spoke approvingly of how it was written and presented but regretted that the energy-balance equation had already been used to derive the so-called ‘$4/5$’ law for the third-order structure function by Kolmogorov. I of course was happily ignorant of this. It was something done in real space, which demonstrates the disadvantages of taking too limited or narrow an approach.

In 2006 I retired and began to take an interest in various phenomenological questions. This meant that at last I crossed over into real space and worked with the Kármán-Howarth equation as well as with the Lin equation. When working on the scale-invariance paradox, I decided to revisit my 1993 theory and this was published as [1] below. I was now able to point out that it answered the Landau criticism of Kolmogorov’s theory (as reinterpreted by Kraichnan [2]), in that its prefactor also depended on an average to the two-thirds power. If the original referee had been more familiar with spectral methods, he might have realised that my paper was a derivation of the inertial-range energy spectrum from the equations of motion, not the Fourier transform of the third-order structure function. So it was very much a different result from the Kolmogorov ‘$4/5$’ law. It also occurs to me as I write this, that the relationship between prefactors in the real-space and wavenumber-space formulations might be worth looking at.

Is there a moral in all this? I think there is. Basing my opinion on long experience of papers, discussions and referee reports, I believe that those fluid dynamicists who are uncomfortable with spectral methods understand less about the basic physics of turbulence than they otherwise might… and the New Year is a time for resolutions!

[1] David McComb. Scale-invariance and the inertial-range spectrum in three-dimensional stationary, isotropic turbulence. J. Phys. A: Math. Theor., 42:125501, 2009.
[2] R. H. Kraichnan. On Kolmogorov’s inertial-range theories. J. Fluid Mech., 62:305, 1974.




How many angels can dance on the point of a pin?

How many angels can dance on the point of a pin?
When I was young this was often quoted as an example of the foolishness of the medieval schoolmen and the nonsensical nature of their discussions. I happily classed those who debated it along with those who not only believed that the sun was pulled round the heavens in a fiery chariot, but who were quite prepared to specify the precise number of horses pulling the chariot. Later on it seemed that it might have been a sort of reductio ad absurdum, used for critical purposes. Perhaps like the original intention behind Schrödinger’s cat? Later still it seemed that it might be an ironical comment by a seventeenth-century Protestant theologian. In any case, it has passed into the language as the epitome of foolish and pointless discussion that has some degree of intellectual pretension.

Where then may such pointless intellectual activity be found nowadays? Well, passing over easy targets like the arts, sociology and modern literary criticism, the answer, which may surprise you, is physics. Why should it surprise you? The further answer to that is that physics has been the gift that keeps giving. Over the past century or more, it has given us the impression that it can answer any question, and in the process give rise to amazing developments in science and engineering which alter all our lives for the better. In fact the twentieth-century advances in physics underpin all advances in medicine, transport, engineering and all-round super electronic devices which smooth our paths in so many ways!

As we become less bedazzled by the wonders of quantum theory and relativity, we are more conscious of the inconsistencies, such as dark matter and dark energy, the mysterious use of string theory in many dimensions, and a standard model of the universe which is, in some ways, apparently at a similar stage to the nineteenth-century study of the periodic table, prior to the development of quantum theory. Lee Smolin, in his book The Trouble with Physics, points out the need for a revolution in physics. Roger Penrose, in his more recent book Fashion, Faith and Fantasy in the New Physics of the Universe, deplores the view that quantum theory has been so successful that it must apply to gravity too. As someone who has always worked in the classical physics area of turbulence theory (albeit using the methods of quantum field theory), I am merely an onlooker. But I have been surprised to notice that much modern physics seems to involve material that I lectured on in statistical field theory to final-year undergraduates and first-year postgraduates. I’m thinking here of topics like mean-field theory and $\phi^4$ scalar field theory. I also tend to feel surprised to see many attempts at a theory of quantum gravity based on the path-integral formulation of quantum mechanics. This is equivalent to solving the Schrödinger equation and one would not do that for a macroscopic box of gas, let alone the universe. Instead, because of the instability of the wave-function, one would use the density matrix formulation.

Every year we turn out thousands of our cleverest young people in all parts of the world to work on cosmology and particle theory. Inevitably their lives are devoted to what can be little more than pedagogical work. In contrast, the important fundamental problems of fluid turbulence receive little attention. I’m not advocating a dirigiste approach of any kind. I very much understand the importance of scholarship and research on fundamentals being a sort of creative ferment. But if a fraction of the effort on lattice QCD went into turbulence simulation, with the same sort of attitudes, it could transform the situation. As it is, we are lumbered with a turbulence community who mostly (it would seem) do not understand the concept of scale-invariance; and therefore do not understand that its onset is what defines the infinite Reynolds number limit!




Academic fathers and Mother Christmas

Academic fathers and Mother Christmas
In the mid-1980s I visited the Max Planck institute in Bonn to give a talk. While I was there, some of the German mathematicians told me about the concept of an academic father. They said that your PhD supervisor was your academic father, his supervisor was your academic grandfather, and so on. In that way: ‘We can all trace our lineage back to Gauss!’

In my own case, Sam Edwards was my supervisor and I was under the impression that Nicholas Kemmer had been his supervisor. Kemmer was retired by the time I joined the Physics department at Edinburgh and I never met him as such. Our only acquaintance was that on his rare visits to the department, he would call hello in passing, as my office door was always open.

I once discussed this concept with colleagues on some social occasion and one of them reckoned that Kemmer’s supervisor had been Weyl. So it turned out that someone I was collaborating with at the time was a sort of academic cousin. I’m not sure just what kind of cousin. My wife is an expert on matters like ‘second cousin, twice removed’, but it’s all Greek to me. Although I’m actually a bit better at Greek than at cousinage.

Recently I checked up on this and found to my surprise that Sam’s supervisor was Julian Schwinger and in turn his had been Isidor Isaac Rabi. This was encouraging, as both were Nobel Laureates in physics. Then Rabi’s supervisor had been Albert Potter Wills, who in turn was supervised by Arthur Gordon Webster (No, me neither.). He at least was supervised by Helmholtz, but after that the trail went cold again and it didn’t look like we were heading back to Gauss.

There must have been some reason why I had thought that Kemmer was Sam’s supervisor. Perhaps that was when he had still been at Cambridge University? Then he would have changed to Schwinger at Harvard? If Kemmer had been Sam’s supervisor for part of the time then he could still count as an ‘academic father’.

So I thought that I would check Kemmer out and found that his supervisor had been Pauli (not Weyl!) and in turn Pauli’s had been Sommerfeld, whose supervisor had been Lindemann (the mathematician, not the later physicist), and his had been Klein. Then Klein’s supervisor was Plücker, who was supervised by Gerling and (at last) we are back to Gauss, who was Gerling’s supervisor. But can I claim to be descended from Gauss? Well, I’m still not sure.

Of course this is all a rather old-fashioned idea. There are growing numbers of women in physics and mathematics and if we want to talk about academic descent then we should include academic mothers and, in time, academic grandmothers; and so on. Inclusiveness is the watchword nowadays and as this is Christmas Eve I shall be hanging up my stocking in the hope that Mother Christmas will put some nice presents in it. Certainly she has made a great job of decorating our tree: see below.


If you have been, then thank you for reading; and I wish you a happy Christmas!




Peer Review: Through the Looking Glass

Peer Review: Through the Looking Glass
Five years ago, when carrying out direct numerical simulations (DNS) of isotropic turbulence at Edinburgh, we made a surprising discovery. We found that turbulence states died away at very low values of the Reynolds number and the flow became self-organised, taking the form of a Beltrami flow, which has velocity and vorticity vectors aligned. This work is reported in [1] below, and illustrated by the following figure.


Visualization of the velocity field (red arrows) and the vorticity field (blue arrows) before and after self-organization.

A video of the simulation, showing the symmetry-breaking transition, complete with characteristic ‘critical slowing down’, can be found at the online article [1]: https://doi.org/10.1088/1751-8113/48/25/25FT01
The link to the video can be found under the heading Supplementary Data. Downloading this should be straightforward using Windows, but if using a Mac you may have to have an app such as VLC installed.

The article [1] was featured on the front cover of the journal.


It was downloaded hundreds of times within a few days of publication and the total number of downloads now stands at 2708.
That sounds like a success story and you may well wonder why I want to feature this as yet another problem with peer review. The answer to that lies in the fact that we first submitted it to Physical Review Letters and that was such a bizarre experience that it deserves to be told!

Normally I would refer to the two referees as Referee A and Referee B but, as their behaviour seemed to belong to the Looking Glass world that turbulence assessment so often inhabits, I have decided to call them Tweedledum and Tweedledee.

First, Tweedledum said that he didn’t understand how we were forcing the turbulence. He had never seen anything like that before. Perhaps the strange behaviour was due to our strange forcing. He didn’t think that our Letter should be published.

Then, Tweedledee said that he didn’t understand how we were forcing the turbulence. He also had never seen anything like that before. Perhaps the strange behaviour was due to our strange forcing. He also didn’t think that our Letter should be published.

In Alice Through the Looking Glass, the twins had a famous battle. That did not happen in the present case where they were in perfect agreement; although Tweedledum (or was it Tweedledee?) suggested that perhaps if we did a lot more work and wrote it up as a much longer article, then it might be suitable for publication. This rather misses the point of having a journal like PRL!

When we submitted our paper to J. Phys. A, we pointed out the following: our method of forcing is known as negative damping; it was introduced to turbulence theory in 1965 by Jack Herring; it was first used in DNS in 1997 by Luc Machiels; it has subsequently been used in numerous investigations; and in 2005 it was studied theoretically by Doering and Petrov [2]. Not precisely an obscure technique then. But what an intellectually feeble performance from Tweedledum and Tweedledee. No wonder problems in turbulence remain unresolved for generations.
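For readers unfamiliar with it, here is a minimal sketch of one common form of negative damping (my own notation and array conventions, not necessarily the exact scheme of [1]): the Fourier modes in a low-wavenumber band are amplified in proportion to themselves, with the coefficient chosen so that energy is injected at a fixed rate.

import numpy as np

def negative_damping_forcing(u_hat, k_mag, k_f, eps_W):
    # u_hat: complex Fourier velocity modes, shape (3, number_of_modes);
    # k_mag: corresponding wavenumber magnitudes; k_f: top of the forced band.
    # f(k) = (eps_W / 2 E_f) u(k) for 0 < |k| <= k_f and zero otherwise, where E_f
    # is the energy currently in the band, so that the injection rate is eps_W.
    band = (k_mag > 0) & (k_mag <= k_f)
    E_f = 0.5 * np.sum(np.abs(u_hat[:, band]) ** 2)
    f_hat = np.zeros_like(u_hat)
    f_hat[:, band] = (eps_W / (2.0 * E_f)) * u_hat[:, band]
    return f_hat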

One might end up by wondering what if any harm had been done by the lack of scholarly behaviour on the part of these referees who were presumably chosen to be representative of the turbulence community. After all, the paper has been published and has clearly aroused quite a lot of interest. The trouble is, I suspect that J. Phys. A does not have the same visibility among turbulence researchers as PRL. In that case the numerous downloads may reflect the fact that many physicists are interested in an example of a nonlinear phase transition without necessarily having any interest in turbulence. More generally, over the years it seems to me that turbulence referees tend to exert a frictional drag on the process of publishing papers. Many of them give the impression of not wanting the pure pool of ignorance to be spoiled by any new understanding or knowledge.

[1] W. D. McComb, M. F. Linkmann, A. Berera, S. R. Yoffe, and B. Jankauskas. Self-organization and transition to turbulence in isotropic fluid motion driven by negative damping at low wavenumbers. J. Phys. A Math.Theor., 48:25FT01, 2015.
[2] Charles R. Doering and Nikola P. Petrov. Low-wavenumber forcing and turbulent energy dissipation. Progress in Turbulence, 101(1):11-18, 2005.




Should theories of turbulence be intelligible to fluid dynamicists?

Should theories of turbulence be intelligible to fluid dynamicists?

One half of the Nobel Prize in physics for 2020 was awarded to Roger Penrose for demonstrating that ‘black hole formation is a robust prediction of the General Theory of Relativity’. While it’s not my field, I do know a little about general relativity; so I had a look at what I could find online. It rapidly became clear to me that in order to understand Penrose’s work in detail, I would have to master a great deal of mathematics – topology in particular – which is unfamiliar to me. This would mean giving up everything else for a substantial period of time and that just wouldn’t make sense. So, despite knowing the basic equations of general relativity (for a simple, yet reasonably complete introduction, see reference [1]), I just have to take the word of other people that it all makes sense.

So what about theories derived from the Navier-Stokes equations by the methods of quantum field theory? Well, starting with Kraichnan, Wyld and Edwards in the early 1960s and leading up to my own LET theory [2], there exists a moderately successful class of statistical theories of turbulence which are essentially based on quantum field theory. Unfortunately, I would assume that many (most?) fluid dynamicists are as unfamiliar with the background to these as I am with the methods of Penrose in demonstrating that general relativity implies the existence of black holes. Although at least I hope that I belong to the same ‘culture’ as Penrose, in the sense that I appreciate the significance of what he has done and also why he has done it.

The question of how understandable (to turbulence researchers) statistical theories should be, was raised in lecture notes entitled ‘Problems and progress in the theory of turbulence’ [3] by Philip Saffman. In these he wrote down his list of the properties a theory should have. These were generally unexceptionable and really quite obvious. Indeed, one should perhaps bear in mind that a physicist would be very unlikely to write down a similar list, essentially because they would regard it all as being understood. The point that particularly interests me is that the second item in his list, after ‘Clear physical or engineering purpose’ is ‘Intelligibility’. It is worth quoting exactly what he says about this.

‘Intelligibility means that it can be understood, appreciated and applied by a competent scientist without years of study or familiarity with the jargon and techniques of a narrow speciality.’

Obviously, in view of what I wrote at the beginning of this post, I have a certain amount of sympathy with this view. At the same time, I feel that I should challenge it. The final phrase, ‘the jargon and techniques of a narrow speciality’, has a faint flavour of the pejorative about it, particularly when taken in conjunction with his other writings. But we are entitled to ask what he means by a ‘narrow speciality’.

His concern was with those theories of turbulence which are applications of quantum field theory, a subject that made great advances in the 1940s/50s. But quantum field theory was not a ‘narrow speciality’ in the 1970s; and is even less so today. It is a major discipline worldwide and, if we add in statistical field theory in condensed matter physics, then the activity involved would dwarf all turbulence research by orders of magnitude. Moreover, the theory in these areas is closely linked to the experimental work. There is a vast, and growing, body of work in these areas, so this cannot be seen as a narrow or esoteric activity.

Presumably then, he meant simply the applications to turbulence. For Saffman this boiled down to the work of Kraichnan, so he does not give a balanced or scholarly view of this field. Indeed, he does not cite any of the relevant papers by Kraichnan but instead relies on the book by Leslie. It is difficult to see his comments generally as being anything but an expression of frustration that there is an activity going on which he does not understand, combined with a degree of resentment because he felt that his own type of work was somehow being belittled or patronised.

There are other parts of his lecture notes that I value, such as his criticism of Kolmogorov’s 1962 ‘refined theory’; and the general tone of the lectures is undoubtedly stimulating. But although Philip Saffman is no longer here to speak for himself, I still think that his views about fundamental approaches to turbulence should be challenged, if only because similar views seem to be quite widespread today. I am occasionally surprised by how glibly members of the turbulence community are prepared to write off renormalization methods, with phrases such as ‘everyone knows that Kraichnan’s theory is wrong and no one bothers about it anymore’. Well, life is so much easier if you pass up on the challenges. But to such people, I would address the question: what have you got to put in its place?

In the mid-1970s, when Saffman was writing, the situation was very different from that today. The basic idea of the LET theory was put forward by me in 1974, incidentally offering a fundamental reason for the failure of the Edwards theory and other cognate theories, including Kraichnan’s. Since then the LET theory has been developed and extensively computed and compared to other theories. I have also published three books, all intended to make such theories more accessible to non-physicists. Two are on turbulence and one on renormalization methods; and their titles can be found in the list of my publications in this blog. So I would like to answer my own question by saying that turbulence theories are intelligible to fluid dynamicists, provided that they are open minded and are prepared to make a bit of an effort. That’s what I would like to say but I have to make one caveat. There are theories, supposedly of turbulence, which are simply a relabelling of text book equations from quantum field theory with variables appropriate to turbulence. Yet such theories do not engage with the existing body of work or explain how they solve problems that others encountered. They used to appear in obscure journals of the old Soviet Union, but now they appear in the learned journals of the west. It appears that the authors do not understand that their work is unsound or perhaps do not care. I intend to write on the subject of Fake Theories (don’t know what put that idea in my head!) but as a topic it presents its difficulties.

Lastly, for completeness, I should mention that there is a class of theories based on the use of Lagrangian coordinates. A recent development in this type of theory also presents a decent and balanced review of other work in the field [4]. I also intend to write about Lagrangian theories in a future post.

[1] W. D. McComb. Dynamics and Relativity. Oxford University Press, 1999.
[2] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[3] P. G. Saffman. Problems and progress in the theory of turbulence. In H. Fiedler, editor, Structure and Mechanisms of Turbulence II, volume 76 of Lecture Notes in Physics, pages 273-306. Springer-Verlag, 1977.
[4] Makoto Okamura. Closure model for homogeneous isotropic turbulence in the Lagrangian specification of the flow field. J. Fluid Mech., 841:133, 2018.