Free decay of isotropic turbulence as a test problem.

When I began my postgraduate research in 1966, I quickly decided that there was one problem that I would never work on. That was the free decay of the kinetic energy of turbulence from some initial value. As the subject of my postgraduate research was the turbulence closure problem, there didn’t seem to be any danger of my being asked to do so.

This particular free decay problem, as widely discussed in the literature, can, if one likes, be regarded as a reduced form of the general closure problem. Instead of trying to calculate the two-point correlation (or, equivalently, the energy spectrum), one is simply trying to calculate the decay curve with time of the total energy. This involves making various assumptions about the nature of the decay process and the most crucial seemed to be that a certain integral was constant with respect to time during the decay: this was generally referred to as the Loitsyansky invariant.

We can introduce this by considering the behaviour of the energy spectrum at small values of the wavenumber $k$, where it can be written as a Taylor expansion \[E(k,t) = E_2(t)k^2 + E_4(t)k^4 + \dots .\] Here the coefficient $E_4(t)$ corresponds, when Fourier transformed to real space, to the Loitsyansky integral, and in general it depends on time. It seemed that this was indeed invariant during decay for the case of isotropic turbulence, but it had been shown that this was not necessarily so for turbulence that was merely homogeneous. The problem was that a correlation of the velocity with the pressure, which is suppressed by symmetry in the isotropic case, exists in the more general case. The difficulty here is that the pressure can be expressed as an integral over the velocity field, so the correlation $\langle u p \rangle$ is long-range in nature, and this invalidates the proof of the invariance of $E_4$ which works in the isotropic case.
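
As an aside, these coefficients can be extracted numerically from any given spectrum. The sketch below is my own illustration, not taken from the text: it assumes the model spectrum $E(k) = k^4 e^{-2k^2}$, a common choice of initial condition in simulations, and fits $E_2$ and $E_4$ by least squares at small $k$. For this model the exact values are $E_2 = 0$ and $E_4 = 1$.

```python
import numpy as np

# A common model initial spectrum, assumed here purely for illustration:
#   E(k) = k**4 * exp(-2*k**2)
# Its exact small-k expansion has E2 = 0 and E4 = 1.
k = np.linspace(1e-4, 0.05, 500)
E = k ** 4 * np.exp(-2.0 * k ** 2)

# Least-squares fit of E(k) ~ E2*k**2 + E4*k**4 over the small-k range
A = np.stack([k ** 2, k ** 4], axis=1)
(E2, E4), *_ = np.linalg.lstsq(A, E, rcond=None)
print(E2, E4)  # E2 ~ 0, E4 ~ 1 for this model
```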

So far so good. What puzzled me at the time was that this failure in the more general case somehow seemed to contaminate the isotropic case. People working in this field seemed unwilling to rely on the invariance of $E_4$ even for isotropic turbulence. However, with the accretion of knowledge over the years (I’d like to claim wisdom as well, but that might be too big a stretch!), I believe that I understand their concerns. At the time, the only practical application of the theory was to grid turbulence; and although this was reckoned to be a good approximation to isotropic turbulence, it might not be perfect; and it might vary to some extent from one experimental apparatus to another. And just to add to the confusion, at about that time (although I didn’t know it) Saffman published a theory of grid turbulence in which $E_2(t)$ was an invariant. This led to a controversy over $E_2$ versus $E_4$ which is with us to this day.

In more recent years, I have had to weaken my position on this matter, because my students have found it interesting to do free-decay calculations, in order to compare our simulations with those of others. So when I was preparing my recent book on HIT, I decided that it would provide a good opportunity to really look into this topic. As part of this work, I was checking various results and, to my astonishment, when I worked out $E_2$ I found that it was exactly zero. This work has been published and includes a new proof of the invariance of $E_4$ which is based on conservation of energy [1]. In passing, I should note that the refereeing process for this paper was something that I found educational, and I will refer to it in future posts when I get onto the subject of peer review.

Shortly after I published this work, a paper on grid turbulence appeared and it seemed that their results suggested that $E_2$ was non-zero. I sent a copy of [1] to the author and he replied `evidently grid turbulence is less isotropic than we thought’. This struck me as a crucial point. If we are to make progress and have meaningful discussions on this topic, we need to recognise that free decay of isotropic turbulence and grid turbulence are two different problems. In fact, as things have moved on from the mid-sixties, we also have to consider DNS of free decay as being in principle a different problem. Let us now examine the three problems in turn, as follows:

1. Free decay of the turbulent kinetic energy is a mathematical problem which can be formulated precisely for homogeneous isotropic turbulence.

2. Grid-generated turbulence evolves out of an ensemble of wakes and is stationary with time and inhomogeneous in the streamwise direction. In order to make comparisons with free decay, it is necessary to invoke Taylor’s hypothesis of frozen convection.

3. DNS of freely decaying turbulence is based on the Navier-Stokes equations discretised on a lattice. Quite apart from the errors involved (analogous to experimental error in the grid-turbulence case), representation on a lattice is symmetry breaking for all continuous symmetries. The two principal ones in this case are Galilean invariance and isotropy.

Essentially, these are three different problems, and if we wish to make comparisons we have to at least bear that fact in mind. I have lost count of the many heated arguments that I have heard or taken part in over the years which ran along the lines: A says `The sky is blue!’ and B replies: `Oh no, I assure you that grass is green!’ In other words, they are not talking about the same thing. That may seem rather extreme, but suppose one party is talking about momentum conservation and the other about energy conservation. Such a waste of time and energy (and momentum, for that matter).

[1] W. D. McComb. Infrared properties of the energy spectrum in freely decaying isotropic turbulence. Phys. Rev. E, 93:013103, 2016.

Stationary isotropic turbulence as a test problem.

When I was first publishing, in the early 1970s, referees would often say something like `the author uses the turbulence in a box concept’ before going on to reveal a degree of incomprehension about what I might be doing, let alone what I actually was doing. A few years later, when direct numerical simulation (DNS) had got under way, that phrase might have had some significance; and indeed its use is now common, albeit qualified by the word `periodic’. Of course, when Fourier methods were introduced by Taylor in the 1930s, it was in the form of Fourier series. But by the 1960s it was becoming usual among theorists to briefly introduce Fourier series and then take the infinite system limit and turn them into Fourier transforms: or, increasingly, just to formulate the problem straightaway in the infinite system. However, it can be worth one’s while starting with the finite cubic box of side $L$, and thinking in terms of the basic physics, as well as the Fourier methods.

In order to represent the velocity field in terms of Fourier series, we introduce the wavevector \[\mathbf{k}=(2\pi/L)\{n_1,n_2,n_3\},\] where the integers $n_1,n_2,n_3$ all lie in the range from $-\infty$ to $\infty$. Fourier sums are taken over the discrete values of $\mathbf{k}$. Then the transition to the continuous, infinite system is made by taking the limit of infinite system size, such that \[\lim_{L\rightarrow \infty}\left(\frac{2\pi}{L}\right)^3\sum_{\mathbf{k}} = \int d^3k.\] As ever in physics, we assume that everything is well-behaved, and that both the field variables and their transforms exist and are independent of system size as we go to this limit.
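
The limiting process can be checked numerically for a well-behaved test function. The following is my own minimal sketch, not from the text: it takes $f(\mathbf{k}) = e^{-|\mathbf{k}|^2}$, for which $\int f\,d^3k = \pi^{3/2}$, and shows that the weighted lattice sum approaches this value as $L$ grows.

```python
import numpy as np

def lattice_sum(L, n_max=40):
    """(2*pi/L)^3 times the sum of exp(-|k|^2) over the box lattice
    k_i = (2*pi/L)*n_i, truncated at |n_i| <= n_max."""
    h = 2 * np.pi / L                 # lattice spacing in wavenumber space
    k = h * np.arange(-n_max, n_max + 1)
    # The Gaussian test function is separable, so the 3D lattice sum
    # factorises into the cube of a 1D sum.
    s1 = np.sum(np.exp(-k ** 2))
    return (h * s1) ** 3

exact = np.pi ** 1.5                  # integral of exp(-|k|^2) over all k
for L in (2.0, 5.0, 20.0):
    print(f"L = {L:5.1f}: lattice sum = {lattice_sum(L):.6f}, exact = {exact:.6f}")
```

For small $L$ the lattice is too coarse and the sum is far from the integral; by $L = 20$ the two agree to many decimal places.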

We do not have to restrict these ideas to the Fourier representation. They are generally true when we make the transition from classical mechanics to continuum mechanics. To do this, we begin with a finite system and replace discrete objects by densities. A continuous (or field) representation is introduced by defining continuous densities in the limit of infinite system size. All physical observables must be expressed in terms of densities or rates. They cannot depend on the size of the system, otherwise we would be unable to take the continuum limit. So, if we formulate turbulence in real space in terms of structure functions in a box, then theoretical expressions for the structure functions (or equivalently, the moments) must not depend on the size of the box. This provides us with a basic first test for any theory; and to our knowledge there have been some surprising failures to recognise this. We will come back to two specific examples presently. First we will look at the general question of how to test theories.

Now, stationary isotropic turbulence can be rigorously formulated as a mathematical problem, where `rigour’ is taken to be in the sense of theoretical physics, but it does not occur in nature or indeed in the laboratory. It is true that it may occur to a reasonable approximation in geophysical and astronomical flows, but at the moment it seems that DNS might be our best bet for testing mathematical theories of isotropic turbulence. So it behoves us to examine the question: how representative is DNS of the mathematical problem that we are studying?

Well, of course DNS has been an active field of research for several decades now and this aspect has not been neglected. Nevertheless, one is left with the impression that it is very much a pragmatic activity, governed by `rule of thumb’ methods. For instance, when we began DNS at Edinburgh in the 1990s, I asked around for advice on the maximum value of the wavenumber that we should use, as this seemed to vary from less than the Kolmogorov dissipation wavenumber to very much greater. The consensus of advice that I received was to choose $k_{max} = 1.5 k_d$, and this is what we did. Later on, in 2001, we demonstrated a rational procedure for choosing $k_{max}$: see Figure 2 of reference [1] or Figure 1.6 of reference [2]. One conclusion that emerges from this is that to resolve the dissipation rate might mean devoting one’s entire simulation to the dissipation range of wavenumbers!
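
The kind of consideration involved in choosing $k_{max}$ can be illustrated with a toy calculation. The sketch below is my own and is not the procedure of the references: it assumes a model spectrum with a $k^{-5/3}$ inertial range and a simple exponential dissipation range, $E(k) \propto k^{-5/3}\exp(-\beta k/k_d)$, with the commonly quoted value $\beta \approx 5.2$ (both the functional form and $\beta$ are assumptions), and computes the fraction of the dissipation integral $\int k^2 E(k)\,dk$ captured below a given cutoff.

```python
import numpy as np

# Model spectrum (an assumption for illustration only):
#   E(k) ~ k**(-5/3) * exp(-BETA * k/kd)
BETA = 5.2  # commonly quoted value for the dissipation-range exponent

def _trapezoid(y, x):
    # simple trapezoidal rule, to avoid depending on numpy version details
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def resolved_dissipation_fraction(kmax_over_kd, beta=BETA):
    """Fraction of int k^2 E(k) dk lying below k_max (x = k/kd)."""
    x = np.linspace(1e-8, 30.0, 300001)        # 30 kd is effectively infinity here
    d = x ** (1.0 / 3.0) * np.exp(-beta * x)   # proportional to k^2 E(k)
    total = _trapezoid(d, x)
    m = x <= kmax_over_kd
    return _trapezoid(d[m], x[m]) / total

for r in (0.5, 1.0, 1.5, 2.0):
    print(f"k_max = {r:.1f} kd -> resolved fraction = {resolved_dissipation_fraction(r):.4f}")
```

For this particular model, $k_{max} = 1.5 k_d$ captures the great majority of the dissipation integral; the point of the exercise is only that the choice of cutoff can be assessed quantitatively rather than by rule of thumb.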

In recent years there seems to have been more emphasis on resolving the largest scales of the turbulence, although much of this work has been for the case of free decay. But concerns remain, particularly in terms of experimental error. It is also necessary to note a fundamental problem. The mere fact of representing the continuum NSE on a discrete lattice is symmetry breaking for Galilean invariance and isotropy, to name but two. I’m not sure how one can take this into account, except by considering a transition towards the continuum limit and looking for asymptotic behaviour. This could involve starting with a `fully resolved’ simulation and looking at increasingly finer mesh sizes. To say the least, this would be very expensive in terms of computer storage and run time. Naturally, workers in the field always want the highest possible Reynolds number. But if you begin with low Reynolds numbers, the simulations are cheap and easy to do, and you can learn something from the variation of observables with Reynolds number. There exist some well-known simulations that have employed vast resources to achieve enormous Reynolds numbers and yet provide only a few spot values without any error bars, with no indication of asymptotic behaviour, and I understand that there are suspicions about how well resolved they are. An awful warning to us all!

Lastly, two more awful warnings. First, as we discussed in the previous post, Kraichnan’s asymptotic solution of DIA depends on the largest scale of the system. That in itself is enough to rule it out as unphysical, whether one accepts Kolmogorov (1941) or not. However, as I pointed out, our computations at Edinburgh do not support this asymptotic form, which was obtained analytically using approximations that Kraichnan found plausible. A critical examination of that analysis is in my opinion long overdue.

Secondly, we have the Kolmogorov (1962) form of the energy spectrum, which also depends on the largest scale of the system. Probably few people now take this work seriously, but its baleful presence influences the turbulence community and lends credence to the increasingly unrealistic idea of intermittency corrections. In fact it has recently been shown that the inclusion of the largest scale destroys the widely observed scaling on Kolmogorov variables [3]. This should have been obvious, without any need to plot the graphs!

[1] W. D. McComb, A. Hunter, and C. Johnston. Conditional mode-elimination and the subgrid-modelling problem for isotropic turbulence. Phys. Fluids, 13:2030, 2001.

[2] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.

[3] W. D. McComb and M. Q. May. The effect of Kolmogorov (1962) scaling on the universality of turbulence energy spectra. arXiv:1812.09174 [physics.flu-dyn], 2018.

Asymptotic behaviour of the Direct Interaction Approximation.

As mentioned previously, Kraichnan’s asymptotic solution of the DIA, for high Reynolds numbers and large wavenumbers, did not agree with the observed asymptotic behaviour of turbulence. His expression for the spectrum was $E(k)=C’\varepsilon^{1/2}U^{1/2}k^{-3/2}$, where $U$ is the root-mean-square velocity and $C’$ is a constant. In 1964 (see [1] for the reference) he wrote: `Recent experimental evidence gives strong support to [the Kolmogorov `-5/3’ form] and rules out [the `-3/2’ form above] as a correct asymptotic law.’

However, Kraichnan’s result is not actually an asymptotic form. The rms velocity $U$ is in fact part of the solution, not the initial conditions. We may underline this by writing $U= [\int_0^\infty E(k)\,dk]^{1/2}$ (apart from a numerical factor), which allows us to rewrite the Kraichnan result as $E(k)=C’ \varepsilon^{1/2}\left[\int_0^\infty E(k)\,dk\right]^{1/4} k^{-3/2}$. So, far from being an asymptotic solution, this is in effect a transcendental equation for the energy spectrum.
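
The self-consistent character of this relation can be made concrete with a toy calculation. The sketch below is my own illustration, with arbitrary choices $C’ = \varepsilon = 1$ and an assumed power-law spectrum on the range $1 \leq k \leq 100$ (all assumptions, purely for illustration): the amplitude $A$ of $E(k) = A k^{-3/2}$ must satisfy a fixed-point condition, which simple iteration solves.

```python
# Toy version of the transcendental relation
#   E(k) = C' * eps**0.5 * (int E dk)**0.25 * k**(-3/2).
# Assume E(k) = A * k**(-3/2) on an arbitrary range [k1, k2]; then A must
# satisfy A = C' * eps**0.5 * (A*J)**0.25, with J = int k**(-3/2) dk.
C_PRIME, EPS = 1.0, 1.0       # arbitrary illustrative values
k1, k2 = 1.0, 100.0           # arbitrary cutoffs
J = 2.0 * (k1 ** -0.5 - k2 ** -0.5)   # exact integral of k^(-3/2)

A = 1.0                       # initial guess; iterate the fixed-point map
for _ in range(50):
    A = C_PRIME * EPS ** 0.5 * (A * J) ** 0.25

# Closed-form solution of A**(3/4) = C' * eps**0.5 * J**0.25
closed_form = (C_PRIME * EPS ** 0.5) ** (4.0 / 3.0) * J ** (1.0 / 3.0)
print(A, closed_form)
```

The iteration converges because the right-hand side depends on $A$ only through the power $1/4$; the fixed point agrees with the closed-form amplitude.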

Now you may object that the dissipation rate is also part of the solution, rather than of the initial conditions, and hence this is also a criticism of the Kolmogorov form. But this is not so. The dissipation only appears because it is equal to the inertial transfer rate. From the simple physics of the inertial range in wavenumber space, the appropriate quantity is the maximum value of the inertial flux of energy through modes, which we will denote by $\varepsilon_T$. Hence the Kolmogorov form should really be $E(k) \sim \varepsilon_T^{2/3}k^{-5/3}$. Of course Kolmogorov worked in real space and derived the `2/3’ law. But in 1941 Obukhov recognised that in wavenumber space the relevant quantity was the scale-invariant energy flux, as did Onsager a few years later.

A way of putting the Kraichnan result in a more asymptotic form was given by McComb and Yoffe [1], who made use of the asymptotic Taylor surrogate for the dissipation rate, $\varepsilon = C_{\varepsilon,\infty} U^3/L$, where $L$ is the integral length scale and $C_{\varepsilon,\infty} = 0.468 \pm 0.006$ [2], to substitute for $U$ in the Kraichnan spectrum. Eliminating $U$ in this way, \[U = \left(\frac{\varepsilon L}{C_{\varepsilon,\infty}}\right)^{1/3} \quad\Rightarrow\quad E(k) = C’C_{\varepsilon,\infty}^{-1/6}\,\varepsilon^{2/3}L^{\beta}k^{-5/3 + \beta},\] where $\beta = 1/6$. Note that we have changed $\mu$ in that reference to $\beta$ in order to avoid any confusion with the so-called intermittency correction, which is normally represented by that symbol.

Kraichnan only computed the Eulerian DIA for free decay at low Reynolds numbers. However, in 1989 McComb, Shanmugasundaram and Hutchinson [3] reported calculations for free decay of both DIA and LET for Taylor-Reynolds numbers in the range $0.5 \leq R_{\lambda}(t_f) \leq 1009$, where $t_f$ is the final time of the computation. These results do not support the asymptotic form of the DIA energy spectrum, as given above. It was found that (for example) at $R_{\lambda}(t_f) = 533$ the two theories were virtually indistinguishable and both gave the Kolmogorov spectrum to within the accuracy of the numerical methods. It was shown that this result was not an artefact of the initial conditions by taking $k^{-3/2}$ as the initial spectrum, whereupon it was found that both theories evolved away from this form to once again give $k^{-5/3}$ as the final spectrum.

There is much that remains to be understood about Eulerian turbulence theories and the behaviour of two-time correlations.

[1] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[2] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.
[3] W. D. McComb, V. Shanmugasundaram, and P. Hutchinson. Velocity derivative skewness and two-time velocity correlations of isotropic turbulence as predicted by the LET theory. J. Fluid Mech., 208:91, 1989.

A brief summary of two-point renormalized perturbation theories.

In the previous post we discussed the introduction of Kraichnan’s DIA, based on a combination of a mean-field assumption and a new kind of perturbation theory, and how it was supported by Wyld’s formalism, itself based on a conventional perturbation expansion of the NSE. This was not too surprising, as Kraichnan’s mean-field assumption involved his infinitesimal response function, which the Wyld comparison showed was the same as the viscous response function, and hence not a random variable. By 1961 it was known that the asymptotic solution of DIA was incorrect, with implications for the Wyld formalism (and later for the MSR formalism: see the previous post).

The next step forward was the theory of Edwards [1] in 1964, which was restricted to the more limited single-time covariance and also to the stationary case. This took as its starting point the Liouville equation for $P$, the probability distribution functional of the velocity field, and went beyond the mean-field case to calculate corrections to it self-consistently. That is, Edwards made the substitution $P \equiv P_0 + (P - P_0)$ and then expanded in powers of the correction term $\Delta P = P - P_0$. Then, taking $P_0$ to be Gaussian, and exploiting the symmetries of the system, Edwards gave a highly intuitive treatment of the problem, in which he drew strongly on an analogy with the theory of Brownian motion. It turned out that the resulting theory was closely related to the DIA and, like it, did not agree with the Kolmogorov spectrum.

The following year Herring [2], using formal methods of many-body theory, produced a self-consistent field theory which was much more abstract than the Edwards one, but yielded the same energy equation. Then, in 1966 he generalised this theory to the two-time case [3]. All three theories [1-3] led to the same energy equation as DIA, but all differed in the form of the response equation.

Now, it is in the introduction of the response equation that the renormalization takes place, and it is in the form of the response equation that the deviation from Kolmogorov lies, so this difference between these response equations raises fundamental questions about all these theories. Various interpretations were offered at the time, but these were all phenomenological in character. It was much later that a uniform, fundamental diagnosis was offered and I will come on to that presently. But this was the situation when I began post-graduate research with Sam Edwards in October 1966. The exciting developments of the previous decade seemed to be leading to a dead end, and my first task was to choose the response function of the Edwards theory in a new way, such that it maximised the turbulent entropy [4].

On the basis of the Edwards analysis, his theory had failed under the extreme circumstances of an infinite Reynolds number limit, in which the input was modelled by a delta-function at the origin in $k$-space and the dissipation was represented by a delta-function at $k=\infty$. Edwards argued that under these circumstances the Kolmogorov spectrum would apply at all wavenumbers, and in his original theory this led to an infra-red divergence in the integral for the response function. (Note: Kraichnan used the scale-invariance of the inertial flux $\Pi$ as his criterion for the inertial range, but the two methods are mathematically equivalent.) The `maximum entropy’ theory [4] certainly achieved the result of eliminating the infra-red divergence, but that was about as much as one could say for it. It became clearer to me later that it was not a very sound approach.

It is a truism in statistical physics that a system is dominated by either entropy or energy. If we consider a system made of many microscopic magnets on a lattice, then the entropy will determine the distribution. However, if we switch on a powerful external magnetic field, all the little magnets will line up with it and (small fluctuations aside) entropy has no say in the matter! It is just like that in turbulence. The system is dominated by a symmetry-breaking current of energy through the modes, running from small to large wavenumbers, where it is dissipated by viscosity. There is no real reason to assume that entropy determines the turbulence response.

When I was in my first post-doctoral job, I gave a talk to some theorists. I explained my early ideas on how energy transfer might determine the turbulence response. They heard me out politely, and then I made the mistake of mentioning the maximum entropy work. Immediately they became enthusiastic. ‘Tell us about that’, they said. The impression they gave was ‘now that’s a real theory!’ I was in awe of them as they were much older and more experienced than me, and talked so authoritatively about all aspects of theoretical physics. Nevertheless, this was my first inkling of conventional thinking. The implication seemed to be: it was a text-book method, so it must be good.

Over the next few years I developed the local energy transfer (LET) theory [5, 6], and also offered a unified explanation of the failure of first-generation renormalized perturbation theories. The further extension of this work to the two-time case has had a rather chequered history and will be the subject of further posts.

[1] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[2] J. R. Herring. Self-consistent field approach to turbulence theory. Phys. Fluids, 8:2219, 1965.
[3] J. R. Herring. Self-consistent field approach to nonstationary turbulence. Phys. Fluids, 9:2106, 1966.
[4] S. F. Edwards and W. D. McComb. Statistical mechanics far from equilibrium. J. Phys. A, 2:157, 1969.
[5] W. D. McComb. A local energy transfer theory of isotropic turbulence. J. Phys. A, 7(5):632, 1974.
[6] W. D. McComb. The inertial range spectrum from a local energy transfer theory of isotropic turbulence. J. Phys. A, 9:179, 1976.

Theories versus formalisms

After the catastrophe of quasi-normality, the modern era of turbulence theory began in the late 1950s, with a series of papers by Kraichnan in the Physical Review, culminating in the formal presentation of his direct-interaction approximation (DIA) in JFM in 1959 [1].

The next step was the paper by Wyld [2], which set out a formal treatment of the turbulence problem based on, and very much in the language of, quantum field theory. Wyld carried out a conventional perturbation theory, based on the viscous response of a fluid to a random stirring force. He showed how simple diagrams could be used with combinatorics to generate all the terms in an infinite series for the two-point correlation function. He also showed that terms could be classified by the topological properties of their corresponding diagrams. In this way, he found that one class of terms could be summed exactly and that another could be re-expressed in terms of partially summed series, thus introducing the idea of renormalization. In other words, the exact correlation could be expressed as an expansion in terms of itself and a renormalized response function (or propagator). In a sense, this could be regarded as a general solution of the problem, but obviously one that by itself does not provide a tractable theory. In short, it is a formalism.

As an aside, I should just mention that Wyld’s paper was evidently very much written for theoretical physicists. That is no reason why any competent applied mathematician shouldn’t follow it, but one suspects that few did. Also, the work has been subject to a degree of criticism: the current version may be found as the improved Wyld-Lee theory in #8 of the list of My Recent Papers on this website. But this does not affect anything I will say here and I will return to this topic in a future blog.

In contrast, Kraichnan began by introducing the infinitesimal response function $\hat{G}$, which connected an infinitesimal change in the stirring forces to an infinitesimal change in the velocity field. He made this the basis of what he claimed was an unconventional (superior?) perturbation theory, making use of ideas like weak dependence, maximal randomness, and direct interaction. Unfortunately these ideas did not attract general agreement, and I suspect that he found the refereeing process with JFM, and the subsequent experience of the Marseille Conference (see the previous blog), rather bruising. Apparently he said: `The optimism of British applied mathematicians is unbounded.’ Then, after a pause: `From below.’ I was told this by Sam Edwards when I was a postgraduate student. Sam obviously appreciated the interplay of wit and cynicism.

Now, in completing his theory, Kraichnan made the substitution $\hat{G}= G \equiv \langle \hat{G} \rangle$, which is in effect a mean-field approximation. It is therefore worth noting that, when the conventional perturbation formalism of Wyld is truncated at second order in the renormalized expansion, the equations of Kraichnan’s DIA are recovered. This is important because it suggests that this particular mean-field approximation is in fact justified. However, we know that Kraichnan came to the conclusion that his theory was wrong, at least in terms of its asymptotic behaviour at high Reynolds numbers: see the previous blog.

This has the immediate implication that Wyld’s formalism is also wrong when truncated at second order. The same is true of the later functional formalism of Martin, Siggia and Rose [3]. Kraichnan came to the conclusion that his DIA approach should be carried out in a mixed Eulerian-Lagrangian coordinate system; and, if correct, that would presumably also apply to the two formalisms. However, there is also the question of whether or not it is appropriate to treat the system response as one would in dynamical systems theory. After all, the stirring forces in a fluid first have to create the system, and only then do they maintain it against the dissipative effects of viscosity. We will return to this aspect in future blogs.
[1] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[2] H. W. Wyld Jr. Formulation of the theory of turbulence in an incompressible fluid. Ann. Phys., 14:143, 1961.
[3] P. C. Martin, E. D. Siggia, and H. A. Rose. Statistical Dynamics of Classical Systems. Phys. Rev. A, 8(1):423-437, 1973.

Marseille (1961): a paradoxical outcome.

When I was first at Edinburgh, in the early 1970s, a number of samizdat-like documents, of entirely mysterious provenance, were being passed around. One that came my way was a paper by Lumley which contained some rather interesting ideas for treating the problem of turbulent diffusion. I expect that it is still in my filing system; but, with the Covid-19 lockdown, I am cut off from my university office and unable to refresh my memory. Later on I encountered the paper by Proudman which criticised Kraichnan’s theory of turbulence – the Direct-Interaction Approximation – and by that time I had presumably heard about the meeting held in Marseille in 1961. Of course my ignorance is not all that surprising, in that the meeting, which was the source of these papers, took place five years before I began my postgraduate research. In any case, I must have known about it by the late 1980s, as these papers are correctly referenced in my 1990 book on the physics of turbulence.

An interesting and informal account of this meeting is given by Moffatt in his review [1], which is essentially an appreciation of the life and work of G. K. Batchelor, and accordingly the meeting is seen, as it were, through this prism. Having told the story of how Batchelor discovered the work of Kolmogorov, while searching through the literature of turbulence in the library of the Cambridge Philosophical Society; and how he had expanded the short and rather cryptic papers of Kolmogorov into what was to become a seminal work on the subject [2], Moffatt sees the Marseille meeting as a ‘watershed’ in the study of turbulence. In support of this, he highlights two contributions to the meeting.

First, there is the report by Stewart of experimental measurements of energy spectra carried out in the channel between Vancouver Island and the mainland. This investigation achieved values of the Taylor-Reynolds number up to about 3000, and several decades of power-law behaviour, which appeared to support the Kolmogorov $-5/3$ spectrum. This work was published the following year [3].

Secondly, there was a lecture by Kolmogorov, also published in the following year [4], in which he outlined a refinement (sic) of his 1941 theory in response to a criticism by Landau. His conclusion was that the power of $-5/3$ should be subject to a small correction $\mu$; but he was unable to obtain a value for $\mu$.

There is an element of contradiction here, but it could be resolved quite trivially if one were to find that the two results agreed within experimental error. So that in itself is not a paradox. The paradox that I have in mind arises in a different way.

Moffatt discusses the fact that Batchelor essentially gave up turbulence as his main research interest after this meeting. His argument appears to be that Batchelor was already becoming discouraged by the difficulties of the subject. And, given that a major part of his own research had been the interpretation and dissemination of the Kolmogorov (1941) theory, Kolmogorov’s lecture at this meeting may well have been the last straw!

Another possibility, which Moffatt doesn’t mention, is that Batchelor may have found the new wave of theoretical-physics approaches, as initiated by Kraichnan, not only complicated but also part of an alien culture, to the extent that this too was discouraging. I have a personal note that I can add here. I only met Batchelor once: in 1967, when he examined my Master’s thesis. At one point he had some difficulty with the units, where I was giving a quantum physics analogy, and I pointed out that there would be a Planck’s constant involved, but that I was working in units where Planck’s constant was unity. At another stage he pointed out that he was, at the risk of being accused of cynicism, no more optimistic about these new quantum-inspired approaches than about anything else. And that was with Sam Edwards, who had published a theory of turbulence in JFM three years earlier, also in the room! I am quite sure that forty (or more) years on, there would be many in turbulence research who would eagerly say that he had proved to be right. But following one’s prejudices, rather than engaging with a subject, is the abnegation of scholarship. Sometimes the truth lies deep.

However, another major discouragement took place at this meeting. Kraichnan was predicting an inertial-range spectrum with an exponent of $-3/2$. Even if the results of Grant et al. [3] were compatible with a small correction to $-5/3$, they were certainly good enough to convincingly rule out Kraichnan’s rival $-3/2$ exponent. As a result, Kraichnan had to look at his theory again, and over a period of several years he became convinced that the problem was insoluble in Eulerian coordinates, and that there was a need to change to a mixed coordinate system which he called Lagrangian-History coordinates. The result was an immensely complicated theory, which not only had to be abridged in order to permit computation, but also depended on the way in which it was formulated. This has left a legacy of other workers who employ a more conventional Lagrangian system.

This, then, is the paradox that I had in mind. The outcome of the meeting, put in very broad brush terms, is that Batchelor changed his mind because Kolmogorov (1941) was wrong and Kraichnan changed his mind because it was correct. It cannot be said that progress in turbulence is ever smooth.
[1] H. K. Moffatt. G. K. Batchelor and the Homogenization of Turbulence. Ann. Rev. Fluid Mech., 34:19-35, 2002.
[2] G. K. Batchelor. Kolmogorov’s theory of locally isotropic turbulence. Proc. Camb. Philos. Soc., 43:533, 1947.
[3] H. L. Grant, R. W. Stewart, and A. Moilliet. Turbulence spectra from a tidal channel. J. Fluid Mech., 12:241-268, 1962.
[4] A. N. Kolmogorov. A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number. J. Fluid Mech., 13:82-85, 1962.

Which Navier-Stokes equation do you use?

Which Navier-Stokes equation do you use?

In the first half of 1999, a major turbulence programme was held at the Isaac Newton Institute in Cambridge. On those days when there were no lectures or seminars during the morning, a large group of us used to meet for coffee and discussions. In my view these discussions were easily the most enjoyable aspect of the programme. On one particular morning, as a prelude to making some point, I said that I was probably unusual in that I have taught the derivation of the Navier-Stokes equation (NSE) as continuum mechanics to engineering students and by statistical mechanics to physicists and mathematicians. The general reaction was that I was not merely unusual, but surely unique! I gathered, from comments made, that everyone present saw the NSE as part of continuum mechanics.

Of course the two forms of NSE are apparently identical, otherwise one could not refer to both as the Navier-Stokes equation. Nevertheless, when one comes to consider the infinite Reynolds number limit, it is necessary to become rather more particular. We can start doing this by stating the two forms, as follows.

First, the continuum-mechanical NSE is exact for a continuous fluid which shows Newtonian behaviour under all circumstances of interest.

Secondly, the statistical-mechanical NSE is the first approximation to the exact statistical-mechanical equations of motion. So in principle it should be followed by a statement to the effect that there are higher-order terms.

Now strictly, if we want to consider cases where the continuum approximation breaks down, we should be using the second of these forms. Batchelor argued that in the limit of zero viscosity (at constant dissipation rate) the dissipation would be concentrated at infinity in wavenumber space. Edwards [1] went further and represented this dissipation by a delta-function at $k=\infty$ and matched it with a delta-function input of energy at $k=0$. In this way he could obtain an infinitely long inertial range and assume that the $-5/3$ spectrum applied everywhere, as a test of his closure approximation.

The Edwards procedure is valid, because he was applying it to a closure of the (in effect) continuum-mechanical NSE, as indeed is everyone else who discusses behaviour at large Reynolds numbers; or, for that matter, statistical closures. But the question of the validity of this model arises when people consider the breakdown of the NSE. This actually requires some consideration of the basic physics, which in this case means statistical mechanics; and, essentially this boils down to the following: The general requirement for the continuum limit to be valid is that the smallest length-scale of the fluid motion should be much larger than the mean free path of the fluid’s molecules.

The only example of this being looked at quantitatively, that I know of, may be found in Section 1.3 of the book by Leslie [2]. He considered flow in a pipe at a Reynolds number of $10^6$, with a pipe diameter of $10mm =10^{-2}m$, which he described as an extreme case. In Section 2.8 of his book, he calculates the minimum eddy size to be greater than $10^{-4}mm =10^{-7}m$. He notes that for a liquid the mean free path is of the order of the atomic dimensions and thus about $10^{-10}m$ and hence the use of a continuum form is very well justified. He further comments: ‘It [the continuum limit] is also satisfied, although not by such a comfortable margin, by any gas dense enough to produce a Reynolds number of $10^6$ in a passage only $10mm$ in diameter.’
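Leslie’s order-of-magnitude argument is easy to reproduce. The following sketch uses the standard estimate for the smallest eddy size, $\eta \sim D\,Re^{-3/4}$, where $D$ is the pipe diameter; this is my reconstruction, not Leslie’s actual calculation, and the mean free path is the nominal value quoted above.

```python
# Rough reconstruction of Leslie's continuum-limit check (not his calculation).
# Smallest eddy size estimated from the Kolmogorov scaling eta ~ D * Re**(-3/4).
Re = 1e6            # Reynolds number
D = 1e-2            # pipe diameter in metres (10 mm)
mfp_liquid = 1e-10  # mean free path of a liquid, ~atomic dimensions (m)

eta = D * Re ** (-0.75)
ratio = eta / mfp_liquid
print(f"minimum eddy size ~ {eta:.1e} m")           # ~3.2e-07 m, i.e. above 1e-7 m
print(f"eddy size / mean free path ~ {ratio:.0f}")  # more than three orders of magnitude
```

This confirms that, for the liquid case, the continuum limit is satisfied by a margin of more than three orders of magnitude.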

I think that it would be a good idea if those who discuss cases where a theory based on the Navier-Stokes equation is supposed to break down actually put in some numbers to indicate where their revised theory would be applicable and the NSE wouldn’t. Or perhaps it might be salutary to consider in detail the variation of significant quantities with increasing Reynolds number, and identify the smooth development of asymptotic behaviour. I will return to this point in future posts.

Anyone who would like an introductory discussion of the derivation of macroscopic balance equations from statistical mechanics should consult Section 7.6 of my book Study notes for statistical physics, which may be downloaded free of charge from Bookboon.com.

[1] S. F. Edwards. Turbulence in hydrodynamics and plasma physics. In Proc. Int. Conf. on Plasma Physics, Trieste, page 595. IAEA, 1965.
[2] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.

Turbulence as a quantum field theory: 2

Turbulence as a quantum field theory: 2
In the previous post, we specified the problem of stationary, isotropic turbulence, and discussed the nature of turbulence phenomenology, insofar as it is relevant to taking our first steps in a field-theoretic approach. Now we will extend that specification in order to allow us to concentrate on the renormalization group (RG).

RG originated in quantum field theory in the 1950s, but is best known for its successes in critical phenomena in the 1970s, along with the creation of the new subject of statistical field theory. Essentially it began as a method of exploiting scale invariance, and ended up as a method of detecting it, and also establishing the conditions under which it would hold. It is most easily understood in the theory of ferromagnetism, where we can envisage a model consisting of lots of little atomic magnets on a lattice. These atomic magnets (or lattice spins) interact with each other and, if we call the interaction energy for any pair $J$, this energy appears in the partition function as $J/k_B T$, where $k_B$ is the Boltzmann constant, and $T$ is the absolute temperature. This quantity is the coupling constant.

Now RG consists of coarse-graining our microscopic description, and then re-scaling it, to see if we can get back to where we started. If so, that would be a fixed point. In practice, we might expect to carry out this transformation a number of times, in order to reach such a fixed point. So in effect we are progressively reducing the number of degrees of freedom. This involves some sort of partial average at each step, in contrast to a full ensemble average, which takes you down from a vast number of degrees of freedom to just the few numbers needed to describe a system.

Actually, merely by waving our hands about, we can deduce something about the fixed points of our lattice model of a ferromagnet. If we consider very high temperatures, then the coupling strength will be reduced to zero. The lattice spins will have a Gaussian probability distribution. We can envisage that this will be a fixed point, as no amount of coarse-graining will change it from a purely random distribution. At the other extreme, as the temperature tends to zero, the coupling tends to infinity and there can be no random behaviour: the spins will all line up. Once again, perfect order cannot be changed by coarse graining, and this also is a fixed point. What happens in between these extremes is interesting. As the temperature is reduced from some very large value, clumps of aligned spins will occur as fluctuations. The size of these fluctuations is characterised by the correlation length. As the temperature approaches some critical value $T_c$ from above, the correlation length will tend to infinity. When this occurs, it is no longer possible to coarse-grain away the ordering, as it exists on all scales. This fixed point is the critical point of the lattice.
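The flow towards a trivial fixed point can be illustrated with the simplest exactly solvable case: decimation of the one-dimensional Ising chain, where tracing out every second spin gives the exact recursion $\tanh K' = \tanh^2 K$ for the coupling $K = J/k_B T$. (In one dimension there is no critical fixed point; that requires higher dimensions. But the flow to $K=0$ illustrates the coarse-graining idea.)

```python
import math

def decimate(K: float) -> float:
    """One exact real-space RG step for the 1D Ising chain:
    tracing out every second spin gives tanh(K') = tanh(K)**2."""
    return math.atanh(math.tanh(K) ** 2)

# Any finite coupling flows towards the high-temperature fixed point K = 0;
# only K = 0 and K = infinity (perfect order) are fixed points in 1D.
K = 1.0
for step in range(6):
    K = decimate(K)
    print(f"step {step + 1}: K = {K:.3e}")
```

After half a dozen coarse-graining steps the coupling is effectively zero: the ordering has been averaged away, just as in the high-temperature argument above.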

So, RG applied to the model identifies the high- and low-temperature fixed points, which are trivial; and the critical fixed point which corresponds to the onset of ferromagnetism. This is known as real space RG and I have given a fuller account (with pictures!) elsewhere [1]. For completeness, I should mention that the momentum-space analytical treatment involves Gaussian perturbation theory in order to evaluate parameters associated with the critical point. Also, the temperature in this context is known as a control parameter.

Variation of the coupling strength with wavenumber in isotropic turbulence.

In turbulence, the degrees of freedom are the independently excited Fourier modes. The coupling parameter for each mode can be identified with Batchelor’s Reynolds number (see my earlier post on 23/04/20) which takes the form $R(k)=[E(k)]^{1/2}/\nu k^{1/2}$. Using the schematic energy spectrum, as given in the preceding post, we can identify the trivial fixed points where the coupling falls to zero. This is because the spectrum is known to go to zero at least as fast as $k^4$ as $k\rightarrow 0$ and to zero exponentially as $k\rightarrow \infty$. By analogy with quantum field theory, we refer to these points as being asymptotically free in the infra-red and the ultra-violet, respectively. In order to compare with magnetism, we can argue that the $k=0$ fixed point is analogous to the high-temperature fixed point, where the low-$k$ motion is random (Gaussian) due to the stirring, whereas at large $k$ the motion is damped by viscosity and is analogous to the low-temperature fixed point. In the figure we identify another possible, but non-trivial, fixed point where the inertial range is represented by the Kolmogorov $k^{-5/3}$ spectrum. A power law, being scale-free, is likely to be associated with a fixed point of the RG transformations.
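This asymptotic freedom at both ends is easy to check numerically. The spectrum used here, $E(k) = k^4 e^{-2k}$ (arbitrary units), is an assumed schematic form chosen only to have the $k^4$ infra-red and exponential ultra-violet behaviour just described:

```python
import math

nu = 0.01  # kinematic viscosity, arbitrary units

def E(k: float) -> float:
    """Assumed schematic spectrum: k^4 at low k, exponential fall-off at high k."""
    return k ** 4 * math.exp(-2.0 * k)

def R(k: float) -> float:
    """Batchelor's mode Reynolds number R(k) = [E(k)]^{1/2} / (nu * k^{1/2})."""
    return math.sqrt(E(k)) / (nu * math.sqrt(k))

# The coupling is weak at both ends of the spectrum (asymptotic freedom)
# and strongest at intermediate wavenumbers.
for k in (1e-3, 1.5, 20.0):
    print(f"R({k}) = {R(k):.3g}")
```

Whatever the details of the spectrum, the coupling falls to zero at both ends and peaks at intermediate wavenumbers, which is where any non-trivial fixed point must be sought.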

In order to carry out calculations, we seek to eliminate modes progressively in bands, first $k_1\leq k\leq k_0$, then $k_2 \leq k \leq k_1$, and so on. At the first stage, the effect of the eliminated modes results in an increase in the viscosity: $\nu_0 \rightarrow \nu_1 = \nu_0 + \delta \nu_0$. We then rescale on the increased viscosity, and repeat the process. Note that we rename the molecular viscosity $\nu = \nu_0$ for this purpose. Also note that it can be a little counter-intuitive to associate the zero subscript with the maximum value of $k$, but we want an increasing index as we reduce $k$, leading on to a recurrence relation which may reach a fixed point.

In the theory of magnetism, the lattice spacing $a$ is used to define the maximum wavenumber, thus $k_{max} = 2\pi/a$. In turbulence, sometimes the Kolmogorov wavenumber is used for the maximum, but this is likely to be incorrect by at least an order of magnitude. A better definition has been given [2] in terms of the dissipation integral, thus: $\varepsilon = \int_0^\infty 2\nu_0 k^2 E(k) dk \simeq \int_0^{k_{max}}2\nu_0 k^2 E(k) dk$.

I shall highlight two calculations here. Forster et al. [3] carried out an RG calculation by restricting the wavenumbers considered to a region near the origin. This was very much a Gaussian perturbation theory of the type used in the study of critical phenomena. They did not refer to this as turbulence, and instead considered it as the large-scale asymptotic behaviour of randomly stirred fluid motion.

Later, McComb and Watt [4] introduced a form of conditional average which allowed the RG transformation to be formulated as an approximation, valid even at large wavenumbers. They were able to find a non-trivial fixed point which corresponded to the onset of the inertial (power-law) range and gave a good value of the Kolmogorov spectral constant. This work has been carried on and refined, but is very largely ignored. In contrast, Forster et al. seem to have established a new paradigm of Gaussian fluid motion, which permits the application of field-theoretic RG methods that rely on the simplifications of that paradigm. There is, however, one difference. Nowadays people publishing in this field describe it as turbulence! The most up-to-date treatment of the conditional averaging method will be found in [5].

[1] W. D. McComb. Renormalization Methods. Oxford University Press, 2004.

[2] W. D. McComb. Application of Renormalization Group methods to the subgrid modelling problem. In U. Schumann and R. Friedrich, editors, Direct and Large Eddy Simulation of Turbulence, pages 67-81. Vieweg, 1986.

[3] D. Forster, D. R. Nelson, and M. J. Stephen. Long-time tails and the large eddy behaviour of a randomly stirred fluid. Phys. Rev. Lett., 36 (15):867-869, 1976.

[4] W. D. McComb and A. G. Watt. Conditional averaging procedure for the elimination of the small-scale modes from incompressible fluid turbulence at high Reynolds numbers. Phys. Rev. Lett., 65(26):3281-3284, 1990.

[5] W. D. McComb. Asymptotic freedom, non-Gaussian perturbation theory, and the application of renormalization group theory to isotropic turbulence. Phys. Rev. E, 73:26303-26307, 2006.

Turbulence as a quantum field theory: 1

Turbulence as a quantum field theory: 1

In the late 1940s, the remarkable success of arbitrary renormalization procedures in quantum electrodynamics in giving an accurate picture of the interaction between matter and the electromagnetic field, led on to the development of quantum field theory. The basis of the method was perturbation theory, which is essentially a way of solving an equation by expanding it around a similar, but soluble, equation and obtaining the coefficients in the expansion iteratively.

As a result of these successes, perturbation theory became part of the education of every physicist. Indeed, it is not too much to say that it is part of our DNA. Yet, a few years ago, when I looked at the website of an applied maths department, they had a lengthy explanation of what perturbation theory was, as they were using it on some problem. One simply couldn’t imagine that, on a physics department website, and it illustrates the cultural voids between different disciplines in the turbulence community. For instance, I used to hear/read comments to the effect that ‘isotropic turbulence had been studied for its potential application to shear flows, but this proved not to be the case and now it was of no further interest.’ From a physicist’s point of view, the reason for studying isotropic turbulence is the same as the motivation for being the first to climb Everest. Because it is there! But, interestingly, the study of isotropic turbulence has increased in recent years, driven by the growth of direct numerical simulation of the equations of motion as a discipline in its own right.

However, back to the sixties. The idea of applying these methods to turbulence caught on, and for a while things seem to have been quite exciting. In particular, there were the pioneering theories of Kraichnan, Edwards and Herring. There was also the formalism of Wyld, which was the most like quantum field theory. At this point, I know from long and bitter experience that there will be wiseacres muttering ‘Wyld was wrong’. They won’t know what exactly is wrong, but they will be quoting a well-known later formalism by Martin, Siggia and Rose. In fact it has recently been shown that the two formalisms are compatible, once some simple procedural changes have been made to Wyld’s approach [1].

We will return to Wyld in a later post (and also to the distinction between formalisms and theories). Here we want to take a critical look at the underlying physics of applying the methods of quantum field theory to fluid turbulence. It is one thing to apply the iterative-perturbative approach to the Navier-Stokes equations (NSE), and another to justify the application of specific renormalization procedures to a macroscopic phenomenon in classical physics. So, let’s begin by formulating the problem of turbulence for this purpose, in order to see whether the analogy is justified.

We consider a cubical box of side $L$, occupied by a fluid which is stirred by random forces with a multivariate-normal distribution and with instantaneous correlation in time. This condition ensures that any correlations which arise in the velocity field are due to the NSE. It is also known as the white noise condition, and allows us to work out the rate at which the forces do work on the fluid in terms of the autocorrelation of the random forces, which is part of the specification of the problem. (Occasionally one sees it stated that the delta-function autocorrelation in time is needed for Galilean invariance. I must say that I would like to see a reasoned justification for that statement.)
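To make this concrete, the force autocorrelation is usually specified schematically (the precise normalization varies between authors) as \[\langle f_\alpha(\mathbf{k},t) f_\beta(\mathbf{k}',t')\rangle = W(k)\, P_{\alpha\beta}(\mathbf{k})\, \delta(\mathbf{k}+\mathbf{k}')\, \delta(t-t'),\] where $P_{\alpha\beta}$ is the usual transverse projector. The delta function in time then fixes the rate of doing work on the fluid in terms of $W(k)$ alone, independently of the state of the velocity field, which is what makes the white-noise choice so convenient.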

By expanding the velocity field (and pressure) in Fourier series, we can study the NSE in wavenumber ($k$) space. It is usual nowadays to proceed immediately to the limit $L \rightarrow \infty$ and make use of the Fourier integral representation. It is important to note that this is a limit. It does not imply that there is a quantity $\epsilon = 1/L = 0$. It does, however, imply that all our procedures and results must be independent of $L$. Then the problem may be seen as one of strong nonlinear coupling, due to the form of the nonlinear term in wavenumber space.

Strong nonlinear coupling? Well, that’s the conventional view and it is certainly not wrong. But let’s not be too glib about this. It is well known, and probably has been since at least the early part of the last century, that making variables non-dimensional on specific length- and velocity-scales results in a Reynolds number appearing as a prefactor of the nonlinear term. Expressing this in terms of quantum field theory, the Reynolds number plays the part of the coupling constant. In quantum electrodynamics, the coupling constant is the fine-structure constant, with a value of about $1/137$, and thus provides a small parameter for perturbation expansion. While the resulting series is not strictly convergent, it does give answers of astonishing accuracy. It is equally well known that attempting perturbation theory in fluid dynamics is unwise for anything other than creeping flow, where the Reynolds number is small. So applying perturbation theory to turbulence looks distinctly unpromising.
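The prefactor can be made explicit by a standard scaling exercise (the choice of scales here is for illustration): scaling lengths on $L$, velocities on $U$, time on the viscous scale $L^2/\nu$, and pressure on $\rho\nu U/L$, the NSE takes the form \[\frac{\partial \hat{\mathbf{u}}}{\partial \hat{t}} + Re\,(\hat{\mathbf{u}}\cdot\hat{\nabla})\hat{\mathbf{u}} = -\hat{\nabla}\hat{p} + \hat{\nabla}^2\hat{\mathbf{u}}, \quad \mbox{where} \quad Re = \frac{UL}{\nu},\] so that the Reynolds number multiplies the nonlinear term, exactly as a coupling constant multiplies the interaction term in quantum field theory.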

There is also the basic phenomenology of turbulence which we must take into account. The stirring motion of the forces will produce fluid velocities with normal (or Gaussian) distributions. Then the effect of the nonlinear coupling is to generate modes with larger values of wavenumber than those initially stirred. This is accompanied by the transfer of energy from small wavenumbers to large, and if left to carry on would lead to equipartition for any finite set of modes, albeit with the total energy increasing with time. This assumes the imposition of a cut-off wavenumber, but in practice the action of viscosity is symmetry-breaking, and the kinetic energy of turbulent motion leaves the system as heat. The situation is as shown in the sketch which, despite our restriction to isotropic turbulence in a box, is actually quite illustrative of what goes on in many turbulent flows.

Sketch of the energy spectrum of isotropic turbulence at moderate Reynolds number.

Various characteristic scales can be defined, but the most important is the Kolmogorov dissipation wavenumber, thus: $k_d=(\varepsilon /\nu_0^3)^{1/4}$, which gives the order of magnitude of the wavenumber at which the viscous effects begin to dominate. For the application of renormalized perturbation theory (which we will discuss in a later post), this phenomenology is important for assessment purposes. However, when we look at the later introduction of renormalization group theory, we have to consider this picture in rather more detail. We will do that in the next post.
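As a purely illustrative evaluation (the numerical values here are my assumptions, not taken from the text): for water, with $\nu_0 \approx 10^{-6}\,m^2 s^{-1}$ and a dissipation rate of $\varepsilon = 0.1\,W/kg$,

```python
# Illustrative evaluation of the Kolmogorov dissipation wavenumber
# k_d = (eps / nu0**3)**(1/4). Numerical values are assumptions for illustration.
nu0 = 1e-6  # kinematic viscosity of water, m^2/s
eps = 0.1   # dissipation rate, W/kg (assumed)

k_d = (eps / nu0 ** 3) ** 0.25
eta = 1.0 / k_d  # corresponding Kolmogorov length scale
print(f"k_d ~ {k_d:.2e} 1/m, eta ~ {eta:.2e} m")
```

giving $k_d \sim 2 \times 10^4\,m^{-1}$, i.e. a smallest significant length scale of a few hundredths of a millimetre.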

[1] A. Berera, M. Salewski, and W. D. McComb. Eulerian Field-Theoretic Closure Formalisms for Fluid Turbulence. Phys. Rev. E, 87:013007-1-25, 2013.

Is there an alternative infinite Reynolds number limit?

Is there an alternative infinite Reynolds number limit?

I first became conscious of the term dissipation anomaly in January 2006, at a summer school, where the lecturer preceding me laid heavy emphasis on the term, drawing an analogy with the concept of anomaly in quantum field theory, as he did so. It seemed that this had become a popular name for the fact that turbulence possesses a finite rate of dissipation in the limit as the viscosity tends to zero. I found the term puzzling, as this behaviour seemed perfectly natural to me. At the time it occurred to me that it probably depended on how you had first met turbulence, whether the use of this term seemed natural or not. In my case, I had met turbulence in the form of shear flows, long before I had been introduced to the study of isotropic turbulence in my PhD project.

Back in the real world, the experiments of Osborne Reynolds were conducted on pipe flow in the 1880s, and this line of work was continued in the 1930s and 1950s by (for example) Nikuradse and Laufer [1]. This led to a picture where turbulence was seen as possessing its own resistance to flow. The disorderly eddying motions were perceived to have a randomizing effect analogous to, but much stronger than, the effects of the fluid’s molecular viscosity. This in turn led to the useful but limited concept of the eddy viscosity. As the Reynolds number was increased, the eddy viscosity became dominant, typically being two orders of magnitude greater than the fluid viscosity.

In principle, there are three alternative ways of varying the Reynolds number in pipe flow, but in practice it is just a matter of turning up the pump speed. Certainly no one would try to do it by decreasing the viscosity or increasing the pipe diameter. In isotropic turbulence, the situation is not so straightforward, as we use forms of the Reynolds number which depend on internal length and velocity scales. Indeed the only unambiguous characteristic which is known initially is the fluid viscosity.

An ingenious way round this was given by Batchelor (see pp. 106-107 of [2]), who introduced a Reynolds number for an individual degree of freedom (i.e. wave-number mode) as $R(k) = [E(k)]^{1/2}/\nu k^{1/2}$, in terms of the wavenumber spectrum, the viscosity and the wave-number of that particular degree of freedom. He argued that the effect of decreasing the viscosity would be to increase the dominance of the inertial forces on that particular mode, so that the region of wave-number space which is significantly affected by viscous forces moves out towards $k=\infty$. He concluded: `In the limit of infinite Reynolds number the sink of energy is displaced to infinity and the influence of viscous forces is negligible for wave-numbers of finite magnitude.’ A similar conclusion was reached by Edwards from a consideration of the Kolmogorov dissipation wave-number [1]: he showed that the sink of energy at infinity could be represented by a Dirac delta function.

It is perhaps also worth mentioning that the use of this local (in wave-number) Reynolds number provides a strength parameter for the consideration of isotropic turbulence as an analogous quantum field theory [3].

Evidently the conclusion that the infinite Reynolds number limit in isotropic turbulence corresponds to a sink of energy at infinity in $k$-space seems to be well justified. Nevertheless, this use of the value infinity in the mathematical sense is only justified in theoretical continuum mechanics. In reality it cannot correspond to zero viscosity. It can be shown quite easily from the phenomenology of the subject that the infinite Reynolds number behaviour of isotropic turbulence can be demonstrated asymptotically to any required accuracy without the need for zero viscosity. We shall return to this in a later post.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 1st edition, 1953.
[3] W. D. McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.

What relevance has theoretical physics to turbulence theory?

What relevance has theoretical physics to turbulence theory?

The question is of course rhetorical, as I intend to answer it. But I have to pause on the thought that it is also unsatisfactory in some respects. So why ask it then? Well my reply to that is that various turbulence researchers have over the years in effect answered it for me. Their answer would be none at all! In fact, in the case of various anonymous referees, they have often displayed a marked hostility to the idea of theoretical physicists being involved in turbulence research. But the reason why I find it unsatisfactory is that it seems to assume that turbulence theory is not part of theoretical physics, whereas I think it is; or, rather, it should be. So let’s begin by examining that question.

As is well known, the fundamental problem of turbulence is the statistical closure problem that is posed by the hierarchy of moments of the velocity field. Well, molecular physics has the same problem when the molecules interact with each other. This takes the form of the BBGKY hierarchy, although this is expressed in terms of the reduced probability distribution functions. If we consider the simpler problem, where molecules are non-interacting hard spheres, then we have classical statistical physics. In these circumstances we can obtain the energy of the system simply by adding up all the individual energies. The partition function of the system then factorizes, and we can obtain the system free energy quite trivially. However, if the individual molecules are coupled together by an interaction potential, then this factorization is no longer possible, as each molecule is coupled to every other molecule in the system. So it is for turbulence: if we work in the Fourier wavenumber representation, the modes of the velocity field are coupled together by the nonlinear term, thus posing an example of what in physics is called the many-body problem.
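For $N$ non-interacting molecules, the factorization referred to is simply \[Z = z^N \quad \Rightarrow \quad F = -k_B T \ln Z = -N k_B T \ln z,\] where $z$ is the single-molecule partition function and $N$ the number of molecules. An interaction potential couples every molecule to every other, the product form is lost, and the problem becomes nontrivial.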

One could go on with other examples in microscopic physics, for example the theory of magnetism which involves the coupling together of all spins on lattice sites, but it really boils down to the fact that the bedrock problem of theoretical physics is that of strong-coupling. And turbulence formulated in $k$-space comes into that category. The only difference is that turbulence is mainly studied by engineers and applied scientists, while theorists mostly prefer to study what they see as more fundamental problems, even if these studies become ever more arid for lack of genuine inspiration or creativity. But as a matter of taxonomy, not opinion, turbulence should belong to physics as an example of the many-body problem.

Now let’s turn to our actual question. We can begin by noting that we are talking about insoluble problems. That is, there is no general method of obtaining an exact solution. We have to consider approximate methods. First, there is perturbation theory, which relies on (and is limited by) the ability to perform Gaussian functional integrals. Secondly, there is self-consistent field theory. Both of these rely, either directly or indirectly, on the concept of renormalization. In molecular physics, this involves adding some of the interaction energy to the bare particle, in order to create a dressed particle, also known as a quasi-particle. Such quasi-particles do not interact with each other and so the partition function can be evaluated by factorization, just as in the ideal-gas case. In the case of turbulence, it is probably quite widely recognized nowadays that an effective viscosity may be interpreted as a renormalization of the fluid kinematic viscosity. However, it should be borne in mind that the stirring forces and the interaction strength may also require renormalization.

There is no inherent reason why the subject of statistical turbulence theory should be mysterious, and I intend to post short discussions of various aspects. Not so much maths as `good versus bad’ or `justified versus unjustified’; plus tips on how to use some common-sense reasoning to cut through the intimidatingly complicated mathematics (and, in some cases, self-important pomposity) of some theories which are not really new turbulence theories, but merely textbook material from quantum field theory in which variables have been relabelled, while the essential difficulties of extending to turbulence have not been tackled.

The Kolmogorov `5/3’ spectrum and why it is important

The Kolmogorov `5/3’ spectrum and why it is important

An intriguing aspect of the Kolmogorov inertial range spectrum is that it was not actually derived by Kolmogorov. This fact was unknown to me when, as a new postgraduate student, I first encountered the `5/3’ spectrum in 1966. At that time, all work on the statistical theory of turbulence was in spectral or wavenumber ($k$) space, and the Kolmogorov form was seen as playing an important part in deciding between alternative theoretical approaches.

As is well known nowadays, in 1941 Kolmogorov derived power-law forms for the second- and third-order structure functions in $r$ space. In the same year, it was Obukhov [1] who worked in $k$ space, introducing the energy flux through wavenumber as the spectral realization of the Richardson-Kolmogorov cascade, and making the all-important identification of the scale-invariance of the energy flux as corresponding to the Kolmogorov picture for real space. It is usual nowadays to denote this quantity by $\Pi(k)$, and in this context scale-invariance means that it becomes a constant, independent of $k$. For stationary turbulence that constant is the dissipation rate. Obukhov did actually produce the `5/3’ law, but this involved additional hypotheses about the form of an effective viscosity, so it was left to Onsager in 1945 [2] to combine simple dimensional analysis with the assumption of scale-invariance of the flux to produce a spectral form on equal terms with Kolmogorov’s `2/3’ law for $S_2(r)$. This work was discussed (and in effect disseminated) by Batchelor in 1947 [3], and later in his well-known monograph. Curiously enough, in his book, Batchelor only discussed the spectral picture, having discussed only the real-space picture in [3]. This is something that we shall return to in later posts. But it seems that the effect was to establish the dominance of the spectral picture for many years.
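The dimensional step is short enough to reproduce here. Assume that in the inertial range $E(k)$ depends only on the (scale-invariant) flux $\Pi = \varepsilon$ and on $k$, and write $E(k) = C \varepsilon^a k^b$. With $[E(k)] = L^3 T^{-2}$, $[\varepsilon] = L^2 T^{-3}$ and $[k] = L^{-1}$, matching powers of $T$ gives $a = 2/3$, and then matching powers of $L$ gives $b = 2a - 3 = -5/3$, hence \[E(k) = C \varepsilon^{2/3} k^{-5/3},\] where $C$ is the Kolmogorov spectral constant.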

In the early sixties, there was considerable excitement about the new statistical theories of turbulence, but when Grant, Stewart and Moilliet published their experimental results for spectra, which extended over many decades of wavenumber, it became clear beyond doubt that the Kolmogorov inertial-range form was valid and that the theories of Kraichnan and Edwards were not quite correct. We will write about this separately in other posts, but for me in 1966 the challenge was to produce an amended form of the Edwards theory which would be compatible with the `5/3’ spectrum. This, in other words, was a restatement of the turbulence closure problem. It is one that I have worked on ever since.

This is not an easy problem and progress has been slow. But there has been progress, culminating in McComb & Yoffe (2015): see #3 of my recent publications. However, over the years, beginning in the late 1970s, this work has increasingly received referee reports which are hostile to the very activity and which assert that the basic problem for closures is not to obtain $k^{-5/3}$ but rather to obtain a value for $\mu$, where the exponent should be $-5/3 + \mu$, due to intermittency corrections. Unfortunately for this point of view, the so-called intermittency correction $\mu$ comes attached to a factor $L$, representing the physical size of the system. This means that the limit $L \rightarrow \infty$ does not exist, which is something of a snag for the modified Kolmogorov theory.
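In spectral terms, the modified law usually takes the schematic form \[E(k) = C \varepsilon^{2/3} k^{-5/3} (kL)^{-\mu}\] (the precise form varies between authors), so that for fixed $k$ the prefactor depends on the system size, and with $\mu \neq 0$ the limit $L \rightarrow \infty$ gives no sensible answer.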

We shall enlarge on this elsewhere. For the moment it is interesting to note that the enthusiasm for intermittency corrections arose from the study of structure functions and in particular their behaviour with increasing order. This became a very popular field of research throughout the 1980s/90s and threatened to establish a sort of standard model, from which no one was permitted to dissent. Fortunately, there has been a fight back over the last decade or two, and the importance of finite Reynolds number effects (or FRN) is becoming established. In particular, the group of Antonia and co-workers has emphasised consistently (and in my view correctly) that the Kolmogorov result $S_3 = -(4/5)\varepsilon r$ (which the Intermittentists regard as exact) is only correct in the limit of infinite Reynolds numbers. At finite viscosities there must be a correction, however small. A similar conclusion has been reached for the second-order structure function by McComb et al (2014), who used a method for reducing systematic errors to show that this exponent too tended to the canonical value in the limit of infinite Reynolds numbers. These facts have severe consequences for the way in which the Intermittentists analyse their data and draw their conclusions.

This leaves us with an interesting point about the difference between real space and wavenumber space. The above comments are true for structure functions, because in $r$-space everything is local. In contrast, the nonlinear energy transfers in $k$-space are highly nonlocal. The dominant feature in wavenumber space is the flux of energy through the modes, from low wavenumbers to high. The Kolmogorov picture involves the onset of scale invariance at a critical Reynolds number, and the increasing extent of the associated inertial range of wavenumbers as the Reynolds number increases. The infinite Reynolds number limit in $k$-space then corresponds to the inertial range being of infinite extent. At finite Reynolds numbers, it will be of merely finite extent, but there is no reason to believe that there is any other finite Reynolds number correction. I believe that this is more than just a conjecture.

[1] A. M. Obukhov. On the distribution of energy in the spectrum of turbulent flow. C. R. Acad. Sci. U.R.S.S., 32:19, 1941.

[2] L. Onsager. The Distribution of Energy in Turbulence. Phys. Rev., 68:281, 1945.

[3] G. K. Batchelor. Kolmogorov’s theory of locally isotropic turbulence. Proc. Camb. Philos. Soc., 43:533, 1947.

Scientific discussion in the turbulence community.


Shortly after I retired, I began a two-year travel fellowship, with the hope of having interesting discussions on various aspects of turbulence. I’m sure that I had many interesting discussions, particularly in trying out some new and half-baked ideas that I had about that time, but what really sticks in my mind are certain unsatisfactory discussions.

To set the scene, I had recently become aware of Lundgren’s (2002) paper [1] and, having worked through it in detail, I was convinced that it offered a proof that the second-order structure function took the Kolmogorov `2/3’ form asymptotically in the limit of infinite Reynolds numbers. There is of course little or no disagreement about Kolmogorov’s derivation of the `4/5’ law for the third-order structure function. For stationary turbulence, it is undoubtedly asymptotically correct in the infinite Reynolds number limit. But in order to find the second-order form, Kolmogorov had to make the additional assumption that the skewness of the longitudinal derivative became constant in the infinite Reynolds number limit. Introducing the skewness $S$ as $S=S_3(r)/S_2(r)^{3/2}$, and substituting the `4/5’ law for $S_3$, results in the well-known form $S_2(r)=\left(\frac{-4}{5S}\right)^{2/3}\varepsilon^{2/3}r^{2/3}\equiv C_2\varepsilon^{2/3}r^{2/3}$. Numerical results do indeed suggest that the skewness becomes independent of the Reynolds number as the latter increases, but it remains a weakness of the theory that this assumption is needed.
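Kolmogorov’s step from the `4/5’ law to the `2/3’ law can be checked in a few lines. In this sketch the skewness value is an illustrative assumption (measurements put it at roughly $-0.5$ at high Reynolds number, but nothing here derives it), and the units are arbitrary.

```python
# From the 4/5 law plus constant skewness to the 2/3 law.
eps = 0.1   # dissipation rate (arbitrary units)
r = 0.01    # separation in the inertial range (arbitrary units)
S = -0.5    # assumed constant skewness of the longitudinal derivative

S3 = -(4.0 / 5.0) * eps * r             # Kolmogorov's 4/5 law
S2 = (S3 / S) ** (2.0 / 3.0)            # invert S = S3 / S2^(3/2)
C2 = (-4.0 / (5.0 * S)) ** (2.0 / 3.0)  # the implied Kolmogorov constant

# S2 indeed has the form C2 * eps^(2/3) * r^(2/3):
assert abs(S2 - C2 * (eps * r) ** (2.0 / 3.0)) < 1e-12
```

Note that the skewness must be negative for $S_2$ to come out real and positive, which is consistent with the minus sign in the `4/5’ law.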

Lundgren [1] started, like Kolmogorov, from the Karman-Howarth equation (KHE), and did the following. He put the KHE in dimensionless form by a generic change of variables based on time-dependent length and velocity scales, $l$ and $u$. He then chose to examine: first, Von Karman scaling; and secondly, Kolmogorov scaling, with appropriate choices for $l$ and $u$. In both cases, he solved for the scaled second-order structure function by a perturbation expansion in inverse powers of the Reynolds number. He then employed the method of matched asymptotic expansions which recovered the Kolmogorov form for $S_2$. The `4/5’ law was also recovered for $S_3$, both results naturally following in the large Reynolds number limit. A more extensive account of this work can be found in Section 6.4.6 of my 2014 book.

Before setting off on my travels, I consulted a colleague who, although specializing in soft matter, had some familiarity with turbulence. To my surprise he seemed quite unenthusiastic about this work. He said something to the effect that it was a pity that Lundgren had to assume the same scaled form for both the second-order and the third-order structure functions. Now, on reflection I saw that this was nonsense. All Lundgren did was introduce a change of variables: this is not an assumption; it merely restates the problem, as it were. Secondly, the basic Kolmogorov theory deals with the probability distribution functional, and this means that all the moments (and hence structure functions) will be affected in the same way by any operation on it [2].

On the first of my visits, I began to discuss this with Professor X, who seemed very sceptical at first, then his comments seemed increasingly irrelevant, until he realised that he was thinking of an entirely different, later piece of work by Lundgren. At that point the discussion fizzled out.

On a later visit to a different university, at an early stage in the discussion with Professor Y, I commented that the method relied on the fact that the Karman-Howarth equation was local in the variable $r$. To which he swiftly replied: `Yes Tom does have to assume that.’ That effectively brought things to a close, because once again we are faced with nonsense. In fact this particular individual seems to believe that the existence of an energy cascade implies that the KHE is nonlocal! But of course the nonlocality is confined to the Lin equation in wavenumber space.

On a later occasion, I tried to bring the subject up again, but no luck. He said: `Tom just makes the same assumptions as Kolmogorov did. So there is nothing new.’ At this point I finally gave up. However, as we have just seen, Kolmogorov has to assume that the skewness $S$ becomes constant as the Reynolds number increases. In contrast, the Lundgren analysis actually shows that this is so. In addition, it also provides a way of assessing systematic corrections to the `4/5’ law at large but finite Reynolds numbers.

The basic theoretical problems in turbulence are very hard and perhaps even impossible to solve, in a strict sense. However, the fact that lesser problems of phenomenology are plagued by controversy, with issues remaining unresolved for decades, seems to me to be a matter of attitude (and culture) that leads to a basic lack of scholarship. I think we need to trade in the old turbulence community and get a new one.

[1] Thomas S. Lundgren. Kolmogorov two-thirds law by matched asymptotic expansion. Phys. Fluids, 14:638, 2002.

[2] I have to own up to an error here. For years I argued that only the second- and third-order structure functions were involved in Kolmogorov and hence conclusions based on higher-order moments were irrelevant. Then (quite recently!) I noticed in a paper by Batchelor the comment that as the hypotheses were for the pdf, they automatically applied to moments of all orders.

Intermittency corrections (sic) and the perversity of group think


In The Times of 11 January this year, there was a report by their Science Editor which had the title Expert’s lonely 30-year quest for Alzheimer’s cure offers new hope. Senile dementia is the curse of the age (even if temporarily eclipsed by the coronavirus) and the article tells how in 1905 Alois Alzheimer made a post mortem examination of the brain of a woman who in her later years had become confused and forgetful. He found two pathological features: one consisted of plaques, or clumps, of a protein called beta amyloid and the other consisted of sticky tangles of a different protein, later identified by Professor Claude Wischik as a protein called tau.

Now, with two possible causes, you might imagine that researchers in the field would be interested in both. But you would be wrong. It seems that the community targeted the beta amyloid cause and for many years neglected the other possibility. Now, after decades of failure, the major pharmaceutical companies are developing anti-tau drugs. Even if none of these proves to be the magic bullet, it seems a healthier situation that both symptoms (and the possible interaction between them) are being studied. The article ends on a note of moderate optimism, but the question remains: why was the research skewed towards just the one possibility? The article seems to suggest that this may have been because beta amyloid was already known and possibly implicated in another pathology. As always, in applied research there is a temptation to go for the `quick and dirty solution’!

The behaviour of the researchers pursuing the beta amyloid option (to the exclusion of the equally possible tau option) exhibits some of the characteristics of what psychologists call group think. A similar phenomenon has been part of fundamental research on turbulence for at least five decades. As is well known, it started with a remark by Landau about the Kolmogorov (1941) theory; or K41 for short. This criticism is based on the idea that intermittency of the dissipation rate has implications for the K41 theory, despite the fact that the physical basis of that theory is the inertial transfer rate, which is sometimes equal to the dissipation rate. This criticism, along with various others, is discussed in Chapters 4 and 6 of my 2014 book on turbulence and I will not consider it further here. All I wish to note is that there has been an ongoing body of work on so-called intermittency corrections, and the strange thing is that more obvious corrections have been largely neglected, until quite recent times. Let us now expand on that.

Essentially Kolmogorov used Richardson’s concept of the cascade to argue that energy transfer would proceed by a stepwise process from large scales (production range) to small scales and this would result in a universal form for the structure functions in these small scales. Furthermore, for large Reynolds numbers, the effect of the viscosity would only be appreciable at very small scales, and there would be an intermediate subrange of scales where the local excitation would be controlled by inertial transfer into the subrange from the large scales and inertial transfer out of the subrange into the small scales where it would be dissipated by viscous effects.

At this point, I should enter a small caveat. I feel quite uncomfortable with what I have just written. The physical concept of the cascade is rather ill-defined in real space. I would be much happier talking in terms of wavenumber space where the cascade is well defined and the key concept is scale-invariance of the inertial flux. This fact was recognized by Obukhov (1941), by Onsager (1945) and by Batchelor (1947), and after that very widely. It is rather as if Kolmogorov, in choosing to work in real space, had opted for Betamax rather than VHS!

However, ignoring my quibbles, in either space one point is clear: this is an approximate theory. Either $S_2 \sim \varepsilon^{2/3}r^{2/3}$ or $E(k) \sim \varepsilon^{2/3}k^{-5/3}$ is only asymptotically valid in the limit of infinite Reynolds numbers. Under all other circumstances, there must be corrections due to finite-Reynolds number (FRN) effects. These corrections may be small enough to ignore: bear in mind that, on various measures, an effectively infinite Reynolds number is not all that large. There is certainly no need to worry about zero viscosity (pace Onsager and his hagiographers!). We shall return to this specific point in later posts.

The response of Kolmogorov to Landau’s criticism was the somewhat ad hoc K62, in which the retention of the specific effect of the large scales of the system (in both structure functions and spectra), completely reversed the original assumption of the stepwise cascade leading to universal behaviour. For reasons that are far from clear to me, this sparked off a positive industry of intermittency corrections, anomalous exponents and various improvements (sic) on Kolmogorov, which lasts to this day. In contrast, from the late 1990s, increasing attention, both experimental and theoretical, has been given to FRN effects, and in particular the way in which they have been ignored in assessing the evidence for anomalous exponents and suchlike. We may highlight the situation in the field by contrasting two major papers, both published in leading learned journals within the last year.

The first of these is by Tang et al [1], who note in their abstract that K62 `has been embraced by an overwhelming majority of turbulence researchers.’ This paper is one in a series in which this group has investigated the alternative effect of finite Reynolds number corrections. In addition to their own analysis, they also cite many papers from recent years which support their conclusion that the failure to account for FRN effects has `almost invariably been mistaken for the intermittency effect’. In the main body of their paper, they express themselves even more forcibly. In contrast, the paper by Dubrulle [2], which is very much in the K62 camp, so to speak, cites not a single reference to FRN effects. Instead the author argues that small-scale intermittency is incompatible with homogeneity, and makes the radical proposal that the Karman-Howarth equation should be replaced by a weak form which takes account of singularities. At this point one takes leave of continuum mechanics and much else besides! If we consult Batchelor’s book, we find that homogeneity is defined in terms of mean quantities and is therefore entirely compatible with intermittency of the velocity field, which is nowadays understood to be present at all scales.

I was tempted to say that it is difficult to imagine such a fundamental gulf in any subject other than turbulence, but then that’s where we came in!

[1] S. Tang, R. A. Antonia, L. Djenidi, and Y. Zhou. Phys. Rev. Fluids 4, 024607 (2019).

[2] B. Dubrulle. J. Fluid Mech. 867, P1, (2019).

Bad proofs and `curate’s egg’ theories


At about the time I took up my appointment at Edinburgh, I heard about a pure mathematician who wanted to be remembered for his bad proofs. Some years later I read his obituary in The Times and this fact was mentioned again. I had thought that I had kept the cutting but it seems not, so I’m afraid that I don’t remember his name. But I do remember what was meant by the term `bad proofs’. This man’s view was that many proofs in mathematics have been polished by various hands over the years and he wanted to be remembered for his originality. His proofs would be unpolished and hence seen as original.

The choice of the word `bad’ is interesting, in view of its pejorative overtones. I would be inclined to think that the original proof would at least be valid and hence not to be described as bad. Perhaps, later more elegant versions of the proof would emphasise the unpolished nature of the original. Hence, perhaps `rough’ might be a better description. Presumably the word `bad’ was chosen to emphasise the paradoxical appearance of that statement. Well, at least he is being remembered for his quirky assertion about what he wanted to be remembered for.

For some time I have wondered whether there is an analogous term for turbulence theories. By which I mean attempts to solve the statistical closure problem. This was originally formulated by Reynolds for pipe flow, but as usual we will consider it here as applied to isotropic turbulence. Obviously `bad’ is no good, because we do not have the paradoxical juxtaposition that we have with the word `proof’, which in itself indicates success, which is certainly not bad. One obvious possibility would be `rough’ but somehow that does not appeal. `Rough theories’ does not sound good. In fact it sounds bad.

Recently I came up with the idea of the `curate’s egg’ theories, meaning `good in parts’. This saying stems from a cartoon which appeared in the British humorous magazine Punch in 1895. It shows a nervous curate breakfasting with the bishop. The bishop expresses concern that the curate’s egg is not a good one. The curate, anxious not to make a fuss, bravely asserts that his egg is `good in parts’. The term passed into everyday speech and was still current when I was young. In the 1960s I was commuting regularly by train, and I would buy Punch to read on the journey. On one occasion there was a commemorative issue and a facsimile of the original cartoon was reproduced, so I was interested to see the origin of the phrase. We didn’t have Google in those days!

The reason that I think that such a term might be helpful is that many members of the turbulence community seem to see a theory as being either right or wrong. And if it’s deemed to be wrong, then it should be dismissed and never considered again. A striking example of this kind of thing arose a few years ago when I was trying to get a paper on the LET theory published (see #10 in the list of recent papers) and it had gone to arbitration. The Associate Editor who was consulted turned the paper down because `this is the sort of stuff Kraichnan did and everybody has known for the last twenty years that it’s wrong’.

This decision was easily overturned. The sheer idiocy of the proposition that, because one person had tackled a problem and failed, other people should be barred from making further attempts, ensured that. But what interests me is the fact that Kraichnan’s work is reduced to `the sort of stuff’ and regarded as `wrong’. This was done by someone who was an applied mathematician and not a theoretical physicist. I am not a betting man, but I would put a small amount of money on the assumption that this referee had very little knowledge of Kraichnan’s vast output, and was relying on hearsay for his opinion. I understand the difficulties facing anyone from an engineering background in trying to get to grips with this type of many-body or field theory although there are accessible treatments available. But if you are unable to understand this work in detail, then it is unlikely that you are qualified to referee it.

If we take an example from physics, in critical phenomena (e.g. the transition from para- to ferromagnetism) the subject was dominated by mean-field theory up until the late 1970s, when renormalization group (RG) was applied to critical phenomena. This does not mean that mean-field theory was immediately dismissed. In fact it is still taught in undergraduate courses. Prior to RG there was a balanced understanding of the limitations and successes of mean-field theory and no one ever thought of it as `right’, with the corollary that no one now dismisses it as simply `wrong’.

I know what I would like to have for other subjects, such as cosmology, particle theory or indeed musical theory. I would like to be able to read a simple account which explains the state of play, without going into too much detail. That is what I intend to provide for statistical theories of turbulence in future posts. In my view, most theories of turbulence can be regarded as `curate’s eggs’: they have both good and bad aspects. The important thing is that those working in the field of turbulence should have some understanding of the situation and should appreciate the importance of having further research in this area.

The infinite-Reynolds number limit: a first look


I notice that MSRI at Berkeley have a programme next year on math problems in fluid dynamics. The primary component seems to be an examination of the relationship between the Euler and Navier-Stokes equations, `in the zero-viscosity limit’. The latter is, of course, the same as the limit of infinite Reynolds numbers, providing that the limit is taken in the same way with the same constraints. I think that it is a failure to appreciate this proviso that has resulted in the concept becoming something of a vexed question over the years. Yet it was clearly explained by Batchelor in 1953 and elegantly re-formulated by Edwards in 1965. As a result, a group of theorists has been quite happy about the concept, but many other workers in the field seem to be uneasy.

I first became aware of this when talking to Bob Kraichnan at a meeting in 1984. When I used the term, his reaction surprised me. He began to hold forth on the subject. He said that people were `frightened’ of the idea of the infinite-Reynolds number limit. Rather defensively I said that I wasn’t frightened by it. His reply was: `Oh, I know that you aren’t but you would be surprised at the number of people who are!’ Since then I have indeed been surprised by how often you get a comment from a referee which goes something like: `The authors take the infinite-Re limit … but of course you cannot really have zero viscosity, can you.’ This rather nervous addendum suggests strongly that the referee does not understand the concept of a limit.

Well, one thing I would claim to understand is the idea of a limit in mathematical analysis. This is because the first class of my school course on calculus dealt with nothing else. I can remember that class period clearly, even though it was about sixty-five years ago. One example that our maths master gave was to imagine that you were cutting up your twelve-inch ruler, which was standard in those days. You cut it into two identical pieces in a perfect cutting process, with no waste. Then you put one piece over to your right hand side, and now cut the left hand piece into two identical pieces. One of these you put over to the right hand side, and add it on to the six-inch piece already there, to make a nine-inch ruler. The remaining piece you again cut into two, and move half over to make a ten and a half inch ruler. However much you repeat this process, the ruler will approach but never reach twelve inches again. In other words, twelve inches is the limit and you can only approach it asymptotically.
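The ruler thought-experiment translates directly into a few lines of code: repeatedly halve what is left and move half across, and the reassembled length approaches, but never reaches, the original twelve inches.

```python
# The maths master's ruler: the partial sums 6, 9, 10.5, ...
# approach the limit of twelve inches but never reach it.
remaining = 12.0
ruler = 0.0
for _ in range(50):
    remaining /= 2.0
    ruler += remaining     # 6, then 9, then 10.5, ...
    assert ruler < 12.0    # never actually reaches the limit

# ... yet it comes arbitrarily close:
assert abs(12.0 - ruler) < 1e-9
```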

Suppose we carry out a similar thought experiment on turbulence (although one could actually do this, most readily by DNS). What we are going to do is to stir a fluid in order to produce stationary, isotropic turbulence. Now at this stage, we don’t even think about dissipation. We are trying to drive a dynamical system and we start by specifying the forcing in terms of the rate of doing work on the fluid. We call this quantity $\varepsilon_W$ and it is fixed. Next, our dynamical system is fully specified once we choose the boundary conditions and the kinematic viscosity $\nu$. Accordingly, providing the forcing spectrum is peaked near the origin in wavenumber space, and there has been an appropriate initial choice of the kinematic viscosity, energy will enter the system at low wavenumbers, be transferred by conservative inertial processes to higher wavenumbers, and ultimately be dissipated at the highest excited wavenumbers. Once the system becomes stationary, the dissipation rate must be equal to the rate of doing work, and so the Kolmogorov dissipation wavenumber is given by $k_d = (\varepsilon_W /\nu^3)^{1/4}$.

Now let us carry out a sequence of experiments in which $\varepsilon_W$ remains fixed, but we progressively reduce the value of the kinematic viscosity. In each experiment, the viscosity is smaller and the dissipation wavenumber is larger. Therefore there is a greater volume of wavenumber space and it will take longer to fill with energy. Ultimately, corresponding to the limiting case, we have an infinite volume of wavenumber space and the system will take an infinite time to reach stationarity and in principle will contain an infinite amount of energy. Note that this is not a catastrophe! In continuum problems, a catastrophe is when you get an infinite density of some kind. Here the work, transfer and dissipation rates are the densities of the problem, and they are perfectly well behaved.
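The sequence of experiments can be sketched numerically. The particular values of $\varepsilon_W$ and the viscosities below are illustrative choices, not taken from any simulation.

```python
# Fixed rate of working eps_W, progressively smaller viscosity nu:
# the Kolmogorov dissipation wavenumber k_d = (eps_W / nu^3)^(1/4)
# grows without bound, enlarging the volume of wavenumber space.
eps_W = 0.1  # fixed rate of doing work on the fluid (arbitrary units)

def k_d(nu):
    """Kolmogorov dissipation wavenumber for viscosity nu."""
    return (eps_W / nu**3) ** 0.25

viscosities = [1e-2, 1e-3, 1e-4, 1e-5]
wavenumbers = [k_d(nu) for nu in viscosities]

# Each reduction of nu pushes the dissipation wavenumber higher...
assert all(a < b for a, b in zip(wavenumbers, wavenumbers[1:]))
# ...and halving nu multiplies k_d by 2^(3/4), since k_d ~ nu^(-3/4).
assert abs(k_d(0.5) / k_d(1.0) - 2 ** 0.75) < 1e-12
```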

At this stage, when I try to discuss the infinite Reynolds number limit, people tend to get uneasy and talk about possible singularities or discontinuities. I don’t really think that there is any cause for such hand-wringing. You have to decide first, which Navier-Stokes equation (NSE) you are using. There are two possibilities and they are identical; but we arrive at them by different routes.

If we arrive at the NSE by continuum mechanics, then in principle we can take the limit of zero viscosity without worry. After all, this is just a model of a real viscous fluid and, among other things, it is rigorously incompressible, which a real fluid isn’t. We accept that in practice it is the flow which is incompressible, not the fluid. So if the density variations are too small to detect, we can safely use the NSE.

If you come by the statistical physics route, then you must bound the smallest length scale (here the Kolmogorov dissipation length scale) such that it is orders of magnitude larger than inter-molecular distances. In practice, we may see the asymptotic behaviour associated with small viscosity arising long before there is any danger of breaching the continuum limit. For instance, if we look at the behaviour of the dimensionless dissipation rate as the Reynolds number is increased (see Fig. 1 of paper #6 in my list of recent papers) we are actually seeing the onset of the infinite Reynolds number limit. The accuracy of the determinations of $C_{\varepsilon,\infty}$ in this work is very decent, but if greater accuracy were required, then a bigger simulation would provide it. Just as in boundary layer theory, it is all a matter of quite pragmatic considerations. I will give a more pedagogic discussion of this topic in a future post.

A first look at Kolmogorov (1941)


Around the turn of the new millennium, I attended the PhD oral of one of my own students for the last time as Internal Examiner. After that the regulations were changed; or perhaps it was frowned on for the supervisor to also be the Internal. Later still I stopped attending in any capacity: I think it became the case that the student had to invite their supervisor if they wanted them to attend. Is this an improvement on the previous system? Actually, my own PhD oral was conducted by David Leslie, who had previously been my second supervisor, and Sam Edwards who was my first supervisor! The three of us had had many discussions of my work in the past, so the atmosphere was informal and friendly. But I don’t think the examination lacked rigour and I suppose it would have been difficult to find anyone else in the UK who could have acted as external examiner.

However, back to my own last stint as Internal. The candidate was a graduate with joint honours in maths and computer science. He was a very able young man and did good work, but he was not a physicist and never quite engaged with the physics. So when the External asked him if he could derive the Kolmogorov spectrum, he said `No’, then added pertly `Can you?’ Alas, the External was unable to do so. Fortunately the Internal was able to go to the blackboard and do the needful. The External was quite a well-known member of the turbulence community, so we will spare his blushes. Yet, it left me wondering how many turbulence researchers could sit down and derive the Kolmogorov energy spectrum, or equivalently the second-order structure function, without consulting a book? For any such benighted souls, I will now offer a crib. Virtue should be its own reward, but in the process of putting this together, I think I have found the answer to something that had puzzled me. I will return to that at the end of this post.

For simplicity, let’s work with the second-order structure function $S_2(r)$. This is what Kolmogorov did: the form for the energy spectrum came later. Glossing over the physical justification, we consider the question: how do we express $S_2(r)$ in terms of the dissipation rate $\varepsilon$ and the distance between measuring points $r$, for some intermediate range of values of $r$?

The first thing to notice is that $S_2$ has dimensions of velocity squared (or energy per unit mass: we won’t keep repeating this) and that the dissipation is the rate of change of the energy with time. It follows that $S_2$ depends on the inverse of time squared whereas dissipation depends on the inverse of time cubed. Hence, the structure function must depend on the dissipation to the power of $2/3$. Or,

\[S_2(r) \sim \varepsilon^{2/3}.\]

This is the Kolmogorov result. Put in its most general form: if you seek to express the energy in terms of the dissipation, inertial transfer, eddy-decay rate, or any other rate of change, you must have a two-thirds power from the need to have consistency of the time dimension across both sides of the equation.

Now what happens when we tidy up the dimensions of length? On the right hand side of the equation, we now have the dimensions of length to the power of $4/3$. In order to make this consistent with $S_2$ on the left hand side, we must multiply by a length to the power of $2/3$. From Kolmogorov (1941), this length must be $r$, and if we put a constant $C$ in front, we recover the well-known K41 result

\[S_2(r) = C r^{2/3}\varepsilon^{2/3}.\]
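The dimensional argument above can be done mechanically: write $S_2 \sim \varepsilon^a r^b$ and match the powers of length and time. Here is a short sketch using exact rational arithmetic.

```python
# Dimensional analysis for S_2 ~ eps^a * r^b, where
#   [S_2] = L^2 T^-2,   [eps] = L^2 T^-3,   [r] = L
from fractions import Fraction

# Matching time:    -3a = -2      =>  a = 2/3
a = Fraction(-2, 1) / Fraction(-3, 1)
# Matching length:  2a + b = 2    =>  b = 2 - 2a = 2/3
b = Fraction(2, 1) - 2 * a

assert a == Fraction(2, 3)   # the two-thirds power of the dissipation
assert b == Fraction(2, 3)   # the two-thirds power of the separation
```

Note that the time dimension alone fixes $a = 2/3$, exactly as in the argument above; the length dimension then fixes the power of $r$, provided no other length is admitted.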

If however, we think that it might also depend on another length, then we only have available some length characteristic of the size of the system, say $L_{ext}$. If we include this, then we must multiply the right hand side by $L_{ext}^p r^m$, where $p+m=2/3$. In other words, the power of $r$ is no longer determined. This is, in effect, what Kolmogorov did in 1962, albeit by a more circuitous route. And, in the process he threw away his entire theory, which was based on the idea that the many steps of the Richardson cascade would lead to a universal result at small scales. In Kolmogorov (1962) that does not happen: the final result depends on the physical size of the system.

Let us now hark back to what had puzzled me. In a previous post I mentioned a contumacious referee. In fact this individual kept asserting that `$r^{2/3}$ is not Kolmogorov’. We pressed him to explain but it was clear that he had found his excuse for rejecting the paper and wasn’t prepared to be more helpful (or indeed scholarly). As our paper contained a discussion of the fact that the extended scale similarity technique gave the two-thirds law as an artifact in the dissipation range, it is possible that he was actually agreeing with us! However, taking his comment as a general statement, I would be inclined to agree with it. From the discussion we have given above, it should be clear that it is the dependence on the dissipation rate to the two-thirds power that is actually Kolmogorov. For anyone interested, the paper is Number 7 in the list of my recent papers given on this website.

The energy balance equation: or what’s in a name?


Over the last few years I have noticed that the Karman-Howarth equation is sometimes referred to nowadays as the `scale-by-scale energy budget equation’. Having thought about it carefully, I have concluded that I understand that description; but I think the mere fact that one has to think carefully is a disadvantage. To native speakers of English, the term `budget’ suggests some sort of forward planning. Actually I think that in physics the more correct term would be local energy balance equation. Let us consider the form of the KHE when it is written in terms of the second-order and third-order structure functions, thus:

\[0=-\frac{2}{3}\frac{dE}{dt} + \frac{1}{2}\frac{\partial S_2}{\partial t} + \frac{1}{6r^4}\frac{\partial}{\partial r}\left(r^4 S_3\right) - \frac{\nu}{r^4}\frac{\partial}{\partial r}\left(r^4\frac{\partial S_2}{\partial r}\right). \]

Note that all the notation and background for this post can be found in my (2014) book on HIT. Also, I have moved the term involving the total energy (per unit mass) to the right of the equals sign, for a reason which will become obvious.

More recently I have seen exactly the same phrase used to describe the Lin equation, which is just the Fourier transform of the KHE to wavenumber space. This strikes me as even more surprising, but again I don’t want to say that it is actually wrong. Indeed in one sense I rather welcome it, because it makes it clear that the concept of scale belongs equally to wavenumber space. It can be all too easy to fall into a usage in which real space is regarded as `scale space’ and is distinguished in that way from wavenumber space. But the real problem here is that it is only valid for the simplest form of the Lin equation, and this in itself can be misleading.

Let us now consider the Lin equation in terms of the energy spectrum and the transfer spectrum. We may write this in its well-known form:

\[\left(\frac{\partial}{\partial t} + 2\nu k^2\right)E(k,t) = T(k,t).\]

Here, as with the KHE, we assume that there are no forces acting.

However, unlike the KHE, this is not the whole story. We may also express the transfer spectrum in terms of its spectral density, thus:

\[T(k,t) = \int_0^\infty\, dj \,S(k,j;t).\]

When we substitute this in, we obtain the second form of the Lin equation, and this is actually more comparable with the KHE as given above, because the transfer spectrum density contains the Fourier transform of the third-order structure function, which of course occurs explicitly in the KHE.
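It is worth noting in passing that the spectral density is antisymmetric, $S(k,j;t)=-S(j,k;t)$, so the transfer spectrum integrates to zero: the nonlinear term only redistributes energy between modes. A minimal numerical sketch of this property (my own toy kernel, chosen arbitrarily, and not taken from any actual closure):

```python
import numpy as np

# Toy sketch: build an arbitrary antisymmetric "transfer spectral density"
# S(k, j) = -S(j, k) on a wavenumber grid, then check that the resulting
# transfer spectrum T(k) = \int S(k, j) dj integrates to zero over k.
k = np.linspace(0.01, 10.0, 400)
dk = k[1] - k[0]
K, J = np.meshgrid(k, k, indexing="ij")

F = np.exp(-(K - J) ** 2) * K * J   # symmetric factor: F(k, j) = F(j, k)
S = F * (J - K)                     # antisymmetrised: S(k, j) = -S(j, k)

T = S.sum(axis=1) * dk              # T(k), the transfer spectrum
total = T.sum() * dk                # \int T(k) dk: vanishes to rounding error
```

Because every exchange $S(k,j)$ is cancelled by its partner $S(j,k)$, `total` vanishes, which is just the statement that the nonlinear transfer moves energy around without creating or destroying it.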

Now compare the two equations. The KHE holds for any value of the independent variable. If we take some particular value of the independent variable, then each term can be evaluated as a number corresponding to that value of $r$, and the above equation becomes a set of four numbers adding up to zero. If we consider another value of $r$, then we have a different four numbers, but they must still add up to zero. In short, the KHE is local in the independent variable.

The Lin equation, if we write it in its full form, tells us that all the Fourier modes are coupled to each other. It is, in the language of physics, an example of the many-body problem. It is in fact highly non-local, as in principle it couples every mode to every other mode.

A corollary of this is that the KHE does not predict a cascade, but the Lin equation does. This can be deduced from the presence of the nonlinear term, which couples all the modes together, combined with the viscous term, which breaks the symmetry. If the viscous term were set equal to zero, then the coupled but inviscid system would yield equipartition states.

The well-known question at the head of this post is rhetorical and expects the answer `A rose by any other name would smell as sweet’. But I’m afraid that Juliet’s laissez-faire attitude to terminology would not be widely applicable. One thinks of the surgeon who fails to distinguish between the liver and the spleen. Or the pilot who thinks west is just as good a name for east. In the turbulence community, I suppose that `locality’ for `localness’, or `inverse’ for `reverse’ arise because they seem natural coinages to non-Anglophones. In the wider world, the classic case since the 1960s is Karl Popper’s idea that a scientific theory should be falsifiable. But in everyday English speech, to falsify means to make false. For instance, to falsify an entry in one’s accounts, means, to put it in the demotic, to cook the books!

I shall return to this point in future posts and in particular to the localness of the KHE.

Wavenumber Murder and other grisly tales


When I was first at Edinburgh, I worked on developing a theory of turbulent drag reduction by additives. But, instead of considering polymers, I studied the much less well-known phenomenon involving macroscopic fibres. This was because it seemed to me that the fibres were probably of a length comparable to the size of the smallest turbulent eddies. It also seemed to me that the interaction between fibre and eddy would be two-dimensional and that it might be possible to formulate an explanation of turbulent drag reduction on mainly geometrical grounds. In particular, I had in mind that two-dimensional eddies could have a reverse cascade, with the energy being transferred from high wavenumbers to low. That is, the reverse (but not the inverse) of the usual process. In this way drag might be reduced.

I derived a simple model for this process, and a letter describing it was published by Nature Physical Science in 1974. So far so good. Then I set to work writing the theory up in more detail and submitted it to the JFM. The results were not so good this time, and I had three referees' reports to consider. At least George Batchelor did not feel the need to suppress any of the reports on the grounds of its being too offensive (someone I knew actually had this experience). But still, they were pretty bad.

No doubt this was salutary. I didn’t dissent from the view that the paper should be rejected. In fact I dismantled it into several much better papers and got them published elsewhere. But what sticks in my mind even yet is the referee who wrote: `The author commits the usual wavenumber murder. Who knows what unphysical assumptions are being made under the cover of wavenumber space?’

Well, that’s for me to know and you to find out, perhaps! Of course, now that I am older (a lot) and wiser (a little), I realise that I could have played it better. I could have written up the use of Fourier methods, quoted Batchelor’s book extensively, and thus made it very difficult for the referee to respond in that rather childish way. But why would that even occur to me? I was used at that stage to turbulence theorists who moved straight into wavenumber space without seeing any need to justify it. This is a cultural factor. Theoretical physicists are used to operating in momentum space which, give or take Planck’s constant, is just wavenumber space in disguise. Anyway, at the time I was surprised and disappointed that the editor did not at least intervene on this particular point.

I actually found that referee's reaction quite shocking, but in one form or another I was to encounter it occasionally over the years, until at last it seemed to die out. Partly this could be attributed, I would guess, to the growth of DNS, with its dependence on spectral methods. Also, I think it could be due to better-educated individuals becoming attracted to the study of turbulence.

Anyway, a few years ago, and just when I thought it was safe to mention spectral methods again, I made a big mistake. I had written (with three co-authors) a paper in which we used spectral methods to evaluate the exponents associated with real-space structure functions. It was increasingly believed that the inertial-range exponents departed from the Kolmogorov (1941) forms, with the departure growing with both order and Reynolds number, although it was realised that this could be attributed to systematic experimental error. So we had used a standard method of experimental physics to reduce systematic error, and found that the exponent for the second-order structure function in fact tended to the Kolmogorov canonical form as the Reynolds number was increased. This is precisely the sort of result that merits a short communication, and accordingly we submitted it as such. One of the referees was contumacious (and I may come back to him in a later blog); the other was broadly favourable but seemed rather nervous about various points. However, when we had responded to his various points, he wanted one or two more changes and then he would recommend it for publication. At the same time, he commented that he really did wish that we hadn't used spectral methods.

This was where I made my big mistake. Overcome by kindly feelings towards this referee, and obeying my pedagogical instincts, I tried to reassure him. I pointed out that he was quite happy with the pseudo-spectral method of DNS, in which the convolution sums in wavenumber space are evaluated more economically in real space and then transformed back into wavenumber space. Now, I said, we were employing the same technique, but the other way round: we were evaluating the convolutions determining the structure function in real space by going into wavenumber space. The response had a petulant tone. We were, he said, talking nonsense. The structure functions did not involve convolution integrals and he was rejecting the paper as mathematically unsound!

Later on we wrote up a longer version of the work and it was published: see #7 in the list of recent papers on this site. Appendix A is the place to look for the maths which bewildered the poor benighted referee. While accepting that this degree of detail was not given in the short communication, what is one to make of a referee who is unaware that a structure function can be expressed in terms of a correlation function and that the latter is a convolution integral?
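The point is elementary: for a statistically homogeneous field, $S_2(r) = 2[C(0) - C(r)]$, where the correlation $C(r) = \langle u(x)u(x+r)\rangle$ is a convolution-type integral whose Fourier transform is the spectrum. A one-dimensional toy demonstration (my own sketch, not the code or data from the paper):

```python
import numpy as np

# Toy 1-D periodic "velocity" signal.
rng = np.random.default_rng(0)
N = 256
u = rng.standard_normal(N)
u -= u.mean()

# Direct evaluation: S2(r) = <(u(x + r) - u(x))^2>, averaging over x.
S2_direct = np.array([np.mean((np.roll(u, -r) - u) ** 2) for r in range(N)])

# Spectral evaluation: the correlation C(r) = <u(x)u(x + r)> is a circular
# convolution, so by the convolution theorem it is the inverse FFT of the
# spectrum |FFT(u)|^2 (divided by N); then S2(r) = 2 * (C(0) - C(r)).
C = np.fft.ifft(np.abs(np.fft.fft(u)) ** 2).real / N
S2_spectral = 2.0 * (C[0] - C)
```

The two evaluations agree to rounding error, which is all that the short communication relied upon.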

Both referees were frightened of Fourier methods and between them almost seem to have bookended my career. But referees who are comprehensively out of their depth have not been rare over the years. The forms which this inadequacy takes have been many and varied, and I shall probably be dipping into my extensive rogues' gallery in future posts. There is also the question of the editor's role in finding referees who are actually qualified to referee a specific manuscript, and this too seems a fit subject for further enquiry. However, I should finish by pointing out that being on the receiving end of inadequate refereeing is not exclusively my problem.

In the first half of 1999, the Isaac Newton Institute held a workshop on turbulence. During the opening week, we saw famous name after famous name go up to the podium to give a talk, which almost invariably ended with `and so I sent it off to Physica D instead'. This last was received with understanding nods and smiles by an audience who were clearly familiar with the idea. This quite cheered me up; it seemed that I was not alone. At the same time, the sheer waste of time and energy involved seemed quite shocking. It prompted the thought: is it the turbulence community that is the problem, rather than the turbulence? That is something to consider further in future posts.

HIT: Do three-letter acronyms always win out?


In 1997, I visited Delft Technical University and while I was there gave a course of lectures on turbulence theory. During these lectures, I mentioned that nowadays people seemed to refer to homogeneous, isotropic turbulence; whereas, when I started out, it was commonplace to simply say isotropic turbulence. The homogeneity was assumed, as a necessary condition for the isotropy. After the morning session, when we were making our way back for lunch, the postgrads who were attending said to me `Three-letter acronyms always win out!' Naturally, I pooh-poohed this, but many years on, I have to confess that I use the three-word name of the subject (it was the title of my 2014 book) and the acronym as well. Sometimes it is just a matter of euphony. But does it do any harm? Well, that's an interesting question, but for the moment let us make a short digression.

In recent years I have been thinking a little about cosmology (well, it makes a change from turbulence) and have learned about the cosmological principle, which states that the universe is both homogeneous and isotropic. Homogeneous means that its properties are independent of position and isotropic means that its properties are independent of orientation. In everyday life, one might think of a piece of metal or plastic being homogeneous and isotropic, in contrast to wood, which has a grain. So naturally when I step out into my back garden in the evening, I can observe this for myself … or rather, I can't. Actually the night sky looks anything but homogeneous, let alone isotropic. Are the cosmologists deluded?

The answer lies in the fact that the cosmological principle applies to averaged properties. Apparently it is necessary to take averages over huge volumes of space, each of which contains vast numbers of galaxies, for the concepts of homogeneity and isotropy to apply. Evidently, to paraphrase J. B. S. Haldane (and following in the footsteps of Werner Heisenberg), the universe is not only bigger than we think, it is bigger than we can think. So, if I want to behave like an idiot, I should just go about proclaiming: `The cosmologists are mad. You only have to look up at the night sky to see that their claims about the uniformity of the universe are completely unjustified.' In doing so, I would be ignoring the details of what the cosmologists actually said, and surely no one would be so silly as to do that before launching into speech? Well, in turbulence that is exactly what many people do.

In turbulence, for many years we have had flow visualisations based on direct numerical simulation of the equations of fluid motion. These undoubtedly show a spotty distribution of various characteristics of interest, especially the dissipation rate, and this is generally taken as supporting the idea that turbulence intermittency has implications for statistical theories. Indeed, there are those who go further and see results like this as invalidating assumptions of homogeneity and isotropy. What they leave out of the reckoning is, first, that homogeneity and isotropy are properties of average quantities, in turbulence as in cosmology; and secondly, that the flow visualisations are snapshots or single realisations. If you average over them, the spottiness disappears, as indeed it must in order to conform to homogeneity and isotropy, and the field becomes uniform and without structure.
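The second point is easily illustrated with a toy random field (my own sketch; the squared Gaussian below is just a crude stand-in for a spotty, positive quantity like the dissipation, and has nothing to do with an actual DNS):

```python
import numpy as np

# Each single realisation of a homogeneous random field looks spotty,
# but averaging over an ensemble of M realisations gives a field whose
# spatial fluctuations shrink like 1/sqrt(M): statistically uniform.
rng = np.random.default_rng(1)
Nx, M = 64, 5000
snapshots = rng.standard_normal((M, Nx)) ** 2  # M spotty positive "fields"

single = snapshots[0]                   # one snapshot: strongly non-uniform
ensemble_mean = snapshots.mean(axis=0)  # ensemble average: nearly flat
```

Here `single` fluctuates with a standard deviation of order one, while `ensemble_mean` is flat to within a couple of percent: the spottiness is a property of the realisation, not of the statistics.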

If we go to the fountainhead for this subject, in Batchelor’s classic monograph on page 3 we may read: `The possibility of this further assumption of isotropy exists only when the turbulence is already homogeneous, for certain directions would be preferred by a lack of homogeneity’. Batchelor also points out that homogeneity and isotropy are average properties of the random variable, and in fact they are defined formally in terms of the probability distribution functional (the pdf, or equivalently its moments).

So this is where I answer my own question. It does matter. Clear thinking and the best possible understanding require us to be careful about the fact that homogeneity is a necessary condition for isotropy. In the process we have to be careful about definitions. In that way one can perhaps avoid the egregious errors which occur in a recent paper, where it is argued that intermittency at the small scales is incompatible with homogeneity and so invalidates the energy-balance equation derived rigorously by averaging the equations of motion. Actually, intermittency is present at all scales and is part of the exact solution of the equations of motion. It is not in any way incompatible with the pdf, which must take a form appropriate to both the intermittent (single-realisation) and homogeneous (ensemble-averaged) nature of the random field. We shall return to this publication in a more specific way in later posts.

The First Post


Many years ago, early in my career, I learned the hard way that every paper submitted for publication should be ruthlessly pared down to consist solely of factual material and fully justified statements. Any personal opinions, speculations, whimsical thoughts, comments or suchlike, should be eliminated; as, in the words of the essayist Francis Bacon, they would offer `hostages to fortune'. That is, there would be at least one referee who would make such an opinion (suitably misinterpreted!) the basis for outright rejection of the manuscript, probably accompanied by gratuitously offensive comments. This of course raises questions about the role of the editor in this increasingly fraught process of peer review, and that is something to which I shall return in future blogs.
In the middle period of my career, I would occasionally receive a referee’s report which expressed regret that I had not included more of my own views, and indicated that they would be welcome. My response to this was `No fear’, to use an expression from my remote childhood.

Recently I gave in to the temptation to do just that and, in what might well be my last journal submission (rejected by four different journals), I sweepingly dismissed both the Kolmogorov (1962) `revised theory' and Landau's criticism of the Kolmogorov (1941) theory, without explaining why. I suppose I was relying on the critique published in my book of 2014. But these dismissals were seized upon by one referee as grounds for rejecting the paper, followed by the patronising comment `Need I say more?' Well, actually what he needed to do was to say less and to think more. That too is something to which I shall return in future blogs.

Evidently my self-imposed constraints are beginning to chafe! So, as a blog (if it is to be of any value in offering clarification or stimulus) should in fact consist very largely of the things that I have omitted from papers, the temptation to blog is clear. As I began my postgraduate research in 1966, I am now in my forty-fifth year of turbulence research, so there should be no lack of material. Oh, and it should also be both pithy and hard-hitting. You have been warned.