
The exactness of mathematics and the inexactness of physics.

This post was prompted by something that came up in a previous one (i.e. see my blog on 12 August 2021), where I commented on the fact that an anonymous referee did not know what to make of an asymptotic curve. The obvious conclusion from this curve, for a physicist, was that the system had evolved! There was no point in worrying about the precise value of the Reynolds number. That is a matter of agreeing a criterion, if one needs to fix a specific value. But evidently the ratio shown was constant within the resolution limits of the measurements of the system; and this is the key point. Everything in physics comes down to experimental error: any meaningful comparison (i.e. theory with experiment, or one experiment with another) is subject to experimental error, which is inherent. Strictly, one should always quote the error, because it is never zero.

In everyday life, there are of course many practical expedients. For instance, radioactivity takes in principle an infinite amount of time to decay completely, so in practice radioisotopes are characterised by their half-life. So the manufacturers of smoke alarms can tell you when to replace your alarm, as they know the half-life of the radioactive source used in it. In acoustics or diffusion processes or electromagnetism, exponential decays are commonplace, and it is usual to introduce a relaxation time or length, corresponding to when/where the quantity of interest has fallen to $1/e$ of its initial value.
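For definiteness, the half-life and the $1/e$ relaxation time are related in the standard way: for a decay $N(t) = N_0\, e^{-t/\tau}$, setting $N(t_{1/2}) = N_0/2$ gives \[ t_{1/2} = \tau \ln 2 \simeq 0.69\, \tau, \] so the two conventions differ only by a numerical factor.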

In fluid mechanics, the concept of a viscous boundary layer on a solid surface is of great utility in reconciling the practical consequences of a flow (such as friction drag) with the elegance and solubility of theoretical hydromechanics. The boundary layer builds up in thickness in the stream-wise direction as vorticity created at the solid surface diffuses outwards. But how do we define that thickness? A reasonable criterion is to choose the point where the velocity in the boundary layer is approximately equal to the free-stream velocity. From my dim memory of teaching this subject several decades ago, a criterion of $u_1(x_2) \simeq U_1$ (in practice, a cutoff such as 99% of the constant free-stream velocity $U_1$) was adequate for pedagogic purposes.
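For illustration, in the laminar (Blasius) flat-plate boundary layer the standard textbook estimate corresponding to a 99% criterion is \[ \delta_{99}(x_1) \simeq \frac{5.0\, x_1}{\sqrt{Re_{x_1}}}, \qquad \mbox{where} \quad Re_{x_1} = \frac{U_1 x_1}{\nu}, \] which makes explicit both the growth of the thickness in the stream-wise direction and the fact that its numerical value depends on the agreed criterion.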

An interesting partial exception arises in solid state physics, when dealing with crystal lattices. The establishment of the lattice parameters is of course subject to the usual caveats about experimental error, but for statistical physics lattices are countable systems. So if one is carrying out renormalization group calculations (e.g. see [1]), then one is coarse-graining the description by replacing the unit cell, of side length $a$, by some larger (renormalized) unit cell. In wavenumber (momentum) space, this means we start from a maximum wavenumber $k_{max}=2\pi/a$ and average out a band of wavenumber modes $k_1 \leq k \leq k_0$, where $k_0=k_{max}$. You can see where the countable aspect comes in, and the initial wavenumber is precisely defined (although its precise value is subject to the error made in determining the lattice constant).
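To make the coarse-graining step concrete, consider a generic blocking transformation with scale factor $b>1$ (a schematic illustration, not tied to any particular scheme): the renormalized cell has side $a' = b\,a$, so that \[ k'_{max} = \frac{2\pi}{a'} = \frac{k_0}{b}, \] and the band averaged out is $k_1 \leq k \leq k_0$ with $k_1 = k_0/b$.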

When extending these ideas to turbulence, the problem of defining the maximum wavenumber is not solved so easily. Originally people (myself included) used the Kolmogorov dissipation wavenumber, but this is not necessarily the maximum excited wavenumber in turbulence. In 1985 I introduced a criterion which was rather like a boundary-layer thickness, adapting the definition of the dissipation rate, thus: \[\varepsilon = \int^{\infty}_0 \, 2\nu_0 k^2 E(k) dk \simeq \int^{k_{max}}_0 \, 2\nu_0 k^2 E(k) dk,\] where $\nu_0$ is the molecular viscosity and $E(k)$ is the energy spectrum [2]. When I first started using this, physicists found it odd, because they were used to the more precise lattice case. I should mention for completeness that it is also necessary to use a non-trivial conditional average [3].
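One possible way of making the '$\simeq$' in this definition operational (a convention offered here for illustration, not necessarily the one adopted in [2]) is to fix a small tolerance $\delta$ and take $k_{max}$ to be the smallest wavenumber satisfying \[ \int^{k_{max}}_0 \, 2\nu_0 k^2 E(k)\, dk \geq (1-\delta)\,\varepsilon, \] so that, just as with a boundary-layer thickness, the precise value depends on the criterion one agrees to use.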

Recently there has been growing interest in these matters by those who study the philosophy of maths and science. For instance, van Wierst [4] notes that in the theory of critical phenomena, phase transitions require an infinite system, whereas in real life they take place in finite (and sometimes quite small!) systems. She argues that this paradox can be resolved by the introduction of ‘constructive mathematics’, but my view is that it can be adequately resolved by the concept of scale-invariance. Which brings us back to the infinite Reynolds number limit for turbulence. But, for the moment, I have said enough on that topic in previous posts, and will not expand on it here.

[1] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.
[2] W. D. McComb. Application of Renormalization Group methods to the subgrid modelling problem. In U. Schumann and R. Friedrich, editors, Direct and Large Eddy Simulation of Turbulence, pages 67-81. Vieweg, 1986.
[3] W. D. McComb and A. G. Watt. Conditional averaging procedure for the elimination of the small-scale modes from incompressible fluid turbulence at high Reynolds numbers. Phys. Rev. Lett., 65(26):3281-3284, 1990.
[4] Pauline van Wierst. The paradox of phase transitions in the light of constructive mathematics. Synthese, 196:1863, 2019.




Nightmare on Buccleuch Street.
Staycation post No 4. I will be out of the virtual office until 30 August.

I haven't been into the university since the pandemic began, but recently I dreamt that I was in the university library, in the section where magazines and journals are kept. In this dream, I was sitting at one of the low tables reading a magazine, and two much younger men were also sitting there, in a suitably socially distanced way. As they were unknown to me, I will call them A and B [1]. A was leafing through The Physics of Fluids while B was staring at one particular page of a tabloid newspaper.

After a while, A speaks. 'Have you seen that interesting article about constraints on the scaling exponents in the inertial range?'

B shakes his head and goes on studying his tabloid. A continues. 'These guys use Hölder inequalities applied to the structure functions and then to the generalised structure functions; and end up with a condition relating the exponent for $S_2$ to the exponent for $S_3$. Now, if we assume that the exponent for $S_3$ is equal to $1$, then it follows that the exponent for $S_2$ is equal to $2/3$. This is exciting. Most people would agree with the first of these, but not the second.'

B continues to stare at his newspaper and makes no response. With a slight note of desperation in his voice, A goes on. ‘But don’t you see, this could fit in nicely with Lundgren’s matched asymptotic expansions analysis. It could also fit in with that guy’s blog about the K62 correction being unphysical. It looks like old Kolmogorov was right all the time … back in 1941. Aren’t you interested, at all?’

At last B looks up. 'No, why should I be? I don't use structure functions or spectra in my work. And you will go on using Kolmogorov scaling as you have always done, because it works. So why are you so excited?'

For a moment A just sits there. Then he gets up and puts the journal back in the rack. He stands in silence for a few moments. Then he says: 'You know, I keep feeling it's Thursday.'

For the first time B looks animated. 'That's funny, so do I. Let's go and have a drink.'

Exeunt omnes. It was only a dream and obviously couldn’t happen in real life. The paper to which A was referring is cited below as [2].

[1] There is no C in this story. See my post of 9 July 2020.
[2] L. Djenidi, R. A. Antonia, and S. L. Tang. Mathematical constraints on the scaling exponents in the inertial range of fluid turbulence. Phys. Fluids, 33:031703, 2021.




Why am I so concerned about Onsager’s so-called conjecture?
Staycation post No 3. I will be out of the virtual office until 30 August.

In recent years, Onsager’s (1949) paper on turbulence has been rediscovered and its eccentricities promoted enthusiastically, despite the fact that they are at odds with much well-established research in turbulence, beginning with Batchelor, Kraichnan, Edwards, and so on. In particular, a bizarre notion has taken hold that the Euler equation corresponds to the zero-viscosity limit of the Navier-Stokes equations and can be made dissipative, in defiance of the basic physics, by some mysterious alteration of the mathematics. The previous two posts refer to this.
I have been intending to write about this for some time, but the present paper [1] was prompted by an email that I received late in 2019 from MSRI, Berkeley. This was an advance announcement of a Program: ‘Mathematical problems in fluid dynamics’, to take place in the first half of 2021. I quote from the description as follows:

‘The fundamental equations in this area are the well-known Euler equations for inviscid fluids and the Navier-Stokes equations for the (sic) viscous fluids. Relating the two is the problem of the zero-viscosity limit and its connection to the phenomena of turbulence.’

The second sentence is nonsense and runs counter to all the conventions of fluid dynamics, where it has long been known that the relationship between the two equations is obtained by setting the viscosity equal to zero. The infinite Reynolds number limit, in contrast, is observed as an asymptotic behaviour of the Navier-Stokes equation; which, even at high Reynolds numbers, remains the Navier-Stokes equation.

I was appalled by the thought of young mathematicians being taught such unrepresentative and incorrect material. This is what provided my immediate motivation for writing the present paper. The first version of this paper was put on the arXiv on 12 December 2020.

In January of this year, I received from MSRI the final notification of this program. The wording had changed, and after some unexceptional statements about the equations of motion it read:

‘Open problems and connections to related branches of mathematics will be discussed, including the phenomena of turbulence and the zero-viscosity limit. Both theoretical and numerical aspects of these topics will be considered.’

Perhaps it is just a coincidence that this change should follow the arXiv publication of [1], but at least their statement about their course is no longer manifestly false; although much still depends on what was actually taught. It may be noted that Figure 2 of [1] (also see the previous post) shows the onset of scale invariance and, in effect, the zero-viscosity limit, in a direct numerical simulation at a Taylor-Reynolds number of about one hundred. This is the physical infinite Reynolds number limit as it occurs in real fluids.

Another aspect of the influence of Onsager is the use of the term dissipation anomaly, which is used instead of what some call the dissipation law. If one criticises the term, the mathematicians seem to believe that one is denying the existence of the effect. Not so. At Edinburgh we have worked on establishing the existence of the dissipation law and have also elucidated it as arising from the Richardson-Kolmogorov picture [2], [3]. It is a real physical effect and there is nothing anomalous about it.

[1] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614v2[physics.flu-dyn], 2021.
[2] W. David McComb, Arjun Berera, Matthew Salewski, and Sam R. Yoffe. Taylor's (1935) dissipation surrogate reinterpreted. Phys. Fluids, 22:61704, 2010.
[3] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.




That’s the giddy limit!
Staycation post No 2. I will be out of the virtual office until 30 August.

The expression above was still in use when I was young, and vestiges of its use linger on even today. It referred, often jocularly, to any behaviour which was deemed unacceptable. Why giddy? I’m afraid that the reference books are silent on that. However, I have encountered examples of mathematical limits which seemed to qualify for the adjective.

Shortly before I retired, I found myself teaching a mathematics course to third-year physics students. The purpose of this course was to try to bring our students up to speed in maths, after the mathematics lecturers had done their best in the previous two years. I suppose that it had a remedial aspect, and at that time the talk was all of the 'math problem'. One example of a 'giddy' limit, which sticks in my mind, arose when I was marking class exam papers. The question asked the students to sketch the function $\mathrm{sinc}\,\nu = \sin \nu / \nu$. This required them to work out its value at $\nu =0$, where of course direct substitution results in an indeterminate form. I need hardly say that they had to use either a Taylor series expansion of $\sin \nu$ or make use of l'Hôpital's rule to reveal the correct limiting value, which is unity. Or of course they could just sketch it and infer the limiting behaviour by eye.
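For the record, the Taylor-series route to the limit is immediate: \[ \frac{\sin \nu}{\nu} = \frac{1}{\nu}\left(\nu - \frac{\nu^3}{3!} + \frac{\nu^5}{5!} - \dots \right) = 1 - \frac{\nu^2}{6} + \frac{\nu^4}{120} - \dots \rightarrow 1 \quad \mbox{as} \quad \nu \rightarrow 0. \]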

One person did this beautifully, with all the zeros in the right places and the central peak heading up to the value one on both sides. It was as the $y$-axis was approached that giddiness seemed to set in, and the sketched curve then shot down to zero on both sides. The student then proudly declared it to be an indeterminate form. One, which just happened to be zero! This sudden abandonment of all reason was quite baffling and I never understood the reason for it.

However, I recently saw comments by an anonymous referee which seemed to come into a similar category. These were directed at Figure 2 in reference [1], which was intended to demonstrate that the physical infinite Reynolds number limit was determined by the onset of scale-invariance. We show this below. Scale-invariance in this context is defined to be when the maximum rate of inertial transfer $\varepsilon_T$ becomes equal to the viscous dissipation $\varepsilon$. As we were originally studying the dependence of the dimensionless dissipation on the Taylor-Reynolds number, we actually plot the ratio $ \varepsilon / \varepsilon_T $, which decreases towards unity, and this indicates the onset of scale-invariance.

Onset of the infinite Reynolds limit in stationary isotropic turbulence.

The referee looked at the figure and asked: how is the onset of scale-invariance defined? Is the onset placed at $R_{\lambda}=50,\,100,\,150$?

This seems to me to verge on the childish. Does he have no familiarity with the intersection between a mathematically asymptotic result and a real physical system? Has he never met viscous boundary layers, or the exponential decay of sound or other radiation? The answer in all these cases is set by the resolution of the physical measuring system. Once changes are too small to be measurable, then the asymptote has been reached. The curve that we show in the figure would go on at a constant level no matter how much one increased the Reynolds number.

The lesson to be drawn from this is that there are no further qualitative changes in the system as you increase the Reynolds number, and this is how real fluids behave. In the next blog we will consider the motivation for the research reported in [1].

[1] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614v2[physics.flu-dyn], 2021.




When is a conjecture not a conjecture?
Staycation post No 1. I will be out of the virtual office until 30 August.

That sounds like the sort of riddle I used to hear in childhood. For instance, when is a door not a door? The answer was: when it’s ajar! [1] Well, at least we all know what a door is, so let us begin with what a conjecture actually is.

According to my dictionary, a conjecture is simply a guess. But in mathematics it is somehow more than that. Essentially, the idea is that mathematicians can be guided by their experience to postulate that something they know to be true under particular circumstances is in fact true under all possible or relevant circumstances. If they can prove it, then their conjecture becomes a theorem.

The question then arises: what is a conjecture in physics? And if you can demonstrate its truth by measurement or reasoned argument, does it become a theory?

Let us take as an example a system such as an electrolyte or plasma containing many charged particles. The particles interact pairwise through the Coulomb potential, and as the Coulomb potential is long-range this presents a many-body problem. What happens in practice is that a form of renormalization takes place, and the Coulomb potential due to any one electron is replaced by a potential which falls off more rapidly, due to the screening effect of the cloud of particles surrounding it. A very simple introduction to this idea (which is known as the Debye-Hückel theory) can be found in Section 1.2.1 of the book cited as reference [2] below.
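Schematically, the bare Coulomb potential of a charge $q$ is replaced in the Debye-Hückel picture by a screened form (written here in Gaussian units): \[ \phi(r) = \frac{q}{r} \quad \longrightarrow \quad \phi(r) = \frac{q}{r}\, e^{-r/\lambda_D}, \] where the Debye length $\lambda_D$ is set by the temperature and the density of the charge carriers.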

If we take the case of the turbulence cascade, the Fourier wavenumber modes provide the degrees of freedom. Then, instead of pairwise interactions, we have the famous triad interactions, each and every one of which conserves energy. If for simplicity we consider a periodic box, then the mean flux of energy from low wavenumbers to high can be written as the sum of all the individual mean triadic interactions. As in principle all modes are coupled, this is also a many-body problem and one can expect some form of renormalization to take place. In some simple circumstances this can be interpreted as a renormalized viscosity (the effective viscosity) which is very much larger than the molecular viscosity. These ideas date back to the late 19th century and are the earliest example of renormalization (although that term itself came much later, around the mid-20th century).
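The classic example from this period is the eddy-viscosity hypothesis of Boussinesq, which for a simple mean shear may be written as \[ -\langle u_1 u_2 \rangle = \nu_t \frac{\partial U_1}{\partial x_2}, \qquad \nu_t \gg \nu_0, \] where $\nu_t$ is the effective (renormalized) viscosity and $\nu_0$ the molecular value.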

Now let us consider what happens as we progressively increase the Reynolds number. For the utmost simplicity we will restrict our attention to forced, stationary isotropic turbulence. Then, if we hold the rate of energy input into the system constant and decrease the viscosity progressively, this increases the Reynolds number at constant dissipation rate. It also increases the largest excited wavenumbers of the system. The result is a form of scale-invariance in which the flux through wavenumbers is independent of wavenumber, leading to the dissipation law: the scaled dissipation is independent of the viscosity as a rigorous asymptotic result [3]. It should perhaps be emphasised that this asymptotic behaviour is the infinite Reynolds number limit; but, from a practical point of view, we find that subsequent variation becomes too small to detect at Taylor-Reynolds numbers of a few hundred, and thereafter the scaled dissipation may be treated as constant. We will return to this point in the next post, along with an illustration.
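In terms of the usual dimensionless dissipation rate, the dissipation law may be stated schematically as \[ C_{\varepsilon} \equiv \frac{\varepsilon L}{U^3} \rightarrow \mbox{constant} \quad \mbox{as} \quad Re \rightarrow \infty, \] where $U$ and $L$ are a characteristic velocity and length scale of the energy-containing eddies.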

Meanwhile, back in real space, velocity gradients are becoming steeper as the Reynolds number increases, and this aspect disturbed Onsager [4] (see also the review of this paper in the context of Onsager's life and work [5]). In fact he concluded that the infinite Reynolds number limit was the same as setting the viscosity equal to zero. In his view, the resulting Euler equation could still account for the dissipation in terms of singular behaviour. But it has to be said that, in the absence of viscosity, there is no transfer of macroscopic kinetic energy into heat (i.e. microscopic kinetic energy). I have seen some references to pseudo-dissipation recently, so there is perhaps a growing awareness that Onsager's conjecture needs further critical thought.
Onsager's paper concludes with the sentence: 'The detailed conservation of energy (i.e. the global conservation law of the nonlinear term) does not imply conservation of the total energy if the total number of steps in the cascade is infinite and the double sum … converges only conditionally.' The italicised parenthesis is mine, as Onsager referred here to one of his equation numbers. However, this is merely an unsupported assertion which is incorrect on physical grounds because:
1. The number of steps is never infinite in a real physical flow.
2. The individual interactions are conservative so it is not clear how mere summation can lead to overall non-conservation.
3. The physical process involves a renormalization which means that there is a well-defined physical infinite Reynolds number limit at quite moderate Reynolds numbers.
It is totally unclear to me what mathematical justification there can be for this statement; and discussions of it that I have seen in the literature seem to me to be unsound on physical grounds. I shall return to these points in future blogs.

[1] That is, ‘a jar’, geddit? Oh dear, I suppose I am getting into holiday mood!
[2] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.
[3] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.
[4] L. Onsager. Statistical Hydrodynamics. Nuovo Cim. Suppl., 6:279, 1949.
[5] G. L. Eyink and K. R. Sreenivasan. Onsager and the Theory of Hydrodynamic Turbulence. Rev. Mod. Phys., 78:87, 2006.




How do we identify the presence of turbulence?

In 1971, when I began as a lecturer in Engineering Science at Edinburgh, my degree in physics provided me with no basis for teaching fluid dynamics. I had met the concept of the convective derivative in statistical mechanics, as part of the derivation of the Liouville equation, and that was about it. And of course the turbulence theory of my PhD was part of what we now call statistical field theory. Towards the end of autumn term, I was due to take over the final-year fluids course, but fortunately a research student who worked as a lab demonstrator for me had previously taken the course and kindly lent me his copy of the lecture notes. However, in my first year, I was never more than one lecture ahead of the students!

This grounding in the subject was reinforced by practical experience, when I began doing experimental work on drag reduction by additives and on particle diffusion. It also allowed me to recover quickly from an initial puzzlement, when I saw a paper in JFM which proposed that the occurrence of streamwise vorticity could be taken as a signal of turbulence in duct flow.

Later on, I learned that this idea could be extended to give a plausible picture of the turbulent bursting process, and a discussion can be found in Section 11.4.3 of my book [1], where the development of $\Lambda$ vortices is illustrated in Fig. 11.1. In the book, this is preceded by a treatment of the boundary layer on a flat plate in Section 1.4, which can help us to understand the basic idea as follows. Suppose we have a fluid moving with constant velocity $U_1$, incident on a flat plate lying in the ($x_1,x_3$) plane with its leading edge at $x_1=0$. Vorticity is generated at this point due to the no-slip boundary condition, and diffuses out normal to the plate in the $x_2$ direction, resulting in a velocity field of the form $u_1(x_2)$, in the boundary layer. We can visualize the sense of the vorticity vector by imagining the effect of a small portion of the fluid becoming solidified. That part nearest the plate will slow down, the ‘solid body’ will rotate, and the spin vector will point in the $x_3$ direction. This is the only component of vorticity in the system.
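To see this explicitly, take the velocity field in the form given above; then \[ \mathbf{u} = \big(u_1(x_2),\, 0,\, 0\big) \quad \Rightarrow \quad \boldsymbol{\omega} = \nabla \times \mathbf{u} = \left(0,\, 0,\, -\frac{\partial u_1}{\partial x_2}\right), \] so the only non-zero component of the vorticity is indeed along the $x_3$ axis.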

The occurrence of vorticity in the other two directions must be a consequence of instability and almost certainly begins with vorticity building up in the $x_1$ direction due to edge effects. That is, in practice, the plate must be of finite extent in the cross-stream or $x_3$ direction. A turbulence transition could not occur if the plate (as normally assumed for pedagogic purposes) were of infinite extent. This provides an unequivocal criterion for the occurrence of the transition to turbulence, but there is still the question of when the turbulence is in some sense well-developed. And of course other flows may require other criteria.

The question of whether a flow is turbulent or not became something of an issue in the 1980s/90s, when there was a growing interest in applying Renormalization Group (RG) to turbulence. The pioneering work on applying RG to randomly stirred fluid motion was reported by Forster, Nelson and Stephen [2] in 1976, and you should note from the title of their first paper that the word ‘turbulence’ does not appear. Their work was restricted to showing that there was a fixed point of the RG transformations in the limit of zero wavenumbers (i.e. ‘large wavelengths’).

The main drive in turbulence research is always towards applications, and inevitably pressure developed to seek ways of extending the work of Forster et al. to turbulence. In the process a distinction grew up between ‘stirred fluid motion’ and so-called ‘Navier-Stokes turbulence’. The latter should be described by the spectral energy balance known as the Lin equation, whereas the former just reflects its Gaussian forcing. Nowadays, in physics, the distinction has settled down to ‘stirred hydrodynamics’ and just plain turbulence!

The difficulty of defining turbulence in a concise way remains, but some light can be shed on these earlier controversies by considering a more recent discovery that we made at Edinburgh. This was the result that a dynamical system consisting of the Navier-Stokes equations, forced by the combination of an initial Gaussian field and a negative damping term, will at very low Reynolds numbers become non-turbulent and take the form of a Beltrami flow [3]. In this paper, we emphasised that at early times the transfer spectrum $T(k,t)$ has the behaviour typically found in simulations of isotropic turbulence, but at later times tends to zero. At the same time, the energy spectrum $E(k,t)$ tends to a unimodal spectrum at $k=1$. An interesting point is that the fixed point of Forster et al. at $k \rightarrow 0$ is cut off by our lattice, so that we observe a Beltrami flow instead.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] D. Forster, D. R. Nelson, and M. J. Stephen. Long-time tails and the large-eddy behaviour of a randomly stirred fluid. Phys. Rev. Lett., 36(15):867-869, 1976.
[3] W. D. McComb, M. F. Linkmann, A. Berera, S. R. Yoffe, and B. Jankauskas. Self-organization and transition to turbulence in isotropic fluid motion driven by negative damping at low wavenumbers. J. Phys. A Math. Theor., 48:25FT01, 2015.




Are Kraichnan’s papers difficult to read? Part 2: The DIA.

In 2008, or thereabouts, I took part in a small conference at the Isaac Newton Institute and gave a talk on the LET theory, its relationship to DIA, and how both theories could be understood in terms of their relationship to Quasi-normality. During my talk, I was interrupted by someone in the audience, who said that I was wrong in discussing DIA as if Kraichnan’s perturbation theory was the same as that of Wyld. I disagreed, and we had a short exchange of the kind ‘Yes you did! No, I didn’t!’, and the matter was left unresolved.

Sometime afterwards, I refreshed my memory of these matters and realised that I was wrong. Kraichnan’s seminal paper [1] is not easy to understand, but he was claiming to be introducing a new type of perturbation theory, and that undoubtedly differed from Wyld’s subsequent field-theoretic approach [2]. In his book on the subject, Leslie had simply chickened out and used the Wyld analysis [3]. Many of us had then followed in his tracks, but over the years (decades!) I had simply forgotten that fact. It was salutary to be reminded of it, and I duly said something about it in my later book on turbulence [4].

Again this draws attention to the danger of relying uncritically on secondary sources, but an interesting point emerged. Kraichnan made what was essentially a mean-field approximation in his theory. The fact that Wyld could show that the DIA gave identical results to the same order of truncation of conventional perturbation theory tells us that the mean-field approximation for the response function was justified; because the method of renormalization was the same for both approaches. This is of further interest, in that the recent formal derivation of the local energy-transfer (LET) theory also relies on a mean-field approximation involving the response function [5], although this is defined in a completely different way from that in DIA.

Among the select few who have actually got to grips with the new perturbation theory in [1] are my student Matthew Salewski, who did that as a preliminary to the resolution of the apparent differences between formalisms [6], and S. Kida, who revisited DIA in order to derive a Lagrangian theory (e.g. see reference [7]).

As regards the question which heads this post, we can leave the last word with the man himself. Kraichnan told me that on one occasion a referee had complained to him: ‘Why are your papers so difficult to read?’ and he had replied: ‘If you think they are hard to read, have you considered how difficult they must be to write?’.

[1] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[2] H. W. Wyld Jr. Formulation of the theory of turbulence in an incompressible fluid. Ann. Phys., 14:143, 1961.
[3] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.
[4] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[5] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[6] A. Berera, M. Salewski, and W. D. McComb. Eulerian Field-Theoretic Closure Formalisms for Fluid Turbulence. Phys. Rev. E, 87:013007-1-25, 2013.
[7] S. Kida and S. Goto. A Lagrangian direct-interaction approximation for homogeneous isotropic turbulence. J. Fluid Mech., 345:307-345, 1997.




Are Kraichnan’s papers difficult to read? Part 1: Galilean Invariance.

When I was first at Edinburgh, in the early 1970s, I gave some informal talks on turbulence theory. One of my colleagues became sufficiently interested to start doing some reading on the subject. Shortly afterwards he came up to me at coffee time and said. ‘Are all Kraichnan’s papers as difficult to understand as this one?’ The paper which he was brandishing at me was Kraichnan’s seminal 1959 paper which launched the direct interaction approximation (DIA) [1]. I had to admit that Kraichnan’s papers were in general pretty difficult to read; and I think that my colleague gave up on the idea. Shortly afterwards, Leslie’s book came out and this was very largely devoted to making Kraichnan’s work more accessible [2]; but I think that was too late for one disillusioned individual.

Recently I was reading a paper (might have been one of Kraichnan’s) and I was brought up short by something like ‘… and the variance takes the form:’ followed by a displayed mathematical expression. So it was rather like one half of an equation, with the other (first) half being in words in the text. So, I found that I had to remember what the variance was in this particular context, and then complete the equation in my mind. If I had been writing this, I would have used a symbol for the variance (even if just its definition as $\langle u^2 \rangle$) and displayed an actual equation. But what this reminded me of was my own diagnosis of the difficulty with Kraichnan’s style. I suspected that he would get tired of always writing in maths, and would feel the need for some variety. The trouble was that sometimes he would put the important bits in words, with a corresponding loss of conciseness and precision. As a result there was a temptation to rely on secondary sources such as Leslie’s book [2] or Orszag’s review article [3]; and I was by no means the only one to succumb to this temptation!

The fact that it could be unwise to do so emerged when we produced a paper on calculations of the LET theory (compared with DIA) and submitted it to the JFM [4]. We discussed the idea of random Galilean invariance (RGI) and argued that its averaging process violated the ergodic principle.

We set out the procedure of random Galilean transformation as follows. Consider a velocity field $\mathbf{u}(\mathbf{x},t)$ in a frame of reference $S$. Suppose that we have a set of reference frames $\{S_0,\,S_1,\,S_2,\, \dots\}$, moving with velocities $\{C_0,\,C_1,\,C_2,\,\dots\}$, where the shift velocities are all constant and the sub-ensemble is defined by the probability distribution $P(C)$ of the shift velocities. In practice, Kraichnan took this to be a normal or Gaussian distribution, and averaged with respect to $C$ as well as with respect to the velocity field.

However, Kraichnan’s response to our paper was ‘that’s not what I mean by random Galilean transformations’. But he didn’t enlighten us any further on the matter.

Around that time, a new research student started, and I asked him to go through Kraichnan's papers with the proverbial fine-tooth comb and find out what RGI really was. What he found was that Kraichnan was working with a composite ensemble made up from the members of the turbulent ensemble, each shifted randomly by a constant velocity. So the turbulence ensemble $\{\mathbf{u}^{i}(\mathbf{x},t)\}$, with the superscript $i$ taking integer values, was replaced by a composite ensemble $\{\mathbf{u}^{i}(\mathbf{x},t) + C_i\}$. This had to be inferred from a brief statement in words in a paper by Kraichnan!

The student then investigated this choice of RGT in conjunction with the derivation of theories and concluded that it was incompatible with the use of renormalized perturbation theory. In other words, Kraichnan was using it as a constraint on the theory, once the theory was actually derived. But in fact the underlying use of the composite ensemble invalidated the actual derivation of the theory. It would be too complicated to go further into this matter here, but a full account can be found in Section 10.4 of my book [5], which references Mark Filipiak's thesis [6].

This experience illustrates the danger of relying too much on secondary sources, however excellent they may be. I will give another example in my next post but I can round this one off with an anecdote. When I first met Bob Kraichnan he told me that he had been very angered by Leslie’s book. I think that he was unhappy at what he saw as an excessive concentration on his work, and also the fact that Leslie had dedicated the book to him. However, he said that various others had persuaded him that he was wrong to react in this way. I added my own voice to this chorus, pointing out that there was absolutely no doubt of his dominance as the father of modern turbulence theory; and the dedication was no more than a personal expression of admiration on the part of David Leslie.

[1] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[2] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.
[3] S. A. Orszag. Analytical theories of turbulence. J. Fluid Mech., 41:363, 1970.
[4] W. D. McComb, V. Shanmugasundaram, and P. Hutchinson. Velocity derivative skewness and two-time velocity correlations of isotropic turbulence as predicted by the LET theory. J. Fluid Mech., 208:91, 1989.
[5] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[6] M. J. Filipiak. Further assessment of the LET theory. PhD thesis, University of Edinburgh, 1992.




Hurrah for arXiv.com!
In my previous blog, I referred to my paper with Michael May [1], which failed to be accepted for publication, despite my having tried several journals. I suppose that some of my choices were unrealistic (e.g. Nature) and that I could have tried more. Also, I could have specified referees, which I don't like doing, but now increasingly suspect that it is prudent to do so. Anyway, I see from ResearchGate that, despite it only being on the arXiv, it continues to receive some attention; and I was pleased to find that it had actually been cited in published work.

It was only recently, when thinking of topics for another blog on peer review, that I remembered that I already had a paper on the arXiv; and it has been cited about a dozen times (although two of those are by me!). This was a paper with one of my students [2] which was presented at the Monte Verita conference in 1998. Naturally I expected it to appear in the conference proceedings, but it received a referee’s report that ran something like this: ‘No doubt the authors have some reasons of their own for doing these things but I am unable to see any interest or value in their work’. So we had to rely on the arXiv publication.

Now, the study of the filtered/partitioned nonlinear term, from the point of view of subgrid modelling and renormalization group, was quite an active field at that time, so the referee was actually revealing his own ignorance. (In fact, I know who it was, and someone who knew him personally told me that this is exactly the kind of person he is. Very enthusiastic about his own topic and uninterested in other topics.) This is an extreme deficiency of scholarship, but in my view it is not completely untypical of the turbulence community. It is perhaps worth mentioning that one of the results we presented was really quite profound in showing how a subgrid eddy viscosity could represent amplitude effects but not phase effects. Various people working in the field would have had an inkling of this fact, but we actually demonstrated it quantitatively by numerical simulation.

The paper also turned out to have some practical value. Later on, I received a request from someone who was preparing a chapter for inclusion in an encyclopaedia, for permission to reproduce one of our figures. This was published in 2004, and in 2017 a second edition appeared [3]. In 2004 the work was also cited in a specialist article on large-eddy simulation [4], and over the years it has been cited various times in this type of article, most recently in the present year. So, other people saw interest and value in the work, but it didn’t appear in the conference proceedings! The relevant figure appears below.

Figure 15 from reference [2] as reproduced in reference [3].
As a final point, I have sometimes wondered about the status of arXiv publications. An interesting point of view can be found in the book by Roger Penrose [5]. At the beginning of his bibliography he refers favourably to the arXiv, stating that some people actually regard it as a source of eprints, as an alternative to journal publication. He also notes how this can speed up the exchange of ideas, perhaps too much so!

Of course, in his subject, speculative ideas are an everyday fact of life. In turbulence, on the other hand, speculative ideas have little chance of getting past the dour, 'handbook engineering' mind-set of so many people in the field. So, let's all post our speculative ideas on the arXiv, where it is quite easy to find them with the aid of Mr Google.

[1] W. D. McComb and M. Q. May. The effect of Kolmogorov (1962) scaling on the universality of turbulence energy spectra. arXiv:1812.09174[physics.flu-dyn], 2018.
[2] W. D. McComb and A. J. Young. Explicit-Scales Projections of the Partitioned Nonlinear Term in Direct Numerical Simulation of the Navier-Stokes Equation. Presented at 2nd Monte Verita Colloquium on Fundamental Problematic Issues in Turbulence: available at arXiv:physics/9806029 v1, 1998.
[3] T. J. R. Hughes, G. Scovazzi, and L. P. Franca. Multiscale and Stabilized Methods. In E. Stein, R. de Borst, and T. J. R. Hughes, editors, Encyclopedia of Computational Mechanics Second Edition, pages 1-102. Wiley, 2017.
[4] T. J. R. Hughes, G. N. Wells, and A. A. Wray. Energy transfers and spectral eddy viscosity in large-eddy simulations of homogeneous isotropic turbulence: Comparison of dynamic Smagorinsky and multiscale models over a range of discretizations. Phys. Fluids, 16(11):4044-4052, 2004.
[5] Roger Penrose. The Road to Reality. Vintage Books, London, 2005.




The Kolmogorov (1962) theory: a critical review Part 2

Following on from last week's post, I would like to make a point that, so far as I know, has not previously been made in the literature of the subject. This is that the energy spectrum is (in the sense of thermodynamics) an intensive quantity. Therefore it should not depend on the system size. This is in contrast to the total kinetic energy (say), which does depend on the size of the system and is therefore extensive.

What applies to the energy spectrum also applies to the second-order structure function. If we now consider equation (1) from the previous blog, which is \begin{equation}S_2(r)=C(\mathbf{x},t) \varepsilon^{2/3}r^{2/3}(L/r)^{-\mu}, \label{62S2}\end{equation}then for isotropic, stationary turbulence, it may be written as: \begin{equation}S_2(r)=C \varepsilon^{2/3}r^{2/3} (L/r)^{-\mu}. \end{equation} Note that $C$ is constant, as it can no longer depend on the macrostructure.

Of course this still contains the factor $L^{-\mu}$. Now, $L$ is only specified as the external scale in K62, but it is necessarily related to the size of the system. Accordingly, taking the limit of infinite system size amounts to taking the limit of infinite $L$, which is needed in order to extend the wavenumber range down to $k=0$ and so be able to carry out Fourier transforms. If we do this, we have three possible outcomes. If $\mu$ is negative, then $S_2 \rightarrow \infty$ as $L \rightarrow \infty$, whereas if $\mu$ is positive, then $S_2$ vanishes in the limit of infinite system size. Hence, in either case, the result is unphysical, both by the standards of continuum mechanics and by those of statistical physics.
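The point is clearest if we isolate the dependence on $L$ in the stationary form above: \[ S_2(r) = C\, \varepsilon^{2/3}\, r^{2/3}\, (L/r)^{-\mu} = C\, \varepsilon^{2/3}\, r^{2/3+\mu}\, L^{-\mu}, \] so that, for fixed $r$, the factor $L^{-\mu}$ diverges as $L \rightarrow \infty$ when $\mu < 0$ and vanishes when $\mu > 0$.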

However, if $\mu = 0$ then there is no problem. The structure function (and spectrum) exist in the limit of infinite system size. Could this be an argument for K41?

Lastly, we should mention that McComb and May [1] have used a plausible method to estimate values of $L$ and, taking a representative value of $\mu=0.1$, have shown that the inclusion of this factor as in K62 destroys the well-known collapse of spectral data that can be achieved using K41 variables.
We began with the well-known graph in which one-dimensional projections of the energy spectrum for a range of Reynolds numbers are normalized on Kolmogorov variables and plotted against $k’=k/k_d$: see, for example, Figure 2.4 of the book [2], which is shown immediately below this text.

 

Measured one-dimensional spectra for a wide range of Reynolds numbers, showing the asymptotic effect of scaling on K41 variables. Reproduced from Figure 2.4 of Reference 2.

 

In this work, we referred to $L$ as $L_{ext}$ and we estimated it as follows. From the above graph, we see that the universal behaviour always occurs in the limit $R_\lambda \rightarrow \infty$, with all spectra collapsing to a single curve at $k'= k/k_d =1$. As the Reynolds number increases, each graph flattens off as $k$ decreases and ultimately forms a plateau at low wavenumbers. We argued that one can use the point where this departure takes place, $k'_{ext}$ (say), to estimate the external length scale, thus: \[L'_{ext} = 2\pi/k'_{ext}.\]
In order to make a comparison, we chose the results for a tidal channel at $R_{\lambda}=2000$ and for grid turbulence at $R_{\lambda}=72$. We show these two spectra, as selected from Fig. 1, on Figure 2 below.

 

Figure 2 from Reference 1.

 

Note that we plot the scaled one-dimensional spectrum $\psi(k’)=\phi(k’)/(\varepsilon \nu^5)^{1/4}$.
In the next figure, we plot these two spectra in compensated form, where we have taken the one-dimensional spectral constant to be $\alpha_{1}=1/2$, on the basis of Figure 2. In this form the $-5/3$ power law appears as a horizontal line at unity. We will return to this aspect later.

 

Figure 3 from Reference 1.

 

In order to assess the effect of including the K62 correction, we estimated $L'_{ext}\sim 50$ for the grid turbulence and $L'_{ext}\sim 2000$ for the tidal channel. In fact the spectra from the tidal channel do not actually peel off from the $-5/3$ line at low $k$, so our estimate is actually a lower bound for this case. This favours K62 in the comparison. We took the value $\mu = 0.1$, as obtained by high-resolution numerical simulation, and the result of including the K62 correction is shown in Figure 4.

 

Figure 4 from Reference 1.

 

It can be seen that including the K62 corrections destroys the collapse of the spectra which, apart from showing a slope of $-\mu = -0.1$ in both cases, are now separated and in a constant ratio of $0.69$. Evidently the universal collapse of spectra in Figure 1 would not be observed if the K62 corrections were in fact correct!
My final point is that one of the unfavourable referees for this paper had a major concern with the fact that the results for grid turbulence did not really show much $-5/3$ behaviour. This is to miss the point. The K41 scaling shows a universal form in the dissipation range, as well as in the inertial range. The inclusion of the K62 correction destroys this, when implemented with plausible estimates for the two parameters.

[1] W. D. McComb and M. Q. May. The effect of Kolmogorov (1962) scaling on the universality of turbulence energy spectra. arXiv:1812.09174[physics.flu-dyn], 2018.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.




The Kolmogorov (1962) theory: a critical review Part 1
As is well known, Kolmogorov interpreted Landau’s criticism as referring to the small-scale intermittency of the instantaneous dissipation rate. His response was to adopt Obukhov’s proposal to introduce a new dissipation rate which had been averaged over a sphere of radius $r$, and which may be denoted by $\varepsilon_r$. This procedure runs into an immediate fundamental objection.

In K41A (or its wavenumber-space equivalent), the relevant inertial-range quantity for the dimensional analysis is the local (in wavenumber) energy transfer. This is of course equal to the mean dissipation rate by the global conservation of energy. (It is a potent source of confusion that these theories are almost always discussed in terms of the dissipation $\varepsilon$, when the proper inertial-range quantity is the nonlinear transfer of energy $\Pi$. The inertial range is defined by the condition $\Pi_{max} = \varepsilon$.) However, as pointed out by Kraichnan [1], there is no such simple relationship between locally-averaged energy transfer and locally-averaged dissipation.

Although Kolmogorov presented his 1962 theory as `A refinement of previous hypotheses …', it is now generally understood that this is incorrect. In fact it is a radical change of approach. The 1941 theory amounted to a general assumption that a cascade of many steps would lead to scales where the mean properties of turbulence were independent of the conditions of formation (i.e. of, essentially, the physical size of the system). In 1962, by contrast, the assumption was, in effect, that the mean properties of turbulence did depend on the physical size of the system. We will return to this point presently, but for the moment we concentrate on the preliminary steps.

The 1941 theory relied on a general assumption with an underlying physical plausibility. In contrast, the 1962 theory involved an arbitrary and specific assumption. This was to the effect that the logarithm of $\varepsilon(\mathbf{x},t)$ has a normal distribution for large $L/r$ where $L$ is referred to as an external scale and is related to the physical size of the system. We describe this as `arbitrary’ because no physical justification is offered; but in any case it is certainly specific. Then, arguments were developed that led to a modified expression for the second-order structure function, thus: \begin{equation}S_2(r)=C(\mathbf{x},t)\varepsilon^{2/3}r^{2/3}(L/r)^{-\mu}, \label{62S2}\end{equation} where $C(\mathbf{x},t)$ depends on the macrostructure of the flow.

In addition, Kolmogorov pointed out that `the theorem of constancy of skewness …derived (sic) in Kolmogorov (1941b)' is replaced by \begin{equation} S(r) = S_0(L/r)^{3\mu/2},\end{equation} where $S_0$ also depends on the macrostructure.

Equation (\ref{62S2}) is rather clumsy in structure, in the way the prefactor $C$ depends on $\mathbf{x}$. This is because of course we have $\mathbf{r}=\mathbf{x}-\mathbf{x}'$, so clearly $C(\mathbf{x},t)$ also depends on $r$. A better way of tackling this would be to introduce centroid and relative coordinates, $\mathbf{R}$ and $\mathbf{r}$, such that \begin{equation}\mathbf{R} = (\mathbf{x}+\mathbf{x}')/2; \qquad \mbox{and} \qquad \mathbf{r}= ( \mathbf{x}-\mathbf{x}').\end{equation} Then we can re-write the prefactor as $C(\mathbf{R}, r; t)$, where the dependence on the macrostructure is represented by the centroid variable, while the dependence on the relative variable holds out the possibility that the prefactor becomes constant for sufficiently small values of $r$.

Of course, if we restrict our attention to homogeneous fields, then there can be no dependence of mean quantities on the centroid variable. Accordingly, one should make the replacement: \begin{equation}C(\mathbf{R}, r; t)=C(r; t),\end{equation} and the additional restriction to stationarity would eliminate the dependence on time. In fact Kraichnan [1] went further and replaced the pre-factor with the constant $C$: see his equation (1.9).

For the sake of completeness, another point worth mentioning at this stage is that the derivation of the `4/5' law is completely unaffected by the `refinements' of K62. This is really rather obvious. The Karman-Howarth equation involves only ensemble-averaged quantities, and the derivation of the `4/5' law requires only the vanishing of the viscous term. This fact was noted by Kolmogorov [2].

[1] R. H. Kraichnan. On Kolmogorov’s inertial-range theories. J. Fluid Mech., 62:305, 1974.
[2] A. N. Kolmogorov. A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number. J. Fluid Mech., 13:82-85, 1962.




The Landau criticism of K41 and problems with averages

The idea that K41 had some problem with the way that averages were taken has its origins in the famous footnote on page 126 of the book by Landau and Lifshitz [1]. This footnote is notoriously difficult to understand; not least because it is meaningless unless its discussion of the `dissipation rate $\varepsilon$’ refers to the instantaneous dissipation rate. Yet $\varepsilon$ is clearly defined in the text above (see the equation immediately before their (33.8)) as being the mean dissipation rate. Nevertheless, the footnote ends with the sentence `The result of the averaging therefore cannot be universal’. As their preceding discussion in the footnote makes clear, this lack of universality refers to ‘different flows’: presumably wakes, jets, duct flows, and so on.

We can attempt a degree of deconstruction as follows. We will use our own notation, and to this end we introduce the instantaneous structure function $\hat{S}_2(r,t)$, such that $\langle \hat{S}_2(r,t) \rangle =S_2(r)$. Landau and Lifshitz consider the possibility that $S_2(r)$ could be a universal function in any turbulent flow, for sufficiently small values of $r$ (i.e. the Kolmogorov theory). They then reject this possibility, beginning with the statement:

`The instantaneous value of $\hat{S}_2(r,t)$ might in principle be expressed as a universal function of the energy dissipation $\varepsilon$ at the instant considered.'

Now this is rather an odd statement. Ignoring the fact that the dissipation is not the relevant quantity for inertial-range behaviour, it is really quite meaningless to discuss the universality of a random variable in terms of its relation to a mean variable (i.e. the dissipation). A discussion of universality requires mean quantities. Otherwise it is impossible to test the statement. The authors have possibly relied on the qualification `at the instant considered’. But how would one establish which instant that was for various different flows?

They then go on:

`When we average these expressions, however, an important part will be played by the law of variation of $\varepsilon$ over times of the order of the periods of the large eddies (of size $\sim L$), and this law is different for different flows.’

This seems a rather dogmatic statement, but it is clearly wrong for the broad (and important) class of stationary flows. In such flows, $\varepsilon$ does not vary with time.

The authors conclude (as we pointed out above) that: `The result of the averaging therefore cannot be universal.’ One has to make allowance for possible uncertainties arising in translation, but nevertheless, the latter part of their argument only makes any sort of sense if the dissipation rate is also instantaneous. Such an assumption appears to have been made by Kraichnan [2], who provided an interpretation which does not actually depend on the nature of the averaging process.

In fact Kraichnan worked with the energy spectrum, rather than the structure function, and interpreted Landau’s criticism of K41 as applying to \begin{equation}E(k) = \alpha\varepsilon^{2/3}k^{-5/3}.\label{6-K41}\end{equation}
His interpretation of Landau was that the prefactor $\alpha$ may not be a universal constant because the left-hand side of equation (\ref{6-K41}) is an average, while the right-hand side is the 2/3 power of an average.

Any average involves the taking of a limit. Suppose we consider a time average, then we have \begin{equation} E(k) = \lim_{T\rightarrow\infty}\frac{1}{T}\int^{T}_{0}\widehat{E}(k,t)dt, \end{equation} where as usual the `hat’ denotes an instantaneous value. Clearly the statement \begin{equation}E(k) = \mbox{a constant};\end{equation}or equally the statement, \begin{equation}E(k) = f\equiv\langle\hat{f}\rangle, \end{equation} for some suitable $f$, presents no problem. It is the `2/3′ power on the right-hand side of equation (\ref{6-K41}) which means that we are apparently equating the operation of taking a limit to the 2/3 power of taking a limit.

However, it has recently been shown [3] that this issue is resolved by noting that the pre-factor $\alpha$ itself involves an average over the phases of the system. It turns out that $\alpha$ depends on an ensemble average to the $-2/3$ power and this cancels the dependence on the $2/3$ power on the right hand side of (\ref{6-K41}).

[1] L. D. Landau and E. M. Lifshitz. Fluid Mechanics. Pergamon Press, London, English edition, 1959.
[2] R. H. Kraichnan. On Kolmogorov’s inertial-range theories. J. Fluid Mech., 62:305, 1974.
[3] David McComb. Scale-invariance and the inertial-range spectrum in three-dimensional stationary, isotropic turbulence. J. Phys. A: Math. Theor.,42:125501, 2009.




The Kolmogorov-Obukhov Spectrum.

To lay a foundation for the present piece, we will first consider the joint Kolmogorov-Obukhov picture in more detail. For completeness, we should begin by mentioning that Kolmogorov also used the Karman-Howarth equation, which is the energy balance equation connecting the second- and third-order structure functions, to derive the so-called `$4/5$' law for the third-order structure function. This procedure amounts to a de facto closure, as the time-derivative is neglected (an exact step in our present case, as we are restricting our attention to stationary turbulence) and the term involving the viscosity vanishes in the limit of infinite Reynolds number. This is often referred to as `the only exact result in turbulence theory'; but increasingly it is being referred to, perhaps more correctly, as `the only asymptotically exact result in turbulence'.

As part of this work, he also assumed that the skewness was constant; and this provided a relationship between the second- and third-order structure functions which recovered the `$2/3$’ law. It is interesting to note that Lundgren used the method of matched asymptotic expansions to obtain both the `$4/5$’ and `$2/3$’ laws, without having to make any assumption about the skewness. This work also offered a way of estimating the extent of the inertial range in real space.

However, the Karman-Howarth equation is local in the independent variables and therefore does not describe an energy cascade. In contrast, the Lin equation (which is just its Fourier transform) shows that all the degrees of freedom in turbulence are coupled together. It takes the form, for the energy spectrum $E(k, t)$, in the presence of an input spectrum $W(k)$: \begin{equation}\frac{\partial E(k,t)}{\partial t} = W(k)+ T(k,t)- 2\nu_{0}k^{2}E(k, t),\label{lin}\end{equation} where $\nu_{0}$ is the kinematic viscosity and the transfer spectrum $T(k,t)$ is given by\begin{eqnarray}T(k,t) & = & 2\pi k^{2}\int d^{3}j\int d^{3}l\,\delta(\mathbf{k}-\mathbf{j}-\mathbf{l})M_{\alpha\beta\gamma}(\mathbf{k})\nonumber \\ & \times & \left\{C_{\beta\gamma\alpha}(\mathbf{j},\mathbf{l},\mathbf{-k};t)-C_{\beta\gamma\alpha}(\mathbf{-j},\mathbf{-l},\mathbf{k};t)\right\},\end{eqnarray}with \begin{equation} M_{\alpha\beta\gamma}(\mathbf{k})=-\frac{i}{2}\left[k_{\beta}P_{\alpha\gamma}(\mathbf{k})+k_{\gamma}P_{\alpha\beta}(\mathbf{k})\right],\label{M}\end{equation} and the projector $P_{\alpha\beta}(\mathbf{k})$ is \begin{equation}P_{\alpha\beta}(\mathbf{k})=\delta_{\alpha\beta}-\frac{k_{\alpha}k_{\beta}}{|\mathbf{k}|^{2}}, \end{equation}where $\delta_{\alpha\beta}$ is the Kronecker delta, and the third-order moment $C_{\beta\gamma\alpha}$ here takes the specific form: \begin{equation} C_{\beta\gamma\alpha}(\mathbf{j},\mathbf{l},\mathbf{-k};t)=\langle u_{\beta}(\mathbf{j},t)u_{\gamma}(\mathbf{l},t)u_{\alpha}(\mathbf{-k},t) \rangle.\end{equation}
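
As a quick check on the operators just defined, the following minimal sketch (with an arbitrarily chosen wavevector, purely for illustration) verifies numerically that the projector annihilates $\mathbf{k}$ and is idempotent, and that $M_{\alpha\beta\gamma}(\mathbf{k})$ is symmetric in its last two indices and transverse in its first.

```python
import numpy as np

k = np.array([1.0, -2.0, 0.5])                  # an arbitrary wavevector (assumed)
P = np.eye(3) - np.outer(k, k) / np.dot(k, k)   # projector P_ab(k)

print(np.allclose(P @ k, 0.0))   # True: P projects onto the plane normal to k
print(np.allclose(P @ P, P))     # True: P is idempotent

# Inertial transfer operator M_abc(k) = -(i/2) * (k_b*P_ac + k_c*P_ab).
M = -0.5j * (k[None, :, None] * P[:, None, :] + k[None, None, :] * P[:, :, None])

print(np.allclose(M, np.swapaxes(M, 1, 2)))             # symmetric in b and c
print(np.allclose(np.einsum('a,abc->bc', k, M), 0.0))   # transverse: k_a M_abc = 0
```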

At this stage we also define the flux of energy $\Pi(\kappa,t)$ due to inertial transfer through the mode with wavenumber $k=\kappa$. This is given by: \begin{equation}\Pi(\kappa,t) = \int_{\kappa}^{\infty}\,dk\,T(k,t).\end{equation}
Further discussion and details may be found in Section 4.2 of the book [1].
We now have a rather simple picture. In formulating our problem, the shape of the input spectrum should be chosen to be peaked near the origin, such that higher wavenumbers are driven by inertial transfer, with energy being dissipated locally by the viscosity. Then we can define the rate at which stirring forces do work on the system by: \begin{equation} \int_0^\infty \, W(k)\, dk = \varepsilon_W. \end{equation}

Obukhov’s idea of the constant inertial flux can be expressed as follows. As the Reynolds number is increased, the transfer rate, as given by equation (6), will also increase and must reach a maximum value, which in turn must be equal to the viscous dissipation. Thus we introduce the symbol $\varepsilon_T$ for the maximum inertial flux as: \begin{equation}\varepsilon_T = \Pi_{\mbox{max}},\end{equation} and for stationary turbulence at sufficiently high Reynolds number, we have the limiting condition: \begin{equation}\varepsilon = \varepsilon_T = \varepsilon_W.\end{equation}

Thus the loose idea of a local cascade involving eddies in real space is replaced by the precisely formulated concept of scale invariance of the inertial flux in wavenumber space. As is well known, this picture leads directly to the $-5/3$ energy spectrum in the limit of large Reynolds numbers.
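
Obukhov’s limiting condition can be illustrated with a minimal numerical sketch. The spectrum and forcing shapes below are assumptions made purely for illustration (a $k^4$ low-wavenumber form, a $k^{-5/3}$ inertial range with an exponential cut-off, and an input confined to low wavenumbers); the point is only that, once stationarity fixes $T(k) = 2\nu_0 k^2 E(k) - W(k)$, the flux $\Pi(\kappa)$ computed from it is flat across the intermediate wavenumbers and close to $\varepsilon_W$ there.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule on a (possibly non-uniform) grid."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

nu = 1e-4                                # kinematic viscosity (assumed value)
k = np.logspace(-2, 4, 4000)

# Model energy spectrum: ~k^4 at low k, k^(-5/3) inertial range, exponential
# cut-off near a nominal dissipation wavenumber kd. Shapes are illustrative only.
kd = (1.0 / nu**3) ** 0.25
E = k**(-5/3) * np.exp(-2.0 * k / kd) / (1.0 + (1.0 / k)**(4 + 5/3))
eps = trapz(2.0 * nu * k**2 * E, k)      # dissipation rate implied by E(k)

# Input spectrum peaked near the origin, normalised so that eps_W = eps.
W = k**4 * np.exp(-(k / 0.05)**2)
W *= eps / trapz(W, k)
eps_W = trapz(W, k)

# Stationary Lin equation: 0 = W(k) + T(k) - 2*nu*k^2*E(k).
T = 2.0 * nu * k**2 * E - W

# Flux through wavenumber kappa: Pi(kappa) = integral from kappa to infinity of T(k).
Pi = np.array([trapz(T[i:], k[i:]) for i in range(len(k))])

print(f"eps_W = {eps_W:.3f}, eps = {eps:.3f}, max flux = {Pi.max():.3f}")
# Where both W(k) and the dissipation are negligible, Pi(kappa) is flat and
# close to eps_W = eps, which is the constant-flux (scale-invariant) picture.
```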

[1] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.




Why do we call it ‘The Kolmogorov Spectrum’?

Why do we call it ‘The Kolmogorov Spectrum’?
The Kolmogorov $-5/3$ spectrum continues to be the subject of contentious debate. Despite its great utility in applications and its overwhelming confirmation by experiments, it is still plagued by the idea that it is subject to intermittency corrections. From a fundamental view this is difficult to understand because Kolmogorov’s theory (K41a) was expressed in terms of the mean dissipation, which can hardly be affected by intermittency. Another problem is that Kolmogorov actually derived the $2/3$ law for the structure function. Of course one can derive the spectrum from this result by Fourier transformation; but this is not a completely trivial process and we will discuss it in a future post.

The trouble seems to be that Kolmogorov’s theory, despite its great pioneering importance, was an incomplete and inconsistent theory. It was formulated in real space where, although the energy transfer process can be loosely visualised in terms of Richardson’s idea of a cascade, such a cascade is not mathematically well defined. Also, having introduced the inertial range of scales, where the viscosity may be neglected, he characterised this range by the viscous dissipation rate, which is not only inconsistent but incorrect. An additional complication, which undoubtedly plays a part, is that his theory was applied to turbulence in general. The basic idea was that the largest scales would be affected by the nature of the flow, but a stepwise cascade would result in smaller eddies being universal in some sense. That is, they would have much the same statistical properties, despite the different conditions of formation. In order to avoid uncertainties that can arise from this rather general idea, we will restrict our attention to stationary, isotropic turbulence here.

To make a more physical picture we have to follow Obukhov and work in $k$ space with the Fourier transform $\mathbf{u}(\mathbf{k},t)$ of the velocity field $\mathbf{u}(\mathbf{x},t)$. This was introduced by Taylor in order to allow the problem of isotropic turbulence to be formulated as one of statistical mechanics, with the Fourier components acting as the degrees of freedom. In this way, Obukhov identified the conservative, inertial flux of energy through the modes as being the key quantity determining the energy spectrum in the inertial range. It follows that, with the input and dissipation being negligible, the flux must be constant (i.e. independent of wavenumber) in the inertial range, with the extent of the inertial range increasing as the Reynolds number is increased; this was later recognized by Onsager in 1945. Later still, this property became widely known and for many years has been referred to by theoretical physicists as scale invariance. It should be emphasised that the inertial flux is an average quantity, as indeed is the energy spectrum, and any intermittency effects present, which are characteristics of the instantaneous velocity field, will inevitably be averaged out. Of course, in stationary flows the inertial transfer rate is the same as the dissipation rate, but in non-stationary flows it is not.

This is not intended to minimise the importance of Kolmogorov’s pioneering work. It is merely that we would argue that one also needs to consider Obukhov’s theory (also from 1941), and possibly also a later contribution from Onsager (in 1945), in order to have a complete theoretical picture. In effect this seems to have been the view of the turbulence community from the late 1940s onwards. Discussion of turbulent energy transfer and dissipation in isotropic turbulence was almost entirely in terms of the spectral picture. It was not until the extensive measurements of higher-order structure functions by Anselmet et al. (in 1984) that the real-space picture became of interest, along with the concept of anomalous exponents.

I would argue that we should go back to the term ‘Kolmogorov-Obukhov spectrum’, as indeed was quite often done in earlier years. We will develop this idea in the next post. All source references for this piece will be found in the book [1].

[1] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.




The different roles of the Gaussian pdf in Renormalized Perturbation Theory (RPT) and Self-Consistent Field (SCF) theory.

The different roles of the Gaussian pdf in Renormalized Perturbation Theory (RPT) and Self-Consistent Field (SCF) theory.

In last week’s blog, I discussed the Kraichnan and Wyld approaches to the turbulence closure problem. These field-theoretic approaches are examples of RPTs, while the pioneering theory of Edwards [1] is a self-consistent field theory. An interesting difference between them is the different ways in which they make use of a Gaussian (or normal) base distribution. Any theory is going to begin with a Gaussian distribution, because it is tractable. We know how to express all its moments in terms of the second-order moment. Of course, we also know that it predicts that odd order moments are zero, so some trick must be employed to get it to tell us anything about turbulence.

As we did last week, we begin with the Fourier-transformed solenoidal Navier-Stokes equation (NSE) written in an extremely compressed notation as: \begin{equation} \mathcal{L}_{0,k}u_k = \lambda M_{0,k}u_ju_{k-j},\end{equation} where the linear operator $\mathcal{L}_{0,k} = \partial /\partial t + \nu_0 k^2$, $\nu_0$ is the kinematic viscosity of the fluid, $M_{0,k}$ is the inertial transfer operator which contains the eliminated pressure term, and $\lambda$ is a book-keeping parameter which is used to keep track of terms during an iterative solution.

Now let us consider the closure problem. We multiply equation (1) through by $u_{-k}$ and average, to obtain: \begin{equation} \mathcal{L}_{0,k}\langle u_k u_{-k}\rangle= \lambda M_{0,k}\langle u_ju_{k-j}u_{-k}\rangle,\end{equation} where the angle brackets denote an average. Evidently, if we evaluate the averages here with a Gaussian pdf, the triple moment vanishes (trivially, by symmetry).

Then we set up a perturbation-type approach by expanding the velocity field in powers of $\lambda$ as: \begin{equation} u_k = u^{(0)}_k + \lambda u^{(1)}_k + \lambda^2 u^{(2)}_k + \lambda^3 u^{(3)}_k + \dots, \end{equation} where $u^{(0)}_k$ is a velocity field with a Gaussian distribution. The general procedure has two steps. First, substitute the expansion (3) into the right-hand side of equation (1) and calculate the coefficients iteratively in terms of the $u^{(0)}_k$. Secondly, substitute the explicit form of the expansion, now entirely expressed in terms of the $u^{(0)}$, into the right-hand side of equation (2), and evaluate the averages to all orders, using the rules for a Gaussian distribution. If we denote the inverse of the linear operator by $\mathcal{L}^{-1}_{0,k} \equiv R_{0,k}$, and the Gaussian zero-order covariance by $\langle u_k u_{-k}\rangle=C_{0,k}$, then the triple moment on the right-hand side of equation (2) can be written to all orders in products and convolutions of $R_{0,k}$ and $C_{0,k}$.
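
The two-step procedure can be seen in miniature with a toy scalar analogue (an assumption made purely for illustration; it is not the NSE and has no wavenumber convolutions): take $L_0 u = f + \lambda u^2$, iterate in powers of $\lambda$ with $u^{(0)} = R_0 f$ Gaussian, and then evaluate the averages with the Gaussian moment rules.

```python
import sympy as sp

lam, R0, C0, u0 = sp.symbols('lambda R_0 C_0 u_0')

# Step 1: iterate the quadratic term, inverting L0 by multiplying with R0.
u1 = R0 * u0**2              # order lambda:    L0*u1 = u0**2
u2 = 2 * R0**2 * u0**3       # order lambda**2: L0*u2 = 2*u0*u1

u = u0 + lam * u1 + lam**2 * u2

# Step 2: form <u u>, truncate at order lambda**2, and evaluate the averages
# with the zero-mean Gaussian rules <u0>=0, <u0**2>=C0, <u0**3>=0, <u0**4>=3*C0**2.
uu = sp.expand(u * u)
uu = sum(uu.coeff(lam, n) * lam**n for n in range(3))
gaussian = [(u0**4, 3 * C0**2), (u0**3, 0), (u0**2, C0), (u0, 0)]
print(sp.simplify(uu.subs(gaussian)))   # -> C_0 + 15*C_0**2*R_0**2*lambda**2
```

The odd-order terms vanish, just as the triple moment did above, and the first correction to the covariance appears at order $\lambda^2$, written entirely in terms of $R_0$ and $C_0$; this is the structure on which the renormalization described next operates.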

Kraichnan introduced renormalization in this problem by making the replacements: \[R_{0,k}\rightarrow R_{k} \quad \mbox{and} \quad C_{0,k} \rightarrow C_k,\] to all orders in the perturbation expansion of the triple-moment in (2). This step involves partial summations of the perturbation expansion in different classes of terms.
At this point it is worth noting that what happens here is rather like a direct numerical simulation of the NSE. There we begin with a Gaussian initial field. As time goes on, the nonlinear term induces couplings between modes and the system moves to a field which is representative of Navier-Stokes turbulence. Of course the initial distribution is constrained in this case to give the total energy that we require in the simulation. Note that the zero-order field in perturbation theory is in principle present at all times and is not constrained in this way.

In contrast, what Edwards introduced was a perturbation expansion of the probability distribution function of the velocity field, not of the velocity field itself. For this reason, he did not work directly with the NSE but instead used it to derive a Liouville equation for the probability distribution $P[u,t]$. It should be noted that the Liouville equation, although containing the nonlinearity of the velocity field, is nevertheless a linear equation for the pdf. Edwards then expanded $P[u,t]$, the exact pdf, as follows: \begin{equation}P[u,t] = P^{(0)}[u] + \epsilon P^{(1)}[u,t] + \epsilon^2 P^{(2)}[u,t] + \mathcal{O}(\epsilon^3),\end{equation} where $P^{(0)}[u]$ is a Gaussian distribution. The significant step here is to demand that the zero-order pdf gives the same result for the second-order moment as the exact pdf. That is, \begin{equation}\int \, P^{(0)}[u] \, u_ku_{-k} \mathcal{D}u = \int \, P[u,t] \, u_ku_{-k} \mathcal{D}u \equiv C_k. \end{equation}

This is in fact the basis of the self-consistency requirement in the theory. For further details the interested reader should consult either of the books referenced below as [1] and [2]. The Edwards method [3] does not rely on partially summing infinite perturbation series, nor is it like the functional formalisms which are equivalent to such summation procedures. Instead it relies on the fact that the measured pdf in turbulence is not very different from a Gaussian. In this respect, it is encouraging that it gives similar results to the RPTs. This resemblance is heightened in the recent derivation of the LET theory as a two-time SCF [4], thus extending the Edwards method.

[1] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[3] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[4] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.




What if anything is wrong with Wyld’s (1962) turbulence formulation?

What if anything is wrong with Wyld’s (1962) turbulence formulation?

When I began my PhD in 1966, I found Wyld’s paper [1] to be one of the easiest to understand. However, one feature of the formalism struck me as odd or incorrect, so I didn’t spend any more time on it. But I had found it very useful in helping me to understand how a theory like Kraichnan’s DIA could work. In short, I thought that it had pedagogic value. Some years later, when I wrote up my first attempt to derive a two-time version of the LET theory [2], I made use of a variant of Wyld’s formalism, albeit with his procedural error corrected. I was surprised by the hostility of the referees towards Wyld’s work, which they said had been subject to later criticism. As is so often the case with referees in this field, they accepted the criticism as utterly damning, without, apparently, any critical thought or nuanced reaction of their own.

My aim in this blog is to explain what I noticed about Wyld’s formalism all those years ago, and I shall give only as much of his method as necessary to make this a brief and understandable point. We begin with the Fourier-transformed solenoidal Navier-Stokes equation, written in an extremely compressed notation as: \begin{equation} \mathcal{L}_{0,k}u_k = \lambda M_{0,k}u_ju_{k-j},\end{equation} where the linear operator $\mathcal{L}_{0,k} = \partial /\partial t + \nu_0 k^2$, $\nu_0$ is the kinematic viscosity of the fluid, $M_{0,k}$ is the inertial transfer operator which contains the eliminated pressure term, and $\lambda$ is a book-keeping parameter which is used to keep track of terms during an iterative solution. Properly detailed versions of these equations may be found in either [3] or [4], but these will be sufficient for my present purposes.

Now let us begin with the closure problem. We multiply equation (1) through by $u_{-k}$ and average, to obtain: \begin{equation} \mathcal{L}_{0,k}\langle u_k u_{-k}\rangle= \lambda M_{0,k}\langle u_ju_{k-j}u_{-k}\rangle,\end{equation} where the angle brackets denote an average. Then we set up a perturbation-type approach by expanding the velocity field in powers of $\lambda$ as: \begin{equation} u_k = u^{(0)}_k + \lambda u^{(1)}_k + \lambda^2 u^{(2)}_k + \lambda^3 u^{(3)}_k + \dots, \end{equation} where $u^{(0)}_k$ is a velocity field with a Gaussian distribution.

The general procedure then has two steps. First, substitute the expansion (3) into the right-hand side of equation (1) and calculate the coefficients iteratively in terms of the $u^{(0)}_k$. Secondly, substitute the explicit form of the expansion, now entirely expressed in terms of the $u^{(0)}$, into the right-hand side of equation (2), and evaluate the averages to all orders, using the rules for a Gaussian distribution. If we denote the inverse of the linear operator by $\mathcal{L}^{-1}_{0,k} \equiv R_{0,k}$, and the Gaussian zero-order covariance by $\langle u_k u_{-k}\rangle=C_{0,k}$, then the triple moment on the right-hand side of equation (2) can be written to all orders in products and convolutions of $R_{0,k}$ and $C_{0,k}$.

Wyld did not follow this procedure exactly. Instead, he inverted the linear operator on the left hand side of (2), and wrote an expression for the exact covariance $C_k$ as: \begin{equation} \langle u_k u_{-k}\rangle \equiv C_k= R_{0,k}\lambda M_{0,k}\langle u_ju_{k-j}u_{-k}\rangle.\end{equation} Of course, (4) is mathematically equivalent to (2), so does this matter? Well, when we consider renormalization, it does!

Kraichnan introduced renormalization in this problem by making the replacements: \[R_{0,k}\rightarrow R_{k} \quad \mbox{and} \quad C_{0,k} \rightarrow C_k\] to all orders in the perturbation expansion of the triple-moment in (2). When Wyld used diagram methods to show how such a renormalization could come about, by summing subsets of terms to all orders, he in effect also renormalized both the explicit operators $R_{0,k}$ and $M_{0,k}$ on the right-hand side of (4). The first of these erroneous steps created the famous double-counting problem, while the second raised questions about vertex renormalization. A full account of this topic and the introduction of `improved Lee-Wyld theory’ can be found in reference [5].

Lastly, for the sake of completeness, I should mention that reference [2] was superseded in 2017 by reference [6] as the derivation of the two-time LET theory.

[1] H. W. Wyld Jr. Formulation of the theory of turbulence in an incompressible fluid. Ann. Phys., 14:143, 1961.
[2] W. D. McComb. A theory of time dependent, isotropic turbulence. J.Phys.A:Math.Gen., 11(3):613, 1978.
[3] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[4] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[5] A. Berera, M. Salewski, and W. D. McComb. Eulerian Field-Theoretic Closure Formalisms for Fluid Turbulence. Phys. Rev. E, 87:013007-1-25, 2013.
[6] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.




Is turbulence research still in its infancy?

Is turbulence research still in its infancy?
Recently I came across the article by Lumley and Yaglom which is cited below as [1]. I think it is new to me but quite possibly I will find it lurking in my filing system when at last I am able to visit my university office again. It is always good to get something gossipy and opinionated to read about turbulence as a welcome relief from all the worthy but demanding research papers! In any case, their Abstract is well worth quoting here:
‘This field does not appear to have a pyramidal structure, like the best of physics. We have very few great hypotheses. Most of our experiments are exploratory experiments. What does this mean?’
They go on to answer their own question: ‘We believe it means that, even after 100 years, turbulence studies are still in their infancy.’

I’m not quite sure what is meant by the phrase ‘pyramidal structure’, but overall the general sense is clear; and really quite persuasive. Indeed, even after a further two decades, which have been marked by an explosive growth in research, this depressing view is still to a considerable extent justified. However, I think that it might be of interest to consider in what ways it is justified and in which ways the comparison with physics may be unfair.

There are of course the unresolved issues of fundamental turbulence theory, but what is more compelling in my view, is the bizarre and muddled nature of some key aspects of the subject. To begin with, there is the Kolmogorov spectrum. Nowadays it is probably well known that Kolmogorov worked in real space and derived the $2/3$ law, from which the $-5/3$ spectrum of course follows by Fourier transformation. Yet beginning with Batchelor’s monograph [2], and for decades thereafter, discussion of the subject was entirely in terms of wavenumber space. A particularly egregious example arises in the book by Hinze [3]. After acknowledging [2], he writes: ‘These considerations have led Kolmogoroff (sic) to make the following hypothesis.’ He then goes on to state the hypothesis (top of page 184 in the first edition) and expresses it in terms of wavenumber. As his statement of the hypothesis is in inverted commas, I assumed that it was a quotation from Kolmogorov’s paper [4], but Kolmogorov nowhere uses the word ‘wavenumber’ in that paper!

This is not in itself a serious matter. But it is symptomatic, and the fact remains that various commentators rely on a real-space treatment to draw conclusions about spectra. For me, the truly astonishing fact is that I have been unable to find an exegesis of Kolmogorov’s original paper anywhere. All treatments are brief and superficial, in contrast to his later paper [5] in which he derived the $4/5$ law. This of course has been widely reviewed and discussed in detail. Which is perhaps not unconnected with the fact that it is very much easier to understand!

There are other schools of thought that one can point to, where the real problem is a failure to realise that the ideas being put forward are unphysical. For instance, the uncritical adoption of Onsager’s pioneering work in which the viscosity is put equal to zero instead of taking the limit of zero viscosity. The result is the unphysical idea of dissipation taking place in the absence of viscosity, which of course it cannot. Absorption of energy by an infinite wavenumber space is not the same as viscous dissipation. At best it might be described as pseudo dissipation. Further discussion of this topic can be found in reference [6].

To round this off, there is Kolmogorov’s 1962 paper, presenting what he described as ‘a refinement of previous hypotheses’. In fact, as is increasingly recognised, it is nothing of the sort. It is instead the wholesale abandonment of previous hypotheses. But I have said that elsewhere. What concerns me here is that the theory is manifestly unphysical. The energy spectrum is (in thermodynamic terms) an intensive quantity. Thus the factor $L^{\mu}$, which is now incorporated into the power-law form, violates the requirement that an intensive quantity should not depend on the size of the system. In the limit of infinite system size, the energy spectrum must now go to zero if the exponent is negative and to infinity if it is positive. Curiously, no one seems to have commented on this.

Lumley and Yaglom were referring to the problem of achieving a fundamental understanding of turbulence and it is perhaps worth keeping in mind that the great success of physics is based on the happy accident of linearity. On purely taxonomic grounds, turbulence belongs to the class of many-body problems with strong coupling. These are just as intractable in nuclear physics, particle physics, and condensed matter physics as in fluid turbulence. The difference is that these activities are generally pursued in a more scholarly way, with a more collegial atmosphere among the participants. As a previous generation used to say: verb. sap!

[1] J. L. Lumley and A. M. Yaglom. A Century of Turbulence. Flow, Turbulence and Combustion, 66:241, 2001.
[2] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 1st edition, 1953.
[3] J. O. Hinze. Turbulence. McGraw-Hill, New York, 1st edition, 1959.
[4] A. N. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. C. R. Acad. Sci. URSS, 30:301, 1941.
[5] A. N. Kolmogorov. Dissipation of energy in locally isotropic turbulence. C. R. Acad. Sci. URSS, 32:16, 1941.
[6] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614v2[physics.flu-dyn], 2021.




Culture wars: applied scientists versus natural scientists.

Culture wars: applied scientists versus natural scientists.

In my early years at Edinburgh, I attended a seminar on polymer drag reduction; and, as I was walking back with a small group, we were discussing what we had just learned. In response to a comment made by one member of the group, I observed that it made the problem seem horribly complicated. The others nodded in agreement, with the exception of an American who was visiting the Chemical Engineering department. He turned on me and said reprovingly, ‘You mean that it’s beautifully complicated.’ The implication was very much that this problem was a foe worthy of his intellectual steel, so to speak. Well, I wonder how he got on with that?

It struck me at the time as an indication of a different culture. Physicists and mathematicians seem to see beauty in simplicity, even to the point of regarding it as evidence in favour of a particular theory. Do applied scientists and engineers really see beauty in complication? Even engineering structures as different as a bridge, a motor car or a ship are often held to conform to the old engineering adage: if it looks right, it is right! That surely is an appreciation of simplicity of design, is it not?

Nevertheless, the idea that there are different cultures came to me early on in my career. I can remember that when I started out in the nuclear power industry, a colleague who was a chemical engineer (this is just coincidence: I haven’t got it in for chemical engineers!) said to me, ‘I don’t see any point in physics as a discipline. What’s the use of it?’ So I pointed out that we both owed our employment to physics and he had to reluctantly concede that perhaps nuclear physics had some point after all! That was in the early 1960s, and since then developments in condensed matter physics have, through the agency of materials science, chemistry and microelectronics, transformed the world that we live in.

Over the years I have heard many comments like that made by engineers about physics but I cannot recall any physicist making a similar comment about engineering. Generally, the attitude that I have picked up is a sort of respectful assumption that the engineer has other skills which generally produce impressive results. Perhaps the difference here is that the physicists are clear about their own ignorance of the details of engineering science whereas engineers tend to assume that what they don’t know doesn’t exist?

Shortly after my first book on turbulence was published [1], I received a letter (yes, not an email!) from the late Stan Corrsin, who commented on it and also sent a copy of a review that he had written of David Leslie’s earlier book [2]. I found his review very interesting because it addressed the problem that seems to be ignored by most people: that when theoretical physicists start tackling turbulence the results should be of interest to engineers but may in fact be unintelligible to them. This is not a matter of not being able to follow the mathematics so much as ‘not sharing assumptions about what is natural or appropriate to do in any given circumstance’. In other words, what I am trying to describe by the word ‘culture’. This is about all I can remember from the review. I may still have it in my office, but that has been off limits to me for more than a year now, and I have been unable to find the review online. One other phrase that I do recall is that Corrsin said, in effect, that Leslie’s book did help to bridge this gap, but that ‘it was no Rosetta stone’.

Sometimes I think that it is impossible to provide a Rosetta stone for this purpose and it is only when theoretical physicists become tired of staring at their own navels, that we will see a flowering of theory in turbulence and other practical problems. That will happen when they become bored with strings, multiverses, dark matter, quantum gravity and similar fantasy physics.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.




‘A little learning is a dangerous thing!’ (Alexander Pope, 1688-1744)

‘A little learning is a dangerous thing!’ (Alexander Pope, 1688-1744)
I have written about the problems posed by the different cultures to be found in the turbulence community; and in particular of the difficulties faced by some referees when confronted by Fourier methods. My interest in the matter is of course the difficulties faced by the author who dares to use Fourier transforms when he encounters such an individual. In my post on 20 April 2020, I told of the referee who described Fourier analysis as ‘the usual wavenumber murder’. Thinking of this brought back a rather strange incident from the mid-1970s, and it occurs to me that it really underlines my point.

In those days, we used to get visitors from the United States, who would come for a day and ask various people about their work. I seem to recall that they were sponsored by the Office of Naval Research and, as we benefited from a huge flow of NASA reports, stemming from their various programmes, it seemed only fair to send something back.

One particular visitor was a fluid dynamicist who worked on the lubrication of journal bearings. He was known to my colleagues in this area, who told me that he was eminent in that field. So, once he was settled in my office and we had got over the usual preliminaries, he asked me to explain my theoretical research to him. I went to the blackboard and happily began explaining about eliminating the pressure from the Navier-Stokes equation and then how to Fourier transform it.

I hadn’t got very far when he held up his hand and said, ‘Stop right there! I wouldn’t use Fourier transforms with a nonlinear problem like turbulence.’

I was a little bit taken aback, but my main reaction was that this was a chance for me to learn something, because it was at that time that I was receiving reports from JFM referees which were hostile to the use of Fourier methods.
I didn’t waste time in asking him why. I just asked what he would use instead. His reply astonished me. ‘I would use the Green’s function method.’

In the circumstances I saw no point in continuing and changed the topic to talk about my other work. He seemed quite happy about that. Perhaps it was just a cunning plan to avoid listening to some boring mathematics for an hour or so?
At this stage it will be clear to many people why I did not continue the discussion. But for those who don’t know, there were two points:
A. My visitor was wrong at the most fundamental level. Green’s functions are only applicable to linear problems. For instance, we can eliminate the pressure field from the NSE because it satisfies a Poisson equation, which is of course linear (see the sketch below).
B. As a sort of corollary of awfulness, a standard method of evaluating Green’s functions is by the use of Fourier transforms!
These matters are discussed in detail in Appendix D of reference [1] below.
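
For completeness, the manipulation in question can be sketched in outline (taking the density to be unity and leaving the details to the reference just cited). The divergence of the NSE, together with the solenoidal condition, gives a Poisson equation for the pressure, \[\nabla^2 p = -\frac{\partial^2 (u_\alpha u_\beta)}{\partial x_\alpha \partial x_\beta},\] whose Green’s function solution is \[p(\mathbf{x}) = \frac{1}{4\pi}\int \frac{1}{|\mathbf{x}-\mathbf{x}'|}\,\frac{\partial^2 (u_\alpha u_\beta)}{\partial x'_\alpha \partial x'_\beta}\, d^3x',\] while in wavenumber space the same Green’s function is obtained purely algebraically, \[\hat{p}(\mathbf{k}) = -\frac{k_\alpha k_\beta}{k^2}\,\widehat{(u_\alpha u_\beta)}(\mathbf{k}).\] This is precisely the step by which the pressure is eliminated before the nonlinear problem is even posed: the Green’s function handles the linear part, and the Fourier transform is the natural way to evaluate it.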

The line from Alexander Pope quoted in my title has passed into the language as a caution against being too authoritative when one is not really an expert. The question of who does more harm, someone who thinks he knows all about Fourier methods or someone who is frightened of them and behaves in a childish way, is really a moot point.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.




Intermittency, intermittency, intermittency!

Intermittency, intermittency, intermittency!
It is well known that those who are concerned with the sale of property say that the three factors determining the value of a house are: location, location, location. In fact I believe that there is a television programme with that as a title. This trope has passed into the general consciousness; so much so, that a recent prime minister declared his principal objectives in government to be: education, education, education. (Incidentally, I wonder how that worked out?)

My use of the title here is not to suggest that I think that intermittency is the dominant feature of the turbulent velocity field, or indeed of any particular importance, so much as to draw attention to the fact that there are three types of turbulent intermittency. Of course in complicated situations such as in turbomachinery, an anemometer signal can be interrupted by the passage of a rotor, say. That would be a form of intermittency. However, by intermittency, what I have in mind is something intrinsic to the turbulent field and not caused by some external behaviour. I believe that is what most people would mean by it.
For convenience, we may list these different types, as follows:

1. Free surface intermittency. This form of intermittency occurs in flows like wakes and unconfined jets. It arises from the irregular nature of the boundary of the flow. An anemometer positioned at the edge of the flow will sometimes register a turbulent signal and sometimes not. There is also a dynamical problem posed by the interaction between the flow of the wake or jet and the ambient fluid, but that is not something that we will pursue here.

2. The bursting process in pipe flow. This was discovered in the 1960s, when it was found that a short-sample-time autocorrelation could show a near-sinusoidal variation with time, corresponding to a sequence of events in which turbulent energy was generated locally in both space and time. Measurement of the bursting period was helpful in understanding the mechanism of drag reduction by polymer additives.

3. Internal intermittency. This is the apparent inability of the eddying motions of turbulence to fill space, even in isotropic turbulence. Originally it was referred to as dissipation intermittency and later on as fine-structure intermittency. In recent years it has been established, by means of high-Reynolds-number simulations, that this inability to fill space is in fact present at all length scales. Thus the growing modern practice is to describe it as internal, which distinguishes it from the two types of intermittency above.

An account of all three types may be found in Section 3.2 of the book [1], although there I used the term fine-structure intermittency, in line with other writers at the time. I should also point out that I would no longer give the same prominence to the instantaneous dissipation. I am now clear that the failure to distinguish between this and its mean value, combined with the failure to recognise that the significant quantity in determining the inertial-range spectrum (or structure function) is the inertial transfer rate, underpins much of the confusion over the $k^{-5/3}$ (or $r^{2/3}$) result for the inertial range. I have written quite a lot about this matter in recent years and expect to write a great deal more.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.




Does the failure to use spectral methods harm one’s understanding of turbulence?

Does the failure to use spectral methods harm one’s understanding of turbulence?

Vacation post No. 3: I will be out of the virtual office until Monday 19 April.

As described in the previous post, traditional methods of visualising turbulence involve vaguely specified and ill-defined eddying motions whereas Fourier methods lead to a well-defined problem in many-body physics. This seems to be a perfectly straightforward situation; and one might wonder: in what way do fluid dynamicists feel that the Fourier wavenumber space representation is obscuring the physics? Given that they regard a vortex-based picture, however imprecise, as `the physics’, I suspect (a suspicion based on many discussions over the years!) that the problem arises when they try to reconcile the two formulations. Of course, in an intuitive way, one may associate large wavenumbers with small spatial separations. That is, `high k’ corresponds to `small r’ and vice versa. But those attempts, which one sees from time to time, to interpret the $k$-space picture in terms of arbitrarily prescribed vortex motions in real space, seem positively designed to cause confusion. It is important to bear in mind that the Fourier representation reformulates the problem, and you should study it on its own terms, even if you long for vortices!

Does this matter? I think it does. For example, I can point to the strange situation in which (it seems) most fluid dynamicists believe that there should be intermittency corrections to the exponent of Kolmogorov’s $k^{-5/3}$ energy spectrum, whereas it seems that most theoretical physicists (who work in wavenumber space) do not. The hidden point here is that Kolmogorov worked in real space, and derived the $r^{2/3}$ form of the second-order structure function for an intermediate range of values of $r$ where the input term and the viscous dissipation could both be neglected, thus introducing the inertial range. His theory was inconsistent, in that he then considered the structure function to depend on the dissipation rate, even though this had been excluded from the inertial range. It is this step which gives some credibility to the possibility of intermittency effects, particularly as there may be some doubt about whether the dissipation rate in the theory is the average value or the instantaneous one.

The surprising thing is that, at much the same time, Obukhov worked in $k$-space, and identified the conservative, inertial flux of energy through the modes as being the key quantity determining the energy spectrum in the inertial range. It follows that, with the production and dissipation being negligible in this range of wavenumbers, the flux must be constant (i.e. independent of wavenumber) in the inertial range. This was later recognized by Onsager. Later still, this property became widely known and for many years has been referred to by theoretical physicists as scale invariance. Scale invariance is a general mathematical property and can refer to various things in turbulence research. It simply means that something which might depend on an independent variable, in either real space or wavenumber space, is in fact constant. It should be emphasised that the inertial flux is an average quantity, as indeed is the energy spectrum, and any intermittency must necessarily be averaged out. In fact a modern analysis leading to the $k^{-5/3}$ spectrum would start from the Lin equation. Therefore it is hard to see how internal intermittency, which is incidentally present at all scales, can affect this derivation.




Does the use of spectral methods obscure the physics of turbulence?

Does the use of spectral methods obscure the physics of turbulence?

Vacation post No. 2: I will be out of the virtual office until Monday 19 April.

Recently, someone who commented on one of my early blogs about spectral methods (see the post on 20 February 2020) mentioned that a certain person had said `spectral methods obscure the physics of turbulence’. They asked for my opinion on this statement and I gave a fairly robust and concise reply. However, on reflection, I thought that a more nuanced response might be helpful. As the vast majority of turbulence researchers work in real space, it seems probable that many would share that sentiment, or something very like it.

In fact, I will begin by challenging the second part of the statement. What precisely is meant by the phrase `the physics of turbulence’? In order to answer this question, let us begin by examining the concept of the turbulence problem in both real space and Fourier wavenumber space. Note that in what follows, all dependent variables are understood to be per unit mass of fluid, and we restrict our attention to incompressible fluid motion.

In real space, we have the velocity field $\mathbf{u}(\mathbf{x},t)$, which satisfies the Navier-Stokes equation (NSE). This equation expresses conservation of momentum and is local in $x$. It is also nonlinear and is therefore, in general, insoluble. From it we can derive the Karman-Howarth equation (KHE), which expresses conservation of energy and relates the second-order moment to the third-order moment. This is also local in $x$, and is also insoluble, as it embodies the statistical closure problem of turbulence. If we wish, we can change from moments to structure functions, but the KHE remains local in $r$, the distance between the two measuring points. This formulation gives no hint of a turbulence cascade as it is entirely local in nature.

The situation is radically different in Fourier wavenumber ($k$) space. Here we have a velocity field $\mathbf{u}(\mathbf{k},t)$ which now satisfies the NSE in $k$-space. This is still insoluble, and when we derive the Lin equation from it (or by Fourier transformation of the KHE), this again expresses conservation of energy, and is again subject to the closure problem. However, there is a major difference. As pointed out by Batchelor [1], Taylor introduced the Fourier representation in order to turn turbulence into a problem in statistical physics, with the $\mathbf{u}(\mathbf{k},t)$ playing the part of the degrees of freedom. The nonlinear term takes the form of a convolution in wavenumber space and this couples each degree of freedom to every other. In the absence of viscosity, this process leads to equipartition, rather as in an ideal gas. However, the viscous term is symmetry-breaking, with its factor of $k^2$ skewing its effect to high wavenumbers, so that energy must flow through the modes of the system from low wavenumbers to high. We may complete the picture by injecting energy at low wavenumbers. The result is a physical system which has been discussed in many papers and books and has been studied by theoretical physicists over the decades since the 1950s. In short, Fourier transformation reveals a physical system which is not apparent from the equations of motion in real space.
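
The mode coupling referred to here is just the convolution theorem at work, and it can be seen in a minimal sketch (a 1D periodic field and the discrete transform are used purely for illustration): the Fourier transform of a product is a convolution over wavenumbers, so a quadratic term feeds every mode $k$ from all pairs $j$ and $k-j$.

```python
import numpy as np

N = 64
rng = np.random.default_rng(1)
u = rng.standard_normal(N)
v = rng.standard_normal(N)

uh, vh = np.fft.fft(u), np.fft.fft(v)
product_hat = np.fft.fft(u * v)

# Circular convolution of the two spectra (divided by N for this FFT convention):
# each mode m is a sum over all pairs j, m-j.
conv = np.array([np.sum(uh * np.roll(vh[::-1], m + 1)) for m in range(N)]) / N
print(np.allclose(product_hat, conv))   # True
```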

What, then, do those working in real space mean by the physics of turbulence? Presumably they rely on ideas about vortex motion, as established by flow visualisation; and here the difficulty lies. Richardson put forward the concept of a cascade in terms of `whirls’ (not, incidentally, whorls! [2]); and certainly this has gripped the imagination of generations of workers in the field. In a general, qualitative way it is easy to understand; and one can envisage the transfer of eddying motions from large scales to small scales. But when it comes to a quantitative point of view, the resulting picture is very vague and imprecise. Of course attempts have been made to make it more precise and researchers have considered assemblies of well-defined vortex motions. This is a perfectly reasonable way for fluid dynamicists to go about things, but it involves a considerable element of guesswork. In contrast, Fourier wavenumber space gives a precise representation of the physical system and essentially formulates the basic problem as a statistical field theory.

So, spectral methods actually expose the underlying physics of turbulence, rather than obscuring it. It is my view that those who are not comfortable with them must necessarily have a very restricted and limited understanding of the subject. I shall illustrate that in my next post.

[1] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 2nd edition, 1971.
[2] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.




Stirring forces and the turbulence response.

Stirring forces and the turbulence response.

Vacation post No. 1: I will be out of the office until Monday 19 April.

In my previous post, I argued that there seems to be really no justification for regarding the stirring forces that we invoke in isotropic turbulence as mysterious, at least in the context of statistical physics. However, when I was thinking about it, I remembered that Kraichnan had introduced stirring forces in quite a different way from Edwards and it occurred to me that this might be worth looking at again. Edwards had introduced them in order to study stationary turbulence, but in Kraichnan’s case they were central to the basic idea for his turbulence theory. In that way, Kraichnan’s formulation was more in the spirit of dynamical systems theory, rather than statistical physics.

Following Kraichnan, let us consider the case where the Navier-Stokes equation (NSE) is subject to a random force $f_{\alpha}(\mathbf{k},t)$, where the Greek indices take the usual values of $1,\,2,\,3$ corresponding to Cartesian tensor notation. If the force undergoes a fluctuation \[f_{\alpha}(\mathbf{k},t) \rightarrow f_{\alpha}(\mathbf{k},t) +\delta f_{\alpha}(\mathbf{k},t),\] then we may expect the velocity field to undergo a corresponding fluctuation \[u_{\alpha}(\mathbf{k},t) \rightarrow u_{\alpha}(\mathbf{k},t) +\delta u_{\alpha}(\mathbf{k},t).\] If the increments are small enough, we may neglect terms of second order in small quantities and introduce the infinitesimal response function $\hat{R}_{\alpha\beta}(\mathbf{k};t,t')$, such that \[\delta u_{\alpha}(\mathbf{k},t) = \int_{-\infty}^t\,\hat{R}_{\alpha\beta}(\mathbf{k};t,t')\delta f_{\beta}(\mathbf{k},t')\,dt'.\]
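
For orientation (this is a standard observation, not part of Kraichnan’s argument as such), if the nonlinear term is dropped altogether then the response function reduces to the Green’s function of the linear viscous problem, \[\hat{R}(\mathbf{k};t,t') = e^{-\nu_0 k^2 (t-t')} \quad \mbox{for} \quad t \geq t',\] with the tensor structure carried by the usual projection operator. It is the nonlinearity which turns the true turbulence response into a statistical quantity that has to be determined as part of the closure.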

Kraichnan linearised the NSE in order to derive a governing equation for the infinitesimal response function. Then he introduced the ensemble-averaged form \[\langle\hat{R}_{\alpha\beta}(\mathbf{k};t,t')\rangle =R_{\alpha\beta}(\mathbf{k};t,t'),\] where \[R_{\alpha\beta}(\mathbf{k};t,t)=1,\] in order to make a statistical closure. The result was the Direct Interaction Approximation (DIA) and it is worth noting in passing that its derivation contains the step $\langle uu\hat{R} \rangle = \langle uu \rangle \langle \hat{R}\rangle$, which makes the theory a mean-field approximation.

The failure of DIA was attributed by Kraichnan to the use of an Eulerian coordinate system and he responded by generalising DIA to what he called Lagrangian-history coordinates, leading to a much more complicated formulation. This step inspired others to develop DIA-type methods in more conventional Lagrangian coordinates. However, the fact remains that the purely Eulerian LET (or local energy transfer) theory does not fail in the same way as DIA. It is worth noting that unsuccessful theories in Eulerian coordinates are invariably Markovian in wavenumber (this should be distinguished from a Markovian property in time).

An alternative explanation for the failure of Markovian theories is that the basic ansatz, in the steps outlined above, may not identify the correct response for turbulence. In dynamical systems the dissipation occurs where the force acts. In turbulence it occurs at a distance in space and time. When the force acts to stir the fluid, the energy is transferred to higher wavenumbers by a conservative process, until it comes into detailed balance with the viscous dissipation. Arguably the system response needs to include some further effect, connecting one velocity mode to another, as happens in the LET theory [1].

In all theories, the direct action of the stirring force is both to create the modes and to populate them with energy. In DIA, the way in which energy is put into the modes (i.e. the input term) can be calculated exactly by renormalized perturbation theory in terms of the ensemble-averaged response function. However, the general closure of the statistical equations for the velocity moments amounts to assuming that the same procedure will work there too, and this is really only an assumption. So it may be that it is the turbulence response which is mysterious, and not the stirring forces as such.

General treatments of these matters will be found in the books [2,3]. It should be noted that I’ve used a modern notation for the response function (e.g. see [4]).

[1] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[2] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.
[3] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[4] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.




The mysterious stirring forces

The mysterious stirring forces
In the late 1970s there was an upsurge in interest in the turbulence problem among theoretical physicists. This arose out of the application of renormalization group (RG) methods to the problem of stirred fluid motion. As this problem was restricted to a very low wavenumber cutoff, these approaches had nothing to say about real fluid turbulence. Nevertheless, the work on RG stimulated a lot of speculative discussion, and one paper referred to `the mysterious stirring forces’. I found this rather unsettling, because I had been familiar with the concept of stirring forces from the start of my PhD project in 1966. Why, I wondered, did some people find them mysterious?

As time passed, I came to the conclusion that it was just lack of familiarity on the part of these theorists, although they seemed quite happy to launch into speculation on a subject that they knew very little about. (Well, it was just a conference paper!) So I was left with the feeling that one day it might be worth writing something to debunk this comment. Recently it occurred to me that it would make a good topic for a blog.

The standard form used nowadays for the stirring forces was introduced by Sam Edwards in 1964 and has its roots in the study of Brownian motion, and similar problems involving fluctuations about equilibrium. Let us consider the motion of a colloidal particle under the influence of molecular impacts in a liquid. For simplicity, we specialise to one-dimensional motion with velocity $u$. The particle will experience Stokes drag with coefficient $\eta$, per unit mass. Accordingly, we can use Newton’s second law to write its macroscopic equation of motion as: \begin{equation} \partial u/\partial t =-\eta \, u. \end{equation} At the microscopic level, the particle will experience the individual molecular impacts as a random force $f(t)$, say. So the microscopic equation of motion becomes: \begin{equation}\partial u/\partial t =-\eta \, u + f(t). \end{equation} This equation is known as the Langevin equation. In order to solve it, we need to specify $f$ in terms of a physically plausible model.

We begin by noting that the average effect of the molecular impacts on the colloidal particle must be zero, thus we have: \begin{equation}\langle f(t) \rangle =0. \end{equation} As a result, the average of equation (2) reduces to equation (1), which is consistent. Then in order to represent the irregular nature of the molecular impacts, we assume that $f(t)$ is only correlated with itself at very short times $t\leq t_c$, where $t_c$ is the duration of a collision. We can express this in terms of the autocorrelation function $w$ as: \begin{equation} \langle f(t)f(t') \rangle = w(t-t'), \end{equation} and \begin{equation} W(t) = \int_0^t\,w(\tau)\,d\tau, \end{equation} where \begin{equation} W(t)\rightarrow W = \mbox{constant} \quad \mbox{for} \quad t \gg t_c.\end{equation}

We can go on to solve the Langevin equation (2) for the short-time and long-time behaviour of the particle velocity $u(t)$, much as in Taylor’s Lagrangian analysis of turbulent diffusion. We can also derive the fluctuation-dissipation relation: see reference [1] for details.

In his self-consistent field theory of turbulence, Edwards drew various analogies with the theory of Brownian motion [2]. In particular, he went further than in equations (4) to (6), and chose the stirring forces to be instantaneously correlated with themselves; or: \begin{equation}w(t-t') = W \delta(t-t'), \end{equation} where $\delta$ is the Dirac delta function. In the study of stochastic dynamical systems, this is known as `white noise forcing’. It allows one to express the rate at which the stirring force does work on the turbulent fluid in terms of the autocorrelation of the stirring forces [3].
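
As a concrete check on this choice of forcing, here is a minimal sketch (the parameter values and time step are assumptions, chosen only for illustration) which integrates the Langevin equation (2) with the white-noise forcing (7) by the Euler-Maruyama method; the stationary value of $\langle u^2\rangle$ should approach $W/2\eta$, which is the value implied by the fluctuation-dissipation relation mentioned above.

```python
import numpy as np

eta, W = 1.0, 2.0                 # drag coefficient and forcing strength (assumed values)
dt, nsteps = 1e-2, 200_000
rng = np.random.default_rng(0)

u = 0.0
usq = []
for n in range(nsteps):
    # Euler-Maruyama step: the delta-correlated force enters with amplitude sqrt(W*dt).
    u += -eta * u * dt + np.sqrt(W * dt) * rng.standard_normal()
    if n > nsteps // 2:           # discard the initial transient
        usq.append(u * u)

print(np.mean(usq), W / (2 * eta))   # should agree to within sampling error
```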

It also provides a criterion for the detection of `fake theories’. These are theories which are put out by people with skill in quantum field theory and which purport to be theories of turbulence. Such theories do not engage with the established body of work in the theory of turbulence, nor do they mention how they overcome the problems that have proved to be a stumbling block for legitimate theories. Invariably, they claim that the purpose of the delta function is to maintain Galilean invariance, and clearly do not know what it is actually used for. In fact, the Navier-Stokes equations are trivially Galilean-invariant and adding an external force to them cannot destroy that [4].

[1] W. David McComb. Study Notes for Statistical Physics: A concise, unified overview of the subject. Bookboon, 2014.
[2] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[3] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[4] W. D. McComb. Galilean invariance and vertex renormalization. Phys. Rev. E, 71:37301, 2005.




Is the entropy of turbulence a maximum?

Is the entropy of turbulence a maximum?
In 1969 I published my first paper [1], jointly with my supervisor Sam Edwards, in which we maximised the turbulent entropy, defined in terms of the information content, in order to obtain a prescription for $\omega(k)$, the renormalized decay time for the energy contained in the mode with wavenumber $k$. Of course, in statistical mechanics, one associates the maximum of the entropy with thermal equilibrium. So, in the circumstances, we were very frank about possible problems with this approach, having actually stated in the title that our system was ‘far from equilibrium’. Before we examine this aspect further, it may be of interest to look at the background to the work.

By the mid-nineteen sixties, there had been a number of related theories of turbulence, but the most important were probably Kraichnan’s direct-interaction approximation (DIA) in 1959 and the Edwards self-consistent field theory in 1964. At this time there seems to have been a mixture of excitement and frustration. It had become clear from experiment that the Kolmogorov $-5/3$ power law (or something very close to it) was the correct inertial-range form, and none of the various theories was compatible with it. Kraichnan ultimately concluded that he needed to change to a so-called Lagrangian-history coordinate system, but otherwise could retain all the features of the DIA; whereas Edwards concluded that he needed to find a different way of choosing the response function, which in his case depended on $\omega(k)$. In my view, and irrespective of the merits or otherwise of the ‘maximum entropy’ method, Edwards made the right decision.

When I began my PhD research in 1966, my first job was to work out the turbulent entropy, using Shannon’s definition, in terms of the turbulent probability distribution; and then carry out a functional differentiation with respect to $\omega(k)$, in order to establish the presence of a maximum. What I didn’t know was that Sam had himself carried out this calculation but had got stuck. In order to take the limit of infinite Reynolds numbers, he had to show that his theory was well behaved at three particular points in wavenumber space: $k=0$, $k=\infty$ and $|\mathbf{k}+\mathbf{j}|=0$, where $\mathbf{j}$ is a dummy wavenumber. He had been able to show the first two, but not the third. Not knowing that there was a problem, I soon discovered it, but by means of a trick involving dividing up the range of integration, I managed to show that it was well behaved. However, the prediction of the value of the Kolmogorov constant was not good, and this was not encouraging.

In later years, when I had a lot more experience of both turbulence and statistical physics, I thought more critically about this way of treating turbulence. The maximum entropy method is the canonical way of solving problems in thermal equilibrium where the interactions are either weak or very local. If we take the para-ferromagnetic transition as an example, we can think of the temperature being reduced and an assembly of molecular magnets (i.e. spins on a lattice) tending to line up as the effective coupling increases. However, this process would be swamped by the imposition of a powerful external magnetic field. Similarly, the molecular diffusion process can be swamped by vigorous stirring. In the case of turbulence, it is possible to study absolute equilibrium ensembles by considering an initially stirred inviscid fluid in a finite system. If we replace the Euler equation by the Navier-Stokes equation, then the effect of the viscosity is symmetry-breaking and the system is dominated by a flow of energy through the modes.

This, of course, is a truism of statistical physics: a system is controlled either by entropy or by energy conservation. In the case of turbulence, it is always the latter. Turbulence is always a driven phenomenon. So while entropy may actually be a maximum with respect to variation of $\omega(k)$, it may be too broad a maximum to allow an accurate determination of $\omega(k)$. Also, it is worth bearing in mind that it is not turbulence itself, but the statistical theory by which we approximate it, that needs to show the requisite behaviour.

In any case, in 1974 I published my local energy transfer theory of turbulence [2], which is in good accord with the basic physics of the turbulent cascade.

[1] S. F. Edwards and W. D. McComb. Statistical mechanics far from equilibrium. J.Phys.A, 2:157, 1969.
[2] W. D. McComb. A local energy transfer theory of isotropic turbulence. J.Phys.A, 7(5):632, 1974.




Analogies between critical phenomena and turbulence: 2

Analogies between critical phenomena and turbulence: 2

In the previous post, I discussed the misapplication to turbulence of concepts like the relationship between mean-field theory and Renormalization Group in critical phenomena. This week I have the concept of ‘anomalous exponents’ in my sights!

This term appears to be borrowed from the concept of anomalous dimension in the theory of critical phenomena, so we start from a consideration of dimension, bearing in mind that the dimension of the space can be anything from $d=1$ up to $d=\infty$, and is not necessarily an integer. In critical phenomena it is usual to define three different kinds of dimensionality, as follows:

[a] Scale dimension. This is defined as the dimension of a physical quantity as established from the effect of a scaling transformation. Confusingly, this is normally just referred to as dimension.

[b] Normal (canonical) dimension. This is the (scale) dimension as established by simple dimensional analysis.

[c] Anomalous dimension. This is the dimension as established under RG transformation.

In this context, normal dimension is regarded as the naïve dimension and anomalous dimension is regarded as the actual or correct dimension. In turbulence we don’t have dimensionality as a playground, so the merry band of would-be turbulence theorists have extended the concept to the exponents of power-law forms of the moments of the velocity field plotted against order. The Kolmogorov forms (dimensional analysis) are seen as canonical and the actual (i.e. measured) exponents are seen as anomalous. The former are seen as wrong and the latter as correct. Naturally, the true believers in intermittency corrections have seized on this nomenclature as adding something to their case. (Also, see my post of 21 January 2021).

Let us actually apply the concept of scale dimension $d_s$ (say) in three-dimensional turbulence (i.e. $d=3$), using the procedures from critical phenomena (see Section 9.3 of [1]) to the energy spectrum $E(k)$. That is, we express the spectrum in terms of the total energy $E$, thus \[\int\,d^3k\,E(k) = E \quad \mbox{hence} \quad E(k) \sim E\,k^{-3}.\] So, bearing in mind that wavenumber has dimensions of inverse length, it follows that the canonical scale dimension is $d_s = 3$ in $d=3$.

If we now consider the Kolmogorov spectrum based on scale invariance and an inertial transfer rate $\varepsilon_T$, dimensional analysis gives us \[E(k) \sim \varepsilon_T^{2/3}\,k^{-5/3} .\] As this result can also be obtained from RG transformation, properly formulated for macroscopic fluid turbulence, and employing rational approximations (see [2] – [5]), it follows that K41 corresponds to the anomalous dimension $d_E = 5/3$. So much for inept comparisons with critical phenomena.
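For readers who wish to check this, the dimensional bookkeeping can be written out explicitly. This is just the standard argument, using the usual convention that $\int^{\infty}_0 E(k)\,dk$ is the kinetic energy per unit mass, so that $[E(k)] = L^3T^{-2}$, together with $[\varepsilon_T] = L^2T^{-3}$ and $[k] = L^{-1}$: \[ E(k) = C\,\varepsilon_T^{\,a}\,k^{\,b} \;\Rightarrow\; L^3T^{-2} = \left(L^2T^{-3}\right)^{a}\left(L^{-1}\right)^{b}, \] which requires $3a = 2$ and $2a - b = 3$, hence $a = 2/3$ and $b = -5/3$, as quoted above.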

[1] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.
[2] W. D. McComb and A. G. Watt. Conditional averaging procedure for the elimination of the small-scale modes from incompressible fluid turbulence at high Reynolds numbers. Phys. Rev. Lett., 65(26):3281-3284, 1990.
[3] W. D. McComb, W. Roberts, and A. G. Watt. Conditional-averaging procedure for problems with mode-mode coupling. Phys. Rev. A, 45(6):3507-3515, 1992.
[4] W. D. McComb and A. G. Watt. Two-field theory of incompressible-fluid turbulence. Phys. Rev. A, 46(8):4797-4812, 1992.
[5] W. D. McComb. Asymptotic freedom, non-Gaussian perturbation theory, and the application of renormalization group theory to isotropic turbulence. Phys. Rev. E, 73:26303-26307, 2006.




Analogies between critical phenomena and turbulence: 1

Analogies between critical phenomena and turbulence: 1
In the late 1970s, application of Renormalization Group (RG) to stirred fluid motion led to an upwelling of interest among theoretical physicists in the possibility of solving the notorious turbulence problem. I remember reading a conference paper which included some discussion that was rather naïve in tone. For instance, why did turbulence theorists study the energy spectrum rather than something else? Also, rather unsettlingly, there was a reference to the ‘mysterious stirring forces’ (sic): I shall return to that comment in a future post. However, although no turbulence theory emerged from this activity, a way of thinking did, and this found a receptive audience in those members of the turbulence community who believe in intermittency corrections. In my view, one set of views is as unjustified as the other, and I shall now explain why I think this.

To understand how these views came about, we need to consider the background in critical phenomena. During the 1960s, theorists in this area began to use concepts like scaling and self-similarity to derive exact relationships between critical exponents. (In passing, I note that in fluid dynamics these tools had already been in active use for more than half a century!) In this way, the six critical exponents of a typical system could be reduced to just two to be determined. At first the gap was bridged by mean-field theory, but then RG came along and the problem was solved.

It is important to know that RG can be viewed, in some respects, as a correction to mean-field theory. As a result, theorists in this field essentially ended up taking the view: ‘mean-field theory, bad! RG good!’, and this had a tendency to spill over into other areas as a sort of judgement. In general this was the attitude during the 1980s/90s, and few paused to reflect that other phenomena might belong to a different universality class. For instance, should the self-consistent field theory of multi-electron atoms be ruled out, because RG is better than mean-field theory at describing the para-ferromagnetic phase transition? Fortunately, this sort of thinking has presumably died out by now, but it has left an unhelpful residue in turbulence theory.

One form of this is the assertion that the Kolmogorov ‘$-5/3$’ energy spectrum is a mean-field theory, and that an RG calculation would lead to an exponent of the form ‘$-5/3+\mu$’; precisely what the ‘intermittency correction’ enthusiasts had been saying all along! The snag with this is that the derivation of the Kolmogorov spectrum does not rely on a mean-field step, nor indeed on the invariable accompaniment of a self-consistent field step. In fact, this can be a problem in critical phenomena. People tend to refer loosely to mean-field theories, without mentioning that they are also self-consistent theories. Actually in turbulence we have various self-consistent field theories which do not predict the Kolmogorov exponent and one which does [1].
In my next post, I will develop this topic further. In the meantime, a general background account of these matters may be found in the book cited below as [2].
[1] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[2] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.




Compatibility of temporal spectra with Kolmogorov (1941): the Taylor hypothesis.

Compatibility of temporal spectra with Kolmogorov (1941): the Taylor hypothesis.

Earlier this year I received an enquiry from Alex Liberzon, who was puzzled by the fact that some people plot temporal frequency spectra with a $-5/3$ power law, but he was unable to reconcile the dimensions. This immediately took me back to the 1970s when I was doing experimental work on drag-reduction, and we used to measure frequency spectra and convert them to one-dimensional wavenumber spectra using Taylor’s hypothesis of frozen convection [1]. It turned out that Alex’s question was more complicated than that and I will return to it at the end. But I thought my own treatment of this topic in [1] was terse, to say the least, and that a fuller treatment of it might be of general interest. It also has the advantage of clearing the easier stuff out of the way!

Consider a turbulent velocity field $u(x,t)$ which is stationary and homogeneous with rms value $U$. According to Kolmogorov (1941) [2], the mean square variation in the velocity field over a distance $r$ from a point $x$ is given by:\begin{equation}\langle \Delta u^2_r \rangle \sim (\varepsilon r)^{2/3}.\end{equation} If we now consider the turbulence to be convected by a uniform velocity $U_c$ in the $x$-direction, then the K41 result for the mean square variation in the velocity field over an interval of time $\tau$ at a point $x$ is given by: \begin{equation}\langle \Delta u^2_\tau \rangle \sim (\varepsilon U_c\tau)^{2/3}.\end{equation}The dimensional consistency of the two forms is obvious from inspection.

Next let us examine the dimensions of the temporal and spatial spectra. We will use the angular frequency $\omega = 2\pi f$, where $f $ is the frequency in Hertz, in order to be consistent with the definition of wavenumber $k_1$, where $k_1$ is the component of the wavevector in the direction of $x$. Integrating both forms of the spectrum, we have the condition: \begin{equation} \int^\infty_0 E(\omega) d\omega = \int_0^\infty E_{11}(k_1) dk_1 = U^2. \end{equation} Evidently the dimensions are given by: \begin{equation}\mbox{Dimensions of}\, E(\omega)d\omega = \mbox{Dimensions of}\, E_{11}(k_1) dk_1 = L^2 T^{-2};\end{equation} or velocity squared.

Then we introduce Taylor’s hypothesis in the form: \begin{equation} \frac{\partial}{\partial t} = U_c \frac{\partial}{\partial x}, \quad \mbox{thus} \quad \omega = U_c k_1;\end{equation} and hence: \begin{equation}k_1= \frac{\omega}{U_c} \quad \mbox{and} \quad dk_1 = \frac{d\omega}{U_c}. \end{equation}
The Kolmogorov wavenumber spectrum (in the one-dimensional form that is usually measured) is given by:\begin{equation}E_{11}(k_1) = \alpha_1 \varepsilon^{2/3} k^{-5/3}_1.\end{equation}We should note that $\alpha_1$ is the constant in the one-dimensional spectrum and is related to the three-dimensional constant $\alpha$ by $\alpha_1 = (18/55)\alpha$. Substituting for the wavenumbers from (6) into (7) we find:\begin{equation} E_{11}(k_{1})dk_{1} = \alpha_1 (\varepsilon U_c)^{2/3}\omega^{-5/3} d\omega \equiv E(\omega)d\omega, \end{equation} which is easily shown to have the correct dimensions of velocity squared.
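As a practical illustration of this conversion, the following is a minimal sketch of how one might transform a measured frequency spectrum into a one-dimensional wavenumber spectrum using Taylor’s hypothesis. It is not taken from any particular code: the function name, the synthetic spectrum and the value of the convection velocity are all purely illustrative.

import numpy as np

def frequency_to_wavenumber_spectrum(f_hz, E_f, U_c):
    """Convert a frequency spectrum E_f(f) to a one-dimensional wavenumber
    spectrum E11(k1) using Taylor's hypothesis, omega = 2*pi*f = U_c*k1.

    Energy is conserved under the change of variable:
    E11(k1) dk1 = E_f(f) df, hence E11(k1) = E_f(f) * U_c / (2*pi).
    """
    f_hz = np.asarray(f_hz)
    k1 = 2.0 * np.pi * f_hz / U_c                 # k1 = omega / U_c
    E11 = np.asarray(E_f) * U_c / (2.0 * np.pi)   # Jacobian df -> dk1
    return k1, E11

# Illustrative usage with a synthetic -5/3 frequency spectrum:
f = np.logspace(0, 3, 200)                        # frequency in Hz
E_f = 1.0e-2 * f ** (-5.0 / 3.0)                  # synthetic measured spectrum
k1, E11 = frequency_to_wavenumber_spectrum(f, E_f, U_c=10.0)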

After seeing this analysis, Alex came back with: but what about when the field is homogeneous and isotropic, with $U_c=0$? That’s a very good question and takes us into a topic which originated with Kraichnan’s analysis of the failure of DIA in 1964 [1]: the importance of sweeping effects on the decay of the velocity correlation. There are now numerous papers which address this topic and they continue to appear. So it does not give the impression of being settled. From my point of view, this is important in the context of closure approximations; but I understand that the answer to the question of $f^{-5/3}$ or $f^{-2}$ depends on the importance or otherwise of sweeping effects.

I intend to return to this, but not necessarily next week!

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] A. N. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. C. R. Acad. Sci. URSS, 30:301, 1941.




The concept of universality classes in critical phenomena.

The concept of universality classes in critical phenomena.
The universality of the small scales, which is predicted by the Richardson-Kolmogorov picture, is not always observed in practice; and in the previous post I conjectured that departures from this might be accounted for by differences in the spatial symmetry of the large scale flow. To take this idea a step further, I now wonder whether it would be worth exploring how the idea of universality classes could be applied to the turbulent cascade. First, I should explain what universality classes actually are.

In the study of critical phenomena, we are concerned with changes of phase or state which can occur at a critical temperature, which is invariably denoted by $T_c$. For instance, the transition from liquid to gas, or the transition from para- to ferromagnetism. In general, it is found that the thermodynamic variables (e.g. heat capacity, magnetic susceptibility) of a system either tend to zero, or tend to infinity, as the system approaches the critical temperature. If we represent any such macroscopic variable by $F(T)$ and introduce the reduced temperature $\Theta_c$ by \[\Theta_c = \frac{T-T_c}{T_c},\] then, as $T\rightarrow T_c$ and $\Theta_c \rightarrow 0$, we have \[F(\Theta_c) = A \Theta_c^{-n},\] where $A$ is a constant and $n$ is the critical exponent. Obviously the critical exponent will be negative when $F(0)=0$ and positive when $F(0)=\infty$.

Here the constant $A$ and the critical temperature $T_c$ depend on the details of the system at the molecular level and therefore vary from one system to another; these quantities must be determined experimentally. In practice, however, it is found that different systems sometimes share the same values of the critical exponents, and that these depend only on the symmetry properties of the microscopic energy function (or Hamiltonian). When this is found to be the case, the systems are said to belong to the same universality class.
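As an aside on how such exponents are obtained in practice, the usual procedure (in critical phenomena and in turbulence alike) is a straight-line fit on log-log axes. The sketch below uses synthetic data and illustrative values throughout; it is not intended to represent any particular experiment.

import numpy as np

# Synthetic data: F = A * theta^(-n), with A = 2, n = 1.25, plus 5% noise.
rng = np.random.default_rng(1)
theta = np.logspace(-3, -1, 30)               # reduced temperature (illustrative)
F = 2.0 * theta ** (-1.25) * (1.0 + 0.05 * rng.standard_normal(theta.size))

# On log-log axes the power law is a straight line: log F = log A - n log theta.
slope, intercept = np.polyfit(np.log(theta), np.log(F), 1)
n_est, A_est = -slope, np.exp(intercept)
print(f"estimated exponent n = {n_est:.3f}, prefactor A = {A_est:.3f}")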

Accordingly, in my view it would be worth reviewing the different investigations in order to find out if one could organise results for the inertial-range exponent into some kind of universality classes, although allowance should be made for experimental error, which tends to be much greater in fluid dynamics than in microscopic physics. I would be tempted to take a look through my files, but unfortunately I remain cut off from my university office by the pandemic.

Further details about critical phenomena may be found in reference [1] below.

[1] W. D. McComb. Renormalization Methods: A Guide for Beginners. Oxford University Press, 2004.




Macroscopic symmetry and microscopic universality.

Macroscopic symmetry and microscopic universality.
The concepts of macroscopic and microscopic are often borrowed, in an unacknowledged way, from physics, in order to think about the fundamentals of turbulence. By that, I mean that there is usually no explicit acknowledgement, nor indeed apparent realization, that the ratio of large scales to small scales is many orders of magnitude smaller in turbulence (which is at all scales actually a macroscopic phenomenon) than it is in microscopic physics.

This idea began with Kolmogorov in 1941, when he employed Richardson’s concept of a cascade of energy from large eddies to small to argue that, after a sufficiently large number of steps, there could be a range of eddy sizes which were statistically independent of their large-scale progenitors. In passing, it should be noted that the concept of ‘eddy’ can be left rather intuitive, and we could talk equally vaguely about ‘scales’. However, combining the cascade idea with Taylor’s earlier introduction of Fourier modes as the degrees of freedom of a turbulent system leads to a much more satisfactory analogy with statistical physics, with the onset of scale invariance strengthening the analogy to the microscopic theory of critical phenomena. As is well known, that leads to the ‘$-5/3$’ spectrum, which was expected to be universal.

My own view is that it would be good to get it settled that the Kolmogorov spectrum holds for isotropic turbulence. There is still an absence of consensus about that. But the broader claim of universality has been supported by measurements of spectra in a vast variety of flow configurations; although, inevitably there have been instances where it is not supported. So we end up with yet another unresolved issue in turbulence. Is small-scale turbulence universal or not?

In order to consider whether or not the concept of symmetry could assist with this, it may be helpful to think in terms of definite examples. First, let us consider laminar flow in the $x_1$ direction between fixed parallel plates situated at $x_2=\pm a$. The velocity distribution between the plates will be a symmetric function of the variable $x_2$. If now we consider a flow where one plate is moving with respect to the other, and this is the only cause of fluid motion, then we have plane Couette flow and, as is well known, the velocity profile will now be an antisymmetric function of $x_2$. However, the molecular viscosity of the fluid will be unaffected by the different macroscopic symmetries and will be the same in both cases.

If we now extend this discussion to the case of turbulent mean velocities and inquire about the behaviour of the effective turbulent viscosity ($\nu_t$, say: for a definition see Section 1.5 of reference [1]), it is clear that this will be very different in the two cases, and arguably that should apply to the cascade process as well.

In isotropic turbulence, the cascade is described by the Lin equation, with the key quantity being the transfer spectrum $T(k)$. Its extension to an inhomogeneous case will bring in a number of transfer spectra, such as $T_{11}$, $T_{12}$ and so on. In order to cope with the dependence on spatial coordinates, the introduction of centroid and relative coordinates that we used in the previous post will prove useful. Recall that we considered a covariance function $C(\mathbf{x},\mathbf{x}')$, leaving the time variables out for simplicity and introduced the change of variables to centroid and relative coordinates, thus: \[\mathbf{R} = (\mathbf{x} + \mathbf{x}')/2 \qquad \mbox{and} \qquad \mathbf{r} = (\mathbf{x} - \mathbf{x}'). \] In this case one component of the spectral tensor could be written as: $T_{11}(\mathbf{k}, R_2)$, where we have Fourier transformed with respect to the relative coordinate only. Then, at least in the core region of the flow, we could expand out the dependence on the centroid coordinate in Taylor series. In this way we could separate the wavenumber cascade from spatial effects, such as production and spatial energy transfer.
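To make the notation explicit, the partial transform meant here can be written out as follows. This is a sketch in the notation of this post, not a quotation from any published derivation; for a channel-like flow which is homogeneous in the $x_1$ and $x_3$ directions, the centroid dependence reduces to $R_2$ alone: \[ T_{11}(\mathbf{k}; R_2) = \int d^3r \; e^{-i\mathbf{k}\cdot\mathbf{r}}\, T_{11}(R_2, \mathbf{r}), \] and, near the core of the flow, one can envisage the Taylor expansion \[ T_{11}(\mathbf{k}; R_2) = T^{(0)}_{11}(\mathbf{k}) + R_2\, T^{(1)}_{11}(\mathbf{k}) + \tfrac{1}{2}R_2^2\, T^{(2)}_{11}(\mathbf{k}) + \dots, \] in which the leading term carries the wavenumber cascade and the higher terms carry the spatial effects.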

Ideally one could even use a closure theory: the covariance equation of the DIA has been validated by the LET theory [2] and, although some work has been done on this in the past, a really serious approach would require a lot of bright young people to get involved. Unfortunately, vast numbers of bright young people all over the world are involved in complicated pedagogical exercises in cosmology, particle theory, string theory, quantum gravity and so on, most of which has gone beyond any proper theoretical foundation. Ah well, important but less glamorous problems like turbulence must await their turn.

For completeness, I should emphasise that all flows discussed above are assumed to be incompressible and well-developed.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor. 50:375501, 2017.




Can statistical theory help with turbulence modelling?

Can statistical theory help with turbulence modelling?
When reading the book by Sagaut and Cambon some years ago, I was struck by their balance between fundamentals and applications [1]. This started me thinking, and it appeared to me that I had become ever more concentrated on fundamentals in recent years. In other words, I seemed to epitomize the old saying about scholarship consisting of `learning more and more about less and less’!

It was not always so. I began my career in research and development, which was very practical indeed. Then my employers sent me back to university where I took a degree in theoretical physics, followed by a PhD on the statistical theory of turbulence. Obviously the rot had set in; but, even so, in later years I did quite a lot of experimental work on drag reduction by additives and also turbulent diffusion. At least these topics had a practical orientation. Moreover, I have also used the $k-\varepsilon$ model to carry out calculations on the `jet in crossflow’ problem. This might seem surprising, but it arose quite naturally in the following way.

Around about 1980 I had a call from a colleague in the maths department at Edinburgh. The Iran-Iraq war had recently broken out, and one of his MPhil students came from that part of the world. The student had decided that he would rather take a PhD than go home and be involved in the fighting. Very understandable, but the difficulty was that he needed a more substantial project. At that point he was studying the jet in crossflow problem, using ideal flow methods. My colleague wondered if I could join in as co-supervisor and introduce some turbulence to the project in order to make it more realistic.

Lacking any experience in this field, I happily agreed to join in, and proposed that we use the $k-\varepsilon$ model, which at the time was the best known of the engineering models. We set out on a programme of studying both the model and associated numerical methods, in the process considering a hierarchy of problems of increasing difficulty, until we reached the jet in crossflow.

This was a long time ago, but two things about this PhD supervision remain in my memory. First, the student was a mathematician and had no prior knowledge of numerical computation. I am left with an abiding impression that he initially found it very difficult to accept that we did not need to be able to solve an equation in the mathematical sense. Because of this, we had many discussions which appeared to be going well and then ended in frustration. Secondly, once we managed to encourage him to overcome his reluctance and try to use the computer, he proved to be a natural and worked rapidly through our hierarchy of problems, ending up with useful results in a commendably short time. This happened at a time of upheaval for me, when I was moving from the School of Engineering to the School of Physics, so I have only a rather vague memory of how things turned out. I believe that he got his PhD and then went on somewhere in England as a postdoc. Whether the results were published or not, I don’t recall. But the experience left me with an appreciation of the value of a practical engineering model, where my own fundamental work would have been of little assistance. A short discussion of the $k-\varepsilon$ model can be found in Section 3.3.4 of my book, given as reference [2] below.

When considering how statistical theory might help, we should first recognize that it does give rise to a class of models, beginning with the Eddy Damped Quasi-Normal model, which is cognate to the self-consistent field theory of Edwards and has a single adjustable constant. It is, however, restricted to homogeneous turbulence. What we could really do with is something like $k-\varepsilon$, which is a single-point theory, but which arises in a systematic way from a two-point statistical theory. The value of the latter is that it takes into account spatially (and temporally) nonlocal effects.

The details of the statistical closure theories are complicated, but the basic idea of how one might try to derive single-point engineering models is quite simple. The key quantity is the covariance of two fluctuating velocities at different points (and times) and a theory consists of a closed set of equations to determine the covariance. In general, the covariance tensor is a matrix of nine covariance functions, although symmetry will often reduce that. We will consider just one such function, which we write as $C(\mathbf{x},\mathbf{x}')$, leaving the time variables out for simplicity. We then make the change of variables to centroid and relative coordinates, thus: \[\mathbf{R} = (\mathbf{x} + \mathbf{x}')/2 \qquad \mbox{and} \qquad \mathbf{r} = (\mathbf{x} - \mathbf{x}'). \]

Now, the statistical theories are studied for the homogeneous case in order to simplify the problem. That is, we assume that there is no dependence on the centroid coordinate, and Fourier transform with respect to the relative variable into wavenumber space. However, the basic derivation and renormalization are not restricted to this case, and we can write down equations for the general case. Then, recognizing that most turbulent shear flows have a smooth dependence on the centroid coordinate, we can envisage expanding in the centroid coordinate, with coefficients obtained as integrals over wavenumber. Finally, setting $\mathbf{x}=\mathbf{x}'$, we could end up with single-point equations, whose coefficients are determined by integrals that arise in the fundamental theory.
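The final step can be indicated schematically. With the Fourier transform taken with respect to the relative coordinate, so that $C(\mathbf{R},\mathbf{r}) = \int d^3k\, e^{i\mathbf{k}\cdot\mathbf{r}}\, C(\mathbf{k};\mathbf{R})$, setting $\mathbf{x}=\mathbf{x}'$ (that is, $\mathbf{r}=0$) gives \[ C(\mathbf{R},0) = \int d^3k\; C(\mathbf{k};\mathbf{R}), \] so that single-point quantities, such as the turbulent kinetic energy (the ‘$k$’ of the $k$-$\varepsilon$ model), appear as wavenumber integrals of the two-point theory, with the model coefficients in principle determined by similar integrals rather than by fitting to data. This is only a sketch of the idea, in the notation of this post, not a derivation.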

This would not be a trivial process but, given the huge importance of turbulence calculations in a variety of applications, it is perhaps surprising that it has been so comprehensively neglected. A recent discussion of statistical two-point closures can be found in reference [3]. For completeness, I should mention that a second edition of [1] has appeared and I understand that a third edition is in the pipeline.

[1] P. Sagaut and C. Cambon. Homogeneous Turbulence Dynamics. Cambridge University Press, Cambridge, 2008.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[3] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.




The last post … of the first year!

The last post … of the first year!
A year ago, when I began this blog, few of us can have had any idea of what the year had in store from the coronavirus, now known to us as covid-19. Over the years, I have sometimes reflected on the very fortunate lives of my generation. I was born at the beginning of World War 2 and it impinged very little on my life or consciousness. In contrast, my grandparents all were adults during WW1, and would have suffered from that; while my parents must have endured fear and anxiety during WW2, but did not pass on any of that to me or my siblings. Basically all that I can remember was the occasional comment about the wonderful things (e.g. unlimited cream or butter) that one could get before the war!

So perhaps the pandemic is our war? Well, for many people it must seem like it; but, for those of us who are retired and have not been touched personally by the fatal consequences of the virus, it really only amounts to a degree of anxiety and some disruption of our lives. In my own case, I have not been able to go to my university office since last February. But this lack of access to my papers and books has merely been an inconvenience. I do have plans, though, to write a couple of review articles in the coming year; and, if I don’t have access to my office, only certain preliminaries will be possible.

In my first post, I referred to a paper of mine which I speculated might be my last as it had bounced from four different journals. I mentioned that I had let my guard down and made some sweeping statements without justifying them in detail. At the time I hadn’t mastered the art or science of incorporating references in my blogs, so I can now remedy the omission and this paper can be found as reference [1] below. So you can judge for yourself. Comments would be welcome. As a foretaste of something that I shall return to: in my view, such a paper should have been unnecessary. The point it makes is that K41 scaling is observed for spectra and K62 scaling is not.

Incidentally, my speculation about publishing no more papers turned out to be overly pessimistic: see reference [2] below. There is rather a nice story attached to this, but I won’t go into that at the moment. Suffice it to say that it quite encouraged me and I have to confess that I now have a number of papers at various stages of preparation. At worst their fate when submitted to journals should make interesting anecdotes under the generic title of `peer review’.

To close on an upbeat note, I intend to integrate some of my blogs with the preparation of the two review articles that I have in mind. First, I intend to review the general topic of energy transfer and dissipation. In particular, the existing literature on the subject is unhelpful to the point of being quite bizarre. For instance, I recently read a discussion of the paper known as K41 (see reference [3] below) in which the author purports to quote this paper and in the process uses the word `wavenumber’, when in fact K41 derives the two-thirds law for the second-order structure function (i.e. $S_2(r) \sim r^{2/3}$), and the word wavenumber does not appear in the paper! Moreover, there is not a single exegesis (so far as I know) of K41 in the literature. Given its seminal nature, this is absolutely astonishing. It needs to be put right.

Secondly, I intend to write an article on statistical theories of turbulence, which will be much more accessible to those who are not theoretical physicists, and who balk at the word renormalization. In deciding which words not to use, I shall be guided by the acerbic remarks of the late Philip Saffman, which are to be found in his published lecture notes. Basically, I remain optimistic about this activity.

[1] W. D. McComb and M. Q. May. The effect of Kolmogorov (1962) scaling on the universality of turbulence energy spectra. arXiv:1812.09174[physics.flu-dyn], 2018.
[2] W. D. McComb. A modified Lin equation for the energy balance in isotropic turbulence. Theoretical & Applied Mechanics Letters, 10:377-381, 2020.
[3] A. N. Kolmogorov. The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers. C. R. Acad. Sci. URSS, 30:301,1941.




How important are the higher-order moments of the velocity field?

How important are the higher-order moments of the velocity field?

Up until about 1970, fundamental work on turbulence was dominated by the study of the energy spectrum, and most work was carried out in wavenumber space. In 1963 Uberoi measured the time-derivative of the energy spectrum and also the dissipation spectrum, in grid turbulence; and used the Lin equation to obtain the form of the transfer spectrum $T(k)$ [1]. Later on, this work was extended and refined by van Atta and Chen, who obtained the transfer spectrum more directly from the third-order correlation [2]. This seems to have been the peak of experimental interest in spectra, and from then on there was a growing concentration on the behaviour of the moments (strictly speaking, in the form of structure functions) in real space [3], [4].

Introducing the structure function of order $n$ by \[S_n(r) = \langle \delta u_L^n(r) \rangle,\] where $\delta u_L(r)$ is the longitudinal velocity difference taken over a distance $r$, it is well known that, on dimensional grounds, these are expected to take the form \[S_n=C_n \,(\varepsilon r)^{n/3},\] whereas investigations like [3] and [4] (and many following them over the years) found deviations from this that increased with order $n$. Such results gave increased traction to belief in intermittency corrections and anomalous exponents.
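For concreteness, here is a minimal sketch of how structure functions are estimated in practice from a one-dimensional velocity record. Everything in it is illustrative: the synthetic signal merely stands in for real data, and a periodic record (as in DNS) is assumed so that the velocity differences can wrap around.

import numpy as np

def structure_functions(u, dx, orders, separations):
    """Estimate S_n(r) = <(u(x + r) - u(x))^n> from a 1-D velocity record u
    sampled at uniform spacing dx. Separations are given in grid points and
    the record is assumed periodic (wrap-around differences)."""
    S = {n: np.array([np.mean((np.roll(u, -s) - u) ** n) for s in separations])
         for n in orders}
    r = np.asarray(separations) * dx
    return r, S

# Illustrative usage with a synthetic periodic signal standing in for data:
N, L = 4096, 2.0 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(7.0 * x + 1.0)
r, S = structure_functions(u, dx=L / N, orders=[2, 3, 4], separations=range(1, 200))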

Yet, when one considers it, the moments of a distribution are equivalent to the distribution itself. It is well known that the moments are related to the distribution through its characteristic function, which is its Fourier transform. From the simple example on page 529 of reference [5], we see that the characteristic function can be expanded out in terms of the moments. Hence the distribution can be recovered to any desired order from the infinite set of its moments. Therefore, when one measures moments to some order, one is merely assessing the accuracy with which one has measured the distribution itself. A plot of the measured exponent $\zeta_n$ against order $n$ is no more or less than a plot of systematic experimental error. A glance at the plots of measured distributions in both [3] and [4] will make this point with compelling force, especially when one considers the wings of the distribution.
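The relation being invoked here is simply the moment expansion of the characteristic function. Writing it out for the distribution $P(\delta u_L)$ of the velocity difference (a standard result, quoted for convenience): \[ \phi(s) \equiv \langle e^{is\,\delta u_L} \rangle = \sum^{\infty}_{n=0} \frac{(is)^n}{n!}\, \langle \delta u_L^n \rangle, \qquad P(\delta u_L) = \frac{1}{2\pi}\int^{\infty}_{-\infty} e^{-is\,\delta u_L}\, \phi(s)\, ds; \] so that knowledge of all the moments is formally equivalent to knowledge of the distribution itself.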

A brief overview of this topic and a number of more recent references may be found in [6]. Note that in that reference, a standard laboratory method of reducing systematic error was used to measure $\zeta_2$ and showed that it tended towards the canonical value of $2/3$ as the Reynolds number was increased. As a matter of some slight interest, I learnt that method when I was about sixteen years old at school.

[1] M. S. Uberoi. Energy transfer in isotropic turbulence. Phys. Fluids, 6:1048, 1963.
[2] C. W. van Atta and W. Y. Chen. Measurements of spectral energy transfer in grid turbulence. J. Fluid Mech., 38:743-763, 1969.
[3] C. W. van Atta and W. Y. Chen. Structure functions of turbulence in the atmospheric boundary layer over the ocean. J. Fluid Mech., 44:145, 1970.
[4] F. Anselmet, Y. Gagne, E. J. Hopfinger, and R. A. Antonia. High-order velocity structure functions in turbulent shear flows. J. Fluid Mech., 140:63, 1984.
[5] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[6] W. D. McComb, S. R. Yoffe, M. F. Linkmann, and A. Berera. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence. Phys. Rev. E, 90:053010, 2014.




How big is infinity?

How big is infinity?
In physics it is usual to derive theories of macroscopic systems by taking an infinite limit. This could be the continuum limit or the thermodynamic limit. Or, in the theory of critical phenomena, the signal of a nontrivial fixed point is that the correlation length becomes infinite. Of course, what we mean by `infinity’ is actually just a very large number. But the mathematicians do not like this. In reference [1] below, the author states: ‘… statistical-mechanical theories of phase transitions tell us that phase transitions only occur in infinite systems’. She sees this as paradoxical because, as we all know, in everyday life we are surrounded by finite systems undergoing phase transitions. She further believes that the paradox can be resolved by working with constructive mathematics, rather than classical mathematics, which is what we all normally use.

My quotation from [1] is certainly open to deconstruction, and I doubt if many physicists would agree with it. What originally drew my attention to this particular problem is the situation in turbulence theory. As the Reynolds number is increased (or, the viscosity is decreased), the dissipation rate becomes independent of the viscosity. Physicists attribute this to the energy transfer by the nonlinear term in the equation of motion becoming scale-invariant. As the Reynolds number is increased even more, this scale-invariance extends further through wavenumber space, and nothing thereafter changes, either qualitatively or quantitatively. This in practical terms is the infinite Reynolds number limit, and it occurs at quite modest, finite values of the Reynolds number.

However, many mathematicians, harking back to a paper by Onsager [2] in 1949, believe that the infinite Reynolds number limit corresponds to zero viscosity; and, even more bizarrely, that the continuum properties of the fluid break down in this limit. Accordingly, they are driven to finding ways of making the Fourier representation of the inviscid Euler equation dissipative, by destroying its symmetry-based conservation properties. I have discussed this topic in three previous posts on 12, 19 and 26 November; and a paper, at that time in preparation, is now available on the arXiv as [3].

[1] Pauline van Wierst. The paradox of phase transitions in the light of constructive mathematics. Synthese, 196:1863, 2019.
[2] L. Onsager. Statistical Hydrodynamics. Nuovo Cim. Suppl., 6:279, 1949.
[3] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. arXiv:2012.05614[physics.flu-dyn], 2020.




My life in wavenumber space

My life in wavenumber space
In September 1966, when I began work on my PhD, I almost immediately began to dwell in wavenumber space. After a brief nod to the real-space equations, I had to learn about Fourier transformation of the velocity field, with the wave-vector $\mathbf{k}$ replacing the position vector $\mathbf{x}$, and the Navier-Stokes equations being changed from real space to wavenumber space. In addition, it was usual in those days to begin with the velocity field in a cubic box and use Fourier series. Then at some stage one would let the box size tend to infinity, and replace summations by integrals. At the same time, the periodic boundary conditions would be replaced by good behaviour at infinity. So far as theoretical work was concerned, I was not to emerge from wavenumber space until around 2006, when I began to take an interest in the phenomenology of turbulence.

This narrowness was not unusual and indeed did not seem particularly narrow at the time. There had been an incursion of theoretical physicists into turbulence from the 1950s onwards; and, for theorists of the time, wavenumber space was just momentum space with Planck’s constant set equal to unity. So everyone working on the statistical theory of turbulence was quite at home in wavenumber space, and it fitted in with what was almost a tradition in turbulence theory, which had begun with Taylor’s introduction of spectral methods in the 1930s and had been carried on in the 1950s by Batchelor’s book in particular. Problems only arose when one’s papers were refereed by those who were not part of this grouping, and who were hostile to spectral methods. But I have written about that in other blogs and it is not what concerns me here, which is something rather more subtle.

The other day I was trying to work something out and was sure that I had done it previously. I’m not keen on doing anything that I, or indeed anyone else, has already done. Hence I was checking back in my notebooks and found what I was looking for dated May 1993. So, that was satisfactory, but it reminded me of why I had done the work originally. During the 1970s/80s, I became increasingly aware of referees who felt that theories predicting the Kolmogorov $-5/3$ law should not be published, because ‘intermittency corrections meant that it wasn’t correct’. It seemed to me that the very structure of renormalization theories was evidence for the correctness of the $-5/3$ law. But as such theories were very largely inaccessible to fluid dynamicists (especially, of course, when they were refereeing them!) I had wondered how one could extract the basic ideas without the full level of complication.

The essential feature, it seemed to me, was the occurrence of scale invariance, in which the inertial flux through wavenumber became constant, independent of wavenumber. Beginning with the velocity field in $k$-space, one could exploit its complex nature to separate out amplitude and phase effects. Then, in the context of the energy balance equation (nowadays increasingly referred to as the Lin equation), one could determine the energy spectrum by power counting, with its prefactor determined by an average over the phases.
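In the notation of the Lin equation, with $T(k)$ the transfer spectrum, the scale invariance referred to here is the statement that the inertial flux satisfies \[ \Pi(k) = \int^{\infty}_k T(k')\, dk' = \varepsilon \quad \mbox{for all $k$ in the inertial range}, \] so that the only quantities locally available are $\varepsilon$ and $k$; power counting then fixes the $k^{-5/3}$ form of the spectrum, leaving only the dimensionless prefactor to be determined. This is a schematic summary of the idea, not the argument of [1] itself.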

I wrote this up and submitted it to PRL sometime in 1993. The response was interesting. It was rejected with a report that spoke approvingly of how it was written and presented but regretted that the energy-balance equation had already been used to derive the so-called ‘$4/5$’ law for the third-order structure function by Kolmogorov. I of course was happily ignorant of this. It was something done in real space. Which demonstrates the disadvantages of taking too limited or narrow an approach.

In 2006 I retired and began to take an interest in various phenomenological questions. This meant that at last I crossed over into real space and worked with the Kármán-Howarth equation as well as with the Lin equation. When working on the scale-invariance paradox, I decided to revisit my 1993 theory and this was published as [1] below. I was now able to point out that it answered the Landau criticism of Kolmogorov’s theory (as reinterpreted by Kraichnan [2]), in that its prefactor also depended on an average to the two-thirds power. If the original referee had been more familiar with spectral methods, he might have realised that my paper was a derivation of the inertial-range energy spectrum from the equations of motion, not the Fourier transform of the third-order structure function. So it was very much a different result from the Kolmogorov ‘$4/5$’ law. It also occurs to me as I write this, that the relationship between prefactors in the real-space and wavenumber-space formulations might be worth looking at.

Is there a moral in all this? I think there is. Basing my opinion on long experience of papers, discussions and referee reports, I believe that those fluid dynamicists who are uncomfortable with spectral methods understand less about the basic physics of turbulence than they otherwise might… and the New Year is a time for resolutions!

[1] David McComb. Scale-invariance and the inertial-range spectrum in three-dimensional stationary, isotropic turbulence. J. Phys. A: Math. Theor., 42:125501, 2009.
[2] R. H. Kraichnan. On Kolmogorov’s inertial-range theories. J. Fluid Mech., 62:305, 1974.




How many angels can dance on the point of a pin?

How many angels can dance on the point of a pin?
When I was young this was often quoted as an example of the foolishness of the medieval schoolmen and the nonsensical nature of their discussions. I happily classed those who debated it along with those who not only believed that the sun was pulled round the heavens in a fiery chariot, but were quite prepared to specify the precise number of horses pulling the chariot. Later on it seemed that it might have been a sort of reductio ad absurdum, used for critical purposes. Perhaps like the original intention behind Schrödinger’s cat? Later still it seemed that it might be an ironical comment by a seventeenth-century protestant theologian. In any case, it has passed into the language as the epitome of foolish and pointless discussion that has some degree of intellectual pretension.

Where then may such pointless intellectual activity be found nowadays? Well, passing over easy targets like the arts, sociology and modern literary criticism, the answer, which may surprise you, is physics. Why should it surprise you? The further answer to that is that physics has been the gift that keeps giving. Over the past century or more, it has given us the impression that it can answer any question, and in the process give rise to amazing developments in science and engineering which alter all our lives for the better. In fact the twentieth-century advances in physics underpin all advances in medicine, transport, engineering and all-round super electronic devices which smooth our paths in so many ways!

As we become less bedazzled by the wonders of quantum theory and relativity, we are more conscious of the inconsistencies, such as dark matter and dark energy, the mysterious use of string theory in many dimensions, and a standard model of the universe which is, in some ways, apparently at a similar stage to the nineteenth-century study of the periodic table, prior to the development of quantum theory. Lee Smolin, in his book The Trouble with Physics, points out the need for a revolution in physics. Roger Penrose, in his more recent book Fashion, Faith and Fantasy in the New Physics of the Universe, deplores the view that quantum theory has been so successful that it must apply to gravity too. As someone who has always worked in the classical physics area of turbulence theory (albeit using the methods of quantum field theory), I am merely an onlooker. But I have been surprised to notice that much modern physics seems to involve material that I lectured on in statistical field theory to final-year undergraduates and first-year postgraduates. I’m thinking here of topics like mean-field theory and $\phi^4$ scalar field theory. I also tend to feel surprised to see many attempts at a theory of quantum gravity based on the path-integral formulation of quantum mechanics. This is equivalent to solving the Schrödinger equation and one would not do that for a macroscopic box of gas, let alone the universe. Instead, because of the instability of the wave-function, one would use the density matrix formulation.

Every year we turn out thousands of our cleverest young people in all parts of the world to work on cosmology and particle theory. Inevitably their lives are devoted to what can be little more than pedagogical work. In contrast, the important fundamental problems of fluid turbulence receive little attention. I’m not advocating a dirigiste approach of any kind. I very much understand the importance of scholarship and research on fundamentals being a sort of creative ferment. But if a fraction of the effort on lattice QCD went into turbulence simulation, with the same sort of attitudes, it could transform the situation. As it is, we are lumbered with a turbulence community who mostly (it would seem) do not understand the concept of scale-invariance; and therefore do not understand that its onset is what defines the infinite Reynolds number limit!




Academic fathers and Mother Christmas

Academic fathers and Mother Christmas
In the mid-1980s I visited the Max Planck institute in Bonn to give a talk. While I was there, some of the German mathematicians told me about the concept of an academic father. They said that your PhD supervisor was your academic father, his supervisor was your academic grandfather, and so on. In that way: ‘We can all trace our lineage back to Gauss!’

In my own case, Sam Edwards was my supervisor and I was under the impression that Nicholas Kemmer had been his supervisor. Kemmer was retired by the time I joined the Physics department at Edinburgh and I never met him as such. Our only acquaintance was that on his rare visits to the department, he would call hello in passing, as my office door was always open.

I once discussed this concept with colleagues on some social occasion and one of them reckoned that Kemmer’s supervisor had been Weyl. So it turned out that someone I was collaborating with at the time was a sort of academic cousin. I’m not sure just what kind of cousin. My wife is an expert on matters like ‘second cousin, twice removed’, but it’s all Greek to me. Although I’m actually a bit better at Greek than at cousinage.

Recently I checked up on this and found to my surprise that Sam’s supervisor was Julian Schwinger and in turn his had been Isidor Isaac Rabi. This was encouraging, as both were Nobel Laureates in physics. Then Rabi’s supervisor had been Albert Potter Wills, who in turn was supervised by Arthur Gordon Webster (No, me neither.). He at least was supervised by Helmholtz, but after that the trail went cold again and it didn’t look like we were heading back to Gauss.

There must have been some reason why I had thought that Kemmer was Sam’s supervisor. Perhaps that was when he had still been at Cambridge University? Then he would have changed to Schwinger at Harvard? If Kemmer had been Sam’s supervisor for part of the time then he could still count as an ‘academic father’.

So I thought that I would check Kemmer out and found that his supervisor had been Pauli (not Weyl!) and in turn Pauli’s had been Sommerfeld, whose supervisor had been Lindemann (the mathematician, not the later physicist), and his had been Klein. Then Klein’s supervisor was Plücker, who was supervised by Gerling and (at last) we are back to Gauss, who was Gerling’s supervisor. But can I claim to be descended from Gauss? Well, I’m still not sure.

Of course this is all a rather old-fashioned idea. There are growing numbers of women in physics and mathematics and if we want to talk about academic descent then we should include academic mothers and, in time, academic grandmothers; and so on. Inclusiveness is the watchword nowadays and as this is Christmas Eve I shall be hanging up my stocking in the hope that Mother Christmas will put some nice presents in it. Certainly she has made a great job of decorating our tree: see below.

[Photograph: the Christmas tree.]

If you have been, then thank you for reading; and I wish you a happy Christmas!




Peer Review: Through the Looking Glass

Peer Review: Through the Looking Glass
Five years ago, when carrying out direct numerical simulations (DNS) of isotropic turbulence at Edinburgh, we made a surprising discovery. We found that turbulence states died away at very low values of the Reynolds number and the flow became self-organised, taking the form of a Beltrami flow, which has velocity and vorticity vectors aligned. This work is reported in [1] below, and illustrated by the following figure.


Visualization of the velocity field (red arrows) and the vorticity field (blue arrows) before and after self-organization.
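For readers unfamiliar with the term, a Beltrami field is one in which the vorticity is everywhere parallel to the velocity; that is, in standard notation (added here for convenience), \[ \boldsymbol{\omega} \equiv \nabla\times\mathbf{u} = \lambda\,\mathbf{u}, \] for some scalar $\lambda$, so that the nonlinear term $\mathbf{u}\times\boldsymbol{\omega}$ in the rotational form of the Navier-Stokes equation vanishes.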

A video of the simulation, showing the symmetry-breaking transition, complete with characteristic ‘critical slowing down’, can be found at the online article [1]: https://doi.org/10.1088/1751-8113/48/25/25FT01
The link to the video can be found under the heading Supplementary Data. Downloading this should be straightforward using Windows, but if using a Mac you may have to have an app such as VLC installed.

The article [1] was featured on the front cover of the journal, thus:

[Image: front cover of the journal featuring the article.]

It was downloaded hundreds of times within a few days of publication and the total number of downloads now stands at 2708.
That sounds like a success story and you may well wonder why I want to feature this as yet another problem with peer review. The answer to that lies in the fact that we first submitted it to Physical Review Letters and that was such a bizarre experience that it deserves to be told!

Normally I would refer to the two referees as Referee A and Referee B, but as their behaviour seemed to belong to the Looking Glass world that turbulence assessment so often is, I have decided to call them Tweedledum and Tweedledee.

First, Tweedledum said that he didn’t understand how we were forcing the turbulence. He had never seen anything like that before. Perhaps the strange behaviour was due to our strange forcing. He didn’t think that our Letter should be published.

Then, Tweedledee said that he didn’t understand how we were forcing the turbulence. He also had never seen anything like that before. Perhaps the strange behaviour was due to our strange forcing. He also didn’t think that our Letter should be published.

In Alice Through the Looking Glass, the twins had a famous battle. That did not happen in the present case where they were in perfect agreement; although Tweedledum (or was it Tweedledee?) suggested that perhaps if we did a lot more work and wrote it up as a much longer article, then it might be suitable for publication. This rather misses the point of having a journal like PRL!

When we submitted our paper to J. Phys. A, we pointed out the following: our method of forcing is known as negative damping; it was introduced to turbulence theory in 1965 by Jack Herring; it was first used in DNS in 1997 by Luc Machiels; it has subsequently been used in numerous investigations; and in 2005 it was studied theoretically by Doering and Petrov [2]. Not precisely an obscure technique then. But what an intellectually feeble performance from Tweedledum and Tweedledee. No wonder problems in turbulence remain unresolved for generations.
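For readers who have not met it, a common form of negative-damping forcing amplifies the existing velocity field in a band of low wavenumbers, normalised so that energy is injected at a fixed rate. The sketch below is my own illustration of that idea in spectral-space form, not the actual code used in [1]; the array layout, the band limit and the injection rate are all illustrative, and the normalisation depends on the discrete Fourier convention adopted.

import numpy as np

def negative_damping_forcing(u_hat, k_mag, k_f=2.5, eps_w=0.1):
    """Negative-damping forcing in spectral space (illustrative sketch).

    f(k, t) = (eps_w / (2 * E_f)) * u(k, t) for 0 < |k| <= k_f, zero otherwise,
    where E_f is the kinetic energy currently held in the forced band, so that
    the rate of energy injection is held at eps_w.

    u_hat : complex array of Fourier velocity coefficients, shape (3, Nx, Ny, Nz)
    k_mag : array of wavenumber magnitudes |k|, shape (Nx, Ny, Nz)
    """
    band = (k_mag > 0.0) & (k_mag <= k_f)
    E_f = 0.5 * np.sum(np.abs(u_hat[:, band]) ** 2)   # energy in the forced band
    f_hat = np.zeros_like(u_hat)
    if E_f > 0.0:
        f_hat[:, band] = (eps_w / (2.0 * E_f)) * u_hat[:, band]
    return f_hat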

One might end up by wondering what harm, if any, had been done by the lack of scholarly behaviour on the part of these referees, who were presumably chosen to be representative of the turbulence community. After all, the paper has been published and has clearly aroused quite a lot of interest. The trouble is, I suspect that J. Phys. A does not have the same visibility among turbulence researchers as PRL. In that case the numerous downloads may reflect the fact that many physicists are interested in an example of a nonlinear phase transition without necessarily having any interest in turbulence. More generally, over the years it seems to me that turbulence referees tend to exert a frictional drag on the process of publishing papers. Many of them give the impression of not wanting the pure pool of ignorance to be spoiled by any new understanding or knowledge.

[1] W. D. McComb, M. F. Linkmann, A. Berera, S. R. Yoffe, and B. Jankauskas. Self-organization and transition to turbulence in isotropic fluid motion driven by negative damping at low wavenumbers. J. Phys. A Math.Theor., 48:25FT01, 2015.
[2] Charles R. Doering and Nikola P. Petrov. Low-wavenumber forcing and turbulent energy dissipation. Progress in Turbulence, 101(1):11-18, 2005.




Should theories of turbulence be intelligible to fluid dynamicists?

Should theories of turbulence be intelligible to fluid dynamicists?

One half of the Nobel Prize in physics for 2020 was awarded to Roger Penrose for demonstrating that ‘black hole formation is a robust prediction of the General Theory of Relativity’. While it’s not my field, I do know a little about general relativity; so I had a look at what I could find online. It rapidly became clear to me that in order to understand Penrose’s work in detail, I would have to master a great deal of mathematics – topology in particular – which is unfamiliar to me. This would mean giving up everything else for a substantial period of time and that just wouldn’t make sense. So, despite knowing the basic equations of general relativity (for a simple, yet reasonably complete introduction, see reference [1]), I just have to take the word of other people that it all makes sense.

So what about quantum field theories derived from the Navier-Stokes equations? Well, starting with Kraichnan, Wyld and Edwards in the early 1960s and leading up to my own LET theory [2], there exists a moderately successful class of statistical theories of turbulence which are essentially based on quantum field theory. Unfortunately, I would assume that many (most?) fluid dynamicists are as unfamiliar with the background to these as I am with the methods of Penrose in demonstrating that general relativity implies the existence of black holes. Although at least I hope that I belong to the same ‘culture’ as Penrose, in the sense that I appreciate the significance of what he has done and also why he has done it.

The question of how understandable (to turbulence researchers) statistical theories should be was raised in lecture notes entitled ‘Problems and progress in the theory of turbulence’ [3] by Philip Saffman. In these he wrote down his list of the properties a theory should have. These were generally unexceptionable and really quite obvious. Indeed, one should perhaps bear in mind that a physicist would be very unlikely to write down a similar list, essentially because they would regard it all as being understood. The point that particularly interests me is that the second item in his list, after ‘Clear physical or engineering purpose’, is ‘Intelligibility’. It is worth quoting exactly what he says about this.

‘Intelligibility means that it can be understood, appreciated and applied by a competent scientist without years of study or familiarity with the jargon and techniques of a narrow speciality.’

Obviously, in view of what I wrote at the beginning of this post, I have a certain amount of sympathy with this view. At the same time, I feel that I should challenge it. The final phrase, about ‘the jargon and techniques of a narrow speciality’, has a faint flavour of the pejorative about it, particularly when taken in conjunction with his other writings. But we are entitled to ask what he means by a ‘narrow speciality’.

His concern was with those theories of turbulence which are applications of quantum field theory, a subject that made great advances in the 1940s/50s. But quantum field theory was not a ‘narrow speciality’ in the 1970s; and is even less so today. It is a major discipline worldwide and, if we add in statistical field theory in condensed matter physics, then the activity involved would dwarf all turbulence research by orders of magnitude. Moreover, the theory in these areas is closely linked to the experimental work. There is a vast, and growing, body of work in these areas, so this cannot be seen as a narrow or esoteric activity.

Presumably then, he meant simply the applications to turbulence. For Saffman this boiled down to the work of Kraichnan, so he does not give a balanced or scholarly view of this field. Indeed, he does not cite any of the relevant papers by Kraichnan but instead relies on the book by Leslie. It is difficult to see his comments generally as being anything but an expression of frustration that there is an activity going on which he does not understand, combined with a degree of resentment because he felt that his own type of work was somehow being belittled or patronised.

There are other parts of his lecture notes that I value, such as his criticism of Kolmogorov’s 1962 ‘refined theory’; and the general tone of the lectures is undoubtedly stimulating. But although Philip Saffman is no longer here to speak for himself, I still think that his views about fundamental approaches to turbulence should be challenged, if only because similar views seem to be quite widespread today. I am occasionally surprised by how glibly members of the turbulence community are prepared to write off renormalization methods, with phrases such as ‘everyone knows that Kraichnan’s theory is wrong and no one bothers about it anymore’. Well, life is so much easier if you pass up on the challenges. But to such people, I would address the question: what have you got to put in its place?

In the mid-1970s, when Saffman was writing, the situation was very different from that today. The basic idea of the LET theory was put forward by me in 1974, incidentally offering a fundamental reason for the failure of the Edwards theory and other cognate theories, including Kraichnan’s. Since then the LET theory has been developed and extensively computed and compared with other theories. I have also published three books, all intended to make such theories more accessible to non-physicists. Two are on turbulence and one on renormalization methods; and their titles can be found in the list of my publications in this blog. So I would like to answer my own question by saying that turbulence theories are intelligible to fluid dynamicists, provided that they are open-minded and are prepared to make a bit of an effort. That’s what I would like to say, but I have to make one caveat. There are theories, supposedly of turbulence, which are simply a relabelling of textbook equations from quantum field theory with variables appropriate to turbulence. Yet such theories do not engage with the existing body of work or explain how they solve problems that others encountered. They used to appear in obscure journals of the old Soviet Union, but now they appear in the learned journals of the west. It appears that the authors do not understand that their work is unsound, or perhaps do not care. I intend to write on the subject of Fake Theories (don’t know what put that idea in my head!) but as a topic it presents its difficulties.

Lastly, for completeness, I should mention that there is a class of theories based on the use of Lagrangian coordinates. A recent development in this type of theory also presents a decent and balanced review of other work in the field [4]. I also intend to write about Lagrangian theories in a future post.

[1] W. D. McComb. Dynamics and Relativity. Oxford University Press, 1999.
[2] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[3] P. G. Saffman. Problems and progress in the theory of turbulence. In H. Fiedler, editor, Structure and Mechanisms of Turbulence II, volume 76 of Lecture Notes in Physics, pages 273-306. Springer-Verlag, 1977.
[4] Makoto Okamura. Closure model for homogeneous isotropic turbulence in the Lagrangian specification of the flow field. J. Fluid Mech., 841:133, 2018.




Turbulent dissipation and the two cultures?

Turbulent dissipation and the two cultures?
I recently saw the paper cited as [1] below, which is, I think, the first of the 2021 papers to have come my way. As the title suggests, it presents a review of methods of measuring the turbulent dissipation rate. It contains a certain amount of basic theory, along the lines of expressions for the dissipation rate, the Taylor dissipation surrogate, remarks about the role of inertial transfer, dynamical equilibrium, and so on. But there is no attempt at a statistical theory, and most theoretical attempts at explaining the dependence of the dissipation rate $\varepsilon$ on the Taylor-Reynolds number do not get a mention.

Nevertheless, the authors do cite a paper by my co-authors and me, which presents an analytical theory of the Reynolds-number dependence of the dimensionless dissipation $C_{\varepsilon}$, defined as the dissipation rate divided by $U^3/L$, where $U$ is the root mean square velocity and $L$ is the integral lengthscale [2]. They say that we `explained that the decay of the dimensionless dissipation with increasing Reynolds number was because of the increase in the Taylor surrogate’. This is true for forced, stationary turbulence, because we can keep the rate of forcing (and hence the dissipation) constant while decreasing the viscosity in order to increase the Reynolds number.

However, this paper says so much more! It presents an analytical theory, based on the Karman-Howarth equation, in which dimensionless structure functions are expanded in inverse powers of the Reynolds number. The resulting expression is given by: \begin{equation}C_{\varepsilon}= C_{\varepsilon,\infty}+C/R_L + O(1/R^2_L), \end{equation} where $R_L$ is the integral-scale Reynolds number. Direct numerical simulation was used to obtain the coefficients as $C=18.9 \pm 0.009$ and $C_{\varepsilon,\infty}= 0.468\pm0.006$. The comparison of this result with other numerical investigations is shown in the figure below, which is taken from Fig. 1 of [2]; equation (44) of that reference is the equation just given here.

Dependence of the dimensionless dissipation $C_\varepsilon$ on the Reynolds number, compared with other numerical investigations (taken from Fig. 1 of reference [2]).

It is worth emphasising that this result is asymptotically exact in the limit of large Reynolds numbers. For low Reynolds numbers, our DNS confirmed that the $1/R_L$ dependence was correct to within experimental error. When this theory was later applied to magnetohydrodynamic turbulence, it was found necessary to include a term at order $1/R^2_L$ at low Reynolds numbers [3]. In fact, a detailed argument was previously put forward by us to the effect that the $1/R_L$ dependence was exact for isotropic turbulence: see the supplemental material to the paper cited as [4] below.
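Taking the quoted coefficients at face value (purely as an illustrative calculation of my own, not a result from [2]), one can see how slowly the asymptote is approached: at an integral-scale Reynolds number of $R_L = 400$, \[ C_\varepsilon \simeq 0.468 + \frac{18.9}{400} \approx 0.468 + 0.047 \approx 0.52, \] so the finite-Reynolds-number correction still amounts to roughly ten per cent of the asymptotic value.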

I should also emphasise that none of this is intended as a criticism of Wang et al, which is a perfectly competent piece of work of its general type. It is really a matter of emphasising the gulf between fluid dynamics and physics. For instance, it would be very unlikely that an experimental particle physicist would fail to see the point of a paper by a theoretical particle physicist, even if they were unable to follow the detailed derivations in it. This is because in physics we all have the same education up to a certain level, and even thereafter there is overlap and much in common. But fluid dynamics is much less homogeneous than physics and this leads to misunderstandings based very largely on cultural gaps. Those of us who belong to the very small number of physicists working on turbulence have much cause to be aware of this. I have posted about this before and I will do so again soon!

[1] Guichao Wang et al. Estimation of the dissipation rate of turbulent kinetic energy: A review. Chem. Eng. Science, 229:11633, 2021.
[2] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.
[3] M. F. Linkmann, A. Berera, W. D. McComb, and M. E. McKay. Nonuniversality and Finite Dissipation in Decaying Magnetohydrodynamic Turbulence. Phys. Rev. Lett., 114:235001, 2015.
[4] W. David McComb, Arjun Berera, Matthew Salewski, and Sam R. Yoffe. Taylor’s (1935) dissipation surrogate reinterpreted. Phys. Fluids, 22:61704, 2010.




The infinite Reynolds number limit: Onsager versus Batchelor: 3

The infinite Reynolds number limit: Onsager versus Batchelor: 3

In the preceding two posts, we have pointed out that the final statement by Onsager in his 1949 paper [1] is, in the absence of a proper limiting procedure, only a conjecture; and that the infinite Reynolds number limit, as introduced by Batchelor [2] and extended by Edwards [3], shows that it is incorrect. We have also shown that it is not in accord with the way in which turbulence is nowadays known to behave. In this post, we consider how well the Batchelor-Edwards picture of dissipation agrees with experiment and examine the nature of the equations of fluid motion; in addition, we discuss the physical nature of the process that we term ‘dissipation’.

A particular problem with Onsager’s paper is that it conflates two quite distinct situations. These are: the infinite Reynolds number limit, on the one hand; and the breakdown of the continuum limit, on the other. In order to distinguish between these two, we have to distinguish between the two kinds of Navier-Stokes equation (NSE). If we wish to take a true (in the mathematical sense) infinite Reynolds number limit, then we must work with the equations of continuum mechanics. If we want to consider the breakdown of the continuum limit, then we must consider a fluid made of molecules, in which the equations of motion are derived from the molecular description by an averaging process. I have touched on this distinction in my post of 14 May 2020 and will develop it in rather more detail here.

The equations of fluid motion, as they are normally encountered by engineers and applied mathematicians, are derived macroscopically; and rely on the concept of a fluid continuum which is without structure. They express Newton’s second law of motion as applied to the continuum and are based on a linear approximation to the relation between the shear stresses and corresponding rates of strain. Fortuitously, this approximation applies to a wide class of fluids. They are also based on the assumption of incompressibility, which means that the macroscopic fluid motions do not produce density changes. Of course, sound waves will travel through any fluid, so strictly the incompressibility is only an approximation.
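For reference, and in the usual notation (velocity $u_i$, pressure $p$, constant density $\rho$ and kinematic viscosity $\nu$), the resulting equations are \[ \frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial p}{\partial x_i} + \nu \nabla^2 u_i, \qquad \frac{\partial u_j}{\partial x_j} = 0, \] where the second equation expresses the incompressibility condition and, as ever, repeated indices are summed.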

The NSE is expected to describe the macroscopic motion of any Newtonian fluid. If we set the viscosity equal to zero, then we have the Euler equation, which is taken to apply to an ideal fluid. It then provides a relationship between velocity and pressure for fluid motions which are remote from boundaries. Combined with the concept of streamline flow, it leads to the Bernoulli equation. This can be solved for practical problems by the use of ad hoc coefficients which take effects such as viscosity into account. Also, the Euler equation can be combined with boundary layer theory to describe real fluid motions.

If we consider the Batchelor-Edwards infinite Reynolds number limit [2,3], which locates the dissipation at $k=\infty$, then this can only apply in the continuum mechanics picture just outlined. What, then, is the use of such a limit? The answer is that it is useful in any context where one’s theory is based on the continuum model. In the case of Edwards, he applied it to his self-consistent field theory of turbulence. Of course, as we pointed out in the preceding post, this is mathematically equivalent to Kraichnan’s use of scale-invariance in testing his direct-interaction approximation.

Now let us turn to the microscopic derivation of the NSE. This begins at the molecular level and one ends up by averaging over volumes which are small compared to the flow volume but large enough to contain very large numbers of molecules. Evaluating such averages is seen as a limiting process and is often referred to as the continuum limit.

It is worth quoting what Batchelor said on this (ibid, page 5). After discussing the possibility that the small-scale motions might not satisfy the continuum limit, he went on: ‘However, the action of viscosity is to suppress strongly the small-scale components of turbulence and we shall see that for all practical conditions the spectral distribution of energy dies away effectively to zero long before length scales comparable with the mean free path are reached. As a consequence, we can ignore the molecular structure of the medium and regard it as a continuous fluid.’

In my post on 14 May 2020, I quoted a calculation by Leslie [4], making exactly the same point, but in a more quantitative way. As an aside, I note that over the years I have heard many speculations about singularities and near-singularities (sic), but I have never heard of anyone making such speculations actually doing a calculation to establish under just what circumstances this pathological behaviour might be expected to occur. As we have seen, and will discuss further in our forthcoming paper [5], the practical onset of scale-invariance is at quite a moderate Reynolds number.

We will conclude by considering what we mean by ‘viscous dissipation’. This is the rate at which the kinetic energy of fluid motion is randomised at the molecular level, with the result that the fluid heats up. Turbulent dissipation is of course known to be very much larger, but the turbulent motions are themselves dissipated by molecular motion and again the fluid heats up. This is a two-stage process, with energy being transferred through wavenumber until it is finally dissipated by viscosity. As the Reynolds number increases, the volume of wavenumber space also increases, such that a greater amount of energy can be accommodated, and this leads to scale-invariance, and to apparent independence of the coefficient of viscosity. This absorption of energy may be seen as a quasi-dissipation but the real dissipation still happens at the end of the cascade! It would be really quite strange if this limiting process led to a situation where there was only quasi-dissipation and the fluid no longer heated up: in other words, if the Onsager view were to prevail over the Batchelor-Edwards view.

[1] L. Onsager. Statistical Hydrodynamics. Nuovo Cim. Suppl., 6:279, 1949.
[2] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 2nd edition, 1971.
[3] S. F. Edwards. Turbulence in hydrodynamics and plasma physics. In Proc. Int. Conf. on Plasma Physics, Trieste, page 595. IAEA, 1965.
[4] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.
[5] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. (In preparation: 2020)




The infinite Reynolds number limit: Onsager versus Batchelor: 2

The infinite Reynolds number limit: Onsager versus Batchelor: 2
In the preceding post, we argued that the final statement by Onsager in his 1949 paper [1] is, in the absence of a proper limiting procedure, only a conjecture; and that the infinite Reynolds number limit, as introduced by Batchelor [2] and extended by Edwards [3], shows that it is incorrect. It is indeed possible to formulate the limiting case such that the detailed symmetry, which guarantees energy conservation by the nonlinear term, is preserved globally. At this point we should note that such extreme limits can only be taken in the context of continuum mechanics, but not for a real physical fluid, where the equation of motion is derived from statistical mechanics. There is also the question: how does the turbulence actually behave in the limit of infinite Reynolds numbers?

We may address these two points together by introducing the flux $\Pi(\kappa)$ of energy through the mode with wavenumber $\kappa$, thus: \begin{equation}\Pi(\kappa) = \int_\kappa^\infty \,dk\, T(k) = - \int_0^\kappa\, dk \, T(k), \end{equation} where $T(k)$ is the energy transfer spectrum, as it appears in the Lin equation, and we have assumed stationarity for the sake of simplicity.

As is well known, the effect of increasing the Reynolds number is to increase the flux until it reaches a maximum value equal to the rate of dissipation $\varepsilon$. We may write this as: \begin{equation}\Pi_{\mbox{max}} \equiv \varepsilon_T = \varepsilon.\end{equation} Thereafter, as we increase the Reynolds number, the flux cannot increase any further, but the dissipation wavenumber keeps increasing, and the above relationship applies over an increasing range of wavenumbers. This is known as scale-invariance and in effect defines the inertial range. We now write the Edwards result for the infinite Reynolds number limit as: \[T(k) = \varepsilon_W \delta (k) -\varepsilon \delta (k-\infty) \equiv \varepsilon \delta (k) -\varepsilon \delta (k-\infty), \] where $\varepsilon_W $ is the rate of doing work by arbitrary stirring forces. Then, trivially, substituting this into equation (1) shows that it is mathematically equivalent to scale-invariance.
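As a purely numerical illustration of these definitions (the model transfer spectrum below is an arbitrary choice of mine with the right qualitative shape, not taken from any theory or simulation), one can check on the computer that a transfer spectrum which integrates to zero gives a flux that rises to a single maximum at the zero crossing of $T(k)$:

```python
import numpy as np

# Model transfer spectrum: negative at low k (energy removed from the large
# scales), positive at high k (energy deposited), with zero net integral.
# Purely illustrative; not derived from any theory or simulation.
k = np.linspace(0.0, 50.0, 5001)
dk = k[1] - k[0]
T = np.exp(-k / 5.0) / 5.0 - np.exp(-k)

# Flux through mode kappa: Pi(kappa) = - integral from 0 to kappa of T(k) dk.
Pi = -np.cumsum(T) * dk

print("net transfer (should be close to zero):", T.sum() * dk)
print("peak flux Pi_max:", Pi.max())
# In forced, stationary turbulence the peak flux equals the dissipation rate;
# as the Reynolds number increases, the single peak seen in this crude model
# broadens into the plateau that defines the inertial range.
```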

For many years, it has been widely accepted among theorists that the onset of scale-invariance is in effect the onset of infinite Reynolds number behaviour. Numerical simulations and computations of statistical closures alike have shown the asymptotic behaviour \[\lim_{R\rightarrow \infty}\frac{\varepsilon_T}{\varepsilon} = 1,\] approached from below. Here we show the reciprocal of this behaviour (because we were studying the dissipation at the time), where we plot the ratio of dissipation to maximum flux against Taylor-Reynolds number.

Ratio of dissipation to peak inertial transfer rate as a function of Taylor-Reynolds number.

This figure is taken from the thesis [4] and will appear in a paper now in preparation [5]. Clearly the ratio of dissipation to maximum flux $\varepsilon_T$ approaches unity from above, as the Reynolds number increases. Evidently from about $R_\lambda \sim 100$, scale-invariance is well established. In the next post, we shall discuss the nature of viscous dissipation and distinguish it from quasi-dissipation.

[1] L. Onsager. Statistical Hydrodynamics. Nuovo Cim. Suppl., 6:279, 1949.
[2] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 2nd edition, 1971.
[3] S. F. Edwards. Turbulence in hydrodynamics and plasma physics. In Proc. Int. Conf. on Plasma Physics, Trieste, page 595. IAEA, 1965.
[4] S. R. Yoffe. Investigation of the transfer and dissipation of energy in isotropic turbulence. PhD thesis, University of Edinburgh, 2012.
[5] W. D. McComb and S. R. Yoffe. The infinite Reynolds number limit and the quasi-dissipative anomaly. (In preparation: 2020)




The infinite Reynolds number limit: Onsager versus Batchelor: 1

The infinite Reynolds number limit: Onsager versus Batchelor: 1

A pioneering paper on turbulence by Onsager, which was published in 1949 [1], seems to have had a profound influence on some aspects of the subject in later years. In particular, he put forward the idea that, since the turbulence is still dissipative in the limit of infinite Reynolds numbers (or zero viscosity), the Euler equation must be dissipative despite its lack of viscosity. This supposed behaviour has come to be referred to as the dissipation anomaly. This view of matters is at odds with that of Batchelor [2] and of Edwards [3]: for a discussion see my post on 23 April 2020; but for the moment I will focus on the last paragraph in [1].

The key point involved is that the inertial-transfer term $T(k)$ of the Lin equation conserves energy, thus: \[\int_0^\infty \,dk T(k) = \int_0^\infty \, dk \int_0^\infty \, dj S(k,j) =0,\] because of the anti-symmetry of $S(k,j)$ under interchange of $k$ and $j$. Onsager uses the symbol $Q(k,k’)$ for this quantity, and states the antisymmetric property as his equation (17). Once he has set the viscosity equal to zero, he concludes that the anti-symmetry of $S$ (or his $Q$) no longer implies overall energy conservation. The final sentence of his paper reads: ‘The detailed conservation of equation (17) does not imply conservation of the total energy if the number of steps in the cascade is infinite, as expected (i.e. for zero viscosity), and the double sum of $Q(k,k’)$ converges only conditionally.’ Note that the parenthesis in italics has been added by me.
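For completeness, the conservation property follows from nothing more than a relabelling of the dummy variables: \[ \int_0^\infty \!\! \int_0^\infty \, dk\, dj\, S(k,j) = \int_0^\infty \!\! \int_0^\infty \, dk\, dj\, S(j,k) = -\int_0^\infty \!\! \int_0^\infty \, dk\, dj\, S(k,j) = 0, \] provided that the double integral converges absolutely, so that the order of integration may be exchanged. That proviso is precisely what is at stake in Onsager’s remark about conditional convergence.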

Now this is open to two immediate criticisms. First, setting the viscosity equal to zero and replacing the NSE by the Euler equation is not the same thing as taking the limit of zero viscosity, as done by Batchelor [2] and Edwards [3]. Secondly, the idea of ‘steps in the cascade’, although intuitively very attractive, is not sufficiently well-defined to be suitable for quantitative purposes. In contrast, the limiting process followed by Edwards is mathematically well defined and shows that in the limit of zero viscosity, the NSE possesses dissipation in the form of a delta function at $k=\infty$. Accordingly, Onsager’s final statement is without justification and, on the Batchelor-Edwards picture, is incorrect.

These arguments deal with extreme situations, but a more moderate approach is to follow the second method of defining the infinite Reynolds number limit, which also arises out of Batchelor’s work and which leads to the concept of scale-invariance of the inertial flux. This approach was followed by Kraichnan and many others; and, although differing in detail, it is mathematically equivalent to the Edwards formulation. We will discuss this in the next post.

[1] L. Onsager. Statistical Hydrodynamics. Nuovo Cim. Suppl., 6:279, 1949.
[2] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 2nd edition, 1971.
[3] S. F. Edwards. Turbulence in hydrodynamics and plasma physics. In Proc. Int. Conf. on Plasma Physics, Trieste, page 595. IAEA, 1965.




The role of Gaussians in turbulence studies.

The role of Gaussians in turbulence studies.
The Gaussian, or normal, distribution plays a key part in statistical field theory. This is partly because it is the only distribution whose functional integrals can be evaluated exactly, and partly because Gaussian distributions are frequently encountered in microscopic physics at, or near, thermal equilibrium. The latter is not the case in turbulence. Indeed, the non-Gaussian nature of the turbulence probability distribution functional (pdf) is inescapable. In the absence of a mean flow, the statistical closure problem amounts to how one expresses the third-order moment $\langle uuu \rangle$ in terms of the second-order moment $\langle uu \rangle$. It is a matter of symmetry (so that it can be determined by inspection) that the third-order moment vanishes when evaluated against a Gaussian pdf. Of course various turbulence pdfs are seen to be quite close to Gaussian in form. This is particularly so for the distribution of the velocity at a single point. But some deviation from normality for a turbulence pdf is of the essence.

We will not discuss the properties of Gaussian forms here: a pedagogic treatment can be found in Appendix B of my recent book, which is cited below as reference [1]. Our aim is to give a brief discussion of three ways in which Gaussians are used in turbulence, one in Direct Numerical Simulation (DNS) and two in statistical theory. From these considerations we should be able to make a number of general points without going through a lot of complicated theory. The one theoretical aspect we should keep in mind, is the form of the solenoidal Navier-Stokes equation in wavenumber space, which we can write in a very symbolic form as: \[\left( \frac{\partial}{\partial t} + \nu k^2\right) u_k = M_k u_ju_j + f_k .\] Here $k$ and $j$ are combined wavenumbers and tensor indices, $\nu$ is the kinematic viscosity, $M_k$ is the inertial transfer operator, $u_k$ is the Fourier transform of the velocity field, and $f_k$ is a stirring force, if required. A full discussion of this equation can be found in reference [1]. As ever, repeated indices are summed.

The two standard problems in DNS are (a) free decay; and (b) forced, stationary turbulence. In both cases, we start with an arbitrary (non-turbulent) velocity field, which is random and has a multivariate normal (i.e. Gaussian) distribution. The arbitrary initial energy spectrum $E(k,0)$ is chosen to be confined to very low wavenumbers. As time goes on, the nonlinear coupling in the NSE generates a velocity field at ever higher wavenumbers. In spectral terms, this is seen as $E(k,t)$ spreading out to higher wavenumbers, while the skewness rises from $S=0$ (corresponding to a Gaussian pdf) to $-S\sim 0.5$, corresponding to developed turbulence. A brief introduction to DNS may be found in Section 3.2 of reference [2].
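As an illustration of what such an initial condition looks like in practice, here is a minimal one-dimensional sketch (a scalar analogue only; a real DNS initial field is a solenoidal three-dimensional vector field, and all parameter choices here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# One-dimensional scalar analogue of the initial condition: a Gaussian random
# field whose spectrum is confined to low wavenumbers.
N = 4096
k = np.fft.rfftfreq(N, d=1.0 / N)            # wavenumbers 0, 1, ..., N/2
E0 = k**4 * np.exp(-2.0 * (k / 4.0)**2)      # arbitrary spectrum peaked at low k

# Gaussian Fourier coefficients carrying the prescribed spectrum.
u_hat = (rng.normal(size=k.size) + 1j * rng.normal(size=k.size)) * np.sqrt(E0)
u = np.fft.irfft(u_hat, n=N)

# The skewness quoted in the text is that of the velocity derivative; for a
# Gaussian field it is zero to within sampling error, whereas in developed
# turbulence -S is of order 0.5.
dudx = np.gradient(u)
S = np.mean(dudx**3) / np.mean(dudx**2)**1.5
print("derivative skewness of the Gaussian initial field:", S)
```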

The theoretical approach began with quasi-normality in the 1950s, in which one assumes that the fourth-order moment can be factorised as if Gaussian, in order to solve the second equation of the statistical hierarchy for the third-order moment. This, as is well known, led to a catastrophe. The first real advance was due to Kraichnan [3] and was followed by Wyld [4], in what is now known as renormalized perturbation theory. In some ways, this is rather like the DNS, in that we start with a random Gaussian velocity field with a prescribed spectrum which is confined to low wavenumbers. Then, instead of stepping this forward in time on the computer, we substitute it into the non-linear term of the NSE. Assigning a book-keeping parameter $\lambda$ (where $\lambda = 1$) to the nonlinear term, we expand out in powers of $\lambda$, with coefficients in the series being calculated iteratively. This is not strictly speaking perturbation theory, as $\lambda$ is not small, but it resembles it, hence the name. Of course we cannot truncate at low order in $\lambda$, so we must sum infinite series, or rearrange into sub-series which can be summed. This approach leads to remarkably successful results, although there are still some questions to be answered.

The last approach was due to Edwards [5] and is the method of the self-consistent field. In this theory, Edwards used the NSE to derive a Liouville equation for the turbulence pdf. The Gaussian pdf in this work is quite different from the other two. It is chosen to give the correct value of the two-velocity moment. Its role then is as a basis function for an iterative solution of the Liouville equation as an operator-product expansion about the Gaussian zero-order distribution. Symmetry arguments play an important part in this work and if you wish to pursue this point, you will find a discussion (and an extension to two-time forms) in reference [6]. It is worth noting two points. First, this is an expansion for the exact pdf about a Gaussian and, as I remarked earlier, turbulence pdfs can be quite close to Gaussian in form. Hence there is a possibility of establishing a second-order truncation as a rational approximation. Secondly, the statistical closures derived this way are cognate to Kraichnan’s closures which are derived by very different methods. These points should encourage you to take a `glass full’ rather than a `glass empty’ view of statistical turbulence theory!

[1] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[2] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[3] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[4] H. W. Wyld Jr. Formulation of the theory of turbulence in an incompressible fluid. Ann. Phys, 14:143, 1961.
[5] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[6] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.




Is there actually a single ‘turbulence problem’?

Is there actually a single ‘turbulence problem’?
When I was preparing last week’s post, I consulted the Saffman lectures in order to find an example of the culture clash between theoretical physics and applied maths. In the process I noticed quite a few points that I felt tempted to write about and in particular that old perennial question: is turbulence a single universal phenomenon? Or, does it depend on the physical situation under consideration and its conditions of formation? Over the years this question has been put numerous times by various people, both in discussions and in writing, but never seems to lead anywhere. Saffman pointed out that the opposite extreme would be to consider each situation of practical importance and describe it to the required degree of detail. At the same time he conceded that there was evidence for universality, but suggested that there might be merit in some form of cataloguing and classifying of flows.

Of course there has always been some degree of classification, even just for pedagogic purposes. For instance, free shear flows versus wall-bounded flows; but presumably Saffman was thinking in terms of something more profound. So far as I know, no such scheme exists; but, if it did, it might be analogous to the idea of universality classes in the theory of critical phenomena. Such phenomena are characterised by the way macroscopic observables, such as specific heat, magnetic susceptibility or the correlation length, behave as a system tends to the critical point. They either diverge (become infinite) or go to zero. This behaviour is represented by a power-law dependence on the reduced temperature, with the introduction of critical exponents which are either positive or negative, according to the observed behaviour at the critical point. If two different physical phenomena are found to have the same values of their critical exponents, then they are said to be in the same universality class. This is, of course, a purely phenomenological approach, but it corresponds to an underlying symmetry in the Hamiltonian of the system, along with the dimensionality of the space. An introductory account of this topic can be found in the book cited as reference [1] below.
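In the standard notation, with reduced temperature $t=(T-T_c)/T_c$, one writes, for example, \[ C \sim |t|^{-\alpha}, \qquad \chi \sim |t|^{-\gamma}, \qquad \xi \sim |t|^{-\nu}, \] for the specific heat, the susceptibility and the correlation length, respectively; two systems with the same set of exponents $(\alpha,\gamma,\nu,\dots)$ are then in the same universality class.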

There is no doubt that many of the pioneers of turbulence theory, viz. Taylor, Kolmogorov, Batchelor and Townsend, thought in terms of a correspondence between turbulence and statistical mechanics. As we have pointed out elsewhere (see Section 12.5 of reference [2]), Batchelor wrote about ‘an ultimate statistical state of the turbulence’ that would follow from a ‘whole class of different initial conditions’. As we have also pointed out (ibid), one problem with this is the very great difference in the number of degrees of freedom $N$ between the two. For canonical statistical mechanics, $N$ is so large that fluctuations can effectively be neglected, and the average and instantaneous probability distribution functions are virtually identical. This is certainly not the case for turbulence. So perhaps one can only expect a somewhat limited correspondence between the two. This is not an argument for giving up the analogy. Merely a plea for realism in employing it.

The basic idea underpinning the statistical picture of turbulence is that, as energy transfer proceeds from large eddies to small, information is lost about the conditions of their formation. Although many people prefer to think in terms of real space and ‘eddies’, the idea of an energy cascade is not well defined unless one works in wavenumber space, where the Fourier modes are the degrees of freedom. So, strictly speaking, one should express this in terms of transfer from small wavenumber modes to those at large wavenumber, where turbulent kinetic energy is converted into heat. This process is in accordance with the Lin equation, whereas the Karman-Howarth equation is entirely local and can tell us nothing about it.

The scaling of spectra from a variety of flows on Kolmogorov variables supports this picture; and, even if there are flows whose spectra do not scale in this way, that does not invalidate Kolmogorov scaling for those flows which do. The valid (and interesting) question then is: how do the flows that fail to scale differ from those that do? A consideration of spatial symmetry may shed some light on this.

Suppose, for a simple example, we consider turbulent shear flow in the $x$-direction, between infinite parallel plates situated at $y=\pm a$. The flow is homogeneous in the $z$-direction, while the mean velocity $U$ depends on $y$. If the plates are at rest, then $U(y)$ is symmetric under the interchange of $y$ and $-y$. However, if the plates are in relative motion (Couette flow) then $U(y)$ is antisymmetric under this interchange. The first case is an approximation to flow in a plane duct (or even, with some adjustment, to pipe flow) and it is well known that Kolmogorov scaling is observed. What happens in the Couette case, I don’t know. But it would be interesting to find out. The appropriate tool for cataloguing flows in this way is to transform to centroid and difference coordinates, and to expand the correlations in a Taylor series in the centroid coordinate. Well, it’s an idea!
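To make the suggestion a little more concrete (this is only a sketch of the idea), one would write the two-point correlation in terms of centroid and difference coordinates, \[ \mathbf{R} = \tfrac{1}{2}(\mathbf{x}+\mathbf{x}'), \qquad \mathbf{r} = \mathbf{x}-\mathbf{x}', \qquad Q_{ij}(\mathbf{x},\mathbf{x}') = Q_{ij}(\mathbf{R},\mathbf{r}), \] and then expand in the inhomogeneous direction, \[ Q_{ij}(\mathbf{R},\mathbf{r}) = Q^{(0)}_{ij}(\mathbf{r}) + Y\, Q^{(1)}_{ij}(\mathbf{r}) + Y^2\, Q^{(2)}_{ij}(\mathbf{r}) + \dots, \] where $Y$ is the component of $\mathbf{R}$ in the $y$-direction. The symmetry or antisymmetry of the mean flow would then determine which terms can survive.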

Lastly, for the theoretical physicist the problem posed by the Navier-Stokes equation in wavenumber space, and driven by random noise, is a well-posed problem. It should be noted that the pioneers in this area were careful to set it up such that it could satisfy the Kolmogorov conditions for an inertial range, and in doing this they were guided by the statistical treatment of other dynamical problems, such as Brownian motion. Nowadays it is seen as belonging to a wide class of driven diffusion equations with particular relevance to soft condensed matter. Recently we have even found the surprising result that it can undergo a phase transition at low Reynolds numbers [3], so there is much still to understand about this stochastic dynamical system.

[1] W. D. McComb. Renormalization Methods. Oxford University Press, 2004.
[2] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[3] W. D. McComb, M. F. Linkmann, A. Berera, S. R. Yoffe, and B. Jankauskas. Self-organization and transition to turbulence in isotropic fluid motion driven by negative damping at low wavenumbers. J. Phys. A: Math. Theor., 48:25FT01, 2015.




Here’s to mathematics and may it never be of use to anyone!

Here’s to mathematics and may it never be of use to anyone!

When I was a student, I read that mathematicians at conference dinners would drink a toast along the lines of the title of this piece. As an idealistic young man, I was quite shocked by this; and thought it very arrogant. Apart from anything else, it seemed to sell the entire discipline of applied maths very short indeed. I think that it took me until I was in my middle years to understand and indeed empathise with this statement.

In fact it can be seen as an indicator of what I call the culture of a subject. By `culture’ I mean something to do with a sense of what is the right way to think about physical problems, such as turbulence, or to attempt to solve them. The conviction that engineers, mathematicians and physicists have different cultures has grown on me over the years (and remember that I have been both mechanical engineer and theoretical physicist at different stages of my career).

A minor incident which helped my understanding of the mathematician’s attitude (or culture) happened when a colleague and I invigilated a class exam. A class from the maths department was being examined at the same time and their subject was something like `Functional analysis and Fourier analysis’. Well, I thought, this is something that I know a bit about. So I picked up a copy of the exam paper and was surprised to find that all the questions were to do with proving existence or uniqueness; not, as I would have expected, to actually work out some specific functional form when given certain initial conditions.

Another hint came at a workshop on turbulence at the Max Planck Institute for Mathematics in Bonn, sometime in the mid-1980s. All the speakers were theoretical physicists but a number of the resident mathematicians attended. When the first speaker had finished outlining his theory, one of the mathematicians said: `I would not dream of presenting such a very long calculation to an audience in one lecture.’ That was a bit of a bummer as we on the physics side had thought that it was a theory, not a calculation. This chap, a rather flamboyant American, had his comeuppance later, when we all went to lunch at a pizza restaurant and he attempted to order with Italian intonations and theatrical gestures. The waiter was having none of it and pretended not to understand. So the flamboyant one had to calm down and order like the rest of us.

These and other encounters led me to understand that for mathematicians it is essential to be able to study those aspects of the subject which interest them, without constraints being imposed for any reason. And so it is for physics. For pure physics it is essential to be able to think the unthinkable (if necessary) and pursue curiosity-based research. In passing, I should note that much physics research nowadays is really to be classed as applied physics. For instance, condensed matter physics, with its bedrock problems unsolved, seems to me to be very much materials science.

Naturally, it is in the subject of turbulence that these different cultures may clash. Theoretical physicists can publish in topics like particle theory, critical phenomena, cosmology or plasma physics, without having a mechanical engineer or applied mathematician refereeing the papers that they submit to journals. In turbulence, as I know from endless personal experience, this is not so. Of course this is exacerbated by the shortage of theorists working in the field, and even then there can be problems because of different agendas and an inability to put self-interest aside. I shall return to that particular aspect in a future blog, but for the moment I am concerned with the different cultures. Various instances can be found in the well-known lectures of Philip Saffman [1].

These lectures, and a previous set, are opinionated and quite stimulating to read; not least, in my case, because I so often disagree with them. In reference [1] on page 294, Saffman has a section on statistical methods. He begins with the general statistical theories, as pioneered by Kraichnan and the following quotation is of interest:

`The techniques of the statistical theory are supposed to be rigorous and analytical. They are certainly impressive with their talk of Greens (sic) functions, propagators, diagrams, Galilean invariance, and other jargon of physics and probability theory, and the resulting integro-differential equations are sufficiently complicated to suspend belief, but in fact the approximations are just as ad hoc and lacking in mathematical justification as those of Reynolds stress modelling. Also, the absence of a physical basis is unfortunately usually combined with the obscurity of the details.’

In this Saffman certainly made his position clear. It perfectly underlines the existence of a culture clash. In fact a detailed deconstruction of that quotation (which would not be entirely unsympathetic to Saffman) could be of interest and I might return to that for a blog on its own later on. However, to bring this to a point, he ends up quoting Leslie (1973) and Bradshaw (1976) on the significance of Kraichnan’s work but does not support his comments (which are rather confused) with any actual references to Kraichnan.

One point I should mention is that he says that Kraichnan’s theory `can postdict the Kolmogorov constant, which may not exist because of intermittency, …’ In later years, at the NASA-ICASE workshop on turbulence in 1984, we discussed his use of the word `postdict’ and he conceded that if a theory were genuinely from first principles it would be appropriate to say predict. Of course the question arises: was Kraichnan’s theory genuinely from first principles? And that is where Saffman’s criticisms really have some force. Again, this is something that I shall return to in later posts.

[1] P. G. Saffman. Problems and progress in the theory of turbulence. In H. Fiedler, editor, Structure and Mechanisms of Turbulence II, volume 76 of Lecture Notes in Physics, pages 273-306. Springer-Verlag, 1977.




Operational Large-Eddy Simulation.

Operational Large-Eddy Simulation.
When I was visiting TU Delft in 1997, I stayed with my wife and daughter in the Hague, where we rented an apartment from one of the professors at Delft. He and his wife occupied the penthouse above us. They had originally bought the second apartment so that their teenage daughters could combine a degree of freedom with parental oversight. By the time we came, their girls had long since left home and they were renting it out to visiting academics.

My landlord (I’ll call him Brian, because that was his name) used to give me a lift into the campus in the mornings and this could be a nerve-racking experience. His car had once belonged to Queen Juliana and was a gigantic Cadillac (I think) that was fitted with every conceivable luxury. But Brian was rather a small man and had to drive while peering through the steering wheel, so that when he swept out of the apartment block and turned into the narrow street beside a canal, there seemed to me to be a good chance that we would end up in the water.

However, one day he took me to see his experimental rigs. These were in a vast hangar and were heavily insulated, so that to me they just looked like large shapeless lumps wrapped in something like kitchen foil. I was unable to muster much enthusiasm and Brian told me that I `had no soul’! On a later occasion, he remarked tartly that he didn’t see the point of turbulence theory as he would just use the computer if he needed to take turbulence into account. That, I thought, would come as a surprise to those engineers whose field is turbulence modelling, but his remark stayed with me and I wondered whether one actually could use the computer in some operational way to model turbulence.

In Edinburgh, at that time, we were studying large-eddy simulation of the energy spectrum in the context of RG, and using DNS to test ideas. A typical approach was to run a fully resolved simulation (say with $N=256^3$ mesh points), with maximum wavenumber $K_{max} =1.2 k_d$, and compare this with a low-wavenumber partial simulation, cut off at $k=K_C$, having in this case $N=64^3$. As energy spectra invariably showed an upturn near the maximum wavenumber (exaggerated on a log scale), it seemed likely that an unresolved simulation would show a marked upturn, and that removing this upturn, by reducing the velocity field proportionally, could be made the basis of a feedback mechanism to produce the `correct spectrum’.

When I was back in Edinburgh, I discussed this with one of my students, who was working on DNS at the time. In [1] you can see how Alistair turned these vague ideas into an algorithm that worked. The figure below shows a spectrum for $N=64^3$, where we introduce the wavenumber $k_{upturn}$ to mark the point where the unresolved spectrum starts to turn up. That is the uncorrected spectrum, and its derivative is used to identify the position of $k_{upturn}$.

An energy spectrum with an upturn (crosses), its derivative (triangles) and a schematic indication of what the corrected spectrum should look like after the application of the feedback procedure (dashed curve). The vertical solid and dash-dotted lines indicate $k_{upturn}$ and $K_C$ respectively.

In the second figure, we show a comparison between the corrected and uncorrected spectra for $N=64^3$, along with the spectrum for the resolved simulation with $N=256^3$. It is clear that the compensated spectrum agrees well with the fully resolved one over their common range of wavenumbers.

 

Average evolved energy spectra, showing the results from resolved $256^3$ simulation (circles), the unresolved $64^3$ simulation and the compensated $64^3$ simulation (diamonds).
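To give a flavour of how such a feedback step might be coded (this is a minimal sketch of my own, with arbitrary detection and damping choices; the actual algorithm is given in reference [1]), one could work with shell-averaged quantities as follows:

```python
import numpy as np

def upturn_correction_factor(k, E, k_cutoff):
    """Sketch of the feedback idea: locate the wavenumber where the
    under-resolved spectrum starts to turn up, then return a factor, per
    wavenumber shell, by which the velocity modes in that shell would be
    reduced so that the spectrum follows a smooth power-law continuation.
    The detection criterion and the form of the continuation are arbitrary
    illustrative choices here.  Assumes k > 0 and E > 0."""
    slope = np.gradient(np.log(E), np.log(k))      # d ln E / d ln k
    curvature = np.gradient(slope)                 # where the slope starts rising
    candidates = np.where((k > 0.5 * k_cutoff) & (curvature > 0.0))[0]

    factor = np.ones_like(E)
    if candidates.size == 0:
        return factor                              # no upturn detected
    i0 = candidates[0]                             # index of k_upturn
    target = E[i0] * (k / k[i0]) ** slope[i0]      # smooth continuation of E(k)
    above = np.arange(k.size) >= i0
    factor[above] = np.sqrt(np.minimum(1.0, target[above] / E[above]))
    return factor   # velocity modes in shell |k| get multiplied by factor[shell]
```

Applied repeatedly during the run, such a factor plays the role of the feedback loop described above, and the associated energy drain can then be interpreted as a subgrid eddy viscosity.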

For further details you should consult reference [1]. However, we can make two specific points here.

First, the ratio $k_{upturn}/K_C$ is plotted as a function of time in the paper. It was later pointed out to me that this is probably a good measure of eddy noise and that it would be interesting to know the form of its pdf. Unfortunately we did not think of measuring that at the time and anyone who is interested in subgrid modelling might find it helpful to rectify this omission. Another point in passing is that Fig. 2 in reference [1] shows a very rapid burst of activity at one stage and somehow this underlines just how little we really understand about the NSE as a dynamical system.

Secondly, when the subgrid drain due to the feedback loop is interpreted as a subgrid eddy viscosity, this agrees closely with the usual phenomenological form based on the truncated transfer spectrum and the corresponding energy spectrum. See Fig. 5 in reference [1].

Of course one would like to apply such a method to shear flows, but there the picture is complicated by the fact that lack of homogeneity means that energy can flow due to inertial transfer in space as well as in wavenumber. One could study this by separating the two effects, using centroid and relative coordinates. If spatial transfer were mainly due to large eddies, then a practical separation might be achieved.

[1] A. J. Young and W. D. McComb. Effective viscosity due to local turbulence interactions near the cutoff wavenumber in a constrained numerical simulation. J. Phys. A, 33:133-139, 2000.




Peer Review in Wonderland

Peer Review in Wonderland
In 1974 I completed a task which had begun during my PhD days, and found a way of rendering the Edwards statistical closure compatible with the Kolmogorov spectrum. The basic idea was that the entire transfer spectrum $T(k)$ acted as a sink of energy at low wavenumbers and a source of energy at high wavenumbers. It could not, as in the Edwards theory, be divided into separate output and input terms that were valid at all $k$. This division, which is present in other theories, including DIA, would have seemed natural at the time, as it is characteristic of many equations of mathematical physics, such as the Boltzmann equation, the Pauli master equation, and the Chapman-Kolmogorov equation. But these equations are for Markov processes and turbulence transport in wavenumber is most emphatically not Markovian.

My key assumption was that the turbulence response was determined by a local (in wavenumber) energy balance and I called it the Local Energy Transfer (or LET) theory. It was limited to the single-time, stationary formulation, and a few years later I published a two-time extension of the theory which could be compared to Kraichnan’s DIA. This analysis was somewhat heuristic and I have written in a previous post (see post for 16 July 2020) about its inadequacies and how they were cured over the years by phenomenological and heuristic methods. However, my long term ambition was to follow the self-consistent field methods of Sam Edwards and actually derive the LET for the two-time case from first principles.

It is a well-known truth that, as one gets older, it becomes more difficult to do mathematics. I’m not sure why this should be, but certainly as the years went on I was happy to entrust the detailed derivations to my students. Nevertheless, once I retired I felt that this was the moment to try. I had no commitments, apart from the visits to be made as part of my Leverhulme travel fellowship, so I could proceed at a glacial pace to try to work out a theory. The result was a half-baked theory which I published in 2009, two and a half years after retirement. A lot more time passed, and in 2017 I published a paper which, although in some respects still a work in progress, does amount to a first-principles derivation of the LET. It is also a concise review of the topic and one of my colleagues said that I should have published it as a review article, in which case I would have escaped the hassle and also been paid some money. Well, if I had escaped the hassle, I wouldn’t have had anything to write about in this post!

The first lot of hassle arose when I submitted it to JFM. This is where Sam published his 1964 paper and I thought it appropriate to do the same. But the JFM alas is not what it was when Sam published there nor indeed what it was over the years that I published a number of papers in it. Indeed, if memory serves, one of the referees said something to the effect that I had had more than my share of JFM papers and that appeared to be his main reason for rejection. For the moment I will pass over this episode. To do it justice I would have to publish the entire correspondence online. Whether or not I do that, there are some points to be made about Lagrangian versus Eulerian theories, so I will return to that topic in a later post.

The next step was that I rewrote the paper and submitted it to JPA [1], where it ultimately appeared. At the first hurdle there was the usual lukewarm result and, as JPA is a staff-edited journal, my manuscript was sent off to a member of the editorial board (EBM) for a decision. In passing, I should say that, while I also think that the time for anonymous refereeing has passed, I am strongly opposed to an EBM sheltering behind anonymity when giving a decision. It is, in my view, an impropriety.

Of course, it isn’t necessarily very difficult to figure out who your anonymous EBM is; and naturally his field of interest as well. In this case the EBM made a few vague remarks which indicated that he had probably not even troubled to read the paper. Then he said something like: `I should have thought that a theory to explain the anomalous exponents of the higher-order moments was a more worthy problem.’ That was apparently his grounds for rejecting the paper. I then appealed to the Editor-in-Chief, who made a careful assessment of the situation which was reflected in his detailed written statement in favour of publishing the manuscript. This was a scholarly decision which was fully justified by the subsequent interest in the paper. Within days of the paper being published, it had been downloaded several hundred times, and at time of writing the total number of downloads is over twelve hundred. That is a very large number of downloads when compared to recent papers on turbulence in JPA and perhaps when compared to any papers on turbulence.

So once again, we find ourselves in the Alice in Wonderland situation that is quite common in turbulence refereeing, where the normal rules don’t seem to apply. The failing of my LET theory, so far as the EBM was concerned, is that it is compatible with the Kolmogorov spectrum and hence possibly incompatible with the existence of his particular problem. With regard to anomalous exponents, recent analysis using a standard technique of experimental physics to account for systematic error, indicates that this may be the main cause of so-called anomalous exponents [2]. I shall have more to say on this particular topic in a later post.

[1] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[2] W. D. McComb, S. R. Yoffe, M. F. Linkmann, and A. Berera. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence. Phys. Rev. E, 90:053010, 2014.




Formulation of Renormalization Group (RG) for turbulence: 2

Formulation of Renormalization Group (RG) for turbulence: 2
In last week’s post, we recognised that the basic step of averaging over high-frequency modes was impossible in principle for a classical, deterministic problem such as turbulence. Curiously enough, for many years it has been recognized in the analogous subgrid modelling problem that a conditional average is required; and that this must be evaluated approximately. But even then it has not apparently been realised that the formulation of the average must also be approximate. As for theoretical physicists, they have long forgotten that Wilson pointed out the need for a conditional average in RG, and that it is evaded in their field by working with Gaussian distributions, which render it trivial. So one still sees the occasional pointless paper claiming to be a theory of turbulence by people who are unaware of the work of Forster et al as mentioned in my previous post.

During the second half of the 1980s, I was writing my first book on turbulence and simultaneously trying to figure out what was wrong with my iterative averaging form of RG. By the end of that decade I had sent off my MS to the publishers and could concentrate on the problems of RG. Early in 1990 I realised that the average over the $u^+$ could not be simply a filtered average if $u^-$ was to be held constant. Working closely with my student Alex Watt, I came up with the two-field theory to evaluate the conditional average approximately; and this produced a considerable improvement by reducing the dependence on the choice of spatial rescaling factor [1]. Early in 1991, when I returned from the US where I had visited MSRI, Berkeley, with a side visit to the Turbulence Centre at Stanford, we began work on a formulation of the conditional average, in which we were joined by another of my students, Bill Roberts. Some questions had arisen during my trip to the States and that lent additional impetus to this work. If memory serves, the key realisation that a conditional average of the type we were using was impossible for a macroscopic deterministic system was due to Alex and Bill; and arose when they were discussing this by themselves. This galvanised our approach and this work was published as reference [2].

Although it defies chronology, we will discuss this theory first. Consider a set of realizations $\{u(k,t)\}$, with their low-$k$ parts clustering around one particular member of the set $v^-(k,t)$, such that \[u^-(k,t)=v^-(k,t) + \phi^-(k,t),\] where $\phi^-$ is the control parameter for the conditional average $\langle \dots \rangle_c$ and is chosen to satisfy \[\langle \phi^- \rangle_c = 0 \qquad \mbox{and} \qquad \langle u^- \rangle_c = v^-.\] In principle bounds on $\phi^-$ can be determined from a predictability study of the NSE, but clearly the more chaotic it is, the smaller is $\phi^-$, and of course when the conditional average is of modes with asymptotic freedom, $\phi^- = 0$.

The two-field theory was put forward in [1] and the essential step was to write the high-$k$ modes in terms of a new field $w^+$, thus \[u^+ = w^+ + \Delta^+.\] Here $w^+$ is of the same general type as $u^+$ but is not coupled to $u^-$. We identified a form for $\Delta^+$ by making an expansion of the velocity field in a Taylor series in wavenumber about $k_0$. We tested the theory by predicting a value for $\alpha$, the pre-factor in the Kolmogorov spectrum, and found this to be much less sensitive to the value of the bandwidth parameter. This theory involved two plausible approximations and these were later subsumed into a consistent perturbation expansion in powers of the local (in wavenumber) Reynolds number, along with the expansion of the chaotic velocity field being replaced by an equivalent expansion of the covariance. The current situation is that we predict $\alpha = 1.62 \pm 0.05$ over the range $0.2 \leq \eta \leq 0.6$ of the bandwidth parameter. Evidently this breaks down for $\eta \leq 0.2$, as the band is so small that integrals are dominated by behaviour near the lower cut-off wavenumber; while for $\eta \geq 0.6$ the breakdown is due to the inadequacy of the first-order truncation of the Taylor series for a large bandwidth.

Fraction of energy (full line) and dissipation (dashed line) lost due to truncation at a specific wavenumber. This figure is from reference [4].

In carrying out this analysis, we eliminate modes starting from a maximum value of $k=k_0$ and end up with the onset of scale-invariance at a fixed-point wavenumber which is a fraction of $k_0$. This fixed point is the top of the inertial range of wavenumbers and, although it is not a precisely defined wavenumber, experimentalists have traditionally taken it to lie in the range $0.1 k_d$ to $0.2 k_d$, where $k_d$ is the Kolmogorov wavenumber. If we take $k_0 = 1.6 k_d$ (see the figure, which is taken from reference [4]) and consider the case $\eta = 0.5$, where the fixed point occurs at the fourth iteration, we find the numerical value of the fixed-point wavenumber to be $k_{\ast} = (0.5)^4 \times 1.6\, k_d = 0.1\, k_d$, in pretty good agreement with the experimental picture.
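The arithmetic of that last step is trivially checked; a two-line sketch with the values quoted above (nothing here is new information):

```python
# Sketch: fixed-point wavenumber after n elimination steps, k* = (1 - eta)^n * k0,
# using the values quoted in the text (eta = 0.5, k0 = 1.6 k_d, fixed point at n = 4).
eta, k0_over_kd, n_fixed = 0.5, 1.6, 4

k_star = (1 - eta) ** n_fixed * k0_over_kd
print(f"k* = {k_star:.3f} k_d")   # 0.100 k_d: the top of the inertial range
```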

To sum up, this method seems to represent the inertial transfer of energy rather well. But, as it stands, it offers nothing on the phase-coupling effects in the momentum equations which are usually referred to as eddy noise.

[1] W. D. McComb and A. G. Watt. Conditional averaging procedure for the elimination of the small-scale modes from incompressible-fluid turbulence at high Reynolds numbers. Phys. Rev. Lett., 65(26):3281-3284, 1990.
[2] W. D. McComb, W. Roberts, and A. G. Watt. Conditional-averaging procedure for problems with mode-mode coupling. Phys. Rev. A, 45(6):3507-3515, 1992.
[3] W. D. McComb and A. G. Watt. Two-field theory of incompressible-fluid turbulence. Phys. Rev. A, 46(8):4797-4812, 1992.
[4] W. D. McComb, A. Hunter, and C. Johnston. Conditional mode-elimination and the subgrid-modelling problem for isotropic turbulence. Phys. Fluids, 13:2030, 2001.




Formulation of Renormalization Group (RG) for turbulence: 1

Formulation of Renormalization Group (RG) for turbulence: 1

In my posts of 30 April and 7 May, I discussed the relevance of field-theoretic methods (and particularly RG) to the Navier-Stokes equation (NSE). Here I want to deal with some specific points and in the process highlight the snags involved in going from microscopic quantum randomness to macroscopic deterministic chaos.

The application of the dynamic RG algorithm to randomly stirred fluid motion was pioneered by Forster et al (FNS) [1] and what is essentially their algorithm (albeit in our notation) may be stated as follows.

Filter the velocity field $u(k,t)$ into $u^-(k,t)$ on $0\leq k \leq k_1$ and $u^+(k,t)$ on $k_1\leq k \leq k_0$. Note that we introduce $\nu_0$ as the notation for the kinematic viscosity, thus anticipating the subsequent renormalization. The RG algorithm then consists of two steps.

1. Solve the NSE on $k_1 \leq k \leq k_0$. Use that solution to find the mean effect of the high-$k$ modes and substitute it into the NSE on $0 \leq k \leq k_1$. This results in an increment to the viscosity $\nu_0 \rightarrow \nu_1 = \nu_0 + \delta \nu_0$.

2. Rescale the basic variables so that the NSE on $0 \leq k \leq k_1$ looks similar to the original NSE on $0 \leq k \leq k_0$.

These steps are repeated for $k_2 < k_1$, $k_3 < k_2$, and so on, until a fixed point which defines the renormalized viscosity is reached. The general idea is illustrated schematically in the figure.


Sketch illustrating the choice of wavenumber bands for Gaussian perturbation theory at small wavenumbers and the choice of bands for recursive RG at large wavenumbers.

Two approaches are illustrated in the figure. First, we have the theory of FNS [1], in which a stirring force with Gaussian statistics is specified and an ultraviolet cut-off wavenumber $k_0 =\Lambda$ is chosen to be small enough to exclude the effects of the cascade. This means that this is not a theory of turbulence and the authors make that fact clear in their title. More recent workers in the field have not been so scrupulous. While FNS do obtain a fixed point, this is at $k=0$, which is a trivial fixed point analogous to the high-temperature fixed point in critical phenomena.

The first version of my iterative-averaging theory was developed over the period 1982-86 and is summarised in [2]. Here the ultraviolet cutoff is chosen to be $k_0=k_{max}$, such that the turbulence dissipation is approximately captured by the formulation, thus: \[\varepsilon \simeq \int^{k_{max}}_0 \, 2\nu_0 k^2 E(k) \,dk,\] where $E(k)$ is the energy spectrum [2]. The wavenumber bands are introduced through $k_1 = h k_0$, where $h$ is the spatial rescaling factor, such that $0 \leq h \leq 1$, and the bandwidth is given by $\eta = 1-h$. In this approach the stirring forces are chosen to be peaked near the origin and are specified by their rate of doing work on the fluid. The RG iteration does not involve them in any way as it reaches a fixed point which corresponds to the top of the inertial range.
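The choice of $k_{max}$ by this dissipation-integral criterion is easily illustrated numerically. The sketch below uses a model spectrum (a $-5/3$ range with an exponential dissipative cut-off) and an arbitrary 98% capture criterion; both are assumptions made for illustration, not the choices in [2].

```python
# Sketch: choosing k_max so that int_0^{k_max} 2 nu0 k^2 E(k) dk captures most of
# the dissipation. The model spectrum and the 98% criterion are illustrative assumptions.
import numpy as np

nu0 = 1e-3                                  # illustrative molecular viscosity
kd = 100.0                                  # illustrative Kolmogorov wavenumber
k = np.linspace(1e-3, 10 * kd, 200_000)
E = k ** (-5/3) * np.exp(-2.0 * k / kd)     # model spectrum: -5/3 range + cut-off

integrand = 2.0 * nu0 * k**2 * E
cumulative = np.cumsum(integrand) * (k[1] - k[0])   # running dissipation integral
total = cumulative[-1]                              # full dissipation rate

k_max = k[np.searchsorted(cumulative, 0.98 * total)]
print(f"k_max is about {k_max / kd:.1f} k_d for this model spectrum")
```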

It is a cardinal principle of RG that the final result should not depend on the arbitrary parameters of the transformation. In the case of iterative averaging, the fixed-point effective viscosity was found to be independent of the choice of $\nu_0$, over quite a wide range. However, there was some dependence on the choice of $h$, and this was a signal that something was wrong.

The problem lay with the averaging. To simply filter and then average over the $u^+$ means that we are treating the $u^-$ and $u^+$ as independent variables. In the case of Gaussian variables, as considered by FNS, they are independent. But for the deterministic solutions of the macroscopic NSE, they cannot be independent variables. In fact, it is not even possible to formulate a rigorous conditional average.

This is easily seen (although it took many years to see it!). The $u^-$ and the $u^+$ each consist of a filter function and a Fourier transform operating on the identical field $u(x,t)$, and it is only that underlying field which is averaged. So if we average one, we average the other.
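The point is easily verified numerically, although it hardly needs it: in the little sketch below (a synthetic field and a sharp spectral cut-off, both purely illustrative), $u^-$ and $u^+$ are produced by the one Fourier transform of the one field, and they sum back to it exactly; there is no separate random object left over which could be averaged while $u^-$ is held fixed.

```python
# Sketch: the low-k and high-k parts are both functionals of the same field u(x),
# obtained from a single Fourier transform. (Field and cut-off index are illustrative.)
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(512)                 # stand-in for u(x,t) at fixed t
U = np.fft.rfft(u)                           # the one Fourier transform of that field
kcut = 32                                    # sharp spectral cut-off (a choice)

low, high = np.zeros_like(U), np.zeros_like(U)
low[:kcut], high[kcut:] = U[:kcut], U[kcut:]        # the two filter functions

u_minus = np.fft.irfft(low, n=u.size)
u_plus = np.fft.irfft(high, n=u.size)

print(np.allclose(u_minus + u_plus, u))      # True: together they are just u again
```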
In order to get round this difficulty, we have to formulate the conditional average as an approximation and exploit the underlying idea of deterministic chaos. We shall discuss this in the next post.

[1] D. Forster, D. R. Nelson, and M. J. Stephen. Large-distance and long-time properties of a randomly stirred fluid. Phys. Rev. A, 16(2):732-749, 1977.
[2] W. D. McComb. Application of Renormalization Group methods to the subgrid modelling problem. In U. Schumann and R. Friedrich, editors, Direct and Large Eddy Simulation of Turbulence, pages 67-81. Vieweg, 1986.
[3] W. D. McComb. Theory of turbulence. Rep. Prog. Phys., 58:1117-1206, 1995.
[4] W. D. McComb. Asymptotic freedom, non-Gaussian perturbation theory, and the application of renormalization group theory to isotropic turbulence. Phys. Rev. E, 73:26303-26307, 2006.




Reynolds averaging re-formulated.

Reynolds averaging re-formulated.
At the beginning of the 1980s I was still involved in experimental work on drag reduction; while, on the theoretical side, I had begun numerical evaluation of the LET theory. One day I went into the lab to help a student who was having problems with his laser anemometer. In those days we used a digital voltmeter to obtain the mean velocity and an rms voltmeter to obtain (you’ve guessed it!) the rms velocity. Actually at that stage we had begun recording the anemometer output voltage and taking it away for A/D conversion and subsequent processing on a computer. But we still used the voltmeters for setting things up, and essentially the rule of thumb was to turn up the value of the time constant until the reading became steady.

It was while my student was playing with these things that I started thinking that it was Reynolds’s introduction of averaging which had created the closure problem, and (this is very profound!) if we didn’t average then we wouldn’t have that problem. So how would it be if we averaged over a very short time? Would we have a small version of the closure problem? One, perhaps, that would be more easily solved; and then one could average the resulting smoothed system over a slightly longer time, and so on. I began to picture replacing Reynolds averaging with a series of smoothing operations, over progressively longer times, and with some approximate calculation at each stage. So one might envisage replacing the Reynolds equation with a form in which the Reynolds stresses did not occur as such, but were represented by a constitutive relationship derived during the preceding iterations plus the unaveraged portion of the nonlinear term.

I began working on this idea and ultimately it was published as reference [1]. What I want to do here is make a couple of general points about this analysis, but first I will explain the basic idea. Suppose we have a quasi-steady mean flow, in which external conditions (e.g. applied pressure gradients, boundary conditions) vary with time over scales which are long compared with the scales of the turbulent energy transfer processes. Then we may define the mean velocity as: \begin{equation}\overline{U(t)} = \frac{1}{2T}\int_{-T}^{T}\, U(t+s)\,ds,\end{equation} where $2T$ is large enough to smooth out the turbulent fluctuations, but shorter than the timescales of external variations. Of course, if the mean flow is actually steady, then we can take the limit $T\rightarrow \infty$ in the usual way. In either case, we may obtain the fluctuating velocity $u$ as: \[u=U-\overline{U},\] from which it trivially follows that $\overline{u} =0$. Note that this analysis is in real space, but to keep things simple I’m omitting space variables and the vector nature of the velocity.

Next let us generalise the above smoothing operation to \begin{equation} \langle U(t)\rangle_0 =\int_{-\infty}^{\infty}\, U(t+s) a_0(s)\, ds, \end{equation} where \[\int_{-\infty}^{\infty} \, a_0(s)\, ds =1.\] The analogue of the fluctuating velocity from Reynolds averaging is then defined by: \[u_{0}(t) = U(t) - \langle U(t) \rangle_0.\] Evidently the actual limits of integration are determined in practice by the choice of the weight function $a_0(t)$, and I began with the natural choice of the Heaviside unit function multiplied by $1/2\tau_0$ and defined on $-\tau_0 \leq t \leq \tau_0$, where $\tau_0$ is very small compared to any relevant turbulence timescale, but otherwise arbitrary. With this choice, our smoothing operation is just the first operation above, with $T=\tau_0$. Then, repeating the process with $\tau_1 > \tau_0$, and so on, for ever-increasing smoothing times, would ultimately take us back to Reynolds averaging. But this is not the choice of $a_0$ that I made in [1], and I will come back to that.
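A minimal sketch of this progressive smoothing with the top-hat (Heaviside) weight may make it concrete; the signal and the sequence of smoothing times below are of course illustrative assumptions only.

```python
# Sketch: progressive smoothing of a fluctuating signal with top-hat weights of
# increasing half-width tau. The signal and the tau values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
U = np.sin(0.5 * t) + 0.3 * rng.standard_normal(t.size)  # slow "mean flow" + fluctuations

def smooth(signal, tau, dt):
    """Convolve with a normalised top-hat of half-width tau (the a_0 of the text)."""
    n = max(1, int(round(tau / dt)))
    kernel = np.ones(2 * n + 1) / (2 * n + 1)
    return np.convolve(signal, kernel, mode="same")

U_sm = U.copy()
for tau in (0.01, 0.05, 0.2, 1.0):       # ever-increasing smoothing times
    U_sm = smooth(U_sm, tau, dt)         # each stage smooths the previous stage's output
    u_fluct = U - U_sm                   # analogue of the fluctuating velocity
    print(f"tau = {tau:4.2f}:  rms of the unsmoothed remainder = {u_fluct.std():.3f}")
```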

At that time, the success of the renormalization group (RG) in the theory of critical phenomena was becoming well known, and it occurred to me that my underlying iterative method could be turned into an RG calculation. To do this, I dropped the shear flow aspects and specialised the theory to isotropic turbulence. Then I took a Fourier transform with respect to time to introduce the angular frequency $\omega$, and invoked the Taylor hypothesis to introduce the wavenumber $k$. Hence I had turned my iterative averaging over time into an iterative form of mode elimination, which led to a fixed point for the effective viscosity arising from the eliminated modes.

This was the form of the paper submitted for publication. The referee was Bob Kraichnan and, although broadly happy with the paper, he expressed a concern that the wavenumber bands were not clearly defined. I agreed with this and fixed the problem by choosing a new weight function \[a_0(t) = (1/\tau_0)\, \mathrm{sinc}(t/\tau_0),\] where $\mathrm{sinc}$ denotes the sine of its argument divided by that argument, and this is how the paper was published. There were two broad consequences of this.
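Before coming to those consequences, it is worth seeing why the sinc weight clears up the band definition: its Fourier transform is (nearly) a sharp cut-off in frequency, whereas the Fourier transform of the top-hat weight leaks well beyond its nominal band. The sketch below makes this comparison; the parameter values, and the normalisation of the kernels so that each integrates to one, are my own illustrative choices rather than those of [1].

```python
# Sketch: frequency response of a top-hat weight versus a sinc weight.
# The sinc kernel gives a (nearly) sharp band edge at omega = 1/tau0; the top-hat does not.
# Parameter values and kernel normalisations are illustrative assumptions.
import numpy as np

dt = 1e-3
t = np.arange(-5.0, 5.0, dt)
tau0 = 0.1                                               # smoothing time

tophat = np.where(np.abs(t) <= tau0, 1.0 / (2 * tau0), 0.0)
sinc_w = np.sinc(t / (np.pi * tau0)) / (np.pi * tau0)    # sin(t/tau0)/(pi*t), unit integral

freqs = np.fft.rfftfreq(t.size, dt) * 2 * np.pi          # angular frequencies
probe = np.argmin(np.abs(freqs - 1.5 / tau0))            # a frequency just outside the band

for name, a0 in (("top-hat", tophat), ("sinc", sinc_w)):
    A = np.abs(np.fft.rfft(a0)) * dt                     # magnitude of the frequency response
    # The top-hat still responds strongly here; the sinc response is close to zero.
    print(f"{name:8s}: response at 1.5/tau0 = {A[probe]:.3f}")
```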

First, I am left with the feeling that I didn’t actually do what I set out to do, namely to reformulate Reynolds averaging. Unfortunately, due to the pandemic, my older notebooks are not available to me, so a fresh look at that aspect will have to wait. Secondly, this was the beginning of a number of years working on RG applied to turbulence. There was a lot more to it than I imagined at that early stage, and an overview and exposition of the current situation can be found in reference [2]. I intend to follow this post with some remarks and observations on the application of RG to turbulence in future posts.

[1] W. D. McComb. Reformulation of the statistical equations for turbulent shear flow. Phys. Rev. A, 26(2):1078-1094, 1982.
[2] W. D. McComb. Asymptotic freedom, non-Gaussian perturbation theory, and the application of renormalization group theory to isotropic turbulence. Phys. Rev. E, 73:26303-26307, 2006.




Turbulent dissipation and other rates of change.

Turbulent dissipation and other rates of change.
When I was working for my PhD with Sam Edwards in the late 1960s, my second supervisor was David Leslie. We would meet up every so often to discuss progress, and I recall that David was invariably exasperated by our concentration on asymptotic behaviour at high wavenumbers. He was strongly motivated towards applications, and felt that the production process at low wavenumbers was more important. To him the dissipation was uninteresting. He used to say to us: `you are messing about down in the drains, when the interesting stuff is all in the production region.’

We had many good-humoured arguments but none of us changed our positions. Yet with the passage of time, I increasingly feel that David had a point, even when we restrict our attention to isotropic turbulence. In fact, I would go further and argue that much of the confusion over the Kolmogorov (1941) picture arises from a failure to see that the dissipation is not the primary quantity. And even when one arrives legitimately at the dissipation (having first considered the production and then the inertial transfer rates), there is often confusion between the instantaneous dissipation rate and the mean dissipation rate. I have myself contributed to that confusion, and this is an opportunity to set matters straight. But first let us consider what the usual practices are.

Kolmogorov used $\epsilon$ for the instantaneous dissipation and $\bar{\epsilon}$ for the mean. Then, in 1953, Batchelor used $\epsilon$ for the mean dissipation (see equation (6.3.2) in the second edition of his book). A few years later, in 1959, Hinze favoured $\varepsilon$ for the mean dissipation, and this has tended to prevail ever since, particularly in theoretical physics, where $\epsilon$ is used as an expansion parameter: e.g. the famous $\epsilon$-expansion!

In my 1990 book [1], I used $\varepsilon$ for the instantaneous dissipation in equation (1.17) and $\langle \varepsilon \rangle$ for the mean dissipation in equation (A18). Unfortunately, where I discuss the Kolmogorov variables, in Chapter Two and elsewhere, it is clear that I intend $\varepsilon$ to be the mean dissipation rate. In fact this is the most prevalent usage throughout the literature, at least in theoretical work. When one thinks about it, this makes sense: one is only ever really interested in mean quantities, and a hat notation can be used for instantaneous values where they are required. In my later book [2], I tried to sort this out, as follows: \[\widehat{\varepsilon}=\; \mbox{instantaneous dissipation;}\] \[\varepsilon=\;\mbox{mean dissipation;}\] \[\varepsilon_D = -\frac{dE}{dt}:\;\mbox{the eddy decay rate;}\] \[\varepsilon_T = \Pi_{max}:\;\mbox{maximum rate of inertial transfer;}\] \[\varepsilon_W:\;\mbox{rate at which stirring forces do work on the fluid.}\]

So how does this help us with Kolmogorov, back in 1941? Well, in fact it helps us with Obukhov who, unlike Kolmogorov, worked in wavenumber space, where there actually is a turbulence cascade. Obukhov realised that as the Reynolds number increased, there would be a limit where the inertial transfer rate became equal to the dissipation. As the Reynolds number continued to increase, this region of maximum energy transfer would increase in extent, to ever higher wavenumbers. This behaviour has been amply confirmed and is an example of scale invariance. It was recognized by both Obukhov and Onsager that in this range of wavenumbers the spectrum would take the form \[E(k) \sim \varepsilon_T^{2/3}k^{-5/3}.\] If you wish, you can replace the rate of inertial transfer with the dissipation rate. If you want to derive Kolmogorov’s $r^{2/3}$ law, then just Fourier transform the Obukhov result for the spectrum. This spectral form is the one that has been derived by a properly formulated physical argument; it would be difficult to see how anyone could drag in the so-called intermittency corrections!
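For completeness, the dimensional argument behind this form takes only a couple of lines (this is the standard reasoning, not anything specific to the references above). With energy taken per unit mass, the dimensions are \[ [E(k)] = L^3 T^{-2}, \qquad [\varepsilon_T] = L^2 T^{-3}, \qquad [k] = L^{-1}, \] so writing $E(k) = C\,\varepsilon_T^{\,a}\,k^{\,b}$ and matching powers of $L$ and $T$ gives $2a - b = 3$ and $3a = 2$, whence $a = 2/3$ and $b = -5/3$.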

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.