
Heuristic is as heuristic does!

In the early years of my career, I would sometimes encounter the word `heuristic’ in a mathematical theory. I understood that authors, when using this word, were in effect crossing their fingers behind their back and indicating that their work might not be entirely rigorous. But I found myself quite unable to understand precisely what the word meant.

Naturally I consulted a dictionary. It said:
1. Heuristic: serving or leading to find out.
2. (Of method, argument etc) depending on assumptions based on past experience.
3. Consisting of guided trial and error.

Well, number 2 looked the most relevant but was not really helpful. I still wasn’t sure how I should interpret the word when I met it in an article. I found this mildly frustrating.

Some years later, I was working on the preparation of my book on the physics of turbulence, and I was considering the relationship between the work of Sam Edwards [1], and the later work of Novikov [2], on the introduction of random forcing to the Navier-Stokes equation. In discussing the paper by Edwards, Novikov made use of the word `heuristic’ and this is what he said:

`However, the probability distribution density in functional space, has no clearcut mathematical meaning, so that the entire analysis in [my reference [1], cited by Novikov as his reference [7]] has a heuristic character (which does not detract from the value of this interesting paper).’

The point was that Edwards was working with the pdf, while Novikov used the characteristic functional. So, although the Edwards analysis led to the same result as the Novikov analysis, it was (in Novikov’s view) mathematically iffy. I felt, from this, that I could understand how mathematicians used the word `heuristic’, and since then I have become quite comfortable with it and sometimes use it myself.

That was progress of a kind. But with the passage of time I am no longer sure that Novikov was correct. The fact is that the Edwards analysis was carried out in a finite volume (a cube of side $L$), with the limit of infinite system volume being taken at the end of the calculation. In other words, I think that this analysis was mathematically well defined. So although I understand Novikov’s use of the word heuristic, I no longer agree with the basis of his comments. I intend to return to the topic of the gulf between rigour in theoretical physics, on the one hand, and rigour in mathematics, on the other.

[1] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[2] E. A. Novikov. Functionals and the random-force method in turbulence theory. Soviet Physics JETP, 20:1290, 1965.




Peer review: some further thoughts.

Vacation post No 4. I will be out of the virtual office until Monday 31 August.
Peer review continues to cause concern, with widespread perceptions of unfairness. Most of what I have noticed recently seems to come from the medical/public health communities, where one major gripe appears to be that established researchers have a significantly better chance of getting published. The current favourite response to this is to introduce double-blind refereeing, where you don’t know who your referee is and they don’t know who you are. Well, I can’t see that working in turbulence, and I doubt if there is any way that I could conceal my identity. In fact, that goes for anyone who publishes regularly in a field which does not have a lot of participants. So, in STEM subjects in general, that looks like a non-starter.

In any case, why shouldn’t a researcher with a good track record of publication in their subject have a better chance of being published? Indeed, I would go further. I think that it should be part of the `rules of the game’ that there should be a presumption that a further publication on a topic should be published unless it is wrong in some way, or misleading, or quite definitely does not add anything to previous publications by that particular author. In other words, there should be an onus on the referee to demonstrate such faults.

I would actually go further and argue that, rather than introducing an additional layer of anonymity, we should remove the existing one. In my view, it would be helpful if referees had to put their name to their report. It should improve both fairness and (sometimes) courtesy. I should make it clear that I apply that opinion to everyone who referees and do not exclude myself!

Naturally there will be those who will respond that if we remove anonymous refereeing, then the sky will fall in. I don’t see why this should be. In my early years at Edinburgh, I did some work on turbulent diffusion in aerosol jets and this was published in the Journal of Aerosol Science. Their policy, at least at that time, was to have one referee who was expected to engage constructively with a submission and then to sign their report. My memory of it (rather vague now) was that it was a civilised and effective process. I also remember that the late Bob Kraichnan signed his referee reports and that was my experience on the few occasions that he refereed anything of mine.

And what about me? Well, I have dropped my anonymity on a number of occasions over the years, but only where I felt that it was particularly appropriate, for instance when my own work was being criticised. Apart from that, I have just been part of the flock! However, I seriously believe that the nature of refereeing in turbulence demands reform. My PhD supervisor described it as `cut-throat’ and at times it would be hard to disagree. Partly I think that this is due to the heterogeneous nature of the turbulence community, so that very often people are refereeing work that they are simply not able to understand.

I have yet more thoughts on this subject, which will appear in further posts. At the moment I am looking forward to a month’s holiday from turbulence, so this is being written on 30 July in order to be posted on 27 August. On 31 August I shall begin reading my email again.




Is there any place for personal taste in science?


Vacation post No 3. I will be out of the virtual office until Monday 31 August.

It has long been the case that physicists talk approvingly about a
physical theory as being `elegant’ or even `beautiful’. Like so much
else, this seems to have become commonplace in the 1960s. More recently
I have become aware of similar sentiments being expressed in
mathematics. In that case one can see that some particular proof, say,
might be preferred to another, purely on grounds of economy or clarity
or conciseness. However, in the case of physics, one might expect that a
comparison of a theory’s predictions with experimental results should be
the deciding factor.

There is an old adage in engineering design to the effect that `if it
looks right, then it is right’. Obviously, there are constraints on this
in that your design for a motor car must look as if it is capable of
being a motor car. This latter point is an instance of the precept `form
follows function’ which originated in architectural design in the early
part of the last century. But the adage refers to quality, and is
supposedly a way of separating a good design from other designs that are
merely adequate. So the implication is that a purely aesthetic judgement
can lead to a design that satisfies various, perhaps quantitative,
criteria which give a universal meaning to the term `good design’ in
some particular context.

Of course the insertion of the word `probably’ into the engineering
adage might lead to its justification in practice. That is, if it looks
right then it `probably’ is right. So the adage could offer a guide as
to whether or not one should take a particular design idea further. For
this to work there must exist some consensus on what is meant by `looks
right’. And this undoubtedly changes with time. A motor car which was at
the leading edge of design in the 1960s will look distinctly
old-fashioned nowadays.

But there is always some unease about using a personal value judgement
to determine a matter which will ultimately be settled on a
quantitative basis. And there are other complications too, even when the
quantitative aspect is not present, as for example in the arts. An awful
warning may be found in the well known crisis in painting at the end of
the nineteenth century. This was triggered by the invention of
photography, which in turn led to artists becoming experimental in order
to avoid producing paintings which were no more than (in effect)
photographs. Such attempts were reviled and even the formation of
schools of activity (e.g. Fauves, impressionists) did not at first lead
to acceptance.

Unfortunately the fact that impressionist paintings are now highly
valued appears to have led to the pendulum swinging too far in the other
direction of uncritical acceptance. Even so, those who are specialists
in the world of art, literature or music can argue that their
`informed’ eye or ear gives their opinion a special weight. And no
doubt that is a tempting argument in science too. Indeed, in the case of
string theory or the idea of the multiverse, where testing against
experiment is impossible, it is arguable that aesthetic criteria may be
all that one has. But, if consensus develops, this can then lead to the
creation of schools of opinion and standard models, which in turn can have the
perverse effect of shutting down other approaches to the problem. This
is not the case in the arts. Indeed, the non-specialist can say `I know
what I like’, and there is an end to it. One does not have that freedom
in science. Or at least, not if one expects to get published in the learned
journals.

Therefore, it does seem that there are dangers from importing purely
personal aesthetic considerations into science. It is interesting to
note that the greatest physicist of all had some words to say on this
particular subject. In the preface to his 1916 book, entitled
`Relativity’, Einstein stated that he had followed the precepts of that
other great theoretical physicist, Boltzmann, `… according to whom,
matters of elegance ought to be left to the tailor and to the cobbler’.




My list of jobs to do from 17 November 2009.

Vacation post No 2. I will be out of the virtual office until Monday 31 August.
Recently I was tidying up some papers and I came across this list from 2009. At that time I had just entered my fourth year of retirement (now in my fourteenth!) and these were the things I wanted to do. In fact, other jobs took priority and nothing on the following list was ever done!

1. LET: evaluate the Kolmogorov pre-factor as a function of Reynolds number. Does it asymptote?
2. DNS: `Kolmogorov exponent’ as a function of Reynolds number. (In fact the inverted commas were because this was shorthand for measuring the power-law exponent in the inertial range of wavenumbers and seeing whether it asymptotes to $-5/3$. I would also add the pre-factor to this, as in the LET case above.)
3. Calculate LET with the de facto vertex renormalization of omitting modes from the convolution sum: test for universality of the cut-off wavenumber ratios. (Method due to Kadomtsev: see Leslie’s book.)
4. Do the same with DNS.
5. Make a systematic examination of the dependence on initial conditions for both DNS and LET.
6. Use DNS to investigate the vorticity transfer corresponding to the filtered, partitioned energy transfers $T^{--}$, $T^{-+}$, $T^{+-}$, and $T^{++}$.
7. Use stirring forces which are not `white noise’ to test effect of initial conditions.

Some of these ideas were prompted by the fact that I was studying the variation of the dimensionless dissipation as a function of Reynolds numbers at the time. This only required quite small Reynolds numbers and it was easy to map out the dependence. Our first paper reporting this work was rejected by one of the referees because he had a simulation which could go to much bigger Re, and so our work couldn’t be any good. Fortunately this idiosyncratic view did not prevail.

Seriously, though, I think that the turbulence community as a whole has been influenced by the need to get to large Re in order to resolve questions about universal behaviour, and it is perhaps time to build up a better understanding of the basic physics of turbulence by looking at the low-Re behaviour. Point 6 is relevant to large-eddy simulation, renormalization group and the scale-invariance paradox.

Are there any bright young people out there with access to a code and a computer who would like to take on any of these things? If so, just get in touch and I’ll be happy to advise you.




Can mathematicians solve problems in physics?

Vacation post No 1. I will be out of the virtual office until Monday 31 August.
When I used to lecture final-year undergraduates in mathematical physics, there were often quite a few mathematicians attending and I would sometimes tease them by pointing out that mathematicians try to prove the ergodic theorem whereas physicists don’t need to. We know it must be true! This was always taken in good part, but it wasn’t really a joke, because I believe it to be literally true. Progress in physics from earliest times has proceeded from experimental observation, which is then codified in mathematical theory. When a new observation arises and does not agree with the existing theory, then so much the worse for the theory. We have to devise a new and better one. (I believe the Hegelian position is the exact opposite of this: so much the worse for the observation!)

The only exception to this that I know of is the work of the great Paul Dirac, who actually started his working life as an electrical engineer and only later qualified in mathematics. He tackled the problem of deducing a relativistic form of the Schrödinger equation by purely mathematical methods and ended up predicting the existence of antimatter. Nice one Paul!

If one is going to have an exception, what an exception to have. The only thing that I can think of which might be comparable is the work of Emmy Noether. Her theorem, that every continuous symmetry of a physical system implies a corresponding conservation law, underpins the whole of fundamental theoretical physics. And of course much mathematical work has gone into the development of modern formulations from the original observation-based forms, such as Newton’s laws of motion. However, I don’t know enough about Noether’s theorem to be sure about whether or not it also represents a significant exception. I still intend to rectify this, although I have been intending to do so for many years.

As regards the relevance of my original question to turbulence, I can come up with a specific example in a related field. A few years before I retired, I had some discussions with a mathematician about problems in soft (condensed) matter. This arose in a social way, in that one of my colleagues had attended a party in the maths department and got talking to a young mathematician who bemoaned the fact that he had no one to discuss his work with. My colleague knew that I had published something in this area [1] and suggested that we make contact. As a result we had a number of discussions (and some games of badminton!) and it was clear that we were poles apart in the way we looked at things. Nevertheless, one specific point emerged. He had reservations about the (at that time) famous KPZ equation for nonlinear deposition. On purely mathematical grounds (something to do with simultaneously working with generalized functions and Fourier transforms, I think) he had concluded that the KPZ equation was mathematically unsound and needed a counter-term to be added to deal with this. Accordingly he was quite surprised to find that my co-author and I had already come to this conclusion on purely physical grounds and that we had identified the requisite term to be added [1].
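For reference, the KPZ equation for the height $h(\mathbf{x},t)$ of a growing interface is usually written as \[ \frac{\partial h}{\partial t} = \nu \nabla^{2} h + \frac{\lambda}{2}\left(\nabla h\right)^{2} + \eta(\mathbf{x},t), \] where $\nu$ is a smoothing (surface tension) coefficient, $\lambda$ measures the strength of the nonlinearity, and $\eta$ is a Gaussian white-noise forcing. It is the combination of the nonlinear term $(\nabla h)^2$ with the noise that gives rise to the sort of mathematical difficulty referred to above.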

It seems to me that modern theoretical physics is dominated by this sort of purely mathematical approach, which may in fact be sterile without a new physical hypothesis of the kind that physicists can actually recognise as such. In the rather humbler discipline of turbulence theory, I note many papers which seem to be predicated on the assumption that one must take account of singularities. I believe this activity may actually be harmful, as well as unnecessary, because it makes people unsure about things. For example, when a referee insists that I qualify some statement about taking a limit or making an expansion with the phrase `provided that no singularity occurs’, I feel that I am being forced to make use of the mathematician’s comfort blanket. Frankly, I would rather rely on the physicist’s comfort blanket, which is based on the interlocking physical picture which in turn is based primarily on observation. Just bear it in mind: we physicists know that the ergodic theorem holds.

[1] W. D. McComb and R. V. R. Pandya. Hidden symmetry in a conservative equation for nonlinear growth. J. Phys. A: Math. Gen., 29:L629, 1996.




Should turbulence researchers dare to be dull?

I recently read a book review in The Times which was headed `Scientists must dare to be dull’. Well, that was attention grabbing, because most of the general population probably think that we already are. The author of the review then went further in a subheading: `We should listen to this warning about how neophilia and hype is ruining research.’ Now that does sound a bit exaggerated; and he seeks to make his case by quoting examples from Science Fictions: Exposing Fraud, Bias, Negligence and Hype in Science by Stuart Ritchie.

Now I’m not sure if `neophilia’ is a neologism or not (my spell-checker doesn’t seem to like it), but clearly it is intended to mean `love of the new’. And this, along with `hype’, has been a feature of academic research since the early 1980s. Before that, academic research was a gentlemanly pursuit which, in theory, academics were expected to carry out. However, when I took up my lectureship at Edinburgh in 1971, the teaching and administration were divided up equally, and once these chores were out of the way, one was free to do some research or some other activity. Alternative activities pursued by certain colleagues ranged from collecting antiques, through small-boat sailing, to (and this was rather extreme) the case of one colleague who seemed to be turning himself into a market gardener in his spare time.

This all changed around the early 1980s, with the introduction of research assessment exercises, in which the government turned a beady eye on the research output of academics, presumably to divert attention from its own inadequacies. From then on, everything had to be newer, bigger and more `hype worthy’. Then of course, in time, research had to have impact! But we shall say no more about that. Instead let us turn to what the effect of this has been on research in turbulence.

We should begin by observing that turbulence, like all the rest of fluid dynamics, is dominated by research on practical problems. So my observations, as always, concern the relatively small amount of fundamental work; and even here there has for a long time been an excessive concentration on newness. Given that the problems we still need to solve are really quite old, a concentration on newness seems likely to be counter-productive. My own experience over the years has been of one particular referee who invariably says of my manuscript `there is nothing very new here’ and then turns it down!

To be more specific, I would say that direct numerical simulation of the equations of motion to represent isotropic turbulence is the most obvious example of the desire for the new, where in this case the desirable `new’ is a higher Reynolds number. This undoubtedly leads to a feeling of competition, with the achievement of a large Reynolds number seen as an end in itself. I believe this to be detrimental to scholarship, particularly when other desirable features of the DNS may have been sacrificed in order to achieve it.

A particular example of this arose in 2010 when we submitted a short paper in which we showed that the so-called Taylor dissipation surrogate was more likely a surrogate for the inertial transfer [1]. This was based on theoretical arguments and on some simulations of freely decaying turbulence, for various Reynolds numbers up to about $R_{\lambda}\simeq 60$, which showed the onset of asymptotic behaviour. One referee was favourable but the other recommended rejection on the grounds that our simulation was very much smaller than his. This seems to have echoes of the behaviour of small boys in the school playground, but it has nothing to do with scholarship. Fortunately the editor was easily persuaded of this fact, and the paper was published.

A coda to this story is that we developed our simulations over the next few years, and also introduced a theory based on an asymptotic expansion in inverse powers of the Reynolds number, which was exact in the limit of infinite Reynolds numbers. For Reynolds numbers up to $R_{\lambda}\simeq 435$ in forced turbulence, we were able to verify our predicted $1/R$ decay law and measure the asymptotic value of the normalised dissipation rate as: $C_{\varepsilon,\infty} = 0.468 \pm 0.006$. Apart from supporting our results at lower Reynolds numbers, this work drew attention to the fact that certain high-Reynolds simulations merely provide a few outlier points on our systematic treatment of the subject [2]. How much better if they had started with low values of the Reynolds number and worked up!
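Schematically, the quantity in question is the dimensionless dissipation rate $C_\varepsilon = \varepsilon L/u^3$, where $\varepsilon$ is the dissipation rate, $L$ the integral length scale and $u$ the rms velocity; and the asymptotic expansion takes the form \[ C_\varepsilon(R) = C_{\varepsilon,\infty} + \frac{C}{R} + O\left(R^{-2}\right), \] where $R$ is a Reynolds number based on the integral scale, so that the $1/R$ law is the leading finite-Reynolds-number correction to the limiting value $C_{\varepsilon,\infty}$. The precise definitions may be found in [2].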

Turbulence is essentially an asymptotic phenomenon, a fact that was realised by early workers in the subject who measured mean velocity profiles in duct flows (and indeed other shear flows) for huge ranges of Reynolds numbers, and clearly demonstrated its asymptotic behaviour. This is what we need today. Turbulence theory is like a jigsaw, in which not only are many pieces missing, but many of those we have are unclear. In effect, we’re not quite sure which part of the picture they represent. In my view, what is needed is a big collaboration to carry out simulations which we can all access and have our questions answered. But the simulation is the easy part of that: I believe that there are databases for high-Re simulations, but what about all the low Reynolds numbers which would allow us to move up an asymptotic curve and actually see what is going on?

The author of the above book review sees the need for `boring, plodding research that merely provides a sound basis for the continued progress of the Enlightenment’. I don’t buy that description, and presumably he is being ironic, but I do accept that that is what we need. In the case of turbulence, we would also need a sea change to more open-mindedness on the part of many members of the community of researchers. I don’t think that is going to happen any time soon.

[1] W. David McComb, Arjun Berera, Matthew Salewski, and Sam R. Yoffe. Taylor’s (1935) dissipation surrogate reinterpreted. Phys. Fluids, 22:61704, 2010.
[2] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.




The modified Lin equation.


In my post of 27 February I discussed the importance of being aware of the full form of the Lin equation as this reveals the existence of a cascade in wavenumber space. In this post I want to take this a bit further, using my resolution of the scale-invariance paradox [1].

For me this topic first arose during a meeting in 1991 at MSRI, Berkeley. When I had finished my talk, Bob Kraichnan came up to me with a copy of my recently published book and pointed out Figure 2.5, which was a plot of the terms in the Lin equation for freely decaying turbulence. He commented on the fact that the transfer spectrum $T(k)$ was shown as zero for an extended range of values of $k$. He remarked that people used to think that was the case, because it would be expected from the scale-invariance of the flux, but that in practice it was never observed: there was always a single zero-crossing. I was able to reassure him that the figure was based on a computation of the LET theory; that there had been an error which had since been rectified; and that the revised figure would show a single zero-crossing and would appear in the paperback edition of the book to be published later that year.

However, I was left with a nagging feeling that there was an unresolved problem with this result. The first measurements of $T(k)$ had been published by Uberoi [2] in 1963, and this author had said that the single zero-crossing was probably due to the low Reynolds number and indicated that he would expect $T(k)=0$ over an extended range of $k$ to develop with increasing Reynolds number. Although this does not seem to have been a matter of widespread concern, over the 1970s/80s/90s various ad hoc methods were used to cope with this behaviour in numerical calculations: for some references to this work, see [1]. As a matter of interest, I include both versions of Figure 2.5 below.

Figure 2.5 from Physics of Fluid Turbulence 1990

Figure 2.5 from Physics of Fluid Turbulence 1991

The Lin equation (see reference [3]) takes the form: \begin{equation} \left( \frac{d}{dt} + 2 \nu k^2 \right) E(k,t) = T(k,t)\label{enbalt}\end{equation} where $E(k,t)$ is the energy spectrum, $T(k,t)$ is the energy transfer spectrum and $\nu$ is the kinematic viscosity. Now let us integrate each term of (\ref{enbalt}) with respect to wavenumber, from zero up to some arbitrarily chosen wavenumber $\kappa$: \begin{equation} \frac{d}{dt}\int_{0}^{\kappa} dk\, E(k,t) = \int^{\kappa}_{0} dk\, T(k,t)-2 \nu\int_{0}^{\kappa} dk\, k^2 E(k,t). \label{fluxbalt1} \end{equation} The energy transfer spectrum may be written as \begin{equation} T(k,t) = \int^{\infty}_{0} dj\, S(k,j;t), \label{ts}\end{equation} where, as is well known, $S(k,j;t)$ can be expressed in terms of the triple moment. Its antisymmetry under interchange of $k$ and $j$ guarantees energy conservation in the form: \begin{equation}\int^{\infty}_{0} dk\, T(k,t) =0. \label{encon} \end{equation}

With some use of the antisymmetry of $S$, along with equation (\ref{encon}), equation (\ref{fluxbalt1}) may be written as \begin{equation}\frac{d}{dt}\int_{0}^{\kappa} dk\, E(k,t) = - \int^{\infty}_{\kappa} dk\,\int^{\kappa}_{0} dj\, S(k,j;t)-2 \nu\int_{0}^{\kappa} dk\, k^2 E(k,t).\label{fluxbalt2}\end{equation} The integral of the transfer term is readily interpreted as the net flux of energy from wavenumbers less than $\kappa$ to those greater than $\kappa$, at any time $t$.

It is convenient to introduce a specific symbol $\Pi$ for this energy flux, thus: \begin{equation}\Pi (\kappa,t) = \int^{\infty}_{\kappa} dk\, T(k,t) =-\int^{\kappa}_{0} dk\,T(k,t),\label{tp}\end{equation} where the second equality follows from (\ref{encon}).

The key to resolving the paradox is to introduce transfer spectra which have been filtered with respect to $k$ and which have had their integration over $j$ partitioned at the filter cut-off, i.e. $j=k_c$ [1],[4]. Beginning with the Heaviside unit step function, defined by:
\begin{eqnarray} H(x) & = & 1 \qquad \mbox{for} \qquad x > 0; \\& = & 0 \qquad \mbox{for} \qquad x < 0,\end{eqnarray} we may define low-pass and high-pass filter functions, thus: \begin{equation}\theta^{-}(x) = 1 - H(x),\end{equation} and \begin{equation} \theta^{+}(x) = H(x). \end{equation} We may then decompose the transfer spectrum, as given by (\ref{ts}), into four constituent parts, \begin{equation}T^{--}(k|k_{c}) = \theta^{-}(k-k_{c})\int^{k_{c}}_{0}dj\, S(k,j); \label{tmm}\end{equation} \begin{equation} T^{-+}(k|k_{c}) = \theta^{-}(k-k_{c})\int^{\infty}_{k_{c}}dj\, S(k,j); \label{tmp}\end{equation} \begin{equation} T^{+-}(k|k_{c}) = \theta^{+}(k-k_{c})\int^{k_{c}}_{0}dj\, S(k,j); \label{tpm} \end{equation} and \begin{equation}T^{++}(k|k_{c}) = \theta^{+}(k-k_{c})\int^{\infty}_{k_{c}}dj\, S(k,j),\label{tpp} \end{equation} such that the overall requirement of energy conservation is satisfied: \begin{equation} \int^{\infty}_{0}dk\left[T^{--}(k|k_{c}) + T^{-+}(k|k_{c}) + T^{+-}(k|k_{c}) + T^{++}(k|k_{c})\right] = 0. \end{equation} It is readily verified that the individual filtered/partitioned transfer spectra have the following properties: \begin{equation} \int^{k_{c}}_{0}dk\, T^{--}(k|k_{c}) = 0; \label{mm} \end{equation} \begin{equation} \int^{k_{c}}_{0}dk\, T^{-+}(k|k_{c}) = -\Pi(k_{c});\label{mp} \end{equation} \begin{equation}\int^{\infty}_{k_{c}}dk\, T^{+-}(k|k_{c}) = \Pi(k_{c}); \label{pm} \end{equation} and \begin{equation} \int^{\infty}_{k_{c}}dk\, T^{++}(k|k_{c}) = 0. \label{pp} \end{equation} Equation (\ref{fluxbalt1}) may be rewritten in terms of the filtered/partitioned transfer spectrum as: \begin{equation} \frac{d}{dt}\int^{k_{c}}_{0}dk\, E(k,t) = -\int^{\infty}_{k_{c}}dk\, T^{+-}(k|k_{c}) -2\nu\int^{k_{c}}_{0}dk\, k^{2}E(k,t). \label{fluxbaltmod} \end{equation} We note from equation (\ref{mm}) that $T^{--}(k|k_c)$ is conservative on the interval $[0,k_c]$, and hence does not appear in (\ref{fluxbaltmod}), while $T^{-+}(k|k_{c})$ has been replaced by $-T^{+-}(k|k_{c})$, using (\ref{mp}) and (\ref{pm}). Those working with DNS or analytical theory can avoid the paradox by changing their definition of energy fluxes, from those given by (\ref{tp}), to the forms: \begin{equation} \Pi (\kappa,t) = \int^{\infty}_{\kappa} dk\, T^{+-}(k|\kappa,t) =-\int^{\kappa}_{0} dk\, T^{-+}(k|\kappa,t),\label{tpmod} \end{equation} where $T^{+-}(k|\kappa,t)$ is defined by (\ref{tpm}) and $T^{-+}(k|\kappa,t)$ by (\ref{tmp}). This is equivalent to (\ref{tp}); but, unlike it, avoids the paradox.
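As a check on the bookkeeping, here is a minimal numerical sketch in Python/NumPy. It uses an arbitrary antisymmetric model for $S(k,j)$ (chosen purely for illustration; it is not the Navier-Stokes triple moment) and verifies the properties (\ref{mm}) to (\ref{pp}), together with overall energy conservation, on a discrete wavenumber grid.

```python
import numpy as np

# Discrete wavenumber grids for k and j (the upper limit of 10 is arbitrary).
k = np.linspace(0.0, 10.0, 2001)
dk = k[1] - k[0]

# An illustrative antisymmetric model, S(k,j) = g(k)h(j) - g(j)h(k).
# It is NOT the Navier-Stokes triple moment; it simply has the right
# antisymmetry, so that the identities above can be checked numerically.
g = k**2 * np.exp(-k)
h = k**4 * np.exp(-2.0 * k)
S = np.outer(g, h) - np.outer(h, g)      # S[i, j] plays the role of S(k_i, k_j)

kc = 2.5                                  # filter cut-off wavenumber
lo = k <= kc                              # theta^-(k - k_c) as a boolean mask
hi = ~lo                                  # theta^+(k - k_c)

T = S.sum(axis=1) * dk                    # T(k) = int dj S(k,j)
Pi = T[hi].sum() * dk                     # Pi(k_c) = int_{k_c}^infty dk T(k)

# The four filtered/partitioned transfer spectra.
T_mm = np.where(lo, S[:, lo].sum(axis=1) * dk, 0.0)
T_mp = np.where(lo, S[:, hi].sum(axis=1) * dk, 0.0)
T_pm = np.where(hi, S[:, lo].sum(axis=1) * dk, 0.0)
T_pp = np.where(hi, S[:, hi].sum(axis=1) * dk, 0.0)

print("int_0^infty  T   dk :", T.sum() * dk)      # ~ 0  (energy conservation)
print("int_0^kc     T-- dk :", T_mm.sum() * dk)   # ~ 0
print("int_0^kc     T-+ dk :", T_mp.sum() * dk)   # ~ -Pi(k_c)
print("int_kc^infty T+- dk :", T_pm.sum() * dk)   # ~ +Pi(k_c)
print("int_kc^infty T++ dk :", T_pp.sum() * dk)   # ~ 0
print("Pi(k_c)             :", Pi)
```

Because $S$ is antisymmetric, the $--$ and $++$ integrals vanish identically on the grid, while the $-+$ and $+-$ integrals reproduce $\mp\Pi(k_c)$, exactly as in the identities above.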

This behaviour is illustrated in the figure below, where we should note that $T^{-+}(k|\kappa)$ is defined below the cut-off wavenumber $\kappa = k_{c}$, and $-T^{+-}(k|\kappa)$ is defined above it.

Modified form of transfer spectrum to avoid the scale-invariance paradox.

This raises the question of how exactly the Lin equation should be written, in order to emphasise these properties. That will be the subject of a paper which is now in preparation [5]. It is worth making the point that the filtered-partitioned forms of the transfer spectrum have only been studied in the context of the subgrid modelling problem [4]. Given the much more powerful computers now available, it would undoubtedly be rewarding to study the role of these terms in the energy balance for a range of Reynolds numbers. I very much hope that someone will do this.

Acknowledgement: the above figure was suggested by John Morgan, who also prepared it.

[1] David McComb. Scale-invariance in three-dimensional turbulence: a paradox and its resolution. J. Phys. A: Math. Theor., 41:75501, 2008.
[2] M. S. Uberoi. Energy transfer in isotropic turbulence. Phys. Fluids, 6:1048, 1963.
[3] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[4] W. D. McComb and A. J. Young. Explicit-Scales Projections of the Partitioned Nonlinear Term in Direct Numerical Simulation of the Navier-Stokes Equation. In Proc. 2nd Monte Verita Colloquium on Fundamental Problematic Issues in Turbulence: available at arXiv:physics/9806029 v1, 1998.
[5] W. D. McComb. A modified Lin equation for the energy balance in isotropic turbulence. arXiv:2007.13622 [physics.flu-dyn], 2020.



Local Energy Transfer (LET): a curate’s egg theory?

The LET theory began well as a modification to the Edwards theory [1,2], which was a single-time theory, and then underwent a rather heuristic extension to two-time form to become in effect a modification of Kraichnan’s DIA theory [3]. It was successfully computed for freely decaying turbulence in subsequent years and in one of these papers its derivation was put on a better footing [4]. This work was later formalised [5], and more recently the theory has been formally derived by applying the Edwards self-consistent field method to the full two-time pdf [6]. As the resulting set of equations for the two-time correlation and response functions is a fully Eulerian theory which gives good results, both quantitative and qualitative, I thought there might be some interest in a simple outline of the twists and turns in its evolution!

In 1966, when I began my postgraduate studies, the problem with both the Edwards theory and DIA was that they were incompatible with the observed $k^{-5/3}$ energy spectrum. It was 1974 before I saw that what was wrong with the Edwards theory (and by extension DIA) was that the inertial transfer spectrum (usually denoted by $T(k)$ in the notation of the Lin equation) was divided into two parts: a diffusive term, and a dissipative term which was proportional to the amount of energy in mode $k$. Now this is a form which crops up elsewhere in physics, for example in the Boltzmann equation, the Fermi master equation, and the Fokker-Planck equation, so it must have seemed quite natural. However, the first measurements of $T(k)$ were reported in 1963, and after that it became obvious that the entire term $T(k)$ was either input or output, depending on the value of the labelling wavenumber $k$. This was what I finally managed to see in 1974, and so I proposed that the turbulent response in the Edwards case was determined by a local (in wavenumber) energy balance involving the whole of $T(k)$ [1,2].
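Schematically, and without attempting to reproduce the detailed coefficients of the Edwards theory, the structure being described here is \[ T(k) \sim S(k) - \omega(k)E(k), \] where $S(k)$ stands for a diffusive input to mode $k$ from all the other modes and $\omega(k)$ is a renormalized decay rate, so that the second term is a loss proportional to the amount of energy in mode $k$.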

Extending this idea to Kraichnan’s two-time theory presented a far from trivial problem. My intuitive feeling was that the idea of determining the system response in terms of the relationship between stirring forces and the resulting velocity field should be abandoned and instead I decided to base my approach on the introduction of a velocity field propagator. I argued that in perturbation theory we would have at zero order a relationship: \begin{equation} u^0(k,t) = R^0(k,t-s) u^0(k,s). \end{equation} Note that this is in an updated notation, with $R$ standing for response function, and that it is simplified with tensor indices being omitted, and we have assumed stationarity. Corresponding to some renormalization of the perturbation series I then proposed the introduction of an exact propagator $R$, such that: \begin{equation} u(k,t) = R(k,t-s) u(k,s). \end{equation} This allowed me to derive equations for the correlation function $C(k;t-t’)$ and the response function $R(k;t-t’)$. These were identical to those of Kraichnan’s DIA apart from the presence of an additional term in the response equation. This additional term had, of course, the crucial effect of making the response equation compatible with the $-5/3$ spectrum.

When the paper was submitted for publication it ran into trouble with the referees. One of them was worried by the fact that sometimes $R$ was treated as if statistically sharp and at others as if it were not. I couldn’t understand that, but I added a footnote to say that the response function was statistically sharp. The other referee conceded that LET should do better than DIA at high Reynolds number, but reckoned that DIA would be better at low Reynolds numbers and so publication should await numerical calculations! I was quite fascinated by this report. It put me in mind of the comedy routine of early films where some luckless person tries to pack an overfull suitcase. He pushes in a shirt collar at one corner and snaps the lid closed, only to notice that a tie is peeping out at another corner. So he struggles to push that in, again snaps the suitcase closed only to see that a sock is sticking out at another corner. And so it goes on. Perhaps that was `the packing a suitcase’ method of assessing a theory?

A few years later, we published the numerical calculations and it turned out that the LET was actually better than DIA at all Reynolds numbers. It also turned out that DIA was not as bad at high Reynolds numbers as had been expected. The referees for the paper were Jack Herring and Bob Kraichnan, and I remember Batchelor telling me that I had `stirred them up quite a bit’ and that they would like to contact me directly. I recall that we had some very interesting and amicable discussions by letter: email was still in its infancy!

Equation (2) is open to some serious criticism and we should now consider what is wrong with it. Essentially it implies a fixed phase relationship between two realisations of the velocity field at different times, when there is no reason to suppose that such a relationship can exist in a mixing system like fluid turbulence. Another way to look at this is to rewrite (2) such that $R$ is defined as the ratio of the two velocities, and we immediately see that we should have $\hat{R}$: a random variable. Now to replace $\hat{R}$ by $R$ would be a mean-field approximation (there is an equivalent step in the derivation of DIA) but that can only be done in the context of some averaging operation. This was introduced in [4] where the basic hypothesis underlying LET was taken to be: \begin{equation} C(k;t,t’) = R(k;t,t’)C(k;t’,t’) \, \mbox{for} \, t’\leq t. \end{equation} Equation (3) is just the fluctuation relaxation relationship (FRR) which has been derived in dynamical systems theory for systems with a Gaussian initial distribution. Incidentally, the fluctuation dissipation theorem is a special case of the FRR which applies to small fluctuations about equilibrium in microscopic systems.

The FRR applied to turbulence has now been derived by a self-consistent method in which the base distribution is Gaussian at all times [6]. This reference gives a review of the topic as well as that derivation. It should perhaps be noted that the zero-order Gaussian pdf in this theory is an approximation to the exact pdf which is chosen to give the correct value of the covariance. It should be distinguished from the zero-order pdf which is obtained from the viscous response function applied to Gaussian stirring forces.

To sum up, equation (2) is a bad equation which yet provides a heuristic derivation of a useful set of equations: the LET theory. I think that it is analogous to a `bad proof’ as discussed in my post of 19th March 2020. Hence, LET was a curate’s egg theory. I think that it might now be described as just a theory.

[1] W. D. McComb. A local energy transfer theory of isotropic turbulence. J.Phys.A, 7(5):632, 1974.
[2] W. D. McComb. The inertial range spectrum from a local energy transfer theory of isotropic turbulence. J.Phys.A, 9:179, 1976.
[3] W. D. McComb. A theory of time dependent, isotropic turbulence. J.Phys.A:Math.Gen., 11(3):613, 1978.
[4] W. D. McComb, M. J. Filipiak, and V. Shanmugasundaram. Rederivation and further assessment of the LET theory of isotropic turbulence, as applied to passive scalar convection. J. Fluid Mech., 245:279-300, 1992.
[5] K. Kiyani and W. D. McComb. Time-ordered fluctuation-dissipation relation for incompressible isotropic turbulence. Physical Review E, 70:66303-66304, 2004.
[6] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.




Peer review: the role of the editor.

In 1985 I published a paper in JFM on laser-Doppler measurements in drag-reducing fibre suspensions. This was the only paper on experimental work that I published in that journal and the refereeing process was not without interest. There was the usual iteration process and Referees A and B were fine, but Referee C was something else. His comments had a curious, slightly hysterical tinge, I felt. For instance, `Something is very far wrong here.’ and `Conservation of energy is being violated here.’ And others like that. Each attempt I made to reassure him simply made matters worse. I should just mention in parenthesis that when you get a referee like this, they are impossible to reassure or satisfy. Editors need to be alive to this fact and in this case George Batchelor eventually said something to the effect of `I’m afraid that C is being rather too suspicious and so I am going to disregard his reports.’ In my view this was a perfect example of a good editor in action. He had ample evidence from A and B that the paper should be published and he took responsibility for having made an unlucky choice in C.

Some years later I was again having a paper reviewed by JFM and once again Referees A and B were fine, but this time C objected to the fact that the LET theory was being applied to isotropic turbulence. He said `there is far too much of this sort of work going on’ and `the real problems are shear flows’. In response I argued that this work was physics and that, in comparison to condensed matter physics or particle physics, the amount of work on isotropic turbulence is very small and we really need a great deal more. Again, in parenthesis, this remains my opinion. Referee C responded by recommending rejection, and this time the editor (not Batchelor!) said `well clearly C is an idiot and I’m going to ignore him’.

Actually this is all beginning to sound like it belongs in the story by the Canadian humourist Stephen Leacock, `A, B and C: the human element in mathematics’, in which he discusses problems in arithmetic of the type: `A, B and C are employed to dig a ditch. A can dig twice as fast as B and B can dig twice as fast as C, etc.’ In his short story Leacock speculates about the three individuals and their interactions. He concludes that C always gets the dirty end of the stick and is a weak, undersized individual who dies young. Poor C!

Let us therefore turn to a bimodal form of refereeing, as practised by the Physical Review. As I mentioned in my post of 25 June, when writing my book on HIT I found out that the coefficient $E_2$ in the Taylor expansion of the energy spectrum was identically zero. To my astonishment this appeared to be a new result, particularly in view of the ongoing controversy over `Saffman invariance vs Loitsianskii invariance’. After getting it independently checked, I wrote it up and submitted it to PRE. At the risk of spoiling the suspense, I should say that it was ultimately accepted for publication [1]. Nevertheless, the refereeing process had some remarkable features and raises some questions of interest.

First, Referees 1 and 2 replied. Referee 1 was positive and 2 was not. In fact their report was an incoherent rant which I found impossible to understand. I could manage to pick out phrases which I recognized as being points that are made about grid turbulence, but I was unable to discern anything relating to my paper. Moreover the entire report was in bold italic font, rather giving the impression of being what the police used to call `a green ink letter’.

So the Editor commissioned reports 3 and 4, one of which was favourable and the other was not. And then the Editor commissioned reports 5 and 6, one of which was favourable and the other was not. There was also a new development in that Referee 6 dragged in a recent disagreement between two different sets of investigators.

At this stage the Editor decided to reject my manuscript. This seemed to me to be `box ticking’ of the worst kind. Three for and three against, so let’s be on the safe side and reject it! Unlike in the two cases discussed above with JFM, there was no attempt to make a judgement of the relative quality of the referee reports. Naturally, I did not accept this. There followed a so-called arbitration, which was no such thing, and which I had no difficulty in shooting down. Then the Editor proposed a compromise. If I would add some material relating to the disagreement that Referee 6 had instanced, he would send it back to that referee. However, despite my adding material relating to that disagreement, Referee 6 did not change his extremely hostile attitude and recommended rejection. This time the Editor did what he should have done sooner and ignored this referee’s unbalanced report.

I should say that when I say Editor, I mean one of the associate editors of PRE at that time. Also, as PRE doesn’t come well out of this, I should mention a case where they did, and where (refreshingly!) the villains were not members of the turbulence community. I will keep this brief because I think this topic merits a post to itself. Basically I had done an analysis which showed that Galilean invariance did not suppress vertex renormalization in the NSE or similar equations which were of interest in soft condensed matter. Now unfortunately there was a substantial body of work in soft matter which relied very heavily on the supposition that it did, and not surprisingly my manuscript got a hostile reception. Any favourable reports were lukewarm (`might be of mild interest’) and the Editor turned the MS down.

I wrote to the Editor to say that I accepted his decision but wanted to point something out. If I was wrong, then not only were the `soft matter’ theorists better off as a result, but so also would I be, in that my LET theory would automatically be correct to fourth- rather than third-order in renormalized perturbation theory! The Editor suggested that I formally appeal against his decision; I did, and the arbitration was very much in my favour [2].

All four of these examples worked out satisfactorily, in my view, in that papers which should have been published were published. But they have worked out in different ways. In particular, there is the question of whether the editor should pay attention to the quality of the reports. Let us bear in mind that editors are perhaps more reluctant to offend referees than authors. Also, when a number of referees are positive, can that be cancelled out by a similar number being negative? I welcome comments on my posts and would particularly welcome comments on these particular points.

[1] W. D. McComb. Infrared properties of the energy spectrum in freely decaying isotropic turbulence. Phys. Rev. E, 93:013103, 2016.
[2] W. D. McComb. Galilean invariance and vertex renormalization. Phys. Rev. E, 71:37301, 2005.




Further thoughts on free decay of isotropic turbulence.

In the previous post I discussed the initial value problem posed by the free decay of the energy in isotropic turbulence, along with things that we ought to bear in mind when considering its experimental or DNS realisations. We should also mention the more general problem of the free decay of two-point covariances (or spectra) as that merits a few words in the context of both DNS and the study of two-point statistical closures. However, before considering it, we should first consider an outstanding question about the simpler case: at what stage is the turbulence to be considered as evolved?

The question arises because the initial state of the turbulence is not actually a solution (or, more accurately, derived from a solution) of the Navier-Stokes equation. For the purely mathematical problem, we may indeed assume that the initial field corresponds to isotropic turbulence. But for grid turbulence, the wakes that form behind the bars of the grid are expected to coalesce into a three-dimensional turbulent field, which dies away with downstream distance. This stationary stream-wise decay has to be converted to decay with time by invoking Taylor’s hypothesis, but the crucial question is: at what distance downstream can the turbulence be said to be evolved?
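For definiteness, the conversion is the usual one: if $U$ is the mean velocity of the stream and $x$ the distance downstream of the grid, then the stationary streamwise decay is reinterpreted as a decay in time through \[ t = x/U, \] so that measurements made at successive downstream stations stand in for measurements at successive decay times.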

The same question must arise with DNS, where we specify an initial spectrum on a lattice. Such initial spectra are arbitrarily chosen to have suitable properties. In particular, they are chosen to be peaked at low values of wavenumber, so that the evolution of turbulence can be seen as the spectrum not only decreases in magnitude, but also spreads out, as time goes on. So once again we wish to know at what time the spectrum will be representative of turbulence, rather than the initial conditions.

Probably most investigations into this topic have been concerned with establishing whether or not the decay follows a power law; and, if so, what that power law is. In fact some researchers cite the onset of power-law behaviour as indicating that the turbulence is well developed. Yet there is at least one situation where the need for a definite criterion matters, and this is the study of the dimensionless dissipation in terms of its dependence on the Reynolds number. This is known to follow a characteristic curve in which it asymptotes to a constant value with increasing Reynolds number.

Now, for stationary turbulence, the existence of a unique curve is unambiguous on both experimental (i.e. DNS) and theoretical grounds, but for free decay there is a fair amount of scatter between the various investigations. When we began working on this problem at Edinburgh some years ago, we were surprised to find that most researchers seemed rather vague about the stage of the decay process at which their measurements were taken. It seemed to us that this was likely to prove crucial. An investigation would consist of carrying out a free decay simulation at a particular Reynolds number; then repeating it for a higher Reynolds number, and so on. Then the problem at any Reynolds number is to choose a decay time at which to take measurements, one that corresponds in some sense to `the same stage’ at other Reynolds numbers. This is not a trivial problem and we decided to look into it in detail [1].

When a turbulence simulation is started from an arbitrary initial velocity field with a Gaussian distribution, both the inertial transfer and the skewness grow from zero, pass through a peak and then decay. In contrast, the dissipation rate starts with a finite value and either decays (low Reynolds numbers, say $R_\lambda(0) \leq 25$) or rises to a peak and then decays (higher Reynolds numbers). The existence of a peak offers the possibility of a well-defined criterion which would allow the results of one investigation to be compared with another. We plotted graphs of the dimensionless dissipation $C_\varepsilon (t_e)$ against Reynolds number for various choices of evolved time $t_e$ (see Fig. 13 of [1]) and found that the resulting behaviour depended strongly on the choice made. For instance, choosing $t_e$ to be based on either the peak skewness or the peak inertial transfer led to the curve tending to zero. As there is no peak dissipation for low Reynolds numbers (and the variation of $C_\varepsilon$ is predominantly a low Reynolds number phenomenon), this appeared to rule peak dissipation out as a criterion. However, we found that a composite criterion, based on peak transfer at low Reynolds numbers and on peak dissipation at larger Reynolds numbers, where a peak existed, gave very interesting results, with the dimensionless dissipation curve being very like the stationary forced case, and tending to a value of about $0.5$.
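For anyone who wants to experiment with this, a minimal sketch of the composite criterion might look like the following (Python; the function and variable names, and the simple peak detection, are merely illustrative). The dimensionless dissipation is computed from the standard definition $C_\varepsilon = \varepsilon L/u^3$, with $L$ the integral length scale and $u$ the rms velocity.

```python
import numpy as np

def evolved_time(t, dissipation, transfer):
    """Sketch of the composite onset criterion described above: take t_e as
    the time of peak dissipation when the dissipation rises to an interior
    peak (higher Reynolds numbers); otherwise fall back on the time of peak
    inertial transfer (low Reynolds numbers, monotonically decaying
    dissipation)."""
    i_eps = int(np.argmax(dissipation))
    if 0 < i_eps < len(t) - 1:             # an interior dissipation peak exists
        return t[i_eps]
    return t[int(np.argmax(transfer))]     # low-Re case: use peak transfer

def dimensionless_dissipation(eps, u_rms, L_int):
    """C_eps = eps * L / u^3, i.e. Taylor's dissipation surrogate expressed
    as a dimensionless coefficient."""
    return eps * L_int / u_rms**3

# Hypothetical usage, given time series from a single free-decay run:
# t_e = evolved_time(t, eps_of_t, transfer_of_t)
# i_e = int(np.searchsorted(t, t_e))
# C_eps = dimensionless_dissipation(eps_of_t[i_e], u_of_t[i_e], L_of_t[i_e])
```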

I do not claim that these results are prescriptive or definitive in any way, although they are certainly quite plausible. But I hope they will encourage others to investigate further. If this is not done, studies of decaying turbulence will remain a hodge-podge in which variations between investigations are probably often due to a failure to compare like with like.

Lastly, in my previous post I said that at an early stage in my career I resolved to stay clear of the problem of free decay. To avoid any appearance of inconsistency I should point out that this resolution was limited to the theoretical problem of predicting the decay rate of the energy. In the late 1970s we began studying the LET theory applied to the problem of free decay of two-point, two-time statistics. This work was reported in 1984 [2], and involved a detailed comparison with DIA, using the same initial spectra and computational methods as previously used by Kraichnan. This allowed `like for like’ comparisons in great detail, which was still the case when the comparisons were extended to DNS in later years. So the onset problem did not, as such, arise.

[1] S. R. Yoffe and W. D. McComb. Onset criteria for freely decaying turbulence. Phys. Rev. Fluids, 3:104605, 2018.
[2] W. D. McComb and V. Shanmugasundaram. Numerical calculations of decaying isotropic turbulence using the LET theory. J. Fluid Mech., 143:95-123, 1984.




Free decay of isotropic turbulence as a test problem.


When I began my postgraduate research in 1966, I quickly decided that there was one problem that I would never work on. That was the free decay of the kinetic energy of turbulence from some initial value. Mind you, as the subject of my postgraduate research was the turbulence closure problem, there didn’t seem to be any danger of my being asked to do so.

This particular free decay problem, as widely discussed in the literature, can, if one likes, be regarded as a reduced form of the general closure problem. Instead of trying to calculate the two-point correlation (or, equivalently, the energy spectrum), one is simply trying to calculate the decay curve with time of the total energy. This involves making various assumptions about the nature of the decay process and the most crucial seemed to be that a certain integral was constant with respect to time during the decay: this was generally referred to as the Loitsyansky invariant.

We can introduce this by considering the behaviour of the energy spectrum at small values of the wavenumber $k$. This can be written as a Taylor polynomial \[E(k,t) = E_2(t)k^2 + E_4(t)k^4 + \dots .\] Here the coefficient $E_4(t)$, when Fourier transformed to real space, is known as the Loitsyansky integral, and in general it depends on time. It seemed that this was indeed invariant during decay for the case of isotropic turbulence but it had been shown that this was not necessarily the case for turbulence that was merely homogeneous. The problem was that a correlation of the velocity with the pressure, which is suppressed by symmetry in the isotropic case, existed in the more general case. The difficulty here is that the pressure can be expressed as an integral over the velocity field and so the correlation $\langle u p \rangle$ is long-range in nature, and this invalidates the proof of invariance of $E_4$ which works for the isotropic case.

So far so good. What puzzled me at the time was that this failure in the more general case somehow seemed to contaminate the isotropic case. People working in this field seemed unwilling to rely on the invariance of $E_4$ even for isotropic turbulence. However, with the accretion of knowledge over the years (I’d like to claim wisdom as well, but that might be too big a stretch!), I believe that I understand their concerns. At the time, the only practical application of the theory was to grid turbulence; and although this was reckoned to be a good approximation to being isotropic, it might not be perfect; and it might vary to some extent from one experimental apparatus to another. And just to add to the confusion, at about that time (although I didn’t know it) Saffman published a theory of grid turbulence in which $E_2(t)$ was an invariant. This led to a controversy, based on $E_2$ versus $E_4$, which is with us to this day.

In more recent years, I have had to weaken my position on this matter, because my students have found it interesting to do free-decay calculations, in order to compare our simulations with those of others. So when I was preparing my recent book on HIT, I decided it would provide a good reason to really look into this topic. As part of this work, I was checking various results and to my astonishment, when I worked out $E_2$ I found that it was exactly zero. This work has been published and includes a new proof of the invariance of $E_4$ which is based on conservation of energy [1]. In passing, I should note that the refereeing process for this paper was something that I found educational and I will refer to that in future posts when I get onto the subject of peer review.

Shortly after I published this work, a paper on grid turbulence appeared and it seemed that their results suggested that $E_2$ was non-zero. I sent a copy of [1] to the author and he replied `evidently grid turbulence is less isotropic than we thought’. This struck me as a crucial point. If we are to make progress and have meaningful discussions on this topic, we need to recognise that free decay of isotropic turbulence and grid turbulence are two different problems. In fact, as things have moved on from the mid-sixties, we also have to consider DNS of free decay as being in principle a different problem. Let us now examine the three problems in turn, as follows:

1. Free decay of the turbulent kinetic energy is a mathematical problem which can be formulated precisely for homogeneous isotropic turbulence.

2. Grid-generated turbulence evolves out of an ensemble of wakes and is stationary with time and inhomogeneous in the streamwise direction. In order to make comparisons with free decay, it is necessary to invoke Taylor’s hypothesis of frozen convection.

3. DNS of freely decaying turbulence is based on the Navier-Stokes equations discretised on a lattice. Quite apart from the errors involved (analogous to experimental error in the grid-turbulence case), representation on a lattice is symmetry breaking for all continuous symmetries. The two principal ones in this case are Galilean invariance and isotropy.

Essentially, these are three different problems, and if we wish to make comparisons we have to at least bear that fact in mind. I have lost count of the many heated arguments that I have heard or taken part in over the years which ran along the lines: A says `The sky is blue!’ and B replies: `Oh no, I assure you that grass is green!’ In other words, they are not talking about the same thing. That may seem rather extreme, but suppose that one of them is relying on momentum conservation and the other on energy conservation. Such a waste of time and energy (and momentum, for that matter).

[1] W. D. McComb. Infrared properties of the energy spectrum in freely decaying isotropic turbulence. Phys. Rev. E, 93:013103, 2016.




Stationary isotropic turbulence as a test problem.

Stationary isotropic turbulence as a test problem.

When I was first publishing, in the early 1970s, referees would often say something like `the author uses the turbulence in a box concept’ before going on to reveal a degree of incomprehension about what I might be doing, let alone what I actually was doing. A few years later, when direct numerical simulation (DNS) had got under way, that phrase might have had some significance; and indeed its use is now common, albeit qualified by the word `periodic’. Of course, when Fourier methods were introduced by Taylor in the 1930s, it was in the form of Fourier series. But by the 1960s it was becoming usual among theorists to briefly introduce Fourier series and then take the infinite system limit and turn them into Fourier transforms: or, increasingly, just to formulate the problem straightaway in the infinite system. However, it can be worth one’s while starting with the finite cubic box of side L, and thinking in terms of the basic physics, as well as the Fourier methods.

In order to represent the velocity field in terms of Fourier series, we introduce the wavevector \[\mathbf{k}=(2\pi/L)\{n_1,n_2,n_3\},\] where the integers $n_1,n_2,n_3$ all lie in the range from $-\infty$ to $\infty$. Fourier sums are taken over the discrete values of $\mathbf{k}$. Then the transition to the continuous, infinite system is made by taking the limit of infinite system size, such that \[\lim_{L\rightarrow \infty}\left(\frac{2\pi}{L}\right)^3\sum_{\mathbf{k}} = \int d^3k.\] As ever in physics, we assume that everything is well-behaved; and that both the field variables and their transforms exist, being independent of system size as we go to this limit.
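
This limit is just the familiar statement that a sum over a lattice of wavevectors, weighted by the cell volume $(2\pi/L)^3$, becomes an integral as the lattice spacing shrinks. As a toy illustration (my own sketch, using a Gaussian summand purely because its integral over all of $k$-space is known exactly; nothing here is specific to turbulence), one can check it numerically:

import numpy as np

def lattice_sum(L, N):
    # (2*pi/L)^3 times the sum of exp(-k^2) over the lattice k = (2*pi/L)*(n1, n2, n3)
    dk = 2 * np.pi / L
    n = np.arange(-N, N + 1) * dk
    kx, ky, kz = np.meshgrid(n, n, n, indexing="ij")
    return dk**3 * np.exp(-(kx**2 + ky**2 + kz**2)).sum()

exact = np.pi**1.5  # the integral of exp(-k^2) over all of k-space
for L in (5.0, 10.0, 20.0):
    N = int(5 * L / (2 * np.pi)) + 1  # keep |k_i| up to about 5, beyond which the summand is negligible
    print(f"L = {L:5.1f}: weighted sum = {lattice_sum(L, N):.6f}, integral = {exact:.6f}")

The weighted sum approaches the integral as $L$ increases, which is precisely the content of the limit written above.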

We do not have to restrict these ideas to the Fourier representation. They are generally true when we make the transition from classical mechanics to continuum mechanics. To do this, we begin with a finite system and replace discrete objects by densities. A continuous (or field) representation is introduced by defining continuous densities in the limit of infinite system size. All physical observables must be expressed in terms of densities or rates. They cannot depend on the size of the system, otherwise we would be unable to take the continuum limit. So, if we formulate turbulence in real space in terms of structure functions in a box, then theoretical expressions for the structure functions (or equivalently, the moments) must not depend on the size of the box. This provides us with a basic first test for any theory; and to our knowledge there have been some surprising failures to recognise this. We will come back to two specific examples presently. First we will look at the general question of how to test theories.

Now, stationary isotropic turbulence can be rigorously formulated as a mathematical problem, where `rigour’ is taken to be in the sense of theoretical physics, but it does not occur in nature or indeed in the laboratory. It is true that it may occur to a reasonable approximation in geophysical and astronomical flows, but at the moment it seems that DNS might be our best bet for testing mathematical theories of isotropic turbulence. So it behoves us to examine the question: how representative is DNS of the mathematical problem that we are studying?

Well, of course DNS has been an active field of research for several decades now and this aspect has not been neglected. Nevertheless, one is left with the impression that it is very much a pragmatic activity, governed by `rule of thumb’ methods. For instance, when we began DNS at Edinburgh in the 1990s, I asked around for advice on the maximum value of the wavenumber that we should use, as this seemed to vary from less than the Kolmogorov dissipation wavenumber to very much greater. The consensus of advice that I received was to choose $k_{max} = 1.5 k_d$, and this is what we did. Later on, in 2001, we demonstrated a rational procedure for choosing $k_{max}$: see Figure 2 of reference [1] or Figure 1.6 of reference [2]. One conclusion that emerges from this is that to resolve the dissipation rate might mean devoting one’s entire simulation to the dissipation range of wavenumbers!
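
To give a feel for the numbers (this is not the procedure of reference [1]; it is just a back-of-the-envelope sketch assuming a model spectrum of Pao type, $E(k)=\alpha\varepsilon^{2/3}k^{-5/3}\exp[-(3\alpha/2)(k/k_d)^{4/3}]$ with $\alpha\simeq1.7$), one can ask what fraction of the dissipation integral is captured below a given $k_{max}$:

import numpy as np

def trap(y, x):
    # simple trapezoidal rule (avoids NumPy version differences over trapz/trapezoid)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

alpha, eps, nu = 1.7, 1.0, 1.0e-4
kd = (eps / nu**3) ** 0.25                      # Kolmogorov dissipation wavenumber
k = np.linspace(1.0e-3, 10.0 * kd, 400_000)
E = alpha * eps**(2.0/3.0) * k**(-5.0/3.0) * np.exp(-1.5 * alpha * (k / kd)**(4.0/3.0))
D = 2.0 * nu * k**2 * E                         # dissipation spectrum for the model
total = trap(D, k)                              # close to eps for this model spectrum
for c in (1.0, 1.5, 2.0):
    mask = k <= c * kd
    frac = trap(D[mask], k[mask]) / total
    print(f"k_max = {c:.1f} k_d captures {100.0 * frac:.1f}% of the dissipation")

With this assumed model spectrum, the traditional choice $k_{max}=1.5k_d$ captures roughly 99 per cent of the dissipation; but note that this says nothing about how well the largest scales are resolved, which is a separate issue.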

In recent years there seems to have been more emphasis on resolving the largest scales of the turbulence, although much of this work has been for the case of free decay. But concerns remain, particularly in terms of experimental error. It is also necessary to note a fundamental problem. The mere fact of representing the continuum NSE on a discrete lattice is symmetry breaking for Galilean invariance and isotropy, to name but two. I’m not sure how one can take this into account, except by considering a transition towards the continuum limit and looking for asymptotic behaviour. This could involve starting with a `fully resolved’ simulation and looking at increasingly finer mesh sizes. To say the least, this would be very expensive in terms of computer storage and run time. Naturally, workers in the field always want the highest possible Reynolds number. But, if you begin with low Reynolds numbers, it is cheap and easy to do, and you can learn something from the variation of observables with Reynolds number. There exist some well-known simulations that have employed vast resources to achieve enormous Reynolds numbers and yet provide only a few spot values, without any error bars and with no indication of asymptotic behaviour; I can understand the suspicions about how well resolved they are. An awful warning to us all!

Lastly, two more awful warnings. First, as we discussed in the previous post, Kraichnan’s asymptotic solution of DIA depends on the largest scale of the system. That in itself is enough to rule it out as unphysical, whether one accepts Kolmogorov (1941) or not. However, as I pointed out, our computations at Edinburgh do not support this asymptotic form, which was obtained analytically using approximations that Kraichnan found plausible. A critical examination of that analysis is in my opinion long overdue.

Secondly, we have the Kolmogorov (1962) form of the energy spectrum, which also depends on the largest scale of the system. Probably few people now take this work seriously, but its baleful presence influences the turbulence community and lends credence to the increasingly unrealistic idea of intermittency corrections. In fact it has recently been shown that the inclusion of the largest scale destroys the widely observed scaling on Kolmogorov variables [3]. This should have been obvious, without any need to plot the graphs!

[1] W. D. McComb, A. Hunter, and C. Johnston. Conditional mode-elimination and the subgrid-modelling problem for isotropic turbulence. Phys. Fluids, 13:2030, 2001.

[2] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.

[3] W. D. McComb and M. Q. May. The effect of Kolmogorov (1962) scaling on the universality of turbulence energy spectra. arXiv:1812.09174 [physics.flu-dyn], 2018.




Asymptotic behaviour of the Direct Interaction Approximation.

Asymptotic behaviour of the Direct Interaction Approximation.
As mentioned previously, Kraichnan’s asymptotic solution of the DIA, for high Reynolds numbers and large wavenumbers, did not agree with the observed asymptotic behaviour of turbulence. His expression for the spectrum was $E(k)=C’\varepsilon^{1/2}U^{1/2}k^{-3/2}$, where $U$ is the root-mean-square velocity and $C’$ is a constant. In 1964 (see [1] for the reference) he wrote: `Recent experimental evidence gives strong support to [the Kolmogorov `-5/3’ form] and rules out [the `-3/2’ form above] as a correct asymptotic law.’

However, Kraichnan’s result is not actually an asymptotic form. The rms velocity $U$ is in fact part of the solution, not of the initial conditions. We may underline this by writing $U= [\int_0^\infty E(k')\,dk']^{1/2}$, which allows us to rewrite the Kraichnan result as $E(k)=C’ \varepsilon^{1/2}[\int_0^\infty E(k')\,dk']^{1/4}\, k^{-3/2}$. So, far from being an asymptotic solution, this appears to be a form of transcendental equation for the energy spectrum.

Now you may object that the dissipation rate is also part of the solution, rather than of the initial conditions, and hence this is also a criticism of the Kolmogorov form. But this is not so. The dissipation only appears because it is equal to the inertial transfer rate. From the simple physics of the inertial range in wavenumber space, the appropriate quantity is the maximum value of the inertial flux of energy through modes, which we will denote by $\varepsilon_T$. Hence the Kolmogorov form should really be $E(k) \sim \varepsilon_T^{2/3}k^{-5/3}$. Of course Kolmogorov worked in real space and derived the `2/3’ law. But in 1941 Obukhov recognised that in wavenumber space the relevant quantity was the scale-invariant energy flux, as did Onsager a few years later.
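
For definiteness (this is the standard convention, not anything special to the present discussion), the flux is defined from the transfer spectrum $T(k,t)$ of the spectral energy balance by \[\Pi(k,t)=\int_k^\infty T(k',t)\,dk' = -\int_0^k T(k',t)\,dk',\] and $\varepsilon_T$ is its maximum value over $k$. For stationary turbulence, with the stirring confined to the lowest wavenumbers, this maximum is equal to the dissipation rate, which is why $\varepsilon$ appears in the Kolmogorov form at all.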

A way of putting the Kraichnan result in a more asymptotic form was given by McComb and Yoffe [1], who made use of the asymptotic Taylor surrogate for the dissipation rate, $\varepsilon = C_{\varepsilon,\infty} U^3/L$, where $L$ is the integral length scale and $C_{\varepsilon,\infty} = 0.468 \pm 0.006$ [2], to substitute for $U$ in the Kraichnan spectrum, and obtained: $E(k) = C’C_{\varepsilon,\infty}^{-1/6}\varepsilon^{2/3}L^{\beta}k^{-5/3 + \beta}$, where $\beta = 1/6$. Note that we have changed $\mu$ in that reference to $\beta$, in order to avoid any confusion with the so-called intermittency correction, which is normally represented by that symbol.
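
Spelling out the substitution (this is just the algebra implied by the two formulas above, nothing more): from $\varepsilon = C_{\varepsilon,\infty}U^3/L$ we have $U=(\varepsilon L/C_{\varepsilon,\infty})^{1/3}$, and hence \[E(k)=C’\varepsilon^{1/2}U^{1/2}k^{-3/2} = C’\,C_{\varepsilon,\infty}^{-1/6}\,\varepsilon^{1/2+1/6}L^{1/6}k^{-3/2} = C’\,C_{\varepsilon,\infty}^{-1/6}\,\varepsilon^{2/3}L^{1/6}k^{-5/3+1/6},\] which makes explicit both the dependence on the system scale $L$ and the way in which the exponent $-3/2$ can be read as $-5/3+\beta$ with $\beta = 1/6$.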

Kraichnan only computed the Eulerian DIA for free decay at low Reynolds numbers. However, in 1989 McComb, Shanmugasundaram and Hutchinson [3] reported calculations for free decay of both DIA and LET for Taylor-Reynolds numbers in the range $0.5 \leq R_{\lambda}(t_f ) \leq 1009$ where $t_f$ is the final time of the computation. These results do not support the asymptotic form of the DIA energy spectrum, as given above. It was found that (for example) at $ R_{\lambda} ( t_f) = 533$, the two theories were virtually indistinguishable and both gave the Kolmogorov spectrum to within the accuracy of the numerical methods. It was shown that this result was not an artefact of the initial conditions, by taking $k^{-3/2}$ as the initial spectrum, whereupon it was found that both theories evolved away from this form to once again give $k^{-5/3}$ as the final spectrum.

There is much that remains to be understood about Eulerian turbulence theories and the behaviour of two-time correlations.

[1] W. D. McComb and S. R. Yoffe. A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence. J. Phys. A: Math. Theor., 50:375501, 2017.
[2] W. D. McComb, A. Berera, S. R. Yoffe, and M. F. Linkmann. Energy transfer and dissipation in forced isotropic turbulence. Phys. Rev. E, 91:043013, 2015.
[3] W. D. McComb, V. Shanmugasundaram, and P. Hutchinson. Velocity derivative skewness and two-time velocity correlations of isotropic turbulence as predicted by the LET theory. J. Fluid Mech., 208:91, 1989.




A brief summary of two-point renormalized perturbation theories.

A brief summary of two-point renormalized perturbation theories.
In the previous post we discussed the introduction of Kraichnan’s DIA, based on a combination of a mean-field assumption and a new kind of perturbation theory, and how it was supported by Wyld’s formalism, itself based on a conventional perturbation expansion of the NSE. This was not too surprising, as Kraichnan’s mean-field assumption involved his infinitesimal response function, which the Wyld comparison showed was the same as the viscous response function, and hence not a random variable. By 1961 it was known that the asymptotic solution of DIA was incorrect, with implications for the Wyld formalism (and, later on, for the MSR formalism: see the previous post).

The next step forward was the theory of Edwards [1] in 1964, which was restricted to the single-time covariance and to the stationary case. This took as its starting point the Liouville equation for $P$, the probability distribution functional of the velocity field, and went beyond the mean-field case to calculate corrections to it self-consistently. That is, Edwards made the substitution $P\equiv P_0 + (P - P_0)$ and then expanded in powers of the correction term $\Delta P = P - P_0$. Then, taking $P_0$ to be Gaussian, and exploiting the symmetries of the system, Edwards gave a highly intuitive treatment of the problem, in which he drew strongly on an analogy with the theory of Brownian motion. It turned out that the resulting theory was closely related to the DIA and, like it, did not agree with the Kolmogorov spectrum.

The following year Herring [2], using formal methods of many-body theory, produced a self-consistent field theory which was much more abstract than the Edwards one, but yielded the same energy equation. Then, in 1966 he generalised this theory to the two-time case [3]. All three theories [1-3] led to the same energy equation as DIA, but all differed in the form of the response equation.

Now, it is in the introduction of the response equation that the renormalization takes place, and it is in the form of the response equation that the deviation from Kolmogorov lies, so the differences between these response equations raise fundamental questions about all of these theories. Various interpretations were offered at the time, but these were all phenomenological in character. It was much later that a uniform, fundamental diagnosis was offered, and I will come on to that presently. But this was the situation when I began post-graduate research with Sam Edwards in October 1966. The exciting developments of the previous decade seemed to be leading to a dead end, and my first task was to choose the response function of the Edwards theory in a new way, such that it maximised the turbulent entropy [4].

On the basis of the Edwards analysis, his theory had failed under the extreme circumstances of an infinite Reynolds number limit, in which the input was modelled by a delta-function at the origin in $k$-space and the dissipation was represented by a delta-function at $k=\infty$. Edwards argued that under these circumstances the Kolmogorov spectrum would apply at all wavenumbers, and in his original theory this led to an infra-red divergence in the integral for the response function. (Note: Kraichnan used the scale-invariance of the inertial flux $\Pi$ as his criterion for the inertial range, but the two methods are mathematically equivalent.) The `maximum entropy’ theory [4] certainly achieved the result of eliminating the infra-red divergence, but that was about as much as one could say for it. It became clearer to me later that it was not a very sound approach.

It is a truism in statistical physics that a system is dominated by either entropy or energy. If we consider a system made up of many microscopic magnets on a lattice, then the entropy will determine the distribution. However, if we switch on a powerful external magnetic field, all the little magnets will line up with it and (small fluctuations aside) entropy has no say in the matter! It is just like that in turbulence. The system is dominated by a symmetry-breaking current of energy through the modes, running from small to large wavenumbers, where it is dissipated by viscosity. There is no real reason to assume that the entropy determines the turbulence response.

When I was in my first post-doctoral job, I gave a talk to some theorists. I explained my early ideas on how energy transfer might determine the turbulence response. They heard me out politely, and then I made the mistake of mentioning the maximum entropy work. Immediately they became enthusiastic. ‘Tell us about that’, they said. The impression they gave was ‘now that’s a real theory!’ I was in awe of them as they were much older and more experienced than me, and talked so authoritatively about all aspects of theoretical physics. Nevertheless, this was my first inkling of conventional thinking. The implication seemed to be: it was a text-book method, so it must be good.

Over the next few years I developed the local energy transfer (LET) theory [5, 6], and also offered a unified explanation of the failure of first-generation renormalized perturbation theories. The further extension of this work to the two-time case has had a rather chequered history and will be the subject of further posts.

[1] S. F. Edwards. The statistical dynamics of homogeneous turbulence. J. Fluid Mech., 18:239, 1964.
[2] J. R. Herring. Self-consistent field approach to turbulence theory. Phys. Fluids, 8:2219, 1965.
[3] J. R. Herring. Self-consistent field approach to nonstationary turbulence. Phys. Fluids, 9:2106, 1966.
[4] S. F. Edwards and W. D. McComb. Statistical mechanics far from equilibrium. J. Phys. A, 2:157, 1969.
[5] W. D. McComb. A local energy transfer theory of isotropic turbulence. J. Phys. A, 7(5):632, 1974.
[6] W. D. McComb. The inertial range spectrum from a local energy transfer theory of isotropic turbulence. J. Phys. A, 9:179, 1976.




Theories versus formalisms.

Theories versus formalisms.
After the catastrophe of quasi-normality, the modern era of turbulence theory began in the late 1950s, with a series of papers by Kraichnan in the Physical Review, culminating in the formal presentation of his direct-interaction approximation (DIA) in JFM in 1959 [1].

The next step was the paper by Wyld [2], which set out a formal treatment of the turbulence problem based on, and very much in the language of, quantum field theory. Wyld carried out a conventional perturbation theory, based on the viscous response of a fluid to a random stirring force. He showed how simple diagrams could be used with combinatorics to generate all the terms in an infinite series for the two-point correlation function. He also showed that terms could be classified by the topological properties of their corresponding diagrams. In this way, he found that one class of terms could be summed exactly and that another could be re-expressed in terms of partially summed series, thus introducing the idea of renormalization. In other words, the exact correlation could be expressed as an expansion in terms of itself and a renormalized response function (or propagator). In a sense, this could be regarded as a general solution of the problem, but obviously one that by itself does not provide a tractable theory. In short, it is a formalism.

As an aside, I should just mention that Wyld’s paper was evidently very much written for theoretical physicists. That is no reason why any competent applied mathematician shouldn’t follow it, but one suspects that few did. Also, the work has been subject to a degree of criticism: the current version may be found as the improved Wyld-Lee theory in #8 of the list of My Recent Papers on this website. But this does not affect anything I will say here and I will return to this topic in a future blog.

In contrast, Kraichnan began by introducing the infinitesimal response function $\hat{G}$, which connected an infinitesimal change in the stirring forces to an infinitesimal change in the velocity field. He made this the basis of what he claimed was an unconventional (superior?) perturbation theory, making use of ideas like weak dependence, maximal randomness, and direct interaction. Unfortunately these ideas did not attract general agreement, and I suspect that he found the refereeing process with JFM, and the subsequent experience of the Marseille Conference (see the previous blog), rather bruising. Apparently he said: `The optimism of British applied mathematicians is unbounded.’ Then, after a pause: `From below.’ I was told this by Sam Edwards when I was a postgraduate student. Sam obviously appreciated the interplay of wit and cynicism.

Now, in completing his theory, Kraichnan made the substitution $\hat{G}= G \equiv \langle \hat{G} \rangle$, which is in effect a mean-field approximation. So it is important to note that, when the conventional perturbation formalism of Wyld is truncated at second-order in the renormalized expansion, the equations of Kraichnan’s DIA are recovered. This is important because it suggests that this particular mean-field approximation is in fact justified. However, we know that Kraichnan came to the conclusion that his theory was wrong, at least in terms of its asymptotic behaviour at high Reynolds numbers: see the previous blog.

This has the immediate implication that Wyld’s formalism is also wrong when truncated at second order; and the same is true of the later functional formalism of Martin, Siggia and Rose [3]. Kraichnan came to the conclusion that his DIA approach should be carried out in a mixed Eulerian-Lagrangian coordinate system; and, if correct, that would presumably also apply to the two formalisms. However, there is also the question of whether or not it is appropriate to treat the system response as one would in dynamical systems theory. After all, the stirring forces in a fluid first have to create the system, and only then do they maintain it against the dissipative effects of viscosity. We will return to this aspect in future blogs.
[1] R. H. Kraichnan. The structure of isotropic turbulence at very high Reynolds numbers. J. Fluid Mech., 5:497-543, 1959.
[2] H. W. Wyld Jr. Formulation of the theory of turbulence in an incompressible fluid. Ann. Phys., 14:143, 1961.
[3] P. C. Martin, E. D. Siggia, and H. A. Rose. Statistical Dynamics of Classical Systems. Phys. Rev. A, 8(1):423-437, 1973.




Marseille (1961): a paradoxical outcome.

Marseille (1961): a paradoxical outcome.
When I was first at Edinburgh, in the early 1970s, a number of samizdat-like documents, of entirely mysterious provenance, were being passed around. One that came my way was a paper by Lumley which contained some rather interesting ideas for treating the problem of turbulent diffusion. I expect that it is still in my filing system; but, with the Covid-19 lockdown, I am cut off from my university office and unable to refresh my memory. Later on I encountered the paper by Proudman which criticised Kraichnan’s theory of turbulence – the Direct-Interaction Approximation – and by that time I presumably had heard about the meeting held in Marseille in 1961. Of course my ignorance is not all that surprising, in that the meeting, which was the source of these papers, took place five years before I began my postgraduate research. In any case, I must have known about it by the late 1980s, as these papers are correctly referenced in my 1990 book on the physics of turbulence.

An interesting and informal account of this meeting is given by Moffatt in his review [1], which is essentially an appreciation of the life and work of G. K. Batchelor, and accordingly the meeting is seen, as it were, through this prism. Having told the story of how Batchelor discovered the work of Kolmogorov, while searching through the literature of turbulence in the library of the Cambridge Philosophical Society; and how he had expanded the short and rather cryptic papers of Kolmogorov into what was to become a seminal work on the subject [2], Moffatt sees the Marseille meeting as a ‘watershed’ in the study of turbulence. In support of this, he highlights two contributions to the meeting.

First, there is the report by Stewart of experimental measurements of energy spectra carried out in the channel between Vancouver Island and the mainland. This investigation achieved values of the Taylor-Reynolds number up to about 3000, and several decades of power-law behaviour, which appeared to support the Kolmogorov $-5/3$ spectrum. This work was published the following year [3].

Secondly, there was a lecture by Kolmogorov, also published in the following year [4], in which he outlined a refinement (sic) of his 1941 theory in response to a criticism by Landau. His conclusion was that the power of $-5/3$ should be subject to a small correction $\mu$; but he was unable to obtain a value for $\mu$.
There is an element of contradiction here, but that could possibly be resolved quite trivially if one were to find out that the two agreed within experimental error. So that in itself is not a paradox. The paradox that I have in mind arises in a different way.

Moffatt discusses the fact that Batchelor essentially gave up turbulence as his main research interest after this meeting. His argument appears to be that Batchelor was already becoming discouraged by the difficulties of the subject. And, given that a major part of his own research had been the interpretation and dissemination of the Kolmogorov (1941) theory, he may have found that Kolmogorov’s lecture at this meeting came as the last straw!

Another possibility, which Moffatt doesn’t mention, is that Batchelor may have found the new wave of theoretical physics approaches, as initiated by Kraichnan, not only complicated but also part of an alien culture, to the extent that this too was discouraging. I have a personal note that I can add here. I only met Batchelor once, in 1967, when he examined my Master’s thesis. At one point he had some difficulty with the units, where I was giving a quantum physics analogy, and I pointed out that there would be a Planck’s constant involved, but that I was working in units where Planck’s constant was unity. At another stage he pointed out that he was, at the risk of being accused of cynicism, no more optimistic about these new quantum-inspired approaches than about anything else. And that was with Sam Edwards, who had published a theory of turbulence in JFM three years earlier, also in the room! I am quite sure that forty (or more) years on, there would be many in turbulence research who would eagerly say that he had proved to be right. But following one’s prejudices, rather than engaging with a subject, is the abnegation of scholarship. Sometimes the truth lies deep.

However, another major discouragement took place at this meeting. Kraichnan was predicting an inertial-range spectrum with an exponent of $-3/2$. Even if the results of Grant et al. [3] were compatible with a small correction to $5/3$, they were certainly good enough to convincingly rule out Kraichnan’s rival $3/2$ exponent. As a result, Kraichnan had to look at his theory again, and over a period of several years he became convinced that the problem was insoluble in Eulerian coordinates, and that there was a need to change to a mixed coordinate system which he called Lagrangian-History coordinates. The result was an immensely complicated theory, which not only had to be abridged in order to permit computation, but also depended on the way in which the theory was formulated. This has left a legacy of other workers who employ a more conventional Lagrangian system.

This, then, is the paradox that I had in mind. The outcome of the meeting, put in very broad brush terms, is that Batchelor changed his mind because Kolmogorov (1941) was wrong and Kraichnan changed his mind because it was correct. It cannot be said that progress in turbulence is ever smooth.
[1] H. K. Moffatt. G. K. Batchelor and the Homogenization of Turbulence. Ann. Rev. Fluid Mech., 34:19-35, 2002.
[2] G. K. Batchelor. Kolmogorov’s theory of locally isotropic turbulence. Proc. Camb. Philos. Soc., 43:533, 1947.
[3] H. L. Grant, R. W. Stewart, and A. Moilliet. Turbulence spectra from a tidal channel. J. Fluid Mech., 12:241-268, 1962.
[4] A. N. Kolmogorov. A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number. J. Fluid Mech., 13:82-85, 1962.




Which Navier-Stokes equation do you use?

Which Navier-Stokes equation do you use?

In the first half of 1999, a major turbulence programme was held at the Isaac Newton Institute in Cambridge. On those days when there were no lectures or seminars during the morning, a large group of us used to meet for coffee and discussions. In my view these discussions were easily the most enjoyable aspect of the programme. On one particular morning, as a prelude to making some point, I said that I was probably unusual in that I have taught the derivation of the Navier-Stokes equation (NSE) as continuum mechanics to engineering students and by statistical mechanics to physicists and mathematicians. The general reaction was that I was not merely unusual, but surely unique! I gathered, from comments made, that everyone present saw the NSE as part of continuum mechanics.

Of course the two forms of NSE are apparently identical, otherwise one could not refer to both as the Navier-Stokes equation. Nevertheless, when one comes to consider the infinite Reynolds number limit, it is necessary to become rather more particular. We can start doing this by stating the two forms, as follows.

First, the continuum-mechanical NSE is exact for a continuous fluid which shows Newtonian behaviour under all circumstances of interest.

Secondly, the statistical-mechanical NSE is the first approximation to the exact statistical-mechanical equations of motion. So in principle it should be followed by a statement to the effect that there are higher-order terms.

Now strictly, if we want to consider cases where the continuum approximation breaks down, we should be using the second of these forms. Batchelor argued that in the limit of zero viscosity (at constant dissipation rate) the dissipation would be concentrated at infinity in wavenumber space. Edwards [1] went further and represented this dissipation by a delta-function at $k=\infty$ and matched it with a delta-function input of energy at $k=0$. In this way he could obtain an infinitely long inertial range and assume that the $-5/3$ spectrum applied everywhere, as a test of his closure approximation.
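
In symbols (a schematic of the limiting process, rather than a quotation from [1]): with an input spectrum $W(k)=\varepsilon\,\delta(k)$ and the dissipation represented, in the limit, by a sink of strength $\varepsilon$ at $k=\infty$, the stationary energy balance forces the energy flux to be scale-invariant, $\Pi(k)=\varepsilon$ for all finite $k>0$, so that the $-5/3$ form can be assumed to hold at every wavenumber.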

The Edwards procedure is valid, because he was applying it to a closure of the (in effect) continuum-mechanical NSE, as indeed is everyone else who discusses behaviour at large Reynolds numbers or, for that matter, statistical closures. But the question of the validity of this model arises when people consider the breakdown of the NSE. This actually requires some consideration of the basic physics, which in this case means statistical mechanics; and essentially it boils down to the following: the general requirement for the continuum limit to be valid is that the smallest length-scale of the fluid motion should be much larger than the mean free path of the fluid’s molecules.

The only example of this being looked at quantitatively, that I know of, may be found in Section 1.3 of the book by Leslie [2]. He considered flow in a pipe at a Reynolds number of $10^6$, with a pipe diameter of $10mm =10^{-2}m$, which he described as an extreme case. In Section 2.8 of his book, he calculates the minimum eddy size to be greater than $10^{-4}mm =10^{-7}m$. He notes that for a liquid the mean free path is of the order of the atomic dimensions and thus about $10^{-10}m$ and hence the use of a continuum form is very well justified. He further comments: ‘It [the continuum limit] is also satisfied, although not by such a comfortable margin, by any gas dense enough to produce a Reynolds number of $10^6$ in a passage only $10mm$ in diameter.’
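
As a rough consistency check (order-of-magnitude only, and my own estimate rather than Leslie’s calculation): taking the pipe diameter $d$ as the outer scale, the Kolmogorov microscale is of order \[\eta \sim d\,Re^{-3/4} = 10^{-2}\,\mathrm{m}\times(10^6)^{-3/4}\approx 3\times10^{-7}\,\mathrm{m},\] which is consistent with the figure quoted above and still some three orders of magnitude larger than the molecular mean free path of a liquid.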

I think that it would be a good idea if those who discuss cases where a theory based on the Navier-Stokes equation is supposed to break down actually put in some numbers to indicate where their revised theory would be applicable and the NSE wouldn’t. Or perhaps it might be salutary to consider in detail the variation of significant quantities with increasing Reynolds number and identify the smooth development of asymptotic behaviour. I will return to this point in future posts.

Anyone who would like an introductory discussion of the derivation of macroscopic balance equations from statistical mechanics should consult Section 7.6 of my book Study notes for statistical physics, which may be downloaded free of charge from Bookboon.com.

[1] S. F. Edwards. Turbulence in hydrodynamics and plasma physics. In Proc. Int. Conf. on Plasma Physics, Trieste, page 595. IAEA, 1965.
[2] D. C. Leslie. Developments in the theory of turbulence. Clarendon Press, Oxford, 1973.




Turbulence as a quantum field theory: 2

Turbulence as a quantum field theory: 2
In the previous post, we specified the problem of stationary, isotropic turbulence, and discussed the nature of turbulence phenomenology, insofar as it is relevant to taking our first steps in a field-theoretic approach. Now we will extend that specification in order to allow us to concentrate on renormalization group or RG.

RG originated in quantum field theory in the 1950s, but is best known for its successes in critical phenomena in the 1970s, along with the creation of the new subject of statistical field theory. Essentially it began as a method of exploiting scale invariance, and ended up as a method of detecting it, and also establishing the conditions under which it would hold. It is most easily understood in the theory of ferromagnetism, where we can envisage a model consisting of lots of little atomic magnets on a lattice. These atomic magnets (or lattice spins) interact with each other and, if we call the interaction energy for any pair $J$, this energy appears in the partition function as $J/k_B T$, where $k_B$ is the Boltzmann constant, and $T$ is the absolute temperature. This quantity is the coupling constant.

Now RG consists of coarse-graining our microscopic description, and then re-scaling it, to see if we can get back to where we started. If so, that would be a fixed point. In practice, we might expect to carry out this transformation a number of times, in order to reach such a fixed point. So in effect we are progressively reducing the number of degrees of freedom. This involves some sort of partial average at each step, in contrast to a full ensemble average, which gets you down from lots of degrees of freedom to just a few numbers being needed to describe a system.

Actually, merely by waving our hands about, we can deduce something about the fixed points of our lattice model of a ferromagnet. If we consider very high temperatures, then the coupling strength will be reduced to zero. The lattice spins will have a Gaussian probability distribution. We can envisage that this will be a fixed point, as no amount of coarse-graining will change it from a purely random distribution. At the other extreme, as the temperature tends to zero, the coupling tends to infinity and there can be no random behaviour: the spins will all line up. Once again, perfect order cannot be changed by coarse graining, and this also is a fixed point. What happens in between these extremes is interesting. As the temperature is reduced from some very large value, clumps of aligned spins will occur as fluctuations. The size of these fluctuations is characterised by the correlation length. As the temperature approaches some critical value $T_c$ from above, the correlation length will tend to infinity. When this occurs, it is no longer possible to coarse-grain away the ordering, as it exists on all scales. This fixed point is the critical point of the lattice.

So, RG applied to the model identifies the high- and low-temperature fixed points, which are trivial; and the critical fixed point which corresponds to the onset of ferromagnetism. This is known as real space RG and I have given a fuller account (with pictures!) elsewhere [1]. For completeness, I should mention that the momentum-space analytical treatment involves Gaussian perturbation theory in order to evaluate parameters associated with the critical point. Also, the temperature in this context is known as a control parameter.
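
For anyone who would like to see a fixed point emerge from an actual recursion, here is a minimal sketch (my own toy example, not taken from [1]): decimating every second spin in the one-dimensional Ising model gives the exact recursion $\tanh K' = \tanh^2 K$ for the coupling $K=J/k_BT$. In one dimension there is no phase transition, so the only fixed points are the trivial ones at $K=0$ and $K=\infty$, but the flow towards the high-temperature fixed point is easy to see:

import numpy as np

def decimate(K, steps=8):
    # One-dimensional Ising decimation: tanh(K') = tanh(K)**2 at each RG step.
    for n in range(steps):
        K = np.arctanh(np.tanh(K) ** 2)
        print(f"step {n + 1}: K = {K:.6f}")
    return K

decimate(1.5)   # any finite starting coupling flows to the trivial fixed point K = 0
decimate(0.2)   # a weak coupling gets there even faster

The critical fixed point described in the previous paragraph only appears in two or more dimensions, where the recursion relations are no longer exact; but the idea of iterating a transformation and looking for its fixed points is exactly the same.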

Variation of the coupling strength with wavenumber in isotropic turbulence.

In turbulence, the degrees of freedom are the independently excited Fourier modes. The coupling parameter for each mode can be identified with Batchelor’s Reynolds number (see my earlier post on 23/04/20), which takes the form $R(k)=[E(k)]^{1/2}/(\nu k^{1/2})$. Using the schematic energy spectrum, as given in the preceding post, we can identify the trivial fixed points where the coupling falls to zero. This is because the spectrum is known to go to zero at least as fast as $k^4$ as $k\rightarrow 0$, and to go to zero exponentially as $k\rightarrow \infty$. By analogy with quantum field theory, we refer to these points as being asymptotically free in the infra-red and the ultra-violet, respectively. In order to compare with magnetism, we can argue that the $k=0$ fixed point is analogous to the high-temperature one, where the low-$k$ motion is random (Gaussian) due to the stirring; whereas at large $k$ the motion is damped by viscosity and is analogous to the low-temperature fixed point. In the figure we identify another possible, but non-trivial, fixed point where the inertial range is represented by the Kolmogorov $k^{-5/3}$ spectrum. A power law, being scale-free, is likely to be associated with a fixed point of the RG transformations.
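
As a quick check on this picture (just a substitution, using the definitions already given): in an inertial range with $E(k)=\alpha\varepsilon^{2/3}k^{-5/3}$ and $k_d=(\varepsilon/\nu^3)^{1/4}$, the local Reynolds number becomes \[R(k)=\frac{[E(k)]^{1/2}}{\nu k^{1/2}}=\alpha^{1/2}\left(\frac{k}{k_d}\right)^{-4/3},\] so it depends on the wavenumber only through the ratio $k/k_d$, which is at least consistent with the idea of scale-free behaviour being associated with a possible fixed point.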

In order to carry out calculations, we seek to eliminate modes progressively in bands, first $k_1\leq k\leq k_0$, then $k_2 \leq k \leq k_1$, and so on. At the first stage, the effect of the missing modes results in an increase to the viscosity $\nu_0 \rightarrow \nu_1 = \nu_0 + \delta \nu_0$. We then rescale on the increased viscosity, and repeat the process. Note that we rename the molecular viscosity $\nu = \nu_0$ for this purpose. Also note that it can be a little counter-intuitive associating zero with the maximum value of $k$, but we want an increasing index as we reduce $k$, leading on to a recurrence relationship which may reach a fixed point.

In the theory of magnetism, the lattice spacing $a$ is used to define the maximum wavenumber, thus $k_{max} = 2\pi/a$. In turbulence, sometimes the Kolmogorov wavenumber is used for the maximum, but this is likely to be incorrect by at least an order of magnitude. A better definition has been given [2] in terms of the dissipation integral, thus: $\varepsilon = \int_0^\infty 2\nu_0 k^2 E(k) dk \simeq \int_0^{k_{max}}2\nu_0 k^2 E(k) dk$.

I shall highlight two calculations here. Forster et al. [3] carried out an RG calculation by restricting the wavenumbers considered to a region near the origin. This was very much a Gaussian perturbation theory of the type used in the study of critical phenomena. They did not refer to this as turbulence, and instead considered it as the large-scale asymptotic behaviour of randomly stirred fluid motion.

Later, McComb and Watt [4] introduced a form of conditional average which allowed the RG transformation to be formulated as an approximation, valid even at large wavenumbers. They were able to find a non-trivial fixed point which corresponded to the onset of the inertial (power-law) range and gave a good value for the Kolmogorov spectral constant. This work has been carried on and refined, but is very largely ignored. In contrast, Forster et al. seem to have established a new paradigm of Gaussian fluid motion, which permits the application of much field-theoretic RG that relies on the simplifications of that paradigm. There is, however, one difference. Nowadays people publishing in this field describe it as turbulence! The most up-to-date treatment of the conditional averaging method will be found in [5].

[1] W. D. McComb. Renormalization Methods. Oxford University Press, 2004.

[2] W. D. McComb. Application of Renormalization Group methods to the subgrid modelling problem. In U. Schumann and R. Friedrich, editors, Direct and Large Eddy Simulation of Turbulence, pages 67-81. Vieweg, 1986.

[3] D. Forster, D. R. Nelson, and M. J. Stephen. Long-time tails and the large eddy behaviour of a randomly stirred fluid. Phys. Rev. Lett., 36 (15):867-869, 1976.

[4] W. D. McComb and A. G. Watt. Conditional averaging procedure for the elimination of the small-scale modes from incompressible fluid turbulence at high Reynolds numbers. Phys. Rev. Lett., 65(26):3281-3284, 1990.

[5] W. D. McComb. Asymptotic freedom, non-Gaussian perturbation theory, and the application of renormalization group theory to isotropic turbulence. Phys. Rev. E, 73:26303-26307, 2006.




Turbulence as a quantum field theory: 1

Turbulence as a quantum field theory: 1

In the late 1940s, the remarkable success of arbitrary renormalization procedures in quantum electrodynamics in giving an accurate picture of the interaction between matter and the electromagnetic field, led on to the development of quantum field theory. The basis of the method was perturbation theory, which is essentially a way of solving an equation by expanding it around a similar, but soluble, equation and obtaining the coefficients in the expansion iteratively.

As a result of these successes, perturbation theory became part of the education of every physicist. Indeed, it is not too much to say that it is part of our DNA. Yet, a few years ago, when I looked at the website of an applied maths department, they had a lengthy explanation of what perturbation theory was, as they were using it on some problem. One simply couldn’t imagine that, on a physics department website, and it illustrates the cultural voids between different disciplines in the turbulence community. For instance, I used to hear/read comments to the effect that ‘isotropic turbulence had been studied for its potential application to shear flows, but this proved not to be the case and now it was of no further interest.’ From a physicist’s point of view, the reason for studying isotropic turbulence is the same as the motivation for being the first to climb Everest. Because it is there! But, interestingly, the study of isotropic turbulence has increased in recent years, driven by the growth of direct numerical simulation of the equations of motion as a discipline in its own right.

However, back to the sixties. The idea of applying these methods to turbulence caught on, and for a while things seem to have been quite exciting. In particular, there were the pioneering theories of Kraichnan, Edwards and Herring. There was also, the formalism of Wyld, which was the most like quantum field theory. At this point, I know from long and bitter experience that there will be wiseacres muttering ‘Wyld was wrong’. They won’t know what exactly is wrong, but they will be quoting a well-known later formalism by Martin, Siggia and Rose. In fact it has recently been shown that the two formalisms are compatible, once some simple procedural changes have been made to Wyld’s approach [1].

We will return to Wyld in a later post (and also to the distinction between formalisms and theories). Here we want to take a critical look at the underlying physics of applying the methods of quantum field theory to fluid turbulence. It is one thing to apply the iterative-perturbative approach to the Navier-Stokes equations (NSE), and another to justify the application of specific renormalization procedures to a macroscopic phenomenon in classical physics. So, let’s begin by formulating the problem of turbulence for this purpose, in order to see whether the analogy is justified.

We consider a cubical box of side $L$, occupied by a fluid which is stirred by random forces with a multivariate-normal distribution and with instantaneous correlation in time. This condition ensures that any correlations which arise in the velocity field are due to the NSE. It is also known as the white-noise condition, and it allows us to work out the rate at which the forces do work on the fluid in terms of the autocorrelation of the random forces, which is part of the specification of the problem. (Occasionally one sees it stated that the delta-function autocorrelation in time is needed for Galilean invariance. I must say that I would like to see a reasoned justification for that statement.)
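
To see how that works out in symbols (one common normalization; the constants vary with convention, so treat this as a sketch): if the force autocorrelation is \[\langle f_i(\mathbf{x},t)\,f_j(\mathbf{x}',t')\rangle = C_{ij}(\mathbf{x}-\mathbf{x}')\,\delta(t-t'),\] then the mean rate at which the forces do work per unit mass is $\varepsilon_{\mathrm{in}}=\langle \mathbf{u}\cdot\mathbf{f}\rangle = \tfrac{1}{2}C_{ii}(0)$, independent of the state of the velocity field. This is the sense in which the white-noise choice fixes the energy input as part of the problem specification.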

By expanding the velocity field (and pressure) in Fourier series, we can study the NSE in wavenumber $k$ space. It is usual nowadays to proceed immediately to the limit $L \rightarrow \infty$ and make use of the Fourier integral representation. It is important to note that this is a limit. It does not imply that there is a quantity $\epsilon = 1/L = 0$. It does, however, imply that all our procedures and results must be independent of $L$. Then the problem may be seen as one of strong nonlinear coupling, due to the form of the nonlinear term in wavenumber space.

Strong nonlinear coupling? Well, that’s the conventional view and it is certainly not wrong. But let’s not be too glib about this. It is well known, and probably has been since at latest the early part of the last century, that making the variables non-dimensional on specific length- and velocity-scales results in a Reynolds number appearing in front of the nonlinear term as a prefactor. Expressing this in terms of quantum field theory, the Reynolds number plays the part of the coupling constant. In quantum electrodynamics, the coupling constant is the fine-structure constant, with a value of about $1/137$, and thus provides a small parameter for perturbation expansion. While the resulting series is not strictly convergent, it does give answers of astonishing accuracy. It is equally well known that attempting perturbation theory in fluid dynamics is unwise for anything other than creeping flow, where the Reynolds number is small. So applying perturbation theory to turbulence looks distinctly unpromising.
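
To spell that out (a standard scaling argument, with the particular choice of scales being mine for illustration): if we scale lengths on $L$, velocities on $U$, time on the viscous time $L^2/\nu$, and pressure on $\rho\nu U/L$, the NSE becomes \[\frac{\partial \mathbf{u}'}{\partial t'} + Re\,(\mathbf{u}'\cdot\nabla')\mathbf{u}' = -\nabla' p' + \nabla'^2\mathbf{u}', \qquad Re=\frac{UL}{\nu},\] so the Reynolds number does indeed stand in front of the nonlinear term, exactly where a coupling constant would sit in a field-theoretic expansion.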

There is also the basic phenomenology of turbulence which we must take into account. The stirring motion of the forces will produce fluid velocities with normal (or Gaussian) distributions. Then the effect of the nonlinear coupling is to generate modes with larger values of wavenumber than those initially stirred. This is accompanied by the transfer of energy from small wavenumbers to large, and if left to carry on would lead to equipartition for any finite set of modes, albeit with the total energy increasing with time. This assumes the imposition of a cut-off wavenumber, but in practice the action of viscosity is symmetry-breaking, and the kinetic energy of turbulent motion leaves the system as heat. The situation is as shown in the sketch which, despite our restriction to isotropic turbulence in a box, is actually quite illustrative of what goes on in many turbulent flows.

Sketch of the energy spectrum of isotropic turbulence at moderate Reynolds number.

Various characteristic scales can be defined, but the most important is the Kolmogorov dissipation wavenumber, thus: $k_d=(\varepsilon /\nu_0^3)^{1/4}$, which gives the order of magnitude of the wavenumber at which the viscous effects begin to dominate. For the application of renormalized perturbation theory (which we will discuss in a later post), this phenomenology is important for assessment purposes. However, when we look at the later introduction of renormalization group theory, we have to consider this picture in rather more detail. We will do that in the next post.

[1] A. Berera, M. Salewski, and W. D. McComb. Eulerian Field-Theoretic Closure Formalisms for Fluid Turbulence. Phys. Rev. E, 87:013007-1-25, 2013.




Is there an alternative infinite Reynolds number limit?

Is there an alternative infinite Reynolds number limit?

I first became conscious of the term dissipation anomaly in January 2006, at a summer school, where the lecturer preceding me laid heavy emphasis on the term, drawing an analogy with the concept of anomaly in quantum field theory, as he did so. It seemed that this had become a popular name for the fact that turbulence possesses a finite rate of dissipation in the limit as the viscosity tends to zero. I found the term puzzling, as this behaviour seemed perfectly natural to me. At the time it occurred to me that it probably depended on how you had first met turbulence, whether the use of this term seemed natural or not. In my case, I had met turbulence in the form of shear flows, long before I had been introduced to the study of isotropic turbulence in my PhD project.

Back in the real world, the pipe-flow experiments of Osborne Reynolds were conducted in the 1880s, and this line of work was continued in the 1930s and 1950s by (for example) Nikuradse and Laufer [1]. This led to a picture where turbulence was seen as possessing its own resistance to flow. The disorderly eddying motions were perceived to have a randomizing effect analogous to, but much stronger than, the effects of the fluid’s molecular viscosity. This in turn led to the useful but limited concept of the eddy viscosity. As the Reynolds number was increased, the eddy viscosity became dominant, typically being two orders of magnitude greater than the fluid viscosity.

In principle, there are three alternative ways of varying the Reynolds number in pipe flow, but in practice it is just a matter of turning up the pump speed. Certainly no one would try to do it by decreasing the viscosity or increasing the pipe diameter. In isotropic turbulence, the situation is not so straightforward, as we use forms of the Reynolds number which depend on internal length and velocity scales. Indeed the only unambiguous characteristic which is known initially is the fluid viscosity.

An ingenious way round this was given by Batchelor (see pp. 106-107 of [2]), who introduced a Reynolds number for an individual degree of freedom (i.e. wave-number mode) as $R(k) = [E(k)]^{1/2}/(\nu k^{1/2})$, in terms of the wavenumber spectrum, the viscosity and the wave-number of that particular degree of freedom. He argued that the effect of decreasing the viscosity would be to increase the dominance of the inertial forces on that particular mode, so that the region of wave-number space which is significantly affected by viscous forces moves out towards $k=\infty$. He concluded: `In the limit of infinite Reynolds number the sink of energy is displaced to infinity and the influence of viscous forces is negligible for wave-numbers of finite magnitude.’ A similar conclusion was reached by Edwards, who showed, from a consideration of the Kolmogorov dissipation wave-number, that the sink of energy at infinity could be represented by a Dirac delta function (see [1]).

It is perhaps also worth mentioning that the use of this local (in wave-number) Reynolds number provides a strength parameter for the consideration of isotropic turbulence as an analogous quantum field theory [3].
Evidently the conclusion that the infinite Reynolds number limit in isotropic turbulence corresponds to a sink of energy at infinity in $k$-space is well justified. Nevertheless, this use of the value infinity in the mathematical sense is only justified in theoretical continuum mechanics. In reality it cannot correspond to zero viscosity. It can be shown quite easily from the phenomenology of the subject that the infinite Reynolds number behaviour of isotropic turbulence can be demonstrated asymptotically, to any required accuracy, without the need for zero viscosity. We shall return to this in a later post.

[1] W. D. McComb. The Physics of Fluid Turbulence. Oxford University Press, 1990.
[2] G. K. Batchelor. The theory of homogeneous turbulence. Cambridge University Press, Cambridge, 1st edition, 1953.
[3] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.




What relevance has theoretical physics to turbulence theory?

What relevance has theoretical physics to turbulence theory?

The question is of course rhetorical, as I intend to answer it. But I have to pause on the thought that it is also unsatisfactory in some respects. So why ask it then? Well my reply to that is that various turbulence researchers have over the years in effect answered it for me. Their answer would be none at all! In fact, in the case of various anonymous referees, they have often displayed a marked hostility to the idea of theoretical physicists being involved in turbulence research. But the reason why I find it unsatisfactory is that it seems to assume that turbulence theory is not part of theoretical physics, whereas I think it is; or, rather, it should be. So let’s begin by examining that question.

As is well known, the fundamental problem of turbulence is the statistical closure problem that is posed by the hierarchy of moments of the velocity field. Well, molecular physics has the same problem when the molecules interact with each other. This takes the form of the BBGKY hierarchy, although this is expressed in terms of the reduced probability distribution functions. If we consider the simpler problem, where the molecules are non-interacting (the ideal gas), then we have classical statistical physics. In these circumstances we can obtain the energy of the system simply by adding up all the individual energies. The partition function of the system then factorizes, and we can obtain the system free energy quite trivially. However, if the individual molecules are coupled together by an interaction potential, then this factorization is no longer possible, as each molecule is coupled to every other molecule in the system. So it is for turbulence: if we work in the Fourier wavenumber representation, the modes of the velocity field are coupled together by the nonlinear term, thus posing an example of what in physics is called the many-body problem.

One could go on with other examples in microscopic physics, for example the theory of magnetism, which involves the coupling together of all the spins on the lattice sites; but it really boils down to the fact that the bedrock problem of theoretical physics is that of strong coupling. And turbulence formulated in $k$-space comes into that category. The only difference is that turbulence is mainly studied by engineers and applied scientists, while theorists mostly prefer to study what they see as more fundamental problems, even if these studies become ever more arid for lack of genuine inspiration or creativity. But as a matter of taxonomy, not opinion, turbulence should belong to physics as an example of the many-body problem.

Now let’s turn to our actual question. We can begin by noting that we are talking about insoluble problems. That is, there is no general method of obtaining an exact solution. We have to consider approximate methods. First, there is perturbation theory, which relies on (and is limited by) the ability to perform Gaussian functional integrals. Secondly, there is self-consistent field theory. Both of these rely, either directly or indirectly, on the concept of renormalization. In molecular physics, this involves adding some of the interaction energy to the bare particle, in order to create a dressed particle, also known as a quasi-particle. Such quasi-particles do not interact with each other and so the partition function can be evaluated by factorization, just as in the ideal-gas case. In the case of turbulence, it is probably quite widely recognized nowadays that an effective viscosity may be interpreted as a renormalization of the fluid kinematic viscosity. However, it should be borne in mind that the stirring forces and the interaction strength may also require renormalization.

There is no inherent reason why the subject of statistical turbulence theory should be mysterious, and I intend to post short discussions of various aspects. Not so much the maths, as `good versus bad’ or `justified versus unjustified’; plus tips on how to use some common-sense reasoning to cut through the intimidatingly complicated mathematics (and, in some cases, self-important pomposity) of theories which are not really new turbulence theories, but merely textbook material from quantum field theory in which the variables have been relabelled while the essential difficulties of extending it to turbulence have not been tackled.




The Kolmogorov `5/3’ spectrum and why it is important

The Kolmogorov `5/3’ spectrum and why it is important

An intriguing aspect of the Kolmogorov inertial range spectrum is that it was not actually derived by Kolmogorov. This fact was unknown to me when, as a new postgraduate student, I first encountered the `5/3’ spectrum in 1966. At that time, all work on the statistical theory of turbulence was in spectral or wavenumber ($k$) space, and the Kolmogorov form was seen as playing an important part in deciding between alternative theoretical approaches.

As is well known nowadays, in 1941 Kolmogorov derived power-law forms for the second- and third-order structure functions in $r$ space. In the same year, it was Obukhov [1] who worked in $k$ space, introducing the energy flux through wavenumber as the spectral realization of the Richardson-Kolmogorov cascade, and making the all-important identification of the scale-invariance of the energy flux as corresponding to the Kolmogorov picture for real space. It is usual nowadays to denote this quantity by $\Pi(k)$, and in this context scale-invariance means that it becomes a constant, independent of $k$. For stationary turbulence that constant is the dissipation rate. Obukhov did actually produce the `5/3’ law, but this involved additional hypotheses about the form of an effective viscosity, so it was left to Onsager in 1945 [2] to combine simple dimensional analysis with the assumption of scale-invariance of the flux to produce a spectral form on equal terms with Kolmogorov’s `2/3’ law for $S_2(r)$. This work was discussed (and in effect disseminated) by Batchelor in 1947 [3], and later in his well-known monograph. Curiously enough, in his book, Batchelor only discussed the spectral picture, having discussed only the real-space picture in [3]. This is something that we shall return to in later posts. But it seems that the effect was to establish the dominance of the spectral picture for many years.
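For anyone who wants to see the dimensional step written out (this is a reconstruction in modern notation, not a quotation from Onsager): assume that in the intermediate range $E(k)$ depends only on the scale-invariant flux $\Pi(k)=\varepsilon$ and on $k$, and write $E(k)=\alpha\,\varepsilon^{a}k^{b}$. Since $[E(k)]=\mathrm{L}^{3}\mathrm{T}^{-2}$, $[\varepsilon]=\mathrm{L}^{2}\mathrm{T}^{-3}$ and $[k]=\mathrm{L}^{-1}$, matching the time dimension gives $a=2/3$ and matching the length dimension gives $b=-5/3$, hence

\[ E(k) = \alpha\, \varepsilon^{2/3} k^{-5/3}, \]

with $\alpha$ a dimensionless constant.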

In the early sixties, there was considerable excitement about the new statistical theories of turbulence, but when Grant, Stewart and Moilliet published their experimental results for spectra, which extended over many decades of wavenumber, it became clear beyond doubt that the Kolmogorov inertial-range form was valid and that the theories of Kraichnan and Edwards were not quite correct. We will write about this separately in other posts, but for me in 1966 the challenge was to produce an amended form of the Edwards theory which would be compatible with the `5/3’ spectrum. This, in other words, was a restatement of the turbulence closure problem. It is one that I have worked on ever since.

This is not an easy problem and progress has been slow. But there has been progress, culminating in McComb & Yoffe (2015): see #3 of my recent publications. However, over the years, beginning in the late 1970s, this work has increasingly received referee reports which are hostile to the very activity and which assert that the basic problem for closures is not to obtain $k^{-5/3}$ but rather to obtain a value for $\mu$, where the exponent should be $-5/3 + \mu$, due to intermittency corrections. Unfortunately for this point of view, the so-called intermittency correction $\mu$ comes attached to a factor $L$, representing the physical size of the system. This means that the limit $L \rightarrow \infty$ does not exist, which is something of a snag for the modified Kolmogorov theory.
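For the record, the modified form referred to is usually written (sign conventions for $\mu$ vary between authors; here I keep the one used above) as

\[ E(k) \sim \varepsilon^{2/3} k^{-5/3} (kL)^{\mu}, \]

so that the exponent of $k$ is indeed $-5/3+\mu$, but the prefactor now carries a factor $L^{\mu}$: the external length scale appears explicitly, and the limit $L \rightarrow \infty$ does not exist unless $\mu = 0$.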

We shall enlarge on this elsewhere. For the moment it is interesting to note that the enthusiasm for intermittency corrections arose from the study of structure functions and in particular their behaviour with increasing order. This became a very popular field of research throughout the 1980s/90s and threatened to establish a sort of standard model, from which no one was permitted to dissent. Fortunately, there has been a fight back over the last decade or two, and the importance of finite Reynolds number (FRN) effects is becoming established. In particular, the group consisting of Antonia and co-workers has emphasised consistently (and in my view correctly) that the Kolmogorov result $S_3 = -(4/5)\varepsilon r$ (which the Intermittentists regard as exact) is only correct in the limit of infinite Reynolds numbers. At finite viscosities there must be a correction, however small. A similar conclusion has been reached for the second-order structure function by McComb et al (2014), who used a method for reducing systematic errors to show that this exponent too tended to the canonical value in the limit of infinite Reynolds numbers. These facts have severe consequences for the way in which the Intermittentists analyse their data and draw their conclusions.

This leaves us with an interesting point about the difference between real space and wavenumber space. The above comments are true for structure functions, because in $r$-space everything is local. In contrast, the nonlinear energy transfers in $k$-space are highly nonlocal. The dominant feature in wavenumber space is the flux of energy through the modes, from low wavenumbers to high. The Kolmogorov picture involves the onset of scale invariance at a critical Reynolds number, and the increasing extent of the associated inertial range of wavenumbers as the Reynolds number increases. The infinite Reynolds number limit in $k$-space then corresponds to the inertial range being of infinite extent. At finite Reynolds numbers, it will be of merely finite extent, but there is no reason to believe that there is any other finite Reynolds number correction. I believe that this is more than just a conjecture.

[1] A. M. Obukhov. On the distribution of energy in the spectrum of turbulent flow. C. R. Acad. Sci. U.R.S.S., 32:19, 1941.

[2] L. Onsager. The Distribution of Energy in Turbulence. Phys. Rev., 68:281, 1945.

[3] G. K. Batchelor. Kolmogorov’s theory of locally isotropic turbulence. Proc. Camb. Philos. Soc., 43:533, 1947.




Scientific discussion in the turbulence community.

Scientific discussion in the turbulence community.

Shortly after I retired, I began a two-year travel fellowship, with the hope of having interesting discussions on various aspects of turbulence. I’m sure that I had many interesting discussions, particularly in trying out some new and half-baked ideas that I had at about that time, but what really sticks in my mind are certain unsatisfactory discussions.

To set the scene, I had recently become aware of Lundgren’s (2002) paper [1] and, having worked through it in detail, I was convinced that it offered a proof that the second-order structure function took the Kolmogorov `2/3’ form asymptotically in the limit of infinite Reynolds numbers. There is of course little or no disagreement about Kolmogorov’s derivation of the `4/5’ law for the third-order structure function. For stationary turbulence, it is undoubtedly asymptotically correct in the infinite Reynolds number limit. But in order to find the second-order form, Kolmogorov had to make the additional assumption that the skewness of the longitudinal derivative became constant in the infinite Reynolds number limit. Introducing the skewness $S$ as $S=S_3(r)/S_2(r)^{3/2}$, and substituting the `4/5’ law for $S_3$, results in the well-known form $S_2(r)=\left(-\frac{4}{5S}\right)^{2/3}\varepsilon^{2/3}r^{2/3}\equiv C_2\,\varepsilon^{2/3}r^{2/3}$. Numerical results do indeed suggest that the skewness becomes independent of the Reynolds number as the latter increases, but it remains a weakness of the theory that this assumption is needed.

Lundgren [1] started, like Kolmogorov, from the Karman-Howarth equation (KHE), and did the following. He put the KHE in dimensionless form by a generic change of variables based on time-dependent length and velocity scales, $l$ and $u$. He then chose to examine: first, Von Karman scaling; and secondly, Kolmogorov scaling, with appropriate choices for $l$ and $u$. In both cases, he solved for the scaled second-order structure function by a perturbation expansion in inverse powers of the Reynolds number. He then employed the method of matched asymptotic expansions which recovered the Kolmogorov form for $S_2$. The `4/5’ law was also recovered for $S_3$, both results naturally following in the large Reynolds number limit. A more extensive account of this work can be found in Section 6.4.6 of my 2014 book.
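In outline (my notation here, which may differ from Lundgren’s in detail), the two scalings amount to writing

\[ S_2(r,t) = u^2(t)\, f\!\left(\frac{r}{l(t)}\right), \qquad S_3(r,t) = u^3(t)\, g\!\left(\frac{r}{l(t)}\right), \]

where the dimensionless functions also depend on the Reynolds number and are expanded in inverse powers of it. For the von Karman (outer) scaling, $l$ and $u$ are the integral length scale and the r.m.s. velocity; for the Kolmogorov (inner) scaling they are the dissipation scales $\eta=(\nu^3/\varepsilon)^{1/4}$ and $v=(\nu\varepsilon)^{1/4}$. Matching the inner and outer expansions in the overlap region is what produces the `2/3’ and `4/5’ forms in the limit of large Reynolds numbers.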

Before setting off on my travels, I consulted a colleague who, although specializing in soft matter, had some familiarity with turbulence. To my surprise he seemed quite unenthusiastic about this work. He said something to the effect that it was a pity that Lundgren had to assume the same scaled form for both the second-order and the third-order structure functions. Now, on reflection I saw that this was nonsense. All Lundgren did was introduce a change of variables: this is not an assumption; it merely restates the problem, as it were. Secondly, the basic Kolmogorov theory deals with the probability distribution functional, and this means that all the moments (and hence structure functions) will be affected in the same way by any operation on it [2].

On the first of my visits, I began to discuss this with Professor X, who seemed very sceptical at first; then his comments seemed increasingly irrelevant; then he realised that he was thinking of an entirely different, and later, piece of work by Lundgren. At that point the discussion fizzled out.

On a later visit to a different university, at an early stage in the discussion with Professor Y, I commented that the method relied on the fact that the Karman-Howarth equation was local in the variable $r$. To which he swiftly replied: `Yes Tom does have to assume that.’ That effectively brought things to a close, because once again we are faced with nonsense. In fact this particular individual seems to believe that the existence of an energy cascade implies that the KHE is nonlocal! But of course the nonlocalness is confined to the Lin equation in wavenumber space.

On a later occasion, I tried to bring the subject up again, but no luck. He said: `Tom just makes the same assumptions as Kolmogorov did. So there is nothing new.’ At this point I finally gave up. However, as we have just seen, Kolmogorov has to assume that the skewness $S$ becomes constant as the Reynolds number increases. In contrast, the Lundgren analysis actually shows that this is so. In addition, it also provides a way of assessing systematic corrections to the `4/5’ law at large but finite Reynolds numbers.

The basic theoretical problems in turbulence are very hard and perhaps even impossible to solve, in a strict sense. However, the fact that lesser problems of phenomenology are plagued by controversy, with issues remaining unresolved for decades, seems to me to be a matter of attitude (and culture) that leads to a basic lack of scholarship. I think we need to trade in the old turbulence community and get a new one.

[1] Thomas S. Lundgren. Kolmogorov two-thirds law by matched asymptotic expansion. Phys. Fluids, 14:638, 2002.

[2] I have to own up to an error here. For years I argued that only the second- and third-order structure functions were involved in Kolmogorov and hence conclusions based on higher-order moments were irrelevant. Then (quite recently!) I noticed in a paper by Batchelor the comment that as the hypotheses were for the pdf, they automatically applied to moments of all orders.




Intermittency corrections (sic) and the perversity of group think

Intermittency corrections (sic) and the perversity of group think.

In The Times of 11 January this year, there was a report by their Science Editor which had the title Expert’s lonely 30-year quest for Alzheimer’s cure offers new hope. Senile dementia is the curse of the age (even if temporarily eclipsed by the Corona virus) and the article tells how in 1905 Alois Alzheimer made a post mortem examination of the brain of a woman who in her later years had become confused and forgetful. He found two pathological features: one consisted of clumps of plaques of a protein called beta amyloid and the other consisted of sticky tangles of a different protein, later identified by a Professor Claude Wischik as a protein called tau.

Now, with two possible causes, you might imagine that researchers in the field would be interested in both. But you would be wrong. It seems that the community targeted the beta amyloid cause and for many years neglected the other possibility. Now, after decades of failure, the major pharmaceutical companies are developing anti-tau drugs. Even if none of these proves to be the magic bullet, it seems a healthier situation that both symptoms (and the possible interaction between them) are being studied. The article ends on a note of moderate optimism, but the question remains: why was the research skewed towards just the one possibility? The article seems to suggest that this may have been because beta amyloid was already known and possibly implicated in another pathology. As always, in applied research there is a temptation to go for the `quick and dirty solution’!

The behaviour of the researchers pursuing the beta amyloid option (to the exclusion of the equally possible tau option) exhibits some of the characteristics of what psychologists call group think. A similar phenomenon has been part of fundamental research on turbulence for at least five decades. As is well known, it started with a remark by Landau about the Kolmogorov (1941) theory; or K41 for short. This criticism is based on the idea that intermittency of the dissipation rate has implications for the K41 theory, despite the fact that the physical basis of that theory is the inertial transfer rate, which is sometimes equal to the dissipation rate. This criticism, along with various others, is discussed in Chapters 4 and 6 of my 2014 book on turbulence and I will not consider it further here. All I wish to note is that there has been an ongoing body of work on so-called intermittency corrections, and the strange thing is that more obvious corrections have been largely neglected, until quite recent times. Let us now expand on that.

Essentially Kolmogorov used Richardson’s concept of the cascade to argue that energy transfer would proceed by a stepwise process from large scales (production range) to small scales and this would result in a universal form for the structure functions in these small scales. Furthermore, for large Reynolds numbers, the effect of the viscosity would only be appreciable at very small scales, and there would be an intermediate subrange of scales where the local excitation would be controlled by inertial transfer into the subrange from the large scales and inertial transfer out of the subrange into the small scales where it would be dissipated by viscous effects.

At this point, I should enter a small caveat. I feel quite uncomfortable with what I have just written. The physical concept of the cascade is rather ill-defined in real space. I would be much happier talking in terms of wavenumber space where the cascade is well defined and the key concept is scale-invariance of the inertial flux. This fact was recognized by Obukhov (1941), by Onsager (1945) and by Batchelor (1947), and after that very widely. It is rather as if Kolmogorov, in choosing to work in real space, had opted for Betamax rather than VHS!

However, ignoring my quibbles, in either space one point is clear: this is an approximate theory. Either $S_2 \sim \varepsilon^{2/3}r^{2/3}$ or $E(k) \sim \varepsilon^{2/3}k^{-5/3}$ is only asymptotically valid in the limit of infinite Reynolds numbers. Under all other circumstances, there must be corrections due to finite-Reynolds-number (FRN) effects. These corrections may be small enough to ignore: bear in mind that, on various measures, an effectively infinite Reynolds number is not all that large. There is certainly no need to worry about zero viscosity (pace Onsager and his hagiographers)! We shall return to this specific point in later posts.

The response of Kolmogorov to Landau’s criticism was the somewhat ad hoc K62, in which the retention of the specific effect of the large scales of the system (in both structure functions and spectra) completely reversed the original assumption of the stepwise cascade leading to universal behaviour. For reasons that are far from clear to me, this sparked off a positive industry of intermittency corrections, anomalous exponents and various improvements (sic) on Kolmogorov, which lasts to this day. In contrast, from the late 1990s, increasing attention, both experimental and theoretical, has been given to FRN effects, and in particular to the way in which they have been ignored in assessing the evidence for anomalous exponents and suchlike. We may highlight the situation in the field by contrasting two major papers, both published in leading learned journals within the last year.

The first of these is by Tang et al [1], who note in their abstract that K62 `has been embraced by an overwhelming majority of turbulence researchers.’ This paper is one in a series in which this group has investigated the alternative effect of finite Reynolds number corrections. In addition to their own analysis, they also cite many papers from recent years which support their conclusion that the failure to account for FRN effects has `almost invariably been mistaken for the intermittency effect’. In the main body of their paper, they express themselves even more forcefully. In contrast, the paper by Dubrulle [2], which is very much in the K62 camp, so to speak, cites not a single reference to FRN effects. Instead the author argues that small-scale intermittency is incompatible with homogeneity, and makes the radical proposal that the Karman-Howarth equation should be replaced by a weak form which takes account of singularities. At this point one takes leave of continuum mechanics and much else besides! If we consult Batchelor’s book, we find that homogeneity is defined in terms of mean quantities and is therefore entirely compatible with intermittency of the velocity field, which is nowadays understood to be present at all scales.

I was tempted to say that it is difficult to imagine such a fundamental gulf in any subject other than turbulence, but then that’s where we came in!

[1] S. Tang, R. A. Antonia, L. Djenidi, and Y. Zhou. Phys. Rev. Fluids 4, 024607 (2019).

[2] B. Dubrulle. J. Fluid Mech. 867, P1, (2019).




Bad proofs and `curate’s egg’ theories

Bad proofs and `curate’s egg’ theories.

At about the time I took up my appointment at Edinburgh, I heard about a pure mathematician who wanted to be remembered for his bad proofs. Some years later I read his obituary in The Times and this fact was mentioned again. I had thought that I had kept the cutting but it seems not, so I’m afraid that I don’t remember his name. But I do remember what was meant by the term `bad proofs’. This man’s view was that many proofs in mathematics have been polished by various hands over the years and he wanted to be remembered for his originality. His proofs would be unpolished and hence seen as original.

The choice of the word `bad’ is interesting, in view of its pejorative overtones. I would be inclined to think that the original proof would at least be valid and hence not to be described as bad. Perhaps later, more elegant versions of the proof would emphasise the unpolished nature of the original. Hence, perhaps `rough’ might be a better description. Presumably the word `bad’ was chosen to emphasise the paradoxical appearance of that statement. Well, at least he is being remembered for his quirky assertion about what he wanted to be remembered for.

For some time I have wondered whether there is an analogous term for turbulence theories. By which I mean attempts to solve the statistical closure problem. This was originally formulated by Reynolds for pipe flow, but as usual we will consider it here as applied to isotropic turbulence. Obviously `bad’ is no good, because we do not have the paradoxical juxtaposition that we have with the word `proof’, which in itself indicates success, which is certainly not bad. One obvious possibility would be `rough’ but somehow that does not appeal. `Rough theories’ does not sound good. In fact it sounds bad.

Recently I came up with the idea of the `curate’s egg’ theories, meaning `good in parts’. This saying stems from a cartoon which appeared in the British humorous magazine Punch in 1895. It shows a nervous curate breakfasting with the bishop. The bishop expresses concern that the curate’s egg is not a good one. The curate, anxious not to make a fuss, bravely asserts that his egg is `good in parts’. The term passed into everyday speech and was still current when I was young. In the 1960s I was commuting regularly by train, and I would buy Punch to read on the journey. On one occasion there was a commemorative issue and a facsimile of the original cartoon was reproduced, so I was interested to see the origin of the phrase. We didn’t have Google in those days!

The reason that I think that such a term might be helpful is that many members of the turbulence community seem to see a theory as being either right or wrong. And if it’s deemed to be wrong, then it should be dismissed and never considered again. A striking example of this kind of thing arose a few years ago when I was trying to get a paper on the LET theory published (see #10 in the list of recent papers) and it had gone to arbitration. The Associate Editor who was consulted turned the paper down because `this is the sort of stuff Kraichnan did and everybody has known for the last twenty years that it’s wrong’.

This decision was easily overturned. The sheer idiocy of the proposition that, because one person had tackled a problem and failed, other people should be barred from making further attempts, ensured that. But what interests me is the fact that Kraichnan’s work is reduced to `the sort of stuff’ and regarded as `wrong’. This was done by someone who was an applied mathematician and not a theoretical physicist. I am not a betting man, but I would put a small amount of money on the assumption that this referee had very little knowledge of Kraichnan’s vast output, and was relying on hearsay for his opinion. I understand the difficulties facing anyone from an engineering background in trying to get to grips with this type of many-body or field theory, although there are accessible treatments available. But if you are unable to understand this work in detail, then it is unlikely that you are qualified to referee it.

If we take an example from physics, in critical phenomena (e.g. the transition from para- to ferromagnetism) the subject was dominated by mean-field theory up until the early 1970s, when the renormalization group (RG) was applied to critical phenomena. This does not mean that mean-field theory was immediately dismissed. In fact it is still taught in undergraduate courses. Prior to RG there was a balanced understanding of the limitations and successes of mean-field theory and no one ever thought of it as `right’, with the corollary that no one now dismisses it as simply `wrong’.

I know what I would like to have for other subjects, such as cosmology, particle theory or indeed musical theory. I would like to be able to read a simple account which explains the state of play, without going into too much detail. That is what I intend to provide for statistical theories of turbulence in future posts. In my view, most theories of turbulence can be regarded as `curate’s eggs’: they have both good and bad aspects. The important thing is that those working in the field of turbulence should have some understanding of the situation and should appreciate the importance of having further research in this area.




The infinite-Reynolds number limit: a first look

The infinite-Reynolds number limit: a first look.

I notice that MSRI at Berkeley have a programme next year on math problems in fluid dynamics. The primary component seems to be an examination of the relationship between the Euler and Navier-Stokes equations, `in the zero-viscosity limit’. The latter is, of course, the same as the limit of infinite Reynolds numbers, providing that the limit is taken in the same way with the same constraints. I think that it is a failure to appreciate this proviso that has resulted in the concept becoming something of a vexed question over the years. Yet it was clearly explained by Batchelor in 1953 and elegantly re-formulated by Edwards in 1965. As a result, a group of theorists has been quite happy about the concept, but many other workers in the field seem to be uneasy.

I first became aware of this when talking to Bob Kraichnan at a meeting in 1984. When I used the term, his reaction surprised me. He began to hold forth on the subject. He said that people were `frightened’ of the idea of the infinite-Reynolds-number limit. Rather defensively I said that I wasn’t frightened by it. His reply was: `Oh, I know that you aren’t, but you would be surprised at the number of people who are!’ Since then I have indeed been surprised by how often you get a comment from a referee which goes something like: `The authors take the infinite-Re limit … but of course you cannot really have zero viscosity, can you?’ This rather nervous addendum suggests strongly that the referee does not understand the concept of a limit.

Well, one thing I would claim to understand is the idea of a limit in mathematical analysis. This is because the first class of my school course on calculus dealt with nothing else. I can remember that class period clearly, even though it was about sixty-five years ago. One example that our maths master gave was to imagine that you were cutting up your twelve-inch ruler, which was standard in those days. You cut it into two identical pieces in a perfect cutting process, with no waste. Then you put one piece over to your right-hand side, and now cut the left-hand piece into two identical pieces. One of these you put over to the right-hand side, and add it on to the six-inch piece already there, to make a nine-inch ruler. The remaining piece you again cut into two, and move half over to make a ten-and-a-half-inch ruler. However much you repeat this process, the ruler will approach but never reach twelve inches again. In other words, twelve inches is the limit and you can only approach it asymptotically.
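Purely as an illustration (this is just the arithmetic of the example written out; nothing here is specific to turbulence):

```python
# The ruler example: the pieces moved to the right-hand side sum to
# 6 + 3 + 1.5 + ... inches, which approaches, but never reaches, 12 inches.
length = 12.0
remaining = length
assembled = 0.0
for cut in range(1, 11):
    remaining /= 2.0          # cut the left-hand piece in two
    assembled += remaining    # move one half over to the right-hand side
    print(f"after cut {cut:2d}: assembled ruler = {assembled:.6f} inches")
# 'assembled' tends to 12 only in the limit of infinitely many cuts
```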

Suppose we carry out a similar thought experiment on turbulence; although you could actually do this, most readily by DNS. What we are going to do is to stir a fluid in order to produce stationary, isotropic turbulence. Now at this stage, we don’t even think about dissipation. We are trying to drive a dynamical system, and we start by specifying the forcing in terms of the rate of doing work on the fluid. We call this quantity $\varepsilon_W$ and it is fixed. Next, our dynamical system is fully specified once we choose the boundary conditions and the kinematic viscosity $\nu$. Accordingly, provided the forcing spectrum is peaked near the origin in wavenumber space, and an appropriate value of the initial kinematic viscosity has been chosen, energy will enter the system at low wavenumbers, be transferred by conservative inertial processes to higher wavenumbers, and ultimately be dissipated at the highest excited wavenumbers. Once the system becomes stationary, the dissipation rate must be equal to the rate of doing work, and so the Kolmogorov dissipation wavenumber is given by $k_d = (\varepsilon_W /\nu^3)^{1/4}$.

Now let us carry out a sequence of experiments in which $\varepsilon_W$ remains fixed, but we progressively reduce the value of the kinematic viscosity. In each experiment, the viscosity is smaller and the dissipation wavenumber is larger. Therefore there is a greater volume of wavenumber space and it will take longer to fill with energy. Ultimately, corresponding to the limiting case, we have an infinite volume of wavenumber space and the system will take an infinite time to reach stationarity and in principle will contain an infinite amount of energy. Note that this is not a catastrophe! In continuum problems, a catastrophe is when you get an infinite density of some kind. Here the work, transfer and dissipation rates are the densities of the problem, and they are perfectly well behaved.
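A minimal numerical sketch of this sequence of experiments (not a DNS, just the arithmetic of the dissipation wavenumber; the forcing wavenumber $k_f$ below is a hypothetical choice, included only to indicate how the extent of the excited range of wavenumbers opens up):

```python
# Hold the rate of doing work eps_W fixed and reduce the viscosity nu.
# The Kolmogorov dissipation wavenumber k_d = (eps_W / nu^3)**0.25 then
# increases without bound, so the volume of excited wavenumber space grows.
eps_W = 0.1    # fixed rate of doing work (per unit mass); arbitrary value
k_f = 1.0      # hypothetical forcing wavenumber, peaked near the origin

for nu in [1e-2, 1e-3, 1e-4, 1e-5, 1e-6]:
    k_d = (eps_W / nu**3) ** 0.25
    print(f"nu = {nu:7.1e}   k_d = {k_d:12.1f}   k_d/k_f = {k_d/k_f:12.1f}")
```

With $\varepsilon_W$ held fixed, each reduction of $\nu$ by a factor of ten increases $k_d$ by a factor of $10^{3/4}$, and it is the filling of this ever larger region of wavenumber space that takes ever longer.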

At this stage, when I try to discuss the infinite Reynolds number limit, people tend to get uneasy and talk about possible singularities or discontinuities. I don’t really think that there is any cause for such hand-wringing. You have to decide, first, which Navier-Stokes equation (NSE) you are using. There are two possibilities and they are identical; but we arrive at them by different routes.

If we arrive at the NSE by continuum mechanics, then in principle we can take the limit of zero viscosity without worry. After all, this is just a model of a real viscous fluid and, among other things, it is rigorously incompressible, which a real fluid is not. We accept that, in practice, it is the flow which is incompressible, not the fluid. So if the density variations are too small to detect, we can safely use the NSE.

If you come by the statistical physics route, then you must bound the smallest length scale (here the Kolmogorov dissipation length scale) such that it is orders of magnitude larger than inter-molecular distances. In practice, we may see the asymptotic behaviour associated with small viscosity arising long before there is any danger of breaching the continuum limit. For instance, if we look at the behaviour of the dimensionless dissipation rate as the Reynolds number is increased (see Fig. 1 of paper #6 in my list of recent papers), we are actually seeing the onset of the infinite Reynolds number limit. The accuracy of the determinations of $C_{\varepsilon,\infty}$ in this work is very decent, but if greater accuracy were required, then a bigger simulation would provide it. Just as in boundary-layer theory, it is all a matter of quite pragmatic considerations. I will give a more pedagogic discussion of this topic in a future post.




A first look at Kolmogorov (1941)

A first look at Kolmogorov (1941)

Around the turn of the new millennium, I attended the PhD oral of one of my own students for the last time as Internal Examiner. After that the regulations were changed; or perhaps it was frowned on for the supervisor to also be the Internal. Later still, I stopped attending in any capacity: I think the rule became that the student had to invite their supervisor if they wanted them to attend. Is this an improvement on the previous system? Actually, my own PhD oral was conducted by David Leslie, who had previously been my second supervisor, and Sam Edwards, who was my first supervisor! The three of us had had many discussions of my work in the past, so the atmosphere was informal and friendly. But I don’t think the examination lacked rigour, and I suppose it would have been difficult to find anyone else in the UK who could have acted as external examiner.

However, back to my own last stint as Internal. The candidate was a graduate with joint honours in maths and computer science. He was a very able young man and did good work, but he was not a physicist and never quite engaged with the physics. So when the External asked him if he could derive the Kolmogorov spectrum, he said `No’, then added pertly `Can you?’ Alas, the External was unable to do so. Fortunately the Internal was able to go to the blackboard and do the needful. The External was quite a well-known member of the turbulence community, so we will spare his blushes. Yet it left me wondering: how many turbulence researchers could sit down and derive the Kolmogorov energy spectrum, or equivalently the second-order structure function, without consulting a book? For any such benighted souls, I will now offer a crib. Virtue should be its own reward, but in the process of putting this together, I think I have found the answer to something that had puzzled me. I will return to that at the end of this post.

For simplicity, let’s work with the second-order structure function $S_2(r)$. This is what Kolmogorov did: the form for the energy spectrum came later. Glossing over the physical justification, we consider the question: how do we express $S_2(r)$ in terms of the dissipation rate $\varepsilon$ and the distance between measuring points $r$, for some intermediate range of values of $r$?

The first thing to notice is that $S_2$ has dimensions of velocity squared (or energy per unit mass: we won’t keep repeating this) and that the dissipation is the rate of change of the energy with time. It follows that $S_2$ depends on the inverse of time squared whereas dissipation depends on the inverse of time cubed. Hence, the structure function must depend on the dissipation to the power of $2/3$. Or,

\[S_2(r) \sim \varepsilon^{2/3}.\]

This is the Kolmogorov result. Put in its most general form: if you seek to express the energy in terms of the dissipation, inertial transfer, eddy-decay rate, or any other rate of change, you must have a two-thirds power from the need to have consistency of the time dimension across both sides of the equation.

Now what happens when we tidy up the dimensions of length? On the right hand side of the equation, we now have the dimensions of length to the power of $4/3$. In order to make this consistent with $S_2$ on the left hand side, we must multiply by a length to the power of $2/3$. From Kolmogorov (1941), this length must be $r$, and if we put a constant $C$ in front, we recover the well-known K41 result

\[S_2(r) = C r^{2/3}\varepsilon^{2/3}.\]
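As a quick check on the bookkeeping, written out in symbols:

\[ [S_2] = \mathrm{L}^2\mathrm{T}^{-2}, \qquad [\varepsilon^{2/3} r^{2/3}] = \left(\mathrm{L}^2\mathrm{T}^{-3}\right)^{2/3}\mathrm{L}^{2/3} = \mathrm{L}^2\mathrm{T}^{-2}, \]

so both sides do indeed have the dimensions of velocity squared, as required.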

If, however, we think that it might also depend on another length, then we only have available some length characteristic of the size of the system, say $L_{ext}$. If we include this, then we must multiply the right-hand side by $L_{ext}^p r^m$, where $p+m=2/3$. In other words, the power of $r$ is no longer determined. This is, in effect, what Kolmogorov did in 1962, albeit by a more circuitous route. And, in the process, he threw away his entire theory, which was based on the idea that the many steps of the Richardson cascade would lead to a universal result at small scales. In Kolmogorov (1962) that does not happen: the final result depends on the physical size of the system.

Let us now hark back to what had puzzled me. In a previous post I mentioned a contumacious referee. In fact this individual kept asserting that `$r^{2/3}$ is not Kolmogorov’. We pressed him to explain but it was clear that he had found his excuse for rejecting the paper and wasn’t prepared to be more helpful (or indeed scholarly). As our paper contained a discussion of the fact that the extended scale similarity technique gave the two-thirds law as an artifact in the dissipation range, it is possible that he was actually agreeing with us! However, taking his comment as a general statement, I would be inclined to agree with it. From the discussion we have given above, it should be clear that it is the dependence on the dissipation rate to the two-thirds power that is actually Kolmogorov. For anyone interested, the paper is Number 7 in the list of my recent papers given on this website.




The energy balance equation: or what’s in a name?

The energy balance equation: or what’s in a name?

Over the last few years I have noticed that the Karman-Howarth equation is sometimes referred to as the `scale-by-scale energy budget equation’. Having thought about it carefully, I have concluded that I understand that description; but I think the mere fact that one has to think carefully is a disadvantage. To native speakers of English, the term `budget’ suggests some sort of forward planning. Actually I think that in physics the more correct term would be the local energy balance equation. Let us consider the form of the KHE when it is written in terms of the second-order and third-order structure functions, thus:

\[0=-\frac{2}{3}\frac{\partial E}{\partial t} + \frac{1}{2}\frac{\partial S_2}{\partial t} + \frac{1}{6r^4}\frac{\partial}{\partial r}\left(r^4 S_3\right) - \frac{\nu}{r^4}\frac{\partial}{\partial r}\left(r^4\frac{\partial S_2}{\partial r}\right). \]

Note that all notation and background for this post will be found in my (2014) book on HIT. Also, I have moved the term involving the total energy (per unit mass) to the right of the equal sign, for a reason which will become obvious.

More recently I have seen exactly the same phrase used to describe the Lin equation, which is just the Fourier transform of the KHE to wavenumber space. This strikes me as even more surprising, but again I don’t want to say that it is actually wrong. Indeed in one sense I rather welcome it, because it makes it clear that the concept of scale belongs equally to wavenumber space. It can be all too easy to fall into a usage in which real space is regarded as `scale space’ and is distinguished in that way from wavenumber space. But the real problem here is that it is only valid for the simplest form of the Lin equation, and this in itself can be misleading.

Let us now consider the Lin equation in terms of the energy spectrum and the transfer spectrum. We may write this in its well-known form:

\[\left(\frac{\partial}{\partial t} + 2\nu k^2\right)E(k,t) = T(k,t).\]

Here, as with the KHE, we assume that there are no forces acting.

However, unlike the KHE, this is not the whole story. We may also express the transfer spectrum in terms of its spectral density, thus:

\[T(k,t) = \int_0^\infty\, dj \,S(k,j;t).\]

When we substitute this in, we obtain the second form of the Lin equation, and this is actually more comparable with the KHE as given above, because the transfer spectrum density contains the Fourier transform of the third-order structure function, which of course occurs explicitly in the KHE.
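Written out, this second form is simply

\[ \left(\frac{\partial}{\partial t} + 2\nu k^2\right)E(k,t) = \int_0^\infty dj\, S(k,j;t), \]

which makes the mode coupling explicit: the evolution of $E(k,t)$ involves the spectral density $S(k,j;t)$ for every value of $j$.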

Now compare the two equations. The KHE holds for any value of the independent variable. If we take some particular value of the independent variable, then each term can be evaluated as a number corresponding to that value of $r$, and the above equation becomes a set of four numbers adding up to zero. If we consider another value of $r$, then we have a different four numbers but they must still add up to zero. In short, KHE is local in the independent variable.

The Lin equation, if we write it in its full form, tells us that all the Fourier modes are coupled to each other. It is, in the language of physics, an example of the many body problem. It is in fact highly non-local as in principle it couples every mode to every other mode.

A corollary of this is that the KHE does not predict a cascade. But the Lin equation does. This can be deduced from the nonlinear term, which couples all the modes together, combined with the presence of the viscous term, which is symmetry-breaking. If the viscous term were set equal to zero, then the coupled but inviscid equation would yield equipartition states.
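For completeness (this is standard material rather than anything of mine): for the truncated inviscid system in three dimensions, the absolute equilibrium spectrum takes the equipartition form

\[ E(k) \propto k^{2}, \]

in the absence of helicity, with zero net flux through the modes; it is the symmetry-breaking viscous sink at high wavenumbers that turns the mode coupling into a cascade.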

The well-known question at the head of this post is rhetorical and expects the answer `A rose by any other name would smell as sweet’. But I’m afraid that Juliet’s laissez-faire attitude to terminology would not be widely applicable. One thinks of the surgeon who fails to distinguish between the liver and the spleen. Or the pilot who thinks west is just as good a name for east. In the turbulence community, I suppose that `locality’ for `localness’, or `inverse’ for `reverse’ arise because they seem natural coinages to non-Anglophones. In the wider world, the classic case since the 1960s is Karl Popper’s idea that a scientific theory should be falsifiable. But in everyday English speech, to falsify means to make false. For instance, to falsify an entry in one’s accounts, means, to put it in the demotic, to cook the books!

I shall return to this point in future posts and in particular to the localness of the KHE.




Wavenumber Murder and other grisly tales

Wavenumber Murder and other grisly tales.

When I was first at Edinburgh, I worked on developing a theory of turbulent drag reduction by additives. But, instead of considering polymers, I studied the much less well-known phenomenon involving macroscopic fibres. This was because it seemed to me that the fibres were probably of a length which was comparable to the size of the smallest turbulent eddies. It also seemed to me that the interaction between fibre and eddy would be two-dimensional and that it might be possible to formulate an explanation of turbulent drag reduction on mainly geometrical grounds. In particular, I had in mind that two-dimensional eddies could have a reverse cascade, with the energy being transferred from high wavenumbers to low. That is, the reverse (but not inverse) of the usual process. In this way drag might be reduced.

I derived a simple model for this process, and a letter describing it was published by Nature Physical Science in 1974. So far so good. Then I set to work writing the theory up in more detail and submitted it to the JFM. The results were not so good this time, and I had three referees’ reports to consider. At least, George Batchelor did not feel the need to suppress any of the reports on the grounds of it being too offensive (someone I knew actually had this experience). But still, they were pretty bad.

No doubt this was salutary. I didn’t dissent from the view that the paper should be rejected. In fact I dismantled it into several much better papers and got them published elsewhere. But what sticks in my mind even yet is the referee who wrote: `The author commits the usual wavenumber murder. Who knows what unphysical assumptions are being made under the cover of wavenumber space?’

Well, that’s for me to know and you to find out, perhaps! Of course, now that I am older (a lot) and wiser (a little), I realise that I could have played it better. I could have written up the use of Fourier methods, quoted Batchelor’s book extensively, and thus made it very difficult for the referee to respond in that rather childish way. But why would that even occur to me? I was used at that stage to turbulence theorists who moved straight into wavenumber space without seeing any need to justify it. This is a cultural factor. Theoretical physicists are used to operating in momentum space which, give or take Planck’s constant, is just wavenumber space in disguise. Anyway, at the time I was surprised and disappointed that the editor did not at least intervene on this particular point.

I actually found that referee’s reaction quite shocking, but in one form or another I was to encounter it occasionally over the years, until at last it seemed to die out. Partly, I would guess, this could be attributed to the growth of DNS, with its dependence on spectral methods. Also, I think it could be due to better educated individuals becoming attracted to the study of turbulence.

Anyway, a few years ago, and just when I thought it was safe to mention spectral methods again, I made a big mistake. I had written (with three co-authors) a paper in which we used spectral methods to evaluate the exponents associated with real-space structure functions. It had been increasingly believed that the inertial-range exponents departed from the Kolmogorov (1941) forms, with the departure growing with both order and Reynolds number, although it had been realised that this could be attributed to systematic experimental error. So we had used a standard method of experimental physics to reduce systematic error and found that the exponent for the second-order structure function in fact tended to the Kolmogorov canonical form as the Reynolds number was increased. This is precisely the sort of result that merits a short communication and accordingly we submitted it as such. One of the referees was contumacious (and I may come back to him in a later blog), the other was broadly favourable but seemed rather nervous about various points. However, when we had responded to his various points, he wanted one or two more changes and then he would recommend it for publication. At the same time, he commented that he really did wish that we hadn’t used spectral methods.

This was where I made my big mistake. Overcome by kindly feelings towards this referee, and obeying my pedagogical instincts, I tried to reassure him. I pointed out that he was quite happy with the pseudo-spectral method of DNS, in which the convolution sums in wavenumber space are evaluated more economically in real space and then transformed back into wavenumber space. Now, I said, we are employing the same technique, but the other way round. We are evaluating the convolutions determining the structure function in real space by going into wavenumber space. The response had a petulant tone. We were, he said, talking nonsense. The structure functions did not involve convolution integrals and he was rejecting the paper as mathematically unsound!

Later on we wrote up a longer version of the work and it was published: see #7 in the list of recent papers on this site. Appendix A is the place to look for the maths which bewildered the poor benighted referee. While accepting that this degree of detail was not given in the short communication, what is one to make of a referee who is unaware that a structure function can be expressed in terms of a correlation function and that the latter is a convolution integral?
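For anyone who would like to see the point in the simplest possible setting, here is a toy one-dimensional sketch of my own (not the calculation in the paper): the structure function computed directly in real space agrees, to round-off error, with the one obtained by going through wavenumber space via the correlation function.

```python
# Toy illustration: for a periodic signal u, the second-order structure
# function S_2(r) = <(u(x+r) - u(x))^2> satisfies S_2(r) = 2[C(0) - C(r)],
# where C(r) = <u(x)u(x+r)> is the correlation, and C(r) can be computed
# spectrally (Wiener-Khinchin) using FFTs.
import numpy as np

rng = np.random.default_rng(0)
N = 256
u = rng.standard_normal(N)
u -= u.mean()

# Direct (real-space) evaluation, assuming periodicity for simplicity.
S2_direct = np.array([np.mean((np.roll(u, -r) - u) ** 2) for r in range(N)])

# Spectral route: correlation from the periodogram, then S_2 = 2[C(0) - C(r)].
spectrum = np.abs(np.fft.fft(u)) ** 2 / N
C = np.fft.ifft(spectrum).real
S2_spectral = 2.0 * (C[0] - C)

print(np.allclose(S2_direct, S2_spectral))   # True, to round-off error
```

The three-dimensional, isotropic version of this relation is of course what is actually needed, but the principle is the same.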

Both referees were frightened of Fourier methods and between them almost seem to have bookended my career. But referees who are comprehensively out of their depth have not been a rare phenomenon over the years. The forms which this inadequacy takes have been many and varied and I shall probably be dipping into my extensive rogues’ gallery in future posts. There is also the question of the editor’s role in finding referees who are actually qualified to referee a specific manuscript, and this too seems a fit subject for further enquiry. However, I should finish by pointing out that being on the receiving end of inadequate refereeing is not exclusively my problem.

In the first half of 1999, the Isaac Newton Institute held a workshop on turbulence. During the opening week, we saw famous name after famous name go up to the podium to give a talk, which almost invariably ended with `and so I sent it off to Physica D instead’. This last was received with understanding nods and smiles by an audience who were clearly familiar with the idea. This quite cheered me up: it seemed that I was not alone. At the same time, the sheer waste of time and energy involved seemed quite shocking. It prompted the thought: is it the turbulence community that is the problem, rather than the turbulence? That is something to consider further in future posts.




HIT: Do three-letter acronyms always win out?

HIT: Do three-letter acronyms always win out?

In 1997, I visited Delft Technical University and while I was there gave a course of lectures on turbulence theory. During these lectures, I mentioned that nowadays people seemed to refer to homogeneous, isotropic turbulence; whereas, when I started out, it was commonplace to simply say isotropic turbulence. The homogeneity was assumed, as a necessary condition for the isotropy. After the morning session, when we were making our way back for lunch, the postgrads who were attending said to me: `Three-letter acronyms always win out!’ Naturally, I pooh-poohed this, but many years on, I have to confess that I use the three-word name of the subject (it was the title of my 2014 book) and the acronym as well. Sometimes it is just a matter of euphony. But does it do any harm? Well, that’s an interesting question, but for the moment let us make a short digression.

In recent years I have been thinking a little about cosmology (well, it makes a change from turbulence) and have learned about the cosmological principle, which states that the universe is both homogeneous and isotropic. Homogeneous means that its properties are independent of position and isotropic means that its properties are independent of orientation. In everyday life, one might think of a piece of metal or plastic being homogeneous and isotropic, in contrast to wood, which has a grain. So naturally when I step out into my back garden in the evening, I can observe this for myself … or rather, I can’t. Actually the night sky looks anything but homogeneous, let alone isotropic. Are the cosmologists deluded?

The answer lies in the fact that the cosmological principle applies to averaged properties. Apparently it is necessary to take averages over huge volumes of space, each of which contains vast numbers of galaxies, for the concepts of homogeneity and isotropy to apply. Evidently, to paraphrase J. B. S. Haldane (and following in the footsteps of Werner Heisenberg), the universe is not only bigger than we think, it is bigger than we can think. So, if I want to behave like an idiot, I should just go about proclaiming: `The cosmologists are mad. You only have to look up at the night sky to see that their claims about the uniformity of the universe are completely unjustified.’ In doing so, I would be ignoring the details of what the cosmologists actually said, and surely no one would be so silly as to do that before launching into speech? Well, in turbulence that is exactly what many people do.

In turbulence, for many years we have had flow visualisations based on direct numerical simulation of the equations of fluid motion. These undoubtedly show a spotty distribution of various characteristics of interest, especially the dissipation rate, and this is generally taken as supporting the idea that turbulence intermittency has implications for statistical theories. Indeed, there are those who go further and see results like this as invalidating assumptions of homogeneity and isotropy. What they leave out of the reckoning is: first, that homogeneity and isotropy are properties of average quantities, in turbulence as in cosmology; secondly, that the flow visualisations are snapshots or single realisations. If you average over them, the spottiness disappears, as indeed it has to, in order to conform to homogeneity and isotropy, and the field becomes uniform and without structure.
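A toy illustration of the second point (entirely schematic: Gaussian noise stands in for a homogeneous random field, and a squared gradient stands in for a dissipation-like quantity):

```python
# A single realization of the surrogate dissipation is very spotty, but the
# ensemble average over many independent realizations is nearly uniform.
import numpy as np

rng = np.random.default_rng(1)
M, N = 2000, 64                                 # ensemble size, grid points
fields = rng.standard_normal((M, N))            # M independent homogeneous realizations
proxy = np.gradient(fields, axis=1) ** 2        # spotty, dissipation-like surrogate

snapshot = proxy[0]                             # one realization
ensemble_mean = proxy.mean(axis=0)              # average over the ensemble at each point

print(snapshot.std() / snapshot.mean())         # order one: a single snapshot is spotty
print(ensemble_mean.std() / ensemble_mean.mean())   # small: the average is nearly uniform
```

Of course this is not turbulence; it merely illustrates that spottiness in a single realization says nothing about homogeneity, which is a property of the averages.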

If we go to the fountainhead for this subject, in Batchelor’s classic monograph on page 3 we may read: `The possibility of this further assumption of isotropy exists only when the turbulence is already homogeneous, for certain directions would be preferred by a lack of homogeneity’. Batchelor also points out that homogeneity and isotropy are average properties of the random variable, and in fact they are defined formally in terms of the probability distribution functional (the pdf, or equivalently its moments).

So this is where I answer my own question. It does matter. For clear thinking and the best possible understanding, we need to be careful about the fact that homogeneity is a necessary condition for isotropy. In the process we have to be careful about definitions. In that way one can perhaps avoid the egregious errors which occur in a recent paper, where it is argued that intermittency at the small scales is incompatible with homogeneity and so invalidates the energy-balance equation derived rigorously by averaging the equations of motion. Actually, intermittency is present at all scales and is part of the exact solution of the equations of motion. It is not in any way incompatible with the pdf, which must take a form appropriate to the intermittent (single-realization characteristic) and homogeneous (ensemble-averaged characteristic) nature of the random field. We shall return to this publication in a more specific way in later posts.




The First Post

The First Post

Many years ago, early in my career, I learned the hard way that every paper submitted for publication should be ruthlessly pared down to consist solely of factual material and fully justified statements. Any personal opinions, speculations, whimsical thoughts, comments or suchlike should be eliminated; as, in the words of Francis Bacon, they would offer `hostages to fortune’. That is, there would be at least one referee who would make such an opinion (suitably misinterpreted!) the basis for outright rejection of the manuscript, probably accompanied by gratuitously offensive comments. This of course raises questions about the role of the editor in this increasingly fraught process of peer review, and that is something to which I shall return in future blogs.
In the middle period of my career, I would occasionally receive a referee’s report which expressed regret that I had not included more of my own views, and indicated that they would be welcome. My response to this was `No fear’, to use an expression from my remote childhood.

Recently I gave in to the temptation to do just that and, in what might well be my last journal submission (rejected by four different journals), I sweepingly dismissed both the Kolmogorov (1962) `revised theory’ and Landau’s criticism of the Kolmogorov (1941) theory, without explaining why. I suppose I was relying on the critique published in my book of 2014. But they were seized upon by one referee to reject the paper, followed by the patronizing comment `Need I say more’. Well, actually what he needed to do was to say less and to think more. That too is something to which I shall return in future blogs.

Evidently my self-imposed constraints are beginning to chafe! So, as a blog (if it is to be of any value in offering clarification or stimulus) should in fact consist very largely of the things that I have omitted from papers, the temptation to blog is clear. As I began my postgraduate research in 1966, I am now in my forty-fifth year of turbulence research, so there should be no lack of material. Oh, and it should also be both pithy and hard-hitting. You have been warned.