Intermittency corrections (sic) and the perversity of group think


In The Times of 11 January this year, there was a report by their Science Editor which had the title Expert’s lonely 30-year quest for Alzheimer’s cure offers new hope. Senile dementia is the curse of the age (even if temporarily eclipsed by the coronavirus) and the article tells how in 1905 Alois Alzheimer made a post mortem examination of the brain of a woman who in her later years had become confused and forgetful. He found two pathological features: one consisted of clumps of plaques of a protein called beta amyloid, and the other consisted of sticky tangles of a different protein, later identified by a Professor Claude Wischik as a protein called tau.

Now, with two possible causes, you might imagine that researchers in the field would be interested in both. But you would be wrong. It seems that the community targeted the beta amyloid cause and for many years neglected the other possibility. Now, after decades of failure, the major pharmaceutical companies are developing anti-tau drugs. Even if none of these proves to be the magic bullet, it seems a healthier situation in which both pathologies (and the possible interaction between them) are being studied. The article ends on a note of moderate optimism, but the question remains: why was the research skewed towards just the one possibility? The article seems to suggest that this may have been because beta amyloid was already known and possibly implicated in another pathology. As always, in applied research there is a temptation to go for the `quick and dirty’ solution!

The behaviour of the researchers pursuing the beta amyloid option (to the exclusion of the equally possible tau option) exhibits some of the characteristics of what psychologists call group think. A similar phenomenon has been part of fundamental research on turbulence for at least five decades. As is well known, it started with a remark by Landau about the Kolmogorov (1941) theory, or K41 for short. This criticism is based on the idea that intermittency of the dissipation rate has implications for the K41 theory, despite the fact that the physical basis of that theory is the inertial transfer rate, which is sometimes equal to the dissipation rate. This criticism, along with various others, is discussed in Chapters 4 and 6 of my 2014 book on turbulence and I will not consider it further here. All I wish to note is that there has been an ongoing body of work on so-called intermittency corrections, and the strange thing is that more obvious corrections have been largely neglected until quite recent times. Let us now expand on that.

Essentially, Kolmogorov used Richardson’s concept of the cascade to argue that energy transfer would proceed by a stepwise process from large scales (the production range) to small scales, and that this would result in a universal form for the structure functions at these small scales. Furthermore, for large Reynolds numbers, the effect of the viscosity would only be appreciable at very small scales, and there would be an intermediate subrange of scales where the local excitation would be controlled by inertial transfer into the subrange from the large scales and inertial transfer out of the subrange to the small scales, where the energy would be dissipated by viscous effects.

At this point, I should enter a small caveat. I feel quite uncomfortable with what I have just written. The physical concept of the cascade is rather ill-defined in real space. I would be much happier talking in terms of wavenumber space, where the cascade is well defined and the key concept is scale-invariance of the inertial flux. This fact was recognized by Obukhov (1941), by Onsager (1945) and by Batchelor (1947), and after that very widely. It is rather as if Kolmogorov, in choosing to work in real space, had opted for Betamax rather than VHS!

However, ignoring my quibbles, in either space one point is clear: this is an approximate theory. Either $S_2 \sim \varepsilon^{2/3}r^{2/3}$ or $E(k) \sim \varepsilon^{2/3}k^{-5/3}$ is only asymptotically valid in the limit of infinite Reynolds numbers. Under all other circumstances, there must be corrections due to finite-Reynolds-number (FRN) effects. These corrections may be small enough to ignore: bear in mind that, on various measures, an effectively infinite Reynolds number is not all that large. There is certainly no need to worry about zero viscosity (pace Onsager and his hagiographers)! We shall return to this specific point in later posts.
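To make the slow approach to this limit concrete, here is a minimal numerical sketch in Python. It uses a toy model spectrum chosen purely for illustration (a von Kármán-type energy-containing range, a $k^{-5/3}$ inertial range and a simple exponential dissipation cutoff, with illustrative constants; it is not any particular published parametrisation), and estimates the local spectral slope at the middle of the would-be inertial range as the scale separation is increased:

```python
import numpy as np

# Toy model spectrum (illustrative only, not any particular published form):
# a von Karman-type energy-containing range, a k^(-5/3) inertial range, and a
# simple exponential dissipation cutoff.  C and beta are illustrative constants.
def model_spectrum(k, L, eta, C=1.5, beta=5.2):
    energy_range = (k * L / np.sqrt(1.0 + (k * L) ** 2)) ** (17.0 / 3.0)
    inertial = C * k ** (-5.0 / 3.0)      # the factor of eps^(2/3) is absorbed into C
    dissipation = np.exp(-beta * k * eta)
    return energy_range * inertial * dissipation

def local_slope(k, L, eta, h=1e-4):
    """Logarithmic slope d ln E / d ln k at wavenumber k (central difference)."""
    return (np.log(model_spectrum(k * np.exp(h), L, eta))
            - np.log(model_spectrum(k * np.exp(-h), L, eta))) / (2.0 * h)

# Evaluate the slope at the geometric mid-point of the would-be inertial range.
# Only as the scale separation L/eta (a proxy for the Reynolds number) becomes
# very large does the slope settle down towards the asymptotic value of -5/3.
for scale_sep in (1e2, 1e4, 1e6, 1e8):
    L, eta = 1.0, 1.0 / scale_sep
    k_mid = 1.0 / np.sqrt(L * eta)
    print(f"L/eta = {scale_sep:.0e}:  slope at mid-range = {local_slope(k_mid, L, eta):+.4f}")
```

At modest scale separations the dissipation cutoff (and, to a lesser extent, the energy-containing range) still contaminates the middle of the range, which is exactly the kind of FRN effect at issue.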

The response of Kolmogorov to Landau’s criticism was the somewhat ad hoc K62, in which the retention of the specific effect of the large scales of the system (in both structure functions and spectra) completely reversed the original assumption of the stepwise cascade leading to universal behaviour. For reasons that are far from clear to me, this sparked off a positive industry of intermittency corrections, anomalous exponents and various improvements (sic) on Kolmogorov, which lasts to this day. In contrast, from the late 1990s, increasing attention, both experimental and theoretical, has been given to FRN effects, and in particular to the way in which they have been ignored in assessing the evidence for anomalous exponents and suchlike. We may highlight the situation in the field by contrasting two major papers, both published in leading learned journals within the last year.

The first of these is by Tang et al. [1], who note in their abstract that K62 `has been embraced by an overwhelming majority of turbulence researchers.’ This paper is one in a series in which this group has investigated the alternative effect of finite-Reynolds-number corrections. In addition to their own analysis, they also cite many papers from recent years which support their conclusion that the failure to account for FRN effects has `almost invariably been mistaken for the intermittency effect’. In the main body of their paper, they express themselves even more forcibly. In contrast, the paper by Dubrulle [2], which is very much in the K62 camp, so to speak, cites not a single reference to FRN effects. Instead the author argues that small-scale intermittency is incompatible with homogeneity, and makes the radical proposal that the Kármán-Howarth equation should be replaced by a weak form which takes account of singularities. At this point one takes leave of continuum mechanics and much else besides! If we consult Batchelor’s book, we find that homogeneity is defined in terms of mean quantities and is therefore entirely compatible with intermittency of the velocity field, which is nowadays understood to be present at all scales.

I was tempted to say that it is difficult to imagine such a fundamental gulf in any subject other than turbulence, but then that’s where we came in!

[1] S. Tang, R. A. Antonia, L. Djenidi, and Y. Zhou. Phys. Rev. Fluids 4, 024607 (2019).

[2] B. Dubrulle. J. Fluid Mech. 867, P1 (2019).




Bad proofs and `curate’s egg’ theories


At about the time I took up my appointment at Edinburgh, I heard about a pure mathematician who wanted to be remembered for his bad proofs. Some years later I read his obituary in The Times and this fact was mentioned again. I had thought that I had kept the cutting but it seems not, so I’m afraid that I don’t remember his name. But I do remember what was meant by the term `bad proofs’. This man’s view was that many proofs in mathematics have been polished by various hands over the years and he wanted to be remembered for his originality. His proofs would be unpolished and hence seen as original.

The choice of the word `bad’ is interesting, in view of its pejorative overtones. I would be inclined to think that the original proof would at least be valid and hence not to be described as bad. Perhaps later, more elegant versions of the proof would emphasise the unpolished nature of the original. Hence, perhaps `rough’ might be a better description. Presumably the word `bad’ was chosen to emphasise the paradoxical nature of the statement. Well, at least he is being remembered for his quirky assertion about what he wanted to be remembered for.

For some time I have wondered whether there is an analogous term for turbulence theories, by which I mean attempts to solve the statistical closure problem. This was originally formulated by Reynolds for pipe flow, but as usual we will consider it here as applied to isotropic turbulence. Obviously `bad’ is no good, because we do not have the paradoxical juxtaposition that we have with the word `proof’, which in itself indicates success and is therefore certainly not bad. One obvious possibility would be `rough’, but somehow that does not appeal. `Rough theories’ does not sound good. In fact it sounds bad.

Recently I came up with the idea of the `curate’s egg’ theories, meaning `good in parts’. This saying stems from a cartoon which appeared in the British humorous magazine Punch in 1895. It shows a nervous curate breakfasting with the bishop. The bishop expresses concern that the curate’s egg is not a good one. The curate, anxious not to make a fuss, bravely asserts that his egg is `good in parts’. The term passed into everyday speech and was still current when I was young. In the 1960s I was commuting regularly by train, and I would buy Punch to read on the journey. On one occasion there was a commemorative issue and a facsimile of the original cartoon was reproduced, so I was interested to see the origin of the phrase. We didn’t have Google in those days!

The reason that I think such a term might be helpful is that many members of the turbulence community seem to see a theory as being either right or wrong. And if it’s deemed to be wrong, then it should be dismissed and never considered again. A striking example of this kind of thing arose a few years ago when I was trying to get a paper on the LET theory published (see #10 in the list of recent papers) and it had gone to arbitration. The Associate Editor who was consulted turned the paper down because `this is the sort of stuff Kraichnan did and everybody has known for the last twenty years that it’s wrong’.

This decision was easily overturned. The sheer idiocy of the proposition that, because one person had tackled a problem and failed, other people should be barred from making further attempts, ensured that. But what interests me is the fact that Kraichnan’s work is reduced to `the sort of stuff’ and regarded as `wrong’. This was done by someone who was an applied mathematician and not a theoretical physicist. I am not a betting man, but I would put a small amount of money on the assumption that this referee had very little knowledge of Kraichnan’s vast output, and was relying on hearsay for his opinion. I understand the difficulties facing anyone from an engineering background in trying to get to grips with this type of many-body or field theory, although there are accessible treatments available. But if you are unable to understand this work in detail, then it is unlikely that you are qualified to referee it.

If we take an example from physics, in critical phenomena (e.g. the transition from para- to ferromagnetism) the subject was dominated by mean-field theory up until the early 1970s, when the renormalization group (RG) was applied to critical phenomena. This does not mean that mean-field theory was immediately dismissed. In fact it is still taught in undergraduate courses. Prior to RG there was a balanced understanding of the limitations and successes of mean-field theory, and no one ever thought of it as `right’, with the corollary that no one now dismisses it as simply `wrong’.

I know what I would like to have for other subjects, such as cosmology, particle theory or indeed musical theory. I would like to be able to read a simple account which explains the state of play, without going into too much detail. That is what I intend to provide for statistical theories of turbulence in future posts. In my view, most theories of turbulence can be regarded as `curate’s eggs’: they have both good and bad aspects. The important thing is that those working in the field of turbulence should have some understanding of the situation and should appreciate the importance of further research in this area.




The infinite-Reynolds number limit: a first look


I notice that MSRI at Berkeley have a programme next year on math problems in fluid dynamics. The primary component seems to be an examination of the relationship between the Euler and Navier-Stokes equations, `in the zero-viscosity limit’. The latter is, of course, the same as the limit of infinite Reynolds numbers, provided that the limit is taken in the same way and with the same constraints. I think that it is a failure to appreciate this proviso that has resulted in the concept becoming something of a vexed question over the years. Yet it was clearly explained by Batchelor in 1953 and elegantly re-formulated by Edwards in 1965. As a result, a group of theorists has been quite happy with the concept, but many other workers in the field seem to be uneasy.

I first became aware of this when talking to Bob Kraichnan at a meeting in 1984. When I used the term, his reaction surprised me. He began to hold forth on the subject. He said that people were `frightened’ of the idea of the infinite-Reynolds-number limit. Rather defensively, I said that I wasn’t frightened by it. His reply was: `Oh, I know that you aren’t, but you would be surprised at the number of people who are!’ Since then I have indeed been surprised by how often you get a comment from a referee which goes something like: `The authors take the infinite-Re limit … but of course you cannot really have zero viscosity, can you?’ This rather nervous addendum suggests strongly that the referee does not understand the concept of a limit.

Well, one thing I would claim to understand is the idea of a limit in mathematical analysis. This is because the first class of my school course on calculus dealt with nothing else. I can remember that class period clearly, even though it was about sixty-five years ago. One example that our maths master gave was to imagine that you were cutting up your twelve-inch ruler, which was standard in those days. You cut it into two identical pieces in a perfect cutting process, with no waste. Then you put one piece over to your right-hand side, and now cut the left-hand piece into two identical pieces. One of these you put over to the right-hand side, and add it on to the six-inch piece already there, to make a nine-inch ruler. The remaining piece you again cut into two, and move half over to make a ten-and-a-half-inch ruler. However much you repeat this process, the ruler will approach but never reach twelve inches again. In other words, twelve inches is the limit and you can only approach it asymptotically.
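For anyone who prefers to see the arithmetic, here is a trivial sketch in Python of the same halving process (the numbers are just those of the ruler example above):

```python
# The ruler example as arithmetic: at each step, cut the remaining piece in half
# and move one half across to the growing ruler.  The total approaches, but never
# reaches, twelve inches: twelve is the limit.
remaining = 12.0
ruler = 0.0
for step in range(1, 11):
    remaining /= 2.0          # cut the remaining piece into two identical halves
    ruler += remaining        # move one half across to the right-hand side
    print(f"after cut {step:2d}: ruler = {ruler:.6f} in, shortfall = {12.0 - ruler:.6f} in")
```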

Suppose we carry out a similar thought experiment on turbulence (although you could actually do this, most readily by DNS). What we are going to do is to stir a fluid in order to produce stationary, isotropic turbulence. Now at this stage, we don’t even think about dissipation. We are trying to drive a dynamical system and we start by specifying the forcing in terms of the rate of doing work on the fluid. We call this quantity $\varepsilon_W$ and it is fixed. Next, our dynamical system is fully specified once we choose the boundary conditions and the kinematic viscosity $\nu$. Accordingly, provided the forcing spectrum is peaked near the origin in wavenumber space, and there has been an appropriate choice of the initial value of the kinematic viscosity, energy will enter the system at low wavenumbers, be transferred by conservative inertial processes to higher wavenumbers, and ultimately be dissipated at the highest excited wavenumbers. Once the system becomes stationary, the dissipation rate must be equal to the rate of doing work, and so the Kolmogorov dissipation wavenumber is given by $k_d = (\varepsilon_W /\nu^3)^{1/4}$.

Now let us carry out a sequence of experiments in which $\varepsilon_W$ remains fixed, but we progressively reduce the value of the kinematic viscosity. In each experiment, the viscosity is smaller and the dissipation wavenumber is larger. Therefore there is a greater volume of wavenumber space and it will take longer to fill with energy. Ultimately, corresponding to the limiting case, we have an infinite volume of wavenumber space, and the system will take an infinite time to reach stationarity and in principle will contain an infinite amount of energy. Note that this is not a catastrophe! In continuum problems, a catastrophe arises when you get an infinite density of some kind. Here the work, transfer and dissipation rates are the densities of the problem, and they are perfectly well behaved.
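A minimal numerical sketch in Python makes the point (the value of $\varepsilon_W$ is purely illustrative): holding the work rate fixed and reducing the viscosity drives the dissipation wavenumber, and hence the volume of wavenumber space to be filled, up without bound.

```python
# Sequence of thought-experiments: hold the rate of doing work eps_W fixed and
# reduce the kinematic viscosity nu step by step.  The Kolmogorov dissipation
# wavenumber k_d = (eps_W / nu^3)^(1/4) then grows without bound.
eps_W = 0.1                                  # fixed rate of doing work (illustrative value)
for nu in (1e-2, 1e-3, 1e-4, 1e-5, 1e-6):    # progressively smaller viscosities
    k_d = (eps_W / nu ** 3) ** 0.25
    print(f"nu = {nu:.0e}  ->  k_d = {k_d:.3e}")
```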

At this stage, when I try to discuss the infinite-Reynolds-number limit, people tend to get uneasy and talk about possible singularities or discontinuities. I don’t really think that there is any cause for such hand-wringing. You have to decide first which Navier-Stokes equation (NSE) you are using. There are two possibilities and they are identical, but we arrive at them by different routes.

If we arrive at the NSE by continuum mechanics, then in principle we can take the limit of zero viscosity without worry. After all, this is just a model of a real viscous fluid and, among other things, it is rigorously incompressible, which a real fluid is not. We accept that, in practice, it is the flow which is incompressible, not the fluid. So if the density variations are too small to detect, we can safely use the NSE.

If you come by the statistical physics route, then you must bound the smallest length scale (here the Kolmogorov dissipation length scale) such that it is orders of magnitude larger than inter-molecular distances. In practice, we may see the asymptotic behaviour associated with small viscosity arising long before there is any danger of breaching the continuum limit. For instance, if we look at the behaviour of the dimensionless dissipation rate as the Reynolds number is increased (see Fig. 1 of paper #6 in my list of recent papers), we are actually seeing the onset of the infinite-Reynolds-number limit. The accuracy of the determinations of $C_{\varepsilon,\infty}$ in this work is very decent, but if greater accuracy were required, then a bigger simulation would provide it. Just as in boundary-layer theory, it is all a matter of quite pragmatic considerations. I will give a more pedagogic discussion of this topic in a future post.
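As a rough sanity check on the statistical-physics proviso, here is a sketch in Python with round-number values for air (the viscosity and mean free path are standard textbook figures quoted approximately, and the dissipation rates are purely illustrative): even for quite vigorous dissipation, the Kolmogorov length remains orders of magnitude above the molecular mean free path.

```python
# Compare the Kolmogorov dissipation length eta = (nu^3 / eps)^(1/4) with the
# molecular mean free path, for air at roughly room temperature and pressure.
nu_air = 1.5e-5            # kinematic viscosity of air, m^2/s (approximate)
mean_free_path = 7e-8      # molecular mean free path of air, m (approximate)
for eps in (1e-2, 1e0, 1e2):               # dissipation rates in W/kg (illustrative)
    eta = (nu_air ** 3 / eps) ** 0.25
    print(f"eps = {eps:.0e} W/kg:  eta = {eta:.2e} m,"
          f"  eta / mean free path = {eta / mean_free_path:,.0f}")
```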




A first look at Kolmogorov (1941)


Around the turn of the new millennium, I attended the PhD oral of one of my own students for the last time as Internal Examiner. After that the regulations were changed, or perhaps it was frowned on for the supervisor to also be the Internal. Later still, I stopped attending in any capacity: I think it became the case that the student had to invite their supervisor if they wanted them to attend. Is this an improvement on the previous system? Actually, my own PhD oral was conducted by David Leslie, who had previously been my second supervisor, and Sam Edwards, who was my first supervisor! The three of us had had many discussions of my work in the past, so the atmosphere was informal and friendly. But I don’t think the examination lacked rigour, and I suppose it would have been difficult to find anyone else in the UK who could have acted as external examiner.

However, back to my own last stint as Internal. The candidate was a graduate with joint honours in maths and computer science. He was a very able young man and did good work, but he was not a physicist and never quite engaged with the physics. So when the External asked him if he could derive the Kolmogorov spectrum, he said `No’, then added pertly `Can you?’ Alas, the External was unable to do so. Fortunately the Internal was able to go to the blackboard and do the needful. The External was quite a well-known member of the turbulence community, so we will spare his blushes. Yet it left me wondering: how many turbulence researchers could sit down and derive the Kolmogorov energy spectrum, or equivalently the second-order structure function, without consulting a book? For any such benighted souls, I will now offer a crib. Virtue should be its own reward, but in the process of putting this together, I think I have found the answer to something that had puzzled me. I will return to that at the end of this post.

For simplicity, let’s work with the second-order structure function $S_2(r)$. This is what Kolmogorov did: the form for the energy spectrum came later. Glossing over the physical justification, we consider the question: how do we express $S_2(r)$ in terms of the dissipation rate $\varepsilon$ and the distance between measuring points $r$, for some intermediate range of values of $r$?

The first thing to notice is that $S_2$ has dimensions of velocity squared (or energy per unit mass: we won’t keep repeating this) and that the dissipation is the rate of change of the energy with time. It follows that $S_2$ depends on the inverse of time squared whereas dissipation depends on the inverse of time cubed. Hence, the structure function must depend on the dissipation to the power of $2/3$. Or,

\[S_2(r) \sim \varepsilon^{2/3}.\]

This is the Kolmogorov result. Put in its most general form: if you seek to express the energy in terms of the dissipation, inertial transfer, eddy-decay rate, or any other rate of change, you must have a two-thirds power, in order to have consistency of the time dimension on both sides of the equation.

Now what happens when we tidy up the dimensions of length? On the right hand side of the equation, we now have the dimensions of length to the power of $4/3$. In order to make this consistent with $S_2$ on the left hand side, we must multiply by a length to the power of $2/3$. From Kolmogorov (1941), this length must be $r$, and if we put a constant $C$ in front, we recover the well-known K41 result

\[S_2(r) = C r^{2/3}\varepsilon^{2/3}.\]

If, however, we think that it might also depend on another length, then we only have available some length characteristic of the size of the system, say $L_{ext}$. If we include this, then we must multiply the right-hand side by $L_{ext}^p r^m$, where $p+m=2/3$. In other words, the power of $r$ is no longer determined. This is, in effect, what Kolmogorov did in 1962, albeit by a more circuitous route. And, in the process, he threw away his entire theory, which was based on the idea that the many steps of the Richardson cascade would lead to a universal result at small scales. In Kolmogorov (1962) that does not happen: the final result depends on the physical size of the system.
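For anyone who likes to see the bookkeeping done explicitly, here is a minimal sketch in Python (using the sympy library; the symbol names are mine) which solves the dimensional balance for the exponents, first for K41 and then with the extra length $L_{ext}$ allowed:

```python
import sympy as sp

# Write S_2 ~ eps^a * L_ext^p * r^m and match powers of length and time:
# [S_2] = L^2 T^-2,  [eps] = L^2 T^-3,  [L_ext] = [r] = L.
a, p, m = sp.symbols('a p m')
time_balance = sp.Eq(-3 * a, -2)            # powers of time: fixes a = 2/3
length_balance = sp.Eq(2 * a + p + m, 2)    # powers of length

# K41: no dependence on L_ext, so set p = 0; the exponents are fully determined.
print(sp.solve([time_balance, length_balance.subs(p, 0)], (a, m)))   # {a: 2/3, m: 2/3}

# Allowing a dependence on L_ext leaves one equation for the two exponents p and m,
# i.e. p + m = 2/3, so the power of r is no longer determined, as noted above.
print(sp.solve([time_balance, length_balance], (a, p, m)))
```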

Let us now hark back to what had puzzled me. In a previous post I mentioned a contumacious referee. In fact this individual kept asserting that `$r^{2/3}$ is not Kolmogorov’. We pressed him to explain, but it was clear that he had found his excuse for rejecting the paper and wasn’t prepared to be more helpful (or indeed scholarly). As our paper contained a discussion of the fact that the extended self-similarity technique gave the two-thirds law as an artifact in the dissipation range, it is possible that he was actually agreeing with us! However, taking his comment as a general statement, I would be inclined to agree with it. From the discussion we have given above, it should be clear that it is the dependence on the dissipation rate to the two-thirds power that is actually Kolmogorov. For anyone interested, the paper is Number 7 in the list of my recent papers given on this website.