Chapter 2. New worlds versus scaling: from van Leeuwenhoek to Mandelbrot and beyond



The two extreme, opposite approaches for dealing with systems with structures spanning huge ranges of scale are the “scale bound” (“powers of ten”) approach associated with van Leeuwenhoek (17th century) and the “self-similar” (scaling) approach associated with Mandelbrot.

In the former (section 2.1), every factor of ten or so of “zooming” leads to something totally different, whereas in the latter (section 2.2), on average, zooming changes nothing. We recall the familiar “Powers of Ten” book and the documentary narrated by Philip Morrison in which we zoom into - and out of - the hand of a girl in a park, ending up respectively at atomic nuclei and at clusters of galaxies. To illustrate this, we recall the tradition of meteorological classification based on size, noting the extreme “Washington school” approach in which every factor of two in scale is used to pigeon-hole structures into different classes, each supposedly produced by a different physical mechanism.

2.1 A new world in a drop of water: scale bound thinking


We just took a voyage through scales, visually noticing structures in cloud photographs and wiggles on graphs that collectively spanned ranges of scale of factors of billions in space and billions of billions in time. We are immediately confronted with the question: what do we mean by scale? How can we conceptualize and model such fantastically variable behaviour?

Two extreme approaches have developed. By far the dominant one is what, for brevity, I call the “new worlds” view, after Antoni van Leeuwenhoek (1632-1723), the inventor of the microscope; the other is the self-similar (scaling) view of Benoit Mandelbrot that I discuss later.

When van Leeuwenhoek peered through the first microscope, he was amazed to find “animalcules” lurking in a drop of pond water: he is quoted as having discovered a “new world in a drop of water”: micro-organisms (fig. 2.1). That was the 17th century; today’s atom-imaging microscopes are developed precisely because of the promise of such new worlds. This scale-by-scale “newness” idea was graphically illustrated by K. Boeke’s highly influential book “Cosmic View” (1957), which starts with a photograph of a girl holding a cat, first zooming away to show the surrounding vast reaches of space, and then zooming in until reaching the nucleus of an atom. The book was incredibly successful and was included in Mortimer Adler's “Gateway to the Great Books” (1963), a ten volume series featuring works by Aristotle, Shakespeare, Einstein and others. In 1968 two films were based on Boeke’s book: “Cosmic Zoom”, produced by the National Film Board of Canada, and “Powers of Ten” (1968, re-released in 1977a) by Charles and Ray Eames, which encouraged the idea that nearly every power of ten in scale hosts different phenomena. More recently, there is even the interactive Cosmic Eye app (2012) for the iPad, iPhone and iPod.

Mandelbrot was the first to explicitly criticize the new worlds view, proposing the term “scalebound” to distinguish it from his new scaling, fractal one:

“Scalebound denotes any object, whether in nature or one made by an engineer or an artist, for which characteristic elements of scale, such as length and width, are few in number and each with a clearly distinct sizeb.”1
The new worlds “powers of ten” view is thus essentially the same as “scalebound”.

Fig. 2.1: Antoni van Leeuwenhoek discovering micro-organisms.

While “Powers of Ten” was propagating the new worlds view to a whole generation of scientists, other developments were pushing their thinking in the same direction. In the 1960s, long ice and ocean cores were revolutionizing climate science, supplying the first quantitative data at centennial, millennial and longer time scales. This coincided with the development of practical techniques to decompose a signal into oscillating components: “spectral analysis”. While it had been known since Joseph Fourier in the 19th century that any time series can be written as a sum of sinusoids, applying this idea to real data was computationally challenging and in atmospheric science it had been largely confined to the study of turbulence (see below). The breakthrough was the development of fast computers combined with Cooley and Tukey’s “Fast Fourier Transform” (FFT) algorithmc (1965).

The beauty of Fourier decomposition is that each sinusoid has an exact, unambiguous time scale: its period (the inverse of its frequency) is the length of time it takes to make a full oscillation. Fourier analysis thus provides a systematic way of quantifying the contribution of each time scale to a time series. Take a messy piece of data - for example the time series in fig. 1.3: it has small, medium and large wiggles. Do the wiggles hide signatures of important processes of interest, or are they simply uninteresting details that should be averaged out and ignored?
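
To make the correspondence between frequencies, periods and amplitudes concrete, here is a minimal sketch of such a decomposition using the FFT; the sample count, sampling rate and the random stand-in series are illustrative assumptions, not the actual data of fig. 1.3:

import numpy as np

# Illustrative sketch: a 1 second record of 2048 samples standing in for the
# simulated wind series of fig. 1.3 (the real series is not reproduced here).
n, dt = 2048, 1.0 / 2048.0
signal = np.random.randn(n)                  # placeholder for the measured series

coeffs = np.fft.rfft(signal)                 # one complex coefficient per frequency
freqs = np.fft.rfftfreq(n, d=dt)             # frequencies in cycles per second
amps = np.abs(coeffs) / n                    # amplitude of each sinusoidal component

# Each component's period is simply the inverse of its frequency; the first
# 100 non-zero frequencies correspond to the range plotted in fig. 2.2.
for f, a in zip(freqs[1:101], amps[1:101]):
    print(f"{f:6.1f} cycles/s   period {1.0 / f:8.4f} s   amplitude {a:.4f}")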

Fig. 2.2 shows the result for all periods longer than 10 milliseconds. How should we interpret the plot? One sees three strong spikes, at frequencies of 12, 28 and 41 cycles per second (corresponding to periods of 1/12, 1/28 and 1/41 of a second: about 83, 36 and 24 milliseconds). Are they signals of a fundamental process or are they just noise?

Naturally, this question can only be answered if we have a mental model of how the process might be generated, and this is where it gets interesting. First consider the case where we have a single series such as that in fig. 1.3. If we knew the signal was turbulent, then theory tells us that we would expect all the frequencies in a wide continuum of scales to be important and indeed that, at least on average, their amplitudes should decay in a power law manner (we’ll see later why). But the theory only tells us the spectrum that we would find if we averaged over a large number of identical experiments (each one with different bumps and wiggles, but produced under the same overall conditions). In fig. 2.2, this is the smooth blue curve. But we see that there are apparently large departures from this average. Are these really exceptional or are they just “normal” fluctuations expected from a random piece of turbulence? Using turbulence theory as it stood before the development of cascade models and the discovery of multifractals in the 1970s and 80s, we would have expected the up and down variations of the spectrum about a smooth curve passing through them to roughly follow the “bell curve”. If this were the case, then the spectrum should not exceed the bottom red curve more than 1% of the time and the top one more than once in ten billion. Yet we see that even the latter curve is exceeded twice in this single but nonexceptional simulationd. Had we encountered this series in an experiment, the turbulence theory itself would probably have been questioned. Indeed, failure to fully appreciate the huge variability that is expected in turbulent processes has proved to be a major obstacle to their acceptance.
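
The ensemble experiment behind fig. 2.2 can be sketched as follows; here Gaussian white noise is used as a stand-in for the multifractal wind simulation (an assumption made purely for brevity), but the logic - average many realizations to estimate the ensemble spectrum, then compare a single realization against exceedance envelopes - is the same:

import numpy as np

# Sketch of the ensemble comparison: many realizations of a random process,
# their ensemble-mean spectrum, and a percentile envelope for a single series.
rng = np.random.default_rng(0)
n, n_realizations = 2048, 500

spectra = []
for _ in range(n_realizations):
    x = rng.standard_normal(n)                   # stand-in random series
    power = np.abs(np.fft.rfft(x))**2 / n        # power at each frequency
    spectra.append(power[1:])                    # drop the zero frequency
spectra = np.array(spectra)

ensemble_mean = spectra.mean(axis=0)             # analogue of the smooth average curve
p99 = np.quantile(spectra, 0.99, axis=0)         # 1% exceedance envelope

one = spectra[0]                                 # a single realization
print("fraction of frequencies above the 1% envelope:", np.mean(one > p99))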

So until the 1980s we would have concluded that the bumps were significant - even if we knew that the origin was simply a seemingly normal, if turbulent, wind trace on the roof of the physics building (fig. 1.3).

But what would be our interpretation if fig. 2.2 was instead the spectrum of a climate series? We would have no good theory of the variability and we would typically only have a single trace (e.g. an ice core record at a given location). After painstakingly coring, sampling and analysing the isotopic composition in a mass spectrometer, then estimating the age as a function of depth and digitising the result, the researcher would be eager for a quantitative look at what she had found. If the black curve in fig. 2.2 was the spectrum of such a core, how would she react to the bumps in the spectrum? Unlike the turbulence situation, where there was some theory, an early core might have had little to compare with. Now we can see how the new worlds view could influence our conclusions. We would be strongly tempted to conclude that the spikes are so strong, so far from the bell curve theory, that they represent real physical oscillations occurring over a narrow range of time scales/frequencies. We would also remark that the two main bumps in the spectrum involve several successive frequencies; according to the usual theoretical assumptions, this would strengthen the case for interpreting them as hidden oscillatory processes.

We should not be surprised to learn that the 1970s witnessed a rash of papers with spectra resembling that of fig. 2.2: oscillators were found everywhere. It was in this context that Murray Mitchell2 famously made the first explicit attempt to conceptualize the huge temporal atmospheric variability (fig. 2.3a). Mitchell’s composite spectrum ranged from hours to the age of the earth (≈4.5x10^9 to 10^-4 years, bottom, fig. 2.3a). In spite of his candid admission that this was mostly an “educated guess”, and notwithstanding the subsequent revolution in climate and paleoclimate data, over forty years later it has achieved an iconic status and is still regularly cited and reproduced in climate papers and textbooks3,4,5. Its continuing influence is demonstrated by the slightly updated version shown in fig. 2.3b which (until 2015) adorned NOAA’s NCDC paleoclimate web sitee. Interestingly, the site was surprisingly forthright about the figure’s ideological character. While admitting that “in some respects it overgeneralizes and over-simplifies climate processes”, it continues: “… the figure is intended as a mental model to provide a general "powers of ten" overview of climate variability, and to convey the basic complexities of climate dynamics for a general science savvy audience.” Notice the explicit reference to the “powers of ten” mindset over fifty years after Boeke’s bookf.

Certainly the endurance of the figure has nothing to do with its accuracy. Within fifteen years of its publication, two scaling composites (close to several of those shown in fig. 2.3a), over the ranges 1 hr to 10^5 yrs and 10^3 to 10^8 yrs, had already shown astronomical discrepancies6,7. In the figure, we have superposed the spectra of several of the series analysed in ch. 1; the difference with Mitchell’s original is literally astronomical. Whereas over the range 1 hr to 10^9 yrs Mitchell’s background varies by a factor ≈ 150, the spectra from real data imply that the true range is a factor of a quadrilliong (10^15); NOAA’s fig. 2.3b extends this error by another factor of tenh.
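
A rough, back-of-envelope way to see what these factors imply for the spectral slope (my own illustrative arithmetic, assuming a pure power law background S(f) ∝ f^-β over the roughly 13 decades of frequency between (10^9 yr)^-1 and (1 hr)^-1):

$$
\frac{S(f_{\min})}{S(f_{\max})} = \left(\frac{f_{\max}}{f_{\min}}\right)^{\beta} \approx 10^{13\beta};
\qquad 10^{13\beta} \approx 150 \;\Rightarrow\; \beta \approx 0.17,
\qquad 10^{13\beta} \approx 10^{15} \;\Rightarrow\; \beta \approx 1.2 .
$$

In other words, Mitchell’s nearly flat background corresponds to a spectral slope of barely 0.2, whereas the observed continuum requires a slope of order one.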

Writing a decade later, leading climatologists Shackleton and Imbrie7 laconically noted that their own spectral slopei was “much steeper than that visualised by Mitchell”, a conclusion reinforced by several subsequent scaling composites8,9. Over at least a significant part of this range, Wunsch10 further underlined its misleading nature by demonstrating that the contribution to the variability from specific frequencies associated with specific “spikes” (presumed to originate in oscillatory processes) was much smaller than the contribution due to the continuum.

Just as van Leeuwenhoek peered through the first microscope and discovered a “new world in a drop of water”, NOAA anticipates finding “new worlds” by zooming in or out of scale. This is an accurate description of what Mandelbrot1 called a “scalebound” scientific ideology: it is so powerful that even quadrillions are insufficient to shake it.



Fig. 2.2: Black: the Fourier spectrum of the changes in wind speed in the 1 second long simulation shown at the bottom left of fig. 1.3, showing the amplitudes of the first 100 frequencies. The upper left is thus for one cycle over the length of the simulation, i.e. one cycle per second, a period of one second. The far right shows the variability at 100 cycles per second, giving the amplitude of the wiggles at 10 milliseconds (higher frequencies are not shown, for clarity). The brown curve shows the average over 500 random series, each statistically identical to that in fig. 1.3: as expected, it is nearly exactly a (scaling) power law (blue). The three red curves show the theoretical 1%, one in a million and one in ten billion extreme fluctuation limits (bottom to top), assuming that the spectrum has bell curve (Gaussian) probabilities.



Fig. 2.3a: A comparison of Mitchell’s relative-scale “educated guess” of the spectrum (bottom, 2) with modern evidence from the spectra of a selection of the series displayed in fig. 1.4 (the plot is logarithmic in both axes). There are three sets of red lines; on the far right, the spectra from the 1871-2008 20CR (at daily resolution) quantify the difference between the globally averaged temperature (bottom) and local (2°x2°) averages (top). Mitchell’s figure has been faithfully reproduced many times (with the same admittedly mediocre quality). It is not actually very important to be able to read the lettering near the spikes; if needed, they can be seen in fig. 2.3b, which was inspired by it.

The spectra were averaged over frequency intervals (10 per factor of ten in frequency), thus “smearing out” the daily and annual spectral “spikes”. These spikes have been re-introduced without this averaging and are indicated by the green spikes above the red daily resolution curves. Using the daily resolution data, the annual cycle is a factor ≈ 1000 above the continuum, whereas using hourly resolution data (from the Lander series, fig. 1.4a), the daily spike is a factor ≈ 3000 above the background. Also shown is the other striking narrow spectral spike at (41 kyr)^-1 (obliquity; ≈ a factor of 10 above the continuum); this is shown in dashed green since it is only apparent over the period 0.8 - 2.56 Myr BP (before present).

The blue lines have slopes indicating the scaling behaviours. The thin dashed green lines show the transition periods that separate out the regimes discussed in detail in ch. 3; these are at 20 days, 50 yrs, 80,000 yrs, and 500,000 yrs.


Fig. 2.3b: The updated version of Mitchell’s spectrum reproduced from NOAA’s NCDC paleoclimate web sitej. The “background” on this paleo site is perfectly flat; hence, in comparison with the empirical spectrum in fig. 2.3a, it is in error by an overall factor ≈ 10^16.

In Mitchell’s time, this scale bound view had already led to an atmospheric dynamics framework that emphasized the importance of numerous processes occurring at well defined time scales, the quasi-periodic “foreground” processes illustrated as bumps - the signals - on Mitchell’s nearly flat background. Although in Mitchell’s original figure the lettering is difficult to decipher, fig. 2.3b spells these processes out more clearly with numerous conventional examples. In the wake of the nonlinear revolution - one of whose strands was low dimensional deterministic chaos - the bumps were increasingly associated with specific chaos models, analysed with the help of the dynamical systems machinery of bifurcations, limit cycles and the likek. From the spectral point of view, wide range continuum spectra are generic results of systems with large numbers of spatial degrees of freedom (“stochastic chaos”12) and hence are incompatible with the usual (low dimensional) deterministic chaos. Similarly, the spectra will be scaling - i.e. power laws - if there are no dynamically important characteristic scales or scale breaksl.

At weather scales, and at virtually the same time as Mitchell’s scalebound conceptualization of temporal variability, Isidoro Orlanski proposed his own influential phenomenological scalebound classification of atmospheric phenomena by spatial powers of ten (fig. 2.4)13.

Turning our attention to the weather, the success of Mitchell’s composite climate spectrum parallels that of the earlier meteorological spectral composite of Van der Hoven14. The latter figure rapidly became a classic, and for several decades was regularly reproduced (often with embellishments). Notably - in spite of very strong early criticism - it consecrated the scale bound “meso-scale gap” notion that is still used today to justify the (empirically and theoretically untenable) division of atmospheric processes into small scale isotropic 3D and large scale isotropic 2D turbulence (see the discussion in section 2.3).

As a graduate student, I well remember the epistemic shock that I felt when, shortly after it came out in 1977, I first encountered Mandelbrot’s seminal book “Fractals: Form, Chance and Dimension”15. It was no accident that it was neither my supervisor nor another scientific colleague who introduced me to the book, but rather my mother, an artist fascinated by the artistic potential of new technology and awed by Mandelbrot’s imageryn.

Appropriately, it was Mandelbrot who first explicitly proposed the term “scalebound” (in the definition quoted above) to distinguish the new worlds view from his new scaling, fractal one. He contrasted a scalebound object with a scaling one:

“A scaling object, by contrast, includes as its defining characteristic the presence of very many different elements whose scales are of any imaginable size. There are so many different scales, and their harmonics are so interlaced and interact so confusingly that they are not really distinct from each other, but merge into a continuum. For practical purposes, a scaling object does not have a scale that characterizes it. Its scales vary also depending upon the viewing points of beholders. The same scaling object may be considered as being of a human's dimension or of a fly's dimension.”1
Scalebound thinking is now so entrenched that we find it obvious that “zooming in” opens up hidden secrets. Today we would likely express more wonder if we zoomed in only to find that nothing had changed - if the system’s structure was scaling! Yet in the last thirty years antiscaling prejudices have started to unravel, at first largely thanks to Mandelbrot’s path breaking “Fractal Geometry of Nature” (1983). His avant-garde use of computer graphics to render scaling fractals visually demonstrated the realism of scaling.

Fig. 2.4: A reproduction of Orlanski’s space-time (“Stommel”) diagram with eight different dynamical regimes indicated on the right according to their spatial scales. Notice that he indicates that the climate starts at about two weeks (bottom row). The straight line embellishment was added in 199716 and shows that the figure is actually scaling (straight on this logarithmic plot).


This scalebound paradigm is a severe constraint on one’s conceptual horizons1. From here, it is a short step to the “phenomenological fallacy”: the confounding of form and mechanism that we discuss below17.



But what do we mean by scale? In our voyage, we implicitly used standard rulers and clocks to determine lengths and durations, but these are a priori notions of size: how do we know that they are the most appropriate? Think of Einstein’s theory of general relativity, in which matter bends space and the notion of distance (the metric) is thus an emergent property: standard (Euclidean) notions of size turn out to be at best approximations valid at low mass-energy densities. Indeed, below I argue that the turbulent dynamics should be used to define the sizes of clouds and other structures.


2.2 Scaling: Big whirls have little whirls and little whirls have lesser whirls

I recall the pioneers of the scaling approach: in particular Richardson, the father of modern weather forecasting, and his seminal scaling cascade idea (1922): “Big whirls have little whirls and little whirls have lesser whirls”. At first this idea was vague: the cascades were more metaphor than reality and were thought to be fairly uniform in space and in time. However, starting at the end of the 1940s, the problem of intermittency was recognized in laboratory turbulence: in space through turbulent “spottiness” and in time through “sudden transitions from quiescence to chaos”. In the atmosphere the same phenomena manifest themselves in space by the concentration of atmospheric activity into storms and the cores of storms, and in time by sudden and occasionally violent transitions.

As a first approach to understanding the structures within structures and their relationship to intermittency, we discuss various simple geometric examples (Cantor’s “perfect” set and the Koch “snowflake”) in which the big and the small are related in a simple way, illustrating first sparseness and then wiggliness. Fractal sparseness is illustrated with the example of raindrops landing on blotting paper in the backyard, by examining the unstable layers within unstable layers from “dropsondes”, and through the global distribution of weather stations. Fractal wiggliness is illustrated by the famous “Koch snowflake” geometric construction, by examining cloud perimeters and by aircraft trajectories.
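
To give a flavour of these constructions, here is a minimal sketch (my own toy example, not one of the book’s figures) generating the triadic Cantor set and checking its sparseness by box counting; each factor of 3 refinement only doubles the number of occupied boxes, corresponding to a fractal dimension log 2 / log 3 ≈ 0.63:

import numpy as np

def cantor_set(level):
    """Left endpoints of the intervals of the triadic Cantor set after 'level' steps."""
    points = np.array([0.0])
    width = 1.0
    for _ in range(level):
        width /= 3.0
        points = np.concatenate([points, points + 2 * width])
    return points

points = cantor_set(10)                       # 2**10 intervals of width 3**-10

# Box counting: how many boxes of size eps does the set occupy?
for k in range(1, 8):
    eps = 3.0 ** (-k)
    n_boxes = np.unique(np.floor(points / eps)).size
    print(f"eps = 3^-{k}:  N(eps) = {n_boxes}")

# For a fractal, N(eps) ~ eps^(-D); here N doubles for each factor of 3 in
# resolution, so D = log 2 / log 3 = 0.63...: the set is sparser than a line.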

Fractality is enough to describe geometric sets of points (“on”/”off”, “black”/”white”), but more generally we deal with fields: cloud pictures, for example, are not black or white but have different levels of intensity. Each intensity level can be used to define a different fractal set, and each set will generally have a different “wiggliness” or “sparseness” from the others: clouds are intermittent “multifractals”. Using the examples of cloud brightnesses, precipitation data from Nîmes, France, and high resolution aircraft temperature measurements, we give some intuitive analyses by systematically degrading their resolutions. This raises the question of how to create (simulate) such structures; we describe the simplest multifractal cascade process, the “alpha model”. Finally, we show that cloud brightnesses over Montreal are multifractal.
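
A minimal sketch of an alpha-model-type cascade (the parameter values are illustrative assumptions, not those used in the book): at each step every interval is split in two and its flux is multiplied by a boost B with probability p, or by a reduction b with probability 1 - p, where pB + (1 - p)b = 1 so that the ensemble mean is conserved. The rare boosts, compounded over many cascade steps, produce the intermittent spikes characteristic of multifractals:

import numpy as np

# Toy "alpha model" cascade: split each cell in two at every step and multiply
# its flux by a boost B (probability p) or a reduction b (probability 1 - p).
rng = np.random.default_rng(1)
p, B = 0.3, 2.0
b = (1.0 - p * B) / (1.0 - p)                 # ensures p*B + (1-p)*b = 1

n_steps = 12                                  # 2**12 = 4096 cells at the finest scale
flux = np.array([1.0])
for _ in range(n_steps):
    flux = np.repeat(flux, 2)                 # each cell splits into two sub-cells
    multipliers = np.where(rng.random(flux.size) < p, B, b)
    flux = flux * multipliers

print("mean flux :", flux.mean())             # close to 1 (conserved on average)
print("max / mean:", flux.max() / flux.mean())  # intermittency: rare, huge spikes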

We finish section 2.2 by discussing some of the basic questions raised by the scaling approach: the issue of high and low level laws, and how this can answer the famous question raised by Leo Kadanoff: “Fractals: where’s the physics?”. This leads to a critique of the dominant scale bound approach, which attempts to discern deterministic, scale bound mechanisms rather than scaling, statistical ones.
2.3 Clouds: how big, how small?

Small clouds may be puffy and roundish; zooming out - blowing them up - will never reproduce the big but (relatively) thin ones that cover continents yet are only kilometers thick. If the atmosphere respected extreme self-similar scaling (i.e. if, on average, it were the same in all directions), then big clouds would be hundreds or thousands of kilometers thick; this is not compatible with the stratification. But how do we measure the size of a cloud? By its horizontal extent or by its thickness? It turns out that rather than attempt to impose a pre-ordained notion of size, we should let the clouds themselves (or rather the underlying turbulent processes) determine the appropriate notion of scale. Scale is thus an emergent property. (This is analogous to general relativity, in which the notion of distance is an emergent property determined by the distribution of mass and energy.) We illustrate this idea with the laser smog data discussed in ch. 1, and also for the wind field using measurements from a vast programme involving 14,500 aircraft flights.

In the atmosphere, the most important application of scale as an emergent property is in understanding the stratification. It turns out that both scale and size are emergent properties, and since size = (scale)^D, where D is the dimension characterizing the stratification, the effective dimension of the atmosphere is also emergent: it has to be determined from theory and from experiment. Whereas from outer space the atmosphere appears two dimensional (a nearly flat “onion skin” surrounding the earth), at human scales it appears more nearly 3-D (the same in all directions). It turns out that it is “in between” 2 and 3, with a dimension close to the theoretical value 23/9 = 2.555.
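
A compact way of seeing where the value 23/9 comes from (a sketch of the standard argument of the 23/9D model, taking the vertical exponent H_z = 5/9 as given): a horizontal blow-up by a factor λ is accompanied by a vertical blow-up by only λ^(5/9), so that volumes grow as λ raised to the “elliptical dimension” D_el:

$$
(x,\,y,\,z) \;\to\; (\lambda x,\;\lambda y,\;\lambda^{H_z} z), \quad H_z = \tfrac{5}{9}
\;\;\Rightarrow\;\;
\mathrm{volume} \;\to\; \lambda^{\,2+H_z}\,\mathrm{volume} = \lambda^{23/9}\,\mathrm{volume},
\qquad D_{el} = \tfrac{23}{9} \approx 2.56 .
$$

The dimension 23/9 thus lies between the flat (D = 2) and fully isotropic (D = 3) extremes mentioned above.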

The theoretical estimate 23/9 is a result of assuming that energy fluxes dominate the horizontal structure whereas buoyancy forces dominate the vertical. Since buoyancy is responsible for atmospheric convection (notably visible in thunderstorms) and the latter is usually understood using highly scale bound conceptual models, it is important to show that convection is in reality a scaling phenomenon. Using CLOUDSAT satellite vertical sections of thunderstorms, we show that they are indeed scaling (with the predicted dimension 23/9). Finally, we show that the historical development of numerical weather models quantitatively demonstrates the same thing: the atmosphere is 23/9 dimensional.

In a last subsection we discuss the implications for one of the oldest and still most popular models of turbulence - turbulence that is on average the same in all directions, i.e. isotropic turbulence, in two and in three dimensions. In the classical picture the small scales are isotropic and 3D and the large scales isotropic and 2D, with the two regimes separated by a hypothetical “mesoscale gap” (meaning that there would be very few structures around 10 km in size). We already saw in section 2.2 that there is no evidence for such a gap. Here we give the simplest and most convincing evidence, using hundreds of state-of-the-art devices dropped from 12 km altitude that measure wind, humidity and temperature with unprecedented vertical resolution: dropsondes.
2.4 The phenomenological fallacy

In order to account for the difference in scaling between the horizontal and the vertical, we have discussed the need to generalize Mandelbrot’s self-similar scaling so as to include stratification. In the 23/9D model, in order for big and small to be the same, in addition to zooming we must also squash structures. The result is that structures will look different at different scales even though they are produced by the same scaling mechanism, i.e. a mechanism that repeats scale after scale: one that is scale invariant. This more general type of scaling (“Generalized Scale Invariance”) can also include the rotation of structures with scale, and it allows the “phenomenological fallacy” to be exposed visually. The latter arises when empirical differences in structure or appearance from one scale to another are used to hypothesize the dominance of qualitatively different processes at different scales. Through computer generated images, we visually demonstrate how scale invariant systems generally have structures with different appearances at different scales even though the underlying mechanisms are the same at all scales.
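
A toy sketch of a generalized scale change of the form T_λ = λ^(-G) (the generator G below and its numerical values are illustrative assumptions, not parameters estimated from data): a diagonal part of G produces differential squashing (stratification) while an antisymmetric part produces rotation with scale, so the “same” structure looks different at every scale even though a single rule generates them all:

import numpy as np
from scipy.linalg import expm

# Generator of a generalized (anisotropic) scale change T_lambda = lambda**(-G):
# unequal diagonal entries give differential stratification, the antisymmetric
# off-diagonal part gives rotation of structures with scale. Values illustrative.
G = np.array([[1.0, -0.3],
              [0.3,  0.8]])

def scale_change(points, lam):
    """Apply T_lambda = lam**(-G) to an array of (x, y) points."""
    T = expm(-np.log(lam) * G)                # matrix power lam**(-G)
    return points @ T.T

theta = np.linspace(0.0, 2.0 * np.pi, 200)
circle = np.column_stack([np.cos(theta), np.sin(theta)])  # a "structure" at scale 1

for lam in (1.0, 4.0, 16.0):
    blob = scale_change(circle, lam)
    print(f"lambda = {lam:5.1f}:  x-extent {np.ptp(blob[:, 0]):.3f}, "
          f"y-extent {np.ptp(blob[:, 1]):.3f}")
# The circle is progressively squashed and rotated: different appearance at
# each scale, identical generating mechanism at all scales.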


2.5 Fluctuations as a microscope

In order to test the scale bound and scaling approaches we need a quantitative, objective tool: fluctuation analysis. In this subsection we show how to define fluctuations, which are then evaluated at large and small scales and compared statistically. In a scaling system we have:

(Fluctuation) = (Scale)^H

This expresses the scaling of the fluctuations and the scale invariance of the exponent H. We demonstrate this with some simple geometric fractal constructions (the “H model”).
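
A minimal sketch of the idea (using plain differences as the fluctuations and a synthetic Brownian-motion test series; the book’s own analyses use more robust fluctuation definitions such as Haar fluctuations): compute the mean absolute fluctuation at a range of scales and read off H as the slope on log-log axes:

import numpy as np

# Fluctuation analysis sketch: fluctuation(Dt) = mean |x(t + Dt) - x(t)|,
# fitted as (fluctuation) ~ (scale)^H on logarithmic axes.
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(2**14))     # Brownian-like test series, H = 0.5

lags = 2 ** np.arange(1, 10)                  # scales from 2 to 512 samples
flucts = [np.mean(np.abs(x[lag:] - x[:-lag])) for lag in lags]

H, intercept = np.polyfit(np.log(lags), np.log(flucts), 1)
print(f"estimated H = {H:.2f}  (about 0.5 expected for Brownian motion)")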

With this tool in hand, we apply it in the time domain to the data and proxy data discussed and displayed in ch. 1, thereby quantitatively demonstrating the division of atmospheric processes (from milliseconds to hundreds of millions of years) into five different regimes, of which the weather, macroweather and climate regimes are the closest to our day-to-day experience. We do the same in space, in both the weather and macroweather regimes.

1 Mandelbrot, B. Scalebound or scaling shapes: a useful distinction in the visual arts and in the natural sciences. Leonardo 14, 43-47 (1981).

2 Mitchell, J. M. An overview of climatic variability and its causal mechanisms. Quaternary Res. 6, 481-493 (1976).

3 Dijkstra, H. & Ghil, M. Low frequency variability of the large scale ocean circulations: a dynamical systems approach. Rev. Geophys. 43 (2005).

4 Fraedrich, K., Blender, R. & Zhu, X. Continuum Climate Variability: Long-Term Memory, Scaling, and 1/f-Noise. International Journal of Modern Physics B 23, 5403-5416 (2009).

5 Dijkstra, H. Nonlinear Climate Dynamics. (Cambridge University Press, 2013).

6 Lovejoy, S. & Schertzer, D. Scale invariance in climatological temperatures and the local spectral plateau. Annales Geophysicae 4B, 401-410 (1986).

7 Shackleton, N. J. & Imbrie, J. The δ18O spectrum of oceanic deep water over a five-decade band. Climatic Change 16, 217-230 (1990).

8 Pelletier, J. D. The power spectral density of atmospheric temperature from scales of 10^-2 to 10^6 yr. EPSL 158, 157-164 (1998).

9 Huybers, P. & Curry, W. Links between annual, Milankovitch and continuum temperature variability. Nature 441, 329-332, doi:10.1038/nature04745 (2006).

10 Wunsch, C. The spectral energy description of climate change including the 100 ky energy. Climate Dynamics 20, 353-363 (2003).

11 Chekroun, M. D., Simonnet, E. & Ghil, M. Stochastic Climate Dynamics: Random Attractors and Time-dependent Invariant Measures Physica D 240, 1685-1700 (2010).

12 Lovejoy, S. & Schertzer, D. in Chaos, Fractals and models 96 Vol. 38-52 (eds F. M. Guindani & G. Salvadori) (Italian University Press, 1998).

13 Orlanski, I. A rational subdivision of scales for atmospheric processes. Bull. Amer. Met. Soc. 56, 527-530 (1975).

14 Van der Hoven, I. Power spectrum of horizontal wind speed in the frequency range from 0.0007 to 900 cycles per hour. Journal of Meteorology 14, 160-164 (1957).

15 Mandelbrot, B. B. Fractals, form, chance and dimension. (Freeman, 1977).

16 Schertzer, D., Lovejoy, S., Schmitt, F., Chigirinskaya, Y. & Marsan, D. Multifractal cascade dynamics and turbulent intermittency. Fractals 5, 427-471 (1997).

17 Lovejoy, S. & Schertzer, D. in Nonlinear dynamics in geophysics (eds J. Elsner & A. A. Tsonis) (Elsevier, 2007).




a The re-release had the subtitle: “A Film Dealing with the Relative Size of Things in the Universe and the Effect of Adding Another Zero” and was narrated by P. Morrison. More recently, the similar “Cosmic Voyage” (1996) appeared in IMAX format.


b Ironically, this definition was proffered in an architecture journal, a milieu which – at least initially - was more receptive to fractals than most.

c The speed up due to the invention of the FFT is huge: even for the relatively short series here (2048 points) it is about a factor of one hundred.

d I admit that to make my point, I made 500 simulations of the multifractal process in fig. 1.3 and then searched through the first 50 to find the one with the most striking variation. But if the statistics had been from the bell curve, then the extreme point in the spectrum in fig. 2.2 would correspond to a probability of one in 10 trillion, so my slight “cheating” in the selection process couldn’t explain the result!

e The site explicitly acknowledges Mitchell’s influence.

f If this were not enough, the site adds a further gratuitous interpretation, assuring any skeptics that just “because a particular phenomenon is called an oscillation, it does not necessarily mean there is a particular oscillator causing the pattern. Some prefer to refer to such processes as variability.” Recall that any time series, whether produced by turbulence, the stock market or a pendulum, can be decomposed into sinusoids: the decomposition has no physical content per se, yet we are told that variability and oscillations are synonymous.


g In section ?, we plot the same information in real space and find that whereas the RMS fluctuations at 5.53x10^8 years are ≈ ±10 K, extrapolating Gaussian white noise over this range implies a value ≈ 10^-6 K, i.e. an error by a factor ≈ 10^7.


h If we attempt to extend Mitchell’s picture to the dissipation scales at frequencies a million times higher (corresponding to millisecond variability), the spectral range would increase by an additional factor of a billion or so.

i  The absolute slope of the spectrum when plotted on logarithmic coordinates such as fig. 2.3a.

j The page is no longer up.

k More recently updated with the help of stochastics: the “random dynamical systems” approach (e.g. refs 11, 5).

l Although in the random dynamical systems approach the driving noise may be viewed as the expression of a large number of degrees of freedom, this interpretation is only justified if there is a significant scale break between the scales of the noise and those of the explicitly modelled dynamics; it is not trivially compatible with scaling spectra.

m Sometimes called “Stommel” diagrams after Henry Stommel.

n She had been working with newly developed colour Xerox machines to develop early electronic imagery.
