
Is dark matter fiction or reality?

A witch's potion

Does there exist in the cosmos a form of matter different from ordinary matter? A majority of scientists assert so with confidence, basing their claim on the difficulty of explaining certain velocity measurements when only the detected matter is taken into account. But are the observations on which they rely really convincing? How dare one speak of a substance of unknown nature, introduced arbitrarily, provided by no theory, invisible, and for those reasons lying beyond science? Is this not to risk falling into arbitrariness and thus opening the door to uncontrollable abuse?

Christian Magnan
Astrophysicist



Translation of the French original version in progress: thank you for your patience!

I claim in this page that the invention of a substance of unknown nature, called "dark" (or even "black" in French) because of its invisibility, is one of the most depressing frauds of modern science. The slippage is in my view all the worse because the introduction of this concept is the work of official, supposedly qualified scientists, and not the whim of incompetent amateurs or the chatter of unbridled pseudo-scientists. I do not understand how researchers believed themselves authorized to appeal to phenomena born of their own imagination and to ignore the most elementary requirements of the scientific method, which feeds on theory, experiment, observation and measurement: all four absent from this scenario, since the enigmatic material is elusive at every level.

The presence of missing matter is not established with certainty

For almost a century now astronomers have relied insistently on the presence of a "missing mass" inside galaxies and between galaxies. It is indeed possible (subject to certain assumptions which, as we shall see, are quite restrictive) to infer from the measured velocities of galaxies, stars or interstellar clouds, as appropriate, the amount of matter responsible for the gravitational forces acting in the region. Basically it is "sufficient" to apply Newton's law of universal gravitation. And the amount of matter found by the calculation appears to be much higher, by a factor that can reach tens or even hundreds, than the quantity of "visible" matter detectable by conventional means.
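
To fix ideas, here is a minimal sketch of the kind of Newtonian estimate involved (the velocity and radius below are illustrative round values, not actual measurements):

```python
# Minimal sketch: Newtonian dynamical mass from a circular velocity.
# All input values are illustrative, not actual measurements.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
LY = 9.461e15            # one light year, m

v = 200e3                # assumed circular velocity, m/s (typical of a spiral galaxy)
r = 50e3 * LY            # assumed radius of the orbit, m

# For a test mass on a circular orbit, v^2 / r = G M / r^2, hence:
M_dyn = v**2 * r / G
print(f"dynamical mass ~ {M_dyn / M_SUN:.2e} solar masses")   # ~1e11
```

Comparing such a dynamical mass with the mass inferred from the light received is precisely the step that produces the factors of tens or hundreds quoted above.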

Given those circumstances I will defend the following thesis. That there exists in the universe some "ordinary" matter still undiscovered because its physical conditions (too hot, too cold, too scattered, composed of small solid pieces, non-radiating, who knows?) render it invisible to our instruments is one thing, and it may be plausible (although to my eyes it is far from established with certainty). But supposing there could be a kind of matter of unknown nature supplementing the regular content of the universe is quite another. In this case the hypothesis becomes gratuitous and appears illegitimate, as it is not based on sound science.

To justify my position let us try to estimate the degree of robustness of the alleged evidence for the presence of dark matter. What kind of observations are they? How are they deciphered? Are the velocities obtained directly? Could the interpretations depend on additional assumptions not necessarily borne out in reality?

Historically the first scientist to have frankly assumed the existence of invisible matter was Fritz Zwicky (1898-1974). Around 1933 the Swiss astronomer completed a series of observations of the motions of galaxies in clusters such as Virgo and Coma and performed calculations on those data. Those clusters consist of thousands of galaxies bound together by gravity and, provided that a state of equilibrium has been reached, the velocity distribution of the galaxies reflects the mass of the group: a greater mass is able to hold galaxies with higher average velocities. Now it so happens that the reported speeds are higher than would be expected from the visible mass alone, a result which encourages precisely the belief that there is some hidden mass.

Today it is widely accepted by the astronomical community that in clusters of galaxies the mass deduced from dynamical measurements is greater than that corresponding to the total luminosity (the mass "as seen"). But we cannot conclude from this alone that the existence of an invisible mass is established, because in my opinion there is ample evidence that the majority's argument is weak: the passive acquiescence of the mass of researchers can anesthetize in them the positive and constructive doubt that should animate their spirit. I will recall here two categories of problems of interpretation which already temper, before any jumping to conclusions, the need for a dark mass. The first category relates to the assumptions of equilibrium, the second to the application of the cosmological principle.

The interpretation of observations makes use of restrictive assumptions of equilibrium

First, the analysis of the velocities of the galaxies composing a cluster is based on a formula reflecting the so-called "virial theorem". This formula connects the velocities of the galaxies to the sum of the masses present in the cluster. But the problem is that it has been demonstrated only for a perfect and ideal situation and does not necessarily apply to clusters of galaxies in the real world, which are more complex than the theoretical scheme assumes. What are the limiting assumptions made in the proof of this classical theorem of mechanics?
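
For reference, the theorem in question states that for an isolated, stationary, self-gravitating system of point masses the time-averaged kinetic energy T and potential energy U satisfy

$$2\,\langle T \rangle + \langle U \rangle = 0,$$

which, for a cluster of radius $R$ and velocity dispersion $\sigma$, yields the order-of-magnitude mass estimate $M \sim \sigma^{2} R / G$. Every qualifier in that statement (point masses, isolation, stationarity) is an assumption of the proof, and each is examined in turn below.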

In the application of the virial theorem to clusters of galaxies, the term representing the effect of gravitational forces is calculated as if each galaxy were reduced to a point. But in the context of a cluster, the behaviour of galaxies and the influence they exert on each other do not reduce to the action and reaction of points endowed with mass. Real galaxies are extended, non-pointlike systems mixing stars and interstellar matter. As a result the attraction between galaxies is much less simple than that acting between points: it is sensitive to the distribution of mass within the galaxies and to tidal forces capable of twisting the galaxies themselves. Galaxies are not rigid objects. They consist of flows of matter moving against each other, and this fact makes their gravitational action different from that implied by the virial theorem. It is also known that galaxies undergo mutual encounters and that during those events their internal and external dynamics are strongly disturbed; the consequent rate of change of their bulk velocity is not taken into account by the virial theorem. Finally they rotate on themselves, and the influence of this feature on the rotational dynamics of the entire cluster should also be taken into consideration.

A key reason for the profound inability to equate a galaxy cluster with a gas of point particles lies in the fact that the mutual distances between galaxies are comparable in magnitude to their diameters. Typically the distance between two galaxies is only some ten times greater than their average diameter. Exaggerating slightly, it is almost as if one passed from one galaxy to another continuously. Thus our near neighbour, the Andromeda galaxy, is located two to three million light years away, while the diameter of our own galaxy is about a hundred thousand light years: the separation of those two neighbours is therefore only an order of magnitude larger than their size. Note carefully that the situation is very different in the case of stars, whose mutual distances are measured in light years and are thus tens of millions of times larger than their radii, which are measured in light seconds. Except in special circumstances (black holes, close binaries, multiple stars, very compact clusters), one star is, for another star, comparable to a point. Not so for galaxies.
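
The contrast is easy to check with the round figures just quoted (a sketch; the stellar numbers are typical solar-neighbourhood values):

```python
# Separation-to-size ratios, using the round figures quoted in the text.
# Galaxies: Andromeda versus the Milky Way (distances in light years).
galaxy_separation = 2.5e6        # about 2.5 million light years
galaxy_diameter = 1.0e5          # about 100,000 light years
print("galaxy ratio:", galaxy_separation / galaxy_diameter)   # 25

# Stars: typical neighbours in the solar vicinity.
SECONDS_PER_YEAR = 31_557_600    # one Julian year, in seconds
star_separation = 4.0                       # a few light years
star_radius = 2.3 / SECONDS_PER_YEAR        # ~2.3 light seconds, in light years
print(f"star ratio: {star_separation / star_radius:.1e}")     # ~5e7
```

A factor of a few tens on one side, tens of millions on the other: the point-particle idealization that is harmless for stars is untenable for galaxies.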

A second serious restriction: the virial theorem applies only to an isolated system. For it to be valid, the first condition is that the cluster remain identical to itself, that is to say neither lose nor gain galaxies. It must also suffer no external action. However those conditions have little reason to be satisfied because (i) the influence of the medium located between the clusters is established and may disturb the clusters themselves, (ii) some galaxies can be ejected from the cluster or instead accreted by it, and (iii) collisions between galaxies continue to occur quite frequently, causing major changes in the dynamical state of the interacting systems.

The Universe is home to large-scale motions

A cluster observed today is perhaps only a transient concentration of galaxies, because the universe is the seat of violent turbulent motions of large scale and large size. The clusters that we see interact with their neighbours and have their own peculiar velocities with respect to each other, apart from the expansion of space. Moreover the gas found between the galaxies, which enters into the assessment of the "dynamical mass", is not in equilibrium (one speaks of hydrostatic equilibrium to characterize a stationary gas), in contradiction with the hypotheses involved in the application of the virial theorem. Incidentally astrophysics provides no example of a gas in hydrostatic equilibrium, whether planetary, stellar, interstellar or intergalactic environments are concerned. The "astrophysical gases" are in constant motion, simply because real systems are not confined to "containers" or "boxes" with solid walls. The heavenly bodies, all the heavenly bodies, are open systems, not closed and isolated ones. The freedom of movement afforded to them is therefore great, and they cannot remain static.

Some studies even lead one to believe that "bubbles" of empty space (perhaps originally created by intense radiation capable of blowing the matter away) play a pivotal role in our Universe. Galaxies would form preferentially within the walls delimiting the voids. Under such conditions the clusters could be drawn into laminar flows along the walls or even cross them. All those circumstances of course exclude the use of the virial theorem, because they reflect the non-isolation of the clusters and their interaction with the environment in which they move. With speeds approaching hundreds or thousands of kilometres per second, it is easy to imagine that such motions generate significant dynamical disturbances!

As an additional point of major importance, the application of the virial theorem implies that the gravitational action and reaction have lasted long enough for the cluster to be "stationary", that is to say for it to have reached a steady state (one says that the cluster is "virialized" or "relaxed"). An exploding cluster, for example, would not satisfy the virial condition. But there is no reason that this stationarity condition is fulfilled for every cluster. Indeed, as noted before, there is nothing to suggest that clusters of galaxies are permanent structures isolated from the rest of the Universe, physically held together and contained by the forces of gravity. For example the crucial assumption that the matter content of the system remains the same over time is not necessarily verified.

Now, what ensures the relaxation of a cluster are the encounters between its member galaxies. But an estimate of the average time between mutual encounters shows that in general clusters have not had sufficient time since their formation to reach dynamical balance, especially young clusters, which is what we see when we observe distant ones (the farther out you look, the earlier you look in the history of the Universe). Experts suggest the Andromeda galaxy has a high risk of colliding with our own galaxy in about three billion years. But that time is of the same order of magnitude as the time elapsed since the formation of our local group of galaxies (about ten billion years), although smaller. This indicates that since the birth of our local group the galaxies have not had time to undergo the number of encounters that would have allowed them to distribute the total available energy in a balanced way. The conclusion is similar for the Coma cluster, as calculation shows that the relaxation time of such clusters is at least of the order of tens of billions of years. Thus after ten billion years of existence there is no assurance that this cluster is "virialized".
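
The order of magnitude invoked here can be reproduced in a few lines (a sketch only: the cluster numbers are illustrative round values, and the relaxation formula is the standard two-body estimate, which the text does not spell out):

```python
import math

# Order-of-magnitude relaxation time for a Coma-like cluster, using the
# standard two-body estimate t_relax ~ (N / 8 ln N) * t_cross.
# All input values are illustrative round numbers, not measurements.
MPC = 3.086e22           # one megaparsec, m
GYR = 3.156e16           # one gigayear, s

N = 1000                 # assumed number of member galaxies
R = 3 * MPC              # assumed cluster radius
sigma = 1.0e6            # assumed velocity dispersion, m/s (1000 km/s)

t_cross = R / sigma                          # crossing time
t_relax = (N / (8 * math.log(N))) * t_cross  # two-body relaxation time

print(f"crossing time ~ {t_cross / GYR:.1f} Gyr")    # ~3 Gyr
print(f"relaxation time ~ {t_relax / GYR:.0f} Gyr")  # ~50 Gyr
```

With these inputs the relaxation time comes out at several tens of billions of years, longer than the roughly ten billion years available, which is the point being made.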

Pockets of galaxies in an expanding space?

Having questioned the application of the virial theorem, the second basic problem to report concerns the alas untouchable dogma that our universe is subject to the cosmological principle, that is to say that it obeys the equations of a homogeneous and isotropic model. But this doctrine does not hold, because it is precisely a dogma, without satisfactory justification. While all observations prove the contrary, it is absurd to say that the universe behaves as if space were homogeneous and isotropic. The expansion would then itself be isotropic. According to the official doctrine, this space would be filled with clusters of galaxies not subject to this expansion. Those pockets of galaxies would be composed of gravitationally bound objects behaving as independent blocks of material free of interaction with the surrounding universe. But the reality is that the world is structured at all size scales (stars, galaxies, clusters of galaxies, superclusters, voids and walls, etc.) and is evidently not homogeneous.

More fundamentally, it remains scientifically illegitimate to assume that clusters of galaxies are included as dynamically independent entities within an expanding, underlying homogeneous space, because we do not have the theory corresponding to such a physical scheme. Indeed, to speak scientifically, and thus truthfully, of this model (and this holds for any model) one should be able simultaneously to build it, theorize it, analyse it, submit it to calculation and put it in contact with the real world. Under those conditions we could see whether the model is physically consistent and discover its concrete properties (here, in particular, the evolution and the velocity distribution of the galaxies).

The theoretical difficulty, insurmountable for the moment, in speaking of a universe homogeneous on average but locally inhomogeneous can be presented as follows. Many cosmologists argue carelessly that since the universe appears homogeneous on large scales it is safe to assume that it satisfies the cosmological principle and thereby undergoes a uniform expansion. But nothing guarantees that this reasoning is correct; quite the contrary. To resolve the question of the structure of the universe, no one will dispute that one should use the equations of Einstein's theory of gravitation, which contain terms describing the curvature of space on one side and terms measuring the mass-energy content on the other. But when we replace the actual distribution of matter by an average of this distribution, that is to say when we average the terms describing the distribution of matter and energy, it is not trivial to perform the same operation on the other terms of the equation, those describing the curvature. Replacing those terms with the ones corresponding to a homogeneous universe (which is what cosmologists blithely do) is not justified.

There is in this simplistic homogeneous and isotropic model (on which all current analyses are based) an even more unwarranted assumption: that the cosmological fluid is a "perfect fluid". This means assuming that the gas of matter and energy is free of any viscosity and thermal conduction. It also means that the forces acting on a given surface are only pressure forces, normal to the surface, with no lateral or shear forces (also known as "stresses"). This assumption seems violently incorrect, because everything suggests the existence of structures in the early periods of the universe, where the presence of vortices indicates the action of lateral forces. We know that everything rotates in the universe, friction and viscosity being present, as galaxies revolve while forming (and form while revolving!). And such rotations are automatically excluded in the context of such an ideal fluid.

In summary: the equations of a "world average" are not equivalent to the "average equations" of the real world.

This is especially true since the terms to be averaged contain what are technically called "derivatives". And those derivatives are difficult to tame when one tries to average them: by nature they are operators acting locally, and it is therefore difficult, if not impossible, to give them an overall meaning over a large volume. As Einstein's equations require, the fact that our universe is inhomogeneous implies that the expansion of space cannot, for its part, be uniform. Therefore even if space has a certain homogeneity on large scales, it is probably wrong to assume that it must for this reason undergo a uniform expansion.
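
The point can be written compactly. Einstein's equations have the schematic form $G_{\mu\nu}(g) = \kappa\,T_{\mu\nu}$, where $G_{\mu\nu}$ is a nonlinear differential operator acting on the metric $g$. Averaging the matter side over a large volume does not produce the equations of the averaged metric, because averaging does not commute with a nonlinear operator built from derivatives:

$$\big\langle G_{\mu\nu}(g) \big\rangle \;\neq\; G_{\mu\nu}\big(\langle g \rangle\big).$$

(In the cosmological literature this mismatch is discussed under the name of the averaging or backreaction problem, a label the present text does not itself use.)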

Qualitatively, if one assumes a uniform underlying expansion, one can hardly imagine that there is not, beyond a certain speed, some kind of "transition" (but of what kind? we do not know) between the largest so-called peculiar velocities of galaxies and the "velocity" of the Hubble expansion. Besides, one can argue that this transition occurred in the past: if at the origin of the world every portion of space was subject to expansion, the over-dense regions afterwards gained autonomy through gravity and eventually formed galaxies and clusters of galaxies (it remains unclear whether individual galaxies grouped into clusters or whether supermassive condensing clouds fragmented into galaxies). But if it did take place, how this transition occurred no one knows.

Result: it is an illusion to think one can separate, in the velocities of the galaxies, a component due to a cosmic expansion taken as strictly uniform and a component due to a peculiar velocity of the object with respect to the centre of gravity of the cluster, itself assumed motionless in the cosmological fluid. More precisely it is an idea still vague and inconsistent because, I repeat, no seriously established theory is able to support it and thus give it meaning. It would be wrong to think that the answer can be given by observation alone. If in general only observation is able to validate a theory, it cannot in any event compensate for the lack of one. We have the Einstein equations, and it is to them that we must return.

There is observational evidence that the simplistic model in which independent clusters (owing their independence to gravity) are embedded in a homogeneous and isotropic underlying space is unable to account for reality. The observations show that, in clusters rather close to our Galaxy, the peculiar velocities of the galaxies are comparable to the expansion rate given by the Hubble law. Thus for the Coma cluster the overall pseudo-velocity reflecting the expansion of space is about 7000 kilometres per second, while the velocity dispersion (reflecting the motions internal to the cluster) is about 5000 kilometres per second. But if the expansion velocity and the local velocities are of the same order of magnitude, this shows that there has been (and still is?) a blending of the two components which the model, on the contrary, treats as independent of each other: we cannot separate one component from the other. A sound theoretical model would be needed to provide the means of making the expected distinction between a peculiar velocity (which enters the virial formula) and a cosmic velocity (reflecting the Hubble expansion), by redefining the concepts in the framework of a more consistent scheme.

Be careful: that one can represent a model close to this scenario by introducing suitable parameters is (unfortunately!) possible. The computer does what it is asked to do, even if the exercise has no physical relevance. But parameterization is not physics; astrophysics is dying right now of an abuse of parameterization. Rule of thumb does not replace thinking. So far in science formal theory has always been critical, and it is always the reference to a true and solid theory that has allowed discovery, not brute, thoughtless, empirical calculation, especially when it claims a utopian precision through the proliferation of parameters and the epicycles of mathematical formalism. The danger of trusting a kind of "observational cosmology" is now slyly present. Yet empirical observation alone is not enough, and one must necessarily go through a robust physical model to approach the reality of things.

To summarize this point: it is clear that the universe is expanding. But in the context of a world subjected to the cosmological principle (i.e. decreed homogeneous and isotropic), issues remain. Are superclusters expanding? If they are not, and are instead at rest in the cosmological fluid, how is it possible to delimit them? At what distance from the centre of the cluster does the Hubble expansion appear? As long as we remain within the framework of this ideal homogeneous and isotropic model (a model that "does not correspond to reality", taking the expression literally!) we will not even have at our disposal the formulas needed to pose the problem. What is expanding and what is not? How does the passage from a static supercluster to the surrounding expanding space take place? How dare one speak of the dynamics of clusters of galaxies, and infer from it the presence of dark matter, so long as those issues are not resolved?

Galaxies do not spin smoothly

After the arguments related to the study of clusters of galaxies, the existence of dark matter is further advanced by the advocates of this mysterious and ghostly "material" to explain, they say, the rotation curves of galaxies. But to begin with, what is meant by the "rotation of a galaxy"? A galaxy is a complex and somewhat disordered set of stars, interstellar gas, molecular clouds and dust. These components do not rotate like a solid body; they undergo differential motions, and therefore it is not possible to assume that measuring the speed of one component will provide a valid number for the galaxy as a whole. And if it is true that organized structures such as spiral arms, rotating currents of matter or trailing clouds of dust stand out in those huge concentrations of stars, galaxies never show a perfect rotational symmetry.

Technically speaking, the difficulty of extracting a rotation speed from the data is severe, and it has always been known to the astronomers responsible for interpreting stellar spectra. It is indeed difficult to separate the individual contributions of different regions of a star or other celestial object, because they are mixed along the same line of sight all the way to the receiver. The observer has no direct access to the speed of a given region located at a certain distance from the centre of the galaxy; he is instead faced with numbers to be decoded in order to distinguish the different emitting parts.

The general method of decoding the observational data consists in considering a more or less complicated model of the galaxy, calculating the radiation to be expected along a given line of sight, and then comparing with the measurements in order to modify or validate the initial model. However this procedure is dangerous, because it can easily lead to circular reasoning prohibiting any trustworthy conclusion: we find in the model what we put into it. For example one commonly chooses as a galaxy model a flat disk with axial symmetry, while in reality the galaxy has a certain thickness (which moreover is not constant), its components do not move at the same speed, and it never has the perfect symmetry attributed to it (even if it often holds this property "wholesale"). Similarly the kinematic description of the spiral arms, which strike the eye with their sumptuous design, is very difficult to include in the model, for the origin of those arms is unclear, so much so that their existence was never predicted by calculation.

The world (as usual?) conspires to make things difficult. Observing a velocity in astrophysics is easy in principle, because the measurement of its effect, known as the Doppler effect, is within the reach of instruments via a simple measurement of wavelength. But in practice it turns out to be harder. To begin with, the Doppler effect gives only the radial component of the velocity, that is to say the part of the velocity directed along our line of sight (corresponding to a motion either away from us or towards us). Any lateral movement is undetectable because it produces no such effect (it could be detected through the displacement it induces on the sky, but this is noticeable only for relatively nearby objects; otherwise the displacement is too small to be seen). Therefore if a galaxy is seen edge-on, we easily detect the rotational velocities at the edge, because there they are radial. By contrast, close to the centre of the galaxy the displacements due to rotation are tangential and thus become immeasurable. Moreover in this central area regions at different distances from the centre mix along the line of sight (a pitfall mentioned above) and are thus difficult to separate. Alas, in the opposite extreme case, when the galaxy is seen face-on, different regions are indeed more easily distinguished, but their speeds of rotation become undetectable, because for us they represent transverse movements producing no Doppler effect!
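
A minimal sketch of the two effects at play (the Doppler formula in its low-velocity form, plus the geometric projection; all numbers are illustrative):

```python
import math

# Doppler shift and line-of-sight projection, in the low-velocity limit.
# Illustrative numbers only.
C = 299_792.458          # speed of light, km/s
LAMBDA_0 = 656.28        # rest wavelength of the H-alpha line, nm

v_rot = 220.0            # assumed rotation speed, km/s

for inclination_deg in (90, 45, 0):   # 90 = edge-on, 0 = face-on
    # Only the radial component v * sin(i) produces a wavelength shift.
    v_radial = v_rot * math.sin(math.radians(inclination_deg))
    delta_lambda = LAMBDA_0 * v_radial / C
    print(f"i = {inclination_deg:2d} deg: v_radial = {v_radial:6.1f} km/s, "
          f"shift = {delta_lambda:.4f} nm")
```

At 90 degrees the full 220 km/s shows up as a shift of about 0.48 nm; at 0 degrees (face-on, the Vega-like case discussed later) the shift vanishes entirely, however fast the galaxy turns.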

The rotations of galaxies are not stabilized

Finally, as in the case of clusters of galaxies, it is difficult to admit that the movements of matter within a galaxy have reached a steady state, though the models are based on this assumption. It is certain that groups of stars, old or new, compact or loose, are not physically separated from the rest of the galaxy. Moreover it is very difficult to know whether a particular observed star belongs to a particular group or not: it may lie by chance in the field of view or may merely be passing through the group. It is therefore difficult to define a group of stars that would be independent of the rest; the groups are made and unmade at the whim of stellar motions. Thus our Sun, since its birth, has gone many times (twenty?) around the Milky Way, and one suspects that its environment has changed significantly over that time. Its speed has therefore not stabilized but continues to undergo random influences not described by the equations of the global galactic model used to interpret the observational data. The gravitational field that applies to a given star or a given (short-lived) group of stars varies greatly from one orbit to another. It is neither uniform nor constant at a given fixed distance from the centre. We find again, in a different form, that a galaxy is not an object with the axial symmetry that the modelling supposes.

Lastly, the description of a galaxy as a gas of point-stars subject only to gravitational forces, the hypothesis that permits writing the virial theorem, is contradicted by the numerical simulations. Those simulations require the introduction of artificial viscosity forces into the calculations in order to "reproduce the observations" (as they say). And this shows that the kinematics of galaxies belongs to a true hydrodynamics of fluids, a chapter of astrophysics in which the processing tools are sorely lacking.

To show further how different the reality of galaxies is from the equilibrium numerical models used by astronomers, let us note the recent discovery of flows of matter around our own Galaxy. The latter appears to be devouring a nearby dwarf galaxy contained in the "Sagittarius stream", a filament of stars, interstellar clouds and dust surrounding the Milky Way itself. Nowadays we realize that encounters between galaxies are more common than we previously imagined. When a galaxy is born from the merger of two smaller ones, it even happens, as has been observed, that it contains two populations of stars rotating in opposite directions. Moreover, even if the question is not yet settled, some evidence suggests that galaxies are born rather by fusion of smaller components. Our Galaxy may have swallowed a dozen dwarf galaxies and may even continue to do so now.

Flows of matter due to collisions: the situation is similar for our closest neighbour, the Andromeda galaxy. Its structure has always appeared rather mysterious, because the spiral contains a vast ring of enigmatic origin. But simulation calculations and new observations suggest that the galaxy was struck by a small nearby galaxy (Messier 32 or M32, the Andromeda galaxy being M31, the thirty-first object of the famous Messier catalogue). This companion would have crossed the disk of Andromeda approximately 200 million years ago and produced huge shock waves. Under these conditions the presence of several rings of material and the complicated distribution of matter in the spiral make it difficult to analyse the rotation of the whole. Finally, as I pointed out above, it is likely that the Andromeda galaxy will collide with the Milky Way in a few billion years, an event which will seriously disrupt the rotation of both worlds.

The coup de grâce to the analysis of the rotation of the Andromeda galaxy (and the argument can be repeated for others) lies in the fact that its nucleus was found to be double, a fact whose origin is mysterious and which obviously complicates the analysis of the movements of the object's various components.

In such a context it is questionable whether the concept of the "rotation curve" of a galaxy has a sufficiently precise and consistent meaning to allow a reliable mass measurement to be drawn from it. The world is always, without exception, far more complicated than our simplifying models assume.

In 1899 it was temporarily legitimate to appeal to an unknown physics

At this point I conclude from this discussion that the alleged evidence for the existence of dark matter is too fragile to be convincing. But it must be said that, especially under those conditions, the approach consisting in appealing to an unknown physics is, dare I say it, frankly reprehensible. How can one accept that a physicist invokes a process that, precisely, lies outside his own physics? I know only one example, similar at first sight, which might seem to prove my conviction wrong, but we shall see that the context of that story was different.

At the end of the nineteenth century a serious debate shook the scientific world, pitting physicists against geologists over the age of the Earth. Theoretical physicists like Lord Kelvin (1824-1907) advanced a figure of about one hundred million years based on a theory of the cooling of the Earth, a theory estimating for how long the surface temperature had been temperate enough not to put life at risk. Geologists, instead, felt that several billion years were necessary for the evolution of species to have had time to occur and for the deposits of fossil strata to have reached the measured thicknesses. Faced with this profound and apparently insoluble contradiction, the American geologist Thomas Chamberlin (1843-1928), in response to a rather arrogant intervention by Lord Kelvin at a scientific meeting in 1899, formulated the hypothesis of the presence at the centre of the Sun of a source of unknown energy. This energy was thought to be of an "atomic nature", because the temperature conditions prevailing in the heart of stars would be likely to promote the release of new forms of energy contained in elementary particles. Let us emphasize that he was leaning on physical grounds, since it had been shown (the equations of the internal structure of a star require it) that the temperature at the centre of the Sun must be considerable, of the order of tens of millions of kelvins (against six thousand at the surface), which was calculated to induce phenomena enormously more powerful than those known on Earth (recall that the kinetic energy of particles in a gas is directly proportional to temperature).

The idea was therefore suggested that this still hypothetical "nuclear energy" would have been able to supply the required energy for the required time. Chamberlin's speech was remarkably prophetic, since a few years later Einstein proposed his famous formula E = mc², establishing an equivalence between mass m and energy E, and closed the debate by revealing the fantastic energy source allowing the Sun to shine for billions of years.

But would this appeal to an unknown physics, masterfully confirmed afterwards, justify the use by current cosmologists of an unknown form of matter? Certainly not! The situation at the beginning of the twentieth century had nothing to do with the current situation at the beginning of the twenty-first. First, in 1899 the contradiction between theory and observation was conclusively established: impossible to reduce by an order of magnitude the timescales demanded by the physicists of the Earth, impossible to increase by an order of magnitude the lifetime of the Sun granted by the physicists of the sky. Each camp had had the time to redo its accounts. Moreover, researchers were at that time discovering the atomic world evoked by Chamberlin: a world that would quickly show its richness and whose remarkable properties would be revealed at a rapid pace in various fields. Thus the discovery of radioactivity dates from 1896, and its discoverer Henri Becquerel (1852-1908) also spoke in a prophetic (but justified) way of a "new order of things", that is, of a genuinely new physics to come. Finally, one comes to see that Chamberlin's suggestion was based on scientific findings of value, on both the experimental and the theoretical side.

Invoking a physical mystery today is unfounded

He could certainly have been wrong, and it is in a spirit imbued with modesty that he intervened in his talk, but Chamberlin thus had good and just reasons to appeal to the nascent atomic physics, even though it was still unknown. Cosmologists today, by contrast, do not have any theoretical or experimental evidence for their extravagant idea of dark matter (nor do they show the modesty that befits the true scholar!). As much as the atomic world invoked by our talented geologist was a world that science was beginning to explore, so much is dark matter a pure figment of the imagination, summoned for "the needs of the cause" and drawn from the (in this case pretentious) theoretical speculation of researchers.

The responsibility of the adherents of dark matter toward science is great, because I think there is in this case a real and serious danger of diverting our science from its founding principles and leading it astray in its search for truth. Indeed, if recognized scientists allow themselves to postulate the existence of "anything" (a term which I think sums up the state of ignorance about the mythical dark matter) to resolve without further ado an apparent difficulty in interpreting observational data, what safeguard can be put in place to prevent fanciful theories from blooming around every as-yet-unexplained phenomenon?

We must remember that science has never progressed by pure "interpretation of data". The theories that have proved powerful and productive were never born of a purely explanatory concern. Thus the Copernican system was not designed to explain directly the movements of the planets established empirically by Kepler. Einstein's theory was not designed to explain the recession of the galaxies later discovered by Hubble. And when the expansion of space was proposed, the theory was established firmly enough not to be abandoned overnight on the grounds that it provided an age of the universe smaller than that of the Earth! It is by a back and forth between theory and observation that science advances and makes discoveries, surely not by the mere concern to explain observations (which, precisely, are almost always "questionable" in their interpretation). Supporters of dark matter are not doing science: they are doing it a considerable disservice. In such an anti-scientific context, how can we prevent people from feeling authorized in their turn to offer irrational explanations, and arguments in favour of a physics we would not yet know? A physics that would transcend the present one? How are the excesses of spiritualism to be avoided?

In the summer of 2006 a reader wrote to me: "This is […] now the study of the immaterial. Are we not going to drift towards a metaphysical or religious approach in the explanations?" That is my deep fear, and I thank this correspondent for the accuracy and relevance of his query.

As a sign of this perversion, I have around me examples of researchers who will invoke dark matter to explain disparate facts whose interpretation seems problematic. I was thus treated to the proposal of an impact of dark matter (sic) to explain some oddities (which I will certainly not detail!) concerning the fall of a good-sized meteor in Eastern Siberia (near Tunguska) in 1908, a meteor that exploded at perhaps 8 km altitude. I have likewise been treated to dark matter as the explanation of anomalies in the behaviour of distant space probes (which seem to show differences between velocity measurements and calculations). In short, dark matter in every sauce. "Whatever" leads to "whatever". This proves that scientists have cynically abandoned the usual criteria for ensuring the soundness of their reasoning.

Note in this connection that there exist in astrophysics a number of phenomena that are still misunderstood. This is not to say that somewhat sensible scientists (there are some left) would allow themselves to invoke an alien physics to explain the unexplained. For example, cosmic rays exist: particles with an energy so great that we do not understand how it could have been communicated to them. We do not know which phenomena accelerated those particles to such dizzying speeds. But nobody would argue that they indicate the existence of an unknown energy source. Similarly we do not know all the processes leading to the formation of the condensations that are planets, stars and galaxies. Will one maintain for this reason that mysterious angels exercised their magical powers to give birth to the heavenly bodies? So why do the advocates of dark matter feel entitled to invoke dark forces? Do they not see that in doing so they promote deviations from rationality? Should we be surprised if this disrespect for the rules of science leads people to propose that life, whose circumstances of birth still elude us, is the work of an all-powerful creator god and not the result of "natural" processes?

The new farce of dark energy

Quand la borne est franchie il n'est plus de limite ("once the boundary is crossed there is no more limit") (Ponsard). Once the barriers of critical vigilance had been removed by the introduction of elusive dark matter into their theory, nothing prevented the cosmologists from falling into their errors again. Faced anew with a measurement difficult to interpret within the models in use, they thought it wise (!) to once again call upon an ingredient of unknown origin that might "produce" the desired effect. Unbelievable but true! Estimating the distances of very distant supernovae, at depths of up to ten billion light years, astronomers concluded that these objects were located farther away than the models predicted. It was as if space had stretched more than expected, as if the expansion were accelerating instead of slowing down (a decelerating expansion being what the previously accepted models implied). The "farce" of dark energy began, replicating that of dark matter. A dark energy which, by its presence, would be able to stretch space? Just insert a simple parameter into the equations and voilà.

Dark energy: why a farce?

True to the spirit of this article, my first answer is easy. Dark energy is a farce for the simple reason that it was never foreseen by any theory! And no scientific discovery has ever been made without recourse to a solid theory. Science is not inherently explanatory; it is necessarily also creative.

And a theory following an observational proposition: is that possible? Not in this case. When trying to formalize dark energy "after the fact", that is to say in a semi-empirical way, cosmologists are far from reaching a coherent model, as the estimates of "preliminary" theoretical calculations (I am not opposed to such a procedure, as it is always good to make order-of-magnitude calculations) fall sixty orders of magnitude below the values claimed by the observers. This represents an unimaginably large gap: one might as well say that the incompatibility is total and the proposed theory absurd. Those sixty orders of magnitude reflect, as I say and repeat on this site, the inability of current physics to include in a single conceptual framework both the atomic scale (characterized by the Planck time of 10⁻⁴³ second) and the cosmic scale (characterized by the age of the Universe, 10¹⁷ seconds). We know that we cannot understand the difference between the atomic and the cosmic. It is therefore necessarily an imposture to claim to introduce into the cosmos an energy whose characteristic scale would be atomic. It is an imposture to incorporate into the curvature of the entire universe a term which belongs to the Planck scale. The misleading fiction of dark energy rests on that deception.
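
The sixty orders of magnitude invoked here can be read off directly from the two timescales just named:

$$\frac{t_{\mathrm{universe}}}{t_{\mathrm{Planck}}} \;\approx\; \frac{10^{17}\ \mathrm{s}}{10^{-43}\ \mathrm{s}} \;=\; 10^{60}.$$

(This is a sketch of the arithmetic as the text states it; the usual quantum-field-theoretic version of the vacuum-energy discrepancy is phrased in terms of energy densities, which the text does not go into.)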

The degree of inaccuracy in estimating the distances of galaxies is extremely high

Clearly the entire story of dark energy rests on estimates of the distances of galaxies. But what are those distance measurements worth?

The distances of the most distant galaxies are not determined directly. They rest on the long-range application of the Hubble law, which stipulates that the equivalent speed of recession is proportional to distance. Note that this approach runs the risk of circular reasoning: measuring a distance using Hubble's law and then finding that the Hubble law is "verified" proves nothing about the validity of that law. To confirm the validity of the procedure, the Hubble constant has to be calibrated at short distances, and one must have at one's disposal "external" measurements of distance independent of those based on the shift of spectral lines towards the red. For that purpose astronomers have used very bright supernovae, still visible at large distances, classified as Type Ia, whose intrinsic luminosity is assumed to be the same for all. Under this hypothesis a measurement of the apparent brightness, compared with the assumed intrinsic luminosity, directly gives the distance. Do the two distance scales, that of the galaxies and that of the supernovae, coincide? The answer is no. The most distant objects are more distant than predicted by a classical Hubble expansion, one which slows down gradually as the universe ages.
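
The logic of the comparison described here can be sketched in a few lines (a toy calculation: the absolute magnitude and Hubble constant are conventional round values, the supernova is invented, and the strictly linear Hubble law is exactly the assumption under discussion):

```python
# Standard-candle distance versus Hubble-law distance, as a toy comparison.
# Illustrative values only.
M_ABS = -19.3    # assumed intrinsic (absolute) magnitude of a Type Ia supernova
H0 = 70.0        # assumed Hubble constant, km/s/Mpc

def luminosity_distance_mpc(m_apparent: float) -> float:
    """Distance in Mpc from the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10 ** ((m_apparent - M_ABS + 5) / 5) / 1e6

def hubble_distance_mpc(v_recession_kms: float) -> float:
    """Distance in Mpc from a strictly linear Hubble law, v = H0 * d."""
    return v_recession_kms / H0

# A hypothetical supernova: its apparent magnitude and its host's recession velocity.
d_candle = luminosity_distance_mpc(m_apparent=19.0)
d_hubble = hubble_distance_mpc(v_recession_kms=30000.0)
print(f"candle distance ~ {d_candle:.0f} Mpc, Hubble distance ~ {d_hubble:.0f} Mpc")
```

The case for acceleration rests on the candle distances coming out systematically larger than the Hubble distances at great depth; every step of that comparison inherits the assumptions listed above.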

Instead of taking note of the discrepancy and questioning the validity of the assumptions, cosmologists have preferred to cling to them without examination. Thus the existence of a Hubble expansion corresponding to a homogeneous and isotropic model of the universe (that is to say an expansion of space independent of direction), upon which the specific motions of galaxies and clusters of galaxies would be superimposed (as I described earlier), is treated as untouchable dogma. Similarly the intrinsic luminosity of Type Ia supernovae is taken as independent of the particular object considered. The latter assumption, impossible to prove, is gratuitous: it would be the first time in astrophysics that a class of objects was uniform (i.e. composed of identical objects). Again, external confirmations would be needed to validate the hypothesis and make sure it does not reduce to wishful thinking. Without wishing to play the prophet, one can bet without fear of being wrong that the non-uniformity of Type Ia supernovae will soon be established.

Everything depends, of course, on the value of the distances of galaxies. And with what accuracy, or rather inaccuracy, are those famous distances known? Judge for yourselves! The distance of the nearest large galaxy, Andromeda, is given, depending on the source, as ranging between 2.5 and 3 million light years. No one can produce a more accurate value. So we do not know the distance of our next-door neighbour Andromeda to better than 20%. And one can easily imagine that the farther out you go, the greater the imprecision. This imprecision is huge. Is it not astounding that cosmologists claim with aplomb to resolve the question of the expansion of space as a function of distance (the Hubble law, the alleged accelerating expansion) when they know the distances of galaxies so poorly? The Hubble constant is the ratio of velocity to distance; a percentage error on distance therefore translates directly into the same percentage error on the Hubble constant, and thence onto all distance measurements of galaxies.
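
In figures, using the interval just quoted:

$$\frac{\delta d}{d} \;\approx\; \frac{3.0 - 2.5}{2.75} \;\approx\; 20\%, \qquad H_0 = \frac{v}{d} \;\Longrightarrow\; \frac{\delta H_0}{H_0} \;\approx\; \frac{\delta d}{d} \;\approx\; 20\%.$$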

Please forgive me for using the following adjectives, but faced with this incompetence (obviously not blameworthy in itself: it would suffice to acknowledge it) I think they fit the situation. I claim it is grotesque, absurd and inconsistent to dare to speak of an accelerating expansion of space, a concept based on empirical measurements of distances to objects located billions of light years away, while we do not know to better than 20% the distance of our neighbour, located a thousand times closer, at a mere three million light years. Could anyone answer such an objection seriously and honestly?

Where does this inaccuracy in the measurement of distances come from? The answer is simple. The determination of the distances of galaxies rests on the measurement of the distances of the stars that compose them. But the theoretical models of stars at our disposal are only an imprecise reflection of reality: real stars are not "equal" to theoretical stars. What about the imprecision of the distance measurements of stars? Still huge. The distance of the nearby Pleiades star cluster, the charm of our winter nights, is not known to better than 10% (despite the measurements of the Hipparcos satellite, which was supposed to provide us with reliable distances): it lies between 380 and 440 light years. In June 2007 a new distance to the Orion Nebula (a neighbouring interstellar cloud) was proposed: 1,300 light years, or 200 less than previously admitted, corresponding to a 15% revision. Edifying. With such a blur in the measurements, how could we settle the question of great distances while that of nearby ones is still unsolved?

What is the remedy for this scientific impasse? To return to a physical analysis of stars and give up (again, as in the case of the Universe) the idea that the ultra-simplistic models used by astronomers are able to describe reality. All current discussions about stars, I mean all, are based on the dogma that a star is wholly characterized by its so-called "fundamental parameters", namely its radius, its total luminosity (or equivalently its temperature) and its mass (to which should probably be added age and chemical composition). But if these parameters have a meaning for theoretical models, they do not work in stark reality. Believing that a star is entirely determined by its mass, luminosity and radius is a belief that is not only gratuitous but false. Besides, if theorists do not know this, researchers close to the observations are well aware of the situation: they find that every star has its singularity and cannot be reduced to a few parameters. If stars depended only on a few parameters, they would fall into well-defined classes with identical members, which is not the case. Stars share one common property: they are all different.

The most controversial and sensitive issue in stellar physics is undoubtedly the question of the "radius" of a star. A star is an open system, as we said earlier, and only some stars are stable enough to show a clear separation between their material and the surrounding environment. Most of the time the surface gravity is insufficient to retain the outer layers, so that a tumultuous transition zone develops, with huge movements of great importance. These phenomena are neither inventions nor secondary facts: they show themselves whenever astronomers analyse, even roughly, the starlight. This is true for all stars. Of course (the world conspires to make things difficult for science) the most interesting objects are the brightest ones (because they can be seen from far away), but these very bright stars are also the largest and without doubt the most recalcitrant to our modelling.

Here is another example proving conclusively that our stellar models can neither predict nor explain the physical phenomena occurring at the surface of stars. We know that the Sun is surrounded by a layer hotter than its "surface" (the photosphere): the chromosphere, where the temperature reaches tens of thousands of degrees. At slightly higher altitude the temperature rises dramatically to the hundreds of thousands of degrees of the corona, the medium revealed only during total solar eclipses. Yet current models of stars, based on assumptions of equilibrium, are unable to predict or reproduce this behaviour of matter in the surface layers: they all say that the temperature can only decrease outwards, whereas we see it increase dramatically after passing through a minimum. It is in these upper layers of the Sun that the solar wind develops, ejecting matter. But no model can predict or calculate reliably how this mass loss occurs. We must content ourselves with a numerical parameterization, which is inherently unsatisfactory and artificial. The surfaces of stars do not let themselves be tamed by our equilibrium physics.

A timely discovery, dating from April 2006, perfectly illustrates the impossibility of identifying real stars with the models theorists propose. The star Vega, in the constellation Lyra, is one of the most brilliant in the sky and brings happiness to our summer nights. It has always been considered a standard object, a reference used to calibrate, by comparison, the luminosity of other stars. But wham: the beautiful edifice of measuring the brightness, and therefore the distance, of the stars is cracking. We have found that Vega is more complicated than the nice models supposed to describe it. The salient fact is that our favourite reference star rotates much faster than imagined. This rotation had not been noticed before because the axis of rotation happens to point toward us, so that the movements are transversal and therefore (as I said above) not detectable by the Doppler effect. Because of its rapid rotation, the structure of the star is thoroughly upset, and the models are currently unable to incorporate the resulting dynamical effects. Ultimately it is likely that the intrinsic luminosity of Vega, as well as its "temperature", is poorly defined. Rotation causes the surface temperature to vary with latitude on the disk. In particular the temperature of the poles, corresponding precisely to the part we see from Earth, could be 2000 degrees higher than at the equator. What, then, is the temperature of Vega? Can a star really be characterized by its "temperature"?

Another blow to the measurement of distances in the Universe. If our standards disappear, what can we rely on? And in that case how reliable are the arguments in favour of dark energy? They are of no value.

Once again: one year later (late May 2007) an exceptional new observation confirmed, if confirmation were still needed, the inadequacy of our stellar models to reality. For the first time in history, astronomers were able to reconstruct the image of the surface of the star Altair (in the Eagle) and confirmed that the star is significantly affected in its structure by the rapidity of its rotation (it turns on itself three times a day, against one turn in 25 days for our Sun!). The star shows a pronounced bulge at the equator, a zone where the temperature is lower than at the poles, and lighter, warmer spots mark its surface. This reality is indisputable, yet it is ignored by the instruments of large surveys and by the models. Thus future analyses, for which every star is a well-behaved homogeneous sphere, will necessarily be marred by serious errors, and the distance measurements will be bad.

A new extraordinary discovery in mid-August 2007: the star Mira, observed for four centuries and admired for its brightness variations (hence its name, "wonderful"), shows a quite unexpected phenomenon not predicted by the theoretical models. It has expelled into space a portion of its own material and appears in ultraviolet light like a giant comet whose tail is thirteen light years long. The stars calculated by theorists are in equilibrium; Mira is not.

The hegemony of models is harmful and unacceptable

The conclusion of this discussion? The world is much more complicated than the theoretical models assume, whether stars, galaxies, clusters of galaxies or the universe as a whole are concerned. It is therefore not only futile but unhealthy and scientifically absurd to try at all costs to stick our summary modelling onto the real world. It is inconsistent to introduce ad hoc parameters to adjust to the observations formal schemes whose physical content is insufficient. It is useless to ask of models what they cannot provide, since they do not contain the necessary answers.

The wild parameterization of rudimentary and incomplete physical models, a practice largely boosted by the computer, is the major scourge plaguing astrophysics today.

As I have tried to show, dark matter and dark energy are just parameters devoid of any physical meaning, introduced to make models (unsatisfactory in any case) coincide with reality. They do not correspond to the reality of things (I adooore that expression). We have seen that the astrophysical models are challenged at all levels; to put it trivially, not one of them can rescue another. Theoretical stars, defined by a few basic parameters and assumed to be in a state of hydrostatic equilibrium, are not real stars (and do not answer me that it is possible to treat more complex cases by adding suitable parameters! I know it only too well). Galaxies are not in equilibrium and do not have axial symmetry; they undergo collisions and have a complex structure. No matter in the universe is in hydrostatic equilibrium. Clusters of galaxies are not "virialized" and also undergo the influence of cosmic motions. Space-time is not homogeneous and isotropic. Nothing in the universe is in equilibrium.

Because of the deficiencies of analysis resulting from this deep gap between the real and the modelled, distances in the Universe are known with a high degree of inaccuracy and movements within galaxies defy understanding. Now cosmology rests on the astrophysics of stars and then of galaxies (obviously, because the universe is ultimately composed of stars). Given the approximate conclusions of stellar studies, the cosmologists' claim to achieve near-absolute precision in their modelling is unjustified.

In this regard, the comparison that can be made between the structure of stars and that of the universe is extremely enlightening and helps me summarize and conclude. Real stars are in a complex dynamical state with all kinds of velocity fields: rotation, large-scale turbulence, small-scale ejection of matter, internal circulation, organized currents, convection, and so on. The stellar models, on the contrary, are essentially static and spherically symmetric.

What must be understood (honestly, something a physicist can easily understand!) is that the equations of a static, spherically symmetric star are not the same as those of a real star with complex dynamics. Therefore solving the first problem (the darling model to which astrophysicists blindly cling) does not amount to finding the mean parameters of a real star, one whose material is in motion.

If we knew the equations of a real star and performed a certain average on them, for example on the temperature (or if this average were performed directly on observations), it is certain that the result would not be the temperature of a mean static model.

To think about: the average temperature of a real star is not the temperature of an average stellar model.

Faced with this undeniable truth, the analyses of astrophysicists are sometimes crazy; judge for yourself. While the temperature of a real star can vary by several thousand kelvin from one point of its surface to another, did you know that specialists "play" at determining the temperature of the supposedly equivalent model to within a few degrees? Not only does this make no sense, but it produces numbers (you can always arrange for the codes to yield some result) in which no confidence can be placed. What is certain, on the contrary, is that these numbers are wrong.
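To see the point numerically, here is a minimal sketch (my own illustration, with invented numbers, not the output of any stellar code). The Stefan-Boltzmann law F = σT^4 is nonlinear in T, so the temperature of the static model that reproduces the total flux of a spotted surface (the fourth root of the average of T^4) is not the plain average of the surface temperatures:

    import numpy as np

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    rng = np.random.default_rng(0)
    # Hypothetical surface: 100 000 patches whose temperatures scatter by
    # about 1500 K around 6000 K, the kind of spread described above.
    patch_temps = rng.normal(loc=6000.0, scale=1500.0, size=100_000)
    patch_temps = np.clip(patch_temps, 3000.0, None)  # keep values physical

    mean_temp = patch_temps.mean()               # <T>: plain average of T
    mean_flux = (SIGMA * patch_temps**4).mean()  # <F> = sigma * <T^4>
    t_eff = (mean_flux / SIGMA) ** 0.25          # (<T^4>)^(1/4)

    print(f"plain average of surface temperatures : {mean_temp:7.1f} K")
    print(f"temperature of flux-equivalent model  : {t_eff:7.1f} K")

With this assumed spread the two temperatures differ by several hundred kelvin, so quoting a model temperature "to within a few degrees" says nothing about the real star.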

It is the same for our Universe. Cosmologists insist on using an average model corresponding to a homogeneous and isotropic universe with no internal motion, and this position is absurd. The fact that a star is roughly spherical (as seen from a great distance) does not mean that a spherical theoretical model matches it in its reality. Likewise, the fact that our Universe appears homogeneous and isotropic on large scales does not mean that the homogeneous and isotropic theoretical model matches it in its reality.

The average expansion of our real dynamic Universe is not the expansion of an average homogeneous and isotropic universe.

The two can of course be glued together… provided one adds parameters such as dark matter and dark energy, entities that have no physical meaning.
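The same non-commutation can be shown with a toy calculation (again my own illustration, with invented densities, not a cosmological result). In a flat, matter-dominated Friedmann model the expansion rate is H = sqrt(8πGρ/3), a nonlinear function of the density, so the H computed from the averaged density is not the average of the local expansion rates:

    import numpy as np

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def hubble(rho):
        """Expansion rate of a flat, matter-dominated Friedmann model."""
        return np.sqrt(8.0 * np.pi * G * rho / 3.0)

    # Invented density field: half "voids", half "clusters" ten times
    # denser, a crude stand-in for the lumpy real Universe.
    densities = np.array([2.0e-27, 2.0e-26])  # kg m^-3, illustrative only

    h_of_mean = hubble(densities.mean())  # expansion of the average model
    mean_of_h = hubble(densities).mean()  # average of the local expansions

    print(f"H of the averaged density : {h_of_mean:.3e} s^-1")
    print(f"average of the local H's  : {mean_of_h:.3e} s^-1")

Here the two numbers differ by some ten percent; whatever parameters are then added to force them to agree measure the mismatch of the model, not a new substance.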

Truth does not triumph by itself

Let me end on a personal note. Having entered research in the sixties, I witnessed from inside the laboratories most of the major astronomical discoveries of the twentieth century. I lived through, for example, the battle over the big bang, with the recognition of that great theory by some and its irrational, visceral rejection by others. I followed the discussions around those inexplicable stellar spectra that would soon be interpreted as coming from objects both very distant (hence strongly shifted towards the red) and terribly bright: the quasars. I remember the first pictures in which we saw a pulsar flash. But I also watched helplessly the birth of the incongruous idea of dark matter and, more recently, in an equally unexpected and unmotivated way, that of dark energy, which we are asked to believe makes up 70% of the content of the Universe.

Above all I am sorry to see astronomy made sick by uncontrolled computerization coupled with impoverished modeling that ignores most of the actual physics of celestial bodies and certainly cannot deliver the accuracy claimed by fundamentalist theorists. For I also experienced the irresistible rise of information technology and the exponential growth of publications that came with it, without the improvement in the quality of results that might have been expected. What good are computers and the great surveys they produce if the value of the results does not improve in proportion?

What strikes me most, and grieves me because I feel disarmed in the face of these movements of opinion, is that experience has shown me that truth as such carries no extra argumentative weight, no persuasiveness. One could even say the contrary holds, for people are often more inclined to believe what they subjectively like than willing to pursue the objective dimension of the facts. In short, in my life as a scientist, faced with the aberrations voiced by one colleague or another, I naively thought that balloons of untruth would deflate by themselves: it was obvious to my mind that those misconceptions were destined to die on their own. But that hope was sorely disappointed. It is the intellectually weakest opinions that have flourished and proved the most popular, or at least the most publicized. (The introduction of the fatal anthropic principle is another example of a crazy, scientifically inconsistent idea that I thought doomed to oblivion; how naive of me!) So I learned that for ideas to triumph, even just ideas, one has to fight against the ambient intellectual laziness and against the ready-made certainties imposed by official science.

The observation is painful. What is to be done? I think these betrayals of scientific honesty reflect the spirit of the society in which we live, a society hungry for profit and profitability. I maintain that to change science (and it must be changed!) we must change society. Ours carries too many desires of domination and mastery of the world. Our society is tainted by too much concern for blind productivity. We were taught that bigger is more beautiful and more effective, but that has never been proven! The race for performance and the pursuit of gigantic projects ignore justice, fairness and truth.

Thus the history of dark matter is sadly indicative of the state of our world today. It shows that, by focusing on sanctified values presented without proof as positive and as promising "progress" (progress toward what is never specified), scientists have relegated the passion for genuine truth to the background. They impose the idea that we must run the largest computers and mount ever more ambitious and costly experiments, such as cataloguing a billion stars and millions of galaxies, or measuring the distance to the Moon to within a hair's breadth. Since production must go on, researchers are not allowed to pause and begin to cast doubt on the cosmological principle or the equilibrium assumptions. We are made to believe that the most sophisticated models (in form, but not in substance) are the most accurate, and the idea is inculcated that carrying out very large astronomical surveys will bring us decisive discoveries. But as long as astronomers believe that the universe obeys their cosmological principle and that stars comply with their three-parameter models, these projects are sterile in advance. To assign a given star some temperature and radius, when in reality that star has no defined temperature or radius, is inconsistent and will produce no knowledge. Likewise, to describe the real Universe, composed of churned matter of all kinds with motions in every direction, by equations applicable to a homogeneous and isotropic medium without internal velocities, comparable to a perfect fluid, can only lead to aberrations (which is precisely what we find!).

Perhaps we are now witnessing the decline of modern astrophysics. Perhaps the invention of dark matter and dark energy is a sign that cosmology, by vocation an artisan of truth, is becoming, through the worship of performance, the abuse of computerization and a blind fanaticism for theory, a science of fraud.

As an epilogue

Finally I would note, with a touch of malice, that every passing minute, day and year continues to reinforce my critical position regarding the dark-matter frenzy. Since the beginning of this farce, researchers have found absolutely nothing that could give substance to their whim. On the theoretical side they propose all sorts of things, but no results have come to market. As for the observations that would corroborate their crazy ideas, they too are missing, as is the elusive dark matter itself. In fact, the more time passes, the more the idea of dark matter loses its already shaky credibility. How can believers in dark matter be so gullible? I do not know.

Over the seminars I attended during my career, I was struck by the large proportion of speakers who postponed their discovery until the day after. The astronomers of my generation are naive and faithful followers of the "free beer tomorrow" slogan (the English counterpart of the French expression "demain on rase gratis"), implying that tomorrow never comes, since there is always a new tomorrow.

The conclusion is obvious: the discovery of dark matter is scheduled for tomorrow.

last modified: July 27th, 2014



URL :  http://www.lacosmo.com/DarkMatter.html