

The idea of multiple universes is more than a fantastic invention. It appears naturally within several theories, and deserves to be taken seriously, explains Aurélien Barrau.


Physics in the multiverse

Could our universe be a tiny island within an infinitely vast and diverse "multiverse"? Many of our current models, whether well established (such as general relativity) or speculative (such as string theory), lead naturally to a multiverse. These multiple universes are not theories but consequences of theories developed in response to well-posed questions of particle physics or gravitation. Several of the central problems of theoretical physics, notably complexity and naturalness, would then find a natural explanation. But this revolutionary proposal is not without dangers, and it requires deep conceptual and epistemological thinking.

Our entire universe is a tiny island within an infinitely vast and infinitely diversified meta-world? This could be either one of the most important revolutions in the history of cosmogonies or merely a misleading statement that reflects our lack of understanding of the most fundamental laws of physics.

The idea in itself is far from new: from Anaximander to David Lewis, philosophers have exhaustively considered this eventuality. What is especially interesting today is that it emerges, almost naturally, from some of our best – but most often speculative – physical theories. The multiverse is no longer a model, it is a consequence of our models. It offers an obvious understanding of the strangeness of the physical state of our universe. The proposal is attractive and credible, but it requires a profound rethinking of current physics.

At first glance, the multiverse seems to lie outside of science because it cannot be observed. How, following the prescription of Karl Popper, can a theory be falsifiable if we cannot observe its predictions? This way of thinking is not really correct for the multiverse, for several reasons. First, predictions can be made in the multiverse: it leads only to statistical results, but this is also true of any physical theory within our universe, owing both to fundamental quantum fluctuations and to measurement uncertainties. Secondly, it has never been necessary to check all of the predictions of a theory to consider it legitimate science. General relativity, for example, has been extensively tested in the visible world, and this allows us to use it within black holes even though it is not possible to go there to check. Finally, the critical rationalism of Popper is not the final word in the philosophy of science. Sociologists, aestheticians and epistemologists have shown that there are other demarcation criteria to consider. History reminds us that the definition of science can only come from within and from praxis: no active area of intellectual creation can be strictly delimited from outside. If scientists need to change the borders of their own field of research, it would be hard to justify a philosophical limitation preventing them from doing so. It is the same with art: nearly all the artistic innovations of the 20th century transgressed the definition of art that a 19th-century aesthetician would have given. Just as with science and scientists, art is defined internally by artists.

For all of these reasons, it is worth taking seriously the possibility that we live in a multiverse. This could shed light on the two problems of complexity and naturalness. The fact that the laws and couplings of physics appear to be fine-tuned to such an extent that life can exist, and that most fundamental quantities assume extremely "unlikely" values, would appear obvious if our entire universe were just a tiny part of a multiverse whose vastly different regions exhibit different laws. In this view, we are living in one of the "anthropically favored" regions. This anthropic selection has strictly no theological or teleological dimension and absolutely no link with any kind of "intelligent design". It is nothing other than the obvious generalization of the selection effect that already has to be taken into account within our own universe. When dealing with a sample, it is impossible to avoid wondering whether it accurately represents the full set, and this question must of course be asked when considering our universe within the multiverse.

The multiverse is not a theory. It appears as a consequence of some theories, and these have other predictions that can be tested within our own universe. There are many different kinds of possible multiverse, depending on the particular theories, some of them possibly even being interwoven.

The most elementary multiverse is simply the infinite space predicted by general relativity, at least for flat and hyperbolic geometries. An infinite number of Hubble volumes should fill this meta-world. In such a situation, everything that is possible (i.e. compatible with the laws of physics as we know them) should occur. This is true because an event with a non-vanishing probability has to happen somewhere if space is infinite. The structure of the laws of physics and the values of fundamental parameters cannot be explained by this multiverse, but many specific circumstances can be understood through anthropic selection. Some places, for example, are less homogeneous than our Hubble volume, and we cannot live there because they are less life-friendly than our universe, where the primordial fluctuations are perfectly adapted to serve as the seeds of structure formation.

General relativity also faces the multiverse issue when dealing with black holes. The maximal analytic extension of the Schwarzschild geometry, as exhibited by conformal Penrose-Carter diagrams, shows that another universe could be seen from within a black hole. This interesting feature is well known to disappear when the collapse is considered dynamically. The situation is, however, more interesting for charged or rotating black holes, where an infinite set of universes with attractive and repulsive gravity appear in the conformal diagram. The wormholes that possibly connect these universes are extremely unstable, but this does not alter the fact that this solution reveals other universes (or other parts of our own universe, depending on the topology), whether or not accessible. This multiverse is, however, extremely speculative as it could be just a mathematical ghost. Furthermore, nothing allows us to understand explicitly how it formed.

A much more interesting pluriverse is associated with the interior of black holes when quantum corrections to general relativity are taken into account. Bounces should replace most singularities in quantum-gravity approaches, and this leads to an expanding region of space-time inside the black hole that can be considered as a universe. In this model, our own universe would have been created by such a process and should also have a large number of child universes, thanks to its numerous stellar and supermassive black holes. This could lead to a kind of cosmological natural selection in which the laws of physics tend to maximize the number of black holes (simply because such universes generate more universes of the same kind). It also allows for several possible observational tests that could refute the theory, and it does not rely on any anthropic argument. However, it is not clear how the constants of physics could be inherited from the parent universe by the child universe with small random variations, and the detailed model associated with this scenario does not yet exist.

One of the richest multiverses is associated with the fascinating meeting of inflationary cosmology and string theory. On the one hand, eternal inflation can be understood by considering a massive scalar field. The field will have quantum fluctuations, which will increase its value in half of the regions and decrease it in the other half. In the half where the field jumps up, the extra energy density will cause the universe to expand faster than in the half where the field jumps down. After some time, more than half of the regions will have the higher value of the field, simply because they expand faster than the low-field regions. The volume-averaged value of the field will therefore rise, and there will always be regions in which the field is high: inflation becomes eternal. The regions in which the scalar field fluctuates downward will branch off from the eternally inflating tree and exit inflation.
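The volume-weighting mechanism described above can be illustrated with a toy Monte Carlo. All the numbers here (jump size, expansion rate, number of regions) are illustrative choices, not derived from any real inflaton potential; the point is only that the volume-averaged field rises once faster-expanding high-field regions dominate the volume:

```python
import random

def volume_averaged_field(n_regions=10000, steps=100, jump=0.1, seed=0):
    """Toy eternal inflation: each region's field phi random-walks, and
    regions with larger phi (higher energy density) expand faster.
    Returns the volume-weighted mean of phi after each step."""
    rng = random.Random(seed)
    phi = [1.0] * n_regions          # initial field value in every region
    vol = [1.0] * n_regions          # volume of each region
    history = []
    for _ in range(steps):
        for i in range(n_regions):
            # quantum fluctuation: half the regions jump up, half jump down
            phi[i] += jump if rng.random() < 0.5 else -jump
            phi[i] = max(phi[i], 0.0)        # crude exit from inflation at phi = 0
            # faster expansion where the field is higher (illustrative rate)
            vol[i] *= 1.0 + 0.05 * phi[i]
        total = sum(vol)
        history.append(sum(p * v for p, v in zip(phi, vol)) / total)
    return history

hist = volume_averaged_field()
# The volume-averaged field rises: high-field regions come to dominate the volume.
```

In a comoving (unweighted) average the field would simply random-walk; it is the volume weighting that makes the high-field tail dominate.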

On the other hand, string theory has recently faced a third change of paradigm. After the revolutions of supersymmetry and duality, we now have the "landscape". This metaphorical word refers to the huge number (maybe 10^500) of possible false vacua of the theory. The known laws of physics would then correspond to one specific island among many others. The enormous number of possibilities arises from different choices of Calabi-Yau manifolds and different values of generalized magnetic fluxes over different homology cycles. Among other enigmas, the incredibly strange value of the cosmological constant (why are the first 119 decimals of the "natural" value exactly compensated by some mysterious phenomenon, but not the 120th?) would simply appear as an anthropic selection effect within a multiverse where nearly every possible value is realized somewhere. At this stage, every bubble-universe is associated with one realization of the laws of physics and contains an infinite space where all contingent phenomena take place somewhere. Because the bubbles are forever causally disconnected (owing to the fast "creation of space" by inflation), it will not be possible to travel between them and discover new laws of physics.

This multiverse – if true – would force a profound change in our understanding of physics. Laws would reappear as mere kinds of phenomena, and the ontological priority of our universe would have to be abandoned. At other places in the multiverse there would be other laws, other constants, other numbers of dimensions; our world would be just a tiny sample. Following Copernicus, Darwin and Freud, this could be the fourth narcissistic injury.

Quantum mechanics was probably among the first branches of physics to lead to the idea of a multiverse. In some situations, it inevitably predicts superposition. To avoid the existence of macroscopic Schrödinger cats simultaneously living and dying, Bohr introduced a reduction postulate. This has two considerable drawbacks: first, it leads to an extremely intricate philosophical interpretation in which the correspondence between the mathematics of the underlying physical theory and the real world is no longer isomorphic (at least not at all times); and, second, it violates unitarity. No known physical phenomenon – not even the evaporation of black holes in its modern descriptions – does this.

These are good reasons for taking seriously the many-worlds interpretation of Hugh Everett. In this interpretation, every possible outcome of every event exists in its own history or universe, through quantum decoherence rather than wave-function collapse. In other words, there is a world where the cat is dead and another where it is alive. This is simply a way of strictly trusting the fundamental equations of quantum mechanics. The worlds are not spatially separated, but exist rather as kinds of "parallel" universes. This interpretation solves some tantalizing paradoxes of quantum mechanics but remains vague about when the splitting of universes happens. This multiverse is complex and, depending on the very nature of the quantum phenomena leading to the other kinds of multiverse, it could entail higher or lower levels of diversity.

More speculative multiverses can also be imagined, associated with a kind of Platonic mathematical democracy or with nominalist relativism. In any case, it is important to underline that the multiverse is not a hypothesis invented to answer a specific question. It is simply a consequence of theories usually built for other purposes. Interestingly, this consequence also solves many problems of complexity and naturalness. In most cases, it even seems that the existence of many worlds is closer to Ockham's razor (the principle of simplicity) than the ad hoc assumptions that would have to be added to models to avoid the existence of other universes.

Given a model, for example the string-inflation paradigm, is it possible to make predictions in the multiverse? In principle, yes, at least in a Bayesian approach. The probability of observing vacuum i (and the associated laws of physics) is simply P_i = P_i^prior f_i, where the prior P_i^prior is determined by the geography of the landscape of string theory and the dynamics of eternal inflation, and the selection factor f_i characterizes the chances for an observer to evolve in vacuum i. This distribution gives the probability for a randomly selected observer to find itself in a given vacuum. Clearly, predictions can only be made probabilistically, but this is already true in standard physics. The fact that we can observe only one sample (our own universe) does not qualitatively change the method and still allows models to be refuted at given confidence levels. The key points here are the well-known peculiarities of cosmology, even with only one universe: the observer is embedded within the system being described, the initial conditions are critical, the experiment is "locally" irreproducible, the energies involved have not been experimentally probed on Earth, and the arrow of time must be conceptually reversed.
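In code, this prescription is just a weighted normalization. The priors and selection factors below are invented placeholders (the real landscape statistics are unknown), so this is only a sketch of the bookkeeping:

```python
def vacuum_probabilities(priors, selection):
    """P_i proportional to P_i^prior * f_i, normalized over all vacua.

    `priors` would come from the geography of the landscape and the
    dynamics of eternal inflation; `selection[i]` is the chance that
    an observer evolves in vacuum i. Both are hypothetical here."""
    weights = [p * f for p, f in zip(priors, selection)]
    total = sum(weights)
    return [w / total for w in weights]

# Three hypothetical vacua: the middle one has a modest prior but a
# large selection factor, so it dominates the observed distribution.
probs = vacuum_probabilities(priors=[0.7, 0.2, 0.1],
                             selection=[0.01, 0.5, 0.2])
```

The example illustrates why anthropic weighting matters: a vacuum that is rare in the landscape can still be the most probable one for observers to find themselves in.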

However, this statistical approach to testing the multiverse suffers from severe technical shortcomings. First, while it seems natural to identify the prior probability with the fraction of volume occupied by a given vacuum, the result depends sensitively on the choice of the space-like hypersurface on which the distribution is evaluated. This is the so-called "measure problem" of the multiverse. Second, it is impossible to give any sensible estimate of f_i. This would require an understanding of what life is – and even of what consciousness is – and that simply remains out of reach for the time being. Except in some favorable cases – for example, when all the universes of the multiverse present a given characteristic that is incompatible with our universe – it is hard to refute explicitly a model of the multiverse. But difficult in practice does not mean intrinsically impossible. The multiverse remains within the realm of Popperian science. It is not qualitatively different from other proposals associated with the usual ways of doing physics. Clearly, new mathematical tools and far more accurate predictions in the landscape (which is basically unknown at present) are needed for falsifiability to be more than an abstract principle in this context. Moreover, the falsifiability criterion is just one among many possible ones, and it should probably not be given excessive weight.

When facing the question of the incredible fine-tuning required for the fundamental parameters of physics to allow the emergence of complexity, there are only a few possible ways of thinking. If one does not want to invoke God or rely on an unbelievable stroke of luck that led to extremely specific initial conditions, there remain mainly two possible hypotheses. The first would be to consider that, since complexity – and in particular life – is an adaptive process, it would have emerged in nearly any kind of universe. This is a tantalizing answer, but our own universe shows that life requires extremely specific conditions to exist. It is hard to imagine life in a universe without chemistry, perhaps without bound states or with other numbers of dimensions. The second idea is to accept the existence of many universes with different laws, in which case we naturally find ourselves in one of those compatible with complexity. The multiverse was not imagined to answer this specific question but appears "spontaneously" in serious physical theories, so it can be considered the simplest explanation of the puzzling issue of naturalness. This of course does not prove the model correct, but it should be emphasized that there is absolutely no "pre-Copernican" anthropocentrism in this line of thought.

It could well be that the whole idea of multiple universes is misleading. It could well be that the discovery of the most fundamental laws of physics will make those parallel worlds totally obsolete in a few years. It could well be that, with the multiverse, science is just entering a "no through road". Caution is mandatory when physics tells us about invisible spaces. But it could also very well be that we are facing a deep change of paradigm that revolutionizes our understanding of nature and opens up new fields of possible scientific thought. Because they lie at the border of science, these models are dangerous, but they offer the extraordinary possibility of constructive interference with other kinds of human knowledge. The multiverse is a risky thought – but, then again, let's not forget that discovering new worlds has always been risky.

About the Author

Aurélien Barrau, Laboratoire de Physique et de Cosmologie Subatomique (UJF/CNRS/IN2P3).

Self-organization is a process of attraction and repulsion in which the internal organization of a system, normally an open system, increases in complexity without being guided or managed by an outside source. Self-organizing systems typically (though not always) display emergent properties.




The most robust and unambiguous examples of self-organizing systems come from physics. Self-organization is also relevant in chemistry, where it has often been taken as synonymous with self-assembly. The concept of self-organization is central to the description of biological systems, from the subcellular to the ecosystem level. Examples of "self-organizing" behaviour are also cited in the literature of many other disciplines, both in the natural sciences and in social sciences such as economics and anthropology. Self-organization has also been observed in mathematical systems such as cellular automata.

Sometimes the notion of self-organization is conflated with that of the related concept of emergence. Properly defined, however, there may be instances of self-organization without emergence and emergence without self-organization, and it is clear from the literature that the phenomena are not the same. The link between emergence and self-organization remains an active research question.

Self-organization usually relies on four basic ingredients:

  1. Positive feedback
  2. Negative feedback
  3. Balance of exploitation and exploration
  4. Multiple interactions
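A deterministic mean-field sketch of how these four ingredients combine, using a hypothetical two-path ant-trail model: pheromone reinforcement is the positive feedback, evaporation the negative feedback, the tiny initial asymmetry stands in for exploration noise, and the shared pheromone field carries the multiple interactions. All rates are arbitrary illustrative values:

```python
def ant_trail(steps=2000, evaporation=0.02, deposit=1.0):
    """Two equally long paths compete for ant traffic. Traffic is split in
    proportion to the *square* of each path's pheromone level (a nonlinear
    positive feedback), while evaporation (negative feedback) caps the
    total. A tiny initial asymmetry, standing in for random exploration,
    is amplified until one path carries almost everything."""
    pheromone = [1.0, 1.01]                      # near-symmetric start
    for _ in range(steps):
        total = pheromone[0] ** 2 + pheromone[1] ** 2
        shares = [p ** 2 / total for p in pheromone]   # exploitation
        for i in (0, 1):
            pheromone[i] += deposit * shares[i]        # reinforcement
            pheromone[i] *= 1.0 - evaporation          # evaporation
    return pheromone

p = ant_trail()
# Symmetry breaks: path 1 (the slightly favoured one) ends up with
# nearly all the pheromone, while path 0 decays toward zero.
```

No central coordinator appears anywhere in the loop; the ordered outcome (one dominant trail) emerges from the interplay of the feedbacks alone.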

History of the idea

The idea that the dynamics of a system can tend by themselves to increase its inherent order has a long history. One of the earliest statements of this idea was by the philosopher Descartes, in the fifth part of his Discourse on Method, where he presents it hypothetically. Descartes further elaborated on the idea at great length in his unpublished work The World.

The ancient atomists (among others) believed that a designing intelligence was unnecessary, arguing that given enough time, space and matter, organization was ultimately inevitable, although there would be no preferred tendency for this to happen. What Descartes introduced was the idea that the ordinary laws of nature tend to produce organization. (For related history, see Aram Vartanian, Diderot and Descartes.)

Beginning with the 18th-century naturalists, a movement arose that sought to understand the "universal laws of form" in order to explain the observed forms of living organisms. Because of its association with Lamarckism, these ideas fell into disrepute until the early 20th century, when pioneers such as D'Arcy Wentworth Thompson revived them. The modern understanding is that there are indeed universal laws (arising from fundamental physics and chemistry) that govern growth and form in biological systems.

Originally, the term “self-organizing” was used by Immanuel Kant in his Critique of Judgment, where he argued that teleology is a meaningful concept only if there exists such an entity whose parts or “organs” are simultaneously ends and means. Such a system of organs must be able to behave as if it has a mind of its own, that is, it is capable of governing itself.

In such a natural product as this every part is thought as owing its presence to the agency of all the remaining parts, and also as existing for the sake of the others and of the whole, that is as an instrument, or organ… The part must be an organ producing the other parts—each, consequently, reciprocally producing the others… Only under these conditions and upon these terms can such a product be an organized and self-organized being, and, as such, be called a physical end.

The term "self-organizing" was introduced to contemporary science in 1947 by the psychiatrist and engineer W. Ross Ashby. It was taken up by the cyberneticians Heinz von Foerster, Gordon Pask, Stafford Beer and Norbert Wiener himself in the second edition of his "Cybernetics: or Control and Communication in the Animal and the Machine" (MIT Press, 1961).

Self-organization as a word and concept was used by those associated with general systems theory in the 1960s, but it did not become commonplace in the scientific literature until its adoption by physicists and researchers in the field of complex systems in the 1970s and 1980s.[1] After Ilya Prigogine's Nobel Prize in 1977, the thermodynamic concept of self-organization received some public attention, and researchers began to migrate from the cybernetic view to the thermodynamic view.


The following list summarizes and classifies the instances of self-organization found in different disciplines. As the list grows, it becomes increasingly difficult to determine whether these phenomena are all fundamentally the same process, or the same label applied to several different processes. Self-organization, despite its intuitive simplicity as a concept, has proven notoriously difficult to define and pin down formally or mathematically, and it is entirely possible that any precise definition might not include all the phenomena to which the label has been applied.

It should also be noted that the farther a phenomenon is removed from physics, the more controversial the idea of self-organization as understood by physicists becomes. Also, even when self-organization is clearly present, attempts at explaining it through physics or statistics are usually criticized as reductionist.

Similarly, when ideas about self-organization originate in, say, biology or social science, the farther one tries to take the concept into chemistry, physics or mathematics, the more resistance is encountered, usually on the grounds that it implies direction in fundamental physical processes. However, the tendency of hot bodies to get cold (see Thermodynamics) and, by Le Chatelier's principle (the statistical-mechanics extension of Newton's third law), to oppose this tendency should be noted.

Self-organization in physics

There are several broad classes of physical processes that can be described as self-organization. Such examples from physics include:

  • self-organizing dynamical systems: complex systems made up of small, simple units connected to each other usually exhibit self-organization
  • self-organized criticality in spin-foam systems and loop quantum gravity, as proposed by Lee Smolin. The main idea is that the evolution of space in time should be generically robust: any fine-tuning of cosmological parameters weakens the independence of the fundamental theory. Philosophically, one can assume that at early times there was no agent to tune the cosmological parameters. In a series of works, Smolin and his colleagues showed that, based on the loop quantization of spacetime, a simple evolutionary model of the very early universe (similar to the sand-pile model) exhibits a power-law distribution in both the size and the area of avalanches.
    • Although this model, which is restricted to frozen spin networks, exhibits a non-stationary expansion of the universe, it is the first serious attempt toward the ambitious goal of deriving cosmic expansion and inflation from a self-organized-criticality theory in which the parameters are not tuned but are instead determined from within the complex system.[2]
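The sand-pile model mentioned above is the Bak-Tang-Wiesenfeld model, the canonical example of self-organized criticality, and it is easy to simulate. The grid size and grain count below are arbitrary choices; the sketch just records how many topplings each dropped grain triggers:

```python
import random

def sandpile(size=20, grains=5000, seed=0):
    """Bak-Tang-Wiesenfeld sandpile: drop grains at random sites on a
    grid; any cell holding 4 or more grains topples, sending one grain
    to each neighbour (grains fall off the edge). Returns the avalanche
    size (number of topplings) caused by each dropped grain."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        x, y = rng.randrange(size), rng.randrange(size)
        grid[y][x] += 1
        topples = 0
        unstable = [(x, y)] if grid[y][x] >= 4 else []
        while unstable:
            cx, cy = unstable.pop()
            if grid[cy][cx] < 4:        # may have relaxed already
                continue
            grid[cy][cx] -= 4
            topples += 1
            for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if 0 <= nx < size and 0 <= ny < size:
                    grid[ny][nx] += 1
                    if grid[ny][nx] >= 4:
                        unstable.append((nx, ny))
        sizes.append(topples)
    return sizes

sizes = sandpile()
# Most drops cause no avalanche at all, but a few trigger large cascades:
# a broad, power-law-like size distribution, with no parameter tuned to
# a critical value by hand.
```

This is the sense in which the system is "self-organized": the pile drives itself to the critical state, rather than being tuned to it.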

Self-organization vs. entropy

Statistical mechanics informs us that large scale phenomena can be viewed as a large system of small interacting particles, whose processes are assumed consistent with well established mechanical laws such as entropy, i.e., equilibrium thermodynamics. However, “… following the macroscopic point of view the same physical media can be thought of as continua whose properties of evolution are given by phenomenological laws between directly measurable quantities on our scale, such as, for example, the pressure, the temperature, or the concentrations of the different components of the media. The macroscopic perspective is of interest because of its greater simplicity of formalism and because it is often the only view practicable.” Against this background, Glansdorff and Ilya Prigogine introduced a deeper view at the microscopic level, where “… the principles of thermodynamics explicitly make apparent the concept of irreversibility and along with it the concept of dissipation and temporal orientation which were ignored by classical (or quantum) dynamics, where the time appears as a simple parameter and the trajectories are entirely reversible.”[3]

As a result, processes considered part of thermodynamically open systems, such as biological processes that are constantly receiving, transforming and dissipating chemical energy (and even the earth itself which is constantly receiving and dissipating solar energy), can and do exhibit properties of self organization far from thermodynamic equilibrium.

LASER (an acronym for "light amplification by stimulated emission of radiation") can also be characterized as a self-organized system, to the extent that normal states of thermal equilibrium characterized by electromagnetic energy absorption are stimulated out of equilibrium in a reverse of the absorption process. "If the matter can be forced out of thermal equilibrium to a sufficient degree, so that the upper state has a higher population than the lower state (population inversion), then more stimulated emission than absorption occurs, leading to coherent growth (amplification or gain) of the electromagnetic wave at the transition frequency."[4]
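The population-inversion condition can be made concrete with toy rate equations. The decay rates below are arbitrary illustrative values for an idealized scheme in which the pump feeds the upper laser level and the lower laser level empties quickly; they do not describe any real gain medium:

```python
def pump_to_steady_state(pump, steps=20000, dt=1e-3):
    """Euler-integrate toy rate equations for the two laser levels:
        dn2/dt = pump - A21 * n2          (pumping in, decay out)
        dn1/dt = A21 * n2 - gamma1 * n1   (fed by decay, drained fast)
    Fast drainage of the lower level (gamma1 >> A21) is what makes an
    inversion n2 > n1 possible at modest pump rates."""
    A21 = 1.0       # upper-level decay rate (arbitrary units)
    gamma1 = 10.0   # lower-level drain rate: 10x faster
    n2 = n1 = 0.0
    for _ in range(steps):
        n2 += (pump - A21 * n2) * dt
        n1 += (A21 * n2 - gamma1 * n1) * dt
    return n2, n1

n2, n1 = pump_to_steady_state(pump=1.0)
# Steady state: n2 -> pump/A21 = 1.0 and n1 -> A21*n2/gamma1 = 0.1, so the
# medium is inverted and stimulated emission can outpace absorption.
```

The inversion here is a pumped, far-from-equilibrium steady state, which is exactly what qualifies the laser as a self-organized system in the sense of the quotation above.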


Self-organization in human society

The self-organizing behaviour of social animals and the self-organization of simple mathematical structures both suggest that self-organization should be expected in human society. Tell-tale signs of self-organization are usually statistical properties shared with self-organizing physical systems (see Zipf's law, power law, Pareto principle). Examples such as critical mass, herd behaviour, groupthink and others abound in sociology, economics, behavioral finance and anthropology.[19]

In social theory the concept of self-referentiality has been introduced as a sociological application of self-organization theory by Niklas Luhmann (1984). For Luhmann the elements of a social system are self-producing communications, i.e. a communication produces further communications and hence a social system can reproduce itself as long as there is dynamic communication. For Luhmann human beings are sensors in the environment of the system. Luhmann put forward a functional theory of society.

Self-organization in human and computer networks can give rise to a decentralized, distributed, self-healing system, protecting the security of the actors in the network by limiting the scope of knowledge of the entire system held by each individual actor. The Underground Railroad is a good example of this sort of network. The networks that arise from drug trafficking exhibit similar self-organizing properties. Parallel examples exist in the world of privacy-preserving computer networks such as Tor. In each case, the network as a whole exhibits distinctive synergistic behavior through the combination of the behaviors of individual actors in the network. Usually the growth of such networks is fueled by an ideology or sociological force that is adhered to or shared by all participants in the network.

In economics

In economics, a market economy is sometimes said to be self-organizing. Friedrich Hayek coined the term catallaxy to describe a "self-organizing system of voluntary co-operation" in regard to capitalism. Most modern economists hold that imposing central planning usually makes the self-organized economic system less efficient. By contrast, some socialist economists consider that market failures are so significant that self-organization produces bad results and that the state should direct production and pricing. Many economists adopt an intermediate position and recommend a mixture of market-economy and command-economy characteristics (sometimes called a mixed economy). When applied to economics, the concept of self-organization can quickly become ideologically imbued (as explained in chapter 5 of A. Marshall, The Unity of Nature, Imperial College Press, 2002).

In collective intelligence

Non-thermodynamic concepts of entropy and self-organization have been explored by many theorists, including Cliff Joslyn and colleagues in their so-called "global brain" projects. Marvin Minsky's "Society of Mind" and the no-central-editor policy of the open-source internet encyclopedia Wikipedia are examples of applications of these principles – see collective intelligence.

Donella Meadows, who codified twelve leverage points that a self-organizing system could exploit to organize itself, was one of a school of theorists who saw human creativity as part of a general process of adapting human lifeways to the planet and taking humans out of conflict with natural processes. See Gaia philosophy, deep ecology, the ecology movement and the Green movement for similar self-organizing ideals. (The connections between self-organisation and Gaia theory and the environmental movement are explored in A. Marshall, 2002, The Unity of Nature, Imperial College Press: London.)

Spontaneous order is the spontaneous emergence of order out of seeming chaos; the emergence of various kinds of social order from a combination of self-interested individuals who are not intentionally trying to create order.

The evolution of life on Earth, human language, and the free market economy have all been proposed as examples of systems that evolved through spontaneous order. Atheists and naturalists often point to the inherent "watch-like" precision of uncultivated ecosystems, and to the universe itself, as ultimate examples of this phenomenon.

Spontaneous order is also used as a synonym for any emergent behavior of which self-interested spontaneous order is just an instance.



History of the theory

According to Murray Rothbard, Zhuangzi (369–c. 286 BC) was the first to work out the idea of spontaneous order, before Pierre-Joseph Proudhon and Friedrich Hayek. The Taoist Zhuangzi said, "Good order results spontaneously when things are let alone."[1] Proudhon said, "The notion of anarchy in politics is just as rational and positive as any other. It means that once industrial functions have taken over from political functions, then business transactions alone produce the social order."[2] Proudhon's position was that freedom is a prerequisite for spontaneous order to take place, rather than liberty being the result of spontaneous order. Hence his statement that liberty "is not the daughter but the mother of order."[3]

The thinkers of the Scottish Enlightenment were the first to seriously develop and inquire into the idea of the market as a ‘spontaneous order’ (the “result of human action, but not the execution of any human design”, as Adam Ferguson first put it[1]).

The Austrian School of Economics, led by Carl Menger, Ludwig von Mises and Friedrich Hayek, would later refine the concept and use it as a centerpiece in its social and economic thought.

Most Austrian School thinkers and other libertarian figures such as Milton Friedman concurred with Proudhon’s position mentioned above, although they supported the existence of a minimal state to maintain the liberty requisite for spontaneous order to take place.[citation needed]



Many classical liberal economists, such as Hayek, have argued that market economies give rise to a spontaneous order – “a more efficient allocation of societal resources than any design could achieve.”[4] They claim this spontaneous order is superior to any order the human mind can design, owing to the specifics of the information required.[citation needed] Centralized statistical data cannot convey this information, because statistics are created by abstracting away from the particulars of the situation.[5] In a market economy, price is the aggregation of information acquired when people are free to use their individual knowledge. Price then allows everyone dealing in a commodity or its substitutes to make decisions based on more information than they could personally acquire – information not statistically conveyable to a centralized authority. Interference from a central authority which affects price will have consequences the authority could not foresee, because it does not know all of the particulars involved. This is illustrated in the concept of the invisible hand proposed by Adam Smith in The Wealth of Nations. Smith, however, rejected the idea that capitalist society was free in any form at all, stating that workers were constantly enslaved and forced into labor regularly by their managers (Chapter 8 of Vol. 1 of The Wealth of Nations). Thus, in this view, by acting on information with greater detail and accuracy than is possible for any centralized authority, a more efficient economy is created to the benefit of the whole society.
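The claim that a market-clearing price can emerge without any central computation can be illustrated with a toy simulation (not from the source text; the demand and supply schedules below are hypothetical). Buyers and sellers each know only their own schedule, yet a simple rule – price rises under excess demand and falls under excess supply – drives the price to the level at which the two schedules balance:

```python
# Toy sketch of decentralized price adjustment (illustrative only).
# No participant knows both schedules; the equilibrium price emerges
# from repeated local reactions to excess demand.

def demand(p):
    """Hypothetical demand schedule, known only to buyers."""
    return max(0.0, 100 - 2 * p)

def supply(p):
    """Hypothetical supply schedule, known only to sellers."""
    return 10 + p

def clear_market(p=1.0, step=0.1, iters=1000):
    for _ in range(iters):
        # Excess demand nudges the price up; excess supply nudges it down.
        p += step * (demand(p) - supply(p))
    return p

equilibrium = clear_market()
# Analytically: 100 - 2p = 10 + p  =>  p = 30, quantity = 40.
```

The point of the sketch is only that the update rule uses local information (the gap between bids and offers at the current price), yet the resulting price encodes the content of both schedules – the informational role Hayek attributes to prices.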

Game studies

The concept of spontaneous order is closely related to modern game studies. As early as the 1940s, historian Johan Huizinga wrote that “in myth and ritual the great instinctive forces of civilized life have their origin: law and order, commerce and profit, craft and art, poetry, wisdom and science. All are rooted in the primeval soil of play”. Following on this in his book The Fatal Conceit, Hayek notably wrote that “A game is indeed a clear instance of a process wherein obedience to common rules by elements pursuing different and even conflicting purposes results in overall order”.


Anarchists argue that the state is in fact an artificial creation of the ruling elite, and that true spontaneous order would arise if it were eliminated. In the anarchist view, such spontaneous order would involve the voluntary cooperation of individuals. According to the Oxford Dictionary of Sociology, “the work of many symbolic interactionists is largely compatible with the anarchist vision, since it harbours a view of society as spontaneous order.”[6]


The concept of spontaneous order can also be seen in the works of the Russian Slavophile movement, and in particular in the works of Fyodor Dostoyevsky. The concept of an organic social manifestation was expressed in Russia under the idea of sobornost. Sobornost was also used by Leo Tolstoy as an underpinning of his concept of Christian anarchy. Vladimir Lenin later exploited the concept of sobornost as a foundation for his own reforms. The term was used to describe the uniting force behind the peasant or serf obshchina in pre-Soviet Russia.[7]


Amid all the information about Darwin with which we have been bombarded, I suggest reading this text by Daniel Piza, “Darwin, Pensador da Cultura”. Click here.

The best thing about this book is that it consists almost entirely of aphorisms. Easy and penetrating reading. Regards

Source: Wikipédia, the free encyclopedia.

Meditações metafísicas (Metaphysical Meditations), or in other translations Meditações sobre a filosofia primeira (Meditations on First Philosophy), subtitled “in which the existence of God and the real distinction between mind and body are demonstrated”, is the name of the work by René Descartes, first written and published by the author in 1641. In this work one finds the same Cartesian philosophical system introduced in the Discourse on the Method.

The book consists of six meditations, in which Descartes casts doubt on every belief that is not absolutely certain, real and attainable, and from there seeks to establish what can be known with confidence.

In the First Meditation we find four scenarios capable of confounding perception thoroughly enough to invalidate, with certainty, a whole series of statements about knowledge. The chief of these four arguments is that of the evil genius, who has the power to confound perception and to plant doubt about everything we can know of the world and its properties. Yet even while able to falsify perception, the evil genius cannot falsify the belief in those perceptions – that is, it can argue against perception but not against the belief that bears upon perceptions. Descartes also concludes that the powers of thinking and of existing cannot be corrupted by the evil genius.

In the Second Meditation we find Descartes’s argument for the certainty of his own existence, a certainty that prevails over any doubt:

I have convinced myself that there is nothing in the world – no sky, no earth, no minds, no bodies. Does it follow that I too do not exist? No: if there is something of which I am really convinced, it is my own existence. But there is a deceiver of supreme power and cunning who is deliberately and constantly confounding me. In that case, even if the deceiver confounds me, I too must undoubtedly exist… the proposition “I am”, “I exist”, must necessarily be true for me to be able to express it, or for anything to confound my mind.

In other words, consciousness implies existence. In one of his replies to the objections included in the book, Descartes summarized the passage above in his now-famous sentence: I think, therefore I am (in Latin, cogito, ergo sum).

The rest of the book, which does not differ greatly from the earlier Discourse on the Method but is more accessible, contains several arguments that modern philosophers have found less convincing, such as the ontological arguments for the existence of God and the supposed proof of the dualism between mind and body.

See also

Source: Wikipédia, the free encyclopedia.

Also called “spiritual science”, anthroposophy (“knowledge of the human being”) is a philosophy founded by Rudolf Steiner. He presents it as a path in the search for truth that bridges the gulf, historically opened since Scholasticism, between faith and science. In Steiner’s view, reality arises at the meeting of the worlds of idea and of perception.

Steiner holds that, in thinking about thinking, we begin to access a consciousness different from the everyday one. The first experience we can have of a concept that finds no counterpart among the perceptions of the world is the experience of the “I” itself. It is the first instance of an experience in pure thinking. From there much more can be experienced in pure thinking – various concepts with no counterpart in physical perception – but for this, Steiner says, it is necessary to enlarge the capacity of our consciousness, and he presents exercises to that end.

The epistemological basis of anthroposophy is contained in the work A filosofia da liberdade (The Philosophy of Freedom), as well as in his doctoral thesis, Verdade e ciência (Truth and Science). These and several other of Steiner’s books anticipated the gradual overcoming of Cartesian idealism and Kantian subjectivism in 20th-century philosophy. Like Edmund Husserl and Ortega y Gasset, Steiner was deeply influenced by the work of Franz Brentano, and had read Wilhelm Dilthey in detail. Through his early epistemological and philosophical books, Steiner became one of the first European philosophers to overcome the split between subject and object that Descartes, classical physics, and various complex historical forces had engraved on the human mind over several centuries.

Steiner defined anthroposophy as “a path of knowledge to guide the spiritual in the human being to the spiritual in the universe.” The anthroposophist’s goal is to become “more human” by raising one’s consciousness and deliberating over one’s thoughts and actions – that is, to become a “spiritually free” being.

Steiner delivered several lecture cycles for physicians, out of which grew a movement of anthroposophic medicine that has spread around the world and now includes thousands of doctors, psychologists and therapists, with its own hospitals and medical universities. Other practical branches of anthroposophy include architecture (the Goetheanum), biodynamic agriculture, the education of children and adolescents (Waldorf education), homeopathic pharmacy (Wala, Weleda, Sirimim), philosophy (the “Philosophy of Freedom”), eurythmy (“movement as visible speech and visible song”), and centers for helping children with special needs (the Camphill villages).

Anthroposophy has its detractors. Critics have labeled it a cult with similarities to New Age movements. If it is a cult, however, it is one that strongly emphasizes individual freedom. Further, some critics maintain that anthroposophists tend to elevate Steiner’s personal opinions – many of them foreign to the views of orthodox religion, of science and of the humanities – to the level of absolute truths. If there is any truth in this criticism, most of the blame belongs not to Steiner but to his students. Steiner frequently urged his students to test everything he said, and on many occasions even begged them not to take anything he said on faith or authority.

Another criticism holds that some anthroposophists seem to distance their public activities from any possible inference that anthroposophy rests on esoteric religious elements, tending to present it to the public as a non-sectarian academic philosophy. One difficulty in assessing this criticism is that it contains a hidden prejudice, for it ignores a question that anthroposophy has sought to raise and answer: is it possible for a thinker to be at once scientifically and spiritually cognitive? Anthroposophy affirms that it is. The criticism above, on the other hand, assumes that it is not, and therefore finds a contradiction between the claim of non-sectarianism and a grounding in supersensible experience.