1. Introductions
In this section, I will offer several avenues of approach to my subject and to my way of working. Both are a little unusual and off the beaten path, so I will present several different points of entry. You may leaf through this section in any way that pleases you, until you find an entry point that suits you best. Hypermedia technology is a writing technology that transcends writing. It is hard to conceive of the changes in store for us as a consequence of this. For the consequence is that some of our very modes and fundamental processes of thinking will change, modes that we have become accustomed to over the last five millennia. These modes are so ubiquitous, so pervasive, and so subtle that they form the foundation of our civilizations, our sciences, even of what we think is essentially human. And the consequences reach even further: when our thinking changes, our world changes also.
This may be difficult to envision and even more difficult to accept, even in the "what if" mode of thinking. So I predict that this text will not be easy reading for most of you. My experience so far with people trying to read it was often resistance and rejection. If you feel that you have to reject this work, I ask for your consideration. As Freeman Dyson says: "At any particular moment in the history of science, the most important and fruitful ideas are often lying dormant merely because they are unfashionable... there is commonly a lag of 50 or 100 years between the conception of a new idea and its emergence into the mainstream of scientific thought... it necessarily follows that anybody doing fundamental work... is almost certain to be unfashionable." (DYSON83, 53) This work is unfashionable, indeed. But I would not have carried on this work for 15 years, as of the time of this writing, if I had not considered it important. And this is my motto, from the Heraklit citation above: "If you don't aim for the unexpected and the unthinkable, you will never find it: for it is untraceable and inaccessible".
First, I will present my main mode of working - asking the questions:
The Three Big W's: What, hoW, Why
1.1. The What: What I am writing about
This path of discovery is about asking fundamental questions and getting unexpected answers. The citations above indicate this. It is the systematic exploration of the unthought-of and the unthinkable that forms the background of this endeavor. But asking fundamental questions may be risky business. "Sapere aude", "dare to be wise", starts with "dare to be foolish". If you dare to ask the idiot question, if you risk being the fool, you might have a chance to hit upon something so fundamental that it has escaped the most inquisitive minds of humanity even though it has been sitting in plain view from time immemorial. Asking questions always precedes getting answers. It is the way we ask the questions that is decisive. And so often do we get wrong and useless answers because we don't ask the right questions, or don't ask them consistently enough. Our current modes of organized knowledge have done much to bar certain questions from being asked. There is a veritable taboo in human societies against asking fundamental questions. I will do my part to rediscover the lost art of asking questions, which we may also call the Socratic art. See:
->: THAUMAZEIN, p. 115.
The most fundamental questions of this universe are: What, hoW, Why. These I call the Three Big W's. The successive application of these three types of questions gives a modality of thinking which we could call the "trialectics of questioning". With these mental tools, we can go on
asking: What is reality? What is producing this world? What does it mean to be
human? What is thinking? What is knowing? What is ignorance? What is the
relation between thinking and symbols? What is aisthaesis? What does thinking
with the body mean? All these questions relate to the first line of the title: "Infrastructures of Representation". Section 3., "Semiotics, biological and cultural aspects", will deal with these questions.
The second line of the title, "A Quest for Multimedial Symbolization Systems", is about this one question: How could humanity improve its thinking? This question will be the core part of this work. Its tangible and practical aim is to provide a discussion of the fundamental principles, and of the necessary infrastructure - the noetic, scientific, technical, and societal requirements and foundations - for the construction of a "technological ars memoriae"[6], a "syn-aisthetic symbolization tool"[7], or, as I call it in my terminology, the Symbolator[8]. This technology is emerging with the further development of computer based hypertext and multimedial symbolization systems that we can forecast with some measure of probability for the next 20 years. I will give a perspective of what kind of work could be done in these 20 years, if it is possible to continue on the path laid out here - that is, if the necessary funding and infrastructure are available. Section 3. serves as the supporting base for this issue, and Sections 2. and 4. will deal with the practical and technical matters of its implementation.
Humanity now has its moment of kairos[9], the fortuitous historical instant which may pass all too soon. We have the chance to change the base of our thinking. That will change history, and humanity, for ever and for sure. But unlike genetic engineering, which is right now already in the full process of changing history, and humanity, for good or for worse - when we change our thinking, we might have a chance of understanding what it is we are doing before it is too late to do anything about it.
The quest for understanding what it was that influenced humanity so deeply 5000 years ago, when writing was invented, will lead us on a long journey through human history. This is mainly covered in the section "Threads of History" and the ones following.[10] History and writing are inseparably intertwined. But before writing, there was an epoch of at least 5000 years of high cultural development of humanity. This epoch is only now being discovered, as archeology uses a vastly expanded set of technical and scientific tools to analyse the many diverse traces of early human culture as they are preserved in the archeological record. Already, our picture of history is changing profoundly as all these bits and details from archaic cultures are slowly and painstakingly inserted into a new grand puzzle of prehistoric human development. In the chapter "The Age of Aoidoi", I will give a speculative account of an information structure of archaic prescriptural society that is still vastly beyond the current scientific imagination.
Some of the unexpected results that I got: It is more important to be able to ask the questions than to try to get an answer. Because an answer I have today may be useless to me tomorrow. The way we ask the questions has a very subtle predetermining effect on what we can get as answers. This is the advice Heraklit gives us. Always be ready to ask a question that you would never think of even asking. This has to do with the higher forms of thinking, deutero-mentation and trito-mentation. I use the more general term mentation for forms of thinking that we wouldn't be able to think of now. As for all forms of mentation, there must be representations that manifest the thinking processes. In the conventional case, we call them symbols.[11] Formerly, there was only the human brain doing all the symbol processing. That had a severely limiting effect on the kind and the complexity of symbols that could be used. The human brain is a very powerful processing system, but we misunderstand it thoroughly when we consider it as a computer. Of course, the brain can function in the way a computer works. But that is exactly not where its strength lies. In the earlier ages of humanity, we had to mis-use the brain as a computer, because nothing better was there. Now that we have the real thing, we don't need to burden our brains with tasks that the computer can do so much better - for example, helping us to construct the symbol systems for deutero-mentation and trito-mentation, thinking modes that are exceedingly difficult when attempted with the unaided brain. And for those who fear that humanity may be going all the way down the drain if we don't learn the differential and integral calculus in school any more, here is my consolation: There is a good chance that once we have constructed our Symbolator, we might even be able to bootstrap ourselves into a level of mental performance that lets us suddenly realize that we don't need the machine any more - we might find other symbolization systems that are much better than anything we have now, and that are better suited to the functioning of our brains. This might be the day when the dream of Leibniz comes true: when we will have created the Characteristica Universalis. And it may be a little different from what Leibniz imagined, because its characters may turn out to be the stoicheia, the building blocks that the demiourgos, the constructor of the universe, used himself to go about his work. This is what Plato talked about in Timaios.
1.2. The How: How I am writing
In the discussion, I am following a dual track approach, according to Leibniz' motto:
One track is Bottom Up: pragmatic, empirical, technological, and application oriented, to be used in contemporary hypermedia technology, with attention to detail, implementation, technical feasibility, organization, juridical and patenting matters, economic rationalisation, management, and, most importantly: financing.
The other track is Top Down: theoretical and philosophical, with the ancient meaning of the word root theoria, as Leibniz meant it: The Vision, a bird's eye view from the grand perspective of the uomo universale, the universal thinker, covering 10 billion years of cosmic evolution, 10,000 years of human cultural evolution, and 5,000 years of phonetic-script based civilization as it emerged in the ancient cultures of Mesopotamia, Egypt, Greece and Rome.
And lastly, the immense amplification of the accumulation of knowledge as it occurred after the invention of the printing press 500 years ago, and with the application of the scientific principles of Galileo, Descartes, Leibniz and Newton over the last 300 years. The philosophical outlook is in the meaning intended by Plato: the search for wisdom (sophia) for wisdom's sake, without direct application to knowledge, techne, career, or material profit. What I hope to show is that a balanced perspective resulting from both approaches may be possible, or even profitable. That it is still valuable today, in this age of ultimate specialization, to at least aim at some measure of universal thinking. Of necessity, a work based on such an approach cannot be considered finished at any stage of progress. The days of the renaissance uomo universale are long gone. But even if we know that we will never get there, aiming and striving for this goal might be valuable in itself.
1.4. The Why: Why I am writing this
The question "
Why
?" is at the
beginning of all philosophy and maybe a little more in human intellectual and
mental history. Plato
and
Aristoteles
have put this question at the beginning of all
enquiry into what is in us and around us, the universal scientific and
philosophical enquiry which was still one and the same from
Plato
's time up to about the
Renaissance
. In the ancient greek terminology, it was the
thaumazein
, mala philosophikon pathos, the
sense of wonder, amazement, and astonishment which lead us into this process. It
seems that this question was forgotten when science and philosophy made their
split around the time of Leibniz
. Science asks about the
"how
" of things, not about the "what" and
"why
" which is relegated to (some specific branches of)
philosophy.
->:
THAUMAZEIN, p.
115
When I started my university studies in the then newly established academic field of informatics, Joseph Weizenbaum gave a talk at our department. His way of asking the question "why?" of informatics and technology in general was stated as an aphorism: "Nobody is helped by being able to do something ten times faster or more efficiently when he is in the process of falling into a pit." Yet that seems to be exactly what happens with technological civilization. We don't know where we are going, but we are for sure going there fast, with increasing speed every day. The human population on this planet explodes. In the science laboratories and technical research institutions all over the world, the speed of technological advance doubles every five years. Data processing is a good example of the process. We can process a million times the number of data bytes that people could process manually a hundred years ago. Every year, the data processing capacity doubles, but to what end? Are we therefore in a better position to deal with the problems of our societies?
The vision of the age of enlightenment of the 18th and 19th centuries is totally shattered. Instead of creating a world of universal peace, comfort, and well-being, science and technology have helped us create a juggernaut that is devouring the living substance of this planet to the point of eco-destruction[12]. Just about 500 million people on this planet live in a world of comfort and well-being such as only kings enjoyed up to 100 years ago. The rest live on different levels of insecurity, like the poor and unemployed in the rich countries, or the middle classes in poor countries. Then there are about one billion people whose life is a constant confrontation with the spectre of death by starvation, war, epidemic disease, and eco-poisoning - a life which is not a life at all, but hell on earth. It is unnecessary to add further to a discussion, led in countless books and on the political scene, about what needs to be done to avoid the consequences of the seemingly inevitable self-destruction course of humanity. I might mention, as one of the most thoughtful and insightful works, the intensive appeal to humanity that Konrad Lorenz made in LORENZ_TOD and LORENZ_ABB. The insight is there, but the power structures of society smother all attempts to effect any decisive change of course. The spirit is willing, but the flesh is weak. For all those who have ears to hear, but not the time nor the inclination to think, the words of Konrad Lorenz:
It will be difficult to make them realize that a culture can be snuffed out like a candle flame.
My answer, and my contribution, the reason why I am doing this work, is this: Humanity opened up a tool chest of incredibly powerful mental tools when it developed writing, mathematics, and logic, and began to imitate certain aspects of nature with technology. As we say: "most things are easier to get into than out of". Once one is habitually used to a certain procedure, it is hard to stop. And there is a tradeoff, a cost factor, for every gain. It is often not visible where that cost lies and how it is exacted. That this cost factor exists has been known, and has been pointed out, time and time again. Plato stated the dangers of writing in no uncertain terms in Phaidros, and Leonardo was one of the first to foresee the adverse consequences of technological advance in full clarity[13]. But the gains for those who could reap the profits always seemed to outweigh the cost for the community[14] (HARDIN85). Those who tried to alert us were brushed aside, relegated to oblivion, and every once in a while put to death at the stake, on the cross, or by other ingenious means, just to add a little deterrent against straying from the normatively accepted paths of society.
->: LEONARDO, p. 36
Nowadays, the pendulum may be swinging in the opposite direction. The costs have silently accumulated in their effect over millennia, and are suddenly appearing on the balance sheet with all their weight. The accumulated effects of our mental auxiliary systems have developed such that we, our minds, and our understanding, are completely dumbfounded and at the mercy of the processes we started. We are in the position of the sorcerer's apprentice. The tool chest has silently turned into a Pandora's box.
1.5. Towards a Cybernetic Philosophy
I propose that we need to regain and rebuild our originality and authorship of the sources of this thinking process, which began many thousands of years ago but became clearly visible around the time of Plato. That is, we need to get a better understanding of our thinking, and a new handle on what we are doing when we think. With the new computer and multimedia technology, different modes of mentation will become possible that will allow us different perspectives on our world and on ourselves. This will be a chance to open a new chapter of human self-reflection and self-understanding. In its essence, it will be:
The continuation of Philosophy by other means -
Towards A Cybernetic Philosophy
This work presents a cybernetic world model for a post-alphabetic humanity that is making the transition from a history of five millennia of writing based civilizations into an era of fundamental re-definition and re-shaping of our thinking processes through the new hypermedia technology that is emerging in our days. Humanity is now at a historical crossroads, a moment of historical decisiveness such as appeared 500 years ago with the invention of the printing press, or 5000 years ago when writing was invented. The technology of writing has influenced and changed humanity profoundly, and even if we owe practically all our civilization to writing, we have also paid a high price. Non-verbal and non-conceptual thinking (or mentation) has been largely pushed into oblivion. Therefore this work is also paradoxical: it tries to put into written words something that humanity lost when we all came to rely too much on the written word. The new technologies will change the kinds of mental processes we have conventionally called "thinking" to an extent that was unimaginable in the age of writing. Hypermedial representation and symbolization systems are much more than, and also much different from, the conventional pictorial adjuncts to our written texts that we know from picture books. They are also much different from the moving pictures we know from cinematographic technology (movies and television). To find the predecessors of what is in store for us in the near future, we must go back to ancient civilizations such as Egypt, where the art of picture-mentating (thinking through pictures, as opposed to thinking through words and concepts) was still alive.
This era will not just bring a change in the re-presentation of the things we have become used to. When we change our thinking, the things will also change. The new representation will also change the world as we experience it. That is, everything will take an almost revolutionary turn-around in the 20 years to come, and we will end up with a totally different physical infrastructure for our universe than the one we have now. This change is consequential and inevitable. The old world models of matter made of atoms, which have been the foundation of the last 2500 years of human civilization, are intertwined and interdependent in an inextricable way with the atomic mental technology of the alphabet. Just as the alphabet cuts up the spoken sounds of language into indivisible, atomic, single sounds, so has humanity followed this method in its intellectual course over those 2500 years, and especially the last 500 years, since the printing press allowed the incredible jump in the efficiency of processing this phonetic alphabet.
This profound change has already been expressed in the cyber age motto: "All reality is virtual!" Except that very few people have so far realized its full significance. The philosopher who expressed this essence long before anyone would think of computers and the cyber age was Arthur Schopenhauer. If he were asked today to express the salient idea of his main work in contemporary language, he would use this wording. And in modern cybernetic terms, his work would today have the title: "The Universe as Impulsity and Representation". This can be translated without difficulty into all major european languages without changing the main terms. Every educated person in the whole world would immediately recognize its relevance for our modern neuromancer cyber age and globally connected computer network civilization. Schopenhauer's work, which was considered an obscure treatise of speculative philosophy in his time, has gained a totally new momentum as the world came to meet his projections. The difficulty modern physics has with the wave-particle dualism, and the problems of quantum mechanics, indicate that the way the physical world is viewed in our contemporary theories poses riddles that are hard to solve in the present paradigms. The approach of Schopenhauer is expressed here in a contemporary information-based world model that could be called hypo-physical. It is not meta-physical, as in the older philosophical attempts, but fits seamlessly into the scientific structure of the exact sciences. Schopenhauer's representation is, in modern terms, a cellular space. In a cellular space model, the presently known physical laws are produced by a universal simulator that produces these phenomena because it is programmed this way. In the older physical models, one talked about an ether, and before science, there was the universe-generating matrix of the ancient mesopotamian and vedic cosmologies. They all refer to a substrate that underlies the apparent phenomenal world. In the vedic terminology, this was called the Akasha. In the present work, this is called the "Infrastructure of Representation". This infrastructure is an information infrastructure. Presently, information is defined in terms of physics. In the future, physics will be defined in terms of information. This will not change anything in the working of physical laws; only their foundation will change. In order that we may be able to perceive this different infrastructure, we need a new mental technology. Much as our present world views are interdependent with the ways we think and perceive in words and concepts, so will the new mental tools that are emerging with the new hypermedia technology allow us to form different understandings of the world and of ourselves. If we look for an example of this change in perspective, we may take the change from Newton's gravitational theory to Einstein's general relativity. The newer theory didn't prove the older one false, but showed it to be a special case in a more general scheme. So will it be with the informational paradigm of the universe. And within this paradigm, we will have far fewer problems accommodating such phenomena as animated life and intelligence, which are so hard to account for in the present materialistic and mechanistic paradigms of science.
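To make the cellular-space idea a little more tangible, here is a minimal sketch in Python. It is only an illustration, not the model proposed in this work: the rule (Wolfram's rule 110), the grid size, and the lone "particle" are all toy choices of mine. It shows how a universal simulator, programmed with nothing but a local update rule, produces stable, law-like phenomena in its substrate:

    import numpy as np

    # Toy "cellular space": a 1-D cellular automaton. Its entire "physics"
    # is the update rule the simulator is programmed with.
    RULE = 110
    RULE_BITS = np.array([(RULE >> i) & 1 for i in range(8)], dtype=np.uint8)

    def step(cells: np.ndarray) -> np.ndarray:
        # Apply the local rule to every cell of the (wrapped) substrate.
        left, right = np.roll(cells, 1), np.roll(cells, -1)
        neighborhood = (left << 2) | (cells << 1) | right   # 3-bit index 0..7
        return RULE_BITS[neighborhood]

    cells = np.zeros(64, dtype=np.uint8)
    cells[32] = 1                      # a lone disturbance in the substrate
    for _ in range(12):                # watch law-like patterns unfold
        print("".join(".#"[c] for c in cells))
        cells = step(cells)

The point of the toy is only this: the "laws" visible in the printout exist nowhere except in the programming of the substrate, which is the sense in which a cellular space model is hypo-physical rather than meta-physical.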
Although at present the work "Infrastructures of Representation" is presented as a book, this is to be viewed only as scaffolding for a hypermedia work that will be based on the structure of the text. The written word can only serve as a pointer to something that cannot be expressed within the old text paradigm any more. The new ways of thinking will have to find their own appropriate modes of expression and representation. To achieve this, a 20-year program is formulated for the construction of the Symbolator, a mental bootstrap for humanity to lift itself into heretofore unthinkable modes of mentation. The written word only serves as scaffolding, as outline, to point to something which it in itself can never yield: to give us back what 5000 years of civilizational word use and abuse have robbed humanity of - the immediate look-and-feel of dealing with the things themselves, instead of just talking about them with possibly empty words, phrases, and formalisms.
1.6. A philosophical "In a nutshell"
One day he rose in morning dawn
And thus he stood and spake before the sun:
O great light of sky, what would be your joy
Had you not those whom you light the day
NIETZSCHE-ZARA, Also sprach Zarathustra, 2
Instead of writing an "abstract" as introduction, one might try to present one's views "in a nutshell". This metaphor is an allusion to the germinating seed of something that becomes quite involved as the progress of growth bears on. A large tree with all its branches and leaves can become quite unmanageable. But by watching the nut germinate and grow, we may get to know more about the tree than when we study it in its finished form. So I will show the germinating nucleus of the work, and then I will let it unfold.
By architectonic I understand the art of systems... By system I understand the unity of manifold cognitions under an idea.
This is the advice given by Kant. The focal point of any work is the idea expressed in it. As he points out further down in his treatise, this idea may not be clear to us in the beginning:
And it will be found that the originator, and even his ultimate followers, will err around an idea that they haven't been able to make clear to themselves... (A834/B862 10-20)
But even if it is not immediately visible, the idea serves as an ordering principle as we refine it. The more we succeed in clarifying the idea, the better the chance that we succeed with the work, and that we communicate it to others.
1.6.1. The idea, the thing, and the representation
Now the idea of this work is the relation between the idea, the thing, and the representation[15]. A core question is the appropriateness of representations: whether and how some representations may be more appropriate than others. We may re-phrase this: the essence of our thought processes, the mentation, which Kant called "pure reason", has a correspondence and a relation to the external and internal means, the representations - like language, symbols, images, etc. - that we use to manifest, reflect, and structure our thinking. It is the mechanism of reflection, as it is philosophically called. The meaning of the term representation follows the outline given by Schopenhauer in his "Die Welt als Wille und Vorstellung" ("The World as Will and Representation", abbreviated WWV in the following).[16] There has been an old discussion about this relation, which is aptly summed up by Leibniz' commentary to Locke's argument:
Locke: Nihil est in intellectu, quod non prius fuerit in sensu
Leibniz: Nisi intellectus ipse
Locke: Nothing is in the intellect which was not before in the senses
Leibniz: Except the intellect itself
What no one noticed, possibly not even Leibniz himself, was that the two agreed in their views much more than their differences might suggest. George Berkeley (BERKELEY) and Schopenhauer already came quite close to the solution: "intellectus est in sensu" - presto. That being done, all that is left is to show the "How". This is what I call in the title the "infrastructure of representation".
1.6.2. The infrastructure is the architectonic
Now it is easy to see that the term infrastructure, if taken in a theoretical and technical (techne) sense, leads to the term architectonic (archi/archae-tekton). By this we are back at Kant, and we see how the "infrastructure of representation" connects with the "architectonics of pure reason". These are the noetic requirements. In a different, more pragmatic meaning, infrastructure implies all the technical and scientific requirements.
On our path, we will re-discover some aspects of thinking that submerged, or became forgotten, in 5000 years of script based civilization, and more so in the last 300 years of scientific and technological progress. We will see whether those aspects are helpful in creating new tools that support our thinking. Only if there are stringent reasons to assume that new and unconventional representations for ideas are better suited than what we have now will it really make sense to invest a great amount of time and energy into creating them. Otherwise, those will be right who say that our tried-and-trusted cultural memory systems, with 5000 years of experience of phonetic language encoding in alphabets and our mathematics, all preserved in books, are the best way to go about things in the future.
This metaphor of the nutshell can carry us even further: it
indicates some riddle, some enigma, something that is hidden and not easily
seen. There is a nut to crack, a difficult problem that cannot be solved by
applying tried-and-tested knowledge and methods. I will give a small selection
of fundamental problems and questions here. The first question is:
1.6.3. The Kratylos question
Oukoun eiper estai to onoma homoion to pragmati, anankaion pephykenai ta stoicheia homoia tois pragmasin
If now the word resembles the thing, then by necessity the letters must be similar to the things also.
Kratylos 434a, PLATO-WERKE, Vol. III, engl. transl. A.G.
Plato makes an argument in this work that words are not just arbitrary sound combinations whose relation to the things named is based only on social convention. While this is an error according to modern linguistics, the statement quoted above is even stronger. No wonder that Kratylos is, by scholarly opinion, a good example of a situation where Plato was completely wrong, no matter how deep and insightful his approach to other subjects was. There is not very much one could cite in Plato's defense. The examples given in the dialogue are not very convincing. But is the question he posed really solved for all and for good? This will be taken up again in the present work, and we will see what comes out of it. A hint can be given right now: it depends on what Plato meant by pragma. The greek word has many connotations of work and business done, that is, of the result of a process. It is also related to the praxis mentioned in the Leibniz motto. If we take the position that Plato meant pragma to be the result of a process of mentation, then this is a different issue than if we assume that he meant "empirical things out there". For further discussion, see:
Our next question will be:
1.6.4. The Oberschelp question
It is possible that the much-renowned computational models of Informatics, like Turing machines, random-access machines, and the whole digital, numerically based computing methodology, are ultimately dead ends for our understanding of "thinking", and especially of the computations of nature.
...
All these experiences could be an indication that the ordering structures which are derived from our contemporary digital computing machinery have unfolded the structural order of reality only in a non-essential place, just like continuous mathematics, and that therefore totally new computation and proof models will have to be developed which are more adequate to nature.
OBERSCHELP80, 41, 42, transl. A.G.
While the Kratylos question may be an example, from the awakening of the western mind, of a question that perhaps may not rightfully be asked as a philosophical question any more, the Oberschelp question stands as an example of a debate that goes on behind the scenes of the big-science community. Because if there is any reason to rightfully ask this question, then much of the big-effort money spending on technological methods (like Artificial Intelligence) may be wasted. We could squeeze an old joke on the differences between philosophy, theology, and dialectical materialism to fit the issue: it may be that humanity is searching, with more and more technological apparatus, for a black cat in a dark room - a cat which has the perplexing habit of being less and less there the more we cry: "We have it by the tail; a little more research money, and we will have it all!" The salient issue behind this is that it may be wise to put just a little research money into a totally different place, one that does not find favor with big science. (STRAUB-GLAS, 223-226)
After this mighty jump across 2300 years, we might as well be finished with the riddles and get going with more tangible matters, but we are not. Because we will now jump right back to where we came from: to another enigmatic piece of Plato's wisdom, or fancy, as we may have it - his version of the creation of the world, in Timaios:
1.6.5. The Timaios question
meignys de meta taes ousias kai ek trion poiaesamenos hen, palin holon touto mouras hosas prosaeken dieneimen, hekastaen de ek te tautou kai thaterou kai taes ousias memeigmenaen. aercheto de diairein hode. mian apheilen to proton apo pantos moiran, meta de tautaen aphaerei diplasian tautaes, taen d' au tritaen haemiolian men taes deuteras, triplasian de taes protaes, tetartaen de taes deuteras diplaen, pemptaen de triplaen taes tritaes, taen d' hektaen taes protaes oktaplasian...
And when he had made the three into one, he divided this whole into as many parts as was appropriate, and each of them was a mixture of "the same", "the different", and "the substance". And he began to divide thusly: First he took one part of the whole, then the double of the same, as third one-and-a-half of the second, it being thrice the first, as fourth the double of the second, as fifth thrice the third, as sixth eight times the first, as seventh twenty-seven times the first...
Timaios, 35b-c, PLATO-WERKE, Vol. VII, engl. transl. A.G.
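Worked out term by term - this is plain arithmetic on the quoted text, not yet an interpretation - the seven portions are:

    1, 2, 3, 4, 9, 8, 27

that is, the two geometric series 1, 2, 4, 8 (the double ratios) and 1, 3, 9, 27 (the triple ratios), traditionally drawn as the two legs of the Platonic lambda.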
This we might call the Timaios question. While we are postponing any attempt to decode this riddle, we just observe that this statement of Plato has to do with number and proportion, and it has to do with the real nature of reality. Thus it addresses the same fundamental issue as the aforementioned Oberschelp question. What I will try to show in the following is that we have here a core question about the concept of number, and about the appropriateness of numbering systems when dealing with the nature of things. Plato's account in Timaios is an account of the pythagorean concept of number, which is of a quite different nature than the one we are used to nowadays. For further discussion, see:
->: HARMONICS, p. 416, ->: AVATAR, p. 345
1.6.6. To the seekers of higher vision
triton de genos theion anthropon dynamei te kreittoni kai oxytaeti ommaton eide te hosper hypo oxydorkias taen ano aiglaen kai aerthae te ekei hoion hyper kai taes entautha achlyos kai emeinen kei ta taede hyperidon panta haesthen to topo alaethino kai oikeion onti, hosper ek pollaes tinos planaes eis patrida eunomon aphikomenos anthropos.
Eine dritte Klasse endlich sind gottbegnadete Menschen, die von stärkerer Kraft sind und ein schärferes Auge haben; daher sehen sie sozusagen wie Fernsichtige den Glanz dort oben und heben sich dort hinauf gleichsam über die Wolken und den Dunst der irdischen Welt hinweg, und verbleiben dort in der Höhe, achten das irdische alles gering und erquicken sich an jenem Orte, welcher der wahre und ihnen angestammte ist, so wie ein Mensch, der nach langer Irrfahrt in seine von guten Gesetzen regierte Heimat zurückkehrt.
Finally, a third kind are those god-gifted humans who possess the higher potential and the keener vision; therefore they perceive, eagle-eyed, the splendor of the lofty heights; and they lift themselves up as if above the clouds and the mist of the phenomenal world, and they remain in that suspense and elevation; and they care little for the base worldly sensations, and they are refreshed and rejuvenated through their presence in this place which is their true and native home; just like a man who, after a long errant wandering, has found his way back to his home where the good and just laws are reigning.
1.6.7. Ode to Daedalos
Der Flug des Daedalos
Ich habe es gewagt und bin mit dem Gestell,
zusammengeheftet aus dem Draht der Logik
und verkleidet mit den Federn der Imagination,
endlich auf die Klippe gegangen, und bin abgesprungen,
vertrauend auf meine Kunst, und vertrauend auf
eine übergeordnete Instanz, die wohlweislich Gesetze
geschaffen hat, denen ich mich anvertrauen kann,
auch wenn ich sie nicht höre, sehe, rieche, schmecke, oder fühle.
Und nun schwebe ich, ziehe meine Kreise hoch über den
Wohnstätten meiner Artgenossen, sehe ihre vertrauten
Muster, die auch die meinen waren, sehe den Rauch,
höre den Klang eines Ambosses in der Schmiede, und spüre den
salzigen Wind auf meiner Haut, der vom Meere kommt, wo
weit, weit hinten, jetzt von meiner hohen Warte aus endlich
sichtbar, sich die Bergspitzen meiner Heimat zeigen, die
ich vor so undenklich langer Zeit verlassen hatte.
Und ich gleite, erforsche die Gesetze der Thermik, lerne
die Lehren des Windes und lerne die Bewegung in der neuen
Dimension. Ich bin unschlüssig, wohin ich fliegen mag.
Meine Heimat, nach der ich mich so lange in der
Gefangenschaft in den Labyrinthen der Erde sehnte, ist
auf einmal, da erreichbar geworden, nicht mehr so interessant.
Vor mir, hinter mir, und vor allem: Über mir -
ist unendliche Weite! Warum sollte ich zurück in die
Heimat fliegen? Andere Fernen locken. Das Gefühl der
Freiheit übersteigt die Sehnsucht nach der Heimat,
der Geborgenheit, der Wärme, und des Wohlergehens.
Wind, Wasser, Wellen, Ströme, Strahlen, Sonnen, und Sterne
sind meine neue Heimat,
und so ziehe ich auf meinen neuen Bahnen.
A.G.
The Flight of Daedalos
I have finally dared it
I have made my way up
equipped with the contraption
fabricated from the wires of logic
and lined with the feathers of imagination
I have finally climbed the cliff
and I have made the jump
trusting my art, and trusting a higher order
whose providence has created the laws
that will support me even if I can't
see, hear, taste, smell, or feel them.
And now I am soaring in circles
high above the habitats of my former kinsmen
I perceive their familiar patterns that had also been mine
I see the smoke and hear the clink-clank of the anvil in the forge
I feel the salty wind on my skin, coming from the sea
where far, far beyond, now dimly visible from my high vantage
I can finally perceive the mountain-tops of my homeland
which I had lost so many eons ago.
And I am soaring, exploring the laws of thermics,
learning the lessons of wind, and exercising
the freedom of movement in the new-gained dimension.
I am hesitating where I want to fly now.
My home, for which I had longed so much
during my imprisonment down in the labyrinths of this earth,
has now, while coming within reach, lost its attraction.
Before me, behind me, and especially -
above me - is unlimited space!
Why should I return to my home?
There are other expanses of vastness awaiting me
The feeling of unlimited freedom overcomes the yearning for home
the wish for comfort and familiarity and well-being.
Wind, Water, Waves, Suns, Stars, and Rays
are my new home
And so I am travelling my new orbits.
A.G.
1.7. Technical introduction: Another method of bootstrap
The philosophical part of this work is some kind of mind-jogging: how to unhook a few stuck conceptual deadlocks that we may have unknowingly got bogged down in.
Here I will come to think about some of the technical details of the Symbolator project: what it is that I want to build, what it will be good for, and how much it will cost to build. More on this will of course follow in the later parts on organization, finance, and technical detail.
The first technological principle is: it doesn't have to be fully concordant with current academic consensus in order to be workable. If other working principles can be made plausible, then they must be admitted as working hypotheses. The theory of the Symbolator, as I am outlining it here, cuts across a full spectrum of academic disciplines - philosophy, linguistics, philology, semiotics, evolutionary biology, neurology, informatics, and a few more - and it is impossible to completely validate the assumptions made beforehand by the standards of each discipline. The completed Symbolator is essentially needed to validate the hypotheses. It will be a complete conceptual and theoretical bootstrap. This is not such an unusual situation. When we look into history, technology has been ahead of the academics more often than is readily admitted today.
We know that technology and engineering had progressed pretty much on their own dynamics up until the beginning of the 18th century. Toulmin has given us the details: inventors never waited for the scientists when they implemented their gadgets (TOULMIN-KRIT). Before Galileo, engineering had nothing to do with scholarly learning, and that was a good thing, because if engineers had relied on the aristotelian theories, the roman armies would never have left their garrisons, because there would have been no roads; they would never have won a battle, because there would have been no steel; Caesar would never have conquered Egypt, because his ships would have fallen apart while still in the harbor; and the roman buildings would have collapsed while under construction. If roman engineers had used roman numerals for their calculations, as academic historians today believe they did, they would not be finished with their calculations even today, let alone have constructed one stable building. It is similar with modern computing. I think I am not mistaken that the works of Engelbart, Alan Kay, Ted Nelson, and all the others who invented the current state of the art in hypermedia and virtual reality did not go through complete academic verification before they implemented what they wanted.
The chicken-and-egg problem with a new principle is always that in order to prove it, an apparatus must be built that shows that it works. A little piece of new technology must be invented. This costs money. Therefore, if the principle is entirely new, it may be hard to convince those who control the funding that it is wise to spend money on exactly that project - especially if resources are tight, and other, well established and well-credited projects are shrinking or even being scratched. Then it is entirely understandable that research establishments, as well as all the rest of society, stick to the "tried and tested": the methods that have shown their worth in yesteryear, and which will surely also work tomorrow.
I wouldn't be doing what I am doing here if I were not convinced that it is worth my time and effort. I have invested about 10 to 15 man-years in the project. Most of it was funded through my work as an industrial consultant. One hour of industrial consulting can be rated at around $200. At roughly 2,000 working hours per man-year, we can assume that I made a capital investment of between $4 million and $6 million. That is no peanuts, is it? And I consider this an investment, and not just a hobby.
1.7.1. Autopoiesis, natural self organization, and machine intelligence
When academic computing, or informatics, adopted the natural-scientific positivistic standard, it threw out a lot of cybernetic work from the 40's and 50's that didn't fit too well into that scheme. One notable proponent of this other view was Gotthard Günther. He had unfortunately proclaimed that his work was based on Hegel's[17] logic, which made his theories "non grata" in the scientific community. Si tacuisses, philosophus mansisses. Had he just said he was doing neuronal morphogrammatics, no one would have noticed, and he might have become famous. It happened otherwise. Now, he had developed his ideas over a long time at the University of Illinois in Urbana, where Warren McCulloch had assembled a group of people whose work continued in the following years - mainly through Heinz von Foerster, who founded the Biological Computer Laboratory in Urbana in 1958 and directed it until 1976. And from there emanated a stream of ideas from now well-known researchers like Ross Ashby, Lars Löfgren, Herbert Brün, Gordon Pask, and Humberto Maturana, whose theory of autopoiesis is of particular importance for this work.
Gotthard Günther had introduced idealism into the cybernetic debate, and that did not fit the natural-scientific position of informatics. Maturana had tried his autopoietic solution on the natural-scientific base, but apparently there are some small, unobtrusive, but all the more fatal consequences when you try to think the matter through, or worse still, implement it on a computer.
What I am introducing here is the neither-nor assumption as proposed by Schopenhauer. I don't mean to sell the whole philosophy of Schopenhauer - neither his abnegation of life, nor his hatred of women. And I don't suggest that anyone who isn't really up to wading through a maze of verbiage should read Schopenhauer. No, it is much simpler than that. Schopenhauer has presented a working mental model that has to rely on neither the materialistic assumption nor the idealistic one. He of course never had engineering in mind, and so he didn't prepare for any practical application of his work. That will have to be filled in here. But he has left us some very interesting mental bootstraps that can be beautifully converted into technical bootstraps, if we go about it the right way.
He has left us a principle for autopoiesis that cannot be thought through in the framework of the ontological assumption of science. In order to think this principle through, one has to pursue the track a little while outside the trodden paths of the last 300 years of scientific development. One has to know how to do this without falling into idealistic traps or esoteric phantasmagories. But it can be thought through, and then there is something practically useful.
1.7.2. The practical definition of the Symbolator
In this study it is not my aim to propose a new approach to intelligent machinery. I think that is fine for those who like their computer to be smarter than they themselves are. I myself believe that there are already enough smart-asses out there who try to tell me things which I can neither verify nor falsify, so that I either have to believe them or just go my way. No, in this respect I am very selfish. I want to have something that makes me smarter. If I can get that with a smarter computer, that is fine. Those dumb "blech-idiots" are still so stupid that they could surely use a little more intelligence. But it must always serve my practical aim of making me smarter, or otherwise the whole thing will be useless - or rather, it will be terrifically dangerous. I always have to refer to the "Neuromancer" stories, which are so incredibly accurate in the picture they create of a brain-dominated thousand-year-reich of the cyber age. We already have a host of very dangerous influences from those darned little mouse-and-icon computer interfaces that are exerting their detrimental effects on our mental processes. There are so many trojan horses and mind-worms ready to be hooked into our brains by a profit-seeking industry totally unencumbered by any ethical concerns. What if our computers became only half as smart as we are? God beware!
The trick of the trade is the same as with the old proverb of "the glass of water that is either half full or half empty". I want to have a computer that can be a hundred, or a thousand, or even ten thousand times smarter than I am now, provided that:
I am always at least twice as smart as my computer
This is what I call a Symbolator. No great philosophical, semiotic, or technical definition of the nature of the symbol process, of neural feedback loops, or any of that gizmo. That can wait until I get around to it in the semiotics and technical sections. No, plain, useful, down-to-earth, pragmatic application. The only thing that counts for me is this thing between my ears: the brain. And if I can get any technical device that helps me use this thing between my ears in any better way, so much the better.
Now, as we all have found out using these little computers, as stupid as they may be, they have one devilish attitude: they make us feel so darn stupid! They have an unerring, unflinching, unforgiving, uninfluencible, determined, mechanical intelligence that, be it ever so primitive, instills a certain type of fear in us. There are whole sections of the population who still resist using the computer because of this fear - which is entirely justified, because there is a very specific danger in those devices, one that those who play with them, the computer engineers and the computer kids, have gotten used to, just as much as one gets used to wielding a chain saw.
To understand what is happening here, I had to go the whole way through human history, like an archeologist scraping through the mud, discovering shreds here, debris there, and vanished traces over there. Because what is happening here is just another layer of transformation of our mental processes as they accommodate to yet another new auxiliary device humanity has fabricated for its comfort, material improvement, utility, or out of sheer vanity.
The process is very similar to what happened when civilizations adopted writing. Since we in the western industrialized countries are now a fully literate civilization, there is simply no way of imagining any more what life, and more specifically, mentation[18], was like before we had put anything that could be expressed in words into writing - and that, in books.
The Symbolator has to do with re-learning a whole lot of things that we apparently have forgotten. The most important of these seems to be: the world doesn't consist of words only, and it doesn't consist of only those things that can be described with words, patented with words, and put into legal contracts with words. And if we substitute "words" with "words and mathematical formulae", it remains the same.
The widespread use of computers by large sections of society has the best chance of converting, within at most 50 years, the whole global industrialized population to mental modes of functioning that are determined by the way user interfaces are programmed. A similar conversion needed about 5000 years to bring the mental states of the world population to the mental modes of writing-oriented verbal thought, the prose style of talking and thinking.
1.7.3. The mentation modalities of sounding and moving visual images
Ever since the days of ancient Egypt, humanity has lost touch with the art of expressing symbolic thought in pictures. Of course, there has been art, and architecture. But since the main operational system of mentation became word and concept oriented, there has been a split. The Egyptians still knew about these things, and it is not only out of sheer conservatism that they kept the hieroglyphics, even though they had the demotic cursive writing. And it is not completely true that hieroglyphics were "just a fanciful form of alphabet". Even though Champollion rightfully debunked the phantasmagoric ideas that Ficino, Pico della Mirandola, and Athanasius Kircher entertained about them, these ideas had "a grain of salt" about them (->: SONG_SYNAISTHESIS). Because there was a certain way of "picture thinking" hidden behind those hieroglyphics, even if they just encoded phonemes.
This is why I am making such a great detour through the whole of human cultural history: to find those traces that may lead us to a better appreciation of what it means to use the mentation modalities of sounding and moving visual images. Because if we don't have a good idea what that means, we will always get stuck in logocentrism, as Derrida calls it (DERRIDA74): the almost inexhaustible source of confusion by which the ubiquitous, 5000-year-old domination of verbal mentation modes - what we usually call thinking - plays its tricks on us.
And there has never been a symbolic memory technology available to humanity in the past that allowed the combination of sounding and moving visual images at the same time. This is completely novel ground. There is of course the whole field of theater, song, dance, puppet theater[19], the kinematic arts, etc. But these were considered as art, not as mentation. (For further discussion of the connection of visual symbols and sound, see ->: HARMONICS).
1.7.4. The infrastructure and technical representation of sounding and moving visual images
This, then, is the technical core of this work: not just to have computer technology that lets us manipulate pictures and moving images with incredible pains, as we have in contemporary CAD, drawing and drafting, and multimedia animation software. When you want to construct a reasonably complicated drawing with Corel Draw®, or any other of the standard market systems, you can easily spend a day on just one drawing alone. That is all right if you are working as a designer for commercial graphics and have to produce just another nifty ad for Kako-Kalo that will earn you a bundle for just that one piece. But if you are a teacher wanting to design a textbook of thermodynamics, and you want to put a very good picture next to each chapter you write, you are in trouble. ABRAHAM-DYNAMIC is a good example of what a book like this has to look like. Of course, the drawings there were hand-made. You will be in much worse trouble if you are audacious enough to design a whole course completely multi-media based - unless you have a lot of pocket money to spend, or a rich uncle, because you won't get that kind of money from your school board. The current user interface metaphors are just not suitable for this kind of work. And it is in the present mouse-and-icon technology where most of the problems are hidden.
I invite you to try to construct a drawing of a reasonably simple and normal biological system like the DNA with Corel Draw: just try to design a double spiral, that is, a spiral rolled up in a spiral. You will never make it. But if you use a Logo-like approach, it is almost trivial. Just use the basic subroutine that creates a circle[20], let it increase one step on the z-axis while it makes about ten steps in the x-y plane, and you have the basic spiral. Then you take that subroutine and apply it to a copy of itself, with the values for x, y, z multiplied by 20. Of course, you will have to use x(1,2), y(1,2), z(1,2) for the double indexing. You will see that with about half an hour of programming you will have created a beautiful spiral, spiralling around itself, mathematically perfect - and so easy that every fifth-grader could do it.
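To make this plausible, here is a minimal sketch of the same construction - Python standing in for Logo; the function name, radii, and pitches are illustrative choices of mine, not a prescription:

    import numpy as np

    def helix(t, radius, pitch):
        # The basic subroutine: a circle in the x-y plane that climbs
        # steadily along the z-axis - one "step" in z per turn.
        return np.stack([radius * np.cos(t),
                         radius * np.sin(t),
                         pitch * t / (2.0 * np.pi)], axis=1)

    # Parameter running through many turns of the small winding.
    t = np.linspace(0.0, 200.0 * 2.0 * np.pi, 20000)

    small = helix(t, radius=1.0, pitch=1.0)            # the basic spiral
    big = helix(t / 20.0, radius=20.0, pitch=20.0)     # same routine, values times 20

    double_spiral = big + small     # a spiral rolled up in a spiral

The superposition in the last line is a crude approximation (a faithful version would rotate the small winding into the local frame of the large one), but the point stands: two calls to one subroutine, and the double spiral is there.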
1.7.5. 3-D Script as programming language of the Symbolator
This is exactly the kind of programming that none of the available design programs on the market can do (if I am informed correctly). Possibly Mathematica can do it; I didn't have time to check. But the design programs should let you do this also. There is a little story to this: every one of these design programs converts its output to Postscript®. Now, Postscript is a programming language, albeit one that hardly anyone ever gets to see as live code - and if one does get to see it, one will regret it for life. But just imagine that we take the approach of Postscript, call it 3-D Script, add some sound and motion gizmos, and think the whole concept over a little so that it doesn't look as abhorrent and write-only as current Postscript does. Then one could add a little topping and provide an interface like Visual Basic, and presto, you will have the nicest Symbolator that I could ever imagine. And the cost? Just a trifling 50 to 100 man-years. That is nothing compared to the thousands of man-years that have cost the computer industry billions in the last operating-system wars of Windows NT contra OS/2 contra Windows 95, contra NeXT Step...
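No such 3-D Script exists, of course; but a toy sketch in Python can show the flavor of a Postscript-like, stack-based notation extended with motion and sound operators. All operator names here (sphere, orbit, tone) are invented for illustration only:

    # A toy stack interpreter in the Postscript spirit. The operators
    # "sphere", "orbit", and "tone" are hypothetical illustrations of
    # what a 3-D Script with motion and sound gizmos might offer.
    class Script3D:
        def __init__(self):
            self.stack = []   # operand stack, as in Postscript
            self.scene = []   # objects built so far

        def run(self, program: str):
            for token in program.split():
                try:
                    self.stack.append(float(token))   # numbers go on the stack
                    continue
                except ValueError:
                    pass
                if token == "sphere":                 # radius -> scene object
                    self.scene.append(("sphere", self.stack.pop()))
                elif token == "orbit":                # wrap last object in a motion
                    period, obj = self.stack.pop(), self.scene.pop()
                    self.scene.append(("orbit", period, obj))
                elif token == "tone":                 # attach a sound to last object
                    freq, obj = self.stack.pop(), self.scene.pop()
                    self.scene.append(("tone", freq, obj))
                else:
                    raise ValueError("unknown operator: " + token)
            return self.scene

    # Postfix notation, as in Postscript: a sphere of radius 2,
    # set orbiting with period 8, given a 440 Hz hum.
    print(Script3D().run("2 sphere 8 orbit 440 tone"))

The point of the sketch is only the notation: postfix, composable, and readable enough that motion and sound become first-class elements of a drawing, instead of afterthoughts bolted onto a page-description language.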
I have done much of the dirty ground work in my Leibniz system, in about 5 man-years. So I have some know-how about these things. I know what it means to build a mentation bootstrap.
1.7.6. The essential bootstrap principle, and beware of the traps
So we can parse the famous principle of the software industry, the bootstrap, both ways, and it comes out right both ways: the boot-strap[21], by which you can lift yourself, and the boots-trap, of which you must beware.
Because if we really want to get into this matter, we will have to find some new ways to think in visual and motional symbols. And to get into that thinking, we must implement working models on a computer. And while we are implementing them, we must perfect our own internal representation of what we are doing in software, reflecting it into our own "thing between the ears", as I have called it above. In short: we must perform a whole visual and auditive conceptual cultural bootstrap. What would take thousands of years of real time "out there" in normal cultural evolution can, and must, be done in just a few years, if done with the right tools and the right theories.
Of course some versions of this work are already in full swing all over the world, since that is essentially what the computer industry is trying to get to. Except they don't really know where they are heading, what is driving them, and what is obscuring their view. If it just were not for those few inconspicuous little hidden details of the project, those hidden traps of well-worn mental modes of logocentrism, that invariably lure researchers and software engineers into the by now well-filled pitfalls of former failures.
It is just like with childbearing: you can't be half pregnant. And with a halfway solution, you will have lost, no matter how nice your gadget looks. Because you lose both ways: if you make your gadget smarter than you are, you lose, because it will control you instead of you controlling it (->: CYBER_REICH). If you don't make it smart enough, you lose too, because you need the gadget desperately to solve some problems waiting for you "out there" that you can't put off any longer.
1.8. The Leonardo-Leibniz Project
In this section I will give some reasons why this project is named after Leonardo and Leibniz.
1.8.1. Leonardo and Leibniz - the
unequal twins
Leonardo and Leibniz lived almost exactly 200 years apart, and they lived to almost exactly the same age. Leonardo was born on Apr. 15, 1452, in Vinci; he died at the age of 67 on May 2, 1519, at Cloux, near Amboise, France. Leibniz was born July 1, 1646, and died at the age of 70 on Nov. 14, 1716. Leonardo was the characteristic figure of the uomo universale of the Renaissance. Leibniz was probably the last uomo universale of humanity.
Both were considered by their contemporaries somewhat strange, a little or more than a little crazy. Their ideas were rarely understood and often ridiculed, and not many people of their time recognized them as geniuses. Neither was able to connect closely enough to a center of political and financial power for his ideas to be worked out to any appreciable extent. Both died after a wearisome and restless life, broken-hearted and worn out from continuously running up against the granite walls of a humanity that was only interested in its petty games, local politics, and base amusements. Neither was known to have womanized very much [22]. It didn't seem to make a lot of sense to either one to procreate on the physical plane. Their children were multitudinous, and long-lived, on the plane of ideas. This is what they have in common. Apart from that, there could hardly have been two personalities more radically situated at opposing ends of the human spectrum than they were.
1.8.2. Leibniz the
rationalist
Leibniz was a thinker, a rationalist, a politician, a mathematician, a logician. He had taught himself Latin at the age of 12, so that he could read the books in his father's library, which contained a quite complete overview of the collected wisdom and knowledge available to humanity in that era [23]. By the age of 20, he had mastered it. Apart from Latin, he conversed freely in French, and when he wrote German, one can still feel that this language was not as well suited to the discussions of great subtlety that he was used to. His ability to use the formal language of mathematics was equally superior. His invention of the calculus made him immortal in the scientific halls of fame. Leibniz was absolutely true to his principles and in his loyalty and service to his superiors. He was unswerving in his religious determination to do anything possible to keep god in a world of science that was quickly transforming into utter materialism. His universalistic outlook was no longer fashionable in the emerging climate of modern natural science, nor were his religious beliefs.
While Leibniz professed as his motto "theoria cum praxi", he never really lived up to his good practical intentions. His political and technical projects never succeeded even approximately as his eminent theoretical mind had intended, if he got started with them at all. More often than not, they were too visionary or all-encompassing to find credibility within the rather narrow-minded provincial framework of the local German princes for whom he worked. His calculating machine is probably the one notable exception of a technical construction that was workable. His great lifetime project, which had kept him in service with the house of Hanover, to compile their complete history, was never completed. [24] Perhaps the only practical thing that he ever really achieved in his lifetime was that he assembled and managed one of the largest and best libraries of his time, in Wolfenbüttel.
1.8.3. Leonardo the
empirist
Leonardo was the embodiment of "praxis cum theoreia". For him, theoreia was not theoretical in any abstract or formal sense, but derived directly from the antique meaning of "higher vision". He had the vision, and his sense of vision must have been of unbelievable acuity, because he saw and drew things that we are aware of now only because we have slow-motion kinematography, and polarized light for visualizing turbulences. Apart from that, he was through and through practical, one could say an empirist, even though he never bothered about any -ism; he believed only his senses, and his experiments. Born as the illegitimate child of a wealthy advocate, he received only the standard middle-class education, consisting then as now of the three R's: Readin', wRitin', 'Rithmetic. He never learned Latin to the extent that he could profit from the scholarly books, which at his time were almost exclusively in Latin. For the same reason, he also had little training in the mathematical arts. Therefore, he had to rely by necessity on the most readily available source of information: his sensorium, and his observation. And he developed these faculties to a height of acuity that possibly no human has attained before or since. He was an engineer and an artist, had no interest in philosophy, nor in politics, had no loyalty to anyone except himself, and he would just as well have served the Sultan in Istanbul [25] had he offered him a decent lifetime tenured position (LEON-HEYD, 128). Leonardo promised a lot to everyone who would listen to him, but he didn't seem to care as much as Leibniz did that he hardly ever kept a promise. For him, the paintings and statues he sometimes delivered to his princely commissioners were just a way to get funding for his ever-continuing and all-consuming research projects. If he had no time left for the paintings, too bad for them; the research was more important.
Leonardo was highly esteemed for his artistic work by his contemporaries, but he was not quite considered a towering genius of humanity. Those who dealt with him thought him more a trickster and a cheat, since he promised much and rarely delivered. He didn't care much about god, even less about Catholic religion, and last but not least, he was homosexual. Had he not been so famous, and had he not been wise enough to keep to himself what he was thinking and researching, he would most assuredly have come to a glaring, blazing, and illuminated end on top of an inquisition bonfire, together with all his notes, sketches, and technical models.
1.8.4. The historical kairos of
Leonardo
Leonardo was the greatest master and genius in the field of visual and kinesthetic syn-aisthesis that humanity has ever had. Let us not bother now too much about the specific meanings of these terms. The fact that Leonardo could rise beyond being just a gifted artist to a stature of unique and universal genius is directly attributable to specific and historically singular circumstances of the Renaissance era. The Renaissance was the age of the uomo universale. In Leonardo's time, there were many outstanding persons able and proficient in the fine arts as well as in engineering: Giotto, Brunelleschi, Francesco di Giorgio, Verrocchio, Leon Battista Alberti, as well as other savants in the vein of the uomo universale whom he had himself met or heard about: Paolo del Pozzo Toscanelli, Benedetto dell'abbaco, Carlo Marmochi, Francesco Araldo, Domenico di Michelino. (LEON-KEMP, 88, LEON-BRAMLY, 132, 143, 144). This was the fortunate historical kairos of Leonardo's time: the education of engineering hadn't yet progressed into the overweight of formalisms that pushes any aisthetic instruction out the back door, as is now practiced in all engineering schools on the globe. In his time, there still existed the possibility that someone who was primarily aisthetically inclined could absorb all the knowledge available, reach perfection in it, and go beyond anything that humans were able to create in the 200 years to come, that is, up to and past the time of Leibniz. This was possible because Leonardo had perfected the aisthetic techniques of understanding: single-handedly, intuitively, unerringly guided by his senses, in a style totally neglected by the formalist ex-cathedra mentality that framed higher school education and the engineering curriculum in the 500 years after him. Leibniz was totally opposed to Leonardo in this respect. Leibniz would never have lifted a hammer himself, or used a screwdriver.
At any later time in European history, the engineering education system threw out any predominantly aisthetic talent long before it could show its abilities in the profession. Someone like Leonardo, trying to enter that system later, would invariably be deterred by the formalistic maze of standard technical education and drop out after the first term. He would possibly become a gifted, but entirely forgotten, artist, as was the fate of countless young men afterwards. Never mind the women; they were deterred by the system before they could even enter it. That is a sad fact of our education systems, and Leonardo is practically the sole exception Western European civilization has to offer. Leonardo was not unique in his potential, but Western civilization has managed to lay to waste all the other potential that was available.
1.8.5. Leonardo and Leibniz on the
fates of humanity: the memorable year 2019
Maybe Leibniz was more optimistic than Leonardo; he had even developed a philosophy of his own, the theodicy, to prove that this world is the best of all possible worlds. But being able to prove that on paper didn't really do anything about the actual state of humanity in that bloody reality out there. Leonardo didn't care for philosophical consolations. He was an engineer through and through, and he knew about the iron dynamics of the ensuing technological development long before it unfolded in full swing, that is, right now, 500 years after him. He envisioned global catastrophe as the result of the inevitable course that humanity would take under the influence of technological domination. His visions of deluge and global ecological catastrophe, which he sketched near the end of his life, are as magnificent and enigmatic as his greatest masterpieces, like the Last Supper or the Mona Lisa. (See LEON-HEYD, 183, 184, LEON-WALL, 181-183, LEON-GANT, ILL:LEO.) And they predict pretty much what is happening today and in the near future. Am I being too prophetic when I predict that humanity will get to experience "the full swing" of his vision materializing almost exactly around the 500th anniversary of his death, the year 2019?
1.9. Some biography and individual history, what led to the Symbolator project
In this section, I will give "my story". I am writing this work as part of an ongoing discussion with the many people with whom I have come into contact about my project over the course of time. Some contacts are purely scientific, but others are more personal. This is for those people who know me personally, and for those who read my paper and want to have a few personal impressions. It always helps the reader who is not too interested in the minutest technical details to get a feel for the person behind the story. Therefore, it is part of the introduction.
What you are reading is a further step in a long history of research and development that I have been involved in for about 15 years.
There have been numerous internal papers and publications, as well as extensive software development, in the Leonardo-Leibniz Project. An outline of the writings can be found in the bibliography under BIB-AG. The titles listed there show the conceptual development as it progressed, starting with the Diplom thesis in 1977, via several articles in the computer press in 1983-84 and the articles and reports on the Leibniz [26] System development in 1989-91, to the later papers up to 1995. The whole documented project work so far covers about 6 Megabytes of LPL (Leibniz Programming Language) source code and about 3 Megabytes of text in articles and books.
1.9.1. The Leibniz motto: "Let us
calculate, Sir"
I cannot say when this quest started. I can recall occasions and instances from the earliest years of my life that have a connection to the present work. I have somewhat arbitrarily set the starting point to an event that occurred to me while travelling in India. I have enclosed a story on this in the appendix. Unfortunately it is in German, and I cannot, or will not, translate it. It is also not scientific, but again, for those who want to have a bit of my personal story, this will serve well. See:
->:
All during my university studies, I had been busy looking far and wide into things that had not the slightest career-enhancing relevance for my future profession as a computer scientist, like philosophy, neurology, linguistics, indology, egyptology, history, comparative religion, and a few more. When I finally had to come up with my degree, I chose a subject for the thesis that would give me at least some leeway to amplify on my intellectual hobbies, while still containing a core structure of computer science. I called it: "Artificial Intelligence and Learning" (BIB-AG:AI-LRN77). As motto I chose a quotation from Leibniz: "Let us calculate, Sir", which he had used as the programmatic declaration for his Characteristica Universalis. This was actually a trojan horse, since I meant the quote to say that something was missing in the approaches AI was using to make calculated representations of the world. But of course I couldn't prove that. I just thought that Leibniz had a particular tack on the matter when he engaged in his lively dispute with Clarke near the end of his life. I succeeded in getting my degree, but then couldn't come up with any sensible concept for a PhD thesis that would still fit into the computer science framework. I had managed to diffuse out of the thought structure of what a good computer scientist is supposed to think like, if he is ever to earn his PhD in a German Computer Science department. I wasn't thinking like a natural scientist, but I also didn't quite fit the scheme of the humanities. I had developed a style of thinking of my own which didn't fit the categories at all.
I then started working in the computer field. I was in California in 1978-1980, and had gone to the Palo Alto Homebrew Computer Club a few times. Somehow I was around while all those things broke loose that were later called the microcomputer revolution, even though I didn't get to know Steve Jobs personally. So I chanced to get to work on the first Apple I, Imsai, Sol, Altair, Northstar Horizon, Dynabyte, and whatever other unspeakable contraptions the genius of microcomputer tinkerers came up with. I had read all the marvel stories in Byte magazine, but was much less impressed when confronted with the machines in reality. Sure, I loved the personal feeling those little boxes gave you - finally having a computer all for your own. But working with them required one to enjoy a specific kind of masochism. Teletype paper tape as the only long-term storage, because disk drives cost a fortune. 8K Bytes of memory to work in. One constantly had to fight the machinery into doing what you wanted it to do. Conditions improved as time went on. One day, we all had floppy disk drives, even though it was impossible to take the disk from one computer to the next machine and use the data there. Not so, my friend: the designers had decided that it was in the best interest of company profits, to ensure customer brand-fidelity, that customers never even think of going to a different machine with their data. And one fine day I even went to see a friend who had managed to borrow, beg, or steal a hard disk drive from the company he was working for and hook it up to his computer. What a marvel! Five full megabytes of almost instantaneous-access storage. Unimaginable, what one could store on this drive. The price was also unimaginable.
1.9.2. To think directly into the
text editor
For me, the day of revelation came when I had the first CP/M machine running a text processing program called Wordstar. I had never been fond of writing. I just hated being terribly self-conscious hitting those darn typewriter keys, always afraid of missing the one I wanted and having to go through an ordeal of correcting or even re-typing the page. I had learned touch-typing in school, so I didn't care for the slow-but-sure hunt-and-peck method with two fingers. So when I got that text program, somehow my whole world changed. I could write as fast as I wanted and forget about the errors; they could be corrected later. The most important thing is to get the ideas into the text while they come up. Wordstar used a scheme for cursor movement, the Wordstar Diamond (about which I will be talking further down). This was denounced as cryptic and unintuitive by many people. I couldn't understand why. There was some kind of a learning curve in the beginning, but once learned, it was extremely powerful. I learned that my ability to express myself in writing was crucially influenced by the flexibility my system allowed me in modifying my work as I went about the writing. I have heard of the writing style of other people. They told me they thought out everything neatly and cleanly on paper, and then typed it in themselves or had someone else do it for them. What nonsense, I thought. It is so much faster and more efficient if I think directly into the text editor. But somehow I have not been able to tell anyone what that really meant: to think directly into the text editor. Anyhow, I started writing and have never stopped since. Up to now, it is about 10 Megabytes, including 6 Megabytes of program code.
1.9.3. The essential question of
zero time feedback
I was doing work in real-time embedded industrial control systems. These are the kinds of devices that do computing for you without your ever getting to see a computer: intelligent washing machines, car electronics, missile guidance systems, and the like. Back in the olden days, they had to be squeezed into exceedingly tight memory spaces. If a manufacturer could put a system into 16K Bytes instead of 32K, he could save a lot of money in those days of TTL memory chips. Today that constraint isn't as stringent any more, but I got some experience of what it means to recycle single bytes. One machine I was programming for was an ingenious device that acted as assembly control for the manual assembly of electronic components. The machine supplied all the parts to the operator as they were to be mounted, and even showed with a light point the exact location where they were to be mounted. The human was needed for the superior ability to position the parts, for which much more expensive and inflexible machinery would have had to be used if the job were done fully automatically. That capital outlay could only be economical in very large batches; for small to medium batches, manual assembly was more economical. This machine was an example of the closest possible human-machine feedback loop one could get. It capitalized on a domain of performance that is essential for any kind of virtual reality, but which so far hasn't really been approached: the essential question of zero time feedback. Now, of course, zero doesn't mean physical zero; it is a physiological zero. The human nervous system has a reaction lag constant of about 50 milliseconds, below which everything is experienced as instantaneous, and above which we can say: there has been a time difference between the events. At about 200 msec, we can say: this one occurred earlier than that one (see: PÖPPEL, ->: NEURO_BRAIN).
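To make the two thresholds concrete, here is a minimal sketch in Python; the 50 ms and 200 ms figures are the ones cited above, everything else is my own illustration.

    import time

    PHYSIOLOGICAL_ZERO = 0.050   # below ~50 ms: experienced as instantaneous
    ORDER_THRESHOLD    = 0.200   # above ~200 ms: "this one came first"

    def classify_feedback(seconds):
        # Classify a feedback delay against the two perceptual thresholds.
        if seconds < PHYSIOLOGICAL_ZERO:
            return "physiological zero: feels instantaneous"
        if seconds < ORDER_THRESHOLD:
            return "a delay is felt, but the order of events is unclear"
        return "clearly ordered: stimulus first, response later"

    # crude demo: time one iteration of a simulated input -> display loop
    start = time.perf_counter()
    time.sleep(0.016)            # pretend: one 60-Hz frame of processing
    print(classify_feedback(time.perf_counter() - start))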
1.9.4. The Leibniz
system
Besides the bread-and-butter work, I never lost track of my quest. That Leibniz question kept nagging me year in and year out, sometimes more, sometimes less perceptibly. I implemented a totally self-contained computing universe, called the Leibniz system. When I finally decided it was time to stop, the system had grown to 100,000 lines in about 6 megabytes of source. It could very well have been the largest self-contained software system ever constructed by one person. Anything over 10,000 lines is usually created by a team. Creating a 100,000-line system takes anything from around 20 man-years to unlimited when done in an industrial setting, and that is without maintenance. I did it in about 5 years, in my spare time, rewriting the code of my system up to five times along the way. It has to be pointed out that a self-contained system cannot be compared to a 100,000-line financial accounting package, which has a very narrow range of application. There is a totally different flavor to a universal system, one which permeates the whole of it.
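As a rough plausibility check of the "around 20 man-years for 100,000 lines" figure, one can run the basic COCOMO estimation model (organic mode) over it; this is my own cross-check, not a method used in the text.

    # Basic COCOMO, organic mode: effort in person-months = 2.4 * KLOC^1.05
    kloc = 100                          # 100,000 lines of source code
    effort_pm = 2.4 * kloc ** 1.05      # estimated person-months
    print(f"{effort_pm:.0f} person-months = {effort_pm / 12:.0f} person-years")
    # -> about 302 person-months, i.e. roughly 25 person-years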
1.9.5. The problems of an
auto-poietic software system
Unfortunately, the Leibniz system was too all-encompassing and too self-contained to be explainable to anyone. I could not manage to get any other programmers to do any real work on it, because as with everything self-contained, you have to understand the whole before you can understand the parts. Today, you call this auto-poietic, if you want to be really scientific about it. The common scientific approach is to understand complicated things by reduction, by recursively dividing them up into parts and pieces that are small enough to be understood [27]. An auto-poietic thing, by contrast, can be understood synthetically only. And anything longer than about 10 pages of computer listing cannot be expected to be understood synthetically by normal, ordinary, practical, down-to-earth folks, as computer programmers usually are. Of course, programmers are specialists at understanding systems of ten or a hundred thousand lines of code. This can be done if, and only if, there is a way of decomposing them into modules that are separable according to the usual rules of the trade. Those rules are not written down anywhere, and at the universities one usually never gets beyond a complexity of around a thousand lines, so the really tough jobs are the ones that the budding programmer novice gets thrown at his head once he enters the professional field in earnest. If you have a system that is not separable according to the usual rules, it will be hard to explain to someone else. The whole thing becomes a chicken-and-egg question. And in our world of time and money, there are not many programmers willing to waste many hours if they can make an easier buck with more conventionally structured systems.
So, I found out that I had to do all the programming myself. Sometimes people helped me, but when it came to integrating what they had produced, I found that I had to write it again, and when I was finished writing it again, it had changed totally beyond recognition from what my original helper had produced.
I had thought to develop the Leibniz system into a commercial product, but when people kept asking me: "What does it do?" I could only answer: "It is a software development environment". Now the next obvious question was: "What does it do better than my favorite {Turbo C / Microsoft C / Pascal / Visual Basic / Smalltalk / Fortran / Cobol} system?". Well, it did in fact do some things better than those systems, otherwise I wouldn't have thought it worth the while to create the whole thing. But I found out that the effort a programmer invests in learning one specific system and getting proficient in it is so great that you would have to offer exceptional advantages, and literally promise him heaven on earth, to make him change. As one cigarette commercial once aptly coined it: "He would rather fight than switch". And that is quite understandable. So the commercial success was not quite the smash hit I had intended it to be.
1.9.6. What it means to have a
totally self-contained computing universe
You may not always get what you want, but you still get something that is of use for something. After all, I had done the whole work with that darned Leibniz question in the back of my mind. And it turned out that I had gained some hands-on, down-to-the-bits experience in a totally Leibnizian endeavor: the possibility of creating a totally self-contained computing universe. This opportunity existed only in the very narrow time frame when microcomputers were small enough that one single programmer could manage to command the whole thing. That was from about 1978 to 1984. This was the time of the cometary rise of the Steve Jobses and Bill Gateses, and many more who relapsed into oblivion as quickly as they had risen when the IBM PC rendered all the other microcomputer models (except the Apple) obsolete. It was an extremely narrow window of opportunity, a historical moment of kairos, that was unexpectedly, and unprecedentedly, there, and passed before anyone really knew what was going on. When the general public had finally realized what had happened, there existed a few new industrial big players: Apple, Microsoft, SUN, and the IBM PC compatible industry. And business was going on as usual. Apple and Microsoft, and the rest of the microcomputer industry, were squarely placed in the hands of the financial market. What could and would be developed rested on the decisions of those who allotted the megabucks it took to put a few hundred or even a thousand programmers to churning out yet another memory-busting, processor-busting example of the byte-bloated, we-can-do-it-all-in-one-exe-file monster software that we have become accustomed to being presented with in ever-recurring releases of new goodies for the last decade or so. We have just seen the latest multi-megabuck, thousand-man-year human wave battles being waged in the great "operating systems wars".
After about five years of technological development, microcomputers had passed the earlier generation of minicomputers in complexity, and after ten years, they were on par with the mainframes. Now no-one knows about minicomputers any more, and the erstwhile mainframes are now called servers, or something like that. The days of the lonesome software cowboy toiling in his garage, hoping to become as rich as Bill Gates, were over. Of course, good money can still be made when you write for a very special market, on an existing platform like Microsoft-C. But that is not what I mean by building a totally self-contained system from scratch.
The possibility of developing whole systems single-handedly had only existed on micros, because the mainframes and minis had been big-company affairs almost from the beginning. So there was never much of a question that someone in his garage might create a huge system on them. In those days there simply was not enough storage on those computers to allow a large system; minis typically had 16 to 64 K Bytes of RAM. When they became big, like the Digital VAX, the mini companies had long grown beyond their own garage beginnings, forming multimegabuck corporations with the usual hierarchical management for their tasks, for which the mainframe industry had given them the model, amply documented in the IBM OS for the /360 series that has so vividly been described by Brooks (BROO75). It may be interesting to note that the creator of the VAX operating system was commissioned by Microsoft to produce their Windows NT system. He could apply his expertise well.
So, the next instant of kairos was that even though micros had started with 16 K Bytes of RAM in 1978, within six years, by 1984, they had 640 K of usable RAM - and the whole thing at the same prices. The money that bought you 16 K Bytes of RAM in 1978 bought you 640 K in 1984, plus an added hard disk of 20 Megabytes. So there was enough space to build a sizeable system in, without much more expense in terms of hardware. The Leibniz system had the space to grow. 640 K doesn't seem much, but this is because of a secret pact between the software manufacturers and the chip industry, which are, you may have guessed, all controlled by one and the same financial conglomerate, and it is therefore not hard to understand why there is good reason for the ever-increasing resource hunger of modern software. The whole computer industry is simply programmed as a money-making machine, forcing users to buy new computers every year, because of ever-increasing hardware requirements, so that they can run all those ever-so-fancy new packages which just won't seem to move on an older machine.
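As a back-of-the-envelope check of that 16 K (1978) to 640 K (1984) figure, one can ask what capacity-doubling time it implies; the arithmetic below is my own, not a claim from the text.

    from math import log2

    growth = 640 / 16                 # 40-fold capacity at constant price
    years = 1984 - 1978               # six years
    print(f"doubling time: {years / log2(growth):.1f} years")
    # -> about 1.1 years, close to the classic Moore's-law cadence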
If you use different programming strategies, you may end up with a factor of ten in processor and human efficiency playing in your favor. But don't tell anyone, because the big guys might send a hit squad after you if they find out that you might be ruining their nice money game.
1.9.7. Virtual Reality of the second
kind:
How to play god in your own
computing universe
So, this historical moment of kairos offered me, too, an opportunity. Even though I didn't get rich and famous like Bill Gates, I also didn't end up losing millions, like Steve Jobs. There might have been only three or four people in this whole world who have single-handedly created self-contained 100,000-line systems of the kind of the Leibniz system. But there were probably no other programmers who would have noticed the not-so-technical significance of what they had created. As I have said, computer programmers usually are normal, ordinary, practical, down-to-earth folks, and I mean it. The closest any of them will ever get in his life to anything metaphysical is reading cyberpunk science fiction, like Neuromancer. Basta. And I have yet to meet a philosopher dealing in metaphysics who would trust a computer enough to touch it even with a five-foot pole. Those people are still steeped in some kind of Hegelianism that considers anything made of filthy matter as beneath their dignity.
So I had the very rare opportunity to find out a few things about computing universes that have later become known under the name of Virtual Reality. Except that, even then, people didn't get the full implications of what that meant. Because almost everyone thinks that Virtual Reality is supposed to be some kind of carbon copy of the reality we think we have around us. Nope. That is only half the story. It gets really exciting when you come to think of the metaphysics. Unfortunately metaphysics does not belong to the standard curriculum taught to computer science students. You have to stray pretty far from convention to get any taste of it.
I found out what it means to play god in your own universe. And that is not as much sheer fun as reading the bible or any other good mythological fairy tale might make you think. Because the sheer will-power of creation turns into utter necessity the moment you turn around. As I found out, it is devilishly difficult to be a god. Because the proverbial "the devil is in the details" is the most profound theological statement the human mind has ever come up with. Unfortunately, the theologians themselves have never heard of that one. A very interesting story by Stanislaw Lem with a similar title, which even made it into a (probably not very profitable) movie, supported my finding that there must be something about the business of god which all the theologians, including the ever-so-logical Leibniz, had overlooked. God must be sitting somewhere up there, being very ashamed of himself, and terribly self-conscious about the mess he has created in this beta-version 3290449432.b,Release-IV of the universe, still not having gotten things as right as he had intended. From the very mundane point of view of the master programmer of the universe, everything that we lowly earthly creatures call deluges, cosmic catastrophes, and so on, that has suddenly interrupted the fates of this planet and this universe, and wiped out life on earth (or humanity) again and again, was nothing but minor "deletes" in some minor subroutines of the grand overall cosmic operating system, which our divine master programmer had apparently botched, and then tried to get a little better in the next version. As we all know from ancient Vedic cosmology, every once in a few thousand billion years, the cosmic master programmer makes a radical "re-format" of the whole hard disk of the universe and starts afresh - if he doesn't get totally tired of programming altogether and decides to go out fishing on this fine day of Brahma.
Of course, this conception didn't come out of nothing. In 1980, I had had an experience which shattered my conceptions of the mind, of the self, and of the universe. If I were to date the start of my quest to any specific point, it was then. I have recorded that event as well as I could. But everyone who has had similar experiences knows that it is entirely impossible to convey with words the force of such an impact. Yet it has been tried again and again, even in the full knowledge that it is not possible (->: AVATAR). To try to translate the account from German to English would be futile, and so I leave it as I recorded it.
1.9.8. Humanity's irretrievable
moment of kairos
This is just one reason why I think that the next 20 years are the irretrievable moment of kairos for humanity. If humanity will not act, then other forces will. Leonardo and countless others have tried to alert us to it. If humanity will not listen, well, what happens then? You guessed it: god the master programmer, or whatever metaphor suits you best for what is driving this universe, will just make a minor delete in one of his minor subroutines, there will be a few minor local catastrophes, maybe an insignificant planet called earth busted beyond recognition. That is nothing new, and has happened time and again before. But overall, business will go on as usual in the universe, on this fine day of Brahma. What is happening on this planet is probably not very important for the whole universe and the folks who are running it. That's all. Of what great use is this speck of humanity on this little ball in that insignificant corner of the universe anyhow?
1.9.9. Leibniz the patron
saint
It may be apparent by now that what I call the Leonardo-Leibniz project is not just some trifling idea that I dreamed up in my fancy. Gradually in my progress, I uncovered patterns of something pervading all the development of humanity, in which I became involved, or maybe even sucked in, with me being the last one to realize what was going on. I will now say a few more things on how I came to the project:
In the Christian tradition, there is the beneficial institution of the patron saint. This has equivalents in all the other major religious traditions. It is itself a continuation and adaptation of the very old tradition of local and departmental spirits and devas, genii, and sub-gods, that populated an immense pantheon of the lower ranks of the spiritual hierarchy of antiquity. We know only the upper echelons now: Zeus, Athene, Mars, Venus, and so on. Each god and sub-god had a specific field where s/he reigned, and people prayed to him/her for luck in their affairs. The patron saint is the newest Christian upper garment in a very old emperor's wardrobe. Behind this stands a psycho-mythological method: the patron saint serves the one who addresses him/her as a guiding line for their own striving in life. When I adopted Leibniz as my patron saint, I didn't really know very much about this. I have had the aims and goals of Leibniz guide me for 15 years now, and by now I know a little better, even though that may still not be very much. But it has served me. In a world where a mentor is very hard to get, and where those who pretend to be one may be the worst frauds, which you will not find out until it is too late for your life, you should try your best to find some pattern of honesty, consistency, and consequence that you can adhere to, while all the world around you runs after every new fad, buys the newest bottle of Kako-Kalo (Kakon-Kalon, as is the solution of this riddle), and strongly believes in the unspoken central dogma of modern materialistic-hedonistic society: "A hundred million flies can't err. I've got to be in that game also." Leibniz may have gotten quite a few things wrong in his life, and he may have run after a phantom most of the time. But he was one of the most honest, most sincere, and most consequential spirits, who was also gifted with one of the most exceptional minds of humanity. And he could have been as intelligent as anyone ever was: if he had not been honest, it would have been very dangerous to follow him.
You may end up wandering some pretty lonely trails if you go this way. And you may get hit over the head quite often by some entirely well-meaning fellow human, who tries with the best of intentions to save you from this entirely senseless search for something that cannot be there, because modern science has found that this is not a question one is supposed to ask. And you may be howling in despair because all you experience is a seemingly endless pitch-black dark night of the soul (St. John of the Cross).
By gradually following his thought tracks, uncovering bits and pieces here and there, I came to hit on some fundamental questions that he had banged his head against vigorously without being too successful at doing anything about them. Not that I consider the solutions Leibniz devised too impressive. I considered, and still consider, reading his verbiage painful, and I literally have to force myself to wade through his mental convolutions. The only consolation I have is that it could be worse: I could have to read Hegel.
1.9.10. Rediscovering the art of
thaumazein
Now, after many years of chipping and gnawing at the monumental mental edifice that he and his collaborators had erected, I came to the conclusion that there is one thing far more important than trying to understand what they got as results. This essential thing is: to learn the questions. And even beyond that, it is more essential to learn to ask the questions. Because in every epoch we live in, things seem different from whichever side we look at them. But some of the questions remain, and even if the questions change, the art of thaumazein is still where all the action is (->: THAUMAZEIN). And the primeval pattern of the most fundamental questions is embedded in the Three Big W's: What, hoW, Why. These are the keys to the universe. This is what I learned from Leibniz, Aristoteles, and Platon. May all their other works rest in peace and rot together with their bones.
Collect the bones and give them a decent burial
Chinese Proverb, related by Alfred Schinz
Unfortunately there is no sure way to get to the questions except to wade through their verbiage, tantalizing as that may seem at times. And there is a secondary problem: the more other people wrote about what they think these towering geniuses meant, the more they were heaping interpretation upon interpretation. And reading the interpretations may not be the best way to come closer to the questions. This pattern was not clear enough to me until I read Schopenhauer's "Über die Universitätsphilosophie" ("On University Philosophy"). Schopenhauer stated it very succinctly: if you want to make a career as a professional philosopher in one of the universities, then it is good to know what all the other philosophers have said. It is essential that you learn their style of argument, and what it takes to win them. But when you want to find out the real things, you have to go to different places. You won't get that at the universities.
1.9.11. The sensible truth of
Epikur: Trust your sensorium
Antes que se la coman los gusanos, que la aprovechen los humanos [28]
(Before the worms eat it, let the humans make use of it)
Epicurean proverb
I finally understood that Leibniz, in all his genius and sincerity, may have gotten something essential wrong, and that his best intentions had brought about quite the opposite. His logic and mathematics, which he perceived as illuminated by the transcendent spirit of god, served only to implement a technology that has no use for god. I then found out that if we had understood the so-called materialists and empiricists better, in the line of Demokrit, Epikur [29], and Berkeley, humanity might have fared better. Because Epikur and his school had advocated something which makes people less vulnerable to -isms: he had advocated that people trust their senses. He had said that the sense of well-being, the bodily homeostasis, was the most reliable indicator to look for if we want to find out the worth of things (titles under: EPIKUR). When something doesn't feel good, it might sound good, and logical, and entirely convincing. But it still isn't good. The question may not be about logical truths, because what is logically true may not be useful at all, nor advantageous in practice. Epikurean teaching is about applicability, appropriateness, practical usefulness, and what we today call ecology.
Because so much abuse and misinterpretation has been heaped on the Epikurean teaching, especially as advocating an uninhibited hedonism, this needs to be clarified. Epikurean teaching does not advocate debauchery. In order to fine-tune the bodily sensorium so that it can act as an indicator, one must primarily avoid clogging it with food, drink, and stimulation. Consequently one should lead a fairly austere life. Only then is it possible to experience the bodily homeostasis as enjoyment. Any excess in the above areas is already a sign that the homeostasis has been lost. "The aisthaesis is where the action is." (Epikurean proverb).
Berkeley had also advanced a coherent world model based on reliance on the senses (BERKELEY). I might also cite the much-misunderstood sophist Protagoras, who made a very sensible statement that qualifies him as the founder of ergonomics:
panton chraematon metron einai anthropon
The human is the measure of all things of the human domain.
1.9.12. The mental technology of
authoritarian domination
All through the ages, reliance on the senses has been the opposite of scholarly, theological, and philosophical dogmas. Because the gist of all other teachings was, somewhat simplified: the worse it feels, the more you can be sure of being on the right path of salvation [30], no matter what kind of path and what kind of salvation was meant by that. This began already with Parmenides, and Plato followed suit. Initially the reason for the distrust of the senses was that they supposedly lead to error. Now the schools of philosophy and theology that followed might have disagreed with Plato in any way they wished, but all agreed that one had to follow the teachings of the masters and the preachers, and not the senses. Because this was the surest way to dominate people once they had become insecure and distrusting of their own perceptions. This is the foremost mechanism used in all the primary schools on this planet: cut the children off from their accustomed and familiar environment and sense inputs, lock them up in an austere, sensorily denuded room with bare walls, force them to sit still on uncomfortable benches, never let them try out anything themselves because they will break it anyhow, and most importantly, make them listen to their teacher and speak only when asked. This very same method has evolved into a veritable mind-bending technology throughout the millennia, and as we experience all around us, these arts are having their heyday right now. All the Hitlers, Khomeinis, Stalins, and Maos have been masters of this school. A very interesting analysis of that technology is in KRAMER93 [31].
1.9.13. Mind over Matter, Matter
over Mind
So I have come to unroll the whole carpet again, from the beginning, and arrived at the core questions, together with their answers:
What is matter? - Never mind.
What is mind? - Doesn't matter.
and the variation thereof:
Now, is it Mind over Matter?
Or is it Matter over Mind?
Does this really matter?
Why should I ever mind?
We may state the question differently: Rationalism over Empirism, or Leonardo over Leibniz? To me, there are some stringent reasons for Leonardo over Leibniz. Leonardo seems to be the one who was most successful in following this path. Since he was wise enough not to make any noise about what he did, he managed to survive. And if we follow his thought tracks, we may eventually re-discover what he found. It is possible to follow the senses and not end up where the empirists in the line of Locke went. Locke was quite correct, and he might have been misinterpreted by his followers and his adversaries alike. So let's get back to asking the questions. What was it with empirism that took a wrong turn?
It is easier to find the wrong turn of rationalism. The example of Leibniz shows that rationalism is entirely successful at rationalizing, to complete logical satisfaction, that "this world is the best of all worlds", while humanity "out there" is having one of the most horrific experiences of "hell on earth" that could ever be dreamed up by the most sinister-minded horror movie director. This was the time of one of the greatest human disasters and cultural breakdowns that ever devastated Central Europe, namely Germany and Bohemia: the Thirty Years' War. It had just ended when Leibniz was born, having devastated Germany to an extent that would have made Morgenthau envious. Not to mention the human catastrophe of the Spanish mass extermination of American Indian cultures, which was not long past.
->: MASSACRES
->: DESASTERS
As Toulmin and other researchers have shown, the time of the origin of modern science, the time of Descartes, Leibniz, and Newton, was anything but an optimistic, bright, prosperous, and enlightened age. To the contrary, it was a time of extreme cultural agony, fear, disorientation, and doubt. (See also: TOULMIN-KOSMO, BERMAN83 p. 25-61, PIETSCHMANN83, ZINN89)
Now, inventing a theodicy might well serve to make the impression of the disaster bearable [32], so that a sensitive spirit like Leibniz wouldn't have to collapse in desperation. But it must be asked whether it did not also serve as a blinder. Because maybe there was something that could be done about the plight of human existence on earth, if it were viewed from a different perspective.
[7] syn-aisthesis = the synergetic cooperation of all of the sensory instrumentarium
[9] kairos is the ancient Greek term for the momentary, fortuitous instant that has to be grasped, or the opportunity will disappear, never to be presented again.
[13] The turbulent disasters engulfing human civilization that he drew are exactly the kind of phenomena you get as a consequence of the global warming and destabilization of the climate that are waiting for us. See also the chapter on Leonardo later in the introduction. Literature: LEON-HEYD, 183, 184, LEON-WALL, 181-183.
[14] See the "double C - double P game": the method for getting rich and powerful and having the rest of humanity pay for it: how to Commonalize Cost and Privatize Profit. (HARDIN85)
[15] The word representation is used here in the sense of Schopenhauer. There is a potential for confusion with the representational school in the field of cognition research, which uses the word in a different sense. This school assumes that the sensory system extracts physical features of a physical object and represents them in some way in the neuronal structure. We can express this as a representation of (something). In Schopenhauer's sense, there is no "of": the representation is It. In the words of the Virtual Reality school: Reality has always been Virtual. See also: MATURANA-BAUM, 144.
[17] Hegel is the arch-idealistic "Gottseibeiuns" (a German euphemism for the devil), the ultimate terror image of all natural-scientific thought.
[18] I have to differentiate verbal and conceptual thinking, which has been most amplified by writing and the printing press, from other mental processes, which we may or may not call thinking. To stay clear of confusion, I call mentation anything that includes verbal thinking, but also forms of skills that have nothing to do with words, like learning to mentally transform a drawing into a 3-d object.
[19] see: Heinrich v. Kleist: "Über das Marionettentheater" ("On the Marionette Theater")
[20] Forget about 2 r * pi !
It is "ten steps forward, one step to the left", repeated until you come back to
where you started out with the turtle.
[21] German ingenuity has found something quite as good, and about 300 years old by now: the famous tried-and-tested "Münchhausen lift-yourself-by-your-pigtail" trick, which would make even an Indian rope-trick sorcerer look pale by comparison, because Münchhausen managed to lift not only himself, but also his horse. He actually performed usable work with his trick, and not just some amusing spectacle as the Indians do.
[22] As for the womanizing of Leibniz: even though nothing much is known, that doesn't mean that nothing existed. It could as well mean that both parties involved had stringent reasons to conceal anything going on as much as possible. Leibniz was a lowly bourgeois, and all the people he dealt with, especially the princesses of the house of Hanover, with whom he dealt most, were of the highest nobility. Any suspicion of a liaison would have been fatal for both. The baroque age is as well known for double standards as any other age that had strict outward rules that were just there to be broken, as long as no one knew about it. See also the entries on Sophie Charlotte in: Eduard Vehse, "Illustrierte Geschichte des Preussischen Hofes", Franckh, Stuttgart 1901
[23] A good overview of the
scientific spectrum is given by Tschirnhaus, who was a longtime friend and
collaborator of Leibniz (TSCHIRNHAUS). Thanks to Peter Zimmermann for the
tip.
[24] I have written a whole
book on Leibniz: BIB-AG:LEIB-CHR.DOC for anyone interested.
[25] Who was, for the Christianity of that time, even worse than the devil.
[26] Leibniz is a trademark of A. Goppold
[27] Descartes: On
method.
[28] I am forever indebted to my friend Bibiana for this invaluable jewel from the treasure-house of folk wisdom.
[29] Excerpt from SOFT-ENCYC
(Appendix I):
Epicurus
{ep-i-kyur'-uhs}
Epicurus, 341-270 BC, was a Greek
philosopher who founded the system known as Epicureanism. He studied with
followers of PLATO and DEMOCRITUS before opening his school in Athens. The
school, later called the Garden, accepted women and slaves. This, coupled with
Epicurus' teachings concerning pleasure, led to public criticism of the school
as a scene of debauchery. In reality, life there was fairly austere. Most of the
writings of Epicurus have been lost. Fragments from his most important work,
Peri physeos (On Nature), were recovered from the charred papyri of Herculaneum,
buried by an eruption of Vesuvius in AD 79.
Bibliography: DeWitt, N., Epicurus and His
Philosophy (1954; repr. 1973) Panichas, George A.,
Epikur grew up on Samos; in -323 he went to Athens. The family was displaced in the wake of the Macedonian conquest of Greece. He started teaching in -310 in Mytilene and Lampsakos, returned in -306 to Athens, and opened up his "philosophy school in the garden". His school had a strong Roman following in the highest circles, and the fragments discovered in Herculaneum belonged to the library of Caesar's father-in-law Piso.
[30] A particularly good example is the Puritan code of ethics: you may do anything you want, as long as you don't enjoy it.
[31] Thanks to Marion Hera
NZ, for this information.
[32] See Voltaire's Candide, where the theodicy was masterfully satirized.