The computer is an ongoing collaborative
project of generations of
engineers and software developers,
each generation building on
the achievements of the past.
These principles are true for many other
technological achievements, but none
other than the computer embodies them so
strongly and at such a rapid pace.
The computer thus truly is mankind's
greatest cultural, scientific and
technological achievement.
(Unfortunately its users, especially
generation web 2.0, are by and large
far behind it and use it accordingly.)
This chapter will, as an example of this
continuous improvement, explain the
development of programming languages.
The program memory of very early computers
was only a ROM
in the form of punched cardboard cards
and later punched paper tape.
The holes in the card or tape allowed electrode
balls or pins to contact the other electrode
(wheel or strip) under the card/tape, thus
setting a bit
in the control unit.
The program pointer
was increased by
moving the card or tape
forward by one row.
Programming was done by painstakingly
punching the holes into the card or tape,
using reference tables to look up which
bit pattern was needed for which desired effect.
These programs were thus written
directly in machine code,
the bare bits,
which meant a lot of work to achieve
only very rudimentary tasks
compared with modern programs.
But some people got very good at this,
good enough to write ever longer,
ever more complex programs this way.
They got a computer to accept keystrokes from
a simple keyboard, to save
the entered text
on an automatically punched tape, and to read
this text back from the tape in order to
add to it in a later session ...
... and finally to walk through some text,
perform various pretty elaborate calculations
and generate new machine code from the text,
at which point the first programming language
was born: assembler,
making programming
directly in machine code obsolete.
Programming in assembler meant writing
the program as actual text commands, typed
on a computer keyboard. This text then
gets read by the assembler compiler
program, which creates from it the
machine code to punch into the tape.
To this day, every computer and every
microcontroller has its own assembler
(though no longer writing to tape),
upon which everything else rests.
Because the machine code is different for
every microcontroller and microprocessor,
the assemblers differ too. Assembler thus
is a concept, not a single language.
Assembler uses shorthand commands (mnemonics)
that stand for one or for several machine
code instructions.
Reference tables tell the programmer
how many processor cycles each
assembler command really takes.
To give you an impression, here are a few
typical commands (the exact mnemonics
differ per assembler):

MOV A,B = "move from A to B"
Copies the byte read
from the address A to the address B.
(Some assemblers use MOV B,A instead.)

ADD 7 = "add 7"
Adds the constant 7
to the last result of the ALU.

NOP = "no operation"
This command is used to let the
computer wait a tick, for instance to
synchronize with slower I/O data protocols.

JNZ l1 = "jump if not zero"
If the last result of the ALU was not zero,
the program pointer is set to the address
calculated by the compiler from the label "l1".
Labels are set as source code text lines
and can be used conveniently instead of
fixed address constants. The "l1" above
could be set for instance via the line
(between command lines):

l1:
Also, assemblers introduced variables:
names for memory addresses that
will be assigned by the compiler.
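Putting the commands above together, a tiny delay loop in such an assembler might look as follows. This is only a sketch in the hypothetical accumulator-style assembler described here; real mnemonics, operands and label syntax depend on the processor. It assumes the ALU result already holds a start value:

```
l1:
    ADD -1      ; subtract 1 from the last ALU result
    NOP         ; wait one extra tick per round
    JNZ l1      ; if the result is not yet zero, jump back to the label "l1"
```

Each pass through the loop burns a known number of processor cycles, which is exactly what the reference tables with cycle counts are used for.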
And again, some people got so good at this,
and passionate enough, to develop with
assembler the next step: actual
programming languages, with more or less
readable source code lines of variable
lengths that are shorthands for
dozens of assembler lines, or for
entire assembler sub-programs.
Also, they provide many abstractions
compared with assembler, much closer to
what we as humans are used to thinking in,
for example constructs from language grammar
(like if-then-else and loop types)
or mathematical formulas.
Also, proper programming languages can
be used on various processor types.
Each type needs its own compiler for the
programming language, so a line like
"a = b + c*d" may be performed differently
on different microprocessors (machine code),
but the line itself is always the same
in the given programming language
and always has the same defined effect.
Countless programming languages have been
developed, all with their own features,
ideas, strengths and weaknesses.
Only a few reached widespread use,
and their success was not always
proportional to their overall quality,
but rather a result of corporate marketing.
Therefore, by far the most influential
programming language became one
simply named "C
", a comparatively cryptic
language that is as un-clever as its name
and is full of traps that let programmers
easily make catastrophic errors.
All early programming languages were still
quite cumbersome to use, but some people
again used them to implement new
programming languages that added ever more
features and avoided the
traps and errors of the old ones.
Or so it should have been. In reality, the next
generation often were just reformed versions
of the old languages, so from C came C++
for example, but they kept most
of the drawbacks of the old languages.
Also, new languages added ever more
features that came from
academic theorists and were heavily marketed,
but that only slow
programs down, make them
use up more and more RAM
and disk space,
and don't really improve productivity.
But marketing is strong, as it abuses psychology
as a professional, year-round active big industry,
so you will find many people who, despite
it being scientifically proven otherwise, will claim
that for instance object-oriented programming
is a really great thing.
As the internet
became seriously important,
special programming languages got developed:
for instance PHP
for programming software
that runs on a web server and manages
safe logins, storing user data, and so on,
or JavaScript for programming software
that runs in web browsers and lets the displayed
web page react dynamically to user input, or
is used to animate elements on the page.
Pretty much all of these are based on C,
but made some really heavy reforms that make
using them much more comfortable,
even though the programs run slower
than C programs.
This is because in C, the programmer needs to
define how many bytes
a variable shall have
(fixed variable types)
and constantly needs to
pay attention to this setting throughout the code,
whereas in these newer languages variables can change
their type anytime and the program
does all the checks and conversions on the fly,
every single time a variable is used or set.
And again, even newer programming languages
get created, such as Eas.
As a language really designed totally new
from the ground up, it has none of the usual
traps and drawbacks that all C descendants
have, and it is most comfortable to use.
The story will continue, and I think that the
biggest potential lies in finally shaking off
the ballast of the old languages by fully
replacing them with modern-language redesigns.
Why? Because my experience with big
software projects is that the greatest
advancements invariably are only made
by starting all over
again from scratch,
after having learned from working with the
old code for some generations/versions.
For instance, in one company they had a
crucial microcontroller program that had
been improved continuously by various
programmers in succession. When it was
handed to me, I threw it away and rather
wrote it all from scratch. After just two
weeks, my program was 1,000 times
faster, had more features and was
(in contrast to the old program)