“That is the way to learn the most, that when you are doing something with such enjoyment that you don’t notice that the time passes.”

With Father’s Day around the corner, here comes a fine addition to history’s greatest letters of fatherly advice from none other than Albert Einstein — brilliant physicist, proponent of peace, debater of science and spirituality, champion of kindness — who was no stranger to dispensing epistolary empowerment to young minds.

In 1915, aged thirty-six, Einstein was living in war-torn Berlin, while his estranged wife, Mileva, and their two sons, Hans Albert Einstein and Eduard “Tete” Einstein, lived in comparatively safe Zurich. On November 4 of that year, having just completed the two-page masterpiece that would catapult him into international celebrity and historical glory, his theory of general relativity, Einstein sent 11-year-old Hans Albert the following letter, found in Posterity: Letters of Great Americans to Their Children (public library).

Einstein, who takes palpable pride in his intellectual accomplishments, speaks to the rhythms of creative absorption as the fuel for the internal engine of learning:

My dear Albert,

Yesterday I received your dear letter and was very happy with it. I was already afraid you wouldn’t write to me at all any more. You told me when I was in Zurich, that it is awkward for you when I come to Zurich. Therefore I think it is better if we get together in a different place, where nobody will interfere with our comfort. I will in any case urge that each year we spend a whole month together, so that you see that you have a father who is fond of you and who loves you. You can also learn many good and beautiful things from me, something another cannot as easily offer you. What I have achieved through such a lot of strenuous work shall not only be there for strangers but especially for my own boys. These days I have completed one of the most beautiful works of my life, when you are bigger, I will tell you about it.

I am very pleased that you find joy with the piano. This and carpentry are in my opinion for your age the best pursuits, better even than school. Because those are things which fit a young person such as you very well. Mainly play the things on the piano which please you, even if the teacher does not assign those. That is the way to learn the most, that when you are doing something with such enjoyment that you don’t notice that the time passes. I am sometimes so wrapped up in my work that I forget about the noon meal. . . .

Be with Tete kissed by your

Papa.

Regards to Mama.

 

Cognitive scientist and philosopher Daniel Dennett is one of America’s foremost thinkers. In this extract from his new book, he reveals some of the lessons life has taught him …

Daniel Dennett: ‘Often the word “surely” is as good as a blinking light locating a weak point in the argument.’ Photograph: Peter Yang/August

1 USE YOUR MISTAKES
We have all heard the forlorn refrain: “Well, it seemed like a good idea at the time!” This phrase has come to stand for the rueful reflection of an idiot, a sign of stupidity, but in fact we should appreciate it as a pillar of wisdom. Any being, any agent, who can truly say: “Well, it seemed like a good idea at the time!” is standing on the threshold of brilliance. We human beings pride ourselves on our intelligence, and one of its hallmarks is that we can remember our previous thinking and reflect on it — on how it seemed, on why it was tempting in the first place and then about what went wrong.

I know of no evidence to suggest that any other species on the planet can actually think this thought. If they could, they would be almost as smart as we are. So when you make a mistake, you should learn to take a deep breath, grit your teeth and then examine your own recollections of the mistake as ruthlessly and as dispassionately as you can manage. It’s not easy. The natural human reaction to making a mistake is embarrassment and anger (we are never angrier than when we are angry at ourselves) and you have to work hard to overcome these emotional reactions.

Try to acquire the weird practice of savouring your mistakes, delighting in uncovering the strange quirks that led you astray. Then, once you have sucked out all the goodness to be gained from having made them, you can cheerfully set them behind you and go on to the next big opportunity. But that is not enough: you should actively seek out opportunities to make grand mistakes, just so you can then recover from them.

In science, you make your mistakes in public. You show them off so that everybody can learn from them. This way, you get the benefit of everybody else’s experience, and not just your own idiosyncratic path through the space of mistakes. (Physicist Wolfgang Pauli famously expressed his contempt for the work of a colleague as “not even wrong”. A clear falsehood shared with critics is better than vague mush.)

This, by the way, is another reason why we humans are so much smarter than every other species. It is not so much that our brains are bigger or more powerful, or even that we have the knack of reflecting on our own past errors, but that we share the benefits our individual brains have won by their individual histories of trial and error.

I am amazed at how many really smart people don’t understand that you can make big mistakes in public and emerge none the worse for it. I know distinguished researchers who will go to preposterous lengths to avoid having to acknowledge that they were wrong about something. Actually, people love it when somebody admits to making a mistake. All kinds of people love pointing out mistakes.

Generous-spirited people appreciate your giving them the opportunity to help, and acknowledging it when they succeed in helping you; mean-spirited people enjoy showing you up. Let them! Either way we all win.

2 RESPECT YOUR OPPONENT
Just how charitable are you supposed to be when criticising the views of an opponent? If there are obvious contradictions in the opponent’s case, then you should point them out, forcefully. If there are somewhat hidden contradictions, you should carefully expose them to view — and then dump on them. But the search for hidden contradictions often crosses the line into nitpicking, sea-lawyering and outright parody. The thrill of the chase and the conviction that your opponent has to be harbouring a confusion somewhere encourages uncharitable interpretation, which gives you an easy target to attack.

But such easy targets are typically irrelevant to the real issues at stake and simply waste everybody’s time and patience, even if they give amusement to your supporters. The best antidote I know for this tendency to caricature one’s opponent is a list of rules promulgated many years ago by social psychologist and game theorist Anatol Rapoport.

How to compose a successful critical commentary:
1. Attempt to re-express your target’s position so clearly, vividly and fairly that your target says: “Thanks, I wish I’d thought of putting it that way.”
2. List any points of agreement (especially if they are not matters of general or widespread agreement).
3. Mention anything you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.

One immediate effect of following these rules is that your targets will be a receptive audience for your criticism: you have already shown that you understand their positions as well as they do, and have demonstrated good judgment (you agree with them on some important matters and have even been persuaded by something they said). Following Rapoport’s rules is always, for me, something of a struggle…

3 THE “SURELY” KLAXON
When you’re reading or skimming argumentative essays, especially by philosophers, here is a quick trick that may save you much time and effort, especially in this age of simple searching by computer: look for “surely” in the document and check each occurrence. Not always, not even most of the time, but often the word “surely” is as good as a blinking light locating a weak point in the argument.

Why? Because it marks the very edge of what the author is actually sure about and hopes readers will also be sure about. (If the author were really sure all the readers would agree, it wouldn’t be worth mentioning.) Being at the edge, the author has had to make a judgment call about whether or not to attempt to demonstrate the point at issue, or provide evidence for it, and — because life is short — has decided in favour of bald assertion, with the presumably well-grounded anticipation of agreement. Just the sort of place to find an ill-examined “truism” that isn’t true!
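The “simple searching by computer” Dennett mentions takes only a few lines. Here is a minimal sketch in Python, assuming the essay is saved as a plain text file (the filename essay.txt is only an illustration):

    # Flag every occurrence of "surely" so each one can be inspected by hand.
    with open("essay.txt", encoding="utf-8") as essay:
        for number, line in enumerate(essay, start=1):
            if "surely" in line.lower():
                print(f"line {number}: {line.strip()}")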

4 ANSWER RHETORICAL QUESTIONS
Just as you should keep a sharp eye out for “surely”, you should develop a sensitivity for rhetorical questions in any argument or polemic. Why? Because, like the use of “surely”, they represent an author’s eagerness to take a short cut. A rhetorical question has a question mark at the end, but it is not meant to be answered. That is, the author doesn’t bother waiting for you to answer since the answer is so obvious that you’d be embarrassed to say it!

Here is a good habit to develop: whenever you see a rhetorical question, try — silently, to yourself — to give it an unobvious answer. If you find a good one, surprise your interlocutor by answering the question. I remember a Peanuts cartoon from years ago that nicely illustrates the tactic. Charlie Brown had just asked, rhetorically: “Who’s to say what is right and wrong here?” and Lucy responded, in the next panel: “I will.”

5 EMPLOY OCCAM’S RAZOR
Attributed to William of Ockham (or Occam), a 14th-century English logician and philosopher, this thinking tool is actually a much older rule of thumb. A Latin name for it is lex parsimoniae, the law of parsimony. It is usually put into English as the maxim “Do not multiply entities beyond necessity”.

The idea is straightforward: don’t concoct a complicated, extravagant theory if you’ve got a simpler one (containing fewer ingredients, fewer entities) that handles the phenomenon just as well. If exposure to extremely cold air can account for all the symptoms of frostbite, don’t postulate unobserved “snow germs” or “Arctic microbes”. Kepler’s laws explain the orbits of the planets; we have no need to hypothesise pilots guiding the planets from control panels hidden under the surface. This much is uncontroversial, but extensions of the principle have not always met with agreement.

One of the least impressive attempts to apply Occam’s razor to a gnarly problem is the claim (and provoked counterclaims) that postulating a God as creator of the universe is simpler, more parsimonious, than the alternatives. How could postulating something supernatural and incomprehensible be parsimonious? It strikes me as the height of extravagance, but perhaps there are clever ways of rebutting that suggestion.

I don’t want to argue about it; Occam’s razor is, after all, just a rule of thumb, a frequently useful suggestion. The prospect of turning it into a metaphysical principle or fundamental requirement of rationality that could bear the weight of proving or disproving the existence of God in one fell swoop is simply ludicrous. It would be like trying to disprove a theorem of quantum mechanics by showing that it contradicted the axiom “Don’t put all your eggs in one basket”.

6 DON’T WASTE YOUR TIME ON RUBBISH
Sturgeon’s law is usually expressed thus: 90% of everything is crap. So 90% of experiments in molecular biology, 90% of poetry, 90% of philosophy books, 90% of peer-reviewed articles in mathematics — and so forth — is crap. Is that true? Well, maybe it’s an exaggeration, but let’s agree that there is a lot of mediocre work done in every field. (Some curmudgeons say it’s more like 99%, but let’s not get into that game.)

A good moral to draw from this observation is that when you want to criticise a field, a genre, a discipline, an art form… don’t waste your time and ours hooting at the crap! Go after the good stuff or leave it alone.

This advice is often ignored by ideologues intent on destroying the reputation of analytic philosophy, sociology, cultural anthropology, macroeconomics, plastic surgery, improvisational theatre, television sitcoms, philosophical theology, massage therapy, you name it.

Let’s stipulate at the outset that there is a great deal of deplorable, second-rate stuff out there, of all sorts. Now, in order not to waste your time and try our patience, make sure you concentrate on the best stuff you can find, the flagship examples extolled by the leaders of the field, the prize-winning entries, not the dregs. Notice that this is closely related to Rapoport’s rules: unless you are a comedian whose main purpose is to make people laugh at ludicrous buffoonery, spare us the caricature.

7 BEWARE OF DEEPITIES
A deepity (a term coined by the daughter of my late friend, computer scientist Joseph Weizenbaum) is a proposition that seems both important and true — and profound — but that achieves this effect by being ambiguous. On one reading, it is manifestly false, but it would be earth-shaking if it were true; on the other reading, it is true but trivial. The unwary listener picks up the glimmer of truth from the second reading, and the devastating importance from the first reading, and thinks, Wow! That’s a deepity.

Here is an example (better sit down: this is heavy stuff): Love is just a word.
Oh wow! Cosmic. Mind-blowing, right? Wrong. On one reading, it is manifestly false. I’m not sure what love is — maybe an emotion or emotional attachment, maybe an interpersonal relationship, maybe the highest state a human mind can achieve — but we all know it isn’t a word. You can’t find love in the dictionary!

We can bring out the other reading by availing ourselves of a convention philosophers care mightily about: when we talk about a word, we put it in quotation marks, thus: “love” is just a word. “Cheeseburger” is just a word. “Word” is just a word. But this isn’t fair, you say. Whoever said that love is just a word meant something else, surely. No doubt, but they didn’t say it.

Not all deepities are quite so easily analysed. Richard Dawkins recently alerted me to a fine deepity by Rowan Williams, the then archbishop of Canterbury, who described his faith as “a silent waiting on the truth, pure sitting and breathing in the presence of the question mark”.

I leave the analysis of this as an exercise for you.

– This is an edited extract from Intuition Pumps and Other Tools for Thinking by Daniel Dennett, published by Allen Lane.

Daniel Dennett: career in brief

Born in Boston in 1942, philosopher and cognitive scientist Daniel Dennett has dedicated his academic life to the study of the philosophy of mind, science and biology. He studied at Harvard and Oxford and is currently a professor at Tufts University, Boston. An atheist and a secularist, he is often bracketed as one of the “four horsemen of atheism” alongside Richard Dawkins, Sam Harris and the late Christopher Hitchens.
He has published extensively on subjects such as free will (Brainstorms, 1978), theory of the mind (Consciousness Explained, 1991) and the role of adaptation in evolution (Darwin’s Dangerous Idea, 1995). His ideas have been criticised by the palaeontologist Stephen Jay Gould and praised by the psychologist Steven Pinker.
In 2012, he was awarded the Erasmus Prize, a European award for “a person who has made an exceptional contribution to culture, society or social science”; he was praised for “his ability to translate the cultural significance of science and technology to a broad audience”.

Learn computerese as a second language (that’s code for code)

By John Lenarcic, RMIT University

If horror meister Stephen King were a computer programmer, his language of choice would probably be COBOL: it’s quite verbose in exposition, has been around for ages and people still make a lot of money from it (through legacy systems and the like).

And even though he isn’t a programmer, King would still do well to study a computer language — as would the rest of us and our children.

Computer programming is not rocket science. Sure, it makes rocket science possible but anyone who can count, make choices and do things over and over again can probably learn how to program.

Fluency in one’s “native” computer tongue would be handy. But a firm grasp of sequence, selection and repetition is all that’s needed to code at beginner’s level, even in programming languages with exotic sounding names, such as Java, Python or C++.
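That trio of sequence, selection and repetition can be shown in a handful of lines. Here is a minimal sketch in Python (the shopping-list scenario and the prices are invented purely for illustration):

    # Sequence: statements simply run one after another.
    items = ["bread", "milk", "eggs"]
    total = 0

    # Repetition: do something over and over, once per item.
    for item in items:
        # Selection: make a choice based on a condition.
        if item == "milk":
            total += 2
        else:
            total += 1

    print("Items bought:", len(items), "- cost:", total)

Nothing more exotic than these three ideas is needed to follow a beginner-level program in Java, Python or C++.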

Coding, of course, refers to the craft of designing, writing and debugging software. It may sound complex but it’s what we do when we draft a letter, compose a business report or author the Great Novel of our dreams.

 


If you learn how to read and write in English, with practice and several rejection slips under your belt, you can possibly become the next Stephen King. Ditto for computer programming: study how to read and write half-decent code and building the next Facebook can be within your reach.

Just watch the opening scenes of David Fincher’s 2010 film The Social Network to witness the humble origins of Mark Zuckerberg’s game-changing innovation: coding in all its simple glory as depicted eloquently in a Hollywood movie — who would have thought?

Here’s a simple algorithm that applies to both software and novels:

Writing great works implies that one has first read great works and recognised them as such, which is the essence of being literate.
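Spelled out as code, that read-first-then-write loop might look like the Python sketch below; the greatness test and the function names are placeholders of my own, not anything from the article:

    # A toy rendering of "read, recognise, then write".
    def is_great(work):
        # Stand-in criterion, purely illustrative: real literary judgement
        # is, of course, not a word count.
        return len(work.split()) > 5

    def write_after_reading(library):
        influences = [w for w in library if is_great(w)]   # read and recognise first
        return f"a new work informed by {len(influences)} great one(s)"

    print(write_after_reading(["a brief note", "a long and carefully crafted novel of many words"]))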

Stanford University’s Donald Knuth once touted the notion of “literate programming” as an approach whereby program logic is given depth of meaning with the frisson of natural language explanations, this being a cross between footnotes and critical interpretation.

The aim could have been to ultimately curl up in a comfy chair in front of an open fire with a bundle of good code to read but it didn’t quite work out that way. The pithy aphorism of MIT academics Hal Abelson and Gerald Sussman that “programs must be written for people to read, and only incidentally for machines to execute” is still only a pipe dream, but it shouldn’t be.
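Knuth’s own tools (WEB and CWEB) weave prose and code into a single typeset document, but the flavour of literate programming can be hinted at with nothing more than generous narration in the comments, as in this small Python sketch (the running-average example is my own, chosen only for brevity):

    # This little program is written to be read: each step of the reasoning is
    # narrated just before the line that performs it.
    def running_average(values):
        # We keep two pieces of state: how many numbers we have seen so far,
        # and their sum. The average at any moment is simply total / count.
        count, total = 0, 0.0
        averages = []
        for v in values:
            count += 1
            total += v
            # Record the average after each new value, so a reader can watch
            # the estimate settle as more data arrives.
            averages.append(total / count)
        return averages

    print(running_average([3, 4, 5, 6]))   # [3.0, 3.5, 4.0, 4.5]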

 


Programming languages are governed by syntax and semantics much like natural dialects. They can be viewed as tightly constrained variants of English, built as they are around the character set of the Latin alphabet, which is in part an accidental legacy of the American origins of this lingua franca of technology.

(This may smack of Western imperialism to some, and in response the قلب (“alb”) programming language was recently created, based on Arabic script.)

The average English speaker may have a vocabulary of more than 30,000 words but a popular programming language such as Java only requires recognition of around 50 keywords and how they are used in context.
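The claim is easy to check for a language you have to hand: Python, for instance, ships with a module that lists its reserved words, and the count comes to only a few dozen, of the same order as Java’s roughly 50:

    # Python publishes its own, rather small, vocabulary of reserved words.
    import keyword

    print(len(keyword.kwlist))    # a few dozen keywords in total
    print(keyword.kwlist[:5])     # begins with 'False', 'None', 'True', ...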

 


Such brevity can mean that getting things done in a computer language requires a penchant for puzzles, or even poetry. A 2012 creative project soliciting “code poems” resulted in a limited edition publication now in its second edition.

The driving force behind this artistic endeavour, artist and engineer Ishac Bertran, is of the opinion that “code can speak literature, logic and maths”.

A total of 190 poems were submitted by writers from 30 different countries for the first edition of “code (poems)”, the only submission criteria being that a poem be no larger than half a kilobyte and that it run on a computer without falling over in a heap of error messages. In other words, it had to be a poem that was also bug-free software, something that actually worked.
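Both criteria are mechanical enough to check automatically. Here is a rough sketch in Python, assuming the entry is itself a Python source file (the filename poem.py and the 512-byte limit are simply the half-kilobyte rule made concrete):

    import os
    import subprocess
    import sys

    def check_code_poem(path, limit=512):
        # Criterion 1: the poem must fit in half a kilobyte.
        if os.path.getsize(path) > limit:
            return "too long"
        # Criterion 2: it must run without collapsing into a heap of error messages.
        result = subprocess.run([sys.executable, path], capture_output=True)
        return "accepted" if result.returncode == 0 else "did not run cleanly"

    print(check_code_poem("poem.py"))   # poem.py is a hypothetical entry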

Contributions in arcane dialects such as HTML, C#, SQL, Objective C and AppleScript were most welcome.

Get literate

 

Larry Atkin (front) and David Slate at the 10th ACM (Association for Computing Machinery) Computer Chess Championship in Detroit, Michigan (1979). Photograph: laimagendelmundo

Coding literacy is of paramount importance to ongoing global innovation. There is a broad and grave concern that students are being turned off studying computing courses at university due to a misguided apprehension that programming is difficult to absorb.

Programming may have been poorly taught at some institutions, with ill-focused textbooks, and over time this has led to its being perceived as something non-mainstream, confined to the geek domain.

To counter this perceived difficulty, campaigns are emerging from several quarters that seek to promote coding as an empowering ability, much like a second language.

Code.org is a not-for-profit foundation set up to champion the need for computer programming education. With supporting testimonials from the two famous Bills, Clinton and Gates, Ashton Kutcher and will.i.am of the Black Eyed Peas, the website and accompanying rah-rah videos attempt to recast coding as fun, creative and within the scope of all citizens, not just propeller heads.

 


Volunteer-led efforts such as Codecademy and ScriptEd are spreading the same mission.

The ScriptEd initiative is immersing students from low-income high schools in Harlem in learning environments where coding skills can be acquired naturalistically. This is more Berlitz language school in tone than the often implicit desperation evident in the enculturation of clichéd “work-ready” technical graduates.

Coding should be seen for what it is: another way to communicate, a liberating force that can enable better living through programming.

Esperanto — conceived and created in the late 19th century — was a noble but failed attempt to engineer a universal natural language. The panoply of existing computer programming languages is similarly artificial, and each in its own subtle way influences how its “speakers” think.

Now is the time for a new breed of polyglots to arise and creatively tinker away in the process. Can you afford to not wax lyrical in computerese?

John Lenarcic does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

The Conversation

This article was originally published at The Conversation.
Read the original article.