Why reading and writing on paper can be better for your brain

23.02.2015 16:10

My son is 18 months old, and I’ve been reading books with him since he was born. I say “reading”, but I really mean “looking at” – not to mention grasping, dropping, throwing, cuddling, chewing, and everything else a tiny human being likes to do. Over the last six months, though, he has begun not simply to look but also to recognise a few letters and numbers. He calls a capital Y a “yak” after a picture on the door of his room; a capital H is “hedgehog”; a capital K, “kangaroo”; and so on.

Reading, unlike speaking, is a young activity in evolutionary terms. Humans have been speaking in some form for hundreds of thousands of years; we are born with the ability to acquire speech etched into our neurones. The earliest writing, however, emerged only 6,000 years ago, and every act of reading remains a version of what my son is learning: identifying the special species of physical objects known as letters and words, using much the same neural circuits as we use to identify trees, cars, animals and telephone boxes.

It’s not only words and letters that we process as objects. Texts themselves, so far as our brains are concerned, are physical landscapes. So it shouldn’t be surprising that we respond differently to words printed on a page than to words appearing on a screen; or that the key to understanding these differences lies in the geography of words in the world.

For her new book, Words Onscreen: The Fate of Reading in a Digital World, linguistics professor Naomi Baron conducted a survey of reading preferences among over 300 university students across the US, Japan, Slovakia and Germany. When given a choice of media ranging from printouts to smartphones, laptops, e-readers and desktops, 92% of respondents replied that it was hard copy that best allowed them to concentrate.

This isn’t a result likely to surprise many editors, or anyone else who works closely with text. While writing this article, I gathered my thoughts through a version of the same principle: having collated my notes onscreen, I printed said notes, scribbled all over the resulting printout, argued with myself in the margins, placed exclamation marks next to key points, spread out the scrawled result – and from this landscape hewed a (hopefully) coherent argument.

What exactly was going on here? Age and habit played their part. But there is also a growing scientific recognition that many of a screen’s unrivalled assets – search, boundless and bottomless capacity, links and leaps and seamless navigation – are either unhelpful or downright destructive when it comes to certain kinds of reading and writing.

Across three experiments in 2013, researchers Pam Mueller and Daniel Oppenheimer compared the effectiveness of students taking longhand notes versus typing onto laptops. Their conclusion: the relative slowness of writing by hand demands heavier “mental lifting”, forcing students to summarise rather than to quote verbatim – in turn tending to increase conceptual understanding, application and retention.

In other words, friction is good – at least so far as the remembering brain is concerned. Moreover, the textured variety of physical writing can itself be significant. In a 2012 study at Indiana University, psychologist Karin James tested five-year-old children who did not yet know how to read or write by asking them to reproduce a letter or shape in one of three ways: typed onto a computer, drawn onto a blank sheet, or traced over a dotted outline. When the children were drawing freehand, an MRI scan during the test showed activation across areas of the brain associated in adults with reading and writing. The other two methods showed no such activation.

Similar effects have been found in other tests, suggesting not only a close link between reading and writing, but that the experience of reading itself differs between letters learned through handwriting and letters learned through typing. Add to this the help that the physical geography of a printed page or the heft of a book can provide to memory, and you’ve got a conclusion neatly matching our embodied natures: the varied, demanding, motor-skill-activating physicality of objects tends to light up our brains brighter than the placeless, weightless scrolling of words on screens.

In many ways, this is an unfair result, effectively comparing print at its best to digital at its worst. Spreading my scrawled-upon printouts across a desk, I’m not just accessing data; I’m reviewing the idiosyncratic geography of something I created, carried and adorned. But I researched my piece online, I’m going to type it up onscreen, and my readers will enjoy an onscreen environment expressly designed to give these words resonance: a geography, a context. Screens are at their worst when they ape and mourn paper. At their best, they’re something else entirely, free to engage and activate our wondering minds in ways undreamt of a century ago.

Above all, it seems to me, we must abandon the notion that there is only one way of reading, or that technology and paper are engaged in some implacable war. We’re lucky enough to have both growing self-knowledge and an opportunity to make our options as fit for purpose as possible – as slippery and searchable, or as slow with friction, as the occasion demands.

I can’t imagine teaching my son to read in a house without any physical books, pens or paper. But I can’t imagine denying him the limitless words and worlds a screen can bring to him either. I hope I can help him learn to make the most of both – and to type/copy/paste/sketch/scribble precisely as much as he needs to make each idea his own.