
Can learning music help you learn to read and write?

Written By: Youki on March 24, 2009

Who here learned the English alphabet to the “Twinkle, twinkle, little star” melody? Of course, since we have many international readers, I’m half expecting a few people to say “I didn’t!” (or maybe you did? I’m curious). Well, in my own experience growing up, the link between music and language was strong. From nursery rhymes to songs on Sesame Street, much of the language I learned as a child came in musical form.

Somewhere along the line, though, the connection between music and language begins to weaken. Children start reading more “serious” books, books about facts and figures and historical events. You learn what “non-fiction” means and what a “textbook” is. Instead of being read to, you start to read for yourself and internalize the process of reading. Music plays an important role in language for young children, but what about older children and adults?

A recent study by Joseph M. Piro and Camilo Ortiz, “The effect of piano lessons on the vocabulary and verbal sequencing skills of primary grade students,” published in Psychology of Music (if you’re a UC Berkeley student, you can access the article through a campus connection or a proxy), suggests a correlation between music instruction and performance in language and literacy:

Music and literacy are compatible, interdependent symbol systems that share content and process elements, organizational principles, and expressive qualities. Because of this, the domain of literacy presents a rich opportunity to examine the effect of music enhanced instruction. Several parallels have been noted between coding language and coding music (Hansen & Bernstorf, 2002; Wiggins, 2007). First, both music and language are major and frequent forms of communication for children. A sound, syntax, and semantic progression are present in each, and composition in both forms has traditionally required the musician or reader/writer to organize from established rules to communicate meaning. When students are asked to interpret what textual passages mean, they are likely to call upon stored syntactic and semantic strategies they have internalized to make some kind of meaning out of words. Included in these strategies are decoding, word attack, and comprehension skills. There is an existing linguistic infrastructure for the student to access. Likewise in mediated musical exchange, units, including pitch, timbre, texture, line, and form convey meaning and, just as in written text, meaning can be constructed by students using words, phrases, and sentences. Like text, music is also read from left to right and top to bottom (Lloyd, 1978). These same parallels have been identified by Hansen, Bernstorf, and Stuber (2004) who noted similar code-breaking strategies required for both music and literacy. They also suggest that dimensions of reading, such as phonological awareness, phonemic awareness, language reception, and fluency have counterparts in music learning and performance.

In the study, one group of students was given three years of scaffolded music instruction while another was given no music instruction. These groups were chosen from geographically and demographically similar schools to help reduce confounding factors. Results showed that children who received music training outperformed children who didn’t on the Meeker Structure of Intellect (SOI) Vocabulary and Verbal Sequencing tests.

Of course, the authors of the study properly acknowledge that this is a preliminary study and the results only suggest a correlation between music instruction and language/literacy ability. Simply having different instructors, regardless of the subject matter, may be enough to produce measurable differences in ability. However, the question for me (right now) isn’t whether the results are reliable (although it would be great if they were, because then we could work on developing curricula better suited to students’ needs), but how studies like this one help us think about language in different ways.

While browsing through the list of Twitter Resurrections, I came across the Marshall McLuhan Twitter feed and saw the following intriguing quote:

Far more thought and care go into the composition of a prominent ad in a newspaper than go into the writing of their features and editorials

Why use the word “composition”? My McLuhan reader provides a clue:

In some ways, McLuhan was closer to such artists in his perceptions: a Kandinsky who held that “the environment is the composition,” and that “objects have to be considered in the light of the whole.” (The Essential McLuhan, p. 5)

The art of musical composition, which involves not just constructing melodies but also understanding chords, keys, and timing (among many other musical concepts), is fundamentally a consideration of the relationship of the individual element to the whole. Composing a song is very much like composing a story: from beginning to end, each element of the composition should fit into the larger structure. For a beginning student, composing music really just boils down to picking notes that fit the key and chord progression, and bending a note slightly when you want to introduce tension. It is an exercise in understanding musical rules and how to work within them. Written language works in very much the same way: rules of grammar and syntax provide a general idea of how to write, and it’s up to the writer to choose elements that fit within those rules, but also elements that work slightly against them (so as to avoid sounding bland or repetitive).
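
To make that idea concrete, here is a minimal sketch in Python. It is my own toy illustration, not anything from the study or from music pedagogy; the scale, the chord progression, and the names (C_MAJOR_SCALE, PROGRESSION, toy_melody) are invented for the example. It simply picks notes that belong to the current chord most of the time, and occasionally steps outside the chord (while staying in the key) to introduce a little tension:

    import random

    C_MAJOR_SCALE = ["C", "D", "E", "F", "G", "A", "B"]

    # A simple I-V-vi-IV progression in C major, each chord given as its chord tones.
    PROGRESSION = [("C",  ["C", "E", "G"]),
                   ("G",  ["G", "B", "D"]),
                   ("Am", ["A", "C", "E"]),
                   ("F",  ["F", "A", "C"])]

    def toy_melody(progression, notes_per_chord=4, tension_prob=0.2):
        """Mostly pick chord tones; occasionally pick a scale note outside the
        current chord to introduce a little tension."""
        melody = []
        for _, chord_tones in progression:
            for _ in range(notes_per_chord):
                if random.random() < tension_prob:
                    # "Tension": a note from the key that is not a chord tone.
                    melody.append(random.choice(
                        [n for n in C_MAJOR_SCALE if n not in chord_tones]))
                else:
                    melody.append(random.choice(chord_tones))
        return melody

    print(toy_melody(PROGRESSION))

The rules (the key, the chord tones) do most of the work; the occasional departure from them is what keeps the result from sounding bland, which is the same tension a writer manages against grammar and genre conventions.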

To quote Ezra Pound, “Music rots when it gets too far from the dance. Poetry atrophies when it gets too far from music.” Reading is still musical for adults. We just need to make sure we have enough poetry in our lives. So here you go:

Forgotten Language

Once I spoke the language of the flowers,
Once I understood each word the caterpillar said,
Once I smiled in secret at the gossip of the starlings,
And shared a conversation with the housefly
in my bed.
Once I heard and answered all the questions
of the crickets,
And joined the crying of each falling dying
flake of snow,
Once I spoke the language of the flowers. . . .
How did it go?
How did it go?

– Shel Silverstein


Hopefully in the future: “Can learning art help you learn to read and write?” or perhaps even “Can learning dance help you learn to read and write?”

related posts:

The Experience of Reading

Underlying structures of music (and language?)

The serendipity of nonce words

Music 2.0



12 Responses to “Can learning music help you learn to read and write?”

  1. Usree Bhattacharya on: 26 March 2009 at 9:51 am

    A riveting post, Youki; I cannot thank you enough for posting on this. I really enjoy your posts discussing literacy and music, and I look forward to reading more.

    One of the things that jumped out for me was in the extract from Piro and Ortiz:

    “Like text, music is also read from left to right and top to bottom (Lloyd, 1978).”

    Heh. Depends. That’s a culturally/linguistically specific claim, right? I understand this is all in the context of “piano music” within (I presume) a “Western” context, but the claim falls apart when you think of cultures in which piano music may be read one way and “texts” another. This is only one tiny aspect of the larger push of their argument, but it bothered me anyway.

    Anyway, excellent post. LOVE IT!

  2. Youki on: 26 March 2009 at 2:40 pm

    yeah good point, they should revise their statement to be more culturally inclusive. It’s not the left-right/top-bottom aspect of these texts that is interesting, but the ways in which these texts are organized linearly and how we visually process the relations between elements in the texts.

    hmm, pianists learn to read two lines at once – the treble and bass lines. The really weird part is that the two lines are offset slightly (the same staff position that reads as a “G” in treble clef reads as a “B” in bass clef). [example] I wonder if that has any impact on learning to decode written texts. Probably not any significant impact, but it is interesting to think about (for me at least, hah!)
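
    (A quick toy sketch of the offset, just mapping the five staff lines of each clef from bottom to top; nothing rigorous, and purely for illustration:)

        TREBLE_LINES = ["E", "G", "B", "D", "F"]  # "Every Good Boy Does Fine"
        BASS_LINES   = ["G", "B", "D", "F", "A"]  # "Good Boys Do Fine Always"

        for line, (treble, bass) in enumerate(zip(TREBLE_LINES, BASS_LINES), start=1):
            print(f"staff line {line}: {treble} in treble clef, {bass} in bass clef")

        # e.g. staff line 2 prints "G in treble clef, B in bass clef":
        # the same position on the staff, read a third apart.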

  3. Usree Bhattacharya on: 26 March 2009 at 2:55 pm

    Thanks for that…I agree with your analysis; tho’ it’s always fun to point these ethnocentrisms out. Ha.

    You know, I am reading Benveniste right now, “The Semiology of Language,” and came across this line in his discussion of the principle of nonredundancy:

    Semiotic systems are not “synonymous”; we are not able to say “the same thing” with spoken words that we can with music, as they are systems with different bases.

    I know it’s somewhat tangential, but I was wondering if you agree with what he’s saying in that section? Or if you think it relates to what you’re saying here?

  4. Usree Bhattacharya on: 26 March 2009 at 3:06 pm

    ps. detailed discussion on pp. 236-239, if you’re so inclined.

    🙂

  5. Youki on: 26 March 2009 at 5:25 pm

    yeah I agree with that statement, but it’s also a bit limiting. It’s not about what is said, but the relations within what is said — the logic of the system, or more specifically, the grammar. Music has a grammar in very much the same way that written and spoken language do, and it’s the similarities between these grammar systems that may produce opportunities for understanding language in new ways.

    Think about Kress and his concept of visual grammar. From Literacy in the New Media Age, page 66:

    In the high era of writing, when the logic of writing dominated the page, the organisation of the page was not an issue. Now that organisation has become one resource for the meaning of the new textual ensembles. These meanings derive from the meanings of the mode of the visual, from the meanings of visual ‘grammar’. It becomes necessary therefore to say something about visual grammar. My brief excursion into etymology at the start of the chapter indicates why I am happy to use the term grammar, despite the danger of being accused of applying linguistic terminology to images. I feel confident about reappropriating the word for a much wider use in semiotic discussion of all modes of meaning-making, where the term can have real uses. In that new sense grammar is for me the overarching term that can describe the regularities of a particular mode which a culture has produced, be it writing, image, gesture, music or others.

  6. Usree Bhattacharya on: 26 March 2009 at 7:33 pm

    Benveniste later adds that “if music is considered as a language, it has syntactic features, but not semiotic features” (p. 237). He also goes on to call it (music) a system of “nonsignifying units” (p. 238). It’s a sophisticated argument, and I am not sure I understand (or agree with) all aspects of it yet.

    So we’re here talking of grammar, which I am trying to understand in relation to Saussure’s langue. Even Kress’ definition of visual grammar seems to be langue-like. Can you tell me how a discussion of langue may relate to this? I have some follow-up questions if so…

  7. Usree Bhattacharya on: 26 March 2009 at 7:38 pm

    ps. Youki, yes, in India, I DID learn the ABC’s to the tune of Twinkle, Twinkle…

    And on that note, do you remember watching this with me?

  8. Youki on: 26 March 2009 at 10:56 pm

    Music also has a general langue-parole system: langue being musical notation (and its associated rules, techniques, theories, generic chord structures/progressions, etc.) and parole being musical performance.

    I simply cannot agree that music has no semiotic features. For one, songs fall within genres, and hearing a song (even without words) — let’s say a marching hymn or a soulful harmonica solo — will evoke a personal and cultural history. Even at the most basic level, a song in a minor key feels different from a song in a major key; the two are often (if not always accurately) associated with sadness and elation, respectively. The tempo of a song influences how the song is perceived — think of “Flight of the Bumblebee” or “Ride of the Valkyries.”

    I mean, could anyone make the argument that any country’s national anthem has no signification? That such a song would be meaningless? I may be missing his argument, but I absolutely disagree with the statement: “if music is considered as a language, it has syntactic features, but not semiotic features.” (p. 237)

  9. Usree Bhattacharya on: 28 March 2009 at 5:57 pm

    I agree with you…! Thanks for this.

    Hey, Youki, did you see this?

  10. Youki on: 28 March 2009 at 9:56 pm

    oh yeah I did, it’s in my “maybe blog about in the future” mental box.

  11. Usree Bhattacharya on: 28 March 2009 at 10:42 pm

    getting flooded much in there?

    🙂

  12. Tod Woodward on: 16 August 2010 at 10:12 am

    Asian languages have tonal inflections that make them very colourful to listen to.
    Take Mandarin, Cambodian, Thai, and Vietnamese: their languages are full of tonal inflections, which are part of the language… and quite musical. A China-born, Mandarin-speaking friend explained that English wasn’t so difficult to learn; it was just that he couldn’t grasp the “emotion” of a word, since English doesn’t carry as much tonal inflection as Mandarin does.

