Saturday, 27 August 2011

Maureen Walsh

Autumn 2004

Talking English

Literacy in the age of technology

http://www.eqa.edu.au/site/literacyintheageof.html

MAUREEN WALSH gives an overview of some of the literature on the paradigm shift in the teaching and learning of literacy with students who regard SMS messages and MP3 downloads as a part of life.

The young person who watches digital TV, downloads MP3 music onto a personal player, checks email on a personal organiser and sends symbolised messages to a mobile phone of a friend will not be satisfied with a 500 word revision guide for [HSC] physics
(Abbott 2003, p. 45).
IN THE FIELD OF LITERACY EDUCATION, there has been growing acknowledgement since the 1990s of visual literacy and the impact of information and communications technology. Students of today live in an environment permeated with visual, electronic and digital texts. Alongside this textual shift there is now a paradigm shift, based on the belief that literacy is more than the reading and writing of print. Such a paradigm shift raises several questions. Are learning and literacy developing differently for students who have grown up in a ‘multimodal’ environment far different from that of their parents and teachers? Are teachers, parents and the wider community aware of the implications of new modes of learning? Are teachers working within these new modes of learning and communication, or within a learning paradigm that is still reliant on print-based structures?

Changed contexts for learning and literacy

Different terms have been used in an attempt to encapsulate the changing nature of reading, learning and communication within a multimodal environment. Terms such as ‘multiliteracies’ (New London Group 2000; Unsworth 2001), ‘new literacies’ (Lankshear & Knobel 2003) and ‘multimodality’ (Kress & van Leeuwen 2001) have been constructed around particular pedagogical frameworks. In educational publications we encounter a multitude of terms such as ‘digital literacy’, ‘media literacy’, ‘cultural literacy’, ‘technoliteracy’ and, more recently, ‘silicon literacy’ (Snyder 2003) and ‘hypermodality’ (Lemke 2002). Such terminology reflects attempts to explain and understand literacy and learning within changed learning contexts and to establish new learning paradigms.
Several researchers contend that visual texts are impacting on neural networks and changing conceptual schemata (Heath 2000). Educators need to understand the learning implications for students who are growing up in an environment of digital media, where communication for children is entirely different from what school offers and prepares them for. Previously we could determine the types of meaning students would need to make from print-based texts. Now we need to investigate the way meaning is constructed through multimodal communication and the different ways learning is occurring for students.

Learning and literacy in a multimodal environment

Multimodal texts are those texts that have more than one ‘mode’, so that meaning is communicated through a synchronisation of modes. That is, they may incorporate written language and images, still or moving; they may be produced on paper or on an electronic screen; and they may incorporate sound. The types of multimodal texts that students commonly encounter in print form in their educational environment are picture books, information books, newspapers and magazines. Film and video have been used in schools for many years, but access to the electronic screen, the Internet and digital media varies depending on the technology resources of the school or sector. More often, students have access to sophisticated technology outside the educational environment through the Internet, various forms of digital games, DVDs and text messaging.
Within multimodal texts, the function of modes such as image, movement, colour, gesture, 3D objects, music and sound needs to be examined further. Several researchers are investigating different aspects of multimodality, either as a means of ‘representation’ and ‘communication’ (Kress & van Leeuwen 2001) or as a research tool for analysing classroom communication and learning (Kress et al. 2001). These researchers propose that a semiotic theory of multimodality, rather than a theory of linguistics, is needed to describe the multimodal nature of learning.
Kress and van Leeuwen (1996; 2001) have challenged the traditional emphasis on print in the light of the growing dominance of multimodal texts and digital technology. They contend that a language-based pedagogy is no longer sufficient for the reading practices needed in our information age. Crucial issues raised by Kress and others are that ‘the screen’ and multimodal texts have developed new ways of communicating, and that written text is now only one part of the message and no longer necessarily the dominant part.
New types of texts require different conceptualisations and different ways of thinking. Kress describes significant differences between words and images. He shows that writing relies on the ‘logic of speech’, involving time and sequence, whereas the ‘logic of the image’ involves the presentation of space and simultaneity. Thus the reading of visuals involves quite a different process from the reading of words. Kress and Bearne (2001) have shown that schools foster the ‘logic of writing’, whereas contemporary children’s experiences are grounded in the ‘logic of the image’. In a recent publication, Bearne has shown examples of how students are now producing texts that assume an integration of image and word, supplying sound and elements of gesture and movement as they compose their own meanings (2003, p. 98). Bearne contends that assessment processes need to take account of these changes in students’ texts.
Thus the nature of literacy, knowledge and classroom learning needs to be reconceptualised within continually changing modes of communication. Educational researchers need to examine evidence of how students are learning in response to multimodal texts and how this learning varies across different curriculum areas. Will such evidence demonstrate that reading and learning through digital, multimodal texts are consistent with, or different from, school approaches to knowledge and learning? Will the analysis of data provide evidence for developing a new pedagogy for a changed learning environment? What are the implications for teaching? These are crucial questions to investigate for students of today and the future.
References
Bearne, E (2003). ‘Rethinking Literacy: Communication, Representation and Text’ in Reading Literacy and Language, 37:3, November, p. 98.
Callow, J & Zammitt, K (2002). ‘Visual literacy: from picture book to electronic texts’ in Monteith, M (ed.) Teaching Primary Literacy with ICT, Open University Press, Buckingham.
Cope, B & Kalantzis, M (eds) (2000). Multiliteracies: Literacy Learning and the Design of Social Futures, Macmillan, Melbourne.
Heath, SB (2000). ‘Seeing our Way into Learning’ in Cambridge Journal of Education, 30:1, pp. 121–131.
Kress, G & van Leeuwen, T (1996). Reading Images: The Grammar of Visual Design, Routledge, London.
Kress, G & van Leeuwen, T (2001). Multimodal Discourse, Routledge, London.
Kress, G et al. (2001). Multimodal Teaching and Learning: The Rhetorics of the Science Classroom, Continuum, London.
Lankshear, C & Knobel, M (2003). New Literacies: Changing Knowledge and Classroom Learning, Open University Press, Buckingham.
Lemke, J (2002). ‘Travels in hypermodality’ in Visual Communication, 1:3, October, pp. 299–325.
Snyder, I (ed.) (2003). Silicon Literacies: Communication, Innovation and Education in the Electronic Age, Routledge, London.
Unsworth, L (2001). Teaching Multiliteracies Across the Curriculum: Changing Contexts of Text and Image in Classroom Practice, Open University Press, Buckingham.
