Multimodality, ethnography and education in South America




References

Halliday, M. A. (1969). Options and functions in the English clause. Brno Studies in English, 8, 81-88.

Halliday, M. A. (1973). Explorations in the Functions of Language. London: Edward Arnold.

Halliday, M. A. (1978). Language as Social Semiotic: The Social Interpretation of Language and Meaning. London: Edward Arnold.

Kress, G., & Van Leeuwen, T. (1996). Reading Images: The Grammar of Visual Design. London: Routledge.

Martin, J. R., & Stenglin, M. (2007). Materialising Reconciliation: Negotiating Difference in a Transcolonial Exhibition. In T. Royce & W. Bowcher (Eds.), New Directions in the Analysis of Multimodal Discourse (pp. 275-38). Mahwah, NJ: Erlbaum.

O’Toole, M. (1994). The language of displayed art. London: Leicester University Press (a division of Pinter).

Ravelli, L. (2006). Museum Texts: Communication Frameworks. London and New York: Routledge.

Van Leeuwen, T. (1999). Speech, Music, Sound. London: Macmillan.

Re-reading Hjelmslev’s notion of sign and its application in multimodal discourse analysis—focusing on animation in the educational context

Yufei He
University of Sydney


yufei.he@sydney.edu.au

In his Prolegomena to a Theory of Language (1961), Hjelmslev further develops Saussure’s conception of the sign as the bonding of signifié and signifiant, a development that is illuminating for theorizing multimodal studies. Although the volume has been reviewed by many linguists and semioticians (e.g. Barthes, 1977), many of its key ideas remain underexplored. One such idea is purport, defined as a common factor among different languages: “an amorphous ‘thought-mass’, an unanalyzed entity” that is “ordered, articulated, formed in different ways in the different languages” (pp. 50-51). By exploring the relatively new semiotic system of animation, particularly animation in the educational context, this paper argues that the concept of purport can also be used to extract common factors among different semiotic modes. Just as each language lays down its own boundaries within the purport, so do different semiotic systems. A salient difference between language and animation lies in the representation of ‘circumstantial information’. Many languages (e.g. English, Chinese) set boundaries between different kinds of circumstantial information, such as manner and location. Animation, by contrast, does not set such boundaries within the purport: all ‘circumstances’ are fused with the dynamic changes, so that this information becomes not circumstantial but essential. Hjelmslev’s other important notions, including commutation, are also used to develop the expression and content planes of animation. A re-reading of Hjelmslev’s notion of sign is expected to shed new light on the study of semiotic systems other than language and, more importantly, to facilitate our understanding of the synergy between different semiotic systems in one multimodal text.



References

Barthes, R. (1977). Elements of Semiology. Macmillan.

Hjelmslev, L. (1961). Prolegomena to a Theory of Language. Madison, Wisconsin: University of Wisconsin Press.

Animation and the remediation of school physics

Yufei He
University of Sydney


Yufei.he@sydney.edu.au

Theo van Leeuwen


University of Southern Denmark
leeuwen@sdu.dk

Building on earlier work on animation by Leão (2012) and Djonov and Van Leeuwen (2015, forthcoming), this paper investigates the affordances of animation for representing concepts that play a crucial role in the year 7-10 physics curriculum in Australia.

After discussing the representational affordances of animation in general, the paper will focus on the affordances of Explain Everything, a whiteboard application widely used in Australian schools.

While this software is particularly useful for animating motion, some types of movement are less easily animated, for instance simultaneous movements (e.g. the movement of particles in a liquid) and movements that involve a change of quality (e.g. evaporation).

The paper will then discuss how a class of 14- and 15-year-old school students attempted to overcome such constraints.

It will conclude with some methodological remarks on the developing social semiotic approach to semiotic technology, focusing in particular on the importance of interrelating the analysis of specific software applications with the analysis of their use (cf. Kvåle, 2016).



References

Leão, G. (2012) Movement in Film Titles – An Analytical Approach. Unpublished PhD thesis, University of Technology, Sydney.

Djonov, E. and Van Leeuwen, T. (2015) Notes towards a semiotics of kinetic typography. Social Semiotics 25(2): 244-253.

Djonov, E. and Van Leeuwen, T. (forthcoming) A social semiotic approach to software: A critical multimodal analysis of animation in PowerPoint.

Kvåle, G. (2016) Software as ideology: A multimodal critical discourse analysis of Microsoft Word and SmartArt. Journal of Language and Politics 15(3): 259-273.

Looking at videogames, children’s picture books and museum exhibits multimodally: an integrated multiliteracies project

Viviane M. Heberle


UFSC – The Federal University of Santa Catarina, Brazil
viviane.heberle@ufsc.br

Since the New London Group (1996) emphasized the need to integrate various modes of meaning-making, including the visual, the audio and the spatial modes, in educationally relevant projects, several researchers have provided significant studies related to multiliteracies (e.g. Unsworth, 2001, 2008; Unsworth & Thomas, 2014; Jewitt & Kress, 2008). Considering the array of different modes in social semiotic multimodal theory, this paper looks at three meaning-making social artefacts in contemporary society, namely videogames, children’s picture books and museum exhibits. These three kinds of artefacts are envisaged as part of an integrated multiliteracies project based on systemic functional linguistics and the grammar of visual design. For videogames and children’s picture books, the analysis focuses on the verbal and visual resources used to portray the characters (participants) and their corresponding actions, as well as the interaction with players and readers. The analysis of museum exhibits, on the other hand, concentrates on representational and interactive meanings related to spatial discourse analysis. Educationally speaking, the integrated proposal may help make students aware of the multiplicity of meaning-making resources and of innovative ways to produce and interpret multimodal meanings.



References

Jewitt, C., & Kress, G. (2008). Multimodal literacy. New York: Peter Lang.

The New London Group. (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60-91.

Unsworth, L. (2001). Teaching multiliteracies across the curriculum: Changing contexts of text and image in classroom practice. Buckingham, UK: Open University Press.

Unsworth, L. (Ed.). (2008). Multimodal Semiotics: functional analysis in contexts of education. London, New York: Continuum.

Unsworth, L., & Thomas, A. (Eds.). (2014). English Teaching and New Literacies Pedagogy: Interpreting and authoring digital multimedia narratives. New York: Peter Lang.



Artefacts as central to computer programming in early childhood education settings

Mia Heikkilä


School of Education, Culture and Communication, Mälardalen University, Sweden
mia.heikkila@mdh.se

This article presents the results of an analysis of how principles of computer programming are elaborated and used as a means to spark four- and five-year-olds’ interest in both computer programming and mathematics. The aim of the study was to elaborate broadly on how programming can be understood as a new feature in preschools (Mannila, 2017). Multimodality was considered a relevant analytical tool for understanding the interaction and communication going on in the computer programming teaching sequences, since preschool learning processes do not centre on children’s language use (Kress, 2003; Selander, 2017).

The data consist of video recordings, interviews with teachers, and an analysis of the documentation and arguments for broadly introducing programming as a tool in all early childhood education settings in this municipality. The video recordings were made in one early childhood education unit in a Swedish preschool during 2017-2018 and constitute the main data used here.

The analysis shows that programming generates great interest among the children, evident in their patience and willingness to follow the content of the sequences. They make meaning through the continued use of modes that are relevant to the sequence and to each other. The analysis shows how modes such as pointing, gaze direction and body position are central to how the teaching sequences unfold and how a learning process is constituted, and the affordances of these modes are analysed. These modes are closely tied to a common artefact that is being programmed, for example Lego blocks, musical instruments or a robot. With respect to children’s meaning making, artefact-focused activities such as play do not always include gaze contact, which is often considered a central aspect of human communication and thereby, perhaps, central to learning processes. Programming as learning content can thus in some respects be said to happen outside the human body; yet since the teachers use the body as a ‘tool’ to program, bodily communication still takes place. Bodily communication and learning can be said to have a special significance in children’s learning, where language is not established as a normative focus of learning processes. The results show that girls and boys are equally active in the sequences. What this means for the programming content is analysed and discussed elsewhere.


Artificial intelligence in multimodality research

Tuomo Hiippala

University of Helsinki

tuomo.hiippala@iki.fi


Recent years have witnessed rapid advances in artificial intelligence (AI), particularly within the fields of machine learning and computer vision. More specifically, the approach known as deep learning has pushed the state of the art in many AI tasks, such as classifying the contents of images, finding objects and drawing their outlines, and generating captions for entire images or their parts (LeCun et al. 2015). Many approaches now explicitly address multimodality when developing solutions to these tasks, which makes them potentially useful for multimodality research.

This presentation discusses the extent to which deep learning can support multimodality research, evaluating the current possibilities and the semantic gap between humans and computers. Because artificial intelligence is already being introduced to the field of multimodality research in order to enrich corpora and to analyse larger volumes of data (see e.g. Bateman et al. 2016; Hiippala 2016; O’Halloran et al. 2018), this is a good time to take stock of the state of the art and to consider how the concept of multimodality is defined within the two fields.
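As a minimal sketch of what this kind of corpus enrichment can look like in practice (an illustration added here, not drawn from the studies cited above; it assumes torchvision 0.13 or later and Pillow are installed, and the image file names are placeholders), an off-the-shelf image classifier can attach automatic content labels to corpus images:

    # Minimal sketch: automatic image labels for a multimodal corpus,
    # using a pretrained ImageNet classifier (assumes torchvision >= 0.13, Pillow).
    import json
    import torch
    from PIL import Image
    from torchvision.models import resnet50, ResNet50_Weights

    weights = ResNet50_Weights.IMAGENET1K_V2
    model = resnet50(weights=weights)
    model.eval()
    preprocess = weights.transforms()        # resizing/normalisation expected by the model
    categories = weights.meta["categories"]  # ImageNet class names

    def top_labels(path, k=3):
        """Return the k most probable ImageNet labels for one corpus image."""
        image = Image.open(path).convert("RGB")
        batch = preprocess(image).unsqueeze(0)
        with torch.no_grad():
            probs = model(batch).softmax(dim=1)[0]
        scores, indices = probs.topk(k)
        return [(categories[int(i)], round(float(s), 3)) for i, s in zip(indices, scores)]

    # Hypothetical corpus files; in practice the labels would be mapped onto a
    # semiotically informed annotation scheme rather than used as-is.
    corpus_images = ["page_001.png", "page_002.png"]
    annotations = {path: top_labels(path) for path in corpus_images}
    print(json.dumps(annotations, indent=2))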

Whereas artificial intelligence often defines ‘modes’ based on sensory modalities such as vision and hearing, multimodality research works with more fine-grained, semiotically informed definitions, which do not necessarily align with sensory modalities (Bateman 2011). This difference not only affects the application of artificial intelligence to multimodality research, but also has implications for the contributions multimodality research can make in return, for instance in the form of systematic annotation frameworks for multimodal data. Resolving these differences is a prerequisite for future dialogue between the fields.

References

Bateman JA (2011) The decomposability of semiotic modes. In: O’Halloran KL and Smith BA (eds.) Multimodal Studies: Multiple Approaches and Domains. London: Routledge, pp. 17–38.

Bateman JA, Tseng C, Seizov O, Jacobs A, Lüdtke A, Müller MG and Herzog O (2016) Towards next generation visual archives: image, film and discourse. Visual Studies 31(2): 131–154.

Hiippala T (2016) Semi-automated annotation of page-based documents within the Genre and Multimodality framework. In: Proceedings of the 10th SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities. Berlin, Germany: Association for Computational Linguistics, pp. 84–89.

LeCun Y, Bengio Y and Hinton G (2015) Deep learning. Nature 521: 436–444.

O’Halloran KL, Tan S, Pham DS, Bateman JA and Vande Moere A (2018) A digital mixed methods research design: Integrating multimodal analysis with data mining and information visualization for big data analytics. Journal of Mixed Methods Research 12(1): 11–30.




Multimodal expressions of identities in websites of South Asian diasporas

Dr. Preet Hiradar, Assistant Professor


Department of English, Lingnan University, 8, Castle Peak Road, Tuen Mun (N.T.), Hong Kong
preethiradhar@gmail.com, ph@ln.edu.hk

In the context of the contemporary social changes of the global and networked society, new media technologies have brought about a reorientation of the possibilities for representation and meaning in communication. As contemporary communication has become increasingly multimodal (Kress and Van Leeuwen, 2001), and as technology is acknowledged to be what people make of it in a cultural context (Pauwels, 2005), websites are becoming potential sites for digital representation and cultural expression. More recently, these new forms of electronic mediation have become particularly evident in the websites of diasporic communities in Southeast Asia. The current paper forms part of a larger study that investigates digital forms of cultural expression in websites of South Asian diasporas in Hong Kong and Singapore. To study the diasporic communities’ configuration of semiotic resources in their websites, the paper deploys a multimodal social semiotic analytical framework (Kress, 2009) to analyse multimodal aspects of diasporic websites by addressing the following questions: i) What are the different modes and semiotic resources used in websites of South Asians in Hong Kong and Singapore? ii) What kinds of discourses of representation and culture are articulated in the websites of these diasporas? The findings indicate the significance of a multiplicity of modes for representation through sign-makers’ agency in the shaping of meaning online, and the analysis shows how diasporic websites seek to undo stereotypical and naturalized discourses about these communities (Mitra, 2001) through multimodal expressions of their identities and cultures.



Keywords: Multimodality, identity, websites, South Asian, diasporas

References

Kress, G. and T. van Leeuwen (2001) Multimodal discourse: The modes and media of contemporary communication, London: Arnold.

Kress, G. (2009) Multimodality: A Social Semiotic Approach to Contemporary Communication, New York; London: Routledge.

Mitra, A. (2001) Marginal voices in cyberspace, New media & society, 3(1), 29-48.

Pauwels, L. (2005) Websites as visual and multimodal cultural expressions: opportunities and issues of online hybrid media research, Media, Culture & Society, 27(4), 604-613.


Runes in gold. Analyzing bracteates as multimodal texts

Per Holmberg


Gothenburg University
per.holmberg@svenska.gu.se

Some of the very first texts in Scandinavia appear on so-called bracteates, a kind of medal produced in the 5th and 6th centuries AD to be worn as golden jewelry around the neck. About 1000 gold bracteates from this period have been found, and a little more than 100 make use of multimodal resources that combine visual elements with runes. The interpretation of this kind of text is much debated. Several visual elements recur, for example quadrupeds, birds, fish and human heads in profile. The sequences of runes are short, and some are relatively common, for example alu (’ale’?) and laukar (’leek’?). Furthermore, other signs are used, such as crosses, swastikas, dots and Latin letters. There is substantial archeological and runological research on bracteates; however, it has focused either on the iconographic classification of motifs and visual elements or on the interpretation of single bracteates. In my paper, I present a pilot study aiming at a deeper understanding of bracteates as multimodal texts. The more specific aim is to evaluate how the principle of concurrent semiotic systems can be applied to identify the visual and orthographic elements that are critical for meaning making (O’Toole 1994; Kress 2010). The data consist of a smaller corpus of bracteates that are visually characterized by the combination of a head in profile and a quadruped (from Axboe et al. 1985–1989). This type has traditionally been interpreted as a representation of the god Odin (Hauck 1975). Later research has opened a critical debate (cf. Wicker 2015), and one of the alternatives that has been put forward is that the motif concerns the sun (Andrén 2014). Finally, I want to discuss how a social semiotic multimodal analysis may contribute to the multidisciplinary discussion about interpretation and social function.



References

Andrén, Anders (2014). Tracing Old Norse Cosmology. The world tree, middle earth and the sun in archaeological perspectives. Lund: Nordic academic press.

Axboe, Morten; Klaus Düwel; Karl Hauck & Lutz von Padberg (1985–1989). Die Goldbrakteaten der Völkerwanderungszeit. Ikonographischer Katalog. Münstersche Mittelalterschriften 24. München.

Hauck, Karl (1975). Kontext-Ikonographie. Die methodische Entzifferung der formelhaften goldenen Amulettbilder aus der Völkerwanderungszeit. In: Fromm, Hans; Wolfgang Harms & Uwe Ruberg (eds.) Verbum et signum. Beiträge zur mediävistischen Bedeutungsforschung 1. München: Wilhelm Fink Verlag. 25–69.

Kress, Gunther (2010). Multimodality. A Social Semiotic Approach to Contemporary Communication. Oxford: Taylor & Francis.

O’Toole, Michael (1994). The Language of Displayed Art. Madison, New Jersey: Fairleigh Dickinson University Press.



Wicker, Nancy (2015). Bracteate inscriptions and context analysis in the light of alternatives to Hauck’s iconographic interpretations. Futhark. International Journal of Runic Studies. 5 (2014, publ. 2015). 25–43.

Audio description and multimodality: Accessing meaning-making in popular scientific texts

Jana Holsanova
Cognitive Science Department, LUX, Lund University, Sweden


The paper focuses on the production and reception processes involved in the audio description of the Swedish multimodal popular scientific journal Forskning och Framsteg (Research and Progress). The contents of the journal are made accessible to blind and visually impaired audiences by producing an audio version. When the printed version is interpreted and transformed into an audio version, the processes of reception and production coincide. These meaning-making processes are uncovered with the help of think-aloud protocols.

In my analysis, I apply the framework of socio-semiotic theories (Kress & Van Leeuwen 1996/2006; O’Halloran et al. 2012; Jewitt 2014), in particular work on the visual construction of specialized knowledge (Unsworth 1997; Unsworth & Cleirigh 2014), in combination with cognitive theories of the reception of multimodality (Holsanova 2008, 2014a, b).

First, the printed journal is analysed in accordance with Unsworth (1997), focusing on how the resources of text, images and graphics are deployed in scientific explanation and how meaning is constructed by the visuals. Second, the interpretative process of meaning-making is uncovered through think-aloud protocols. In order to produce an aural version of the complex text, the interpreter must assess what to describe, how to describe it, and when to describe it (Holsanova 2015). He combines the contents of the available resources, makes judgements about what information is relevant and how to verbalize it, fills in gaps in the interplay of the resources, and re-arranges the order of information for optimal flow and understanding. In this way, he contributes to multimodal literacy (Walsh 2010; Kress & Jewitt 2003). Finally, the aural version of the journal is compared to the printed version to show how the semiotic interplay has been realized for the end users.

References

HOLSANOVA, Jana (2015): Cognitive approach to audio description. In: Matamala, A. & Orero, P. (eds.): Researching audio description: New approaches. London: Palgrave Macmillan, pp. 49–73.

HOLSANOVA, J. (2014a): Reception of multimodality: Applying eye tracking methodology in multimodal research. In: Carey Jewitt (Ed.), Routledge Handbook of Multimodal Analysis. Second edition, pp. 285–296.

HOLSANOVA, J. (2014b): In the mind of the beholder: Visual communication from a recipient perspective. In Machin, D. (Ed.) Visual communication, Mouton - De Gruyter, pp. 331–354.

HOLSANOVA, J. (2008): Discourse, vision, and cognition. Benjamins: Amsterdam/Philadelphia.

KRESS, G. & JEWITT, C. (Eds.) (2003): Multimodal literacy. New York: Peter Lang.

KRESS, G. and VAN LEEUWEN, T. (2006[1996]). Reading images: The grammar of visual design. London: Routledge.

UNSWORTH, L. and CLEIRIGH, C. (2014) Multimodality and reading: The construction of meaning through image-text interaction. In Carey Jewitt (Ed.), Routledge Handbook of Multimodal Analysis. Second edition. London: Routledge, pp. 176-188.

UNSWORTH, L. (1997): Scaffolding Reading of Science Explanations: Accessing the Grammatical and Visual Forms of Specialized Knowledge. Reading, 31(3), 30-42.

WALSH, M. (2010): Multimodal literacy: What does it mean for classroom practice? Australian Journal of Language and Literacy, 33, No. 3, pp. 211-239.

A Comparative Multimodal Discourse Analysis of Sino-US Front Page News Reports

Zhao Hong


School of Foreign Studies, China University of Mining and Technology, China
zhaohongxuzhou@126.com

It is widely acknowledged that, with the development of modern technology and media, multimodality has become a defining characteristic of the age. According to Kress & van Leeuwen (2001), the multimodal approach integrates language and language-related resources such as image, sound, gesture and movement. This integrative approach is rooted in everyday communication, in which people make use of various semiotic resources to make and negotiate meaning.

By now, the grammars of many semiotic modes have been well sketched; most are based to some degree on the semiotic theories of Halliday and therefore share a common approach, for instance the grammars of action (Martinec, 1998), of images (Kress & van Leeuwen, 1996) and of sound (van Leeuwen, 1999). At the same time, a great variety of concrete texts comprising words, image, sound and other modes have been carefully analyzed, for example by Thibault (2000), Iedema (2001), Baldry & Thibault (2006), Kress et al. (2001) and Gu Yueguo (2006).

A great deal of work has also been done in the field of news analysis, with van Dijk and Fowler among the most prominent researchers, although they focused mainly on news texts. Kress and van Leeuwen (1998) pioneered an analytical framework for the analysis of newspaper front pages.

Building on these achievements in the study of multimodality, the author proposes an integrated framework for the analysis of front-page news reports, comprising three modes: front-page layout, news text and news photograph. The paper selects 30 front-page news reports from Chinese and American newspapers respectively. Combining quantitative and qualitative approaches, with statistical results presented in tables and figures, it conducts a multimodal discourse analysis in order to comprehensively compare the similarities and differences in their meaning representation, and to further explore the different journalistic cultures of the two countries.

