Tuesday, May 12, 2009

Ubiquitous Engagement

Today is a sunny day, good for outdoor activities, although I have to prepare for my qualifying exams, which loom over me like an invisible monster. When I reluctantly picked up my book bag and got into my car, my three little kids came downstairs and said their familiar goodbyes, as usual. On my way to school to prepare for the coming exams, I looked at the scenery along both sides of the road, thinking about my study schedule and imagining what would happen if I could not pass.
After arriving at the school where I teach, I suddenly remembered that the reading room had been reserved for an activity. If I wanted to study for the exam attentively, I would have to find somewhere else to concentrate. While I was wondering where to sit and start my usual weekend work, it occurred to me that I could take a seat under a tree on campus.

Saturday, May 9, 2009

A Book List for Qualitative Research

1. 陳向明(2002)。社會科學質的研究。台北:五南。
(This is more or less the bible for qualitative research methods courses in Taiwan, even though it was written by a mainland Chinese author.)
2. 潘淑滿(2003)。質性研究:理論與應用。台北:心理。
3. 丁興祥譯(2006)。質性心理學。台北:遠流。
4. 王國川譯(2005)。質性資料分析:如何透視質性資料。台北:五南。
Boyatzis, R. E. (1998). Transforming Qualitative Information: Thematic Analysis and Code Development. Thousand Oaks: Sage Publications.
(For this book on qualitative data analysis, I think the English original is better; if you can, read it in the original.)
5. 張芬芬譯(2005)。質性研究資料分析。台北:雙葉。
Miles, M. B., & Huberman, A. M. (1994). Qualitative Data Analysis: An Expanded Sourcebook. Thousand Oaks: Sage Publications.
(For this one, too, I recommend reading the English original.)
6. 李政賢譯(2007)。質性研究導論。台北:五南。
(The later chapters of this one discuss the running battle between quantitative and qualitative research.)
7. 郭俊偉譯(2008)。質性研究論文撰寫。台北:五南。
8. 羅世宏譯(2008)。質性資料分析:文本.影像與聲音。台北:五南。
9. 顧瑜君譯(2007)。質性研究寫作。台北:五南。
10. Rubin, H. J., & Rubin, I. S. (2005). Qualitative Interviewing: The Art of Hearing Data. Thousand Oaks: Sage Publications.

Books on Narrative Research
1. 蔡敏玲譯(2003)。敘說探究-質性研究中的經驗與故事。台北:心理。
2. 王勇智譯(2003)。敘說分析。台北:五南。

Sometimes I find qualitative research quite interesting; at the very least it lets the researcher get very close to people, rather than working away behind closed doors.

Comparing Narrative Inquiry and Case Study Research

Case study is one of the major traditions in qualitative research. Because narrative inquiry shares some research assumptions with the case study approach, the two are easily confused. First, case study and narrative inquiry rest on a common assumption: social phenomena should be studied from a holistic point of view (Goode & Hatt, 1952; Mitchell, 1983; Clandinin & Connelly, 2003). Social situations are complex and cannot be reduced to a handful of fixed axioms or laws, so in both case study and narrative inquiry the researcher's object of concern is lived experience (Stake, 1978; Clandinin & Connelly, 2003). Second, case study and narrative inquiry share a common form of presentation: the story. Both present what is gathered from interviews or observation in story form, with characters, a plot, a setting, and so on.


Where, then, do the two differ? The main difference lies in how they handle data. Case study leans toward naturalism: it takes all of the interviewees' answers at face value and uses them as the basis for inference. Such an approach, however, tends toward objective observation and faithful reproduction, and cannot give voice to groups that have been suppressed in society, such as women and ethnic minorities (Denzin & Lincoln, 2000). By contrast, narrative inquiry leans toward interpretivism. It encourages the researcher to uncover non-mainstream voices and allows the researcher to move within a three-dimensional narrative space, comprising the interaction between the personal and the social, temporality across past, present, and future, and the context of place, and to build interpretation on that basis (Clandinin & Connelly, 2003). Narrative inquiry lets the researcher link different times and places more freely, so that the protagonist's experience can be interpreted more fully and human experience that seems disjointed or illogical can be connected into a whole.

Second, case study research and narrative inquiry pursue different goals. Case study research aims to piece together the truth by solving a puzzle, so the researcher must verify what interviewees say in order to arrive at more credible facts. Narrative inquiry, by contrast, aims to understand through language the rationale constructed behind the words, asking how the interviewee says things and why they say them that way, rather than judging whether the statements are true. The purpose of narrative, then, is to find the meaning behind language, not to pursue truth.

Third, the researcher is positioned differently in case study research and in narrative inquiry. Case study research still strives for objectivity, so the researcher must maintain an objective, neutral distance from interviewees in order to analyze them rationally. Narrative inquiry, however, is essentially concerned with experience, including the separate experiences of interviewee and researcher and even the experience the two construct together. For example, Clandinin and Connelly (2003) describe JoAnn Phillion's research: before entering the field, she expected to find in her participant, a West Indian teacher living in Canada, a strong sense of ethnic consciousness, but afterwards discovered this was not the case. Phillion then recognized the boundaries of her earlier research. She had gone into the field to study a "cultural template" rather than "one person's story," and this realization gave her new insight. The example shows clearly that narrative inquiry stresses the researcher's understanding of where he or she stands in the research, which is quite different from the complete objectivity pursued by case study research.

Admittedly, later developments in case study research produced the so-called "interpretive case study," which holds that researchers may, where appropriate, bring their own viewpoints into the case. Even so, judging from the three points above, data handling, research purpose, and researcher positioning, narrative inquiry still gives the researcher more room, even though case study research also allows some personal interpretation. As one approach within qualitative research, narrative inquiry can complement case study research, and the two can reinforce each other.



References

1. Clandinin, D. J., & Connelly, F. M. (2003). 敘說探究:質性研究中的經驗與故事 (蔡敏玲、徐曉雯 譯). 台北:心理。
2. Denzin, N. K., & Lincoln, Y. S. (Eds.). (2000). Handbook of Qualitative Research (2nd ed.). Thousand Oaks, CA: Sage.
3. Goode, W. J., & Hatt, P. K. (1952). Methods in Social Research. New York: McGraw-Hill.
4. Mitchell, J. C. (1983). Case and situation analysis. Sociological Review, 31(2), 187-211.
5. Stake, R. E. (1978). The case study method in social inquiry. Educational Researcher, 7(2), 5-8.

Wednesday, May 6, 2009

The Problem Solving Process

The Problem Solving process consists of a sequence of sections that fit together depending on the type of problem to be solved. These are:
Problem Definition.
Problem Analysis.
Generating possible Solutions.
Analyzing the Solutions.
Selecting the best Solution(s).
Planning the next course of action (Next Steps)

The process is only a guide for problem solving. It is useful to have a structure to follow to make sure that nothing is overlooked. Nothing here is likely to be brand new to anyone, but simply acknowledging the process and being reminded of it can help problems get solved.

1. Problem Definition

The normal process for solving a problem will initially involve defining the problem you want to solve. You need to decide what you want to achieve and write it down. Often people keep the problem in their head as a vague idea and can get so lost in what they are trying to solve that no solution seems to fit. Merely writing down the problem forces you to think about what you are actually trying to solve and how much you want to achieve. The first part of the process not only involves writing down the problem to solve, but also checking that you are answering the right problem. It is a check-step to ensure that you do not answer a side issue or only solve the part of the problem that is easiest to solve. People often use the most immediate solution to the first problem definition that they find without spending time checking that the problem is the right one to answer.

2. Problem Analysis

The next step in the process is often to check where we are, what the current situation is, and what is involved in making it a problem. For example, what are the benefits of the current product/service/process? And why did we decide to make it like that? Understanding where the problem is coming from, how it fits in with current developments, and what the current environment is, is crucial when working out whether a solution will actually work or not. Similarly, you must have a set of criteria by which to evaluate any new solutions, or you will not know whether an idea is workable or not. This section of the problem solving process ensures that time is spent stepping back and assessing the current situation and what actually needs to be changed.

After this investigation, it is often good to go back one step to reconfirm that your problem definition is still valid. Frequently after the investigation people discover that the problem they really want to answer is very different from their original interpretation of it.

3. Generating possible Solutions

When you have discovered the real problem that you want to solve and have investigated the climate into which the solution must fit, the next stage is to generate a number of possible solutions. At this stage you should concentrate on generating many solutions and should not evaluate them at all. Very often an idea that would otherwise have been discarded immediately can, when evaluated properly, be developed into a superb solution. At this stage, you should not pre-judge any potential solutions but should treat each idea as a new idea in its own right and worthy of consideration.

4. Analyzing the Solutions

This section of the problem solving process is where you investigate the various factors about each of the potential solutions. You note down the good and bad points and other things which are relevant to each solution. Even at this stage you are not evaluating the solutions, because if you did, you might decide not to write down the valid good points of an idea you think will not work overall. However, by writing down its advantages you might discover that it has a totally unique advantage. Only by discovering this might you choose to put in the effort to develop the idea so that it will work.

5. Selecting the best Solution(s)

This is the section where you look through the various influencing factors for each possible solution and decide which solutions to keep and which to disregard. You look at the solution as a whole and use your judgement as to whether to use it or not. In Innovation Toolbox, you can vote using either a Yes/No/Interesting process or on a sliding scale depending on how good the idea is. Sometimes pure facts and figures dictate which ideas will work and which will not. In other situations, it will be purely feelings and intuition that decide. Remember that intuition is really a lifetime's experience and judgement compressed into a single decision.

By voting on the solutions you will end up with a shortlist of potential solutions. You may want to analyze each shortlisted idea in more depth and vote again to refine the shortlist further.

You will then end up with one, many, or no viable solutions. If no solutions work, you will need to repeat the solution-generation step to discover more potential solutions. Alternatively, you might reconsider the problem itself, as sometimes no solution can be found because the problem definition is vague or self-contradictory.

6. Planning the next course of action (Next Steps)

This section of the process is where you write down what you are going to do next. Now that you have a potential solution or solutions you need to decide how you will make the solution happen. This will involve people doing various things at various times in the future and then confirming that they have been carried out as planned. This stage ensures that the valuable thinking that has gone into solving the problem becomes reality. This series of Next Steps is the logical step to physically solving the problem.
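Taken together, the six steps above form a loop: define and analyze the problem, generate and analyze candidate solutions, select, and either plan next steps or go back and regenerate (or redefine the problem) when nothing viable survives. The sketch below is only my own minimal illustration of that control flow in Python; the helper functions and the max_rounds guard are invented placeholders for human judgement, not part of the source article.

    # A minimal sketch (not from the source article) of the six-step loop above.
    # The callables passed in stand for human judgement at each step.

    def solve_problem(define, analyze_situation, generate, analyze_solution,
                      select, plan_next_steps, max_rounds=3):
        problem = define()                              # 1. Problem definition (write it down)
        context = analyze_situation(problem)            # 2. Problem analysis
        for _ in range(max_rounds):
            candidates = generate(problem, context)     # 3. Generate without judging
            analyzed = [(s, analyze_solution(s)) for s in candidates]  # 4. Note pros and cons
            shortlist = select(analyzed)                # 5. Keep only workable solutions
            if shortlist:
                return plan_next_steps(shortlist)       # 6. Next steps: who does what, when
            # No viable solution: revisit the problem definition and try again.
            problem = define()
            context = analyze_situation(problem)
        return None  # the problem may be ill-defined or self-contradictory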

SOURCES:
http://www.gdrc.org/decision/problem-solve.html

2009年5月2日 星期六

A SYNTHESIS OF ETHNOGRAPHIC RESEARCH

AN ETHNOGRAPHY


"When used as a method, ethnography typically refers to fieldwork (alternatively, participant-observation) conducted by a single investigator who 'lives with and lives like' those who are studied, usually for a year or more." --John Van Maanen, 1996.


"Ethnography literally means 'a portrait of a people.' An ethnography is a written description of a particular culture - the customs, beliefs, and behavior - based on information collected through fieldwork." --Marvin Harris and Orna Johnson, 2000.


"Ethnography is the art and science of describing a group or culture. The description may be of a small tribal group in an exotic land or a classroom in middle-class suburbia." --David M. Fetterman, 1998.


Ethnography is a social science research method. It relies heavily on up-close, personal experience and possible participation, not just observation, by researchers trained in the art of ethnography. These ethnographers often work in multidisciplinary teams. The ethnographic focal point may include intensive language and culture learning, intensive study of a single field or domain, and a blend of historical, observational, and interview methods. Typical ethnographic research employs three kinds of data collection: interviews, observation, and documents. This in turn produces three kinds of data: quotations, descriptions, and excerpts of documents, resulting in one product: narrative description. This narrative often includes charts, diagrams and additional artifacts that help to tell "the story" (Hammersley, 1990). Ethnographic methods can give shape to new constructs or paradigms, and new variables, for further empirical testing in the field or through traditional, quantitative social science methods.


Ethnography has its roots planted in the fields of anthropology and sociology. Present-day practitioners conduct ethnographies in organizations and communities of all kinds. Ethnographers study schooling, public health, rural and urban development, consumers and consumer goods, and indeed any human arena. While particularly suited to exploratory research, ethnography draws on a wide range of both qualitative and quantitative methodologies, moving from "learning" to "testing" (Agar, 1996) while research problems, perspectives, and theories emerge and shift.


Ethnographic methods are a means of tapping local points of view, households and community "funds of knowledge" (Moll & Greenberg, 1990), a means of identifying significant categories of human experience up close and personal. Ethnography enhances and widens top down views and enriches the inquiry process, taps both bottom-up insights and perspectives of powerful policy-makers "at the top," and generates new analytic insights by engaging in interactive, team exploration of often subtle arenas of human difference and similarity. Through such findings ethnographers may inform others of their findings with an attempt to derive, for example, policy decisions or instructional innovations from such an analysis.


--------------------------------------------------------------------------------


VARIATIONS IN OBSERVATIONAL METHODS


Observational research is not a single thing. The decision to employ field methods in gathering informational data is only the first step in a decision process that involves a large number of options and possibilities. Making the choice to employ field methods involves a commitment to get close to the subject being observed in its natural setting, to be factual and descriptive in reporting what is observed, and to find out the points of view of participants in the domain observed. Once these fundamental commitments have been made, it is necessary to make additional decisions about which particular observational approaches are appropriate for the research situation at hand.


VARIATIONS IN OBSERVER INVOLVEMENT: PARTICIPANT OR ONLOOKER?


The first and most fundamental distinction among observational strategies concerns the extent to which the observer is also a participant in the program activities being studied. This is not really a simple choice between participation and nonparticipation. The extent of participation is a continuum which varies from complete immersion in the program as full participant to complete separation from the activities observed, taking on a role as spectator; there is a great deal of variation along the continuum between these two extremes.


Participant observation is an omnibus field strategy in that it "simultaneously combines document analysis, interviewing of respondents and informants, direct participation and observation, and introspection." In participant observation the researcher shares as intimately as possible in the life and activities of the people in the observed setting. The purpose of such participation is to develop an insider's view of what is happening. This means that the researcher not only sees what is happening but "feels" what it is like to be part of the group.


Experiencing an environment as an insider is what necessitates the participant part of participant observation. At the same time, however, there is clearly an observer side to this process. The challenge is to combine participation and observation so as to become capable of understanding the experience as an insider while describing the experience for outsiders.


The extent to which it is possible for a researcher to become a full participant in an experience will depend partly on the nature of the setting being observed. For example, in human service and education programs that serve children, it is not possible for the researcher to become a student and therefore experience the setting as a child; it may be possible, however, for the research observer to participate as a volunteer, parent, or staff person in such a setting and thereby develop the perspective of an insider in one of these adult roles. It should be said, though, that many ethnographers do not believe that understanding requires that they become full members of the group(s) being studied. Indeed, many believe that this must not occur if a valid and useful account is to be produced. These researchers believe the ethnographer must try to be both outsider and insider, staying on the margins of the group both socially and intellectually. This is because what is required is both an outside and an inside view. For this reason it is sometimes emphasized that, besides seeking to "understand", the ethnographer must also try to see familiar settings as "anthropologically strange", as they would be seen by someone from another society, adopting what we might call the Martian perspective.


--------------------------------------------------------------------------------


METHODOLOGICAL PRINCIPLES


Following are three methodological principles that provide the rationale for the specific features of the ethnographic method. They are also the basis for much of the criticism of quantitative research for failing to capture the true nature of human social behavior: because it relies on the study of artificial settings and/or on what people say rather than what they do; because it seeks to reduce meanings to what is observable; and because it reifies social phenomena by treating them as more clearly defined and static than they are, and as mechanical products of social and psychological factors (M. Hammersley, 1990). The three principles can be summarized under the headings of naturalism, understanding, and discovery:


1. Naturalism. This is the view that the aim of social research is to capture the character of naturally occurring human behavior, and that this can only be achieved by first-hand contact with it, not by inferences from what people do in artificial settings like experiments or from what they say in interviews about what they do elsewhere. This is the reason that ethnographers carry out their research in "natural" settings, settings that exist independently of the research process, rather than in those set up specifically for the purposes of research. Another important implication of naturalism is that in studying natural settings the researcher should seek to minimize her or his effects on the behavior of the people being studied. The aim of this is to increase the chances that what is discovered in the setting will be generalizable to other similar settings that have not been researched. Finally, the notion of naturalism implies that social events and processes must be explained in terms of their relationship to the context in which they occur.


2. Understanding. Central here is the argument that human actions differ from the behavior of physical objects, and even from that of other animals: they do not consist simply of fixed responses or even of learned responses to stimuli, but involve interpretation of stimuli and the construction of responses. Sometimes this argument reflects a complete rejection of the concept of causality as inapplicable to the social world, and an insistence on the freely constructed character of human actions and institutions. Others argue that causal relations are to be found in the social world, but that they differ from the "mechanical" causality typical of physical phenomena. From this point of view, if we are to be able to explain human actions effectively we must gain an understanding of the cultural perspectives on which they are based. That this is necessary is obvious when we are studying a society that is alien to us, since we shall find much of what we see and hear puzzling. However, ethnographers argue that it is just as important when we are studying more familiar settings. Indeed, when a setting is familiar the danger of misunderstanding is especially great. It is argued that we cannot assume that we already know others' perspectives, even in our own society, because particular groups and individuals develop distinctive worldviews. This is especially true in large complex societies. Ethnic, occupational, and small informal groups (even individual families or school classes) develop distinctive ways of orienting to the world that may need to be understood if their behavior is to be explained. Ethnographers argue, then, that it is necessary to learn the culture of the group one is studying before one can produce valid explanations for the behavior of its members. This is the reason for the centrality of participant observation and unstructured interviewing to ethnographic method.


3. Discovery. Another feature of ethnographic thinking is a conception of the research process as inductive or discovery-based, rather than as limited to the testing of explicit hypotheses. It is argued that if one approaches a phenomenon with a set of hypotheses, one may fail to discover its true nature, being blinded by the assumptions built into the hypotheses. Instead, ethnographers typically begin with a general interest in some types of social phenomena and/or in some theoretical issue or practical problem. The focus of the research is narrowed and sharpened, and perhaps even changed substantially, as it proceeds. Similarly, and in parallel, the theoretical ideas that frame descriptions and explanations of what is observed are developed over the course of the research. Such ideas are regarded as a valuable outcome of, not a precondition for, research.


ETHNOGRAPHY AS METHOD


In terms of method, generally speaking, the term "ethnography" refers to social research that has most of the following features (M. Hammersley, 1990).


(a) People's behavior is studied in everyday contexts, rather than under experimental conditions created by the researcher.


(b) Data are gathered from a range of sources, but observation and/or relatively informal conversations are usually the main ones.


(c) The approach to data collection is "unstructured" in the sense that it does not involve following through a detailed plan set up at the beginning; nor are the categories used for interpreting what people say and do pre-given or fixed. This does not mean that the research is unsystematic; simply that initially the data are collected in as raw a form, and on as wide a front, as feasible.


(d) The focus is usually a single setting or group, of relatively small scale. In life history research the focus may even be a single individual.


(e) The analysis of the data involves interpretation of the meanings and functions of human actions and mainly takes the form of verbal descriptions and explanations, with quantification and statistical analysis playing a subordinate role at most.


As a set of methods, ethnography is not far removed from the sort of approach that we all use in everyday life to make sense of our surroundings. It is less specialized and less technically sophisticated than approaches like the experiment or the social survey, though all social research methods have their historical origins in the ways in which human beings gain information about their world in everyday life.


--------------------------------------------------------------------------------


SUMMARY GUIDELINES FOR FIELDWORK


It is difficult, if not impossible, to provide a precise set of rules and procedures for conducting fieldwork. What you do depends on the situation, the purpose of the study, the nature of the setting, and the skills, interests, needs, and point of view of the observer. Following are some generic guidelines for conducting fieldwork:


1. Be descriptive in taking field notes.


2. Gather a variety of information from different perspectives.


3. Cross-validate and triangulate by gathering different kinds of data. Example: observations, interviews, program documentation, recordings, and photographs.


4. Use quotations; represent program participants in their own terms. Capture participants' views of their own experiences in their own words.


5. Select key informants wisely and use them carefully. Draw on the wisdom of their informed perspectives, but keep in mind that their perspectives are limited.


6. Be aware of and sensitive to the different stages of fieldwork.


(a) Build trust and rapport at the entry stage. Remember that the researcher-observer is also being observed and evaluated.


(b) Stay alert and disciplined during the more routine middle-phase of fieldwork.


(c) Focus on pulling together a useful synthesis as fieldwork draws to a close.


(d) Be disciplined and conscientious in taking detailed field notes at all stages of fieldwork.


(e) Be involved in experiencing the observed setting as fully as possible while maintaining an analytical perspective grounded in the purpose of the fieldwork: to conduct research.


(f) Clearly separate description from interpretation and judgment.


(g) Provide formative feedback as part of the verification process of fieldwork. Time that feedback carefully. Observe its impact.


(h) Include in your field notes and observations reports of your own experiences, thoughts, and feelings. These are also field data.


Fieldwork is a highly personal experience: it is the meshing of fieldwork procedures with individual capabilities and situational variation that makes it so. The validity and meaningfulness of the results obtained depend directly on the observer's skill, discipline, and perspective. This is both the strength and the weakness of observational methods.


--------------------------------------------------------------------------------


SUMMARY GUIDELINES FOR INTERVIEWING


There is no one right way of interviewing, no single correct format that is appropriate for all situations, and no single way of wording questions that will always work. The particular evaluation situation, the needs of the interviewee, and the personal style of the interviewer all come together to create a unique situation for each interview. Therein lie the challenges of depth interviewing: situational responsiveness and sensitivity to get the best data possible.


There is no recipe for effective interviewing, but there are some useful guidelines that can be considered. These guidelines are summarized below (Patton, 1987).


1. Throughout all phases of interviewing, from planning through data collection to analysis, keep centered on the purpose of the research endeavor. Let that purpose guide the interviewing process.


2. The fundamental principle of qualitative interviewing is to provide a framework within which respondents can express their own understandings in their own terms.


3. Understand the strengths and weaknesses of different types of interviews: the informal conversational interview; the interview guide approach; and the standardized open-ended interview.


4. Select the type of interview (or combination of types) that is most appropriate to the purposes of the research effort.


5. Understand the different kinds of information one can collect through interviews: behavioral data; opinions; feelings; knowledge; sensory data; and background information.


6. Think about and plan how these different kinds of questions can be most appropriately sequenced for each interview topic, including past, present, and future questions.


7. Ask truly open-ended questions.


8. Ask clear questions, using understandable and appropriate language.


9. Ask one question at a time.


10. Use probes and follow-up questions to solicit depth and detail.


11. Communicate clearly what information is desired, why that information is important, and let the interviewee know how the interview is progressing.


12. Listen attentively and respond appropriately to let the person know he or she is being heard.


13. Avoid leading questions.


14. Understand the difference between a depth interview and an interrogation. Qualitative evaluators conduct depth interviews; police investigators and tax auditors conduct interrogations.


15. Establish personal rapport and a sense of mutual interest.


16. Maintain neutrality toward the specific content of responses. You are there to collect information, not to make judgments about that person.


17. Observe while interviewing. Be aware of and sensitive to how the person is affected by and responds to different questions.


18. Maintain control of the interview.


19. Tape record whenever possible to capture full and exact quotations for analysis and reporting.


20. Take notes to capture and highlight major points as the interview progresses.


21. As soon as possible after the interview check the recording for malfunctions; review notes for clarity; elaborate where necessary; and record observations.


22. Take whatever steps are appropriate and necessary to gather valid and reliable information.


23. Treat the person being interviewed with respect. Keep in mind that it is a privilege and responsibility to peer into another person's experience.


24. Practice interviewing. Develop your skills.


25. Enjoy interviewing. Take the time along the way to stop and "hear" the roses.


--------------------------------------------------------------------------------


SITE DOCUMENTS


In addition to participant observation and interviews, ethnographers may also make use of various documents in answering guiding questions. When available, these documents can add insight or information to a project. Because ethnographic attention has been and continues to be focused on both literate and non-literate peoples, not all research projects will have site documents available. It is also possible that even research among a literate group will have no relevant site documents to consider; this will vary depending on the focus of the research. Thinking carefully about your participants and how they function, and asking questions of your informants, helps you decide what kinds of documents might be available.


Possible documents include: budgets, advertisements, work descriptions, annual reports, memos, school records, correspondence, informational brochures, teaching materials, newsletters, websites, recruitment or orientation packets, contracts, records of court proceedings, posters, minutes of meetings, menus, and many other kinds of written items.


For example, an ethnographer studying how limited-English-proficient elementary school students acquire English in a classroom setting might want to collect such things as the state- or school-mandated Bilingual/ESL curriculum for students in the school(s) where he or she does research, and examples of student work. Local school budget allocations to language minority education, specific teachers' lesson plans, and copies of age-appropriate ESL textbooks could also be relevant. It might also be useful to find subgroups of professional educators' organizations that focus on teaching elementary school language arts and to join their listservs, attend their meetings, or get copies of their newsletters, and to review cumulative student records and school district policies for language minority education. All of these things could greatly enrich the participant observation and the interviews that an ethnographer does.


Privacy or copyright issues may apply to the documents gathered, so it is important to inquire about this when you find or are given documents. If you are given permission to include what you learn from these documents in your final paper, the documents should be cited appropriately and included in the bibliography of the final paper. If you are not given permission, do not use them in any way.


--------------------------------------------------------------------------------


ETHICS IN ETHNOGRAPHIC RESEARCH


Since ethnographic research takes place among real human beings, there are a number of special ethical concerns to be aware of before beginning. In a nutshell, researchers must make their research goals clear to the members of the community where they undertake their research and gain the informed consent of their consultants to the research beforehand. It is also important to learn whether the group would prefer to be named in the written report of the research or given a pseudonym and to offer the results of the research if informants would like to read it. Most of all, researchers must be sure that the research does not harm or exploit those among whom the research is done.


ANALYZING, INTERPRETING AND REPORTING FINDINGS


Remember that the researcher is the detective looking for trends and patterns that occur across the various groups or within individuals (Krueger, 1994). The process of analysis and interpretation involves disciplined examination, creative insight, and careful attention to the purposes of the research study. Analysis and interpretation are conceptually separate processes. The analysis process begins with assembling the raw materials and getting an overview or total picture of the entire process. The researcher's role in analysis covers a continuum, with assembly of raw data at one extreme and interpretative comments at the other. Analysis is the process of bringing order to the data, organizing what is there into patterns, categories, and basic descriptive units. The analysis process involves consideration of words, tone, context, non-verbals, internal consistency, frequency, extensiveness, intensity, specificity of responses, and big ideas. Data reduction strategies are essential in the analysis (Krueger, 1994).


Interpretation involves attaching meaning and significance to the analysis, explaining descriptive patterns, and looking for relationships and linkages among descriptive dimensions. Once these processes have been completed, the researcher must report his or her interpretations and conclusions.


QUALITATIVE DESCRIPTION


Reports based on qualitative methods will include a great deal of pure description of the program and/or the experiences of people in the research environment. The purpose of this description is to let the reader know what happened in the environment under observation, what it was like from the participants' point of view to be in the setting, and what particular events or activities in the setting were like. In reading through field notes and interviews the researcher begins to look for those parts of the data that will be polished for presentation as pure description in the research report. What is included by way of description will depend on what questions the researcher is attempting to answer. Often an entire activity will be reported in detail and depth because it represents a typical experience. These descriptions are written in narrative form to provide a holistic picture of what has happened in the reported activity or event.


REPORTING FINDINGS


The actual content and format of a qualitative report will depend on the information needs of primary stakeholders and the purpose of the research. Even a comprehensive report will have to omit a great deal of the data collected by the researcher. Focus is essential. Analysts who try to include everything risk losing their readers in the sheer volume of the presentation. This process has been referred to as "the agony of omitting". The agony of omitting on the part of the researcher is matched only by the readers' agony in having to read those things that were not omitted, but should have been.


BALANCE BETWEEN DESCRIPTION AND ANALYSIS


In considering what to omit, a decision has to be made about how much description to include. Detailed description and in-depth quotations are the essential qualities of qualitative accounts. Sufficient description and direct quotations should be included to allow readers to understand fully the research setting and the thoughts of the people represented in the narrative. Description should stop short, however, of becoming trivial and mundane. The reader does not have to know absolutely everything that was done or said. Again the problem of focus arises.


Description is balanced by analysis and interpretation. Endless description becomes its own muddle. The purpose of analysis is to organize the description in a way that makes it manageable, and analysis in turn leads into interpretation. An interesting and readable final account provides sufficient description to allow the reader to understand the analysis and sufficient analysis to allow the reader to understand the interpretations and explanations presented.



--------------------------------------------------------------------------------


REFERENCES AND SUGGESTED READINGS


Agar, M. (1996). The Professional Stranger: An Informal Introduction to Ethnography (2nd ed.). Academic Press.


Fetterman, D. M. (1998). Ethnography (2nd ed.). Thousand Oaks, CA: Sage Publications.


Hammersley, M. (1990). Reading Ethnographic Research: A Critical Guide. London: Longman.


Harris, M. & Johnson, O. (2000). Cultural Anthropology, (5th ed.), Needham Heights, MA: Allyn and Bacon.


Krueger, R. A. (1994). Focus Groups: A Practical Guide for Applied Research. Thousand Oaks, CA: Sage Publications.


Moll, L.C. & Greenberg, J.M. (1990). Creating Zones of Possibilities: Combining Social Constructs for Instruction. In: L.C. Moll (ed.) Vygotsky and Education: Instructional Implications and Applications of Sociohistorical Psychology, New York, NY: Cambridge University Press.


Patton, M. Q. (1987). How to Use Qualitative Methods in Evaluation. Newbury Park, CA: Sage Publications.


Spradley, J. (1980). Participant Observation. New York: Holt, Rinehart and Winston.


Spradley, J. (1979). The Ethnographic Interview. New York: Holt, Rinehart and Winston.


Van Maanen, J. (1996). Ethnography. In: A. Kuper and J. Kuper (eds.) The Social Science Encyclopedia, 2nd ed., pages 263-265. London: Routledge.


Yin, R. K. (1989). Case Study Research: Design and Methods. Newbury Park, CA: Sage Publications.


http://www-rcf.usc.edu/~genzuk/Ethnographic_Research.html

Learning Theory - Schema Theory

Schemata are psychological constructs that have been proposed as a form of mental representation for some forms of complex knowledge.

Bartlett's Schema Theory
Schemata were initially introduced into psychology and education through the work of the British psychologist Sir Frederic Bartlett (1886–1969). In carrying out a series of studies on the recall of Native American folktales, Bartlett noticed that many of the recalls were not accurate, but involved the replacement of unfamiliar information with something more familiar. They also included many inferences that went beyond the information given in the original text. In order to account for these findings, Bartlett proposed that people have schemata, or unconscious mental structures, that represent an individual's generic knowledge about the world. It is through schemata that old knowledge influences new information.

For example, one of Bartlett's participants read the phrase "something black came out of his mouth" and later recalled it as "he foamed at the mouth." This finding could be accounted for by assuming that the input information was not consistent with any schema held by the participant, and so the original information was reconstructed in a form that was consistent with one of the participant's schemata. The schema construct was developed during the period when psychology was strongly influenced by behaviorist and associationistic approaches; because the schema construct was not compatible with these worldviews, it eventually faded from view.

Minsky's Frame Theory
In the 1970s, however, the schema construct was reintroduced into psychology through the work of the computer scientist Marvin Minsky. Minsky was attempting to develop machines that would display human-like abilities (e.g., to perceive and understand the world). In the course of trying to solve these difficult problems, he came across Bartlett's work. Minsky concluded that humans were using their stored knowledge about the world to carry out many of the processes that he was trying to emulate by machine, and he therefore needed to provide his machines with this type of knowledge if they were ever to achieve human-like abilities. Minsky developed the frame construct as a way to represent knowledge in machines. Minsky's frame proposal can be seen as essentially an elaboration and specification of the schema construct. He conceived of frame knowledge as interacting with new specific information coming from the world. He proposed that fixed generic information be represented as a frame composed of slots that accept a certain range of values. If the world did not provide a specific value for a particular slot, then it could be filled by a default value.

For example, consider the representation of a generic (typical) elementary school classroom. The frame for such a classroom includes certain information, such as that the room has walls, a ceiling, lights, and a door. The door can be thought of as a slot which accepts values such as wood door or metal door, but does not accept a value such as a door made of jello. If a person or a machine is trying to represent a particular elementary school classroom, the person or machine instantiates the generic frame with specific information from the particular classroom (e.g., it has a window on one wall, and the door is wooden with a small glass panel). If, for some reason, one does not actually observe the lights in the classroom, one can fill the lighting slot with the default assumption that they are fluorescent lights. This proposal gives a good account of a wide range of phenomena. It explains, for example, why one would be very surprised to walk into an elementary classroom and find that it did not have a ceiling, and it accounts for the fact that someone might recall that a certain classroom had fluorescent lights when it did not.
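Minsky's proposal is easy to picture as a data structure: a frame is a set of named slots, each with a range of acceptable fillers and an optional default that is used when the world supplies no value. The sketch below is only an illustration of that idea in Python; the slot names and values follow the classroom example above, but the code itself is an invented illustration, not Minsky's.

    # Illustrative sketch of a frame: named slots, each with acceptable fillers
    # and a default used when no value is observed.

    CLASSROOM_FRAME = {
        "walls":    {"allowed": {"plaster", "brick", "concrete"},  "default": "plaster"},
        "ceiling":  {"allowed": {"tile", "plaster"},               "default": "tile"},
        "lighting": {"allowed": {"fluorescent", "incandescent"},   "default": "fluorescent"},
        "door":     {"allowed": {"wood", "metal"},                 "default": "wood"},
    }

    def instantiate(frame, observations):
        """Fill a frame with observed values; unobserved slots fall back to defaults."""
        instance = {}
        for slot, spec in frame.items():
            value = observations.get(slot, spec["default"])
            if value not in spec["allowed"]:
                raise ValueError(f"{value!r} is not an acceptable filler for slot {slot!r}")
            instance[slot] = value
        return instance

    # A particular classroom: the wooden door was observed, the lights were not,
    # so the lighting slot is filled by its default ("fluorescent").
    room = instantiate(CLASSROOM_FRAME, {"door": "wood"})
    # instantiate(CLASSROOM_FRAME, {"door": "jello"}) would be rejected.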

Modern Schema Theory
Minsky's work in computer science had a strong and immediate impact on psychology and education. In 1980 the cognitive psychologist David Rumelhart elaborated on Minsky's ideas and turned them into an explicitly psychological theory of the mental representation of complex knowledge. Roger Schank and Robert Abelson developed the script construct to deal with generic knowledge of sequences of actions. Schema theory provided explanations for many experiments already in the literature, and led to a very wide variety of new empirical studies. Providing a relevant schema improved comprehension and recall of opaquely written passages, and strong schemata were shown to lead to high rates of inferential errors in recall.

Broad versus Narrow Use of Schema
In retrospect, it is clear that there has been an ambiguity in schema theory between a narrow use and a broad use of the term schema. For example, in Rumelhart's classic 1980 paper, he defined a schema as "a data structure for representing the generic concepts stored in memory" (p. 34). Yet he went on to state that "there are schemata representing our knowledge about all concepts: those underlying objects, situations, events, sequences of events, actions and sequences of actions" (p. 34). Thus, schemata are frequently defined as the form of mental representation for generic knowledge, but are then used as the term for the representation of all knowledge.

There are severe problems with the use of the term schema to refer to all forms of complex knowledge. First, there is no need for a new technical term, since the ordinary term knowledge has this meaning. In addition, if schema theory is used to account for all knowledge, then it fails. A number of writers have pointed out that schema theory, as presently developed, cannot deal with those forms of knowledge that do not involve old generic information. Thus, schema theory provides an account for the knowledge in long-term memory that the state of Oklahoma is directly above the state of Texas. However, schema theory does not provide an account of the new representation one develops of a town as one travels through it for the first time.

Therefore it seems best to use the term schema in the narrower usage, as the form of mental representation used for generic knowledge. However, if one adopts the narrower usage one has to accept that schemata are only the appropriate representations for a subset of knowledge and that other forms of mental representation are needed for other forms of knowledge. For example, mental models are needed to represent specific nonschematic aspects of knowledge, such as the layout of an unfamiliar town, while naive theories or causal mental models are needed to represent knowledge of causal/mechanical phenomena.

Schema Theory in Education
Richard Anderson, an educational psychologist, played an important role in introducing schema theory to the educational community. In a 1977 paper Anderson pointed out that schemata provided a form of representation for complex knowledge and that the construct, for the first time, provided a principled account of how old knowledge might influence the acquisition of new knowledge. Schema theory was immediately applied to understanding the reading process, where it served as an important counterweight to purely bottom-up approaches to reading. The schema-theory approaches to reading emphasize that reading involves both the bottom-up information from the perceived letters coming into the eye and the use of top-down knowledge to construct a meaningful representation of the content of the text.

Broad versus Narrow Use of Schema in Education
The problem with the broad and narrow use of the term schema surfaced in education just as it had in cognitive psychology. For example, in Anderson's classic 1977 paper on schemata in education, he clearly takes the broad view. He attacks the narrow view and says that it is impossible "that people have stored a schema for every conceivable scene, event sequence, and message" (p. 421), and that "an adequate theory must explain how people cope with novelty" (p. 421). However in a paper written at roughly the same time (1978), Anderson states that "a schema represents generic knowledge" (p. 67), and he adopts the narrow view systematically throughout the paper. In a 1991 paper on terminology in education, Patricia Alexander, Diane Schallert, and Victoria Hare note that the systematic ambiguity between the narrow and broad views has made it very difficult to interpret a given writer's use of the term schema in the education literature.

Instructional Implications of Schema Theory
A number of writers have derived instructional proposals from schema theory. They have suggested that relevant knowledge should be activated before reading; that teachers should try to provide prerequisite knowledge; and that more attention should be given to teaching higher-order comprehension processes. Many of these proposals are not novel, but schema theory appears to provide a theoretical and empirical basis for instructional practices that some experienced teachers were already carrying out.


Impact of Schema Theory on Education
Schema theory has provided education with a way to think about the representation of some forms of complex knowledge. It has focused attention on the role old knowledge plays in acquiring new knowledge, and has emphasized the role of top-down, reader-based influences in the reading process.


BIBLIOGRAPHY
ADAMS, MARILYN J., and COLLINS, ALLAN. 1979. "A Schema-Theoretic View of Reading." In New Directions in Discourse Processing, Vol. 2: Advances in Discourse Processes, ed. Roy O. Freedle. Norwood, NJ: Ablex.

ALEXANDER, PATRICIA A.; SCHALLERT, DIANE L.; and HARE, VICTORIA C. 1991. "Coming to Terms: How Researchers in Learning and Literacy Talk about Knowledge." Review of Educational Research 61:315–343.

ANDERSON, RICHARD C. 1977. "The Notion of Schemata and the Educational Enterprise: General Discussion of the Conference." In Schooling and the Acquisition of Knowledge, ed. Richard C. Anderson, Rand J. Spiro, and William E. Montague. Hillsdale, NJ: Erlbaum.

ANDERSON, RICHARD C. 1978. "Schema-Directed Processes in Language Comprehension." In Cognitive Psychology and Instruction, ed. Alan M. Lesgold, James W. Pellegrino, Sipke D. Fokkema, and Robert Glaser. New York: Plenum.

ANDERSON, RICHARD C. 1984. "Role of the Reader's Schema in Comprehension, Learning, and Memory." In Learning to Read in American Schools: Basal Readers and Content Texts, ed. Richard C. Anderson, Jean Osborn, and Robert J. Tierney. Hillsdale, NJ: Erlbaum.

ANDERSON, RICHARD C., and PEARSON, P. DAVID. 1984. "A Schema-Theoretic View of Basic Processes in Reading Comprehension." In Handbook of Reading Research, ed. P. David Pearson. New York: Longman.

BARTLETT, FREDERIC C. 1932. Remembering. Cambridge, Eng.: Cambridge University Press.

BRANSFORD, JOHN D., and JOHNSON, MARCIA K. 1973. "Considerations of Some Problems of Comprehension." In Visual Information Processing, ed. William G. Chase. New York: Academic.

BREWER, WILLIAM F. 1987. "Schemas Versus Mental Models in Human Memory." In Modelling Cognition, ed. Peter Morris. Chichester, Eng.: Wiley.

BREWER, WILLIAM F. 1999. "Scientific Theories and Naive Theories as Forms of Mental Representation: Psychologism Revived." Science and Education 8:489–505.

BREWER, WILLIAM F. 2000. "Bartlett's Concept of the Schema and Its Impact on Theories of Knowledge Representation in Contemporary Cognitive Psychology." In Bartlett, Culture and Cognition, ed. Akiko Saito. Hove, Eng.: Psychology Press.

BREWER, WILLIAM F., and NAKAMURA, GLENN V. 1984. "The Nature and Functions of Schemas." In Handbook of Social Cognition, Vol. 1, ed. Robert S. Wyer, Jr. and Thomas K. Srull. Hillsdale, NJ: Erlbaum.

HACKER, CHARLES J. 1980. "From Schema Theory to Classroom Practice." Language Arts 57:866–871.

JOHNSON-LAIRD, PHILIP N. 1983. Mental Models. Cambridge, MA: Harvard University Press.

MINSKY, MARVIN. 1975. "A Framework for Representing Knowledge." In The Psychology of Computer Vision, ed. Patrick H. Winston. New York: McGraw-Hill.

RUMELHART, DAVID E. 1980. "Schemata: The Building Blocks of Cognition." In Theoretical Issues in Reading Comprehension, ed. Rand J. Spiro, Bertram C. Bruce, and William F. Brewer. Hillsdale, NJ: Erlbaum.

SCHANK, ROGER C., and ABELSON, ROBERT P. 1977. Scripts, Plans, Goals and Understanding. Hillsdale, NJ: Erlbaum.


http://education.stateuniversity.com/pages/2175/Learning-Theory-SCHEMA-THEORY.html

Schema Theory

Linguists, cognitive psychologists, and psycholinguists have used the concept of schema (plural: schemata) to understand the interaction of key factors affecting the comprehension process. Simply put, schema theory states that all knowledge is organized into units. Within these units of knowledge, or schemata, is stored information. A schema, then, is a generalized description or a conceptual system for understanding knowledge: how knowledge is represented and how it is used. According to this theory, schemata represent knowledge about concepts: objects and the relationships they have with other objects, situations, events, sequences of events, actions, and sequences of actions.
A simple example is to think of your schema for dog. Within that schema you most likely have knowledge about dogs in general (bark, four legs, teeth, hair, tails) and probably information about specific dogs, such as collies (long hair, large, Lassie) or springer spaniels (English, docked tails, liver and white or black and white, Millie). You may also think of dogs within the greater context of animals and other living things; that is, dogs breathe, need food, and reproduce. Your knowledge of dogs might also include the fact that they are mammals and thus are warm-blooded and bear their young as opposed to laying eggs. Depending upon your personal experience, the knowledge of a dog as a pet (domesticated and loyal) or as an animal to fear (likely to bite or attack) may be a part of your schema. And so it goes with the development of a schema. Each new experience incorporates more information into one's schema.
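Read as a data structure, the dog schema described above is just an organized unit of knowledge: general attributes, more specific sub-schemata, links to broader categories, and room to grow as new experiences are added. The snippet below is only a toy illustration of that organization; the field names are my own, not terms from schema theory.

    # Toy sketch of a "dog" schema as a nested unit of knowledge (illustration only).
    dog_schema = {
        "general": ["barks", "four legs", "teeth", "hair", "tail"],
        "specific": {
            "collie": ["long hair", "large", "Lassie"],
            "springer spaniel": ["English", "docked tail",
                                 "liver and white or black and white", "Millie"],
        },
        "broader context": ["animal", "living thing", "mammal",
                            "warm-blooded", "bears live young"],
        "personal experience": ["pet: domesticated and loyal"],
    }

    # "Each new experience incorporates more information into one's schema":
    dog_schema["personal experience"].append("neighbor's dog: likely to bite or attack")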
What does all this have to do with reading comprehension? Individuals have schemata for everything. Long before students come to school, they develop schemata (units of knowledge) about everything they experience. Schemata become theories about reality. These theories not only affect the way information is interpreted, thus affecting comprehension, but also continue to change as new information is received.

As stated by Rumelhart (1980), "schemata can represent knowledge at all levels, from ideologies and cultural truths to knowledge about the meaning of a particular word, to knowledge about what patterns of excitations are associated with what letters of the alphabet. We have schemata to represent all levels of our experience, at all levels of abstraction. Finally, our schemata are our knowledge. All of our generic knowledge is embedded in schemata" (p. 41).

The importance of schema theory to reading comprehension also lies in how the reader uses schemata. This issue has not yet been resolved by research, although investigators agree that some mechanism activates just those schemata most relevant to the reader's task.
Reading Comprehension as Cognitive-Based Processing
There are several models based on cognitive processing (see Ruddell, Ruddell, & Singer, 1994, p. 813). For example, the LaBerge-Samuels Model of Automatic Information Processing (Samuels, 1994) emphasizes internal aspects of attention as crucial to comprehension.

Samuels (1994, pp. 818-819) defines three characteristics of internal attention. The first, alertness, is the reader's active attempt to access relevant schemata involving letter-sound relationships, syntactic knowledge, and word meanings. Selectivity, the second characteristic, refers to the reader's ability to attend selectively to only that information requiring processing. The third characteristic, limited capacity, refers to the fact that our human brain has a limited amount of cognitive energy available for use in processing information. In other words, if a reader's cognitive energy is focused on decoding and attention cannot be directed at integrating, relating, and combining the meanings of the words decoded, then comprehension will suffer. "Automaticity in information processing, then, simply means that information is processed with little attention" (Samuels, 1994, p. 823). Comprehension difficulties occur when the reader cannot rapidly and automatically access the concepts and knowledge stored in the schemata.
One other example of a cognitive-based model is Rumelhart's (1994) Interactive Model. Information from several knowledge sources (schemata for letter-sound relationships, word meanings, syntactic relationships, event sequences, and so forth) is considered simultaneously. The implication is that when information from one source, such as word recognition, is deficient, the reader will rely on information from another source, for example, contextual clues or previous experience.
Stanovich (1980) terms the latter kind of processing interactive-compensatory because the reader (any reader) compensates for deficiencies in one or more of the knowledge sources by using information from the remaining knowledge sources. Those sources that are more concerned with concepts and semantic relationships are termed higher-level stimuli; sources dealing with the print itself, that is, phonics, sight words, and other word-attack skills, are termed lower-level stimuli.

The interactive-compensatory model implies that the reader will rely on higher-level processes when lower-level processes are inadequate, and vice versa. Stanovich (1980) extensively reviews research demonstrating such compensation in both good and poor readers.
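As a very rough picture of this compensation (my own illustration, not a model from Stanovich), the sketch below combines two knowledge sources in proportion to how reliable each currently is, so that weak print-based evidence automatically shifts the interpretation toward higher-level expectations, and vice versa. All names and numbers are invented for the example.

    # Toy illustration of interactive-compensatory processing (invented example):
    # each knowledge source proposes candidate words with a confidence, and the
    # sources are weighted by how reliable each one currently is.

    def combine_sources(lower_level, higher_level, lower_reliability):
        """lower_level / higher_level: dicts mapping candidate words to confidences.
        lower_reliability in [0, 1]: how trustworthy the print-based evidence is."""
        candidates = set(lower_level) | set(higher_level)
        scores = {
            word: lower_reliability * lower_level.get(word, 0.0)
                  + (1.0 - lower_reliability) * higher_level.get(word, 0.0)
            for word in candidates
        }
        return max(scores, key=scores.get)

    # Blurry print is weak evidence, so contextual expectation does most of the work:
    best = combine_sources(
        lower_level={"horse": 0.4, "house": 0.4},    # word-attack evidence from the print
        higher_level={"horse": 0.9, "house": 0.1},   # context: "The cowboy rode his ..."
        lower_reliability=0.3,
    )
    # best == "horse"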
Reading Comprehension as Sociocognitive Processing
A sociocognitive processing model takes a constructivist view of reading comprehension; that is, the reader, the text, the teacher, and the classroom community are all involved in the construction of meaning. Ruddell and Ruddell (1994, p. 813) state, "The role of the classroom's social context and the influence of the teacher on the reader's meaning negotiation and construction are central to this model [developed by R. B. Ruddell and N. J. Unrau] as it explores the notion that participants in literacy events form and reform meanings in a hermeneutic [interpretation] circle."

In other words, this model views comprehension as a process that involves meaning negotiation among text, readers, teachers, and other members of the classroom community. Schemata for text meanings, academic tasks, sources of authority (i.e., residing within the text, the reader, the teacher, the classroom community, or some interaction of these), and sociocultural settings are all brought to the negotiation task. The teacher's role is one of orchestrating the instructional setting and being knowledgeable about teaching/learning strategies and about the world.
Reading Comprehension as Transactional
The transactional model takes into account the dynamic nature of language and both aesthetic and cognitive aspects of reading. According to Rosenblatt (1994, p. 1063), "Every reading act is an event, or a transaction involving a particular reader and a particular pattern of signs, a text, and occurring at a particular time in a particular context. Instead of two fixed entities acting on one another, the reader and the text are two aspects of a total dynamic situation. The 'meaning' does not reside ready-made 'in' the text or 'in' the reader but happens or comes into being during the transaction between reader and text." Thus, text without a reader is merely a set of marks capable of being interpreted as written language. However, when a reader transacts with the text, meaning happens.

Schemata are not viewed as static but rather as active, developing, and ever changing. As readers transact with text they are changed or transformed, as is the text. Similarly, "the same text takes on different meanings in transactions with different readers or even with the same reader in different contexts or times" (Rosenblatt, 1994, p. 1078).
Reading Comprehension as Transactional-Sociopsycholinguistic
Building on Rosenblatt's transactional model, Goodman (1994)
conceptualizes literacy processing as including reading, writing, and
written texts. He states,
Texts are constructed by authors to be comprehended by readers. The
meaning is in the author and the reader. The text has a potential
to evoke meaning but has no meaning in itself; meaning is not a
characteristic of texts. This does not mean the characteristics of
the text are unimportant or that either writer or reader are independent
of them. How well the writer constructs the text and how well
the reader reconstructs it and constructs meaning will influence
comprehension. But meaning does not pass between writer and reader.
It is represented by a writer in a text and constructed from a text
by a reader. Characteristics of writer, text, and reader will all
influence the resultant meaning. (p. 1103)
In a transactional-sociopsycholinguistic view, the reader has a
highly active role. It is the individual transactions between a
reader and the text characteristics that result in meaning. These
characteristics include physical characteristics such as orthography (the alphabetic system, spelling, punctuation); format characteristics
such as paragraphing, lists, schedules, bibliographies;
macrostructure or text grammar such as that found in telephone books,
recipe books, newspapers, and letters; and wording of texts such as
the differences found in narrative and expository text.
Understanding is limited, however, by the reader's schemata, making
what the reader brings to the text as important as the text itself.
The writer also plays an important role in comprehension.
Additionally, readers' and writers' schemata are changed through
transactions with the text as meaning is constructed. Readers'
schemata are changed as new knowledge is assimilated and
accommodated. Writers' schemata are changed as new ways of organizing
text to express meaning are developed. According to Goodman (1994):
How well the writer knows the audience and has built the text to suit that audience
makes a major difference in text predictability and comprehension. However, since
comprehension results from reader-text transactions, what the reader knows, who the
reader is, what values guide the reader, and what purposes or interests the reader
has will play vital roles in the reading process. It follows that what is
comprehended from a given text varies among readers. Meaning is ultimately created
by each reader. (p. 1127)
Reading Comprehension as Influenced by Attitude
Mathewson's (1994) Model of Attitude Influence upon
Reading and Learning to Read is derived from the
area of social psychology. This model attempts to
explain the roles of affect and cognition in
reading comprehension.
The core of the attitude-influence model explains
that a reader's whole attitude toward reading
(i.e., prevailing feelings and evaluative beliefs
about reading and action readiness for reading)
will influence the intention to read, in turn
influencing reading behavior.
Intention to read is proposed as the primary
mediator between attitude and reading. Intention is
defined as "commitment to a plan for achieving one
or more reading purposes at a more or less
specified time in the future" (Mathewson, 1994, p.
1135). All other moderator variables (e.g.,
extrinsic motivation, involvement, prior knowledge,
and purpose) are viewed as affecting the attitude-reading relationship by influencing the intention to read.
Therefore, classroom environments that include
well-stocked libraries, magazines, reading tables,
and areas with comfortable chairs will enhance
students' intentions to read. Mathewson (1994, p.
1148) states, "Favorable attitudes toward reading
thus sustain intention to read and reading as long
as readers continue to be satisfied with reading
outcomes."

2009年5月1日 星期五

Error analysis

The field of error analysis in SLA was established in the 1970s by S. P. Corder and colleagues. A widely available survey can be found in chapter 8 of Brown (2000). Error analysis was an alternative to contrastive analysis, an approach influenced by behaviorism through which applied linguists sought to use the formal distinctions between the learners' first and second languages to predict errors. Error analysis showed that contrastive analysis was unable to predict a great majority of errors, although its more valuable aspects have been incorporated into the study of language transfer. A key finding of error analysis has been that many learner errors arise from learners making faulty inferences about the rules of the new language.

Error analysts distinguish between errors, which are systematic, and mistakes, which are not. They often seek to develop a typology of errors. Errors can be classified according to basic type: omissive, additive, substitutive or related to word order. They can be classified by how apparent they are: overt errors such as "I angry" are obvious even out of context, whereas covert errors are evident only in context. Closely related to this is the classification according to domain, the breadth of context which the analyst must examine, and extent, the breadth of the utterance which must be changed in order to fix the error. Errors may also be classified according to the level of language: phonological errors, vocabulary or lexical errors, syntactic errors, and so on. They may be assessed according to the degree to which they interfere with communication: global errors make an utterance difficult to understand, while local errors do not. In the above example, "I angry" would be a local error, since the meaning is apparent.
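As a concrete, if simplified, way of keeping track of such a typology, each observed error could be recorded along the dimensions just listed. The sketch below is a hypothetical bookkeeping structure of my own, not a tool from the error-analysis literature; only the "I angry" example comes from the text above.

# A minimal sketch of an error-analysis record, assuming the dimensions
# described above: basic type, overtness, linguistic level, and global vs.
# local effect on communication.
from dataclasses import dataclass

@dataclass
class LearnerError:
    utterance: str          # what the learner actually produced
    target: str             # a plausible target-language reconstruction
    basic_type: str         # "omissive", "additive", "substitutive", "word order"
    overt: bool             # recognisable as an error even out of context?
    level: str              # "phonological", "lexical", "syntactic", ...
    global_error: bool      # does it make the utterance hard to understand?

example = LearnerError(
    utterance="I angry",
    target="I am angry",
    basic_type="omissive",   # the copula is omitted
    overt=True,              # obvious even out of context
    level="syntactic",
    global_error=False,      # a local error: the meaning is still apparent
)
print(example)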

From the beginning, error analysis was beset with methodological problems. In particular, the above typologies are problematic: from linguistic data alone, it is often impossible to reliably determine what kind of error a learner is making. Also, error analysis can deal effectively only with learner production (speaking and writing) and not with learner reception (listening and reading). Furthermore, it cannot account for learner use of communicative strategies such as avoidance, in which learners simply do not use a form with which they are uncomfortable. For these reasons, although error analysis is still used to investigate specific questions in SLA, the quest for an overarching theory of learner errors has largely been abandoned. In the mid-1970s, Corder and others moved on to a more wide-ranging approach to learner language, known as interlanguage.

Error analysis is closely related to the study of error treatment in language teaching. Today, the study of errors is particularly relevant for focus on form teaching methodology.

http://en.wikipedia.org/wiki/Second_language_acquisition#Interlanguage

Language Transfer

Positive and negative transfer

When the relevant unit or structure of both languages is the same, linguistic interference can result in correct language production called positive transfer — "correct" meaning in line with most native speakers' notions of acceptability. An example is the use of cognates. Note, however, that language interference is most often discussed as a source of errors known as negative transfer. Negative transfer occurs when speakers and writers transfer items and structures that are not the same in both languages. Within the theory of contrastive analysis (the systematic study of a pair of languages with a view to identifying their structural differences and similarities), the greater the differences between the two languages, the more negative transfer can be expected.

The results of positive transfer go largely unnoticed, and thus are less often discussed. Nonetheless, such results can have a large effect. Generally speaking, the more similar the two languages are and the more aware the learner is of the relation between them, the more positive transfer will occur. For example, an Anglophone learner of German may correctly guess an item of German vocabulary from its English counterpart, but word order and collocation are more likely to differ, as will connotations. Guessing from cognates in this way, however, also leaves the learner more subject to the influence of "false friends" (false cognates).


Conscious and unconscious transfer

Transfer may be conscious or unconscious. Consciously, learners or unskilled translators may sometimes guess when producing speech or text in a second language because they have not learned or have forgotten its proper usage. Unconsciously, they may not realize that the structures and internal rules of the languages in question are different. Such users could also be aware of both the structures and internal rules, yet be insufficiently skilled to put them into practice, and consequently often fall back on their first language.

Interlanguage

An interlanguage is an emerging linguistic system that has been developed by a learner of a second language (or L2) who has not become fully proficient yet but is only approximating the target language: preserving some features of their first language (or L1) in speaking or writing the target language and creating innovations. An interlanguage is idiosyncratically based on the learners' experiences with the L2. It can fossilize in any of its developmental stages. The interlanguage consists of: L1 transfer, transfer of training, strategies of L2 learning (e.g. simplification), strategies of L2 communication (e.g. do not think about grammar while talking), and overgeneralization of the target language patterns.

Interlanguage is based on the theory that there is a "psychological structure latent in the brain" which is activated when one attempts to learn a second language. Larry Selinker proposed the theory of interlanguage in 1972, noting that in a given situation the utterances produced by the learner are different from those native speakers would produce had they attempted to convey the same meaning. This comparison reveals a separate linguistic system. This system can be observed when studying the utterances of the learners who attempt to produce a target language norm.

To study the psychological processes involved one should compare the interlanguage of the learner with two things:

1. Utterances made by the learner in their native language to convey the same message
2. Utterances made by a native speaker of the target language to convey the same message.
Interlanguage yields new linguistic variety, as features from a group of speakers' L1 community may be integrated into a dialect of the speaker's L2 community. Interlanguage is in itself the basis for diversification of linguistic forms through an outside linguistic influence. Dialects formed by interlanguage are the product of a need to communicate between speakers with varying linguistic ability, and with increased interaction with a more standard dialect, are often marginalized or eliminated in favor of a standard dialect. In this way, interlanguage may be thought of as a temporary tool in language or dialect acquisition.

http://en.wikipedia.org/wiki/Interlanguage

What is the Role of Transfer in Interlanguage?

Sources:
http://www.ling.lancs.ac.uk/groups/crile/docs/crile33powell.pdf


In this paper I intend to consider the following question: to what extent is
transfer responsible for the form and function of a person’s interlanguage? In
order to answer this question, it will be necessary to examine what is meant
by a number of commonly used terms such as transfer, interlanguage and
interference. It will also be of use to review the history of interlanguage as a
concept in order to understand where it came from and where it may be
going.
There has been debate as to whether ‘transfer’ is a valid concept for use in
discussing language acquisition at all. Extremes range from Lado (1957) who
proposed that second language learners rely almost entirely on their native
language in the process of learning the target language, to Dulay and Burt
(1974) who suggested that transfer was largely unimportant in the creation of
interlanguage. It may be useful to briefly consider the historical context of the
development of interlanguage.
Contrastive Analysis
Lado (1957) and Fries (1945) are the names most closely associated with the
CA hypothesis (for abbreviations, see Appendix 1). In a specific attempt to rationalise and order language
teaching materials, Fries wrote:
The most effective materials are those that are based upon a scientific
description of the language to be learned, carefully compared with a
parallel description of the native language of the learner. (1945: 9)
The basic concept behind CA was that a structural ‘picture’ of any one
language could be constructed which might then be used in direct comparison
with the structural ‘picture’ of another language. Through a process of
‘mapping’ one system onto another, similarities and differences could be
identified. Identifying the differences would lead to a better understanding of
the potential problems that a learner of the particular L2 would face.
Structurally different areas of the two languages involved would result in
interference. This term was used to describe any influence from the L1 which
would have an effect on the acquisition of the L2. This was the origin of the
term transfer, and a distinction was made between positive and negative
transfer. Positive transfer occurred where there was concordance between
the L1 and L2. In such a situation, acquisition would take place with little or
no difficulty. Negative transfer, on the other hand, occurred where there was
some sort of dissonance between the L1 and L2. In this case, acquisition of
the L2 would be more difficult and take longer because of the ‘newness’
(hence, difficulty) of the L2 structure.
These two concepts of transfer were central to CA and reflected an
essentially behaviourist model of language learning, which described the
acquisition of language in terms of habit formation. Reflecting Skinner’s
interpretation of laboratory experiments on rats (1957), where positive and
negative stimuli induced certain ‘learned’ behaviours, language acquisition
(certainly FLA) was described in the same way. The broad acceptance that
these views had in the 50s and 60s encouraged the Audiolingual Method of
teaching which focused on extensive drilling in order to form the required
‘habits’. Error was seen as an unwanted deviation from the norm and an
imperfect product of perfect input.
Challenging Skinner’s model of behaviourist learning, Chomsky (1959)
proposed a more cognitive approach to language learning which involved the
use of a LAD. This device, he argued, was reserved exclusively for
processing and producing language, and was separate from other cognitive
processes. Moreover, Chomsky posited that there are ‘language universals’
which all babies have access to and which are essentially innate in humans.
This idea of ‘innateness’ was particularly interesting and brought into question
the practices of Audiolingualism. Language, it was argued, was not simply a
matter of habit formation, but rather had its own natural agenda and its own
developmental course. Certain aspects of vocabulary learning may
follow behaviourist principles, but an important piece of counter-behaviourist
evidence is that children say things they could not possibly have heard from
those around them such as “runned” and “falled”. Chomsky argued that
children were perceiving regularities and forming rules for how the language
works rather than simply imitating other people. Importantly, language was
said to be rule-governed, structure-dependent and fundamentally generative.
Working with phonological and phonetic data in the early 1960s, Nemser
(1971) began talking about ‘deviant’ learner language. Many of his ideas
differed from essential concepts of IL. There were, however, certain points of
concordance. He wrote, for example:
Learner speech at a given time is the patterned product of a linguistic
system ... distinct from [NL] and [TL] and internally structured (my emphasis)
(Nemser 1971: 116)
Nemser also identified the IL equivalent of fossilisation as a system of
“permanent intermediate systems and subsystems” (1971: 118). In his study
of the production and perception of interdental fricatives and stops (1971),
Nemser pointed out that productive and perceptive mechanisms were not
isomorphic, and that this was not taken fully into account in either CA or SLA.
He argued that in creating IL, learners sometimes made the L1 or L2
categories equivalent and sometimes they did not. Blends could also be
expected but not only from L1 and L2 material. Nemser provided evidence
for at least partial autonomy of an IL system: “The test data contain
numerous examples of elements which do not have their origin in either
phonemic system”. (Nemser, 1971: 134)
Brière (1968) and Selinker (1966) also provided results which supported
Nemser’s basic idea that language transfer does occur, but not in an ‘all or
nothing’ style, typical of the CA hypothesis.
The Birth of Interlanguage
Although Selinker (1972) coined the term “interlanguage”, it was Corder
(1967) who is considered responsible for raising issues which became central
to studies of IL. Building on ideas already explored by scholars such as
Nemser (ibid.) above, Corder suggested that there was structure in learner
language, and that certain inferences could be made about the learning
process by describing successive states of the learner language, noting the
changes and correlating this with the input. Moreover, Corder argued that the
appearance of error in a learner’s production was evidence that the learner
was organising the knowledge available to them at a particular point in time.
Errors, he stated, were the most important source of information, offering evidence that learners have a ‘built-in syllabus’ and that a process of hypothesis formulation and reformulation was continuously occurring.
The value of error-making in language learning was consequently reassessed,
with a move away from seeing error as a purely negative phenomenon. Error
analysis became a valuable tool in the classroom for teachers and
researchers. Various taxonomies were devised to account for certain types of
error (e.g. Dulay and Burt 1974). It was suggested that spoken and written
texts produced different kinds of errors, that there were differences between
grammatical and lexical errors, and that it was possible to construct a gradation of
serious and less serious errors.
In short, language learning began to be seen as a process which involved the
construction of an IL, a ‘transitional competence’ reflecting the dynamic nature
of the learner’s developing system. As a result of the variety of errors and the
difficulty associated with interpreting them, Corder proposed a ‘general law’
for EA and IL. He suggested that every learner sentence should be regarded
as idiosyncratic until shown to be otherwise (Corder, 1981). This is an
important concept to bear in mind since it emphasises the fact that IL is a
personal construct and process, and that while it may be true to say that
certain tendencies are typical of certain learners from the same linguistic
background, it cannot be true to say that all learners from that background will
have such tendencies. As Kohn (1986) notes:
for the analysis of (inter)language processes, group knowledge is of
absolutely no importance. It is the learner’s own autonomous and
functional knowledge and his own certainty or uncertainty which
determines his interlanguage behaviour. (1986:23)
Evaluating Acquisition
It is perhaps useful at this point to briefly focus on language learning and
consider the difficulties of defining terms such as ‘acquisition’. Sharwood
Smith (1986) claims that if a language item is used spontaneously by the
learner in ‘90% of obligatory contexts’, then it can be said to be acquired. But
what does this mean? If a learner’s production closely reflects L2 norms of
speaking, can it be assumed that the learner’s competence is also at a similar
level? Clark (1974) warns about the dangers of ‘performing without
competence’ where a student uses correct chunks of the language without
analysis, giving the impression that the norm has been attained. Conversely,
Sharwood Smith points out that it is equally possible that a learner may have
100% competence but 90% performance - ‘competence without performance’.
For example, a rule may be ‘acquired’ (competence) without showing itself
due to semantic redundancy or as a result of processing problems. A learner
may be able to hear the sound /θ/ very clearly and know that it is the correct phonological representation of “th” in the word “think”, but nevertheless produce a different sound instead. This clearly undermines the simplistic notion that
performance reflects competence. The relationship between performance
and competence is a complex one that is not fully understood, but I think it is
necessary to make the point that in trying to identify transfer in IL, there is a
danger of relying too closely on a product-level analysis of data.
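Taken at face value, Sharwood Smith's criterion is easy to turn into arithmetic: a ratio of correct spontaneous uses to obligatory contexts, checked against 90%. The sketch below does just that, with names of my own choosing; the paragraph above is precisely a warning that such a product-level count says little about underlying competence.

# A minimal sketch of the '90% of obligatory contexts' criterion. Counting
# like this is a performance measure only; as argued in the text, it cannot
# distinguish 'performing without competence' (unanalysed chunks) from
# 'competence without performance' (processing slips, redundancy).

def seems_acquired(correct_uses, obligatory_contexts, threshold=0.9):
    if obligatory_contexts == 0:
        return False  # no evidence either way
    return correct_uses / obligatory_contexts >= threshold

print(seems_acquired(correct_uses=45, obligatory_contexts=50))  # True  (90%)
print(seems_acquired(correct_uses=40, obligatory_contexts=50))  # False (80%)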
Natural Languages
In helping to define IL, it seems necessary to consider what is meant by
‘natural language’. Adjemian (1976) suggests it is:
any human language shared by a community of speakers and
developed over time by a general process of evolution
(1976: 298)
Arguing from the perspective of FLA, Wode (1984) links the idea of a natural
language with cognitive abilities. If general cognition determines structures of
learner language, then why, he asks, do children and adults produce similar
developmental structures? If cognitive deficits are used to explain children’s
language, then this is not applicable to adult language. While adults have a
developed and clear concept of negation, there is considerable evidence that
both adults and FLLs produce the same negative developmental structures
when learning English (Dulay and Burt 1974). Therefore, Wode argues, the
capacity to learn a ‘natural language’ is different from the ability to cognize
one’s environment. A Chomsky-like special type of cognition is implied, or as
Wode refers to it, a ‘linguo-cognition’ (Wode, 1981). It is these systems which
constrain and design the boundaries of ‘natural’ languages. Extending the
argument further, Bialystok (1984) specifically suggests that IL has many
properties of a ‘natural’ language because it is generated by the same
cognitive processes as those responsible for L1 acquisition.
A further characteristic of a ‘natural’ language is that it is adaptable to change.
This, Wode argues, is what makes it so useful as a means of communication.
There is a need for flexibility and a need to be able to transfer information
from one language into another:
Consequently, any linguistic theory that does not adequately provide for
transfer cannot possibly qualify as an adequate description of a language
or as a theoretical framework for describing natural languages. (1984:
182).
Universal Grammar in Interlanguage
In considering universals, there are perhaps two approaches worth
mentioning:
1. The Chomskyan approach
2. The Greenbergian approach
The Chomskyan approach would employ the notion of UG which could define
the classes of all possible human languages. Universal properties would be
argued to be innate which means, for example, that children can construct
grammars very quickly. A Greenbergian approach (1966), on the other hand,
would “search for regularities in the ways that languages vary, and on the
constraints and principles that underlie this variation” (Hawkins, 1983: 6).
Data showing surface language features would need to be collected, and a
wide range of languages would need to be considered. Consequently, for
example, SOV languages are generally seen to have preposed rather than
postposed adjectives.
Selinker (1972) claims that ILs are systematic in the sense of a ‘natural
language’ and that ILs will not violate language universals. But what exactly is
meant by a ‘language universal’? What is the source of a universal, and do
all universals affect IL? Gass (1984) suggests five sources of a universal:
1. Physical basis (e.g. the physical shape of the vocal cords)
2. Human perception and processing devices
3. A LAD
4. Historical change
5. Interaction
These are the most common explanations given for the rationale behind
universals. Gass and Ard (1980) propose that universals stemming from
language/historical change are least likely to influence IL, whereas physical,
processing and cognitive universals are the most likely to have an effect on a
person’s IL.
Supporting a UG hypothesis in IL, Gass (1984) points to the hierarchy of
structures in a language, for example the hierarchy of relative clause types
which a language can relativise (Keenan and Comrie, 1977). Higher
hierarchical positions are easier to relativise than lower ones (Tarallo and
Myhill, 1983). Further evidence is provided by Kumpf (1982) in a study of
untutored learners whose tense and aspect systems did not correspond to the
L1 or L2. It was argued that the learners created unique form, meaning and
function relationships which corresponded to universal principles of natural
languages.
While there seems to be a certain amount of evidence that ILs are consistent
in that they do not violate constraints of UG, the question still remains as to
what in fact they are, or more precisely, what they are constructed from. As
noted here by Kumpf (ibid.) and elsewhere by Corder (1967), IL is not a
hybrid of the L1 and L2 although certain elements of one or the other or
indeed both may be present. Much research suggests that transfer is an
important element in the construction of an IL although this assertion raises
several questions, namely: what is, or is not, transferable between languages, and why should this be so?
Transfer
Dulay, Burt and Krashen (1982) suggest that there are two possible ways of
describing the term ‘interference’. One is from a psychological perspective,
which suggests that there is influence from old habits when new ones are
being learned. The second is from a sociolinguistic perspective which
describes the language interactions which occur when two language
communities are in contact. Three such examples are borrowing, code-switching and fossilisation.
Borrowing essentially means the incorporation of linguistic material from one
language into another, for example, the borrowing of thousands of words from
old French into Anglo-Saxon after the Norman conquest of 1066. Such words
maintain their general sound pattern but alter the phonetic and phonological
system of the new language. ‘Integrated borrowing’, according to Dulay and
Burt (1974b), occurs when the new word in question is fully incorporated into
the learner’s IL. Selinker (1992) argues that this is in fact transfer.
‘Communicative borrowing’ on the other hand, reflects a communicative
strategy which helps to get over the deficiencies of the L2. The learner falls
back on structures or patterns from the L1 in order to get a message across.
Selinker (1992) notes that if communication is successful, then transfer will (or
may) happen. The danger is that successful communication does not depend
entirely on formal correctness. Persistent errors (e.g. wrongly incorporated
errors, covert errors) could lead to fossilisation where a learner, uncorrected
for the reasons mentioned above, but still able to successfully get their
message understood, has no sociofunctional need to alter their IL and so it
fossilises in that state.
Code switching describes the use of two language systems for
communication, usually evidenced by a sudden, brief shift from one to
another. This phenomenon is not an indication of a lack of competence, but
rather tends to obey strict structural rules. Certain structural combinations, for
example, are not possible, e.g. switching before relative clause
boundaries or before adverbial clauses is ‘illegal’.
A more behaviourist interpretation of interference was mentioned earlier; two
types were suggested:
1. Positive transfer
2. Negative transfer
Both of these types refer to the automatic and subconscious use of old
behaviour in new learning situations. Specifically, semantic and syntactic
transfer of this nature reflects the most commonly understood uses of the
term.
Corder (1983) suggested the need for a word other than ‘transfer’ which he
claimed belonged to the school of behaviourist learning theory. He suggested
the term ‘Mother Tongue Influence’. Sharwood Smith (1986) refined the idea
still further by suggesting ‘Cross Linguistic Influence’, which would take into
account the potential influence of L3 on L2 where another learned language,
but not the L1, might have an effect on the learning of the L2. Also
encompassed within the meaning of CLI is the notion of possible L2 influence
on L1.
‘Transfer’ is also used by educational psychologists to refer to the use of past
knowledge and experience in a new situation, e.g. a literate SLL does not
have to learn that written symbols represent the spoken form of the new
language. Similarly, concepts such as deixis are already acquired when a
learner comes to learn a second language.
For many people, the proof of the pudding is seen in transfer errors which
reflect the equivalent structures of the L1. Thus, for example, if a Japanese
learner consistently omitted the indefinite articles of a sentence, then negative
transfer could be claimed. Conversely, if a French learner regularly included
the correct definite or indefinite articles in a sentence, then positive transfer
could be cited. The ‘proof’ would be in the fact that in Japanese the article
system does not exist, while French has a similar article system to English.
Generally speaking, in terms of article use, Japanese and French learners of
English do tend to follow the pattern suggested above. Is the case therefore
closed? Certain evidence suggests that the situation is somewhat more
complex.
Felix (1980) describes an English boy learning German who used the word
“warum” to mean both “why” and “because”. Felix points out that in, say,
Spanish or Greek, this one equivalent word does carry these two meanings.
So had the boy been Spanish, his error would almost certainly have been
identified as interference. Errors, Felix suggests, will always correspond to
structures in some language.
Butterworth (1978) noticed that Ricardo, a 13 year old Spanish boy learning
English, often used subjectless sentences. He therefore attributed this to
interference since it is perfectly acceptable to omit the subject in Spanish.
Felix, however, points out that it is also common in FLA to miss out the
subject of a sentence. Dulay and Burt (1974), after studying 513 errors
produced by Spanish children learning English, concluded that overall, less
than 5% of the total errors were exclusively attributable to interference. Felix
(1980) is clear that in certain circumstances interference does occur.
Nevertheless he concludes:
our data on L2 acquisition of syntactic structures in a natural environment
suggest that interference does not constitute a major strategy in this area.
(1980: 107).
There is also the perhaps surprising phenomenon of a lack of positive transfer
where learners make mistakes that they should not have made given the
similarity of their L1 background to the L2 in question (Richards, 1971).
LoCoco (1975) in a study of learner error suggested that 5% to 18% of the
errors observed should not have been made if positive transfer was in fact at
work in the learner’s IL. Coulter (1968) noted how CA predictions were
specifically falsified in an experiment on Russian learners of English. In
Russian, there are five forms of the plural which contrast clearly with singular
items in the language. Interference theory would suggest therefore that there
would be no difficulty in acquiring the s-morpheme of English plurals. If
anything there would be a positive transfer of complexity to simplicity since
English has just the one plural form. Extended observation of the Russian
subjects showed that this was not the case. In tests of production they failed
to consistently produce the required plural forms.
All of this suggests that while transfer seems to be a reasonable and logical
explanation for some part of the nature and form of ILs, there are certain
reservations that should be borne in mind. Only certain structures or forms
seem to be transferable from the L1 and the identification of these items is
further complicated by the variables of context and the individual in question.
A question worth asking would be: Are there specific linguistic areas where
the L1 influences the L2?
Markedness
Kean (1986), in her paper ‘Core issues in transfer’, divides language into two
areas:
1. Core
2. Periphery
Core areas of the language obey highly restricted, invariant principles of UG.
Periphery areas of the language reflect language particular phenomena - not
defining properties of the grammars of natural languages. Kean argues that if
an IL has a ‘well-formed’ grammar, then ‘core’ universals must be
components of all such ILs. If such ‘core’ universals are not present, then the
ILs in question cannot be described as ‘normal’ grammars. Therefore, for
example, one should not find structure independent rules in a language.
Kean also discusses the pro-drop parameter. In Italian, for example:
1. The subject is not required
2. There is free inversion of overt subjects in simple sentences
3. Violations of the that-trace filter are admitted
(e.g. “Who do you think that will come?”)
She points out that if only one of these is missing then all of them will be
missing from a language. The suggestion is that the learner need only notice
one of the three items listed above in order to know what kind of a language
Italian is. Therefore a pro-drop parameter can be postulated, an either/or
situation, where a person’s learning system acts as a ‘scanning device’ only
picking up the marked values of a language.
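The either/or character of the parameter can be made concrete: on the account sketched above, the three surface properties cluster, so noticing any one of them is enough to classify the language. The toy check below assumes exactly that clustering; it illustrates the reasoning only, and the property names are my own shorthand, not a claim about how the learning system is actually implemented.

# Toy rendering of the pro-drop 'scanning device' idea: the three surface
# properties are taken to stand or fall together, so a single observed
# property settles the parameter for the whole language.

PRO_DROP_PROPERTIES = {
    "null_subjects",            # the subject is not required
    "free_subject_inversion",   # free inversion of overt subjects
    "that_trace_violations",    # e.g. "Who do you think that will come?"
}

def infer_pro_drop(observed_properties):
    """Treat the language as pro-drop if any one clustering property is observed."""
    return bool(PRO_DROP_PROPERTIES & set(observed_properties))

# On this account, hearing only free inversion is enough to 'set' the parameter:
print(infer_pro_drop({"free_subject_inversion"}))  # True
print(infer_pro_drop(set()))                       # False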
Mazurkewich (1984a) adopts the core/periphery distinction outlined above
where unmarked properties of language are identified with core grammar and
marked properties with the periphery. Not being concerned with parameters
as such, she argues that if a learner is acquiring a language with a marked
structure, they will go through a stage of using the unmarked equivalent
before the marked one is acquired. In other words, a strong UG hypothesis is maintained in which, in L2 acquisition, UG reverts to its preset options,
taking no account of the learner’s prior learning experience with the L1. The
prediction is that all L2 learners will show the same developmental sequence
of unmarked before marked, regardless of their L1.
This suggests a superficial resemblance to the ‘natural order hypothesis’
proposed by Krashen (1981) which tries to explain certain morpheme
acquisition sequences by claiming they are part of a natural order. The
crucial difference here is that specific predictions are made in advance of the
data, based on the identification of structures as marked or unmarked. The
subjects of Mazurkewich’s study (ibid.) were 45 French-speaking and 38 Inuktitut-speaking high school students. Results from the study showed that the
French speakers produced more unmarked questions than marked ones, but
that conversely, the Inuktitut speakers produced more marked questions than
unmarked. Mazurkewich argued that these results support her hypothesis
that L2 learners will learn unmarked before marked language items. White
(1989), however, disagrees, pointing out that the behaviour of the native
speakers of French is equally consistent with their carrying over the unmarked
structures from their L1. Consequently it is impossible to tell whether the L2
learners had reverted to core grammar or not. It seems that the pure UG
hypothesis claiming that there will be an acquisition sequence of unmarked
before marked cannot be maintained on the basis of Mazurkewich’s results (at
least not for SLA). In the case of the French speakers it was impossible to
identify the cause of their preference for unmarked structures. Was it due to
the influence of the L1 or to the emergence of a core grammar? The fact that
the Inuktitut speakers behaved differently seems to suggest that the L1 was
indeed having an influence on the French speakers.
The Influence of L1 in IL - More Studies
There appear to be a range of studies which both support and question the
influence of L1 in IL. A question that needs to be asked is: what constitutes
L1 influence? Are there features that can be observed at a competence or
production level which can categorically be said to arise from the L1? In other
words, does the appearance of L1-like features in IL represent proof of L1
influence?
Lexis:
There seems to be considerable evidence for the influence of L1 lexis on
IL/L2. Ringbom (1978), for example, studying Swedish and Finnish learners
of English, suggested that the results from his study showed clear and
unambiguous evidence of CLI.
Negation:
Hyltenstam’s study (1977) showed that learners from a variety of L1
backgrounds go through the same stages of development in the acquisition of
the negative particle in Swedish. Wode’s research (1981) was aimed at
finding a universal sequence, true in essentials of all learners of all languages.
He suggested that learners go through five distinct stages of development:
1. Anaphoric sentence external: “No”
2. Non-anaphoric sentence external: “No finish”
3. Copula ‘be’: “That’s no good”
4. Full verbs and imperatives with “don’t”: “You have a not fish” or “Don’t say
something”
5. “Do” forms: “You didn’t can throw it”
(Taken from Cook 1991: 19)
Studies suggest that learners from different L1 backgrounds do in fact follow
the developmental order suggested by Wode.
Word Order
Once again, there is evidence and counter evidence of transfer in studies
related to word order. Studies have focused on whether, for example, SVO
L1s carry this pattern over into the L2. Rutherford (1983) suggested that
Japanese learners did not use their L1 SOV in learning English. McNeill
(1979) in fact argues for the SOV pattern as being the basic, universal word
order in L1 acquisition.
Subject Pronoun Drop:
The role of CLI in Subject Pronoun Drop (SPD) is not clear. There seems to
be much evidence that such deletion takes place in the L2 of Romance
language speakers, but it is also argued that such deletion is not restricted to
speakers whose L1 has SPD. Meisel (1980) found that Romance speakers in
their L2 dropped pronouns more in the third than first person. This,
seemingly, cannot be explained by transfer.
Semantic Differences:
In a study of Dutch learners, Bongaerts (1983) found that few of them had
problems with the semantic distinction between:
1. “Easy to see” and
2. “Eager to see”
Bongaerts argued that this was because of a similar semantic distinction in
Dutch which facilitated positive CLI. In another study, Bongaerts found that
French, Hebrew and Arabic L1 learners had considerably greater difficulties
understanding and acquiring the same semantic distinctions.
Relative Clauses:
Schachter (1974) conducted a study involving four groups of students with
different L1 backgrounds - Arabs, Persians, Japanese and Chinese. In a
quantitative study which involved counting the number of relative clauses
produced spontaneously in a classroom situation, she found that the students
could be divided into two distinct groups. Results showed that the Arab and
Persian learners made the most mistakes when using relative
clauses. Significantly, however, this group of students used relative clauses
two or three times more than the Japanese and Chinese students. Schachter
suggested that right-branching relative clause structures in Arabic and Persian, which match the English pattern, were responsible for the relatively
greater use of the clauses in spontaneous speech. As for the Japanese and
Chinese students, Schachter attributes their limited use of relative structures
to avoidance strategies. This occurs where a learner, confronted by a form of
the L2 that they are unfamiliar with or find difficult, will simply avoid using
that structure. Sometimes, this is very difficult to identify since relatively
proficient students will be able to bypass using, for example, a certain difficult
structure by using another similarly appropriate structure. Another effect of
avoidance strategy, as seen in the study mentioned above, is the possibility
that a certain structure (such as a relative clause) may hardly be used, or
used very occasionally in set phrases which the student knows to be correct.
Simply quantifying the errors made in relation to this structure will not give a
clear picture of an individual learner’s competence. This is one of the main
weaknesses of EA which can only assess what the learner chooses to show
in their production.
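Schachter's point can be shown with a small calculation: a group can post a low error count simply by rarely attempting the structure, so raw error counts need to be read against the number of attempts. The figures below are invented purely to illustrate the arithmetic; they are not the numbers from the 1974 study.

# Why raw error counts mislead when learners avoid a structure.
# The counts here are invented, not Schachter's (1974) data.

groups = {
    # group: (relative clauses attempted, errors made)
    "Arabic/Persian speakers":   (150, 30),
    "Japanese/Chinese speakers": (50, 5),
}

for group, (attempts, errors) in groups.items():
    rate = errors / attempts
    print(f"{group}: {errors} errors in {attempts} attempts ({rate:.0%} error rate)")

# The first group makes more errors in absolute terms, but only because it
# attempts the structure far more often; counting errors alone would hide
# the second group's avoidance of relative clauses altogether.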
Kellerman (1984) reviewing several studies of the kind mentioned above,
cautiously suggests that CLI operates on the surface form of IL reflecting
such processes as transfer. He goes on to argue that CLI operates on IL at
smaller and larger levels than the sentence. Advanced learners, he claims,
are affected by CLI just as much as beginners are. The only difference, perhaps, is that beginners tend to show CLI more overtly in their syntax, whereas advanced learners tend to show CLI in less obvious, more discreet ways, e.g. through subtle semantic errors or through the use of avoidance strategies.
Conclusion
In conclusion it seems clear that there is considerable evidence to support the
role of CLI in the development of IL. Studies indicate that in certain situations
and under certain conditions, the influence of the L1 can be clearly
demonstrated. It is the nature of these situations and conditions that is not
always clear.
The effect of CLI on IL is not necessarily instant or predictable. This is
reflected in the development of IL, which is non-linear: like a flower,
development takes place at many different points at the same time, resulting
in more of a spiral model of development.
While the predictions of a pure UG hypothesis are in doubt (at least in SLA),
several studies suggest that there are universal parameters which a natural
language will not violate.
The creation of IL can perhaps most importantly be seen as a process which
is internally consistent, has many qualities of a natural language, and which is
in direct opposition to a view of language learning as a system of habit formation.
I would suggest that three aspects of this process are continually
occurring:
1. The learner is constantly making hypotheses about the L2 input available
to them. There is much evidence to suggest that the hypotheses tested will
not contravene universal boundaries of natural language.
2. There will be a selective use of the L1 knowledge. The process of
selection, and the extent to which it is conscious or unconscious, is unclear.
3. There may be influence from other ILs known to the learner.
There would seem to be a need for further investigation to determine precisely
the role of transfer in the development of IL and the acquisition of L2. In the
position we are in at present, it can only be tentatively suggested that the three aspects of the process mentioned above must all interact in some as yet unknown way.
Geraint Powell
Bibliography
Adjemian, C (1976). ‘On the nature of interlanguage systems’. Language
Learning 26. pp 297-320
Bialystok, E (1984). ‘Strategies in interlanguage learning and performance’.
In Davies, A., Criper, C., and Howatt, A.P.R. (eds)
Bongaerts, T. (1983). ‘The comprehension of three complex structures by
Dutch learners’. Language Learning 33. pp 159-82
Brière, E.J. (1968). ‘A psycholinguistic study of phonological interference’.
Cited in Selinker, L. (1992). Rediscovering Interlanguage. New York.
Longman.
Butterworth, G. (1978). ‘A Spanish-speaking adolescent’s acquisition of
English syntax’. Cited in Felix, S.W. (1980). Second Language
Developmental Trends and Issues. Gunter Narr Verlag Tübingen.
Chomsky, N. (1959). A review of B.F. Skinner’s Verbal Behavior. Cited in Fodor, J.A. and Katz, J.J. (1968). The Structure of Language. Englewood
Cliffs. Prentice Hall.
Clark, R. (1974). ‘Performing without competence’. Journal of Child
Language 1. pp 1-10
Cook, V. (1991). Second Language Learning and Language Teaching.
London. Arnold.
Corder, S.P. (1967). ‘The significance of learners’ errors’. International
Review of Applied Linguistics 5/4. pp 161-170
Corder, S.P. (1981). Error Analysis and Interlanguage. Oxford. Oxford
University Press
Corder, S.P. (1983). ‘A role for the mother tongue’. In Gass, S. and
Selinker, L. (eds).
Coulter, K. (1968). ‘Linguistic error analysis of the spoken English of two
native Russians’. Unpublished thesis. Cited in Selinker, L. (1992).
Rediscovering Interlanguage. New York. Longman.
Davies, A., Criper, C. and Howatt, A.P.R. (1984). Interlanguage. (eds).
Edinburgh. Edinburgh University Press
Dulay, H. and Burt, M. (1974). ‘Natural sequences in child second
language acquisition’. Language Learning 24 pp 37-53.
Dulay, H., Burt, M. and Krashen, S. (1982). Language Two. Oxford. Oxford
University Press.
Felix, S.W. (1980). Second Language Developmental Trends and Issues.
Gunter Narr Verlag Tübingen.
Fries, C.C. (1945). Teaching and Learning English - As a Foreign Language.
Ann Arbor. University of Michigan Press.
Gass, S. and Ard, J. (1980). ‘L2 data, their relevance for language
universals’. TESOL Quarterly 14 pp 443-452
Gass, S. and Selinker, L. (1983). Language Transfer in Language Learning.
(eds). Rowley, Mass. Newbury House.
Gass, S. (1984). ‘Language transfer and language universals’. Language
Learning 34/2 pp 115-131
Gass, S. (1984). ‘The empirical basis for the universal hypothesis of interlanguage studies’. In Davies, A., Criper, C. and Howatt, A.P.R. (eds)
Greenberg, J.H. (1966). ‘Some universals of grammar with particular
reference to the order of meaningful elements’. Cited in Gass, S.
(1984). ‘The empirical basis for the universal hypothesis of interlanguage
studies’. In Davies, A., Criper, C. and Howatt, A.P.R (eds)
Hawkins, J. (1983). Word Order Universals. New York. Academic Press.
Hyltenstam, K. (1977). ‘Implicational patterns in interlanguage syntax
variation’. Language Learning 27/2 pp 383-411
Kean, M.L. (1986). ‘Core issues in transfer’. In Kellerman, E and Sharwood
Smith, M (eds)
Keenan, E. and Comrie, B. (1977). ‘Noun phrase accessibility and universal
grammar’. Linguistic Inquiry 8 pp 63-99
Kellerman, E (1984). ‘The empirical evidence for the influence of the L1 in
interlanguage’. In Davies, A., Criper, C. and Howatt, A.P.R. (eds).
Kellerman, E. and Sharwood Smith, M. (1986). Crosslinguistic Influence in
Second Language Acquisition. (eds) New York. Pergamon.
Kohn, K. (1986) ‘The analysis of transfer’. In Kellerman, E and Sharwood
Smith, M. (eds)
Krashen, S.D (1981) Second Language Acquisition and Second Language
Learning. Oxford. Pergamon Press.
Kumpf, L. (1982). ‘An analysis of tense, aspect and modality in interlanguage’. Paper presented at TESOL. Honolulu.
Lado, R. (1957). Linguistics Across Cultures. Applied Linguistics for
Language Teachers. Ann Arbor. University of Michigan Press.
LoCoco, V. (1975). ‘An analysis of Spanish and German learners’ errors’.
Working Papers on Bilingualism 7 pp 96-124
Mazurkewich, I. (1984). ‘The acquisition of the dative alternation by second
language learners and linguistic theory’. Cited in White, L (1989).
Universal Grammar and Second Language Acquisition. Philadelphia.
John Benjamins Publishing Company.
McNeill, D. (1979). The Conceptual Basis of Language. Hillsdale. Erlbaum.
Meisel, J. (1980). ‘Strategies of second language acquisition: more than one kind of simplification’. In Davies, A., Criper, C. and Howatt, A.P.R. (eds). (1984).
Nemser, W. (1971). ‘Approximate systems of foreign language learners’.
IRAL 9/2 pp 115-123
Richards, J.C. (1971). ‘A non-contrastive approach to error analysis’
English Language Teaching 25 pp 204-219
Ringbom, H (1978). ‘The influence of the mother tongue on the translation of
lexical items’. Cited in Kellerman, E. ‘The empirical evidence for the
influence of the L1 in interlanguage’ (1984). In Davies, A., Criper, C.
and Howatt, A.P.R. (eds).
Rutherford, W. (1983). ‘Description and explanation in interlanguage
syntax’. In Davies, A., Criper, C. and Howatt, A.P.R. (eds). (1984).
Schachter, J. (1974). ‘An error in error analysis’. Language Learning 24
pp 205-214.
Selinker, L. (1966). ‘A psycholinguistic study of language transfer’.
Unpublished PhD dissertation. Cited in Selinker, L. (1992).
Rediscovering Interlanguage. New York. Longman.
Selinker, L. (1972). ‘Interlanguage’. IRAL 10/3 pp 209-231.
Selinker, L (1992). Rediscovering Interlanguage. New York. Longman.
Sharwood Smith, M. (1986). ‘The competence/control model, crosslinguistic
influence and the creation of new grammars’. In Kellerman, E. and
Sharwood Smith, M. (eds).
Skinner, B.F. (1957). Verbal Behavior. New York. Appleton-Century-Crofts.
Tarallo, F. and Myhill, J. (1983). ‘Interference and natural language processing in second language acquisition’. Language Learning 33 pp 55-76.
White, L. (1989). Universal Grammar and Second Language Acquisition.
Philadelphia. John Benjamins Publishing Company.
Wode, H. (1981). Learning a Second Language 1: An Integrated View of Language Acquisition. Tübingen. Gunter Narr.
Wode, H. (1986). ‘Language transfer: a cognitive, functional and
developmental view’. In Kellerman, E. and Sharwood Smith, M (eds)
Wode, H. (1984). ‘Some theoretical implications of L2 acquisition research and
the grammar of interlanguages’. In Davies, A., Criper, C and
Howatt, A.P.R. (eds).
Appendix 1
Abbreviations
L1 - Native or first language
L2 - Target or second language
L3 - A third or other learned language
FLA - First language acquisition
SLA - Second language acquisition
FLL - First language learner
SLL - Second language learner
IL - Interlanguage
CA - Contrastive analysis
EA - Error analysis
SOV - Subject-Object-Verb language
SVO - Subject-Verb-Object language
CLI - Cross Linguistic Influence