Sunday, April 13, 2014

Translingualism in EFL 
Is the devil as bad as it is portrayed?


 The widespread belief in current EFL teaching is that using any language other than the target one in the classroom should be avoided at all costs. The "ecology" of the target language, in other words, is a highly valued asset in EFL classrooms. The main reason for such a radical view is an exaggerated fear of negative interference from the native tongue.
   The article "Translanguaging in the Bilingual Classroom", however, tries to splash some color onto the black-and-white canvas of monolingual classrooms. Bilingual teaching, defined as "the use of two or more languages in instruction", might have benefits of its own, the article states.
   Keeping Universal Grammar in mind, the article views the availability of different languages in the classroom as a resource rather than a foe, a resource that

  • promotes better participation
  • makes meaning and transforms information
  • makes social, cultural and linguistic links for classroom participants 

  Student code-switching, in other words, might actually be valuable when used carefully and judiciously.
This is a comforting thought, I have to admit.
  Let me tell you what recently happened in one of my EFL groups. I was talking about the English proverb "Every dog has its day" when one of my Russian-speaking students instantly came up with the Russian equivalent of the proverb.
 "I am sorry, people," she said, "I'd say the Russian version, but I don't want to pay 300 drams." (In my classroom each non-English word costs 50 drams.)
  With the article fresh in my mind, I violated the rule I had established myself and allowed the student to disturb the precious ecological environment of the English language (the black language of Mordor uttered in Elvish lands in The Lord of the Rings comes to mind :)).
  The answer came quickly:
   "И на нашей улице будет праздник" ("There will be a holiday on our street, too")
  What happened? No thunder broke from the sky. The students appreciated the help of the familiar language and I was glad to see that they learnt the proverb painlessly and quickly.
  And may the God of English forgive my little misbehavior.

Monday, April 7, 2014


Automated Scoring of Writing Quality

Machines and Human writing


  I am sitting in a lab with students preparing for the TOEFL iBT exam. My left ear catches phrases like "In this set of materials..." and "The listening passage discusses the difference between two types of bacteria...", while my right ear, to its great surprise, catches the second halves of the same sentences: "...the reading passage is a news bulletin on a job announcement, while the listening passage...", "...the reading passage casts doubt on the information in the listening passage". Experience has shown me that the same "automated phrases" are also used in TOEFL writing. With a faint smile, I lazily pity the poor person who checks those essays.
   
  However, a recent discovery of mine, related to TOEFL and other high-stakes tests, is that the essays written by students are checked not only by human scorers but also by special automated scoring engines (the e-rater® in the case of TOEFL). The scores of the human rater and the program are compared, and a final score is then assigned.
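  As a toy illustration of that comparison step (the averaging rule and the disagreement threshold below are my own assumptions for the sketch, not ETS's documented procedure), the logic might look something like this:

```python
def final_score(human: float, machine: float, threshold: float = 1.0):
    """Toy sketch of combining a human score with an automated one.

    Assumption (NOT the documented ETS procedure): if the two scores
    are close enough, average them; otherwise the essay would be sent
    to a second human rater, signalled here by returning None.
    """
    if abs(human - machine) <= threshold:
        return (human + machine) / 2
    return None  # disagreement too large: needs human adjudication
```

  So `final_score(4.0, 4.5)` would simply average the two ratings, while `final_score(4.0, 2.0)` would flag the essay for a second human reading.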
  The advantages of an automated engine are obvious:
a) Objectivity: A computer program has neither personal interests nor prejudices. There is no need to worry that it might hold a grudge against you just because you mentioned Justin Bieber as a person you thoroughly admire.

b) Cost-effectiveness: Needless to say, a computer program is an ideal employee in terms of money. Once installed, it only requires careful maintenance. The rest is obedience and hard work.

 Unfortunately, AES cannot be used as the sole tool for evaluating writing (at least in the case of high-stakes exams). Although AES results have often correlated with human scoring, it is still almost impossible to imagine a computer program justly evaluating the highly complex nature of human writing. Let's have a look at the criteria that an AES engine takes into account when evaluating writing:

  • errors in grammar (e.g., subject-verb agreement)
  • usage (e.g., preposition selection)
  • mechanics (e.g., capitalization) 
  • style (e.g., repetitious word use)  
  • discourse structure (e.g., presence of a thesis statement, main points) 
  • vocabulary usage (e.g., relative sophistication of vocabulary)
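  To make the surface-level criteria above concrete, here is a toy feature extractor of my own (purely an illustration; it has nothing to do with how e-rater is actually implemented) that counts one mechanics feature, one style feature, and a crude vocabulary proxy:

```python
import re
from collections import Counter

def essay_features(text: str) -> dict:
    """Toy extractor for a few surface features an automated scorer
    might count. Illustration only, not the e-rater feature set."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    counts = Counter(w.lower() for w in words)
    _, top_freq = counts.most_common(1)[0]  # most repeated word
    return {
        # mechanics: sentences that do not start with a capital letter
        "uncapitalized_sentences": sum(1 for s in sentences if s[0].islower()),
        # style: how dominant the single most repeated word is
        "repetition_ratio": top_freq / len(words),
        # crude vocabulary-sophistication proxy
        "avg_word_length": sum(map(len, words)) / len(words),
    }
```

  Real engines train statistical models over hundreds of such features; the point of the sketch is only that surface counts like these are easy for a machine, while the quality of an argument is not.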

 While aspects such as prepositions, subject-verb agreement, and capitalization can plausibly be handled by artificial intelligence, more complex parts of human language, such as syntax, quality of argumentation, collocation, punctuation, appropriacy of vocabulary, etc., seem impossible to assess without human intervention. The quality of argumentation, for example, has little to do with the complexity of vocabulary.