Meredith Cicerchia is Director of E-Learning for Lingua.ly. She holds an M.Sc. in Applied Linguistics and Second Language Acquisition from the University of Oxford and a B.A. in French language and literature from Georgetown University. You can find her on Twitter (@MereLanguage) or follow Lingua.ly’s blog of SLA-inspired tips for language learners.

Maybe I was just being naive, but as a language learner, teacher, curriculum developer and researcher, I always thought Language Testing (LT) and Second Language Acquisition (SLA) were closely linked fields. I figured the LT people who spent their days constructing various assessment tools were not only aware of the most recent findings in the SLA research, but used them to inform test design. Similarly, I assumed the psychometric analyses put into test design were well understood by SLA researchers seeking out reliable instruments for their studies.

But several months into a major language-testing project, I quickly understood the reality of the situation: the worlds of LT and SLA could not be further apart. First, I discovered that my SLA degree had barely scratched the surface of language testing. Second, when it came to the reading and listening content we were developing, the inner workings of the test tasks could not have been more foreign. I started asking around and realized I was not the only language-learning advocate surrounded by test developers and assessment gurus who seemed to be “speaking another language.”

So when it comes to LT and SLA, why the rift? To be honest, I still don’t know the answer to this question. The closest I’ve come to understanding it was at the Language Testing Research Colloquium (LTRC) last year in South Korea. A keynote delivered by British Council Aptis test designer Barry O’Sullivan caused quite a stir with the audience, yet I found myself nodding along as O’Sullivan argued that the two fields needed to make more of an effort to work together.

After all, SLA-trained item writers and language teachers often create test content. Test specifications typically reference parameters recognized by both fields. And most importantly, the same individuals who use SLA-based curricula and are taught by SLA-trained teachers sit the exams (and, depending on the stakes, make major life decisions based on the results).

Googling the issue doesn’t really add much to the discussion. According to Dr. Geoff Jordan, “The relationship has certainly been theorized a number of times. But yet there remains very little contact between these two critical branches of applied linguistics research.” Cambridge University Press’s Interfaces between Second Language Acquisition and Language Testing Research says this trend is being reversed thanks to overlaps in research interests and empirical approaches. Yet Shohamy’s 2000 study cites “limited interfaces” and a lack of relevance of language testing to SLA. What are we to think?

Language Tests from an SLA Perspective

I recall a former professor at Oxford saying exams were a snapshot: if they caught you at the wrong moment or measured only one skill, you could come away believing you were less fluent than you really were. Maybe that’s why I never fully bought into the language-testing world.

At some point in their studies, most Applied Linguists encounter Selinker’s interlanguage continuum, a long, extended line with ups and downs. Selinker depicted language learning as a life-long journey characterized by many U-shaped curves: once we grasp a rule, we begin to apply it en masse; eventually we become aware of the exceptions, but our execution remains imperfect and unpredictable for the remainder of the upswing. So while we may not demonstrate our knowledge with 100% accuracy, we have in fact moved forward on our continuum.

But what happens when a language test occurs during an upswing? And if language testers are not using SLA research to inform their choice of constructs and task design, how well do test results correlate with learners’ actual ability and performance? Of course there are many occasions on which we need standardized assessment tools, and tests come in all shapes and sizes. Yet what if there were a simpler, SLA-based approach for the rest of us?

Vocabulary-Based Learning

These days I work in big-data-fueled digital language learning, on methods that simulate immersion and force a departure from the traditional beginner, intermediate and advanced levels that most of the industry is wedded to. So I got to thinking about how SLA researchers typically measure ability level and realized the most common tool is some form of productive/receptive vocabulary test.

Considering vocabulary has long been hailed as one of the best predictors of proficiency across reading, listening, writing and speaking, this makes some sense. And consequently there are a few startups out there who are re-imagining language learning in a whole new, vocabulary-driven light. They get to know an individual’s working vocabulary and let word look-ups and frequency data from exposure to authentic content do the rest.

Lingua.ly, for example, achieves this thanks to a robust backend that uses SLA rules governing acquisition from context to turn a measure of working vocabulary into a guideline for sourcing “comprehensible input” from the web. Bliu Bliu does it by asking learners outright what they do and don’t know. Individuals can bootstrap their way from there, with scaffolding provided in the form of dictionaries, flashcard-makers and smart review platforms.
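To make the general idea concrete, here is a minimal sketch of how a known-vocabulary profile could drive content selection. This is not Lingua.ly’s actual backend; the function names, the simple tokenizer and the 95% coverage threshold (a figure often cited in extensive-reading research on comprehensible input) are all illustrative assumptions.

```python
# Sketch: filter candidate texts so that only those a learner can mostly
# read (i.e. a high share of known running words) are served as input.

import re

def tokenize(text):
    """Lowercase word tokens; rough but adequate for a coverage estimate."""
    return re.findall(r"[a-zA-Z']+", text.lower())

def coverage(text, known_words):
    """Fraction of running words the learner already knows."""
    tokens = tokenize(text)
    if not tokens:
        return 0.0
    return sum(1 for t in tokens if t in known_words) / len(tokens)

def pick_comprehensible(articles, known_words, threshold=0.95):
    """Keep articles whose known-word coverage clears the threshold."""
    return [a for a in articles if coverage(a, known_words) >= threshold]

# Example with a tiny working vocabulary and two candidate texts.
learner_vocab = {"the", "cat", "sat", "on", "a", "mat", "and", "dog", "ran"}
texts = ["The cat sat on the mat.", "Quantum chromodynamics confounds the cat."]
print(pick_comprehensible(texts, learner_vocab))  # only the first text passes
```

In a real system the vocabulary profile would of course be far larger and weighted by frequency and exposure data, but the principle is the same: the learner’s word knowledge, not a beginner/intermediate/advanced label, decides what counts as readable input.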

New Kinds of Tests

Yet despite new vocabulary-driven approaches to language learning, there hasn’t really been a parallel wave of novel approaches to measurement, at least not to my knowledge. Earlier this year, the University of Ghent released a research tool which spread like wildfire on social media as a fun, fast vocabulary test that tells you about your language ability in very little time. Its popularity was no surprise given it appeals to a generation of multi-tasking millennials who will do anything to avoid a three-hour exam. But it was more of a game than a real test.
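For readers curious how such quick tests can return a score in minutes, here is a hedged sketch of one common scoring approach for yes/no vocabulary tests: mix real words with pseudowords and correct the hit rate by the false-alarm rate. The word lists and the scoring function below are illustrative assumptions, not the Ghent team’s actual algorithm.

```python
# Sketch: score a yes/no vocabulary test. The learner marks which items they
# "know"; some items are pseudowords, so claiming to know those (false alarms)
# penalizes the score and discourages guessing.

def yes_no_score(responses, real_words):
    """Hit rate minus false-alarm rate, clipped to the range [0, 1]."""
    real = [w for w in responses if w in real_words]
    fake = [w for w in responses if w not in real_words]
    hit_rate = sum(responses[w] for w in real) / len(real) if real else 0.0
    fa_rate = sum(responses[w] for w in fake) / len(fake) if fake else 0.0
    return max(0.0, hit_rate - fa_rate)

# Example: four real words, two pseudowords, one false alarm on "flombic".
answers = {"house": True, "reluctant": True, "gregarious": False,
           "ubiquitous": True, "flombic": True, "sprale": False}
print(yes_no_score(answers, {"house", "reluctant", "gregarious", "ubiquitous"}))
# 0.75 hits - 0.5 false alarms = 0.25
```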

Can we take mobile exams and smartphone testing seriously? This past April, another startup popular with millennials, Duolingo, announced they were throwing their hat into the certified-testing ring with a mobile LT center available to people who use their big-data-fueled language learning platform. Duolingo’s lessons work via crowd-sourced translation, so it will be interesting to understand more as their approach to testing develops.

Conclusion

Circling back, I still strongly believe that the separation between the LT and SLA camps deserves more attention from everyone involved. With new digital approaches to learning, we need new, dynamic and complementary assessment measures and more cross-pollination of ideas between the two fields, both in practice and in the research community.

Imagine the benefits if language testers and second language acquisition researchers came together in the digital startup age! We’d not only have enhanced insight into test constructs and new takes on task construction (of particular importance for tests delivered in the digital medium) but also a new generation of learning and testing tools to help life-long language learners meet their goals.

Featured Photo Credit: Mimolalen via Compfight cc. Text added by ELTjam.
