Is adaptive learning ethical? Is it OK to experiment on one set of learners to improve results for another? Do EdTech companies have a higher duty of care to their users than regular businesses?

These are some of the questions that raced through my mind as I read that Facebook and Cornell University manipulated the emotional content of 689,003 users’ news feeds to discover what effect it would have on their emotions.

At first I was shocked. Why would a company deliberately manipulate the emotional state of their users, and then publicise it? And how could they possibly believe, as the study itself says, that:

“Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constitut[ed] informed consent for this research.”

Then I thought about how insignificant this must have seemed in that industry. In big data, everything is measurable – because interaction with the site is all that matters. Indeed, Facebook measured only the presence of key emotion-words in users’ posts; no one was interviewed. The user counts (and dollars) of big high-tech companies are so high that their evidence-based design leaves ELT in its pseudo-scientific dust (forgive the hyperbole – great session at IATEFL). For example, take Google A/B testing 41 shades of blue for the links on its search page, to see which one was most conducive to clicks. So is Facebook’s experiment just an A/B test, with the same aim as all the others (improve user retention, increase sales conversion, etc.)?
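For anyone outside that world, here is a minimal sketch of what such an A/B test boils down to, written in Python. The variant names, user IDs and click counts are invented for illustration; real systems differ in scale and plumbing, not in kind.

```python
import hashlib
import math

# Hypothetical "shades of blue" test: the only data collected is which
# variant a user was shown and whether they clicked.
VARIANTS = ["blue_a", "blue_b"]

def assign_variant(user_id: str) -> str:
    """Deterministic bucketing, so a returning user always sees the same variant."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % len(VARIANTS)
    return VARIANTS[bucket]

def two_proportion_p_value(clicks_a, views_a, clicks_b, views_b) -> float:
    """Two-sided p-value for the difference in click-through rates (z-test)."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(assign_variant("user-42"))                             # e.g. 'blue_b'
print(two_proportion_p_value(3412, 100_000, 3655, 100_000))  # small p => the rates differ
```

Notice how little is measured: a bucket assignment and a click, nothing about the person behind them.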

Is there actually a fundamental difference?

Or is it merely that, as James Grimmelmann, Professor of Law at the University of Maryland, puts it:

“This study is a scandal because it brought Facebook’s troubling practices into a realm – academia – where we still have standards of treating people with dignity and serving the common good.”

Which is where the ELT industry comes in. Could we get away with such an audacious manipulation of our students? Could an EdTech company do this to its users?

As a test, I propose a similar experiment for Knewton: manipulate the feedback given on 689,003 users’ responses to discover what effect it has on their future answers. Give users incorrect solutions, and see if their performance is affected. Give them questions that are too difficult, and see if other (easier) questions are answered well (and vice versa). Present inaccurate content and just see what happens.
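To be clear, that is a thought experiment, not a description of anything Knewton or any other adaptive platform actually does. But a rough sketch, with every name and the one-in-five figure invented, shows how little engineering the manipulation would take:

```python
import random

# Hypothetical experiment from the paragraph above; nothing here describes a real system.
CONDITIONS = ["control", "corrupted_feedback"]
assignments: dict[str, str] = {}  # in a real platform this would live server-side

def assign_condition(learner_id: str) -> str:
    """Randomly assign each learner to a condition once, then stick with it."""
    if learner_id not in assignments:
        assignments[learner_id] = random.choice(CONDITIONS)
    return assignments[learner_id]

def feedback_shown(learner_id: str, answer_was_correct: bool) -> bool:
    """The correctness signal the learner is shown, possibly falsified."""
    if assign_condition(learner_id) == "corrupted_feedback":
        if random.random() < 0.2:  # falsify roughly one answer in five
            return not answer_was_correct
    return answer_was_correct
```

Everything downstream (future answers, engagement, drop-out) could then be logged and compared across conditions, exactly as Facebook compared emotion-words.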

There would be clear educational value to the knowledge produced: understanding the effect of confidence and reinforcement on performance. Understanding the importance (or otherwise) of content accuracy. Future learners could benefit from this knowledge.

Somehow I don’t think Knewton will take me up on this (though I welcome comments below from anyone who works for an adaptive education company). An education company manipulating the education of its users is not like a marketing company – Facebook – manipulating the emotions of its users.

As Jaron Lanier puts it in the New York Times:

“It is unimaginable that a pharmaceutical firm would be allowed to randomly, secretly sneak an experimental drug, no matter how mild, into the drinks of hundreds of thousands of people, just to see what happens, without ever telling those people. […] Unfortunately, this seems to be [acceptable] when it comes to experimenting with people over social networks.”

And here’s that accepted view in the Guardian: We shouldn’t expect Facebook to behave ethically.

Should we expect EdTech companies to?
