Today was the final day of the ALTA Machine Learning Summer School in Crete and it was spent with a practical session looking at the Write & Improve API, and then a reflection on what we learned from the week and some practical steps we can take when back in the real world next week!

(You can read the summaries of the other days here: Day 1, Day 2, Day 3, Day 4)

 

The Write & Improve API

After a morning session looking in more detail at the Feebu app from Alphary, the W&I team gave us some information about the API and how it works, before setting up a practical session where we could have a play around with it ourselves. The W&I team are keen to get more people using the underlying software and so are making the API available to anyone for research and development purposes. This will allow educators, schools, product teams and others to explore different ideas for how the functionality could be used in a way that suits them.

The API is RESTful: all communication goes over HTTPS, using appropriate HTTP verbs and URIs. Responses use standard HTTP status codes, and data is returned in JSON format. A PUT request lets you submit text or speech responses to prompts, along with some metadata, while a GET request returns the overall score, plus details of each error detected (its error code and a suggested correction, where available) and a quality score for each sentence in the text. The data returned by the GET request varies depending on whether text or speech was submitted.
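To give a feel for what consuming that GET response might look like, here is a minimal sketch in Python. The JSON shape, field names and error code below are illustrative assumptions based on the description above, not the documented schema; check the official API documentation for the real one.

```python
import json

# A hypothetical example of the JSON a GET request might return for a
# submitted text -- the field names and the "AGV" error code are
# assumptions for illustration, not the documented schema.
sample_response = json.loads("""
{
  "overall_score": 7.5,
  "sentences": [
    {
      "text": "She go to school every day.",
      "quality": 0.62,
      "errors": [
        {"code": "AGV", "span": "go", "suggestion": "goes"}
      ]
    },
    {
      "text": "I like learning English.",
      "quality": 0.97,
      "errors": []
    }
  ]
}
""")

# Walk the response: overall score, then per-sentence quality and errors.
print("Overall score:", sample_response["overall_score"])
for sentence in sample_response["sentences"]:
    print(f'- "{sentence["text"]}" (quality {sentence["quality"]})')
    for err in sentence["errors"]:
        print(f'    error {err["code"]}: "{err["span"]}" -> "{err["suggestion"]}"')
```

In practice you would fetch this JSON over HTTPS with your preferred HTTP client and credentials; the point here is simply that a score, per-sentence quality values and per-error suggestions are all available for an app to render however suits it.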

You can see full documentation of the API here. If you would like to be set up with an account to try it out yourself, please contact sara [at] englishlanguageitutoring [dot] com.

The correction of my text and a printout of the API response to the GET call

To give us a better idea of how this works in practice, Marek built a basic app that let us visualise the JSON data from the GET request in different ways in the browser. We were also able to change the prompts and the texts we submitted, so we could get a feel for the full experience. It was an effective way to demonstrate the power of the processing, and it was great to hear so many people in the room discussing possible applications for the API.

Part of this move from ALTA to open up access is that they want to hear from people about what works for them, and to learn what other data might be useful or helpful. As the models have been trained on Cambridge exam-style prompts, the feedback and correction will be most effective on that sort of text, but if people using the API see a use case for which there is currently no model, the ALTA team are open to discussions about getting the data needed to train models for other purposes. I think Simon le Maistre has plans already for some sort of ELT recipe product! Watch this space…

Final Thoughts

The final day ended with an ELTjam session reflecting on the week and thinking about what it all might mean for learners, teachers and the industry. A few key points to come out of the short discussion:

  • There’s so much powerful potential in these technologies, and it’s our role to help it filter down to learners and teachers so that it benefits them
  • It’s great to get people from various disciplines in the same space to work together. This approach has great potential for the development of more effective and impactful language learning products and services. We need to continue this collaboration.
  • Efficiency for both teachers and learners was mentioned. Automated error correction could save a lot of time spent marking, waiting for results and actioning feedback. The quicker this loop is for learners and teachers, the better.
  • We need to be careful though not to oversell the technology, and to be clear about what it can and can’t do. There is no silver bullet. This is especially the case when it comes to skills vs knowledge: a lot of the applications that could come from this sort of technology will help improve knowledge of English, and may contribute to accuracy, but won’t by themselves develop the skill of using the language.
  • There may be resistance from some teachers who feel this technology could replace them, but this can be countered by aiming the technology at the teachers in the first place. These are tools that can save teachers time and help them with their work, and this should be highlighted.
  • It’s important to keep thinking of the wider possible implications and potential use cases for NLP and Machine Learning in language education. Error detection and correction is one aspect and application; what are the others? – This was added by me as I write this up!

Finally we all brainstormed the things we want to do in the next week or two as a result of our time here in Crete. The task then was to prioritise the top 3 and then set up a meeting with our groups to check in in a couple of weeks and see how we were progressing through our task list.

As I was presenting I didn’t have a group to help with accountability, so I’ll share my top three here:

  1. Set up a workshop with the rest of the ELTjam team to brainstorm possible uses of the W&I API
  2. Give all the posts from the week a proper edit and tidy up! Plus respond to any comments or questions
  3. Follow up with all the people here I discussed meeting up or collaborating with in the future

 

It’s been a great experience and so much food for thought. Thanks to ALTA and all the organisers for an excellent event. I’m sure I speak for all of us when I say that we’re already looking forward to the next one. But for now, it’s time to relax…

 
