Questions as feedback

I’ve recently been mulling when and how to provide quiz feedback.

The general principle of providing feedback is sound: logically, we do have to inform a learner when something they are doing is incorrect. The form this feedback should take, however, is debatable. Some options are:
  • Right / wrong only
  • Explanation of why the right answer is right
  • Explanation of why wrong answers are wrong
  • Something new that I’ve just thought of – keep reading!
An issue here is that there’s quite a lot of effort required to write explanations. You could end up writing a textbook’s worth of content that is rarely even seen. I’m also unconvinced of the value of such explanations in many cases. Many misconceptions persist despite pupils being told the correct answer.

Timing

An aspect of feedback that is much easier to configure is when to deliver it. Various sources have convinced me that a small delay in feedback might have a significant impact on how much the question is thought about. There’s something intuitively right about the idea that a learner might treat a question superficially if the answer is easily and rapidly available. Therefore, I’m aiming for an experience more like the common classroom model:
  1. Pupils respond to a batch of questions first
  2. Answers are then shared for marking

Unlike the classroom experience, I’d provide the score before the question-by-question review. I’d also provide nothing more than a correctness indication for questions that were answered correctly.

So what is my new idea…

Feedback questions

As mentioned before, feedback explanations are effortful and, possibly, not very effective. A smarter way to follow up would be to ask a further question. (If you’re UK-based and reading in 2017, think of it like automated green pen work.)

This further question would be selected to investigate the nature of the misconception – asking about the same concept in an alternative or simpler way (there’s a rough sketch after the list below).

  • For the learner, this should make them think carefully about why they answered the original question incorrectly.
  • For the teacher, it provides an additional piece of diagnostic information that will help them to understand the pupil’s original misconception.
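To make the idea concrete, here is a rough sketch of how a follow-up question could hang off each wrong option. This is TypeScript purely for illustration; the field names and the example content are mine, not the QuestionsDB schema.

```typescript
// Hypothetical shapes linking each distractor to a follow-up question
// that probes the misconception behind it. Names are illustrative only.

interface Option {
  text: string;
  correct: boolean;
  // Question to offer if this (wrong) option is chosen.
  followUpId?: string;
}

interface Question {
  id: string;
  prompt: string;
  options: Option[];
}

// Example: each wrong answer about dissolving points at a simpler question
// targeting the likely misconception behind that particular choice.
const q42: Question = {
  id: "q42",
  prompt: "Salt is stirred into water until it dissolves. What happens to the mass of the mixture?",
  options: [
    { text: "It stays the same", correct: true },
    { text: "It decreases", correct: false, followUpId: "q17-conservation-of-mass" },
    { text: "It increases", correct: false, followUpId: "q08-what-is-dissolving" },
  ],
};
```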

Driving creation and reuse

Writing these follow-up questions may sound harder than writing feedback explanations – it is! The difference is that you can (and will) draw on existing questions (reuse), and any that you do create will be useful in their own right (the database becomes more complete).

The journey…

What does it look like when it’s all put together…

  1. Pupil answers a series of questions (one-at-a-time)
  2. Pupil sees their score
  3. Pupil is invited to review answers (now presented as a list with their responses shown and a correctness indication – tick/cross)
  4. Where a question has been answered incorrectly, a further question is offered – this time with instant feedback. (This could even be threaded – continuing to ask relevant questions until a correct answer is given; see the sketch after this list.)
  5. Once the related question has been answered, pupils may reattempt the original question.
  6. Teacher can review all interactions/responses via the dashboard.
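For steps 4 and 5, a minimal sketch of the review loop might look like the following. The types and the loadQuestion/askQuestion helpers are hypothetical stand-ins for whatever the quiz front end actually provides.

```typescript
// Sketch of the review loop for one incorrectly answered question.
interface ReviewOption { text: string; correct: boolean; followUpId?: string }
interface ReviewQuestion { id: string; prompt: string; options: ReviewOption[] }

async function reviewIncorrect(
  original: ReviewQuestion,
  firstChoice: ReviewOption,
  loadQuestion: (id: string) => Promise<ReviewQuestion>,
  askQuestion: (q: ReviewQuestion) => Promise<ReviewOption>, // shows instant feedback
): Promise<ReviewOption> {
  let chosen = firstChoice;

  // Step 4: thread follow-up questions until one is answered correctly
  // (or there is no relevant follow-up left to offer).
  while (!chosen.correct && chosen.followUpId) {
    const followUp = await loadQuestion(chosen.followUpId);
    chosen = await askQuestion(followUp);
  }

  // Step 5: the pupil may now reattempt the original question.
  return askQuestion(original);
}
```

The while loop is what gives the threading: each wrong choice can point at its own follow-up, so the chain continues until the pupil answers something correctly or runs out of relevant questions.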

*QuestionsDB is a content management system I’m building for the back-end of the quiz.

This is why we don’t really want textbooks back…

This is a response to Ben Newmark’s blog Where did the textbooks go?

Back when I taught, I used textbooks and published scheme resources fairly extensively. There was some textbook scepticism but, particularly after doing supply, I learnt that this was a pretty easy way to keep on top of planning and deliver a decent standard of lesson without working myself into the ground. As a Science teacher, following schemes also helped greatly with logistics such as equipment ordering and risk assessment.

So, why am I writing a response to Newmark’s blog? Why would I deny the teachers of today the convenience I enjoyed? Here’s why:

  • Constant change is the future
  • Technology moves on: It’s book vs device
  • Boards are not the problem
  • The world of work is moving on from paper

Constant change is the future (whatever the apparent short-term outlook)

A big factor in textbooks becoming unpopular was the rate at which curricula and specifications change. Ben notes that this may be about to abate. Without wanting to be pessimistic, I think we can all be confident that change will continue. Even so, I don’t see any reason why a slowdown would improve the textbook situation. Publishers will not be investing more at this point: they update titles to match specifications at release, then become just sales and delivery operations.

The ‘spec-book’ format may seem like cynical planned redundancy, but it is actually optimised for customer demand. Given the choice between a book that aligns to the modules you are teaching and one that doesn’t, are you really going to create work for yourself? No – you let a professional resource writer/compiler/graphic designer and editor collect together the bits you need and take things forward from there.

The real issue they create is that you bind yourself to a specification for however many years you hope the books will last. What would be infinitely preferable is a subscription to quality resources that can be switched year to year.

An aside: I once proposed to Nelson Thornes that they flip the emphasis of their books – they seemed to put all the good stuff in side panels whilst the main text was a regurgitation of the spec. It was the early days of the iPad and we didn’t get beyond proof-of-concept, but I still believe there’s mileage in the idea.

Technology moves on: It’s book vs device

When we talk about textbooks as if they are some perfect formulation, we forget how much they have changed in recent decades. Publishers have continually introduced new printing technology to improve textbooks: picture inserts, inline pictures, monochrome and then colour printing, two-sided colour printing. They have also evolved the format from ‘ultra-dry’ text to the double-page spread (with exercises).

It’s fairly obvious that devices should be the next step here. There are countless useful ways in which a screen can present information more clearly. I’m concerned and disappointed by the evidence against personal devices in the classroom. I’m convinced the detrimental factors (whether distraction, display quality or something else) will be overcome so that, eventually, pupils carry one light, robust personal device rather than having crates/bags of textbooks.

The killer reason for devices, however, is not replacing textbooks; it is replacing workbooks. So much teacher marking is menial – the exercises pupils complete could be machine-marked and recorded. The benefits are huge: feedback can be instant (or timed for whatever point has the most learning effect), teacher workload is reduced, and the response data is live – you could literally watch how far each pupil had progressed through their classwork.

Boards are not the problem

There’s a suggestion that boards have been used as a substitute for textbooks. It seems a bit of a stretch – boards, even with projectors, seem to me to be used for the same purposes boards have always been used for. The truth is sadder – pupils simply haven’t been expected to read very much.

Ben makes a fair criticism of the IWB roll-out. As implemented, IWBs haven’t really delivered benefits and have driven workload up. It’s a big shame, because there are obvious improvements they could drive:

  • Providing reference material for classes
  • Saving for re-use year on year
  • Sharing with others
  • Professionally produced board resources

The roll-out of IWBs coincided with a (government-agency-driven) view of how teachers should plan, resource and teach. ‘Good’ whiteboard use was taken to mean prepared gimmicks rather than the far simpler idea of just using it to save boards.

I do find it slightly ironic that Ben, who produces such beautiful whiteboards for his videos, dismisses the value of savable front-of-class resources and the potential of sharing/reuse.

I should add, I’m from the OHP/T generation. I had printable transparencies that saved my classes from the horror of my handwriting. Although my teaching career was short, many of these did also get reused (I’m good at filing).

 

The world of work is genuinely moving on from paper

I find the ‘just google it’ crowd as annoying as anyone does. But that particular fallacy doesn’t discredit the internet as a vehicle altogether. I visit a lot of workplaces. Paper is almost unthinkable for most business functions, such as the transmission and collection of information. Yes, people use whiteboards, notebooks and sticky notes too, but these are secondary. Anything that needs storing is photographed.

How different should schools be? Certainly, I would want pupils to get plenty of practice at handwriting, and I would want them to be able to access diverse books in a library. But if, in ten years’ time, schools are still ploughing funds into paper textbooks, I fear we will have missed a huge opportunity to do things better and smarter. Why is this a problem? Because another part of the world will crack it first, and we will end up subscribing to their model and being railroaded into imitating someone else’s education system.

New project – graphing knowledge

Throughout the history of education, people have recorded knowledge to be learnt. This knowledge is understood at different ‘levels’, and teachers, authors and curriculum people have delivered it accordingly. The idea of levels is a useful heuristic that helps us deliver appropriate content to a given audience. However, like any heuristic, it is limited – our classifications are based on intuition, and a whole host of assumptions are made as we then apply them to learners.

The difficulty is that a more precise model of knowledge is highly complicated – not the sort of thing that a human being, even one equipped with fantastic knowledge, experience and equipment, is able to use easily. Knowledge is a complicated set of interrelations, and an important factor is the development of concepts over time – my understanding of ‘solid’ is very different from an 8-year-old’s or a 13-year-old’s. It’s going to take something remarkable to surpass the intuition of an experienced teacher who, implicitly, probably has this type of understanding of their pupils.

I think I have come up with an original way of dealing with this, harnessing the ability of computers to manage, sort and visualise linked data.  I am creating a database of terms defined at different levels of complexity that also maps their dependencies (i.e. what level of understanding of other terms is required to understand this term at this level).
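For illustration, a single record in that database might look something like the sketch below (TypeScript; the field names are my own invention rather than the actual schema).

```typescript
// Illustrative shape for one leveled term definition and its prerequisites.
interface TermDefinition {
  term: string;          // e.g. "melting"
  level: number;         // 1 = early meaning, higher = more refined
  definition: string;
  // Other terms (at a given level) assumed by this definition.
  requires: { term: string; level: number }[];
}

const example: TermDefinition = {
  term: "melting",
  level: 2,
  definition: "The change of state from solid to liquid as particles gain energy.",
  requires: [
    { term: "solid", level: 2 },
    { term: "liquid", level: 2 },
    { term: "particle", level: 1 },
  ],
};
```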

Having created a set of (admittedly, quite random) definitions for scientific terms, I’ve had a go at throwing them into a graph layout library (dagre) and got a fairly promising result.
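The layout step itself can be quite small. A rough sketch of feeding (term, level) nodes and their dependency edges into dagre – with hard-coded placeholder data standing in for the CMS – might look like this:

```typescript
import * as dagre from "dagre";

const g = new dagre.graphlib.Graph();
g.setGraph({ rankdir: "BT" });        // prerequisites laid out below dependents
g.setDefaultEdgeLabel(() => ({}));

// One node per (term, level); dagre needs a width/height for each box.
for (const id of ["particle@1", "solid@2", "liquid@2", "melting@2"]) {
  g.setNode(id, { label: id, width: 120, height: 40 });
}

// One edge per dependency: prerequisite -> dependent term.
g.setEdge("solid@2", "melting@2");
g.setEdge("liquid@2", "melting@2");
g.setEdge("particle@1", "solid@2");
g.setEdge("particle@1", "liquid@2");

dagre.layout(g);

// dagre only assigns coordinates; rendering is a separate step.
for (const id of g.nodes()) {
  const { x, y } = g.node(id);
  console.log(id, x, y);
}
```

Rendering the boxes and arrows (SVG, canvas or a wrapper such as dagre-d3) then sits on top of these coordinates.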

So my immediate next steps are:

  • Get this into a live working model – I need to hook my CMS-based database up to the graphing/visualisation tools (harder than I’d wish, but still achievable)
  • Develop the datasets, potentially using categorisation to keep things manageable

My long term thoughts:

  • How to validate/refine the model – gather raw data from learners?
  • How to analyse this – can we spot threshold concepts? Concepts so interrelated they must be taught in tandem? Logical loops?
  • How to apply this – to inform teaching sequences? to develop assessment? to track progress?

 

A temperature check

[Image: traffic light]

I’ve been beating the drum for why I think certainty-based assessment is a tool every teacher should use as an enhancement to testing. Today, I thought I’d try to come at this from another angle – rather than exploring these assessments as an alternative to tests, to consider them as an alternative to RAG (red-amber-green) rating and other ‘temperature-check’ exercises.

The popularity of RAG rating shows that teachers recognise the relevance of assessing pupil confidence with the taught material. It’s a quick way of gauging where a class is and knowing what needs more explanation or rehearsal. It also gets pupils reflecting on their learning.

But, for assessment purposes, RAG rating doesn’t give accurate data:

  • It is not linked to any actual performance so ‘Green’ could easily be masking serious misconceptions
  • There is nothing at stake so it is likely to be confounded by the general confidence of the respondent
  • It’s confounded by the Dunning-Kruger effect (or regression to the mean, if you don’t completely buy into the Dunning-Kruger effect)

The first problem is not as serious as it sounds. No teacher is likely to use RAG-rating in isolation. However, the evidence that will identify the problem – test data or classwork – is, too often, reviewed after the event.

The second might be statistically addressed by using relative/ranking scales. Although this seems rather contrived, I could see a teacher using this as a way to decide which topics to cover in revision sessions.

The final issue cannot be resolved without bringing in some element of challenge.

So, whilst I’m not against RAG rating, I do see it as an opportunity missed. If you are going to the trouble of gathering feedback from your class, why not get accurate feedback? Naturally, you know my solution. By pairing a confidence scale with a question you can:

  • Differentiate between certainty and misconception
  • Motivate pupils to reflect on the depth of their understanding
  • Build a rich picture of the development happening in your classroom

 

Image credit: FreeImages.com/Leanne Rook

 

Why bother measuring certainty?

It’s been a while since I blogged about the ideas that make me passionate about certainty-based assessment. In that time, I’ve had many conversations and a recurring topic is ‘why do something so complicated?’ I’ve covered off some of the boring answers about accuracy and reliability before so, today, I’m going to elaborate on one of the more exciting practical applications – assessment for learning.

Moving beyond the idea that things are either learnt or not

It is true that, often in life, you will be judged solely on the outcome of the decisions you make: make the right call and you win; make the wrong call and you miss out. Education is a preparation for life, but we don’t always get the best result by mirroring such harsh realities. When using assessment for learning, we should be looking for any measure that provides good evidence to inform the learning process – particularly measures that go beyond a single correct performance.

Teachers do this all the time, requiring much more than a single correct answer before moving on, using verbal questioning to probe understanding and analysing written work for evidence of comprehension. Assessing certainty is simply a mechanism for gaining the same sort of insight quickly and efficiently.

Flagging misconceptions

My background is in teaching Science. A much more common starting point than an absence of knowledge is incorrect knowledge, which makes effective Science teaching very much an art of prediction. Until you have taught a concept a few times, it can be more like misconception whack-a-mole than a controlled delivery of new concepts.

There are, naturally, good books on common misconceptions that help, but this still leaves you needing to figure out which apply to your class. A regular quiz, set as a pre-test, might give you some clues but doesn’t differentiate well between misconception and ignorance. A certainty-based assessment categorises responses as correct-with-certainty / guess / misconception – exactly the information needed to plan effectively.
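As a sketch of that categorisation – the three-point confidence scale and the mark scheme below are illustrative choices of mine, not necessarily what the site uses:

```typescript
// Illustrative categorisation of a certainty-rated response.
type Confidence = 1 | 2 | 3;
type Category = "correct-with-certainty" | "guess" | "misconception";

function categorise(correct: boolean, confidence: Confidence): Category {
  if (confidence === 1) return "guess"; // low confidence either way
  return correct ? "correct-with-certainty" : "misconception";
}

// One possible "motivational" mark scheme: higher confidence earns more when
// right but costs more when wrong, so honest reporting is the best strategy.
// The exact numbers here are only an example.
const marks: Record<Confidence, { right: number; wrong: number }> = {
  1: { right: 1, wrong: 0 },
  2: { right: 2, wrong: -2 },
  3: { right: 3, wrong: -6 },
};

// A confidently wrong answer is the one worth planning a lesson around.
console.log(categorise(false, 3), marks[3].wrong); // "misconception" -6
```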

More thinking

Whilst the first two reasons I provided considered the insights gained, this final point considers the value of certainty assessment as a learning activity.

I’m not a psychologist, so I use a simple rule of thumb – the more effort someone spends thinking about something, the more likely they are to learn it. Including the certainty scale (and making it meaningful by using motivational scoring) ensures that learners must spend extra effort considering their understanding for each answer they give. One Twitter correspondent described it as “MCQs on steroids”. I don’t know how big the amplification effect is but, as it comes for free with every question, I see little reason not to use this technique on a regular basis.

Are you interested in trying out a certainty-based assessment? I now have a free, Google Forms-based method for delivery. I’m even happy to help you out with design and implementation. Let me know via Twitter if I can help.

A Google Forms-based prototype

When starting out on this project, one of my first thoughts was that I might harness Google Forms as a delivery mechanism. This would bring several advantages:

  • Uses existing infrastructure: Google gives all schools the ‘education’ version of apps for free
  • Pupil data sits within the school’s Google account: much less of a security concern

However, my early attempts failed and I went down a different route. In the intervening period Google have improved both Forms and Sheets such that it is now much more feasible to use them for delivery. Therefore, this evening I have put together a first stab at a certainty-scored quiz and it works pretty well!

How to use:

Make a copy of the Class Assessments spreadsheet. (Use your school Google account as there will be pupil data going into this spreadsheet. The copy will be private to your account).

  • Rename your copy to something more useful (e.g. class name for secondary, subject for primary)
  • Add the pupil names in the first row of the ‘Summary’ sheet

Now make a copy of the Template Test form

  • Rename this with the test subject
  • Write some questions
  • Go to responses and click on the little green icon (Create spreadsheet)
  • In the popup (Select response destination) choose ‘Select existing spreadsheet’ and then choose your Class Assessments spreadsheet

Now pop back to the Class Assessments spreadsheet where a new sheet will have been created (Form responses 1).

  • Rename this (e.g. [Test name] responses)
  • Duplicate the ‘Analysis template’ sheet and rename it (e.g. [Test name])
  • Add the name of the responses sheet (i.e. [Test name] responses) to the top-left cell of the analysis sheet. You will know this has worked because the questions will appear along the top row.
  • In the cell beneath each question enter the correct answer

At this point, you are good to go. Share the test (form) with your class by whatever means you have.
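If you are curious what the analysis step amounts to, here is the same logic written out as an Apps Script-style function (TypeScript flavour, as used with clasp). This is purely illustrative – the sheet names, column layout and mark scheme below are my assumptions, and the template already does this work for you with sheet functions.

```typescript
// Illustrative only: read form responses and write name + certainty score
// to an analysis sheet. Assumes columns alternate answer / confidence and
// that confidence is on a 1-3 scale.
function scoreResponses(): void {
  const ss = SpreadsheetApp.getActive();
  const rows = ss.getSheetByName("Form responses 1")!.getDataRange().getValues();

  const header = rows[0]; // Timestamp, Name, Q1 answer, Q1 confidence, ...
  const answerKey: Record<string, string> = { Q1: "B", Q2: "A" }; // illustrative

  const out: (string | number)[][] = [];
  for (const row of rows.slice(1)) {
    const name = String(row[1]);
    let score = 0;
    for (let c = 2; c < header.length; c += 2) {
      const question = String(header[c]);
      const correct = String(row[c]) === answerKey[question];
      const confidence = Number(row[c + 1]);
      score += correct ? confidence : -confidence; // placeholder mark scheme
    }
    out.push([name, score]);
  }
  ss.getSheetByName("Analysis")!.getRange(2, 1, out.length, 2).setValues(out);
}
```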

The summary sheet

This is optional but, if you do a few of these tests, you may well want a single reference that shows trends. To get it working, simply fill out the top row with the names of your analysis sheets (i.e. [Test name], if you have followed my recommendations).

Notes:

  • Pupil names: I’ve built this on the premise that trying to get pupils to authenticate is an unnecessary complication. Instead, I have simply provided an ‘Enter your name’ question at the beginning. In the analysis sheet, pupils will appear in the order that they submit their answers. The summary sheet looks pupils up by name from the analysis sheet, so to get this working slickly you should ensure pupils enter their names to match those in the summary sheet.
  • New tests: You can create your tests well in advance but you can only link one form at a time to a spreadsheet. When you need to switch, first unlink the old test (open the form, select responses, in the menu you’ll find ‘unlink form’), then follow the instructions above to link the next test to the spreadsheet.
  • Repeating tests: Forms retain their data so to repeat a test you need to either make a fresh copy or clear the data (riskier).

Where am I

It’s probably fair to say that if you were hoping WDYRK was coming soon you might be disappointed…

Since I started this up to occupy my under-occupied entrepreneurial self, my daytime employer has decided (wisely) that I could be doing something more for them.

As well as that, I decided to move house.

So, apart from occasional blogging, don’t expect too much from me for a while.

Stealing the fun

Most people who have tried the prototype have found it a bit fiendish. My brother (with an MSci from Cambridge) had a slightly different experience.

His complaint was that the mechanism took the jeopardy out of quizzing. What he really enjoys in a quiz, apparently, is having to balance his level of certainty against a fixed scoring system. Being quite bright, he has always been a good guesser and most quizzes tend to reward this disproportionately.

This perhaps puts paid to my fanciful idea that this might make a good TV/social quiz mechanism. Then again, maybe a little bit of fun is a fair sacrifice in return for quality data.

 

Bad questions

I recently started work on the instruction manual for the site. It was all pleasingly straightforward (click on ‘New quiz’ to create a new quiz). However, when it came to writing questions, I couldn’t leave it at that. I’ve spent many years writing MCQs and there are just too many pitfalls that I wanted to steer users away from.

I was reminded of a post by Cathy Moore (@CatMoore, a US-based instructional designer): Can you answer these 6 questions about multiple choice questions? It is the perfect introduction to what happens when people are forced to generate MCQs without expertise or quality control.

If we want to use MCQs regularly, there are several potential solutions to this issue:

  • Have professionals write the questions
  • Provide extensive training and writing time to teachers
  • Make the mechanism less susceptible to these issues

If you are a regular reader, you’ll know which one of these I think is the solution. By providing a tool that turns basic statements into challenging questions (and meaningful data), we give every teacher the power to use MCQs to best effect.