New project – graphing knowledge

Throughout the history of education, people have recorded knowledge to be learnt. This knowledge is understood at different ‘levels’, and teachers, authors and curriculum designers have delivered it accordingly. The idea of levels is a useful heuristic that helps us deliver appropriate content to a given audience. However, like any heuristic, it is limited – our classifications are based on intuition, and a whole host of assumptions are made as we apply them to learners.

The difficulty is that a more precise model of knowledge is highly complicated – not the sort of thing that a human being, even one equipped with fantastic knowledge, experience and equipment, can easily use. Knowledge is a complicated set of interrelations, and an important factor is the development of concepts over time – my understanding of ‘solid’ is very different from an 8-year-old’s or a 13-year-old’s. It’s going to take something remarkable to surpass the intuition of an experienced teacher who, implicitly, probably has this type of understanding of their pupils.

I think I have come up with an original way of dealing with this, harnessing the ability of computers to manage, sort and visualise linked data. I am creating a database of terms defined at different levels of complexity that also maps their dependencies (i.e. what level of understanding of other terms is required to understand this term at this level).

Having created a set of (admittedly quite random) definitions for scientific terms, I’ve had a go at throwing them into a graph layout library (dagre) and got a fairly promising result.
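
For the curious, the input dagre works from is roughly this – a minimal sketch with placeholder terms and levels rather than my actual dataset:

```javascript
// Minimal sketch: laying out term dependencies with dagre.
// Term names and levels are placeholders, not real database entries.
const dagre = require("dagre");

const g = new dagre.graphlib.Graph();
g.setGraph({ rankdir: "BT" }); // prerequisites at the bottom, advanced terms above
g.setDefaultEdgeLabel(() => ({}));

// Each node is a term at a particular level of complexity.
g.setNode("particle@1", { label: "particle (level 1)", width: 140, height: 40 });
g.setNode("solid@1", { label: "solid (level 1)", width: 140, height: 40 });
g.setNode("solid@2", { label: "solid (level 2)", width: 140, height: 40 });

// Each edge reads: understanding the source is required for the target.
g.setEdge("particle@1", "solid@2");
g.setEdge("solid@1", "solid@2");

dagre.layout(g); // assigns x/y coordinates to every node

g.nodes().forEach((id) => {
  const { label, x, y } = g.node(id);
  console.log(`${label}: (${x}, ${y})`);
});
```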

So my immediate next steps are:

  • Get this into a live working model – I need to hook my CMS-based database up to the graphing/visualisation tools (harder than I’d wish, but still achievable)
  • Develop the datasets, potentially using categorisation to keep things manageable

My long term thoughts:

  • How to validate/refine the model – gather raw data from learners?
  • How to analyse this – can we spot threshold concepts? concepts that are so interrelated they must be taught in tandem? logical loops! (see the sketch after this list)
  • How to apply this – to inform teaching sequences? to develop assessment? to track progress?
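
On the logical-loops point, at least, the analysis is easy to automate: graphlib (which dagre bundles) can report cycles and tightly knotted clusters directly. Another sketch with placeholder terms:

```javascript
// Sketch: flagging dependency problems with graphlib (bundled with dagre).
// The loop below is planted deliberately for illustration.
const dagre = require("dagre");
const { alg, Graph } = dagre.graphlib;

const g = new Graph();
g.setEdge("particle@1", "solid@2");
g.setEdge("solid@2", "melting@1");
g.setEdge("melting@1", "solid@2"); // the planted loop

// Cycles are candidate "teach in tandem" clusters (or definition errors).
console.log(alg.findCycles(g)); // [["melting@1", "solid@2"]]

// Strongly connected components of size > 1 point to the same tangles.
console.log(alg.tarjan(g).filter((scc) => scc.length > 1));
```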


A temperature check

[Image: traffic light]

I’ve been beating the drum for why I think certainty-based assessment is a tool every teacher should use as an enhancement to testing. Today, I thought I’d come at this from another angle – rather than exploring these assessments alongside tests, I’ll consider them as an alternative to RAG (red-amber-green) rating and other ‘temperature-check’ exercises.

The popularity of RAG rating shows that teachers recognise the relevance of assessing pupil confidence in the taught material. It’s a quick way of gauging where a class is and what needs more explanation or rehearsal. It also gets pupils reflecting on their learning.

But, for assessment purposes, RAG rating doesn’t give accurate data:

  • It is not linked to any actual performance, so ‘Green’ could easily be masking serious misconceptions
  • There is nothing at stake, so it is likely to be confounded by the general confidence of the respondent
  • It’s confounded by the Dunning-Kruger effect (or regression to the mean, if you don’t completely buy into the Dunning-Kruger effect)

The first problem is not as serious as it sounds. No teacher is likely to use RAG-rating in isolation. However, the evidence that will identify the problem – test data or classwork – is, too often, reviewed after the event.

The second might be statistically addressed by using relative/ranking scales. Although this seems rather contrived, I could see a teacher using this as a way to decide which topics to cover in revision sessions.

The final issue cannot be resolved without bringing in some element of challenge.

So, whilst I’m not against RAG rating, I do see it as an opportunity missed. If you are going to the trouble of gathering feedback from your class, why not get accurate feedback? Naturally, you know my solution. By pairing a confidence scale with a question you can:

  • Differentiate between certainty and misconception
  • Motivate pupils to reflect on the depth of their understanding
  • Build a rich picture of the development happening in your classroom
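
To make that concrete (my illustration, not a prescribed format): alongside ‘Sound travels faster in water than in air – true or false?’, pupils also pick a certainty from 1 (guessing) to 3 (certain). A wrong answer at certainty 3 flags a misconception; the same wrong answer at certainty 1 is just an honest gap. That is a distinction no amount of red, amber or green can make.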

Image credit: FreeImages.com/Leanne Rook

Why bother measuring certainty?

It’s been a while since I blogged about the ideas that make me passionate about certainty-based assessment. In that time, I’ve had many conversations and a recurring topic is ‘why do something so complicated?’ I’ve covered off some of the boring answers about accuracy and reliability before, so today I’m going to elaborate on one of the more exciting practical applications – assessment for learning.

Moving beyond the idea that things are either learnt or not

It is true that, in life, you will often be judged solely on the outcome of the decisions you make: make the right call and you win; make the wrong call and you miss out. Education is a preparation for life, but we don’t always get the best result by mirroring such harsh realities. When using assessment for learning we should be looking for any measure that provides good evidence to inform the learning process – particularly measures that go beyond a single correct performance.

Teachers do this all the time, requiring much more than a single correct answer before moving on, using verbal questioning to probe understanding and analysing written work for evidence of comprehension. Assessing certainty is simply a mechanism for gaining the same sort of insight quickly and efficiently.

Flagging misconceptions

My background is in teaching Science. Rather than an absence of knowledge, a much more common starting point is incorrect knowledge. This makes effective Science teaching very much an art of prediction. Until you have taught a concept a few times, it can feel more like misconception whack-a-mole than a controlled delivery of new concepts.

There are, naturally, good books on common misconceptions that help, but this still leaves you needing to figure out which apply to your class. A regular quiz, set as a pre-test, might give you some clues but doesn’t differentiate well between misconception and ignorance. A certainty-based assessment categorises responses as correct-with-certainty / guess / misconception – exactly the information needed to plan effectively.
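
As a sketch of that triage (the three-point certainty scale and the cut-off are illustrative choices of mine – the principle matters more than the exact bands):

```javascript
// Sketch of the triage a certainty-based pre-test enables.
// The 1-3 certainty scale and the cut-off at 2 are illustrative choices.
function classifyResponse(isCorrect, certainty /* 1 = guessing ... 3 = certain */) {
  if (certainty < 2) return "guess"; // low certainty: right or wrong, weak signal
  return isCorrect ? "correct-with-certainty" : "misconception";
}

// A confident wrong answer is the planning gold: it names a misconception
// to tackle, rather than lumping it in with simple ignorance.
console.log(classifyResponse(false, 3)); // "misconception"
console.log(classifyResponse(true, 3));  // "correct-with-certainty"
console.log(classifyResponse(true, 1));  // "guess"
```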

More thinking

Whilst the first two reasons concerned the insights to be gained, this final point considers the value of certainty assessment as a learning activity in itself.

I’m not a psychologist so I use a simple rule of thumb – the more effort someone spends thinking about something, the more likely they are to learn it. Including the certainty scale (and ensuring it is meaningful by using motivational scoring) ensures that learners must spend extra effort considering their understanding for each answer they give. One Twitter correspondent described it as “MCQs on steroids”. I don’t know how big the amplification effect is but, as it comes for free with every question, I see little reason not to use this technique on a regular basis.

Are you interested in trying out a certainty-based assessment? I now have a free, Google-Forms-based method for delivery. I’m even happy to help you out with design and implementation. Let me know via Twitter if I can help.

A Google Forms-based prototype

When starting out on this project, one of my first thoughts was that I might harness Google Forms as a delivery mechanism. This would bring several advantages:

  • Uses existing infrastructure: Google gives all schools the ‘education’ version of apps for free
  • Pupil data sits within the school’s Google account: much less of a security concern

However, my early attempts failed and I went down a different route. In the intervening period Google have improved both Forms and Sheets such that it is now much more feasible to use them for delivery. Therefore, this evening I have put together a first stab at a certainty-scored quiz and it works pretty well!

How to use:

Make a copy of the Class Assessments spreadsheet. (Use your school Google account as there will be pupil data going into this spreadsheet. The copy will be private to your account).

  • Rename your copy to something more useful (e.g. class name for secondary, subject for primary)
  • Add the pupil names in the first row of the ‘Summary’ sheet

Now make a copy of the Template Test form.

  • Rename this with the test subject
  • Write some questions
  • Go to responses and click on the little green icon (Create spreadsheet)
  • In the popup (Select response destination) choose ‘Select existing spreadsheet’ and then choose your Class Assessments spreadsheet

Now pop back to the Class Assessments spreadsheet where a new sheet will have been created (Form responses 1).

  • Rename this (e.g. [Test name] responses)
  • Duplicate the ‘Analysis template’ sheet and rename it (e.g. [Test name])
  • Add the name of the responses sheet (i.e. [Test name] responses) to the top-left cell of the analysis sheet. You will know this has worked because the questions will appear along the top row.
  • In the cell beneath each question enter the correct answer

At this point, you are good to go. Share the test (form) with your class by whatever means you have.

The summary sheet

This is optional but, if you do a few of these tests, you may well want a single reference sheet that shows trends. To get it working, simply fill out the top row with the name of each test’s analysis sheet (i.e. [Test name] if you have followed my recommendations).
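
For anyone wondering what is going on under the hood: the template does this with ordinary sheet formulas, but the lookup amounts to something like the sketch below. It is written as Apps Script purely for illustration, and the assumed layout (pupil names in the first column, a total score in the last) is mine, not necessarily the template’s:

```javascript
// Illustration only: the template uses sheet formulas, but the summary
// lookup boils down to this. The layout assumed here (names in the first
// column, total score in the last) is my guess, not the template's spec.
function lookupPupilScore(analysisSheetName, pupilName) {
  const sheet = SpreadsheetApp.getActive().getSheetByName(analysisSheetName);
  if (!sheet) return null; // analysis sheet not created yet
  const rows = sheet.getDataRange().getValues();
  for (const row of rows) {
    if (row[0] === pupilName) return row[row.length - 1];
  }
  return null; // no match – the pupil spelled their name differently
}
```

This is also why the name-matching note below matters: the lookup can only succeed if pupils type their names exactly as they appear in the summary sheet.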

Notes:

  • Pupil names: I’ve built this on the premise that trying to get pupils to authenticate is an unnecessary complication. Instead, I have simply provided an ‘Enter your name’ question at the beginning. In the analysis sheet, pupils will appear in the order that they submit their answers. The summary sheet then looks up pupils by name in the analysis sheet, so to get this working slickly you should ensure pupils enter their names to match those you have in the summary sheet.
  • New tests: You can create your tests well in advance but you can only link one form at a time to a spreadsheet. When you need to switch, first unlink the old test (open the form, select responses, in the menu you’ll find ‘unlink form’), then follow the instructions above to link the next test to the spreadsheet.
  • Repeating tests: Forms retain their data so to repeat a test you need to either make a fresh copy or clear the data (riskier).

Where am I?

It’s probably fair to say that if you were hoping WDYRK was coming soon you might be disappointed…

Since starting this up to occupy my under-occupied entrepreneurial self, my daytime employer has decided (wisely) that I could be doing something more for them.

As well as that, I decided to move house.

So, apart from occasional blogging, don’t expect too much from me for a while.

Stealing the fun

Most people who have tried the prototype have found it a bit fiendish. My brother (with an MSci from Cambridge) had a slightly different experience.

His complaint was that the mechanism took the jeopardy out of quizzing. What he really enjoys in a quiz, apparently, is having to balance his level of certainty against a fixed scoring system. Being quite bright, he has always been a good guesser and most quizzes tend to reward this disproportionately.

This perhaps puts paid to my fanciful idea that this might make a good TV or social quiz mechanism. However, a little bit of fun may be a fair sacrifice in return for quality data.

Bad questions

I recently started work on the instruction manual for the site. It was all pleasingly straightforward (click on ‘New quiz’ to create a new quiz). However, when it came to writing questions I couldn’t leave it at that. I’ve spent many years writing MCQs, and there are just too many pitfalls that I wanted to steer users away from.

I was reminded of a post by Cathy Moore (@CatMoore, a US-based instructional designer): ‘Can you answer these 6 questions about multiple choice questions?’ It is the perfect introduction to what happens when people are forced to generate MCQs without expertise or quality control.

If we want to use MCQ regularly, there are several potential solutions to this issue:

  • Have professionals write the questions
  • Provide extensive training and writing time to teachers
  • Make the mechanism less susceptible to these issues

If you are a regular reader, you’ll know which one of these I think is the solution. By providing a tool that turns basic statements into challenging questions (and meaningful data) we give every teacher the power to use MCQ to best effect.

Knowing what they know

Today I watched David Weston’s rather good TEDx talk on developing teachers. It very clearly shines a light on the huge deficit in teacher development and is well worth watching if you have time.

But enough about him; I want to talk about the one thing he mentioned that links in to this tool. That thing … diagnostics.

One of the big advantages that experienced teachers have is their seemingly innate sense of what pupils know. It’s not innate, though. New teachers really struggle to pitch learning correctly, and this often undermines their efforts. After a few years of marking and evaluating their practice, they develop a much more finely tuned sense of what pupils – generally – will know and what they will struggle with.

High quality diagnostics could transform this situation.

Firstly, if new teachers have this information live in lessons, they will learn much more quickly where their pupils are at and be able to adapt their teaching accordingly.

Secondly, experienced teachers with access to much more specific data about pupil knowledge will maximise challenge and progress.

Of course, there are methods already in use – formative assessment techniques such as mini whiteboards – that sort of do this job. But the technology exists to do much better: ask better questions, and collect data more systematically.

In summary, I think I’d better crack on and build the reporting part of my tool.

P.S. The pilot is now open. If you want to have a try, start here.

Making formative assessment easy(er)

Quizzing is a great way of doing formative assessment. However, if you have ever tried to write a multiple-choice quiz, you will know it’s hard.

It’s really hard.

It looks easy but you soon learn the road is paved with pitfalls and bear traps.

Why?

Because of guessing.

Guessing is a massive confounding problem. To counteract it you have to:

  • Devise several plausible distracters
  • Avoid intuitive facts and anything pupils already half-know
  • Ask lots of questions
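
To put numbers on the problem: on a four-option question a pure guesser averages 25%, and a pupil who can eliminate two distracters averages 50% – indistinguishable, on that question, from someone who genuinely knows half the material. Only over many questions does real knowledge reliably separate from informed guessing.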

Many question writers find themselves being deliberately misleading as they try to find a non-obvious way of asking about a concept. This is both hard work and a source of invalid questions.

And it’s not just MCQ: I’ve been reading up on Dylan Wiliam and the idea of hinge questions*. Exactly the same problem exists for these quick checks as for a longer quiz – coming up with a question is surprisingly hard work. Why? Because it must be challenging to be useful and many questions (in the context of a lesson on the same subject) will be too easily answered through guesswork.

Certainty-marked assessments correct for guesswork. One of the biggest advantages this brings is that questions become easier to write. Even a lone true-false question (the easiest MCQ to write, but highly guessable) gains diagnostic significance if it can score between -6 and 3 marks. This means there are many more questions you can ask that have the diagnostic power you need to inform teaching and learning.
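
For the record, a marking scheme with that range looks like the sketch below. These particular values are one widely used choice rather than a fixed rule – the key property is that a high-certainty error costs more than a high-certainty success earns, so bluffing never pays:

```javascript
// One certainty-based marking scheme spanning -6 to 3 (the range above).
// These values are a common choice, not the only defensible one.
const SCORES = {
  1: { correct: 1, wrong: 0 },  // "guessing" – no penalty for errors
  2: { correct: 2, wrong: -2 },
  3: { correct: 3, wrong: -6 }, // "certain" – a wrong answer really hurts
};

function mark(isCorrect, certainty /* 1-3 */) {
  const band = SCORES[certainty];
  return isCorrect ? band.correct : band.wrong;
}

// On a 50:50 true-false item, guessing at certainty 3 has an expected
// score of (3 + -6) / 2 = -1.5, so claiming maximum certainty on a
// hunch is a losing strategy.
console.log(mark(false, 3)); // -6
console.log(mark(true, 1));  //  1
```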

Using a certainty-scoring system makes questions harder in a valid way and without additional teacher effort. This is exactly what edtech should achieve: teachers’ practice is more effective and their job is made easier by technology.

*So much so that there’s probably a whole post to write about how certainty-marked assessments – and the data they produce – would work for this purpose.