Life after levels: where SLT fear to tread

Schools straitjacket themselves into levels; SLTs should cast off the shackles.

 

[Image: Zombie levels]

 

Rebecca: There is prodigious danger in the seeking of loose spirits. I fear it, I fear it. Let us rather blame ourselves and-

Putnam: How may we blame ourselves?

Arthur Miller’s The Crucible, Act One

 

Are rumours of the death of national levels greatly exaggerated? Since 2010, the DfE has announced its intention to remove levels and not replace them, and it has now done so at Key Stage 3. As long ago as 2008, national tests at the end of Key Stage 3 (ages 11-14) were scrapped. But as education blogger and school leader Keven Bartle graphically points out in his magnificent post Spirit Levels: Exorcising The Ghost of Assessment Past, old regimes die hard:

‘National Curriculum Levels are dead. In secondary schools, at KS3, they have been dead for years now. Nobody much will miss NC Levels. They were a curriculum assassin and an accountability albatross rooted in suspect reliability and minimal validity. For teachers they gradually telescoped from being an end of key stage annoyance (their original intention) to being a half-termly assessment burden to being a daily lesson-by-lesson imposition to finally being a bitesize, Ofsted-friendly, 20 minute nugget of learning soundbite surrender.


‘And yet!!! Like Hamlet’s father or Banquo, the ghost of levels walks among us still, torturing us with the feeling that we should be doing something about it. They sit on our shoulders as we write schemes of work. They whisper in our ears as we construct lesson plans. They speak through us at Parents’ Evenings like Patrick Swayze communicates through Whoopi Goldberg. They are dead and yet they will not go away.’


The reasons why levels linger on to haunt us are understandable. To borrow from Kev, the DfE are caught between the Scylla of dependency on accountability and the Charybdis of mistrust in teacher judgement. Many SLTs and teachers appear unable to think outside of levelled assessment. Kev again illuminates:

‘SLT, particularly at KS3, have failed to rise up to the challenge presented by the absence of KS3 tests. This is hardly surprising given the retention of the annual end-of-KS data collection phantasm and the eternally spectral nature of the Ofsted inspection regime (and here we have to remember that Ofsted are as reliant upon these ghosts as we are, given the limited nature of their inspection processes). But perhaps, at the very least, we ought to have exorcised a reliance on levels at years 7 and 8 where they have always been an unwanted visitation in many respects (remember the days before the Optional Tests anyone?).

‘Teachers, sometimes ghostly themselves in their removal from the decision-making processes nationally and even locally, have seemed to find solace and company in the presence of the eerily familiar. How many of our brightest and best teachers have grown up in a profession dominated by levels and grades, unable to conceptualise a time when they simply didn’t exist, and where students learned qualitatively rather than progressed quantitatively.’


Another great blogger, Caroline Osborne, says that though levels are ‘unwieldy, untrustworthy, unfathomable and, ultimately, unworkable’, ‘like heroin addicts sitting in a side-room of Lloyds Chemist’s, awaiting their Methadone dose, some school leaders and teachers seem to be struggling to imagine the horrible cold-turkey-effect of a life without levels’.

She shares a revealing conversation:

Me: Of course, levels no longer exist so a lot of that data is now irrelevant.

Boss: We’re still using the levels next year: we must have some way of assessing the kids…

Me: Even the sub-levels?! But they’ve all been scrapped!

Boss: Yes. (slightly irked) We haven’t been given anything else…

 

‘How may we blame ourselves?’ This conversation is now being repeated in schools up and down the country. Life after levels is where SLT fear to tread. An analogy from a Guardian Secret Teacher strikes me as apt here:

 “We are like prisoners abandoned in a cell for so long that they no longer need to lock the door. Rattle the cage and we might discover that the door swings open and the warders have left.”


Schools and SLTs feel they lack the time and capacity to reinvent the entire assessment system. Frankly, it is also a matter of willpower: little matters more than how pupils’ learning is assessed.

In this climate, in which even the greatest schools I have visited or taught at, such as Burlington Danes, Hammersmith; King Solomon, Westminster; and Dunraven, Streatham, remain thoroughly wedded to their zombie brides of national levels, it is worth exploring why levels are so fatally flawed and damaging for teaching and learning. With this in mind, we might have the courage of conviction required to reinvent assessment.

Tom Bennett is scathing on this:

1. The descriptors are so vague they suggest Alzheimer’s.

2. Different teachers can give the same piece of work a different level.

3. Different schools can give the same piece a different level.

4. They invite subjectivity of analysis so advanced that it makes the scoring on Come Dine With Me look impartial.

5. Allowing teachers to assess levels, and then having them assessed themselves on reaching those targets, is an invitation to inflate levels. You have created an incentive to cheat, exaggerate, or merely massage. Even without such obviously diabolic motivations, bias creeps in as preferred children are given the benefit of the doubt, and outcasts are not.

6. The ghastly culture of sublevels, and levelled homeworks, and everything that its architects Wiliam and Black didn’t intend.

‘Levels are a damn mess; a pulpy wet love letter to woolly ideology. They assess nothing. They are pseudo assessment. They are cargo cult assessment. They are runes. I would take a hammer to the whole rotten cabinet. I would pull the lever myself, and dance as it dangled. Levels have been one of the most harmful ways that education has been metrified and commodified. This yardstick of progress and achievement is so unrelated to anything real that it becomes a perverse, surreal straitjacket within which we lash ourselves.

‘Paralysed by freedom, by inertia, until schools take the lead in this, they’re in limbo’.

Levels are not only broken; they never worked to begin with. On this I have been heavily influenced by Daisy Christodoulou, on whose unpublished paper I now draw heavily. She has been thinking about these ideas more than anyone else I know in the education sector, and her paper will be well worth reading when it comes out. Distilled from Daisy’s ideas, here are four of levels’ most fatal flaws:

1. Imprecise: Levels can’t measure learning at any precise granularity

2. Misguided: Levels are founded on the flawed premise of generic content

3. Engulfing: Levels crowd out other forms of assessment

4. Distorting: Levels warp the curriculum

 

1. Imprecise

Level descriptors are deliberately vague and supposedly unprescriptive. As Tim Oates, head of Cambridge Assessment, says:

‘Level descriptors in secondary Chemistry state that pupils must understand ‘that there are patterns in the reactions between substances’. Seemingly innocuous due to its generic character, this is, in fact, highly problematic. This statement essentially describes all of chemistry. So what should teachers actually teach? What are the key concepts which children should know and apply? The concept of entitlement becomes seriously eroded, if not absent, from a National Curriculum formed of such generic statements. Assessment becomes highly problematic, since a clear specification of what should be assessed becomes impossible. Assessment degrades into ‘ambush assessment’ since learners and teachers may not, in the school curriculum, have focussed on that which appears in a specific national test. Frantic search, by teachers and parents, for past test papers thus ensues, and the curriculum degrades into ‘that which will be assessed’ (Mansell W 2007; Stobart G 2008). Assessment developers, teachers, pupils and parents all are disadvantaged when this occurs.’

Teachers are then lumbered with the adverb and adjective problem. Differences between levels in analytical skills are vaguely defined with words like ‘analyses clearly’, ‘analyses precisely’ and ‘analyses with sophistication’. It is very hard for teachers, let alone pupils or parents, to tell the difference with any real accuracy.

 

2. Misguided

Levels are based on a fallacy: the fallacy that skills are generic and can be taught in the abstract. As we know from decades of cognitive science, this notion is mistaken. Detaching skills from content doesn’t work.

In English, level descriptors are misguidedly designed to apply to any text and any question. They supposedly assess pupils’ analysis of a range of texts, from a cereal packet to The Tempest. But of course, even the same question (‘How does the author achieve certain effects in the text?’) is easier when it is asked of a cereal packet, and harder when it is asked of The Tempest. Not only that, but one teacher could set the essay question: ‘How is tension used in Act 1 of The Tempest?’ and another could set the essay question: ‘Compare how the themes of power and authority relate to the characters of Prospero and Caliban as The Tempest develops’, and the second would clearly be harder. Yet all these essays could be marked with level 7s awarded for analysis, which makes a nonsense of the entire conception.

In Maths, levels are misguidedly designed for linear progression. But there are glaring inconsistencies in content across both numerical levels and mathematical domains. Within the same numerical level, both these questions are classified as being of equal difficulty: ‘What is 50% of 100?’ and ‘What is 93% of 459?’ Most pupils, calculator or not, would get the first question right. Not nearly as many would get the second right. Yet both, according to the levels, count as level 5 questions. Across domains, it is far easier for pupils to access level 6 statistics than level 6 algebra. So Maths teachers know that in observations it is better to be teaching stats than algebra to demonstrate seemingly extraordinary ‘progress’. Again, this makes a nonsense of the entire conception of levels.

Levels encourage the mistaken notion that high-level skills are unrelated to the underpinning knowledge, and they offer no help in assessing such underpinning knowledge.

3. Engulfing

Levels crowd out other useful forms of assessment. The only form of assessment most schools are using is generic level descriptors. But these level descriptors cannot possibly take into account the huge range of possible tasks. So levels are being required to produce fine detail even though they are ill-equipped for the task. If a school only uses NC levels to mark work – as most state schools do – then there is much that is of value that can never be captured by these levels.

One example of this is multiple-choice questions. Very few English, History or other Humanities departments use multiple-choice questions as regular formative assessment. Everything hinges on complex, high-level tasks. But complex, high-level tasks are not the only (or even the most effective) vehicle for assessing underlying knowledge and understanding. In English, they do not diagnose precisely enough what pupils don’t know about grammar, plot, context or character, and targets remain generic: ‘add more punctuation’ or ‘analyse more precisely’. Fine-grain assessment is crowded out by high-level assessment.

Such high-level, complex, ‘open’ tasks are easy to set, but very difficult to mark. When testing complex skills such as reading, writing and mathematical problem-solving, making judgements is hard. We compromise dependability and accuracy in assessment if we rely solely on such a vehicle. All-engulfing levelled assessments every half-term or every fortnight compromise precise diagnosis and feedback.

 

4. Distorting

Levels distort what we actually teach, and result in contorted curriculum sequences.

In English, levels mean we neglect grammar, because it is only one of twenty or so ‘assessment foci’. Grammar then only gets taught in the odd starter activity here and there, without the dedicated time and attention devoted to mastering its fundamentals. Perhaps this is part of the reason why so many sixth formers still struggle with punctuation, and children in English classrooms round the country are telling their teachers: ‘I don’t know the difference between colons and semi-colons.’ Levels warp our focus away from mastering the fundamentals.


In Maths, levels mean we move on to new topics too rapidly. Maths is sequenced in blocks vertically across domains so as to plough remorselessly on to a new topic every two weeks, heedless of who has forgotten what about previous topics. Pupils only spend a couple of weeks a year on fractions, and perpetually rehash the same forgotten material year after year. Perhaps this is part of the reason why children in Maths classrooms up and down the country are telling their teachers: ‘I can’t do fractions.’ Levels combined with a much-vaunted spiral curriculum are a recipe for forgetting.

As I explained in my last post, this assessment regime is not designed with memory in mind. Remembering depends on sufficient attention, storage, usage and transfer. Insufficient attention is given to the underpinning fundamentals – level 3 calculation drills and ‘low-level’ grammatical, textual and contextual knowledge – which are neglected as a result of levels.

 

Flawed arguments for sticking with levels

A number of notions are suggested for why we had better stick with the devil we know:

‘OFSTED need to know national levels.’

No, they don’t, especially not at Key Stage 3. This is a stock response but it is mistaken. Accountability is focused at Key Stage 4 – Key Stage 3 is up to us.

‘There’s no alternative.’

Yes, there is. Many independent schools have never used levels, only percentages. I am writing about one alternative for my next blogpost.

‘We don’t want to confuse parents with different assessment systems in different subjects.’

They’re already confused. Few parents understand the difference between levels 4 and 5 across subjects, let alone between 4a and 5c, nor should we expect them to: the distinctions are bureaucratic, arbitrary and distracting.

‘We don’t want to lose comparisons between departments.’

We never had meaningful comparisons. MFL started at level 1 at Key Stage 3. Maths went up to level 8 but English rarely got on to level 7. Levelled content rarely made coherent sense.

‘We don’t want to lose data we can use to benchmark our progress nationally.‘

Assessment is overloaded with purposes – to compare, to benchmark – but most important of all is to help learners improve. If we want to benchmark effectively we must separate out that purpose, just as we have separate PISA assessments for international comparisons.

‘It’s too much time and energy to develop and retrain staff into a new assessment system.’

This is what’s really behind such reluctance to budge on this: a lack of willpower. That’s why my next post will explain why those who redesign assessment will reap the rewards.

 


Halcyon age or halcyon cage?

Teacher bloggers Kev Bartle, Caroline Osborne and Chris Hildrew have set out the challenge ahead:

  • ‘Build something magnificent on top of the unvisited grave of National Curriculum Levels.
  • Identify alternative methods of evaluating student learning
  • Ask our departmental leaders to use the new emphasis on knowledge in the revised national curriculum to identify expected learning at each year group in KS3
  • Muster all our professionalism and create a form of assessment at Key Stage 3 that is very much alive and kicking.’

‘There is plenty of mileage in Joe Kirby’s mastery model, but it needs flesh on the bones to become a viable proposition. Curriculum levels have been wrestling with the problem since 1989 through APP, end of key stage testing, teacher assessments, progress maps and sublevels – and, judging by the reaction to their demise, they have failed. I hope that whatever we design to replace them can do a better job but I, for one, am daunted by the prospect.’

As Caroline Osborne says, ‘What replaces levels must be simple and truthful. Is such a system possible?’

A reinvigorated system of assessment is possible, and we should be more excited than daunted at the prospect. For it is nothing less than a chance to demonstrate the professionalism of teaching. For too long, schools have straitjacketed themselves into a flawed assessment system, and OFSTED’s and SLTs’ prodigious fear has kept those shackles on. It is time to lay these spirit levels to rest. It’s time to drop the albatross, rattle the cage, cast off the shackles, undo the straitjacket and leave the open jail cell. It’s time to explore an alternative assessment system, the subject of my next blogpost: a mastery model.

About Joe Kirby

School leader, education writer, Director of Education and co-founder, Athena Learning Trust, Deputy head and co-founder, Michaela Community School, English teacher

35 Responses to Life after levels: where SLT fear to tread

  1. Heather F says:

    As a private school teacher I have been spared levels throughout my career. If generic levels are flawed (and I agree they are) don’t we have to accept that we cannot devise a different system which will do the job we liked to pretend levels were doing? I would argue that secondary school assessment can only be comparative across the subject cohort in the school and using common tasks. Without national common tests (SATS) I can’t see how a school can produce valid data comparing their performance to that in other schools.

  2. Michael Tidd says:

    I think most of the problems you raise fall into the same broad category as most others: the problem is not the levels themselves but their overuse.
    If, as intended, the levels were used once at the end of a key stage to give a best-fit assessment, then they would be imperfect but usable.
    The problems have arisen because, as teaching and learning have – quite rightly – focused ever more closely on frequent assessment and feedback, the levels have been stretched out of shape so badly as to be unrecognizable.
    My view is that we strongly need to separate the roles of different assessment tools. GCSE grades are probably as useful as levels for recording attainment annually in KS3. Other assessment – the formative detail, the information that allows students and teachers to know where they are on the journey – needs to be near unique to the school environment in which it is used. It should link closely to the curriculum and provide information that supports teaching and learning as its priority, rather than data analysis and benchmarking.

  3. apf102 says:

    Great piece which highlights some of the key issues. I have been working on this problem with relation to History. Have been developing a mastery style model and rationale which can be found here:

    http://www.andallthat.co.uk/2014-gove-ks3-blog.html

    I think the more we collaborate about this as a profession, the better. We need to take back our profession and stop throwing ourselves on the whims of fate.

  4. apf102 says:

    Great piece – worrying that Levels may well survive!!

    Have been working on the same issue with History – working on a mastery model which might be applicable. Have put all my current thinking and resources here

    http://www.andallthat.co.uk/2014-gove-ks3-blog.html

    Please feel free to borrow anything useful – we need to take back our professional freedom, not wait for others to solve the issue of progression for us.

    • Heather F says:

      I had a look on your site and, although there were some really nicely worded explanations of how, for example, causation can be understood by students, there was still an assumption that understanding causation is independent of the material covered. A student’s ability to make subtle judgements is related to their grasp of the topic, not the degree to which they have acquired a generic skill. This is a key point in this blog. Therefore, although it was good to see that you had not ranked the different sorts of judgements a student can make about cause (a step forward), there is still an assumption that there is a generic skill to be mastered, just as with the old levels, and this will still lead to lessons encouraging that skill rather than lessons driven by a desire for students to more deeply understand a specific topic and thus be able to make subtle judgements/links etc. This is related to the point I made earlier that judgement of progress has to be related to a specific task, as showing understanding of causation in one context by no means guarantees that it will be replicated in another, and the cause of variation between topics is actually deep knowledge of the content.

      • apf102 says:

        The important aspect of these will be tying them in to the specific content which is being covered. There are conceptual leaps that one needs to make and misconceptions to overcome in understanding history. However, the content and the concepts go hand in hand – concepts can only be developed as part of a meaningful historical enquiry, therefore the starting point is always the knowledge and period understanding (the primary concepts) which help to support the secondary concepts. I would never suggest that the two can be divorced from each other – generic skills education has caused a real mess in humanities especially!!

  5. Round of applause. Great post Joe; superb detail and gathering of arguments – linking with DC and KB brilliantly. We’ve been levels-free for a while at KEGS – although the odd reference is made. Our *,1,2,3 system serves our purposes – but mainly forces us to refer to actual work rather than any spurious code. Echoes some of my thoughts in my Data Delusion posts and the most recent one about ‘defining the butterfly’. As ever, you’ve done a really good, serious job of this.

  6. If you are interested, the *,1,2,3 system is described here: http://vle.kegs.org.uk/course/view.php?id=47 Not perfect, but the focus is on capturing expected standards subject by subject in a way that is most directly relevant to our learners. The * represents real challenge.

    • apf102 says:

      Tom, we trialled a similar system in 2011, giving descriptions of target understanding on a year-by-year basis. We found it to be useful, but were also faced with the challenge that the whole school had not adopted a similar approach, so we were forced to also send home levels each term, undermining the point of the system. I am a big fan of the simplicity of this particular approach, however. The only issue we found with this system is that the descriptors were sometimes a little woolly for planning purposes.

  7. Heather F says:

    Of course I agree with you apf but the implication of this is that you can’t gain mastery in generic skills. However, that seems to be at the heart of your assessment focus. There is no generic skill of ‘understanding change and continuity are interwoven’ to be mastered. I guess you could say that “remembering that the teacher showed me that ‘change and continuity are interwoven’ and therefore I should look out for that again” could be viewed as something possible to remember/master. However, that is a far cry from actually identifying those things. We agree that judgements can only be made when grounded in the content, but your assessment system means we then part company, as I don’t think that there is then an already mastered skill that a kid could draw upon. Previous mastery of the interaction of change and continuity would have been based on content knowledge, and the conditions necessary for it to be demonstrated again are entirely dependent on grasp of relevant content, NOT previous mastery in another content area. This is even more obvious with something such as reliability of sources. The reason I know that the newspaper article (e.g. saying Germans used prisoners as bell clappers during WW1) is unreliable is not because I remembered to go through nature, origin and purpose of the source. In fact we all do that pretty automatically if we know a lot about a topic. I am sure because I know enough about the period to quickly dismiss the claim, and if I am ignorant of the essential details no mastered NOP routine will be of the least use. A bit of NOP might get me up the markscheme so kids must be drilled to use it, but comments remain generic claptrap unless the kid knows lots about the issue, and when this is the case they understand the reliability without any apparent generic mastery of NOP (although it may be useful for them to use that framework to demonstrate the understanding they have for the skills-driven markscheme…). So why do some kids seem to repeatedly demonstrate an understanding of reliability when others don’t? It is because their already wide knowledge gives them the capacity to absorb more relevant information from the teaching, which they can hook onto an already well developed knowledge of human motivation. For example, if the weak kid is mad about football and the strong kid entirely ignorant, you will suddenly find the weak kid is the one that seems to possess the skill of judging reliability (for example, of a football manager’s analysis after the game) and the previously successful kid will fall flat on their face – no skill mastered after all.

  8. apf102 says:

    I agree with a lot of what you say here. This model is very much embryonic and I do struggle sometimes to grasp clearly what I would like to be assessing… However, I do think that we need to have something which allows us to assess students’ interaction with the nuances of historical content, but also trains them into historical modes of thinking. The content is dealt with through the scheme of work and careful planning – it is very difficult to talk about mastery here as we change topics reasonably frequently – at least every 6 weeks or so, meaning that mastery of content is quite hard to define in a meaningful way over time… Would we argue that a study of the Civil War for 6 weeks, no matter how thorough, would give someone a mastery of the period?

    I think a key part of History is training ourselves into a particular way of thinking about the past. To take this from a logical starting point, historian Marc Bloch (author of The Historian’s Craft) notes that there are a range of ways of thinking and interacting with evidence which we need to master in order to become good historians. Sam Wineburg and Peter Seixas also do a lot of work around this area. All of them however agree that these skills can only be developed through deep interaction with the content of history:

    “Competent historical thinkers understand both the vast differences that separate us from our ancestors and the ties that bind us to them; they can analyse historical artefacts and documents, which can give them some of the best understandings of times gone by; they can assess the validity and relevance of historical accounts, when they are used to support entry into a war, voting for a candidate, or any of the myriad decisions knowledgeable citizens in a democracy must make. All this requires “knowing the facts,” but “knowing the facts” is not enough. Historical thinking does not replace historical knowledge: the two are related and interdependent.” ~Seixas

    Therefore if we look at the issue of change and continuity (not my personal favourite) – the fact that students are looking for patterns of change over time, and have some awareness of the kinds of factors which might influence these, is surely better than having no mode of thinking about this at all? Effective demonstration of such an understanding would of course involve engaging with the content of the particular time period being studied. Hence it would not be enough to explain the short term and long term changes brought by the Civil War without also referring to the context before the war, and giving examples of the extent and nature of the changes. Maybe my model needs a specific “knowledge” strand?

    Moreover, I think it is also important to remember that teaching good history also means encouraging teachers to think beyond questions of description or causation and encouraging teaching methods which show alternative approaches to understanding historical development.

    “This is even more obvious with something such as reliability of sources. The reason I know that the newspaper article (e.g. saying Germans used prisoners as bell clappers during WW1) is unreliable is not because I remembered to go through nature, origin and purpose of the source. In fact we all do that pretty automatically if we know a lot about a topic. ”

    I am not sure all students DO think automatically about nature, origin and purpose. But even if they did, the hallmark of this thinking in action would be to explain the impact of the specific nature, origin and purpose of a source rather than discussing it in generic terms. I.e. to say the newspaper is wrong because it is biased and written by a British paper is not much of a step up from just accepting the evidence at face value. You are right to say that the context will explain the source properly, but to do this, most people need to take themselves out of their everyday modes of thinking. As Wineburg discusses, it is very difficult to work with historical documents and not impose our own modern mentality. In order to work effectively with historical documents and sources we need to be able to bracket our own mentalities and beliefs. He goes on to give another example of this, arguing that people are disposed to think in particular ways. The spread of activation effect leads us to think down similar lines of thought once we have been pushed in a certain direction. E.g. when looking at a document which discusses Christopher Columbus, our politically correct thinking overrides other aspects of the document: we are encouraged to make a judgement about Columbus, rather than understanding the document in its own contextual terms. It is therefore a key part of historical education to help develop these habits of mind, as well as providing the context necessary to understand the sources.

    I completely agree with you that knowledge underpins understanding, but I also think that there are tens of thousands of students for whom an accumulation of knowledge might still never lead to effective historical thinking. It is a state of mind as well as a development of knowledge.

    • Heather F says:

      I entirely agree with your last paragraph and also think students need to be taught how to think historically, and that teaching should address historical questions, not be lists of facts. I agree that this way of thinking will not arise naturally in students and has to be taught directly, in the context of the topic. However, I do still disagree with your forms of assessment, which assume a school student, a novice learner, can grasp a concept such as causation in a generic sense. When you describe ‘competent historical thinkers’ you assume the modes of thinking adopted by professional thinkers can be taught to students, mastered and assessed. It comes back to the old chestnut of whether we should be expecting our novice learners to act like academics, scientists or historians. Both national curriculum levels and your assessment model assume this is possible, but much research seems to suggest that this is not a helpful approach http://mres.gmu.edu/pmwiki/uploads/Main/CritThink.pdf because experts, through extremely wide knowledge, are able to identify the deep structure of a problem or, in this case, the way a wide range of different events can be viewed through a particular lens for analysis. A Year 9 child has not a hope of acquiring the necessary range of knowledge to self-identify trends in history in any meaningful way. Of course we should teach them through historical questions and expose them to a historical way of thinking, but they don’t have the wide-ranging knowledge of the expert. We do often teach the material in such a way that kids are more likely to spot these trends, but that does not mean they have really managed to think like a historian, as that requires vast knowledge. So to return to the theme of the blog: we can assess how far a child has managed to learn to think historically from our modelling of doing so, for a particular piece of work, but they don’t tend to have the deep understanding to transfer this insight to another context without guidance, in any meaningful way. Whether the expert, able to identify deeper structure because of wide knowledge, could be viewed as having a generic skill, I don’t know, but the skill does not transfer beyond their knowledge base.
      That same expertise allows a historian to set aside their modern mentality when looking at sources. However, I disagree that critical analysis of sources largely requires any special conscious activation of generic critical thinking skills. There are mountains of research on comprehension which are very relevant. They suggest that while techniques are of use (and reminding students to actively question whether they are using their own values fits in here), the main explanation of difference in quality of comprehension – or critical thinking skills – is prior knowledge. Most of the staff in my school would know that the German clappers example was untrue because they have picked up enough general knowledge of WW1, not because they have been trained to think like historians. Most of our students thought the source was reliable despite all their NOP training because they didn’t yet know much about WW1.

      • apf102 says:

        Just wondering if you have models of progression you use in school – trying to decide on the best way forwards for 2014 and beyond and would be interested to see what others are using

  9. Heather F says:

    Good point and it was my first comment on this blog that any progression has to be in the context of the content taught. It would be so very convenient if it were possible to decouple progression from specific tasks/content and have generic statements but that is exactly what the National Curriculum tried to do with levels and it doesn’t generally work. Have you seen that blog where someone points out that the same essay level descriptors can be used to assess yr7 and university level work? What does ‘well chosen’ mean in a statement, ‘Supports argument with well chosen content’ – it could mean anything. If you look at the science levels, the attempt to decouple progress from growing knowledge of science leads to very vague statements and in languages I think I remember that the levels try and talk in general terms about improvements in ability to communicate. I think progression has to be judged in the context of the content. That is why I think meaningful assessment in history can only be through shared tasks where professionals can decide what constitutes a certain standard for that piece of work. That is what exams do.

  10. apf102 says:

    .

  11. apf102 says:

    Of course the other key point is that any assessment conducted would have a mark scheme which linked the concepts AND made specific reference to the content covered. Do you think it is ever possible for students to plan for improvement in different units? Or do you think this can only be a function of knowledge acquisition?

  12. Heather F says:

    Of course the nature of exam mark schemes means that in practice there is plenty of scope for generic advice to ensure hoops are jumped through. I also think it is right to explicitly teach how to write essays and give related sorts of advice or targets to students, like ‘look for and include more supporting detail for your arguments’. I just don’t think you can effectively assess mastery of those ‘skills’, just performance in specific tasks. It is why exam skills-based mark schemes end up being technique hoop-jumping exercises – the examiner is forced to define what performance in the skill will look like in general terms when actually this should be content-related, and then the teacher ensures their A candidates don’t end up with D’s by drilling that specific technique. BTW that was no exaggeration for our AS document paper (which we have now sort of nailed after much poring over specs and board meeting attendance).

  13. Heather F says:

    Ha ha ha – but of course!

  14. apf102 says:

    Know exactly what you mean there then. Have a whole range of resources on this on the website if you ever need them:

    British Radicalism
    Italian Renaissance
    Explanations
    American West (Controversies)
    Personal Study

  15. Pingback: National Curriculum Levels: worth keeping? | e=mc2andallthat

  16. Pingback: Life after levels: who’ll create a mastery assessment system? | Pragmatic Education

  17. Reblogged this on CaterEduCater and commented:
    Excellent start to the thinking of “where next” on the subject of levels.

  18. Pingback: Why we shouldn’t close down the skills-knowledge debate | Pragmatic Education

  19. Pingback: Books, bloggers & metablogs: The Blogosphere in 2013 | Pragmatic Education

  20. Pingback: Replacing national curriculum levels | The Wing to Heaven

  21. Pingback: A guide to this blog | Pragmatic Education

  22. Pingback: A blog post from www.thewingtoheaven.wordpress.com | Assessment Without Levels - RBWM Professional Learning Community

  23. Pingback: Assessment | PGCEPhysicalEducation

  24. Fantastic article.
    I started teaching in 1991. I believed the system was flawed then. I witnessed the change from mainly reporting on level 5s or 6s for a whole cohort to the bright idea that we should tell individuals their level. Awarding a 5 or 6 suddenly became more divisive, and there’s been trouble since. It’s nice to read so many of my thoughts in a far more coherent way than I could ever write. The truth is out [primary school teachers are now openly admitting to nudging 2b to 3c to 3a to 4b etc because that’s the expected progress]. I feel vindicated.
    Irony – we have been through an era in which it is considered wrong to position scores in a class or even give marks out of 10, and yet the teacher has then to give a student a 5c or 5.2 (yes, I have encountered decimal levels as well).
    For maths I feel the future may be easier. These days, for any quick test I give, I create a very quick mark scheme, i.e. 1 mark for some things, 2 marks for others, etc. I record the marks as percentages. It is easy to keep track of the average for each test, and also the average mark for each student across a set of tests. On a spreadsheet it is fairly easy to scale this data into any form you want. The idea of a spreadsheet may fill you with dread, but the point I am making is that, by the nature of how we ‘tick’ maths questions, it is always going to be easier to measure this subject.
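    If it helps, here is a minimal sketch of that same arithmetic in Python rather than a spreadsheet. The pupils, tests and raw marks below are made up purely for illustration; only the method (marks as percentages, then per-test and per-pupil averages) is the point:

    # Convert raw marks to percentages, then average per test and per pupil.
    # Hypothetical data: for each test, pupil -> (mark achieved, marks available).
    raw_marks = {
        "Test 1": {"Pupil A": (7, 10), "Pupil B": (5, 10)},
        "Test 2": {"Pupil A": (18, 25), "Pupil B": (21, 25)},
    }

    # Each raw mark as a percentage of the marks available for that test.
    percentages = {
        test: {pupil: 100 * mark / out_of
               for pupil, (mark, out_of) in marks.items()}
        for test, marks in raw_marks.items()
    }

    # Average percentage per test (a rough gauge of how hard each test was).
    for test, scores in percentages.items():
        print(test, "average: %.1f%%" % (sum(scores.values()) / len(scores)))

    # Average percentage per pupil across all tests (tracking over time).
    all_pupils = {pupil for scores in percentages.values() for pupil in scores}
    for pupil in sorted(all_pupils):
        marks = [scores[pupil] for scores in percentages.values() if pupil in scores]
        print(pupil, "average: %.1f%%" % (sum(marks) / len(marks)))

    Because everything is stored as a percentage of marks available, the same two averages can then be rescaled into whatever reporting form a school wants.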

  25. Pingback: Just say no to junk data: Assessment at Michaela | Bodil's blog

  26. Pingback: Golden needles in a haystack: Assessment CPD trove #4 | Joe Kirby

  27. Pingback: Articles | Joe Kirby
