
The use of peer assessment to improve student journalists' court reporting skills

Abstract: Student journalists need to practice their reporting skills if a professional standard of writing is to be achieved, and feedback on their copy is a vital factor in enabling them to perfect those skills. However, providing feedback via traditional written methods is time-consuming for staff, limiting opportunities for formative assessment, and may have questionable results in terms of the extent to which students take that feedback on board. This exercise trials peer assessment as a way of increasing opportunities for formative assessment in a way which actively engages students in the process and which is also practicable in terms of staff time.

Author information:
Rachel Matthews is Principal Lecturer in Journalism in the School of Art and Design at Coventry University

Date: January 2009


 

Why should we care about assessment?

Assessment is at the heart of an aligned theory of teaching. It is a continual process, happening formally and informally, and involves teachers and students making judgements about progress on an on-going basis (Freeman and Lewis, 1998). It should work for us – lecturers and students – rather than us working for it.

 

“Assessment is the most powerful lever teachers have to influence the way students respond to courses and behave as learners.” (Gibbs, 1999, p.41)

Good assessment needs to balance the demands of the different stakeholders in a course; engage students in deep learning strategies appropriate to the course content; provide valuable formative feedback, often in short, sharp bursts related to manageable chunks of content; and be practical in terms of lecturer hours.


What is peer assessment?
Put simply, peer assessment is the process whereby students assess the work of their classmates. This may, or may not, result in them awarding an actual percentage mark for a piece of work but does result in them forming an opinion of some sort about the standard of the work in front of them.

I believe it is especially useful in teaching Print Journalism students practical skills; they already have extensive experience of the standard required because they are consumers of print products. Peer assessment gives them the opportunity to make explicit processes of judgment that they already exercise. Print journalism is also about publicly sharing your work; peer assessment promotes this process.

A useful model for peer assessment is put forward by Graham Gibbs, who evaluated its use in a second-year engineering module (Gibbs, 1999, pp. 43-47). In that case, student numbers had doubled and lecturers were no longer able to mark weekly sheets designed to enable students to practice problem-solving skills. Marks dropped and a substantial proportion of students failed the module. The action taken to remedy the situation was to introduce peer assessment. Students completed a specified number of problem sheets, which were marked by other students using marking guides at weekly sessions. Sheets were then handed back immediately. Although no grades were given at this stage, summative results in subsequent tests improved dramatically.

We can extrapolate several pertinent points from this:

  1. completing regular problem sheets encouraged students to spend an evenly spaced and appropriate amount of ‘time on task’;
  2. the task itself was appropriate and enabled students to learn more about what was required: by practicing the application of criteria and seeing how others succeeded in meeting those criteria, students developed a heightened awareness of the standard expected;
  3. feedback given was immediate and frequent;
  4. Gibbs notes another advantage of this form of assessment – the social dimension. Students were scrutinised by their peers, and even if work is peer-assessed anonymously, it is hard to ignore the impact of the thoughts of a fellow student;
  5. students internalise the marking criteria they use to assess another’s work and can then apply them to their own work; crucially, they become able to monitor their own progress. Phil Race suggests that including students in the process of setting criteria makes assessment even more effective, because students then know the ‘rules of the game’ (Race, 1993, p. 51).

 
The existing assessment context

JP113 Media Law is a level 1 Print Journalism module which runs over two semesters. Its aim is two-fold: to teach students the practical skill of court reporting and to introduce them to the main legal theory affecting the profession. The assessment is by portfolio, with specified elements including three essays and five court reports. These assignments are handed in at given intervals across the two semesters for formal marking and feedback. An end-of-course examination accounts for 40 per cent of the final grade.

 
This exercise concentrated on the way in which the court reports are assessed, and considered why and how peer assessment might be a more effective way of providing feedback for students.

The JP113 assessment has to balance two major competing interests: the professional competence standard – students being able to write a court report fit for publication – and the academic standards demanded by higher education. The assessment therefore incorporates two sections: the report itself and a reflection on that report, which gives students the opportunity to demonstrate the academic skills required.

This exercise in peer assessment concentrates on the practical court report section of the portfolio, which forms part of the summative assessment of the module. Students have to submit five reports showing a range of reporting skills across the Magistrates’, Crown and Coroners’ courts. They are free to select which courts are covered and when.

Although the portfolio does not have to be marked until the end of the module, a balance does need to be struck between awarding an overall grade and giving formative feedback to students, to enable both them and me to monitor their progress. It is for this reason that reports are submitted – and marked – at intervals.

Positive factors of the current assessment pattern include:

  1. combining formative and summative assessments can help to motivate students (Freeman and Lewis, 1998, p. 33);
  2. incorporating formative feedback into an assessment pattern can encourage deep learning (Light and Cox, 2001, p. 170);
  3. the pattern of assessment set out in JP113 also enables incremental feedback, as suggested by Brown (1999, p. 11); it gives students ‘rehearsal’ time rather than setting them up for one point of pass or fail;
  4. it is an appropriate assessment (Gibbs, 1999, p. 45) – students write actual reports and are given repeated practice in doing so;
  5. student feedback to date has also been positive. Formal evaluation indicates that students have found the assignments challenging and the written feedback useful. Informally, I know that students have thoroughly enjoyed field trips to court, which have had a high take-up rate. The assignments themselves also indicate that students are largely progressing towards the standard of professional competence required.

Criticisms of the current assessment pattern include:

  1. students receive incremental feedback, but this can only inform the next mark, not the one in hand;
  2. providing formal written feedback at no fewer than seven points across the module is time-consuming for the lecturer, which may result in poorer feedback;
  3. there is no guarantee that students take written feedback on board (Gibbs and Simpson, 2004, pp. 10-11).

Why trial peer assessment?

 

One could simply reduce the number of assessment points. Treading this path is, however, dangerous, according to Gibbs and Simpson (2004, p9), especially in the context of widening participation in higher education where students may well need more formative feedback, not less.

Is it, therefore, possible to alter the assessment pattern so students can continue to work on their reports until the portfolio submission stage, and would this make the assessment any less valid?

Our challenge is then to tailor our assessment to i) provide rehearsal time, ii) provide valuable feedback and iii) decrease time spent on marking for the lecturer. 

Practical exercise

Peer assessment was used to provide formative feedback on an inquest report as a part of JP113. The report did not form part of the summative portfolio.


Students were asked to write a report using a transcript provided. This meant all students were looking at the same material, making it easier to dissect the task and develop assessment criteria so that the principles of peer assessment could be clearly elucidated.

The instant reaction of the first-year cohort who took part in the exercise was one of surprise. I explained the principles of peer assessment and why I felt it would benefit them. However, an important aspect of my ‘selling’ the process to them was that they are all consumers of journalism and so already have a knowledge of what standards the work before them is to be judged by (that it is fit for publication).

My next step was to involve students in deciding what they thought the assessment criteria should be – i.e. what they considered the features of a good inquest report to be. This list was subject to debate until consensus was reached. Interestingly, the final list was very similar to my own.

Students had been provided with a blank marking grid and were asked to fill in the criteria and then use them to assess the work before them. They were asked to grade the work from first class, through the second-class divisions and so on, down to fail. Again we discussed how they might define these classifications, e.g. first – excellent, no room for improvement; 2.1 – good, with a couple of tweaks needed; and so on down to a fail.

Unfortunately time did not allow me to check the feedback given – I had intended for students to read out the stories they considered first class, to see if others agreed. However, the ‘buzz’ among the group demonstrated that the scrutiny was close and that the comments were largely positive.

One positive outcome was that students seemed to really engage with the exercise and certainly appeared to enjoy it. They are, however, an established group which knows each other well and so may have been more comfortable with having their work scrutinised by their peers.

Most gratifying was the quality of inquest reports submitted for summative assessment two weeks later; they were very good.

How can this be used to improve JP113 assessment?

Extending the use of peer assessment within the JP113 assessment pattern would enable students to ‘practice’ writing court reports before they are submitted for summative assessment. They could still produce reports to the specified deadlines (the deadlines are themselves part of the professional discipline) for assessment by their peers. Students can then revise their work in time for the final submission as part of a portfolio.

The lecturer may then mark the entire portfolio or follow Gibbs’ suggestion of ‘sampling’ – marking only a percentage of the reports summatively (Gibbs, 1999, p. 49). We are able to do this because the valuable formative assessment has already taken place. And while the lecturer marks less, ‘time on task’ is preserved by the requirement for a certain number of assignments to be included, and because the student does not know in advance which will be sample-marked.

Conclusion

I believe that peer assessment is a valuable tool for explicating the standards required of journalism students, and one which can be used in all spheres of journalism practice teaching.

 
Introducing peer assessment, and so enabling self-assessment, can be seen as a possible solution to our three problems: rehearsal time is given, as is prompt feedback. Tutor time spent on summative assessment is potentially lessened, while formative feedback is actually improved and the quality of learning is potentially deepened and extended. The validity of the assessment is also potentially improved, because the summative mark will reflect progression over the entire course – and not snapshots along the way.

However, self and peer assessment do not just happen; students need to be supported through the process so that they see the advantages – rather than just seeing it as a lecturer passing the buck for what should be part of their job – and acquire the necessary skills. Race suggests that students need feedback on this form of assessment itself, and that it should be introduced early if it is to be of maximum benefit (Race, 1993, p. 60).

But the benefits may go far beyond the context of module assessment. For Light and Cox, self-assessment is also a key skill that students will take with them beyond higher education. It enables them to set their own goals and to work out the criteria, and so the strategies, by which these might be attained. Students also learn to appreciate the views of others – again, a valuable lesson for life (Light and Cox, 2001, pp. 181-182).

References

Brown, S. (1999) ‘Institutional Strategies for Assessment’, in Brown, S. and Glasner, A. (eds) Assessment Matters in Higher Education. Buckingham: SRHE and Open University Press.

Freeman, R. and Lewis, R. (1998) Planning and Implementing Assessment. London: Kogan Page.

Gibbs, G. (1999) ‘Using Assessment Strategically to Change the Way Students Learn’, in Brown, S. and Glasner, A. (eds) Assessment Matters in Higher Education. Buckingham: SRHE and Open University Press.

Gibbs, G. and Simpson, C. (2004) ‘Conditions Under Which Assessment Supports Students’ Learning’, Learning and Teaching in Higher Education, Issue 1, 2004-05.

Light, G. and Cox, R. (2001) Learning and Teaching in Higher Education: The Reflective Professional. London: Sage.

Race, P. (1993) Never Mind the Teaching, Feel the Learning. SEDA Paper 80. Birmingham: SEDA.

Ramsden, P. (2003) Learning to Teach in Higher Education. Abingdon: RoutledgeFalmer.

 

Please note: this exercise was carried out at the University of Gloucestershire where I was Senior Lecturer in Print Journalism. I moved to Coventry University in November 2008.

 
