Good computer code is not just code that “works”. It is code that is easy for other people to read and understand, easy to extend later, and sufficiently clear that there are no hiding places for obscure bugs. Experienced programmers understand this, but it can be a hard concept for students to appreciate, and for markers to assess consistently.

We plan to create a tool that allows students to compare the readability of code submitted by their peers. This will expose the students to a range of different styles and encourage them to think about different approaches. By applying a technique known as “Adaptive Comparative Judgement”, we will also be able to generate a ranking from these pairwise comparisons which can be used to inform the assessment process. We believe this has the potential to produce a more consistent assessment with less effort, while being particularly well suited to large courses. We will evaluate this by comparing the results with a traditional marking process, and with comparisons made by demonstrators and teaching assistants.
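Rankings in Adaptive Comparative Judgement are typically derived by fitting a statistical model, such as Bradley-Terry, to the pairwise outcomes. The following is a minimal sketch of that idea; the submissions, comparison data, and function shown are hypothetical illustrations for this summary, not part of the project's actual tool.

```python
def bradley_terry(items, comparisons, iterations=100):
    """Estimate a quality score per item from pairwise judgements.

    comparisons: list of (winner, loser) pairs.
    Uses the classic minorisation-maximisation (MM) update for the
    Bradley-Terry model.
    """
    scores = {item: 1.0 for item in items}
    wins = {item: 0 for item in items}
    for winner, _ in comparisons:
        wins[winner] += 1

    for _ in range(iterations):
        new = {}
        for i in items:
            # Sum 1/(s_i + s_j) over every comparison involving item i.
            denom = 0.0
            for w, l in comparisons:
                if i in (w, l):
                    other = l if i == w else w
                    denom += 1.0 / (scores[i] + scores[other])
            new[i] = wins[i] / denom if denom > 0 else scores[i]
        # Normalise so the scores sum to the number of items.
        total = sum(new.values())
        scores = {i: s * len(items) / total for i, s in new.items()}
    return scores

# Hypothetical judgements over three submissions A, B, C:
scores = bradley_terry(
    ["A", "B", "C"],
    [("A", "B"), ("A", "C"), ("B", "C"), ("B", "C"), ("C", "B")],
)
ranking = sorted(scores, key=scores.get, reverse=True)
# Higher score = judged more readable more often; here A ranks above
# B, which ranks above C.
```

In practice ACJ also chooses *which* pair to present next (the “adaptive” part), selecting comparisons that are most informative about the current ranking; the sketch above covers only the ranking step.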


Paul Anderson (Informatics)
Ross McKenzie (Informatics)
Anna Wood (Education)
Timothy Hospedales (Informatics)

A University of Edinburgh PTAS project