I. Are E-Assessment Tools Helpful in Programming Courses? Four research questions were formulated to measure the degree to which e-assessment tools have helped students and instructors [25]: 1) Have e-assessment tools proven to be helpful in improving student learning? 2) Do students think that e-assessment tools have improved their performance? 3) After having used the tools, do instructors think that the tools have improved their teaching experiences? 4) Is the assessment performed by e-assessment tools accurate enough to be helpful?

1) Have e-assessment tools proven to be helpful in improving student learning? In 2003, Edwards [26] presented striking results when he changed the e-assessment tool in a junior-level course on comparative languages: Curator was replaced by Web-CAT, and students began submitting assignments, along with test cases, in a more timely fashion. Also in 2003, Woit [27] showed that online assessment of students' practical skills provides a more accurate measure of student ability, an opinion supported by data collected over five academic years comparing student performance on online tests with and without e-assessment tools.
In 2005, Higgins [28] described an experiment in which Ceilidh was replaced by CourseMarker at the University of Nottingham. The passing percentage of students was found to be very high, and it improved as CourseMarker evolved. Also in 2005, Malmi [30] reported results from students using TRAKLA and TRAKLA2: final exam grades increased when instructors modified the ways in which students were allowed to use the automated tool and permitted them to resubmit their work. In 2011, Wang [31] showed that the final grades of students assessed with AutoLEP were substantially better than grades produced without any tool. Taken together, these results suggest that introducing e-assessment tools into a course has a positive impact on student learning.
End-of-course grades or final exam scores were the primary measures used to assess this.

2) Do students think that e-assessment tools have improved their performance? In 2003, Edwards [26] created a 20-question survey for students using Web-CAT and found that perceptions of the tool were generally positive. In 2005, Higgins [27] distributed a survey to programming students who tested CourseMarker; over 75% of students appreciated the flexibility to resubmit a programming assignment that the e-assessment tool afforded. Specifically, most students felt that having multiple submission attempts available encouraged them to work for a higher grade. In 2009, when Garcia-Mateos [32] introduced Mooshak, he presented students with a survey designed as a series of agree/disagree questions.
77% of the students indicated that "they learn better with the new methodology than with the old one," while 91% said that "if they could choose, they would follow the continuous evaluation methodology again." In 2012, Brown [33] surveyed students using the JUG automated assessment tool about their perception of the tool's impact. Given the question "Did the automatically graded tests match your expectations of the requirements?" the majority of students chose the middle answer, "Sometimes." The question "Did the reports from the automatic grader clarify how your code should behave?" elicited a much more positive response, with the majority answering "Often." Overall, the results concerning student perceptions of e-assessment tools were inconclusive.
Students had mixed reactions to this question; some were very positive, but a significant number expressed dissatisfaction with the tools.

3) After having used the tools, do instructors think that the tools have improved their teaching experiences? In 1995, Schorsch [35] reported that 6 of 12 teachers who used CAP to grade assignments stated that the tool saved them around ten hours of grading per section of roughly twenty students. In 2003, Venables [35] stated that the feedback provided by Submit, the e-assessment tool she discussed, answered many of the questions students would otherwise need to ask while working on an assignment. This feature reduced the class time that would otherwise have been spent responding to students' questions. In 2012, Queirós [36] briefly noted that automated grading surpasses manual grading in efficiency, accuracy, and objectivity.
E-assessment tools remove bias and other subjective factors from the grading process, and submissions are marked at a faster pace. Overall, instructors appreciate e-assessment tools for the benefits they provide, such as time savings. Most instructors report that they must invest more time before a class first uses an e-assessment tool than in subsequent semesters, but the general consensus is that these tools are effective time-savers and are capable of the tasks they are designed to perform.

4) Is the assessment performed by e-assessment tools accurate enough to be helpful? In 2005, Higgins [37] stated that the grading performed by CourseMarker in one section of a course was on par with the assessment done by a teaching assistant in another section of the same course. In 2012, Taherkhani [38] demonstrated that, for about 75% of submissions, AARI could correctly recognize the algorithm a student used in a program that required sorting integers in ascending order.
In 2014, Gaudencio [39] reported that instructors who manually graded assignments tended to agree more with the results of an e-assessment tool than with results provided by other instructors. E-assessment tools have thus proven beneficial in assisting the assessment process.