

Alicia has been running reports periodically to make sure the faculty are participating in the new assessment process.  One report shows which course assignments are aligned with program-level and institutional outcomes.  Since the report shows both the assignment names and the course IDs, it serves as a live curriculum map.  Alicia can also generate reports with student performance results for each outcome.  The reports give summary statistics (mean, median, mode, and standard deviation, as well as counts and percentages) for each criterion in the evaluation rubric, along with detailed results for each student.  Alicia's program is up for review by ABET in two years, and she's already thinking about how much easier it will be to prepare the written reports and assemble information for the program review team.
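The page doesn't show how the portfolio system computes these reports; the following is a minimal sketch of the per-criterion summary statistics it describes (mean, median, mode, standard deviation, counts, and percentages), assuming a hypothetical 1–4 rubric rating scale and made-up ratings.

```python
from statistics import mean, median, mode, stdev
from collections import Counter

# Hypothetical ratings (1-4 rubric scale) for one criterion,
# one value per student; the data are illustrative only.
ratings = [3, 4, 2, 3, 3, 4, 1, 3, 2, 4]

summary = {
    "mean": mean(ratings),
    "median": median(ratings),
    "mode": mode(ratings),
    "std_dev": stdev(ratings),
    "counts": dict(Counter(ratings)),
}
# Percentage of students at each rating level.
summary["percentages"] = {
    level: 100 * count / len(ratings)
    for level, count in summary["counts"].items()
}

print(summary)
```

A real system would compute one such summary per rubric criterion per outcome, with the detailed per-student rows alongside.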

Fast forward two years:  Alicia has prepared reports for the ABET program review team that describe her department's assessment and continuous improvement processes.  The reports include a summary of the student outcomes data gathered over the previous two years and a few samples of student work representing different levels of attainment.  She sends the written report to Byung, chair of the ABET program evaluation team, prior to the review team's campus visit.  Byung and his team read the report before they arrive, and they indicate that they would like more rated examples of student work.  Alicia helps the review team establish guest accounts and gives them read-only access to her department's assessment portfolio.  The reviewers can generate summary and detailed ratings reports by student, course, or program year.  They can also open and review the student submissions (evidence) on which the ratings were based.  Byung and his review team are delighted to see that Alicia's program is taking such a systematic approach to program assessment.  However, they note that some faculty raters were noticeably more or less rigorous than others.  Before leaving, they provide her with some suggestions for detecting and improving inter-rater reliability.
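The reviewers' suggestions aren't detailed here, but one common way to detect the rater-rigor problem they flagged is an agreement statistic such as Cohen's kappa, computed over submissions that two faculty raters both scored. A minimal sketch, with hypothetical ratings on a 1–4 rubric scale:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of submissions where the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(counts_a) | set(counts_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of ten submissions by two faculty raters.
a = [3, 4, 2, 3, 3, 4, 1, 3, 2, 4]
b = [3, 3, 2, 3, 2, 4, 1, 3, 2, 4]
kappa = cohens_kappa(a, b)
```

Values near 1 indicate strong agreement; low values for a particular rater pair would point to the rigor differences the review team noticed and suggest a calibration (norming) session.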