Context: Engineering department at a large research university
Project scope and duration: all students and courses in the program
Primary purpose: assessment of student learning in relation to institutional and professional (ABET) outcomes for accreditation and continuous program improvement

Personas:

  • Assessment coordinator: Alicia, a mid-career faculty member who hopes to move into administration.  She represents her department on the college assessment committee and chairs her department's curriculum and assessment committees.
  • Instructor: Girish, a tenure-track engineering professor (primary)
  • Student: Courtney, a fourth-year industrial engineering student
  • External program evaluator: Byung, chair of the ABET program evaluation team that will conduct the review of Alicia's program

Note: This scenario is a composite of several portfolio projects at IU.

As the assessment coordinator for her department, Alicia is responsible for gathering and analyzing evidence of student attainment of the program outcomes defined by ABET (the accrediting body for programs in the applied sciences, computing, engineering, and technology education) and her institution's undergraduate learning outcomes.  Wishing to take advantage of the new assessment capabilities in the campus learning management system, Alicia and her colleagues on the department's assessment committee have been meeting regularly over the last year to review and revise their program assessment processes.  The committee devoted most of its time to mapping the curriculum to ABET and campus learning outcomes, mapping ABET outcomes to institutional outcomes, and developing standard rubrics for evaluating student work.  Another important consideration was minimizing the additional work required of faculty and students participating in the assessment process.

Alicia is now ready to implement the assessment process recommended by the assessment committee.  She begins by entering the outcomes on which students will be evaluated, a mix of ABET outcomes and institutional outcomes.  Several of the ABET outcomes are almost identical to institutional outcomes, so instead of asking faculty to enter a rating for each, she links (or maps) each ABET outcome to its institutional counterpart.  Next, Alicia creates electronic versions of the rubrics developed by the assessment committee and attaches a rubric to each outcome.  After reviewing her work, she publishes the outcomes and rubrics so they can be seen and used by all of the faculty and students in her department.
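
The scenario doesn't specify how the system stores this framework, but a minimal sketch can make the setup steps concrete.  Assuming a simple in-memory representation (the IDs, the institutional outcome wording, and the criterion names below are illustrative; only the ABET outcome wording and the three-criterion rubric come from later in the scenario), the outcome-to-outcome mapping and the rubric attachment might look like this in Python:

    from dataclasses import dataclass

    @dataclass
    class Rubric:
        criteria: list[str]               # criterion names, each rated on a shared scale
        scale: tuple[int, int] = (1, 4)   # hypothetical 1-4 rating scale

    @dataclass
    class Outcome:
        outcome_id: str                   # illustrative IDs, e.g. "ABET-c", "INST-2"
        description: str
        rubric: Rubric | None = None      # rubric attached to this outcome
        maps_to: str | None = None        # linked institutional outcome, if any

    # One ABET outcome mapped to a comparable institutional outcome, so a single
    # faculty rating counts for both:
    framework = [
        Outcome("INST-2", "Students design effective solutions within real-world constraints"),
        Outcome(
            "ABET-c",
            "an ability to design a system, component, or process to meet desired "
            "needs within realistic constraints",
            rubric=Rubric(["problem definition", "design process", "constraint handling"]),
            maps_to="INST-2",             # a rating on ABET-c also counts toward INST-2
        ),
    ]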

Alicia's department head and dean have already notified faculty about the new system and their role in the process, so Girish is aware of his responsibilities.  First, his syllabus must include a section that identifies the ABET and institutional outcomes most heavily emphasized in the course.  He must also incorporate at least one assignment, paper, or project in which students can document their abilities related to the outcomes listed in the syllabus.  Before the semester began, Girish made the necessary modifications to his syllabus and revised the final project so that it addresses all of the emphasized outcomes.  He then associated (or tagged) the assignment with the relevant outcomes in the published assessment framework, as in the sketch below.
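
Continuing the sketch above, the tagging step could be as simple as storing outcome IDs on the assignment record.  The course and assignment identifiers are invented, and the letter codes follow the old ABET a-k convention purely for illustration; the four outcomes are the ones Girish rates later in the scenario:

    from dataclasses import dataclass, field

    @dataclass
    class Assignment:
        assignment_id: str
        title: str
        outcome_ids: list[str] = field(default_factory=list)  # tags into the published framework

    # Girish tags the revised final project with the course's emphasized outcomes:
    final_project = Assignment(
        "ie400-final",
        "Final project: robotic claw prototype",
        outcome_ids=["ABET-c", "ABET-e", "ABET-a", "ABET-g"],
    )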

It's near the middle of the semester, and Courtney is ready to begin work on the final project.  When she opens the assignment to read the instructions, she sees the list of outcomes associated with the project, and she can also view the rubric attached to each outcome if she's interested.  Courtney studies the rubrics carefully because she wants to do well on the assignment.  The outcomes and rubrics help her understand the underlying goals of the project and what she needs to do to achieve them.  She works on the project intensively for several weeks.  The final product consists of a working robotic claw prototype, a video showing the prototype in action, and a paper describing how it was constructed.  She attaches the paper and the video to the assignment and submits it.  She also gives the working prototype to Girish so he can examine and test it.

After the due date, Girish sits down to grade the final projects.  It usually takes him 45 minutes or more to grade each project, and he's concerned about the extra work associated with rating each student on each outcome.  He opens the first submission, Courtney's, and locates the working model she gave him.  On the same page where he usually enters the grade and comments, the list of outcomes appears with an option to evaluate the student on each.  Girish carefully reviews Courtney's work and decides to rate her on each outcome before assigning a grade.  He opens the rubric for the first outcome, "an ability to design a system, component, or process to meet desired needs within realistic constraints," and assigns a rating for each of the three criteria in the rubric.  He then repeats the process for the other three ABET outcomes, "an ability to identify, formulate, and solve engineering problems," "an ability to apply knowledge of mathematics, science, and engineering," and "an ability to communicate effectively," all three of which were mapped to comparable institutional outcomes in Alicia's assessment framework.  Girish gives Courtney the highest possible rating for most criteria in the rubrics, with the exception of the communication criteria, where her skills are about average.  He then reviews all the ratings and assigns a grade of A- to the project.  In the past, he probably would have given a project like Courtney's an A+, but using the rubrics results in a more systematic and rational approach to his grading.  After inserting some suggestions for improving the writing and organization directly into Courtney's paper, Girish posts the grades, ratings, and comments.  Courtney is a bit disappointed when she sees her grade, but Girish's comments and his ratings on the communication rubric help her understand how her writing could have been improved.

Alicia has been running reports periodically to make sure the faculty are participating in the new assessment process.  One report shows the assignments that are aligned with program-level and institutional outcomes.  Since the report shows both the assignment names and the course IDs, it serves as a live curriculum map.  Alicia can also generate reports with student performance results for each outcome.  These reports give summary statistics (mean, median, mode, and standard deviation, as well as counts and percentages) for each criterion in the evaluation rubric, plus detailed results for each student.  Alicia's program is up for review by ABET in two years, and she's already thinking about how much easier it will be to prepare the written reports and assemble information for the program review team.
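
The scenario names the statistics these reports contain, so a short sketch of the per-criterion summary is straightforward with the standard library (the sample ratings and the 1-4 scale are made up):

    import statistics
    from collections import Counter

    def criterion_summary(ratings: list[int]) -> dict:
        """Summary statistics for one rubric criterion across many students."""
        counts = Counter(ratings)
        n = len(ratings)
        return {
            "mean": statistics.mean(ratings),
            "median": statistics.median(ratings),
            "mode": statistics.mode(ratings),
            "std_dev": statistics.stdev(ratings) if n > 1 else 0.0,
            "counts": dict(counts),
            "percentages": {score: round(100 * c / n, 1) for score, c in counts.items()},
        }

    # Hypothetical ratings on a 1-4 scale for one rubric criterion:
    print(criterion_summary([4, 4, 3, 2, 4, 3, 3, 4, 1, 3]))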

Fast forward two years: Alicia has prepared reports for the ABET program review team that describe her department's assessment and continuous improvement processes.  The reports include a summary of the student outcomes data gathered over the previous two years and a few samples of student work representing different levels of attainment.  She sends the written report to Byung, chair of the ABET program evaluation team, prior to the review team's campus visit.  Byung and his team read the report before they arrive and indicate that they would like to see more rated examples of student work.  Alicia helps the review team establish guest accounts and gives them read-only access to her department's assessment portfolio.  The reviewers can generate summary and detailed ratings reports by student, course, or program year.  They can also open and review the student submissions (the evidence) on which the ratings were based.  Byung and his review team are delighted to see that Alicia's program is taking such a systematic approach to program assessment.  However, they note that some faculty raters were noticeably more or less rigorous than others.  Before leaving, they provide her with some suggestions for measuring and improving inter-rater reliability.
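
The scenario doesn't say which techniques the reviewers suggested.  One common first-pass screen for the problem they spotted, raters who are systematically more or less rigorous than their peers, is to compare each rater's average rating with the overall average; the sketch below does only that (rater names and scores are invented, and a real reliability study would use paired ratings and a statistic such as Cohen's kappa or an intraclass correlation):

    import statistics
    from collections import defaultdict

    def rater_offsets(ratings: list[tuple[str, int]]) -> dict[str, float]:
        """Each rater's mean rating minus the overall mean, pooled across
        comparable rubric criteria.  Large positive offsets suggest lenient
        raters; large negative offsets suggest harsh ones."""
        by_rater: dict[str, list[int]] = defaultdict(list)
        for rater, score in ratings:
            by_rater[rater].append(score)
        overall = statistics.mean(score for _, score in ratings)
        return {rater: statistics.mean(scores) - overall
                for rater, scores in by_rater.items()}

    # Hypothetical pooled ratings on a 1-4 scale:
    print(rater_offsets([("girish", 4), ("girish", 3), ("rater_b", 2),
                         ("rater_b", 1), ("rater_c", 4), ("rater_c", 4)]))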
