14 Assessments and Integrity
Assessments
Assessments reveal a lot about your values as an educator. They show what you want to spend time thinking about and exploring with your students, as well as your own understanding of how students can express their learning.
When assessments are designed to be equitable, it means that they are designed to meet the needs of students who historically have been denied the learning opportunities, support, or access that is traditionally offered to students from dominant cultures and identities. As Sasha Costanza-Chock points out in Design Justice, “some people are always advantaged and others disadvantaged by any given design.” Importantly, “this distribution is influenced by intersecting structures of race, class, gender, and disability.” Costanza-Chock calls on educators to “prioritize design work that shifts advantages to those who are currently systemically disadvantaged within the matrix of domination,” or to become “design justice practitioners.” Equitable assessments deliberately shift design benefits to those groups who are disenfranchised by powerful social, economic, and political systems.
Types of Assessments
There are two types of assessment to keep in mind in your course design:
- Formative assessments: low stakes, frequent knowledge checks that allow students to test their understanding and receive feedback to improve their performance (for example: reflection questions with instructor or peer feedback, discussion forums, short quizzes with multiple attempts, no time limits, and automatic answer explanations)
- Summative assessments: projects that typically weigh more in the final grade and take place every few weeks over the course of the term to holistically measure student achievement of course learning outcomes
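The formative-assessment features above (multiple attempts, no time limits, automatic answer explanations) can be sketched in code. This is a minimal, hypothetical illustration; the question, field names, and feedback wording are all assumptions, not any particular LMS's API.

```python
# Hypothetical formative quiz item: every attempt returns the answer
# explanation, so feedback supports learning rather than penalizing errors.
QUESTION = {
    "prompt": "Which assessment type is low stakes and frequent?",
    "choices": ["Summative", "Formative", "Diagnostic"],
    "answer": "Formative",
    "explanation": (
        "Formative assessments are frequent knowledge checks that give "
        "students feedback to improve their performance."
    ),
}

def check_attempt(question, response):
    """Return (correct, feedback). Feedback always includes the
    explanation, and the student may attempt again with no time limit."""
    correct = response == question["answer"]
    status = "Correct!" if correct else "Not quite; try again."
    return correct, f"{status} {question['explanation']}"

# First attempt is wrong, second succeeds; both receive the explanation.
correct, feedback = check_attempt(QUESTION, "Summative")
correct, feedback = check_attempt(QUESTION, "Formative")
```

The key design choice is that feedback is unconditional: a wrong answer still surfaces the explanation, mirroring the "knowledge check" purpose of formative work.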
Assessment formats vary widely by school, department, and teaching community. They might include creating a portfolio of work, a research paper, or an art project over the course of the term; conducting experiments, proposing a marketing plan, or writing a lab report; or a live performance, a presentation, a script, or a video. Assessments also take more traditional forms like exams or tests. It’s important to note that while exams are often convenient to administer, their validity, or their capacity to accurately measure student knowledge, can be quite limited.
To shift design benefits to disenfranchised groups, consider how UDL, CRT, TILT, and OEP show up in your assessment planning. How can students achieve the following opportunities through your course assessments?
- Students have multiple means of expression in course assessments. They can choose between assignment formats (video, comic, written essay) and project options (student-generated research question, group project, community-based partnership).
- Students create exam questions as well as rubrics and criteria for success used to evaluate projects or exams.
- Students can easily find the purpose, task, and criteria for success for each assignment.
- Students are invited to reflect on how the course content connects to their lived experiences and personal learning goals.
- Students receive an orientation to informed consent about open licensing.
- Students are invited to share their coursework with an open license if they choose.
Ideally, your assessments will allow you and your students to measure their achievement of the course learning outcomes and engage in meaningful growth and development as people rather than as consumers of knowledge.
Rubrics[1]
Rubrics are great for making instructor feedback transparent and actionable for students. Students rely on rubrics to interpret instructor comments on the quality of their work and to prioritize steps for improvement.
Rubrics also save valuable time by allowing instructors to score student performance quickly with built-in criteria for success and detailed ratings associated with different performance levels.
Rubrics usually take the form of a table with performance ratings organized by column at the top and criteria for success organized by row with detailed descriptions of work that meets different performance levels. The Rubric Template and Grading Scheme Development by the University of Wisconsin-Madison is a great template to adapt.
To create a rubric, consider the qualities of student performance that matter most to you and your course-level learning outcomes:
- What would student success look like for this assignment?
- Is the assignment a specific product with many parts?
- Is the assignment a performance that shows a holistic demonstration of skills and techniques?
- Is the assignment a single submission or does it require many individual parts?
- What skills are you hoping students will demonstrate?
Naming what success looks like will help you determine your desired criteria and performance ratings.
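A rubric's structure (criteria as rows, performance ratings as columns, each pairing worth a point value) can be modeled as a simple data structure. The criteria names, weights, and point values below are illustrative assumptions, not a recommended scheme.

```python
# Illustrative rubric: each criterion carries a weight, and each
# performance rating maps to a point value on a 0-3 scale.
POINTS = {"Developing": 1, "Proficient": 2, "Exemplary": 3}

RUBRIC = {  # criterion -> weight (share of the total score)
    "Thesis and argument": 0.4,
    "Use of evidence": 0.4,
    "Organization and clarity": 0.2,
}

def score(selected):
    """Weighted rubric score, given the rating selected for each
    criterion, e.g. {"Use of evidence": "Proficient", ...}."""
    return sum(RUBRIC[c] * POINTS[r] for c, r in selected.items())

total = score({
    "Thesis and argument": "Exemplary",
    "Use of evidence": "Proficient",
    "Organization and clarity": "Proficient",
})
# 0.4 * 3 + 0.4 * 2 + 0.2 * 2 = 2.4 on the 0-3 scale
```

Separating weights from ratings makes the instructor's priorities explicit, which is exactly what transparent criteria for success are meant to do for students.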
The options for using assessment data to demonstrate equivalence (or not) of learning depend on the methodology used to gather the data and the ease of communicating the feedback back to initiate change. Methods for gathering data on learning vary, but they inform all teaching modalities, that is, face-to-face (F2F), hybrid/HyFlex, and online, with some adjustments and selections. Table 2 lists three common categories of assessment evidence, with their pros and cons across the modalities. Working at the category level allows you to select the evidence sources applicable to the learning outcome being targeted, and the assessment feedback then informs improvement in the specific modality and methodology. As Table 2 shows, higher-level data, such as aggregate data on student success, tend to be the same across modalities but still inform specifics within an assessment domain or modality.
Table 2. Pros and cons of using the common categories of assessment evidence across modalities
| Evidence Source | F2F | Hybrid/HyFlex | Online |
| --- | --- | --- | --- |
| D* – Course Work | Pros: traditional; comfortable for students and faculty. Cons: hard to manage unless recorded in a system; assessment variations. | Pros: choice for students; ±QM review of course/faculty; embedded in course; software help; easier to manage. Cons: climate differences may affect students; moderate tech skill needed by faculty and students. | Pros: UDL usage; embedded in course; ±QM review of course/faculty; software help. Cons: perceptions about equivalence; requires higher tech skill from faculty and students. |
| D – Capstones | Pros: ease of observation by faculty/student conversations; measures multiple components. Cons: time restrictions; feedback may not benefit those students. | Pros: moderate observation; better presentation, though disparities between remote and on-campus students may exist in HyFlex; moderately adaptable for tech-savvy students; can combine flexibility; measures multiple components. Cons: issues with supervision; feedback may not benefit those students. | Pros: some presentations may garner a better audience; adaptable for tech-savvy students; measures multiple components. Cons: some artifacts may not suit remote/online delivery; greater issues with supervision and field experience; feedback may not benefit those students. |
| I* – Student Success Data | Pros: the same for all, so aggregate data can be used; helps check disparities in DEI; indirect measure; resources available at Century; validating referrals to campus resources is easy. Cons: may bring assessment back to grades; may not align with learning outcomes. | Pros: the same for all, so aggregate data can be used; helps check disparities in DEI; indirect measure; resources available at Century; validating referrals to campus resources is easy for some. Cons: may bring assessment back to grades; may not align with learning outcomes; preference for modality may impact satisfaction and performance. | Pros: the same for all, so aggregate data can be used; helps check disparities in DEI; indirect measure; resources available at Century; validating referrals to campus resources is easy for some. Cons: may bring assessment back to grades; may not align with learning outcomes; deep conversations with students may not be possible. |

D* = Direct Measure Category; I* = Indirect Measure Category; ease applies to faculty, administrators, or students.
The SUNY Council on Assessment has also compiled a document on the pros and cons of different formative and summative assessment approaches; the footnote for annex 26 links to the table of activities for reference.
Principles of Assessment
Principle 1 – Assessment should be valid
Validity ensures that assessment tasks and associated criteria effectively measure student attainment of the intended learning outcomes at the appropriate level.
Principle 2 – Assessment should be reliable and consistent
There is a need for assessment to be reliable, and this requires clear and consistent processes for the setting, marking, grading, and moderation of assignments.
Principle 3 – Information about assessment should be explicit, accessible, and transparent
Clear, accurate, consistent, and timely information on assessment tasks and procedures should be made available to students, staff, and other external assessors or examiners.
Principle 4 – Assessment should be inclusive and equitable
As far as is possible without compromising academic standards, inclusive and equitable assessment should ensure that tasks and procedures do not disadvantage any group or individual.
Principle 5 – Assessment should be an integral part of program design and should relate directly to the program’s aims and learning outcomes
Assessment tasks should primarily reflect the nature of the discipline or subject but should also ensure that students have the opportunity to develop a range of generic skills and capabilities.
Principle 6 – The amount of assessed work should be manageable
The scheduling of assignments and the amount of assessed work required should provide a reliable and valid profile of achievement without overloading staff or students.
Principle 7 – Formative and summative assessment should be included in each program
Formative and summative assessments should be incorporated into programs to ensure that the purposes of assessment are adequately addressed. Many programs may also wish to include diagnostic assessment.
Principle 8 – Timely feedback that promotes learning and facilitates improvement should be an integral part of the assessment process
Students are entitled to feedback on submitted formative assessment tasks, and on summative tasks, where appropriate. The nature, extent and timing of feedback for each assessment task should be made clear to students in advance.
Principle 9 – Staff development policy and strategy should include assessment
All those involved in the assessment of students must be competent to undertake their roles and responsibilities.
Assessment Integrity Tools
Consider the technologies and logistics available at your institution for the following key tasks that support assessment integrity.
- Plagiarism detection tools
- Online proctoring and monitoring
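To give a sense of the underlying idea behind plagiarism detection tools, the toy sketch below compares two texts using Jaccard similarity over three-word phrases ("shingles"). This is only an illustration of the principle; real detection tools match against large corpora and use far more robust techniques, and the example sentences are made up for the sketch.

```python
# Toy plagiarism check: Jaccard similarity over word trigrams.
def shingles(text, n=3):
    """Set of n-word phrases in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Overlap between two texts: |shared shingles| / |all shingles|."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "equitable assessments deliberately shift design benefits to disenfranchised groups"
copied = "equitable assessments deliberately shift design benefits to marginalized groups"
unrelated = "the mitochondria is the powerhouse of the cell"

jaccard(original, copied)     # high overlap would flag the pair for human review
jaccard(original, unrelated)  # near zero: no shared three-word phrases
```

Note that a high score only flags a pair for review; deciding whether overlap constitutes misconduct remains a human judgment.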
References
Costanza-Chock, Sasha. “Design Values: Hardcoding Liberation?” in Design Justice: Community-Led Practices to Build the Worlds We Need. MIT Press. 2020. https://design-justice.pubpub.org/pub/3h2zq86d/release/1
1. AAC&U VALUE Rubrics: https://www.aacu.org/initiatives/value-initiative/value-rubrics