Useful Vocabulary
Authentic Assessment
Authentic assessment is a form of assessment in which students are asked to apply their knowledge and skills to solve real-world problems. It is often contrasted with “traditional assessment,” which typically refers to standardized or instructor-generated multiple-choice, true-false, and fill-in-the-blank tests.
Bloom's Taxonomy
First published in 1956 (Bloom) and updated in 2001 (Anderson & Krathwohl), Bloom’s Taxonomy (for the cognitive domain) is a classification and hierarchy of learning objectives. There are six categories of learning objectives, moving from the lowest/simplest to the highest/most complex order of thinking:
Original version | Updated version | Description | Sample active verbs (to use in developing learning outcomes) |
---|---|---|---|
Knowledge | Remembering | Demonstrate memory/recall of specific facts, terms, basic concepts, principles, etc. | Define, Describe, Identify, Recall, Recognize, State |
Comprehension | Understanding | Demonstrate understanding of facts and concepts through interpreting, organizing, explaining, and comparing and contrasting. | Discuss, Explain, Give examples, Paraphrase, Summarize |
Application | Applying | Demonstrate the use of knowledge to solve problems in new situations | Apply, Construct, Illustrate, Predict, Solve |
Analysis | Analyzing | Demonstrate the ability to identify the parts, relationships and organizing principles of something, to make inferences and identify supporting evidence | Analyze, Categorize, Debate, Infer, Question, Relate |
Evaluation | Evaluating* | Demonstrate the ability to judge something based on its quality, validity, value etc., and to defend the conclusion | Appraise, Conclude, Criticize, Evaluate, Value |
Synthesis | Creating* | Demonstrate the ability to combine or integrate components into a new product, to propose alternative solutions | Assemble, Combine, Design, Formulate, Reconstruct |
*The order of “Evaluating” and “Creating” is reversed in the updated version of Bloom’s Taxonomy.
Bloom’s Taxonomy provides a useful framework to define learning outcomes and their relationships. Relevant active verbs identified in each category are effective tools to guide the development of learning outcomes.
Concept Map
Concept maps are a useful tool to visualize how students organize their knowledge about concepts and processes. A concept map typically illustrates a group of concepts (represented as circles or boxes) and the relationships between them (represented by connecting lines).
“Closing the Loop”
“Closing the loop” is used to describe the most important step in the continuous cycle of assessment. It is the step in which improvement actions, informed by the assessment data, are planned and implemented. It is an evidence-driven, reflective and collaborative process involving all stakeholders.
Curriculum Map
A curriculum map is a useful tool to ensure that curriculum/program components (e.g. courses) are designed to meet the program learning outcomes in a coordinated and systematic manner. It visually illustrates where the learning outcomes are introduced, practiced, and reinforced.
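For illustration, a curriculum map can be represented as a simple course-by-outcome matrix. The sketch below is a minimal example with hypothetical course codes and outcomes, marking where each outcome is Introduced (I), Practiced (P), or Reinforced (R).

```python
# A minimal, hypothetical curriculum map: courses mapped to program learning
# outcomes, marked as Introduced (I), Practiced (P), or Reinforced (R).
curriculum_map = {
    "BIO 101": {"Written communication": "I", "Scientific reasoning": "I"},
    "BIO 210": {"Scientific reasoning": "P", "Quantitative literacy": "I"},
    "BIO 350": {"Written communication": "P", "Scientific reasoning": "R"},
    "BIO 490": {"Written communication": "R", "Quantitative literacy": "R"},
}

def coverage(outcome):
    """Return the courses that address a given outcome, in I -> P -> R order."""
    order = {"I": 0, "P": 1, "R": 2}
    hits = [(course, mark) for course, marks in curriculum_map.items()
            if (mark := marks.get(outcome))]
    return sorted(hits, key=lambda pair: order[pair[1]])

# Example: check whether "Written communication" is introduced, practiced,
# and reinforced somewhere in the program.
print(coverage("Written communication"))
# [('BIO 101', 'I'), ('BIO 350', 'P'), ('BIO 490', 'R')]
```

A gap in the returned list (e.g. an outcome that is introduced but never reinforced) is exactly the kind of misalignment a curriculum map is meant to surface.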
Direct vs. Indirect Assessment
Direct assessment uses measures that directly capture students’ learning and/or development of knowledge, skills, etc. Examples of direct assessment include exams, final papers, lab reports, etc.
Indirect assessment uses measures that capture perceptions or reflections about student learning, but do not measure learning itself. Examples of indirect assessment include student self-reflections, exit interviews, employer surveys, etc.
Embedded vs. External Assessment
Embedded assessment refers to assessment methods that are integrated into the regular curricular or co-curricular process. Embedded assessment often provides data that can be used to judge individual student performance in a course, AND can be aggregated to demonstrate how a program is helping its students meet the desired learning outcomes.
External assessment refers to assessment methods that use an instrument developed by someone external to the unit being assessed. External assessment is often summative, quantitative, and high-stakes (e.g. the MCAT).
Formative vs. Summative Assessment
Formative assessment refers to the gathering of information or data about student learning during a course or program that is used to guide improvements in teaching and learning. Its primary purpose is to provide immediate feedback to students and the instructor. Formative assessment activities are usually low-stakes or no-stakes: they do not contribute substantially to the student’s final evaluation or grade, and sometimes are not assessed at the individual student level at all. For example, posing a question in class and asking for a show of hands in support of different response options is a formative assessment at the class level; observing how many students responded incorrectly guides further teaching. Other examples of formative assessment include one-minute papers and in-class problem-solving tasks.
Summative assessment typically takes place at the conclusion of a course or program, and its primary purpose is to measure student proficiency or mastery of knowledge and skills (often against others or pre-determined criteria). Summative assessment does not provide immediate feedback to the students being assessed, but can guide the improvement of teaching and learning for the next cohort of students. Examples of summative assessment include final exams, senior thesis, etc.
Holistic Scoring
In holistic scoring, student work is evaluated to gain an overall impression of student performance, as opposed to scoring multiple dimensions of performance separately (analytic scoring).
Individual vs. Institutional Assessment
Individual and institutional assessment differ in their level of analysis. Individual assessment focuses on individual students and their learning, whereas institutional assessment focuses on how the institution is meeting its goals and objectives. Both levels of assessment aim to improve learning, with the former providing feedback to individual students and the latter ensuring the institution’s accountability. For example, a juried review of individual student presentations can be used to measure student oral communication skills. The individual scores highlight areas of improvement for the students, but the scores of a representative sample of seniors could be used as an indicator of how the institution is meeting the goal of “students develop effective oral communication skills.”
Norm-referenced vs. Criterion-referenced Assessment
Norm-referenced assessment measures and compares student performance in relation to the performance of an appropriate peer group. Students with the best performance receive the highest grade. Criterion-referenced assessment measures and compares student performance in relation to pre-established standards or objectives. All students may receive the highest grade if they meet the established standards.
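The distinction can be made concrete with a small grading sketch (hypothetical scores and cutoffs): norm-referenced grading ranks students against each other, while criterion-referenced grading compares each score to a fixed standard.

```python
scores = {"Ana": 92, "Ben": 85, "Chi": 85, "Dev": 70, "Eli": 55}

# Norm-referenced: the top 20% of the class earns an A, regardless of
# the absolute score (hypothetical cutoff).
ranked = sorted(scores, key=scores.get, reverse=True)
top_n = max(1, round(0.2 * len(scores)))
norm_a = set(ranked[:top_n])

# Criterion-referenced: every student at or above a fixed standard
# (here, 85 points) earns an A, so all, some, or none may qualify.
criterion_a = {name for name, score in scores.items() if score >= 85}

print("Norm-referenced A's:     ", sorted(norm_a))       # ['Ana']
print("Criterion-referenced A's:", sorted(criterion_a))  # ['Ana', 'Ben', 'Chi']
```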
Portfolio
Portfolios are collections of multiple student work samples assembled over time. They can be compiled at the level of the individual student, the program, or the institution. Portfolios are typically evaluated using rubrics.
Quantitative vs. Qualitative Assessment
Quantitative assessment collects data that are numerical in nature and can be analyzed using quantitative and statistical techniques. Qualitative assessment, on the other hand, collects data that do not lend themselves to quantitative methods and are instead analyzed through interpretive means.
Rubric (Analytic vs. Holistic)
A rubric is a tool for scoring student work. It typically takes the form of a table or matrix, describing the dimensions or components of the work (along the vertical axis) at varying levels of performance (along the horizontal axis). Each dimension or component can be scored individually (analytic scoring), or the dimensions can be considered together to generate a single overall score (holistic scoring). Rubrics can also be used to communicate expectations to students and to provide formative feedback to guide students' learning efforts.
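As an illustration of the analytic/holistic distinction, the sketch below scores a hypothetical three-dimension writing rubric: analytic scoring reports each dimension separately, while a holistic score collapses them into a single overall judgment (a simple average here, though programs may weight dimensions differently).

```python
# Hypothetical 4-point rubric with three dimensions of a writing assignment.
RUBRIC_DIMENSIONS = ["Thesis & argument", "Use of evidence", "Organization"]

# One student's ratings on the 1-4 performance scale.
analytic_scores = {
    "Thesis & argument": 3,
    "Use of evidence": 2,
    "Organization": 4,
}

# Analytic scoring: report each dimension separately so students see
# specific strengths and areas for improvement.
for dimension in RUBRIC_DIMENSIONS:
    print(f"{dimension}: {analytic_scores[dimension]}/4")

# Holistic scoring: collapse the dimensions into one overall score
# (a plain average here; any weighting is a program-level decision).
holistic_score = sum(analytic_scores.values()) / len(analytic_scores)
print(f"Overall (holistic): {holistic_score:.1f}/4")
```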
Standards, Benchmarks, or Criteria for Success
In the context of student learning assessment, standards, benchmarks, and criteria for success all refer to the established level of proficiency that students are expected to demonstrate. Such expectations are determined by faculty, staff, and other educators at individual institutions.
Triangulation
The process of using multiple methods of assessment to see whether results converge or diverge is called triangulation.
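As a rough illustration, triangulation might compare a direct measure (e.g. rubric scores on a capstone paper) with an indirect measure (e.g. students' self-reported confidence) for the same students to see whether they point in the same direction. The sketch below uses hypothetical data and a simple Pearson correlation as the convergence check.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical paired measures for the same five students.
rubric_scores = [3.5, 2.0, 4.0, 3.0, 2.5]  # direct: capstone rubric (1-4)
self_reported = [4, 2, 5, 4, 3]            # indirect: survey confidence (1-5)

r = correlation(rubric_scores, self_reported)
print(f"Pearson r = {r:.2f}")
if r >= 0.5:
    print("The two measures broadly converge.")
else:
    print("The measures diverge; the discrepancy itself is worth investigating.")
```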
Value-added Assessment
Value-added refers to the increase in students’ learning, or the contribution that institutions make to students’ learning, during a course or program. It can refer to an individual student or a cohort of students (e.g. seniors vs. freshmen). To capture the “value added,” a baseline measurement is needed (e.g. pre/post testing, historical comparison, cross-sectional comparison, etc.).
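A minimal sketch of a pre/post value-added calculation, using hypothetical scores on the same instrument administered at entry and at graduation; the “value added” is simply the change from the baseline, at the individual or cohort level.

```python
# Hypothetical scores on the same instrument, administered pre (entry)
# and post (graduation) for the same cohort of students.
pre_scores  = {"S1": 55, "S2": 62, "S3": 48, "S4": 70}
post_scores = {"S1": 72, "S2": 75, "S3": 66, "S4": 81}

# Individual value added: each student's gain over their own baseline.
gains = {s: post_scores[s] - pre_scores[s] for s in pre_scores}
print(gains)  # {'S1': 17, 'S2': 13, 'S3': 18, 'S4': 11}

# Cohort value added: average gain across the cohort.
cohort_gain = sum(gains.values()) / len(gains)
print(f"Average cohort gain: {cohort_gain:.1f} points")
```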