SUNY Oneonta Assessment of Student Learning

Direct and Indirect Assessment

Today, a key issue in assessment of academic programs is the use of embedded assessment elements that yield direct and indirect measures of student learning.18

Embedded assessment elements are those that faculty prepare, such as syllabi, curricula, instructional materials and methods, assignments, and exams and quizzes.19

Direct methods of evaluating student learning are those that provide evidence that a student has command of a specific subject, content area, or skill, or that the student's work demonstrates a specific quality such as creativity, analysis, or synthesis. Examples include student products and performances resulting from embedded assignments, tests, and other educational experiences.

In contrast, indirect methods of evaluating student learning involve data that are related to the act of learning, such as factors that predict or mediate learning or perceptions about learning, but that do not measure the learning itself. Examples include surveys, placements, and other institutional research data. Indirect indicators can provide both qualitative and quantitative information over time and across situations.

Both forms of evaluation are valuable: one determines what a student has learned, and the other helps explain why. Indirect measures are typically used to support or explain the information gathered from direct measures. Faculty are encouraged to decide which direct and indirect measures of student learning are relevant and meaningful for their program and students.

MSA provides the following examples of direct and indirect measures:

Course Level
  Direct Measures:
  • Homework assignments
  • Examinations and quizzes
  • Term papers and reports
  • Case studies
  • Observations of field work, internships, and artistic performances and products
  Indirect Measures:
  • Course evaluations
  • Outlines of the information, concepts, and skills covered on tests
  • Percent of class time spent in active learning
  • Number of student hours spent at intellectual or cultural activities related to the course

Program Level
  Direct Measures:
  • Capstone projects, senior theses, exhibits, or performances
  • Student publications or conference presentations
  • Pass rates or scores on licensure, certification, or subject-area tests
  • Employer and internship supervisor ratings of students' performance
  Indirect Measures:
  • Registration or course enrollment information
  • Employer or alumni surveys
  • Student perception surveys

Institutional Level
  Direct Measures:
  • Performance on tests of writing, critical thinking, or general knowledge
  • Rating-scale scores for class assignments in General Education
  • Performance on achievement tests
  Indirect Measures:
  • Transcript studies that examine patterns and trends of course selection and grading
  • The institution's annual reports, including institutional benchmarks and graduation rates
  • Locally developed, commercial, or national surveys of student perceptions or self-reports of activities

Assessment should involve the systematic and thorough collection of direct and indirect evidence of student learning, at multiple points in time and in various situations, using a variety of qualitative and quantitative evaluation methods that are embedded in courses, programs, and overall institutional processes.20

Assessing inputs or resources alone is no longer acceptable. That is, a program's self-study cannot focus on the use, adequacy, or inadequacy of resources such as faculty lines, secretarial support, library allocations, department budgets, travel monies, temporary service lines, and the like.

18 Other MSA issues such as “formative and summative” assessment and “benchmarking” are discussed on pages 20-21.
19 Middle States Commission on Higher Education, Student Learning Assessment: Options and Resources, 2003.
20 Middle States Commission on Higher Education, Student Learning Assessment: Options and Resources, Guiding Principle 5.

© SUNY at Oneonta