[assessment] Feeding data into the structure from the less participative

An earlier note covered the many different ways data might be fed into the student learning outcomes assessment plan (SLOAP). There are a number of direct and indirect methods by which assessment data can be gathered. Direct methods might include locally developed tests, embedded assessment, competence interviews, direct student-by-student outcome-by-outcome measurement, item analysis of evaluation instruments that measure specific student learning outcomes, and portfolios. Indirect methods might include surveys, focus groups, sampling methods, and advisory councils.

One of the indirect methods, an interview approach to determining student attainment on outcomes, is possibly the most unorthodox of the proposed options. Conceptually the approach is a blend of item analysis and focus group approaches to assessment.

In an earlier note I argued that an experienced and qualified instructor who implements a course built on student learning outcomes acquires an accurate sense of overall student achievement on an outcome-by-outcome basis. In an ad hoc experiment, I predicted performance generalities in MS 150 Statistics and then did an item analysis of the final, which was built from the measurable student learning outcomes on the outline. The item analysis matched the results I had intuitively predicted prior to the analysis.

Pilots are trained to consider the worst-case scenario and build their planning from there. While we would hope that all faculty are excited and eager to get involved in the effort to generate assessment and improvement data, the reality is that some faculty are not going to be all that participative. This is not to say that they are not assessing their students, only that they are not generating information that can be used for formative assessment, only summative assessment for each student so as to assign a grade.¹ How does one get information from such instructors that can be used to help improve a course or program?

Given the above two paragraphs, my proposal is that a chair or instructional coordinator would sit down with the faculty member and conduct an interview using the outline as the basis for the interview. For each and every student learning outcome, the chair would ask the instructor about the performance of the students on that outcome. This might result in qualitative data of a descriptive nature, or it might generate quantitative data using a Likert-type scale.

Dr. Mary Allen cautions against the use of personally identifiable assessment data, that is, against turning program assessment into faculty assessment. In a small school where only one faculty member may be teaching a particular course, all course-level data is potentially linkable to a faculty member. In this situation I can only ask that those who read this treat it as a program evaluation, not a faculty evaluation. I would also note that I do not mean to imply that the faculty member involved does not support assessment; they agreed to be a guinea pig in a pilot test of the interview concept.

The faculty member teaches a content-intensive course with 88 outcomes. In the fall term the faculty member had 87 students. I have previously noted that even on a one-to-one basis this is 7,656 student-by-student outcome-by-outcome data points. If each student is given three shots at mastering an outcome, the grid or matrix balloons to 22,968 events.
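
For the curious, a quick sketch of the arithmetic behind those figures, using the enrollment and outcome counts cited above:

    # Size of a one-to-one tracking grid for the course described above
    students = 87    # fall term enrollment
    outcomes = 88    # student learning outcomes on the outline
    attempts = 3     # attempts allowed per outcome

    data_points = students * outcomes   # 7,656 student-outcome cells
    events = data_points * attempts     # 22,968 assessment events
    print(data_points, events)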

The faculty member is the only instructor of SC 101 Health Science under my supervision. Because of the level of detail and the sheer number of the outcomes, I opted for a quantitative Likert-scale approach that would be familiar to the instructor.

Based on quizzes, tests, and other assignments, what is the level of competency of the students on the following student learning outcomes?
Very strong: 5
Strong: 4
Moderate: 3
Weak: 2
Very weak: 1

I then led an interview on all 88 student learning outcomes. A portion of the grid with the ratings is shown below:
75 Explain the elements and method of transmission that pose the chain of infection. 4
76 Identify various defense mechanisms: environmental, constitutional, structural, cellular, and chemical. 3
77 Explain what is known about the occurrence, symptoms, treatment, and prevention of many common diseases. 2
78 Identify the types of cancerous tumors and the prevalence of cancer. 3
79 Describe the factors that contribute to cancer. 3
80 Describe cancer countermeasures of prevention, early detection and diagnosis, and treatment. 3
81 Explain the difference between a drug and medicine. 5

The outline provided a nice structure and focus to our dialog. I also made notes on a number of comments and side discussions we had while working on ratings. The instructor noted, among other things, that while students can often demonstrate knowledge on a specific test or final, a month later the students may have forgotten much of what they have learned. I am indebted to Rachel for her excellent note on surface versus deep learning in this regard.

I use the word dialog in the above paragraph quite intentionally: the faculty member and I had an assessment dialog that proved insightful for both of us. In addition, I have a record of that dialog in the comments made and the performance ratings on all 88 outcomes.

The ratings could be numerically aggregated, but it is more productive to note the areas of strength and weakness. Students perform more strongly on the reproductive system and on the health issues surrounding psychoactive substances. Students are weaker in areas of less intense personal interest.
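
As an illustration only, a short sketch of how such interview ratings could be tallied. The outcome numbers and scores are the sample values from the grid above; the cutoff for flagging a weak area is my own assumption, not part of the interview protocol.

    # Hypothetical tally of interview ratings on the 1-5 scale above;
    # outcome numbers and scores are taken from the sample grid.
    ratings = {75: 4, 76: 3, 77: 2, 78: 3, 79: 3, 80: 3, 81: 5}

    average = sum(ratings.values()) / len(ratings)
    weak = [n for n, score in ratings.items() if score <= 2]   # assumed cutoff

    print("average rating:", round(average, 1))
    print("outcomes flagged for follow-up:", weak)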

The above approach, what I term an interview on attainment of student learning outcomes, is something that a chair or instructional coordinator could arrange even with the least assessment-oriented faculty member.

Bear in mind that the thinking in the assessment "community" is to move away from a five-year assessment cycle and towards an annual assessment cycle. Nor would an annual cycle look at only one outcome; that too is yesterday's thinking. An annual cycle would give an overview of the status of all major outcomes. I am reminded of the current effort in business and industry to move towards the "real-time" enterprise. Assessment is headed in the same direction: simpler, faster systems that generate useful formative information will be more effective and sustainable than complex projects.

The above data can be aggregated to generate information on program learning outcomes. Two broad summative conclusions come out of my aggregation of the work I did with the faculty member. The first is that the course is heavy on content and operates primarily at the bottom of Bloom's taxonomy. The second has to do with the overall level of performance on the outcomes, which I am intentionally not including herein. On the formative side, the faculty member and I now have a list of weaker areas in which the faculty member can consider ways to improve learning in the course.

There are a multitude of ways data can be gathered into the SLOAP; the SLOAP is merely a structure on which to "hang information." Although I tend to use the structure in a one-to-one fashion for simplicity (each specific outcome is mapped to a single program learning outcome, which is in turn mapped to one institutional outcome), other divisions might map many-to-one or many-to-many. These latter two systems would simply be grids of information instead of "linear" lists. My own experience in the math science division is that most of the specific outcomes primarily serve a single program level outcome, but this may vary by division.
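
For readers who think in terms of data structures, a minimal sketch of the two shapes; the outcome labels below are invented placeholders rather than items from any actual outline.

    # One-to-one: each specific (course) outcome feeds a single program
    # learning outcome, so the mapping stays a "linear" list.
    one_to_one = {"CLO 1": "PLO 1", "CLO 2": "PLO 1", "CLO 3": "PLO 2"}

    # Many-to-many: a specific outcome may feed several program outcomes,
    # so the mapping becomes a grid (each outcome maps to a list).
    many_to_many = {
        "CLO 1": ["PLO 1", "PLO 3"],
        "CLO 2": ["PLO 1"],
        "CLO 3": ["PLO 2", "PLO 3"],
    }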

In closing, I would remind the concerned reader that no one method of collecting assessment data would be the single source of information. Data generated at the course level would be supplemented by instruments such as alumni surveys, employer surveys, and the community surveys of programs conducted by IRPO.

¹ Formative assessment is assessment for the purpose of improving a course, program, or institution. Formative assessment would also include assessment that leads to improvement of a student's skills or knowledge. Summative assessment is often a single-point evaluation of what a student can do, or of the overall quality of a program. Summative assessment does not typically tell one how to improve a student or a program. Grading finals and giving students a grade would be summative. Evaluating whether the questions on the final match the course outline is also summative. An item analysis of the final to determine learning deficiencies, with the intent to alter the program to tackle those deficiencies, would be formative assessment.