The College of Micronesia-FSM curriculum committee recently approved the mathematics program student learning outcomes.
Achievement of these outcomes will be demonstrated primarily by the grades students earn working with qualified, academically trusted faculty, who in turn use course outlines built on course-level student learning outcomes that map to the program learning outcomes.
Validation of the grades delivered will be accomplished by evaluating a sample of students in the terminal courses of the mathematics sequence; this forms a secondary system of evaluation.
A diagram of the relationship of these elements to the program student learning outcomes is available. The relationship of the program learning outcomes to the performance based budget outcomes is shown in a table.
The right side of the diagram, showing the relationship between the primary and secondary evaluation of the mathematics program student learning outcomes, is a new system. A pilot instrument was developed and deployed. The actual instrument is an OpenOffice.org document, but a copy of the contents exists as a web page.
The initial run was intended primarily to trial the items being evaluated; the main intent was to refine the instrument. That the instrument might give a first look at whether program student learning outcomes are being achieved was only a secondary intent. As a result, the results are statistically confounded by a number of factors.
First and foremost, the instrument was not delivered to a random selection of students approaching graduation. The instrument was given to 38 students in MS 150 Statistics, a capstone mathematics course for some majors, and to 21 students in MS 095 PreAlgebra.
Having the MS 095 students undertake the evaluation was an attempt to test question clarity for mathematically weak students. There was also the intent of looking at whether significant differences occurred between students early in the mathematics program at the College and students at the end of the mathematics program.
A second serious statistical problem is that not only was the instrument not delivered to a random selection, but all of the students shared a single instructor.
Other problems include a non-representative sample of majors: Computer Information Systems majors dominate the sample while liberal arts majors are seriously underrepresented.
The item analysis spreadsheet is available in both its native OpenOffice.org format and as an Excel file produced by OpenOffice.org Calc. Note that there was no question nine; it was deleted late in the editing process and the instrument was not then renumbered.
The pilot instrument focused on interpreting graphical information with a secondary focus on solving simple word problems. The logic was that post-graduation the bulk of the mathematics that people encounter in life is actually graphical information.
The number of people who actually need to solve quadratic equations as part of their daily lives is vanishingly small. The bulk of what is taught in a mathematics program is quickly forgotten, with no strongly negative impact on the alumni who have forgotten it. Even the faculty of the division of natural science and mathematics would be hard pressed to pass an MS 100 College Algebra examination taken cold.
At the program evaluation level the intent was to look at the life skills implied by the mathematics program student learning outcomes. The eleven items were intended to look at the following outcomes (see the outcomes page for more complete information on each of these):
During the evaluation students raised concerns about questions two and six; both questions were deemed unclear.
Errors were found in questions five and eight; corrections were put on the board in each class. Question three included an option to choose a logarithmic model; this was deleted because recognizing the shape of a logarithmic function is covered only in the optional elective MS 101 Algebra and Trigonometry course.
The online evaluation has already had these issues addressed; the original instrument wording now exists only as a hard copy. The online version has also been renumbered so that there is no longer a missing question nine. The analysis tables were done prior to this correction, hence numbers 10, 11, and 12 in the analysis correspond to 9, 10, and 11 in the online version.
The overall results were as follows:
The total possible represents the 59 total students who participated. The number correct is the number of students who got that particular question correct.
The overall success rate of 53% suggests that any attempts at setting a baseline measure up at a more traditional 70% "C" or higher would be unrealistic.
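The per-item success rates discussed here reduce to a simple proportion: correct responses over participants. A minimal sketch of the computation (the per-question counts below are hypothetical placeholders, not the actual item data; only the participant count of 59 comes from the text):

```python
# Per-item success rates for a fixed-form evaluation.
# 59 is the reported number of participants; the counts of
# correct answers per question are invented for illustration.
TOTAL_STUDENTS = 59

# question number -> number of students answering correctly (illustrative)
correct = {1: 30, 2: 33, 3: 25, 4: 49, 5: 28, 6: 20,
           7: 47, 8: 40, 10: 31, 11: 26, 12: 37}

rates = {q: n / TOTAL_STUDENTS for q, n in correct.items()}
overall = sum(correct.values()) / (TOTAL_STUDENTS * len(correct))

for q, r in sorted(rates.items()):
    print(f"Question {q:2d}: {r:.0%}")
print(f"Overall: {overall:.0%}")
```

The overall rate here is the pooled proportion across all responses, which matches a per-question average only because every student answered every question.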
The strongest single-question performance was an 83% on question four, reading a y-value from a graph given an x-value. Question two was the same type of question but yielded only a 56% success rate. The difference can probably be attributed to the line providing an exact answer for four but not for two: two required a willingness to estimate and round that four did not. The gap suggests that subtle differences in problem design can play out as large differences in success.
Questions seven, eight, and eleven (renumbered as 10 in the version presently online) were fairly traditional word problems involving nothing more complex than a linear equation. Success rates of 80%, 68%, and 63%, relative to the 53% mean, suggest that students have some capability with these problems.
Students in MS 150 Statistics and MS 095 PreAlgebra were given the evaluation on 21 March 2003 with the following results by course:
| 150 – 95 diff | -0.1 | 0.06 | -0.24 | 0.18 | 0.09 | 0.11 | 0.13 | 0.09 | 0.27 | 0.23 | 0.15 | -0.01 | 0.18 | 0.09 | 0.07 |
The difference in performance between the students in the two courses is not large. The MS 150 student average was only 9% higher than the MS 095 student average. If the evaluation were statistically sound this difference would only be marginally significant. Given the numerous statistical problems that confound this running of the evaluation this difference cannot be taken as significant.
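A rough sense of why a 9-point gap between groups of 38 and 21 is borderline can be had from a two-proportion z sketch. The group sizes come from the text; treating each group's average score as a simple proportion is a simplifying assumption for illustration, not how the original analysis was run:

```python
from math import sqrt

# Two-proportion z sketch for the MS 150 vs MS 095 gap.
# Group sizes (38, 21) are from the text; the averages are treated
# as if they were simple proportions -- a simplifying assumption.
n_150, n_095 = 38, 21
p_150, p_095 = 0.56, 0.47      # 9-point gap quoted in the text

pooled = (p_150 * n_150 + p_095 * n_095) / (n_150 + n_095)
se = sqrt(pooled * (1 - pooled) * (1 / n_150 + 1 / n_095))
z = (p_150 - p_095) / se

print(f"z = {z:.2f}")  # well under the conventional 1.96 cutoff
```

Even setting the confounds aside, groups this small provide little power to detect a gap of this size at the group level; the exact result depends on how item responses are aggregated.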
Of some interest were those questions on which the MS 095 students outperformed the MS 150 students. Question one required knowledge of the definition of commutativity, a concept taught in developmental mathematics and apparently partially forgotten by the students in MS 150 Statistics.
The stronger performance of MS 095 on question three might be puzzling, but it is simply evidence of students knowing what they have recently been taught: the shape of functions was covered in MS 095 in the section of the text discussing the power of a term.
Twenty-eight women and thirty-one men participated in the evaluation. Their averages were identical, both groups attaining a 53% average. There were no gender differences in performance.
Only four majors had enough students to take a statistical look at their averages.
| Computer Info Sys | 26 | 0.53 | 0.22 |
The Accounting students appear to be the strongest mathematically, bearing in mind that the underlying problems in the study design prevent meaningful tests of statistical significance. Complicating any analysis is that majors are not evenly distributed between the courses; the Liberal Arts majors, for example, are almost exclusively in MS 095.
The state counts do not add to 59 because of non-FSM citizens in the courses; the non-FSM citizens were too few to constitute a statistically meaningful sample.
While the differences per se are not statistically significant, there may be some shades of meaning in the rank order of the states. All students faced the same questions, so all should have been equally affected by the flaws. There is a differential distribution of students between the two courses involved: MS 095 has no Yapese students and only a single Kosraean student. The between-course difference, however, was smaller than the range in the state differences.
Studies in the 1990s showed that there were statistically significant differences in student performance by state and gender. Cross-tabulation studies often revealed that differences were even greater when gender was taken into account.
The data above is reported only for those samples consisting of five or more students; ns denotes a sample too small to be statistically meaningful. Note that the manner in which these averages are calculated (taking individual student averages horizontally first) yields results that are not consistent with the earlier calculations (where averages were run vertically).
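The averaging-order inconsistency noted here is a general property: with unequal group sizes, the mean of per-group means differs from the pooled mean of all scores. A small illustration with made-up scores:

```python
# Two groups of unequal size (scores are made up for illustration).
group_a = [0.8, 0.6, 0.7]   # 3 students
group_b = [0.3, 0.4]        # 2 students

# "Horizontal first": average each group, then average the group means.
# Each group counts equally, regardless of its size.
mean_of_means = (sum(group_a) / len(group_a)
                 + sum(group_b) / len(group_b)) / 2

# "Vertical" / pooled: average all scores together,
# so larger groups carry proportionally more weight.
pooled = sum(group_a + group_b) / (len(group_a) + len(group_b))

print(round(mean_of_means, 3), round(pooled, 3))  # 0.525 0.56
```

The two figures agree only when every group has the same number of students, which is why the cross-tabulation averages do not reconcile with the earlier per-question calculations.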
The state-gender cross tabulation indicates that differences may exist and suggests that future studies continue to look at state-gender cross tabulations. The cross-tabulation also suggests that a group identified as needing assistance in earlier studies, males from Chuuk, continues to need assistance in mathematics. Compared to the overall average of 53%, the Chuuk male average would likely achieve statistical significance in a properly done study.
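The cross tabulation described here can be sketched as grouping individual student averages by (state, gender) and reporting only cells above a minimum size. The records below are invented placeholders, not the actual data, and the cell-size cutoff is lowered for the toy example (the study used five):

```python
from collections import defaultdict

# Sketch of a state-by-gender cross tabulation of student averages.
# All records are hypothetical; state names are from the text.
records = [
    ("Chuuk", "M", 0.40), ("Chuuk", "M", 0.44), ("Chuuk", "F", 0.55),
    ("Pohnpei", "F", 0.58), ("Pohnpei", "M", 0.52), ("Yap", "M", 0.60),
]

cells = defaultdict(list)
for state, gender, avg in records:
    cells[(state, gender)].append(avg)

MIN_N = 2  # toy cutoff; the actual study reported cells of 5 or more
for (state, gender), scores in sorted(cells.items()):
    if len(scores) >= MIN_N:
        mean = sum(scores) / len(scores)
        print(f"{state} {gender}: {mean:.2f} (n={len(scores)})")
```

Suppressing small cells, as the `MIN_N` cutoff does, is what produces the "ns" entries in the table above.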
Overall, performance on basic calculate-and-solve problems appears acceptable. More complex analysis of graphical information appears weaker. The 38 students who have completed MS 100 College Algebra and are now in MS 150 Statistics have a 56% average. Even given the acknowledged problems in the study, this percentage is surprisingly low for such a mathematically mature group of students. This may not, however, be unusual in studies of this type at other schools.
The next steps will include having the division review the instrument for changes for next year.