Ethics-related Policies and Procedures - Standard 2.B Related
ACCREDITATION HANDBOOK
1999 Edition
COMMISSION ON COLLEGES AND UNIVERSITIES
8060 165th Avenue NE, Suite 100
Redmond, WA 98052-3935
Phone: 425/376-0596
www.nwccu.org
Standard Two - Educational Program And Its Effectiveness
Standard 2.B - Educational Program Planning and Assessment
Educational program planning is based
on regular and continuous assessment of programs in light of the needs
of the disciplines, the fields or occupations for which programs prepare
students, and other constituencies of the institution.
2.B.1 The institution's processes
for assessing its educational programs are clearly defined, encompass
all of its offerings, are conducted on a regular basis, and are integrated
into the overall planning and evaluation plan. These processes are consistent
with the institution's assessment plan as required by Policy 2.2 - Educational
Assessment, pages 36-39. While key constituents are involved in the process,
the faculty have a central role in planning and evaluating the educational
programs.
2.B.2 The institution identifies
and publishes the expected learning outcomes for each of its degree and
certificate programs. Through regular and systematic assessment, it demonstrates
that students who complete their programs, no matter where or how they
are offered, have achieved these outcomes.
2.B.3 The institution provides evidence
that its assessment activities lead to the improvement of teaching and
learning.
2.2 Policy on Educational Assessment
The Commission on Colleges and Universities
expects each institution and program to adopt an assessment plan responsive
to its mission and its needs. In so doing, the Commission stresses the necessity
of a continuing cycle of academic planning, the carrying out of those
plans, the assessment of the outcomes, and the use of those assessment
results to inform subsequent planning.
As noted in Standard Two, implicit in
the mission statement of every postsecondary institution is the education
of students. Consequently, each institution has an obligation to plan
carefully its courses of instruction to respond to student needs, to
evaluate the effectiveness of that educational program in terms of the
change it brings about in students, and to make improvements in the program
dictated by the evaluative process. Assessment of educational quality
has always been at the heart of the accreditation process. In earlier
times, this assessment tended to focus more upon process measures and
structural features; hence, there was considerable emphasis placed upon
resources available to enhance students' educational experiences such
as the range and variety of graduate degrees held by members of the faculty,
the number of books in the library, the quality of specialized laboratory
equipment, and the like. More recently, while still stressing the need
to assess the quantity and quality of the whole educational experience,
the communities of interest served by the accreditation enterprise have
come to appreciate the validity and usefulness of output evaluations
and assessments as well as input measures.
Nearly every postsecondary institution
accredited by the Commission on Colleges and Universities engages in
some type of outcomes assessment. Some are more formalized than others;
some more quantified; some less so; some well-developed and long-utilized,
and some of more recent origin and implementation. The intent of Commission
policy is to stress outcomes assessment as an essential part of the ongoing
institutional self-study and accreditation processes, to underline the
necessity for each institution to formulate a plan which provides for
a series of outcomes measures that are internally consistent and in accord
with its mission and structure, and, finally, to provide some examples
of a variety of successful plans for assessing educational outcomes.
Central to the outcomes analyses or assessments
are judgments about the effects of the educational program upon students.
These judgments can be made in a variety of ways and can be based upon
a variety of data sources. The more data sources that contribute to the
overall judgment, the more reliable that judgment would seem to be. There
follows a list of several outcomes measures which, when used in appropriate
combinations and informed by the institutional mission, could yield an
efficacious program of outcomes assessment. This list is intended to
be illustrative and exemplary as opposed to prescriptive and exhaustive.
a. Student Information.
From what sources does the institution acquire its students? What percentage
directly from high school? Community college transfers? Transfers from
other institutions? What blend of gender, age group, and ethnicity has
the institution attracted over time? Retained over time? Graduated over
time? What is the mean measured aptitude, over time, of entering students?
What are the local grade distribution trends? What changes have appeared
over time?
b. Mid-Program Assessments.
If the institution has some kind of required writing course or an emphasis
on writing across the curriculum, what evidence is there that students
are better writers after having been exposed to the course or curriculum?
How are these judgments rendered? If student writing improves, do students
appear to retain this newly acquired proficiency? If so, why, and if
not, why not? What changes are planned as a result of the assessment
exercise?
A required course, program, or sequence in mathematics can be assessed in a
similar fashion. What evidence is there that the skills improved or declined
as a result of the program? How are these judgments rendered? Does the
improvement appear permanent or transitory? How has the program been
changed as a result of the assessment program?
A required course, program, or sequence in any subject matter can be addressed
in a similar fashion, as can nearly any part of the program in general
education or the program as a whole.
c. End of Program Assessment.
What percentage of those students who enter an institution graduate? Is the
percentage increasing or decreasing? Why? What is the mean number of
years students take to graduate? Is that mean increasing or decreasing?
Why? What are the criteria for these judgments? What is the several-year
retention pattern from one class to the next, such as freshman to sophomore?
If patterns reflect significant losses between one level and another,
what are the reasons? Similar questions may be asked by gender and/or
ethnic background. If the institution or program requires a capstone
experience at the end of the curriculum, are present students performing
better or worse than their predecessors? What are the reasons? What are
the bases for the judgments? (e.g., "The cumulative judgment of the
faculty is that the quality of the senior theses in art has improved
during the past five years. This judgment is based upon the following
evidence . . ." or "The Psychology Department requires the
advanced test on the Graduate Record Examination of all graduates. These
scores have declined by an average of 2% each year for the past five
years. The faculty is of the opinion that the reasons for this decline
are . . . .")
d. Program Review and Specialized Accreditation.
Some institutions require periodic program review of each academic program,
either through an institutionally approved internal process and/or through
seeking and achieving specialized accreditation, or by utilizing external
experts. Any or all of these activities can provide a wealth of outcomes
assessment data, particularly if the methodology remains somewhat standardized
over time.
e. Alumni Satisfaction and Loyalty.
A number of institutions engage in a variety of alumni surveys which elicit,
over time, alumni judgments about the efficacy of their educational
experience in a program or at an institution. Use of such a mechanism
can assist an institution in understanding whether alumni satisfaction
with various aspects of the educational program, particularly those facets
which the institution stresses, appears to be growing or diminishing
over time. If satisfaction is increasing, why? If decreasing, why? What
are the bases for the judgments? What curricular implications do these
findings have?
f. Dropouts/Non-completers.
What methods has the institution utilized to determine the reasons why students
drop out or otherwise do not complete a program once they have enrolled
in it? What is the attrition rate over the past five years? Is it increasing
or decreasing? What are the reasons? What programs or efforts does the
institution undertake to enhance student retention? Which tactics have proved
to be efficacious?
g. Employment and/or Employer Satisfaction Measures.
One relatively straightforward outcomes measure used by some institutions concerns
the number and/or percentage of former students who have sought and
found employment. Are they happy with what they have found? Do they think
the program prepared them well for their chosen occupations? If trained
in a particular area, teacher education, for example, have they found
a teaching position?
Other institutions have found qualitative comments from frequent employers to
be particularly helpful in assessing educational outcomes. Do the employers
regularly recruit program graduates? Why or why not? How well do program
graduates perform in comparison with graduates from other similar programs?
Are there areas of the curriculum in which program graduates are particularly
well prepared? Which areas? Why is preparation judged to be particularly
good? Where are the weaknesses? Why? What is being done to provide remedial
activity?
Adopted 1992