1. TEACHING ASSISTANT EVALUATION
2. TEACHING ASSISTANT EVALUATION
3. Course and Instructor Evaluation form
4. Protection of Privacy Collection Notice
5. Teaching Assistant Evaluation form
6. Protection of Privacy Collection Notice

MEMO
S.10-162

Senate Committee on University Teaching and Learning (SCUTL)

ATTENTION: Senate Committee on Agenda and Rules
FROM: Stephen Spector, Chair, Senate Committee on University Teaching and Learning
RE: Student and Instructor Course Evaluations; Resubmission of SCUTL report and request for action
DATE: November 17, 2010

Dear Members of the Senate Committee on Agenda and Rules:

The issue of course and instructor [C&I] evaluations at SFU has been longstanding. In early 2009, SCUTL completed a report, "Evaluating How We Evaluate," and submitted this to SCAR in March 2009 with a request for advice on how to proceed. SCAR's response was that sending the report to Senate would be premature. Given the timing of the request, it was suggested that the recommendations in the report could be incorporated in the Task Force on Teaching and Learning, which had yet to report to Senate.

The Task Force on Teaching and Learning (TFTL) incorporated our recommendations at a conceptual level in its final recommendations. These were accepted by Dr. Jon Driver, VPA, earlier this year. However, activity on the issue of C&I evaluations has yet to commence. Dr. Driver attended the September 2010 SCUTL meeting, discussed this issue, and advised us to forward the issue of C&I evaluations to Senate for review and approval. Dr. Driver expressed a willingness to support action to improve the process and the instruments used for C&I evaluations at SFU, pending Senate approval and direction.

As noted in the TFTL's information gathering in 2008/9, the C&I evaluation was an important issue to many respondents. Departments have also been requesting changes to the generic form, with the addition of specific questions, different options depending on the type of course, and options in its administration (e.g. online). Lacking institutional directions or alternatives, some departments have created their own evaluations. Furthermore, at the institutional level, both the 2010-13 academic plan and the TFTL recommendations emphasize diversifying students' learning experiences. This, however, would require a range of options in C&I evaluations which do not presently exist.

Therefore, we respectfully request that Senate review our attached report and approve action on the development of new course and instructor evaluations at SFU. Senate approval would then allow the VPA, in conjunction with SCUTL, to establish a course evaluation project which would include input by a broad range of stakeholders at SFU.

Thank you for your attention to this matter. We look forward to your response.

[Scanned bubble-sheet form: TEACHING ASSISTANT EVALUATION. Legible form instructions: "Please see reverse side. Fill in the circles completely. Erase changes completely. Do not use ink or felt pens."]
EVALUATING HOW WE EVALUATE

Examining SFU's course and instructor evaluation system

prepared by members of the Senate Committee on University Teaching and Learning

Summer 2008

Executive Summary
Introduction
UNDERSTANDING WHAT WE DO AND HOW WE CAN DO IT BETTER
I - Practice
STUDENT EVALUATION OF COURSES AND INSTRUCTORS AT SFU
A MIX OF FORMS
DISTRIBUTION METHODS
USES OF DATA COLLECTED
IS THIS WORKING?
II - Perspectives
AN HISTORICAL PERSPECTIVE
History of evaluations at Simon Fraser University
Timeline of activities at SCUTL
STAKEHOLDER PERSPECTIVES
Students
Instructors
LITERATURE FROM THE FIELD
III - Recommendations
A VARIETY OF OPTIONS AND RECOMMENDATIONS
Summary of recommendations
DESIGNING A SOLUTION THAT WORKS FOR SFU
Recommendations
CREATING BEST-PRACTICES GUIDELINES FOR CONDUCTING EVALUATIONS
INFORMING OUR UTILIZATION OF EVALUATIONS
SUPPORTING TEACHING AND LEARNING
IV - Sources
REFERENCES AND WORKS CITED
V - Appendices
Appendix A SFSS Course Evaluation - c. 1974 (SFSS Archives)
Appendix B Sample Anti-Calendar (unknown source, SFSS Archives)
Appendix C SFU Course and Instructor Evaluation - Current
Appendix D SFU Teaching Assistant Evaluation - Current
Appendix E SFU Forms - Statements of Use

Executive Summary
The Senate Committee on University Teaching and Learning (SCUTL) has been engaged
in a process
of reviewing the university's current practices and tools related to student
evaluations
of courses and instructors. This process has been wide-ranging and pursued at great
length through a variety
of avenues.
SCUTL has reviewed the history of evaluations at SFU and the various incarnations that
the instruments and methods take across the university. The committee has engaged in wide-
ranging consultation with experts in the field
of evaluation and higher education, and has
engaged students, faculty, staff, and administrators in their discussions. Overall, the committee
recognizes the potential for valuable information, critiques, and suggestions to be collected
through student evaluations
of courses and instructors at SFU, but feels that the current
instruments and methods
of conducting evaluations do not encourage a full utilization of this
potential.
As such,
SCUTL has prepared and presents this report on student evaluations of courses
and instructors at
SFU. The committee recommends that evaluation methods and implements
should be renewed, allowing the university community to make the best use
of the information
that can potentially be collected. While the committee recognizes the constraints that current
fiscal realities place on the operations
of the university, it also feels that reviewing and renewing
evaluation practices and procedures could be a strategic investment in assuring the quality
of
education at SFU, by engaging students, staff, and faculty in assessing our educational
endeavours and thereby identifying points
of strength and areas of potential improvement.
Broadly, the
Senate Committee on University Teaching and Learning recommends the
following, which are discussed in-depth later in the report, along with subsidiary
recommendations:
• SFU should develop or obtain new course and instructor evaluation forms that can be offered
to the university community.
• SFU should develop a best-practices guide for conducting student evaluations of courses and
teaching.
• SFU should develop a best-practices guide for using the information collected through student
evaluations for administrative and operational purposes.
• SFU should develop and ensure support for responding to student evaluations of courses and
instructors.

Introduction
UNDERSTANDING WHAT WE DO AND HOW WE CAN DO IT BETTER
As part of its terms of reference, the Senate Committee on University Teaching and Learning
(SCUTL) is charged with " ... provid[ing] advice and guidance on the development and upgrading
of teaching evaluation instruments in use in the University" (SFU 2007).
SCUTL
has been engaged in reviewing the evaluation instruments in use at SFU for some
time. A history
of activities at SCUTL is presented in the second part of this report, but it is
important to note the significant amount
of effort that has been placed into reviewing current
evaluation instruments and practices. The committee feels strongly that there is a potential for
valuable and useful information to be collected through student evaluations
of courses and
instructors at
SFU but that current forms and practices are hindering these well-intentioned
efforts.
SCUTL feels that evaluations can be useful and that ours can be done better.
This self-examination is particularly pertinent in light
of changing delivery
systems/course methodology at
SFU. For example, we need to be able to measure the impacts of
current changes to class sizes, removal of tutorials from certain courses, etc. in order reliably to
assess the consequences of these changes.
This report will discuss the current practices
of student evaluation of courses and
instructors at
SFU (Part I), will discuss a variety of perspectives, the history of evaluations at the
university, and a brief review
of leading academic literature on the subject (Part II), and will
present a number
of recommendations with detailed suggestions on how current practices can be
improved, for the benefit
of the entire university community (Part III).
The committee realizes that this subject can potentially be a contentious matter, but
stresses that with proper consultation and engagement
of all university stakeholder groups, the
value
in
student evaluation of courses and instructors can be recognized and utilized to ensure
that the university
is achieving its goals of engaging all of its communities in providing the best
education that it can possibly provide. The committee will remain seized
of this matter and is
willing to provide resources and assistance in any actions that arise out
of this report.
The committee wishes to acknowledge the efforts
of a number of people who have
provided invaluable assistance. Acknowledgements are given to Chris Groeneboer, Amrit
Mundy, Maria Davis, Gary Poole, Ted Kirkpatrick, and others who have provided advice or
assistance during this process. A special thanks goes to the
2007-2008 members of SCUTL:
Paul
Neufeld (Faculty of Education and Chair), Janet McCracken (Faculty of Applied Sciences),
Nicky Didicher (Faculty
of Arts and Social Sciences), Stephen Spector (Faculty of Business
Administration), Timothy Beischlag (Faculty
of Health Sciences), Chris Kennedy (Faculty of
Science), Kevin Harding (undergraduate student), Joe Qranful (graduate student), David
Kaufman (Director, Learning
and Instructional Development), Elaine Fairey (Director, Student
Learning Commons), and Nancy Johnston (Senior Director, Student Learning and Retention).

I - Practice
STUDENT EVALUATION OF COURSES AND INSTRUCTORS AT SFU
At the end of each semester, most SFU students are asked to fill in one or two bubble
sheets evaluating the instructors who have taught or interacted with them. The forms are
relatively simple; they collect simple demographic information with regard to the students, ask
the students why they took the course, and then they delve into a complex matrix
of evaluative
criteria on the instructor and the course.
In order to understand better how departments and academic units across the university
are using evaluations, a survey was developed and administered to academic units who have a
staff person on the
SFU Departmental Assistant contact list. Of the fifty-nine units on this list,
seven were eliminated because they are units or sub-units that
do not directly conduct
evaluations, such as a
Special Arrangements office or a Surrey campus office, leaving a sample
size
of fifty-two units to which the survey was administered. All units but one responded,
providing a very high (98.1%) response rate (Groeneboer 2008, 4).
Of the fifty-one responses, four units did not have undergraduate sections, and graduate
programs do not appear to conduct evaluations as often as undergraduate programs. Removing
these four units from the sample, all
of the remaining forty-seven units reported that they
evaluated "all courses each
term" (Groeneboer 2008, 8). Drawing from this, it is reasonable to
assume that student evaluations
of courses and instructors are a nearly universal practice at SFU.
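
As a quick arithmetic aside (our own check, not a figure from the canvassing report), the counts above are internally consistent:

\[
59 - 7 = 52, \qquad \frac{52 - 1}{52} = \frac{51}{52} \approx 0.981 \approx 98.1\%, \qquad 51 - 4 = 47.
\]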
A MIX OF FORMS
However, it is not safe to assume the same level of universality for the forms and
instruments used across the university. As Figure 1 below indicates, while most (66%) units use
the standard blue and green evaluation bubble forms
so familiar to most of the SFU community,
some (23% and 11%) use either a combination of the standard forms and departmental forms or
departmental forms only, respectively (Groeneboer 2008).

[FIGURE 1 - TYPES OF FORMS (n=47)]
The varied use of forms is a preliminary indicator of some of the concerns that have
arisen with regard to the current
SFU evaluation forms. While two-thirds of departments do use
the standard green and blue forms, others have supplemented these forms with their own, or
entirely replaced them,
in
order to capture more useful data. According to the canvassing report
prepared by
Chris Groeneboer, some departments use their own forms "because they wanted
more space for comments[,]" or because
"a couple of extra questions are added." This raises
some questions as to the efficacy
of current instruments.
DISTRIBUTION METHODS
The ways in which the evaluations themselves are conducted are also not universal across
the university. As illustrated below in Figure 2, while a majority of units (55%) use the
"standard procedure" for conducting evaluations, the rest use a variety of methods.

[FIGURE 2 - METHODS OF DISTRIBUTION (n=47)]
The
"standard procedure" is one that most members of the community are familiar with.
Evaluation forms are handed out, a brief explanation
is given, and the instructor leaves the room.
A student volunteer is asked to collect the forms and return them to the department at the end of
class.
Some departments use a slight variation on this standard procedure
as departmental staff
or teaching assistants conduct the evaluations instead
of student volunteers. A significant
proportion
of departments said that they had set procedures for conducting evaluations but did
not specify the exact methodology.
One department reported using email surveys and one
department reported using online surveys.
In
addition to this is the course evaluation employed
by the Centre for Online and Distance Education (CODE). CODE administratively supervises
distance education courses and uses an online form for evaluations primarily, while giving
students the option to request a paper form for evaluations
if they so desire (Groeneboer 2008).
Again, this variety of distribution methods foreshadows concerns with current evaluation
implementation-there does not appear to be a
"best practices" model employed or
communicated, raising a number
of additional concerns with current practices.
USES OF DATA COLLECTED
How units actually use the data collected through evaluations is likely to be just as varied
as how the evaluations are conducted. While the survey administered did not directly ask units
how they used the data, comments and open-ended responses received can be used
to gauge this
to a certain degree.
It
is generally understood that units use instructor, course, and TA evaluations to some
degree with regard to human resources requirements. Generally, instructor evaluations are used
by departmental tenure and promotion committees (TPCs), teaching appointment review
committees
(TARCs), and appointment committees. To what extent different departments use
the
data collected is largely unknown; however, it is generally understood that evaluations are
"only a
part" of the human resources processes, which may also include discussion of teaching
portfolios and other materials. Teaching assistants also have the results used during HR
procedures such
as hiring, with such use governed by the collective agreement between the
TSSU and SFU.
Open-ended responses from the various departments also highlighted other specific
usage-related information that is valuable. A number
of units reported that the open-ended
comments
on evaluation forms represent the most valuable data collected. Some departments
place a summary
of course evaluation data into personnel files. Some departments asserted that
they "really follow
up" if there are negative comments for instructor or TA (Groeneboer 2008).
This seems to imply a systemic assumption that no actions are taken to improve teaching or
course quality in response to negative student evaluations.
Again, the varied usage
of the data collected, at least as far as can be inferred from
comments, shows the beginnings
of a number of concerns that were further discussed during
SCUTL's review of evaluations. There is no best-practices guide for usage of data, nor does
there appear to be an overarching university policy that directly governs the usage.
TPCs and
other such bodies do not currently have guidance in interpreting or following up on the
information they receive via student evaluations.
IS THIS WORKING?
This brings us to the central question that SCUTL has been pondering for some time: are
current methods
of student evaluations of courses and instructors working?
As hinted at above, there are a number
of concerns with the current practices around
student evaluation
of courses and instructors. While there does seem to be a widespread
agreement on the value
of undergraduate evaluation of courses and instructors, given that 100%
of respondents indicated that undergraduate programs were being evaluated, the same cannot be
said
of graduate programs. Due to small class sizes and concerns over confidentiality, not all
graduate sections are evaluated. This raises questions as to whether or not graduate courses
should be evaluated, and
if so, how.
Additionally, concerns emerge over the current evaluation practices. While two-thirds
of
units use the standard green and blue SFU evaluation forms, a large number either supplement
these with their own, additional forms or use other instruments entirely. This would seem to
indicate a disconnect between what information departments
are looking for in evaluations and
what they are currently receiving.
Similarly, a majority of units reported following what has
become standard procedure in conducting evaluations, but
just less than half report following a
variation
of this. There is no best-practices guide on conducting evaluations aside from requiring
that the person being evaluated leaves the room. This opens the current practice up to questions
of efficacy and fairness as different units use different methods, all of which can potentially have
an impact on the results.

The forms themselves are areas of significant concern. The questions asked are often
irrelevant
to actual teaching and learning in the courses evaluated. Some courses use teaching
methods (such
as team teaching) different from that which is assumed in the instruments,
creating problems for evaluation. Additionally, faculty have serious (and well grounded)
concerns over the amount
of additional data that can be collected during evaluations, with
concerns raised over additional sheets
of complaints or concerns being attached to evaluation
forms instead
of being pursued through formal avenues in departments and units. Many
departments have expressed an opinion that the comments section
of the form is the most
valuable, and have expressed a desire
to see that portion preserved. Many of the individuals
involved in discussions at
SCUTL expressed concerns over the questions asked other than the
open-ended portions, with regard both to their design and their wording.
No information is
available as to the author(s)
of the questions on these forms or the validity of the questions, and
the forms have not been updated in at least twenty-nine years.
One of the largest areas of concern that emerges is in regard to how the data are used.
Both
of the major constituencies involved in evaluations (students and instructors) seem to have
the most concerns here. While the perspectives
of students and faculty will be explored later in
this report, the concerns over utilization can be explored here. Given that there do not appear
to
be university-wide best practices for using the data, or even common ways of using the data
collected through evaluations, there is a good deal
of confusion as to how the data are actually
used. Faculty are generally aware that the data actually are used, but they are not in
agreement as
to what extent data are used. Students are generally surprised to learn that any
weight is ever applied to the consideration
of the data collected, with many assuming that the
evaluations are simply exercises in futility. Departments, faculty, and students all expressed a
desire for useful data from evaluations that could be used towards improving teaching and
learning at
SFU.
Overall,
the members of SCUTL believe that evaluations can indeed be valuable.
However, members also agree that improvements must be made.

II - Perspectives
AN HISTORICAL PERSPECTIVE
In order to understand the rationale, recommendations, and substance of this report better,
a variety
of perspectives must be considered, including an historical perspective on evaluations at
the university and the activities at
SCUTL that give rise to this report.
History of evaluations at Simon Fraser University
Formal and official evaluations did not exist in any recognizable form when SFU opened
its doors to its charter students in 1965. Indeed, university-sponsored evaluations did not start
until the
1970s, and it was not until much later that university policies were established that
required nearly-universal student evaluation
of instructors and courses and universal survey
forms (Johnston
2005, 143).
Prior to this policy change, students organized unofficial "anti-calendars," which were
published through the
Simon Fraser Student Society (SFSS). Anti-calendars were published
compilations
of results of surveys and included statistical summaries of student responses to a
variety
of questions-with some questions very similar to those on the current course evaluation
forms. Anti-calendars also included biographies
of instructors, various information about the
instructor's teaching history at the institution, and recommendations from sample students
in
regard to the course in question.
An SFSS student course questionnaire from 1974 (a reproduction of which is included as
Appendix A) asked students these questions, in addition
to others:
• "What would you tell another student ifhe asked whether or not he should take
this course?"
• "How well was the instructor prepared for his lectures?"
• "How much freedom of choice
in
written assignments?"
• "Was
the lecturer successful in stimulating your interest in course material?"
• "What was the tutor's attitude towards the subject of the course?"
• "Any suggestions for the improvement of this course?"
A sampling of other SFU and SFSS course evaluation questionnaires, including those
administered by departmental student unions, indicates a wide variety
of questions asked of
students when evaluating, including whether or not the course as taught met the student's
expectations after reading the course outline (Simon Fraser
Student Society 1974).
When published, anti-calendars included statistical breakdowns
of the responses to the
questions posed
in
the questionnaires and open-ended commentaries from students. An example
of an anti-calendar, from an unknown source, is reproduced in Appendix B.
It
should be noted
that at
SFU such questionnaires appear to have been solely conducted by the SFSS or its various
departmental student unions.
Other universities (such as the University of Ottawa) conducted
joint course and instructor evaluations, with the student society and the university sharing efforts
and resources.

However, solely student-led evaluations do not appear to have lasted much more than
fifteen years at
Simon Fraser University. As Hugh Johnston notes in his SFU history book,
Radical Campus,
"SFU faculty began conducting student evaluations themselves, partly in self-
defence and partly because they were, sometimes reluctantly, persuaded that they had a
value"
(143).
Indeed, in about 1979, the university began to change its policies and procedures around
evaluations.
In one report to the
VP
Academic, authors M. Gates and P.E. Kennedy made a
number
of recommendations with regard to the "principles which should govern the evaluation
of teaching and the procedures which could be used" (1979, 1).
The Gates and Kennedy report made a number of suggestions, specifically that course
evaluations should be tools for career progress advancement. Notable were the suggestions to
limit severely the amount
of input that students had in the evaluation processes, establishing
university-wide
"general and flexible guidelines" for departments and units to follow in
evaluation, eliminating numerical ratings, and changing the names/titles
of course evaluations
implements from
"course evaluations" to "students' opinions." (Gates and Kennedy 1979).
Particularly memorable is the recommendation that student input on course and instructor
evaluations be limited to four questions:
i) What do you consider to be the weakest features of this course?
ii) What do you consider to be the strongest features of this course?
iii) What do you consider to be the weakest features
of this instructor as a
teacher?
iv) What do you consider to be the strongest features
of this instructor as a
teacher?
(Gates and Kennedy 1979, 4)
Interestingly, these proposed questions appear nearly verbatim (the four questions have simply
been condensed into two) on the current form
of SFU course and teaching evaluations (See
Appendix C). This would suggest that while the mechanical form of course evaluations may
have changed between 1979 and now (the Gates and Kennedy report makes reference
to carbon
copies
of comments) the questions posed and the kinds of data collected may not have been
reviewed or updated for as many as twenty-nine years.
In the meantime, the student-led evaluations at
SFU seem to have disappeared. Many
reasons are cited, notable amongst them the costs
of legal review of draft anti-calendars to avoid
charges
of slander and libel. Very few, if any, departmental student unions still produce anti-
calendars. The only
DSU in recent memory to have contemplated publishing one was the
Biology Student
Union. The Sociology/Anthropology Student Union was puzzled as to what an
anti-calendar was, upon discovering that their constitution mandated an anti-calendar standing
committee. Alternatives to anti-calendars have appeared to fill this void, notably the
introduction
of ratemyprofessors.com.

SFU departments and units still overwhelmingly conduct student evaluations of courses
and instructors. Students do not display a large amount
of confidence in the procedures, and do
not always complete them or give them a large amount
of attention. A considerable amount of
discussion at SCUTL has centred around perceived deficiencies in current course evaluations,
and proposed alternatives.
Timeline of activities at SCUTL
In 2005, SCUTL decided to act on a request from the Simon Fraser University Faculty
Association to examine the issue
of student evaluations, including perceived deficiencies in the
current system(s) and possibilities for updating them.
Over a period of two years, a sub-
committee with help from an
LIDC Research Assistant (Amrit Mundy) devoted time and energy
to investigating many issues connected to student evaluations. Their activities included
• a literature review to determine current thinking as to validity and
efficacy
of student evaluations, and best practices in the field,
• a number of limited, informal surveys of students, faculty, and dept.
chairs
to determine their concerns with and hopes for student evaluation
forms,
• interviews with administrators at Canadian academic institutions which
had recently been through the process
of updating their student
evaluation forms, and
• examination of a number of different forms commercially available but
designed at reputable academic institutions.
In the summer of 2007, the SCUTL sub-committee prepared to pilot one of the two
commercially-available form sets which we felt best matched both the criteria which our faculty
and students had identified as important and pedagogical concerns which our literature review
had indicated to be important. However, both
of our preferred form sets were American, and one
of the criteria SFU faculty had indicated was important to them was that forms be processed
locally, so we began negotiations to see
if we could pilot one of them and process the results in
Canada.
SCUTL
also determined, both from the literature review and from the interviews with
administrators at other institutions, that in order to make a major change in student evaluations
it
was important that all three of the major stakeholders (faculty, administrators, students) have
significant input into the decision-making processes.
In
early 2008, AVP Academic Bill Krane announced a new task force to review a number
of issues related to teaching and learning at SFU. Members of SCUTL feel that examining
student evaluation
of courses and instructors should be included in the work undertaken by the
new task force. This report is intended to be a textual statement
of our beliefs in regard to course
evaluations.

STAKEHOLDER PERSPECTIVES
Across the university, student evaluations are received with varying degrees of warmth,
interest, or tolerance. Each constituent group
of the university community acknowledges the
value
of student course evaluations-to a degree-and each wants course evaluations to be as
useful as possible. What differs between each group is how the group views course evaluations
and how each group feels that they should be used.
Students
In the heady times of SFU's birth, students conducted their own course evaluations and
published them. They considered this to be a democratic form
of participation in the university's
governance.
Over time, the university itself began conducting course evaluations, and student-
led evaluations have gradually disappeared.
On the whole, students informally surveyed indicated a great deal of support for an ideal
type
of course evaluation
in
which their input was genuinely considered in times of course
revision or review or when faculty and instructors were undergoing performance evaluations.
However, students expressed a great deal
of doubt with regard to current course evaluation
practices. Common responses were
• a feeling that course evaluations do not seem to go anywhere: students
fill them out and then never see them or results again, so it seems to
them pointless to offer comments, and
• a desire to see some form of publicly accessible results of scores from
evaluations.
Further consultations with student leaders such as student senators and members
of the Board of
Directors of the SFSS resulted in a few additional comments:
• course evaluation results should be available to Senate decision-making
committees such as
SCUS and SCUP when reviewing course proposals,
program restructuring, or program proposals, and
• course and instructor evaluations should be available to external
reviewers.
In
2007, the Student Forum of the SFSS voted unanimously to express support for
revising current evaluation forms to make them more useful and suggested the publication
of
some form of the results.
Graduate students have a considerably different experience with evaluations than do
undergraduate students. First, they are often evaluated as teaching assistants in addition to
offering evaluations
of their instructors. Second, with the small class sizes of graduate classes,
the principle
of anonymity is not as guaranteed as it is with large undergraduate lectures. Third,
the relationships between graduate students and their instructors who may also
be supervisors
can be difficult to negotiate. Some graduate students expressed concerns with current practices
in regard to the use
of teaching assistant evaluations. According to one student, some
departments offer large lecture courses with TA support that do not have tutorials, yet they still
use TA evaluation forms geared toward leadership in tutorials (see Appendix D). Concern was
expressed
as to the value of the evaluations received from students who were evaluating a TA
with whom they may have had
no contact, and concern was additionally expressed about the
weight
of these evaluations in employee files.
Overall, students expressed support in principle for evaluations, but expressed a desire to
ensure that they were meaningful and conducted in valuable ways.
Instructors
Given that a request from the Simon Fraser University Faculty Association was one
impetus
to SCUTL to be evaluating how we evaluate, it would appear that faculty have a desire
to revise the current system. They also have a distinct perspective on student evaluations.
It
should be noted, however, that an in-depth survey of faculty was not conducted, and as such, no
claims
of representativeness are made with regard to the concerns discussed herein.
Many concerns raised
by instructors have to do with the usage of the data collected and
with the construction
of the green and blue SFU forms that are predominantly used for course
and instructor evaluation.
Some concerns are as follows:
• questions asked on the green and blue forms seemed to be poorly
constructed, with good and bad poles
of scales being inconsistent-for
example, questions 5 and 6 are centre-weighted while the rest are left-
weighted (see Appendix C)
• the various bodies and persons that receive the results do not necessarily
have a best-practices guide on how to use the data collected
• any public disclosure of data collected (as has been suggested by
students) may be a violation
of privacy
• academic freedom and innovative teaching methods may be
compromised
if results are overly weighted in TPC and other
assessments
• there is a possibility of grade inflation if instructors desire high student
evaluations.
Faculty also have a considerable number
of constructive criticisms:
• evaluations should be processed locally to provide for quick turn-around
times (note: the current forms are processed at UBC)
• evaluations should not be conducted by a private or commercial enterprise
as this raises questions of privacy
• evaluations must be conducted, processed, and stored in Canada due to
concerns around privacy engendered
by the Patriot Act
• evaluation questions should be validated by experts in the field
of psycho-educational measurement.
Additionally, faculty expressed concerns over the impacts
of teaching evaluations on the
practice
of teaching. Concerns were expressed over the interplay between popularity and
effective teaching, and the effects that this has on teaching and learning in classes. Concerns
were expressed that popularity may have a higher influence on outcomes of evaluations than
would effective teaching. During discussions,
SCUTL acknowledged the tensions experienced
by faculty through course evaluations.
LITERATURE FROM THE FIELD
The following text is taken from a brief report prepared by former SCUTL chair Ted
Kirkpatrick. The committee appreciates his work on the report.
The literature on student evaluations
of teaching is vast. The IDEA Center (at Kansas
State University, see http://www.theideacenter.org/category/idea-center/about-us) claims over
2000 articles have been published on the topic. To provide good entry points into this literature,
this report highlights several articles and special issues. Most
of these articles are reviews or
even formal meta-analyses
of prior work. In addition to these review articles,
I
list several other
articles that make specific points
of particular note.
Overviews
IDEA (n.d.) summarizes the literature in
a single page. No citations are provided, but the
conclusion summarizes the consensus
of most experts:
Student ratings can be valuable indicators of teaching effectiveness, and
they can help guide improvement efforts. But they are most useful when
they are a part
of a more comprehensive program which includes
additional evaluation tools and a systematic program for faculty
development.
Two more IDEA reports provide excellent summaries. Cashin (1989) developed a
framework for evaluating college teaching. He expanded the definition
of teaching to include
seven areas:
Subject matter mastery
Curriculum development
Course design
Delivery of instruction
Assessment of instruction
Availability to students
Administrative requirements (book orders, grades, etc. completed and
on time)
Then he listed eight sources
of data for evaluating these areas:
• Self
• Files
• Peers-faculty members knowledgeable about the subject matter
• Colleagues-faculty members not knowledgeable about the subject matter
• Chair/dean-the faculty member's immediate academic supervisor
• Administrators-who do not have a direct supervisory relationship
• Instructional consultants
• Others
He developed a table (Table 1, p. 3) with the seven areas as rows and the eight sources as
columns, suggesting which sources of data might be useful in evaluating which areas. He
concluded that student evaluations are
of most utility in evaluating the latter four areas (delivery,
assessment, availability, and administrative requirements) and
of little to no use in evaluating the
first three areas (subject matter, curriculum development, and course design). Specific examples
are given by IDEA (n.d.), that students cannot judge
"the appropriateness of the instructor's
objectives, the relevance
of assignments or readings, the degree to which subject matter content
was balanced and up-to-date,
or the degree to which grading standards were unduly lax or
severe."
In a later report, Cashin (1996) provided 20 guidelines for successful evaluation of the
complete faculty contribution, including teaching, research, and service components,
as
applicable. He argued,
As one reads the different authors, one is struck by the high degree
of
agreement among them. I would suggest that among those knowledgeable
of the literature and experienced in the field, there is 80 to 90 percent
agreement about the general principles that should guide effective faculty
evaluation. The answers to the important questions are known, although
not necessarily on every campus.
These three articles are short and reflect the contemporary consensus, despite their age.
Validity concerns
A large literature exists on the validity of student evaluations, under what circumstances
they are valid, and what constructs they measure. A Current Issues section
of American
Psychologist (Greenwald, 1997) presented competing viewpoints (d'Apollonia & Abrami, 1997;
Greenwald & Gillmore, 1997; Marsh & Roche, 1997; McKeachie, 1997).
Each
of the four articles provided different answers to four validity questions:
1. Conceptual structure: Are ratings conceptually unidimensional or multidimensional?
2. Convergent validity: How well are ratings measures correlated with other indicators
of effective teaching?
3. Discriminant validity: Are ratings influenced by variables unrelated to effective
teaching?
4. Consequential validity: Are ratings results used in a fashion that is beneficial to the
educational system?
The key point, from
my perspective, is in Greenwald's (1997) originating question:
My interest in student ratings had a sudden onset.
In
1989, I received the
highest student rating evaluations I had ever received for teaching an
undergraduate honors seminar. The sudden interest came, not then, but a
year later, when I received my lowest ever evaluations. The two ratings
were separated by eight deciles according to the university's
norms-about
2.5 standard deviations apart. But these two ratings were for the same
course, taught in the same fashion, with a syllabus that was only slightly
changed. (p. 1182)
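
(A brief numerical aside of ours, not part of Kirkpatrick's embedded report: the quoted "about 2.5 standard deviations" is consistent with the eight-decile gap under a normal approximation, since the 1st and 9th deciles of a normal distribution fall at

\[
z_{0.1} \approx -1.28 \quad \text{and} \quad z_{0.9} \approx +1.28, \qquad z_{0.9} - z_{0.1} \approx 2.56,
\]

roughly 2.5 standard deviations apart.)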
McKeachie (1997) replied,
Had I been consulting with him about the ratings, I would have said
something like this:
"Tony, classes differ. Effective teaching is not
just a matter of finding a
method that works well and using it consistently. Rather, teaching is an
interactive process between the students and the teacher. Good teaching
involves building bridges between what is in your head and what is in the
students' heads. What works for one student
or for one class may not
work for others. Next time, get some ratings early in the term, and if
things are not going well, let's talk about varying your strategies."
I think this puts the validity debate in perspective.
Olivares (2003) offered another widely-cited critique of the validity of student ratings.
His arguments strike
me as attacks on a straw figure. He lists four requirements for student
ratings to be valid objective measures (p. 236). Not surprisingly, he can assemble counter-
examples to every one of these requirements, and therefore student ratings lack psychometric
validity. However, his conclusion acknowledges that student ratings may still have use:
Considering the foregoing analysis what can
be concluded regarding the
utility
of numerical student ratings of teachers?
It
depends on the purpose
of SRTs. Armstrong (1998) suggested, "there is no evidence that the use
of teacher ratings improves learning in the long run" (p. 1223). Nor is
there evidence to show that
SRTs improves teacher quality (Feldman,
1983; Ryan et al., 1980). If, however, SRTs are intended to serve as a
convenient method to evaluate teachers using students' opinions
of their
satisfaction with the course
or teacher, then SRTs can be considered to
have practical utility.
Thus, a lack
of validity does not mean that SRTs are not useful; rather, it
just suggests that SRTs are not measuring what they intended to measure
and therefore inferences regarding teacher effectiveness or student
learning should be constrained.
Much
of the debate concerning validity of student ratings centers around differing
definitions
of "what they are intended to measure." I would argue for McKeachie's (1997) more
pragmatic emphasis, that student ratings are best viewed as subjective responses and can provide
useful information when viewed
in that light. As Mike Theall (2006) wrote in an online
discussion forum,
Ratings are intended to present students' opinions and they are doing just
that .... Pseudo-psychometric issues ... are far less important than emphasis
on appropriate interpretation and use of those data ....

Appropriate use of students' opinions requires that they are embedded in a
comprehensive process for evaluating faculty performance; that they are
considered in light
of other evidence; that context and other factors are
considered
in interpreting the data; and that evaluation is accompanied by
administrative support in as many forms as are necessary
to insure
effective teaching and learning.
I consider that the best conclusion to the debate over validity
of student evaluations.
Course evaluations as predictors of learning
A recent study by Weinberg, Fleisher, and Hashimoto (2007) took a new approach to
assessing the value of course evaluations. As economists, Weinberg et al. took an economic
value viewpoint. Their paper argues that course evaluations are ultimately intended to help the
university fulfil its role
of increasing the nation's human capital. After several pages of dense
economic theory (which I am unqualified
to evaluate), they present a model relating student
evaluations
of teaching in a first course in economics to expected performance in a second
course building on the first. Co-varying for
many other factors (including grades received,
ethnicity, and others), they assess how well students' assessment
of the first course's
effectiveness predicts their grade in the second course. They conclude,
When both current and future course grades are included in the same
regression (column 3), the effect
of current grade clearly dominates, and
the coefficient [for teaching effectiveness as a predictor of] future grade
is
small and insignificant. (p. 15)
This study is the first one of any size that considers teaching evaluations in terms of grades in
later courses.
It
appears well-designed, and its message that teaching evaluations do not measure
learning
of knowledge that will be useful
in
future courses should be taken into consideration.
[end
of embedded report by Ted Kirkpatrick]

III - Recommendations
A VARIETY OF OPTIONS AND RECOMMENDATIONS
The Senate Committee on University Teaching and Learning spent a considerable
amount
of time assessing the various forms of evaluation implements used at institutions across
the province, the country, and in other locations. Careful examination
of the variety of options
available across the spectrum has led the members of SCUTL to develop a number of
recommendations for proceeding.
Summary of recommendations
The recommendations that SCUTL makes can be summarized into changing current
implements, updating current practices, and ensuring that support exists to work with the
information collected through evaluations. Below is a four-point summation of the
recommendations that
SCUTL makes; each point will be expanded on below. There are sub-
recommendations and detailed suggestions that accompany each wider recommendation.
In
broad terms, SCUTL recommends the following:
1. SFU should develop or obtain new course and instructor evaluation
forms that can be offered to the university community.
2. SFU should develop a best-practices guide for conducting student
evaluations
of courses and teaching.
3. SFU should develop a best-practices guide for using the information
collected through student evaluations for administrative and
operational purposes.
4.
SFU should develop and ensure support for responding to student
evaluations
of courses and instructors.
DESIGNING A SOLUTION THAT WORKS FOR SFU
Much discussion at SCUTL centred on perceived deficiencies of the current course and
instructor evaluation forms that
SFU uses (see Appendix C, but recall that different departments
and units supplement these or use other forms) and discussion around how the various forms
could be improved to ensure that adequate information was being collected and that the questions
asked are as fair and as useful as possible. This gives rise to the first set
of recommendations, all
of which deal with improving the implements that we use for student evaluations of courses and
instructors.
Recommendations
1. Develop or obtain new course and instructor evaluation forms that can
be offered to the university community.
The committee felt strongly that a revision
or replacement of the current evaluation forms
must be conducted. This opinion was arrived at through a considerable amount
of discussion of
current implements, usages, and methods of evaluation. SCUTL felt that new forms could either
be 1/ developed locally, 2/ modified from currently existing forms, or 3/ obtained from
commercial suppliers. However, members also felt that the forms should be processed locally,
due both to time constraints and privacy concerns.
1.1.
Options similar to those used at UBC (a common core module
plus unit-specific questions)
or the University of Washington
(different forms for various units/kinds
of courses) should be
considered, balancing the need for university-wide assessment
and unit-specific information.
The committee felt that a "one-size-fits-all" solution would not work at
SFU, given both
the recommendations in the literature (which recommends different methods
of evaluation for
different disciplines or methods
of teaching) and the desires of various units to use solutions that
fit their needs best. Current practice supports this, with a third
of units that conduct evaluations
either supplementing
SFU forms with their own or using their own exclusively. However, the
committee balanced this with university-wide operational needs. The University
of British
Columbia uses a specific method that approaches a best-case compromise in this dilemma, with a
common core set
of questions that are asked of students across the university and a number of
questions that can be specified by the various units. This sort of evaluation instrument would
allow not only for discipline-specific questions, but also allow units to tailor evaluations for
different methods
of delivery such as field schools, wet labs, practicum courses, seminars, etc.
This would likely satisfy the majority
of units at SFU, allowing them to continue asking the
questions that they currently are (and continuing to use any amassed data) while providing an
institution-wide refresh
of evaluation implements.
1.2. Distinction in the
form should be made between evaluation of
the course as a course and the instructor as a teacher.
The committee felt that one
of the problems that currently exists with the SFU evaluation
procedures is a fuzzy boundary between evaluating the instructor as an instructor and the course
as a course: the section called
"general" seems to be where the course evaluation happens, but it
could have a clearer title. The section called
"course grading" may apply either to the course or
the instructor, depending on the circumstances
of delivery. Other instruments make a very clear
distinction between course
and instructor evaluation, with some going so far as to have separate
forms for these purposes. While the committee recognized that courses are often developed by
individual instructors, it also recognized the value in evaluating courses against student
expectations and desires.
SCUTL also discussed the utility of evaluating programs or streams,
and opportunities for this.
Some members believed that a clearer division between evaluation of
the instructor and evaluation of the course could provide an ability for institutional use of the
comments or data collected.
1.3. Consideration should be given to the unique environment
of
graduate courses and instruction when developing new forms
and practices.
Given that graduate courses often have small enrolments, graduate students reported a
concern over the anonymity when responding to course evaluations. The unique
supervisor/student relationship also factored heavily into this discussion.
Some departments are
conducting graduate evaluations
in
unique ways, such as having the graduate chair of the
program interview classes. Whatever course
of action with regard to student evaluations SFU
settles on, consideration should be given to the unique nature of graduate instruction and courses
when developing new implements.
1.4. Create an electronic repository to allow for university-wide
access to forms and related resources.
Whichever way new implements are developed, the committee felt that sharing
information with regard to the instruments and practices was vital to ensure that the maximum
use
of evaluations was being realized. Specifically, an online repository of information relating
to evaluations was suggested, to allow all members
of the university community access to
combined and collected knowledge. Note: this would not be a repository
of student ratings and
comments, but one which details the forms
SFU supports and the best-practices guidelines for
conducting evaluations.
CREATING BEST-PRACTICES GUIDELINES FOR CONDUCTING EVALUATIONS
One of the most pervasive issues that continuously appeared in discussions at SCUTL
was the many methods by which various units conduct evaluations. Some members were
concerned about the variety
of methods, and in discussions, other methods of conducting
evaluations were considered.
2. Develop a best-practices guide for conducting student evaluations
of
courses and teaching.
SCUTL members strongly agreed that a best-practices guide should be developed and
made available
to the university community. Such a best-practices guide should not be
mandatory policy, but set out ideal conditions and procedures through which evaluations should
be conducted. This guide should also outline acceptable levels and means
of public
dissemination
of results. A number of additional recommendations accompany the
recommendation for a best-practices guide.
2.1. Students should be informed
of the uses of the information
collected in a clear and coherent manner.
Currently, students are informed as to the uses
of evaluations through a legalistic-
sounding disclaimer on the back
of the current forms. While the student members of SCUTL
have become somewhat intimately familiar with practices around the use of data, the same
cannot
be said of the student population. While most faculty and TAs know how data are used
during reviews or tenure and promotion committee meetings, most students feel that the
evaluations are simply exercises in futility. It is likely that a clear and coherent explanation
of
the uses of the data collected would go a long way in assuaging the concerns of students.
2.2. Explain and make students aware
of the opportunities that exist
for students to provide feedback outside
of course evaluations.
A large amount
of concern currently exists over the perception that student evaluations
are
an opportunity for disgruntled undergraduates to exact revenge on their instructors.
However, this may very well be because students are not necessarily aware
of the alternative
ways that they can voice complaints about courses and instructors.
SCUTL feels that the student
evaluations are not an appropriate venue for initiating grievances/actions against the instructor,
and, as part
of the best-practices guide, the committee feels that students should be informed as
to the various opportunities that exist for them to pursue these outside
of evaluations (such as
bringing concerns to the department/unit Chair or to a departmental student union), in an attempt
to ensure that constructive responses are collected on evaluations.
2.3. Timing issues should be explored.
Currently, there is no university-wide recommendation on when to conduct evaluations,
aside from the fact that they should be conducted before final examinations.
SCUTL does not
necessarily endorse this stipulation: one advantage
of offering electronic evaluations after the
final examination would be that students could offer comments on the fairness
of the exam and
of the grading of materials which they do not receive back until after the end of classes. In
general, the committee felt that timing issues should be considered when developing a best-
practices guide.
2.4. Explore moving to online evaluation instruments or processing evaluations locally.
While reviewing options for evaluation instruments, the committee felt that consideration should be given to moving to online surveys. Online surveys have a number of advantages over paper surveys: they can be processed almost instantly, are secure, can guarantee confidentiality, and are highly customizable. However, drawbacks were also identified, especially lower response rates (online evaluations through CODE receive a response rate of approximately 20%). To improve response rates, completion of evaluations could be tied to grade release (in a manner similar to library fines), or SFU could offer incentives, for example a lottery for prizes such as iPods or laptops drawn from all completed evaluations.
2.5. Encourage instructors to engage in informal formative evaluations throughout the semester.
During SCUTL discussions, innovative practices already in place at SFU were also identified. Some faculty currently conduct informal evaluations at mid-term, which allows students to provide immediate feedback and see results during the course. Consideration should be given to encouraging instructors who are willing to do this to continue doing so and to share their experiences with others. We also note that mid-term evaluations, accompanied by a narrative account of the changes in methodology or content which the instructor institutes in response, can be useful as part of a teaching dossier in faculty applications for promotion and tenure.
Additionally, the committee discussed the possibility of creating an anonymous, consistent, and university-wide communication mechanism for students to provide feedback outside of course evaluations, with such feedback going to the instructor of the course.
INFORMING OUR UTILIZATION OF EVALUATIONS
Along with developing a best-practices guide on how to conduct evaluations, the committee also recommends developing a best-practices guide on how to use the information collected.
3. Develop a best-practices guide for using the information collected through student evaluations for administrative and operational purposes.
3.1. TPCs and TARCs should be informed about best practices for interpreting data collected from evaluations.
The committee felt that the committees that deal with instructor evaluations should be informed about best practices for interpreting the data collected. Specific suggestions included using criteria-based assessment when reviewing evaluations and examining the data in light of the full teaching portfolio. Other suggestions included a formalized process for allowing faculty to submit responses to course evaluations to review committees, and ensuring that evaluations form only a part of the overall assessment.
3.2. The persons and bodies responsible for hiring sessional instructors and teaching assistants should be informed about best practices for interpreting data from evaluations.
This suggestion follows from those for faculty and instructors: the committee felt that any HR-related use of the information collected through evaluations should be accompanied by an understanding of best practices for interpreting the data.
3.3. Issues around tutorials, labs, and TAs must be considered when interpreting evaluation results.
Some graduate students have reported being evaluated as TAs even when they have no direct contact hours with students. These students perform marking and related duties and attend office hours and lectures, but hold no tutorials, and yet students are asked to evaluate them. There are undoubtedly other, more complicated cases, and these should be kept in mind when interpreting evaluation results.
3.4. Information from course evaluations should be shared with decision-making bodies where appropriate and possible.
Various decision-making bodies approve course or program changes without access to even the meagre data currently collected, which could be very useful. One example is the Senate Committee on Undergraduate Studies (SCUS), which must approve course edits and prerequisite deletions, amongst other changes. While current forms ask students whether prerequisites are necessary, such data are not presented to SCUS for consideration. The committee felt that such data should, where possible, be shared in the interests of informed decision making. This will be of particular significance in assessing current changes to formats for course delivery (changes to class sizes, removal of tutorials, etc.).
SUPPORTING TEACHING AND LEARNING
SCUTL felt that course evaluations have the potential to identify cases where good teaching could be recognized or where there are opportunities for improvement. As such, the committee felt strongly that all opportunities to support teaching and learning should be clarified and promoted to all instructors at SFU. Additionally, the committee felt that proactive engagement with all those who teach at SFU should be considered.
4. Develop and ensure support for instructors in responding to student evaluations.
The committee felt that current programs intended to support teaching and learning need to be continued, and that new programs and initiatives should be explored wherever possible.
4.1 Opportunities for supporting teaching and learning need to be clarified and promoted to instructors.
The committee recognized that a number of opportunities for instructional development currently exist, but felt that they are often not promoted to the fullest extent or clearly presented as opportunities in which instructors can participate.
4.2 Support should be available for all instructors, irrespective of the evaluation results and without consideration of rank.
The committee felt that all instructors should have the opportunity to participate in instructional development opportunities, irrespective of evaluation results and without consideration of their academic rank or status. The committee felt that this would provide all individuals engaged in teaching at SFU with the opportunity to engage in informed self-assessment, share their skills and abilities, and learn varied approaches to teaching across the many disciplines and units at the university.
4.3 Opportunities and support should be provided for one-on-one consultation with peers, peer mentors, peer networks, or available specialists at SFU.
The committee recognized the potential value of peer networks and peer mentoring programs in which faculty work together on sharing skills and suggestions. Additional value was recognized in working with teaching specialists in support departments and with faculty in other departments, and in a potential institutionalization of the peer mentorship program found in some units.
CONCLUSION
SCUTL respectfully submits this report to the Senate of Simon Fraser University, asking Senate to read it and to consider our recommendations. We feel that re-examining current instruments and practices with regard to student evaluations is of particular significance to SFU in light of ongoing changes in course delivery methods and formats and in light of SFU's current Task Force on Teaching and Learning.
IV - Sources
REFERENCES AND WORKS CITED
Cashin, W. E. (1989). Defining and evaluating college teaching (IDEA Paper No. 21). IDEA Center, Kansas State University.
Cashin, W. E. (1996). Developing an effective faculty evaluation system (IDEA Paper No. 33). IDEA Center, Kansas State University.
d'Apollonia, S., & Abrami, P. C. (1997). Navigating student ratings of instruction. American Psychologist, 52(11), 1198-1208.
Gates, M., and P. E. Kennedy. 1979. "Evaluation of Teaching at S.F.U." Unpublished. Accessed from SFSS Archives, folder #703.15 (Course Evaluation Miscellaneous).
Greenwald, A. G. (1997). Validity concerns and usefulness of student ratings of instruction. American Psychologist, 52(11), 1182-1186.
Greenwald, A. G., & Gillmore, G. M. (1997). Grading leniency is a removable contaminant of student ratings. American Psychologist, 52(11), 1209-1217.
Groeneboer, Chris. 2008. "SCUTL Course Evaluation Canvass Final Report." Unpublished. Available from Learning and Instructional Development Centre, Simon Fraser University.
IDEA. (n.d.). Overview of student ratings: Value and limitations. IDEA Center, Kansas State University.
Johnston, Hugh. 2005. Radical Campus: Making Simon Fraser University. Vancouver: Douglas & McIntyre, Ltd.
Marsh, H. W., & Roche, L. A. (1997). Making students' ratings of teacher effectiveness effective. American Psychologist, 52(11), 1187-1197.
McKeachie, W. J. (1997). Student ratings: The validity of use. American Psychologist, 52(11), 1218-1225.
Olivares, O. J. (2003). A conceptual and analytic critique of student ratings of teaching in the USA with implications for teacher effectiveness and student learning. Teaching in Higher Education, 8(2), 233-245.
Simon Fraser Student Society. 1974. "Student Course Questionnaire." Accessed from SFSS Archives, folder #703.15 (Course Evaluation SFSS Anti-Calendars).
Simon Fraser University. 2007. "SCUTL - Senate - Simon Fraser University." http://www.SFU.ca/senate/SenateComms/SCUTL/ Accessed 28 May 2008.
Theall, M. (2006, September 7). Re: Effect of research on SETs [Electronic discussion entry]. Retrieved May 16, 2008, from http://listserv.nd.edu/cgi-bin/wa?A2=ind0609&L=pod&F=&S=&P=3766
Weinberg, B. A., Fleisher, B. M., & Hashimoto, M. (2007). Evaluating methods for evaluating instruction: The case of higher education (Working Paper No. 12844). Cambridge, MA: National Bureau of Economic Research.
V - Appendices
Appendix A
SFSS Course Evaluation - c. 1974

STUDENT COURSE QUESTIONNAIRE
Simon Fraser Student Society
Dept:   Course No:   Prof:   T.A.:   Year:

[Scanned form; the italicized preamble and the machine-readable instructions are largely illegible in this copy. The preamble explains that the questionnaire is part of an S.F.S.S. project to compile an anti-calendar at Simon Fraser, and that students are not asked to sign it. The instructions direct students to use an HB pencil, to choose only one answer per question (the computer can accept only one), and not to write on the answer sheet; space is provided for additional written comments.]

[Scanned questionnaire; the two-column layout is badly garbled in this copy. Each question offers up to five multiple-choice responses. The legible content is summarized below.

SECTION A: THE COURSE AS A WHOLE
Principal reason for taking the course; most enjoyable part of the course (lectures / tutorials / ...); overall organization of the course (very well planned ... far too disorganized); what you would tell another student who asked whether or not to take the course (don't miss it / it's a good course / it's adequate / take it only if you have to / avoid it); whether criteria for grading were fair and clearly stated; standards used in grading (very lenient ... very strict).

SECTION B: READINGS AND WRITTEN WORK
The reading assignments (relevant and stimulating / relevant / adequate / irrelevant); library resources relevant to the course (totally insufficient ... excellent); hours spent reading each week (0 to 1 ... more than 7); time spent on written work (far too little ... far too much); freedom or choice in written assignments; comments on written work (constructive ... no comments at all).

SECTION C: EXAMINATIONS
Whether exams were oriented toward lecture material or reading; clearness of exam questions (vague and ambiguous ... very clear); how well exams tested overall comprehension (not at all ... adequately).

SECTION D: LECTURES
How well the instructor was prepared for lectures (... never prepared); the lecturer's speaking ability (very good ... very poor); the lecturer's ability to explain (very good ... very poor); whether the lecturer was successful in stimulating interest in the course material (very successful ... totally unsuccessful).]
Appendix B
Sample Anti-Calendar (unknown source, SFSS Archives)
BIOL 106: BIOLOGY
Ted Robinson

[Scanned anti-calendar entry; the tabulated assessment grid (Assessment, Lectures & Lecturer, Tutors & Tutorials, Practicals & Demonstrators, Workload, General) is illegible in this copy.]

Opinion 1
BIOL 105 is a prerequisite. Arms & Camp: great textbook (the best I've seen). The lecturer (Ted Robinson) was excellent. If I were giving him a grade I'd give him an "A". Maybe there should be one lecture and one tutorial instead of two lectures/week. This course is the best one I have done all year. The subject was very interesting as well.

Opinion 2
Read widely; it's not strictly necessary, but it's interesting to do so. Keep up with the work and get help with problems as they arise. I enjoyed the casual atmosphere of the pracs. and the helpfulness of the staff. In some of the mini courses there was a discrepancy between the demonstrations mentioned on the tape and what was available in the lab. Lab sessions need more tutors.
BIOL 205: INTRODUCTORY BIOCHEMISTRY
Dr Hiller
[Scanned anti-calendar entry; the tabulated assessment grid is illegible in this copy.]

Opinion
Read from the textbook frequently. Don't fall far behind in Prac. write-ups. BIOL 210 or HSC Biology would be handy in covering respiration and photosynthesis. Lecture notes were good, lectures were O.K., but certainly required clarification from the textbook. Rosemary Davy was an excellent tutor and went to lengths to make sure everyone understood the theory behind the experiments. Hiller's summary of lectures at the end was good. Maybe he could give them a bit slower so that notes could be taken, and avoid standing in front of the overhead projector and casting a shadow on the screen. Another point in favour of the course was that the experiments covered work from the lectures well.

Examination - 60%
2 assignments & seminar - 20%
2 problem tests - 20%
Appendix C
SFU Course and Instructor Evaluation - Current
COURSE AND INSTRUCTOR EVALUATION
PLEASE SEE REVERSE SIDE
DO NOT USE INK OR FELT PENS
BACKGROUND
1. What is your cumulative grade point average? (3.5 or over / 3.0 to 3.49 / 2.5 to 2.99 / 2.0 to 2.49 / below 2.0)
2. Why did you take this course? Choose the single most important reason. (It was compulsory / I am interested in the subject / No alternative course available / It looked like an easy credit / Other reasons)

Please answer the following questions to the best of your ability. The results are considered carefully in decisions regarding course revisions and the promotion of faculty. [Items 3-22 are answered on five-point bubble scales between the anchors shown.]

3. How often did you attend the lectures/seminars? (always ... hardly ever)
4. The course prerequisites were (essential ... not essential)
5. The overall level of difficulty for the course was (too easy ... too difficult)
6. The amount of work required for the course was (too little ... too much)
7. How valuable was the course content? (very ... not very)
8. The course text or supplementary material was (relevant ... irrelevant)
9. I would rate this course as [anchors illegible]

COURSE GRADING
10. The assignments and lecture/seminar material were (well related ... unrelated)
11. The exams and assignments were on the whole (fair ... unfair)
12. The marking scheme was on the whole (fair ... unfair)

INSTRUCTOR AND LECTURES / SEMINARS
13. How informative were the lectures/seminars? (informative ... uninformative)
14. The instructor's organization and preparation were (excellent ... poor)
15. The instructor's ability to communicate material was (excellent ... poor)
16. The instructor's interest in the course content appeared to be (high ... low)
17. The instructor's feedback on my work was (adequate ... inadequate)
18. Questions during class were (encouraged ... discouraged)
19. Was the instructor reasonably accessible for extra help? (available ... never available)
20. Was the instructor responsive to suggestions or complaints? (very ... not at all)
21. Overall, the instructor's attitude towards students was (excellent ... poor)
22. I would rate the instructor's teaching ability as [anchors illegible]
Course:   Semester:   Instructor's Name:

GENERAL COMMENTS
1. What do you consider to be the strongest and weakest features of the instructor as a teacher?
2. What do you consider to be the strongest and weakest features of the course?
3. Any other comments or suggestions?
Appendix D
SFU Teaching Assistant Evaluation - Current
TEACHING ASSISTANT EVALUATION
PLEASE SEE REVERSE SIDE
FILL IN THE CIRCLES - ERASE CHANGES COMPLETELY
USE HB PENCIL ONLY - DO NOT USE INK OR FELT PENS
SAMPLE MARK: Wrong / Wrong / Right
Please answer the following questions to the best of your ability. Your assessment of the T.A.'s teaching abilities will become part of his/her employment record. This information will also provide feedback to help your T.A. refine and improve his/her teaching methods.

How often did you attend your tutorial (or lab)? (always ... hardly ever)
[Scanned form; the remaining question grid is scrambled in this copy. The legible items ask, on five-point bubble scales, whether the T.A. kept his/her scheduled office hours and was punctual in starting tutorials (or labs); whether the T.A. was reasonably accessible for extra help (available ... never available); whether questions during tutorial (or lab) were encouraged (encouraged ... discouraged); whether the T.A.'s marking was fair (fair ... unfair); whether the T.A.'s interest in the course content appeared to be high (high ... low); whether tutorials (or labs) were coordinated with the lectures (coordinated ... not coordinated); and for overall ratings of the T.A.'s teaching ability and of the tutorial (or lab).]

Course:   Semester:   T.A.'s Name:

GENERAL COMMENTS
1. What do you consider to be the strongest and weakest features of the T.A.?
2. Can you offer any suggestions for improving the T.A.'s style of presentation, individual consultation, marking, etc.?

IMPORTANT: Don't forget to specify your T.A.'s name.
Appendix E
SFU Forms - Statements of Use
Course and Instructor Evaluation form
Protection of Privacy Collection Notice
The information on this form is collected under general authority of the University Act (R.S.B.C. 1979, c. 419) and SFU Academic Policies A11.02, A12.01, A12.02 or A12.05. It is related directly to and needed by the University to operate its personnel management and academic programs. The information will be used to evaluate the qualifications and performance of faculty according to their assigned duties and responsibilities; to decide on salary increases, promotion, contract renewal or tenure; and to evaluate an academic program. This evaluation form is completed anonymously; however, please be advised that any handwritten comments you provide on this form will be available to the person being evaluated and university administrators. If you have any questions about the collection and use of this information please contact the administrative staff in the academic department responsible for the course.
Teaching Assistant Evaluation form
Protection of Privacy Collection Notice
The information on this form is collected under general authority of the University Act (R.S.B.C. 1979, c. 419), the SFU/TSSU Collective Agreement (Article 17) and/or SFU Academic Policy A12.09. It is related directly to and needed by the University to operate its personnel management and academic programs. The information will be used to evaluate the qualifications and performance of non-faculty teaching support staff according to their assigned duties and responsibilities; to decide on reappointment; and to evaluate an academic program. This evaluation form is completed anonymously; however, please be advised that any handwritten comments you provide on this form will be available to the person being evaluated and university administrators. If you have any questions about the collection and use of this information please contact the administrative staff in the academic department responsible for the course.