The Course Instructor Opinion Survey (CIOS) is a tool for gathering information about student perceptions of instructor and course effectiveness at Georgia Tech. Unlike mid-semester surveys and focused discussions, CIOS is completed at the end of the semester, and students completing the survey do not receive any direct benefit from providing their feedback. Despite frequent criticism, CIOS can serve as a useful source of information about instructor and course effectiveness – provided care is taken to understand the nature of the data that has been collected.
Click on the links below for more information and guidance about using CIOS to inform your teaching.
For additional support: Contact us via ctlhelp@gatech.edu!
Five Steps for Responding to CIOS Feedback
- Reflect. Before you look at your CIOS scores and comments, think about what you are expecting. What went well in your course? What are some areas for growth/development/change?
- Read and React. Look at your collected scores and student comments, then allow yourself to have an emotional reaction – good or bad – in response to the data.
- Relax. Take a break. Process your emotions and prepare yourself to come back to the data with a fresh, more objective approach.
- Revisit. Return to your CIOS report(s), and process it with a view to understanding your students' perceptions and experiences in your class.
- Respond. Make decisions about what you will and will not change in your course(s) and your teaching, based on the feedback your students have given you. Make some notes for yourself alongside your course materials, so that you can remember what you would like to incorporate and/or change next time.
Interpreting CIOS Data Systematically
The best approach to this sort of feedback is to begin with a systematic analysis of the data you have collected. This will help you identify the strengths of your approach to your class – in the eyes of your students – as well as the areas of discomfort or dysfunction. Your aim is to first understand your students' perceptions and experiences, then use that to help you make decisions about changes to make, and things to keep the same.
Interpreting Quantitative Data
Your CIOS report includes the response rate and interpolated median score for each question students were asked, as well as a breakdown of student responses across the options. As you review this data, consider the following:
- Which scores align with my general sense of student experiences in this course?
- Which scores are surprising – either because they are higher or lower than expected?
- What factors likely contributed to these surprises?
- How do these scores compare with (similar) courses I have taught in the past?
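If you are curious how the interpolated median in your report relates to the response breakdown, the sketch below shows one common way it can be computed. This is illustrative only: the function name, the 1-5 scale, and the tie-handling convention are assumptions, and the exact convention CIOS uses may differ.

```python
import statistics
from collections import Counter

def interpolated_median(responses):
    """Interpolated median of ordinal (e.g., 1-5 Likert-style) responses.

    Treats each rating k as a category spanning [k - 0.5, k + 0.5] and
    spreads the responses in the median category evenly across it.
    """
    if not responses:
        raise ValueError("no responses")
    median = statistics.median(responses)
    if median != int(median):
        return median              # median falls between two categories
    counts = Counter(responses)
    f_median = counts[median]      # responses in the median category
    if f_median == 0:
        return median              # degenerate (e.g., strongly bimodal) case
    n_below = sum(c for rating, c in counts.items() if rating < median)
    n_above = sum(c for rating, c in counts.items() if rating > median)
    return median + (n_above - n_below) / (2 * f_median)

# Example: five 5s, eight 4s, five 3s, two 2s
# ordinary median = 4, interpolated median ≈ 3.88
print(interpolated_median([5]*5 + [4]*8 + [3]*5 + [2]*2))
```

One reason ordinal survey reports often favor the interpolated median over the mean is that it is less sensitive to a handful of extreme ratings, while still reflecting how responses are distributed around the middle of the scale.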
Next, turn to your students' comments to help you understand the scores you have received.
Interpreting Qualitative Data
For responses to open-ended questions, sort the comments and identify themes. Note the frequency of themes, areas of agreement and disagreement among students, and suggestions students have for changes you might make. In addition, look for ways in which the comments your students have offered explain their numerical ratings.
One way to do this is to sort your students' comments according to this chart:
| Comment Type | What to Do |
|---|---|
| Unrelated to Teaching and Learning | Discard these comments, as they do not contribute to your assessment efforts. |
| Nonspecific | Discard these comments, as they do not contribute to your assessment efforts. |
| Positive | These comments tell you what (students think) is working in your class. Enjoy these comments, and compare the themes with less positive comments, looking for areas of agreement and disagreement among students. |
| Actionable Suggestions | These comments offer suggestions or shed light on pain points in the class that you can do something about (if you so choose). Look for themes among student suggestions, and compare this feedback with positive comments, looking for areas of agreement and disagreement among students. Consider the trade-offs associated with making each suggested change (e.g., effort required to make the change and impact on student learning), as well as ways in which you can give students additional information to help them understand why things are set up the way they are. |
| Nonactionable Suggestions | These comments offer suggestions or shed light on pain points in the class, but they are items you cannot address in the context of your class. Sort these comments into themes, and consider passing them along to individuals who can effectively respond (e.g., department chair and/or curriculum committee). |
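If you have a large number of comments, one lightweight way to apply this chart is to tag each comment with one or more theme labels as you read, then tally the tags afterward. A minimal sketch follows; the comments and theme labels are purely illustrative.

```python
from collections import Counter

# Each comment is tagged by hand with one or more theme labels as you read.
# The comments and labels below are made up for illustration.
tagged_comments = [
    ("Loved the weekly reading blogs", ["positive", "blogs"]),
    ("Blogs every week felt like too much", ["actionable", "blogs", "workload"]),
    ("The classroom was always too cold", ["nonactionable", "facilities"]),
    ("Great class!", ["nonspecific"]),
]

# Count how often each theme appears across all comments.
theme_counts = Counter(tag for _, tags in tagged_comments for tag in tags)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

A simple tally like this makes it easier to see which themes recur across many students and which reflect a single voice.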
After you have identified themes and made decisions about what to change or keep constant in your class, make notes for yourself alongside your course materials. This will help you incorporate your CIOS feedback the next time you teach this course.
Increasing CIOS Response Rates
The higher our response rate, the more likely we are to get useful information and data from students. That said, it is often difficult to get a high response rate from students. Here are two creative and effective methods devised by Georgia Tech faculty to raise their response rates above 90 percent:
On the last day of class, I start off by writing on the board:
What did you learn in this class?
I explain that many different people might plausibly ask this question, such as someone who is helping them to pay for their education, or a potential employer. “I see that you took ‘Biomedicine and Culture’ – what did you learn in that?” We discuss answers as a class.
Next I pull up the syllabus on the screen. If it’s a class that I’ve taught many times, the answers that they generated will generally match the course objectives almost verbatim, and the students are impressed by that. If it’s a class that’s more in development, there may be some mismatch, and we discuss whether the objectives should be changed in the future to better reflect what was learned, or if there might be strategies to better meet the objectives.
Then we go through the syllabus as a class. What did they think of the assignments? Any readings that were favorite or excruciating? Was it easy to participate enough? A nice element of this is that students respond to each other’s gripes. For example, someone will usually say “It was a drag that we had to write blogs about the readings every day for which there were readings – I wish we could write fewer.” But then someone else will say “I found the blogs a little annoying, too, but I don’t think I would have done the readings otherwise, and I would have gotten a lot less out of the class.” Someone might say “This reading was too hard,” and someone else might respond, “But that was one of my favorites! Maybe just give us a better warning about what to expect?” I take careful notes on this feedback, and it’s super helpful. After we’ve discussed all of the syllabus elements, I welcome further comments, and we discuss.
Finally, I have the students pull out their laptops and fill out the evaluations right then and there, while they have all these thoughts fresh in their minds.
With this approach, the response rate is generally upwards of 90%, the ratings are very high, and the written comments are more focused.
I highly recommend it!
--Dr. Anne Pollock, School of Literature, Media, and Communication
I incentivize my students to fill out their course evaluations by giving them the opportunity for a "choose your own adventure" final exam. The standard final exam in my class is five questions long, but if they manage to get us to a 95% response rate by the last day of classes, then I post a poll on Piazza that allows the class to choose between having more time (four questions instead of five), or more flexibility (six questions available, and they answer five) on the final exam.
I periodically update the class with our current response rate (via email and in class), and remind them that I take the evaluations seriously and look forward to their feedback. Students typically respond by urging one another to contribute, and my response rate skyrockets.
Behind the scenes, it's pretty simple: I write a five-question exam, and then based on the results I give either a four-question or a "six-question-answer-five" exam. If it is a four-question exam I use a random number generator to randomly remove a question.
--Dr. Ryan Lively, School of Chemical and Biomolecular Engineering
Strengths and Struggles with CIOS Data
End-of-semester feedback from students can be a valuable source of information about our teaching effectiveness, but it is not perfect. Results are statistically noisy, especially in small or mid-sized courses, and survey results across different courses, instructors, or disciplines are not directly comparable. Further, not all students complete CIOS, so results may not be representative.
That said, tools like CIOS do provide an estimate of teaching effectiveness, and student comments shed light on the experience of the course as a whole. The bottom line is that CIOS – while noisy – provides a valid estimate of teaching effectiveness.
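To see why single-semester results in smaller classes are noisy, consider a quick simulation: even if every student in a 25-person class rated the course from the same underlying distribution, the summary score would still vary noticeably from semester to semester. The class size and rating distribution below are made up purely for illustration.

```python
import random
import statistics

# Illustrative only: a made-up 1-5 rating distribution and a 25-student class.
random.seed(0)
ratings = [1, 2, 3, 4, 5]
weights = [0.05, 0.05, 0.15, 0.40, 0.35]   # assumed probability of each rating

# Simulate 1,000 semesters of the "same" class and summarize each one.
semester_scores = [
    statistics.mean(random.choices(ratings, weights, k=25))
    for _ in range(1000)
]

print(f"average across simulated semesters: {statistics.mean(semester_scores):.2f}")
print(f"semester-to-semester spread (std dev): {statistics.stdev(semester_scores):.2f}")
print(f"lowest / highest simulated semester: "
      f"{min(semester_scores):.2f} / {max(semester_scores):.2f}")
```

In this made-up example the simulated semester scores typically spread by roughly ±0.2 around the underlying average, which is one reason comparing single-semester scores across courses or instructors can mislead.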
Click on each item below to reveal what research has shown us about surveys like CIOS:
- Class Size: "Student ratings are affected by class size." Student ratings have a curvilinear relationship with class size: small and very large classes receive better ratings than medium or large classes. (Kuo, 2007)
- Difficulty Level: "Making the course easier will boost instructor ratings." Students tend to value learning more highly in challenging courses requiring more commitment. (Hativa, 2013)
- Discipline: "My discipline has a harder/easier time getting good student ratings." STEM disciplines have been found to have significantly lower student ratings than non-STEM disciplines. (Cashin, 1990; Kember & Leung, 2011)
- Gender: "There is gender bias in survey results." Opinions vary as to the effect of gender on tools like CIOS, and the results of scholarly work on the issue have been mixed. Hativa (2013) reports that no substantive research evidence has been found relating instructor ratings to race, ethnicity, nationality, or other diversity issues. Boring et al. (2016) report the contrary: student evaluations of teaching are significantly correlated with instructor gender, with students regularly rating female instructors lower than their male peers. (Additional sources include Centra & Gaubatz, 2000; Huston, 2005; Huston, 2005-6; Mangan & Fleck, 2011)
- Grades: "High grades will result in better instructor ratings." There is a near-zero correlation between expected or actual grades and student ratings of instructors. (Abrami et al., 1980; Centra, 2003)
- Popularity: "Student ratings are a popularity contest." Popularity and instructor enthusiasm are moderately correlated with results on student ratings of instructors, but they have also been found to contribute to student learning. As a result, popularity may be a useful, albeit indirect, measure of teaching effectiveness. (Aleamoni, 1999)
- Workload: "Lowering the course workload will improve instructor ratings." Course difficulty and workload have been found to be almost entirely unrelated to instructor ratings. (Hativa, 2013)
References
- Abrami, P.C., Dickens, W.J., Perry, R.P., & Leventhal, L. (1980). Do teacher standards for assigning grades affect student evaluations of instruction? Journal of Educational Psychology. 72: 107-118.
- Aleamoni, L.M. (1999). Student rating myths versus research facts from 1924 to 1998. Journal of Personnel Evaluation in Education. 13(2): 153-166.
- Boring, A., Ottoboni, K., & Stark, P.B. (2016). Student evaluations of teaching (mostly) do not reflect teaching effectiveness. ScienceOpen Research.
- Cashin, W.E. (1990). Students do rate different academic fields differently. In M. Theall & J. Franklin (Eds.), Student ratings of instruction: Issues for improving practice. New Directions for Teaching and Learning (Vol. 43, pp. 113-121). San Francisco: Jossey-Bass.
- Centra, J.A. (2003). Will teachers receive higher student evaluations by giving higher grades and less course work? Research in Higher Education. 44(5): 495-518.
- Centra, J.A. & Gaubatz, N.B. (2000). Is there gender bias in student evaluations of teaching? Journal of Higher Education. 70(1): 17-33.
- Hativa, N. (2013). Student Ratings of Instruction: Recognizing Effective Teaching. USA: Oron Publications.
- Huston, T. (2005). Research report: Race and gender bias in student evaluations of teaching. Seattle University, Center for Excellence in Teaching and Learning.
- Huston, T.A. (2005-6). Race and gender bias in higher education: Could faculty course evaluations impede further progress toward parity? Seattle Journal for Social Justice. 4(2): 591.
- Kember, D., & Leung, D.Y.P. (2011). Disciplinary differences in student ratings of teaching quality. Research in Higher Education. 52(3): 278-299.
- Kuo, W. (2007). Editorial: How reliable is teaching evaluation? The relationship of class size to teaching evaluation scores. IEEE Transactions on Reliability. 56(2): 178-181.
- Mangan, M.A. & Fleck, B. (2011). Online student evaluation of teaching: Will professor “hot and easy” win the day? Journal on Excellence in College Teaching. 22(1): 59-84.
- CIOS core questions
- Teaching Assistant Opinion Survey (TAOS) core questions
- Optional additional questions [pdf]
- CIOS FAQs from the Office of Academic Effectiveness