Accounting for Student Satisfaction
A relatively quick post this week, here at the end of the school year with a lot of moving pieces and things to do…
In consulting with a school looking to implement a student college counseling survey (probably a post for the future!), the team shared that part of their motivation was feedback from other, anonymous surveys indicating that students were not overly satisfied with “the results” of recent graduating classes. In other words, the surveys reflected an anonymous desire for graduates to end up at “better” universities. The college office felt some pressure, and so they sought to examine this more closely.
From the start, though, the undertaking smacked of yet another disconnect between data points assumed to relate to college counseling and an understanding of root causes.
First, the survey in question asked a general question of all students about their satisfaction with the college process and outcomes. It did not ask students to comment on their own processes and outcomes; in this way, the question solicited only an overall sense of things. Additionally, the survey went to all students, including those who had not yet entered the college counseling process in any formal way (at this school, grade 9 and 10 students). As such, the question seemed designed to capture a reputational, “word on the street” sort of metric. Certainly, it did not reveal anything concrete or quantifiable. As one of the counselors put it, “We are asking grade 9 and 10 students to provide an opinion on a topic that we have not taught them about yet.”
In light of this, the college office made a number of decisions regarding their own student survey:
They decided it would not be anonymous, so they could read each response within that student’s specific context; the original question on the anonymous schoolwide survey would remain, though. In their thinking, this would still allow them to understand student perceptions.
They broke their survey questions into two parts: one focusing on process and one focusing on outcome. In the process section, they asked students to report their level of engagement with, and benefit from, different elements of the programming: the time spent in classes, the time spent in individual meetings, and additional programming. They then asked students to rate their satisfaction with their overall process on a Likert scale. The outcome section followed the same pattern, with some initial questions about each student’s outcome before a direct request to rate overall satisfaction with that outcome. The team thought this would allow them to see the juxtaposition between process and outcome, honoring the reality that outcome is largely outside of anyone’s control.
Finally, the office spent a great deal of time deliberating about who should receive the survey. In the end, they decided to survey all grade 11 and grade 12 students at the end of the academic year (June). They had begun the process with all grade 11 students in January, so things were well underway with that group, and with most graduates headed to North America, the vast majority of grade 12 students had concluded their process by the time of the survey. The one adjustment between the two grade levels was to omit the outcome questions for grade 11 students, though the process questions remained. The college office wanted to hear from seniors once as many as possible had completed their process, and from juniors at a point when the office could still adjust its tactics to better support students mid-process.
They rolled out the survey in recent weeks, and we then enjoyed a robust conversation about the real value of measuring student satisfaction with outcomes. On the process side, they received tremendous feedback about their program and the ways students felt it might be improved.
The outcome results, however, turned out to depend entirely on variables outside their control. They shared examples of students with low outcome satisfaction: students disappointed not to get into a dream school that was far beyond their realistic chances of admission; students who felt forced to attend one institution because parents or finances dictated the choice over a place they preferred; recruited athletes who were not ultimately recruited by the level of school they had hoped for; and so on.
For their part, the office now feels firmly in possession of data and metrics to support their programming, as well as the adjustments they are making to better support their students. They certainly feel able to articulate a response to any doubts raised by the separate, anonymous survey.
In the end, after this first year of surveying, the office concluded that they need to revisit whether to continue asking the outcome satisfaction question at all, given how unhelpful the results were. Looking ahead, they plan to roll out a similar survey to parents next. Again, though, understanding what satisfaction means is a tricky business, one that requires a great deal of examination and thoughtfulness.