Almost certainly more helpful than some of the things that appear on student evaluations.
More seriously, though, in my most recent effort to produce an "evaluations" document for job applications, I noticed more than ever before how random the collection of items on the evaluations is: some have to do with my quality as an instructor, some with how the class is set up, some with the students' own perception of the work the class involves, and some rather more obviously reveal the institution's sense of what is important (my Duke evaluations had categories for "was participation encouraged?" and "was the grading system fair?"; NC State, at least when I taught there, asked neither). These items are not ordered in any logical manner, e.g. "now we're going to ask you about the teacher," then "now we're going to ask you about the class."
It does make me think that professors could benefit from explaining the evaluations more clearly to their students. When we grade, we usually give rubrics, or at least criteria, that explain which parts of an assignment are important and why. We do this in part, I think, because we don't always expect students to know or recognize this right away. The same, it seems, should apply to evaluations: they play a crucial role in how we ourselves are evaluated, yet we expect students to just magically intuit why they matter and what the different questions mean.