Friday, November 8, 2019

Making Your Own Selectivity

Today's thoughts are from Maggie Koerth, writer for FiveThirtyEight and former science writer for Boing Boing. Maggie is also the author of Before the Lights Go Out.

Today on Twitter she was ruminating about the news that colleges buy the names of students with low SAT scores from the College Board, then encourage those students to apply, knowing they will reject them, in order to boost the colleges' selectivity ratings. Ugh.
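The arithmetic behind the trick is worth spelling out. "Selectivity" as usually reported is just the acceptance rate, admits divided by applications, so a school can look more selective without changing its entering class at all: it only has to inflate the denominator with applications it intends to reject. A minimal sketch, with made-up numbers:

    # Hypothetical numbers: reported "selectivity" is just the
    # acceptance rate, so recruiting applicants a school never
    # intends to admit makes it look choosier on paper.

    def acceptance_rate(admits: int, applications: int) -> float:
        return admits / applications

    admits = 2000
    honest_pool = 10000
    solicited_rejections = 6000  # applications recruited only to be denied

    print(f"{acceptance_rate(admits, honest_pool):.1%}")
    # 20.0% acceptance rate

    print(f"{acceptance_rate(admits, honest_pool + solicited_rejections):.1%}")
    # 12.5% -- same 2,000 admitted students, "more selective" school

Same entering class either way; the only thing that changed is how many students were set up to be turned away.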

Maggie had this to say:

I've been thinking about this for a day and a half now. And here's why. This game exists because a metric (selectivity) exists. And the metric exists because ... once upon a time a news magazine decided they could make some good side money ranking colleges.

Does "selectivity" actually tell you anything useful about how good your education will be? What does "selectivity" actually measure that is of value to a student?

Why do I have the feeling somebody chose this metric cause they just needed more stuff to rank by?

A few years ago, I read a research paper that used the U.S. News & World Report college rankings as an example of how we take subjective, arbitrary metrics and adopt them as representative of objective worth and then reshape whole systems around them.

And it boils down to this: You could make a useful-to-students guide to colleges. But that takes a lot of reporting. Because many of the things students want to know are subjective and can't be well captured in numbers. And you have to do it, yourself, over and over.

Or, you could come up with a bunch of metrics that colleges submit to you and that look scientific and objective. And that's a lot easier. Requires fewer reporters. Makes the universities compete for your favor in ways you can maybe monetize. So why not?

That, my friends, was an editorial decision.

An editorial decision that has had broad-ranging effects on how kids feel about themselves, how schools treat them, what schools feel pressured to spend $ on (and raise tuition for) ...

And I guess the big takeaway for me is to spend more time thinking about the downstream effects of editorial decisions. When I make a choice, what could the consequences of that choice be?

Well, that, and my usual rant about "just because it's data doesn't mean it's objective fact that tells you what you need to know."

Relatedly, parents, keep this shit in mind when you're thinking about Great Schools rankings in your area. Ask questions about the metrics your public school is judged by. And then think about whether they are really telling you what you want to know.

At one point, I was looking at a house in a neighborhood whose grade school had a "2" ranking with Great Schools. And I got all worried. And I looked into it. It had low test scores because more than 50% of its students were ESL. But the teachers were winning awards.

The principal was beloved. It had some really cool after school robotics and language and arts stuff. It was not a bad school. It was a school that scored poorly on a ranking system.

I once literally heard the principal at my kids' current school tell a prospective parent that part of why our school didn't score as high as another school across town was because we didn't hire a person whose sole job was to massage the Great Schools ranking.

1 comment:

Michael Leddy said...

More tricks: Waive the application fee for a limited time to boost applications. Have recruiters visit classrooms and give every student an application form. Selectivity immediately improves.