By James Kwak
Daniel Hamermesh points out a Wall Street Journal article on how colleges and universities are trying to increase accountability and productivity by measuring costs and benefits quantitatively. The “star” example is Texas A&M, which created a report showing a profit-and-loss summary for each professor or lecturer, where revenues are defined as external grants plus a share of tuition (if you teach one hundred students, you are credited with ten times as much revenue as someone who teaches ten students).
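The scheme as described boils down to simple arithmetic. Here is a minimal sketch of that formula; the cost model and every dollar figure are hypothetical illustrations, not Texas A&M's actual numbers.

```python
# Hypothetical sketch of the "professor P&L" described above.
# All figures are invented for illustration.

def professor_pnl(grant_dollars, students_taught, tuition_per_student, salary):
    """Revenue = external grants + a per-student share of tuition;
    'profit' is revenue minus the professor's cost (here, just salary)."""
    revenue = grant_dollars + students_taught * tuition_per_student
    return revenue - salary

# A lecturer with no grants teaching 100 students at a $500 tuition share:
print(professor_pnl(0, 100, 500, 40_000))        # 10000
# A researcher with a $200,000 grant teaching only 10 students:
print(professor_pnl(200_000, 10, 500, 120_000))  # 85000
```

Note what the formula counts: heads in seats and grant dollars, with no term anywhere for what, or how well, anyone is taught.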
Let’s not argue about whether our colleges and universities are doing a good job. Let’s not even argue about whether we need more transparency and accountability in higher education. Assuming we do, this is just about the most idiotic way of doing it that I could imagine. No, wait; there’s no way I could have imagined something this stupid.
The “professor P&L” is an attempt to bring private-sector “efficiency” into higher education, but I can’t believe anyone who actually worked in the private sector could think this could work. At my company, we* thought a lot about the problem of software productivity and how to measure it. And the problem is, there really is no way to do it on an individual level. Measuring lines of code is crazy, because ideally you want to solve a given problem in as few lines of code as possible. Measuring classes or methods is equally crazy. Measuring what some people call “function points” is crazy, because the count depends on what you decide counts as a function point. Most fundamentally, measuring quantity of output is crazy, because quality is much, much, much more important than quantity. It’s better to write a little bit of software well, in a way that doesn’t break anything else, can be tested reliably, and can be expanded on in the future, than to write a lot of software badly. So instead, we looked at whether the software did what it was supposed to do, whether it could be tested, and whether it did what our customers wanted. That’s how the private sector works.
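The lines-of-code problem is easy to demonstrate. In this toy sketch (the example computation and the metric are my own, not anything from the company), the same calculation is written two ways and scored by a naive line-counting “productivity” metric:

```python
# Two implementations of the same computation (sum of squares),
# stored as source strings so we can "score" them.

verbose_version = """
total = 0
for n in numbers:
    square = n * n
    total = total + square
return total
"""

concise_version = """
return sum(n * n for n in numbers)
"""

def lines_of_code(source):
    # The naive productivity metric: count non-blank source lines.
    return sum(1 for line in source.splitlines() if line.strip())

# Identical behavior, but the metric scores the padded version
# as five times more "productive":
print(lines_of_code(verbose_version), lines_of_code(concise_version))  # 5 1
```

The metric rewards exactly the wrong behavior: padding, duplication, and verbosity all raise your score, while refactoring code down to something clean and maintainable lowers it.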
And if you can’t measure software productivity, how are you going to measure educational productivity, which is much more complicated? What is the unit of output you are going to measure? More fundamentally, what is the output you are trying to produce?
What is Texas A&M measuring? The fact that you are teaching someone, not what you are teaching or how well. In some fantasyland, you might think that the “free market” for college classes will make students flow toward the professors who teach useful things well. As anyone who has ever gone to a university knows, however, students flow toward (a) required courses and (b) professors who give easy grades. And they are measuring the grant dollars you bring in, not what you do with those grant dollars. So, for example, computer science will do worse than mechanical engineering, simply because it is less capital-intensive and therefore attracts smaller grants.
If you’re going to measure outputs, the places to look are whether students graduate from college knowing things, whether they are satisfied with their education, and whether they are able to do the jobs employers need them to do. We could have a reasonable debate about whether those things can be measured, and whether they are worthwhile to measure. There is a lot of evidence that this kind of testing has harmful unintended consequences at lower levels, and it seems even more inappropriate for college, but that’s something that could be debated and tested.
So where did this idea come from? The Texas Public Policy Foundation, a conservative think tank. Apparently they didn’t even finish first-semester economics. They got the bit about how market forces are driven by profits and losses. But they missed the bit about why markets work: markets only increase social welfare if the prices of things reflect their value, not if they are completely artificial.
* By “we,” I mainly mean other people at the company.