Too Much Information
How Product Labeling Killed Some Truly Righteous Weed and Might Even Raise Your Hospital Bill
Everyone favors greater transparency in markets. But surprisingly, sometimes it can backfire.
"Transparency" is one of those policy prescriptions that pretty much everyone can agree with, like "Be nice to people" and "Never run with scissors." Even libertarians generally smile when the government steps in to force businesses to provide more information, though to be sure, when the business is a fast food restaurant, and the information is how much fat we'll pack on if we eat the 10-piece McNuggets we're craving, that smile may take on a certain strained, false quality.
Still, transparency! It's as all-around marvelous as motherhood and apple pie, and pretty much everyone is theoretically in favor. But in at least one market, Mark Kleiman reports, transparency may have had the unexpected result of killing a product that everyone liked:
It appears that one local grower developed a strain sold under the “Purple Urkle” label. It was widely held, by producers and consumers alike, to be truly righteous weed, and it flew off the shelves.
Then the fashion for chemical testing came in. Purple Urkle tested at a mere 7% THC – perhaps twice the THC content of what was called “marijuana” when I was in college, but well below the 12-18% that current products claim (more accurately in some cases than in others). Result: even the consumers who had already experienced and enjoyed Purple Urkle, and had been asking for it by name, wouldn’t touch it. They were so used to the idea that quality is defined by THC content that they didn’t want to smoke what they now “knew” to be weak weed. So the brand more or less died.
In principle, a legal cannabis market could improve consumer satisfaction and safety by delivering products of known chemical composition. But if the heavy users who dominate the market in terms of volume have a prejudice in favor of maximum THC content, the practical outcome could fall well short of the promise.
It would be useful if regulators had a solid scientific basis on which they could tell consumers about the likely impacts on them of using various quantities of various products through various routes of administration. But that scientific basis does not now exist, and it’s not especially “interesting” science in terms of fundability or publication prospects; it’s more like the work that gets done at Pillsbury than it is like the work that gets done at NIH. And of course you couldn’t even think about doing it in this country; even if some IRB would approve it, the monopoly producer of research cannabis at the University of Mississippi couldn’t deliver nearly the quality or variety of material available in any California or Washington State dispensary.
Strange to say, this is not the only such case in the literature.
A few years back, two experimental economists named Bart Wilson and Arthur Zillante were doing some research on market design, and an economic problem known as "the lemons problem". First described by George Akerlof in 1970, this problem shows how asymmetrical information between buyers and sellers can kill a market, or lead to a bad equilibrium where only bad products are sold at poor prices.
What is asymmetrical information, I hear you cry.
Well, it's a popular economics blog hosted at the Daily Beast and written by the world's tallest female econoblogger. But asymmetrical information is also a term of art in economics. It means that one party to a transaction has more information than the fellow on the other side.
Asymmetrical information can lead to profound market failure, as in the market for used cars (the "lemons" to which Akerlof referred). Why do cars lose so much value as soon as you drive them off the lot? In part because once you've driven more than a few miles, you now have much more information than the buyer about how the car works. There is a dark suspicion that if you are selling so soon, it is probably because you have discovered something wrong with the car and are now trying to pawn off your substandard goods on an unwary buyer. So buyers demand a discount.
Of course, this makes selling your nearly-brand-new car a costly proposition. You'll take a bath. So the only people who want to sell are . . . people who have just discovered something very wrong with the car.
In theory, this can lead to a "death spiral" where no one will buy the car. In practice, we have mechanisms such as warranties which can halt the spiral. But this is still a real problem in many markets. It's thought to be a major issue in the market for health insurance, for example, where you know better than your insurer whether you're likely to get very sick.
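The unraveling logic can be sketched in a few lines of Python. This is purely an illustration with made-up numbers, not Akerlof's actual 1970 model: suppose car quality is uniform between 0 and 1, sellers value a car at its quality, and buyers value it at 1.5 times its quality. At any price, only owners of the worse cars are willing to sell, so the rational offer keeps dropping.

```python
# Toy sketch of the "lemons" death spiral (illustrative numbers only).
# Quality q is uniform on [0, 1]; a seller values a car at q, a buyer
# at 1.5 * q. At an offered price p, only sellers with q <= p will
# sell, so the average quality of cars on the market is p / 2, and a
# rational buyer will offer at most 1.5 * (p / 2) = 0.75 * p.

def next_price(p: float) -> float:
    """Buyer's best offer, given that only cars worth <= p are for sale."""
    avg_quality = p / 2          # remaining sellers' qualities: uniform on [0, p]
    return 1.5 * avg_quality     # buyer pays their expected value

p = 1.0
for step in range(6):
    print(f"round {step}: price = {p:.3f}")
    p = next_price(p)            # each round, the offer shrinks by 25%
```

Each round the price falls to three quarters of its previous level, so in this toy setup the market grinds toward zero — exactly the spiral that warranties and reputations interrupt in practice.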
But back to Wilson and Zillante. They needed a baseline for how bad the lemons problem would be in an experimental setting. They set up three experiments, each with two goods: a superior good and a regular good. Buyers had no way to tell whether a product was "super" or regular until they'd bought the damn thing. Moreover, they could only buy one product from a seller, and they only bought once, which meant there was no way for a seller to prosper by establishing a reputation for good value. The way to make the most money as a seller was to make as much as you could on single sales.
Since the sellers of super products had a higher "cost of goods" than the sellers of regular products, and buyers valued super products much more highly than regulars, the way to make the most possible money was to rip people off: get them to pay "super" prices for a low-cost "regular" good. The purpose of this baseline experiment was to establish how many buyers would end up getting ripped off.
In one of the experiments, there were no posted prices; buyers approached sellers and bargained. In the second experiment, there were posted prices, but the sellers could only see their own prices, not the prices that other sellers were offering. In the third experiment, however, prices were posted so that everyone could see them. The third market was the most transparent, and also the closest to how most markets function in modern America.
If anything, the transparent market "should" have been more efficient, because it is one of the truisms of economics that more information makes markets better. But something remarkable happened: a spectacular failure.
In the first two markets, they didn't have a lemons problem. The experiments contained barely any ripoffs; most of the prices were efficient and fair.
But in the third market, the most transparent one, almost all of the regular goods were sold at ripoff prices. Once the sellers of regular goods could see what the vendors of "super" goods were charging, it was easy to advertise their regular goods as "super" goods at just slightly less than the price of the real thing. They made a killing, while the buyers got creamed.
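The incentive at work in that transparent market can be made concrete with a little arithmetic. The numbers below are my own illustrative assumptions, not Wilson and Zillante's actual experimental parameters: the point is just that once the super price is visible to everyone, mimicking it with a cheap regular good dominates honest pricing.

```python
# Stylized payoffs in the fully transparent posted-price market.
# All values are hypothetical, chosen only to show the incentive:
# buyers value a "super" good at 10 and a regular at 3; a super costs
# its seller 6 to supply, a regular costs 1.

SUPER_VALUE, REGULAR_VALUE = 10, 3
SUPER_COST, REGULAR_COST = 6, 1

posted_super_price = 9                       # visible to every seller

# Honest play: sell the regular good at a fair regular price.
honest_price = 2
honest_profit = honest_price - REGULAR_COST

# Mimicry: label the regular "super" and undercut the visible super price.
mimic_price = posted_super_price - 0.5
mimic_profit = mimic_price - REGULAR_COST
buyer_loss = mimic_price - REGULAR_VALUE     # buyer overpays by this much

print(f"honest profit: {honest_profit}")
print(f"mimic profit:  {mimic_profit}")
print(f"buyer's loss:  {buyer_loss}")
```

With these numbers the mimic clears several times the honest profit (and more than an honest super seller's margin), with the whole surplus coming out of the buyer's pocket — which is roughly the pattern the third treatment produced.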
This was not at all what Wilson and Zillante had expected to find. Remember, these were just supposed to be slight variations on the same setup, to provide a baseline for another experiment. They weren't looking for any surprising differences between the groups. Besides, "more transparent" markets are supposed to be synonymous with "more efficient" and "fairer" markets. The most transparent market should have been the one with the smallest lemons problem. But not in this case; adding more information made consumers worse off.
Does this have any real-world application? Well, just today, the Obama administration published data about what hospitals around the country are charging for various procedures, showing that prices can vary wildly between hospitals that are quite close together. The hope is that this will help fix some of these disparities. But looking at Wilson and Zillante's work, we might also fear that this will help "fix" the disparities by allowing the cheaper hospitals to raise their rates.
Of course, I'm sure hospitals have already spent quite a lot of effort finding out what other hospitals are charging, and there are all sorts of other institutional factors that will hopefully hold that sort of thing in check. And this shouldn't be taken to prove that "transparency is bad". The problems created by a little transparency might be solved by even more transparency--for example, by punishing fraud.
But it just goes to show how, under the right conditions, even something as obviously good as more information can actually cause more problems than it solves. We don't just need "more transparency". Instead, we should think hard about what "better transparency" would look like.
Update: One commenter argues that hospitals already have the charge data, and it's useless. Another commenter points out a market where we actually did see this dynamic play out: CEO pay. Publishing the figures has led compensation committees to set CEO pay by looking at the packages given to other, comparable CEOs. The result has been a general inflation in CEO compensation packages.
Update II: More examples! On Twitter, Ted Frank reminds me that law firm profits are another instance of transparency backfiring; it is generally agreed, he says, that ALM's reporting on this data had adverse results. And GMU's Erik Angner points me to this paper on conflict-of-interest disclosure, which argues that it may in fact increase biased advice by giving people moral license to put their thumb on the scales.