The WSJ spewed forth this bit of online privacy pustulance from an alleged "professor of economics," Paul Rubin.

Paul Rubin's First Falsehood
1) Privacy is free. Many privacy advocates believe it is a free lunch—that is, consumers can obtain more privacy without giving up anything. Not so. There is a strong trade-off between privacy and information: The more privacy consumers have, the less information is available for use in the economy. Since information helps markets work better, the cost of privacy is less efficient markets.
It's not that "privacy is free" any more than "freedom is free." Privacy is the right to not be watched all the time. Clearly the groups working on privacy are expending time and energy. That does not sound free to me. But let's take a closer look at Paul Rubin's falsehoods.
Fallacy #1.1: "consumer privacy means the economy has less information" and "information helps the markets work better"

Paul does not make a case that consumers' private information is the information needed to make the markets work. He just asserts that consumers give up less information and that this information is needed for an efficient economy.

Fallacy #1.2: "helps"

How much value is derived from consumers' private information? Notice that Paul himself is fudging with that wussy word "helps." Does the economy function 10% less efficiently? 5%? 3%? What exactly is the realized benefit to the economy?

Fallacy #1.3: The consumer realizes some benefit

Does the consumer giving up the information realize any tangible value? Or is the economic value realized only by the recipient of the information? Most transactions involve an exchange of value. Does the consumer receive anything of value? How many sites ask for private information and then offer nothing useful, or worse, turn out to be scam sites?

Paul Rubin's Second Falsehood
2) If there are costs of privacy, they are borne by companies. Many who do admit that privacy regulations restricting the use of information about consumers have costs believe they are borne entirely by firms. Yet consumers get tremendous benefits from the use of information. Think of all the free stuff on the Web: newspapers, search engines, stock prices, sports scores, maps and much more. Google alone lists more than 50 free services—all ultimately funded by targeted advertising based on the use of information. If revenues from advertising are reduced or if costs increase, then fewer such services will be provided.
Fallacy 2.1: Uncle Sam is counting on you! Give up your privacy or the world will end!!

This assertion is simply ludicrous. I know it is sooooo last century, but does anyone remember broadcast TV? Maybe radio? Did everyone remember to "register" with their favorite FM station before listening to the free music? Of course not! Did advertisers refuse to advertise on radio for the last 70 years because they didn't have targeted information about the listeners? How about newspapers? Of course not! Clearly the economy managed to function quite well without demanding private information from consumers.

Fallacy 2.2: News flash: advertising revenue is already down. And it ain't because of privacy groups.

The basic economics of online advertising is flawed. There are simply so many places to display ads that the value of each display ad, even on a popular site like Facebook, is in the range of about $0.00002 (yes, Dorothy, much less than a penny). And this is for a site like Facebook, which has a lot of private information about its users.

Fallacy 2.3: News flash: Advertisers can use the information

The reality here is that most ad buyers still have very limited mechanisms to segment their target audience: sex, approximate age, and that is about it. All that detailed information the consumer is being asked to give up? For the most part, unused.

Fallacy 2.4: The companies depend on the information they are gathering to make enough money to stay in business, and without the information the companies will disappear.

Completely without substance. Companies in Silicon Valley fold for many reasons. The most common reason is spending all the invested capital before figuring out how they will make money. Viable internet companies don't go out of business. Once an internet business becomes cashflow positive, the company is successful. Consumer privacy issues have never changed a viable internet business into a failure.
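As a quick sketch of the display-ad arithmetic in Fallacy 2.2 above (taking the post's roughly $0.00002-per-impression figure as the assumption, not a measured market rate), here is what such rates imply:

```python
# Back-of-envelope display-ad economics.
# NOTE: the per-impression rate below is this post's own illustrative
# figure, treated as an assumption; real rates vary widely.
rate_per_impression = 0.00002  # dollars earned per displayed ad

# Impressions a site must serve to earn one dollar:
impressions_per_dollar = 1 / rate_per_impression
print(f"Impressions per $1 earned: {impressions_per_dollar:,.0f}")

# A page showing three ads to one visitor earns:
per_pageview = 3 * rate_per_impression
print(f"Revenue per 3-ad pageview: ${per_pageview:.5f}")

# Pageviews needed per month to cover a hypothetical $10k hosting bill:
pageviews_needed = 10_000 / per_pageview
print(f"Pageviews/month to cover $10k: {pageviews_needed:,.0f}")
```

At the assumed rate, a dollar of revenue requires tens of thousands of impressions, which is the point: per-ad value this low is a supply-glut problem, not a privacy-regulation problem.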
The more usual case is that in spite of gathering all this private information, the company couldn't figure out how to make money with the information.

Paul Rubin's Third Falsehood
3) If consumers have less control over information, then firms must gain and consumers must lose. When firms have better information, they can target advertising better to consumers—who thereby get better and more useful information more quickly. Likewise, when information is used for other purposes—for example, in credit rating—then the cost of credit for all consumers will decrease.
Fallacy 3.1: Fallacy of the win-lose by implication scenario: "consumers have less control over information, then firms must gain and consumers must lose."

Paul is arguing the inverse here. He is implying a falsehood: if "consumers have more control over information, then firms must LOSE." Apparently, Paul cannot imagine a scenario where firms manage to function without consumers' private information. Paul really needs to revisit the economic history of this country. Maybe Adam Smith can help him out. Once again, the economy managed to function without privacy being invaded.

Fallacy 3.2: Red Herring: Credit scores are not an online privacy issue.

Credit information gathered for the purpose of issuing a loan belongs to a specific transaction already covered by consumer law. Online privacy is all about information gathering that is not needed for a specific, immediate transaction.

Paul Rubin's Fourth Falsehood
4) Information use is "all or nothing." Many say that firms such as Google will continue to provide services even if their use of information is curtailed. This is sometimes true, but the services will be lower-quality and less valuable to consumers as information use is more restricted. For example, search engines can better target searches if they know what searchers are looking for. (Google's "Did you mean . . ." to correct typos is a familiar example.) Keeping a past history of searches provides exactly this information. Shorter retained search histories mean less effective targeting.
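Worth noting: "Did you mean ...?"-style typo correction needs only a dictionary of common terms plus fuzzy matching, not any individual's search history. A minimal illustration using Python's standard-library difflib (the word list here is made up for the example):

```python
# Spell-correcting a query without any per-user search history:
# a vocabulary of known/common terms is sufficient.
import difflib

common_queries = ["butterflies", "buttercup", "battery", "flutter"]

def did_you_mean(query, vocabulary):
    """Suggest the closest known term, or None if nothing is close enough."""
    matches = difflib.get_close_matches(query, vocabulary, n=1, cutoff=0.7)
    return matches[0] if matches else None

print(did_you_mean("buterflies", common_queries))  # butterflies
```

Everything the correction needs is in the shared vocabulary; no record of who typed what, or when, is consulted.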
Fallacy 4.1: Google does not need past history to correct a search.

I have search history turned off, and I have had no problems. If this were indeed such a problem for Google, then every library patron who searches the internet from a public computer must have this "problem." After all, my search for "butterflies" is going to be blended with the search history of every other library patron.

Fallacy 4.2: The "lower" quality is somehow meaningful

At a certain point, additional precision is meaningless. For example, if you ask your kids where they are, is it really more useful if they reply "I am 3.4 meters from the front door, facing 3 degrees north, sitting down," than if they say "I am at home"?

Paul Rubin's Fifth Falsehood
5) If consumers have less privacy, then someone will know things about them that they may want to keep secret. Most information is used anonymously. To the extent that things are "known" about consumers, they are known by computers. This notion is counterintuitive; we are not used to the concept that something can be known and at the same time no person knows it. But this is true of much online information.
Fallacy 5.1: "Anonymous data"

It is relatively easy to deanonymize data. Netflix was forced to cancel their second planned contest because it was demonstrably easy to deanonymize the Netflix data. This was in spite of Netflix doing their best to prevent exactly that. So a motivated company trying to anonymize data can't do so, but a less motivated company is going to do better?

Fallacy 5.2: Deanonymizing takes a lot of effort.

In fact, Latanya Sweeney's research showed that ZIP code, date of birth, and gender are enough to uniquely identify an estimated 87% of the U.S. population. Anyone asking "Happy birthday! How old are you?" at your birthday party has enough information. Netflix is now facing a lawsuit about this.
The suit is also asking the court to stop Netflix from launching its promised second contest to improve the recommendations — this time giving out user data that includes ZIP codes, ages and gender, along with movie ratings and ID numbers substituted for user names. That’s a foolish idea on Netflix’s part, according to University of Colorado law professor Paul Ohm, who in a blog post in September called the idea “a privacy blunder that could cost millions of dollars in fines and civil damages.” Ohm, a former Justice Department lawyer, recently authored a legal paper calling into question the practice of anonymizing data, essentially finding that if data is useful to researchers, it could also, by definition, be re-identified.
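The re-identification the article warns about is mechanically trivial: join the "anonymized" release to any public roster on the shared quasi-identifiers. A toy sketch (all names, records, and ratings here are invented for illustration):

```python
# Re-identification by quasi-identifier join: an "anonymized" release
# keyed only by (ZIP, age, gender) is linked against a public roster.
# Every record below is fabricated for the example.

public_roster = [
    {"name": "Alice", "zip": "80302", "age": 34, "gender": "F"},
    {"name": "Bob",   "zip": "80302", "age": 61, "gender": "M"},
    {"name": "Carol", "zip": "10001", "age": 34, "gender": "F"},
]

anonymized_release = [
    {"zip": "80302", "age": 34, "gender": "F", "rating": "5 stars"},
]

def reidentify(release, roster):
    """Match each 'anonymous' record to roster entries sharing its quasi-identifiers."""
    key = lambda r: (r["zip"], r["age"], r["gender"])
    index = {}
    for person in roster:
        index.setdefault(key(person), []).append(person["name"])
    return [(index.get(key(rec), []), rec["rating"]) for rec in release]

print(reidentify(anonymized_release, public_roster))
# A single match means the record is re-identified: it can only be Alice.
```

No cryptanalysis, no special access: just an exact-match join against data that is already public (voter rolls, social-network profiles, birthday posts).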
I think Netflix would disagree with Paul Rubin.

Paul Rubin's Sixth Falsehood
6) Information can be used for price discrimination (differential pricing), which will harm consumers. For example, it might be possible to use a history of past purchases to tell which consumers might place a higher value on a particular good. The welfare implications of discriminatory pricing in general are ambiguous. But if price discrimination makes it possible for firms to provide goods and services that would otherwise not be available (which is common for virtual goods and services such as software, including cell phone apps) then consumers unambiguously benefit.
Fallacy 6.1: Price discrimination is o.k. no matter what it is based on.

Paul Rubin is willfully ignoring Redlining:
Redlining is the practice of denying, or increasing the cost of, services such as banking, insurance, access to jobs, access to health care, or even supermarkets to residents in certain, often racially determined, areas. The term "redlining" describes the practice of marking a red line on a map to delineate the area where banks would not invest; later the term was applied to discrimination against a particular group of people (usually by race or sex) no matter the geography. During the heyday of redlining, the areas most frequently discriminated against were black inner city neighborhoods. Through at least the 1990s this practice meant that banks would often lend to lower income whites but not to middle or upper income blacks. Reverse redlining occurs when a lender or insurer particularly targets minority consumers, not to deny them loans or insurance, but rather to charge them more than would be charged to a similarly situated majority consumer.
Paul Rubin, as an economics professor you should know about Redlining.

Paul Rubin's Seventh Falsehood
7) If consumers knew how information about them was being used, they would be irate. When something (such as tainted food) actually harms consumers, they learn about the sources of the harm. But in spite of warnings by privacy advocates, consumers don't bother to learn about information use on the Web precisely because there is no harm from the way it is used.
Fallacy 7.1: Consumers understand and are willing participants in giving up their privacy.

The Facebook privacy policy is longer than the U.S. Constitution:
If you guessed the latter, you’re right. Facebook’s Privacy Policy is 5,830 words long; the United States Constitution, without any of its amendments, is a concise 4,543 words.
Considering how vague the Facebook policy is, most consumers have no idea what the policy actually means.

Fallacy 7.2: Ignorance means permission.

Presuming that consumer ignorance exists because there is no harm is a huge leap. The consumer has no ability to ask Google, Netflix, or Yahoo for an exact list of who got their information. There is no phone number to call, no email address that will be responded to. Even a motivated consumer is in the dark.

Paul Rubin's Eighth Falsehood
8) Increasing privacy leads to greater safety and less risk. The opposite is true. Firms can use information to verify identity and reduce Internet crime and identity theft. Think of being called by a credit-card provider and asked a series of questions when using your card in an unfamiliar location, such as on a vacation. If this information is not available, then less verification can occur and risk may actually increase.
Fallacy 8.1: Gathering information reduces fraud.

The opposite is true. With more private information stored on more computers at more companies, there are more opportunities for hackers to gain access to the information. The hackers need only penetrate the company with the weakest security.

Paul Rubin's Ninth Falsehood
9) Restricting the use of information (such as by mandating consumer "opt-in") will benefit consumers. In fact, since the use of information is generally benign and valuable, policies that lead to less information being used are generally harmful.
Fallacy 9.1: "The information is valuable but not really."

If the information is so valuable, why shouldn't consumers be allowed to protect it?

Paul Rubin's Tenth Falsehood
10) Targeted advertising leads people to buy stuff they don't want or need. This belief is inconsistent with the basis of a market economy. A market economy exists because buyers and sellers both benefit from voluntary transactions. If this were not true, then a planned economy would be more efficient—and we have all seen how that works.
Fallacy 10.1: Advertising doesn't work!

Do I really need to say more? Advertising has no ability to induce demand. Women with 300 pairs of shoes really need and want all 300 pairs.

Coming next week: Paul Rubin will write an article about the evils of cash purchases. Paul will explain how cash purchases deprive desperately poor banks of needed purchase information. I might add more later, but enough with the pustulance!