Wednesday, September 25, 2013

The one weird old trick to make driving in traffic quicker and less frustrating

On Wonkblog today, a collection of traffic engineering factoids, including one about humanity's collective rejection of a driving technique that would indisputably improve our lives: zipper merging.
We're all basically idiots when it comes to merging. Whenever a road shrinks from two lanes to one, it would actually be most efficient for everyone to stay in their lane right until the point where the lanes converge and then execute a "zipper merge." But most drivers feel bad about doing this and tend to shift over to the open lane early. This causes congestion.
Here in Minnesota, the Department of Transportation has instituted a "Don't Be Minnesota Nice!" campaign to encourage zipper merging.  Ten minutes on I-94 will confirm the campaign's failure.

So that's my little tip to make driving more tolerable for everyone: learn when zipper merging is appropriate, and use it!  I zipper merge without apology, and I can report back that it is fantastic.  Few things are worse than sitting in traffic; likewise, few things are better than roaring down a completely empty exit lane, blowing by hundreds of stopped cars, and knowing it's all for the common good.  Zipper merging makes your life easier and, while other drivers may not appreciate it, it makes their lives easier too. There aren't many times in day-to-day life you can feel absolutely certain that you're doing The Right Thing and remain secure in your decision in the face of other people's disapproval.  But here, you are not only entitled to act with smug confidence that your own superior knowledge gives you the privilege to skip to the front of the line, but you're actively encouraged to do so by authorities.  Relish the opportunity.

Tuesday, September 24, 2013

Today's most ridiculous news story

Somehow, it wasn't anything Ted Cruz said.

No, it's this:
The Federal Open Markets Committee released its decision not to taper bond purchases last Wednesday at exactly 2 p.m. eastern time. The news should have taken 7 milliseconds to travel at the speed of light from Washington to Chicago. But Chicago markets began moving in response to that news only 3 milliseconds after 2p.m. Is news traveling faster than the speed of light now? 
The time difference at stake is way less than a blink of an eye (it takes a person 300 or 400 million milliseconds to do that), but in trading, it matters. Trading computers are so smart now that you can plug high-speed data feeds into them, and the computers will "analyze the news as it comes in and execute pre-programed trading strategies," CNBC reports. Again, in Chicago, trades shouldn't have happened until 7 milliseconds after 2p.m. But Nanex Research, a firm that "has been at the forefront of discovering market abuses, particularly in the area of high-frequency trading," discovered a buying increase in the eMini futures market in Chicago just 3 milliseconds after 2p.m. Gold futures started accelerating just 1 or 2 milliseconds after 2p.m. 
The Fed is now investigating whether or not a media organization may have leaked the "no taper" news early. CNBC reported on Tuesday on the conditions surrounding the embargoed news — reporters were literally locked in a room in Washington at 1:45p.m. on Wednesday. The reporters got copies of the Fed's statement at 1:50p.m., but were not allowed to breathe a word of it until 2p.m. (according to the national atomic clock in Colorado).
I was part of a pretty long debate today about whether this is something we should care about.  And I really just can't see how it is.  It's definitely true that the Fed's announcements are important for the market; billions of dollars are riding on the FOMC's decisions.  It's also true that enough money is on the line to encourage Wall Street to take some pretty ridiculous measures.  But what's that to us?  For everyone who isn't a giant firm, the whole exercise is pretty aggressively zero-sum. So if banks want to invest tens of millions of dollars in setting up communications systems that will transmit the Fed's decisions back to their trading computers and make trades at speeds that approach the absolute physical limit, well, so be it. Why should we pay any attention to this absurd game?

And building from that, why should the Federal Reserve, a government body, be so invested in setting up safeguards that ensure a level playing field?  All the security measures--reporters must remain in a locked room, unable to speak, until a precise moment determined by an atomic clock--are disturbingly solicitous of the interests of a handful of Wall Street banks.

Lots of market-moving information is communicated by the government in lopsided, asynchronous ways.  The best example of this is Supreme Court decisions, of which there are dozens every year with potentially major economic consequences.  Despite this, you don't see some sort of strange system where all the Supreme Court reporters are lined up at the starting line, waiting for Roberts to fire a pistol and signal the precise moment everyone is allowed to call the networks.  Instead, everyone piles into a courtroom, the justices read an opinion, and if someone figures out the holding first, so be it.  If they screw up and report the wrong result, that's also something that can happen.  In the end, it doesn't matter to America whether Trader A or Trader B gets an arbitrage opportunity, so long as someone takes it eventually.  The market will take care of itself.

And I mean, where does this end?  Like, have we factored in the speed of sound? The time it takes for a reporter's voice to travel to a phone receiver might disadvantage the reporter vis-a-vis another reporter who, say, uses a series of light signals and video chat.  What about reporters sitting towards the back of the room?  Their signals have to travel further than someone communicating over a shorter length of wire!  Yes, the difference is minuscule... but once you're talking about the amount of time it takes light to travel from DC to Chicago, stupid little things like this become relevant.
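
For a sense of scale, here's a quick back-of-the-envelope script.  The distances are rough guesses on my part, and real networks add routing and equipment delays on top of these physical floors (which is presumably where the quoted 7 milliseconds comes from):

```python
# Back-of-the-envelope signal timings; the distances are rough estimates.
C_VACUUM_KM_S = 299_792                # speed of light in vacuum
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3   # light in optical fiber, ~2/3 c
SOUND_M_S = 343                        # speed of sound in air

DC_TO_CHICAGO_KM = 960                 # straight-line distance, roughly

print(f"DC to Chicago, vacuum: {DC_TO_CHICAGO_KM / C_VACUUM_KM_S * 1000:.1f} ms")
print(f"DC to Chicago, fiber:  {DC_TO_CHICAGO_KM / C_FIBER_KM_S * 1000:.1f} ms")
print(f"Voice across a 10 m room: {10 / SOUND_M_S * 1000:.1f} ms")
```

That works out to about 3.2 milliseconds in vacuum and 4.8 in fiber, while a voice takes roughly 29 milliseconds just to cross the lockup room--several DC-to-Chicago light trips' worth of time surrendered on every announcement by whoever sits in the back row.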

To me, the fact that the Fed is wringing its hands over this stuff demonstrates badly misplaced priorities.  Because really, what's the philosophy behind doing it the way we currently do it?  Fairness?  It's true, the current system does make sure everyone has an equal opportunity to arbitrage FOMC announcements--provided, of course, you've got a bleeding-edge supercomputer and a billion dollars in investment capital.  I'm sure some bank would cry foul if FOMC decisions were reported by, say, Bernanke standing in front of a TV camera and reading off a sheet of paper.  It's not fair, they'd moan.  Citibank's guy was standing closer and heard first, and those .0215 seconds made ALL the difference.  To which we ought to respond, So friggin what?  This is the game you got into, and you're not allowed to complain if you lost.  Your stupid play, your money, your problem. 

The Star Tribune thinks this video will make you into an Islamic radical

Here's how the Minneapolis paper describes an August video aimed at recruiting Minnesotans to the Somali terrorist group Al-Shabab:
The 40-minute recruiting video is as slick as any marketing tool for the twenty-something male demographic: patriotic music, commitment to a greater good, guns, even the promise of enjoying life in “the real Disneyland.”
Skim through it and judge for yourself:



Did you somehow resist the call to jihad?  Or are you booking a flight to Mogadishu as we speak?

I probably should have something more sophisticated to say about this, but, honestly, the main thing I'm feeling is embarrassed for my hometown paper's utter cluelessness.  Maybe Al-Shabab will continue to draw recruits from Cedar-Riverside, but I'd hesitate to ascribe their success to marketing savvy.  But what do I know?  I'm a twenty-something male; play a little patriotic music and my brain turns to mush.

Monday, September 23, 2013

Slate redesign redux

Thinking a little bit more about why the new front page is such a mess, the biggest problem is that there's no discernible hierarchy of importance.  This is particularly frustrating as you travel further down the page.  Years of reading newspapers have taught us that the most important stuff A. gets the biggest headlines, and B. is placed at the top of the page, meaning that it's possible to read a publication at a desired level of detail.  If you've only got a few minutes, or just don't care that much about the publication in question, you can peruse the top-level stuff; the longer you've got or the more interested you are, the further down the page you can travel.

But the Slate redesign turns all this on its head by placing big features, with pictures and everything, at seemingly random intervals on the page.  As my earlier post tried to demonstrate, there's no obvious throughline that lets you skim the page and get the most important stuff.  There are some hierarchical elements, in the form of lists (most recent, most shared, most viewed, etc.), but these are also scattered seemingly at random around the page; their relation to each other isn't clear.  Making matters worse, advertising panes are intermixed with content panes quite freely; the only way to know what not to click is to actually read the ads and recognize that's what they are.  This is consistently irritating and feels like a waste of time, though I'm sure advertisers themselves prefer it.

All in all, it takes a process that used to be quick and frictionless--check Slate.com for three seconds, see if there's anything of interest, move on--and makes it quite a bit more labor-intensive.  I'm sure the web designers hope this will encourage people to spend more time browsing the page, but if anything, the increased time cost of visiting the site will probably result in me checking it less habitually.  Just anecdotally, there's a bad precedent here: a year ago, The New Republic went through a startlingly similar redesign, changing from a newspaper-esque format to a Web 2.0, super-tablet-friendly format much like Slate's; the consequence is that TNR changed, overnight, from a site I visited daily to a site I visit maybe once a month.  That's what happens when you value form, and faddish web design, over function.

GTA V's artistic achievement

Leigh Alexander has a lot of smart things to say about GTA V, but she ultimately gets sidetracked by the GTA-that-was and misses what's exhilarating about the GTA-that-is.

She's making a common mistake: the old GTA was a parody of modern society, but the joke's wearing thin.  A lot of people think that means the series is on the downslope.  They're wrong.  It no longer needs to be about tearing through a satirical America as an amoral force of nature.  The achievement of the GTA series, circa 2013, isn't the crime-movie storylines or the lousy, juvenile humor, or even the transgressive freedom it gives you. As Alexander notices, the game doesn't actually seem especially interested in giving the player complete, unrestricted freedom these days.  Even its signature free-form crime-sprees are toned down: you can still behave like an abhorrent, murdering psychopath if you please, but the game actively works to guilt you out of it, and, for the first time ever, running from the police will trigger a heavier crackdown, a process which can pretty quickly spiral out of control (much like real life), meaning it really behooves aspiring bandits to keep a low profile unless they have some sort of express criminal objective (again, much like real life).

But that's all okay.  Because what's notable about the modern-day Grand Theft Auto series is the really, really remarkable sense of place that it creates. Making a fake place seem real--particularly one that's as wide open as GTA's world--is something that's exceptionally difficult to do, but is also something that's easy to overlook if you don't play a lot of video games. The old GTAs were impressive in their time, but large chunks of them felt artificial--stuff wasn't laid out quite right, buildings were too close together, too small, too big, whatever. The world was impressive for a video game but it still felt like a video game. GTA V, more than any game I've ever played, almost certainly more than any game ever made, feels translated directly from the real world, a feat which is all the more impressive because its Los Santos is, at best, only an echo of a real city. It's an incredible display of observation and artistry, building something in two years out of whole cloth that can be mistaken for a city built by millions of people over centuries.

People get fixated on the game aspect of the game and miss this. If GTA V's Los Santos were a sculpture, no one would overlook the staggering artistic achievement on display. But it is a sculpture, just one that exists in a conceptual space instead of a physical space. And it's more than a sculpture--it contains a multiplicity of systems, designed to emulate, as perfectly as possible, the real rhythms of modern life. Traffic flows. Pedestrians wander around, talk to each other. Yesterday I saw a traffic accident in the middle of an intersection; cars stopped and a few minutes later an ambulance arrived.  And I saw it in the game, too.

The recreation goes beyond the visual or the tangible. GTA V simulates an entire social network. Everything that happens in the game is continually commented upon in the game's fake social networks; make someone angry and you can go read them tweeting about it. Where a normal narrative concerns itself only with the things that happen directly in the audience's view, GTA continually updates a web of optional interactions, ancillary to the main plotline, that do nothing but maintain the illusion of a real, invisible set of social relationships.

Consider: one of the game's main playable characters is Michael, a retired mobster attempting to salvage a rocky relationship with his children. Playing temporarily as one of Michael's younger criminal associates, on a whim, I called Michael's son, Jimmy. Jimmy confessed to me something he'd never say directly to his dad: that he was worried about his father, and was particularly worried about his reinitiation into the criminal fraternity.

Think about that: this is a completely optional, easily overlooked moment. There was no particular reason to call Jimmy at that moment, and there were a dozen other people I could have called instead. I bet 98% of GTA players never make that call, never find out that Jimmy cares more about his dad than he lets on, or that he feels more comfortable opening up to the younger gangster than his family. And there are no doubt a thousand other, similar interactions I've missed. They're out there; in some sense, they exist, even if I don't see or explore them, alongside the vast swathes of physical geography in the game that I'll probably never encounter.

So yeah, GTA V is a lot of things, but it's not disappointing.  Compared to a lot of art in a lot of mediums, GTA V might fall short--but that's only because there's no good word for the medium it's still inventing.

The Slate redesign: a reading journey

I'm sure Slate's new look has been carefully tailored to look pretty on tablets and garner the absolute maximum number of links, but it sure fails on the "not giving readers a headache" front.

Observe: one man's journey through the front page.



FIN.

Sunday, September 22, 2013

Krugman on health care costs

You should check out this excellent blog post from Paul Krugman, which, in a few short paragraphs, distills out some of the fundamental questions affecting American health policy:
Everyone who’s serious about the budget realizes that to the extent we do have a long-run fiscal problem — which we do, although it’s far from apocalyptic — it’s mainly about health care costs. And then there’s much wringing of hands about how nobody knows how to control health costs, so maybe we should just give people vouchers, and if they still can’t afford insurance, too bad.  
 Meanwhile, we have ample evidence that we do know how to control health costs. Every other advanced country does it better than we do — and Medicaid does it far better than private insurance, and better than Medicare too. It does it by being willing to say no, which lets it extract lower prices and refuse some low-payoff medical procedures.  
 Ah, but you say, Medicaid patients have trouble finding doctors who’ll take them. Yes, sometimes, although it’s a greatly exaggerated issue. Also, middle-class patients would surely be unhappy if transferred from the open-handedness of Medicare to the penny-pinching of Medicaid.  
But the problems of access, such as they are, would largely go away if most of the health insurance system were run like Medicaid, since doctors wouldn’t have so many patients able and willing to pay more. And as for complaints about reduced choice, let’s think about this for a moment. First you say that our health cost problems are so severe that we must abandon any notion that Americans are entitled to necessary care, and go over to a voucher system that would leave many Americans out in the cold. Then, informed that we can actually control costs pretty well, while maintaining a universal guarantee, by slightly reducing choice and convenience, you declare this an unconscionable horror.
Of course, the underlying point here is one that anyone with even a vague awareness of health policy has known for a long time: health care spending isn't driven by who gets care so much as it's driven by what kind of care they're getting.  As a result, expanding access doesn't fix cost growth--or necessarily worsen it, either.  The bad thing about this is that, as Krugman says, it forces us to make genuinely hard choices about rationing care.  The nice thing about this is that it frees us up to listen to our consciences when confronting questions of access.

Tuesday, September 17, 2013

Some stray thoughts about GTA V

With every iteration, Rockstar's worlds become more real.  GTA V is no exception.  It's not the big stuff--I haven't had time to see the big stuff--but the little things, like how the underside of a freeway overpass, or an alleyway closed off with a chain-link fence, feel a little unsafe.  At one point I saw a white building across a wide, slightly unkempt plaza, and immediately knew it was a government building.  On closer inspection, it was a county courthouse.  It's a tiny thing but a remarkable achievement: capturing a place so well, on so many levels, that it bypasses the need for exposition or explanation and feels like a direct extension of the actual world.

It's stuff like that which makes playing Grand Theft Auto such a joy.  I'm weird like this, but I invariably think the first few hours of a new GTA game are the best.  That's because the game usually has you doing small stuff: go to the store, buy some things, climb over a fence to steal a bike.  There aren't huge shootouts, there aren't explosions, there aren't over-the-top antics; I spend most of my early time on foot, walking down city streets, getting lost, getting hassled by bums.  This has the counterintuitive effect of making the game seem enormous.  It's a sensation that's difficult to describe: fiddling with tiny details in one corner of a world that goes on practically forever.  Games feel small when you hit their edges, which are usually hidden just out of view.  Here, there isn't an edge to be found.  GTA isn't the only series that starts you off, say, walking to your friend's house, but it's the only one where, instead of walking to his house and watching television, I could, if I wanted, keep walking, for hours and hours, eventually out of the city, past beaches and forests and small towns and national parks, watching the people and topography change, until eventually I ended up somewhere every bit as real as where I started, but totally different.

Unfortunately, a lot of people find these relatively sedate opening chapters boring.  They want to get right to the good part, which, for most, it seems, isn't appreciating the incredible world-in-a-bottle that Rockstar has made, but shooting lots of police officers, stealing their cars, and smashing them into other cars.  Frustratingly, Rockstar has listened to these people this time around.  So the game's prologue is a bank heist, resulting in a shootout with dozens of police.  The third mission also results in a ridiculous shootout.  As always with the current-gen GTAs, mass murder strains uneasily against Rockstar's narrative aspirations; for instance, the latter shootout is precipitated, absurdly, by the player's ill-advised attempt to repossess a motorbike.  After the player and his buddy are cornered by the bike's owners, the buddy draws a pistol and a major battle erupts.  The player's character quite reasonably shouts that "It's only a bike!" and "I'd rather have gotten beaten up than gotten into a gunfight!" but I don't expect these tossed-off appeals to common decency and common sense to prevent similar slaughters from recurring dozens of times over the course of the game. I find it frustrating: is the world so dull that, after simulating it in intricate detail, we only want to shoot it up?

Stop all the networking

The last few months of being a thoroughly jobless individual have raised my awareness of some of the weirder features of the culture of unemployment, circa 2013.  Among these, perhaps nothing is as weird as the obsession with networking, as exemplified by the totally ubiquitous, and, to the best of my knowledge, utterly useless LinkedIn.

I didn't even have a LinkedIn account until relatively recently, when I was told in no uncertain terms by a former professor that I must get one, shaming me into creating a profile.

But I still don't really understand what's happening here.  On my profile is my résumé.  While there's nothing wrong with my résumé, and I'm more than willing to show it to anyone that wants it, I can't think of a circumstance where I felt that someone wanted to see my résumé, but wasn't able to.  In fact, I suspect that I, like most job-seekers, have the opposite problem: I frequently want to show people my résumé, but no one really cares to see it.  So why does it need to be posted on the internet, again?

My intuition that LinkedIn isn't really helping me get a job was all but confirmed by Ann Friedman's excellent recent article for The Baffler.  It casts the site as, in essence, a slightly scammy attempt to promote useless self-help pieces:
LinkedIn’s architects are self-aware enough to know that, even in the age of social-media following, some of us must be leaders. In October, the site enabled users to “follow” a handpicked set of “thought leaders.” LinkedIn has given this “select group” permission “to write long-form content on LinkedIn and have their words and sharing activity be followed by our 187 million members.” So far, 190 leaders have made the cut. The “most-followed influencers” are familiar names to anyone who’s ever killed time in an airport bookstore: Richard Branson, Deepak Chopra, Arianna Huffington, Tony Robbins...
[M]ost of the thought-leading counsel on offer at LinkedIn boils down to search-engine-friendly, evergreen nuggets of business advice. An article titled “Three Pieces of Career Advice That Changed My Life” is illustrated with stock photos showing street signs at the corner of “Opportunity Blvd.” and “Career Dr.” At this very promising intersection, LinkedIn CEO Jeff Weiner explains that readers can do anything they put their minds to, that technology will come to rule everything, and that changing lives is a better goal than merely pushing paper around. This set of warmed-over management nostrums is one of the all-time top five “influencer posts” on the site.
Really, though, there's a bit more to LinkedIn than that.  The site predates its "thought leader" pieces, and the networking craze predates LinkedIn.  The idea that networking is the foundation of the job search is endemic in university career offices, for instance.  My law school sponsors an unending stream of networking events, where desperate students can rub elbows with employers.

What's strange about the emphasis on networking is that it leaves out an incredibly important step: the part where someone gets hired.  It's employment through osmosis: it relies on the inexplicable logic that the knowing of people with jobs will eventually transmute into a job itself.

How that happens, no one seems to be able to say.  I know because I've asked.  Career specialists who give very clear, very straightforward advice on where to go, what to say, and how to act in order to make connections start speaking in vague generalities when you ask them what to do with those connections.  Massage them, apparently.  It's the final and most crucial step in the process, which is a very odd time to suddenly get circumspect.

It's not really their fault, though.  The right way to get hired is obvious, but as the Friedman article intimates, no one wants to hear it.  The most direct and reliable (though by no means foolproof) route to employment remains the same as ever:
  1. Find an employer with an available position.
  2. Apply for that position.
  3. Be the most qualified applicant and give a good interview.
The problem with this, from a career advice perspective, is that it places a lot of importance on factors outside the job-seeker's control: employers, competing job-seekers, macroeconomic policymakers, etc.  That's not very heartening for people who want jobs, and not very helpful if you want to sell services teaching those people how to improve their employment prospects.  As a result, the career industry has placed undue emphasis on a handful of factors that influence hiring indirectly, factors that only marginally increase the odds of finding a position.

Over time, the industry's laser focus on factors within the job-seeker's control has taken on a life of its own, with huge expenditures of time and effort on almost comically narrow aspects of the process.  Consider résumé writing, something for which you can currently find services, courses, and other guidance.  Lost in the barrage of tips, tricks, and techniques is the simple reality that a résumé is just a list of qualifications, and it's the qualifications that are ultimately being judged, not whether the applicant has correctly performed a ritual of composition.

LinkedIn and networking services are just another manifestation of this.  Making connections--particularly offline connections--isn't useless, of course, but it only helps at the margin, and like most marginal improvements, there are diminishing returns.  A person with 900 LinkedIn connections isn't a compulsively employable juggernaut whom employers turn away at their own risk.  In fact, at some point, one has to question whether the time spent getting to know potential employers or finding job leads might not have been better spent just learning some substantive skills.

But to a person who wants a job in an economy where jobs are hard to come by, learning substantive skills  feels, ironically, like a pretty circuitous path to employment, and one with little guarantee of material reward.  By contrast, networking feels like leaving no stone unturned.  Small wonder that so many pursue it so relentlessly.

(As an aside, there are a lot of parallels in this respect between the career industry and the personal finance industry.  As described in Pound Foolish, Helaine Olen's indictment of personal finance, most people's long-run financial outlooks are determined by large-scale factors thoroughly out of their control: market performance, national employment trends, the inner workings of the pension and retirement industries. But that's discomfiting to average Americans with no control over the financial landscape, leading to the emergence of a booming advice industry, with endless financial self-help books, gurus, and courses. In these, investment celebrities like Suze Orman preach, essentially, a variant of the prosperity gospel: be smart with your money, and you will inevitably prosper.  The problem is that even smart use of retirement accounts can't save you when the stock market tanks six months before you retire, so in many cases this advice amounts to little more than a mystical belief that financial prudence will result in karmic rewards.  Even in optimistic cases, the cost of the advice will usually outstrip any predictable gains it could produce, and even the best advice can't protect working men and women from economic calamities outside their control.)

Monday, September 16, 2013

Well this is terrible

Ever wonder what it would be like if a sinister-looking clown were wandering your neighborhood, standing menacingly in the street and looking into your house?  Northampton, England doesn't have to wonder, because someone is doing that and blah blah blah look you can get the details here but all you really need to see is this picture.  LOOK AT THIS PICTURE:


I've got nothing to add here but "Oh my god no no no no no no no."

Sunday, September 15, 2013

A deep comic malaise: the coming viral video crisis

My fellow citizens.

Hours ago, The Fox by Ylvis reached 28.4 million Youtube hits.  In doing so, it surpassed one of the all-time greatest veterans of Youtube--the infamous Star Wars kid.

According to current projections, The Fox will eclipse Double Rainbow's 38 million hits in slightly over 72 hours.  From there, we can only speculate about its trajectory.

The Fox is only the latest example of a growing danger to the enjoyment of internet videos: resource depletion.

In the early, heady days of viral internet memes, hilarious content seemed inexhaustible.  The internet was the proverbial horn of plenty, capable of producing comic material to satisfy the needs of all humanity.

We know now that this early understanding was only a mirage.  It was the combination of two trends, which are both reaching their natural conclusion.

First, viral video pioneers had access to an enormous backlog of pre-internet materials.  Home video tapes, old infomercials, long-forgotten films, all could be strip-mined for comic value.  This created an exaggerated impression of the availability of movies of funny things happening.

But inevitably that stockpile is being exhausted.  Already the amount of pre-internet material reaching the public's consciousness is shrinking.  More cannot be produced.  One day we will simply run out.



Second, inefficient distribution has helped create a false impression of abundance.  Initially, videos were circulated by literal word of mouth, or shared on file-sharing sites.  The process was slow.  Only after years did the joke begin to wear out.  Star Wars Kid was posted on Youtube in 2006, averaging only four million hits a year.

But the process began to accelerate.  Double Rainbow was posted in 2010 and has averaged almost ten million hits a year, peaking early on with six hundred thousand views per day.  According to top scholars, "Double rainbow what does it mean" had been drained of all comic value by early 2010.  The phrase was rapidly retired, and today only persists in needy populations, like high school Facebook friends or your grandmother.

And distribution continues to accelerate, dramatically.

Three days after it was posted, The Fox received four million views in a day.  If current trends were to hold, by the time Christmas arrives, it will have as many views as there are people living in America. At that point, "What does the fox say?" will inspire only apathy, and perhaps despair.

In some ways, this alarming trend is a triumph of American ingenuity.  We have developed ever more centralized technologies for the sharing of viral content--first Youtube itself, then Twitter, then Facebook's news feed, and finally, Buzzfeed.

But as a consequence, the days when a group of friends would gather around a computer, sharing their favorite hilarious clips, are long over.  Today, within hours of first appearing, a video is wrenched out of its natural habitat, and, after being processed through a long chain of distributors, served to millions of viewers. Industrial-scale exploitation of the video continues relentlessly, with no respect for the integrity of the video's comedy, until the source is exhausted and discarded.

As this process grows ever faster, and our reserves diminish, a dark day approaches: the moment when our capacity for viewing internet videos will outstrip our ability to find or create them.

When that day finally comes, who knows what effects it will have on American society?  We risk entering a period of extended comic malaise, in which truly enjoyable memes appear only sporadically, and are eagerly pounced on by the ravening masses, dissected and rendered inert within hours.

How will we compensate for this sudden decline in our quality of life?  Perhaps we will warm ourselves by the light of artificially-manufactured substitutes, dreamed up by late-night comedians to fill the void in our hearts.  Maybe we'll settle for a diminished quality of viral content, subsisting on half a loaf where we once had a whole one.

But rest assured, my fellow Americans, unless something is done, this crisis is coming.

Many of the nation's top thinkers are working on this problem.  For years, they have been experimenting with methods for extracting more efficient comedy from smaller amounts of precursor material, and while results have been mixed, there is some cause for optimism.

I'm sure at this point many of you are wondering if you can do your own small part to help out.  You can, by carefully considering any undiscriminating use of Facebook, Twitter, and other sites.  Oversharing on these networks speeds the crisis, and further inflates the fortunes of those who would thoughtlessly burn through our viral resources. We must collectively guard against the acquisition of unwarranted influence, whether sought or unsought, by the meme-industrial complex.

It won't be easy.  The strength of America is in the inexhaustible resources of its people, not its exhaustible reserves of cell phone videos.  By working together, combining American ingenuity with its tradition of self-reliance, we can build a bright, sustainable future, one where our children, and our children's children, can enjoy watching internet cats as much as we have.

Thursday, September 12, 2013

The ghost of the patriarchy

The patriarchy has been a hot topic lately--first with this Hanna Rosin post claiming that it's dead, and now with Matt Yglesias's response, saying that, if it's dead, there sure are a lot of men still running things.

Yglesias points out that men control the vast majority of the business world; for instance, 95% of Fortune 1000 companies have male CEOs.  (He leaves this out, but men also dominate politics, composing 83% of Congress and presiding over an amazing winning streak of 44 straight presidents.)  He argues that it doesn't really matter whether discrimination is the cause of this imbalance, because men are the de facto leaders of society either way.  

He might be right about that, but I'm still wary of his approach: citing gender disparities at the very top of our social institutions without evidence that those disparities are creating problems for women somewhere further down the line.  The problem is that there are at least two mechanisms which will tend to concentrate the effects of discrimination and other obstacles for women at the top of the professional ladder, such that, even if the patriarchy is truly on its last legs, its lingering remnants will be most visible at the apex of the political and business worlds.  

Mechanism one: the composition of the top tier of society is a lagging indicator, reflecting social attitudes from previous decades as much as attitudes from today.  

The key here is that leaders (business, political, or otherwise) are rarely selected from a pool composed of everyone in society.  Instead, candidates for these positions are selected from a class of notables situated just below the top.  The people in that class are selected from a group below them, and so on.  Climbing all the way up the ladder usually takes half a lifetime or more.  

As a result, when the groups near the top are composed primarily of men, that, to some extent, is because the people entering business or politics thirty years ago were primarily men.  Whatever challenges women face today, the obstacles confronting them were far more severe in the past.  And since the highest levels of professional success often take the longest to reach, they reach further into the past than any other.

Mechanism two: small pressures against promotion can compound over multiple tiers of advancement.  Even if any given woman at any given level of professional success is only mildly disadvantaged in comparison to her male counterparts, the actual proportion of women will shrink at each level.  

I threw together a quick example in Excel.  It's artificial, but it should illustrate my point.  We start with ten million people--let's say the population of a whole state.  Ten percent of the population then advances to the first tier of achievement; ten percent of that group advances to Tier 2, and so on.  Women's opportunity for advancement is slightly less than if they were being selected at random: they only have a nine percent chance of doing so.
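
Here's the gist of that spreadsheet as a short Python sketch.  One caveat: the men's advancement rate below (eleven percent, so that the overall rate stays near ten) is a modeling assumption, and the exact split at the top shifts a few points depending on how you set the rates and the number of tiers:

```python
# Toy model: a two-point disadvantage per tier compounds across tiers.
# Assumptions: a 50/50 starting population of ten million; at each tier,
# women advance with probability 9% and men with 11%.
women, men = 5_000_000.0, 5_000_000.0

for tier in range(1, 7):
    women, men = women * 0.09, men * 0.11
    total = women + men
    print(f"Tier {tier}: ~{round(total):,} people, {men / total:.0%} male")
```

The share of men climbs from 55 percent at the first tier to nearly 80 percent by the time only a handful of people remain.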



By the time you've selected the top ten most successful members of the population, they're 73% male.  Depressingly, that's still better than real life, although real life probably has more than seven levels and this demonstration makes no attempt to incorporate the historical factors above.  But the point should be clear: you can't really determine the degree of the pressures against women simply by looking at how many women have been promoted to the highest levels; indeed, you're much better off looking at pressures felt at the very bottom.  

These two points can probably be read as both support for Matt's argument and as a mild critique of it.  On one hand, they demonstrate how important it is, if we want society to be truly equitable from top to bottom, to purge even mild bias from our institutions.  Small pressures at the bottom of society can have major impacts at the top, and the effects of discrimination can long outlive discrimination itself.  Even small factors, like mostly-male boards of directors, might have more of an effect than we'd think.

On the other hand, we do need to be careful when interpreting one-dimensional statistics like "The number of CEOs who are women."  Those statistics might remain unpromising long after the ground has shifted beneath them, in favor of more equitable advancement for women.  Politicians and CEOs can't really be used as shorthand for the rest of America, and, ironically, the average working woman might find that her workplace is a lot less imbalanced than the average female CEO's.  Even if the patriarchy is dead, its ghost might live on for a span, in corporate boardrooms and the Senate.  

Because of this, if we want to make the argument that the gender imbalance at the top of society is not evidence of a patriarchal order, but has patriarchal effects, it needs to be made directly.  As a personal aside, I actually find that case pretty intuitively compelling, but recognize that it's significantly more difficult to demonstrate empirically.  It's hard to say whether a society with a greater number of powerful women would work much differently from our own, because we've never had one.  But for the same reason, it remains very plausible that it might.

Monday, September 9, 2013

The problem with public polls about Syria

It's Iraq.
A new Associated Press poll shows a majority of Americans oppose a U.S. strike on Syria, despite a weeks-long Obama administration campaign to respond to chemical weapons attacks the U.S. blames on President Bashar Assad's regime. Most of those surveyed said they believe even limited U.S. attacks — as President Barack Obama has promised — would lead to a long-term commitment of military forces in Syria.   
That's from here.

There are lots and lots of reasons to be skeptical of a Syria strike, but the danger of the U.S. getting bogged down in a long-term engagement has to be low on the list.  Absolutely everyone, top to bottom, thinks that a lengthy commitment in Syria is a terrible, horrible idea.

Nonetheless, the possibility looms large in the public consciousness.  It's availability bias: when people think of war in the Middle East, and when they hear their leaders reassuring them that an operation will be quick and painless for Americans, they immediately think of the disastrous Iraq War.  Nevermind that the last time this debate played out, things went almost precisely according to plan.  Nevermind that there are countless important distinctions between an unprovoked invasion of Iraq and an air campaign to chastise the use of real, honest-to-god WMDs.  The collective opinions of large masses of people are only able to incorporate a certain level of nuance; apparently not enough to differentiate between Syria and Iraq.

These polls aren't meaningless.  Politicians have an obligation to faithfully represent the views of their constituents, even if their constituents are wrong.  Beyond that, the political climate plays an important role in the politics of the Syrian civil war, making Obama's job harder and giving Assad a bargaining chip, and maybe even a sympathetic ear.  But it's clear that polls like this are gauging past events more than they're reflecting future possibilities.  

Did John Kerry accidentally prevent the war?

The latest news on the Syria front is that Russia and Syria are considering a deal with the U.S. wherein Syria turns over its chemical weapons, and the U.S. keeps its planes and bombs on the ground at home.  I really hope this happens, for a variety of reasons:
  1. It would prevent the U.S. from getting involved in an ugly war, saving lives, money, and the like.

  2. It would make me look really smart, because I predicted it.
Really, I did!  Look at my post from August 31st:
A couple important elements of this plan are getting widely overlooked. One is time. Congress won't vote for at least a week, which means military strikes aren't going to happen for a while... That gives Assad a chance to react, which isn't an entirely a bad thing. Knowing he's in a bad spot, he's got a chance to forestall action against him by changing his current war footing--no compromise, total war--to something more acceptable to the United States. 
...
No one can question anymore that Obama is willing to attack Syria, even if an attack on Syria may or may not happen. And there's at least a good chance that it'll end with strikes being authorized. For Assad, it's not a shot across the bow, it's Russian Roulette, and the barrel is spinning. It's clever, because it demonstrates Obama's resolve without firing a shot. Combine that with the timing issue above, and it might force concessions from Syria without actual war.
Of course, before I get too big on myself, we have to remember that these negotiations could still go anywhere, or nowhere.  Syria hasn't proven itself the most reliable country to work with, and Assad is clearly under a lot of different pressures at the moment.  Russia is famously difficult as well.  The Syrian government obviously thinks its chemical weapon stockpiles provide it with some advantage, or it wouldn't have accumulated them; the idea that it would freely relinquish them in the middle of an existential conflict seems a little far-fetched.

Another strange aspect of this development is that it appears to have been entirely accidental, arising from an apparently unplanned remark by John Kerry.

Now, that's not proof that it is accidental, because even if this were a carefully constructed gambit by the Obama administration, it would have to look inadvertent.  The reason why is simple: Assad's sincere belief that U.S. warplanes are on the way is creating pressure to negotiate.  If he discovers that the U.S. diplomatic strategy is a clever bluff intended from the outset to force a non-military settlement, he'd no longer feel that pressure, and we'd return to the "If you're going to hit me, then hit me!" standoff of two weeks ago.

With that said, if I were President Obama, and, having convinced the world that I wanted to bomb Syria, were trying to send out diplomatic feelers to Assad, a bumbling public remark from the Secretary of State isn't how I'd do it.  So I do think in all likelihood this sequence has just been a happy accident: the administration tripping into the opportunity it unknowingly created.  Still, worse things have happened.  Let's hope for a happy ending.

Friday, September 6, 2013

Why to be skeptical of a Summers Fed nomination

Wonkblog on Larry Summers:
At Harvard, Summers gave an infamous speech about women in math and science that came across as suggesting, as he later acknowledged, that the leader of one of the world’s top universities didn’t think women were quite as talented in these fields as men. His defenders argued he was just trying to stir intellectual debate and was actually making a narrower statistical point, raising the question of whether women were less likely than men to produce the very best (and very worst) mathematicians and scientists. Summers concluded he engaged in an act of “spectacular imprudence.”  
At the White House, he drove many colleagues crazy with endless debates. When he thought someone else had a bad argument, he’d declare, “You’ll get killed!” He made people think he favored nationalizing banks, when he didn’t. He wanted to shape not only economic policy but health care and energy. Some of his former colleagues say the style led to dysfunction. Christy Romer, former chairwoman of the Council of Economic Advisers, told me she’s not sure Summers has “the managerial skills and personality to win over a large committee” that is the Fed.
Here's the thing, though: the problem with Summers isn't the things he says.  It's the things he's done, or, to be more exact, the lack of them.  This is a man who spent most of his adult life at the absolute pinnacle of American society, teaching at and subsequently running Harvard, serving as Treasury Secretary, and advising the president during the financial crisis.  He's had endless opportunities to distinguish himself as a public servant.  He hasn't.  He's been okay, but I can't think of a brilliant policy idea that anyone ascribed to Larry Summers.  He's come across as clever but never as particularly prescient, deeply insightful, or even, despite his penchant for iconoclasm, an especially novel thinker.  He's just a really smart dude who made good, with the help of a very successful family.

Would he be okay as Fed Chairman?  Yeah, sure, maybe.  But is there anything to recommend him over the many, many thousands of other smart dudes (or dudettes!) in America?

There's another thing about Summers, too.  The traits described in the Wonkblog post are those of someone enamored of his own intelligence.  He's not wrong, but that's still dangerous.  Larry Summers might be a bona fide genius, but the world has a knack for humbling geniuses.  Policy on a large scale is frequently beyond the comprehension of the smartest economist; intelligence becomes a weakness when it convinces someone that they're seeing more clearly than the rest of us.

Policymakers ought to start with the mantra "Everyone's stupid, and most of all me." Summers' seems to go something more like "Everyone's stupid.  But not me."

Thursday, September 5, 2013

Can airlines survive a classless society? Maybe not. Can they survive without first class? Yes.

As previously established, this blog is firmly anti-price discrimination.  So it was with interest that I read this post from Rory Sutherland, defending price discrimination in airfares as a necessary evil, and encouraging other services to emulate airlines' practices.

Sutherland's basic point is this: by dividing airplanes into first-class and coach-class seating, airlines are able to attract wealthy, high-paying passengers who subsidize most of the cost of air travel.  As a result, airlines can sell tickets to the rest of us plebes at improbably affordable prices.  (Although he doesn't use the term, what he's describing is textbook price discrimination: selling more expensive tickets to people who are willing to pay more.)  He goes on to suggest that a similar two-tiered system could be replicated elsewhere; that we'd be able to provide more subways, school buses, and other niceties if society was a little less sensitive to class divisions in our services:
I sometimes suggest that we would similarly benefit from having different classes of travel on the London Underground. If the first two carriages in each train cost three times as much as the others but offered free Wi-Fi, and were furnished not with basic seats but with the sumptuousness of an Edwardian-era New Orleans brothel, you could afford to run more trains. Almost everyone finds this idea repellent. 
But I’d like to issue a challenge to any libertarians, economists, ethicists or software gurus reading this column. How do you get people with wildly differing willingness or ability to pay to fund some common good other than through redistributive taxation?
He's right in part and wrong in part.  Building off Sutherland's point, Felix Salmon does better: he points out that price discrimination exists both between first- and coach-class customers, and among them.  Two people sitting in the same cabin might have paid wildly different amounts for their tickets.

To the extent that price discrimination is occasionally necessary, Felix gives a far better example.  Maintaining certain for-profit services might require a degree of "redistributive taxation" in the form of price discrimination, but that price discrimination can't be used to justify class distinctions in services.  Class distinctions simply wouldn't accomplish what Sutherland thinks they would.

This is the part where I use strained hypotheticals to show you what I mean.  (What can I say? Law school habits die hard.)

So imagine that you own an airline that flies three routes every day. It's expensive to provide those routes and you'll only make back your costs if you fill at least a third of them with high-paying customers.  Fortunately for you, high-paying customers like the flexibility your airline provides (three routes! Every day!), so each day all the rich people in town choose one of the routes and fly it.  You don't know in advance which one they'll pick, so you have to maintain all three.  And after they buy their tickets, you may as well sell tickets on the other routes to low-paying customers--you can’t get rid of the routes without losing your core clientele, and there’s no reason to not make an easy buck or two on the excess capacity.

In that situation, it can accurately be said that high-paying passengers are subsidizing the low-paying passengers.  Selling tickets to the rich is the airline's raison d'être.  The airline was built for them; the rest of us get to come along for the ride, which is ultimately good for us.

You might point out that this is highly unrealistic.  In real life, maybe you'd expect the profit-generating customers to split more-or-less evenly between the three routes.  But if that's the case, it's virtually guaranteed that a large portion of the capacity along each route will be used to fly loss-generating customers.  In this instance, you're better off just reducing capacity overall; i.e., buying smaller planes.

One way to think about this is to realize that in order for a rational airline owner to spend money providing any given seat, there must be some chance that a profit-generating customer will fill it.

Another way of looking at it is to realize that if the value of a service for profit-generating customers is its capacity, and if that required capacity is in excess of what profit-generating customers can consume, then you'd expect de facto price discrimination that benefits society at large.  By selling tickets at continually lower prices until all seats are filled, the airline ensures it is generating the maximum possible profit from its available resources.  Some of the seats might be sold at a loss--the airline would save money by not producing the seat and therefore not selling the ticket--but if the required capacity isn't created, no one, rich or poor, will buy a ticket.

It's not just transportation.  Salmon's post talks at length about price discrimination in the newspaper industry, which is a pretty good example of another service with similar characteristics.  A lot of a newspaper's credibility and currency are derived from the scale and scope of its readership; this is a big part of why, for instance, the New York Times is a better-regarded paper than the Toledo Blade.  But the number of profitable subscribers to the Times is far smaller than the number of readers required to maintain its current global reach.  So it makes sense to institute a system in which wealthy customers are encouraged to pay full price while other customers can avail themselves of bargains or visit the site for free or sneak past the paywall.

So far, so good.  But here's the problem with Sutherland's argument: this logic only works when the consumers of a particular service are undifferentiated.  When you know that a service is mostly attracting profit-generating or loss-generating customers, then the reasoning breaks down.

Think again about the three air routes above.  Now, let's imagine that our airline enhances one of the routes with luxury service, free drinks and food, spacious seating arrangements, and the rest.  In doing so, it hopes to consistently attract the rich passengers to that route.  That's all well and good--it's certainly the airline's prerogative to do this if it wants.  But since it now knows its profitable customers will end up flying the one route and not the two others, why even maintain the other two?  They're losing money!  It's better off scrapping them.

Alternatively, the airline could create two classes of seat on every plane.  One provides excellent service to the rich, and one provides minimal service for everyone else.  But again, if you know which part of your service will attract profitable customers, why provide the rest of your service to everyone else?  Once again, you ought to just buy smaller planes.
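
To put toy numbers on both halves of the argument--every figure below is invented purely for illustration--compare the airline's profits when it must maintain all three routes against its profits once it knows where the rich passengers will be:

```python
# Toy airline economics; every number here is invented for illustration.
ROUTE_COST, SEATS = 30, 10      # cost of flying one route; seats per plane
RICH_FARE, CHEAP_FARE = 12, 2   # high and low willingness to pay

def profit(routes: int, rich_seats: int, sell_leftovers: bool) -> int:
    """Profit from flying `routes` planes with `rich_seats` sold at the high
    fare; the remaining seats go at the cheap fare if `sell_leftovers`."""
    cheap_seats = routes * SEATS - rich_seats if sell_leftovers else 0
    revenue = rich_seats * RICH_FARE + cheap_seats * CHEAP_FARE
    return revenue - routes * ROUTE_COST

# Rich flyers demand all three routes but only ever fill one plane per day:
print(profit(3, 10, sell_leftovers=False))  # 30: the three routes barely pay
print(profit(3, 10, sell_leftovers=True))   # 70: cheap seats are pure upside

# Once the rich can be steered onto a single known route, the airline does
# best by scrapping the other two--and the subsidized cheap seats vanish:
print(profit(1, 10, sell_leftovers=True))   # 90: smaller operation, no cheap seats
```

The leftover-seat discounts make sense only while the extra capacity has to exist anyway; the moment it doesn't, the cheap seats disappear along with it.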

Or think about newspapers again.  If papers know that both classes of consumer are attracted to the whole product, then price discrimination sounds like a good idea.  But what happens if they discover that rich subscribers are mainly reading the real estate section, while poor moochers are mainly reading the funnies?  Why spend money producing the funnies?

In short, it's unlikely that a private firm would provide a service that it knows will be consumed at a loss.  This makes intuitive sense, it's worth noting.  It also shows the defect in Sutherland's reasoning.  In all of his examples, services are split into two tiers: a fancy service for the rich and a crappy service for the poor.  But as a mechanism for cross-subsidization, tiered service doesn't work--the service provider is always better off just providing the upper tier.

One might counter that Sutherland didn't mean that low-paying consumers are truly loss-generating. But if that's the case, it's hard to see what the benefit of price discrimination is to the rest of us.  If none of the tickets are in any respect losing money, the service provider can still profitably provide the same level of service with standardized (and for some people, lower) prices.  It's true that expensive tickets might help subsidize cheap ones while holding profits constant, but I'm dubious that this forms a great case for the social necessity of  price discrimination.

Of course, all this raises the question of why some services are tiered.  And there are a couple of answers to that.  First, there is still an element of price discrimination here--of the consumer-gouging kind.  Even if it's not strictly necessary to keep your firm afloat, price discrimination is still a good way to make a few extra dollars on the back of hapless consumers.  Second, I'm sure there's some profit in creating services tailored to certain needs.  Some people just want first-class seating; others want cramped, cheap seats; airlines cater to both in various degrees.  And within those categories, Sutherland's argument works just fine.  Airlines need a certain capacity of each seating class, and fill any excess with bargain tickets sold to low-paying customers.  The problem is that, while possibly necessary to sustain the industry, this practice is also necessarily invisible to consumers.

You might wonder why I care so much about this relatively esoteric point.  There are a few reasons.  First, as I've said before, price discrimination is actively harmful to consumers, so we need to be careful about rationalizing its widespread adoption.  Thinking through the problem in detail helps us see when certain forms of price discrimination are and are not necessary; I think we can safely say, for instance, that airlines can exist just fine without dividing their passengers into castes.  Besides, class divisions in society seem generally pretty corrosive, so arguments in favor of them ought to be tested.

But more broadly, I just don't believe policymakers think enough about the mechanics of consumer spending.  Companies sink tons of research and money into creating novel ways to squeeze a few more dollars out of each consumer purchase, and no one bats an eye.  People accept as an obvious truth that price discrimination and similar transactional innovations can change behavior on the producer side of the market.  But people rarely consider that these things have equal and opposite effects on consumers.  Depressed consumer purchasing incentives and reduced consumer surplus are every bit as real and worthy of discussion as sales tactics--so let's discuss them.

No, this was not the most successful summer in Hollywood's history

It was, you might have heard, a summer of discontent for movie studios, as audiences endured (or, rather, ignored) flop after flop, like R.I.P.D., Turbo, The Lone Ranger, and White House Down. Does this represent the death of something more than Ryan Reynolds's leading-man dreams? Is it the death of the entire modern Hollywood business model? "The summer-blockbuster strategy itself may have tanked," Catherine Rampell writes in the New York Times Magazine.
But Hollywood's summer blockbuster strategy -- essentially: adapt books, make sequels -- didn't really "tank." In fact, it was the biggest summer in history, in nominal dollars.
Relatedly, in nominal terms, the most successful movies of all time were probably the German Expressionist masterpieces of the early 1920s.  Unless Zimbabwe has a film industry I don't know about.

Wednesday, September 4, 2013

A bad prognosis for Syria

For a smart quantitative take on the effects of a Syria intervention, head over to the cryptically named bennstancil.com (presumably operated by one Ben N. Stancil).
[D]o existing civil wars become more violent after an intervention?  
The limited historical record indicates that it’s relatively rare - but it’s even less rare that they reduce violence. A conflict with an intervention was preceded by a civil war in 28 instances. In four cases, the conflict became more violent during the year of the intervention compared to the year prior. Twenty-three stayed the same, while only 1 declined. (Including conflicts that are already major conflicts in this sample is potentially problematic, however, because these conflicts have already reached this dataset’s upper bound of violence. For minor conflicts, 4 of 16 increased in violence, while zero decreased.)
...
Violence, however, isn’t the only measure of a conflict. There are other potential benefits to interventions, such as bringing the conflict to an end more quickly. From this perspective, history isn’t in favor of the interventionists. Only 8 of the 28 conflicts ended in two years following the intervention, with many lasting much longer (75% of those that endured lasted at least seven years following the intervention year). Furthermore, conflicts don’t appear to end more quickly if the intervener remains present either - in the 20 cases in which conflict endured, 10 reverted back to civil wars, and 10 continued as interventions, with roughly equal portions of each lasting seven or more years.
As the post accurately points out, this doesn't do much to establish causation.  It may be that intervening in civil wars makes them more violent, or it may be that foreign military powers are reluctant to get involved in any but the worst civil wars.  Either way, though, it's more hard evidence for what everyone already fears: the Syria situation is a disaster, and it's unlikely to get much better anytime soon.

Sunday, September 1, 2013

Free-to-play games are awful, and economics tells us why

Today, gaming website Kotaku published an article by a "free-to-play design consultant," defending his job and, more generally, the spread of free-to-play across the games market.  This gives me an excuse to address something I've been meaning to write about for a while: how free-to-play is strangling the gaming market.

Okay, not everyone who reads this blog plays video games--I'd hazard that most don't--so first, a quick explanation of what "free-to-play" means.  Commonly abbreviated F2P, these are games, always multiplayer, that are provided free of charge.  But players can make purchases within the game, unlocking new items, characters, levels, whatever.  A typical example is a first-person shooter where everyone can play with a little trifle of a pistol but you have to pay real-world money to use a big, impressive assault rifle.  Many variations exist--for instance, at least one free online game seems to have generated a small economy around players paying real-world money to give their characters a variety of silly hats.

Everyone hates it.  The practice is widely despised among gamers, and periodically articles appear attempting to convince them that this is irrational and that free-to-play is something they ought to embrace--to no avail.  The Kotaku piece above is just the latest in this proud tradition.

The gamers are completely right.

Their hatred of free-to-play is not only understandable, it's economically rational, and I'm going to explain why.  Be forewarned, this is slightly on the long side.  But I think it's worth reading if you're even slightly interested in games or economics.

Price Discrimination

Free-to-play games are a sophisticated, clever form of price discrimination.  Price discrimination, for the non-econ nerds out there, is the practice of charging each consumer what a product is worth to them.  It's a way for producers to extract the maximum possible value from the consumers of their product.

Let's use an example to see what I mean.  Imagine you go to buy a hamburger.  You're a little bit hungry, you'll pay three bucks for that burger, and you're in luck, because that's exactly what they cost.  Now another guy comes into the burger joint, and he's starving.  He'll pay a huge amount for that burger, up to fifteen dollars; to him, a hamburger is worth fifteen dollars.  If he pays three, like you did, he keeps twelve dollars in value (extra value known to economists as consumer surplus).

But the hungry guy also represents a potential economic opportunity for the restaurant.  If it knows he's hungry, it can charge him the full fifteen, retaining all the extra value for itself.  Let's not get into the ethics of this just yet: it's easy to see how this is an enticing proposition for the burger makers.  In fact, they'd love to be able to sell burgers to every buyer for the amount that buyer is willing to pay.  Three for you, fifteen for the hungry guy, maybe eight for the next customer, four for the one after that, and so on.  That technique, of charging everyone what they're willing to pay, is price discrimination.

Two important asides: first, you can see how, in practice, this is very difficult for sellers to do.  The burger maker has no good way of knowing what someone is willing to pay, unless the buyer is willing to say so upfront.  And even if the seller could figure that out, others in the market could undercut it.  For instance, you could buy your three-dollar burger, turn around, and resell it to the hungry guy for fourteen.  He'd get a relative bargain and you'd get an eleven-dollar profit, enough to buy a new burger and then some.

Second, the seller obviously doesn't want to sell to anyone for less than the cost of manufacturing the burger; it would be taking a loss.  That puts a price floor on the product, even if price discrimination is occurring.
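To see the accounting behind all this, here's a toy Python calculation using the burger numbers above, plus two invented customers and an assumed two-dollar production cost:

    # Willingness to pay for a burger, one entry per customer
    # (the three- and fifteen-dollar buyers above, plus two invented ones).
    willingness_to_pay = [3, 15, 8, 4]
    uniform_price = 3    # the posted price everyone faces
    marginal_cost = 2    # assumed cost per burger: the seller's price floor

    # Uniform pricing: each buyer keeps the gap between value and price.
    consumer_surplus = sum(w - uniform_price for w in willingness_to_pay)
    revenue_uniform = uniform_price * len(willingness_to_pay)

    # Perfect price discrimination: each buyer is charged their valuation.
    revenue_discriminating = sum(willingness_to_pay)

    print(consumer_surplus)        # 18: value kept by buyers at a flat price
    print(revenue_uniform)         # 12
    print(revenue_discriminating)  # 30: the seller captures the whole 18

    # Profits net of production cost (four burgers either way).
    cost_total = marginal_cost * len(willingness_to_pay)
    print(revenue_uniform - cost_total)         # 4
    print(revenue_discriminating - cost_total)  # 22

The eighteen dollars doesn't vanish under discrimination; it just moves from the buyers' column to the seller's, which is the entire point of the exercise.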

There are times when economists agree that price discrimination is actually helpful--for instance, in a market dominated by a single monopolist.  But this has led to the impression, among many economists and policymakers, that price discrimination is, on balance, a positive practice in virtually any market.  That's wrong, and it's wrong for exactly the reasons demonstrated by free-to-play games.

Price Discrimination and Free-to-Play

The structure of free-to-play games is entirely determined by very basic economics.  First, their free-ness: unlike hamburgers and other physical products, there is no "price floor" set by the cost of manufacturing each additional copy of a game.  The cost of an additional digital copy is very close to zero.  (There are, of course, initial development costs, but those are the same whether ten people or a million people play the game.)  As a result, game producers can price discriminate over the entire potential price range: from zero up to whatever astronomical amount players are willing to pay.

And hopefully, the example above helps illustrate why free-to-play games are a form of price discrimination.  Basically, the bottomless well of add-ons and upgrades lets a gamer who really likes a game spend a lot of money on it, while allowing a gamer who only kinda-sorta likes it to spend only a few dollars.  It's not perfect price discrimination--the two people are getting slightly different products--but it's pretty darn close, about as close to the abstract version of the practice as could conceivably be achieved in the real world.
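Here's a deliberately crude sketch of that self-selection--the price menu and valuations are invented, and real F2P pricing is far more elaborate--showing how total spending ends up tracking each player's valuation without the producer ever asking for it:

    # Invented menu of in-game purchases at escalating prices;
    # the first entry is the free base game.
    addon_prices = [0, 2, 5, 10, 25]

    def total_spend(valuation):
        """Buy each add-on while cumulative spending stays within
        what the game is worth to this player."""
        spend = 0
        for price in addon_prices:
            if spend + price <= valuation:
                spend += price
        return spend

    for valuation in [0, 3, 20, 60]:
        print(valuation, total_spend(valuation))
    # 0 0    the kinda-sorta player pays nothing
    # 3 2    a casual player pays a couple of dollars
    # 20 17  an invested player pays close to their valuation
    # 60 42  a devoted player pays the most of all

No one ever has to announce what the game is worth to them; the menu extracts it automatically, which is what makes this mechanism so much more practical than asking burger customers how hungry they are.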

Finally, gamers may have noticed that all free-to-play games are online games.  That's not a coincidence!  Remember the part above where I said you could turn around and sell your hamburger to the hungry dude for fourteen dollars?  That sort of arbitrage is no less possible in the games market--unless everyone has to buy the game in question from a centralized source.  That's exactly what online games provide: you're barred from playing on the game's central servers if you aren't paying the game's producer.

The astute gamers out there might also now be thinking of the other major form of price discrimination in the games market, which is no less common and no less hated.  That would be DLC (for the uninitiated: downloadable content, a euphemism for paid expansions to existing games--levels, items, and other new features--ranging from tiny, cheap, and cosmetic to sprawling and expensive).  Think about how DLC works: it's an add-on you buy for a game you really like, or in economic terms, extra money paid for a product the buyer highly values.  And in today's market, DLC is always bought directly from the manufacturer.  Even if I buy a used Xbox game from my friend, it doesn't come with his DLC.  Once again, this kind of has to be the case, because price discrimination doesn't work if there's a resale market.

I think it's fair to say at this point that the game industry is obsessed with price discrimination.  It's completely bonkers for it!  Hardly a new game is released without significant DLC, and free-to-play is rapidly gobbling up the online gaming marketplace.  Many if not most online-only games are now designed as free-to-play from the start, and even older games, designed to be sold for a fixed price, are being retroactively converted to free-to-play.

This makes sense.  If you're the manufacturer, price discrimination is awesome.  You make literally the maximum amount of money that you could possibly make with your product.  I'm sure if you're making games, it seems like a gold mine.

But what's it like for the consumer?  That's a question that isn't often addressed.  The gaming market, however, provides a clue.  And the answer, I think, is a lot less rosy than the manufacturers currently believe.

Why Consumers Hate Free-to-Play

Right now, think about the things you bought recently that you were happiest with.  Seriously, do it!  Here, I'll do mine: I bought Crusader Kings II and all its expansions for $20.  I play that game a lot!  I'd have happily paid five times as much for it!  I bought a Margaritaville Frozen Concoction Maker for about $200, but I've surely drunk twice its price in delicious frozen margaritas from the machine!  A fantastic bargain!

That sense of happiness I'm feeling with my purchases?  That's what consumer surplus is.  I've got similar feelings, in various degrees, about most things I buy.  Chinese food for dinner.  A pack of gum.  A new pair of pants.  If a product is sold for a fixed price, most buyers of that product will come away a little bit happy with their purchase.

Price discrimination, in the form of DLC and free-to-play games, is the practice of taking away that happy feeling from 100% of a product's customers.  In the strictest economic terms, it's literally the practice of taking consumer surplus and transferring it to producers, ensuring that consumers derive no extra enjoyment from their purchases.

Go back to the hamburgers.  Under a flat price, you're willing to pay three dollars for a burger and you pay three; the hungry guy is willing to pay fifteen and also pays three.  The hungry guy is presumably happier than you at this point, because he wanted it more.  But if you pay three and he pays fifteen, both of you are, economically speaking, equally happy.  In fact, under price discrimination, every burger buyer is exactly the same amount of happy with their food: just enough to buy it, but no more.

Increasingly, that's the game market.  The industry hopes it can get everyone to pay exactly what they're willing to pay.  As a result, no one ever gets a bargain.  Maybe everyone likes their games enough to buy them, but no one ever feels very good about it.  The market is completely homogenized; there are no good deals, only just-good-enough deals.
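Running the same invented burger valuations through both regimes makes the homogenization visible: a flat price leaves some buyers with real surplus, while perfect discrimination zeroes out everyone's.

    # Same invented valuations as the burger example above.
    valuations = [3, 15, 8, 4]
    flat_price = 3

    surplus_flat = [v - flat_price for v in valuations]
    surplus_discriminated = [v - v for v in valuations]  # all pay their valuation

    print(surplus_flat)           # [0, 12, 5, 1]: some buyers get bargains
    print(surplus_discriminated)  # [0, 0, 0, 0]: no one ever gets a deal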

In the short run, this might be a bonanza for manufacturers.  They make bank, and people keep buying.  But who knows what the long-run effects are?  I think we're already seeing a few in gamer circles: consumer exhaustion and frustration.  It's not hard to find--go to any game blog or site, find the announcement of some new free-to-play game (or some new DLC), and read the comments.  These two things are hated by gamers probably more than anything else in the game market right now.  They're anathema to the excitement that gamers traditionally feel about new games.  And that's completely by design: DLC and free-to-play monetize that excitement, extracting every cent of it from customers.

No one can say for sure, but I think producers are slowly killing the golden goose here.  It's good for them when their customers are giddy about their products.  It's bad for them when their customers are part of a farming operation, slowly being milked for cash until the entire industry is lost in a fog of perceived mediocrity.

Markets require both consumers and producers.  Can a market survive when half of those people are participating grudgingly at best?  Maybe.  Can it thrive?  I doubt it.