Bruce Bartlett discusses the Basic Income Guarantee, vis-a-vis Switzerland’s recent proposal to give every citizen the equivalent of $2,800 per month in guaranteed income:
In October, Swiss voters submitted sufficient signatures to put an initiative on the ballot that would pay every citizen of Switzerland $2,800 per month, no strings attached. Similar efforts are under way throughout Europe. And there is growing talk of establishing a basic income for Americans as well. Interestingly, support comes mainly from those on the political right, including libertarians.
The recent debate was kicked off in an April 30, 2012, post by Jessica M. Flanigan of the University of Richmond, who said all libertarians should support a universal basic income on the grounds of social justice. Professor Flanigan, a self-described anarchist, opposes a system of property rights “that causes innocent people to starve.”
Bartlett quotes F.A. Hayek from Law, Legislation, and Liberty:
The assurance of a certain minimum income for everyone, or a sort of floor below which nobody need fall even when he is unable to provide for himself, appears not only to be a wholly legitimate protection against a risk common to all, but a necessary part of the Great Society in which the individual no longer has specific claims on the members of the particular small group into which he was born.
Milton Friedman also supported a Basic Income Guarantee in the form of a Negative Income Tax:
Friedman’s argument appeared in his 1962 book, “Capitalism and Freedom,” based on lectures given in 1956, and was called a negative income tax. His view was that the concept of progressivity ought to work in both directions and would be based on the existing tax code. Thus if the standard deduction and personal exemption exceeded one’s gross income, one would receive a subsidy equal to what would have been paid if one had comparable positive taxable income.
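Friedman’s mechanism is simple enough to sketch in code. The figures below (a $10,000 threshold standing in for the standard deduction plus exemptions, and a 25% rate) are hypothetical illustrations, not Friedman’s actual numbers:

```python
def nit(gross_income, threshold=10_000, rate=0.25):
    """Friedman-style negative income tax (illustrative parameters).

    Positive result: ordinary tax owed on income above the threshold.
    Negative result: a subsidy equal to what would have been paid on
    comparable positive taxable income.
    """
    return (gross_income - threshold) * rate

# Someone earning $20,000 owes tax on $10,000 of taxable income: $2,500.
tax_owed = nit(20_000)   # 2500.0

# Someone earning $4,000 is $6,000 below the threshold, and receives
# a $1,500 subsidy instead of paying tax.
subsidy = nit(4_000)     # -1500.0
```

The point of the symmetry is that progressivity “works in both directions”: the same schedule that taxes income above the threshold pays out below it.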
Bartlett also points to Matt Feeney, writing for Reason, who notes that a Basic Income Guarantee, if it completely replaces the present welfare state, would enhance personal liberty, preserve human dignity, and save money:
one of the tragedies of the current welfare system is that it strips welfare recipients of their dignity while treating many of them like children, and functions on the underlying assumption that somehow being poor means you are incapable of making good decisions.
Instead of treating those who, often through no fault of their own, have fallen on hard times like children who are incapable of making the right choices about the food they eat or the drugs they may or may not choose to take, why not just give them cash? Doing so would not only cut down on the huge administrative costs of America’s welfare programs, it would also promote personal responsibility and abolish much of the humiliation and stripped dignity associated with the current welfare system.
Obviously this is still government redistribution, and as such, violates the much beloved Non-Aggression Principle that many Libertarians abide by. Nonetheless, the Basic Income Guarantee strikes me as a perfect example of not allowing the perfect to be the enemy of the good. If we could eliminate the cumbersome bureaucracies that define the present welfare state, and replace them with a simple cash transfer, that seems like a win for increasing individual liberty and reducing the size of government. It also takes the fate of the poor out of the hands of government agencies, who may deny someone access to benefits on so little as an outdated form. If there is such a thing as a Libertarian welfare state, the Basic Income Guarantee is a way to achieve it.
"Look at the difference: In 1977 I bought a small house in Portland Oregon for $24,000. At the time I was earning $5 per hour working at a large auto parts store. I owned a 4 year old Chevy Nova that cost $1,500. Now, 36 years later that same job pays $8 an hour, that same house costs $185,000 and a 4 year old Chevy costs $10,000. Wages haven’t kept up with expenses at all. And, I should point out that that $5 an hour job in 1977 was union and included health benefits."
an anonymous online commenter on the current economy. (via alchemy)
LTMC: When I was working at a gas station, I had an old-timer come in and tell me that he used to make $2/hour at a factory job when he was in his late 20’s. He said he could feed his whole family for the night by buying a 24-cut pizza for $2. Fast forward to my gas station job, where I was making $8/hour, but a 24-cut pizza in my town costs closer to $20, which is 2.5 times more expensive measured in hours of work. He said he had no idea how I even survived on what I was making (I was insured through college at the time, but had no savings, and relied on family for large expenses).
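The cleanest way to frame both anecdotes is in hours of work rather than dollars. A quick calculation using the figures quoted above:

```python
def hours_to_afford(price, hourly_wage):
    """How many hours of work an item costs at a given wage."""
    return price / hourly_wage

# The old-timer's pizza: $2 at $2/hour is one hour of work.
# The same pizza at my wage: $20 at $8/hour is 2.5 hours of work.
pizza_then = hours_to_afford(2, 2)    # 1.0 hours
pizza_now = hours_to_afford(20, 8)    # 2.5 hours

# The commenter's house tells the same story even more starkly:
# $24,000 at $5/hour then, versus $185,000 at $8/hour now.
house_then = hours_to_afford(24_000, 5)    # 4,800 hours
house_now = hours_to_afford(185_000, 8)    # 23,125 hours
```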
This is what people mean when they talk about income inequality. The reason wages have not kept pace with expenses is that the nation’s previous method of wage redistribution—union representation—has declined substantially. The gains that would have gone to wages have instead been increasingly absorbed by corporate entities and the top 1% of earners. Strong unions used to serve as a soft redistribution mechanism that helped ensure increases in prosperity were shared broadly, and a critical mass of union representation in the labor force produced derivative wage benefits in the non-union labor market as well. That critical mass no longer exists. Consequently, the decline of union labor has led to a concurrent decline in wages relative to expenses: there is no longer an institutional mechanism for redistributing earnings gains in the economy, and nothing has taken its place.
When opinions and data collide, I always go with data.
LTMC: To play devil’s advocate for a moment, some economists would argue that this graph conflates inflation with the CPI. On this view, inflation is simply an increase in the money supply; an increase in prices is a symptom of inflation, not inflation per se. We also don’t see represented here what would have happened if the money supply had not been increased. Presumably we would have had a decrease in the price index (or if you prefer, “deflation”), relative to the current baseline.
Plenty of economists would view deflation as a bad thing. But there are others who would view such deflation as merely the hangover from decades of artificial growth fueled by inflationary monetary policy. Under this view, deflation would actually be a good thing, because it is the first step on the road to a healthy economy where prices come closer to representing real (rather than artificial) wealth, as opposed to a proxy hybrid of the former and latter. The business cycle would finally have time to readjust. Deflation represents that adjustment.
I’m not saying I agree with any of this. But I do think these ideas need to be addressed to get a complete picture of the economy. 2% price inflation is a great target (and even higher inflation might be better under certain circumstances). But we ought to reckon with the idea that there may be additional consequences to increasing the money supply over and above increases in the consumer price index—consequences which we can’t see on this graph. And I say this as a guy who thinks Keynes mostly had the right idea.
Rohin Dhar, back in March, said that engagement rings are a sham, and it’s time that we stop asking men (or women, as the case may be) to buy them:
Americans exchange diamond rings as part of the engagement process because in 1938 De Beers decided that they would like us to. Prior to a stunningly successful marketing campaign in 1938, Americans occasionally exchanged engagement rings, but it wasn’t a pervasive practice.
Not only is the demand for diamonds a marketing invention, but diamonds aren’t actually that rare. Only by carefully restricting the supply has De Beers kept the price of a diamond high.
Countless American dudes will attest that the societal obligation to furnish a diamond engagement ring is both stressful and expensive. But here’s the thing - this obligation only exists because the company that stands to profit from it willed it into existence.
A diamond is a depreciating asset masquerading as an investment. There is a common misconception that jewelry and precious metals are assets that can store value, appreciate, and hedge against inflation. That’s not wholly untrue…Diamonds, however, are not an investment. The market for them is neither liquid nor are they fungible.
Edward Epstein puts numbers on the issue:
Because of the steep markup on diamonds, individuals who buy retail and in effect sell wholesale often suffer enormous losses. For example, Brod estimates that a half-carat diamond ring, which might cost $2,000 at a retail jewelry store, could be sold for only $600 at Empire.
In other words, when you buy a diamond for your significant other, you’re not buying an appreciating asset, as many people probably think. Your spouse’s diamond ring is more comparable to a used car than a precious work of art.
A friend of mine has a mother who works as a jeweler, and she confirmed Dhar’s conclusions. She said that a ring actually loses value when it has a diamond in it, because it costs jewelers more than the stone is worth to remove and resell it. Plain wedding bands are actually a much better investment. A diamond ring, however, is no more reliable a store of value than your 1996 Ford Taurus with missing mirrors and a rusted tailpipe.
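Epstein’s figures make the depreciation easy to quantify. Using his example of a $2,000 retail ring that fetches $600 at resale:

```python
def resale_loss(retail_price, wholesale_price):
    """Fraction of value lost by buying retail and selling wholesale."""
    return (retail_price - wholesale_price) / retail_price

# Epstein's example: a $2,000 half-carat ring resells for $600,
# a 70% loss the moment it leaves the store.
loss = resale_loss(2_000, 600)   # 0.7
```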
Here’s what we know. There is a federal minimum wage and there are state minimum wages. Businesses are required by law to pay people at least that rate. Right now the federal minimum wage is $7.25 per hour. In the current state of our economy, we are coming out of a recession much more slowly than we need to. A big part of this is because there is a large degree of uncertainty right now. Companies don’t know how much it will cost to hire new employees. So they don’t hire people.
You might say “BUT BUT unemployment IS going down.” However, unemployment is measured by counting exactly how many people file for unemployment. This is an inaccurate measurement that stops counting people who become discouraged workers and go off unemployment. It also does not count part-time workers who often still need more work. The true measure of how many people are working should be the CLFPR: the civilian labor force participation rate. It is the percentage of working-age Americans who are employed.
As you can see, the percentage has been steadily declining since around January 2007.
The economy is on a razor’s edge; anything could happen. One of the reasons for uncertainty is talk of the minimum wage increasing.
Let’s tackle the first reason. The minimum wage is intended to aid low-wage earners. These people are usually unskilled workers, people who cannot work another job (sometimes due to disability), and people like me, college students who are home for the summer. According to the Bureau of Labor Statistics, 49% of minimum wage earners are younger than 24. Of these workers, 62% live in families that have two to three times the income of the poverty line. This same age group experiences the highest levels of unemployment: people younger than 25 have an unemployment rate of 16.2%, over double the national rate. Keep in mind, earlier I said that unemployment understates the number of people not working. The other 51% are people who usually live in poor but not impoverished families. 24.8% of those people are voluntarily working part-time. Only 4.7% of people who earn minimum wage are older than 25 and are main income earners. This proves that raising the minimum wage, all negative effects aside, would be ineffective. It would not decrease the number of people on government welfare.
“But if we can help that small percent…”
Raising the minimum wage would not help. Uncertainty is a huge factor for employers. People often forget that employers do not hire people only because they need work done; they also must consider the cost of the person’s salary or wage, the cost of health care if the employer provides it, and whether the total of these costs will be less than or equal to the amount of work the person will do. An employer will not hire if it is cost-inefficient to hire a worker. I could go on about healthcare, too, as another cause of uncertainty for employers. With Obamacare, many employers must either cut a worker’s hours so they do not meet the regulatory 30 hours per week or fire employees. The slow recovery would be halted if the minimum wage is raised.
One of the biggest mistakes that armchair economists make is stating a theory for which there is evidence as an absolute unimpeachable fact. While it is true that the majority of economists believe that minimum wage laws increase unemployment, there is a lot of nuance beneath this general proposition that gets lost if you don’t interrogate the data correctly.
First, it should be obvious to anyone that a minimum wage law that increases the wage floor too far above what the market will tend to bear will have a negative effect on employment. So when minimum wage critics ask, “why not raise the minimum wage to $100?”, they’re not actually asking a terribly profound question. The answer is quite simple: that would be dumb, because the shock to the labor market would be so severe that it would upend the economy—just as converting America’s fiat currency back to a gold standard would likely cause a monetary contraction so large that it would cause a global depression.
Second, there have been many studies which suggest that minimum wage ordinances which do not raise the wage floor beyond an unreasonable level are economically beneficial (or at least have no negative effects on unemployment). Schmitt (2013) concluded that “The weight of that evidence points to little or no employment response to modest increases in the minimum wage.” Allegretto et al. (2010) concluded that “[i]ncluding controls for long-term growth differences among states and for heterogeneous economic shocks renders the employment and hours elasticities [for teenage employment] indistinguishable from zero and rules out any but very small disemployment effects.” Reich et al. (2010) found that “Increasing the minimum wage does not lead to the short- or long-term loss of low-paying jobs[.]” Dube et al. (2010) concluded that “On balance, case studies have tended to find small or no disemployment effects,” and that “traditional [analytical] approaches that do not account for local economic conditions tend to produce spurious negative effects due to spatial heterogeneities in employment trends that are unrelated to minimum wage policies.” Thompson (2008) found that in Indiana and surrounding midwestern states, “minimum wage increases did not have a significant impact on employment growth in the region controlling for state GDP and population growth.” And of course, there is the famous Card & Krueger study, which found no negative employment effect when New Jersey raised its minimum wage in 1992. Additionally, Magruder (2011) found evidence that a minimum wage drove industrialization in Indonesia, citing “strong trends” indicating “formal employment increases and informal employment decreases in response to the minimum wage.” Fleck (2008) noted that New Deal farm programs and labor law changes had a similar effect on labor markets in the American South.
Is all of this unimpeachable? Of course not. There are plenty of scholars that disagree with the work cited above. But the point is that condemning the minimum wage wholesale as an economic policy tool obscures a ton of nuance in how minimum wage policy is implemented. While a $100 minimum wage hike would clearly be catastrophic, there is a lot of evidence that modest minimum wage increases have little to no negative effects on unemployment.
Lynn Stuart Parramore discusses the work of an economics grad student who recently blew the lid off Reinhart and Rogoff’s infamous study concluding that a debt-to-GDP ratio of 90% or higher results in dramatic reductions in economic growth:
Since 2010, the names of Carmen Reinhart and Kenneth Rogoff have become famous in political and economic circles. These two Harvard economists wrote a paper, “Growth in the Time of Debt” that has been used by everyone from Paul Ryan to Olli Rehn of the European Commission to justify harmful austerity policies. The authors purported to show that once a country’s gross debt to GDP ratio crosses the threshold of 90 percent, economic growth slows dramatically. Debt, in other words, seemed very scary and bad.
Parramore notes that austerity advocates have used the Reinhart-Rogoff study to justify the implementation of austerity measures in multiple countries. There is one tiny problem, however: the Reinhart-Rogoff data spreadsheet contains a massive error:
Enter Thomas Herndon, Michael Ash and Robert Pollin of University of Massachusetts, Amherst, the heroes of this story. Herndon, a 28-year-old graduate student, tried to replicate the Reinhart-Rogoff results as part of a class exercise and couldn’t do it. He asked R&R to send their data spreadsheet, which had never been made public. This allowed him to see how the data was put together, and Herndon could not believe what he found. Looking at the data with his professors, Ash and Pollin, he found a whole host of problems, including selective exclusion of years of high debt and average growth, a problematic method of weighting countries, and this jaw-dropper: a coding error in the Excel spreadsheet that excludes high-debt and average-growth countries.
What’s the end result?
In their newly released paper, “Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff,” Herndon, Ash and Pollin show that “when properly calculated, the average real GDP growth rate for countries carrying a public-debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not -0.1 percent as published in Reinhart and Rogoff. That is, contrary to R&R, average GDP growth at public debt/GDP ratios over 90 percent is not dramatically different than when debt/GDP ratios are lower.”
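The kind of spreadsheet mistake at issue here is mundane, which is part of what makes it alarming. The numbers below are invented for illustration (they are not the actual Reinhart-Rogoff data), but they show how an AVERAGE() formula dragged over the wrong range silently drops countries and can flip the sign of the result:

```python
# Hypothetical average growth rates (%) for ten high-debt countries,
# listed in spreadsheet row order. Invented numbers, illustration only.
growth = [2.6, 3.1, 2.4, 2.9, 2.2, -7.6, -1.8, 1.5, 0.3, -0.9]

# Correct formula: average all ten rows.
correct = sum(growth) / len(growth)            # ~0.47%: modest but positive

# Botched formula: the first five rows fall outside the AVERAGE() range,
# analogous to the coding error Herndon found.
botched = sum(growth[5:]) / len(growth[5:])    # ~-1.7%: growth now looks negative
```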
I imagine that Reinhart and Rogoff are gearing up for a response. Meanwhile, Parramore also links us to Daniel Schuchman at Forbes, who appears to have quickly respun this fairly devastating academic take-down as simply a sign that “academic economics, like many social sciences, is grounded in hubris and pseudo-precision.”
Perhaps so. But I don’t think this requires throwing the baby out with the bathwater. Indeed, we needn’t draw any conclusion from this episode other than the narrow one it stands for: there’s nothing automatically devastating about a 90% debt-to-GDP ratio. It would be a mistake to conclude that this justifies terminal apathy about the size of the public debt, just as it would be to conclude that discrediting the Reinhart-Rogoff study leaves austerity policies with no other legs to stand on. Nonetheless, it seems proponents of the latter will have to rely on other metrics going forward, as the methodology of the Reinhart-Rogoff study appears to be fatally flawed.
Mark Weisbrot discusses the popular success of Ecuadorian president Rafael Correa, a PhD economist who has used “big government” economics to get his country back on its feet:
[Ecuador’s] Unemployment fell to 4.1% by the end of last year – a record low for at least 25 years. Poverty has fallen by 27% since 2006. Public spending on education has more than doubled, in real (inflation-adjusted) terms. Increased healthcare spending has expanded access to medical care, and other social spending has also increased substantially, including a vast expansion of government-subsidised housing credit.
If all that sounds like it must be unsustainable, it’s not. Interest payments on Ecuador’s public debt are less than 1% of GDP, which is quite small; and the public debt-to-GDP ratio is a modest 25%. The Economist, which doesn’t much care for any of the left governments that now govern the vast majority of South America, attributes Correa’s success to “a mixture of luck, opportunism and skill”. But it was really the skill that made the difference.
Correa may have had luck, but it wasn’t good luck: he took office in January of 2007 and the next year Ecuador was one of the hardest hit countries in the hemisphere by the international financial crisis and world recession. That’s because it was heavily dependent on remittances from abroad (eg workers in the US and Spain); and oil exports, which made up 62% of export earnings and 34% of government revenue at the time. Oil prices collapsed by 79% in 2008 and remittances also crashed. The combined effect on Ecuador’s economy was comparable to the collapse of the US housing bubble, which contributed to the Great Recession.
And Ecuador also had the bad luck of not having its own currency (it had adopted the US dollar in 2000) – which means it couldn’t use the exchange rate or the kind of monetary policy that the US Federal Reserve deployed to counteract the recession. But Ecuador navigated the storm with a mild recession that lasted three quarters; a year later it was back at its pre-recession level of output and on its way to the achievements that made Correa one of the most popular presidents in the hemisphere.
How did they do it? Perhaps most important was a large fiscal stimulus in 2009, about 5% of GDP (if only we had done that here in the US). A big part of that was construction, with the government expanding housing credit by $599m in 2009, and continuing large credits through 2011.
But the government also had to reform and re-regulate the financial system. And here it embarked on what is possibly the most comprehensive financial reform of any country in the 21st century. The government took control over the central bank, and forced it to bring back about $2bn of reserves held abroad. This was used by the public banks to make loans for infrastructure, housing, agriculture and other domestic investment.
It put taxes on money leaving the country, and required banks to keep 60% of their liquid assets inside the country. It pushed real interest rates down, while bank taxes were increased. The government renegotiated agreements with foreign oil companies when prices rose. Government revenue rose from 27% of GDP in 2006 to over 40% last year. The Correa administration also increased funding to the “popular and solidarity” part of the financial sector – co-operatives, credit unions and other member-based organisations. Co-op loans tripled in real terms between 2007 and 2012.
The end result of these and other reforms was to move the financial sector toward something that would serve the interests of the public, instead of the other way around (as in the US). To this end, the government also separated the financial sector from the media – the banks had owned most of the major media before Correa was elected – and introduced anti-trust reforms.
Weisbrot concludes by saying what’s on everybody’s mind:
[T]he conventional wisdom is that such “business-unfriendly” practice as renegotiating oil contracts, increasing the size and regulatory authority of government, increasing taxes and placing restrictions on capital movements, is a sure recipe for economic disaster. Ecuador also defaulted on a third of its foreign debt after an international commission found that portion to have been illegally contracted. And the “independence” of the central bank, which Ecuador revoked, is considered sacrosanct by most economists today. But Correa, a PhD economist, knew when it was best to ignore the majority of the profession.
If anything, these examples demonstrate that there’s a time and a place for every economic policy. Clearly “big government” economics are not always appropriate for every economic situation, but in Ecuador’s case, the “big government” approach appears to have worked. Sweden and Switzerland have also experienced economic success with central banks that intervene far more aggressively in their economies than the Federal Reserve. And despite Sweden’s purported free-market reforms, the Swedish government continues to fund a rather robust welfare state with high income tax rates, steady economic growth, and regular budget surpluses. Surplus-driven welfare states are not only possible; we currently have many excellent examples of them working in different parts of the world.
Morning Star is a California company that is responsible for processing 40% of California’s tomato crop. They also have no management. (Via Reason.tv):
Morning Star has many of the usual positions that one would expect at an ordinary company: there are floor workers, payroll personnel, folks that handle the mail and outside communications, and so on. The difference is that, from a bird’s eye view, no single person at Morning Star is anybody else’s boss. The entire operation appears to thrive on the power of collective expectations, and by giving workers a direct stake in the success of the company. Workers at Morning Star make their own decisions about how to perform their job, what tools they need to keep the machines running, and how to structure their work day to keep production running smoothly. As one employee put it, there is no bureaucracy that he has to fight through if he needs something for his lab. He just goes out and purchases it.
To some, this may seem like a frightfully inefficient way to run a business. If employees can make instantaneous discretionary purchases of lab equipment on the company dime, then where is the cost control? Such a system seems doomed to failure without a hierarchy of some sort to check potentially unwise exercises of discretion.
The answer is that these checks are built into the system of collective expectations. As another Morning Star employee put it, Morning Star’s business model presumes that employees who are closest to a particular business process are the most qualified to make decisions about how to keep that process running efficiently. Thus, one would expect an unwise purchase to be met with scrutiny by one’s peers on the factory floor. Morning Star’s firm model thrives by ensuring that one individual is never an uncontested decision-maker solely responsible for decisions related to a business process at the company. Every worker has a stake in the outcome of everybody else’s labor. The threat of discipline from management is unnecessary to achieve desired outcomes.
Morning Star is not the first company to adopt this business model. Valve Corp., a wildly successful video game company that currently dominates the industry through its Steam platform, also has no formal management. Gore Inc., the maker of Gore-Tex, is an 8,500-strong company that has no organization chart. Though Gore does retain a few corporate officer titles for various purposes within the company, those officers have little direct power over other employees in the corporation. Those same officers are also not unilaterally chosen by the Board of Directors, but rather emerge in a more democratic fashion:
In Gore’s self-regulating system, all the normal management rules are reversed. In this back-to-front world, leaders aren’t appointed: they emerge when they accumulate enough followers to qualify as such. So when the previous group CEO retired three years ago, there was no shortlist of preferred candidates. Alongside board discussions, a wide range of associates were invited to nominate to the post someone they would be willing to follow. ‘We weren’t given a list of names – we were free to choose anyone in the company,’ Kelly says. ‘To my surprise, it was me.’
Other firms have shown that the “non-management management” approach is feasible. At IDEO Corp., a Palo Alto engineering company responsible for such ubiquitous inventions as the squeezable toothpaste tube and the mouse you are using to point and click on your computer, there are no bosses and no management structure. Sun Hydraulics is a $170 million manufacturing firm with no job titles, no organization chart, and not even job performance criteria for its employees. There is a Plant Manager, but their job is not to supervise employees: it’s to water the company’s plants.
How are so many companies, in areas as diverse as tomato farming, hydraulics production, and video game production, running successful businesses without traditional management? In a society built on Capitalism, the common wisdom is that productive firms require managers with coercive authority to motivate people to do their jobs. Most ordinary people are shocked when they learn that there are companies that stay profitable with no bosses. How can this be an efficient way to run a company?
As it turns out, there’s a lot of evidence that top-down management is an inefficient form of firm organization. Gary Hamel, writing for the Harvard Business Review, noted several reasons to abandon traditional management hierarchies, one of which is the fact that managers add both personnel costs and unnecessary complexity to a firm:
A small organization may have one manager and 10 employees; one with 100,000 employees and the same 1:10 span of control will have 11,111 managers. That’s because an additional 1,111 managers will be needed to manage the managers. In addition, there will be hundreds of employees in management-related functions, such as finance, human resources, and planning. Their job is to keep the organization from collapsing under the weight of its own complexity. Assuming that each manager earns three times the average salary of a first-level employee, direct management costs would account for 33% of the payroll.
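Hamel’s 11,111 figure is just a geometric series: with a 1:10 span of control, each layer of managers needs one-tenth as many managers above it (10,000 + 1,000 + 100 + 10 + 1). A quick sanity check:

```python
def managers_needed(employees, span=10):
    """Total managers for a strict 1:`span` hierarchy: one manager per
    `span` people at each layer, then managers for those managers, and
    so on up to a single person at the top."""
    total = 0
    layer = employees
    while layer > 1:
        layer = -(-layer // span)   # ceiling division: managers for this layer
        total += layer
    return total

managers = managers_needed(100_000)   # 11111, matching Hamel's figure

# At 3x a front-line salary, those 11,111 managers cost the equivalent of
# 33,333 front-line paychecks: roughly a third of the front-line payroll,
# which is Hamel's 33% estimate.
management_cost_ratio = 3 * managers / 100_000
```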
Top-down management also centralizes risk-taking in the hands of fewer decision-makers, which increases the likelihood of a disastrous event:
… As decisions get bigger, the ranks of those able to challenge the decision maker get smaller. Hubris, myopia, and naïveté can lead to bad judgment at any level, but the danger is greatest when the decision maker’s power is, for all purposes, uncontestable. Give someone monarchlike authority, and sooner or later there will be a royal screwup. A related problem is that the most powerful managers are the ones furthest from frontline realities. All too often, decisions made on an Olympian peak prove to be unworkable on the ground.
The personal whims of managers can also kill or disincentivize ideas that are good for the company, especially when ideas have to be filtered through multiple levels of management:
…[A] multitiered management structure means more approval layers and slower responses. In their eagerness to exercise authority, managers often impede, rather than expedite, decision making. Bias is another sort of tax. In a hierarchy the power to kill or modify a new idea is often vested in a single person, whose parochial interests may skew decisions.
Then there’s “the cost of tyranny:”
The problem isn’t the occasional control freak; it’s the hierarchical structure that systematically disempowers lower-level employees. For example, as a consumer you have the freedom to spend $20,000 or more on a new car, but as an employee you probably don’t have the authority to requisition a $500 office chair. Narrow an individual’s scope of authority, and you shrink the incentive to dream, imagine, and contribute.
The success of these business models illustrates one of the fundamental criticisms of traditional Capitalist modes of production that Marx advanced in Das Kapital. While Marx was wrong (in my opinion) about quite a few things, the success of the companies above demonstrates that he was correct to point out that divorcing employees from management decisions related to their own labor is an inherently inefficient means of production. Divorcing employees from the product of their labor separates them from one of the primary motivating forces to perform that labor. This process of alienation itself is what creates the necessity for “bosses”—employees whose primary purpose is to oversee & discipline other employees in their assigned tasks.
Thus, what we really see in Marx’s Theory of Labor Alienation is, inter alia, an argument about firm management: the need for “bosses” in the workplace arises only when employees are completely divorced from the means of production. When workers have a direct stake in the final product of their labor, they no longer need the threat of coercion from superiors to do their jobs. An employee’s direct interest in the outcome, combined with the power of the collective expectations of their peers in the workplace, replaces the threat of, and the need for, discipline from above.
With all this being said: I am not attempting to argue here that the success of non-managed firms proves that stateless socialism is viable, or validates Marxism writ large. Indeed, I’m sure that the folks at Reason have a much different view on Morning Star’s success than I do—and moreover, I remain, as I have always been, a fan of mixed economies.
What I think is clear, however, is that Marxist theorists are right to point out that there is nothing inherently “natural” or “necessary” about the way the workplace is organized in most Western societies today. There is plenty of evidence to suggest that top-down hierarchies in the workplace are neither necessary for profitability, nor an extension of natural human activities. Indeed, if Gary Hamel’s observations about the inefficiency of management are true, we appear to have been doing it wrong for quite some time. Though perhaps we could have come to the same conclusion more easily by just reading Dilbert comics:
A recent NYT article discusses the plight of users of Airbnb, an online crowd-sourced room rental service that allows people to offer their rooms for rent to travelers looking to avoid paying exorbitant hotel fees. The problem? In many cities, such as New York City, people offering their rooms for rent to travelers are breaking local laws:
Back in September, Nigel Warren rented out his bedroom in the apartment where he lives for $100 a night on Airbnb, the fast-growing Web site for short-term home and apartment stays. His roommate was cool with it, and his guests behaved themselves during their stay in the East Village building where he is a renter.
But when he returned from a three-night trip to Colorado, he heard from his landlord. Special enforcement officers from the city showed up while he was gone, and the landlord received five violations for running afoul of rules related to illegal transient hotels. Added together, the potential fines looked as if they could reach over $40,000.
New York City ordinances outlaw this sort of “crowd-sourced” approach to offering lodging for travelers:
local laws may prohibit most or all short-term rentals under many circumstances, though enforcement can be sporadic and you have no way of knowing how tough your local authorities will be. Your landlord may not allow such rentals in your lease or your condominium board may not look kindly on it … [NYC law] says you cannot rent out single-family homes or apartments, or rooms in them, for less than 30 days unless you are living in the home at the same time.
The NYCRR is a labyrinthine mess that even lawyers have trouble navigating. Needless to say, though I’ve worked with the NYC regs before, I was unaware of this particular restriction.
What struck me about these ordinances, however, is that they read like a textbook example of rent-seeking by hotel concerns. Indeed, on further reading, the justifications for these laws seem flimsy at best:
New York City officials don’t come looking for you unless your neighbor, doorman or janitor has complained to the authorities about the strangers traipsing around.
“It’s not the bargain that somebody who bought or rented an apartment struck, that their neighbors could change by the day,” said John Feinblatt, the chief adviser to Mayor Michael R. Bloomberg for policy and strategic planning and the criminal justice coordinator. The city is also concerned with fire safety and maintaining at least some availability of rental inventory for people who live there.
These justifications don’t hold up under scrutiny. The “bargain” in question is governed by the terms of the lease, and landlords are generally free to dictate those terms as they please. A landlord could, for example, place a restriction on this sort of short-term room rental if he wanted to. The fact that the landlord at issue in this case did not do so only further suggests that this isn’t a concern that comes up very often. If it were, you can bet landlords would devote a section of their leases to banning the practice, so as to ensure they aren’t held liable for their tenants’ violations of the ordinance in question.
Second, the fire safety concern relates to the number of people in the building at any given time. That is already addressed by restrictions on maximum occupancy, which already exist. Notably, the fire hazard concern would be equally implicated where people simply allowed friends to sleep over in their apartments, which a ban on individual room rentals would not prevent.
Third, the idea of “maintaining at least some available rental inventory for the people who live there” doesn’t even make sense. The only way these rooms get rented out is by someone who already occupies them. There’s no way that crowding out of rental space could occur here. The room is already “unavailable” to the other residents of the city because somebody already lives there.
So all we are really left with in this case is a law that represents rent-seeking by hotel businesses in New York City. There doesn’t seem to be a good reason to place a per se restriction on this sort of transaction where other laws already account for the justifications given. Which makes this whole thing a shame, because people clearly benefit from having this option available to them. Particularly in New York City, where reasonably safe and clean hotel rooms are notoriously expensive.
This is a good example of an instance where we really should just let the market (and the wonders of the internet) do its thing. For the reasons cited above, I can see no legitimate purpose for this type of ordinance other than fattening the pockets of both hotel concerns and city governments, who get to impose fines every time a violation occurs. Regulations that attempt to solve legitimate problems with land use in a heavily populated urban area are one thing. Regulations that serve merely as revenue-raising and rent-seeking provisions for the city—and its attendant private beneficiaries—are another thing entirely.
"Using data on annual crime rates for large cities in the United States, we find that living wage ordinances are associated with notable reductions in property related crime and little impact on non-property crimes."
— Jose M. Fernandez et al., The Impact of Living Wage Ordinances on Urban Crime (2012). Download the paper at SSRN. h/t CrimProf Blog.