Friday, August 22, 2014

Ferguson: No Justice in the American Police State

Paul Craig Roberts

There are reports that American police kill 500 or more Americans every year. Few of these murdered Americans posed a threat to police. Police murder Americans for totally implausible reasons. For example, a few days before Michael Brown was gunned down in Ferguson, John Crawford picked up a toy gun from a WalMart shelf in the toy department and was shot and killed on the spot by police goons.

It appears that the murder of Michael Brown did not satisfy the blood lust of the goon thug cop murderers. Less than four miles from Ferguson, goon thugs murdered another black man on August 19. The police claims of “threat” are disproved by the video of the murder.

You can see the entire scene much better here. This is a clear case of outright murder of a man by our Nazi Gestapo police. The police then handcuff their dead victim.

Clearly, the American police are an enormous danger to the public. It will be interesting to see what excuses the police shills will come up with to justify this murder. It is not American civilians with carry permits who murder 500 people a year. It is the goon thug police. Gun control should be applied to the police who lack sufficient intelligence and judgment to go around armed.

Five hundred is more than one killing by police per day. Yet the reports of the shootings seldom get beyond the local news. Why then has the Ferguson, Missouri, police killing of Michael Brown gone international?

Probably the answer is the large multi-day protests of the black community in Ferguson that led to the state police being sent to Ferguson and now the National Guard. Also, domestic police in full military combat gear with armored personnel carriers and tanks pointing numerous rifles in the faces of unarmed civilians and arresting and threatening journalists make good video copy. The “land of the free” looks like a Gestapo Nazi state. To much of the world, which has grown to hate American bullying, the bullying of Americans by their own police is poetic justice.

For those who have long protested racial profiling and police brutality toward racial minorities, the police murder of Michael Brown in Ferguson is just another in a history of racist murders.

Rob Urie is correct that blacks receive disproportionate punishment from the white criminal justice (sic) system. See, for example.

Former US Representative Dennis Kucinich, I, and others see Michael Brown’s murder as reflective of the militarization of the police and of police training that creates a hostile police attitude toward the public. The police are taught to view the public as threats against whom the use of violence is the safest course for the police officers.

This doesn’t mean that racism is not also involved. Polls show that a majority of white Americans are content with the police justification for the killing. Police apologists are flooding the Internet with arguments against those of the opposite persuasion. Only those who regard the police excuse as unconvincing are accused of jumping to conclusions before the jury’s verdict is in. Those who jump to conclusions favorable to the police are regarded as proper Americans.

What I address in this article is non-evidential considerations that determine a jury’s verdict and the incompetence of Ferguson’s government that caused the riots and looting.

Unless the US Department of Justice makes Michael Brown’s killing a federal case, the black community in Ferguson is powerless to prevent a cover-up.

What usually happens in these cases is that the police concoct a story protective of the police officer(s) and the prosecutor does not bring an indictment. As Obama and his Attorney General, Eric Holder, are partially black (in skin color alone), the black majority community in Ferguson, Missouri, might have hopes from Holder’s visit. However, nothing could be more clear than the fact that Obama and Holder, along with the rest of “black leadership,” have been co-opted by the white power structure. How else would Obama and Holder be in office? Do you think that the white power structure puts in office people who want justice for minorities or for anyone other than the mega-rich?

If the cop who killed Brown is indicted and he is tried in Ferguson, the jury will contain whites who live in Ferguson. Unless there is a huge change in white sentiment about the killing, no white juror can vote to convict the white cop and continue to live in Ferguson. The hostility of the white community toward white jurors who took the side of a “black hoodlum who stole cigars” against the white police officer would make life for the jurors impossible in Ferguson.

The trouble with purely racial explanations of police using excessive force is that cops don’t limit their excesses to racial minorities. White people suffer them also. Remember the recent case of Cecily McMillan, an Occupy protester who was brutalized by a white goon thug with a record of using excessive force. McMillan is a young white woman. Her breasts were seized from behind, and when she swung around, her elbow reflexively came up and hit the goon thug. She was arrested for assaulting a police officer, convicted by a jury, and sentenced to a term in jail. The prosecutor and judge made certain that no evidence could be presented in her defense. Medical evidence of the bruises on her breast and the police officer’s record of brutality were not allowed as evidence in her show trial, the purpose of which was to intimidate Occupy protesters.

In America white jurors are usually sheep who do whatever the prosecutor wants. As Cecily McMillan, a white woman, could not get justice, it is even less likely that the black family of Michael Brown will. Those who are awaiting a jury’s verdict to decide Michael Brown’s case are awaiting a cover-up and the complicity of the US criminal justice (sic) system in murder.

If there is a federal indictment of the police officer, and the trial is held in a distant jurisdiction, there is a better chance that a jury would consider the facts. But even these precautions would not eliminate the racist element in white jurors’ decisions.

The situation in Ferguson was so badly handled it almost seems like the police state, in responding to the shooting, intended to provoke violence so that the American public could become accustomed to military force being applied to unarmed civilian protests.

Ferguson brings to mind the Boston Marathon Bombing. Two brothers of foreign extraction allegedly set off a “pressure cooker bomb” left in a backpack that killed and injured race participants and spectators. The two brothers were deemed, without any evidence, to be so dangerous that the entirety of Boston and its suburbs was “locked down” while 10,000 heavily armed police and military patrolled the streets in military vehicles, conducting door-to-door searches and forcing residents from their homes at gunpoint while the police ransacked homes where it was totally obvious the brothers were not hiding. Not a single family evicted from their residence at gunpoint said: “Thank God you are here. The bombers are hiding in our home.”

The excessive display of force and the warrantless police intrusions into homes are the reason that aware and thoughtful Americans do not believe one word of the official account of the Boston Marathon Bombing. Thoughtful people wonder why every American does not see the bombing as an orchestrated state act of terror designed to accustom Americans to the lock-down of a city and police intrusion into their homes. Logistically, it is impossible to assemble 10,000 armed troops so quickly; the readiness of the troops points to pre-planning.

In Ferguson all that was needed to prevent mass protests and looting was for the police chief, mayor or governor to immediately announce that there would be a full investigation by a civic committee independent of the police and that the black community should select the members it wished to serve on the investigative committee.

Instead, the name of the cop who killed Michael Brown was withheld for days, and a video allegedly showing Michael Brown taking cigars from a store was released as a justification for his murder by police. These responses, and a variety of other stupid police and government responses, convinced the black community, which already knew it in its bones, that there would be a cover-up.

It is entirely possible that the police chief, mayor, and governor lacked the intelligence and judgment to deal with the occasion. In other words, perhaps they are too stupid to be in public office. The incapacity of the American public to elect qualified representatives is world-renowned. But it is also possible that Michael Brown’s killing provided another opportunity to accustom Americans to the need for military violence to be deployed against the civilian population in order to protect us from threats.

Occupy Wall Street was white, and these whites were overwhelmed by police violence.
This is why I conclude that more is involved in Ferguson than white racist attitudes toward blacks.

The founding fathers warned against allowing US military forces to be deployed against the American people, and the Posse Comitatus Act prevents the use of military forces against civilians. These restrictions designed to protect liberty have been subverted by the George W. Bush and Obama regimes.

Today Americans have no more protection against state violence than Germans had under National Socialism.

Far from being a “light unto the world,” America is descending into cold hard tyranny.

Who will liberate us?


Tuesday, August 5, 2014

Fracking’s untold health threat: How toxic contamination is destroying lives

America's natural gas boom has real consequences for children and animals, researchers tell Salon

Lindsay Abrams

If we’re going to talk about fracking, we can’t just talk about energy independence, or the economy, or the potential for natural gas to act as a “bridge fuel” to help solve the global warming crisis. We also need to talk about the effect that hydraulic fracturing is having on the communities where it’s taking place, and to ask whether that cost — to people’s health and property — is too high.

The main barrier to that conversation, of course, is that it’s one the industry definitely doesn’t want to be having, aside from insisting that fracking is safe. Michelle Bamberger, a veterinarian, and Robert Oswald, a professor of molecular medicine at Cornell, believe differently, and they have the research to back up their claims. The two have documented cases of contaminated water and air, of sick pets and dying livestock, and of similar symptoms experienced by the animals’ owners, all with few apparent explanations. And that, the researchers argue, is the real scandal: It’s up to the people being affected, and not the industry causing the damage, to prove that something’s wrong.

In “The Real Cost of Fracking: How America’s Shale Boom Is Threatening Our Families, Pets and Food,” Bamberger and Oswald share the stories of people whose lives have been affected — and in some cases, destroyed — by fracking, in a way that aims to open up the conversation to what’s at stake. “Simply put,” they write, “we are not certain of the public health implications of large-scale industrial oil and gas drilling.” The effects we are seeing, they add, are being seen most prominently in animals, children and oil and gas workers: the ones who, because they are so sensitive to hazards from gas operations, end up serving as the canaries in the coal mine.

Bamberger and Oswald spoke with Salon about the challenges of studying the health risks of fracking, and about why they believe the evidence they’ve found is enough to make us seriously question whether they’re risks worth taking. This interview has been lightly edited for length and clarity.

What got you started investigating this aspect of fracking?
MB: We heard about this issue about five years ago when we read in our local paper about a community group that was going through all the leases and just plotting out where surface and mineral rights were leased in the county. It made it really easy to know where you were and who your neighbors were that were leased — the land that was leased around you. We’re out in the country, about 15 minutes from Ithaca, and we saw that we were surrounded by farmers who owned 100 acres or more of land and had leased. The way compulsory integration works in New York is that if you are in a land unit, which is usually one square mile, that is 60 percent or more leased, then your land could be drilled under.

So that got us really interested in the issue, because we’re not leased, but we would be drilled under. So we thought, we’d better start paying attention to this. So we started attending meetings and learning more about it. And in the process, I started hearing about cases in Pennsylvania where animals were becoming ill, and no one was trying to figure out why, or owners didn’t know why, or their vets didn’t know why. That’s what pulled me into it; and for Robert’s part, I think I can speak for him, he started getting involved with it from looking at the documentation I was starting to collect.

Was it hard to find people who were willing to speak about the experiences they’re having?
MB: I started to get emails from local farmer-type people up here in New York who knew I was a veterinarian and who had connections with people in other states through the farming groups. So they started putting me in contact with people, and I started to become known as a vet who was interested in looking into these cases and starting to document them, and that’s how I got pulled into this.
 
What are some of the more shocking things you turned up?
MB: I can think of one particular occasion — this was in Louisiana in April 2009 — and that was the one where the cattle were exposed to hydraulic fracturing fluid and they died within an hour. What was shocking about that was that these are animals, which are over 1,000 pounds, and it takes something pretty powerful to knock them out, that they’re exposed to it and then dead in just an hour. That really grabbed me by the neck, because what I’d been reading about was usually cattle exposures where, even if it’s pretty toxic, it’s one to three days. One to three days is pretty fast, actually, but within an hour is pretty amazing. So I think that was the most amazing thing that I heard of with these cases. Robert is nodding in agreement.

RO: I think that was the most dramatic case we had. We had a lot of cases that were interesting but that was a dramatic one.

MB: Robert, the other answer you give for this is the case where we were sitting at the kitchen…

RO: That wasn’t dramatic but it had a big effect on me, let’s put it that way. We went to visit some people and they had actually had some documented contamination on their land and their cows were quarantined. And we’re just sitting in their dining room, which is off their kitchen, and you can look through their kitchen window and all you can see out their kitchen window is a well pad. We look outside the dining room window and about 10 feet away from it is a driveway, and that’s the access road to the pad. So I realized for these people, all this drilling and fracking and everything, it was right on top of their house. These people had several hundred acres and they didn’t want them to put the pad there, but the company insisted on putting the pad right by their house. That was a thing that was really early on and it really struck me as something that I just didn’t understand — how people could live with that, and how the companies could actually do that.

Would you say that all adds up to these people’s lives being dominated, or ruined, by drilling operations? Or is it just that we’re not hearing enough about any of these things that are happening?
MB: I think their lives are in many cases being dominated, and I think that’s true especially in the cases when people lose their water — we all know what it’s like when our electricity goes out or our power goes out and we can’t run our spigot. To have that be all the time, how do you compensate for that? What do you do with water that’s not good, and you can’t drink it and maybe you can’t even bathe in it? You’re getting rashes, you’re getting ill — it really does turn your life upside down and it does dominate it. We have one woman we described in our book who said, “I go to sleep thinking of water. I wake up thinking of water. Every minute is thinking of water.” It just made me realize that we take so much for granted. But this is huge: When you have to think of every drop, counting exactly how much water you’re going to need and how much you’re going to use and think of your community and think of your neighbors, it’s really overwhelming. It’s hard to really understand. We got a little bit of a taste of it when we went and visited these people and spent some time with them, but I think no one could ever understand it unless you go through it.

RO: You know, Tony Hayward, the CEO of BP, during the BP oil spill, he said he wanted his life back. That had such a hollow ring to it. These are people who really need their lives back, and they’re not going to get it back.

There’s one point in the book where you compare some of these people to victims of rape, which seems like a pretty extreme comparison.
MB: The thing I was trying to get at there as an analogy was lack of control. They’re powerless. And again, you can get that feeling through all the chapters that we’ve written in describing the cases. Especially in that last chapter, on environmental justice, where they’re at the complete mercy of these companies that are working around them and then at the companies’ mercy as to whether they’re provided with a water buffalo [a large container of replacement water], to whether it’s decided that the results of their testing show they need it. What do they have to prove in order to be able to have good water again? I think that’s the sort of thing I was trying to build and get there with that analogy: powerlessness and lack of control.

Going back to your research, how many of these case studies that you feature are backed up by conclusive evidence that says “Yes, fracking is definitely causing these problems?”
MB: So on those cases, a few are, most are not. We feel strongly that it’s because of the current testing methods that are used and the fact that for a lot of these chemicals, we don’t know what they are actually using — especially the proprietary mixes, we don’t know what all the components are. But also we don’t know what the maximum contaminant levels (MCLs) are. So, in other words, what is the level below which there are no health effects and above which definitely there are? And what are the effective screening levels for air? If we don’t really know them, then we believe these people have no recourse because there’s no MCL. And that came out really strongly for me. We have several cases in the book that are part of the EPA study, where I was shocked when I saw the water results that a large majority of those chemicals the EPA was testing didn’t have MCLs. And if you don’t have an MCL, you can’t go into litigation, you can’t go to court and say “we have conclusive evidence.” It doesn’t matter how sick they are and that they can’t use their water, or that when they stop using their water they get better and when they use it again they get worse. None of that counts as conclusive evidence. Having said that, we do have several cases you can read about in the book where there is conclusive evidence. But that’s really the rarity, because of the many reasons we discussed in the book.

RO: You should also sort of realize that it took about 30 years to determine conclusively that cigarettes caused cancer, and part of the reason is that there’s always some sort of plausible deniability. It really depends on what we accept as a level of truth and what’s more important. Is it more important to absolutely prove there’s contamination here, or is it more important to prove that there’s not contamination here? And where do we find the balance? The balance, unfortunately, is very much in favor of companies and not in favor of the people who are living with this.

So you mentioned more testing. Are there simple things that could be put in place to help make the link more clear, or to help protect people?
MB: That’s a really good question. We are now, getting back to the testing thing, thinking of looking at it in a different light, to make it simpler for people to know right away: “Is this water I shouldn’t be drinking? And if it needs much further testing that maybe I can’t afford, at least I shouldn’t be drinking it.”

As far as simple things that could be done that might lessen the effects right now, I think the best discussion of that is in our first paper, where we talk about what could be done: for the drilling companies to operate further away from homes, for one; there’s also been a lot, lately, about cement casing failures. I think the big thing is that we were shocked by the number of inspectors. There are so few inspectors that they cannot get out and really make sure things are running correctly, even as things stand now. So there’s something that’s really simple and really basic, and the state regulators would probably say we don’t have the money for that, we can’t afford it. But then it comes down to this question that we’re hearing all over the country now: “What’s more important, getting the energy out of the ground or people’s health?” That’s a real basic question; that’s what it comes down to now, and we strongly believe people’s health and children and animals and food and all of that should come way before going after an energy source that’s not really viable, especially in light of the climate change we have — but that’s another issue.

I can’t imagine that the industry has had a positive response to your work.
MB: Energy In Depth is one of the energy industry sites and they pretty much attack anybody who doesn’t say that this is great stuff they’re doing. So we are not the only ones who have been attacked. But we look at it like we don’t really care what they have to say. We’re just going to do the best science in the most objective way possible and that’s what we’re still trying to do. The reason to write the book, in addition to the articles, is to reach an audience that might not read an article, even though our article was pretty easy to read. A lot of people hearing it from a scientific journal just would not read it. So the book is an effort to reach those people who would read a book. So we’re hoping to get more people aware of the situation, and if more people are aware maybe things will change eventually.

Would you say nondisclosure agreements are making it harder to get those stories out there?  
MB: That is true and I think that’s happening more and more. And it’s been hard; we’ve had a few cases shut down and people say “I can’t provide you with any more information,” or right up front we were not able to follow up on a really good case because they’d already signed. So for us as researchers that really cuts out a lot of information where we’re trying to find out what’s happening, especially as health researchers for the public health — it’s hard to protect the public health if you can’t ask what’s happening.

Leaving aside the climate change aspect, and just so far as the direct effects on people who live near fracking operations, do you see a point where the industry could make significant enough improvements that fracking will be safe — or at least safe enough — to be justifiable, from an energy standpoint?
RO: Well, I think they can do better, that’s true. And maybe they are doing better. I don’t know. I don’t think there will ever be a case where it will always be safe. There will always be problems; mistakes happen. And when they wipe out a community’s water by making a mistake, that’s the major issue. When you get right down to it, what’s more important? Can we find alternative ways of getting energy? I think there are alternatives, but what we’re doing is sending all our money to subsidize the oil and gas industry and sending very little money to subsidize alternative energy. That balance needs to change in favor of alternative energy, which has become much more affordable for people. I think it would not be worth taking the risk of contaminating water and air and ruining some people’s lives.

How the middle class got screwed: College costs, globalization and our new Insecurity Economy

The social safety net is in tatters. No jobs are safe. Who is to blame — and what has the anxiety done to us all?
Marianne Cooper


Excerpted from "Cut Adrift: Families in Insecure Times"

It is clear that American families have been struggling in recent decades. Less obvious are the forces that are responsible for this reversal of fortune. However, a significant body of research now points to a confluence of economic and social trends that many scholars agree have played a crucial role in the rise of financial insecurity.

The Rise of the Service Economy

Since the 1970s, work in the United States has undergone a dramatic transformation—a regression from the New Deal quest for stability and shared prosperity to a state in which work is precarious. In the words of sociologist Arne L. Kalleberg, work has become more “uncertain, unpredictable, and risky from the point of view of the worker.”

One reason for the rise of precarious work is the wholesale restructuring of the American economy from one based on manufacturing to one based on services. After World War II the manufacturing sector comprised 40 percent of the labor force; by 2005, that share had fallen to only 12 percent. The service sector now makes up about 80 percent of the jobs in the United States. Durable manufacturing jobs (autoworker, machinist, chemical engineer) offering higher wages and good benefits have been replaced by service sector jobs (store clerk, cashier, home health-care aide) that pay less, offer few or no benefits, and are more insecure.

Moreover, while the manufacturing sector tends to create good jobs at every employment level, the service sector tends to create a relatively small number of high-skill, high-paying jobs (in fields like finance, consulting, and medicine) along with a large number of low-skill, low-paid jobs (in retailing, child care, and hospitality). The result is that secure, semiskilled middle-income jobs like those that once fueled the rapid expansion of the American middle class are increasingly hard to find.

The Impact of Globalization

Beginning in the mid-to-late 1970s, U.S. firms began to face dramatically increased competition from around the world. To compete, American companies sought to lower labor costs, in part by outsourcing work to lower-wage countries. Technological advances aided this outsourcing process, as the growth in electronic tools for communication and information management meant that goods, services, and people could be coordinated and controlled from anywhere around the globe, enabling businesses to more easily move their operations to exploit cheap labor sources abroad.

Perhaps the most far-reaching effect of globalization has been a renegotiation of the unwritten social contract between American employers and employees. Managers now demand greater flexibility to quickly adapt and survive in an increasingly competitive global marketplace. In this context, the traditional employment relationship, in which work is steady and full-time, workers are rarely fired except for incompetence, working conditions are generally predictable and fair (often defined by union-negotiated contracts), and good employees can expect to climb a lifetime career ladder in the service of one employer, has come to seem unrealistic and onerous to business leaders. Today that traditional arrangement has largely disappeared, replaced by nonstandard, part-time, contract, and contingent work, generally offering reduced wages and scanty benefits. Mass layoffs are no longer an option of last resort but rather a key restructuring strategy used to increase short-term profits by reducing labor costs in both good times and bad.

The Decline of Unions

In this new environment, unions are struggling. Although manufacturing workers have a long history of labor organizing, service sector workers such as restaurant and retail employees do not, making it harder for service employee unions to grow. Moreover, globalization, technological changes, and the spread of flexible work arrangements have combined to enable employers to make an end run around unions by moving jobs to countries or parts of the United States where anti-union attitudes and laws predominate. As a consequence of these developments, union membership has steadily declined. In 1954, at the peak of union membership, 28 percent of employed workers were in unions. By 1983, only 20 percent of workers were union members. In 2012, union membership reached a historic low, with only 11 percent of American workers belonging to unions. Among full-time workers, median weekly earnings for union members are $943, while median weekly earnings for nonunion workers are $742. The decline of unions has severely curtailed workers’ ability to bargain collectively to maintain high wages and good benefits, indirectly fueling a steady decline in the value of the minimum wage. Moreover, the decline of unions has eroded a broader moral commitment to fair pay, from which even nonunion workers previously benefited.

Together, the rise of the service economy, globalization, the decline of unions, and the erosion of the old work contract between employers and employees have created a precarious work environment for more and more Americans. Between the 1980s and 2004, more than 30 million full-time workers lost their jobs involuntarily. And during the Great Recession of 2008–2009, another 8.9 million jobs were lost. In the past few years, long-term unemployment has reached levels not seen since the government began monitoring rates of joblessness after World War II.

Risk Shifts to the Individual

Over the last several decades, both government policy and private sector labor relations have evolved to reduce the sharing of the economic risks involved in managing lives, caring for families, and safeguarding futures. Instead, individual Americans are increasingly being asked to plan for and guarantee their own educations, health care, and retirements. If today’s families want a safety net to catch them when they fall, they need to weave their own.

Underlying this shift in risk is neoliberal political ideology, often identified with leaders like Ronald Reagan and Margaret Thatcher, which holds that people will work harder and make better decisions if they must defend themselves against the vicissitudes of life. Neoliberal doctrine views dependence in a negative light (arguing that “coddling” by government undermines individual initiative) and actually celebrates risk and uncertainty as sources of self-reliance. In this new paradigm, the individual is encouraged to gain greater control over his or her life by making personal risk-management choices within the free market (and living with the consequences of any misjudgments). In this “ownership society,” individuals must learn to be secure with insecurity; the goal is to amass security on our own rather than look to government help or collective action as sources of support.

With the rise of neoliberalism, the ethic of sharing risk among workers, employers, and the federal government that emerged after the New Deal was replaced by an aggressively free-market approach that pushed deregulation and privatization in order to minimize the role of government in economic life. At the same time, responsibility for social welfare has steadily devolved from the federal government to states, localities, and even the private sector. The push toward privatizing social services reached a new level when President George W. Bush, through his establishment of the office of faith-based organizations, sought to formally create public-private partnerships in which welfare provision would increasingly be supplied not by the government but by religious organizations. The result of this devolution of social services has been the replacement of a relatively stable, consistent system of safety-net programs with a patchwork of state, local, and private programs, all of which scramble to find funding.

Though many Americans may be unfamiliar with the risk shift story, the results are widely known. From 1980 to 2004, the share of workers covered by a traditional defined-benefit retirement pension decreased from 60 percent to 11 percent. In contrast, the share of workers covered by a defined-contribution retirement benefit like a 401(k) plan, in which the worker is fully responsible for saving and managing his or her savings, grew from 17 percent in 1980 to 61 percent in 2004.

Traditional employer-provided health-care coverage began to erode as well. From 1979 to 2004, coverage dropped from 69 percent to 55.9 percent. In 2010, 49 million Americans were uninsured, an increase of close to 13 million people since 2000. For workers who continue to receive coverage, their share of the costs has increased drastically. A survey conducted by the Employee Benefit Research Institute found that to cover medical costs, 45 percent have decreased their contributions to other savings, 35 percent have had difficulty paying other bills, and 24 percent have had difficulty paying for basic necessities.

The Affordable Care Act, passed in 2010 and upheld by the Supreme Court in 2012, will greatly expand affordable health care. As a result of the legislation, it is estimated that by 2019, 29 million Americans will gain health insurance coverage. However, an equal number will still be uninsured. And the number of uninsured may rise depending on how many states opt out of expanding Medicaid eligibility. Currently twenty states will not participate in the Medicaid expansion. Analysis of states that won’t expand Medicaid has found that, as a result, about 5.3 million people will earn too much to qualify under their state’s Medicaid eligibility level but too little to be eligible for tax credits that help offset the cost of insurance. Of the top ten least-insured metropolitan areas in the United States, seven are in states that will not expand Medicaid eligibility.

When it comes to aid for higher education, federal funding has grown, but that aid has mostly come in the form of loans rather than grants. Over the last decade, grants have made up between 22 and 28 percent of federal aid for education, while loans have made up between 61 and 70 percent. Moreover, even though there has been a 15 percent increase in the number of low-income students who receive a Pell Grant, the maximum award these students can receive now covers only about a third of the costs of a college education, as compared to around three-quarters in the 1970s.

The high price of a college degree is linked with a significant decline in the number of low- and moderate-income students who enroll in and graduate from college. Between 1992 and 2004, the percentage of low-income students enrolled in a four-year college decreased from 54 to 40 percent and the percentage of middle-income students decreased from 59 to 53 percent. For low-income children, the college completion rate has increased by only 4 percentage points between the generation born in the early 1960s and the generation born in the early 1980s. In contrast, among high-income children the college graduation rate increased 18 percentage points between generations. If education is the ladder by which less-advantaged Americans can hope to rise to the middle class and beyond, the rungs of that ladder are increasingly out of reach—yet another way in which the traditional system of shared social responsibility has been gradually dismantled over the past forty years.

Feeling insecure

With instability and uncertainty figuring prominently in people’s lives, it is important to ask if these social and economic trends are reflected in the way Americans feel. Do Americans feel more insecure? Have they become more worried? This question turns out to be a difficult one to answer.

The first obstacle to figuring out the answer is that we lack rich, long-term survey data that would enable us to tease out an in-depth answer. As a recent Rockefeller Foundation report noted, efforts to assess and measure people’s sense of security are rare. And the surveys we do have focus almost exclusively on job loss, which is just one risk among many that needs to be explored.

A second obstacle to measuring perceptions of security and insecurity across the decades is whether or not, over time, people continue to judge and evaluate their situations by the same criteria. In other words, can we assume that year in and year out people use the same yardstick to measure whether or not they are having a good or bad year? If assessments and meanings change over time and surveys don’t capture these subjective changes, then it’s not clear what our assessments are really measuring.

Richard Curtin, the director of the Survey of Consumers at the University of Michigan, addresses the subjective nature of evaluation in his analysis of changes in the standards by which consumers have judged the economy over the last fifty years. For example, during the 1960s people had high expectations and were very confident about the government’s ability to control the economy and keep things on track. Such optimism about rising affluence ran into a brick wall during the economic shocks of the 1970s and early 1980s. Initially, dissatisfaction ensued as people continued to hold on to the economic aspirations from the past. By the mid-1980s, however, after repeated economic setbacks, consumers lowered their expectations about achievable growth rates and became more tolerant of high inflation and high unemployment. By the early 1990s, fears about job security grew as Americans became skeptical about the government’s ability to use economic policy to prevent downturns.

At this point expectations were so diminished that it took one of the longest economic expansions in U.S. history to reset high levels of optimism. Fueled by the dot-com boom, aspirations soared. In 2000, consumer confidence hit a new peak. With expectations high, consumers in the early 2000s cited high unemployment as an issue even though it was only around 6 percent, half as much as it had been in the early 1980s. The optimism of the late 1990s soon gave way to pessimism because of the successive recessions of 2001 and late 2007. In fact, between January 2007 and mid-2008, the Index of Consumer Sentiment fell by 42 percent, the greatest percentage decline of any recession on record.

By mapping out historical shifts in consumers’ assessments of the economy, Curtin illustrates how “the same level of economic performance, say in terms of the inflation or unemployment rate, can be evaluated quite differently depending on what was thought to be the expected standard.” Moreover, changes in standards of evaluation usually occur very slowly and therefore can be difficult to detect. And since different groups of Americans have fared differently as a result of macroeconomic changes, it stands to reason that some Americans may have altered their standards and expectations sooner than others, and some may have altered their aspirations more significantly, and perhaps more permanently. In all likelihood, for example, those employed in the waning manufacturing sector, like autoworkers, had to let go of their expectations for a secure economic life long before and to a much larger degree than have college-educated Americans employed in the expanding service sector.

With this in mind, when sociologists Katherine Newman and Elisabeth Jacobs looked at survey data from the late 1970s to just before the Great Recession that examined people’s economic perceptions, they found something interesting. Their analysis revealed that, despite a few peaks and valleys, overall trends during this period suggest that Americans came to see themselves as more secure and in better financial shape, with about the same likelihood of losing their job. As we might expect, their analysis found that those with the lowest incomes and least education expressed the most vulnerability to employment insecurity and financial hardship, while those with higher incomes and more education expressed lower levels of concern.

Yet, despite their lower levels of concern overall, Americans with higher earnings, bachelor’s degrees, and managerial jobs have nonetheless exhibited the biggest increase in worry. Over the last thirty years, the proportions of college graduates and managers who said that they are likely to lose their jobs next year and the proportions who said they did worse financially this year than last year have gone up. The rise in concern about job security and financial stability among this group reflects new realities. During this period, the rate of job loss for the most educated went up faster than the rate of job loss for less-educated Americans. And when these workers lost their jobs and found new ones, the new jobs often didn’t pay as much. By 2001, workers with a bachelor’s degree experienced about a 23 percent drop in their earnings after losing a job. Such trends stand at odds with a long-standing belief among Americans with college degrees that their skills and credentials will translate into a solid footing. If discontent emerges when there is a gap between expectations and outcomes, then it would make sense for concern to increase more among the group that still thought it was well positioned to maintain a good, secure life. When this kind of an expectation smacks into job loss and downward mobility, people will start to worry.

For Americans with less education and lower earnings, it is very possible that worry as measured by feelings about job insecurity and financial hardship did not increase as much over a sustained period because they altered their expectations sooner and more permanently than did better-off Americans. As Newman and Jacobs point out, when those at the bottom lose a job, there is not as far to fall. For such families, their economic situation doesn’t change much from year to year; it’s always bad. Alternatively, other families may have taken on debt in order to hold on to their standards for security. The lack of a consistent and steep increase in worry among less well-off Americans thus does not necessarily signal that they feel more secure than they used to feel. Instead, it could mean that they have gotten used to having less or gotten used to the high levels of debt required for them to hold on to traditional conceptions of security amid declining fortunes. What is also likely going on is that people’s frame of reference for what security even means has undergone a transformation. Finally, it could also be the case that our standard measures for these issues (concern about job security and whether or not we are worse off this year than last) don’t allow us to accurately assess people’s feelings.

We do not have the kind of comprehensive longitudinal survey data that would enable us to detect subjective changes in Americans’ views about what constitutes security and insecurity and whether such definitions shape trends in worry and concern over time. But other measures point to increases in insecure feelings among Americans. For example, even before the Great Recession started, about half of those surveyed worried somewhat about their economic security, with one-quarter “very” or “fairly” worried. By 2009, just over half of those surveyed were now “very” or “fairly” worried. A Pew Research survey done in 2011 found that only 56 percent of those polled felt that they were better off financially than their own parents were when they were the same age, which is the lowest percentage since the question was first asked in 1981, when 69 percent said they felt better off. In 2012, the General Social Survey (GSS) found that less than 55 percent of Americans agreed that “people like me and my family have a good chance of improving our standard of living,” the lowest reported level since 1987. That same year, the GSS also found that a record number of Americans (8.4 percent) identified themselves as “lower class,” which is the highest percentage reported in the forty years that the GSS has asked this question.

And we may be seeing changes in the definition of the American dream. The American dream has long been equated with moving up the class ladder and owning a home, but recent surveys have noted shifts away from such notions. When Joel Benenson, chief pollster for President Obama, examined voters’ thoughts about economic security and the American dream in 2011, he found something new. His polling discovered that middle-class Americans were more concerned about keeping what they have than they were with getting more. Another 2011 survey found the same thing. When asked which is more important to them, 85 percent of those surveyed said “financial stability” and only 13 percent said “moving up the income ladder.” In 2007, a survey found that owning a home defined the American dream for 35 percent of those surveyed. By 2013, the top two definitions of the American dream were “retiring with financial security” (28 percent) and “being debt free” (23 percent). Only 18 percent of those surveyed defined the American dream as owning a home.

As the economy experienced wide-reaching transformations, meanings and feelings have likely changed along with it. A National Journal article noted how even the definition of being middle class has undergone adjustment, especially in light of the rise of contract workers or “permatemps,” those who may make a good wage but receive no benefits and can expect no job security. Capturing this adjustment, the article asks, “If they make a decent income, are permatemps middle class? Not by the standards of the past. But by the diminished redefinition, maybe they are: earning a middle-class living—for the moment.”

Amid these shifting economic tides and morphing definitions, many have lost their way. While old beliefs, such as that hard work will lead to security and prosperity, have fallen by the wayside, it’s unclear to many Americans what new truths lie in their stead. As President Obama’s pollster Joel Benenson discovered, this lack of direction causes a great deal of unease. “One of the big sources of concern for the people we talked with,” Benenson said, “was that they didn’t recognize any new rules in this environment. All of the rules they had learned about how you succeed, how you get ahead—those rules no longer apply, and they didn’t feel there was a set of new rules.” These kinds of examinations suggest that in the age of insecurity, Americans are not just trying to weather an economic storm, but they are also feeling their way through the dark.

In the throes of the Great Depression, Americans decided that there had to be a better way to organize government and society, one that would allow individuals and families to enjoy greater stability and security. This philosophical shift from “rugged individualism” to “united we stand, divided we fall” paved the way for the New Deal, the Great Society, and the forging of an unwritten but pervasive social contract between employers and employees that rested on mutual loyalties and protections. The government invested in its citizens, employers invested in their employees, and individuals worked hard to make the most of those investments. As a result, in the decades immediately following World War II, prosperity reigned, inequality decreased, and a large and thriving middle class was born.

Beginning in the 1970s, this system began to unravel. Large-scale changes from globalization and the rise of the service economy to a philosophical shift toward free-market ideology and a celebration of risk changed the landscape of security in America. Against this backdrop, the government curtailed its investments in and protections of its citizens, and employers rewrote the social contract to increase their own flexibility and demand greater risk bearing by workers. Individuals continued to work hard, but instead of getting ahead, more Americans struggled harder and harder just to get by.

Insecurity now defines our world. The secure society has become the “risk society.” The belief that we are all in this together has been replaced with the assumption that we are each on our own. Cut adrift, Americans are struggling to forge security in an insecure age.

Circular Debate


Monday, August 4, 2014

Defining Away Economic Failure

Paul Craig Roberts

Last week’s government guesstimate that second quarter 2014 real GDP growth was 4% seems nonsensical on its face. There is no evidence of increases in real median family incomes or real consumer credit that would lift the economy from a first quarter decline to 4% growth in the second quarter. Middle class store closings (Sears, Macy’s, J.C. Penney) have spread into the Dollar stores used by those with lower incomes. Family Dollar, a chain in the process of closing hundreds of stores, is being bought by Dollar Tree, the only one of the three Dollar store chains that is not in trouble. Wal-Mart’s sales have declined for the past five quarters. Declining sales and retail store closings indicate shrinking consumer purchasing power. Retail facts do not support the claim of a 4% GDP growth rate for the second quarter, and they do not support last Friday’s payroll job claim of 26,700 new retail jobs in July.

What about the housing market? Don’t the headlines accompanying last Friday’s payroll jobs report, such as “Hiring Settles Into Steady Gains,” mean more people working and a boost to the economy from a housing recovery? No. What the financial press did not report is that the US is in a structural jobs depression. In the 12-month period from July 2013 through July 2014, 2.3 million Americans of working age were added to the population. Of these 2.3 million, only 330 thousand entered the labor force. My interpretation of this is that the job market is so poor that only 14% of the increase in the working age population entered the labor force.

The decline in the labor force participation rate is bad news for the housing market. The US labor force participation rate peaked at 67.3% in 2000 and has been in a sustained downturn ever since. The rate of decline increased in October 2008 with the bank bailout and Quantitative Easing. From October 2008 to the present, 13.2 million Americans were added to the working age population, but only 818 thousand, or 6%, entered the labor force.  Despite government and financial press claims, the Federal Reserve’s multi-year policy of printing money with which to purchase bonds did not restore the housing or job markets.
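The arithmetic behind the two participation figures above can be checked directly. The population and labor-force numbers are the article’s; the function name is just illustrative:

```python
# Check of the article's labor-force arithmetic. The figures are the
# article's; this simply computes the ratios it cites.

def entry_rate(labor_force_entrants, population_growth):
    """Share of new working-age population that entered the labor force."""
    return labor_force_entrants / population_growth

# July 2013 - July 2014: 2.3 million added to working-age population,
# 330 thousand entered the labor force
print(round(entry_rate(330_000, 2_300_000) * 100))   # 14 (percent)

# October 2008 onward: 13.2 million added, 818 thousand entered
print(round(entry_rate(818_000, 13_200_000) * 100))  # 6 (percent)
```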

What about the stock market? It has been down in recent days but is still high historically. Isn’t the stock market evidence of a good economy? Not if stocks are up because corporations are buying back their own stock. Corporations are now the largest buyers of stocks. Recently we learned that from 2006 through 2013 corporations authorized $4.14 trillion in buybacks of their publicly traded stocks. Moreover, it appears that corporations have been borrowing the money from banks with which to buy back their stocks. Last year there were $754.8 billion in authorized stock buybacks and $782.5 billion in corporate borrowing. In the first three months of this year, companies purchased $160 billion of their own stocks.

Borrowing to buyback stock leaves a company with debt but without new investment with which to produce revenues to service the debt. The massive stock buybacks demonstrate that American capitalism is now corrupt. In order to maximize personal short-term financial benefits flowing from bonuses, stock options, and capital gains, CEOs, boards of directors, and shareholders are decapitalizing public companies and loading them up with debt.

Well, isn’t the economy being helped by the return of manufacturing to America? Apparently not. Data for 1999-2012 indicate that the offshoring of manufacturing increased by 9%.

Susan Hester, an economist for the Retail Industry Leaders Association, has decided to turn the loss of manufacturing jobs into a virtue. Her argument is that retail employment dwarfs manufacturing employment and that more American jobs can be created by selling more imports than by encouraging manufacturing in order to provide exports.

According to Ms. Hester’s research, the US makes more money from the retail side than from the production side. She concludes that the value added to a product by offshore labor is a small percentage of the value added by “managing offshored production, handling Customs clearances, managing warehouses and distribution, marketing apparel products, and by millions of people in the retail sector stocking shelves and working cash registers.”

In other words, the US manufacturing jobs moved offshore are just a throwaway. The money is made in selling the imports.

Ms. Hester neglects to recognize that when offshored production is brought to the US to be marketed, it comes in as imports and results in a larger US trade deficit. Foreigners use dollars paid to them for the products that they make for US firms to purchase ownership of US bonds, stocks, and real assets such as land, buildings, and companies. Consequently, interest, profits, capital gains, and rents associated with the foreign purchases of US assets now flow to foreigners and not to Americans. The current account worsens.

It works like this: The excess of US imports over US exports leaves foreigners with claims on US income and wealth that are settled by foreign purchases of US assets. The income produced by these assets now flows abroad with the consequence that income earned by foreigners on their US investments exceeds the income earned by the US on its foreign investments.
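The accounting identity just described can be sketched with invented numbers. Everything here (the dollar amounts and the 4% yield) is hypothetical, chosen only to make the mechanics concrete:

```python
# Hypothetical sketch of the trade-deficit accounting identity described
# above. All figures are invented for illustration.

exports = 1_600  # US exports, $ billions (hypothetical)
imports = 2_300  # US imports, $ billions (hypothetical)

# The excess of imports over exports leaves foreigners with claims
# on US income and wealth.
trade_deficit = imports - exports

# Those claims are settled by foreign purchases of US assets (bonds,
# stocks, land, companies), so foreign ownership rises by that amount.
foreign_asset_purchases = trade_deficit

# Income on those assets (interest, profits, rents) now flows abroad.
asset_yield_percent = 4  # hypothetical annual return
income_flowing_abroad = foreign_asset_purchases * asset_yield_percent / 100

print(trade_deficit)          # 700
print(income_flowing_abroad)  # 28.0
```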

According to Ms. Hester’s reasoning, Americans would be better off if they produced nothing that they need and, in place of manufacturing, relied on the incomes of US fashion designers and pattern makers who specify the offshored production for US markets, on the compliance officers and freight agents, on production planning and expediting clerks, and on longshore workers and railroad employees who deliver the foreign-made goods to US consumer markets.

Ms. Hester believes that the value-added by offshored manufacturing is inconsequential. How then did China get rich from it, becoming the second largest economy and employing 100 million people in manufacturing (compared to America’s 12 million), and acquire the largest foreign reserves of any country?

After Ms. Hester answers that question, she can explain why US corporations go to the trouble of offshoring their manufacturing if the contribution to value added is so low. The value added is obviously substantial enough for the labor cost savings to pay for transportation costs to the US from Asia, for the cost of set-up and management of foreign-based facilities, and for the cost of the adverse publicity from abandoning US communities for Asia, and still leave enough value added after all costs to enlarge profits and drive up stock prices and executive bonuses.

Ms. Hester fools herself. The low value that she calculates Chinese, Indian, or Vietnamese labor adds to the price of a shirt reflects the low foreign labor cost, not a low value of the shirt in US markets or a low value of an iPhone in European markets. Marketing, warehousing and distribution are done in the US by more highly paid people, and this is why it looks like the value added comes from sources other than manufacturing. Ms. Hester overlooks that the lower cost of foreign labor does not translate into a less valued product but into higher profits.

Economists assume that the labor cost savings are passed on to consumers in lower prices, but I have not experienced declining prices of Nike and Merrell sports shoes, of sheets and towels, of Brooks Brothers and Ralph Lauren shirts, of Apple computers, or whatever as a result of moving US production offshore. The labor cost savings go into profits, managerial bonuses, and capital gains for shareholders, and they are one reason for the extraordinary increase in income and wealth inequality in the US.

Focused on short-term profit, manufacturers and retailers are destroying the US consumer market. The average annual salary of a US apparel manufacturing worker is $35,000. The average salary of US retail employees is less than half of that amount and provides no discretionary income with which to boost consumer spending in retail stores.

The American corporate practice of offshoring manufacturing has made it impossible for the Obama regime to keep its promises of creating manufacturing jobs and exports. Unable to create real jobs and real exports, the US government has proposed to create virtual jobs and virtual exports made by “factoryless goods producers.” In order to keep his promise of doubling the growth of US exports, the Obama regime wants to redefine foreign output as US output.

A “factoryless goods producer” is a newly invented statistical category. It is a company like Nike or Apple that outsources the production of its products to foreign companies. The Obama regime is proposing to redefine companies such as Apple that own a brand name or a product design as manufacturing companies even though the companies do not manufacture.

In other words, whether or not a US company is a manufacturer does not depend on its activity, but on its ownership of a brand name made for the company by a foreign manufacturer. For example, Apple iPhones made in China and sold in Europe would be reported as US exports of manufactured goods, and iPhones sold in the US would no longer be classified as imports but as US manufacturing output. Apple’s non-manufacturing employees would be transformed into manufacturing employment.

Clearly, the purpose of this statistical deception is to inflate the number of US manufacturing jobs, US manufacturing output, and US exports and to convert imports into domestic production. It is a scheme that eliminates the large US trade deficit by redefinition.

The reclassification would leave the government’s Office of Statistical Lies with the anomaly that products made in China, India, Indonesia or wherever become US GDP as long as the brand name is owned by a US corporation, but the payments to the Asian workers who produced the products remain as claims on US wealth and can be converted into ownership of US bonds, companies, and real estate.

For example, Chinese workers produced the Apple products, and China has the claims on US wealth to prove it. How are these claims accounted for statistically by the Obama regime’s redefinition? The US can add China’s production of the Apple products to US GDP, but how does the US deduct the Chinese-produced Apple products from China’s GDP? And how does the Obama regime’s redefinition get rid of the payments by Apple to the Chinese labor that produced the products? These payments comprise claims on US wealth.

In other words, the reclassification would double count the output of Apple’s products. If every country does this, world GDP will rise statistically regardless of the fact that no more goods and services are produced. Perhaps this is the way to define away world poverty.
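The double-counting problem can be illustrated with invented numbers. The $100 billion figure is hypothetical; the point is only that the same output appears twice under the proposed reclassification:

```python
# Hypothetical illustration of the double-counting described above.
# All figures are invented.

factory_output = 100  # value of goods assembled in China, $ billions

# Honest accounting: output counts once, in the country that produced it.
china_gdp = factory_output
us_gdp = 0
world_gdp_honest = china_gdp + us_gdp

# "Factoryless goods producer" accounting: the US also claims the output
# because a US firm owns the brand, while China still counts what its
# factories actually made.
world_gdp_reclassified = china_gdp + factory_output

print(world_gdp_honest)        # 100
print(world_gdp_reclassified)  # 200 -- same goods, doubled on paper
```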

“Factoryless goods producers” was foreshadowed by Harvard professor Michael Porter’s 2006 competitiveness report, a justification for jobs offshoring. Defending jobs offshoring, Porter downplayed the rise in the US trade deficit and decline in the US GDP growth rate caused by jobs offshoring. Porter argued, in effect, that ownership of the revenues and products, not the location in which the revenues and products are produced, should determine their classification. As I pointed out in my critique (see The Failure of Laissez Faire Capitalism and Economic Dissolution of the West), the result would be to raise US GDP by the amount of US production outsourced abroad and by the output of US overseas subsidiaries and to decrease the GDP of the countries in which the manufacturing actually takes place. Consistency would require that the German and Japanese autos, for example, that are produced in the US with US labor would become deductions from US GDP and be reported as German and Japanese GDP.

As I have emphasized for years, the West already lives in the dystopia forecast by George Orwell.


Jobs are created by hypothetical add-ons to the reported payroll figures and by inappropriate use of seasonal adjustments. Inflation is erased by substituting lower priced items in the inflation index for those that rise in price and by redefining rising prices as quality improvements. Real GDP growth is magicked into existence by deflating nominal GDP with the understated measure of inflation. Now corporations without factories are going to produce US manufacturing output, US exports, and US manufacturing jobs!
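The deflator arithmetic the paragraph alludes to is simple: real growth is nominal growth deflated by measured inflation, so understating inflation overstates real growth. A sketch with hypothetical rates (the 5% nominal figure and both inflation rates are invented for illustration):

```python
# Hypothetical illustration: reported real GDP growth depends on which
# inflation measure is used to deflate nominal growth. All rates invented.

def real_growth(nominal_growth, inflation):
    """Real growth rate implied by nominal growth and an inflation measure."""
    return (1 + nominal_growth) / (1 + inflation) - 1

nominal = 0.05  # 5% nominal GDP growth (hypothetical)

true_inflation = 0.04        # actual inflation (hypothetical)
reported_inflation = 0.015   # understated official measure (hypothetical)

# Deflating with the lower, understated inflation rate more than
# triples the reported real growth rate.
print(round(real_growth(nominal, true_inflation) * 100, 2))      # 0.96
print(round(real_growth(nominal, reported_inflation) * 100, 2))  # 3.45
```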

Every sphere of Western existence is defined by propaganda. Consequently, we have reached a perfect state of nihilism. We can believe nothing that we are told by government, corporations, and the presstitute media.

We live in a lie, and the lie is ever expanding.

Sunday, August 3, 2014

The Secret Government Rulebook For Labeling You a Terrorist


The Obama administration has quietly approved a substantial expansion of the terrorist watchlist system, authorizing a secret process that requires neither “concrete facts” nor “irrefutable evidence” to designate an American or foreigner as a terrorist, according to a key government document obtained by The Intercept.

The “March 2013 Watchlisting Guidance,” a 166-page document issued last year by the National Counterterrorism Center, spells out the government’s secret rules for putting individuals on its main terrorist database, as well as the no fly list and the selectee list, which triggers enhanced screening at airports and border crossings. The new guidelines allow individuals to be designated as representatives of terror organizations without any evidence that they are actually connected to such organizations, and they give a single White House official the unilateral authority to place entire “categories” of people the government is tracking onto the no fly and selectee lists. They broaden the authority of government officials to “nominate” people to the watchlists based on what is vaguely described as “fragmentary information.” They also allow for dead people to be watchlisted.

Over the years, the Obama and Bush Administrations have fiercely resisted disclosing the criteria for placing names on the databases—though the guidelines are officially labeled as unclassified. In May, Attorney General Eric Holder even invoked the state secrets privilege to prevent watchlisting guidelines from being disclosed in litigation launched by an American who was on the no fly list. In an affidavit, Holder called them a “clear roadmap” to the government’s terrorist-tracking apparatus, adding: “The Watchlisting Guidance, although unclassified, contains national security information that, if disclosed … could cause significant harm to national security.”

The rulebook, which The Intercept is publishing in full, was developed behind closed doors by representatives of the nation’s intelligence, military, and law-enforcement establishment, including the Pentagon, CIA, NSA, and FBI. Emblazoned with the crests of 19 agencies, it offers the most complete and revealing look into the secret history of the government’s terror list policies to date. It reveals a confounding and convoluted system filled with exceptions to its own rules, and it relies on the elastic concept of “reasonable suspicion” as a standard for determining whether someone is a possible threat. Because the government tracks “suspected terrorists” as well as “known terrorists,” individuals can be watchlisted if they are suspected of being a suspected terrorist, or if they are suspected of associating with people who are suspected of terrorism activity.

“Instead of a watchlist limited to actual, known terrorists, the government has built a vast system based on the unproven and flawed premise that it can predict if a person will commit a terrorist act in the future,” says Hina Shamsi, the head of the ACLU’s National Security Project. “On that dangerous theory, the government is secretly blacklisting people as suspected terrorists and giving them the impossible task of proving themselves innocent of a threat they haven’t carried out.” Shamsi, who reviewed the document, added, “These criteria should never have been kept secret.”

The document’s definition of “terrorist” activity includes actions that fall far short of bombing or hijacking. In addition to expected crimes, such as assassination or hostage-taking, the guidelines also define destruction of government property and damaging computers used by financial institutions as activities meriting placement on a list. They also define as terrorism any act that is “dangerous” to property and intended to influence government policy through intimidation.

This combination—a broad definition of what constitutes terrorism and a low threshold for designating someone a terrorist—opens the way to ensnaring innocent people in secret government dragnets. It can also be counterproductive. When resources are devoted to tracking people who are not genuine risks to national security, the actual threats get fewer resources—and might go unnoticed.

“If reasonable suspicion is the only standard you need to label somebody, then it’s a slippery slope we’re sliding down here, because then you can label anybody anything,” says David Gomez, a former senior FBI special agent with experience running high-profile terrorism investigations. “Because you appear on a telephone list of somebody doesn’t make you a terrorist. That’s the kind of information that gets put in there.”

The fallout is personal too. There are severe consequences for people unfairly labeled a terrorist by the U.S. government, which shares its watchlist data with local law enforcement, foreign governments, and “private entities.” Once the U.S. government secretly labels you a terrorist or terrorist suspect, other institutions tend to treat you as one. It can become difficult to get a job (or simply to stay out of jail). It can become burdensome—or impossible—to travel. And routine encounters with law enforcement can turn into ordeals.

A chart from the “March 2013 Watchlisting Guidance”

In 2012 Tim Healy, the former director of the FBI’s Terrorist Screening Center, described to CBS News how watchlists are used by police officers. “So if you are speeding, you get pulled over, they’ll query that name,” he said. “And if they are encountering a known or suspected terrorist, it will pop up and say call the Terrorist Screening Center…. So now the officer on the street knows he may be dealing with a known or suspected terrorist.” Of course, the problem is that the “known or suspected terrorist” might just be an ordinary citizen who should not be treated as a menace to public safety.

Until 2001, the government did not prioritize building a watchlist system. On 9/11, the government’s list of people barred from flying included just 16 names. Today, the no fly list has swelled to tens of thousands of “known or suspected terrorists” (the guidelines refer to them as KSTs). The selectee list subjects people to extra scrutiny and questioning at airports and border crossings. The government has created several other databases, too. The largest is the Terrorist Identities Datamart Environment (TIDE), which gathers terrorism information from sensitive military and intelligence sources around the world. Because it contains classified information that cannot be widely distributed, there is yet another list, the Terrorist Screening Database, or TSDB, which has been stripped of TIDE’s classified data so that it can be shared. When government officials refer to “the watchlist,” they are typically referring to the TSDB. (TIDE is the responsibility of the National Counterterrorism Center; the TSDB is managed by the Terrorist Screening Center at the FBI.)

In a statement, a spokesman for the National Counterterrorism Center told The Intercept that “the watchlisting system is an important part of our layered defense to protect the United States against future terrorist attacks” and that “watchlisting continues to mature to meet an evolving, diffuse threat.” He added that U.S. citizens are afforded extra protections to guard against improper listing, and that no one can be placed on a list solely for activities protected by the First Amendment. A representative of the Terrorist Screening Center did not respond to a request for comment.

The system has been criticized for years. In 2004, Sen. Ted Kennedy complained that he was barred from boarding flights on five separate occasions because his name resembled the alias of a suspected terrorist. Two years later, CBS News obtained a copy of the no fly list and reported that it included Bolivian president Evo Morales and Lebanese parliament head Nabih Berri. One of the watchlists snared Mikey Hicks, a Cub Scout who got his first of many airport pat-downs at age two. In 2007, the Justice Department’s inspector general issued a scathing report identifying “significant weaknesses” in the system. And in 2009, after a Nigerian terrorist was able to board a passenger flight to Detroit and nearly detonated a bomb sewn into his underwear despite his name having been placed on the TIDE list, President Obama admitted that there had been a “systemic failure.”

Obama hoped that his response to the “underwear bomber” would be a turning point. In 2010, he gave increased powers and responsibilities to the agencies that nominate individuals to the lists, placing pressure on them to add names. His administration also issued a set of new guidelines for the watchlists. Problems persisted, however. In 2012, the U.S. Government Accountability Office published a report that bluntly noted there was no agency responsible for figuring out “whether watchlist-related screening or vetting is achieving intended results.” The guidelines were revised and expanded in 2013—and a source within the intelligence community subsequently provided a copy to The Intercept.

“Concrete facts are not necessary”

The five chapters and 11 appendices of the “Watchlisting Guidance” are filled with acronyms, legal citations, and numbered paragraphs; it reads like an arcane textbook with a vocabulary all its own. Different types of data on suspected terrorists are referred to as “derogatory information,” “substantive derogatory information,” “extreme derogatory information” and “particularized derogatory information.” The names of suspected terrorists are passed along a bureaucratic ecosystem of “originators,” “nominators,” “aggregators,” “screeners,” and “encountering agencies.” And “upgrade,” usually a happy word for travelers, is repurposed to mean that an individual has been placed on a more restrictive list.

The heart of the document revolves around the rules for placing individuals on a watchlist. “All executive departments and agencies,” the document says, are responsible for collecting and sharing information on terrorist suspects with the National Counterterrorism Center. It sets a low standard—“reasonable suspicion”—for placing names on the watchlists, and offers a multitude of vague, confusing, or contradictory instructions for gauging it. In the chapter on “Minimum Substantive Derogatory Criteria”—even the title is hard to digest—the key sentence on reasonable suspicion offers little clarity:
“To meet the REASONABLE SUSPICION standard, the NOMINATOR, based on the totality of the circumstances, must rely upon articulable intelligence or information which, taken together with rational inferences from those facts, reasonably warrants a determination that an individual is known or suspected to be or has been knowingly engaged in conduct constituting, in preparation for, in aid of, or related to TERRORISM and/or TERRORIST ACTIVITIES.”

The rulebook makes no effort to define an essential phrase in the passage—“articulable intelligence or information.” After stressing that hunches are not reasonable suspicion and that “there must be an objective factual basis” for labeling someone a terrorist, it goes on to state that no actual facts are required:
“In determining whether a REASONABLE SUSPICION exists, due weight should be given to the specific reasonable inferences that a NOMINATOR is entitled to draw from the facts in light of his/her experience and not on unfounded suspicions or hunches. Although irrefutable evidence or concrete facts are not necessary, to be reasonable, suspicion should be as clear and as fully developed as circumstances permit.”

While the guidelines nominally prohibit nominations based on unreliable information, they explicitly regard “uncorroborated” Facebook or Twitter posts as sufficient grounds for putting an individual on one of the watchlists. “Single source information,” the guidelines state, “including but not limited to ‘walk-in,’ ‘write-in,’ or postings on social media sites, however, should not automatically be discounted … the NOMINATING AGENCY should evaluate the credibility of the source, as well as the nature and specificity of the information, and nominate even if that source is uncorroborated.”

There are a number of loopholes for putting people onto the watchlists even if reasonable suspicion cannot be met.

One is clearly defined: The immediate family of suspected terrorists—their spouses, children, parents, or siblings—may be watchlisted without any suspicion that they themselves are engaged in terrorist activity. But another loophole is quite broad—“associates” who have a defined relationship with a suspected terrorist, but whose involvement in terrorist activity is not known. A third loophole is broader still—individuals with “a possible nexus” to terrorism, but for whom there is not enough “derogatory information” to meet the reasonable suspicion standard.

Americans and foreigners can be nominated for the watchlists if they are associated with a terrorist group, even if that group has not been designated as a terrorist organization by the U.S. government. They can also be treated as “representatives” of a terrorist group even if they have “neither membership in nor association with the organization.” The guidelines do helpfully note that certain associations, such as providing janitorial services or delivering packages, are not grounds for being watchlisted.

The nomination system appears to lack meaningful checks and balances. Although government officials have repeatedly said there is a rigorous process for making sure no one is unfairly placed in the databases, the guidelines acknowledge that all nominations of “known terrorists” are considered justified unless the National Counterterrorism Center has evidence to the contrary. In a recent court filing, the government disclosed that there were 468,749 KST nominations in 2013, of which only 4,915 were rejected–a rate of about one percent. The rulebook appears to invert the legal principle of due process, defining nominations as “presumptively valid.”


Profiling categories of people

While the nomination process appears methodical on paper, in practice there is a shortcut around the entire system. Known as a “threat-based expedited upgrade,” it gives a single White House official the unilateral authority to elevate entire “categories of people” whose names appear in the larger databases onto the no fly or selectee lists. This can occur, the guidelines state, when there is a “particular threat stream” indicating that a certain type of individual may commit a terrorist act.

This extraordinary power for “categorical watchlisting”—otherwise known as profiling—is vested in the assistant to the president for homeland security and counterterrorism, a position formerly held by CIA Director John Brennan that does not require Senate confirmation.

The rulebook does not indicate what “categories of people” have been subjected to threat-based upgrades. It is not clear, for example, whether a category might be as broad as military-age males from Yemen. The guidelines do make clear that American citizens and green card holders are subject to such upgrades, though government officials are required to review their status in an “expedited” procedure. Upgrades can remain in effect for 72 hours before being reviewed by a small committee of senior officials. If approved, they can remain in place for 30 days before a renewal is required, and can continue “until the threat no longer exists.”

“In a set of watchlisting criteria riddled with exceptions that swallow rules, this exception is perhaps the most expansive and certainly one of the most troubling,” Shamsi, the ACLU attorney, says. “It’s reminiscent of the Bush administration’s heavily criticized color-coded threat alerts, except that here, bureaucrats can exercise virtually standard-less authority in secret with specific negative consequences for entire categories of people.”

The National Counterterrorism Center declined to provide any details on the upgrade authority, including how often it has been exercised and for what categories of people.

Pocket litter and scuba gear

The guidelines provide the clearest explanation yet of what is happening when Americans and foreigners are pulled aside at airports and border crossings by government agents. The fifth chapter, titled “Encounter Management and Analysis,” details the type of information that is targeted for collection during “encounters” with people on the watchlists, as well as the different organizations that should collect the data. The Department of Homeland Security is described as having the largest number of encounters, but other authorities, ranging from the State Department and Coast Guard to foreign governments and “certain private entities,” are also involved in assembling “encounter packages” when watchlisted individuals cross their paths. The encounters can be face-to-face meetings or electronic interactions—for instance, when a watchlisted individual applies for a visa.

In addition to data like fingerprints, travel itineraries, identification documents and gun licenses, the rules encourage screeners to acquire health insurance information, drug prescriptions, “any cards with an electronic strip on it (hotel cards, grocery cards, gift cards, frequent flyer cards),” cellphones, email addresses, binoculars, peroxide, bank account numbers, pay stubs, academic transcripts, parking and speeding tickets, and want ads. The digital information singled out for collection includes social media accounts, cell phone lists, speed dial numbers, laptop images, thumb drives, iPods, Kindles, and cameras. All of the information is then uploaded to the TIDE database.

Screeners are also instructed to collect data on any “pocket litter,” scuba gear, EZ Passes, library cards, and the titles of any books, along with information about their condition—“e.g., new, dog-eared, annotated, unopened.” Business cards and conference materials are also targeted, as well as “anything with an account number” and information about any gold or jewelry worn by the watchlisted individual. Even “animal information”—details about pets from veterinarians or tracking chips—is requested. The rulebook also encourages the collection of biometric or biographical data about the travel partners of watchlisted individuals.

The list of government entities that collect this data includes the U.S. Agency for International Development, which is neither an intelligence nor law-enforcement agency. As the rulebook notes, USAID funds foreign aid programs that promote environmentalism, health care, and education. USAID, which presents itself as committed to fighting global poverty, nonetheless appears to serve as a conduit for sensitive intelligence about foreigners. According to the guidelines, “When USAID receives an application seeking financial assistance, prior to granting, these applications are subject to vetting by USAID intelligence analysts at the TSC.” The guidelines do not disclose the volume of names provided by USAID, the type of information it provides, or the number and duties of the “USAID intelligence analysts.”

A USAID spokesman told The Intercept that “in certain high risk countries, such as Afghanistan, USAID has determined that vetting potential partner organizations with the terrorist watchlist is warranted to protect U.S. taxpayer dollars and to minimize the risk of inadvertent funding of terrorism.” He stated that since 2007, the agency has checked “the names and other personal identifying information of key individuals of contractors and grantees, and sub-recipients.”


Death and the watchlist

The government has been widely criticized for making it impossible for people to know why they have been placed on a watchlist, and for making it nearly impossible to get off. The guidelines bluntly state that “the general policy of the U.S. Government is to neither confirm nor deny an individual’s watchlist status.” But the courts have taken exception to the official silence and foot-dragging: In June, a federal judge described the government’s secretive removal process as unconstitutional and “wholly ineffective.”

The difficulty of getting off the list is highlighted by a passage in the guidelines stating that an individual can be kept on the watchlist, or even placed onto the watchlist, despite being acquitted of a terrorism-related crime. The rulebook justifies this by noting that conviction in U.S. courts requires evidence beyond a reasonable doubt, whereas watchlisting requires only a reasonable suspicion. Once suspicion is raised, even a jury’s verdict cannot erase it.

Not even death provides a guarantee of getting off the list. The guidelines say the names of dead people will stay on the list if there is reason to believe the deceased’s identity may be used by a suspected terrorist–which the National Counterterrorism Center calls a “demonstrated terrorist tactic.” In fact, for the same reason, the rules permit the deceased spouses of suspected terrorists to be placed onto the list after they have died.

For the living, the process of getting off the watchlist is simple yet opaque. A complaint can be filed through the Department of Homeland Security Traveler Redress Inquiry Program, which launches an internal review that is not subject to oversight by any court or entity outside the counterterrorism community. The review can result in removal from a watchlist or an adjustment of watchlist status, but the individual will not be told if he or she prevails. The guidelines highlight one of the reasons why it has been difficult to get off the list—if multiple agencies have contributed information on a watchlisted individual, all of them must agree to removing him or her.

If a U.S. citizen is placed on the no fly list while abroad and is turned away from a flight bound for the U.S., the guidelines say they should be referred to the nearest U.S. embassy or consulate, which is prohibited from informing them why they were blocked from flying. According to the rules, these individuals can be granted a “One-Time Waiver” to fly, though they will not be told that they are traveling on a waiver. Back in the United States, they will be unable to board another flight.

The document states that nominating agencies are “under a continuing obligation” to provide exculpatory information when it emerges. It adds that the agencies are expected to conduct annual reviews of watchlisted American citizens and green card holders. It is unclear whether foreigners—or the dead—are reviewed at the same pace. As the rulebook notes, “watchlisting is not an exact science.”

Josh Begley, Lynn Dombek, and Peter Maass contributed to this story.

Saturday, August 2, 2014

What’s So Bad About Casual Drug Use?

Most people who try cocaine don't go on to become addicts
By Nick Gillespie TIME

So Representative Trey Radel, the Republican from Florida, a self-styled “conservative voice” in Congress, has pleaded guilty to a misdemeanor charge of cocaine possession. And Toronto’s city council has stripped Mayor Rob Ford of much of his power after it came out that he had smoked crack (never mind that Ford’s well-known drunken antics were not cause for such censure).

Rather than arguing whether such figures are hypocrites (Radel voted in favor of mandatory drug testing for food-stamp beneficiaries) or debating how they should attempt damage control (he’s also pledged to enter a substance-abuse program after paying a fine and receiving a year’s probation), let’s ask a more basic question: What’s so scandalous about casual drug use?

Yes, some substances are illegal (though as the case of marijuana shows, this status is subject to change). But absent other indicators of dangerous and antisocial behavior — like driving while intoxicated — why should snorting coke be treated any differently than, say, drinking a beer?

Prohibitionists typically deny the very possibility of responsible or voluntary use of currently illegal substances. They argue that drugs such as coke, heroin, ecstasy, methamphetamine and even marijuana are verboten precisely because they simply can’t be used casually. Any use either already constitutes abuse or quickly leads to it. “Drugs are not dangerous because they are illegal,” former drug czar William Bennett and former Health, Education and Welfare Secretary Joseph Califano wrote in a 2011 Wall Street Journal op-ed, “they are illegal because they are dangerous.”

Nearly 50% of people have tried an illegal drug at least once, yet most don’t repeat the experience. With cocaine, most who have tried it not only don’t go on to become addicts under even the most expansive possible definition of the term, they don’t even go on to become regular users.

According to the latest National Survey on Drug Use and Health, 14.5% of Americans ages 12 and older have tried cocaine at least once, but just 1.8% report using the drug recreationally in the past year. And just 0.6% have used it in the past 30 days, which would seem to be the minimal definition of a casual user.

The same pattern is true for heroin, which is typically talked about as magically addictive. Fear of the drug is surely one of the reasons why just 1.8% of Americans have ever tried it at all. But only 0.3% report using it in the past year and just 0.1% in the past month. That pattern simply shouldn’t be possible if these drugs were as addictive as commonly thought.

In the early 1970s, researcher Lee N. Robins led a study commissioned by the Department of Defense that followed tens of thousands of Vietnam War veterans as they returned to the U.S. Use of narcotics and heroin was rampant among soldiers stationed in Southeast Asia, with as many as 20% showing signs of addiction. Yet during the first year back, “only 5% of those who had been addicted in Vietnam were addicted in the U.S.” and “at three years, only 12% of those addicted in Vietnam had been addicted at any time in the three years since return, and for those readdicted, the addiction had usually been very brief.” It wasn’t for lack of access to junk, either: half of the returning addicts said they’d tried heroin at least once since arriving back home.

As my Reason colleague Jacob Sullum has documented, such take-it-or-leave-it findings are common in drug research. In his 2004 book Saying Yes and other places, he’s detailed work in which researchers find a surprising range among heroin users, including a study that concluded, “It seems possible for young people from a number of different backgrounds, family patterns and educational abilities to use heroin occasionally without becoming addicted.”

It’s also true that regular drug users can often function quite well. Sigmund Freud used cocaine habitually for years, and his first major scientific publication was about the wonders of the drug (he eventually forsook it). Another pioneering late 19th and early 20th century man of medicine, William Halsted, was dependent on cocaine and morphine during an illustrious career that revolutionized and modernized surgical techniques.

None of this is a brief for snorting cocaine, shooting heroin or smoking marijuana (a substance that 58% of Americans think should be legal for recreational use) any more than it is a plea for drinking single-malt whiskey or pinot noir.

But in an age in which we are expected to use legal drugs (like beer) and prescription medications (Adderall) responsibly, it’s time to extend that same notion to currently illegal substances whose effects and properties are widely misunderstood. Indeed, the effects of coke, heroin and the rest are a mystery partly because their outlaw status makes it difficult both to research them and have honest discussions about them.

Radel has announced that he’ll be taking a leave of absence from Congress while he enters rehab. Perhaps he does need to sober up — that’s really for him and his family to decide — but it’s far from clear that his problem is particular to cocaine or illegal drugs. Indeed, in announcing his plans, he didn’t blame cocaine for his troubles but “the disease of alcoholism,” which he says led him to make really bad decisions. And alcohol, after all, is perfectly legal.