foreigners
The Root-Cause War
Palestinians may be the big winners of the Lebanon conflict.
By Shmuel Rosner
Posted Monday, Aug. 7, 2006, at 12:34 PM ET

It's an amazing figure: Almost 15,000 shells were fired by the Israeli armed forces in the last six weeks. Not on Lebanon, but in the Gaza Strip. The number of Palestinians killed in that period is close to 300. No wonder Palestinian leaders are screaming for a halt to the "aggression" and feeling forgotten by the world as the war in Lebanon keeps moving from one "worst attack thus far" to yet another even worse assault.

But the Palestinians will have one thing to celebrate as the Lebanon war nears its final act of violence. On the diplomatic front, they might be the winners of this war, or, at least, the main beneficiary. And this achievement, more than many others, reflects Israel's failure to win the propaganda battle with its enemies.

This was very clear last week when Gen. John Abizaid, head of the U.S. Central Command, testified before the Senate Armed Services Committee. Speaking mainly about the war in Iraq, he was also asked about Lebanon, the unavoidable question of the day. What he had to say should serve as a wake-up call for all those trying to persuade the world that the eruption of violence in the Middle East reflects the final, most reliable proof that the conventional strategic pyramid of the past—the notion that solving the Israeli-Palestinian conflict is the key to solving other problems of the greater Middle East—is no longer relevant.

"We must find a comprehensive solution to the corrosive Arab-Israeli conflict," Abizaid said. That is, until we solve that problem, no real progress can be achieved in the region.

British Prime Minister Tony Blair, who is generally supportive of the Israeli effort in the north, expressed similar sentiments in his speech to the World Affairs Council in Los Angeles last week.

Blair began with an apology: "I know it can be very irritating for Israel to be told that this issue is of cardinal importance," he declared. But then he made his key point:

I want what we all now acknowledge we need: a two-state solution. … Its significance for the broader issue of the Middle East … is this. The real impact of a settlement is more than correcting the plight of the Palestinians. It is that such a settlement would be the living, tangible, visible proof that the region and therefore the world can accommodate different faiths and cultures. … It is, in other words, the total and complete rejection of the case of Reactionary Islam. It destroys not just their most effective rallying call, it fatally undermines their basic ideology.

This is exactly the opposite of the conclusion Israel wanted. A couple of days ago, David Makovsky of the Washington Institute wrote that Israel discovered in recent years that "[e]nding occupation … has not brought Israel greater security from either of these rejectionist groups." Israeli Prime Minister Ehud Olmert, in his speech to the Knesset explaining the rationale for war, said, "The campaign we are engaged in these days is against the terror organizations operating from Lebanon and Gaza. These organizations are nothing but 'sub-contractors'… of the terror-sponsoring and peace-rejecting regimes."

In other words, Israel believed that the war in the north would show the world that occupation is not what's destabilizing the region; rather, it is the regimes in Damascus and Tehran and the terrorists operating on their behalf. These regimes are fighting Israel to eliminate it—not to reach a two-state solution. And if that is the case, it is these regimes the international community should be tackling first, not the Palestinian problem.

"The assumption prevailed [among U.S. administrations] that resolving this dispute would solve other problems faced by the U.S. in the area," wrote professor Steven Spiegel in his authoritative book about America's Middle East policy. This was true starting with Eisenhower and through the final act of the Clinton administration at Camp David, but it ended in the first years of the Bush administration. On the eve of the war in Iraq, the current administration seemed to think that solving the problems of the greater Middle East would lead to the solution of the Arab-Israeli conflict.

But now, yet again, many believe it's time to go back to the old formula—and, it must be said, the failed formula—of "the Arab-Israeli conflict first." The war accelerated the tendency to revert to this old habit—maybe out of conviction, but mainly out of frustration. Lacking a satisfying solution to the more difficult problems of the region, the world turned—at least in rhetoric—to the approach it has tried many times in the past: Solve the Israeli-Palestinian conflict, and an improved Middle East will emerge.

Shmuel Rosner, chief U.S. correspondent for the Israeli paper Haaretz, writes daily at Rosner's Domain.

Article URL: http://www.slate.com/id/2147254/


August 2, 2006

The President

Bush’s Embrace of Israel Shows Gap With Father

By SHERYL GAY STOLBERG

WASHINGTON, Aug. 1 — When they first met as United States president and Israeli prime minister, George W. Bush made clear to Ariel Sharon he would not follow in the footsteps of his father.

The first President Bush had been tough on Israel, especially the Israeli settlements in occupied lands that Mr. Sharon had helped develop. But over tea in the Oval Office that day in March 2001 — six months before the Sept. 11 attacks tightened their bond — the new president signaled a strong predisposition to support Israel.

“He told Sharon in that first meeting that I’ll use force to protect Israel, which was kind of a shock to everybody,” said one person present, given anonymity to speak about a private conversation. “It was like, ‘Whoa, where did that come from?’”

That embrace of Israel represents a generational and philosophical divide between the Bushes, one that is exacerbating the friction that has been building between their camps of advisers and loyalists over foreign policy more generally. As the president continues to stand by Israel in its campaign against Hezbollah — even after a weekend attack that left many Lebanese civilians dead and provoked international condemnation — some advisers to the father are expressing deep unease with the Israel policies of the son.

“The current approach simply is not leading toward a solution to the crisis, or even a winding down of the crisis,” said Richard N. Haass, who advised the first President Bush on the Middle East and worked as a senior State Department official in the current president’s first term. “There are times at which a hands-off policy can be justified. It’s not obvious to me that this is one of them.”

Unlike the first President Bush, who viewed himself as a neutral arbiter in the delicate politics of the Middle East, the current president sees his role through the prism of the fight against terrorism. This President Bush, unlike his father, also has deep roots in the evangelical Christian community, a staunchly pro-Israeli component of his conservative Republican base.

The first President Bush came to the Oval Office with long diplomatic experience, strong ties to Arab leaders and a realpolitik view that held the United States should pursue its own strategic interests, not high-minded goals like democracy, even if it meant negotiating with undemocratic governments like Syria and Iran.

The current President Bush has practically cut off Syria and Iran, overlaying his fight against terrorism with the aim of creating what Secretary of State Condoleezza Rice calls “a new Middle East.” In allying himself so closely with Israel, he has departed not just from his father’s approach but also from those of all his recent predecessors, who saw themselves first and foremost as brokers in the region.

In a speech Monday in Miami, Mr. Bush offered what turned out to be an implicit criticism of his father’s approach.

“The current crisis is part of a larger struggle between the forces of freedom and the forces of terror in the Middle East,” Mr. Bush said. “For decades, the status quo in the Middle East permitted tyranny and terror to thrive. And as we saw on September the 11th, the status quo in the Middle East led to death and destruction in the United States.”

Now, as Mr. Bush faces growing pressure from Arab leaders and European allies to end the current wave of violence, these differences between father and son have come into sharp relief.

“There is a danger in a policy in which there is no daylight whatsoever between the government of Israel and the government of the United States,” said Aaron David Miller, an Arab-Israeli negotiator for both Bush administrations, who has high praise for James A. Baker III, the first President Bush’s secretary of state. “Bush One and James Baker would never have allowed that to happen.”

Other advisers who served the elder Mr. Bush are critical as well, faulting the current administration for having “put diplomacy on the back burner in the hope that unattractive regimes would fall,” in the words of Mr. Haass.

Whether the disagreement extends to father and son is unclear. The president has been generally critical of the Middle East policies of his predecessors in both parties, but has never criticized his father explicitly. The first President Bush has made it a practice not to comment on the administration of his son, but his spokesman, Tom Frechette, said he supports the younger Mr. Bush “100 percent.”

Brent Scowcroft, the former national security adviser, who has been openly critical of the current president on Iraq, did not return calls seeking comment. He wrote an opinion article in The Washington Post on Sunday calling on the United States to “seize this opportunity” to reach a comprehensive settlement for resolving the conflict of more than half a century between Israel and the Palestinians. Mr. Baker also did not return calls.

The differences between father and son have partly to do with style.

“Bush the father was from a certain generation of political leaders and foreign policy establishment types,” said William Kristol, the neo-conservative thinker who worked for the first Bush administration and is now editor of The Weekly Standard. “He had many years of dealings with leading Arab governments; he was close to the Saudi royal family. The son is less so. He’s got much more affection for Israel, less affection for the House of Saud.”

That affection, Mr. Bush’s aides say, can be traced partly to his first and only trip to Israel, in 1998. It was a formative experience for Mr. Bush, then governor of Texas. He took a helicopter ride — his guide, as it happened, was Mr. Sharon, then the foreign minister — and, looking down, was struck by how tiny and vulnerable Israel seemed.

“He said that when he took that tour and he looked down, he thought, ‘We have driveways in Texas longer than that,’” said Ari Fleischer, the former White House press secretary. “And after the United States was attacked, he understood how it was for Israel to be attacked.”

Others say Mr. Bush cannot help looking at Israel through the prism of his Christian faith. “There is a religiously inspired connection to Israel in which he feels, as president, a responsibility for Israel’s survival,” said Martin S. Indyk, who was President Clinton’s ambassador to Israel and kept that post for several months under President Bush. He also suggested that Republican politics were at work, saying Mr. Bush came into office determined to “build his Christian base.”

But the White House press secretary, Tony Snow, dismissed that idea, telling reporters last week that Mr. Bush does not view the current conflict through a “theological lens.”

Mr. Bush has to some extent played the traditional peacemaker role in the region, especially in dealing with relations between Israel and the Palestinians. He called for the creation of an independent Palestinian state, set out a “road map” to achieving a lasting peace and was critical of some of Mr. Sharon’s policies.

But he has drawn a sharp distinction between the Palestinian people and Israel’s conflicts with what he regards as terrorist organizations. He came into office refusing to meet with Yasir Arafat, the Palestinian leader, and cut off Mr. Arafat entirely in early 2002, after the Israeli Navy captured a ship carrying weapons intended for the Palestinian Authority. That foreshadowed the way he is now dealing with Hezbollah.

His father’s pre-9/11 policies were more concerned with the traditional goals of peace, or at least stability, in the Middle East. Relations between the first President Bush and his Israeli counterpart, Yitzhak Shamir, hit a low point when Mr. Bush refused Israel $10 billion in loan guarantees to resettle Soviet Jews. And Mr. Baker, as secretary of state, was once so frustrated with Israeli officials that he scornfully recited his office phone number and told them to call when they were serious about peace in the Middle East.

But Mr. Bush has enjoyed singularly warm relations with Israel, particularly after 9/11. “It is this event, 9/11, that caused the president to really associate himself with Israel, with this notion that now, for the first time, Americans can feel on their skin what Israelis have been feeling all along,” said Shai Feldman, an Israeli scholar at Brandeis University who has been in Tel Aviv since the hostilities began. “There is huge, huge appreciation here for the president.”

August 6, 2006

When a Pill Is Not Enough

By TINA ROSENBERG

In the whole AIDS epidemic, no question is more heartbreaking and confounding than this: Why would a mother choose to condemn her baby to death?

Mothers with H.I.V., the virus that causes AIDS, pass it along to their newborns at birth 25 to 30 percent of the time, and in poor countries, some half a million babies a year are born with H.I.V. But the rate of transmission can be cut to 14 percent with a simple and cheap program: H.I.V.-positive mothers take a single pill of an antiretroviral called nevirapine when they begin labor, and their newborns are given nevirapine drops.

At the Alexandra Health Center and University Clinic in South Africa, pregnant women can get nevirapine free. The antenatal clinic is a complex of low brick buildings on a pretty hospital campus in the middle of the township of Alexandra, a bleak neighborhood on the outskirts of Johannesburg. The clinic has a doctor only on Thursdays, but an advanced midwife and two nurses attend a crowd of patients every day. I had been in South Africa for four days when I visited the clinic, and I had already seen the stigma that AIDS still carries in the country — those dozens of funerals every Saturday in the townships? Oh, say family members, it was asthma, or tuberculosis, or “a long illness.” I thought I understood how powerful denial could be. But I was unprepared for what Pauline Molotsi, a registered nurse at the clinic, told me.

About twice a week, a woman who has tested H.I.V.-positive begins labor at the clinic but refuses to take the nevirapine that might save her baby’s life. “She says, ‘Oh, no, I’m not positive,’” Molotsi told me. Even though the only person who will know her H.I.V. status is the nurse — who knows already, since she is holding the patient’s chart — the woman won’t take the incriminating pill. “They have not accepted their status,” Molotsi said. “They are still in denial.”

In most of the world, the biggest reason so many babies are born with the AIDS virus is that their governments do not offer nevirapine; because of shortages of health-care personnel, in many countries this program, like all AIDS programs, is available only in urban hospitals. But in South Africa, there’s a different problem. Nevirapine is widely available, yet more than 70,000 babies a year are born there with H.I.V. The government can get nevirapine, condoms and AIDS treatment out to the most remote corners of the country — by truck or wheelbarrow, to modern hospitals and to clinics with no electricity. But it cannot penetrate what has become the most difficult terrain in AIDS work: the insides of people’s heads.

A significant minority of women in South Africa refuse to take an AIDS test. It’s not only that they do not want to confront painful facts that could lie buried a while longer. It’s also that being tested can be dangerous. At the Alexandra clinic, I listened to a tall young man named Vernon as he gave pretest group counseling to about two dozen pregnant women. “Think about your baby before you think about yourself,” he urged them. He assured them the results of their H.I.V. tests would be confidential but encouraged the women to tell their families and partners. “Don’t hide it. Don’t use the phone — tell him face to face. You use the phone, he will hunt you down. Try to prepare him. Some people are very violent. He will beat you. But when he’s alone, he will think about it. If anything happens to you, your family knows you went to tell him your H.I.V. status and never came home.” This speech seemed unlikely to encourage many women to be tested. But it obviously reflected reality. Prudence Mabele, who works for a feminist organization, told me about a woman whose husband greeted her disclosure by pouring a kettle of boiling water over her.

Other women end up infecting their babies through breast feeding because they cannot follow the clinic’s advice to bottle-feed only — tantamount in some areas to announcing you have H.I.V. The very present danger posed by disclosure outweighs the future risk that the baby will get sick. And there are those whose denial is so deep it engulfs them. “Labor is already a stressful environment,” says Macharia Kamau, a Kenyan who is Unicef’s representative in South Africa. “You are pregnant, poor, vulnerable, marginalized, uneducated. At that point, what do you rely on? What your mother told you when you left home? Your cultural beliefs — or this stranger who’s standing there saying, ‘Take this pill?’”

As AIDS passes the quarter-century mark, in several countries the epidemic appears to be declining. South Africa is not one of them. In 1990, South Africa and Thailand both had H.I.V. prevalence rates in adults of less than 1 percent. Today, Thailand’s rate is 1.4 percent. But in South Africa, AIDS exploded in the 1990’s, and now 18.8 percent of adults are infected — and the number is still rising, though very slowly. Last year 300,000 new South Africans were infected with H.I.V. At the Alexandra Health Center, about 60 percent of women test positive. Choose any two 15-year-olds in South Africa; the odds say one of them will get AIDS.

South Africa is not even the worst of it. In Botswana, 24.1 percent of adults have H.I.V., and in tiny Swaziland, a third of all adults do. AIDS rates in southern Africa are far higher than they are anywhere else in the world. No one really knows why. South Africa has astronomical rates of sexual violence — more than a quarter of the time, a young woman’s first sexual experience is coerced — and a strong culture of male entitlement to sex, but so do many other countries. Much of the blame may go to apartheid, which kept male workers in hostels and their families in villages far away. Similar geographical dislocations come from mining, southern Africa’s main industry. Separating families encourages people to maintain ongoing relationships in two places. This is more dangerous than serial monogamous relationships, as H.I.V. is far more contagious when freshly caught.

South Africa’s post-apartheid government, besieged with problems, largely ignored AIDS. As president, Nelson Mandela did not publicly speak in South Africa on AIDS until 1998, more than three years into his term. Then came spectacular irrationality — the government of Thabo Mbeki spent years insisting AIDS was a Western plot, that the drugs were poison, that it was better to use African “cures,” that all those people were dying of something else. Now the public troublemaking of government officials has died down. What has replaced it is not the crusade so badly needed but just an official silence.

In the last few years, however, South Africans have forced their government to begin saving lives despite itself. The country is now spending millions to provide free antiretroviral drugs to AIDS patients, equip maternity clinics with nevirapine and run prevention campaigns. South Africa is successfully pushing services out to its people. But that doesn’t mean people always use them. Mothers sometimes reject nevirapine. People decline AIDS tests. Some sick people refuse to take free antiretrovirals. Some orphans will starve — even though help is available — rather than make the shameful admission that their parents died of AIDS. And of course, millions of people who know better continue to risk their lives every time they have sex.

All over the world, human psychology, local custom and the pressures of poverty are AIDS’s best friends. None of this should be foreign to Americans. We know we should quit smoking. We know we should go have that lump checked out. We know we should give up the French fries. But we don’t. In America, as around the world, a good amount of sickness and death is at least in part self-inflicted. In all aspects of health care, the challenge of providing not just solutions but ones patients will embrace is only now beginning to get attention. We are accustomed to thinking of noncompliance as the patient’s fault. But when a pregnant woman chooses to keep the nevirapine tablet in her pocket, the real failing belongs to the health system, which did not consider what would help her to follow medical advice. Such thinking is always crucial for health professionals but never more so than with AIDS, a disease that is shrouded in the dark and forbidden — sex, drug use, betrayal, rejection, death, rape, the struggles of intimate relationships — and that primarily hits the notoriously irrational young.

But the AIDS establishment has not yet assumed this challenge. “The technology is doing O.K., it’s moving,” says Peter Piot, executive director of the United Nations’ AIDS agency, Unaids. “But we have grossly, grossly neglected the social, cultural and personal stuff that makes it work.”

In a bland corporate research office in a strip mall in the Johannesburg suburbs one day late last spring, American and South African investigators were intently trying to prove Piot wrong. They were sitting behind a two-way mirror, watching five young women from Soweto talk about vaginal gel. The research office, normally employed to assess South Africans’ views on laundry detergent or breakfast cereal, was now the site of a series of focus groups designed to solve one of the biggest problems in AIDS prevention: the failure of the condom.

It is a social failure, not a mechanical one. Condoms prevent AIDS transmission quite well when people use them consistently. But men would rather not, and in Africa men usually call the shots. One of the most chilling findings of AIDS researchers is that marriage can be a risk factor. Studies in Kenya and Zambia found that young, married, monogamous women had higher rates of AIDS infection than sexually active single women of the same age; if condom use is hard for single women to negotiate, it is nearly impossible for married women. Even women who know their husbands are unfaithful cannot demand condoms, for to do so indicates a lack of trust. Husbands can get violent, or accuse the woman of infidelity. Condoms are also not an option for couples who wish to conceive. Women need a method of H.I.V. protection that they can control, that does not impede fertility and that men do not object to.

It does not exist — yet. But one form of it, a vaginal microbicide, may be available within five years. The Johannesburg focus groups were designed to test three different gels, for use once a day, that may someday contain an ingredient that kills H.I.V. before it can infect the woman. The sessions were run by the International Partnership for Microbicides (I.P.M.), which is based near Washington. I.P.M. scientists realize that creating an effective medicine is just half the battle, and so they are taking a proactive approach to marketing the gel; before the microbicide’s active ingredient has even been invented, researchers have spent years figuring out how to get women in a variety of cultures to use it.

“A microbicide could be marketed as a sexual aid, or as something to make a woman feel more attractive inside and out,” Dr. Zeda Rosenberg, I.P.M.’s C.E.O., told me when I first met her in 2004. She was still puzzling it out when I spoke to her this year in South Africa. “Maybe H.I.V. prevention would be a secondary selling point,” she said. “This could be a lubricant that stops H.I.V. If the product made sex great, they would use it even if there were a trust issue.”

The focus groups were a chance for I.P.M.’s researchers to hear from their target market. Five young women from Soweto, all paid to participate in the study, sat around a table laden with platters of food and chatted in Zulu, Sotho and English about the gels, which they had been using for the last three weeks. The moderator asked whether they would want to use the gels to avoid getting H.I.V. All responded with enthusiasm. “I would recommend it to women who are married but do not trust their husbands,” said a participant. Just as important, they talked about how they handled the issue with their boyfriends. “I didn’t tell my boyfriend, but he noticed something different,” said Dimakatso, a young-looking girl with a ponytail. She explained to him what she was using, and it was no problem.

But most women preferred stealth — and it worked. Some didn’t tell because South Africans don’t normally discuss sex. Others said their boyfriends were superstitious. “He will think I am using something for witchcraft,” said one woman. Overall, the women preferred the gel whose texture was easiest to hide from their sexual partners.

Women’s groups have been talking about a microbicide for more than a decade, since it became obvious that AIDS was developing into a woman’s disease. But the rest of the world wasn’t listening. In the late 1990’s, Rosenberg was senior scientist for H.I.V.-prevention research at the National Institutes of Health. She, along with some others, tried to focus money and research on developing an AIDS-prevention product that women could control. “It was difficult to get people’s attention,” she says. “It was not considered interesting scientifically. It was seen as a product-development issue, not a scientific problem. Scientists in drug and cosmetic companies don’t get papers published.” Research was slow to get moving. Rosenberg left N.I.H. and eventually became C.E.O. of I.P.M. It is one of several organizations working to develop a microbicide.

For a microbicide, the traditional public-health approach — invent it, put it out there and tell people to use it — won’t cut it. Nearly as important as whether it kills H.I.V. is whether a microbicide feels acceptable, whether it can be used discreetly if necessary and how it is packaged and promoted. Dr. Mark Mitchnick, the group’s senior scientific consultant, worked on sunscreens and other products before switching to AIDS prevention. “One thing I learned with sunscreen is that people will often need a second reason to buy,” he says. “You want people to use sunscreen because it protects against melanoma. But people buy it because it prevents wrinkles.”

“The cosmetics industry can get women to use all sorts of topical products they don’t need,” Rosenberg said. Maybe the same tools could be used to make a microbicide popular. “Is there a way to think about it that isn’t H.I.V.? Public health can’t tell us that.”

Every weapon in the fight against AIDS needs to pass these same two tests — it has to work and people have to use it. But particularly in poor countries, where most of these services are by necessity free, AIDS treatments and prevention strategies are usually offered as if marketing were unnecessary. That is especially true for antiretroviral therapy. After all, the logic goes, it’s a lifeline. Surely no one would throw it back.

And when they have access to it, most people don’t. Antiretrovirals are now saving lives all over South Africa. The public-health system has gone from 0 to 175,000 people on antiretrovirals in two years. Add in programs run by businesses and nongovernmental groups like Médecins Sans Frontières, and more than a third of South Africans who need antiretrovirals are now taking them, and the figure continues to rise. Patients who have agreed to start antiretrovirals are very good about taking their medicine, and when they do, few are dying.

But the surprise is that South Africa has indeed had to sell AIDS treatment — and it’s often a hard sell. “People think the health department wants them to be dead,” said Sylvia Maguma, a traditional healer, or sangoma, I met in the township of Bekkersdal. I heard many people say this. It may be a hangover from the apartheid years, when it was literally true, and more recently, the government has spent years criticizing as poisonous the same drugs it is giving out now. Some antiretrovirals do have awful side effects, especially at first. But denial and stigma make things worse. People with AIDS tend not to admit, even to themselves, that they are sick; they seek help only when death is imminent. They start the antiretrovirals too late, and then the rumor spreads: the medicines killed her.

But there is something else at work here: the weight of traditional culture. In the township of Tembisa I met Vusi Ziqubu, a 33-year-old who was dying of AIDS. He could get free antiretroviral treatment at his local clinic. But he preferred the herbal remedies of Grace Mhaula, his sangoma. “He was gone,” said Mhaula of the moment she first saw Ziqubu. “He was frail, smelling of death.” Mhaula gave him a solution of herbs to drink four times a day. When I visited him in his house, he was thin, but looked strong and was up and around.

It is commonly said in South Africa that 80 percent of blacks go to a traditional healer first when they are sick. To South Africa’s poor, the bones of the sangoma are the reassuring and trustworthy medicine their families have used forever. It is the clinic’s fabulous tales of invisible bugs that sound to them like hoodoo. The science of the rich is the magic of the poor, and vice versa. And the sangoma, unlike the nurses at the clinic, can spend time with the patient.

But traditional healers can be a dangerous first stop for people with H.I.V., and not just because they often mean a delay in starting antiretrovirals. Sometimes the consequences are more dire. “I discourage older men from going to young girls to cure AIDS,” said Mhaula, but horrifyingly, some healers do not, spreading the message that sex with a virgin is curative. Many sangomas, Mhaula said, induce diarrhea or vomiting to clean out the illness, which can be debilitating for someone sick with AIDS.

So South African officials have begun to train traditional healers about H.I.V. Training often lasts only a few days, and it varies greatly in quality, but it is nonetheless useful and has reached thousands of sangomas. Mhaula took the training and trained others herself. I met her in April, and I later found out that she died suddenly three weeks after I visited her, of an infection unrelated to AIDS. She was an enormous woman of 53 who greeted me in a muumuu and fuzzy pink slippers. The daughter of two traditional healers, she had been one herself since the late 1970’s. But she also worked in the labs of a multinational drug company for 27 years, and the company paid her college tuition. Arthritis forced her into early retirement, but she was bored at home. At Tembisa’s health clinic, she received training in H.I.V. counseling and caring for the terminally ill. Her own daughter died of AIDS six years ago, and Mhaula was raising her daughter’s child.

Off her patio was a small room — her indumba, or consulting room. The walls were lined with hundreds of glass jars and plastic tubs containing mixtures of herbs. Animal skins and straw mats covered the concrete floor. Hanging from the ceiling were candles, the clothes of her ancestors and beaded necklaces. There was a plate of bones. When her clients (she does not call them patients) visited her, she read the bones. When she was alone, she put on the clothes of her ancestors and called their spirits. There were seven different ancestors that she talked to.

Mhaula walked me through what she did when she recognized symptoms of H.I.V. “I say: ‘Think about it. We live in the modern age. Don’t you think we should go to the clinic? You will be in a safe environment.’ They say, ‘Will you go with me?’ I say, ‘Yes.’ Sometimes they want me to go get their test results. They say, ‘Don’t tell me the results, just give me imbiza’” — the herbal mixture she makes that she says boosts the immune system. “I say, ‘How are you going to change your behavior?’ They say, ‘I’m not yet ready.’ I tell them: ‘It’s good to have one partner. You must use condoms.’”

Working with traditional healers is hugely important for fighting AIDS in South Africa. But it has a dangerous side. The problem lies in the stack of white tubs that were behind the door of the indumba — Mhaula’s imbiza. She was careful not to call it a cure. It might indeed strengthen the immune system — it has never been tested in clinical trials, so we don’t know. But it cannot be taken with antiretroviral drugs. That meant Vusi Ziqubu had to choose.

“Traditional healing is being manipulated to put forth a political agenda,” says Jonathan Berger, head of policy and research at the AIDS Law Project in Johannesburg. “It’s a way to push the anti-Western-medicine line by appealing to culture and tradition.” When I was in South Africa, a “cure” called the mopane worm was on the front pages of the tabloid papers. Health officials’ embrace of a long line of charlatans has encouraged a thriving industry in such cures. Hundreds of sangomas sell them.

They are very tempting to people fearful of the impersonal clinic. “With us, you don’t have to take it the rest of your life,” Mhaula told me. “And there are no side effects. Patients come in, and they are so afraid, and then I give them the imbiza and I give them some porridge to eat. And it’s all right.”

Imbiza seemed to be helping Ziqubu — for now. But there was another patient taking Mhaula’s imbiza, a close family friend, a mother of three children. She was doing well, Mhaula told me — please come talk to her. Two days later, I came back to meet the woman. But she had already died.

AIDS is a disease of taboos. For its sufferers, psychological comfort, like that provided by traditional healers, is paramount — sometimes more important than even staying alive. But over the next few years, word will spread about the Lazarus effect of antiretroviral drugs. Although logistical and personnel problems will no doubt remain, few people will be able to argue that the drugs are poison, and few will shun them for herbal remedies.

There is also reason for optimism that other weapons in the fight against AIDS will win more public acceptance. Improvements in service will encourage more women to protect their babies. In the Alexandra clinic, the resourceful nurse Pauline Molotsi has hit on a strategy that sometimes helps. If an H.I.V.-positive woman does not want to take the nevirapine, Molotsi thrusts a piece of paper and a pen toward the woman, essentially making her take responsibility for her decision. “Would you really like your baby to have the virus?” she asks. “If you don’t take the pill, you will have to sign.” At Chris Hani Baragwanath Hospital in Soweto, which has an unusually well-financed and -run antenatal clinic, 98 percent of pregnant women agree to be tested for H.I.V. There will always be psychological barriers, but good service can overcome them.

That may not be true with South Africa’s most basic challenge: to bring down AIDS’s astronomical prevalence in the general population. Help could come from the brand-new technology of microbicides, but it could also come from the very old one of circumcision, which may offer some protection from H.I.V. infection. (Clinical studies due to conclude next year may tell how much protection.) That’s the future, though. For the moment, AIDS prevention is entirely a conundrum of psychology and culture — one we know very little about how to solve. The small list of countries that have had some success with prevention includes such dysfunctional places as Haiti, Zimbabwe and Cambodia. Experts can point to some good programs in these countries, but plenty of nations with rising AIDS rates have the same programs. The country that had an early drop in AIDS prevalence, Uganda, probably achieved this because its particular culture of openness brought the disease into the public eye, and the country treated it like World War III.

In South Africa, where AIDS has already exploded through the general population, prevention is an even more overwhelming challenge. One disturbing fact: Surveys show that South Africa’s teenagers know about AIDS and how it is transmitted. They know the behaviors that put people at risk. But they don’t apply this information to themselves. There is no correlation between information and behavior change. Two-thirds of young people who test H.I.V.-positive — in anonymous surveys, so they don’t know it — do not consider themselves at risk for AIDS. Especially for teenagers, the psychology of sexual behavior resides in some deep and mysterious place, apparently shielded from the reach of traditional public-health messages as if by a lead curtain. The question is whether anything can get through.

South Africa is trying to answer that question with a controversial H.I.V./AIDS-prevention program called loveLife, which generally serves youths from 12 to 17. It is as far from the traditional campaigns as it could be. I went to the community hall in Emzinoni, a black township in Mpumalanga province in the country’s east, to hear a dialogue staged by loveLife. Outside, geese ran in the dirt yard next to purple loveLife banners. Inside the auditorium, vibrant music blared and balloons filled the stage. A pop star named Elle sang a song about believing in yourself. A woman in jeans and a pink hat and a man in khaki shorts strode back and forth in front of the crowd, each with a microphone in hand, bantering in Zulu and English with about 500 Emzinoni parents and children, leading them in games and discussions about AIDS. Sithembile Sefako, the woman, and Mnqobi Nyembe, the man, are trainers from loveLife’s national office. They are local versions of a motivational speaker like Tony Robbins, traveling the country holding these events — but the problems they are discussing are not the ones Tony Robbins usually has to confront.

Sefako asked for volunteers for a little play: a university student named Beauty comes back from college to tell her parents she is pregnant and has H.I.V. Afterward, the actors compared their skit to reality. “Our parents scream at you and call you names,” said the young man who played the father. “They say: ‘I’ve seen you walking in the street! I knew you were going to fall pregnant!’ They beat you.”

“We use culture as an excuse,” Sefako said. “They say, ‘I can’t talk to my children, it’s not right.’ We hide behind culture.”

Next Sefako opened a discussion about responsibility for teen sex. A girl in a flowered cap said: “Most guys force us. Then they say if you are going to open a case with the police, we’ll beat you. We’ll come with a group and we’ll kill you.”

“Guys compete,” one boy said. “You say, ‘I’m going to sleep with six girls before Sunday.’”

“Is it true most women are falling pregnant to prove they can bear children?” Sefako asked.

One girl said: “We mustn’t lie. Most fall pregnant because they want the money” — the South African government’s grant of $30 per month per child. “They think, I’ll buy myself sneakers and jeans.”

A man differed: “The reason women fall pregnant is that we see females in the street in a miniskirt.”

“Are you saying young girls are getting raped because of what they wear?” Sefako asked.

“Yes, because of the way they are dressing, they end up in trouble.”

A girl responded: “Then what about someone who rapes a 3-year-old child?”

“A child from 10 upward knows how to sleep with a guy, and she knows the way she is dressing,” the man responded. The crowd hooted.

These unnerving comments contrasted bizarrely with the festive tone of the event. What was most remarkable to participants, however, was not what people were saying but that they were saying anything at all. Nelson Mandela often said that when he told traditional chiefs that he planned to speak out about AIDS and sex, they told him he would lose their support. What passes for communication between parents and children about sex is often just a cryptic warning to girls to “stay away from boys” and to boys, nothing. Yet children whose parents do talk to them about sex abstain longer and are more likely to use condoms. In general, openness is the anti-AIDS — if the sick came out of hiding, it would be easier for their friends and neighbors to accept that they, too, are at risk. That’s one reason loveLife’s principal slogan is “Talk About It.”

By 1997 AIDS was a crisis of biblical proportion in South Africa, with 13 percent of adults infected. The red-ribbon billboards that passed for an AIDS-prevention campaign were failing disastrously, especially with young people. For girls — who tend to have sex with older men — the riskiest age was between 12 and 17. The Kaiser Family Foundation, a health organization based in California, pledged that if South Africans could decide what was needed to prevent the spread of AIDS in young people, the foundation would pay the bill for the first five years.

Kaiser hired Judi Nwokedi to help plan the program. Nwokedi is a charismatic whirlwind who is head of government relations for Motorola in South Africa. A psychologist by training, she worked with sexually abused children and on AIDS projects while in exile in Thailand and Australia. Nwokedi met with AIDS groups, government officials and international experts to forge agreement on the basics. She also commissioned surveys of South Africa’s teenagers. The surveys found that teenagers tuned out the traditional prevention messages and were most receptive to an AIDS campaign that was about more than just AIDS. The teenagers also said their parents didn’t talk to them about sex or relationships — and they desperately wanted that kind of communication and wanted their parents to set limits. Significantly, the study found that poorer girls realized their first sexual encounter would probably be coerced and violent.

The next question was how to reach the children and young people at risk. “The normal way of AIDS or any peer education with young people was to pack them into the church hall or the school hall,” Nwokedi says. “They would have to sit there while someone would stand up there and talk at them. And whatever they told you, you went out and did the exact opposite because you were so angry that they kept you there for five hours. I wanted H.I.V. education to have another dimension — it had to be interactive, engaging, question-and-answer, vibrant debate.”

Under apartheid, young people identified with collective action. Now they were tired of politics, tired of “we.” An expansion of electrical service in the late 1990’s had allowed the number of households with televisions to soar. Young people were tuning into the global popular culture they saw on TV, with a very high level of awareness of brands.

The working title for the campaign had been the National Adolescent Sexual Health Initiative. Nwokedi, consulting with teenagers, public-health leaders and marketing experts, nixed it. “You’re dead before you can even go out to young people,” she said. “They’d call it Nashi as an acronym — that was soooo public health!”

The AIDS-prevention program had to be branded. The closest model was a recent relaunch of Sprite. “Sprite took the brand off the shelf into the communities,” Nwokedi says. “They did basketball, sponsored concerts, sent cool kids onto campus, talked up Sprite in Internet chat rooms. It was very driven by celebrities in the community creating the hype. I was looking at what is tactile about your brand, what experiences you create.”

Instead of a fear-driven, preachy, stodgy Nashi, the AIDS prevention campaign became loveLife — positive, hip and fun, “an aspirational lifestyle brand for young South Africans,” as the group’s literature says. Today loveLife is one of the 15 best-known brands in South Africa. The country is dotted with 1,750 loveLife billboards. Radio call-in shows reach three million young listeners a week. LoveLife has TV spots and TV reality shows, including one that sent attractive young people into the wilderness to compete in AIDS-related games, like using the other sex’s tools of seduction. A Web site (www.lovelife.org.za) and magazines feature not only graphic information about H.I.V. but also fashion, gossip and relationship advice.

There are very few South Africans who lack strong opinions about loveLife. South Africa has other AIDS-themed TV series and media campaigns and many other behavior-change programs. But at $25 million a year, loveLife is the giant, and it attracts most of the controversy. Initially, I was a skeptic. LoveLife struck me as empty cheerleading — telling young people who live in cardboard houses and eat a few handfuls of cornmeal mush each day to look on the bright side, when there is no bright side.

LoveLife started out promising too much, pledging to halve the rate of new H.I.V. infections among young people in five years. More recently, it has suffered management problems. South Africans cluck about the fact that the Global Fund to Fight AIDS, Tuberculosis and Malaria cut off a loveLife grant last year — one of only three grants stopped worldwide. The money was being used to, among other things, build rooms where teenagers could go, known as “chill rooms,” in health clinics. Brad Herbert, who was chief of operations at the Global Fund at the time, told me that the grant was canceled because construction was too slow and expensive, but that there were no charges of impropriety. (The grant arrived six months late, and loveLife officials argue that the delay caused cash-flow and exchange-rate problems.)

But many people also question loveLife’s basics. Virtually every South African adult I met thinks that the messages on loveLife’s billboards — the media most visible to adults — are incomprehensible. Many — like “Get Attitude!”— indeed appear to have nothing to do with AIDS. But loveLife’s leaders argue that the billboards, like all of loveLife’s media, are not there to educate young people but to draw them into the face-to-face programs. They promote loveLife as an exclusive club that you, as a teenager, can join. The celebrity gossip and fashion advice in loveLife magazines is also not a message but a delivery system. “The logic of the brand is to create something larger than life, a sense of belonging,” says Dr. David Harrison, a tall, lanky, white physician who became head of loveLife in 2000. “That creates participation in clinics, schools — people go because they like to be a part of loveLife.”

As Sprite did, loveLife uses kids to recruit their peers. It has programs now in a third of the country’s high schools, a seventh of the nation’s health clinics, 130 community organizations and 16 loveLife centers. All these programs are run by what loveLife calls, with a typical typographical flourish, groundBREAKERs. They are young people between 18 and 25, trained and hired for one year at minimum wage to talk about sex, AIDS and relationships, help run school sports competitions (South Africa’s only public-school sports in most of the country), radio stations and computer workshops. Perhaps most important, they are taught how to motivate young people by sharing their own personal histories. That is crucial, as loveLife’s challenge is not to impart information but to cut through fatalism and denial to get young people to apply the information they already know.

I met Harrison in loveLife’s headquarters in the Johannesburg suburb of Sandton, a pleasant campus of modern buildings with interiors painted in loveLife’s trademark purple and white. He said that loveLife’s research found that what particularly put young people at risk was coerced sex. Other factors were low self-esteem, absence of belief that the future offered any reason to make wiser choices today, peer pressure, lack of parental communication and the popular belief that a girl is not a woman until she has a baby. Poverty, low education and marginalization also led to higher rates of AIDS.

LoveLife cannot do much about those last three. Instead it tries to promote family and society communication and help young people acquire the skills and motivation to resist pressure to have sex, especially unprotected sex. “When I ask young people what made them change, they never say, ‘You gave us information,’ ” Harrison says. “They say: ‘I feel an identity with a new way of life. I can be like my friend whose life has changed.’”

There have been some good recent analyses about how to tinker effectively with teenagers’ heads. A study last year led by Dolores Albarracín of the University of Florida examined evaluations of hundreds of H.I.V.-prevention programs. The group found that threats and fear don’t work. This finding argues against “AIDS kills” messages and also against more sophisticated programs that encourage teenagers to confront how AIDS has ravaged their families. For young people, not surprisingly, one of the most effective arguments for making healthier choices is that their peers are doing the same. Programs that produced the most behavior change combined H.I.V. information, attitude change and training in skills like saying no to sex without a condom.

The most serious criticism is that loveLife is aimed in the wrong direction. “LoveLife is too focused on individual choice,” says Warren Parker, the executive director of Cadre, an AIDS group. “We need community organizing around the issues of sexual violence, gender imbalance.” The question of whether to try to change an individual’s behavior or a society’s culture is a big debate in AIDS work. Certainly in South Africa, both seem necessary.

“To stop the epidemic in the long term we need to tackle sexual violence,” says Piot of Unaids. “But the problem is we still have a crisis. If we’re going to wait till men and women have equality and no one has to sell their body — well, we can’t wait for that.”

LoveLife’s message is the same public-health gospel a Nashi would have used: abstinence, fidelity, condoms. But that message is received very differently if it comes during a five-hour lecture in the church hall than if it comes from Sibulele Sibaca, a petite, enthusiastic, energetic 23-year-old from Langa, a township outside of Cape Town. Today she is a corporate social investment manager in Richard Branson’s Virgin Group in South Africa. That, she says, is because of loveLife. When she was 12, her mother died of AIDS. When she was 16, her father followed. “Before I joined loveLife, I had a serious history of self-destruction,” she said by phone from Cape Town. “I saw my life ending up in the township, pregnant, not knowing who the father of my child is.”

She got through high school. A friend told her about loveLife, and she began going to its programs. “I had been engaging in highly risky behavior, but loveLife helped me realize there were things I wanted to achieve in my life, and I couldn’t afford to have sex without a condom,” she said. “The reality is that every young person has a dream, but a lot of us look at our situation and think, Who are we kidding? But the minute someone triggers in your brain that it is possible, you start looking at life in a different way.

“Seeing billboards of a dying person didn’t tell me about me,” Sibaca says. “But when someone says, ‘You have such amazing potential that H.I.V. shouldn’t be a part of it’ — then it wasn’t about H.I.V. It was about me. No one is wagging a finger at me. These were people the same age as me. It wasn’t a celebrity telling me their story living in a million-dollar house. It was another young person from the same township as me.”

She applied to be a groundBREAKER. LoveLife trained her to do motivational speaking and gave her facts and ways to talk about teen pregnancy, peer pressure, H.I.V. and other issues. She went to work in a high school, visiting the same class every day for 21 weeks. I asked her whether she felt it helped anyone. She told me about one girl in her class two years ago, also from Langa. “She was 15 and came to me and said, ‘My boyfriend is pressuring me to have sex without a condom.’ Her fear was that her boyfriend would break up with her if she said no, and she had to hold on to him because he gave her money and clothes that her family could not provide her with. I gave her all the different choices and consequences and said, ‘Are you willing to live with those consequences at age 16?’

“She came to me the next week and said, ‘I’m single.’ She had broken up with her boyfriend. I hugged her and started crying — she saw her fears and was willing to go through with it anyway.” Sibaca saw the young woman again a few months ago. “She was not H.I.V.-positive and not pregnant, and she was going to study law next year.”

This is cheerleading — but it’s not empty cheerleading. LoveLife cannot promise any South African teenager that life will be good. But living on one meal a day is even harder if you have AIDS. It seemed valuable to help young people realize that there were reasons to stay healthy and that the choice is theirs.

In Orange Farm, a forlorn and violent township southwest of Johannesburg, I visited a loveLife center, a complex of buildings that draws kids in with a basketball court, a radio-production facility and a computer workshop — but first, kids have to do AIDS training. LoveLife seemed to be Orange Farm’s only after-school alternative to drinking, gangs and sex. In a mining district in rural Limpopo, I visited several health clinics. Nurses at clinics are famous for simply yelling at kids who come in with gonorrhea or a request for contraception, or threatening to tell their mothers. Now these clinics have loveLife chill rooms manned by groundBREAKERs. They have persuaded nurses not to drive teenagers away and will escort teenagers into their appointments.

I watched groundBREAKERs give talks on H.I.V. in schools and after school. The quality of their programs varied with their skills and the local environment. Some were pretty good. At Serokolo high school in the Limpopo mining town, I watched 23-year-old Tebatso Klass Leswifi run a class through a quiz on H.I.V., with discussion that ranged from whether girls become pregnant because of the country’s child grant to why you would want to know your H.I.V. status. He also works at the local health clinic and helps run a league with 10 basketball teams. The high school’s aerobics team — also coached in part by Leswifi — put on a show to the music of the pop hit “Gloria.” I met a 17-year-old named Princess who said she calls Leswifi every day for some words of wisdom to motivate her to stay in school. In another Limpopo health clinic, however, I watched about 20 bored-looking kids sit through a lecture by groundBREAKERs on H.I.V. and loveLife’s programs. It was done in the rote-memorization style still typical in South Africa’s rural schools, with practically no discussion. Still, I heard too many young people tell me loveLife had changed their lives to dismiss it. The organization seemed a little like a cult — and that’s good. Many young people I met told me that loveLife had saved them in big or little ways, and they said they were on a mission to pass that along to others.

There are strong indications that loveLife does indeed change young people’s behavior. In 2003, the Reproductive Health Research Unit of the University of the Witwatersrand in Johannesburg did a survey of 15- to 24-year-olds. It found that people who had participated in loveLife’s programs were only 60 percent as likely to be infected with H.I.V. as those who had not, and the risk diminished further for those who had participated in more than one program. There was also a strong association between loveLife participation and increased condom use — although there was no statistically significant effect on abstention or partner reduction. Since the study was not a randomized, controlled one, it could not prove that loveLife programs caused the behavior change.

LoveLife has not, of course, produced the promised 50 percent drop in new H.I.V. infections. But loveLife’s face-to-face programs have been working nationwide since only 2002. “It is too early to dismiss this,” says Purnima Mane, the director of policy, evidence and partnerships at Unaids in Geneva. “It can take five or six years to see results.” And last month, the South African government reported that new surveys of pregnant women showed that rates of infection in teenagers are holding steady, while the rates of other age groups are rising. This suggests something is working with teenagers.

LoveLife currently reaches around 40 percent of South Africa’s youth with face-to-face programs. That’s a lot, but more would be better — given the scope of the catastrophe, $25 million a year is not that much. There are other programs that take a different but equally sophisticated approach, and it would help if they were broadened as well. Where the likelihood that your partner is infected is as high as it is in South Africa, ordinary success might not be enough.

The thinking behind loveLife — get into their heads — needs to become part of every AIDS program, in South Africa and around the world. Governments are still setting goals of providing “access” to medicines or condoms, but access and accessed are very different things. It will be a complicated and expensive change, because what works in one culture may not work in another. It will also require people to take into account what works. It sounds strange to say it, but this is often not a factor. Across Africa, groups are turning to abstinence-only programs not because they work — they don’t — but because that’s what Washington wants to finance. Rigorous evaluation to show which AIDS programs are effective is also necessary, something that is only an occasional afterthought today.

Without attention to the social, psychological and cultural factors surrounding the disease, we are throwing away money and lives. This is the new frontier. Twenty-five years into the epidemic, we now know how to keep people from dying of AIDS. The challenge for the future is to keep them from dying of stigma, denial and silence.

Tina Rosenberg writes editorials for The New York Times. She has written for the magazine about AIDS, malaria and tuberculosis, among other subjects.

 July 30, 2006

Disowning Conservative Politics, Evangelical Pastor Rattles Flock

By LAURIE GOODSTEIN

Correction Appended

MAPLEWOOD, Minn. — Like most pastors who lead thriving evangelical megachurches, the Rev. Gregory A. Boyd was asked frequently to give his blessing — and the church’s — to conservative political candidates and causes.

The requests came from church members and visitors alike: Would he please announce a rally against gay marriage during services? Would he introduce a politician from the pulpit? Could members set up a table in the lobby promoting their anti-abortion work? Would the church distribute “voters’ guides” that all but endorsed Republican candidates? And with the country at war, please couldn’t the church hang an American flag in the sanctuary?

After refusing each time, Mr. Boyd finally became fed up, he said. Before the last presidential election, he preached six sermons called “The Cross and the Sword” in which he said the church should steer clear of politics, give up moralizing on sexual issues, stop claiming the United States as a “Christian nation” and stop glorifying American military campaigns.

“When the church wins the culture wars, it inevitably loses,” Mr. Boyd preached. “When it conquers the world, it becomes the world. When you put your trust in the sword, you lose the cross.”

Mr. Boyd says he is no liberal. He is opposed to abortion and thinks homosexuality is not God’s ideal. The response from his congregation at Woodland Hills Church here in suburban St. Paul — packed mostly with politically and theologically conservative, middle-class evangelicals — was passionate. Some members walked out of a sermon and never returned. By the time the dust had settled, Woodland Hills, which Mr. Boyd founded in 1992, had lost about 1,000 of its 5,000 members.

But there were also congregants who thanked Mr. Boyd, telling him they were moved to tears to hear him voice concerns they had been too afraid to share.

“Most of my friends are believers,” said Shannon Staiger, a psychotherapist and church member, “and they think if you’re a believer, you’ll vote for Bush. And it’s scary to go against that.”

Sermons like Mr. Boyd’s are hardly typical in today’s evangelical churches. But the upheaval at Woodland Hills is an example of the internal debates now going on in some evangelical colleges, magazines and churches. A common concern is that the Christian message is being compromised by the tendency to tie evangelical Christianity to the Republican Party and American nationalism, especially through the war in Iraq.

At least six books on this theme have been published recently, some by Christian publishing houses. Randall Balmer, a religion professor at Barnard College and an evangelical, has written “Thy Kingdom Come: How the Religious Right Distorts the Faith and Threatens America — an Evangelical’s Lament.”

And Mr. Boyd has a new book out, “The Myth of a Christian Nation: How the Quest for Political Power Is Destroying the Church,” which is based on his sermons.

“There is a lot of discontent brewing,” said Brian D. McLaren, the founding pastor at Cedar Ridge Community Church in Gaithersburg, Md., and a leader in the evangelical movement known as the “emerging church,” which is at the forefront of challenging the more politicized evangelical establishment.

“More and more people are saying this has gone too far — the dominance of the evangelical identity by the religious right,” Mr. McLaren said. “You cannot say the word ‘Jesus’ in 2006 without having an awful lot of baggage going along with it. You can’t say the word ‘Christian,’ and you certainly can’t say the word ‘evangelical’ without it now raising connotations and a certain cringe factor in people.

“Because people think, ‘Oh no, what is going to come next is homosexual bashing, or pro-war rhetoric, or complaining about ‘activist judges.’ ”

Mr. Boyd said he had cleared his sermons with the church’s board, but his words left some in his congregation stunned. Some said that he was disrespecting President Bush and the military, that he was soft on abortion or telling them not to vote.

“When we joined years ago, Greg was a conservative speaker,” said William Berggren, a lawyer who joined the church with his wife six years ago. “But we totally disagreed with him on this. You can’t be a Christian and ignore actions that you feel are wrong. A case in point is the abortion issue. If the church were awake when abortion was passed in the 70’s, it wouldn’t have happened. But the church was asleep.”

Mr. Boyd, 49, who preaches in blue jeans and rumpled plaid shirts, leads a church that occupies a squat block-long building that was once a home improvement chain store.

The church grew from 40 members to about 5,000 in 12 years, based in no small part on Mr. Boyd’s draw as an electrifying preacher who stuck closely to Scripture. He has degrees from Yale Divinity School and Princeton Theological Seminary, and he taught theology at Bethel University in St. Paul, where he created a controversy a few years ago by questioning whether God fully knew the future. Some pastors in his own denomination, the Baptist General Conference, mounted an effort to evict Mr. Boyd from the denomination and his teaching post, but he won that battle.

He is known among evangelicals for a bestselling book, “Letters From a Skeptic,” based on correspondence with his father, a leftist union organizer and a lifelong agnostic — an exchange that eventually persuaded his father to embrace Christianity.

Mr. Boyd said he never intended his sermons to be taken as merely a critique of the Republican Party or the religious right. He refuses to share his party affiliation, or whether he has one, for that reason. He said there were Christians on both the left and the right who had turned politics and patriotism into “idolatry.”

He said he first became alarmed while visiting another megachurch’s worship service on a Fourth of July years ago. The service finished with the chorus singing “God Bless America” and a video of fighter jets flying over a hill silhouetted with crosses.

“I thought to myself, ‘What just happened? Fighter jets mixed up with the cross?’ ” he said in an interview.

Patriotic displays are still a mainstay in some evangelical churches. Across town from Mr. Boyd’s church, the sanctuary of North Heights Lutheran Church was draped in bunting on the Sunday before the Fourth of July this year for a “freedom celebration.” Military veterans and flag twirlers paraded into the sanctuary, an enormous American flag rose slowly behind the stage, and a Marine major who had served in Afghanistan preached that the military was spending “your hard-earned money” on good causes.

In his six sermons, Mr. Boyd laid out a broad argument that the role of Christians was not to seek “power over” others — by controlling governments, passing legislation or fighting wars. Christians should instead seek to have “power under” others — “winning people’s hearts” by sacrificing for those in need, as Jesus did, Mr. Boyd said.

“America wasn’t founded as a theocracy,” he said. “America was founded by people trying to escape theocracies. Never in history have we had a Christian theocracy where it wasn’t bloody and barbaric. That’s why our Constitution wisely put in a separation of church and state.

“I am sorry to tell you,” he continued, “that America is not the light of the world and the hope of the world. The light of the world and the hope of the world is Jesus Christ.”

Mr. Boyd lambasted the “hypocrisy and pettiness” of Christians who focus on “sexual issues” like homosexuality, abortion or Janet Jackson’s breast-revealing performance at the Super Bowl halftime show. He said Christians these days were constantly outraged about sex and perceived violations of their rights to display their faith in public.

“Those are the two buttons to push if you want to get Christians to act,” he said. “And those are the two buttons Jesus never pushed.”

Some Woodland Hills members said they applauded the sermons because they had resolved their conflicted feelings. David Churchill, a truck driver for U.P.S. and a Teamster for 26 years, said he had been “raised in a religious-right home” but was torn between the Republican expectations of faith and family and the Democratic expectations of his union.

When Mr. Boyd preached his sermons, “it was liberating to me,” Mr. Churchill said.

Mr. Boyd gave his sermons while his church was in the midst of a $7 million fund-raising campaign. But only $4 million came in, and seven of the more than 50 staff members were laid off, he said.

Mary Van Sickle, the family pastor at Woodland Hills, said she lost 20 volunteers who had been the backbone of the church’s Sunday school.

“They said, ‘You’re not doing what the church is supposed to be doing, which is supporting the Republican way,’ ” she said. “It was some of my best volunteers.”

The Rev. Paul Eddy, a theology professor at Bethel University and the teaching pastor at Woodland Hills, said: “Greg is an anomaly in the megachurch world. He didn’t give a whit about church leadership, never read a book about church growth. His biggest fear is that people will think that all church is is a weekend carnival, with people liking the worship, the music, his speaking, and that’s it.”

In the end, those who left tended to be white, middle-class suburbanites, church staff members said. In their place, the church has added more members who live in the surrounding community — African-Americans, Hispanics and Hmong immigrants from Laos.

This suits Mr. Boyd. His vision for his church is an ethnically and economically diverse congregation that exemplifies Jesus’ teachings by its members’ actions. He, his wife and three other families from the church moved from the suburbs three years ago to a predominantly black neighborhood in St. Paul.

Mr. Boyd now says of the upheaval: “I don’t regret any aspect of it at all. It was a defining moment for us. We let go of something we were never called to be. We just didn’t know the price we were going to pay for doing it.”

His congregation of about 4,000 is still digesting his message. Mr. Boyd arranged a forum on a recent Wednesday night to allow members to sound off on his new book. The reception was warm, but many of the 56 questions submitted in writing were pointed: Isn’t abortion an evil that Christians should prevent? Are you saying Christians should not join the military? How can Christians possibly have “power under” Osama bin Laden? Didn’t the church play an enormously positive role in the civil rights movement?

One woman asked: “So why NOT us? If we contain the wisdom and grace and love and creativity of Jesus, why shouldn’t we be the ones involved in politics and setting laws?”

Mr. Boyd responded: “I don’t think there’s a particular angle we have on society that others lack. All good, decent people want good and order and justice. Just don’t slap the label ‘Christian’ on it.”

Correction: Aug. 2, 2006

A front-page article on Monday about the Rev. Gregory A. Boyd, a Minnesota pastor who has preached against church involvement in politics, included an outdated reference to the school where he taught and where the Rev. Paul Eddy, a theology professor who called Mr. Boyd “an anomaly in the megachurch world,” teaches. It is Bethel University, not Bethel College. (The name was changed in 2004.)

 

July 24, 2006

Op-Ed Contributor

He Who Cast the First Stone Probably Didn’t

By DANIEL GILBERT

Long before seat belts or common sense were particularly widespread, my family made annual trips to New York in our 1963 Valiant station wagon. Mom and Dad took the front seat, my infant sister sat in my mother’s lap and my brother and I had what we called “the wayback” all to ourselves.

In the wayback, we’d lounge around doing puzzles, reading comics and counting license plates. Eventually we’d fight. When our fight had finally escalated to the point of tears, our mother would turn around to chastise us, and my brother and I would start to plead our cases. “But he hit me first,” one of us would say, to which the other would inevitably add, “But he hit me harder.”

It turns out that my brother and I were not alone in believing that these two claims can get a puncher off the hook. In virtually every human society, “He hit me first” provides an acceptable rationale for doing that which is otherwise forbidden. Both civil and religious law provide long lists of behaviors that are illegal or immoral — unless they are responses in kind, in which case they are perfectly fine.

After all, it is wrong to punch anyone except a puncher, and our language even has special words — like “retaliation” and “retribution” and “revenge” — whose common prefix is meant to remind us that a punch thrown second is legally and morally different than a punch thrown first.

That’s why participants in every one of the globe’s intractable conflicts — from Ireland to the Middle East — offer the even-numberedness of their punches as grounds for exculpation.

The problem with the principle of even-numberedness is that people count differently. Every action has a cause and a consequence: something that led to it and something that followed from it. But research shows that while people think of their own actions as the consequences of what came before, they think of other people’s actions as the causes of what came later.

In a study conducted by William Swann and colleagues at the University of Texas, pairs of volunteers played the roles of world leaders who were trying to decide whether to initiate a nuclear strike. The first volunteer was asked to make an opening statement, the second volunteer was asked to respond, the first volunteer was asked to respond to the second, and so on. At the end of the conversation, the volunteers were shown several of the statements that had been made and were asked to recall what had been said just before and just after each of them.

The results revealed an intriguing asymmetry: When volunteers were shown one of their own statements, they naturally remembered what had led them to say it. But when they were shown one of their conversation partner’s statements, they naturally remembered how they had responded to it. In other words, volunteers remembered the causes of their own statements and the consequences of their partner’s statements.

What seems like a grossly self-serving pattern of remembering is actually the product of two innocent facts. First, because our senses point outward, we can observe other people’s actions but not our own. Second, because mental life is a private affair, we can observe our own thoughts but not the thoughts of others. Together, these facts suggest that our reasons for punching will always be more salient to us than the punches themselves — but that the opposite will be true of other people’s reasons and other people’s punches.

Examples aren’t hard to come by. Shiites seek revenge on Sunnis for the revenge they sought on Shiites; Irish Catholics retaliate against the Protestants who retaliated against them; and since 1948, it’s hard to think of any partisan in the Middle East who has done anything but play defense. In each of these instances, people on one side claim that they are merely responding to provocation and dismiss the other side’s identical claim as disingenuous spin. But research suggests that these claims reflect genuinely different perceptions of the same bloody conversation.

If the first principle of legitimate punching is that punches must be even-numbered, the second principle is that an even-numbered punch may be no more forceful than the odd-numbered punch that preceded it. Legitimate retribution is meant to restore balance, and thus an eye for an eye is fair, but an eye for an eyelash is not. When the European Union condemned Israel for bombing Lebanon in retaliation for the kidnapping of two Israeli soldiers, it did not question Israel’s right to respond, but rather, its “disproportionate use of force.” It is O.K. to hit back, just not too hard.

Research shows that people have as much trouble applying the second principle as the first. In a study conducted by Sukhwinder Shergill and colleagues at University College London, pairs of volunteers were hooked up to a mechanical device that allowed each of them to exert pressure on the other volunteer’s fingers.

The researcher began the game by exerting a fixed amount of pressure on the first volunteer’s finger. The first volunteer was then asked to exert precisely the same amount of pressure on the second volunteer’s finger. The second volunteer was then asked to exert the same amount of pressure on the first volunteer’s finger. And so on. The two volunteers took turns applying equal amounts of pressure to each other’s fingers while the researchers measured the actual amount of pressure they applied.

The results were striking. Although volunteers tried to respond to each other’s touches with equal force, they typically responded with about 40 percent more force than they had just experienced. Each time a volunteer was touched, he touched back harder, which led the other volunteer to touch back even harder. What began as a game of soft touches quickly became a game of moderate pokes and then hard prods, even though both volunteers were doing their level best to respond in kind.
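
A rough calculation, offered purely as an illustration and assuming the roughly 40 percent excess held on every exchange (the study reports it only as an average across pairs), shows how quickly such well-intentioned escalation compounds:

F_n ≈ F_0 × 1.4^n, where F_0 is the force of the opening touch and F_n the force after n exchanges; so F_5 ≈ 5.4 × F_0 and F_10 ≈ 29 × F_0.

In other words, ten sincerely equal responses are enough, on their own, to make the touch nearly 30 times as hard as it began.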

Each volunteer was convinced that he was responding with equal force and that for some reason the other volunteer was escalating. Neither realized that the escalation was the natural byproduct of a neurological quirk that causes the pain we receive to seem more painful than the pain we produce, so we usually give more pain than we have received.

Research teaches us that our reasons and our pains are more palpable, more obvious and real, than are the reasons and pains of others. This leads to the escalation of mutual harm, to the illusion that others are solely responsible for it and to the belief that our actions are justifiable responses to theirs.

None of this is to deny the roles that hatred, intolerance, avarice and deceit play in human conflict. It is simply to say that basic principles of human psychology are important ingredients in this miserable stew. Until we learn to stop trusting everything our brains tell us about others — and to start trusting others themselves — there will continue to be tears and recriminations in the wayback.

Daniel Gilbert, a professor of psychology at Harvard, is the author of “Stumbling on Happiness.”

July 15, 2006

Public Schools Perform Near Private Ones in Study

By DIANA JEAN SCHEMO

WASHINGTON, July 14 — The Education Department reported on Friday that children in public schools generally performed as well as or better than comparable children in private schools in reading and mathematics. The exception was in eighth-grade reading, where the private school counterparts fared better.

The report, which compared fourth- and eighth-grade reading and math scores in 2003 from nearly 7,000 public schools and more than 530 private schools, found that fourth graders attending public school did significantly better in math than comparable fourth graders in private schools. Additionally, it found that students in conservative Christian schools lagged significantly behind their counterparts in public schools on eighth-grade math.

The study, carrying the imprimatur of the National Center for Education Statistics, part of the Education Department, was contracted to the Educational Testing Service and delivered to the department last year.

It went through a lengthy peer review and includes an extended section of caveats about its limitations, calling such a comparison of public and private schools “of modest utility.”

Its release, on a summer Friday, came without a news conference or comment from Education Secretary Margaret Spellings.

Reg Weaver, president of the National Education Association, the union for millions of teachers, said the findings showed that public schools were “doing an outstanding job” and that if the results had been favorable to private schools, “there would have been press conferences and glowing statements about private schools.”

“The administration has been giving public schools a beating since the beginning” to advance its political agenda of promoting charter schools and taxpayer-financed vouchers for private schools as alternatives to failing traditional public schools, Mr. Weaver said.

A spokesman for the Education Department, Chad Colby, offered no praise for public schools and said he did not expect the findings to influence policy. Mr. Colby emphasized the caveat, “An overall comparison of the two types of schools is of modest utility.”

“We’re not just for public schools or private schools,’’ he said. “We’re for good schools.”

The report mirrors and expands on similar findings this year by Christopher and Sarah Theule Lubienski, a husband-and-wife team at the University of Illinois who examined just math scores. The new study looked at reading scores, too.

The study, along with one of charter schools, was commissioned by the former head of the National Center for Education Statistics, Robert Lerner, an appointee of President Bush, at a time when preliminary data suggested that charter schools, which are given public money but are run by private groups, fared no better at educating children than traditional public schools.

Proponents of charter schools had said the data did not take into account the predominance of children in their schools who had already had problems in neighborhood schools.

The two new studies put test scores in context by studying the children’s backgrounds and taking into account factors like race, ethnicity, income and parents’ educational backgrounds to make the comparisons more meaningful. The extended study of charter schools has not been released.

Findings favorable to private schools would likely have given a lift to administration efforts to offer children in ailing public schools the option of attending private schools.

An Education Department official, who insisted on anonymity because of the climate surrounding the report, said researchers were “extra cautious” in reviewing it and were aware of its “political sensitivity.”

The official said the warning against drawing unsupported conclusions was expanded somewhat as the report went through the review.

The report cautions, for example, against concluding that children do better because of the type of school as opposed to unknown factors. It also warns of great variations of performance among private schools, making a blanket comparison of public and private schools “of modest utility.” And the scores on which its findings are based reflect only a snapshot of student performance at a point in time and say nothing about individual student progress in different settings.

Arnold Goldstein of the National Center for Education Statistics said that the review was meticulous, but that it was not unusual for the center.

Mr. Goldstein said there was no political pressure to alter the findings.

Students in private schools typically score higher than those in public schools, a finding confirmed in the study. The report then dug deeper to compare students of like racial, economic and social backgrounds. When it did that, the private school advantage disappeared in all areas except eighth-grade reading.

And in math, fourth graders attending public school were nearly half a year ahead of comparable students in private school, according to the report.

The report separated private schools by type and found that among private school students, those in Lutheran schools performed best, while those in conservative Christian schools did worst.

In eighth-grade reading, children in conservative Christian schools scored no better than comparable children in public schools.

In eighth-grade math, children in Lutheran schools scored significantly better than children in public schools, but those in conservative Christian schools fared worse.

Joseph McTighe, executive director of the Council for American Private Education, an umbrella organization that represents 80 percent of private elementary and secondary schools, said the statistical analysis had little to do with parents’ choices on educating their children.

"In the real world, private school kids outperform public school kids," Mr. McTighe said. "That’s the real world, and the way things actually are."

Two weeks ago, the American Federation of Teachers, on its Web log, predicted that the report would be released on a Friday, suggesting that the Bush administration saw it as “bad news to be buried at the bottom of the news cycle.”

The deputy director for administration and policy at the Institute of Education Sciences, Sue Betka, said the report was not released so it would go unnoticed. Ms. Betka said her office typically gave senior officials two weeks’ notice before releasing reports. “The report was ready two weeks ago Friday,” she said, “and so today was the first day, according to longstanding practice, that it could come out.”

 

August 3, 2006

Energy From the Restless Sea

By HEATHER TIMMONS

NEWCASTLE, England — There is more riding the waves here than surfers, thanks to a growing number of scientists, engineers and investors.

A group of entrepreneurs is harnessing the perpetual motion of the ocean and turning it into a commodity in high demand: energy. Right now, machines of various shapes and sizes are being tested off shores from the North Sea to the Pacific — one may even be coming to the East River in New York State this fall — to see how they capture waves and tides and create marine energy.

The industry is still in its infancy, but it is gaining attention, in large part because of the persistence of marine energy inventors, like Dean R. Corren, who have doggedly lugged their wave and tidal prototypes around the world, even during the years when money and interest dried up. Mr. Corren, trim and cerebral, is a scientist who has long advocated green energy and pushed through numerous conservation measures when he was chairman of the public energy utility for the city of Burlington, Vt.

Another believer in the technology is Max Carcas, head of business development for Ocean Power Delivery of Edinburgh. “In the long run, this could become one of the most competitive sources of energy,” said Mr. Carcas.

His company manufactures the Pelamis, a snakelike wave energy machine the size of a passenger train, which generates energy by absorbing waves as they undulate on the ocean surface.

With high oil prices, dwindling fuel supplies and a growing pressure to reduce global warming, governments and utilities have high hopes for tidal energy. The challenge now is turning an accumulation of research into a viable commercial enterprise, which for many years has proved elusive.

No one contends that generating energy from the oceans is a preposterous idea. After all, the “fuel” is free and sustainable, and the process does not generate pollution or emissions.

Moreover, it is not just oceans that could be tapped; the regular flow of tides in bodies of water linked to oceans, like the East River, holds promise too. In fact, it seemed like such a sensible idea that inventors started making the first wave of such generators centuries ago. Many operated like dams, trapping water and then releasing it after the tides fell. But they were outmoded with the rise of steam engines and other more efficient fuel sources.

Ocean energy had a brief revival when oil prices rose in the 1970’s, and prototypes were tested in Europe and China. But financing dried up when oil prices were low in the 1990’s, and advances in wind turbines and other renewable energy elbowed out tidal projects.

These days, wave power designs vary from machines that look like corks bobbing in the ocean to devices that resemble snakes pointing into waves. There are shoreline machines that cling, like limpets, to rocks.

Tidal power machines, in contrast, often come in the form of turbines, which look like underwater windmills, and generate energy by spinning as tides move in and out; some inventors also are testing concrete-and-steel machines that lie on the seabed and pipe pressurized water back to the shore.

Even big commercial power companies are joining the action. General Electric; Norsk Hydro, a Norwegian company; and the German power giant Eon have recently pledged money for new projects or investments in tiny marine energy companies.

“It is an untapped renewable energy source,” said Mark Huang, senior vice president for technology finance in General Electric’s media and communications business, which is financing marine projects. “There is nowhere to go but up,” Mr. Huang said. He added that solar or wind energy should be viewed “as a case study” for the direction marine energy could take.

Right now, wave power generators are being tested near the shores of New Jersey, Hawaii, Scotland, England and Western Australia. A long-awaited East River tidal turbine project is to start this fall, and Representative William D. Delahunt, Democrat of Massachusetts, has proposed that the United States follow in Britain’s footsteps to build an ocean energy research center, the country’s first, off the Massachusetts coast.

A handful of commercial projects are also in the works, including the world’s first “wave farm,” as the fields of machines are known, being installed off the north coast of Portugal. A field of tidal turbines is also being built off the shore of Tromso, Norway.

Britain could generate up to 20 percent of the electricity it needs from waves and tides, according to an estimate by a government-financed group here called the Carbon Trust. That is about 12,000 megawatts at current usage levels, or three times what Britain’s largest power plant produces now. In fact, England and Scotland have become experimental laboratories for ocean energy development. As reserves shrink and the offshore oil business in the North Sea winds down, governments are trying to capture the accumulated knowledge and transform oil industry jobs into other ways of generating energy.

One research center here in Newcastle is putting marine devices to the test in a wave pool, and another is deploying them in the roiling ocean off the Orkneys, the low islands off northernmost Scotland. The Scottish government has pledged to generate 18 percent of its energy from renewable resources by 2010.

If marine energy replaces the burning of some fossil fuels like coal, it can help reduce overall carbon dioxide emissions and possibly increase the diversity and security of energy supply, said John Spurgeon, a marine energy specialist in the British Department of Trade and Industry. Since 1999, the government has committed more than $47 million to research and development, $93 million to commercialize that research and additional money to bring the energy into the electrical grid, Mr. Spurgeon said.

No energy source is perfect, though, and marine energy developers are running into some hurdles. While such generators do not emit smoky pollutants or leave behind radioactive waste, the machines are not small or delicate, and can be an eyesore. To draw energy from the ocean, they often need to be rooted on sea floors relatively close to shore, or mounted on rocks on the shore — places that have not traditionally been used for energy generation.

And despite their green-friendly intentions, inventors are finding some of the stiffest resistance is coming from environmental groups.

Take the case of Verdant Power, Mr. Corren’s company, which has been trying for years to erect a small field of tidal turbines in the East River — a project that may finally get started this fall. Mr. Corren, the company’s technology director, first developed the turbines as part of a New York University project in the 1980’s and planned to attach them to the Roosevelt Island Bridge.

After the school pulled the plug on the project, the design team spent years trying to find a new home. One executive even brought a prototype to Pakistan, but the data it collected was lost when the computers and instruments went missing.

Verdant embarked on a new East River turbine project in 2003, but it has taken two and a half years to get regulatory approval for the project from environmental agencies and the United States Army Corps of Engineers. The issue was not blocking the river to boat traffic, or how it would hook up to the electrical grid or even how it might mar the view, because it is mostly underwater. It was the fish population of the East River.

“We had eight fish biologists against it, and no one on the other side advocating for clean air” or other environmental issues, said Ronald F. Smith, the chief executive of Verdant Power. “You can see that the regulatory process is extremely biased towards doing nothing,” Mr. Smith said, adding that regulators were worried about complaints that could arise from any new projects.

To get approval, the company is installing $1.5 million in underwater sonar to watch for fish around the turbines “24 hours a day, 7 days a week,” and the data will be shown online, Mr. Smith said. Verdant Power executives warn against looking forward to a live “East River cam” that broadcasts the murky mysteries beneath the water. Sonar transmissions look more like fuzzy black and white television, they say, and besides they have seen “very, very few fish” on their visits to the river.

Ultimately, Verdant estimates it can generate 10 megawatts of electricity from the East River’s tidal flows — enough to power several thousand homes, though its test turbines will be used primarily to power a Gristedes grocery store on Roosevelt Island.

To date, studies on the effect of wave and tide machines on marine life have been sporadic and sometimes bizarre. For example, in one British trial, frozen fish were shot like projectiles onto a piece of metal that was supposed to estimate the effects of the turning blades of marine turbines.

Proper testing will involve putting some of these devices where they are not wanted, a problem reminiscent of the wind industry’s battle to construct new turbines. Some leading environmental advocates say that the issue is part of a larger wrenching change being thrust on the green movement.

“It’s a major psychological and cultural challenge for the environmental and conservation movement,” said Stephen Tindale, executive director of Greenpeace UK. “What we need to combat climate change is a complete transformation of our energy system, and that requires a lot of new stuff to be built and installed, some of it in places that are relatively untouched.”

But the potential of marine energy is too strong to ignore. For example, a recent report identified San Francisco Bay as being the largest tidal power resource in the continental United States. “There are tremendous resources for generating power along the northern coast of California,” said Uday Mathur, a renewable energy consultant to government agencies and private enterprises.

The biggest hurdle is creating a landscape for development “where these technologies can thrive,” he said, which includes a combination of government involvement, community support and of course the availability of financing.

“The situation is very similar to wind 15 years ago,” said John W. Griffiths, a former British gas executive and founder of JWG Consulting, which advises on renewable energy projects. He added: “We think that this is an industry waiting to happen.”

 

 July 28, 2006

Changing Reaction

Tide of Arab Opinion Turns to Support for Hezbollah

By NEIL MacFARQUHAR

DAMASCUS, Syria, July 27 — At the onset of the Lebanese crisis, Arab governments, starting with Saudi Arabia, slammed Hezbollah for recklessly provoking a war, providing what the United States and Israel took as a wink and a nod to continue the fight.

Now, with hundreds of Lebanese dead and Hezbollah holding out against the vaunted Israeli military for more than two weeks, the tide of public opinion across the Arab world is surging behind the organization, transforming the Shiite group’s leader, Sheik Hassan Nasrallah, into a folk hero and forcing a change in official statements.

The Saudi royal family and King Abdullah II of Jordan, who were initially more worried about the rising power of Shiite Iran, Hezbollah’s main sponsor, are scrambling to distance themselves from Washington.

An outpouring of newspaper columns, cartoons, blogs and public poetry readings has showered praise on Hezbollah while attacking the United States and Secretary of State Condoleezza Rice for trumpeting American plans for a “new Middle East” that they say has led only to violence and repression.

Even Al Qaeda, run by violent Sunni Muslim extremists normally hostile to all Shiites, has gotten into the act, with its deputy leader, Ayman al-Zawahri, releasing a taped message saying that through its fighting in Iraq, his organization was also trying to liberate Palestine.

Mouin Rabbani, a senior Middle East analyst in Amman, Jordan, with the International Crisis Group, said, “The Arab-Israeli conflict remains the most potent issue in this part of the world.”

Distinctive changes in tone are audible throughout the Sunni world. This week, President Hosni Mubarak of Egypt emphasized his attempts to arrange a cease-fire to protect all sects in Lebanon, while the Jordanian king announced that his country was dispatching medical teams “for the victims of Israeli aggression.” Both countries have peace treaties with Israel.

The Saudi royal court has issued a dire warning that its 2002 peace plan — offering Israel full recognition by all Arab states in exchange for returning to the borders that predated the 1967 Arab-Israeli war — could well perish.

“If the peace option is rejected due to the Israeli arrogance,” it said, “then only the war option remains, and no one knows the repercussions befalling the region, including wars and conflict that will spare no one, including those whose military power is now tempting them to play with fire.”

The Saudis were putting the West on notice that they would not exert pressure on anyone in the Arab world until Washington did something to halt the destruction of Lebanon, Saudi commentators said.

American officials say that while the Arab leaders need to take a harder line publicly for domestic political reasons, what matters more is what they tell the United States in private, which the Americans still see as a wink and a nod.

There are evident concerns among Arab governments that a victory for Hezbollah — and it has already achieved something of a victory by holding out this long — would further nourish the Islamist tide engulfing the region and challenge their authority. Hence their first priority is to cool simmering public opinion.

But perhaps not since President Gamal Abdel Nasser of Egypt made his emotional outpourings about Arab unity in the 1960’s, before the Arab defeat in the 1967 war, has the public been so electrified by a confrontation with Israel, played out repeatedly on satellite television stations with horrific images from Lebanon of wounded children and distraught women fleeing their homes.

Egypt’s opposition press has had a field day comparing Sheik Nasrallah to Nasser, while demonstrators waved pictures of both.

An editorial in the weekly Al Dustur by Ibrahim Issa, who faces a lengthy jail sentence for his previous criticism of President Mubarak, compared current Arab leaders to the medieval princes who let the Crusaders chip away at Muslim lands until they controlled them all.

After attending an intellectual rally in Cairo for Lebanon, the Egyptian poet Ahmed Fouad Negm wrote a column describing how he had watched a companion buy 20 posters of Sheik Nasrallah.

“People are praying for him as they walk in the street, because we were made to feel oppressed, weak and handicapped,” Mr. Negm said in an interview. “I asked the man who sweeps the street under my building what he thought, and he said: ‘Uncle Ahmed, he has awakened the dead man inside me! May God make him triumphant!’ ”

In Lebanon, Rasha Salti, a freelance writer, summarized the sense that Sheik Nasrallah differed from other Arab leaders.

“Since the war broke out, Hassan Nasrallah has displayed a persona, and public behavior also, to the exact opposite of Arab heads of states,” she wrote in an e-mail message posted on many blogs.

In comparison, Secretary of State Condoleezza Rice’s brief visit to the region sparked widespread criticism of her cold demeanor and her choice of words, particularly a statement that the bloodshed represented the birth pangs of a “new Middle East.” That catchphrase was much used by Shimon Peres, the veteran Israeli leader who was a principal negotiator of the 1993 Oslo Accords, which ultimately failed to lead to the Palestinian state they envisaged.

A cartoon by Emad Hajjaj in Jordan labeled “The New Middle East” showed an Israeli tank sitting on a broken apartment house in the shape of the Arab world.

Fawaz al-Trabalsi, a columnist in the Lebanese daily As Safir, suggested that the real new thing in the Middle East was the ability of one group to challenge Israel militarily.

Perhaps nothing underscored Hezbollah’s rising stock more than the sudden appearance of a tape from the Qaeda leadership attempting to grab some of the limelight.

Al Jazeera satellite television broadcast a tape from Mr. Zawahri (za-WAH-ri). Large panels behind him showed a picture of the exploding World Trade Center as well as portraits of two Egyptian Qaeda members, Muhammad Atef, a Qaeda commander who was killed by an American airstrike in Afghanistan, and Mohamed Atta, the lead hijacker on Sept. 11, 2001. He described the two as fighters for the Palestinians.

Mr. Zawahri tried to argue that the fight against American forces in Iraq paralleled what Hezbollah was doing, though he did not mention the organization by name.

“It is an advantage that Iraq is near Palestine,” he said. “Muslims should support its holy warriors until an Islamic emirate dedicated to jihad is established there, which could then transfer the jihad to the borders of Palestine.”

Mr. Zawahri also adopted some of the language of Hezbollah and Shiite Muslims in general. That was rather ironic, since previously in Iraq, Al Qaeda had labeled Shiite Muslims as infidels and claimed responsibility for some of the bloodier assaults on Shiite neighborhoods there.

But by taking on Israel, Hezbollah had instantly eclipsed Al Qaeda, analysts said. “Everyone will be asking, ‘Where is Al Qaeda now?’ ” said Adel al-Toraifi, a Saudi columnist and expert on Sunni extremists.

Mr. Rabbani of the International Crisis Group said Hezbollah’s ability to withstand the Israeli assault and to continue to lob missiles well into Israel exposed the weaknesses of Arab governments with far greater resources than Hezbollah.

“Public opinion says that if they are getting more on the battlefield than you are at the negotiating table, and you have so many more means at your disposal, then what the hell are you doing?” Mr. Rabbani said. “In comparison with the small embattled guerrilla movement, the Arab states seem to be standing idly by twiddling their thumbs.”

Mona el-Naggar contributed reporting from Cairo for this article, and Suha Maayeh from Amman, Jordan.

 

July 29, 2006

The Wi-Fi in Your Handset

By MATT RICHTEL

What if, instead of burning up minutes on your cellphone plan, you could make free or cheap calls over the wireless networks that allow Internet access in many coffee shops, airports and homes?

New phones coming on the market will allow just that.

Instead of relying on standard cellphone networks, the phones will make use of the anarchic global patchwork of so-called Wi-Fi hotspots. Other models will be able to switch easily between the two modes.

The phones, while a potential money-saver for consumers, could cause big problems for cellphone companies. They have invested billions in their nationwide networks of cell towers, and they could find that customers are bypassing them in favor of Wi-Fi connections. The struggling Bell operating companies could also suffer if the new phones accelerate the trend toward cheap Internet-based calling, reducing the need for a standard phone line in homes with wireless networks.

The spottiness of wireless Internet coverage means that for now, the phones will be more of a supplement to, rather than a replacement for, standard cellphone service. But dozens of American cities and towns are either building or considering wide-area wireless networks that would allow Wi-Fi phones to connect and make free or cheap calls.

“It’s a phone that looks, feels and acts like a cell phone, but it actually operates over the Wi-Fi network,” said Steve Howe, vice president of voice for EarthLink, which is building networks in Philadelphia and Anaheim, Calif.

Later this year it plans to introduce Wi-Fi phone service that Mr. Howe said could cost a fifth as much as traditional cell service.

The technology is in its early stages, and it faces some hurdles to widespread use. But it is being promoted by big technology companies like Cisco Systems and giving rise to new competition in the mobile phone business.

A handful of companies are already using Wi-Fi phones to cut costs within offices or on corporate campuses, and the phones will soon be reaching the consumer market.

Skype, the Internet calling service owned by eBay, said last week that four manufacturers plan to begin shipping Wi-Fi phones that are compatible with the service by the end of September. Among them is Netgear, a maker of networking equipment, which plans to charge $300 for its phone; the other makers include Belkin, Edge-Core and SMC.

Skype allows free calls to other Skype users and usually charges pennies a minute for calls to regular phones, although it has made all domestic calls free through the end of the year.

EarthLink plans to sell phones for $50 to $100, then charge roughly $25 a month for unlimited calling. Initially, the service will work only with hotspots where Internet access is provided by EarthLink, either in homes or on its citywide networks.

The major cellphone companies have taken notice of Wi-Fi phones, and some have chosen to deal with the potential threat by embracing it, building it into their business plans.

Cingular Wireless plans to introduce phones next year that will allow people to connect at home through their own wireless networks but switch to cell towers when out and about.

Later this year, T-Mobile plans to test a service that will allow its subscribers to switch seamlessly between connections to cellular towers and Wi-Fi hotspots, including those in homes and the more than 7,000 it controls in Starbucks outlets, airports and other locations, according to analysts with knowledge of the plans. The company hopes that moving mobile phone traffic off its network will allow it to offer cheaper service and steal customers from cell competitors and landline phone companies like AT&T.

“T-Mobile is interested in the replacement or displacement of landline minutes,” said Mark Bolger, director of marketing for T-Mobile. Wi-Fi calling “is one of the technologies that will help us deliver on that promise.”

Major phone manufacturers including Nokia, Samsung and Motorola are offering or plan to introduce phones designed for use on both traditional cell and Wi-Fi networks. Samsung said last week that it had begun to sell its dual-mode phone in Italy.

Wi-Fi not only has the potential to offer better voice quality than traditional cellular service, but it also opens the door to videoconferencing and other data services on mobile devices. Cellphone users are now often limited to the services offered by their carriers, but Wi-Fi phones could have access to a wider range of offerings on the Internet, in some cases at faster transmission speeds than on the carriers’ networks.

But there are enough limits to the technology that it may be some time before people start tossing out their old cellphones to take advantage of Wi-Fi.

The radio signals sent from standard mobile phones connect to tens of thousands of cell sites on towers or attached to buildings, billboards and other structures. These cells have an average range of two miles, allowing them to blanket much of the country.

Wi-Fi hotspots have a much more limited range, usually no more than 800 feet. Unlike the cellphone towers, which are operated by the carriers, the hotspots tend to be controlled by individuals or smaller companies, and are not coordinated or organized into a larger network.
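As a rough back-of-the-envelope illustration of that gap (a hypothetical sketch, not from the article: it assumes perfectly circular coverage and uses only the two-mile and 800-foot figures cited above), a single cell site covers on the order of 170 times the area of a single hotspot:

    # Rough coverage comparison, assuming circular coverage and the ranges cited above.
    # Illustrative only; real-world coverage varies widely with terrain and equipment.
    import math

    cell_range_ft = 2 * 5280   # average cell-site range of about two miles, in feet
    wifi_range_ft = 800        # typical Wi-Fi hotspot range, in feet

    ratio = (math.pi * cell_range_ft ** 2) / (math.pi * wifi_range_ft ** 2)
    print(f"One cell site covers roughly {ratio:.0f} times the area of one hotspot")
    # prints: One cell site covers roughly 174 times the area of one hotspot

That disparity is why a citywide Wi-Fi network needs vastly more access points than a cellular network covering the same ground.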

“It’s going to be a long time before you’ll have a reliable Wi-Fi connection anywhere you go,” said Michael Jackson, director of operations for Skype.

A company called Fon, which is based in Spain and is backed by Skype and Google, is trying to accelerate the spread of Wi-Fi by selling cheap wireless routers to anyone who will agree to let other people in the vicinity use them by paying an access fee. The buyers can choose to split the fee with the company.

In October, Fon plans to begin charging about $150 for a wireless router that also serves as a docking station for a Skype-compatible Wi-Fi phone. The phone will connect easily to hotspots operated by Fon members.

“Wireless Internet infrastructure can be incredibly inexpensive,” said Martin Varsavsky, the founder and chief executive of Fon.

Without special software, like that from Fon, however, hotspots may not automatically set up a connection with the new phones. Instead, until the technology is smoothed out, users might have to configure their phones to connect whenever they are in range of a new hotspot.

“If it takes you five minutes to set up at the airport and you save 50 cents, why would you bother?” said Benoit Schillings, chief technology officer of Trolltech, an Oslo company developing software to make these connections easier.

Another wrinkle is that Wi-Fi networks operate over unlicensed radio spectrum. This spectrum is essentially public space, which means that anyone can make use of it, but it also means that the frequencies can be congested, potentially causing interference and dropped calls.

By contrast, the major cellphone carriers paid billions of dollars to the federal government for the right to use their slices of the radio spectrum. They can control who is on their networks, maintain quality standards and limit overcrowding. But the spectrum fees introduce a layer of costs that Wi-Fi calls are not burdened with.

Companies including Clearwire, founded by the cellphone pioneer Craig O. McCaw, are building subscribers-only wireless data networks using a technology called WiMax that has a much greater reach than Wi-Fi, and mobile phone service is part of their plans.

The hotspot technology has inspired a vigorous and complex discussion in the telecommunications world about how the traditional companies should react.

On its face, the technology would seem to present the carriers with a major problem. The more time subscribers spend connected to Wi-Fi hotspots, the less time and money they spend on the cell network.

Yet carriers also recognize that per-minute charges are falling across the industry, and that the loss of revenue they suffer if they allow people to switch onto a Wi-Fi network could be offset by attracting loyal subscribers who sometimes want to connect that way.

Further, some carriers argue that if people connect to Wi-Fi in their homes and offices, where there are close and reliable hotspots, they will enjoy connections that are better than those via cell towers and will not need standard phone lines. In a home, for example, the mobile phone could connect as effectively through Wi-Fi as traditional cordless phones do now to their base stations.

Larry Lang, general manager of the mobile wireless group at Cisco, said Wi-Fi would allow good service in people’s homes “without having to put up big cellphone towers in the neighborhood.” Cisco makes equipment that phone companies use to handle digitized calls.

Roger Entner, a telecommunications industry analyst with Ovum Research, said some carriers were still wary of Wi-Fi service. He said they were concerned that when hotspot reception was not good — whether at home or elsewhere — they would be blamed.

“The guys who don’t want it are predominately Verizon Wireless,” Mr. Entner said. They do not want a customer who is getting poor service at a hotspot “complaining that Verizon service is responsible,” he said.

A spokesman for Verizon Wireless, Jeff Nelson, said the company was looking at Wi-Fi service but had no plans to offer a product in this area. “At this point, we don’t see a great application for customers,” he said.

Further complicating the business discussion for the carriers are the incestuous ownership arrangements in the telecommunications world. For instance, Cingular Wireless is owned jointly by AT&T and BellSouth, while Verizon Wireless is part owned by Verizon Communications, the regional phone giant.

BellSouth, AT&T and Verizon Communications each have an interest in selling high-speed Internet access for homes and offices. If consumers have an incentive to set up wireless networks in their homes — networks that could be used for superior phone service — it could give them another reason to buy high-speed Internet access.

Of course, as many laptop users have discovered, Wi-Fi Internet access is not always something you pay for. Sometimes it is something you just find, as can be the case when people deliberately or unintentionally leave access points open and unsecured. The phones that work with Skype, and most likely others, will turn the free access point in a neighborhood café — or a neighbor’s house — into a miniature provider of phone service.

“It can be very open, decentralized,” said Mr. Entner of Ovum Research. But, he said, such a grass-roots infrastructure presents many challenges. For example, callers could get frustrated when the hotspot they are relying on for a connection stops working and there is no one to complain to.

Mr. Entner said, “You could knock on your neighbor’s door and say, ‘By the way, buddy, I’ve been bumming your Wi-Fi signal to make calls; please turn it back on.’ ”

John Markoff contributed reporting for this article.

Monumental Disappointment
Marine nonsense.

By Sean Paige

To former President Bill Clinton, environmentalists were just another Democratic-party voting bloc to be charmed, manipulated, or seduced, when it suited him. He stocked his Cabinet with greens, but took a strong personal interest in the agenda only when it was politically useful. He also found clever ways to cut Congress out of the picture when he did act, hogging the accolades for himself and minimizing the need to compromise or overcome resistance. He was slick about it, in other words. No surprise there.

A classic example of this was when Clinton used executive powers granted by the Antiquities Act of 1906 to establish a series of new national monuments across the west. The first series of designations came in a September surprise that helped cement the support of green groups during the run-up to Clinton’s 1996 reelection bid.

The gambit, which was hatched in secret without consultations with Congress or the states affected, stirred local resentment and generated lawsuits. So unpopular was one designation in southern Utah, in fact, that Clinton made the announcement in Arizona. Many westerners and Republicans claimed at the time — and maintain to this day — that Clinton misused the Antiquities Act in designating the monuments. The century-old act was intended to give presidents the power to protect Indian ruins, artifacts and other small historical or scientific sites across the west, and it had seldom before been used on such a massive scale, to impose a new layer of bureaucratic control over millions of acres of federal land. It remains a sore subject with many westerners.

Back in 2001, newly elected “red state” President George W. Bush, aware of western concerns, backed a congressional effort to rein in a president’s power to dramatically alter the status of federal lands without congressional involvement. So imagine the sense of betrayal among Republicans and westerners when Bush on June 15 pulled a Clinton — using the Antiquities Act to establish the 140,000-square-mile Northwestern Hawaiian Islands Marine National Monument. That’s an area larger than 46 of the 50 states, and more than seven times larger than all other national marine sanctuaries combined.

“Within the boundaries of the monument, we will prohibit unauthorized passage of ships; we will prohibit unauthorized recreational or commercial activity; we will prohibit any resource extraction or dumping of waste, and over a five-year period, we will phase out commercial fishing, as well,” Bush said, claiming the designation as an example of what he calls “cooperative conservation.” But this isn’t cooperative conservation; it’s command-and-control conservation of the worst sort, inviting Washington to dictate virtually all activities in an area 100 times larger than Yosemite National Park.

Much of the area already was in the process of being designated a national marine sanctuary and was under no serious threat. But Bush shortcut the five-year process and declared the area a national monument, to be administered by the Department of the Interior. In so doing, he also pulled the rug out from under erstwhile allies who decried such uses of the Antiquities Act as abuses of executive power.

Why do this but for crass political and legacy-seeking reasons? President Bush knows, as Clinton did, that the easiest way to set liberal pundits and editorial pages aglow is to pose as the second coming of Teddy Roosevelt. And it seems to have worked, at least in the short run. An editorial in the Philadelphia Inquirer said Bush now joins the company of Theodore Roosevelt, Jimmy Carter, and Bill Clinton for “grand acts of conservation.” The San Francisco Chronicle said Bush pulled “a stunner.” The St. Petersburg Times dubbed it an “act of environmental heroism.”

Others would call it Clintonesque, hypocritical, given Bush’s past opposition to such misuses of executive power. “I think it is totally inappropriate, unlawful, wrong and not consistent with his alleged philosophy,” Perry Pendley told Greenwire. Pendley’s Mountain States Legal Foundation unsuccessfully sued to overturn Clinton’s monument designations.

Those reportedly angered by Bush’s move include officials at the National Oceanic and Atmospheric Administration, who saw five years of work establishing a marine sanctuary go down the drain. And commercial fishers in Hawaii are naturally concerned, fearing that their access to traditional fishing areas will be curtailed or denied.

“If NOAA had gone forward with the sanctuary designation, the agency would have made a proposal for fishing regulations, taken public comment, then issued final regulations within the next year,” Greenwire reported. “The monument designation allowed Bush to institute his own management plans with the stroke of a pen.”

Bush is fond of imposing his will with a stroke of the pen — except, that is, when it comes to using his veto pen to curb the obscene orgy of federal spending that’s gone on during his watch. His executive edicts have frequently proven harmful to civil liberties, the rule of law, and the public’s trust in transparent and above-board government. This most recent pen-stroke only contributes to a monumental credibility problem.

Sean Paige is the editorial-page editor of the Colorado Springs Gazette.

 

The Democrats’ Impeachment Road Map
It’s finished, ready to go -- and waiting for November.

By Byron York

There’s a word you won’t find in the text of Democratic Rep. John Conyers’s new “investigative report” on the Bush administration, “The Constitution in Crisis: The Downing Street Minutes and Deception, Manipulation, Torture, Retribution, and Coverups in the Iraq War, and Illegal Domestic Surveillance.” And the word is…impeachment. Yet the 350-page “Constitution in Crisis,” released last week, is, more than anything else, a detailed road map for the impeachment of George W. Bush, ready for use should Democrats win control of the House of Representatives this November. And Conyers, who would become chairman of the House Judiciary Committee — the panel that would initiate any impeachment proceedings — is the man who could make it happen.

While it’s absent in the body of the report, the I-word does appear a few times in Conyers’s 1,401 footnotes, which include citations of authorities ranging from the left-wing conspiracy website rawstory.com to the left-wing antiwar sites democracyrising.us and afterdowningstreet.org to the left-wing British newspaper the Guardian to the left-wing magazines The Nation and Mother Jones to the left-wing blogosphere favorite Murray Waas to the New York Times columnists Paul Krugman, Maureen Dowd, Bob Herbert, and Frank Rich to former Clinton aide Sidney Blumenthal to the New Yorker’s Seymour Hersh. (Sources for “The Constitution in Crisis” even include one story co-written by the disgraced Internet writer Jason Leopold.) Relying on such material, Conyers has created what might be called the definitive left-wing blogger’s history of the Bush administration. “I would like to thank the ‘blogosphere’ for its myriad and invaluable contributions to me and my staff,” Conyers writes in the report’s introduction. “Absent the assistance of ‘blogs’ and other Internet-based media, it would have been impossible to assemble all of the information, sources and other materials necessary to the preparation of this report.”

But Conyers’s report is more than the world’s longest blog post. Far more seriously, it is the foundation for possible articles of impeachment, detailing charge after charge against the president. “Approximately 26 laws and regulations may have been violated by this administration’s misconduct,” Conyers wrote Friday in a message posted simultaneously on the DailyKos and Huffington Post websites. “The report…compiles the accumulated evidence that the Bush administration has thumbed its nose at our nation’s laws, and the Constitution itself.”

A few months ago, when there was speculation that Democrats planned to impeach Bush if they won the House, the party’s leadership moved quickly to stop the discussion. In May, a spokesman for House Minority Leader Nancy Pelosi told the Washington Post that Pelosi had told her fellow Democrats “impeachment is off the table; she is not interested in pursuing it.” But Conyers, who would likely be the single-most important person in the undertaking, was never on board. “There’s no way I can predict whether there will ultimately be an impeachment proceeding underway or not,” he said last week in an interview with the liberal website tpmmuckraker.com. “The Constitution in Crisis” is Conyers’s sign that, should the opportunity arise, he is ready to go.

LIES, FRAUD, COVERUPS, RETRIBUTION, TORTURE…
Conyers’s report is divided into two parts. The first accuses the Bush administration of a variety of crimes involving the war in Iraq, and the second of crimes involving what the administration calls the terrorist-surveillance program and what its critics call “domestic spying.” In many areas, legal analysts, Republicans, and even some Democrats might find Conyers’s case so tenuous and ill-sourced as to be laughable. But even a cursory reading of “The Constitution in Crisis” shows that the man who might be chairman is very, very serious.

On the war, Conyers argues that the Bush administration’s case for war, its decision to go to war, and its conduct of the war have been, in essence, an exercise in criminal fraud. The report lists four laws which Conyers says the president violated in the run-up to the war:

Committing a Fraud Against the United States (18 U.S.C. 371)
Making False Statements to Congress (18 U.S.C. 1001)
War Powers Resolution (Public Law 93-148)
Misuse of Government Funds (31 U.S.C. 1301)

On the question of committing a fraud against the United States, Conyers argues that President Bush, intent on “avenging [his] father and working with the neo-cons,” made the decision to go to war in Iraq before asking Congress for the authority to do so. That is the heart of the alleged fraud; every act that followed, Conyers writes, was part of the crime — even if those actions do not, at first glance, appear to be criminal acts. “‘Defrauding the government’ has been defined quite broadly and does not need an underlying criminal offense and alone subjects the offender to prosecution,” Conyers writes in a legal analysis section. “While this statute is similar to obstructing or lying to Congress…it is broader. It covers acts that may not technically be lying or communications that are not formally before Congress.”

Besides the alleged fraud, Conyers also contends that the administration’s preparations for war — the moving of military equipment and personnel to the Gulf region — violated at least two other laws. “Our investigation has found that there is evidence the Bush Administration redeployed military assets in the immediate vicinity of Iraq and conducted bombing raids on Iraq in 2002 in possible violation of the War Powers Resolution, Pub. L. No. 93-148, and laws prohibiting the Misuse of Government Funds, 31 U.S.C. 1301,” he writes. And key elements of the president’s case for war, Conyers says, violated yet another statute. “We have found that President Bush and members of his administration made numerous false statements that Iraq had sought to acquire enriched uranium from Niger,” the report continues. “In particular, President Bush’s statements and certifications before and to Congress may constitute Making a False Statement to Congress in violation of 18 U.S.C. 1001.”

In the next section of the report, Conyers alleges that the administration, in its treatment of prisoners, both in Iraq and in the broader war on terror, has violated three laws:

Anti-Torture Statute (18 U.S.C. 2340-40A)
The War Crimes Act (18 U.S.C. 2441)
Material Witness (18 U.S.C. 3144)

Conyers suggests that American officials might be tried under the War Crimes Act for “grave breaches” of the Geneva Conventions, and might also be liable under the Anti-Torture Statute. “Those who order torture, or in other ways conspire to commit torture, can be held criminally liable under this statute,” the report says. “The statute doesn’t require a person to actually commit torture with his own hands.” Conyers singles out the two attorneys general of the Bush presidency, John Ashcroft and Alberto Gonzales, as potential targets of prosecution.

From the war itself, Conyers moves to the issue of what the report calls “coverups and retribution” related to the war. “Inevitably, information began to seep out exposing the many falsehoods and deceptions concerning the Iraq war,” the report says. “The release of this information — including information detailing the Niger-Iraq uranium forgers — led members of the Bush administration to react with a series of leaks and other actions designed to cover up their misdeeds and obtain retribution against the critics.” In the course of that reaction, Conyers suggests, the president and his aides broke four laws:

Obstructing Congress (18 U.S.C. 1505)
Whistleblower Protection (5 U.S.C. 2302)
The Lloyd-LaFollette Act (5 U.S.C. 7211)
Retaliating against Witnesses (18 U.S.C. 1513)

The most famous case of alleged retribution, of course, involved the former ambassador Joseph Wilson, but Conyers broadens his charges to include alleged retribution against several others who have publicly disagreed with the administration, including former General Eric Shinseki, former Treasury Secretary Paul O’Neill, and former counterterrorism chief Richard Clarke. Conyers also places antiwar protester Cindy Sheehan on the list, and even an ABC News reporter named Jeffrey Kofman. (In that case, the administration, unhappy with a report Kofman had done, allegedly told The Drudge Report about a profile of Kofman published in the gay publication The Advocate, thereby sending out word that Kofman was gay — although the fact that he was profiled in The Advocate suggested that Kofman was already quite open about that fact.) In the case of Sheehan, Conyers describes the administration’s allegedly criminal acts this way:

Instead of meeting with Sheehan, the administration and other conservative media outlets began to attack Sheehan. Columnist Maureen Dowd noted that the “Bush team tried to discredit ‘Mom’ by pointing reporters to an old article in which she sounded kinder to W. If only her husband were an undercover C.I.A. operative, the Bushies could out him. But even if they send out a squad of Swift Boat Moms for Truth, there will be a countering Falluja Moms for Truth.”

The attacks continued as Fred Barnes of Fox News labeled Sheehan a “crackpot.” Conservative blogs then started talking about Sheehan’s divorce…The president also joined in on the attack by criticizing Sheehan as unrepresentative of most military families he meets….

The final part of “The Constitution in Crisis” is a long discussion entitled “Unlawful Domestic Surveillance and the Decline of Civil Liberties Under the Administration of George W. Bush.” In this matter, Conyers alleges that the president and the administration have broken five laws:

Foreign Intelligence Surveillance Act (50 U.S.C. 1801 et seq.)
National Security Act of 1947 (50 U.S.C. chapter 15)
Communications Act of 1934 (47 U.S.C. 222)
Stored Communications Act of 1986 (18 U.S.C. 2702)
Pen Registers or Trap and Trace Devices (18 U.S.C. 3121)

“The warrantless wiretap program disclosed by The New York Times,” Conyers writes, “directly violates the Foreign Intelligence Surveillance Act, 50 U.S.C. 1801; and the warrant requirement of the Fourth Amendment, and, just as dangerously, threatens to create a precedent that may be used to violate numerous additional laws. The NSA’s domestic database program disclosed by USA Today also appears to violate the Stored Communications Act and the Communications Act of 1934. In addition, the administration appears to have briefed members of the Intelligence Committees regarding these programs in violation of the National Security Act, 50 U.S.C. 401, and we have found little evidence they provided useful intelligence or law enforcement information.”

Most of Conyers’s discussion of surveillance is familiar to anyone who has followed the issue, but some readers may be surprised by his suggestion that the administration, in addition to all of its other alleged crimes, broke the law when it notified Congress about the NSA surveillance program. The administration informed eight top officials in the House and Senate — four Republicans and four Democrats — about the program. Conyers argues that was a crime. “Briefings of this nature would appear to be in violation of the National Security Act of 1947, which governs the manner in which members of Congress are to be briefed on intelligence activities,” he writes. “The law requires the president to keep all members of the House and Senate Intelligence Committees ‘fully and currently informed’ of intelligence activities. Only in the case of a highly classified covert action (when the U.S. engages in operations to influence political, economic or military conditions of another country) does a statute expressly permit the president to limit briefings to a select group of members. Covert actions, pursuant to the statute, do not include ‘activities the primary purpose of which is to acquire intelligence.’“

A CAST OF THOUSANDS
It would take a long discussion — perhaps one as long as “The Constitution in Crisis” itself — to do justice to all of Conyers’s allegations. The same might be said of his sources. For example, one analysis of the administration’s alleged misconduct that Conyers apparently finds quite persuasive — he cites it six times in “The Constitution in Crisis” — is an article originally published by the left-wing website democracyrising.us. Entitled “Bush’s Uranium Lies: The Case for a Special Prosecutor That Could Lead to Impeachment,” it was written by a Connecticut lawyer named Francis T. Mandanici. Readers might remember Mandanici from Whitewater days, when he engaged in a personal crusade against Kenneth Starr, filing ethics complaint after ethics complaint against the independent counsel. Readers with longer memories might recall that before Mandanici attacked Starr, he was fixated on the Bush family. In a November 1992 story about the savings-and-loan investigation involving the first President Bush’s son Neil, the Washington Post reported the following:

A federal grand jury in Denver investigating the failure of Silverado Banking, Savings and Loan Association heard from an unusual witness yesterday — a Connecticut lawyer with no firsthand knowledge about the Colorado S&L’s collapse, who says that President Bush’s son Neil should face criminal charges for violating banking laws while serving on Silverado’s board.

In a rare legal proceeding, the grand jury investigating Silverado’s collapse spent 1 1/2 hours meeting with Francis Mandanici, a Bridgeport public defender who persuaded the panel to listen to what he has to say about the case.

Motivated by what he admits is a long-standing grudge against President Bush, Mandanici said he researched thousands of pages of documents in the Silverado case and developed what he contends is evidence of a dozen felony violations by the president’s son.

Today, Mandanici seems to be pursuing a similar course with George H. W. Bush’s other son George. As for his motivation, Mandanici once told the online magazine Salon, “I guess I never left the ‘60s.”

Besides Mandanici and the entire liberal side of the New York Times columnist lineup, other writers cited in “The Constitution in Crisis” include left-wing journalists and bloggers Glenn Greenwald, Robert Dreyfuss, and Larisa Alexandrovna. “The Constitution in Crisis” also cites the occasional unknown writer like Carmen Yarrusso, who, according to a search of the Nexis database, seems to have written mostly letters to the editor — and who in 1998 was described in a brief article in the Boston Globe as being “a humorist from Brookline, N.H.”

Conyers’s defenders will no doubt argue that such writers make up a minority of the sources cited in “The Constitution in Crisis.” But the interpretive structure of the report is undoubtedly inspired by their work — and that of similar writers in the left-wing blogosphere. And the nature of the other sources on which the report is based — newspaper articles, transcripts of interviews, and previously released government documents — also suggests that the Conyers report is not the product of a real investigation. Conyers would likely respond by saying that as a member of the minority party in the House, he has no power to issue subpoenas, compel testimony, and demand the production of documents from the administration. That’s true. But if he were to win such power, it seems fair to say that he has already decided the conclusions he will reach.

Reading Conyers’s various statements on the Huffington Post, where he is a regular contributor, it’s clear that Conyers believes his case against George W. Bush has not received enough attention. And indeed, “The Constitution in Crisis” has been overlooked by many major press outlets. It shouldn’t be. The point is not the legitimacy, or lack of legitimacy, of Conyers’s charges. It is the fact that Conyers might be just a few months away from the chairmanship of the House Judiciary Committee. If he wins that seat, and he moves toward impeachment — and how could he not, if he believes the president broke not one, not two, not three, but 26 laws and regulations? — observers who haven’t been paying attention might express surprise or call such action precipitous. To that, Conyers can answer, correctly, that no one should be surprised. After all, he’s been making the case for a long time, whether or not anyone was listening.

 — Byron York, NR’s White House correspondent, is the author of the book The Vast Left Wing Conspiracy: The Untold Story of How Democratic Operatives, Eccentric Billionaires, Liberal Activists, and Assorted Celebrities Tried to Bring Down a President — and Why They’ll Try Even Harder Next Time.

 

Throw Out Your Cookie Cutters
Left, Right, and the Jews.

By Catherine Seipp

Reading Arianna Huffington’s latest high-horse commentary about Mel Gibson’s “odious racism” — that is, Gibson’s nutty ravings about “the Jews” when stopped by a traffic cop in Malibu — reminded me of the last time I noticed nutty ravings in the media about “the Jews.”

This was in March, and I was browsing the comments to Arianna’s explanation of the George Clooney affair. (To refresh your memory, Arianna had strung together some comments she’d gleaned from various Clooney interviews into a faux blog post for her site.) I’d noticed this sympathetic response from one of Arianna’s supporters in her comments, who felt that all the flack she’d been getting for stealing Clooney’s thoughts for her blog was unfair: “We are what the MSM is not,” the commenter said of their angelic side of the blogosphere. “We want the truth, we detest lies, we want peace, we hate the war mongers, we want reason, we get angry at rationalizations, we want a democracy for the good of all citizens, we oppose an oligarchy of self rightious political hacks out to promote fascism.”

Cue angelic voices singing “God Bless America.” I often wonder about these people who speak so confidently about all the wonderful things “we” want. But then the commenter got down to brass tacks and I didn’t have to wonder any more: “The only common thread I can see in the misery and death of the American people over the last 40 years is the control of our government by Texans and Jews.”

Oh, so that’s who “we” are. Or to put it another way: “We” are those brave, beleaguered souls who dare stand up to Bush and the neocons. Thanks for the explanation!

And what reminded me of all that was seeing Gibson described as “rightwing” a few days ago in a long, front-page Los Angeles Times feature about whether Gibson really is (or isn’t) anti-Semitic:

“I remember the days when Mel Gibson was nearly as lovable as he thought he was,” said film historian David Thomson. “When he began, he was a widely popular rascal. Women went for him in a big way — if you got involved with him, he’s not going to be exactly a gentleman, but you’d have a pretty good time.”

But when he became a director, Thomson added, Gibson seemed to take himself too seriously and emerged as very right-wing.

“He is very anti-English,” said Thomson, pointing to anti-British portrayals in The Patriot and Braveheart. “And there is a real extraordinary cruelty” in his films.

Now as it happens, Gibson probably can be fairly described as right-wing, according to other articles I’ve read about him. But the adjective is just plopped into the Times piece as a non sequitur that can just be left there unexplained... unless you share the belief that being anti-English and fond of cinematic cruelty is, ipso facto, any kind of explanation.

Even in the mainstream media, that’s a pretty strange assumption. But the article’s writer, Mary McNamara, evidently shares the Left’s general, comforting assumption that anti-Semitic automatically equals right-wing. So no further explanation was needed.

Actually, as anyone who’s paid attention to the political shift these last 30 years can’t help but notice, knee-jerk anti-Semitism is now far more commonly found on the Left than on the Right. To me, it’s also more disheartening, because unlike Gibson — who at least has had the grace to be repeatedly and publicly mortified about his drunken remarks once he sobered up — those on the Left never display the least embarrassment about it.

A couple of weeks ago, for instance, I was walking the dog and one of my (conservative, Catholic) neighbors came out of his house to say hello and chat about the news. He told me, still shaking his head with disbelief, that one of his liberal friends had just revealed to him the “real reason” behind Israel’s bombing Hezbollah in Lebanon: Tourism.

The Israelis, this woman had confidently explained, don’t like the competition from all those new hotels in a revived Beirut.

Gee, I guess now that we’re getting a groovy new boutique hotel here in Silver Lake, on the site of what used to be a motel catering to drug addicts, maybe I’d better start building a bomb shelter. But that’s one of the things I like about this neighborhood: Because much of it is still blue-collar/working class, and therefore unexposed to all that anti-America/pro-Arab propaganda common among the educated elite, it’s not as completely and reflexively Lefty as the insulated West Side.

My neighbor’s story reminded me that during the first Gulf War, I was sitting in a Silver Lake coffee shop, feeling depressed from eavesdropping on a conversation at the next table between three men who were evidently crew members for some film production company. A guy in a blond ponytail, who seemed to be in charge, was telling the other two the “real” reason we were in this mess: “Israel.”

I braced myself for the murmurs of agreement I’d been used to hearing on the West Side. But they just stared at him, open-mouthed: “I don’t see what that has to do with Iraq invading Kuwait,” one finally said.

The other day, I noticed commenters at some lefty blog were fuming about my column here last week, the one that had made fun of L.A.’s isolated elites and their 310-area code. I can always tell when these things hit home because the lefties start flailing about with odd accusations — in this case that I’m one of those “gated conservatives” (Silver Lake’s a gated community?); that the West Side is “actually quite conservative” anyway (I guess so, if you define “quite conservative” as “overwhelmingly registered Democrats”); and that, of course, I have no liberal friends.

The truth is that because I live in L.A., most of my friends are liberal, just like if I lived in Rome most would be Roman. Unlike those tolerant Lefties, I don’t limit my friends to people who share my political opinions.

As it happens, my daughter Maia just returned from a month in Santa Barbara at the Young America Foundation’s Reagan Leadership Academy. It was a wonderful experience for many reasons, among them that for once she wasn’t the most conservative person in the crowd.

In fact, they gave Maia a jokey little award calling her a RINO (Republican In Name Only). The truth is that although she’s more conservative than center-right me, she sees nothing wrong with being “worldly.” (After all, as I pointed out, Reagan himself must have been pretty worldly, since he was president.)

Many of the other kids came from Christian colleges and were homeschooled before that, but — sorry liberal true believers! — they were also all extremely bright and tolerant of dissenting points of view. Maia was one of just three Jewish kids among a class of 26 “future Ann Coulters and Jonah Goldbergs and Fred Barneses,” as she put it. But I’d much rather she have been there when the Israel/Hezbollah war broke out than in liberal L.A., having to hear (yet again) about the poor Palestinians.

Maia became especially good friends at the Reagan Leadership Academy with a girl who goes to Brigham Young University and interned for Gov. Mitt Romney of Massachusetts this summer; a Kyrgyz boy from the University of Toronto; and a 17-year-old boy who’d be a freshman at Harvard this September except he wants to take a “gap year” going back to high school for the social life.

The YAF has of course been doing various seminars along these lines for years, but this was the Reagan Leadership Academy’s inaugural program, and so the New York Times sent out a reporter and ran a feature story last week. (That’s her in the red shoulder bag, if you can log in and see the tiny figure in the group photo.) Maia said briskly over breakfast that she was happy to see that reporter Jason DeParle honored her request to keep her quotes off the record.

I agreed that was probably a good idea. “Still,” I said, just thinking aloud, “it might have been nice to see you quoted in the New York Times, and they might have run a bigger picture of you then...”

“Uh-uh,” she said firmly. “No. I wanted to help him out, so I told him lots of things, but I don’t want to see myself quoted by some reporter.”

Very sensible, and obviously she’s learned something useful by eavesdropping on my various conversations about this topic — not all of them even about the New York Times’s pushy Sharon Waxman. I’m glad to see she already knows at 17 what most people don’t learn until they’re decades older, sometimes the hard way.

Catherine Seipp is a writer in California who publishes the weblog Cathy’s World. She is an NRO contributor.

 

Simple Things
Tony Blair talks sense to the senseless.

By Denis Boyles

Pity Britain’s irate political class. They see that Blair (and of course his leash-holder, Bush), not Islamic terror or Arabian despotism, has darkened their happy world. And their anger is only intensified by having none of the one thing that might help shed some light: None of them have a real idea. They have regret and perfect hindsight, and they’re angry about Iraq and they’re angry about Israel. But when it comes to really hard thinking, like how to solve this Mideast war that grows more perilous by the second, Blair’s critics have become literally thoughtless.

One obvious example of this state may be seen in the phrase “immediate ceasefire.” Those words have been spoken so many times in Europe that on a fair day, they may be seen in the clouds. The BBC has exhausted itself, as in this item about cabinet friction, wondering why Blair doesn’t make Bush make Israel accept an immediate ceasefire — forgetting, apparently, that the difference between a ceasefire and surrender is that in a ceasefire, both sides have to cease firing. Since ceasefires are not part of the Syrian-Iranian tradition of paramilitary Islamic proselytization, what they really mean is Israel should surrender to Syria and Iran. Blair, Bush, and the Israelis think that’s a bad idea.

In fact, outside the fairy world of the European media, most people think it’s a bad idea. As the Guardian noted sadly, not even the EU could bring itself to demand an “immediate ceasefire” this week while some European governments were actually sympathetic to Israel. (The mind of Melanie Phillips has wrapped itself most wonderfully around the media’s distaste for Jews.)

Blair, on his return from America, brought with him what would seem to be a good idea: Instead of defaulting always to war against Islamic extremists, support Islamic moderates and help them to find ways of bringing benefits to all Muslims. The first order of business, according to Blair: growing a middle class in the sands of Araby. Impoverished democracies are pretty rare, so that’s a fairly commonsense idea. To the leader writer at the Guardian, however, even common sense must yield to the pleasures of blind anger:


Like a man who sets fire to his house and then discusses the flames, Tony Blair has a habit of drawing attention to his policy failures by analysing them. He did it in Los Angeles on Tuesday night in a significant speech on the Middle East that described a region ablaze with conflict without recognising his own role as one of the arsonists.


God forbid the Left should analyze when it’s so much more joyful to simply criticize — and simplicity is everything to the furious. Hence, this “connect the dots” front page of one of Britain’s more simplistic papers, the Independent, where even the normally sane Malcolm Rifkind waxes indignant, angry about what has passed him by, but clueless about where to go next. This is global politics as seen by John and Yoko, it’s sex-as-a-tantrum, and it becomes clear reading this kind of rubbish that for the British Left the only thing that would save Israel’s Jews from the deeply complex problems confronting them is if they were little, furry animals. Like those who hate Bush so much they want to see American reverses around the world, Blair-haters, angry at being ignored more than anything else, are happy to see the kind of guarantee of future bloodshed an “immediate ceasefire” would produce so long as it means humiliation for the man they love to hate.

Often, this kind of prioritization leads to some truly mindless moments, as reporters struggle to penetrate complexity with a simple thought that even they can understand. Thursday night, for example, the BBC sent one of its hopelessly hardheaded Hardtalk crew to interview Philippe Douste-Blazy, the French foreign minister. Douste-Blazy had just returned to reality after the president of Iran, no enemy of simplicity himself, greeted the French suggestion, as noted by NRO’s David Pryce-Jones, that Iran could be a “stabilizing influence” in the region, by proposing the destruction of Israel as the solution to the Mideast problem.

Unconcerned by the foreign minister’s perplexing details, Hardtalk’s Carrie Gracie kept demanding from Douste-Blazy a date when the fighting would stop, and he kept explaining to her that France, after a meaningless call for an “immediate ceasefire,” reported here in Haaretz, was now working with the U.S. He celebrated the fact that some American newspapers had misidentified France as an important ally of the U.S. With U.S. and British encouragement, he was hopeful of producing a no doubt fanciful plan, but at least one paying some obedience to reality: “No international force without a ceasefire and no ceasefire without a political — not a military — solution,” he said, approximately (use the Hardtalk link above to watch the interview). “We do not want to be the ones who disarm Hezbollah.” Unfazed, the blonde reporter kept interrupting by repeating over and over the words, “A date…a date…a date.” Douste-Blazy merely looked at her as if she were a simple thing, a pathetic child, a frustrated and dateless girl.

 
Denis Boyles is author of Vile France: Fear, Duplicity, Cowardice and Cheese. 

Advocates of 'proportion' are just unbalanced

August 6, 2006

BY MARK STEYN SUN-TIMES COLUMNIST

"Disproportion" is the concept of the moment. Do you know how to play? Let's say 150 missiles are lobbed at northern Israel from the Lebanese village of Qana and the Israelis respond with missiles of their own that kill 28 people. Whoa, man, that's way "disproportionate."

But let's say you're a northwestern American municipality -- Seattle, for example -- and you haven't lobbed missiles at anybody, but a Muslim male shows up anyway and shoots six Jewish women, one of whom tries to flee up the stairs, but he spots her, leans over the railing, fires again and kills her. He describes himself as "an American Muslim angry at Israel" and tells 911 dispatchers: ''These are Jews. I want these Jews to get out. I'm tired of getting pushed around, and our people getting pushed around by the situation in the Middle East.''

Well, that's apparently entirely "proportionate," so "proportionate" that the event is barely reported in the American media, or (if it is) it's portrayed as some kind of random convenience-store drive-by shooting. Pamela Waechter's killer informed his victims that "I'm only doing this for a statement," but the world couldn't be less interested in his statement, not compared to his lawyer's statement that he's suffering from "bipolar disorder.'' And the local FBI guy, like the Mounties in Toronto a month or so back, took the usual no-jihad-to-see-here line. ''There's nothing to indicate it's terrorism related,'' said Special Assistant Agent-In-Charge David Gomez. In America, terrorism is like dentistry and hairdressing: It doesn't count unless you're officially credentialed.

On the other hand, when a drunk movie star gets pulled over and starts unburdening himself of various theories about "f---ing Jews," hold the front page! That is so totally "disproportionate" it's the biggest story of the moment. The head of America's most prominent Jewish organization will talk about nothing else for days on end, he and the media too tied up dealing with Mel Gibson's ruminations on "f---ing Jews" to bother with footling peripheral stories about actual f---ing Jews murdered for no other reason than because they're f---ing Jews.

On the other other hand, when the leader of Hezbollah, Hassan Nasrallah, announces that if Jews "all gather in Israel, it will save us the trouble of going after them worldwide,'' that's not in the least "disproportionate.'' When President Ahmadinejad of Iran visits Malaysia and declares, apropos Lebanon, that "although the main solution is for the elimination of the Zionist regime, at this stage an immediate cease-fire must be implemented," well, that's just a bit of mildly overheated rhetoric prefacing what's otherwise a very helpful outline of a viable peace process: (Stage One) Please don't keep degrading our infrastructure until (Stage Two) we've got the capacity to nuke you.

Right now, Israel's best chance of any decent press would seem to be if Mel Gibson flies in and bawls out his waiter as a "f---ing Jew.''

What can we deduce from these various acts, proportionate and not so? If you talk to European officials, they'll tell you privately that that Seattle shooting is the way of the future -- that every now and then in Seattle or Sydney, Madrid or Manchester, someone will die because they went to a community center, got on the bus, showed up for work . . . and a jihadist was there. But they're confident that they can hold it to what the British security services cynically called, at the height of the Northern Ireland ''Troubles,'' ''an acceptable level of violence'' -- i.e., it will all be kept ''proportionate.'' Tough for Pam Waechter's friends and family, but there won't be too many of them.

I wonder if they're right to be that complacent. The duke of Wellington, the great British soldier-politician, was born in Ireland, but, upon being described as an Irishman, remarked that a man could be born in a stable but it didn't make him a horse. That's the way many Muslims feel: Just because you're born in the filthy pigsty of the Western world doesn't make you a pig. What proportion of Muslims is hot for jihad? Well, it would be grossly insensitive and disproportionate to inquire. So instead we'll put it down to isolated phenomena like the supposed "bipolar disorder" of Pam Waechter's killer.

In the struggle between America and global Islam, it's the geopolitical bipolar disorder that matters. Clearly, from his own statements about "our people," for Pam Waechter's killer his Muslim identity ultimately transcended his American one. That's what connects him to what's happening in southern Lebanon: a pan-Islamist identity that overrides national citizenship whether in the Pacific Northwest or the Levant. Not for all Muslims, but for enough that things will get mighty "disproportionate" before they're through.

Twenty-eight dead civilians in a village from which 150 Katyusha rockets have been launched against Israel doesn't seem "disproportionate" to me. What's "disproportionate" is the idea that civilian life should be allowed to proceed normally in what is, in fact, a terrorist launching platform.

But, when an army goes to war against a terrorist organization, it's like watching the Red Sox play Andre Agassi: Each side is being held to its own set of rules. When Hezbollah launches rockets into Israeli residential neighborhoods with the intention of killing random civilians, that's fine because, after all, they're terrorists and that's what terrorists do. But when, in the course of trying to resist the terrorists, Israel unintentionally kills civilians, that's an appalling act of savagery. Speaking at West Point in 2002, President Bush observed: "Deterrence -- the promise of massive retaliation against nations -- means nothing against shadowy terrorist networks with no nation or citizens to defend." Actually, it's worse than that. In Hezbollahstan, the deaths of its citizens works to its strategic advantage: Dead Israelis are good news but dead Lebanese are even better, at least on the important battlefield of world opinion. The meta-narrative, as they say, is consistent through the media's Hez-one-they-made-earlier coverage, and the recent Supreme Court judgment, and EU-U.N. efforts to play "honest broker" between a sovereign state and a genocidal global terror conglomerate: All these things enhance the status of Islamist terror and thus will lead to more of it, and ever more "disproportionately."

Copyright © Mark Steyn, 2006

ELECTION 2006

Lieberman
The "peace" Democrats are back. It's a dream come true for Karl Rove.

BY MARTIN PERETZ
Monday, August 7, 2006 12:01 a.m.

We have been here before. Left-wing Democrats are once again fielding single-issue "peace candidates," and the one in Connecticut, like several in the 1970s, is a middle-aged patrician, seeking office de haut en bas, and almost entirely because he can. It's really quite remarkable how someone like Ned Lamont, from the stock of Morgan partner Thomas Lamont and that most high-born American Stalinist, Corliss Lamont, still sends a chill of "having arrived" up the spines of his suburban supporters simply by asking them to support him. Superficially, one may think of those who thought they were already middle class just by being enthusiasts of Franklin Roosevelt, who descended from the Hudson River Dutch aristocracy. But when FDR ran for, and was elected, president in 1932, he had already been a state senator, assistant secretary of the Navy and governor of New York. He had demonstrated abilities.

At least in this sense, Mr. Lamont comes to this campaign for the U.S. Senate from absolutely nowhere--and it shows in his pulpy statements on public issues. Here is a paradigmatic one: "We need to provide parents and communities the support they need to assure that children start their school day ready to learn." Of course, he also thinks that U.S. troops should be replaced by the U.N. in Iraq. Does he know anything at all about the history of the idea that he so foolishly rescues from the dust? So what we have in this candidacy is someone, with no public record to speak of but with perhaps a quarter of a billion dollars to his name, who wants to be a senator. Mr. Lamont has almost no experience in public life. He was a cable television entrepreneur, a run-of-the-mill contemporary commercant with unusually easy access to capital.

But he does have one issue, and it is Iraq. He grasps little of the complexities of his issue, but then this, too, is true of the genus of the peace candidate. Peace candidates know only one thing, and that is why people vote for them. I know the type well. I was present at its creation.

I was there, a partisan, as a graduate student at the beginning, in 1962, when the eminent Harvard historian H. Stuart Hughes (grandson of Chief Justice Charles Evans Hughes) ran for the U.S. Senate as an independent against George Cabot Lodge and the victor, Ted Kennedy, a trio of what in the Ivies is, somewhat derisively, called "legacies." Hughes's platform fixed on President John F. Kennedy's belligerent policy towards Cuba, which had been crystallized in the "Bay of Pigs" fiasco. The campaign ended, however, with Hughes winning a dreary 1% of the vote when Khrushchev capitulated to JFK just before the election and brought the missile crisis to an end, leaving Fidel Castro in power as an annoyance (which he is still, though maybe not much longer), but not as a threat.

Later peace candidates did better. Some were even elected. Vietnam was their card. One was even nominated for president in 1972. George McGovern, a morally imperious isolationist with fellow-traveling habits, never could shake the altogether accurate analogies with Henry Wallace. (Wallace was the slightly dopey vice president, dropped from the ticket by FDR in 1944, who ran for president on the Progressive Party ticket, a creation of Stalin's agents in the U.S.) Mr. McGovern's trouncing by Richard Nixon, a reprobate president if we ever had one, augured the recessional--if not quite the collapse--of such Democratic politics, which insisted our enemy in the Cold War was not the Soviets but us.

It was then that people like Joe Lieberman emerged, muscular on defense, assertive in foreign policy, genuinely liberal on social and economic matters, but not doctrinaire on regulatory issues. He had marched for civil rights and is committed to an equal opportunity agenda with equal opportunity results. He has qualms about affirmative action. But who, in his heart of hearts, does not? He is appalled by the abysmal standards of our popular culture and our public discourse. Who really loves our popular culture--or, at least, which parent? He is thoroughly a Democrat. But Mr. Lieberman believes that, in an age of communal and global stress, one would do well to speak with the president (even, on rare occasion, speak well of him) and compromise with him on urgent matters of practical law.

Yes, Mr. Lieberman sometimes sounds a bit treacly. He certainly is preachy, and advertises his sense of his own righteousness. But he has also been brave, and bravery is a rare trait in politicians, especially in states that are really true-blue or, for that matter, really true-red. The blogosphere Democrats, whose victory Mr. Lamont's will be if Mr. Lamont wins, have made Iraq the litmus test for incumbents. There are many reasonable, and even correct, reproofs that one may have for the conduct of the war. They are, to be sure, all retrospective. But one fault cannot be attributed to the U.S., and that is that we are on the wrong side. We are at war in a just cause, to protect the vulnerable masses of the country from the helter-skelter ideological and religious mass-murderers in their midst. Our enemies are not progressive peasants as was imagined three and four decades ago.

If Mr. Lieberman goes down, the thought-enforcers of the left will target other centrists as if the center was the locus of a terrible heresy, an emphasis on national strength. Of course, they cannot touch Hillary Clinton, who lists rightward and then leftward so dexterously that she eludes positioning. Not so Mr. Lieberman. He does not camouflage his opinions. He does not play for safety, which is why he is now unsafe.

Now Mr. Lamont's views are also not camouflaged. They are just simpleminded. Here, for instance, is his take on what should be done about Iran's nuclear-weapons venture: "We should work diplomatically and aggressively to give them reasons why they don't need to build a bomb, to give them incentives. We have to engage in very aggressive diplomacy. I'd like to bring in allies when we can. I'd like to use carrots as well as sticks to see if we can change the nature of the debate." Oh, I see. He thinks the problem is that they do not understand, and so we should explain things to them, and then they will do the right thing. It is a fortunate world that Mr. Lamont lives in, but it is not the real one. Anyway, this sort of plying is precisely what has been going on for years, and to no good effect. Mr. Lamont continues that "Lieberman is the one who keeps talking about keeping the military option on the table." And what is so plainly wrong with that? Would Mahmoud Ahmadinejad be more agreeable if he thought that we had disposed of the military option in favor of more country club behavior?

Finally, the contest in Connecticut tomorrow is about two views of the world. Mr. Lamont's view is that there are very few antagonists whom we cannot mollify or conciliate. Let's call this process by its correct name: appeasement. The Greenwich entrepreneur might call it "incentivization." Mr. Lieberman's view is that there are actually enemies who, intoxicated by millennial delusions, are not open to rational and reciprocal arbitration. Why should they be? After all, they inhabit a universe of inevitability, rather like Nazis and communists, but with a religious overgloss. Such armed doctrines, in Mr. Lieberman's view, need to be confronted and overwhelmed.

Almost every Democrat feels obliged to offer fraternal solidarity to Israel, and Mr. Lamont is no exception. But here, too, he blithely assumes that the Palestinians could be easily conciliated. All that it would have needed was President Bush's attention. Mr. Lamont has repeated the accusation, disproved by the "road map" and Ariel Sharon's withdrawal from Gaza, that Mr. Bush paid little or even no attention to the festering conflict between Israel and the Palestinians. And has Mr. Lamont noticed that the Palestinians are now ruled, and by their own choice, by Hamas? Is Hamas, too, just a few good arguments away from peace?

The Lamont ascendancy, if that is what it is, means nothing other than that the left is trying, and in places succeeding, to take back the Democratic Party. Jesse Jackson, Al Sharpton and Maxine Waters have stumped for Mr. Lamont. As I say, we have been here before. Ned Lamont is Karl Rove's dream come true. If he, and others of his stripe, carry the day, the Democratic party will lose the future, and deservedly.

Mr. Peretz is editor in chief of The New Republic.

Copyright © 2006 Dow Jones & Company, Inc. All Rights Reserved.

August 13, 2006

It’s Getting Easier to Be Green

By WILLIAM NEUMAN

THEY are not yet as ubiquitous as the Toyota Prius, the hybrid car popular among the ecologically minded, but “green” apartment buildings have begun popping up around Manhattan. At least six large buildings designed to meet elevated standards for energy efficiency and for the use of environmentally friendly materials have opened in the last three years, and several more are under construction or being planned.

The green designation is conferred on buildings that incorporate recycled or renewable materials and that slash energy use and water consumption with features like photovoltaic cells, internal sewage treatment systems and roofs covered in soil and vegetation.

Developers say they are building green because they believe in it, but they also expect to gain a competitive edge. If faced with the choice of renting or buying two similar apartments, the developers say, consumers increasingly will opt for the one with green features, even if it comes at a higher price.

“We think it’s important to do, and we think that other buildings that don’t do this will become obsolete, and our buildings will continue to maintain their value,” said Douglas Durst, who built 4 Times Square, a pioneering green office building, in the late 1990’s. He is now building his second green apartment tower.

But will New York apartment dwellers share the enthusiasm of developers for going green?

Polly Brandmeyer and her husband, Michael, moved into the country’s first green apartment tower, the Solaire, a rental building at River Terrace and Murray Street in Battery Park City, when the building opened in 2003. They picked it because it was in the neighborhood they wanted (they were moving from two blocks away). They now pay about $6,500 a month for a three-bedroom, three-bath apartment, which is at the upper range of rents in the area.

At the time, the Brandmeyers thought of a green building as little more than a novelty.

“It’s funny,” Ms. Brandmeyer said, “because now the green part of the building is the most important to me. I think this should be the standard. It’s night and day different, the quality of living.”

Since moving in, the Brandmeyers have had two children, Alexa, now 2, and Nicholas, 6 months. Ms. Brandmeyer likes the fact that the air entering the building is filtered and that fresh air is constantly being circulated through her apartment, especially with all the construction around the nearby World Trade Center site. The humidity in the apartment is also regulated, so that the air does not get too dry, and she considers it an advantage that the building uses environmentally friendly cleaning products and paints. “You don’t have fumes everywhere from when they clean the carpets or paint an apartment,” Ms. Brandmeyer said.

Tenants in the city’s six green apartment buildings — five rental towers and a low-rise condominium — generally seem to split into two groups. One is made up of outright enthusiasts like Ms. Brandmeyer. Members of the other group say that while they may not always be able to tell the difference between a green apartment and one that is not, they like the idea of living in a building that, in numerous ways, is designed to tread a little more lightly on the planet.

“With the war in Iraq and gas prices over $3 a gallon, when you’re living in this particular era, you want to do what you can,” said Kelly Caldwell, who rents a one-bedroom apartment at the Helena, a 37-story green building at 57th Street and 11th Avenue. She would not say how much she pays in rent, but a typical one-bedroom in the building is $3,400 a month.

Ms. Caldwell, a freelance researcher, said the air did not seem noticeably fresher or the water purer in her apartment. But she does notice a big difference once a month when the electric bill comes.

In her previous apartment, which was about the same size, she paid about $200 a month in the summer for electricity. At the Helena, with its energy-efficient design, her bills have been about half that amount.

The road to a greener life has not always been without bumps, however.

At 1400 on Fifth, a green condo at 115th Street in Harlem that opened in late 2004, residents said there had been problems with a heating and cooling system that operates on water drawn from deep geothermal wells.

Lark E. Mason Jr., an expert on Chinese antiques who is seen regularly on “Antiques Roadshow” on PBS, moved with his wife, Erica, into a three-bedroom triplex apartment during the recent heat wave, only to find that the air-conditioning was not working properly. Grit from decomposed rock in the water from the geothermal wells was clogging the cooling units in some apartments, and the Masons were told that the developer, Full Spectrum of New York, planned to install filters to remove the grit from the system.

The Masons, who paid slightly under $1 million for their apartment, took the attitude that they were pioneers in a new way of urban living. “The concept is really exciting,” Ms. Mason said. “Practically speaking, there are still some kinks they’re working out.”

Carlton A. Brown, the chief operating officer at Full Spectrum, said that only some of the apartments had been affected and that he expected the filters to take care of the problem.

The Solaire’s 290 luxury rental units were built by the Albanese Organization in accordance with green building guidelines created by the Battery Park City Authority, which now requires all new office and residential buildings under its jurisdiction to meet the criteria.

Next, in late 2004, came 1400 on Fifth, built with support from the city’s Department of Housing Preservation and Development. The building has 129 units, including 85 that were sold at below-market rates to low- or moderate-income buyers.

Two more green buildings opened in 2005. The Related Companies completed TriBeCa Green, a 274-unit rental building at 325 North End Avenue, at Warren Street, across Teardrop Park from the Solaire. And in Hell’s Kitchen, the Durst Organization finished the Helena at 601 West 57th Street, at 11th Avenue, with 597 units. That building includes 120 units offered at below-market rents.

Early this year, Albanese completed its second green rental in Battery Park City, the Verdesian, with 250 units, at 211 North End Avenue, also on Teardrop Park.

Becker & Becker also finished work this year on the Octagon, a 500-unit rental building on Roosevelt Island that incorporates a restored octagonal tower from what was once the New York City Lunatic Asylum; 100 units there are for middle-income tenants.

Anybody can call a building green, so to impose some accountability, the United States Green Building Council created a rating system called LEED, short for Leadership in Energy and Environmental Design, to measure the degree to which buildings incorporate green practices and materials. The Solaire, Helena and TriBeCa Green have received gold ratings, the second-highest level. Developers of the other buildings said they expected to receive either a gold rating or a silver, one rung below gold.

Several more green apartment buildings are either under construction or being planned. Five are in Battery Park City. Millennium Partners is at work on a 236-unit condo at Little West Street and First Place; Albanese is planning a 250-unit condo tower at 70 Little West Street; and the Sheldrake Organization is putting up a 320-unit condo called One Rockefeller Park, on River Terrace across Murray Street from the Solaire. Milstein Properties is planning two towers with a total of 421 condos on North End Avenue between Warren and Murray Streets. The developers say that new design refinements may qualify the Albanese and Sheldrake buildings for platinum LEED ratings, the highest.

In Midtown, Durst and a partner, Sidney Fetner Associates, are building a tower called the Epic at 125 West 31st Street. It will have about 400 rental units, 20 percent of them at below-market rates. The Dermot Companies are building the Mosaic, with two towers of about 300 rental units each, on 10th Avenue between 51st and 53rd Streets.

And in Harlem, Full Spectrum and a development partner are at work on another project, the Kalahari, with 250 condos on 116th Street between Fifth and Lenox Avenues. Half of the units will go to moderate- or low-income buyers.

The buildings share many similar features. To improve indoor air quality, they circulate filtered air through the apartments. (The windows open, but some tenants say they prefer the indoor air.) They also use products that eliminate or minimize volatile organic compounds, or V.O.C.’s, such as formaldehyde, which can give off unwanted gases. They choose paints that are low in V.O.C.’s and carpets and cabinets with low-V.O.C. adhesives. They also use many recycled products, like carpets made from recycled materials or wood flooring rescued from demolished buildings.

Energy saving is a key factor in building green, and most buildings are expected to use at least 35 percent less energy than typical apartment towers. Most of the buildings have photovoltaic cells to generate electricity used in the lobbies and hallways. The newest buildings have microturbines, powered by natural gas, to generate electricity. Green roofs improve insulation and cut rainwater runoff.

To receive a LEED rating, completed buildings must be evaluated, and points are awarded for their green features.

Bruce S. Fowle’s firm, FXFowle Architects, designed the Helena and the Epic. He said the Helena includes an internal sewage-treatment system that purifies wastewater and recycles it for use in the building’s toilets, which gave the project enough points to qualify for a gold rating. The Epic will not have such a system, although it will be comparable in other ways, like its energy-saving features and environmentally friendly materials. As a result, Mr. Fowle said, it will probably receive a silver rating.

Developers say that features necessary for a gold LEED rating generally add 6 to 8 percent to the cost of a building. In the case of One Rockefeller Park, J. Christopher Daly, the president of Sheldrake, said that he expected to spend an additional 8 percent, or $18 million, for the building’s green elements, which include an unusual double-glass wall that provides an added level of insulation.

Mr. Brown of Full Spectrum said that he faced a distinct challenge, because he was building affordable housing and could not pass on the additional costs of the green features. He estimated that they added only 1 or 2 percent to the cost of his buildings.

While the other green buildings provide some of their electricity with costly photovoltaic cells, Mr. Brown said he looked for cheaper ways to make his buildings energy-efficient. The Kalahari, for instance, will use a heat exchanger that will recycle heat from air exhausted from the apartments.

The city, which has been a partner in Mr. Brown’s Harlem buildings, is taking further steps to make green design available to those who cannot afford a luxury apartment.

The Department of Housing Preservation and Development is working with the New York chapter of the American Institute of Architects to find a development team to build a green affordable housing complex with about 150 units at Brook Avenue and East 156th Street in the Bronx.

Young families are one group that seems to be attracted to green buildings. On a recent afternoon, a group of mothers and their babies sat enjoying the shade in Hudson River Park near TriBeCa Green, Related's green rental building, where several of them lived. (The women said they paid $5,075 to $5,600 a month for their two-bedroom apartments.)

Prompted by a reporter, the conversation turned to the pros and cons of green living.

Lisa Ellis, a management consultant, said that when she and her husband, Greg, were apartment hunting last fall they actually wondered if the building’s green aspects might be more of a drawback than an attraction. They worried that a building that promoted itself as an environmental paragon might give short shrift to basic functional considerations, like water pressure.

But none of those fears have been realized. The women in the group all agreed that the water pressure in the building was very good and that while they felt a certain duty to recycle, it was no more of an obligation than in their previous apartments.

Jacqui Brown, who moved from Toronto last year, said she was glad she lived in a green building, even if she does not necessarily notice its effect on air quality or other aspects of her daily life. When she and her husband went apartment hunting, she said, they narrowed their choices to two Related buildings in Battery Park City: TriBeCa Green and a traditional building across the street, called TriBeCa Park.

All other things being equal, Ms. Brown said, they picked the green building.

We Need Our Own MI5

By Richard A. Posner
Tuesday, August 15, 2006; A13

What lessons can we draw from the recent foiled plot to bring down U.S.-bound airliners with liquid bombs?

The first concerns the shrewdness of al-Qaeda and its affiliates in continuing to focus their destructive efforts on civil aviation. Death in a plane crash is one of the "dreaded" forms of death that psychologists remind us arouse far more fear than others that are much more probable. The concern with air safety, coupled with the fact that protection against terrorist attacks on aviation can be strengthened only at great cost in convenience to travelers, makes the recently foiled plot merely a partial failure for the terrorists. The episode is going to make air travel significantly more costly. The additional costs are no less real for being largely nonpecuniary (fear, and loss of time -- which, ironically, will result in some substitution of less safe forms of travel, namely automobile travel).

The plot has also revealed the indispensability of good counterterrorism intelligence. A defense against terrorists, as against other enemies of the nation, must be multilayered to have a reasonable chance of being effective. One of the outer defenses is intelligence, designed to detect plots in advance so that they can be thwarted. One of the inner defenses is preventing an attack at the last minute, as by airport screening for weapons.

The inner defense would have failed in the recent episode because the equipment for scanning hand luggage does not detect liquid explosives. (The liquid-bomb threat had been known since a similar al-Qaeda plot was foiled in 1995, but virtually nothing had been done to counter it.) Fortunately, the outer defense succeeded.

Intelligence succeeded in part because of the work of MI5, Britain's domestic intelligence agency. We do not have a counterpart to MI5. This is a serious gap in our defenses. Primary responsibility for national security intelligence has been given to the FBI. The bureau is a criminal investigation agency. Its orientation is toward arrest and prosecution rather than toward the patient gathering of intelligence with a view to understanding and penetrating a terrorist network.

The bureau's tendency, consistent with its culture of arrest and prosecution, is to continue an investigation into a terrorist plot just long enough to obtain enough evidence to arrest and prosecute a respectable number of plotters. The British tend to wait and watch longer so that they can learn more before moving against plotters.

The FBI's approach means that small fry are easily caught but that any big shots who might have been associated with them quickly scatter. The arrests and prosecutions warn terrorists concerning the methods and information of the FBI. Bureaucratic risk aversion also plays a part; prompt arrests ensure that members of the group won't escape the FBI's grasp and commit terrorist attacks. But without some risk-taking, the prospect of defeating terrorism is slight.

MI5, in contrast to the FBI (and to Scotland Yard's Special Branch, with which MI5 works), has no arrest powers and no responsibilities for criminal investigation, and it has none of the institutional hang-ups that go with such responsibilities. Had the British authorities proceeded in the FBI way -- rather than continuing the investigation until virtually the last minute, which enabled them to roll up (with Pakistan's help) more than 40 plotters -- most of the conspirators might still be at large, and the exact nature and danger of the plot might not have been discovered. We need our own MI5, not to supplant but to supplement the FBI.

A New York Times article Sunday on British methods says the British could wait until the last minute because they can detain suspects for up to 28 days without giving them a judicial hearing, while we in the United States can do so for only 48 hours. That is not correct. Normally, it is true, an arrested person in this country is entitled to a probable-cause hearing within 48 hours. But the rule is waived in extraordinary circumstances. The government may have a compelling justification for holding a suspected terrorist incommunicado for more than 48 hours, namely, to avoid tipping off his accomplices that the government has seized him and may be getting information from him that can be used to make further arrests.

But to the extent that our laws do handicap us in fighting terrorism, it is one more sign that we do not take the threat of terrorism seriously enough to be willing to reexamine a commitment to a rather extravagant conception of civil liberties that was formed in a different and safer era.

There is a silver lining in all this: not that the Heathrow plot was foiled, because, as I said, it was only a partial failure. The silver lining is that our close call may shake us out of our complacency. Because we have not been attacked since 2001, we are (or were until last week) beginning to feel safe. We were ostriches. An article in the current Atlantic Monthly proclaims victory over al-Qaeda, arguing that by depriving Osama bin Laden of his sanctuary in Afghanistan we defeated al-Qaeda, and the only danger now is that we will overreact to a diminished terrorist threat. Bin Laden was indeed deprived of his Afghanistan sanctuary, but he promptly found another one, in northwestern Pakistan. Though the plotters of the liquid-bomb attack are British citizens, the plot, in its scope and objective, has al-Qaeda written all over it.

The ostriches may retreat to the claim that "our" Muslims, unlike those in Britain and Canada, are fully integrated into American society and so pose no threat. And the percentage of American Muslims who are potential terrorists is indeed smaller than the corresponding percentages in either Britain or Canada. But there are many more American Muslims than there are British or Canadian ones, and we now know that British (and presumably Canadian) Muslim extremists are bent on attacking the United States, not just their own societies. We cannot afford to assume that we are safe. Perhaps we will now abandon that comfortable assumption.

The writer is a U.S. appeals court judge and author most recently of "Uncertain Shield: The U.S. Intelligence System in the Throes of Reform."

Commentary

Is Hezbollah launching Iran's Armageddon?

It's common wisdom to say that the war between Hezbollah and Israel is a regional struggle that also includes Iran and Syria, who have supported and supplied Hezbollah. What seems to be less understood is that this is the first war between the Islamic Republic of Iran and Israel, via Iran's proxy Hezbollah, and that its overarching purpose is to advance Iran's ambitions to export the Islamic revolution throughout the Middle East.

Thus, while religion has always played an important role in prior Arab-Israeli wars, this time it has moved to center stage. It is the theological aspect of this conflict that makes it so explosive and could lead to its expansion.

As an observer of the conflict from Iraq, I see the signs that Iran may be starting to launch the mullahs' version of an Armageddon, exploiting the religious beliefs of devout Shiites in the region. While this may sound more the stuff of prophecies than international relations, it is important to understand - especially in countries such as Lebanon and Iraq that have large Shiite populations.

President Mahmoud Ahmadinejad in Iran and the Shiite cleric Muqtada al-Sadr in Iraq are both devout believers in the "Imam" of Shia Islam. Also known as "Imam Mehdi" - hence the name of Sadr's militia, the Mehdi Army - he was the 12th grandson of the Prophet Muhammad. According to certain branches of Shia Islam, the return of the "hidden Imam" must be prepared by his followers, in a particular sequence of events. Chaos and rampant violence in the region are supposed to be among signs leading to the main battle in which the Imam will return to lead Shiites to victory.

Whether Ahmadinejad and Sadr personally believe that it is their duty to prepare the ground for the rise of the Imam, or whether they are merely exploiting religious mythology for their own political purposes, Iran and its agents in Iraq are starting to make the connection between the current conflict and the return of Imam Mehdi.

In eastern Baghdad, where Sadr's militias are based, there has been a sudden appearance of banners and writings on the walls carrying religious messages that refer specifically to Imam Mehdi. A large number can be seen near the Interior Ministry complex, home to police forces loyal to Sadr. And reports are surfacing that Sadr's militia is recruiting fighters to travel to Lebanon.

It is not coincidental that these banners appeared within 24 hours of Hezbollah's kidnapping of the Israeli soldiers. The messages on these banners, with their unstable mixture of religion and policy, are ominous, written in a tone that invokes the rise of the Imam. One reads: "By renouncing sin and by integration for the sake of afterlife, we become the best soldiers to our leader and savior, the Mehdi." Integration is one of those words Sadr often uses in reference to preparations for the afterlife.

Throughout Islamic history, rulers have used divine texts to consolidate their power. They did this either by twisting the meaning of the written texts, or by inventing thousands of alleged sayings of the prophet. In this case, it looks like the way is being paved for the "imminent" arrival of the Imam to be announced through the Mumahidoon (those who pave the way for the Imam), which is how Sadr and his followers describe themselves.

In the last quarter of a century, Iran's dreams of exporting the Islamic revolution were stopped by the once strong pan-Arab nationalism in the region. No more. Once the mullahs consolidated their power in Iran through their recent "electoral coup," in which they prohibited close to a thousand candidates from running in the last parliamentary elections and thus eliminated the reformist movement from the political scene, they were able to look outward. Now they are positioning themselves to fill the ideological vacuum left by the demise of pan-Arabist socialist ideologies with Islamic fundamentalism.

Iran's ambitions present a danger not only to Israel, but also to the free world, whose values are fundamentally opposed to those of radical Islamic fundamentalism. It is therefore critical that the West unite behind a clear strategy to thwart Iran's ambitions.

A first step is to recognize that Iran's calculations, which may seem irrational, factor in its potential to exploit deep religious feelings and mobilize Shiite followers to fight in Lebanon, Iraq and elsewhere in preparation for the return of the Savior Imam. It is a wily strategy that must be recognized and addressed by the West, lest Iran's Armageddon Day become a self-fulfilling prophecy.

Omar Fadhil (itmblog@gmail.com) is a member of Friends of Democracy, a Baghdad-based organization to promote democratic values, and cofounder of the blog Iraq the Model (www.iraqthemodel.blogspot.com).

JEFF JACOBY

A tale of 2 stories about anti-Semitism

By Jeff Jacoby, Globe Columnist  |  August 6, 2006

TWO INCIDENTS occurred on July 28. Both took place on the West Coast; both involved an American venting his hostility to Jews. But only one of them became, in the days that followed, the big national story about anti-Semitism. The other was treated as a serious but local matter, and drew only modest coverage around the country.

Incident A involved a guy spewing crude anti-Semitic slurs when he was arrested for drunk driving; after sobering up, he publicly and profusely apologized. Incident B involved a Muslim gunman's premeditated assault on a prominent Jewish institution; his attack left one woman dead and sent five to the hospital, three of them in critical condition.

Which would you say was the bigger story?

Unless you've spent the past week submersed in the Mariana Trench, you know that the intoxicated driver in Incident A was Hollywood's Mel Gibson, who railed at a Los Angeles County police officer about the "[expletive] Jews" and how "the Jews are responsible for all the wars in the world." The story was soon everywhere. In the first six days after his arrest, the media database Nexis logged 888 stories mentioning "Mel Gibson" and "Jews." And that didn't include the countless websites, talk shows, and smaller publications that also took it up.

By any rational calculus, Incident B was far more significant. According to police and eyewitness reports, the killer forced his way into the offices of the Jewish Federation of Greater Seattle by holding a gun to the head of a 13-year-old girl. Once inside, Naveed Haq announced, "I am a Muslim American, angry at Israel," and opened fire with two semiautomatic pistols. Pam Waechter died on the spot; five other women were shot in the abdomen, knee, or arm. When one of the women managed to call 911, Haq took the phone and told the dispatcher: "These are Jews and I'm tired of getting pushed around and our people getting pushed around by the situation in the Middle East."

At a time when jihadist murder is a global threat and some of the most malevolent figures in the Islamic world -- Iranian president Mahmoud Ahmadinejad and Hezbollah chieftain Hassan Nasrallah, to name just two -- openly incite violence against Americans and Jews, the attack in Seattle should have been a huge story everywhere. Yet after six days, a Nexis search turned up only 236 stories mentioning Haq -- one-fourth the number dealing with Gibson's drunken outburst. Why the disparity?

No doubt part of the answer is that Gibson is a celebrity, and that "The Passion," his 2004 movie about the crucifixion, was criticized by many as a revival of the infamous anti-Semitic motif of Jews as Christ-killers. Gibson, who belongs to a traditionalist Catholic sect, was already suspected of harboring ill will toward Jews. His crude remarks on July 28 confirmed it, and pushed the subject back into the spotlight. But if previous behavior and religious belief explain the burst of interest in the Gibson story, they only deepen the question of why the Seattle bloodshed was played down. After all, Haq is not the first example of what scholar Daniel Pipes has called "Sudden Jihad Syndrome," in which a seemingly nonviolent Muslim erupts in a murderous rampage.

Just this year, for example, Mohammed Taheri-azar, a philosophy major at the University of North Carolina, deliberately rammed a car into a crowd of students, saying he wanted to "avenge the death of Muslims around the world." Michael Julius Ford opened fire in a Denver warehouse, killing one person and injuring five. "I don't know what happened to him yesterday," his sister Khali told the press. "He told me that Allah was going to make a choice and it was going to be good and told me people at his job was making fun of his religion."

Other cases in recent years include Hasan Akbar, a sergeant in the 101st Airborne Division, who attacked his fellow soldiers at an American command center in Kuwait with grenades and rifle fire, killing one and wounding 15; Hesham Mohamed Ali Hadayet, who killed two people when he shot up the El Al ticket counter at the Los Angeles airport in 2002; and Ali Hasan Abu Kamal, who was carrying a note denouncing "Zionists" and others who "must be annihilated & exterminated" when he opened fire on the observation deck of the Empire State Building.

If the Catholic Gibson's nonviolent bigotry is a legitimate subject of media scrutiny, all the more so is the animus that spurs Muslims like Haq and the others to jihadist murder. As The New York Sun asked the other day, how many more Haqs must erupt in a homicidal rage before we open our eyes "to the possibility that they are part of a war in which understanding the enemy is a prerequisite for victory?"

The Designer's Notebook: Where's Our Merchant Ivory?

The struggle for public respect goes on. As soon as the Entertainment Software Association knocks down one clown-made unconstitutional ordinance designed to censor video games, another one pops up somewhere else. It’s Whack-A-Mole with lawsuits.

Video games are an easy target because, unlike the movies, games have no powerful friends and no beautiful film stars to argue for them. But there are many other reasons for our lack of cultural credibility as well. Some of them aren’t our fault, but a surprising number are, and recently I’ve thought of another one: We don’t have any highbrow games.

Almost every other entertainment medium has an élite form. Books have serious literature, the kind that wins Pulitzer and Nobel prizes. Music has classical music—not just popular favorites like Beethoven and Mozart, but other forms that are less familiar and less easy to love: twelve-tone music and grand opera. Dance? Ballet, obviously. TV, the most relentlessly proletarian medium of them all, still manages to devote a handful of channels to science, history, and the arts. (Science, history, and the arts aren’t really highbrow, but programming executives certainly think they are.)

And movies? Movies have Merchant Ivory, a small and very unusual production company. For over 40 years, Ismail Merchant (now deceased, alas), James Ivory, and Ruth Prawer Jhabvala made a string of incredibly beautiful and well-acted movies on subjects that would never be big hits at the shopping mall cineplex. These weren’t “art films,” short low-budget titles filled with impenetrable weirdness; they were rich, thoughtful works that addressed serious issues.

Big Hollywood stars lined up to appear in Merchant Ivory films even though the stars didn’t stand a hope in hell of making the kind of money they were used to, because it was worth it for the prestige value alone. The same is true of Kenneth Branagh’s Shakespeare films. Take a look at his Hamlet; the credits read like a Who’s Who of Tinseltown. Half the cast could easily get a leading role in a moneymaker, yet they signed up for bit parts in Hamlet just for the chance to say they did it.

Even if relatively few people go to operas, read serious literature, or watch Merchant Ivory films, even if art and ballet have to be supported by tax money and donations from wealthy companies and individuals, the very fact that they exist lends credibility to the entire medium of which they are a part.

Suppose the only music in all the world were rap or heavy metal. Do you think music would have anything like the level of respect that it does now? Would there be Kennedy Center Honors, with the President in the audience, for 50 Cent or Nuclear Assault? I doubt it.

Like comic books, games have no élite form or widely-venerated body of work yet. We produce light popular entertainment, and light popular entertainment is trivial, disposable, and therefore culturally insignificant, at least so far as podunk city councilors and ill-advised state legislators are concerned. They feel no reason not to censor games, because games have no constituency that matters and no history as important forms of expression.

Now I know from long experience that a certain percentage of you are making derisive snorts of contempt because you personally care nothing for high culture and see no reason why anyone else would either. But even if you don’t like it, you still need it. And before yet another idiot pipes up with Standard Asinine Comment #1 (“but FUN is the only thing that matters!”), let me just say: No, it's not. Shut up and grow up. Our overemphasis on fun—kiddie-style, wheeee-type fun—is part of the reason we’re in this mess in the first place. To merely be fun is to be unimportant, irrelevant, and therefore vulnerable.

The serious games movement will help a little with this problem because serious games aren’t just for fun, but by itself that’s not enough. People write comic books to help teach kids about fire prevention and healthcare, but that doesn’t change the perception that comics are for kids. Serious games that seem unrelated to entertainment games won’t do much for the standing of entertainment games themselves.

Elite forms of a medium help to legitimize that medium. They provide status symbols that people who want to be thought of as important and respectable can support. That’s why big corporations and wealthy families give money to ballet companies and symphony orchestras: Publicly sponsoring the elite forms of these arts reflects well on the givers. The élite forms also create shelter in which the less “worthy” forms of the medium can operate more safely. Once an élite form of video games exists, nobody can ever again say, “video games are just a silly waste of time.” Nobody would dream of saying that about music, even if they thought it was true of bubble-gum pop.

Elite forms of media discourage censorship and encourage respect, not only for the works themselves but for their creators. In this regard we might be a little ahead of comics already.

At a guess, I’d say that more Americans know who Sid Meier and Will Wright are (who make the games closest to being highbrow of any designers I can name) than who Alan Moore and Art Spiegelman are. But far more still will have at least heard the names of Mozart and Verdi, Rossini and Wagner, the best-known composers of opera.

What would a Merchant Ivory video game look like? To begin with, its execution would be flawless. Its music would be great music. Its acting would be world-class acting. Its animation would rank with the best of Disney or Miyazaki. Its user interface would be impeccably smooth but never in your face—like the ride of a Rolls-Royce.

A Merchant Ivory video game would be visually opulent, without being about explosions or “bullet time.” Its polygons would be spent on small details rather than large effects. As with a masterwork of painting, you could take a magnifying glass to each frame and see artistry even in the corners and shadows. It would reward close attention and playing more than once.

In common with literature or poetry, a highbrow video game would include connections to the wider world; it would tell us something about our society and ourselves. Not the cutesy winking references of postmodernism, but real cultural roots. Like many of the Merchant Ivory films, a Merchant Ivory game might offer us a glimpse of another time and another way of life; but, being interactive, it would allow us to enter and act in that world, not merely observe it. And it would leave us wanting to know more.

So what would it be about? The same things that highbrow books and movies and other entertainment forms are about: history, science, technology, politics, music, art, religion, diplomacy, family, manners, love, death, duty, sorrow, revenge, depression, and joy. For starters, anyway. Oh, yes, and probably sex, too, but sex handled with grace and sensitivity.

Above all, a Merchant Ivory video game would be about people and ideas. It would appeal to thinkers and creators, which is why the works of Meier and Wright spring to mind as potential examples. It would challenge the player to understand and appreciate new things rather than to jump on platforms or to shoot aliens. There’s nothing intrinsically wrong with jumping on platforms and shooting aliens, but they belong to a different class of products that entertain in a different way.

And would it, at the end of the day, be fun? Well, of course it would! The question is, fun for whom? Not for people who enjoy frenzied activity, certainly; more for people who enjoy mysteries, puzzles, and the complex interactions of human beings. It would be fun, or rather, entertainment of a different sort.

A Merchant Ivory video game would give the sense of deep satisfaction we feel when we reach the end of a great play or movie or novel, a long-lasting pleasure that the mere memory of the experience evokes years later.

Who would build a highbrow video game? Like Merchant Ivory itself, probably a small studio that knows its audience extremely well and is content with moderate rather than massive success. To people who create such works, the most important measure of achievement is not the number of dollars earned but the praise of those whom they respect. The dollars are a means to an end, but not an end in themselves.

Some folks will probably accuse me of snobbery for suggesting that we need highbrow video games, but I won’t cop to it. Snobbery is deliberately exclusive; the snob seeks to distance himself from ordinary people. High culture is available to anyone who wants it (though it can be expensive). The difference between high culture and popular culture is that high culture refuses to compromise its standards for the sake of a larger audience. That’s a risky tactic, and many producers of high culture—artists, classical musicians, small movie studios, and public television—have to struggle continuously just to stay afloat. But that should be a familiar feeling to a game studio…

So who’s the Merchant Ivory of the video game industry? I mentioned Sid Meier and Will Wright, because some of their games are on interesting and unusual themes. Whoever thought that city planning could be fun? Or knowing the progression of social, technological and political developments that lead to different forms of civilization? Yet all over the world, people are telling each other, “Dude—you can’t have industrialism until you get the assembly line, that’s totally obvious. You are such a n00b.” That may not sound like high culture as we’re used to thinking of it, but an idea is an idea, and Civilization IV is a long, long way from Mario or Black.

We need more games like that to help us win the culture wars and to serve a market that we currently ignore for the most part: people who read the Beat poets, people who enjoy comparing different productions of Das Rheingold, people who would rather visit an art museum than attend a Kylie Minogue concert.  And people who watch Merchant Ivory films. If you know of anyone who you think is developing a highbrow game, I’d like to hear about them.

Maybe I’ll design one myself, just for the fun of it.

August 4, 2006

Hezbollah's Other War

By MICHAEL YOUNG

One evening earlier this summer, Lebanon’s most popular satire show, “Bas Mat Watan,” broadcast a sketch showing an “interview” with Sheik Hassan Nasrallah, Hezbollah’s leader and secretary general. “Nasrallah” was asked whether his party would surrender its weapons. He answered that it would, but first several conditions had to be met: there was that woman in Australia, whose land was being encroached upon by Jewish neighbors; then there was the baker in the United States, whose bakery the Jews wanted to take over. The joke was obvious: there were an infinite number of reasons why Hezbollah would never agree to lay down its weapons and become one political party among others.

But it was the rapid reaction to the satiric sketch that sent the more disquieting message. That very night, angry supporters of Hezbollah closed the airport road with burning tires — a warning that they could block at will the main access point in and out of the country — and marched on mainly Sunni, Druse and Christian quarters in Beirut. In a Christian neighborhood, they clashed with the son of a former president and his comrades, and several youths were taken to hospital.

The leaders of Hezbollah defended these actions, explaining that they were the spontaneous emotional response to the mocking of a cleric. It is just as likely that they were a coordinated effort to intimidate critics. In any case, to me the event seemed an essential one, since it symbolized the duality that has defined Lebanon ever since its civil war came to an end in 1990. The duality was once neatly encapsulated by Walid Jumblatt, the leader of Lebanon’s Druse sect, when he asked, Would Lebanon choose to be Hanoi, circa 1970, or Hong Kong? That is, would it seek to become an international symbol of militancy and armed struggle, particularly against Israel, as represented by Hezbollah, or would it opt for the path laid out by Rafik Hariri, Lebanon’s late prime minister and billionaire developer, who sought to transform his country into a business entrepôt for the region, a bastion of liberal capitalism and ecumenical permissiveness?

In seeking to silence critics of their leader, in momentarily shutting down the airport, Hezbollah struck a blow against Lebanon’s tolerant, if always paradoxical, openness. Once again, it seemed, the Lebanese were suffering the consequences of failing to agree on a common destiny. At the time, the consequences seemed bearable. With the outbreak of the current conflict with Israel, they don’t seem bearable at all.

Lebanon today lies ravaged, its inhabitants suffering the consequences of Hezbollah’s hubris and Israel’s terrible, wanton retribution. Since July 12, when party militants abducted two Israeli soldiers and killed three on the Israeli side of the border, Lebanon has been under a virtually complete Israeli blockade. At the time of writing, nearly 1,000 people have been killed, mostly civilians. Predominantly Shiite areas in the south, Beirut’s southern suburbs and the northern Bekaa Valley have been turned into wastelands; Beirut seems empty. Businesses, when they do open, close early; store owners have cleared out their showrooms. The mood is one of ambient disintegration. Tens, if not hundreds, of thousands of refugees have moved into the capital, even as many of its residents have headed for the mountains. The economy, already precarious before the conflict started, lies in shambles, as does public confidence in the country’s future.

As attention focuses on Israel’s air war and troop movements, there has been less emphasis on the social impact of hundreds of thousands of traumatized Shiites moving into mainly non-Shiite areas. A month into the war, there have been laudable acts of cross-sectarian assistance, with Christian, Sunni and Druse organizations and parties helping refugees in schools and other facilities around the country. Yet there are signs of strain. In an effort to avoid conflicts between Shiite refugees and his own Druse supporters, Walid Jumblatt has allowed the refugees to put up Hezbollah flags and photographs of Nasrallah. The longer the fighting continues, however, the more likely it is that altercations will take place. Israel may have hoped to unite the Lebanese people against Hezbollah and force its government to extend its authority throughout the country. But such unity and such authority are hard to see on the horizon. As recriminations over the war spread, the delivery of aid across group lines will become more difficult, frustration will mount and the sectarian and political divide, already exacerbated by anxiety over Hezbollah's actions and intentions, will only grow.

How long it seems (and yet it is only a year) since the Lebanese were celebrating the Cedar Revolution — or what they always more revealingly called the Independence Intifada. Following the killing of Rafik Hariri in February 2005, it seemed that the Lebanese people were coming together to demand the end of Syrian dominance and the resurrection of their nation’s democracy. In that not so distant past, I had high hopes for the development of a liberal, even libertarian, Lebanon; after all, I reasoned, coexistence, freedom and entrepreneurial drive had been the natural state of the country between independence in 1943 and the start of the civil war in 1975 and even beyond. Maybe I was biased in this regard. My late father was an American, my mother is a Maronite Christian and I spent the first decade of the war living in predominantly Muslim West Beirut, where I came to embrace multiple identities and distrust the exclusivist certitudes of many Lebanese. When I returned to Lebanon in 1992, after several years in the United States, my enduring memories from that earlier time were of a remarkably diverse society that could rebound from its worst calamities, seemingly effortlessly. Many of the clichés were true: a neighborhood firefight might break out between militias in the morning, but by the end of the day people would be repairing their damaged properties. The Lebanese could be infuriatingly anarchic, stupidly selfish, but they were also determined to take initiatives and embrace new departures. This I saw as the essence of the liberal ideal. When the Syrian Army left, I believed, that ideal could at last be fulfilled.

My understanding was a valid one, but in retrospect an incomplete one. The ideals of the Independence Intifada were largely the ideals of an urban middle class — politicians, professionals, journalists and students; mostly Christians and Sunnis but also some Druse — fed up with a vulgar, vampirical Syrian hegemony. But what about that sizable part of Lebanon that had no inclination to see Syria gone?

From the moment of Hariri’s assassination on Feb. 14, 2005, it was clear that the Shiite political parties, particularly Hezbollah, did not share in the national distress surrounding the former prime minister’s death. Certainly, party officials paid their respects to the Hariri family and condemned the crime, but when tens of thousands of Lebanese descended on Martyrs Square in Beirut to bury Hariri, the most obvious question was, Where are the Shiites? Given that Shiites represent perhaps 35 to 40 percent of the Lebanese population, this was no idle question.

Of course, there were Shiites — as individuals. But over the years, Hezbollah had gradually won over a large majority of the community, particularly poorer Shiites, and the party had no wish to assist in Hariri’s elevation from politician to national martyr. It probably sensed as well what many others did at the time — namely that the assassination, blamed by the late prime minister’s allies on the regime of President Bashar al-Assad of Syria, could be used to end Syria’s presence in Lebanon and curb the influence of Syria’s close ally, Hezbollah itself. While other Lebanese saw the prospect of true independence, Hezbollah saw a threat — and this split vision would have grave consequences. Ultimately, a combination of traditional sectarian tensions, audacious political opportunism and the sheer unmovable force of Hezbollah’s state within a state would contribute to defeating the hopes of the Independence Intifada.

Hezbollah’s dependence on Syria and dominance of local Shiite politics were long in the making. In the early 1980’s, the “Party of God” was a loose collection of shady militant groups organized and trained by Iran’s Revolutionary Guard and dedicated to fighting Israel. After vanquishing its Shiite rival, the Amal movement, in fierce street fights, Hezbollah established its headquarters in the southern suburbs of Beirut. When the civil war ended in 1990, with Syria in effective control of the country, it was virtually the only armed group allowed to retain its weapons. The official rationale was that it needed those weapons to continue fighting Israel’s occupation of the south. But Syria had its own reasons to keep Hezbollah armed: as it negotiated with Israel for the return of the Golan Heights, the Assad regime wanted all the military leverage it could get.

Under Syrian tutelage, Hezbollah began to play a role in Lebanon’s political affairs as well. In 1992, Lebanon held its first postwar election, and when Nasrallah chose to participate, the decision created friction within the party, ostensibly because it implied abandoning the goal of creating an Islamic state in Lebanon but also, and more prosaically, because of personal leadership rivalries. Yet the party won an impressive 12 seats, and while it did not enter the government at the time, it firmly anchored itself in Parliament. Making use of the expanded patronage powers at its disposal, it began filling the civil service with supporters, which was a great boon to its often impoverished constituents. The integration of an Islamic militia into the state attracted considerable attention at the time; optimists saw it as a model of how an Islamist party might be “moderated.” In reality, Hezbollah manipulated this process to safeguard its autonomy, even as it expanded its military capabilities under Syria’s approving eye.

Throughout much of the 1990’s, Rafik Hariri, the Sunni billionaire, built up a glittering new Beirut and attracted investors and plaudits from abroad. The Syrians grew wary of Hariri, however, worrying that he moved far too comfortably in the world’s capitals and would one day try to remove Lebanon from their orbit. Hezbollah, the Syrians understood, could serve as a valuable counterweight to Hariri’s ambitions. More cynically, the Syrians realized that Hezbollah’s pariah status in the world community could work to their advantage, for who but Syria could ever hope to bring the violent party under control? To remain relevant in Lebanon and throughout the Middle East, the Syrians helped create a problem that only they could resolve.

But there was more to the Syrian-Shiite alliance than that. Many Shiites were genuinely grateful to Syria for helping them overcome decades of marginalization. The community’s economic and political ascent, and its resistance against Israel, were all encouraged by Syria. You could argue, with some irony, that the Syrians had graciously allowed the Shiites to be their cannon fodder, but for Shiites these events were vital steps in their journey from the periphery of Lebanese political and social life to its very center.

Hezbollah’s crowning moment came in May 2000, when Israeli forces withdrew from Lebanon after a 22-year presence. Refusing to accept the U.N.’s judgment that the withdrawal was complete, Hezbollah vowed to continue its “resistance.” While Hezbollah never quite made clear whether its resistance was “Lebanese” or “Islamic” in spirit (both terms were used interchangeably), this ambiguity went to the heart of the matter. Hezbollah simultaneously represented radical religious militancy and a peculiar sort of Lebanese patriotism, based on an existential struggle against Israel and the convenient ignoring of Syrian domination.

With Hariri’s killing, two Lebanons entered into confrontation. They were distinguished, in large part, by their different visions of the past. One recalled the glories of a cosmopolitan, multiconfessional prewar Lebanon and admired Hariri for seeking to revive those glories. The other, mainly Shiite, had little such nostalgia: it recalled a sophisticated, free-market prewar Lebanon that had left Shiites with little worth remembering.

In fact, both of these perceptions were flawed: the pre-1975 country was only partly a Mediterranean pleasure palace; its liveliness and prosperity were centered on a Beirut surrounded by rings of poverty, where the excluded were many. And Shiite misery, while very real, had been recognized in the late 1950’s and early 1960’s, when the state extended its services to the south. It was further alleviated in the 1980’s, and later after Hariri took office in 1992, as Shiite leaders were granted an ample share of the national pie. Any drive around Shiite areas in the last decade would have shown the mark of returning emigrant money in the proliferation of villas, businesses and interests linking Lebanon to communities in Africa, the United States and South America.

Lebanon is a country of simultaneous complex identities, and Hezbollah’s world deftly incorporated paradoxes no less than Hariri’s. The image of a Shiite Lebanon awash in turbans, chadors and prayer beads is a caricature. Secularism and religiousness, wealth and poverty, tradition and modernity, militancy and laid-backness, Hanoi and Hong Kong — all are present among Shiites, as among other Lebanese communities. Hezbollah’s genius has been to draw from this diversity even as it also seeks to stifle it. It has done so by virtually monopolizing the provision of basic services and patronage jobs to Shiites throughout the country and by convincing its co-religionists that if the party loses political ground, all Shiites lose.

In March 2005, Hezbollah and the rest of Lebanese society faced off in the climactic events of the Independence Intifada. On March 8, as Syrian troops began preparing to leave the country, Hezbollah organized a demonstration in downtown Beirut to ‘‘thank’’ Syria for all its help to Lebanon. Hassan Nasrallah spoke to the assembled masses, followed by an array of lesser pro-Syrian clients.

The Hezbollah-led demonstration was of particular symbolic importance. It was held in Beirut’s rebuilt downtown area, Hariri’s jewel and hitherto the setting for weekly anti-Syrian rallies. In his choice of locale, Nasrallah declared that the downtown area belonged to Shiites as much as to Sunni Muslims, Christians or Druse — the communities leading the opposition to Syria. His supporters pointedly marched under the national flag, reminding their countrymen that Shiites were as Lebanese as anybody else. It was an impressive gathering, with between 200,000 and 400,000 people in attendance.

But Nasrallah had miscalculated. Though there was a smattering of non-Shiites in the crowd, the rally was widely regarded as a sectarian Shiite challenge to the Lebanese independence movement — and this created widespread alarm. One week later, on March 14, the independence movement responded by holding a counterrally. There appeared to be at least three times as many people present on March 14 as on March 8 — Sunnis, Christians and Druse, but also some Shiites, all from the farthest reaches of Lebanon — probably some one million people, with tens of thousands more languishing on blocked access roads to Beirut. In a country of only four million, it was an extraordinarily large gathering. The ‘‘March 14 coalition,’’ as it would come to be known, embodied the idea of coexistence and promised a new beginning.

Or did it? While the March 14 rally was interpreted by many as the defining moment of a new, multisectarian Lebanon, and while it was an unforgettable experience for those who attended — and I was there — it also emerged from the viscera of Lebanese sectarianism. Anger against Syria, sorrow over Hariri’s murder and the hope for a free Lebanon all contributed to March 14, but so, too, did revulsion at the image of hundreds of thousands of poor Shiites descending on Beirut’s pot of gold, its downtown area, that receptacle of mainly urban Sunni and Christian achievement. The hinterland had laid claim to the wealth of the capital, and it had done so in the name of a Syrian regime that was also a product of the hinterland. The reflex of Lebanon’s elites and middle class — those who prided themselves on their openness — was to close the door.

Hezbollah, for its part, had much the same reflex. The Lebanese majority, you might think, had spoken. But that night, Hezbollah’s television station, Al Manar, presented the demonstration in the narrowest of sectarian terms: as a resurrection of the right-wing Christian politics of the civil-war era. Viewers were shown images from the march suggesting that a onetime Christian militia, the Lebanese Forces, was staging a comeback. The implication was that collaborators with Israel were at the forefront of the movement. It was pure demagoguery, since the Lebanese Forces had much earlier broken with Israel. But the station’s intent was to sound a persistent Hezbollah trope: those who opposed Syria were really acting on behalf of the United States and Israel — and this was no time for subtlety.

There was something inherently unstable in this situation. On the one hand, the Christians, Sunnis and Druse who (rightly or wrongly) regarded themselves as defenders of Lebanese tolerance and liberalism were animated in part by their own prejudices. On the other hand, the Shiite community was expressing its form of Lebanese patriotism through an implicit reaffirmation of autocratic Syrian rule. Who could untwine such contradictions? The great Lebanese journalist George Naccache once observed that ‘‘two negations do not make a nation’’; he was describing how Lebanon’s Christians and Sunnis had built the newly independent Lebanese state in 1943 on a strange compromise: the Christians would not look to join the West, and the Muslims would not seek to become part of a wider Arab nation. His words remained relevant: if March 8 and March 14 were both founded on negations, the prospects for a united Lebanon were dim.

It was widely hoped that the elections scheduled for May and June 2005 would put Lebanon’s new freedom on a more stable footing. But political maneuverings leading up to the vote soon dashed this hope. Lebanon’s famously complex political system is founded on sectarianism: in an effort to guarantee a voice to every religious group, political offices and parliamentary seats have been apportioned to different sects, depending on their size, ever since independence. The result is a system that requires consensus — even as it also hardens social divisions and encourages bizarre alliances and deals among bitter foes. It should be little surprise, then, that the very elections that were supposed to confirm the end of Syrian rule by handing political power over to the majority of March 14 also ended up tearing that majority apart.

The March 14 coalition was made up of a disparate array of parties, led by the largely Sunni Future Movement of Saad Hariri, the son and political heir of the late prime minister. It also included Walid Jumblatt’s Druse-Christian bloc and a collection of Christian and secular parties. Aligned against the coalition were Hezbollah and the other pro-Syrian Shiite party, Amal, and a flotilla of smaller pro-Syrian groups. For a time, at least, it appeared that the pro- and anti-Syrian factions would face off against each other in a clear contest.

But in Lebanon things are rarely that simple. When Gen. Michel Aoun, a Christian populist and former prime minister, returned from exile in France in May 2005, the March 14 coalition was wary of his intentions: Did he wish to join the movement or take it over — or perhaps wreck it? Aoun was popular in Lebanon’s Christian strongholds and had impeccable anti-Syrian credentials: he had battled the Syrian Army in the final days of the civil war. But his sweeping denunciations of the country’s elites and his apparent willingness to hasten his return by making deals with pro-Syrian politicians, and probably the Syrian regime itself, gave the March 14 leaders pause.

At this point, the volatility of Lebanon’s politicians and the complexity of its political system gave Hezbollah a crucial opportunity. Walid Jumblatt, the Druse leader, was concerned that Aoun’s candidates would take seats away from him in one of the two districts he considers his reserved constituencies. A cunning, contrapuntal politician, Jumblatt has always advanced his interests through triangulation — working both sides of an issue until one emerges stronger and he can capitalize on it. That is why it was jarring, but not surprising, to see him reach out to Hezbollah and Amal in late March 2005. Jumblatt needed Hezbollah’s votes to overcome the Aounist challenge, and to make sure he got them, he engineered a deal. A new electoral law would protect Hezbollah’s representation in Parliament. In return, Hezbollah would instruct its constituents to vote for Jumblatt’s candidates. The Druse leader forced the inexperienced Hariri to go along with an effort designed to marginalize Aoun and create three large blocs in Parliament: a Jumblatt bloc, a Hariri bloc and a joint Hezbollah and Amal bloc.

I later asked Jumblatt why he had conducted this maneuver. He answered that he hoped to bring Hezbollah into the national consensus in a post-Syria Lebanon and to bargain with it from a position of strength. That was disingenuous. The fractured Lebanese system invites expediency, but also destructiveness. Through his efforts, the Druse leader infuriated the Maronite Christians, notably their patriarch, Nasrallah Boutros Sfeir, Syria's most stalwart and courageous opponent. The community felt betrayed. It had endured the most political isolation during the Syrian years, and it had taken to the streets after Hariri’s death more actively than the initially timorous Sunnis. What it got was a Parliament that made most Christian candidates effectively dependent for their seats on the whims of Hariri, Jumblatt and the Shiite parties. Many Maronites saw this as a denial of their place in Lebanon’s new equation.

In the end, Jumblatt’s maneuver worked: he, Hariri and their Christian allies were able to assemble a parliamentary majority true to the spirit of the March 14 movement and, by consequence, a cabinet majority. But Hezbollah and Amal also had a large bloc, and following Lebanon’s customs of consensus-based governance, they were invited to join the cabinet. The opposition was led by Aoun; in a masterful electoral swerve of his own, the onetime anti-Syrian firebrand had allied himself with various pro-Syrian politicians and acquired their votes. Thus, in only a matter of weeks, the dizzying duplicity of Lebanese politics had swept all concord away, and the idealists of the Independence Intifada either found themselves standing against their former comrades or too disgusted to trust the political class.

It was to the credit of the new prime minister, Fouad Siniora, one of Rafik Hariri’s closest collaborators, that he tried to chart a way through this wilderness of mirrors. Siniora’s job was not an easy one. His first priority was to help the United Nations begin its investigation of Hariri’s assassination. Given the high probability of Syrian involvement, he knew Lebanon would face a reaction from the worried men in Damascus and also from their Hezbollah allies in Lebanon. As anti-Syrian politicians and journalists were marked for assassination throughout the year and a series of bomb explosions tore through Christian areas, these fears seemed justified.

Meanwhile, Siniora also had to handle relations with Hezbollah. Five of the ministers in his cabinet were Shiites, either members of Hezbollah and Amal or named by them. Members of the parliamentary majority affirmed their desire to see Hezbollah integrated into the armed forces and to see the state regain control over all the national territory — meaning Hezbollah must no longer rule over the border with Israel. But desiring Hezbollah’s disarmament was one thing; achieving it, another. When it came to such matters, the parliamentary majority was reluctant to act like a majority. Hariri was especially diffident, probably because his Saudi sponsors advised him to avoid precipitating any Sunni-Shiite showdown that might boomerang in the kingdom. But the chief obstacle, of course, was Hezbollah itself. The militia realized that without its weapons, it would lose its reason to exist as a militant movement, lose its élan and lose its value to Syria — as well as its ties to its main financier and advocate, Iran.

It did not take very long before the rift between Hezbollah’s supporters and detractors was reflected in the cabinet. The most divisive episode came late last year, when the government majority sought to approve a mixed Lebanese-international court to try the suspects in the Hariri assassination. The Shiite ministers refused to go along, arguing that the move was premature. The majority saw this as a ploy to protect Syria at a time when Nasrallah was publicly reaffirming his alliance with the Assad regime. On Dec. 12, in the tense hours following the assassination of the prominent anti-Syrian journalist Gebran Tueni, the government broke the deadlock by voting to approve a mixed tribunal. This was constitutionally defensible, but the Shiite ministers claimed it broke the rule that all important decisions must be made by consensus. They walked out of the government but did not resign. Hezbollah was not about to lose the convenient cover of legitimacy provided by participation in the cabinet, but it had every intention of gumming up the system so that the cabinet majority would not act as a majority again.

For all its efforts, Siniora’s government became less and less able to govern. Early this year, Nabih Berri, the speaker of Parliament, proposed a ‘‘national dialogue’’ of leading politicians to address the most divisive issues — like the fate of Hezbollah’s weapons. But little came of this. In the dialogue, Nasrallah would make concessions and then invariably step back from implementing them. The final straw was the July 12 abduction of the Israelis. For most of the ministers in the government, the operation was nothing less than a coup, a brazen effort to show that the majority had no control over so basic a matter as a declaration of war.

Several months ago, I began participating in a series of informal discussions with orphans of this wretched state of affairs. Our group is heavy on southern Lebanese, both Shiites and Christians, and its very modest ambition is to create a forum for exchanges between individuals unable to identify with any of the major blocs in Parliament. For the Shiites in the group, there is a pressing desire to loosen Hezbollah’s grip on their community. Several come from the 1970’s left, but that is by no means the rule. The organizer of the group is a journalist who was close to Hezbollah a decade ago, having been the host of a program on Al Manar, the Hezbollah television channel, while another, also a journalist, hails from a prominent southern religious family.

Endeavors like these are worthy because their starting point is the assumption that Lebanon really must be governed through mutual concessions and dialogue. Amid the general sectarianism, this may sound absurd. The ideal of Lebanon as a mosaic of separate but collaborating communities has been shattered so many times that it is difficult even to know what collaboration might mean. But it is also true that grounds for hope exist. Over the past half-century, the once-marginalized Shiites have steadily integrated themselves into Lebanese politics and society. While Shiites today largely accept Hezbollah’s claim to be their representative and protector, in the future new forms of Shiite politics and expression may emerge — must emerge.

And yet the current war is pushing the country in precisely the opposite direction. The great fear expressed by many Lebanese is that the country can absorb neither a Hezbollah victory against Israel nor a Hezbollah defeat. If Hezbollah merely survives as both a political and military organization, it can claim victory. The result may be the expansion of the party’s authority over the political system, thanks to its weaponry and its considerable sway over the Lebanese Army, which has a substantial Shiite base. This, in turn, might lead to a solidification of Iranian influence and the restoration of Syrian influence. A Hezbollah defeat, by contrast, would be felt by Shiites as a defeat for their community in general, significantly destabilizing the system.

As the violence continues, retribution is in the air. Israel has focused its attacks on Shiites, leaving Sunni, Christian and Druse areas (though not their long-term welfare) relatively intact. Amid all the destruction, many a representative of the March 14 movement has denounced Hezbollah’s ‘‘adventurism,’’ provoking Shiite resentment. As one Hezbollah combatant recently told The Guardian: ‘‘The real battle is after the end of this war. We will have to settle score with the Lebanese politicians. We also have the best security and intelligence apparatus in this country, and we can reach any of those people who are speaking against us now. Let’s finish with the Israelis, and then we will settle scores later.’’

This essentially repeated what Hassan Nasrallah told Al Jazeera in an interview broadcast a week after the conflict began: ‘‘If we succeed in achieving the victory . . . we will never forget all those who supported us at this stage. . . . As for those who sinned against us . . . those who made mistakes, those who let us down and those who conspired against us . . . this will be left for a day to settle accounts. We might be tolerant with them, and we might not.’’

Meanwhile, the country has sunk into deep depression, and countless Lebanese with the means to emigrate are thinking of doing so. The offspring of March 8 and March 14 are in the same boat, and yet still remain very much apart. The fault lines from the days of the Independence Intifada have hardened under Israel’s bombs. Given the present balance of forces, it is difficult to conceive of a resolution to the present fighting that would both satisfy the majority’s desire to disarm Hezbollah and satisfy Hezbollah’s resolve to defend Shiite gains and remain in the vanguard of the struggle against Israel. Something must give, and until the parliamentary majority and Hezbollah can reach a common vision of what Lebanon must become, the rot will set in further.

In his Al Jazeera comments, Nasrallah made it clear that the imperatives of ‘‘resistance’’ still trumped those of conciliation. But he sounded a little more conciliatory in a subsequent speech on Al Manar, when he emphasized that Hezbollah was struggling on behalf of all Lebanese. With hundreds of thousands of his brethren displaced from their homes, with Lebanon already facing an estimated $2.5 billion in direct losses, with Hezbollah having alienated many of its countrymen, even as it has fired off its prize weapons in a war of little benefit, maybe Nasrallah saw something he hadn't earlier: that his party may not always be the only party to hold the weapons. Faced with his intransigence, unable to peacefully settle their differences with Hezbollah, Lebanon’s other communities will likely rearm. The result may be a return to civil war. And if that happens, nothing will put Lebanon — let alone liberal Lebanon — back together again.

Michael Young is the opinion editor of The Daily Star, an English-language newspaper published in Beirut, and a contributing editor at Reason magazine.

August 14, 2006

Op-Ed Contributor

Beyond Propaganda

By JOHN KENNEY

FOR some men, it’s cars, a sports team or watching “The Godfather” over and over. For me, it’s oil companies. They fascinate me. Their size, their power, their reach. So I was particularly interested in the recent news about BP shutting down the nation’s largest oil field, in Prudhoe Bay, Alaska.

I was interested in part because six years ago I helped create BP’s current advertising campaign, the man-in-the-street television commercials. I can’t take credit for changing the company’s name from “British Petroleum” to “beyond petroleum” (lower case is cooler); my boss at the time came up with it.

That was the summer of 2000. Ideas were needed. We were pitching to the top man, Sir John Browne (now Lord Browne). My partner and I got the assignment. Other agencies got to work on Nike, Apple, Super Bowl spots. I would have taken Taco Bell. We got an oil company. At the time, I knew nothing about oil companies.

I started reading. The facts alone are amazing: 85 million barrels of oil a day used worldwide; 250,000 people born every day; climate change. I read Sir John’s speeches and read about BP and its technological achievements and investment in hydrogen.

This wasn’t my idea of an oil company chief. This was hope. Why didn’t they talk about this stuff? And why did all big oil company advertising look alike? The typical helicopter shot of a tanker at sea, sunlight reflecting off the logo as it dissolves to a towheaded urchin on the beach, frolicking in the pristine waters. A voice like Morgan Freeman’s saying, “At Gigantico Petroleum, we’re on the move to keep the world on the move. And to fill this tanker with cash.”

So we thought, what if you stripped away the corporate speak? What if you engaged in the debate that was happening with oil and energy and the environment?

We borrowed a video camera and approached people on the street, asking them questions: Would you rather have your car or a cleaner environment? Is global warming real? (Remember, this was 2000, when only one oil company, BP, had even admitted the possibility of global warming.) If you could say something right now to the head of a big oil company, what would you say?

It was an amazing experience. I had done man-in-the-street interviews for other products and knew that it was exceptionally difficult to get someone to stop and talk. People are simply too busy to talk seriously about, say, toilet paper with a stranger.

But with oil it was different. People stopped. They talked. They were intrigued and passionate and intelligent and a little angry. They understood that oil companies simply deliver a product. Yet — and I think this has to do with their size and profit — people often expected something more from them than they did of other large industries. A gallon of milk costs more than a gallon of gas, but it doesn’t cause global warming. And we don’t need 85 million barrels of it a day.

In short, they knew the power of an oil company executive. And they wanted leaders.

After a day and a half of interviews, we had enough footage for five commercials. They were raw and emotional. The things people said were sometimes none-too-flattering to BP or the industry. At the end of each spot, we put up a list of what BP was doing in terms of cleaner fuels, alternative forms of energy, recognizing global warming and reducing their own emissions; stuff you didn’t hear from an oil company. Before the “beyond petroleum” tagline, we added, “It’s a start.”

We did print ads too. The same way. Real people, real quotes as headlines that challenged BP and the industry. No oil company — few companies at all — had ever spoken like this, confronting the debate so frankly.

They liked it.

Advertising is a funny business. You get to help shape the personalities of huge companies. Most often it’s for cellphone service or credit cards or fast food or paper towels. Rarely are you faced with whether you “believe” in a product or service. This was different. This was serious. I believed wholeheartedly in BP’s message, that we could go — or at least work toward going — beyond petroleum.

The campaign first appeared a few days before Sept. 11, 2001. It was shelved for a long time. Then relaunched. In that time, I moved on to other assignments and later another agency.

The campaign is running again. I heard that the interviewees are prescreened now, which is too bad. And last week, I heard that the pipeline in Prudhoe Bay is corroded and leaking. The company that claims to be beyond petroleum shut down a pipeline that serves up 400,000 barrels of petroleum a day. Maybe Coca-Cola’s new line should be “It’s good for your teeth.”

I read too that the energy expert Daniel Yergin claimed last week that “new analysis of oil-industry activity points to a considerable growth in the capacity to produce oil in the years ahead.” It seems unlikely that anyone’s going to push hard to change our energy future.

I guess, looking at it now, “beyond petroleum” is just advertising. It’s become mere marketing — perhaps it always was — instead of a genuine attempt to engage the public in the debate or a corporate rallying cry to change the paradigm. Maybe I’m naïve.

It’s just that I believe that the handful of men who run these remarkable companies possess something more valuable than wealth, privilege and power. They have at their disposal the truly rare possibility of creating a legacy, the ability to change things, on a huge scale.

I never actually met Lord Browne. He announced recently that he’ll retire at the end of 2008, when he reaches BP’s mandatory retirement age of 60. I have no doubt he is a good, decent and exceptionally bright person. But imagine what the headlines could have read: “Lord Browne to retire; changed oil industry and the world.”

Think of it. Going beyond petroleum. The best and brightest, at a company that can provide practically unlimited resources, trying to find newer, smarter, cleaner ways of powering the world. Only they didn’t go beyond petroleum. They are petroleum.

The problem there is that “are petroleum” just isn’t a great tagline.

John Kenney is a creative director at an advertising agency.

August 13, 2006

Fat Factors

By ROBIN MARANTZ HENIG

In the 30-plus years that Richard Atkinson has been studying obesity, he has always maintained that overeating doesn’t really explain it all. His epiphany came early in his career, when he was a medical fellow at U.C.L.A. engaged in a study of people who weighed more than 300 pounds and had come in for obesity surgery. “The general thought at the time was that fat people ate too much,” Atkinson, now at Virginia Commonwealth University, told me recently. “And we documented that fat people do eat too much — our subjects ate an average of 6,700 calories a day. But what was so impressive to me was the fact that not all fat people eat too much.”

One of Atkinson’s most memorable patients was Janet S., a bright, funny 25-year-old who weighed 348 pounds when she finally made her way to U.C.L.A. in 1975. In exchange for agreeing to be hospitalized for three months so scientists could study them, Janet and the other obese research subjects (30 in all) each received a free intestinal bypass. During the three months of presurgical study, the dietitian on the research team calculated how many calories it should take for a 5-foot-6-inch woman like Janet to maintain a weight of 348. They fed her exactly that many calories — no more, no less. She dutifully ate what she was told, and she gained 12 pounds in two weeks — almost a pound a day.

“I don’t think I’d ever gained that much weight that quickly,” recalled Janet, who asked me not to use her full name because she didn’t want people to know how fat she had once been. The doctors accused her of sneaking snacks into the hospital. “But I told them, ‘I’m gaining weight because you’re feeding me a tremendous amount of food!’ ”

The experience with Janet was an early inkling that traditional ideas about obesity were incomplete. Researchers and public-health officials have long understood that to maintain a given weight, energy in (calories consumed) must equal energy out (calories expended). But then they learned that genes were important, too, and that for some people, like Janet, this formula was tilted in a direction that led to weight gain. Since the discovery of the first obesity gene in 1994, scientists have found about 50 genes involved in obesity. Some of them determine how individuals lay down fat and metabolize energy stores. Others regulate how much people want to eat in the first place, how they know when they’ve had enough and how likely they are to use up calories through activities ranging from fidgeting to running marathons. People like Janet, who can get fat on very little fuel, may be genetically programmed to survive in harsher environments. When the human species got its start, it was an advantage to be efficient. Today, when food is plentiful, it is a hazard.

But even as our understanding of genes and behavior has become more refined, some cases still boggle the mind, like identical twins who eat roughly the same and yet have vastly different weights. Now a third wave of obesity researchers is looking for explanations that don’t fall into the relatively easy categories of genetics, overeating or lack of exercise. They are investigating what might seem to be the unlikeliest of culprits: the microorganisms we encounter every day.

One year ago, the idea that microbes might cause obesity gained a foothold when the Pennington Biomedical Research Center in Louisiana created the nation’s first department of viruses and obesity. It is headed by Nikhil Dhurandhar, a physician who invented the term “infectobesity” to describe the emerging field. Dhurandhar’s particular interest is in the relationship between obesity and a common virus, the adenovirus. Other scientists, led by a group of microbiologists at Washington University in St. Louis, are looking at the actions of the trillions of microbes that live in everyone’s gut, to see whether certain intestinal microbes may be making their hosts fat.

If microbes help explain even a small proportion of obesity, that could shed light on a condition that plagues millions of Americans. Today 30.5 percent of the American public is obese; that is, nearly a third of Americans have a body-mass index over 30 (which for someone of Janet’s height is 186 pounds). The Department of Health and Human Services says obesity may account for 300,000 deaths a year, making it the second-most-common preventable cause of death after cigarette smoking. It’s been linked to various diseases: diabetes, high blood pressure, heart disease, gallbladder disease, sleep apnea, osteoarthritis and some cancers. “Individuals who are obese,” the department states on its Web site, “have a 50 to 100 percent increased risk of premature death from all causes, compared to individuals with a healthy weight.”
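For readers who want to check that parenthetical figure, body-mass index is simply weight in kilograms divided by the square of height in meters. The short calculation below is an illustration of my own, not something from the article or the researchers quoted here; it confirms that a BMI of 30 at Janet’s height of 5 feet 6 inches corresponds to roughly 186 pounds.

    # Illustrative arithmetic only: the BMI-30 threshold for a 5'6" person.
    # BMI = weight_kg / height_m ** 2, so the threshold weight is 30 * height_m ** 2.
    height_m = 66 * 0.0254                  # 5 feet 6 inches, in meters (about 1.68)
    threshold_kg = 30 * height_m ** 2       # about 84 kg
    threshold_lb = threshold_kg / 0.45359237
    print(round(threshold_lb))              # prints 186, matching the figure in the text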

If microbes do turn out to be relevant, at least in some cases of obesity, it could change the way the public thinks about being fat. Along with the continuing research on the genetics of obesity, the study of other biological factors could help mitigate the negative stereotypes of fat people as slothful and gluttonous and somehow less virtuous than thin people. There is, of course, the risk of overemphasizing how potent the biological forces are that make some people prone to gaining weight. Biology sets the context, and that is critical, but obesity still boils down to whether a person eats too much or exercises enough. The danger in bending too far in the direction of a biological explanation — whether that explanation is genetics, infectobesity or some theory yet to be discovered — is that it could be misinterpreted, by fat and thin alike, as saying that behavior is irrelevant.

 

Jeffrey Gordon, whose theory is that obesity is related to intestinal microorganisms, has never had a weight problem. He’s a rangy man, and when I met him he was dressed in a plaid shirt and clean chinos stretching over long, long legs. He wanted to be an astronaut as a kid, but he was too tall, 6-foot-2 by the time he was a teenager, and he says that back then, NASA was training only astronauts short enough to squeeze into the little space capsules of the day. Gordon has a big friendly face and curly brown hair that make him look younger than 58. He was a competitive swimmer as a child, from age 9 through his undergraduate years at Oberlin, but these days he seems more nerd than athlete: he continually makes puns, for one thing, and he alludes frequently to “Star Trek.”

“Are you ready to begin our Vulcan mind meld?” he asked when he collected me at my hotel in St. Louis, where I went to meet him and his colleagues at the Center for Genome Sciences at Washington University, which he directs. In a way, I was indeed hoping for a mind meld; I wanted to find out everything Gordon knows about the bugs in our guts, and how those bugs might contribute to human physiology — in particular, how they might make some people fat.

Of the trillions and trillions of cells in a typical human body — at least 10 times as many cells in a single individual as there are stars in the Milky Way — only about 1 in 10 is human. The other 90 percent are microbial. These microbes — a term that encompasses all forms of microscopic organisms, including bacteria, fungi, protozoa and a form of life called archaea — exist everywhere. They are found in the ears, nose, mouth, vagina, anus, as well as every inch of skin, especially the armpits, the groin and between the toes. The vast majority are in the gut, which harbors 10 trillion to 100 trillion of them. “Microbes colonize our body surfaces from the moment of our birth,” Gordon said. “They are with us throughout our lives, and at the moment of our death they consume us.”

Known collectively as the gut microflora (or microbiota, a term Gordon prefers because it derives from the Greek word bios, for “life”), these microbes have a Star Trek analogue, he says: the Borg Collective, a community of cybernetically enhanced humanoids with functions so intertwined that they operate as a single intelligence, sort of like an ant colony. In its Borglike way, the microflora assumes an extraordinary array of functions on our behalf — functions that we couldn’t manage on our own. It helps create the capillaries that line and nourish the intestines. It produces vitamins, in particular thiamine, pyridoxine and vitamin K. It provides the enzymes necessary to metabolize cholesterol and bile acid. It digests complex plant polysaccharides, the fiber found in grains, fruits and vegetables that would otherwise be indigestible.

And it helps extract calories from the food we eat and helps store those calories in fat cells for later use — which gives these microbes, in effect, a role in determining whether our diets will make us fat or thin.

In the womb, humans are free of microbes. Colonization begins during the journey down the birth canal, which is riddled with bacteria, some of which make their way onto the newborn’s skin. From that moment on, every mother’s kiss, every swaddling blanket, carries on it more microbes, which are introduced into the baby’s system.

By about the age of 2, most of a person’s microbial community is established, and it looks much like any other person’s microbial community. But in the same way that it takes only a small percentage of our genome to make each of us unique, modest differences in our microflora may make a big difference from one person to another. It’s not clear what accounts for individual variations. Some guts may be innately more hospitable to certain microbes, either because of genetics or because of the mix of microbes already there. Most of the colonization probably happens in the first few years, which explains why the microflora fingerprints of adult twins, who shared an intimate environment (and a mother) in childhood, more closely resemble each other than they do those of their spouses, with whom they became intimate later in life.

No one yet knows whether an individual’s microflora community tends to remain stable for a lifetime, but it is known that certain environmental changes, like taking antibiotics, can alter it at least temporarily. Stop the antibiotics, and the microflora seems to bounce back — but it might not bounce back to exactly what it was before the antibiotics.

In 2004, a group of microbiologists at Stanford University led by David Relman conducted the first census of the gut microflora. It took them a year to do an analysis of just three healthy subjects, by which time they had counted 395 species of bacteria. They stopped counting before the census was complete; Relman has said the real count might be anywhere from 500 species to a few thousand.

About a year ago, Relman joined with other scientists, including Jeffrey Gordon, to begin to sequence all the genes of the human gut microflora. In early June, they published their results in Science: some 78 million base pairs in all. But even this huge number barely scratches the surface; the total number of base pairs in the gut microflora might be 100 times that. Because there are so many trillions of microbes in the gut, the vast majority of the genes that a person carries around are more microbial than human. “Humans are superorganisms,” the scientists wrote, “whose metabolism represents an amalgamation of microbial and human attributes.” They call this amalgamation — human genes plus microbial genes — the metagenome.

Gordon first began studying the connection between the microflora and obesity when he saw what happened to mice without any microbes at all. These germ-free mice, reared in sterile isolators in Gordon’s lab, had 60 percent less fat than ordinary mice. Although they ate voraciously, usually about 30 percent more food than the others, they stayed lean. Without gut microbes, they were unable to extract calories from some of the types of food they ate, which passed through their bodies without being either used or converted to fat.

When Gordon’s postdoctoral researcher Fredrik Bäckhed transplanted gut microbes from normal mice into the germ-free mice, the germ-free mice started metabolizing their food better, extracting calories efficiently and laying down fat to store for later use. Within two weeks, they were just as fat as ordinary mice. Bäckhed and Gordon found at least one mechanism that helps explain this observation. As they reported in the Proceedings of the National Academy of Sciences in 2004, some common gut bacteria, including B. theta, suppress the protein FIAF, which ordinarily prevents the body from storing fat. By suppressing FIAF, B. theta allows fat deposition to increase. A different gut microbe, M. smithii, was later found to interact with B. theta in a way that extracts additional calories from polysaccharides in the diet, further increasing the amount of fat available to be deposited after the mouse eats a meal. Mice whose guts were colonized with both B. theta and M. smithii — as usually happens in humans in the real world — were found to have about 13 percent more body fat than mice colonized by just one or the other.

Gordon likes to explain his hypothesis of what gut microbes do by talking about Cheerios. The cereal box says that a one-cup serving contains 110 calories. But it may be that not everyone will extract 110 calories from a cup of Cheerios. Some may extract more, some less, depending on the particular combination of microbes in their guts. “A diet has a certain amount of absolute energy,” he said. “But the amount that can be extracted from that diet may vary between individuals — not in a huge way, but if the energy balance is affected by just a few calories a day, over time that can make a big difference in body weight.”
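Gordon’s point about “just a few calories a day” is easy to make concrete. The sketch below is my own back-of-the-envelope illustration, using the common rule of thumb that a pound of body fat stores roughly 3,500 calories (a figure the article itself does not give):

    # Back-of-the-envelope illustration of a small daily calorie surplus compounding.
    # Assumes the common rule of thumb of about 3,500 calories stored per pound of fat.
    CALORIES_PER_POUND = 3500
    daily_surplus = 10                      # extra calories extracted per day
    for years in (1, 5, 10):
        pounds = daily_surplus * 365 * years / CALORIES_PER_POUND
        print(f"{years} year(s): about {pounds:.1f} lb gained")
    # Roughly one pound a year, or about 10 pounds over a decade, from 10 calories a day.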

In another line of research, Gordon and his postdoctoral researcher Ruth Ley compared the microflora in two kinds of mice: normal-weight mice and mice with a genetic mutation that made them fat. Like humans, the mice had microflora consisting almost exclusively of two divisions of bacteria, the Bacteroidetes and the Firmicutes. But the proportions differed depending on whether the host was thin or fat. The normal-weight mice had more Bacteroidetes than Firmicutes in their gut microflora. The genetically obese mice had the opposite proportions: 50 percent fewer Bacteroidetes, 50 percent more Firmicutes.

It isn’t clear what the functional significance is of having more Firmicutes in the gut, nor whether the observed difference is a cause of the obesity or an effect. But Gordon wanted to see whether something comparable happened in humans of different weights. Over the past year, he and his colleagues have evaluated stool samples from 12 obese patients at a weight-loss clinic at Washington University, along with some normal-weight controls. They want to see if there’s such a thing as lean-type and obese-type microflora, and whether weight loss leads to a change in a person’s microbial community.

Gordon says he is still far from understanding the relationship between gut microflora and weight gain. “I wish you were writing this article a year from now, even two years from now,” he told me. “We’re just beginning to explore this wilderness, finding out who’s there, how does that population change, which are the key players.” He says it will be a while before anyone figures out what the gut microbes do, how they interact with one another and how, or even whether, they play a role in obesity. And it will be even longer before anyone learns how to change the microflora in a deliberate way.

 

You might think a microbial theory of obesity could change people’s views about the obese, perhaps even lessen the degree to which people think that obesity is the fat person’s own fault. But anti-fat sentiments seem to be deeply ingrained and resistant to change, as reflected in a rather unlikely place: New Scientist, a British magazine. In an article last year describing the work of Gordon and two groups of researchers in England who were also investigating the link between obesity and gut microflora, the author, Bijal Trivedi, was quite sympathetic to Gordon’s hypothesis. But the article — which is, remember, about a possible biological cause of obesity — was presented with a headline that still managed to depict obese people as lazy and gluttonous. It was called “Slimming for Slackers” and was illustrated with a fat man in a sweatsuit — the “slacker” of the title — sitting beside a partly eaten chocolate doughnut, waiting passively for thinness to arrive.

This is not to single out the New Scientist editors; they are just reflecting the generalized belief that there’s an element of laziness in anyone’s obesity. “Gluttony and sloth are two of the seven deadly sins,” said Ellen Ruppel Shell, author of “The Hungry Gene.” “We ascribe obesity to a character flaw.” This is what leads to the psychic pain of being fat, the social isolation of having a condition that everyone believes to be completely within your control — as if it were a voluntary purgatory, a case of willfully digging your own grave with your dinner fork.

I found that this attitude exists even among obese people, including a woman who was a research subject in Gordon’s clinical study. Joan was one of the obese patients at Washington University who sent Gordon stool samples as she lost weight (15 pounds over the course of a year, which she eventually gained back when she stopped dieting) so they could be tested for various microbes. She said she hasn’t been curious enough to try to find out about her microflora; she’s too busy, and besides, she already knows where to place the blame for her excess weight — not on a microbe but on herself. “I know that I’m not being obedient, I’m not using my body the way God intended,” said Joan, who asked me to refer to her only by her middle name. “I know how I’m supposed to eat, but I’m not having a healthy appetite, you know what I’m saying? I’m not wanting to be obedient.”

But it’s not about obedience — or at least not only about obedience. “The biochemistry of the body of the obese person is very different from that of a lean person,” said Richard Atkinson, Janet S.’s former physician. “If the obese person gets down to a lean person’s weight, their biochemistry is not the same.” Losing weight is hard, keeping it off is harder and, especially for some unfortunate souls, the body seems to work against itself in the struggle.

 

There’s another way that biological middlemen might be involved in obesity — in this case, not the gut microbes (mostly bacteria) with which we co-exist but the viruses and other pathogens that occasionally infect us and make us ill. This is the subspecialty that is being called infectobesity.

The idea of infectobesity dates to 1988, when Nikhil Dhurandhar was a young physician studying for his doctorate in biochemistry at the University of Bombay. He was having tea with his father, also a physician and the head of an obesity clinic, and an old family friend, S. M. Ajinkya, a pathologist at Bombay Veterinary College. Ajinkya was describing a plague that was killing thousands of chickens throughout India, caused by a new poultry virus that he had discovered and named with his own and a colleague’s initials, SMAM-1. On autopsy, the vet said, chickens infected with SMAM-1 revealed pale and enlarged livers and kidneys, an atrophied thymus and excess fat in the abdomen.

The finding of abdominal fat intrigued Dhurandhar. “If a chicken died of infection, having wasted away, it should be less fat, not more,” he remembered thinking at the time. He asked permission to conduct a small experiment at the vet school.

Working with about 20 chickens, Dhurandhar, then 28, infected half of them with SMAM-1. He fed them all the same amount of food, but only the infected chickens became obese. Strangely, despite their excess fat, the infected obese chickens had low levels of cholesterol and triglycerides in their blood — just the opposite of what was thought to happen in humans, whose cholesterol and triglyceride levels generally increase as their weight increases. After his pilot study in 1988, Dhurandhar conducted a larger one with 100 chickens. It confirmed his finding that SMAM-1 caused obesity in chickens.

But what about humans? With a built-in patient population from his clinic, Dhurandhar collected blood samples from 52 overweight patients. Ten of them, nearly 20 percent, showed antibody evidence of prior exposure to the SMAM-1 virus, which was a chicken virus not previously thought to have infected humans. Moreover, the once-infected patients weighed an average of 33 pounds more than those who were never infected and, most surprisingly, had lower cholesterol and triglyceride levels — the same paradoxical finding as in the chickens.

The findings violated three pieces of conventional wisdom, Dhurandhar said recently: “The first is that viruses don’t cause obesity. The second is that obesity leads to high cholesterol and triglycerides. The third is that avian viruses don’t infect humans.”

Dhurandhar, now 46, is a thoughtful man with a head of still-dark hair. Like Gordon, he has never been fat. But even though he is so firmly in the biological camp of obesity researchers, he ascribes his own weight control to behavior, not microbes; he says he is slim because he walks five miles a day, lifts weights and is careful about what he eats. Being overweight runs in his family; Dhurandhar’s father, who still practices medicine in India, began treating obese patients because of his own struggle to keep his weight down, from a onetime high of 220.

Slim as he is, Dhurandhar nonetheless is sensitive to the pain of being fat and the maddening frustration of trying to do anything about it. He takes to heart the anguished letters and e-mail he receives each time his research is publicized. Once, he said, he heard from a woman whose 10-year-old grandson weighed 184 pounds. The boy rode his bicycle until his feet bled, hoping to lose weight; he was so embarrassed by his body that he kept his T-shirt on when he went swimming. The grandmother told Dhurandhar that the virus research sounded like the answer to her prayers. But the scientist knew that even if a virus was to blame for this boy’s obesity, he was a long way from offering any real help.

In 1992, Dhurandhar moved his wife and 7-year-old son to the United States in search of a lab where he could continue his research. At first, because infectobesity was so far out of the mainstream, all he could find was unrelated work at North Dakota State University. “My wife and I gave ourselves two years,” he recalled. “If I didn’t find work in the field of viruses and obesity in two years, we would go back to Bombay.”

Dhurandhar’s battle against the conventional wisdom was reminiscent of the struggle a decade earlier of two Australian scientists, who were also proposing an infectious cause for a chronic disease, in their case, a bacterium that causes ulcers. The Australians were met with skepticism at first, but eventually they accumulated enough evidence to make it hard to ignore the connection between ulcers and the bacterium, Helicobacter pylori. It helped that one of them, Barry J. Marshall, dramatically swallowed a pure culture of H. pylori — and promptly came down with symptoms of gastritis, the first stage of an ulcer. (The H. pylori story ended with the ultimate vindication: Marshall and his collaborator, J. Robin Warren, won the Nobel Prize in 2005.)

One month before his self-imposed deadline in 1994, Dhurandhar received a job offer from Richard Atkinson, who was then at the University of Wisconsin, Madison. Atkinson, always on the lookout for new biological explanations of obesity, wanted to collaborate with Dhurandhar on SMAM-1. But the virus existed only in India, and the U.S. government would not allow it to be imported. So the scientists decided to work with a closely related virus, a human adenovirus. They opened the catalogue of a laboratory-supply company to see which one of the 50 human adenoviruses they should order.

“I’d like to say we chose the virus out of some wisdom, out of some belief that it was similar in important ways to SMAM-1,” Dhurandhar said. But really, he admitted, it was dumb luck that the adenovirus they started with, Ad-36, turned out to be so fattening.

By this time, several pathogens had already been shown to cause obesity in laboratory animals. With Ad-36, Dhurandhar and Atkinson began by squirting the virus up the nostrils of a series of lab animals — chickens, rats, marmosets — and in every species the infected animals got fat.

“The marmosets were most dramatic,” Atkinson recalled. By seven months after infection, he said, 100 percent of them became obese. Subsequently, Atkinson’s group and another in England conducted similar research using other strains of human adenovirus. The British group found that one strain, Ad-5, caused obesity in mice; the Wisconsin group found the same thing with Ad-37 and chickens. Two other strains, Ad-2 and Ad-31, failed to cause obesity.

In 2004, Atkinson and Dhurandhar were ready to move to humans. All of the 50 strains of human adenoviruses cause infections that are usually mild and transient, the kind that people pass off as a cold, a stomach bug or pink eye. The symptoms are so minor that people who have been infected often don’t remember ever having been sick. Even with such an innocuous virus, it would be unethical, of course, for a scientist to infect a human deliberately just to see if the person gets fat. Human studies are, therefore, always retrospective, a hunt for antibodies that would signal the presence of an infectious agent at some point in the past. To carry out this research, Atkinson developed — and patented — a screening test to look for the presence of Ad-36 antibodies in the blood.

The scientists found 502 volunteers from Wisconsin, Florida and New York willing to be screened for antibodies, 360 of them obese and 142 of them not obese. Of the leaner subjects, 11 percent had antibodies to Ad-36, indicating an infection at some point in the past. (Ad-36 was identified relatively recently, in 1978.) Among the obese subjects, 30 percent had antibodies — a difference large enough to suggest it was not just chance. In addition, subjects who were antibody-positive weighed significantly more than subjects who were uninfected. Those who were antibody-positive also had cholesterol and triglyceride readings that were significantly lower than people who were antibody-negative — just as in the infected chickens — a finding that held true whether or not they were obese.

Were fat people just more prone to infection? Probably not, because the scientists also screened for antibodies to two other strains of adenovirus, and there was no difference between those who were obese and those who were not. Could the differences be explained by genes instead of by viruses? Probably not, because the scientists controlled for genes in a follow-up study that involved 90 pairs of twins. In the twin study, they found 20 identical-twin pairs who were “discordant” for antibodies to Ad-36, meaning one twin had been exposed to the virus and the other twin had not. In the discordant pairs, the infected twin tended to be fatter, with an average of almost 2 percent more body fat (29.6 percent versus 27.5 percent) than the uninfected twin — even though they shared exactly the same genes.

If Ad-36 is a cause of obesity, Atkinson says, you’re more likely to catch it from a newly infected and still-contagious thin person than from someone who has already gained weight because of its effects. Exactly what the virus does to create this kind of long-term perturbation is still being investigated. In a paper published last year in The International Journal of Obesity, Atkinson and Dhurandhar, along with five of their colleagues, presented evidence for how Ad-36 might affect fat cells directly, “leading to an increased fat-cell number and increased fat-cell size.”

As for the other pathogens implicated in infectobesity — nine in all — certain viruses are known to impair the brain’s appetite-control mechanism in the hypothalamus, as happens in some cases of people becoming grossly obese after meningitis. Scientists also point to a commonality between fat cells and immune-system cells, although the exact significance of the connection is unclear. Immature fat cells, for instance, have been shown to behave like macrophages, the immune cells that engulf and destroy invading pathogens. Mature fat cells secrete hormones that stimulate the production of macrophages as well as another kind of immune-system cell, T-lymphocytes.

Another line of investigation in the field of infectobesity concerns inflammation, a corollary of infection. Obese people have higher levels of two proteins related to inflammation, C-reactive protein and interleukin-6. This may suggest that an infectious agent has set off some sort of derangement in the body’s system of fat regulation, making the infected person fat. A different interpretation is not about obesity causation but about its associated risks. Some scientists, including Jeffrey Gordon’s colleagues at Washington University, are trying to see whether the ailments of obesity (especially diabetes and high blood pressure) might be caused not by the added weight per se, but by the associated inflammation.

Infectobesity has its critics, among them Stephen Bloom, a researcher at Imperial College London. Bloom said that if he were working at a research agency, he’d give money for studies into the viral causes of obesity, just in case there’s something there. But he said he wouldn’t put the theory into a medical-school textbook just yet. His main objection, he said, is that “I don’t think we need that explanation, since we have a perfectly good other explanation.” Like Dhurandhar and Atkinson, Bloom suspects that obesity has a biological cause — but rather than turning to gut microflora or adenovirus infection for an explanation, he is partial to what he calls “the lazy-greedy gene” hypothesis, his slightly disparaging shorthand for what is more generally known as the thrifty genotype.

The thrifty-genotype hypothesis holds that there was, once upon a time, an adaptive advantage to being able to get fat. Our ancestors survived unpredictable cycles of food catastrophes by laying down fat stores when food was plentiful, and using up the stores slowly when food was scarce. The ones who did this best were the ones most likely to survive and to pass on the thrifty genotype to the next generation. But this mechanism evolved to get through a difficult winter — and we’re living now in an eternal spring. With food so readily available, thriftiness is a liability, and the ability to slow down metabolism during periods of reduced eating (a k a dieting) tends to create a fatter populace, albeit a more famine-proof one.

Bloom, by the way, does not give much credence to Dhurandhar’s analogy between the Ad-36-obesity connection and the recent history of H. pylori and ulcers — even though each started out looking like just another wacky idea. “There are so many crazy theories,” he said. “But just because one in a hundred turns out to be correct doesn’t mean all the crazy theories are correct.”

 

Obesity has turned out to be a daunting foe. Many of us are tethered to bodies that sabotage us in our struggle to keep from getting fat, or to slim down when we do. Microbes might be one explanation. There might be others, as outlined in June in a paper in The International Journal of Obesity listing 10 “putative contributors” to obesity, among them sleep deprivation, the increased use of psychoactive prescription drugs and the spread of air-conditioning.

But where does this leave us, exactly? Whatever the reason for any one individual’s tendency to gain weight, the only way to lose the weight is to eat less and exercise more. Behavioral interventions are all we’ve got right now. Even the supposedly biological approach to weight loss — that is, diet drugs — still works (or, more often, fails to work) by affecting eating behavior, through chemicals instead of through willpower. If it turns out that microbes are implicated in obesity, this biological approach will become more direct, in the form of an antiviral agent or a microbial supplement. But the truth is, this isn’t going to happen any time soon.

On an individual level and for the foreseeable future, if you want to lose weight, you still have to fiddle with the energy equation. Weight still boils down to the balance between how much a particular body needs to maintain a certain weight and how much it is fed. What complicates things is that in some people, for reasons still not fully understood, what their bodies need is set unfairly low. It could be genes; it could be microbes; it could be something else entirely.

Janet S. is one such person. Thirty years after her obesity surgery, 170 pounds lighter than when she started, she still needs to restrict her food intake to keep from gaining it all back.

“I definitely have to diet — damn it, I should have a pass on that, don’t you think?” said Janet, now 55, a human-resources administrator in Southern California, married and with a teenage daughter who is tall and slender. Even with the surgery, and even maintaining a weight that is borderline obese (at least according to the government definition; Janet weighs 180 pounds, plus or minus 15, meaning her body-mass index hovers around the magic number of 30), she can never enjoy food with complete and carefree abandon.

This is typical of people who have lost weight — not only a lot of weight, as Janet has, but even a little weight. According to Rudolph Leibel, an obesity researcher at Columbia University who was involved in the discovery of the first human gene implicated in obesity, if you take two nonobese people of the same weight, they will require different amounts of food depending on whether or not they were once obese. It goes in precisely the maddening direction you might expect: formerly fat people need to eat less than never-fat people to maintain exactly the same weight. In other words, a 150-pound woman who has always weighed 150 might be able to get away with eating, say, 2,500 calories a day, but a 150-pound woman who once weighed more — 20 pounds more, 200 pounds more, the exact amount doesn’t matter — would have to consume about 15 percent fewer calories to keep from regaining the weight. The change occurs as soon as the person starts reducing, Leibel said, and it “is not proportional to amount of weight lost, and persists over time.”
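
To put Leibel's 15 percent figure in concrete terms, using the article's own illustrative numbers (a back-of-the-envelope sketch, not a figure from his study):

$2{,}500 \times (1 - 0.15) \approx 2{,}125$ calories a day

In other words, the formerly heavier 150-pound woman maintains the same weight on roughly 375 fewer calories a day than her never-obese counterpart, and that gap persists indefinitely.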

For many people, then, losing weight and keeping the weight off requires a constant state of hunger — and when you’re hungry, you’re miserable. You think of nothing but food every moment of the day. All morning you think about lunch, all afternoon you think about dinner, and when you’re asleep, you dream of food.

Or, as Judith Moore put it in her memoir, “Fat Girl”: “Some people daydream heroic deeds or sex scenes or tropical vacations. I daydream crab legs dipped in hot butter.” She wrote about fellow warriors who, like her, struggle to keep off the weight they worked so hard to lose. As they approach the all-you-can-eat buffet, she wrote, “they square their shoulders. They ready for combat with Virginia baked ham, sweet-potato soufflé and those puffy dinner rolls with butter and a three-layer chocolate mousse cake. Food is the enemy. Food is also the mother, the father, the warmhearted lover, the house built of redbrick that not even the wolf can blow down.”

Current public-health messages deny this harsh reality. They make losing weight sound easy, just a simple matter of doing the math and applying some willpower. A pound of fat contains 3,500 calories, government documents say, and if you cut down a week’s worth of food intake or increase exercise by a total of 3,500 calories, then, voilà — you lose a pound. “To lose weight, you must use more energy than you take in,” states the Web site of the Office of the Surgeon General. “A difference of one 12-oz. soda (150 calories) or 30 minutes of brisk walking most days can add or subtract approximately 10 pounds to your weight each year.”
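
The bookkeeping behind that message is easy to write down. Here is a minimal sketch in Python of the government's linear rule, treating "most days" as five days a week (the five-day figure is an assumption for illustration, not the surgeon general's number):

    # Naive energy-balance arithmetic implied by the public-health message:
    # one pound of body fat is assumed to equal 3,500 calories.
    CALORIES_PER_POUND = 3500.0

    def naive_pounds_per_year(daily_calorie_change, days_per_week=5, weeks=52):
        """Pounds gained or lost in a year if the 3,500-calorie rule held exactly.

        days_per_week=5 stands in for the phrase "most days"; it is an
        assumption, not an official figure.
        """
        total_calories = daily_calorie_change * days_per_week * weeks
        return total_calories / CALORIES_PER_POUND

    # Skipping one 150-calorie soda five days a week:
    print(round(naive_pounds_per_year(150), 1))  # about 11.1 pounds a year

Run daily rather than "most days," the same rule yields closer to 16 pounds a year; either way, the arithmetic treats the body as a passive ledger, which is exactly what the next paragraph calls into question.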

But if genes or viral infection or gut microflora are involved, then for some people 3,500 calories might not equal a pound of fat, and 150 fewer calories a day might not mean they’ll lose 10 pounds in a year. As scientists continue to investigate how obese people are different, we can only hope that a side benefit will be a more largehearted understanding of what it means to be fat and how hard it is to try to become, and to remain, less fat.

A more concrete benefit would be to develop ways to interfere with the action of the offending microbes. Atkinson, for one, foresees a day when Ad-36 antibody screening becomes as routine as cholesterol screening. He has a financial stake in making this happen; when he moved to Virginia two years ago, he started a company called Obetech to market his Ad-36 antibody test, for which he charges $450. But he said he has an altruistic motive as well. The people most likely to benefit from such testing, he said, are not fat people but thin people, whose infections are so recent that they haven't yet begun to gain weight. But they are also the least likely to pay for the test unless it becomes part of a routine checkup.

Based on animal studies, Atkinson assumes that people infected with Ad-36 have a better than even chance of becoming obese. “But if they watch their diet, and if they exercise, they can avoid it.” Further in the future, he said, there might be a way to administer antiviral drugs to infected individuals early enough to block the effect of Ad-36 on the fat cells.

Gordon, too, is hoping that his research will eventually lead to new strategies for treating obesity. It’s a long way off, he said, but it’s the beacon that keeps him and his colleagues working.

“How can you manipulate the microbial community to more broadly affect energy balance?” he asked, enumerating the research questions still to be tackled. “Can one size fit all, or can you match nutrition to the microbes in your gut?” After obese-type microflora are differentiated from lean-type, Gordon said, the next step would be what he calls “personalized nutrition” — matching diet to the digestive properties of each person’s unique microflora.

Such deliberate manipulation of the gut microflora is a long way off — years and years off, according to Gordon — but its possibility “is what this first phase of our work is underscoring, and we hope it will turn out to be an important tool in the fight against obesity.”

Robin Marantz Henig is a contributing writer to the magazine. Her last cover article was about the science of lie detection.

The Laptop-Buying Learning Curve

By Rob Pegoraro
Sunday, August 13, 2006; F01

Shopping for a laptop is either too easy or too hard.

If you just need a semi-portable machine to move from room to room in your house, without ever untethering it from an electrical outlet, you can't go too wrong with shopping by price alone or some obvious factor such as screen size.

But if your laptop will exit home on any regular basis -- say, if you're a student looking at years of toting the machine from dorm to classroom -- you have far more things to contemplate. And the two most important factors among them, a laptop's weight and battery life, are either routinely hidden in marketing materials or not published at all.

Some manufacturers have good reasons to hide those details, lest they be embarrassed by them. Others may just be deeply confused as to what people want in a portable computer. There is no perfect laptop for everybody, but there may be a perfect one for you -- depending on what you value most in a machine.

The stickiest of these value judgments remains that old standby: Mac or Windows?

The basic trade-off between Mac OS X and Windows XP has changed dramatically since Apple began selling computers that run on the same Intel processors as many PCs. A Mac can now run every single program a PC can, once you install Apple's free Boot Camp software and use that to load a copy of Windows XP on the Mac's hard drive. (Or you can buy "virtualization" software that runs XP in its own window in OS X.)

Instead of having to balance Apple's security and ease of use with the far wider choice of software provided by Windows, you can have both. So if you've been leaning toward getting a MacBook, Apple's consumer-oriented laptop, but worry that you might have to run some Windows-only program -- go ahead and get the Mac.

Windows laptops still offer cost savings -- at least before you pay for a security-software subscription -- and most offer more expansion options than any of Apple's portables.

Many of the cheapest Windows portables are "desktop-replacement" laptops meant to stay anchored to desks -- some of these beasts weigh more than a dozen pounds. (Apple doesn't sell anything in that category at all.)

Avoid the desktop-replacement models if there's any reasonable chance that your laptop will regularly move across campus or across town. In those cases, you probably won't like anything weighing much more than five pounds. (You can get a laptop that weighs a lot less, but good luck finding one for anything under $1,500.)

Don't forget the power adapter, which can add almost a pound in some cases. You may also need to factor in upgrading to a higher-capacity battery: Many manufacturers try to keep a laptop's cost and weight down by including only a starter-sized battery that won't make it through a two-hour DVD movie. (A good laptop should finish a 2 1/2-hour DVD before conking out.)

As a general rule, if the company's Web site offers a choice of batteries, you should probably trade up from the standard one.

Your preferences for a laptop's weight -- and your budget -- will then probably dictate its screen size. In the case of an under-$1,500, campus or cross-town laptop, that usually means a display measuring 13 or 14 inches across -- usually a wide screen, proportioned to fit DVD movies. Don't worry too much about settling for "only" a 13-inch screen; odds are, it will have the same resolution as a 14-inch display. You won't see more of a Web page on the bigger display; its text will just look marginally bigger.

Wondering when you should worry about the processor in a laptop? Quite possibly, never -- although the Intel and AMD chips do offer differences in performance, the rest of the hardware inside the machine can have a much bigger effect on its overall utility.

Memory leads that list. The 512 megabytes included on entry-level models will suffice for Windows XP (though not its successor, Windows Vista, due in January) but just aren't enough on an Intel-powered Mac, where Mac OS X needs extra memory to translate older Mac software for the new chip. Take the money you might have spent on a processor upgrade and sink it into more memory instead.

A hard drive that's bigger and faster (as measured in RPM) will also help. It's hard to go wrong buying too much hard drive in a laptop, since replacing it later will be difficult or impossible.

If you've narrowed your choice of laptop down to a handful of models, compare their expandability. More USB ports to plug in peripherals and gadgets are always good. Bluetooth is -- finally -- becoming useful as a way to link up cellphones and handheld organizers wirelessly. (This is separate from WiFi wireless Internet access, a standard feature across the board.) And PC Card and Express Card expansion slots (the latter is a newer, faster standard) let you add entirely new functions, such as high-speed data service from a cellphone carrier.

Don't forget to consider the more subjective area of hardware design. You're going to have a laptop inches from your body for hours at a stretch, so it might as well be comfortable. Try its keyboard and its touchpad (Lenovo also includes a pointing stick mid-keyboard). Make sure that you can easily find its expansion ports and that the screen doesn't pick up too much glare from overhead lights.

This column should conclude with advice on how to compare bundled software and tech support, but there just isn't much to compare in either category.

Although Windows supports the greatest variety of software the world has ever seen, you'd never know it from the dreck preinstalled on most Windows laptops. In the PC industry, what seems to count as innovation is setting Internet Explorer's home page to Google instead of the usual MSN, Yahoo or AOL.

Tech support is another area crying out for renewed competition. As it stands, you can have somebody take your calls quickly, you can get accurate answers, or you can talk to somebody who will understand your English perfectly.

But you can't get all three.

The Triumph of Unrealism

By George F. Will
Tuesday, August 15, 2006; A13

Five weeks have passed since the kidnapping of two Israeli soldiers provoked Israel to launch its most unsatisfactory military operation in 58 years. What problem has been solved, or even ameliorated?

Hezbollah, often using World War II-vintage rockets, has demonstrated the inadequacy of Israel's policy of unilateral disengagement -- from Lebanon, Gaza, much of the West Bank -- behind a fence. Hezbollah has willingly suffered (temporary) military diminution in exchange for enormous political enlargement. Hitherto Hezbollah in Lebanon was a "state within a state." Henceforth, the Lebanese state may be an appendage of Hezbollah, as the collapsing Palestinian Authority is an appendage of the terrorist organization Hamas. Hezbollah is an army that, having frustrated the regional superpower, suddenly embodies, as no Arab state ever has, Arab valor vindicated in combat with Israel.

Only twice in the United Nations' six decades has it authorized the use of substantial force -- in 1950 regarding Korea and in 1990 regarding Kuwait. It still has not authorized force in Lebanon. What is being called a "cease-fire" resolution calls for Israel to stop all "offensive" operations. Israel, however, reasonably says that its entire effort is defensive. The resolution calls for Hezbollah to stop "all attacks." The United Nations, however, has twice resolved that Hezbollah should be disarmed, yet has not willed the means to that end. Regarding force now, the U.N. merely "expresses its intention to consider in a later resolution further enhancements" of the U.N. force that for 28 years has been loitering without serious intent in south Lebanon.

The "new Middle East," the "birth pangs" of which we supposedly are witnessing, reflects the region's oldest tradition, the tribalism that preceded nations. The faux and disintegrating nation of Iraq, from which the middle class, the hope of stability, is fleeing, has experienced in these five weeks many more violent deaths than have occurred in Lebanon and Israel. U.S. Gen. George Casey says 60 percent of Iraqis recently killed are victims of Shiite death squads. Some are associated with the Shiite-controlled Interior Ministry, which resembles a terrorist organization.

The London plot against civil aviation confirmed a theme of an illuminating new book, Lawrence Wright's "The Looming Tower: Al-Qaeda and the Road to 9/11." The theme is that better law enforcement, which probably could have prevented Sept. 11, is central to combating terrorism. F-16s are not useful tools against terrorism that issues from places such as Hamburg (where Mohamed Atta lived before dying in the North Tower of the World Trade Center) and High Wycombe, England.

Cooperation between Pakistani and British law enforcement (the British draw upon useful experience combating IRA terrorism) has validated John Kerry's belief (as paraphrased by the New York Times Magazine of Oct. 10, 2004) that "many of the interdiction tactics that cripple drug lords, including governments working jointly to share intelligence, patrol borders and force banks to identify suspicious customers, can also be some of the most useful tools in the war on terror." In a candidates' debate in South Carolina (Jan. 29, 2004), Kerry said that although the war on terror will be "occasionally military," it is "primarily an intelligence and law enforcement operation that requires cooperation around the world."

Immediately after the London plot was disrupted, a "senior administration official," insisting on anonymity for his or her splenetic words, denied the obvious, that Kerry had a point. The official told The Weekly Standard:

"The idea that the jihadists would all be peaceful, warm, lovable, God-fearing people if it weren't for U.S. policies strikes me as not a valid idea. [Democrats] do not have the understanding or the commitment to take on these forces. It's like John Kerry. The law enforcement approach doesn't work."

This farrago of caricature and non sequitur makes the administration seem eager to repel all but the delusional. But perhaps such rhetoric reflects the intellectual contortions required to sustain the illusion that the war in Iraq is central to the war on terrorism, and that the war, unlike "the law enforcement approach," does "work."

The official is correct that it is wrong "to think that somehow we are responsible -- that the actions of the jihadists are justified by U.S. policies." But few outside the fog of paranoia that is the blogosphere think like that. It is more dismaying that someone at the center of government considers it clever to talk like that. It is the language of foreign policy -- and domestic politics -- unrealism.

Foreign policy "realists" considered Middle East stability the goal. The realists' critics, who regard realism as reprehensibly unambitious, considered stability the problem. That problem has been solved.

georgewill@washpost.com

 

Too Hot or Too Cold at Work? Best Bet Is to Chill Out

By Shankar Vedantam
Washington Post Staff Writer
Monday, August 14, 2006; A02

Office managers are under siege. They know that if they set the temperature to 74, they hear from the woman in human resources who says it is too cold. If they turn it up to 76, they hear from the man in marketing who wants to know why it is sweltering hot.

It is summer, which means inside the supposed comfort of air-conditioned buildings, thousands of people are swearing that they are dying of heat, freezing to death or otherwise experiencing thermal discomfort.

These are not trivial wars: People have been known to bring their own thermometers to work to triumphantly prove to office managers that the temperature is not what has been advertised.

(This has prompted a certain caution when the topic of temperature comes up in the workplace: When I asked our office manager about the temperature in the newsroom and she called the folks in building engineering to find out, their instant response was a defensive, "What's wrong?")

On the home front, conflict usually rises with the mercury, as people in the same household fight over the thermostat setting. Marital compromises usually leave one party freezing and the other burning up at the same bedroom temperature. At a certain stage of our lives, apparently, we might be willing to concede failure on many fronts, but we are unyielding about what the temperature is and what it ought to be.

Sorry to burst your bubble. Psychological experiments show that people are not remotely as sensitive to the temperature as they think they are.

For one thing, why is it that the same temperature -- say, 78 degrees -- can feel right during the summer but too hot during the winter? And why is it that a person will cite different optimal temperatures when asked in the summer and the winter?

"There is a very large mental component to feeling hot," said the psychologist William C. Howell, who has conducted experiments about how accurate people are at telling what the temperature is and about when people feel comfortable.

The experiments do not mean people cannot tell the difference between 70 degrees and 110. Of course they can. But the experiments do indicate that for the kind of arguments people have all the time -- in which the range of temperature being argued about is often less than five degrees -- psychological factors play at least as large a role in determining comfort as the actual temperature.

In one experiment, Howell had two groups of volunteers describe how comfortable they were in a room. Then he called one group back a couple of days later, after he had raised the temperature by five degrees. He told the volunteers that he had lost their original answers, and quizzed them again about their perceptions of the temperature and their comfort.

With the second group, Howell held the temperature in the room steady but told the volunteers that it was warmer than on the first day. Again, he had them fill out questionnaires about perceived temperature and comfort.

Both groups reported exactly the same changes in perception of temperature and comfort; Howell's suggestion to the second group that it was warmer seems to have had the same effect as actually making the room warmer.

The psychologist thinks our perceptions of comfort and discomfort are at least partly determined by social cues. That annual staple of summer and winter media reports -- tales of unbearably hot summers and unbearably cold winters -- probably contributes to people's perceptions that it is uncomfortably hot or cold.

Howell has lived in both Arizona and Ohio. As people who move from warm places to cold climes and vice versa realize, human beings are capable of adapting to a very wide range of temperatures.

This is not to say that people ought to feel fine when it is zero or 100 degrees. Not everything is psychological. In fact, experiments show that people's ability to attend to a task involving detailed concentration declines after the temperature crosses 79 degrees. Another experiment, which called for sustained attention, found that as the temperature rose from 74 degrees to 82 degrees and then to 90 degrees, people grew more distractible.

"The take-home practical message as far as conservation is concerned is that one or two degrees nationwide could make a huge difference (in energy consumption) without having any substantial effect on comfort at all, if people were not locked into that mind-set," Howell said.

Restaurant owners think they can lose customers by keeping a room too hot but not by keeping it too cold, and they have long erred on the side of freezing. Again, Howell said, raising the thermostat by a few degrees in the summer probably will not hurt business -- and would probably please the people whose teeth chatter when they sit down to lunch.

Of course, many people think that only other people are affected by psychological cues, whereas they themselves are as reliable as thermometers.

Sure.

Tell your office manager.

Federal Pay: Myth and Realities

By Chris Edwards
Sunday, August 13, 2006; B07

We've often heard that civil servants forgo higher private-sector salaries in order to serve the nation selflessly. Many federal bureaucrats are indeed hardworking, but new statistics show that they are anything but underpaid.

The Bureau of Economic Analysis released data this month showing that the average compensation for the 1.8 million federal civilian workers in 2005 was $106,579 -- exactly twice the average compensation paid in the U.S. private sector: $53,289. If you consider wages without benefits, the average federal civilian worker earned $71,114, 62 percent more than the average private-sector worker, who made $43,917.
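
A quick check of the ratios behind those figures (my arithmetic on the numbers quoted above, not a separate BEA calculation):

$106{,}579 / 53{,}289 \approx 2.0$ for total compensation, and $71{,}114 / 43{,}917 \approx 1.62$ for wages alone, i.e., about 62 percent more.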

The high level of federal pay is problematic in and of itself, but so is its rapid growth. Since 1990 average compensation for federal workers has increased by 129 percent, the BEA data show, compared with 74 percent for private-sector workers.

Why is federal compensation growing so quickly? For one thing, federal pay schedules increase every year regardless of how well the economy is doing. Thus in recession years, private pay stagnates while government pay continues to rise. Another factor is the steadily increasing "locality" payments given to federal workers in higher-cost cities.

Rapid growth in federal pay also results from regular promotions that move workers into higher salary brackets regardless of performance and from redefining jobs upward into higher pay ranges. The federal workforce has become increasingly top-heavy.

The structure of that workforce has also changed over time. There are fewer low-pay typists and more high-pay computer experts in the government today than there were a generation ago. But that doesn't explain why, as the BEA data show, federal wages have risen 38 percent in just the past five years, compared with 14 percent in the private sector.

Whatever the reasons, the federal civilian workforce has become an elite island of secure and highly paid workers, separated from the ocean of private-sector American workers who must compete in today's dynamic economy.

Federal workers' unions try to convince Congress that their members suffer from a "pay gap" with the private sector. They point to studies showing that in similar jobs, federal workers are paid less than they would be in large private companies. But such studies typically look only at wages and don't consider the superior benefits enjoyed by federal workers.

Federal workers receive generous health benefits during work and retirement, a pension plan with inflation protection, a retirement savings plan with generous matching contributions, large disability benefits, and union protections. They often have generous holiday and vacation schedules, flexible hours, training options, incentive awards, flexible spending accounts, and a more relaxed pace of work than private-sector workers.

Perhaps the most important benefit of federal employment is extreme job security. According to Bureau of Labor Statistics data, the rate of layoffs and firings in the federal workforce is just one-quarter the rate in the private sector. All these advantages in worker benefits suggest that, in comparable jobs, federal wages ought to be lower than private-sector wages.

One sign that federal workers have a sweeter deal than they acknowledge is the rate of voluntary resignation from government positions: just one-quarter the rate in the private sector, the BLS data show. Long job tenure has its pros and cons, but the fact that many federal workers burrow in and never leave suggests that they are doing pretty well for themselves.

Of course, particular federal jobs may be underpaid and others overpaid. The average annual compensation of federal air traffic controllers is $170,000, which certainly seems excessive. One way to determine proper pay levels objectively would be to privatize services and let the market decide what they're worth.

The Bush administration has tried to bring greater payroll flexibility to the federal government, but it has also presided over large pay increases. To get spending under control, Congress should consider trimming overly generous benefit packages and freezing federal wages for a few years. With federal civilian compensation costing about $200 billion a year, this area is ripe for reform.

Chris Edwards is tax director at the Cato Institute and author of "Downsizing the Federal Government."

August 15, 2006

Elusive Proof, Elusive Prover: A New Mathematical Mystery

By DENNIS OVERBYE

Grisha Perelman, where are you?

Three years ago, a Russian mathematician by the name of Grigory Perelman, a k a Grisha, in St. Petersburg, announced that he had solved a famous and intractable mathematical problem, known as the Poincaré conjecture, about the nature of space.

After posting a few short papers on the Internet and making a whirlwind lecture tour of the United States, Dr. Perelman disappeared back into the Russian woods in the spring of 2003, leaving the world’s mathematicians to pick up the pieces and decide if he was right.

Now they say they have finished his work, and the evidence is circulating among scholars in the form of three book-length papers with about 1,000 pages of dense mathematics and prose between them.

As a result there is a growing feeling, a cautious optimism that they have finally achieved a landmark not just of mathematics, but of human thought.

“It’s really a great moment in mathematics,” said Bruce Kleiner of Yale, who has spent the last three years helping to explicate Dr. Perelman’s work. “It could have happened 100 years from now, or never.”

In a speech at a conference in Beijing this summer, Shing-Tung Yau of Harvard said the understanding of three-dimensional space brought about by Poincaré’s conjecture could be one of the major pillars of math in the 21st century.

Quoting Poincaré himself, Dr. Yau said, “Thought is only a flash in the middle of a long night, but the flash that means everything.”

But at the moment of his putative triumph, Dr. Perelman is nowhere in sight. He is an odds-on favorite to win a Fields Medal, math’s version of the Nobel Prize, when the International Mathematical Union convenes in Madrid next Tuesday. But there is no indication whether he will show up.

Also left hanging, for now, is $1 million offered by the Clay Mathematics Institute in Cambridge, Mass., for the first published proof of the conjecture, one of seven outstanding questions for which they offered a ransom back at the beginning of the millennium.

“It’s very unusual in math that somebody announces a result this big and leaves it hanging,” said John Morgan of Columbia, one of the scholars who has also been filling in the details of Dr. Perelman’s work.

Mathematicians have been waiting for this result for more than 100 years, ever since the French polymath Henri Poincaré posed the problem in 1904. And they acknowledge that it may be another 100 years before its full implications for math and physics are understood. For now, they say, it is just beautiful, like art or a challenging new opera.

Dr. Morgan said the excitement came not from the final proof of the conjecture, which everybody felt was true, but the method, “finding deep connections between what were unrelated fields of mathematics.”

William Thurston of Cornell, the author of a deeper conjecture that includes Poincaré’s and that is now apparently proved, said, “Math is really about the human mind, about how people can think effectively, and why curiosity is quite a good guide,” explaining that curiosity is tied in some way with intuition.

“You don’t see what you’re seeing until you see it,” Dr. Thurston said, “but when you do see it, it lets you see many other things.”

Depending on who is talking, Poincaré’s conjecture can sound either daunting or deceptively simple. It asserts that if any loop in a certain kind of three-dimensional space can be shrunk to a point without ripping or tearing either the loop or the space, the space is equivalent to a sphere.

The conjecture is fundamental to topology, the branch of math that deals with shapes, sometimes described as geometry without the details. To a topologist, a sphere, a cigar and a rabbit’s head are all the same because they can be deformed into one another. Likewise, a coffee mug and a doughnut are also the same because each has one hole, but they are not equivalent to a sphere.

In effect, what Poincaré suggested was that anything without holes has to be a sphere. The one qualification was that this “anything” had to be what mathematicians call compact, or closed, meaning that it has a finite extent: no matter how far you strike out in one direction or another, you can get only so far away before you start coming back, the way you can never get more than 12,500 miles from home on the Earth.
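
In the standard formal phrasing, which the article paraphrases rather than states (conventional notation, not the article's own):

$$\text{Every closed, simply connected 3-manifold is homeomorphic to the 3-sphere } S^3.$$

“Closed” is the finite-extent, no-boundary condition described above, and “simply connected” is the requirement that every loop can be shrunk to a point.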

In the case of two dimensions, like the surface of a sphere or a doughnut, it is easy to see what Poincaré was talking about: imagine a rubber band stretched around an apple or a doughnut; on the apple, the rubber band can be shrunk without limit, but on the doughnut it is stopped by the hole.

With three dimensions, it is harder to discern the overall shape of something; we cannot see where the holes might be. “We can’t draw pictures of 3-D spaces,” Dr. Morgan said, explaining that when we envision the surface of a sphere or an apple, we are really seeing a two-dimensional object embedded in three dimensions. Indeed, astronomers are still arguing about the overall shape of the universe, wondering if its topology resembles a sphere, a bagel or something even more complicated.

Poincaré’s conjecture was subsequently generalized to any number of dimensions, but in fact the three-dimensional version has turned out to be the most difficult of all cases to prove. In 1960 Stephen Smale, now at the Toyota Technological Institute at Chicago, proved that it is true in five or more dimensions and was awarded a Fields Medal. In 1983, Michael Freedman, now at Microsoft, proved that it is true in four dimensions and also won a Fields.

“You get a Fields Medal for just getting close to this conjecture,” Dr. Morgan said.

In the late 1970’s, Dr. Thurston extended Poincaré’s conjecture, showing that it was only a special case of a more powerful and general conjecture about three-dimensional geometry, namely that any space can be decomposed into a few basic shapes.

Mathematicians had known since the time of Georg Friedrich Bernhard Riemann, in the 19th century, that in two dimensions there are only three possible shapes: flat like a sheet of paper, closed like a sphere, or curved uniformly in two opposite directions like a saddle or the flare of a trumpet. Dr. Thurston suggested that eight different shapes could be used to make up any three-dimensional space.

“Thurston’s conjecture almost leads to a list,” Dr. Morgan said. “If it is true,” he added, “Poincaré’s conjecture falls out immediately.” Dr. Thurston won a Fields in 1986.

Topologists have developed an elaborate set of tools to study and dissect shapes, including imaginary cutting and pasting, which they refer to as “surgery,” but they were not getting anywhere for a long time.

In the early 1980’s Richard Hamilton of Columbia suggested a new technique, called the Ricci flow, borrowed from the kind of mathematics that underlies Einstein’s general theory of relativity and string theory, to investigate the shapes of spaces.

Dr. Hamilton’s technique makes use of the fact that for any kind of geometric space there is a formula called the metric, which determines the distance between any pair of nearby points. Applied mathematically to this metric, the Ricci flow acts like heat, flowing through the space in question, smoothing and straightening all its bumps and curves to reveal its essential shape, the way a hair dryer shrink-wraps plastic.
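
In symbols, which the article omits, Hamilton's Ricci flow evolves the metric in direct analogy with the heat equation (standard notation; a sketch, not the presentation in his papers):

$$\frac{\partial g_{ij}}{\partial t} = -2\,R_{ij}, \qquad \text{compare} \qquad \frac{\partial u}{\partial t} = \Delta u,$$

where $g_{ij}$ is the metric and $R_{ij}$ its Ricci curvature. Just as heat flow evens out temperature, the flow tends to shrink regions of positive curvature and spread out regions of negative curvature, smoothing the space toward its essential shape.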

Dr. Hamilton succeeded in showing that certain generally round objects, like a head, would evolve into spheres under this process, but the fates of more complicated objects were problematic. As the Ricci flow progressed, kinks and neck pinches, places of infinite density known as singularities, could appear, pinch off and even shrink away. Topologists could cut them away, but there was no guarantee that new ones would not keep popping up forever.

“All sorts of things can potentially happen in the Ricci flow,” said Robert Greene, a mathematician at the University of California, Los Angeles. Nobody knew what to do with these things, so the result was a logjam.

It was Dr. Perelman who broke the logjam. He was able to show that the singularities were all friendly. They turned into spheres or tubes. Moreover, they did it in a finite time once the Ricci flow started. That meant topologists could, in their fashion, cut them off, and allow the Ricci process to continue to its end, revealing the topologically spherical essence of the space in question, and thus proving the conjectures of both Poincaré and Thurston.

Dr. Perelman’s first paper, promising “a sketch of an eclectic proof,” came as a bolt from the blue when it was posted on the Internet in November 2002. “Nobody knew he was working on the Poincaré conjecture,” said Michael T. Anderson of the State University of New York in Stony Brook.

Dr. Perelman had already established himself as a master of differential geometry, the study of curves and surfaces, which is essential to, among other things, relativity and string theory. Born in St. Petersburg in 1966, he distinguished himself as a high school student by winning a gold medal with a perfect score in the International Mathematical Olympiad in 1982. After getting a Ph.D. from St. Petersburg State, he joined the Steklov Institute of Mathematics at St. Petersburg.

In a series of postdoctoral fellowships in the United States in the early 1990’s, Dr. Perelman impressed his colleagues as “a kind of unworldly person,” in the words of Dr. Greene of U.C.L.A. — friendly, but shy and not interested in material wealth.

“He looked like Rasputin, with long hair and fingernails,” Dr. Greene said.

Asked about Dr. Perelman’s pleasures, Dr. Anderson said that he talked a lot about hiking in the woods near St. Petersburg looking for mushrooms.

Dr. Perelman returned to those woods, and the Steklov Institute, in 1995, spurning offers from Stanford and Princeton, among others. In 1996 he added to his legend by turning down a prize for young mathematicians from the European Mathematics Society.

Until his papers on Poincaré started appearing, some friends thought Dr. Perelman had left mathematics. Although they were so technical and abbreviated that few mathematicians could read them, they quickly attracted interest among experts. In the spring of 2003, Dr. Perelman came back to the United States to give a series of lectures at Stony Brook and the Massachusetts Institute of Technology, and also spoke at Columbia, New York University and Princeton.

But once he was back in St. Petersburg, he did not respond to further invitations. The e-mail gradually ceased.

“He came once, he explained things, and that was it,” Dr. Anderson said. “Anything else was superfluous.”

Recently, Dr. Perelman is said to have resigned from Steklov. E-mail messages addressed to him and to the Steklov Institute went unanswered.

In his absence, others have taken the lead in trying to verify and disseminate his work. Dr. Kleiner of Yale and John Lott of the University of Michigan have assembled a monograph annotating and explicating Dr. Perelman’s proof of the two conjectures.

Dr. Morgan of Columbia and Gang Tian of Princeton have followed Dr. Perelman’s prescription to produce a more detailed, 473-page, step-by-step proof of the Poincaré conjecture alone. “Perelman did all the work,” Dr. Morgan said. “This is just explaining it.”

Both works were supported by the Clay institute, which has posted them on its Web site, claymath.org. Meanwhile, Huai-Dong Cao of Lehigh University and Xi-Ping Zhu of Zhongshan University in Guangzhou, China, have published their own 318-page proof of both conjectures in The Asian Journal of Mathematics (www.ims.cuhk.edu.hk/).

Although these works were all hammered out in the midst of discussion and argument by experts, in workshops and lectures, they are about to receive even stricter scrutiny and perhaps crossfire. “Caution is appropriate,” said Dr. Kleiner, because the Poincaré conjecture is not just famous, but important.

James Carlson, president of the Clay Institute, said the appearance of these papers had started the clock ticking on a two-year waiting period mandated by the rules of the Clay Millennium Prize. After two years, he said, a committee will be appointed to recommend a winner or winners if it decides the proof has stood the test of time.

“There is nothing in the rules to prevent Perelman from receiving all or part of the prize,” Dr. Carlson said, adding that Dr. Perelman and Dr. Hamilton had obviously made the main contributions to the proof.

In a lecture at M.I.T. in 2003, Dr. Perelman described himself “in a way” as Dr. Hamilton’s disciple, although they had never worked together. Dr. Hamilton, who got his Ph.D. from Princeton in 1966, is too old to win the Fields medal, which is given only up to the age of 40, but he is slated to give the major address about the Poincaré conjecture in Madrid next week. He did not respond to requests for an interview.

Allowing that Dr. Perelman, should he win the Clay Prize, might refuse the honor, Dr. Carlson said the institute could decide instead to use award money to support Russian mathematicians, the Steklov Institute or even the Math Olympiad.

Dr. Anderson said that to some extent the new round of papers already represented a kind of peer review of Dr. Perelman’s work. “All these together make the case pretty clear,” he said. “The community accepts the validity of his work. It’s commendable that the community has gotten together.”

August 15, 2006

Essay

How to Make Sure Children Are Scientifically Illiterate

By LAWRENCE M. KRAUSS

Voters in Kansas ensured this month that noncreationist moderates will once again have a majority (6 to 4) on the state school board, keeping new standards inspired by intelligent design from taking effect.

This is a victory for public education and sends a message nationwide about the public’s ability to see through efforts by groups like the Discovery Institute to misrepresent science in the schools. But for those of us who are interested in improving science education, any celebration should be muted.

This is not the first turnaround in recent Kansas history. In 2000, after a creationist board had removed evolution from the state science curriculum, a public outcry led to wholesale removal of creationist board members up for re-election and a reinstatement of evolution in the curriculum.

In a later election, creationists once again won enough seats to get a 6-to-4 majority. With their changing political tactics, creationists are an excellent example of evolution at work. Creation science evolved into intelligent design, which morphed into “teaching the controversy,” and after its recent court loss in Dover, Pa., and political defeats in Ohio and Kansas, it will no doubt change again. The most recent campaign slogan I have heard is “creative evolution.”

But perhaps more worrisome than a political movement against science is plain old ignorance. The people determining the curriculum of our children in many states remain scientifically illiterate. And Kansas is a good case in point.

The chairman of the school board, Dr. Steve Abrams, a veterinarian, is not merely a strict creationist. He has openly stated that he believes that God created the universe 6,500 years ago, although he was quoted in The New York Times this month as saying that his personal faith “doesn’t have anything to do with science.”

“I can separate them,” he continued, adding, “My personal views of Scripture have no room in the science classroom.”

A key concern should not be whether Dr. Abrams’s religious views have a place in the classroom, but rather how someone whose religious views require a denial of essentially all modern scientific knowledge can be chairman of a state school board.

I have recently been criticized by some for strenuously objecting in print to what I believe are scientifically inappropriate attempts by some scientists to discredit the religious faith of others. However, the age of the earth, and the universe, is no more a matter of religious faith than is the question of whether or not the earth is flat.

It is a matter of overwhelming scientific evidence. To maintain a belief in a 6,000-year-old earth requires a denial of essentially all the results of modern physics, chemistry, astronomy, biology and geology. It is to imply that airplanes and automobiles work by divine magic, rather than by empirically testable laws.

Dr. Abrams has no choice but to separate his views from what is taught in science classes, because what he says he believes is inconsistent with the most fundamental facts the Kansas schools teach children.

Another member of the board, who unfortunately survived a primary challenge, is John Bacon. In spite of his name, Mr. Bacon is no friend of science. In a 1999 debate about the removal of evolution and the Big Bang from science standards, Mr. Bacon said he was baffled about the objections of scientists. “I can’t understand what they’re squealing about,” he is quoted as saying. “I wasn’t here, and neither were they.”

This again represents a remarkable misunderstanding of the nature of the scientific method. Many fields — including evolutionary biology, astronomy and physics — use evidence from the past in formulating hypotheses. But they do not stop there. Science is not storytelling.

These disciplines take hypotheses and subject them to further tests and experiments. This is how we distinguish theories that work, like evolution or gravitation, from those that don't.

As we continue to work to improve the abysmal state of science education in our schools, we will continue to battle those who feel that knowledge is a threat to faith.

But when we win minor skirmishes, as we did in Kansas, we must remember that the issue is far deeper than this. We must hold our elected school officials to certain basic standards of knowledge about the world. The battle is not against faith, but against ignorance.

Lawrence M. Krauss is a professor of physics and astronomy at Case Western Reserve University.

August 15, 2006

Editorial Observer

Has Bush v. Gore Become the Case That Must Not Be Named?

By ADAM COHEN

At a law school Supreme Court conference that I attended last fall, there was a panel on “The Rehnquist Court.” No one mentioned Bush v. Gore, the most historic case of William Rehnquist’s time as chief justice, and during the Q. and A. no one asked about it. When I asked a prominent law professor about this strange omission, he told me he had been invited to participate in another Rehnquist retrospective, and was told in advance that Bush v. Gore would not be discussed.

The ruling that stopped the Florida recount and handed the presidency to George W. Bush is disappearing down the legal world’s version of the memory hole, the slot where, in George Orwell’s “1984,” government workers disposed of politically inconvenient records. The Supreme Court has not cited it once since it was decided, and when Justice Antonin Scalia, who loves to hold forth on court precedents, was asked about it at a forum earlier this year, he snapped, “Come on, get over it.”

There is a legal argument for pushing Bush v. Gore aside. The majority opinion announced that the ruling was “limited to the present circumstances” and could not be cited as precedent. But many legal scholars insisted at the time that this assertion was itself dictum — the part of a legal opinion that is nonbinding — and illegitimate, because under the doctrine of stare decisis, courts cannot make rulings whose reasoning applies only to a single case.

Bush v. Gore’s lasting significance is being fought over right now by the Ohio-based United States Court of Appeals for the Sixth Circuit, whose judges disagree not only on what it stands for, but on whether it stands for anything at all. This debate, which has been quietly under way in the courts and academia since 2000, is important both because of what it says about the legitimacy of the courts and because of what Bush v. Gore could represent today. The majority reached its antidemocratic result by reading the equal protection clause in a very pro-democratic way. If Bush v. Gore’s equal protection analysis is integrated into constitutional law, it could make future elections considerably more fair.

The heart of Bush v. Gore’s analysis was its holding that the recount was unacceptable because the standards for vote counting varied from county to county. “Having once granted the right to vote on equal terms,” the court declared, “the state may not, by later arbitrary and disparate treatment, value one person’s vote over that of another.” If this equal protection principle is taken seriously, if it was not just a pretext to put a preferred candidate in the White House, it should mean that states cannot provide some voters better voting machines, shorter lines, or more lenient standards for when their provisional ballots get counted — precisely the system that exists across the country right now.

The first major judicial test of Bush v. Gore’s legacy came in California in 2003. The N.A.A.C.P., among others, argued that it violated equal protection to make nearly half the state’s voters use old punch-card machines, which, because of problems like dimpled chads, had a significantly higher error rate than more modern machines. A liberal three-judge panel of the United States Court of Appeals for the Ninth Circuit agreed. But that decision was quickly reconsidered en banc — that is, reheard by a larger group of judges on the same court — and reversed. The new panel dispensed with Bush v. Gore in three unilluminating sentences of analysis, clearly finding the whole subject distasteful.

The dispute in the Sixth Circuit is even sharper. Ohio voters are also challenging a disparity in voting machines, arguing that it violates what the plaintiffs’ lawyer, Daniel Tokaji, an Ohio State University law professor, calls Bush v. Gore’s “broad principle of equal dignity for each voter.” Two of the three judges who heard the case ruled that Ohio’s election system was unconstitutional. But the dissenting judge protested that “we should heed the Supreme Court’s own warning and limit the reach of Bush v. Gore to the peculiar and extraordinary facts of that case.”

The state of Ohio asked for a rehearing en banc, arguing that Bush v. Gore cannot be used as precedent, and the full Sixth Circuit granted the rehearing. It is likely that the panel decision applying Bush v. Gore to elections will, like the first California decision, soon be undone.

There are several problems with trying to airbrush Bush v. Gore from the law. It undermines the courts’ legitimacy when they depart sharply from the rules of precedent, and it gives support to those who have said that Bush v. Gore was not a legal decision but a raw assertion of power.

The courts should also stand by Bush v. Gore’s equal protection analysis for the simple reason that it was right (even if the remedy of stopping the recount was not). Elections that systematically make it less likely that some voters will get to cast a vote that is counted are a denial of equal protection of the law. The conservative justices may have been able to see this unfairness only when they looked at the problem from Mr. Bush’s perspective, but it is just as true when the N.A.A.C.P. and groups like it raise the objection.

There is a final reason Bush v. Gore should survive. In deciding cases, courts should be attentive not only to the Constitution and other laws, but to whether they are acting in ways that promote an overall sense of justice. The Supreme Court’s highly partisan resolution of the 2000 election was a severe blow to American democracy, and to the court’s own standing. The courts could start to undo the damage by deciding that, rather than disappearing down the memory hole, Bush v. Gore will stand for the principle that elections need to be as fair as we can possibly make them.

Americans May Be More Religious Than They Realize
Many Without Denomination Have Congregation, Study Finds

By Michelle Boorstein
Washington Post Staff Writer
Tuesday, September 12, 2006; A12

A survey released yesterday posits the idea that the United States -- already one of the most religious nations in the developed world -- may be even less secular than previously suspected.

The Baylor University survey looked carefully at people who checked "none" when asked their religion in polls. Sociologists have watched this group closely since 1990, when their numbers doubled, from 7 percent of the population to 14 percent. Some sociologists said the jump reflects increasing secularization at the same time that American society is becoming more religious.

But the Baylor survey, considered one of the most detailed ever conducted about religion in the United States, found that one in 10 people who picked "no religion" out of 40 choices did something interesting when asked later where they worship: They named a place.

Considering that, Baylor researchers say, the percentage of people who are truly unaffiliated is more like 10.8 percent. The difference between 10.8 percent and 14 percent is about 10 million Americans.
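
The 10 million figure is straightforward arithmetic against the total U.S. population, roughly 300 million in 2006 (the population figure is an outside approximation, not part of the Baylor survey):

$(14\% - 10.8\%) \times 300 \text{ million} \approx 9.6 \text{ million people}$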

"People might not have a denomination, but they have a congregation. They have a sense of religious connection that is formative to who they are," said Kevin D. Dougherty, a sociologist at Baylor's Institute for Studies of Religion and one of the survey's authors. Baylor is a leading Baptist university, located in Waco, Tex.

The finding reflects the new challenges involved in trying to categorize religiosity in the United States, where people increasingly blend religions, shop for churches and worship in independent communities. Classic labels such as mainline, evangelical and unaffiliated no longer have the same meaning.

For example, 33 percent of Americans worship at evangelical congregations, which sociologists say are places that espouse an inerrant Bible, the importance of evangelizing and the requirement of having a personal relationship with Jesus. But only 15 percent of respondents to the Baylor survey said the term "evangelical" describes their religious identity.

Scholars have been saying for some time that the relevance of denomination is decreasing. But the Baylor survey, which asks about such subjects as God's "personality" and what people pray about, adds to a debate about what that means. It reveals the complex ways Americans describe their religiosity, and the minefield for today's scholars in trying to measure it. Is someone religious if they attend church? If they believe in God? If they identify with a particular religious group? What if they do one but not the others? Which gets more weight?

Academics who study religious demographics disagree about the "nones," and the Baylor study won't end that debate. Some say they are mostly secular -- those who aren't atheist but don't consider religion important. Some say they are in interfaith families and have mixed identities.

Some say they are new immigrants, including many from China, and second-generation Hispanics.

One thing the experts agree on: "Nones" tend to vote liberal but tend not to identify with a political party.

"What is most associated with 'no religion' from a political point of view is independence," said Barry Kosmin, principal investigator of a telephone survey that queried tens of thousands of respondents. His American Religious Identification Survey found that the number of "no religion" Americans jumped from 14.3 million in 1990 to 29.4 million in 2001. "If you don't belong religiously, you don't belong politically," he said.

Among the most innovative aspects of the Baylor survey, say scholars who know about it, are questions about how Americans describe God's personality. Respondents were offered 26 attributes ranging from "absolute" to "wrathful," and were asked whether God is directly involved in and angered by their lives and what happens in the world.

The researchers separated God's attributes into four categories: wrathful, involved, benevolent and uninvolved. They found that the largest category of people -- 31 percent -- was made up of those who said they believe God is both wrathful and highly involved in human affairs.

Beliefs about God's personality are powerful predictors, according to the survey. Those who considered God engaged and punishing were likely to have lower incomes and less education, to come from the South and to be white evangelicals or black Protestants. Those who believed God to be distant and nonjudgmental were more likely to support increased business regulation, environmental protection and the even distribution of wealth.

The changing demographics of the United States demand different polls as well, religion pollsters say. For example, approximately 3 percent of Americans observe faiths other than Christianity and Judaism. While still small, this group is growing rapidly, and scholars say that if current trends continue, that number could reach 10 percent in coming decades.

According to Democratic pollster Anna Greenberg, who focuses on religion, that is already the figure for Americans younger than 25.

Questions about the frequency of attending religious services aren't as relevant to Hindus and Buddhists, who often have worship spaces in their homes. Questions about weekly prayer services aren't as relevant to Muslims, who pray five times a day, she said.

Muslim Candidate Plays Defense
Lead Shrinks as Minnesota Democrat Repudiates Association With Farrakhan

By Alan Cooperman
Washington Post Staff Writer
Monday, September 11, 2006; A03

MINNEAPOLIS -- Keith Ellison is a Democrat running for an open House seat in a heavily Democratic district. But what once looked like a cakewalk has turned into a bruising campaign in which many facts are disputed but a central one is not: If he wins, he will be the first Muslim elected to Congress.

Before he can make history, Ellison must capture Tuesday's hotly contested Democratic primary in Minnesota's 5th Congressional District, which consists of the Minneapolis side of the Twin Cities and an inner ring of suburbs. Whoever gets the Democratic nomination is expected to sweep to victory in November to succeed Rep. Martin O. Sabo (D), who is retiring after 28 years in the House.

Ellison, 43, is a two-term state legislator. He prays toward Mecca five times a day and says he has not eaten pork or had a drink of alcohol since he converted to Islam as a 19-year-old student at Wayne State University in Detroit. When speaking at mosques or to members of Minneapolis's large Somali immigrant population, he opens with "Salaam aleikum," Arabic for "Peace be with you."

Other than that, he seldom refers to his religion on the campaign trail, unless asked.

"I'm a Muslim. I'm proud to be a Muslim. But I'm not running as a Muslim candidate," Ellison said during a break between a commemoration of Hurricane Katrina and an appearance at a public housing project. "I'm running as a candidate who believes in peace and bringing the troops out of Iraq now. I'm running as a candidate who believes in universal, single-payer health care coverage and an increase in the minimum wage."

Despite Ellison's desire to focus on the war and the economy, questions about his faith and character have kept him on the defensive.

The most damaging accusations, says Christopher Gilbert, professor of political science at Gustavus Adolphus College in St. Peter, Minn., concern Ellison's past associations with the Nation of Islam and its leader, Louis Farrakhan.

Although four Democrats are seeking the nomination, Ellison became the candidate to beat in May, when the state's Democratic-Farmer-Labor organization endorsed him. Within days, Michael Brodkorb, author of a Republican blog called MinnesotaDemocratsExposed.com, dug up two articles that Ellison had written under the name of Keith Hakim for the University of Minnesota student newspaper when he was in law school there in 1989 and 1990.

The first article defended Farrakhan against accusations of anti-Semitism. The second called affirmative action a "sneaky" form of compensation for slavery, suggesting instead that white Americans pay reparations to blacks.

Another conservative blog, PowerLineBlog.com, subsequently revealed that the candidate had used the names Keith X Ellison and Keith Ellison-Muhammed during his student days. In more than 20 Web postings titled "Who Is Keith Ellison?" PowerLine asserted that he had been a "local leader" of the Nation of Islam and accused him of "involvement" in anti-Semitism.

Badly stung, Ellison responded quickly. He met privately with key Jewish supporters, spoke publicly at a synagogue in the suburb of St. Louis Park and repudiated Farrakhan in a May 28 letter to the Jewish Community Relations Council in Minneapolis.

While denying that he had ever joined -- much less led -- the Nation of Islam, he acknowledged that he had worked with the group for about 18 months to organize the Minnesota contingent to Farrakhan's 1995 Million Man March in Washington.

In the letter to the council, he apologized for failing to "adequately scrutinize the positions" of Farrakhan and other Nation of Islam leaders. "They were and are anti-Semitic, and I should have come to that conclusion earlier than I did."

In interviews on the campaign trail last week, Ellison said his attraction to Islam in the 1980s "had a political angle to it, a reaction against status quo politics."

But he said he has stayed a Muslim, and grown in his faith, while his political outlook has moderated since he began practicing law, serving in the state legislature and raising four children with his wife, Kim, a high school math teacher who has multiple sclerosis.

When he was one of three blacks among 265 members of the University of Minnesota Law School's class of 1990, he said, "my perspective was a tunnel vision; I was mostly concerned about the welfare of the African American community."

"That was the era of [Spike Lee's film] 'Do the Right Thing,' " he continued. "Remember that? People had their black, yellow and red kufi caps on. There was higher African American consciousness. . . ."

Even in those days, Ellison added, "I never said anything that was anti-Semitic, racist, homophobic in any way." But, he said, he was slow to judge those who did.

"I chalked it up to typical mainstream press attacking African American leadership," he said. "When you're African American, there's literally no leader who is not beat up by the press. . . .

"The change of heart I had is, I did start to look more closely, and I feel that African Americans, having been victims of slavery and Jim Crow, can never justify doing the same thing to anyone else; wrong is wrong everywhere," he said.

Based on such assurances, Jewish Democratic activists have rallied around Ellison. Samuel and Sylvia Kaplan, a Minneapolis couple who are influential fundraisers, said he reminds them of the late senator Paul Wellstone (D-Minn.). Phyllis Kahn, a fellow Democrat in the state legislature, said it is "inconceivable that he could have ever been an anti-Semite."

Mordecai Specktor, editor and publisher of the American Jewish World, Minnesota's Jewish weekly, strongly endorsed Ellison in a Sept. 1 editorial. "His association with the Million Man March -- there are some people in the Jewish community who cannot forgive him for that," Specktor said. "I decided that he had a sincere change of heart and mind."

Among Muslims, Ellison's campaign has generated excitement.

"There are millions of Muslims in this country. It shouldn't have taken this long to elect one to Congress," said Nimco Ahmed, 24, a Somali immigrant and political organizer.

Nihad Awad, executive director of the Washington-based Council on American-Islamic Relations, flew to Minneapolis for an Aug. 25 fundraiser for Ellison, who has collected about $400,000, mostly from individual contributors in his district. Awad said that the attacks of Sept. 11, 2001, have both heightened prejudice against Muslims and spurred Muslims to be more politically active in hopes of countering that prejudice.

According to CAIR and other Muslim groups, Ellison would be the first Muslim elected to national office. Awad said the highest Muslim elected official now is a state senator in North Carolina, Larry Shaw, and the last Muslim to make a serious bid for Congress was Ferial Masry, a Saudi-born woman who lost in California in 2004.

Ellison's Democratic opponents are Ember Reichgott Junge, a former state senator who is backed by Emily's List and other women's groups; Mike Erlandson, who was Sabo's longtime chief of staff and has the retiring congressman's support; and Paul Ostrow, a Minneapolis City Council member.

For the most part, they have refrained from mentioning Ellison's religion or attacking him directly, though Ostrow's campaign manager resigned two weeks ago after admitting that he was the source of anonymous e-mails to reporters accusing Ellison of campaign finance violations.

In mid-summer, Ellison was hit by allegations involving unpaid traffic and parking tickets, late payment of some taxes in the 1990s, failure to meet deadlines for financial reports in past election campaigns, and his defense of a gang leader while he was running the Legal Rights Center, a nonprofit law office.

Ellison acknowledged last week that his driver's license had been suspended earlier in the year for failure to pay fines. He said he defended a leader of the Vice Lords gang, Sharif Willis, because Willis was working with local police to broker a gang peace. And he said he was now up-to-date on tickets, taxes and financial filings.

"When will the story stop being Farrakhan or traffic tickets?" he said as he headed toward his next campaign appearance. Then he stopped, as if a thought had just come to him.

"I know this is just a taste of what it will be like if I win," he said.

 

medical examiner
Killer T-Cells
The promise of using genetic engineering to treat cancer.
By Sydney Spiesel
Posted Monday, Sept. 11, 2006, at 12:34 PM ET

Once a week, it seems, there is news of a breakthrough in cancer treatment. But the announcement 10 days ago that scientists have succeeded in using gene therapy to treat malignant melanoma merits special interest. It involves not an advance in a standard treatment but a new method for fighting cancer—the engineering of the body's own tumor-fighting cells to specifically target malignant ones. This may be the start of a new era in cancer treatment: Genetically modifying a patient's own immune defense cells to fight tumors could be more effective and less invasive than the chemotherapy, radiation, and surgery we now depend on.

The laboratory of Dr. Steven A. Rosenberg of the National Cancer Institute, from which the new report comes, focused on cases of malignant melanoma (skin cancer) that had failed conventional treatment and spread widely. The lab used its experimental method to treat 17 patients. Of these, 15 showed little or no improvement, but two (about 12 percent) were alive and doing well a year and a half later.

For plenty of kinds of cancer—many lymphomas and leukemias, for example—long-term survival and even complete remission are common after treatment. So what's the big deal about a 12 percent success rate? The answer is that malignant melanoma, which usually originates in the pigment-producing cells of the skin, is an especially dangerous form of cancer. It's often aggressive, invasive, hard to control, and unresponsive to chemotherapy. And once malignant melanoma has metastasized, with cells from the primary tumor seeding themselves elsewhere in the body to grow secondary tumors, the prognosis becomes truly dismal. A patient with several distant metastases has only a 1 percent or 2 percent likelihood on average of living for more than a year, even with the most vigorous treatment. Since the patients in Rosenberg's study had already failed conventional treatment, their prospects were probably still poorer. So, his lab's achievement of a 12 percent survival rate at about 18 months seems remarkable, indeed.

Random events and mutations cause rare cells in each of us to become malignant. It's the job of our powerful immunological surveillance system to detect and eliminate these aberrant cells before they can multiply. Cancer occurs when this surveillance system fails. Rosenberg's innovation is to modify the body's own cells to attack the aberrant cells once they've multiplied.

To recognize and destroy abnormal body cells, the body makes use of a special kind of white cell—the killer T-cell. Rosenberg and his colleagues cleverly began harnessing these cells in a previous 2005 study of 35 patients with malignant melanoma. For this research, the team collected samples of the patients' immune-system cells and picked out the killer T-cells whose recognition site matched abnormal molecules (called antigens) on the surface of the melanoma cells. The researchers stimulated these T-cells to multiply like crazy in the lab. They were then injected into the patient after a brief blast of standard chemotherapy drugs made room for them to multiply by depleting the patient's white blood cells. Because the killer T-cells came from the patient, his or her body would not reject them, and the killer cells would not attack the patient's normal tissues.

In the 2005 study, half the treated patients showed clear improvement with reductions in tumor size, and three patients (9 percent) went into complete remission. That was a start. But Rosenberg's initial method was not easily applied outside of an experimental setting. It is difficult and laborious (and sometimes impossible) to find the appropriate killer T-cells and stimulate them to multiply in the laboratory. And it is never certain that the best attack cells and target antigens have been chosen.

Which leads to the research just announced: In Rosenberg's latest study, a patient's own white cells (collected from his or her blood) were genetically engineered to produce the killer T-cells that targeted the melanoma cells. When the engineered cells were injected back into the patient, they performed like normal killer T-cells: They multiplied in the patients and (when things worked well) mounted a targeted lethal attack on the tumor cells, with virtually no side effects.

It's a stunning advance because these were made-to-order killer T-cells, directed precisely at a target—the tumor. If ongoing studies teach us how to improve the success rate of this method, we may soon be able to use it to treat cancer without depending on the good luck of finding the right kind of killer T-cell in a patient's immune system. Instead, we could examine a patient's tumor, test to see which antigens are found on the tumor cells, and then construct the killer T-cells that are needed to go after it. Rosenberg's method of genetic engineering could be directed not just at melanoma, but at virtually any kind of cancer. And, at least in principle, it could generate killer T-cells to attack cells infected with the viruses that cause AIDS and other chronic, lingering infections, like some forms of hepatitis.

Are there tumors that can fend off the killer T-cells? Can we clone the right genes to manufacture the T-cells needed to attack every patient's tumor? Can we make the process more efficient so that a larger percentage of patients will be successfully treated? We don't know the answers to these questions yet. And while this method seems safe so far, it has been tested on only 17 people.

These uncertainties notwithstanding, there is, of course, great desire to develop improved cancer treatments. Cancer chemotherapy, introduced nearly 50 years ago, has been largely based on the use of cell poisons. The chemicals are chosen because they're more toxic to tumor cells than to normal tissue, but none has precisely targeted malignant cells. And there's a familiar downside: Normal cells are also damaged by these chemicals, and the threat this poses can prevent treatment that's aggressive enough to kill the cancer.

The last few years have brought several advances. Drugs like tamoxifen and Erbitux block access to certain tumor cell surface receptors and thus prevent the cell from being stimulated to grow and multiply. And drugs like Gleevec block enzyme systems that certain tumor cells need to function but normal tissues don't need. Still other drugs (thalidomide and Avastin, for example, and others coming along) block the development of the supply of new blood vessels that cancers depend on. All of these new treatments are useful and most are less toxic than the old-line drugs. But most seem limited in the kinds of cancer they can treat and often need to be given in concert with the older drugs.

Rosenberg's method offers a way to target tumor cells that spares normal cells. Any number of difficulties could yet emerge. But so far, the potential seems great—ultimately, this could change the way we think about treating cancer.



sidebar

These killer T-cells—sometimes called CD-8—belong to the lymphocyte group of white cells. They are genetically determined to be specific to the patient from whom they come. Cells from one person can almost never be used to treat another person, with the exception of an identical twin. In addition, these killer cells of the immune system target a single antigen molecule embedded in the surface of a body cell. When they identify and kill the antigen-bearing cell, they protect us from cancer. When they target a virus-infected cell, they destroy the infected cell before the virus it contains can reproduce, thus cutting off the progress of the viral infection.



sidebar

Rosenberg and his group wanted to know whether they could take the patient's normal, undirected cells and treat them in a way that would produce the specific killer T-cells needed to destroy a tumor. They began by going back to the most successful killer cells in the earlier experiment. These were cells that recognized a cancer antigen called MART-1. They then isolated the T-cell gene that controlled the MART-1 recognition site. Next, they created an artificial virus that carried the cloned anti-MART-1 gene and used this virus to install the gene in lymphocytes taken from the patient's blood. This changed the patient's white blood cells into killer T-cells directed exclusively at the MART-1 cancer antigen found on the surface of malignant melanoma cells. Finally, Rosenberg's team used some laboratory tricks learned earlier to pump up the number of these anti-MART-1 killer cells and then inject them back into the patient.



sidebar

We usually think that it is very difficult for the immune defenses to control HIV, the virus that causes AIDS, because the virus suppresses the immune response. However, recent research suggests that early in an infection, a vigorous killer T-cell immune response keeps HIV in check. The problem is that this response weakens as time goes on because the immune cells become "exhausted." This killer T-cell "exhaustion" (actually down-regulation, in the language of modern biology) is a side effect of a normal mechanism that protects against autoimmune disease. There are ways to re-energize exhausted killer cells, but these may pose risks to the patient. Rosenberg's work, however, points to a possible way to generate large numbers of highly active killer T-cells aimed only at targets infected with HIV, leaving normal tissues and organs unharmed.

Sydney Spiesel is a pediatrician in Woodbridge, Conn., and associate clinical professor of pediatrics at Yale University's School of Medicine.

 

moneybox
Sept. 11's Financial Heroes
And why they have flopped since then.
By Daniel Gross
Posted Monday, Sept. 11, 2006, at 4:44 PM ET

Though it's scarcely been mentioned in the fifth-anniversary commemorations, the 9/11 attack was a significant economic event. The World Trade Center may not have been home to many huge corporate headquarters or trading floors, but it was an important financial symbol. The affected area surrounding the complex was—and is—home to key global financial institutions: Merrill Lynch and Dow Jones, the Federal Reserve and the New York Stock Exchange. And, most of all, the attacks seemed designed to sap confidence from the entire U.S. economy.

Thousands of individuals at hundreds of companies worked heroically, and under extremely difficult conditions, to get downtown New York up and running. Companies throughout the region devoted resources to helping competitors and colleagues recover. And from Washington to Detroit, leaders of important institutions took extraordinary, sometimes self-abnegating steps to help jolt the economy and the nation out of a state of shock. In the weeks after Sept. 11, heroes emerged among the financial and executive first-responders. But what's interesting is how poorly those business heroes have fared ever since.

Richard Grasso, chairman of the New York Stock Exchange, was perhaps the most visible financial first-responder. The NYSE, which stood at the literal, symbolic, and geographic heart of the nation's—and the world's—financial system, closed for trading for four days, its longest closure since 1933. But as the Trade Center site smoldered, Grasso led a round-the-clock effort to get the exchange up and running—and to allow the world's financial markets to price in the shattering events. The re-opening of the NYSE on the morning of Sept. 17, with firemen and police officers ringing the opening bell, showed that the effort to destroy New York's financial center hadn't succeeded. The trading—hectic and negative, but orderly, given the circumstances—demonstrated the capital markets' resilience. Grasso was justly lionized for his role. But his status didn't last long. Exactly two years later, on Sept. 17, 2003, Grasso resigned after questions were raised about the enormous compensation he earned for running a not-for-profit organization. He's still fighting a 2004 lawsuit filed by New York Attorney General Eliot Spitzer.

Alan Greenspan, then at the zenith of his powers as chairman of the Federal Reserve Board of Governors, was another financial first-responder. On Sept. 17—the day the markets reopened—he cut the federal funds rate by 50 basis points, from 3.5 percent to 3 percent. He followed with two more 50-basis-point cuts on Oct. 2 and Nov. 6. By mid-December 2001, the rate stood at 1.75 percent, half the pre-9/11 rate. The rapid-fire rate cuts helped shock the economy, which had slipped into recession earlier in the year, back to life. Falling rates allowed companies and consumers to refinance their debt, freeing up cash to spend and invest.

But Greenspan may have overstayed his welcome, as a Federal Reserve chairman and as an interest-rate slasher. In the years after 9/11, Greenspan cut rates further, then held them at emergency levels, unleashing a tidal wave of liquidity into the global economic system. Now Greenspan is less likely to be remembered for his 9/11 savvy than for encouraging a real-estate bubble in the current decade, for suggesting people switch into adjustable-rate mortgages at precisely the wrong time, and for helping create the conditions that have stimulated inflation, which his successor is struggling to control.

Greenspan's interest-rate cuts helped make possible the most significant business response to 9/11: the introduction of zero-percent financing by U.S. automakers. Spurred by a mixture of patriotism and a desire to clear inventory, General Motors—and then Ford and other manufacturers—launched the Keep America Rolling program. It was a brilliant, seemingly selfless move. Lured by the free money, millions of Americans rushed to buy American cars at zero-percent interest. But the program had some pernicious long-term effects. Consumers became conditioned to the availability of financing gimmicks, rebates, and incentives, which has killed margins at GM and Ford. And the efforts didn't succeed in stopping the firms' continuing slide in market share. The result: this depressing five-year chart of General Motors and Ford compared with the Standard & Poor's 500. Ultimately, the corporate bosses who helped keep America rolling have seen their careers sputter. Ford CEO William Clay Ford Jr. last week essentially fired himself. GM CEO Rick Wagoner still has his job, but since last year has been under assault from shareholder agitator Kirk Kerkorian.

Oddly, the business people who have done the best since 9/11 are some of the hardest-hit victims. The investment bank Cantor Fitzgerald, whose headquarters were on the upper floors of the North Tower, lost 658 employees; no other institution was hit so hard. Today, Cantor Fitzgerald is larger than it was before the attacks. The investment bank Sandler O'Neill lost 66 employees, more than a third of its total, including two of the top three executives. This weekend, Joe Nocera of the New York Times wrote a poignant article ($ required) about the transformed company, which now employs 25 percent more people than it did on 9/11.

Daniel Gross (www.danielgross.net) writes Slate's "Moneybox" column. You can e-mail him at moneybox@slate.com.

Article URL: http://www.slate.com/id/2149358/

 

moneybox
Why Tom Cruise Really Got Fired
Sumner Redstone's bizarre envy of middle-aged men.
By Daniel Gross
Posted Friday, Sept. 8, 2006, at 12:11 PM ET

These days, Sumner Redstone, the 83-year-old chairman of Viacom and CBS, seems less like a billionaire media mogul and more like the Izzy Mandelbaum character Lloyd Bridges played on Seinfeld—a crusty, vain, ultrafit eightysomething guy continually jeering younger stars for being flabby girlie-men. "I work out every day," Redstone told the New York Times earlier this week. "Do you know any studs who work out 70 minutes a day?"

Redstone, who recently married a woman half his age, puts great stock in his virility. And in recent years, he has periodically released his excess testosterone by firing good-looking, middle-aged bucks who have gotten too big for their britches. Last month, it was megastar Tom Cruise. Earlier this week, he cashiered Tom Freston, the CEO of Viacom and one of the founders of MTV.

Is Redstone's need to fire younger guys merely rational decision-making by a canny CEO? Or does it reflect some bizarre reverse-Oedipal envy? Read the obituaries and decide for yourself!

Media stud: Frank Biondi
Title/position: President and CEO of Viacom
Year fired: 1996
Ostensible reason: Low-key, calm Biondi wasn't sufficiently aggressive to lead an entrepreneurial company in the age of media convergence
Real reason: Redstone resented the attention Biondi received as a member of the establishment in the age of media convergence
What he's doing now: Private equity investor, serial joiner of corporate boards

Media stud: Brent Redstone
Title/position: Son, heir, former board member of Viacom
Year fired: Removed from Viacom's board in 2003
Ostensible reason: Shareholders said there were too many insiders on Viacom's board
Real reason: Sided with his mother, Phyllis Redstone, in bitter divorce fight
What he's doing now: Suing his dad

Media stud: Mel Karmazin
Title/position: President and COO of Viacom
Year fired: 2004 (technically, Karmazin resigned)
Ostensible reason: Redstone and Karmazin clashed over Karmazin's penny-pinching, bean-counting, and reluctance to take big financial risks
Real reason: Analysts and investors liked Karmazin's penny-pinching, bean-counting, and reluctance to take big financial risks and were hoping he'd eventually take over from Redstone
What he's doing now: CEO of Sirius Satellite Radio

Media stud: Tom Cruise
Title/position: Movie star, producer, expert on the history of mental illness, Scientologist
Year fired: 2006
Ostensible reasons: Strange behavior alienated female fans; his movies cost too much
Real reason: Sweet deal Cruise negotiated called for him to get 30 percent of gross; he angered Mrs. Redstone; more money for TomKat and their daughter Suri means less money for Sumner and his daughter Shari
What he's doing now: Charming hedge funds into funding his projects

Media stud: Tom Freston
Title/position: CEO of Viacom
Year fired: 2006
Ostensible reasons: MTV no longer quite so cool, got beat out by Rupert Murdoch for purchase of MySpace.com
Real reason: The move deflects attention from the Hollywood storm created when Redstone summarily fired Tom Cruise
What he's doing now: Counting his massive ($60 million in cash) severance package, talking to Ken Auletta
Bonus exotic locale of firing: In Redstone's Beverly Hills mansion, more specifically, "in his library, amid an enormous collection of rare saltwater fish," according to Geraldine Fabrikant and Bill Carter

Daniel Gross (www.danielgross.net) writes Slate's "Moneybox" column. You can e-mail him at moneybox@slate.com.

 

September 11, 2006

Medicare Costs to Increase for Wealthier Beneficiaries

By ROBERT PEAR

WASHINGTON, Sept. 9 — Higher-income people will have to pay higher Medicare premiums than other beneficiaries next year, as the government takes a small but significant step to help the financially ailing program remain viable over the long term.

The surcharge is a major departure from the traditional arrangement under which seniors have generally paid the same premium.

It is expected to affect one million to two million beneficiaries: individuals with incomes exceeding $80,000 and married couples with more than $160,000 of income. For individuals with incomes over $200,000, the premium, now $88.50 a month, is expected to quadruple by 2009.

The surcharge was established under a little-noticed provision of the 2003 law that added a prescription drug benefit to Medicare.

Supporters of the surcharge say it makes sense for wealthy people to pay more at a time when Medicare costs are soaring. But some Medicare experts worry that wealthy retirees will abandon the program and rely on private insurance instead, leaving poorer, sicker people in Medicare.

The premium in question is for Part B of Medicare, a voluntary program that covers doctors’ services, diagnostic tests and outpatient hospital care.

“The higher premiums could drive people with higher incomes out of Medicare,” said Samuel M. Goodman, a 73-year-old retiree in Derwood, Md. “Medicare would then become a welfare program, rather than a universal social insurance program, and it would be easier to attack.”

Federal officials have repeatedly said that Medicare is financially unsustainable in its current form.

Congress said the surcharge would “begin to address fiscal challenges facing the program.”

Joanne S. Shulman, who worked at the Social Security Administration for 35 years, said, “The surcharge will come as a shock to many people because they have not received any warning.”

Most beneficiaries now pay the same premium for Part B of Medicare. That amount has been increasing rapidly even without a surcharge. The standard premium has shot up an average of 12 percent a year since 2001, when it was $50 a month.

The premium is set each year to cover about 25 percent of projected spending under Part B of Medicare, which has been growing because of increases in the number and complexity of doctors’ services. General tax revenues pay 75 percent of the cost.

The Bush administration plans to announce the standard premium for 2007 later this month. In July, Medicare officials estimated that it would be $98.40 a month. The surcharge will be phased in from 2007 to 2009.

Here is how it will work: The surcharge for 2007 will be computed by the Social Security Administration, using income data obtained by the Internal Revenue Service from tax returns for 2005. If an individual has modified adjusted gross income of $80,000 to $100,000, the surcharge will be 13.3 percent, which adds about $13 to the monthly premium, for a total of about $111.50. For a single person with income of more than $200,000, the surcharge will be 73.3 percent, or about $72 a month, for a total premium of about $170.50.

When the transition is complete in January 2009, according to Medicare actuaries, the total premium for a person with income of $80,000 to $100,000 will be 1.4 times the standard premium. A person with income of $100,000 to $150,000 will pay twice the standard premium. A person with income of $150,000 to $200,000 will pay 2.6 times the standard premium, and a beneficiary with more than $200,000 of income will pay 3.2 times the standard amount.

If the basic premium rises 10 percent a year — a relatively conservative forecast — the most affluent beneficiaries will be paying premiums of more than $375 a month in 2009.
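
The arithmetic above can be checked with a short, illustrative sketch (Python, written for this summary rather than taken from any official Social Security Administration calculator). It assumes the $98.40 standard-premium estimate quoted above, models only the 2007 surcharge rates and the fully phased-in 2009 top-bracket multiplier that the article gives, and treats the 10-percent-a-year growth rate as the article's own conservative forecast; the names and rounding are illustrative assumptions.

STANDARD_PREMIUM_2007 = 98.40  # Medicare officials' July estimate, in dollars per month


def monthly_premium_2007(income):
    """Estimated 2007 monthly Part B premium for a single filer.

    Only the two 2007 surcharge rates quoted in the article are modeled;
    the article does not give 2007 rates for incomes between $100,000
    and $200,000, so those inputs are rejected.
    """
    if income > 200_000:
        surcharge = 0.733   # 73.3 percent surcharge
    elif 80_000 < income <= 100_000:
        surcharge = 0.133   # 13.3 percent surcharge
    elif income <= 80_000:
        surcharge = 0.0     # no surcharge below the $80,000 threshold
    else:
        raise ValueError("2007 rate for this income range is not given in the article")
    return round(STANDARD_PREMIUM_2007 * (1 + surcharge), 2)


print(monthly_premium_2007(90_000))    # 111.49 -- the article's "about $111.50"
print(monthly_premium_2007(250_000))   # 170.53 -- the article's "about $170.50"

# Fully phased in (January 2009), the top bracket pays 3.2 times the standard
# premium. Assume the standard premium grows 10 percent a year from the
# 2007 estimate, per the article's forecast.
standard_2009 = STANDARD_PREMIUM_2007 * 1.10 ** 2
print(round(standard_2009 * 3.2, 2))   # 380.99 -- "more than $375 a month" in 2009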

Under current law, the $80,000 threshold and the income brackets will be adjusted each year to keep pace with inflation, as measured by the Consumer Price Index.

President Bush recently proposed eliminating these annual adjustments, so that more people would pay a surcharge. “This change gives beneficiaries increased participation in their health care,” he said.

More than 40 million people are in Part B. Medicare officials estimate that 2 percent of them will have to pay a surcharge next year. The Congressional Budget Office says 5 percent of beneficiaries will be affected. The Social Security Administration puts the figure at 4 percent to 5 percent. Most people have their premiums deducted from monthly Social Security checks.

Fiscally conservative Republicans supported the surcharge. But so did some Democrats, who saw it as a progressive way to finance Medicare without cutting benefits or raising payroll taxes.

Senator Dianne Feinstein, Democrat of California, argued that “high-income beneficiaries can afford to pay a larger share of Medicare’s costs,” in part because Congress has cut their taxes in recent years.

The Congressional Budget Office estimates that the surcharge will raise $15 billion from 2007 to 2013.

Representative Nita M. Lowey, Democrat of New York, recently introduced a bill to repeal the surcharge, which she says will hit “more and more middle-class seniors.” Some advocates for older Americans, including the Senior Citizens League, with 1.2 million members, are lobbying for repeal.

A beneficiary can obtain relief from the surcharge by showing that the I.R.S. data was incorrect or that the person’s income declined because of a “major life-changing event” like the death of a spouse or the loss of pension benefits.

Theodore R. Marmor, a professor of political science at Yale, said the surcharge was more important for the politics of Medicare than for the financing of the program.

“The new income-related premium is fundamentally at odds with the premises of social insurance,” Mr. Marmor said. “Large numbers of upper-income people will eventually want to find alternatives to Part B of Medicare and will no longer be in the same pool with other people who are 65 and older or disabled. Congress will then have less reluctance to cut the program.”

Losing the War on Terror
Why Militants Are Beating Technology Five Years After Sept. 11

By Ahmed Rashid
Monday, September 11, 2006; A17

LAHORE, Pakistan -- In the five years since Sept. 11, the tactics and strategy of Islamic extremists fighting U.S. or NATO forces have improved dramatically. To a degree they could not approach five years ago, the extremists are successfully facing off against the overwhelming technological apparatus that modern armies can bring to bear against guerrillas. Islamic extremists are winning the war by not losing, and they are steadily expanding to create new battlefronts.

Imagine an Arab guerrilla army that is never seen by Israeli forces, never publicly celebrates victories or mourns defeats, and merges so successfully into the local population that Western TV networks can't interview its commanders or fighters. Such was the achievement of Hezbollah's 33-day war against Israeli troops, who admitted that they rarely saw the enemy until they were shot at.

Israel's high-tech surveillance and weaponry were no match for Hezbollah's low-tech network of underground tunnels. Hezbollah's success in stealth and total battlefield secrecy is an example of what extremists are trying to do worldwide.

In southern Afghanistan, the Taliban have learned to avoid U.S. and NATO surveillance satellites and drones in order to gather up to 400 guerrillas at a time for attacks on Afghan police stations and army posts. They have also learned to disperse before U.S. airpower is unleashed on them, to hide their weapons and merge into the local population.

In North and South Waziristan, the tribal regions along the border between Pakistan and Afghanistan, an alliance of extremist groups that includes al-Qaeda, Pakistani and Afghan Taliban, Central Asians, and Chechens has won a significant victory against the army of Pakistan. The army, which has lost some 800 soldiers in the past three years, has retreated, dismantled its checkpoints, released al-Qaeda prisoners and is now paying large "compensation" sums to the extremists.

This region, considered "terrorism central" by U.S. commanders in Afghanistan, is now a fully operational al-Qaeda base area offering a wide range of services, facilities, and military and explosives training for extremists around the world planning attacks. Waziristan is now a regional magnet. In the past six months up to 1,000 Uzbeks, escaping the crackdown in Uzbekistan after last year's massacre by government security forces in the town of Andijan, have found sanctuary with al-Qaeda in Waziristan.

In Iraq, according to a recent Pentagon study, attacks by insurgents jumped to 800 per week in the second quarter of this year -- double the number in the first quarter. Iraqi casualties have increased by 50 percent. The organization al-Qaeda in Iraq has spawned an array of new guerrilla tactics, weapons and explosive devices that it is conveying to the Taliban and other groups.

Moreover, efforts by armies to win the local citizens' hearts and minds and carry out reconstruction projects are also failing as extremists attack "soft" targets, such as teachers, civil servants and police officers, decapitating the local administration and terrorizing the people.

No doubt on all these battlefields Islamic extremists are taking massive casualties -- at least a thousand Taliban have been killed by NATO forces in the past six months. But on many fronts there is an inexhaustible supply of recruits for suicide-style warfare.

Western armies, with their Vietnam-era obsession with body counts, are not lessening the number of potential extremists every time they kill them but are actually encouraging more to join, because they have no political strategy to close adjacent borders and put pressure on the neighbors.

Militants from around the Arab world and even Europe are arriving in Iraq to kill Americans. Yet the United States refuses to speak to neighbors Syria and Iran, which facilitate their arrival.

Hundreds of Pakistani Pashtuns are joining the Taliban in their fight against NATO. Yet NATO has adopted a head-in-the-sand attitude, pretending that Afghanistan is a self-contained operational theater without neighbors and so declining to put pressure on Pakistan to close down Taliban bases in Baluchistan and Waziristan.

If this is indeed a long war, as the Bush administration says, then the United States has almost certainly lost the first phase. Guerrillas are learning faster than Western armies, and the West makes appalling strategic mistakes while the extremists make brilliant tactical moves.

As al-Qaeda and its allies prepare to spread their global jihad to Central Asia, the Caucasus and other parts of the Middle East, they will carry with them the accumulated experience and lessons of the past five years. The West and its regional allies are not prepared to match them.

Ahmed Rashid, a Pakistani journalist, is the author of "Taliban" and "Jihad: The Rise of Militant Islam in Central Asia."

As Homework Grows, So Do Arguments Against It

By Valerie Strauss
Washington Post Staff Writer
Tuesday, September 12, 2006; A04

The nation's best-known researcher on homework has taken a new look at the subject, and here is what Duke University professor Harris Cooper has to say:

Elementary school students get no academic benefit from homework -- except reading and some basic skills practice -- and yet schools require more than ever.

High school students studying until dawn probably are wasting their time because there is no academic benefit after two hours a night; for middle-schoolers, 1 1/2 hours.

And what's perhaps more important, he said, is that most teachers get little or no training on how to create homework assignments that advance learning.

The controversy over homework that has raged for more than a century in U.S. education is reheating with new research by educators and authors about homework's purpose and design.

No one has gone as far as the American Child Health Association did in the 1930s, when it pinned homework and child labor as leading killers of children who contracted tuberculosis and heart disease. But the arguments seem to get louder with each new school year: There is too much homework or too little; assignments are too boring or overreaching; parents are too involved or negligent.

"What should homework be?" asked veteran educator Dorothy Rich, founder of the nonprofit Home and School Institute. "In the biggest parameter, it ought to help kids make better sense of the world. Too often, it just doesn't."

In the nation's classrooms, teachers say they work hard to conform to school board policies and parent demands that do not always match what they think is the best thing for children.

Yet teachers themselves don't uniformly agree on something as basic as the purpose of homework (reviewing vs. learning new concepts), much less design or amount or even whether it should be graded. And the result can be inconsistency in assignments and confusion for students.

That is part of the reason some educators and authors are making new cases for the elimination of homework entirely, including in the new book "The Homework Myth," by Alfie Kohn.

Kohn points to family conflict, stress and Cooper's research as reasons for giving kids other things to do to develop their minds and bodies after school besides homework.

"I am always fascinated when research says one thing and we are all rushing in the other direction," Kohn said.

"It is striking that we have no evidence that there is any academic benefit in elementary school homework," he said. "Then people fall back on the self-discipline argument and how it helps students learn study skills. But that is an urban myth, except that people apply it in the suburbs, too."

In 1989, Cooper, now a professor of psychology and director of Duke's Program in Education, published an analysis of dozens of studies on the link between homework and academic achievement.

His conclusions: The research base showed no correlation between academic achievement and homework -- besides reading -- in elementary school, a small benefit in middle school and more for high school.

This spring, he co-authored another paper in the Review of Educational Research after reviewing various newer studies done on homework from 1987 to 2003, and he offered a few additions to his conclusions.

This time, he said, there was some evidence that, in grades 2 through 5, students do better on unit tests when they do short homework assignments on basic skills that relate directly to the test.

And, he said, it appears that more than two hours of high school homework, and more than 1 1/2 hours of middle school homework, have no academic benefit and may produce negative results.

Other educators, such as Linda Darling-Hammond, a Stanford University education professor and researcher, say that many of the studies Cooper evaluated were not tightly controlled and not authoritative but that his conclusions make sense.

Darling-Hammond said Cooper also is correct in pointing out that many teachers lack the skills to design homework assignments that help kids learn and don't turn them off to learning.

Today, schools of education provide varying levels of training in the art of designing homework assignments that are more than busywork, usually embedded in courses about curriculum. Many, however, offer none, and teachers say they wish the schools had.

"One isn't born knowing how to make sensible lesson plans and homework assignments," said Karen Zabrowski, a seventh-grade reading teacher at Chippewa Falls Middle School in Wisconsin.

But teacher knowledge is often trumped by school system policies, created by school boards whose members are often not educators, teachers have said.

Timothy Naughton said he learned about homework at Fordham University in the 1990s. "We agreed it wasn't the best practice for younger students, but we knew everybody was going to make us give it anyway, so we talked about how to reconcile the two positions," said Naughton, who has taught in various elementary grades and is a kindergarten teacher in East Stroudsburg, Pa. He gives no homework but suggests that parents read and practice basic skills with their kids.

Kohn said that if he were education czar, kids would not be assigned homework but would wind up learning anyway. That's what happened at the private Kino School in Tucson, where traditional homework was banned but kids designed their own after-school projects because they wanted to keep learning.

Cooper said that eliminating homework makes no more sense than "piling it on" and that the answer is somewhere in between.

Georgia Leigh, 16, an 11th-grader at Bethesda-Chevy Chase High School, tends to support the middle ground. It was not until 10th grade, she said, that homework was more than busywork. What changed then, she said, was that she began to be assigned more reading.

"I feel like I'm learning more when I'm reading than when I'm filling out math sheets," she said. "If homework were eliminated? I'd read anyway."

September 7, 2006

Equal-Opportunity Offender Plays Anti-Semitism for Laughs

By SHARON WAXMAN

LOS ANGELES, Sept. 6 — Fall is traditionally when Hollywood turns to more serious films, and the Toronto International Film Festival is where they are frequently shown. But a new movie that seems certain to raise hackles and induce squirming is a raucous comedy that makes its points by seeming to embrace sexism, racism, homophobia and that most risky of social toxins: anti-Semitism.

Screening at midnight on Thursday in Toronto, “Borat: Cultural Learnings of America for Make Benefit Glorious Nation of Kazakhstan” stars the chameleonlike comedian Sacha Baron Cohen as he impersonates a Kazakh reporter touring the United States, bringing his version of Kazakh culture to real-life Americans.

In one scene Borat insists on driving to California rather than flying, “in case the Jews repeat their attack of 9/11.” As he tours the South, he becomes terrified when he learns that an elderly couple who run an inn are Jewish. When cockroaches crawl under the door of his room, he becomes convinced the innkeepers have transformed themselves into bugs, and throws money at them.

In another scene Borat returns to his home village and participates in an annual ritual, “The Running of the Jews,” complete with giant Jew puppets that the villagers beat with clubs.

This anti-anti-Semitic humor is mixed in with other outrageous behavior, including slurs against Gypsies and gays, and a nude wrestling match. But in a world in which resurgent anti-Semitism has become — sometimes literally — an explosive topic, the movie may well hit a particular nerve, especially in Europe.

The British-born Mr. Baron Cohen, who calls himself an observant Jew, has performed this same high-wire comedy act for his HBO series, “Da Ali G Show,” in which he plays three characters, including Borat, each hilariously offensive in its own right.

The title character of the show, Ali G, is a vaguely Muslim British idiot with a hip-hop persona, who was the subject of a rather tame, and unsuccessful, film in 2002, “Ali G Indahouse,” released straight to video in the United States.

With “Borat,” Mr. Baron Cohen — who shares screenplay credit with several others — decided to head straight for the most sensitive areas of politically incorrect global culture, and for the first time will be doing so for a mass audience, far beyond the sophisticated niche of HBO. The film is to be released by 20th Century Fox on Nov. 3 on more than 2,000 screens nationwide.

(Borat is not explicitly Muslim, but Kazakhstan has a large Sunni Muslim population along with a sizable contingent of Orthodox Christians.)

Mr. Baron Cohen, who is appearing in Toronto as Borat, declined to be interviewed for this article and will be conducting interviews ahead of the film only in character.

20th Century Fox also declined to comment for this article or otherwise participate. Executives at the studio said that they were concerned about overemphasizing the political aspects of the humor, or otherwise labeling the movie, which they said they hoped would have broad appeal to a young audience.

The film is experimental and highly unusual for Hollywood, in some ways reminiscent of the guerrilla humor of Andy Kaufman, who baited members of the unsuspecting public with his characters, or the buffoonery of Charlie Chaplin as a Hitler-esque tyrant in “The Great Dictator” in 1940.

Film historians said that Hollywood was usually reluctant to take on controversy in general and had particularly avoided treating anti-Semitism in the past.

“Hollywood has a history of avoiding controversial topics, and notably did so at the end of the 1930’s, with the rise of Nazism and anti-Semitism,” said Jonathan Kuntz, who teaches American film history at the University of California, Los Angeles. Studios “were afraid of offending audiences, and of limiting their popularity in the European market,” he added. “And because so many moguls were Jewish, they were afraid this would be used to attack Hollywood as anti-Nazi.”

Today too Hollywood is often reluctant openly to discuss anti-Semitism, as was evidenced by the careful debate over Mel Gibson’s 2004 blockbuster, “The Passion of the Christ.” Only when Mr. Gibson was heard making anti-Jewish slurs this summer during a drunken-driving arrest did a few Hollywood veterans speak out against him.

“Borat” was to some extent made outside the Hollywood system. Fox kept the film off its production list and created a separate company, One America, to be the nominal producer. Mr. Baron Cohen also ran into creative differences with his first director, Todd Phillips, who left the production last year, while the film shut down for five months. The veteran comedy director Larry Charles eventually completed the film.

A spokesman for Mr. Baron Cohen said that Mr. Phillips’s departure was “a mutual decision.”

During the shoot Fox ignored numerous protests from the Kazakh Embassy in Washington, whose officials were concerned about the depiction of their country as prejudiced.

Early indications are that the film will be a hit. It rocked audiences with laughter at the Cannes Film Festival, where Mr. Baron Cohen was photographed on the beach wearing a neon-green kind of thong, and won an audience award at Michael Moore’s Traverse City Film Festival in Michigan this summer.

Still, “I can almost guarantee you that not everyone will get the joke,” said Richard B. Jewell, a professor of film history at the University of Southern California. But he added: “In my opinion it’s a very healthy thing. Some of the best films that have been made in the last 50 years have been black comedies.” He cited “Dr. Strangelove,” which poked fun at nuclear holocaust.

“What can be more serious?” he asked. “It makes people think about these things in ways they don’t when there are more straightforward, serious, sober films.”

September 5, 2006

Op-Ed Contributor

A Little Learning Is an Expensive Thing

By WILLIAM M. CHACE

Palo Alto, Calif.

When I was a college president, I was never able to give incoming freshmen the honest talk I wanted to. But had I done so, here’s what I would have said:

HELLO to you first-year students. I’m Laudable’s president. The next time we’ll see each other is when you graduate.

Most likely, that won’t be in four years, but in about 55 months. That’s because a good many of you won’t finish on time. And, because college presidents last only about six years and I’ve already been on campus for two, perhaps I won’t even be here.

I know you’re worried about money. I’m not telling you or your folks anything new when I say that Laudable looks expensive. The tuition increases here, just like those of our competitors, have outstripped the rate of increase in the consumer price index for years. This fall, tuition, room and board averages almost $32,000 at Laudable and other private colleges, and more than $15,000 at public ones.

But hold on. Most of you and your parents don’t pay for everything yourselves. In public four-year institutions, some 4 in 10 undergraduates get financial aid. At private places like Laudable, more than 80 percent of you do. Just like the auto industry, we have a sticker price and we have the price people really pay.

And like car dealers, we force you to borrow money to help make up the difference. You will probably owe more than $20,000, on average, when you leave Laudable. Graduates of public institutions will owe, on average, more than $15,000.

How will many of you begin your adult lives?

In serious debt.

We certainly take a lot of your money. But we ship money back to you. How much? This year, Laudable will spend more than $41,000 to educate each of you. At public institutions, it will be more than $31,000 per student. Some schools have huge endowments that help them generate the money they need to educate you (Harvard has more than $26 billion to count on). But most schools are like Laudable: we need your tuition dollars. Bottom line: money in and money out.

Laudable could be cheaper, but you wouldn’t like it. You and your parents have made it clear that you want the best. That means more spacious and comfortable student residences (“dormitories,” we used to call them), gyms with professional exercise equipment, better food of all kinds, more counselors to attend to your growing emotional needs, more high-tech classrooms and campuses that are spectacularly handsome.

Our competitors provide such things, so we do too. We compete for everything: faculty, students, research dollars and prestige. The more you want us to give to you, the more we will be asking you to give to us. We aim to please, and that will cost you. It’s been a long time since scholarship and teaching were carried on in monastic surroundings.

Laudable’s surroundings, by the way, will remind you of where you came from. That’s because your financial circumstances are pretty much the same as those of your classmates. More expensive schools have students from wealthier parents; less expensive schools draw students from families with fewer financial resources. More than half of the freshmen at selective colleges, public and private, come from the highest-earning quarter of households. Tell me the ZIP code and I’ll tell you what kind of college a high-school graduate most likely attends.

After paying (and receiving) all this money, please finish up and get out. Colleges like Laudable are escalators; even if you stand still, they will move you upward toward greater economic opportunity. Once you leave us, you’ll have a better chance for a good job and a way to pay off your debt and to give us more money when we call on you as alumni.

So don’t flunk out; you’ve got too much invested in us, and we have too much invested in you.

As for the way Laudable spends its money, I can assure you that your professors aren’t overpaid. But I am. I take home more money at Laudable than anyone else (save some of the clinical physicians over at our hospital and several coaches). My pay is about five times greater than an average faculty member’s. That’s because I’m thought of as the chief executive of the university and chief executives get paid a lot in America.

But I know I’m not really a chief executive because I don’t hold that kind of executive power. The professors here are Laudable’s most important asset, and they, not I, are the ones who run the show (just ask Larry Summers). Laudable could save some money by paying me less.

I welcome you all — those of you who will heed my advice, and those who will waste their dollars and ours. Whatever, count yourself lucky. Now you know about the money. See you, maybe, in 55 months.

William M. Chace, a former president of Emory and Wesleyan Universities, is the author of “100 Semesters: My Adventures as Student, Professor and University President, and What I Learned Along the Way.”

September 5, 2006

When Toddlers Turn on the TV and Actually Learn

By LISA GUERNSEY

Yelling at the television used to be the domain of adults watching “Jeopardy!” But young children have become the real pros.

Sit down with a 3-year-old to watch “Blue’s Clues” or “Dora the Explorer,” and see the shouting erupt. Whenever a character faces the camera and asks a question, children out there in TV land are usually answering it.

Active engagement with television has been an antidote to criticism that the tube creates zombies. “Blue’s Clues,” which celebrated its 10th anniversary last month, has been credited with helping young children learn from the screen. Academic research has shown that viewers ages 3 to 5 score better on tests of problem solving than those who haven’t watched the show.

But what happens with children younger than 3? Should babies and toddlers be exposed to television at all? Is there any chance that they could actually learn from the screen? While debates rage among parents, pediatricians and critics of baby videos (think “Baby Einstein”), developmental psychologists are trying to apply some science to the question.

Experiments conducted at Vanderbilt University, described in the May/June issue of Child Development, offer some hints about toddlers. They showed that 24-month-olds are more apt to use information relayed by video if they consider the person on the screen to be someone they can talk to. Without that, the children seemed unable to act on what they had seen and heard.

The experiments compared two video experiences: One was based on a videotape. Watching it was similar to watching “Blue’s Clues”; the actor onscreen paused to simulate a conversation, but back-and-forth interaction with the viewer was impossible. A different group of children experienced two-way live video. It worked like a Web cam, with each side responding in real time.

Georgene L. Troseth and Megan M. Saylor, psychologists at Vanderbilt, and Allison H. Archer, an undergraduate student there, designed the study to find out if toddlers would learn from video if they considered the onscreen actors to be, as they put it, “social partners.”

The test hinged on a hiding game. First the 2-year-olds watched the video — either the tape or the live version. The screen showed a person hiding a stuffed animal, Piglet, in a nearby room, often under a table or behind a couch. When the video ended, the children were asked to retrieve Piglet. Those who saw the recorded video had some trouble. They found him only 35 percent of the time. Children in the other group succeeded about 69 percent of the time, a rate similar to face-to-face interaction.

Does this mean that TV programs that simulate interaction are doing nothing for kids? Not necessarily, the researchers say. A few of the children in the recorded video group were especially responsive to the games and pauses, and they were the few children in that group who retrieved the toy.

“We found that if children gave evidence of treating the video as a social partner,” Dr. Troseth said, “they will use the information.”

Their article referred specifically to “Blue’s Clues,” saying the show appeared to be “on the right track” — a point that, not surprisingly, thrilled creators of the program. Alice Wilder, the show’s director of research, said each script was tested in live settings with children to make sure that the show’s hosts — a young man named Steve in the early seasons and the current one, Joe — appear to be having realistic, child-centered conversations with viewers.

Developmental psychologists say the Vanderbilt research offers an intriguing clue to a phenomenon called the “video deficit.” Toddlers who have no trouble understanding a task demonstrated in real life often stumble when the same task is shown onscreen. They need repeated viewings to figure it out. This deficit got its name in a 2005 article by Daniel R. Anderson and Tiffany A. Pempek, psychologists at the University of Massachusetts, who reviewed literature on young children and television.

Child-development experts say the deficit confirms the age-old wisdom that real-life interactions are best for babies. Parents can be assured, they say, that their presence trumps the tube.

But psychologists still want to get to the bottom of what might explain the difference. Is it the two-dimensionality of the screen? Do young children have some innate difficulty in remembering information transmitted as symbols? “It’s definitely still a puzzle, and we’re trying to figure out the different components to it,” said Rachel Barr, a psychologist at Georgetown University who specializes in infant memory. She and Harlene Hayne at the University of Otago in New Zealand published some early evidence of the video deficit in 1999.

The Vanderbilt research offers the possibility that the more socially engaging a video is, the more likely the deficit will disappear. But Dr. Troseth and other psychologists stress that in-person connections with parents are by far a child’s best teachers. No word yet on whether that includes those moments when harried parents are so distracted that TV characters are more responsive than they are.

 September 4, 2006

Op-Ed Contributor

The Summer Next Time

By TOM LUTZ

Palm Desert, Calif.

IN late May, for those of us who teach, the summer stretches out like the great expanse of freedom it was in grammar school. Ah, the days on the beach! The books we will read! The adventures we will have!

But before hunkering down to months of leisurely lolling around a pool slathered in S.P.F. 80, we need to take care of a few things: see what got buried in the e-mail pile over the course of the year, write a few letters of recommendation, and finally get to those book reviews we agreed to do. A few leftover dissertation chapters. The syllabuses and book orders for next year’s classes. Then those scholarly articles we were snookered into writing when the deadlines were far, far in the future — deadlines that now, magically, are receding into the past. My God, did I really tell someone I would write an article called “Teaching Claude McKay”? Before we know it, the summer is eaten up, we’re still behind on our e-mail, and the fall semester looms.

On paper, the academic life looks great. As many as 15 weeks off in the summer, four in the winter, one in the spring, and then, usually, only three days a week on campus the rest of the time. Anybody who tells you this wasn’t part of the lure of a job in higher education is lying. But one finds out right away in graduate school that in fact the typical professor logs an average of 60 hours a week, and the more successful professors work even more — including not just 14-hour days during the school year, but 10-hour days in the summer as well.

Why, then, does there continue to be a glut of fresh Ph.D.’s? It isn’t the pay scale, which, with a few lucky exceptions, offers the lowest years-of-education-to-income ratio possible. It isn’t really the work itself, either. Yes, teaching and research are rewarding, but we face as much drudgery as in any professional job. Once you’ve read 10,000 freshman essays, you’ve read them all.

But we academics do have something few others possess in this postindustrial world: control over our own time. All the surveys point to this as the most common factor in job satisfaction. The jobs in which decisions are made and the pace set by machines provide the least satisfaction, while those, like mine, that foster at least the illusion of control provide the most.

Left to our own devices, we seldom organize our time with 8-to-5 discipline. The pre-industrial world of agricultural and artisan labor was structured by what the historian E. P. Thompson calls “alternate bouts of intense labor and of idleness wherever men were in control of their working lives.” Agricultural work was seasonal, interrupted by rain, forced into hyperactivity by the threat of rain, and determined by other uncontrollable natural processes. The force of long cultural habit ensured that the change from such discontinuous tasks to the regimented labor of the factory never went particularly smoothly.

In 1877 a New York cigar manufacturer grumbled that his cigar makers could never be counted on to do a straight shift’s work. They would “come down to the shop in the morning, roll a few cigars,” he complained to The New York Herald, “and then go to a beer saloon and play pinochle or some other game.” The workers would return when they pleased, roll a few more cigars, and then revisit the saloon, all told “working probably two or three hours a day.” Cigar makers in Milwaukee went on strike in 1882 simply to preserve their right to leave the shop at any time without their foreman’s permission.

In this the cigar workers were typical. American manufacturing laborers came and left for the day at different times. “Monday,” one manufacturer complained, was always “given up to debauchery,” and on Saturdays, brewery wagons came right to the factory, encouraging workers to celebrate payday. Daily breaks for “dramming” were common, with workers coming and going from the work place as they pleased. Their workdays were often, by 20th-century standards, riddled with breaks for meals, snacks, wine, brandy and reading the newspaper aloud to fellow workers.

An owner of a New Jersey iron mill made these notations in his diary over the course of a single week:

“All hands drunk.”

“Jacob Ventling hunting.”

“Molders all agree to quit work and went to the beach.”

“Peter Cox very drunk.”

“Edward Rutter off a-drinking.”

At the shipyards, too, workers stopped their labor at irregular intervals and drank heavily. One ship’s carpenter in the mid-19th century described an almost hourly round of breaks for cakes, candy and whiskey, while some of his co-workers “sailed out pretty regularly 10 times a day on the average” to the “convenient grog-shops.” Management attempts to stop such midday drinking breaks routinely met with strikes and sometimes resulted in riots. During much of the 19th century, there were more strikes over issues of time-control than there were about pay or working hours.

I was recently offered a non-teaching job that would have almost doubled my salary, but which would have required me to report to an office in standard 8-to-5 fashion. I turned it down, and for a moment I felt like the circus worker in the joke: he follows the elephant with a shovel, and when offered another job responds, “What, and give up show business?”

Really, though, I’m more like Jacob Ventling and Edward Rutter. I don’t go out 10 times a day for a dram of rum, but I could. And in fact, maybe I will. Next summer.

Tom Lutz is the author of “Doing Nothing: A History of Loafers, Loungers, Slackers and Bums in America.”

September 2, 2006

At 2-Year Colleges, Students Eager but Unready

By DIANA JEAN SCHEMO

Correction Appended

DUNDALK, Md. — At first, Michael Walton, starting at community college here, was sure that there was some mistake. Having done so well in high school in West Virginia that he graduated a year and a half early, how could he need remedial math?

Eighteen and temperamental, Mickey, as everyone calls him, hounded the dean, insisting that she take another look at his placement exam. The dean stood firm. Mr. Walton’s anger grew. He took the exam a second time. Same result.

“I flipped out big time,’’ Mr. Walton said.

Because he had no trouble balancing his checkbook, he took himself for a math wiz. But he could barely remember the Pythagorean theorem and had trouble applying sine, cosine and tangent to figure out angles on the geometry questions.

Mr. Walton is not unusual. As the new school year begins, the nation’s 1,200 community colleges are being deluged with hundreds of thousands of students unprepared for college-level work.

Though higher education is now a near-universal aspiration, researchers suggest that close to half the students who enter college need remedial courses.

The shortfalls persist despite high-profile efforts by public universities to crack down on ill-prepared students.

Since the City University of New York, the largest urban public university, barred students who need remediation from attending its four-year colleges in 1999, others have followed with similar steps.

California State set an ambitious goal to cut the proportion of unprepared freshmen to 10 percent by 2007, largely by testing them as high school juniors and having them make up for deficiencies in the 12th grade.

Cal State appears nowhere close to its goal. In reading alone, nearly half the high school juniors appear unprepared for college-level work.

Aside from New York City’s higher education system, at least 12 states explicitly bar state universities from providing remedial courses or take other steps, like deferred admissions, to steer students needing help toward technical or community colleges.

Some students who need to catch up attend two- and four-year institutions simultaneously.

The efforts, educators say, have not cut back on the thousands of students who lack basic skills. Instead, the colleges have clustered those students in community colleges, where their chances of succeeding are low and where taxpayers pay a second time to bring them up to college level.

The phenomenon has educators struggling with fundamental questions about access to education, standards and equal opportunity.

Michael W. Kirst, a Stanford professor who was a co-author of a report on the gap between aspirations and college attainment, said that 73 percent of students entering community colleges hoped to earn four-year degrees, but that only 22 percent had done so after six years.

“You can get into school,” Professor Kirst said. “That’s not a problem. But you can’t succeed.’’

Nearly half the 14.7 million undergraduates at two- and four-year institutions never receive degrees. The deficiencies turn up not just in math, science and engineering, areas in which a growing chorus warns of difficulties in the face of global competition, but also in the basics of reading and writing.

According to scores on the 2006 ACT college entrance exam, 21 percent of students applying to four-year institutions are ready for college-level work in all four areas tested: reading, writing, math and biology.

For many students, the outlook does not improve after college. The Pew Charitable Trusts recently found that three-quarters of community college graduates were not literate enough to handle everyday tasks like comparing viewpoints in newspaper editorials or calculating the cost of food items per ounce.

The unyielding statistics showcase a deep disconnection between what high school teachers think that their students need to know and what professors, even at two-year colleges, expect them to know.

The Cal State system admits only students with at least a B average in high school. Nevertheless, 37 percent of the incoming class last year needed remedial math, and 45 percent needed remedial English.

“Students are still shocked when they’re told they need developmental courses,’’ said Donna McKusik, the senior director of developmental, or remedial, education at the Community College of Baltimore County. “They think they graduated from a high school, they should be ready for college.’’

Across the nation, federal and state education officials are pressing for a K-16 vision of education that runs from kindergarten through college graduation. Such an approach, they say, would help high schools better prepare students for college.

In Florida, Gov. Jeb Bush appointed a Board of Regents to oversee education at all public institutions, from elementary through bachelor’s programs. At Cal State, professors are advising 12th-grade teachers on preparing students to succeed in college.

Starting at a Deficit

As the debate rages, nearly half of all students seeking degrees begin their journeys at community colleges much like the Dundalk campus of the Community College of Baltimore County, two-story no-frills buildings named by letters, not benefactors or grateful alumni. The college’s interim vice chancellor for learning and developmental education, Alvin Starr, said he saw students who passed through high school never having read a book cover to cover.

“They’ve listened in class, taken notes and taken the test off of that,’’ Dr. Starr said.

Though remedial needs are high, Dr. Starr said, the courses offer something invaluable, the chance to overcome basic deficiencies in reading, writing or math.

“You have to figure the cost to society on the other side if you don’t educate these students,’’ he said.

Most of the students expect the transition to community college to be seamless. But the first, and sometimes last, stop for many are remedial math classes.

“It’s the math that’s killing us,’’ Dr. McKusik said.

The sheer number of enrollees like Mr. Walton who have to take make-up math is overwhelming, with 8,000 last year among the nearly 30,000 degree-seeking students systemwide. Not all those students come directly from high school. Many have taken off a few years and may have forgotten what they learned, Dr. McKusik said.

More than one in four remedial students work on elementary and middle school arithmetic. Math is where students often lose confidence and give up.

“It brings up a lot of emotional stuff for them,’’ Dr. McKusik said.

She told of 20 students who had just burst into tears on receiving their math entrance exam scores and walked out on college. Mr. Walton remembers a fellow student who failed to hand in a math assignment for the fourth time in the last week of class and learned that he would fail. The student lunged toward the professor and said, “I’ll kill you.”

“You can say whatever you want, but this really isn’t helping your grade,” the professor replied, Mr. Walton said.

The student stormed out the door with a final expletive, leaving the professor shaken.

Fear of Appearing Ignorant

The biggest challenge, professors say, is trying to engage students, to persuade them that ideas matter. Dr. McKusik suspects that behind the apathy is a fear of appearing ignorant.

“Everything in society is geared to celebrate, to value, the winner,” she said. “These are students who haven’t been at the top. They won’t show themselves as vulnerable at all.’’

With most students having commitments to jobs and families, community colleges typically offer little in the way of a social life or school spirit. So they need to find ways to reach their less traditional audience.

“That’s why we’re trying to use pop culture in the classroom, to get their attention,’’ said Betsy Gooden, an English teacher who, in a remedial reading class one day last spring, tried to coax students to discuss a television documentary.

Two or three students in a class of 10 women carried most of the discussion, which seemed more like Ricky Lake than Lit 101, with students reacting to the film almost exclusively in terms of their personal experiences.

They covered love, sex and cheating boyfriends. Before the class was over, two women disclosed that they had been raped. About half the students said nothing at all.

Karen Olson, a history professor, and David Truscello, who teaches English, are trying another common strategy, mixing remedial work with other subjects. They are co-teachers of a course that combines African-American history with composition.

Professor Olson says teachers should stop making “unrealistic assignments’’ like chapters from “600-page textbooks’’ and should meet students at their level, raising abilities by degrees.

In her class, she assigns more manageable readings and carves up the load, so no student is responsible for doing it all.

“It’s not like they’re living four years in a dorm,’’ Professor Truscello said.

Most are working, sometimes at more than one job.

“That impinges on everything,’’ he added. “I have students who take two buses to come to school. It’s amazing that they do it.’’

Solutions and Successes

Another part of the solution at community colleges is in Student Success Centers. They are actually tutoring centers. Dundalk’s is open 63 hours a week.

Along a wall is a rack of handouts explaining points of grammar that might have last been explicitly taught in middle school, a measure of the immense ground to be made up. One covers comparative adjectives, explaining “more” vs. “most” or “smarter” vs. “smartest.” Another discusses using pronouns and verb tenses.

At one table, Kirn Shahzadi, 20, once an A student at Parkville High School, was being tutored a few hours before her final in remedial algebra. In addition to math, Ms. Shahzadi needed remedial courses in reading and one covering basic skills like note taking, researching and organizing schedules. By the second week of that course, she said, half the students had dropped out.

Still, the school has winners who make it through and feel that they have to fit into the changing workplace.

Mr. Walton said careers like his father’s as a welder for a major construction company were now harder to find. His father rose to foreman, putting Mr. Walton’s older brother through Johns Hopkins University.

Mr. Walton, who married soon after high school, put himself through the Baltimore community college working as a security guard at $7.80 an hour. He has had shoplifters pull knives on him and spray him with Mace, he said.

His salary covered the utilities and phone bills, and left his wife, an administrative assistant at Johns Hopkins, to pay the mortgage. He added that at times he suspected that she had felt more like a caretaker than a wife, and he worried for their future.

“I know she’s sick and tired of taking care of me,’’ he said in May. “It’s rip-your-hair-out-at-night difficult.’’

But Mr. Walton made it through that remedial math class four years ago, ultimately praising the dean for standing firm. In June, he crossed a stage to receive an associate’s degree in computer science. Next year, he plans to earn another degree in, of all things, math.

He said he would like to earn a full bachelor’s, but hesitates.

“I’m scared to death of going to college,’’ he said. “I’ll be up to my eyeballs in debt.’’

This summer he sent his résumé even to employers demanding bachelor’s degrees and several years’ experience, hoping that his enthusiasm would compensate where credentials fell short. He sought positions that included tuition breaks for employees.

His strategy paid off with two offers, one in data entry at the community college here, a job he held on work study before graduating, and another as a technician repairing copying machines. Mr. Walton went for the second.

It offers benefits, tuition reimbursement and a salary of $22,850 a year, with extra money toward buying a new car every few years.

“I feel a little bit more — I don’t want to say confident — but maybe worthy,’’ Mr. Walton said. “Now, I feel like I’m all that, and a bag of chips.’’

Correction: Sept. 9, 2006

A front-page article last Saturday about the lack of preparation among some students applying for community college misspelled the surname of an official at the Community College of Baltimore County, who said students were shocked to learn they needed remedial courses. She is Donna McKusick, not McKusik. The article also misspelled the given name of a former talk show star, whose habit of personalizing issues on her program was likened to students in a remedial reading class who discussed a television documentary in personal terms. She is Ricki Lake, not Ricky.

 A-Rod Agonistes

Despite his extraordinary numbers, New York fans are quick to discount his contributions. And when things go wrong for A-Rod, even his teammates find him hard to motivate and harder to understand

By Tom Verducci

"Joe wants to see you."

Alex Rodriguez still was weak from a throat infection that had confined him to his Seattle hotel room for the New York Yankees' game the previous night -- not to mention forced him to cancel a recording session for his ringtone endorsement deal -- when he walked into the visitors' clubhouse at Safeco Field on Aug. 24 and was told to go to manager Joe Torre's office. Torre asked him to close the door, then motioned to the blue leather couch in the smallish room. "Sit down."

The richest and most talented player in baseball was in trouble. Rodriguez could not hit an average fastball, could not swat home runs in batting practice with any regularity, could not field a ground ball or throw from third base with an uncluttered mind and cooperative feet, could not step to the plate at Yankee Stadium without being booed and could not -- though he seemed unaware of this -- find full support in his own clubhouse.

For 11 summers Rodriguez had been the master of self-sufficiency, a baseball Narcissus who found pride and comfort gazing upon the reflection of his beautiful statistics. His game, like his appearance, was wrinkle-free. Indeed, in December 2003, when the Red Sox were frantically trying to acquire Rodriguez from the Texas Rangers, several Boston executives called on Rodriguez in his New York hotel suite after 1 a.m. Rodriguez answered the door in a perfectly pressed suit, tie knotted tight to his stiff collar. The Red Sox officials found such polished attire at such a late hour odd, even unsettling.

But then Rodriguez has long been the major league equivalent of the prettiest girl in high school who also gets straight A's, which is to say he is viewed with equal parts admiration and resentment. The A-Rod of 2006 was different, though -- unhinged and, in a baseball sense, unkempt. "My seasons have always been so easy," says Rodriguez. "This year hasn't been easy." He adds that his wife, Cynthia, in helping him with his struggles, encouraged him to "turn to the Lord for guidance."

With the boost of a September surge Rodriguez's final numbers will look, as usual, stellar. (At week's end he was hitting .287, with 33 homers and 114 RBIs.) But even Rodriguez admitted early this month that his statistics can't erase the pain he felt during his three-month slip into a dark abyss, when he lost his confidence, withered under media and fan pressure, and, some teammates believe, worked a little too hard at keeping up appearances -- displaying "a false confidence," New York first baseman Jason Giambi said. The slump (a word Rodriguez refuses to utter) revealed that for all his gifts, A-Rod may never be seen by Yankees traditionalists as worthy of his pinstripes.

Yet there's still another chapter to be written in the story of his season. He still, God help him, has October.

Torre had been concerned about Rodriguez and his game for weeks before he called him into his office. Effort hadn't been the issue. If anything, the 31-year-old Rodriguez works too hard, crams too many bits of information into his head. He even studies videotape shot from centerfield cameras to see if he can decode patterns in catchers' signal sequences with a runner on second base.

"I can't help that I'm a bright person," he said last month. "I know that's not a great quote to give, but I can't pretend to play dumb and stupid."

What bothered Torre most was Rodriguez's seeming obliviousness to how badly he was playing. In June, for instance, hitting coach Don Mattingly ordered Rodriguez into the cage and sternly lectured him on the flaws in his swing, which Mattingly thought A-Rod had been unwilling to address. "An intervention," Mattingly called it. "He got to a pretty good point with [his swing], but it lasted only a few days and he went right back to where he was."

In the 80 games the Yankees played from June 1 to Aug. 30 -- almost half a season -- Rodriguez hit .257 with 81 strikeouts while committing 13 errors. Tabloids mocked him. Talk radio used him for kindling. "I haven't seen anything like it since I've been here," said reliever Mariano Rivera, in his 12th year as a Yankee, of the rough treatment.

Torre hit .363 with the St. Louis Cardinals in 1971 and .289 the following season, giving him a deep understanding of the ebb and flow of performance. With veteran players especially he operates like an old fisherman checking the tide charts, believing that the worst of times only means the best is to come. Rodriguez will hit, he thought, and he kept telling his third baseman exactly that.

Torre's trademark placidity ended, though, when Giambi asked to talk to Torre in Seattle. "Skip," Giambi told Torre, "it's time to stop coddling him."

For all the scorn heaped upon Giambi for his ties to the BALCO steroid scandal, he is a strong clubhouse voice because he plays with a passion that stirs teammates and even opponents. This season, for instance, he reprimanded his former Oakland A's teammate, Orioles shortstop Miguel Tejada, for occasionally showing up late to games out of frustration over another losing Baltimore season. "You're better than that," he told Tejada. So Giambi's gripe about Rodriguez sounded an alarm with Torre.

"What Jason said made me realize that I had to go at it a different way," Torre says. "When the rest of the team starts noticing things, you have to get it fixed. That's my job. I like to give individuals what I believe is the room they need, but when I sense that other people are affected, teamwise, I have to find a solution to it."

The players' confidence in Rodriguez was eroding as they sensed that he did not understand how much his on-field struggles were hurting the club. Said one Yankees veteran, "It was always about the numbers in [Seattle and Texas] for him. And that doesn't matter here. Winning is all you're judged on here."

Before Giambi went to Torre, he had scolded Rodriguez after a 13-5 win in Boston on Aug. 19. Irked that Rodriguez left four runners on base in the first three innings against a shaky Josh Beckett, Giambi thought A-Rod needed to be challenged. "We're all rooting for you and we're behind you 100 percent," Giambi recalls telling Rodriguez, "but you've got to get the big hit."

"What do you mean?" was Rodriguez's response, according to Giambi. "I've had five hits in Boston."

"You f------ call those hits?" Giambi said. "You had two f------ dinkers to rightfield and a ball that bounced over the third baseman! Look at how many pitches you missed!

"When you hit three, four or five [in the order], you have to get the big hits, especially if they're going to walk Bobby [Abreu] and me. I'll help you out until you get going. I'll look to drive in runs when they pitch around me, go after that 3-and-1 pitch that might be a ball. But if they're going to walk Bobby and me, you're going to have to be the guy."

(Asked about Giambi's pep talk, Rodriguez said he could not remember what was discussed, though he added, "I'm sure we had a conversation.")

In Seattle, Torre looked at Rodriguez squarely and said, "This is all about honesty. And it's not about anybody else but you. You can't pretend everything is O.K. when it's not. You have to face the reality that you're going through a tough time, and then work from there."

It was as close to a tongue-lashing as the low-key Torre ever gets. When the manager comes down on a player, he will mix in the occasional profanity, but his voice remains even and there are no threats. Here his hammer was the rebuke that by refusing to address his slump head-on, Rodriguez was letting himself and the team down. Torre told him he needed to show some fight, some anger even, rather than continuing to act as if he were doing just fine.

Rodriguez maintained eye contact while Torre spoke and nodded repeatedly. His only sign of discomfort was that he kept twirling his wedding ring around his finger. When Torre was done, he asked A-Rod if he understood what he had just told him. "Yes, 100 percent," Rodriguez said firmly.

Earlier this month, in recalling the meeting with Torre, Rodriguez said, "Oh, he was real tough. That was the toughest he's been on me."

On the night of the meeting Rodriguez struck out as a pinch hitter to end the game. He whacked the dugout railing with his bat, walked up the runway and into the clubhouse, and picked up a folding chair and threw it.

Two days and seven more embarrassing strikeouts later, it seemed as if the meeting with Torre had never happened. It was late afternoon at Angel Stadium of Anaheim, so late that the concession and maintenance workers were long gone as Rodriguez walked through the empty labyrinth of service tunnels from the clubhouse to a rightfield parking lot. "It's not a big deal," he said. "It's only two games. Back in 1999 I was 5 for 81 [actually 6 for 62] and got an 0-and-2 fastball from Esteban Yan over my head and hit it out, and I was fine. This is nothing like that. It's only two games."

It was classic A-Rod: the instant recall of his numbers, the whistling past the graveyard of a slump that was much deeper than two days. He has already hit into more double plays than ever before, and he most likely will exceed his career highs in strikeouts and errors. Rodriguez also hadn't come to terms with his teammates' sense that he wasn't doing enough to shake things up. Torre and his coaches, for instance, lingered late in the Angel Stadium clubhouse on the previous night trying to decide what to do about Rodriguez. Some wanted him dropped in the lineup. Torre came down on the side of moving him up to second in the order.

Despite taking 45 minutes of batting practice after the game that day, Rodriguez continued to flail away, in the midst of what would be a 2-for-20 stretch with 14 strikeouts. Like a blindfolded kid hacking at a piñata, he missed the baseball 19 of the 36 times he swung the bat.

Said centerfielder Johnny Damon during that West Coast trip, "His swing is so mechanical. He's too good to be swinging like that. Just let it flow. See the ball and react to it. And sometimes you need to do whatever you can, especially with two strikes or with runners on, to get the job done. He's not doing that."

"He's guessing," Giambi said, "and he's doing a bad job of it, which is inevitable when you guess as often as he guesses. He's squeezing the f------ sawdust out of the bat."

Said another teammate, "I think he ought to get his eyes checked. I'm not kidding. I don't think he's seeing the ball."

And another: "I honestly think he might be afraid of the ball."

Every clubhouse has a unique current, like that of a river, with a temperature and a pace that can be felt only by wading into it. The A's, taking their cue from general manager Billy Beane's shorts and flip-flops, play as if it's Friday happy hour. The Atlanta Braves, eschewing the clubhouse stereo, have a self-assured, nine-to-five approach. The Yankees, the last baseball bastion in which beards and individualism are verboten, foster a Prussian efficiency.

The old guard with connections to New York's four championship seasons from 1996 to 2000 -- Torre, Rivera, shortstop Derek Jeter, catcher Jorge Posada and outfielder Bernie Williams -- almost never talks about individual numbers because stats are incidental to the team's mission: winning the World Series. Those title teams talked about "passing the baton" -- taking a walk or moving a runner over out of confidence in and respect for the next hitter. Reliance on one another is what mattered. That is still the covenant of the Yankees, though perhaps not as sublimely executed.

One day last month, wading into that current, I asked Rodriguez whom he has relied on most during his difficult summer. He first mentioned Cynthia.

But to whom has he turned on this Yankees team?

He looked down and thought in silence. Ten seconds passed.

Finally he said, "Rob Thomson." Thomson is the team's special-assignment coach who throws batting practice.

"And Mo. Mariano is the best. Those three."

And that was it.

As the conscience and soul of the team, Rivera is everyone's touchstone. When asked if he had counseled Rodriguez this summer, Rivera said, "He has my support, [but] he has to figure it out on his own. Sometimes you try so hard to do things so right that you do them all wrong. It's like moving in quicksand. The more you move, the more you sink."

As revered as Rivera is, though, no one is more important to the Yankees' clubhouse culture than the captain, the 32-year-old Jeter. As younger players Rodriguez and Jeter enjoyed a close friendship, often staying with each other when the Yankees faced the Mariners. But they have had little personal connection since 2001, when Rodriguez referred to Jeter as a number-two hitter in an Esquire story, code for a complementary player. Giambi referred last month to "the heat that exists between them."

Jeter, who publicly supported Giambi when he was being blasted for his BALCO involvement, has refused to throw any life preservers to Rodriguez this summer. I asked Jeter why he hasn't told the critics to ease up on A-Rod. "My job as a player is not to tell the fans what to do," he said. "My job is not to tell the media what to write about. They're going to do what they want. They should just let it go. How many times can you ask the same questions?"

Had he ever seen such persistent criticism? "Knobby," he said, referring to error-prone former second baseman Chuck Knoblauch. "[Roger] Clemens for a whole year. Tino [Martinez]."

Has A-Rod's treatment been worse?

"I don't know," Jeter said. "I don't think about that. I'm just concerned with doing what we can to win."

Here is the way Hall of Fame slugger Reggie Jackson, a Yankees special adviser and a member of the franchise's mythological pinstriped society, explained the yin and yang of the Jeter-Rodriguez relationship: "Alex is too concerned with wanting people to like him. Derek knows he can control only things within the area code DJ."

Rodriguez must be deferential to Jeter because birth order within the Yankees' family is a powerful influence. Rodriguez will never be as popular as Jeter with New York fans, will never catch him in rings or Yankees legacy, in the same way the younger brother never will be the oldest, no matter how many birthdays pass.

When I asked Rodriguez about his relationship with Jeter this year, he replied, "People always want to look at someone's silence and equate that with a negative thing. I don't see it that way."

I reminded him that Jeter's words carry the most weight. "Mariano said good things [about me]. Joe said good things. [G.M. Brian] Cashman said great things," Rodriguez said. "But again, people want to focus on Jeet. Jeet's very quiet by nature, so I wouldn't want him to change who he is to come and defend me. Because I'm a grown man."

Watching a Yankees-Angels game in Anaheim from a television booth, Jackson noticed Rodriguez (the number-two hitter that day) and Jeter (batting third) near the on-deck circle with their backs to each other. "Classic Ruth-and-Gehrig picture right there," said Jackson, referring to the legends and their frosty relationship.

Jackson likes Rodriguez, recognizes in him the same need for ego massaging that he had as a player. Jackson took him to dinner last month -- yet another intervention -- and described how bad he had it as a Yankee. Jackson talked about when his teammates left notes in his locker telling him that they didn't want him in New York; about how manager Billy Martin so beat it into his head that he was a bad defensive player that on the night Jackson hit three home runs in the 1977 World Series, he played a routine double into a triple because he'd been stricken with fear that he'd screw it up; about when he was in the midst of such a horrific strikeout streak that he pleaded to Detroit Tigers catcher Lance Parrish, "Tell me what's coming, and I promise I'll take a turn right back into the dugout no matter where I hit it. I just want to look like a pro a little bit." (Parrish replied, "F--- you"; Jackson, to his immense satisfaction, grounded out.)

During the game, Jackson told a parable to make a point about Rodriguez. A man is trapped in his house as floodwaters rise. Twice he refuses help, once from rescuers in a boat and then, when the man seeks refuge on his roof, from rescuers in a helicopter. "No, thanks," the man says. "I've got faith." The next thing he knows he is face-to-face with God in heaven.

"But I put my faith in you!" the man cried.

"Yes," God replied, "and I answered your faith and tried to help you twice."

As Jackson spoke, Rodriguez whiffed yet again, this time on a pitch that bounced on the grass in front of home plate. How does a player with so much talent get so bad? It seemed ages ago, but Rodriguez was the American League Player of the Month for May, when he batted .330 with eight homers and 28 RBIs. Then he lost the natural groove and quickness in his stroke. A crisis of confidence befell him when he could not hit the ball out of the park to right centerfield in batting practice.

"BP is a big key for me," Rodriguez said. "And you don't know how devastating it is to hit a ball you think you got squarely and see it die on the warning track. Out of 40 swings in BP, I should hit 22 out of the park. I was hitting three out of 40. I couldn't hit a fastball. Eighty-nine, 90 [mph pitches] were going right past me, and I knew it."

Trying to catch up to fastballs, he started guessing and began his swing early, lunging at the ball with his hips drifting forward, creating a flaw that robbed him of even more power -- or worse, flailing embarrassingly at what turned out to be a slider. Then as he carried the anxiety into the field, his usually reliable glove began to fail him.

"He puts in the work before games and looks textbook out there," third base coach Larry Bowa said last month. "But all of a sudden the game starts, and he quits using his feet and he's fielding with a lazy lower half. That causes his arm to drop, and the ball sails on him."

There was one game against Boston in Yankee Stadium in June when Rodriguez looked so anguished by the rough treatment from New York fans that Red Sox designated hitter David Ortiz, while watching him from the on-deck circle, grew concerned. Ortiz caught Rodriguez's attention and gave him an exaggerated exhale, the way you might when a physician asks you to take a deep breath. Rodriguez would later thank Ortiz. "It was painful to see his face," Ortiz said. "I had to tell him to just breathe and relax."

Asked when his season turned sour, Rodriguez replied, "I was absolutely on fire in Detroit early in the year. Then I got sick and I didn't play for three or four days. And then the whole month was kind of lost. It took a while to get my strength back. I'm not explaining that June, the month I stunk, was because I got sick. Let's make that clear. You ask, 'What's the turning point, going from Player of the Month in May to June?' That's the only thing in the middle."

He did admit that the media and fan criticism caused him stress that crept into his game. "I think it bothered me, early in the year," he said. The jeering of Rodriguez fed on itself, with Yankees fans emboldened by the obvious physical signs from A-Rod that he was unnerved. Posada could go 0 for 25 in August and go uncriticized, but Rodriguez would be excoriated for popping up in the first inning.

Sample A-Rod headlines from the summer:

E-ROD

K-ROD

A-ROD GETS A HIT....

DO YOU HATE THIS MAN?

Said Rodriguez, "It actually reached the point of being so ridiculous that I just had to laugh. It's like if you show up at work one day with a red shirt, and I go, 'Man, that's an ugly shirt.' And the next day you wear a blue shirt, and I go, 'Man, that's an ugly shirt.' And the next day, yellow shirt, same thing. And on and on, every day. At some point you understand it's not really about the shirts. And it becomes easy to dismiss the criticism."

Why must Rodriguez defend himself? He plays hard, is durable, stays out of trouble off the field, has hit more than 460 home runs and might wind up reaching 800, which would place him on the short list of the greatest players in history. He is a career .305 hitter (and has batted nearly the same with runners in scoring position, by the way) with 10 All-Star selections, eight Silver Sluggers, four home run titles, two MVP Awards, two Gold Gloves and one batting title.

And yet A-Rod routinely is treated like the guy in the dunk tank at the county fair, even, most incriminating of all, by his peers. In the past two years he's been called out by Boston pitcher Curt Schilling ("bush league"), Red Sox outfielder Trot Nixon ("He can't stand up to Jeter in my book, or Bernie Williams or Posada"), Chicago White Sox manager Ozzie Guillen ("hypocrite") and New York Mets catcher Paul Lo Duca (who accused him on the field of showing up the Mets by admiring a home run too long).

"One thing people don't like," said one teammate, "is his body language. Too much of what he does on the field looks ... scripted."

I asked Rodriguez why criticism of him from inside and outside the game is so amplified. "We know why," he said.

The contract? That 10-year, $252 million deal that no one has come close to matching for six years? He nodded.

"But I don't expect people to feel sorry for me," he said. "My teammates get more upset about the criticism and booing than I do. A hundred players have come to third base and said, 'This is bulls---. You're having a great year.' You wonder why it bothers players so much. Tim Salmon, Andruw Jones, Chipper Jones, Garret Anderson ... I could throw you a hundred names. They're looking at the scoreboard and saying, 'This guy's got 90 RBIs and I've got 47, and I'm getting cheered?'

"My agent, Scott Boras, was talking about [Oakland third baseman] Eric Chavez, who's a great player. He's hitting .235. He's got 16 home runs, 43 ribbies? This guy is getting cheered every time he comes up to the plate. If I can look back on 2006 and see I made 25 errors, hit .285 and drove in 125, I mean, has God really been that bad to me?"

"Alex doesn't know who he is," Giambi said in late August. "We're going to find out who he is in the next couple of months."

October is the foundry of Yankees legend. It's why Scott Brosius will never have to buy another meal in New York, though the third baseman was a career .257 hitter, including .245 with a dreadful .278 on-base percentage in the playoffs. But Brosius had a couple of huge hits, and the Yankees were 11-1 in postseason series with him.

For all his career achievements, Rodriguez cannot become a made Yankee without a memorable October. He won the AL MVP award last year, but what stuck to him was his 2-for-15 showing in a Division Series loss to the Angels. It reinforced his disappearance during New York's historic 2004 ALCS collapse to Boston. Until Game 4 of that series, Rodriguez had hit .372 and slugged .640 in 22 career postseason games. But since then he has hit .125 (4 for 32) and slugged .250 while the Yankees have gone 2-7. It's unfair, of course, but to find real acceptance in New York, Rodriguez must win a ring as a Yankee.

Not that A-Rod believes he has all that much that needs to be redeemed this season. His extreme slump -- not his word, of course -- that peaked in Anaheim didn't seem so bad to him. "Reggie hit .230 one year," Rodriguez said. "That's awful. He struck out 170-something times in a year. I don't care who you are, extremes are just part of the game. I was awful [in Anaheim], but Jeter was 0 for 32 [in 2004], Mo blew three games in one week [last year].... Everybody goes through it."

Rodriguez isn't the only Yankee who needs a good October. When he looks around the clubhouse, he sees more teammates who have never won a title in New York than those who have. And thanks to the Rangers' picking up $67 million of the money left on his contract when he was traded to New York, Rodriguez can find three players in the same room to whom the Yankees are paying more this year -- Jeter ($21 million), Giambi ($19 million) and righthander Mike Mussina ($19 million) -- and a fourth, lefthander Randy Johnson, to whom they pay an equal amount ($16 million). Next year the Yankees will pay outfielder Bobby Abreu ($17.5 million) more than Rodriguez, making A-Rod a veritable bargain. I point out all of this to Rodriguez early this month as we walk underneath the first base stands at Yankee Stadium toward the indoor batting cage.

"Mussina doesn't get hammered at all," he said. "He's making a boatload of money. Giambi's making [$20.4 million], which is fine and dandy, but it seems those guys get a pass. When people write [bad things] about me, I don't know if it's [because] I'm good-looking, I'm biracial, I make the most money, I play on the most popular team...."

He laughed easily, his mood still bright after a Yankee Stadium curtain call the previous day in which Torre told him the fans wanted him, prompting Rodriguez to observe, "I'm very shy when I play. I always wonder, If I was an a------ and a very flamboyant guy, how much attention could I really call to myself?"

Yet that shyness has been his undoing. Rodriguez suffers from an astonishing lack of competitive arrogance proportionate to his immense skill. Jackson, for instance, hates the way A-Rod does his pretty peacock-preening practice swings and then lacks any physical presence once he steps in. Even his infamous gut reaction to Boston pitcher Bronson Arroyo's trying to tag him along the first base line in the 2004 ALCS -- Rodriguez awkwardly slapped Arroyo's glove rather than bulldozing the pitcher or first baseman Doug Mientkiewicz -- was a window into his softer side.

Rodriguez knows reporters' names and their affiliates and will often ask them questions about themselves, a rarity among ballplayers. This solicitousness can be awkward, even detrimental, in the socially stunted environment of a clubhouse and the brutally demanding environment of Yankee Stadium. His blood may not run cold enough.

"You know what you are?" Jackson said to Rodriguez in the New York clubhouse last Thursday. "You're too nice."

With a hitter as talented as Rodriguez, it would seem inevitable that after the drought would come a deluge. ("No," says Rodriguez, "because you don't believe it's inevitable when you can't hit the ball out in batting practice.") On Aug. 31 in the Bronx he banged out three hits against the Detroit Tigers, only his second three-hit game at home since the All-Star break. That triggered a 9-for-17 tear in which Rodriguez smashed five home runs, including one that looked so much like a routine fly-out off the bat that Torre yelled to the runners, "Tag up." Four hundred fifty feet later the ball landed in the black seats beyond centerfield. "Once you're relaxed, you react to the [pitch]," Torre said. "He's reacting to the ball, not predetermining what he was going to do, like before."

Rodriguez hasn't stopped hitting, either, batting .360 since that breakout game as the Yankees, comfortably in control of the AL East, play carefree baseball. He rediscovered his smooth footwork in the field, and his hands felt faster at the plate. He began to wait long enough on pitches to drive them hard to centerfield and rightfield, the satisfying confirmation for a righthanded hitter, like a wink from a pretty girl, that life is good.

After the home runs Rodriguez would credit Torre for helping to put the groove back in his swing. Under the stadium on a cool, wet night in which October seemed so close, I thought about that meeting Torre had with A-Rod in Seattle and had one last question:

What was Joe's main message?

Rodriguez rolled the question around in his head for a moment. He hesitated, "Uh ..." and then answered with this: "'We need you.'"    

Issue date: September 25, 2006


The breakfast hype

Be it eggs or a hearty bowl of oatmeal, morning fare has long been branded the most important meal. Now some scientists are saying: Not so.

By Andreas von Bubnoff
Special to The Times

September 18, 2006

SHELLEY RATTET of Framingham, Mass., has lost about 25 pounds these past few months. It was the first time the 55-year-old clinical psychologist had lost weight in 10 years.

One of the changes she made: Making sure that she ate a good breakfast.

Mark Mattson, a neuroscientist at the National Institute on Aging, disdains the morning repast. He hasn't eaten breakfast in 20 years, ever since he started running early in the mornings.

He says he's skinny and healthy and never felt better.

Whatever you do, don't skip breakfast.

Breakfast: It's the most important meal of the day.

Such pronouncements carry almost the aura of nutritional religion: carved in stone, not to be questioned. But a few nutritionists and scientists are questioning this conventional wisdom.

They're not challenging the practice of sending children off to school with some oat bran or eggs in their belly. They acknowledge the many studies reporting that children who eat breakfast get more of the nutrients they need and pay more attention in class.

They do say, however, that the case for breakfast's benefits is far from airtight — especially for adults, many of whom, if anything, could stand skipping a meal.

"For adults, I think the evidence is mixed," says Marion Nestle, professor of nutrition, food studies and public health at New York University who hasn't eaten breakfast in years because she is just not hungry in the morning.

"I am well aware that everyone says breakfast is the most important meal of the day, but I am not convinced," Nestle wrote in her book, "What to Eat." (She later received many e-mails from readers telling her that they were relieved to hear it.) "What you eat — and how much — matters more to your health than when you eat."

A few scientists go further than this. They say it may be more healthful for adults to skip breakfast, as long as they eat carefully the rest of the day.

"No clear evidence shows that the skipping of breakfast or lunch (or both) is unhealthy, and animal data suggest quite the opposite," wrote Mattson, possibly the ultimate anti-breakfast iconoclast, last year in the medical journal the Lancet. Advice to eat smaller and more frequent meals, he wrote, "is given despite the lack of clear scientific evidence to justify it."

Mattson admits that he hasn't proven his case yet. His studies are still preliminary.

But already, his findings have attracted a cadre of followers who started to skip breakfast once they heard of his results. Meanwhile, a diet plan that involves breakfast skipping — the Warrior Diet — is attracting followers in the U.S. and worldwide.

These aren't the only ones forgoing the morning repast, of course. Surveys show that about one-third of all people in the U.S. and Europe skip breakfast, primarily because they say they don't have enough time in the morning or because they want to lose weight — and what better way to do so than miss a meal?

Most nutritionists and health experts maintain that this is unwise. Breakfast skippers, they say, risk skimping on important nutrients. They also tend to binge later on, actually increasing their risk of gaining weight.

"There isn't any downside to eating a healthy breakfast," says registered dietitian Joan Salge Blake, a clinical assistant professor at Boston University who specializes in weight management. "Currently, Americans, on average, fall short on their daily servings of whole grains, fruits and dairy foods. Eating breakfast is an excellent way to add these foods to the diet."

Breaking the 'fast'

Wherever and whenever the concept was first invented, breakfast today is enjoyed by cultures around the world: coffee with French bread and butter and jam in Algeria; soup and rice porridge in Thailand and Vietnam; stuffed steamed buns and soy milk in northern China; a heart-stopping plate of bacon, eggs, sausages and fried bread in the British Isles.

Breakfast cereals are relatively modern additions, debuting after the invention of "granula" by Dr. James Jackson in 1863, and cornflakes by Dr. John Harvey Kellogg in 1902.

It makes sense that the body would want to refuel after many hours of fasting, says Susan Bowerman, a registered dietitian and assistant director at the UCLA Center for Human Nutrition. In the morning, blood glucose level is generally low. "Since the brain's primary source of fuel is glucose," Bowerman says, "it seems logical that fueling up in the morning … would make sense."

Refueling is not the only benefit, however. "Many of the foods that people consume at breakfast are things they may not consume the rest of the day," such as dairy products, fruits and whole grains, Bowerman says.

Foods generally served at breakfast are good sources of calcium (from milk, yogurt and cheese), fiber (from whole fruits, whole-wheat bread and cereal), iron (from fortified breakfast cereals or whole-grain breads), and vitamins C and A (from orange juice and fortified milk, respectively).

"If you skip that meal, you will make up for those calories later in the day," Salge Blake says. "But are you going to be reaching for high fiber cereal or nonfat milk that's rich in vitamin D and calcium? Probably not."

Science appears to support this concern. A number of studies find nutrient shortfalls in adult breakfast skippers, says Gail Rampersaud, a registered dietitian at the University of Florida in Gainesville. One study of almost 16,000 adults age 20 or older, reported this year, found (based on the subjects' own reports of what they ate) that those who don't eat breakfast get fewer micronutrients, including folic acid, vitamin C, calcium, magnesium, iron, potassium and fiber.

Another 1998 study of 504 young adults in Bogalusa, La., reported that breakfast skippers were less likely to meet two-thirds of the recommended dietary intake for many vitamins and minerals, including vitamins D and C, and calcium.

Research also suggests that skipping breakfast could backfire on anyone who's doing it to stay, or become, slim. "The preponderance of studies suggest that breakfast skipping is associated with greater risk of being overweight," says Michael Murphy, a child psychologist at Massachusetts General Hospital and an associate professor at Harvard Medical School.

For example, a 2003 study of more than 10,000 Finnish adolescents and their parents showed that both adult and adolescent breakfast skippers are significantly more likely to become overweight or obese. Another study, from 2002, of 499 adults found a four-fold increased risk of obesity for those who reported skipping breakfast 25% of the time.

And a 2003 study of more than 16,000 U.S. adults reported that, on average, breakfast skippers had higher body mass indexes than people eating cereal or bread for breakfast.

The reason could be that people who skip breakfast make up for that calorie shortfall later — with a vengeance. John de Castro, a psychologist formerly at the University of Texas at El Paso, analyzed seven-day food diaries from about 900 adults and found that people who consume most of their calories later in the day tend to eat more on that day. And a 2003 study of more than 1,200 Swedish adolescents found that breakfast skippers were more likely to get their energy from snack food.

"If people skip breakfast, they will hunt around in the office, and the food they sometimes choose will be more energy dense and not nutrient dense," says Salge Blake, who advises obese clients to introduce breakfast into their diets.

This tip resonates with Shelley Rattet, who is one of Salge Blake's clients.

"I didn't eat breakfast because I was trying to lose weight," Rattet recalls. "But at night I was starving, so I ate whatever tasted good, for example, potato chips, a piece of cake or popcorn."

Breakfast may help prevent chronic disease, says Dr. Walter Willett, chair of the department of nutrition at the Harvard School of Public Health. That's because more frequent smaller meals (including breakfast) are less likely to produce high peaks of glucose and insulin in the blood, which in the long-run can damage the pancreas and increase diabetes risk.

"Spreading out caloric intake, rather than having a few large meals, leads to a better metabolic profile," Willett says.

And breakfast fuels the brain, helping it perform better, says David Benton, a professor in the department of psychology at Swansea University in Wales. In a 1998 study of 137 women and 47 men, Benton found that students who routinely skipped breakfast (including on the morning of a test) recalled fewer words than people who had had breakfast. Their performance improved when they were given a glucose drink.

Shaky science

Given this mound of pro-breakfast data, what could there be to challenge?

Breakfast skeptics point out that the results of studies that support eating breakfast are mixed, and often not solid enough to draw definitive conclusions.

Many who think breakfast is healthful are quick to acknowledge the shortfalls in the science as well.

To start with, some studies don't find a clear relationship between skipping breakfast and obesity. For example, a 12-week clinical trial published in 1992, in which 52 obese women received a reduced-calorie diet, did not find a significant difference in weight loss between a group who skipped breakfast and a group who ate three meals a day.

And even in cases in which effects are observed, studies often depend on data that may be unreliable, such as self-reported diets. "I am not always sure that what people report is what they actually do," says David Levitsky, professor of nutrition and psychology at Cornell University.

Cause and effect is also hard to prove, making it possible that the relationship between body weight and breakfast is spurious.

For example, a 2005 study by Ruth Striegel-Moore, a professor of psychology at Wesleyan University, followed about 2,400 adolescent girls for nine years. She found that girls who ate breakfast more consistently had a lower body mass index.

But the association between skipping breakfast and being overweight went away when the researchers accounted for other factors that differed among the girls, such as overall energy intake, physical activity levels and parental education.

"My personal view is that breakfast skipping probably doesn't cause health-compromising behavior," says Dr. Anna Keski-Rahkonen, an epidemiologist at the University of Helsinki, Finland, author of the study of Finnish adolescents and their parents. "It's probably really a good indicator of a more unhealthy lifestyle."

Indeed, the committee of scientists who advised the government in crafting its 2005 dietary guidelines concluded that there's insufficient evidence to say breakfast helps people manage their body weight, says Dr. Carlos Camargo, an associate professor of epidemiology at the Harvard School of Public Health, who served on that committee. (The committee did conclude, however, that there was nothing wrong with eating breakfast — it wouldn't make you fatter — and that skipping it could lower the nutritional quality of the diet.)

Case for skipping

A few researchers would go further than saying breakfast is no great shakes. They'd say avoiding it may even be healthy.

Take dieting.

"If you look at the first change that dieters make in their habits, it's [dropping] breakfast," Levitsky says. He thinks they are on the right track. "They know more than the scientists," he says.

Unconvinced by the skip-breakfast-get-fat connection, Levitsky set out to test it in his lab. In a still unpublished study, he had undergraduate students eat well-defined meals under controlled conditions — including an all-you-can-eat breakfast on some days and no breakfast on others. In either case, the students could eat as much as they wanted for the rest of the day.

The skippers, Levitsky found, ate about 150 more calories at lunch — but no extra calories for the rest of the day. As a result, they ate about 450 fewer calories over the course of the day.

"If you skip breakfast twice a week, that's about 1000 calories less," Levitsky says — enough, over time, to make a significant difference in one's weight.

Mattson, of the National Institute on Aging, has done similar research, except he asked people to skip not only breakfast, but lunch as well. In a still unpublished study, he enrolled 20 normal-weight adult men and women, then instructed half of them to skip all meals except dinner. They were told to try to eat the same number of calories.

None of the people on one meal a day ate more than those on three meals, he says. At the end of two months, those who were on one meal a day hadn't gained, or lost, any weight — although he suspects that they would have lost weight, if left to their own devices, because they found it difficult to eat all their allotted calories.

They also had a higher ratio of muscle to fat, showed signs of boosted immune responses, and didn't have higher blood insulin levels, as some scientists fear could result. But they did have higher cholesterol levels.

Mattson has also conducted "intermittent fasting" studies, as he terms them, on rodents. He's reported that animals deprived of food every other day have lower blood pressure and heart rate, lower insulin levels and an improved removal of glucose from the blood — all good things.

He would be the first to admit that neither his human nor his animal studies are quite analogous to just skipping one's morning meal. But, he adds, "My own gut feeling is that when the inter-meal interval is increased — whether through intermittent fasting or skipping breakfast — that will result in qualitatively similar beneficial effects."

Rodent studies are also invoked by another scientist in support of skipping breakfast. Tamas Horvath, a veterinarian and neuroscientist at Yale University, says he has evidence from mice that hunger makes them smarter. He notes that both hungry mice and hungry people have higher blood levels of a hormone called ghrelin, which is released from an empty stomach, signaling hunger to the body. In a study published this year, he found that mice engineered to lack the ghrelin gene took longer to learn how to avoid electric shocks in a maze-running task.

"It has been known for hundreds of years that for an animal to perform, you need to food deprive them," Horvath says. "Who invented breakfast? It was a social thing. Most animals don't have breakfast, lunch and dinner."

In exploring the breakfast issue, some scientists have even experimented on themselves. Seth Roberts, a professor of psychology at UC Berkeley, spent years on self-experimentation, changing one thing at a time and meticulously recording the effects. What he learned was that he tended to wake up several hours before breakfast. The effect, called "anticipatory activity," has been known in animals for decades, he says.

So he cut out breakfast. And now he sleeps much better.

"People get it exactly wrong," he says. "Breakfast is the most important meal to avoid."


Pro-breakfast researchers and dietitians are not too impressed by such findings. They note that animal studies may not apply to human beings, and that the as-yet-unpublished trials on people have not passed the test of critical peer review.

The case against breakfast is "based on bad science and spurious assumptions," says Murphy of Harvard.

"Don't throw out breakfast because of a few animal studies," he says. "Even for adults, the evidence is strong."

Many breakfast advocates also say there's a need for better studies — such as formal clinical trials — to examine the role of breakfast in promoting good health. But this doesn't mean, they add, that the data for the traditional morning meal aren't pretty persuasive already.

"I totally agree that we need more research," say Striegel-Moore of Wesleyan. "But if pinned to the wall, I would say that breakfast skipping is bad. Is the evidence bulletproof? No. It's like climate change. We haven't experimentally manipulated the Earth, but we have got a lot of evidence."

It is not clear that major federal money will ever be thrown at settling the breakfast dilemma. In the meantime, anyone who wants to skip it but is worried about those shortfalls in vitamins and minerals can take a handy tip from Mattson.

"Eat breakfast at lunch," he says.

sports nut
The Secret Lives of Baseball Card Writers
I worked for Topps and lived to tell about it.
By David Roth
Posted Wednesday, Sept. 27, 2006, at 5:51 PM ET

As a child, when I had what might be called a serious baseball card habit, I looked forward to a new year of Topps baseball cards in a way I looked forward to nothing else. In the way things happen when you're a kid, baseball, basketball, and football cards took on an outsized importance in my life. And then, in the way things happen when you're a slightly older kid, cards just stopped mattering to me. I forgot about them for 15 years.

Topps became real to me again thanks to some basketball cards my roommate left around the apartment. Deep in the doldrums of underemployment, I started flipping through them while enjoying an afternoon beer. Inspired by the text on Vitaly Potapenko's 2001 Topps card (his teammates had nicknamed him "Eddie Munster") and with a courage assist from Miller High Life, I sent Topps my résumé. I figured that would be the end of it, but I got an e-mail in response. They asked how I would describe my interest in and knowledge of sports; I answered "freakish/obsessive." I got an interview, and then I got the job.

Starting a job at Topps was stressful. I was about to enter, as an adult, a place I'd always imagined as a gum-scented, Willy Wonkafied dream palace. Before my first day of work, I pictured packs piled in leaning towers, slides from long-ago Darryl Strawberry photo shoots, game-worn Mickey Tettleton jerseys. When I showed up, I found a standard corporate office: cubicles, recycled air, bad carpeting, worse lighting. There was plenty of candy—Topps makes Ring Pops, Push Pops, and Bazooka bubble gum—but few cards in sight. There was little indication that this place churned out baseball cards and not, say, bath mats.

My job was to edit the text and statistics for the card backs. These came from a Virginia-based head writer named Bruce Herman (author of the Potapenko card that led me to Topps) and a Quebecois statistician named Nicolas Chabot, respectively. I did ordinary editor things—assigned text, edited it for accuracy and aesthetics, drew elaborate geometric doodles at meetings—but was buoyed by the fact that I was doing these in a not-so-ordinary environment.

While the text was inescapably repetitive, the stuff I edited was certainly better than the "Hector's hobbies are eating and sleeping" non sequiturs that made up the Topps backs of my youth. Today's cards top out at 400 characters (including spaces), or about 70 words, and usually take the shape of punchy feature articles. My favorite was a card for the St. Louis Rams' Harvard-educated backup quarterback, Ryan Fitzpatrick. The back text dealt with a question posed to him by his offensive line. Figuring that perhaps he'd covered this in Cambridge, they asked Fitzpatrick what would hurt more: getting kicked by a donkey or whipped in the face by an elephant's trunk. Fitzpatrick went with the elephant slap. Bruce provided a source, and I checked it. All true. At times like that, the job was something very close to fun.

Tight deadlines created tension, but it's hard to stay stressed when your bosses are pestering you for 50 words about some punt returner's hobbies. Sadly, though, the same things that bothered me about previous corporate gigs were easy to find at Topps. Upper management was a distant, nepotistic network descending from a mysterious, largely invisible septuagenarian CEO. Below that, departments feuded with other departments. Middle managers skirmished in snarky, caps-locked e-mails CC'd to higher-ups. "Good mornings" seethed with passive aggression.

My co-workers and I shared a sense that our contributions were undervalued. My job's irrelevance—I worked on the less glamorous back half of the card, you see—was confirmed through my absence from the card-distribution rolls. At Topps, the haves receive free boxes of each new product. The have-nots, like me, do not. When I asked for boxes of the products I'd worked on, I got brushed off. Eventually, I gave in and queued up at the company store along with copy editors from the quality-assurance department.

I was frustrated not only because this wasn't what I'd expected—who even has company stores anymore?—but because a myth from my childhood got sullied. Baseball cards, it turned out, are not made in a card-cluttered candy land. Rather, they are created by ordinary men and women who are generally unawed by their proximity to a central part of American boyhood.

Neither trading cards nor "novelty candies" have been breaking any sales records recently. Consequently, Topps has banked increasingly on ultra-high-end trading cards. The company's most expensive "pack," the beautiful, autograph-laden Topps Sterling, comes in a cherry-wood box and costs $250 for five cards. While those cards make money—as, it should be said, do the basic $1.50 packs—the trading-card business has been more or less moribund for a decade. So, it wasn't a total surprise when I was laid off in July, effective mid-September.

I'm glad I got the chance to work at Topps, if only because it was fun to tell people at parties that "I'm in the baseball card business." My Topps experience also helped me remember why collectors collect. It's the hunt for what the brand managers call "white whale" cards. I know it's awfully literal, but mine is the Herman Melville card I wrote for Topps' Allen and Ginter set. That's a new product—scarce around the office, not sold in the company store, $5 a pack in card shops—in which Gilded Age cultural figures mingle with the A-Rods and Nick Puntos. Odd, I know, but I love the set.

Before I left for good, I found what I'd been searching for. It was behind a locked door at the back of an ordinary-looking backroom. I flipped the switch, and lights flickered on overhead, revealing a back-backroom awash in cards. Binders lined the walls, filled with every card in every Topps baseball and football set from the 1950s through the 1990s, all pasted—why?—to white three-hole-punch paper. To get to those shelves, I had to step on and over boxes brimming with loose cards and cards in bricklike 500-count vending boxes. And that was just the cards. A box fell off a shelf and baseballs autographed by Frank Robinson rolled out. Jerseys that were to have been cut up and inserted into "relic" cards gave one dusty corner the look of a chaotic locker room. A box of bats inscribed with the names of journeymen such as Geronimo Berroa and Ron Coomer sat in another.

This back-backroom would not have looked like much to most people. I was relieved, though, to discover that the baseball card wonderland I'd dreamed of was somewhere in that office after all.

David Roth is a writer living in New York. He can be reached at Davidroth11@yahoo.com.

Article URL: http://www.slate.com/id/2150516/

 

moneybox
$1 Billion for Facebook? LOL!
Is the social-networking boom a replay of the '90s dotcom bubble?
By Daniel Gross
Posted Thursday, Sept. 28, 2006, at 11:24 AM ET

The "social-networking" gold rush continues. Last year, MySpace was acquired by News Corp. for $580 million in cash. Now the other big social-networking sites are the subject of rumors, deals, and transactions. Yahoo! was interested in acquiring Facebook for $1 billion, but the company's youthful founders are holding out for more. Warner Music earlier this month cut a revenue-sharing deal with YouTube. In August, Google and MySpace struck a $900-million agreement for Google to sell ads on MySpace.

The Dow's at a record high, youthful entrepreneurs are minting dotcom fortunes, and big media types are talking about "monetizing eyeballs"—close those eyeballs for a moment, and it almost seems like 1999. Back then, big media companies threw huge amounts of cash at the hot new things on the Internet: portals and online news sites that had impressive traffic figures but not-so-impressive profit-and-loss statements. Disney spun off Go.com; NBC created NBC Interactive; CBS helped form CBS Marketwatch.com. The trend peaked when Time Warner accepted the inflated currency of the ultimate eyeball business—AOL—and entered into its disastrous 2000 merger.

So, is the mania for Facebook and MySpace different than the lust for portals and online news sites seven years ago? No, and yes.

What's the same?

1. Lots of traffic, little profit. These of-the-moment social-networking businesses are hot because of their impressive traffic figures. As Bambi Francisco of Marketwatch notes, in July, 37.4 million people downloaded 1.46 billion videos on MySpace, and 30.5 million people downloaded 649 million videos on YouTube. But the profit picture is substantially less clear. In its most recent earnings announcement, News Corp. doesn't mention anything about MySpace's financial performance. Writing in Fortune in August, Patricia Sellers said that MySpace lost money this year on revenues of $200 million. Last month, Fortune reported that YouTube's founders were coy about their profits. And in August, Business Week noted that Digg.com (potential value: $200 million) "is breaking even on an estimated $3 million in revenues."

2. Eyeballs, baby. As a result, just as in the 1990s, analysts have to devise new metrics to justify values placed on the companies. Forget about cash flow or operating income. Companies are being valued based on how many registered users they have, just as they were during the bubble. Forbes recently cited Tim Boyd, an analyst at Caris & Company, whose "back of the envelope math estimates that Facebook's 9.5 million users may be worth six to eight times what News Corp. paid for MySpace's 30 million users last summer." (News Corp. paid about $19.33 per user.) CNN blogger Dierdre Terry calculated that Sony had purchased video-sharing site Grouper for $65 million, or $70 per user.

3. Me too, Rupert. In the 1990s, big media companies rushed into the net en masse, like a herd of wildebeest. They looked with envy at media sectors where audiences were growing exponentially while their core audiences were stagnant or shrinking. News Corp.'s acquisition of MySpace, which wasn't particularly hailed as a stroke of genius at the time, is now seen in megamedia circles as a masterstroke. One of the reasons Tom Freston got fired from Viacom was that he had failed to bid aggressively on MySpace.

What's different?

1. Deeper streams. Many of the 1990s-vintage portal and content efforts failed because there simply wasn't enough online advertising to go around. But the online advertising market has matured. This year, marketers are expected to spend $16 billion on Internet advertising in the United States, up 28 percent from 2005, according to eMarketer. And real advertisers are paying real money to advertise on social networking sites. An analyst cited in this Forbes article said that online video ads command per-viewer charges that are comparable to those paid for prime-time broadcast television.

2. Better business models. The social-networking companies may lose money, but they don't lose it on the scale that portals and start-up content firms did in the 1990s. These Web 2.0 companies are built on the wreckage of the dotcom/fiber-optic boom and bust of the 1990s. Last decade's over-investment in wires, Web sites, and hype helped spread broadband to homes, created a large community of users, and left behind cheap infrastructure. Web 2.0 companies like YouTube and Facebook have thus been able to gain scale and operate relatively cheaply. Their main costs are hosting services and overhead. Another crucial difference: The 1990s portal businesses devoured cash because they had huge budgets for marketing and advertising and for creating proprietary content. The social networking sites, by contrast, spend virtually nothing on advertising and get virtually all of their content for free courtesy of the users.

3. Well-placed skeptics. Not all the players are jumping into the social-networking pool with both feet. Viacom's name hasn't been mentioned in connection with any of the large sites. And last week, once-burned-twice-shy Richard Parsons, the CEO of Time Warner, told ($ required) the Financial Times that his company wasn't particularly interested in Facebook and YouTube: "Valuations that are put on those businesses that currently make no money are astronomical and you have to have a big leap of faith."

Daniel Gross (www.danielgross.net) writes Slate's "Moneybox" column. You can e-mail him at moneybox@slate.com.

Article URL: http://www.slate.com/id/2150498/

 

food
A Dumpling Manifesto
Why Americans must demand better.
By Tim Wu
Posted Wednesday, Sept. 27, 2006, at 4:37 PM ET

Dumpling rage, like road rage, strikes without warning. My first attack came in my mid-20s, while dining at Raku, a Washington, D.C., "pan-Asian" restaurant. I made the mistake of ordering something called Chinese dumplings. Out came a bamboo steamer containing what resembled aged marshmallows—dumplings cooked so long they were practically glued to the bottom of the container. Try as I might, I could not pry them loose, until one ripped in half, yielding a small meatball of dubious composition.

It was an outrage. To my friends' embarrassment, I stood up and shouted at our waiter:

"What are these?"

"Dumplings," he said.

"These," I said, "are not dumplings. The skin is too thick. The meat is too small. It's been cooked too long. The folding is done all wrong." My friends begged me to stop, and the manager threatened to call the police.

But my anger, if ill-directed, was justified. The Chinese dumpling is a magnificent product of the human imagination: At its best, it is charming in appearance, chewy and savory, and can trigger a head rush like sashimi or blue cheese. Such dumplings are not impossible to find in the United States. In fact, I once worked at a shop that produced such delicacies, called Hoo's Dumplings, in Charlottesville, Va. For the most part, however, the dumpling has arrived here in bastardized form, as similar to the real thing as Kraft Parmesan cheese is to its ancestors. That's why it's time for a dumpling revolution.

Nasty American versions of otherwise dignified foods are something of a national tradition. The Parmesan-in-a-can, mentioned above, is perhaps the best example—the greatest cheese in the world, reduced to sawdust. But I am an optimist. Look at American wine, coffee, and sushi, all of which have slowly climbed to palatability after decades of abuse. The American variations may never be exactly like their originals, but they have slowly become great in their own way.

If dumplings are to follow this path to made-in-America greatness, we must understand what plagues our dumplings. Let's start with the skin. As any serious aficionado will tell you, the skin makes or breaks a dumpling. It must be sticky, thin, and chewy at the same time—no easy feat. It's similar to the challenge of making perfect sushi rice or pasta.

Unfortunately, American Chinese and pan-Asian outlets are lazy and suffer badly from a "thick-skin" epidemic, resulting in dumplings that are tough and greasy. A thick skin can also lead to a soggy dumpling, which is the worst fate—imagine eating a sandwich that's been soaked in water.

The real problem with overthickness is that it destroys what I like to call the "magic ratio"—the science behind the art of dumplings. The magic ratio—a factor in foods from sushi to sandwiches—is the perfect ratio of protein to carbohydrate. The right ratio seems to activate some kind of pleasure center in the brain, bringing about calm and quiet elation. Some dumpling devotees describe dumplings, done right, as mildly orgasmic.

Thick or thin, there is no dumpling magic unless the skins are fresh. Most American restaurants don't bother with fresh skins because it requires specialized labor, akin to a sushi counter. But any dumpling joint worth its salt needs a chain gang of workers who roll the skins and fold the dumplings on-site, nonstop, since repeated kneading yields better skins. Some places boil the dough before folding the dumpling, and if you know anything about bagels, you'll know that's also the secret to the New York bagel.

Chinese people have been enjoying dumplings since at least the first century A.D. when, according to legend, Doctor Zhang Zhongjing invented them. Zhang, a Hippocrates-like figure in Chinese history, supposedly discovered dumplings while researching Chinese medicine. The dumplings, the story goes, were a cure for both typhoid and frostbitten ears, which is why dumplings resemble ears. Try not to think about that when you eat them.

Today, like American barbecue, nearly every region in China has its own dumpling, often reflecting regional character. (China has many dough-wrapped snacks that go by the English word "dumplings," including jiao-zi, wontons, and sometimes bao, but here I'll call them all dumplings.) The Cantonese, clever by nature, are great dumpling innovators. They understand the importance of sticky skin better than any other region, which is why their shrimp dumplings (har gau) are justifiably famous. They are also credited with creating a giant variety of unusual dumplings for dim sum, including what are arguably the best vegetarian dumplings.

Shanghai is the source of China's most seductive dumpling: the soup-filled xiaolongbao, a dish that can easily become a lifelong obsession. (Here is an excellent survey of the best xiaolongbao places in Shanghai.) Unlike its sister dumplings, a xiaolongbao contains hot soup as well as a pork or crab filling, and it explodes when bitten. Many restaurants advise slurping out the soup before biting (in Shanghai, some places provide a straw), but personally, I eat xiaolongbao whole, despite the danger of injury. Oddly, some of the best xiaolongbao aren't in Shanghai but in Taipei—most famously, at Taipei's Din Tai Fung. As in other areas of the economy, the Taiwanese are selling the dumpling back to mainland China: There are now fancy branches in Shanghai and Beijing. There, the dumplings are in such demand that some people (like my aunt) reserve dumplings days in advance.

Northern China (especially Dongbei and Shandong), bordering Korea, is a tough place where the people often resemble Koreans and share a similarly intransigent personality. Their dumplings are direct and simple but satisfying—comfort dumplings. The skins are extra chewy, and some of the most famous use lamb and pumpkin as stuffing. Xi'an, China's ancient capital, claims to be the birthplace of the northern dumpling and offers tremendous dumpling variety. It is not unusual to enjoy a meal consisting of 100 types of dumplings, many folded to resemble animals.

The most decadent dumplings come, unsurprisingly, from Hong Kong. Recently, I sampled the "yellow-river crab supreme dumpling," the equivalent of Manhattan's $32 hamburger. Available only in May and June, the dumpling is made in front of you from female crabs whose eggs have been mixed with meat. When consumed, they create a flavor explosion comparable to good foie gras.

What hope is there for the American dumpling? The lesson of food battles previously fought is that great food comes only to a demanding audience—a public educated in the scams that sometimes pass for "ethnic food." For now, your best bet is to seek out tiny shops serving northern-style dumplings like the one I used to work in, boasting simple names like "Tasty Dumplings" or "Dumplings." Common in New York and slowly sprouting up across America, these shops often cater to Chinese migrant workers with five-dumplings-for-a-dollar deals.

In my days working at Hoo's, I used to march my co-workers to nearby Starbucks and Japanese restaurants, explaining that once the public gets the idea of quality, they pay more. I'm proud to say that I won a small prize for customer service, mainly on account of my English skills. But I honestly felt we were restoring the dumpling's tarnished reputation and changing the way Americans eat, one jiao-zi at a time.

Tim Wu is a professor at Columbia Law School and co-author of Who Controls the Internet?

Article URL: http://www.slate.com/id/2150499/

 

jurisprudence
The Blind Leading the Willing
A compromise between those who don't care and those who don't want to know.
By Dahlia Lithwick
Posted Wednesday, Sept. 27, 2006, at 6:11 PM ET

Is it still called a compromise when the president gets everything he wanted?

A major detainee bill hurtling down the HOV lane in Congress today would determine the extent to which the president can define and authorize torture. The urgency to pass this legislation has nothing to do with a new need to interrogate alleged enemy combatants. The urgency is about an election.

Last time Congress rubber-stamped a major terrorism-related law no one had bothered to read in the first place, we got the Patriot Act. That alone should lead us to wonder whether there shouldn't be a mandatory three-month cooling-off period whenever Congress enacts broad laws that rewrite the Constitution.

The White House version of the detainee bill met with some resistance among ranking GOP members of Congress last week, but not enough to matter. And now, with a "compromise" at hand, nobody seems to agree on the meaning of the bargain we've struck. Sen. John McCain still believes that he's won on the bedrock principle of U.S. adherence to the Geneva Conventions. The Bush administration sees it as granting the president the authority to decide what Geneva really means.

That led to all the confusion last Sunday, when, appearing on Face the Nation, McCain claimed that the current bill "could mean that … extreme measures such as extreme deprivation—sleep deprivation, hypothermia, and others would be not allowed." This, on the same weekend that the editors at the Wall Street Journal crowed: "It's a fair bet that waterboarding—or simulated drowning, the most controversial of the CIA's reported interrogation techniques—will not be allowed under the new White House rules. But sleep deprivation and temperature variations, to name two other methods, will likely pass muster." So, what did we agree to? Is hypothermia in or out? What about sexual degradation or forcing prisoners to bark like dogs? Stress positions?

I'd wager that any tie goes to the White House. One hardly needs a law degree to understand that in a controversy over detainee treatment between the executive and legislative branches, the trump will go to the guy who's holding the unnamed detainees in secret prisons.

That brings us to a second stunning aspect of the so-called compromise: Not only do our elected officials have no idea what deal they've just struck, but they also have no idea what they were even bargaining about. In his Face the Nation interview, McCain revealed that he was in fact quite clueless as to what these "alternative interrogation measures"—the ones the president insists the CIA must use—actually include. "It's hard for me to get into these techniques," McCain said. "First of all, I'm not privy to them, but I only know what I've seen in public reporting."

Asked whether he had "access to more information about this than any of us because you've been in the negotiations," the senator was not reassuring. He knows "only what the president talked about in his speech." To clarify: McCain, the Geneva Conventions' great defender, is signing off on interrogation limits he knows nothing about. And so, it appears, will the most of the rest of Congress.

But that's not all. Congress doesn't want to know what it's bargaining away this week. In the Boston Globe this weekend, Rick Klein revealed that only "10 percent of the members of Congress have been told which interrogation techniques have been used in the past, and none of them know which ones would be permissible under proposed changes to the War Crimes Act." More troubling still, this congressional ignorance seems to be by choice. Klein quotes Sen. Jeff Sessions, the Alabama Republican, as saying, "I don't know what the CIA has been doing, nor should I know." Evidently, "widely distributing such information could result in leaks."

We've reached a defining moment in our democracy when our elected officials are celebrating their own blind ignorance as a means of keeping the rest of us blindly ignorant as well.

Over at the National Review Online they exult that the CIA torture program isn't just the president's project anymore. "Now it is just as much the program of Congress and of John McCain." Not quite right. Now it's the president's program that John McCain chooses not to know about.

And just to be completely certain, Congress is taking the courts down with it. No serious reader of the detainee-compromise bill can dispute that the whole point here is to sideline the courts. This bill immunizes some forms of detainee abuse and ignores others. It strips courts of habeas-corpus jurisdiction and denies so-called unlawful enemy combatants (a term that sweeps in citizens and noncitizens, Swiss grandmothers and Don Rumsfeld's neighbor if-that-bastard-doesn't-trim-his-hedge) the right to assert Geneva Convention claims in courts. Many detainees may never stand trial on the most basic question of whether they have done anything wrong. And courts will apparently now be powerless to do anything about any of this.

For the five years since 9/11, we have been in the dark in this country. This president has held detainees in secret prisons and had them secretly tortured using secret legal justifications. Those held in secret at Guantanamo Bay include innocent men, as do those who have been secretly shipped off to foreign countries and brutally tortured there. That was a shame on this president.

But passage of the new detainee legislation will be a different sort of watershed. Now we are affirmatively asking to be left in the dark. Instead of torture we were unaware of, we are sanctioning torture we'll never hear about. Instead of detainees we didn't care about, we are authorizing detentions we'll never know about. Instead of being misled by the president, we will be blind and powerless by our own choice. And that is a shame on us all.

Dahlia Lithwick is a Slate senior editor.

Article URL: http://www.slate.com/id/2150495/

September 19, 2006

People Who Share a Bed, and the Things They Say About It

By KATE MURPHY

While researching rural life more than 20 years ago, Paul C. Rosenblatt took his 12-year-old son with him to interview farm families in the Midwest. Father and son stayed in a farmhouse and had to share a bed.

“It was terrible,” said Dr. Rosenblatt, a professor of sociology at the University of Minnesota, Twin Cities, because his son thrashed and turned so much that “his feet were in my face all night.”

Tired and bedraggled the next day, he recalled thinking about how challenging it can be to adapt to sleeping with another person.

In more recent research — on grief — Dr. Rosenblatt interviewed couples whose children had died.

“They quite often would tell me that they dealt with their grief by holding each other and talking together in bed at night,” he said. “It seemed that I kept being reminded of how sharing a bed impacts our lives and sense of well-being.”

And yet, no one had really studied it, perhaps because sharing a bed is so mundane, Dr. Rosenblatt said. So he wrote “Two in a Bed: The Social System of Couple Bed Sharing,” published this summer by State University of New York Press.

“It’s not a self-help book,” he said, but an examination of some of the common and often humorous issues couples face when sharing a bed, including spooning, sheet-stealing and snoring.

“My hope is that the book will influence the world of sleep research so sleep is no longer viewed as an individual phenomenon,” Dr. Rosenblatt said.

There are thousands of studies on sleep and even more on marriage and relationships, but only a handful on couples sleeping together.

The National Sleep Foundation, a nonprofit group in Washington that supports education and research on sleep and sleep disorders, estimates that 61 percent of Americans share their bed with a significant other. And while the very presence of another person in bed increases the chance of sleep disruption, 62 percent of those polled in the foundation’s annual sleep study said they preferred to bed down with their partner.

Dr. Rosenblatt said that in researching his book he found that even though many couples said they slept better alone, they still shared a bed. “When I asked why, they looked at me as if I’d asked them why they keep breathing,” he said.

For “Two in a Bed,” Dr. Rosenblatt interviewed 42 couples. Most of them were married heterosexual couples but some were unmarried hetero- or homosexual couples. Intimacy and comfort were the primary reasons couples gave for sleeping together.

“Some mentioned sex, but not a lot,” Dr. Rosenblatt said. Most reported that the bed is where they talked. “The bed is where they found privacy and were able to leave behind the distractions and separate interests that keep them apart during the day. There’s also something about late night that allowed them to open up and connect.”

Several interviewees reported that difficulty sleeping together or sleeping apart had led to the dissolution of previous marriages, and that sleeping together was essential to maintaining their relationships. Dr. Rosenblatt found that it might also save lives.

“It surprised me how many people thought they were alive today because they shared a bed,” Dr. Rosenblatt said.

For example, he said, a woman’s seizure was noticed immediately by her husband, with whom she spooned every night. Similar stories came from couples in which one partner had a heart attack or stroke or went into diabetic shock.

The couples Dr. Rosenblatt interviewed described how they had had to adjust to sleeping with their partner. Many reported conflicts over bedroom temperature, where to locate the bed and how to make the bed. Watching television, reading and eating in bed were other contentious issues, as was sleeping in the nude. There were quarrels over the alarm clock and whether to allow children or pets into the bed.

“Each couple had to do a lot of problem solving to work out their systems for sleeping together,” Dr. Rosenblatt said. These systems, he said, usually became comforting routines of how couples prepared for bed, got into bed, behaved once in the bed, fell asleep and woke up.

The subjects he interviewed invariably had their own side of the bed, and responsibilities like putting out the cat or opening the windows before turning in. They usually had rituals like watching the television news before lights out or snuggling before falling asleep. And they often had signals for when they wanted affection, wanted to talk or wanted to be left alone.

“How they arrived at these systems could be said to mirror their relationships,” said Dr. Rosenblatt. The most successful systems were those formed out of compromise and sensitivity to the other’s needs.

“The issues change over time,” Dr. Rosenblatt said.

Whereas a woman might have always been cold at night when she was younger, she might feel like a furnace from menopausal hot flashes as she grows older. Prostate problems might cause a man to get up more often in the night to use the bathroom. Illness and injury might prevent people from sleeping entwined with each other.

Not surprisingly, perhaps, those interviewed said dealing with a partner’s snoring and insomnia profoundly affected the couple’s sleep dynamic.

“These are all things that no one teaches you how to cope with,” said Neil B. Kavey, a psychiatrist and director of the Sleep Disorders Center at New York-Presbyterian/Columbia University Medical Center. “There’s no counseling in this regard, but there should be.”

Sleep centers are primarily concerned with treating disorders and don’t address the impact one partner has on the other. Whatever the cause of unrest, “sleep deprivation has consequences,” Dr. Kavey said. Those include impaired cognitive ability and irritability.

Though Dr. Rosenblatt has written five other books and scores of scholarly essays and papers, he said his book on couples’ sleep has gotten by far the most attention from the news media and fellow academics.

“I think it’s because it’s something most people have struggled with and can relate to,” Dr. Rosenblatt said. “And even though we may take sleeping with our partner for granted, it’s through these kinds of shared social systems that we build and nurture our relationships, and perhaps uncover the underlying meaning of our lives.”

September 27, 2006

Economix

The Choice: A Longer Life or More Stuff

By DAVID LEONHARDT

The most authoritative report on the cost of health insurance came out yesterday, and it’s sure to cause some new outrage.

The average cost of a family insurance plan that Americans get through their jobs has risen another 7.7 percent this year, to $11,500, according to the Kaiser Family Foundation. In only seven years, the cost has doubled, while incomes and company revenue, which pay for health insurance, haven’t risen nearly as much.

These spiraling costs — a phrase that has virtually become a prefix for the words “health care” — are slowly creating a crisis. Many executives have decided that they cannot afford to keep insuring their workers, and the portion of Americans without coverage has jumped 23 percent since 1987.

An industry that once defined the American economy, meanwhile, is sinking in large measure because of the cost of caring for its workers and retirees. For every vehicle that General Motors sells, fully $1,500 of the purchase price goes to pay for medical care. “We must all do more to cut costs,” G.M.’s chief executive, Rick Wagoner, said on Capitol Hill this summer while testifying about health care.

Mr. Wagoner’s argument has become the accepted wisdom about the crisis: the solution lies in restraining costs. Yet it’s wrong. Living in a society that spends a lot of money on medical care creates real problems, but it also has something in common with getting old. It’s better than the alternative.

To understand why, it helps to look back to a time when Americans didn’t worry much about health care costs. In 1950, the country spent less than $100 a year — or $500 in today’s dollars — on the average person’s medical care, compared with almost $6,000 now, notes David M. Cutler, an economist who wrote a wonderful little book in 2004 titled, “Your Money or Your Life.”

Most families in the 1950’s paid their medical bills with ease, but they also didn’t expect much in return. After a century of basic health improvements like indoor plumbing and penicillin, many experts thought that human beings were approaching the limits of longevity. “Modern medicine has little to offer for the prevention or treatment of chronic and degenerative diseases,” the biologist René Dubos wrote in the 1960’s.

But then doctors figured out that high blood pressure and high cholesterol caused heart attacks, and they developed new treatments. Oncologists learned how to attack leukemia, enabling most children who receive a diagnosis of it today to triumph over a disease that was almost inevitably fatal a half-century ago. In the last few years, orphan drugs that combat rare diseases and medical devices like the implantable defibrillator have extended lives. Human longevity still hasn’t hit the wall that was feared 50 years ago.

Instead, a baby born in the United States this year will live to age 78 on average, a decade longer than the average baby born in 1950. People who have already made it to their 40’s can now expect to reach age 80. These gains are probably bigger than the ones the British experienced in the entire millennium leading up to 1800. If you think about this as the return on the investments in medicine, the payoff has been fabulous: Would you prefer spending an extra $5,500 on health care every year — or losing 10 years off your lifespan?

Yet we often imagine that the costs and benefits are unrelated, that we can somehow have 2006 health care at 1950 (or even 1999) prices. We think of health care as if it were gasoline, a product whose price and quality have nothing to do with each other.

There is no question that the American medical system does suffer from a lot of waste, be it insurance industry bureaucracy or expensive procedures that haven’t been proven effective. But the No. 1 cause of the cost increases is still the one you can see at the hospital and in your medicine cabinet — defibrillators, chemotherapy, cholesterol drugs, neonatal care and other treatments that are both expensive and effective.

Even most forms of preventive care, like keeping diabetes under control, don’t usually save money, despite what many people think. The care itself has some costs, and, more important, patients then live longer than they otherwise would have and rack up medical bills. “When I make this point, people accuse me of wanting people to die earlier. But it’s exactly the opposite,” Dr. Jay Bhattacharya, a researcher at Stanford Medical School, told me. “If these expenditures are keeping people alive, it’s money well spent.”

As Dr. Mark R. Chassin of the Mount Sinai School of Medicine in New York says, “You almost always spend money to gain health.” Of course, the opposite is also true: the best way to reduce health care spending is to reduce health care itself.

Which is exactly what we’re starting to do. The growing number of families without health insurance are, in effect, families who have been kicked off the country’s health care rolls. Many will go without available treatment, will get sicker than they need to get — and will thereby save the rest of us money. They are what now passes for a solution to the health care mess.

The current situation is indeed unsustainable, a point that the conventional wisdom has right. The cost of health insurance can’t keep doubling every seven years, and wasteful spending — the brand-name drugs that are no better than generics, the treatments that haven’t been proved to extend lives or improve health — does need to be reined in.

But far too much of the discussion has been centered on this narrow idea. Somehow, going to the mall to buy clothes has come to be seen as a vaguely patriotic way to keep the economy humming, and taking out a risky mortgage is considered to be an investment in one’s future. But medical care? That’s just a cost.

It’s easy to be against high costs, and it will no doubt be hard to come up with a broad health care solution. But the way to start is by acknowledging that an affluent society should devote an ever-growing share of its resources to the health of its citizens. “We have enough of the basics in life,” Mr. Cutler, the economist and author, points out. “What we really want are the time and the quality of life to enjoy them.”

The Big Question Democrats Are Ducking

By David Ignatius
Wednesday, September 27, 2006; A27

No matter how you slice it, the National Intelligence Estimate warning that the Iraq war has spawned more terrorism is big trouble for President Bush and his party in this election year. It goes to the heart of Bush's argument for invading Iraq, which was that it would make America safer.

Many Democrats act as if that's the end of the discussion: A mismanaged occupation has created a breeding ground for terrorists, so we should withdraw and let the Iraqis sort out the mess. Some extreme war critics are so angry at Bush they seem almost eager for America to lose, to prove a political point. Even among mainstream Democrats, the focus is "gotcha!" rather than "what next?" That is understandable, given the partisanship of Republican attacks, but it isn't right.

The issue raised by the National Intelligence Estimate is much grimmer than the domestic political game. Iraq has fostered a new generation of terrorists. The question is what to do about that threat. How can America prevent Iraq from becoming a safe haven where the newly hatched terrorists will plan Sept. 11-scale attacks that could kill thousands of Americans? How do we restabilize a Middle East that today is dangerously unbalanced because of America's blunders in Iraq?

This should be the Democrats' moment, if they can translate the national anger over Iraq into a coherent strategy for that country. But with a few notable exceptions, the Democrats are mostly ducking the hard question of what to do next. They act as if all those America-hating terrorists will evaporate back into the sands of Anbar province if the United States pulls out its troops. Alas, that is not the case. That is the problem with Iraq -- it is not an easy mistake to fix.

An example of the Democrats' fudge on Iraq was highlighted yesterday by Post columnist Dana Milbank in his description of retired Maj. Gen. John Batiste's appearance before the Senate Democratic Policy Committee. Senators cheered Batiste's evisceration of Defense Secretary Donald Rumsfeld but tuned out Batiste's call for more troops and more patience in Iraq, and his admonition: "We must mobilize our country for a protracted challenge."

Here's a reality check for the Democrats: There is not a single government in the Middle East, with the possible exceptions of Iran and Syria, that favors a rapid U.S. pullout from Iraq. Why? The consensus in the region is that a retreat now would have disastrous consequences for America and its allies. Yet withdrawal is the Iraq strategy you hear from most congressional Democrats, whether they call it "strategic redeployment" or something else.

I wish Democrats (and Republicans, for that matter) were asking this question: How do we prevent Iraq from becoming a failed state? Many critics of the war would argue that the worst has already happened -- Iraq has unraveled. Unfortunately, as bad as things are, they could get considerably worse. Following a rapid American pullout, Iraq could descend into a full-blown civil war, with Sunni-Shiite violence spreading throughout the region. In this chaos, oil supplies could be threatened, sending prices well above $100 a barrel. Turkey, Iran and Jordan would intervene to protect their interests. James Fallows titled his collection of prescient essays warning about the Iraq war "Blind Into Baghdad." We shouldn't compound the error by being "blind out of Baghdad," too.

The Democrat who has tried hardest to think through these problems is Sen. Joseph Biden. He argues that the current government of national unity isn't succeeding in holding Iraq together and that America should instead embrace a policy of "federalism plus" that will devolve power to the Shiite, Sunni and Kurdish regions. Iraqis are already voting for sectarian solutions, Biden argues, and America won't stabilize Iraq unless it aligns its policy with this reality. I disagree with some of the senator's conclusions, but he's asking the right question: How do we fix Iraq?

America needs to reckon with the message of the National Intelligence Estimate. Iraq has compounded Muslim rage and created a dangerous crisis for the United States. The Democrats understandably want to treat Iraq as George Bush's war and wash their hands of it. But the damage of Iraq can be mitigated only if it again becomes the nation's war -- with the whole country invested in finding a way out of the morass that doesn't leave us permanently in greater peril. If the Democrats could lead that kind of debate about security, they would become the nation's governing party. But what you hear from most Democrats these days is: Gotcha.

The writer co-hosts, with Newsweek's Fareed Zakaria, PostGlobal, an online discussion of international issues at http://www.washingtonpost.com. His e-mail address is davidignatius@washpost.com.

supreme court dispatches
Tequila Mockingbird
Justice Scalia opens the 2006 term with a bang.
By Dahlia Lithwick
Posted Tuesday, Oct. 3, 2006, at 6:28 PM ET

My cabdriver seems to be taking pains to stick to the back streets on his way to the Supreme Court this morning; he's steering clear of the streets that bound the Capitol Building, as if all the sex/influence peddling/racism scandals sicking up Capitol Hill might somehow cease miraculously at Maryland Avenue. And as the court resumes hearing cases on this first Tuesday of October (Monday having been called on account of Yom Kippur), I find myself relieved, again, to be covering the one branch of government in which the ick factor consistently remains exceedingly low.

Imagining scandalous IMs from the justices to their clerks takes you no further than:

NINO86 (11:55 pm): Hey, did you remember to cite check the latest draft opinion in Gonzales?
ClerkX (11:57 pm): lol. Sure did. Just got that 565 F. Supp. 110, 118 (ND Ga. 1982) in too!!!!
NINO86 (11:59 pm): Kennedy's clerks still trying to work in quotes from Sartre and Baron de Montesquieu every other line? ;)
ClerkX (11:59 pm): lol. Yup.
NINO86 (12:00 am): rotfl. Now get back to work.

Laugh all you want at the pompousness and self-importance of the high court, but the justices are—with few exceptions—pathologically cautious about being decorous. And yet one of those exceptions, Justice Scalia, creeps right up to the line again this morning. And, as is always the case, one has to wonder why.

The first case of the term is a pair of consolidated immigration cases—Lopez v. Gonzales (out of the 8th Circuit Court of Appeals) and Toledo-Flores v. United States (out of the 5th). Both cases turn on a question of statutory interpretation: Lopez and Toledo-Flores were noncitizens convicted of drug crimes that were felonies under their respective state laws, but misdemeanors under federal law. The Immigration and Nationality Act provides that noncitizens convicted of "aggravated felonies" can be deported. The question for the courts is whether "aggravated felonies" should include convictions that are felonies under state law, but only misdemeanors under federal law.

Lopez was arrested in South Dakota for cocaine possession. The INS, appeals court, and the 8th Circuit all agreed that his state drug felony supports deportation under the immigration laws. Toledo-Flores was convicted of the Texas state felony of possessing 0.16 grams of cocaine. The 5th Circuit affirmed his deportation.

Most of this morning's argument is a deathly parsing of the language in the Immigration and Nationality Act's definition of "aggravated felony," 8 U.S.C. § 1101(a)(43)(B), which sends us back to the definition of a "drug trafficking crime" under 18 U.S.C. § 924(c). But in order to parse that, you need to close-read the Controlled Substances Act (that's 21 U.S.C. § 802, for those of you who didn't glaze over at the first sight of a §).

This is only really important insofar as several other circuits have adopted the rule that deportation requires that you commit a felony under federal, and not just state, law. The court needs to resolve the split between the circuits. And the problem, as is often the case in disputes over statutory construction, is that the statutory language is ambiguous.

I pause to add that, for those of you who have relied on my crap handwriting and iffy short-term memory as I have written up these dispatches for the past few years, those days are over: The high court has, as of 2:24 p.m. Eastern, already posted today's transcript right here. A constitutional moment. Thus, I state with confidence that the word "ambiguous" is uttered five times today and "ambiguity" twice, including multiple utterances by Justice Stephen Breyer, who finds the statute both "perfectly ambiguous" and rife with "perfect ambiguity." So, what do the courts do with an ambiguous statute? Pretty much what anyone else would do. They fuss themselves into a lather.

Timothy Crooks is the assistant federal public defender representing Reymundo Toledo-Flores and—as his client has already been deported to Mexico—he's in the unenviable position of having to persuade the justices that his case isn't moot. Crooks states that even though his client is no longer in the United States, "he is still subject to the supervised release portion of his sentence." An incredulous Chief Justice John Roberts wonders how a deportee can possibly be subject to his probation conditions if there is no one to supervise him. Crooks replies that his client is still not allowed to "use alcohol, or associate with persons … " (He is interrupted here.)

Crooks adds that there are cases in which deportees have been extradited back to the United States based on violations of their supervised release, and that he may in the future want a visa to visit the United States, since his children live here. Justice Scalia says that "the doctrine of standing is more than an exercise in the conceivable. … Nobody thinks your client is really, you know, abstaining from tequila down in Mexico because he is on supervised release in the United States."

Nobody laughs. But then, nobody winces or flinches, either. Somehow, a remark that would have flattened us had a Souter spoken it is just a solid day at the office for Scalia. I have no idea where the tequila comment should register on the nation's macaca-meter. The more interesting question is about Scalia's deliberate carelessness with language, his sense that he is somehow above the sorts of linguistic delicacy the rest of us expect in our dealings with others. Indeed, he seems to think it's his obligation to be ever more reckless with his words, perhaps because he's about the only guy left who faces no consequences for his rhetorical body-slams.

Deputy Solicitor General Edwin Kneedler defends the government position, and the liberal justices pound at him awhile over the basic unfairness of a system that would allow for deportation, based on the random accident of which state you were in when you broke the law. Justice David Souter suggests: "It seems very odd given the tension between the state and federal classifications to say that for federal purposes the state classification is going to trump the federal classification." Even Scalia balks at Kneedler's "double inconsistency" that could preclude a deportation if a state treated a crime more leniently than federal law, concluding "you've thoroughly confused me."

If George Allen had uttered Scalia's "nobody thinks your client is abstaining from tequila" crack today, it would have been front-page news. The rest of us would have been forced to form some opinion as to whether it was an "aspersion," a stereotype, a gaffe, or just a celebration of worm-laden beverages. But the court exists on a different plane, and for good reason. We don't want every branch of government to be beholden to the electorate, but that doesn't mean that the justices shouldn't be beholden to themselves. Scalia wants to be a part of the national conversation, but not on the terms the nation has agreed to. And each time he unleashes one of these remarks, I find myself wondering whether he's protecting his right to express himself, or just relishing his free pass.

Dahlia Lithwick is a Slate senior editor.

Article URL: http://www.slate.com/id/2150905/

 

books
Should We Shut Up About Diversity?
A good question, with a cynical answer from Walter Benn Michaels.
By Alan Wolfe
Posted Tuesday, Oct. 3, 2006, at 12:45 PM ET

Let's stop talking so much about race, argues University of Illinois at Chicago English professor Walter Benn Michaels in The Trouble With Diversity; let's talk about class instead. Rarely have I found myself more in agreement with a book's conclusion. Over the past six years, Americans have barely paid attention as every mechanism of government has been mobilized to benefit those who need help the least, punishing, even if they fail to recognize the fact, those who need assistance the most. To focus so obsessively instead on questions of diversity, as if the ideal society were one in which both rich black kids and rich white kids could attend the same elite college, is, as Michaels rightly asserts, to opt for a politics of symbolism over a politics of results.

The interesting question is not whether we should talk more about class but how we should do so. And here, it has to be said, Michaels' book is a failure. Rarely have I found myself more in disagreement about how to reach a conclusion than I did while reading The Trouble With Diversity. Walter Benn Michaels is a master of rhetoric, a dazzling wordsmith who loves to poke holes in what he takes to be conventional thinking. Yet to make his points, he makes a series of assertions that, when examined with care, simply crumble. There is nothing in this book that would help promote informed discussions of economic equality in this country. There is instead a profusion of cynicism incompatible with any serious political agenda, including the one in which Michaels professes to believe.

Here are some examples of Michaels' rhetorical excess. Cultural differences, including those involving race, are "lovable," whereas class differences "are not so obviously appealing." Affirmative action is therefore "a kind of collective bribe rich people pay themselves for ignoring economic inequality." It is absurd to focus so much on affirmative action because "there are no people of different races." It makes more sense to talk about concrete things, such as paying African-Americans reparations for slavery, than it does to engage in symbolic politics in which nothing really is at stake: "No issue of social justice hangs on appreciating hair color diversity; no issue of social justice hangs on appreciating racial or cultural diversity."

Michaels, as these examples illustrate, belongs to the "shock and awe" school of political argument. First, you say something wildly implausible in the hopes that its dramatic counterintuitiveness will make it seem brilliant. Yet in the United States in which I live, race is an obvious fact of life, conversations about it remain awkward and uncomfortable, and both supporters and opponents of affirmative action are sincere in their convictions. It is true that saying such things would make for a very unoriginal book. But at least it would be an accurate one.

Then, you posit false choices. For Michaels, every time we talk about race, we fail the poor. But why should discussions of racial injustice preclude taking on issues of class injustice? Lyndon Johnson's Great Society, the high point of postwar liberalism, featured both a Civil Rights Act and a War on Poverty; one way of redressing racial discrimination, in those days, was to further economic equality. In more recent times, a concern with racial inequality has relied on the same underlying moral logic as a concern with economic inequality: Arbitrary differences are unfair, and their impact ought to be minimized. For all its problems, affirmative action has had one great benefit: It has linked questions of justice to mundane realities, such as college admissions and jobs. That is why former Harvard President Larry Summers insisted on opening up his institution to more working-class students. Without affirmative action in the past, it is hard to imagine Harvard and Princeton abolishing early decision in the present.

Once you have made absurd claims and posited false choices, you can next assume, as Michaels does, an aura of bemused superiority. Diversity advocates on the one hand and conservative activists on the other spend lots of time and money arguing about affirmative action, but Michaels knows, even if they do not, that it is all much ado about nothing: "[I]t doesn't matter which side you're on and it doesn't matter who wins. Either way, economic inequality is absolutely untouched." (But surely it matters who wins, for if conservative opponents of affirmative action are successful in turning Americans away from discussions of racial injustice, they will be emboldened to push for policies resulting in greater economic injustice.) Lots of Jews worry about anti-Semitism—Michaels spends considerable time on Philip Roth—but they are simply mistaken: When "compared to Negrophobia, anti-Semitism was never a very significant factor in American life." (Race, according to Michaels, does not exist, but racism evidently does.) Liberals may want to believe that they won a political victory every now and then, but, at least according to Walter Benn Michaels, they "ended up playing a useful if no doubt unintended role, the role of supplying the right with just the kind of left it wants." And then there is this repeated insistence on the idea that there is no such thing as race; Michaels shakes his head in bewilderment that so many of us just refuse to accept what he knows as true.

Michaels pictures himself as the tough guy willing to take on the hard issues of class while everyone else opts for warm and fuzzy bromides promising cultural and racial diversity. Indeed, he argues, so prevalent is this superficial desire to bring everyone together that Americans apply ideas of tolerance and acceptance to areas where they do not belong, especially the area of religion. "Only someone who doesn't believe in any religion can take the view that all religions may plausibly be considered equal and that their differences can be appreciated," Michaels writes. (I am one of the people he has in mind here.) Like his colleague Stanley Fish, he insists that "if you believe that Jesus is the way and I don't believe Jesus is the way, one of us must be wrong." Believers, including nonbelievers, have no choice but to fight it out. Convincing each other is futile; converting each other is our only option.

With all due respect, Michaels has no idea what he is talking about. He writes about religion without distinguishing between religions. Hence, you would never know that some religions do indeed look for converts, while others actually place barriers in front of those who would join. Nor do all religions assign the same priority to belief as evangelical Christians do; observance, for some, is more important than belief, and so long as a society allows them to keep their strict observance, they can easily live together with others of different convictions. And even those who believe that Jesus is the way have come to accept that others can find God in other ways. Since Nostra Aetate (1965), the Vatican has worked assiduously to recognize the validity of Judaism to Jews, and the great bulk of American evangelicals, for all their talk of witnessing the faith, do not routinely tell their Hindu co-workers that they will burn in hell. In a world in which intermarriage is a fact of life and switching congregations hardly worthy of notice, religious diversity is an inescapable fact, not a logical impossibility.

In the end, The Trouble With Diversity calls more attention to Walter Benn Michaels than it proposes anything of value to American society. Writing in the third person, Michaels tells us his annual salary and frankly confesses his greed (shock and awe, again). He lets us know where he lives and casually mentions that most of his book was written in the course of one summer. (It shows.) By revealing these facts about himself, Michaels hopes to demonstrate that "the validity of the arguments does not depend upon the virtue of the person making them." Not only is his stance trite—economists down the road from him at the University of Chicago have been saying this for some time—it is also, in a way Michaels fails to recognize, much more what the right wants to hear than anything associated with the multicultural left. He winds up including himself in a world in which everyone is motivated by self-interest and everything is hypocritical. If anyone can be accused of doing what Walter Benn Michaels accuses everyone else of doing—ignoring class by talking about race—it is Walter Benn Michaels himself.

Alan Wolfe, professor and director of the Boisi Center for Religion and American Public Life at Boston College, is the author most recently of Does American Democracy Still Work?

Article URL: http://www.slate.com/id/2150826/

 

moneybox
The Oil Conspiracy
Is the Bush administration manipulating oil prices to win elections?
By Daniel Gross
Updated Wednesday, Oct. 4, 2006, at 4:03 PM ET

These days, gas prices interest political consultants as much as they do truckers. Politicos believe there is a direct relationship between the price of a gallon of gas and the fortunes of Democrats at the polls. High gas prices during the fall campaign season? Hello Speaker Pelosi. Falling gas prices in the post-Labor Day period? Crank up the Karl Rove Political Genius Machine again. Before the Foley scandal broke, Matt Drudge was tracking the falling price of a gallon of gas in Iowa on his site.

Prices have been falling significantly in Iowa and everywhere else. Energy Department data show that retail gasoline prices have fallen sharply since August. The price of crude traded on the New York Mercantile Exchange has fallen by 25 percent in the last two months. Today, it's trading for about $59 a barrel, a price not seen since February.

Of course, the relationship between commodity prices and electoral results is a noisy one. A host of other factors could influence the polls and, ultimately, control of Congress, like Bob Woodward's book, or Rep. Mark Foley's instant messages, or Macacawitz-gate. But that hasn't stopped speculation about conspiracies led by the Bush administration, and those close to it, to engineer a sharp fall in the prices of oil and gas during campaign season. A big chunk of the American public suspects funny business. A USA Today poll from September found that 42 percent of Americans believed the administration deliberately manipulated gas prices ahead of the elections.

So, let's weigh the conspiracy theories. The first actually concerns the 2004 election. Bob Woodward claimed on 60 Minutes that Saudi Ambassador Prince Bandar Bin Sultan told Bush the Saudis could help bring oil prices down before the presidential vote by increasing production by several million barrels a day.

The 2006 theories are more subtle. The administration has taken steps recently to remove a marginal, but important, buyer from the marketplace. It had already delayed the summer's deposits to the Strategic Petroleum Reserve until the fall; on Monday, the Wall Street Journal reported that "The Energy Department will hold off purchases of oil for the government's emergency reserve through the upcoming winter."

And then there's the strange case of how Goldman Sachs, the investment firm formerly run by Treasury Secretary Henry Paulson, this summer shifted the weighting of gasoline in the Goldman Sachs Commodity Index in a way that forced investors to dump speculative positions in gasoline, pushing prices down. It's a convoluted story, but this article from last Friday's New York Times lays it out pretty well. (Blogger Tim Iacono makes the case here, and University of California, San Diego, economist James Hamilton provides his own description and debunking here.)

Goldman Sachs runs the Goldman Sachs Commodity Index, the largest commodities index. Energy accounts for about 70 percent of the index's weighting. (For more on the index, click here.) In June, Goldman announced that between August and October 2006, it would make some changes to the weighting of the index. The main alteration: Goldman would sharply decrease the weighting given to the New York Harbor Unleaded Gasoline future contract (then 8.72 percent) and introduce a small weighting for the Reformulated Gasoline Blendstock for Oxygen Blending futures contract. (Reformulated blendstock is gas that can be blended with ethanol.) The end result: The weighting for unleaded gas fell from 8.72 percent to 2.31 percent, while the weighting for reformulated blendstock rose from 0 percent to 2.37 percent. Combined, unleaded gasoline and reformulated blendstock today account for 4.67 percent of the index, compared with 8.72 percent a few months ago. The upshot: Of every dollar invested in the index, or in derivatives related to the index, several cents fewer go into unleaded gasoline.
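To put a rough number on "several cents fewer," here is a minimal back-of-the-envelope sketch; it is my illustration, not Goldman's index methodology or anything from the Times piece, and it simply applies the weightings quoted above to one dollar invested in an index-tracking position.

# Hypothetical, simplified illustration: apply the quoted weightings to one dollar
# tracking the index, before and after the June 2006 reweighting announcement.
old_weights = {"ny_harbor_unleaded_gasoline": 0.0872, "reformulated_blendstock": 0.0000}
new_weights = {"ny_harbor_unleaded_gasoline": 0.0231, "reformulated_blendstock": 0.0237}

dollar = 1.00  # one dollar in an index-tracking position

for product in old_weights:
    before = dollar * old_weights[product]
    after = dollar * new_weights[product]
    print(f"{product}: {before*100:.2f} -> {after*100:.2f} cents per dollar invested")

# Total gasoline-related exposure: 8.72 cents per dollar before vs. about 4.7 cents after
# (the article quotes 4.67 percent), so index trackers had to shed roughly 4 cents of
# gasoline futures for every dollar invested.
print(f"total: {sum(old_weights.values())*100:.2f} -> {sum(new_weights.values())*100:.2f} cents per dollar")

On those illustrative numbers, an index-tracking portfolio sells roughly four cents of gasoline exposure per invested dollar, which is the selling pressure the next paragraph describes.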

The changes clearly stimulated a market reaction. To keep their weightings consistent with the index, traders were forced to quickly sell contracts on unleaded gasoline (and buy contracts on reformulated blendstock). The New York Times noted that on Aug. 10, the New York Harbor unleaded gasoline contract fell more than 8 percent, or 18 cents, to $1.9889 a gallon. And in commodity markets, as in other markets, investors feed on momentum in both directions. The market price of gasoline—and hence the retail price—has continued to fall in the weeks since.

So, was this engineered by Henry Paulson and Goldman Sachs? It's doubtful, although Goldman hasn't done much to dispel questions. The bank hasn't offered a good reason as to why it decided to reduce the overall weighting of gasoline in the index this summer. Still, the company is hardly a Republican redoubt. There are likely as many Kerry supporters as Bush supporters in the firm's upper ranks. And if Goldman was trying to manipulate the market for political reasons, it certainly picked an awfully transparent way of doing it. It publicly announced the contours of the changes in advance and gave investors and traders time to plot strategies surrounding the move.

More broadly, though, commodity markets have shown themselves to be beyond the control of presidents, the Saudis, or even Henry Paulson and Goldman Sachs. The world is an increasingly connected, complicated, and volatile place, which makes the prices for commodities that fuel the global economy dependent on a growing range of factors. At root, gasoline is getting cheaper largely because the thing you need to make it—crude oil—has been getting cheaper. And Goldman actually slightly increased the weighting of crude oil in the overall index this summer.

Closer to home, there was plenty of activity in August and September—beyond Goldman's index maneuvers—that helped push market and retail prices of energy lower. They include: a growing sense that the U.S. economy, the largest user of oil on the planet, has been slowing rapidly and might be headed toward a recession; a shift in the mix of the U.S. car fleet away from trucks and SUVs and toward smaller vehicles; a potential big find in the Gulf of Mexico; a growing boomlet in ethanol and alternative energy; a bust of a hurricane season; and the blowup of a gigantic hedge fund with huge positions in natural gas.

So, the recent fall in energy prices is almost certainly not a Bush conspiracy, just a bit of electoral good luck.

Daniel Gross (www.danielgross.net) writes Slate's "Moneybox" column. You can e-mail him at moneybox@slate.com.

Article URL: http://www.slate.com/id/2150903/

 

Punch Lines for Pakistan's President
Jon Stewart Laughs It Up With Musharraf

By Libby Copeland
Washington Post Staff Writer
Wednesday, September 27, 2006; C01

The president of Pakistan has been in the United States lately to discuss matters of global importance and -- in his spare time -- to flog a memoir. Last night he appeared on Comedy Central's "Daily Show" with Jon Stewart, where he demonstrated both a sense of humor and a deep desire to sell "In the Line of Fire," which, incidentally, is now available on Amazon.com for the low, low price of $16.80, plus shipping and handling.

Following Pakistani custom, Stewart started off by offering Gen. Pervez Musharraf some tea. He also gave Musharraf the "American delicacy" known as a Twinkie.

"Is it good?" Stewart asked, then followed up with: "Where's Osama bin Laden?"

"I don't know," Musharraf replied, as the audience roared with laughter. " You know where he is? You lead on, we'll follow you."

Has it really come to this? In recent days, Musharraf has promoted his memoir, published Monday, on "Hannity & Colmes," "Today," "60 Minutes" and "Charlie Rose." He has engaged in long discussions of his country's foreign policy and endured the occasional moment of awkwardness in service to the greater good of book sales.

For example, on "Charlie Rose":

Rose: " 'In the Line of Fire" seems an appropriate title for your memoir, does it not?"

Musharraf: "I think so. That's why I selected it."

Last night was the first appearance of any sitting president on "The Daily Show," and the best lines, as usual, belonged to Stewart. He asked his guest ("Mr. President") about two attempts on his life, which took place on the same bridge.

"I'd come up with a new way to go to work," Stewart advised his guest.

As usual, much of Stewart's humor was rooted in criticism of the U.S. administration. If Musharraf felt such jokes put him in an awkward position with his ally President Bush -- with whom he met on Friday and is scheduled to meet again today -- he did not say so. Rather, he chuckled and played along. For example, Stewart asked Musharraf who would win if Bush ran against bin Laden in a low-level election in Pakistan. Musharraf responded that both would "lose miserably."

Stewart asked Musharraf why he hadn't made much reference in his book to America's war in Iraq.

"Is that because you felt like it was such a smart move, and has gone so well that to mention it would be gloating?" Stewart asked.

Musharraf laughed and said of the war, "It has led certainly to more extremism and terrorism around the world."

"So we're safer?" Stewart pressed.

Musharraf laughed again. "No, we're not."

Stewart also asked if Bush pays attention when Musharraf meets with him, or whether he might be, say, watching television or sleeping with his eyes open. The president of Pakistan said the president of the United States paid close attention during their last meeting.

Book tours can benefit greatly from juicy details released in advance of publication, and this has proved no less true for Musharraf's book. Last week, it came to light that Musharraf claimed that former deputy secretary of state Richard L. Armitage threatened to bomb his country "back to the Stone Age" if Pakistan did not cooperate in the war on terror. (Armitage has since denied making such a threat.)

Asked about the "Stone Age" quote at a news conference with Bush on Friday, the Pakistani president said he could not discuss his book before it came out, citing an agreement with his publisher.

"In other words, buy the book is what he's saying," Bush said.

Is there any publicity better than that? Of course -- Oprah.

The 'Moderate Republican' Scam

By Harold Meyerson
Wednesday, September 27, 2006; A27

Sen. Lincoln Chafee, Republican of Rhode Island, is seeking reelection in his heavily Democratic state by insisting he's not really a Republican, or at least not part of the gang responsible for the decade's debacles. He didn't even vote for George W. Bush in 2004, he protests. He cast his vote for George H.W. Bush -- a kinder, gentler, more prudent, less strident Republican.

Big deal.

It matters not a damn whom Lincoln Chafee chose to support for president. His vote was one of roughly 435,000 cast in Rhode Island in the 2004 presidential election, and roughly 122 million cast nationwide. The election in which his vote did matter was that for majority leader of the Senate. There, he was one of just 100 electors, in a Senate nearly evenly divided. After this November's elections, control of the Senate may well hang by a single vote.

And if Chafee truly wished to alter the course of his party and his country in the spirit of his vote for Poppy Bush, he would, if reelected, cast his vote for majority leader when the new Senate convenes for Bob Dole or Howard Baker -- former Republican leaders who showed a decent respect for reality and an interest in doing the nation's business.

Rather than support an administration lapdog such as Kentucky's Mitch McConnell, the Republican whip, whom his party will probably put forth to run the Senate next January, Chafee would vote for some old-school GOP pol. Rather than just announce he's against the war and appalled by torture, he'd vote to put the Senate in the hands of someone with enough gumption and wisdom to stand up to a president who's hellbent on a war that's lost its purpose and who believes America should torture its prisoners either because it makes a nifty wedge issue to use against the Democrats or because he actually believes torture is an acceptable U.S. policy. (Or, to give the president the benefit of the doubt, both.)

Chafee and Maine's Olympia Snowe and such deathbed converts to moderation as Ohio's Mike DeWine are seeking reelection to the Senate by claiming that they represent a Republicanism less rabid than the Bush-Rove strain. They point to individual votes in which they broke with the president and flouted the party line. But those votes have been negated a hundred times over by their votes to make Bill Frist the majority leader, just as they would be negated when the new Senate takes office in 2007 if the moderates backed any Republican unwilling to make a fundamental break with Bush and Bushism.

The issue isn't the individual voting records of Frist and McConnell, which are indistinguishable from each other and define the mainstream of today's gorge-the-rich, drown-the-poor, stay-the-course Republicanism. The issue is that under the control of the Republicans, both the Senate and the House have abandoned their constitutionally mandated obligation to oversee executive branch endeavors, most especially endeavors gone as awry as the war in Iraq. The issue is that under Republican control, both houses have abandoned any effort to address America's real problems.

The House and Senate vote to ban flag-burning and gay marriage but never quite find the time to slow the rising costs of health care or raise the minimum wage or mandate fuel efficiency standards lest the polar ice cap melt. Chafee, Snowe and DeWine readily admit that a melted polar ice cap would be troublesome; they will fight it tooth and nail. But come time to vote for majority leader, they always vote for a leader of a party in thrall to big oil.

Problem is, Chafee and his moderate band are an ever weaker force in a party whose very essence is extreme, whose electoral strategy is solely to mobilize its base, whose legislative strategy is never to seek votes across party lines. And unless these moderates boldly go where they have not gone before and cast their vote for majority leader (and I don't mean in caucus, I mean on the Senate floor) for someone other than the nominee of their party caucus, they are not moderates at all. They are loyal and indispensable foot soldiers in the Republicans' continuing campaign to drag the nation rightward and backward.

And guess what. The moderates will vote for the extremist. "Moderate," after all, is only an adjective; "Republican" is a noun. Chafee, Snowe, the whole lot of them, are moderate enablers of an extremist party. That leaves those voters in Rhode Island, Maine, Ohio and other states where these self-proclaimed Republican moderates are running only one choice if they seek a Congress to check and balance the president, if they want a more moderate nation: Vote for the Democrat.

meyersonh@washpost.com

 October 1, 2006

Editorial

America’s Army on the Edge

Even if there were a case for staying the current course in Iraq, America’s badly overstretched Army cannot sustain present force levels much longer without long-term damage. And that could undermine the credibility of American foreign policy for years to come.

The Army has been kept on short rations of troops and equipment for years by a Pentagon more intent on stockpiling futuristic weapons than fighting today’s wars. Now it is pushing up against the limits of hard arithmetic. Senior generals are warning that the Bush administration may have to break its word and again use National Guard units to plug the gap, but no one in Washington is paying serious attention. That was clear last week when Congress recklessly decided to funnel extra money to the Air Force’s irrelevant F-22 stealth fighter.

As early as the fall of 2003, the Congressional Budget Office warned that maintaining substantial force levels in Iraq for more than another six months would be difficult without resorting to damaging short-term expedients. The Pentagon then had about 150,000 troops in Iraq. Three years later, those numbers have not fallen appreciably. For much of that time, the Pentagon has plugged the gap by extending tours of duty, recycling soldiers back more quickly into combat, diverting National Guard units from homeland security and misusing the Marine Corps as a long-term occupation force.

These emergency measures have taken a heavy toll on combat readiness and training, on the quality of new recruits, and on the career decisions of some of the Army’s most promising young officers. They cannot be continued indefinitely.

Now, with the security situation worsening in both Iraq and Afghanistan, the Pentagon concedes that no large withdrawals from either country are likely for the foreseeable future. As a result, even more drastic and expensive steps could soon be needed. The most straightforward would be to greatly increase the overall number of Army combat brigades. That would require recruiting, training and equipping the tens of thousands of additional soldiers needed to fill them.

Yet the Pentagon and Congress remain in an advanced state of denial. While the overall Defense Department budget keeps rising, pushed along by unneeded gadgetry, next year’s spending plan fails to adequately address the Army’s pressing personnel needs. Things have gotten so badly out of line that in August the Army chief of staff held up a required 2008 budget document, protesting that the Army simply could not keep doing its job without a sizable increase in spending.

A bigger army does not fit into Defense Secretary Donald Rumsfeld’s version of a technologically transformed military. And Congress prefers lavishing billions on Lockheed Martin to build stealth fighters, which are great for fighting Russian MIG’s and Chinese F-8’s but not for securing Baghdad. Army grunts are not as glamorous as fighter pilots and are a lot less profitable to equip. Yet we live in an age in which fighting on the ground to rescue failed states and isolate terrorists has become the Pentagon’s most urgent and vital military mission.

America’s credibility in that fight depends on the quality, quantity and readiness of our ground forces. If we go on demanding more and more from them while denying the resources they so desperately need, we could end up paying a terrible price.

October 1, 2006

Experiment Will Test the Effectiveness of Post-Prison Employment Programs

By ERIK ECKHOLM

CHICAGO — As raw garbage streamed by on a conveyer belt, newly released convicts pulled out paper, plastics and other recyclables on a recent morning, throwing aside the occasional brick or mattress.

Noisy, dusty and smelly, paying $6.50 an hour, the jobs yield neither the swagger nor the swag that these men and women chased as drug dealers, thieves or worse. But many of them see the temporary work as a fresh start.

The jobs are arranged by a Chicago charity, the Safer Foundation, which works with current and former prisoners. Offering transitional jobs like these — immediate, closely supervised work and help finding permanent employment — is a growing tactic in the effort to usher felons back to society and curb recidivism. Now the effectiveness of this approach is about to be tested scientifically for the first time.

Starting in January, the employment and recidivism rates of 2,000 newly released male prisoners, all with similar histories of little work and poor schooling, will be studied in Detroit, Milwaukee, St. Paul and Chicago.

Half of the men will receive more limited aid: instruction in work behavior, résumé preparation and other employment skills and help looking for a job. The other half will get those services and also a few months of temporary work in places like the recycling plant here — a chance for them to get into the unfamiliar rhythms of a regular job.

The experiment, which will track the two groups over three years, is being sponsored by the Joyce Foundation in Chicago and directed by the Manpower Demonstration Research Corporation in New York, which specializes in scientific studies of poverty programs.

Separately, the research group is conducting a controlled study of the transitional jobs program at the Center for Employment Opportunities in New York, which provides maintenance crews for public facilities and has been a national model.

“If you ask inmates what they want most, they want a job,” said Mindy Tarlow, executive director of the center in New York. “But they don’t know what that means.”

She added, “What we’re competing with is making some money at night on a street corner instead of having to show up somewhere at 8 a.m. every day.”

Despite the apparent promise of transitional jobs, questions remain about their long-term effectiveness that the study hopes to address.

Are those who last through these programs such a select group — so motivated to change — that they would succeed anyway, or can well-timed help turn others around, too?

Can work-site counseling, sobriety meetings and a strong dose of mainstream work overcome the criminal pull of old haunts and friends?

And more fundamentally: will people with low skills, even if they adapt to steady work, ever make wages high enough to support a family and stifle the temptation to return to crime?

Roberto Reyes, a 36-year-old high school dropout in Chicago who has served seven years on burglary, gun and drug charges, works the conveyer belt at a recycling plant that is run for the city by Allied Waste Services.

Mr. Reyes has labored at the plant for four months, the longest he has held a job. “The money here is not that much, but it’s better than nothing,” he said. “Sometimes you wake up and don’t want to come to work, but I’m not going to leave this until I find another job. I knew I couldn’t just keep going on with that lifestyle and see life pass me by.”

Mr. Reyes’s determination is evident, but the numbers and records of people in his situation are daunting.

In Chicago, more than 20,000 prisoners come home from state facilities each year. Fifty-four percent are re-incarcerated within three years for new crimes or parole violations — a tale of wasted lives and victimized communities that is repeated nationwide among the more than 600,000 prisoners who are released annually.

While common sense, and prisoners themselves, say that employment is vital to an honest new life, the obstacles are huge. A majority of those leaving prison did not finish high school and have little legitimate work experience. Many have serious drug or psychological problems that must be treated before they can hold a regular job. And while transitional programs may acclimatize them to the time-clock world of the workplace, many are likely to remain stuck in low-end jobs anyway.

Those who work with prisoners say that enticing onetime thugs to give work a try is not always as hard as it sounds. “They tell us that what comes with the street life is looking over your shoulder all the time,” said Diane Williams, president of the Safer Foundation. One key, she said, seems to be getting released prisoners into work quickly, when the desire to normalize their lives is strongest.

Jimmy Parker, 24, was in and out of prison and hustling until six months ago when he decided, as he put it, “enough is enough.”

“This job is rough, but I’m trying to change my life around,” he said during a break at another recycling site run by Allied. “I’ve accomplished one thing — I got my own studio apartment — and someday I want to get custody of my daughter.”

The Safer Foundation has eight employees who search for companies willing to hire former prisoners. Allied Waste’s experience with such workers has been positive, said Robert Kalebich, general manager for the company in Chicago. Safer keeps a full-time “job coach” at each work site to advise workers and deal with disputes.

“If anything we see an advantage in this arrangement,” Mr. Kalebich said. “If we hire off the street we have to wonder are they trained, are they here legally, are they properly drug tested.”

Raphael Carter served drug time when he was 18, stayed out for nearly a decade, then found himself in prison again. “I woke up and said, I can’t do this anymore, it’s a dead end road,” he recalled, adding that he now has two children, 13 and 6, who depend on him.

“You have to weigh the options, would you rather go back to jail or get a little increment of money and see your kids,” said Mr. Carter, 30, who lives with his girlfriend and her four children. “Being older, I made the right choice.”

He worked for six months at the recycling job, then found a chance in a nearby city driving a forklift for the attractive wage of $11.60 an hour. But his car broke down once too often during the hour and a half drive to work, he said, and he was let go.

Now he works for a company that erects large party tents, a seasonal job at $8.50 an hour, and he is consulting the Safer listings for a permanent job.

“By myself I wouldn’t have had any of these opportunities,” he said.

 October 1, 2006

The Inside Agitator

By MATT BAI

Not all states are equal on an election map, and Alaska is one of those less populous states — like Kansas or Idaho or Alabama — that national Democrats almost never bother to visit. For one thing, just getting there presents a logistical ordeal: the journey from Washington takes as long as it would to reach, say, Nigeria, and even then you sometimes need a hydroplane to get around. And more to the point, there aren’t a whole lot of people to see once you get there. Registered Republicans outnumber Democrats by a margin of 2 to 1 in oil-crazed Alaska, which hasn’t sent a Democrat to the House or Senate in more than 30 years. To put it another way, there were more Democrats in Central Park for the Dave Matthews concert a few years back than there are in the entire state of Alaska — all 656,000 square miles of it.

It seemed somewhat bizarre, then, when Howard Dean, the chairman of the Democratic National Committee, chose to make the long odyssey to Alaska at the end of May, near what was the beginning of one of the most intense and closely contested national election campaigns in memory, when every other Democrat in Washington was talking about potentially decisive states like Ohio, Pennsylvania and Connecticut. It was also strange that no one in Democratic Washington seemed to know he was going. Although I had been following Dean closely for months, I found out about the trip accidentally and invited myself along — an intrusion that Dean seemed merely to tolerate. We met up first in Las Vegas, where he was making appearances with Harry Reid, the Senate minority leader. Dean, who enjoys his image as an unpretentious New Englander, is given to finding his own flights on discount Web sites, so it’s sometimes hard for even his own staff to track his itinerary. On the morning we left for Alaska, Dean went missing for a good half-hour. It turned out that he was in the business center of the MGM Grand, where he had been trying to figure out how to print his boarding pass but somehow ended up in an impromptu game of online backgammon with a guy who claimed to be in China.

Touching down in Anchorage, we were greeted by Jonathan Teeters, a 25-year-old former offensive lineman at the University of Idaho who had been hired to help the state party begin to organize Democrats. It took less than 10 minutes, as Teeters drove us through a pounding rainstorm to the state headquarters, for Dean, seated in front, to unleash his usual brand of havoc on a state unaccustomed to it. First, he absently asked Teeters what kind of radio interviews he would be doing during his 24-hour stay and was told that he was booked on the local Air America affiliate, the only liberal radio option in town. This is what party chairmen get paid to do — rally the faithful, collect their money and urge them to vote.

“Bull,” Dean snapped, using a slightly more elongated version of the term.

“Huh?” Chris Canning, Dean’s personal aide, suddenly looked up from a loose-leaf binder. He seemed to think he had misheard.

“I’m not going to do that,” Dean replied firmly, craning his neck to address Canning in the back seat. “I didn’t come all the way up here just to talk to people who already agree with us. I want to talk to everyone else. I’m fine with doing Air America, but we have to do something else too. Isn’t there some conservative show we can do?” Teeters warned that the few right-wing shows in town could get nasty for the chairman. “If you can set something else up too, great,” Dean said with finality. “Otherwise, I won’t do Air America.”

Then Dean wanted to know how many organizers the state party now had on the ground, and Teeters told him there was just one: Teeters himself. The D.N.C. created his job — along with a position for a communications director — last year as part of Dean’s signature program, known as the 50-state strategy. Under this program, the national party is paying for hundreds of new organizers and press aides for the state parties, many of which have been operating on the edge of insolvency. The idea is to hire mostly young, ambitious activists who will go out and build county and precinct organizations to rival Republican machines in every state in the country. “We’re going to be in places where the Democratic Party hasn’t been in 25 years,” Dean likes to say. “If you don’t show up in 60 percent of the country, you don’t win, and that’s not going to happen anymore.”

In paying for two new staffers, Dean had, virtually overnight, doubled the size of Alaska’s beleaguered state party, which used to consist of only an executive director and a part-time fund-raiser. But now, as Dean considered the vastness of the state’s landscape, he decided that one organizer wasn’t enough. “In most states, we have three or four,” Dean said, thinking out loud. “Seems like you should really have more. We should be able to find that money in the budget.”

That night, after meeting with Dean at the sad little storefront office that houses the state party, Alaska’s party chairman, Jake Metcalfe, announced to 400 assembled Democrats at a fund-raiser that Dean had just promised to hire an additional organizer for the state. The ballroom erupted in grateful applause as Dean sat there beaming. The members of his staff, gently rolling their eyes, began calling back to Washington, warning the political staff that they would need to find the money for yet another salary in, of all places, Alaska.

In just a few hours, Dean had nicely demonstrated why so many leading Democrats in Washington wish he would spend even more time in Alaska — preferably hiking the tundra for a few months, without a cellphone. It’s not that Democrats in Congress don’t like the idea of building better organizations in the party’s forgotten rural outposts. Everyone in Democratic politics agrees, in principle, that party organizations in states like Alaska could use help from Washington to become competitive again, as opposed to the rusted-out machines they have become. But doing so, at this particular moment and in this particular way, would seem to suck away critical resources at a time when every close House and Senate race has the potential to decide who will control the nation’s post-election agenda, and when the party should, theoretically, be focused on mobilizing its base voters — the kind of people who live in big cities and listen religiously to Air America.

It’s true that adding a second organizer in Alaska will cost the national party only a modest sum, maybe $35,000 this year, but that same money could pay the salaries for canvassers in Pennsylvania or Connecticut, where a few thousand votes could mean the difference between swearing in Speaker Hastert or Speaker Pelosi next January. Overall, Dean’s investment in state parties could cost the D.N.C. as much as $8 million this year, every dime of which could be crucial when you consider that the Republican National Committee says it will pour as much as $60 million into local races to defend its Congressional majorities. (The D.N.C. has pledged to spend $12 million on this fall’s races.) With the president’s approval ratings stuck around 40 percent, and polls suggesting that the Democrats may have a real chance of rolling back 12 years of Republican rule, numerous Democratic insiders are privately and, at times, publicly deriding the 50-state strategy as an indulgence that could cost them their best and last opportunity to sweep away the Bush era, once and for all.

This conflict between the party’s chairman and its elected leaders (who tried mightily to keep local activists from giving him the job in the first place) might be viewed as a petty disagreement. But in fact, it represents the deepening of a rift that has its roots in the 2004 presidential campaign — a rift that raises the fundamental issue of what role, if any, a political party should play in 21st-century American life. Dean ran for president, and then for chairman, as an outsider who would seize power from the party’s interest-group-based establishment and return it to the grass roots. And while he has gamely tried to play down his differences with elected Democrats since becoming chairman, it seems increasingly obvious that Dean is pursuing his own agenda for the party — an agenda that picks up, in many ways, where his renegade presidential campaign left off. Now, at power lunches and private meetings, perplexed Washington Democrats, the kind of people who have lorded over the party apparatus for decades, find themselves pondering the same bewildering questions. What on earth can Howard Dean be thinking? Does he really care about winning in November, or is he after something else?

The mere fact that Democrats would consider a “50-state strategy” to be novel — as if a national party might reasonably aspire to something less — says volumes about the rapid deterioration of the party that was, for most of the last century, America’s dominant political force. Back when Democrats were the established majority, the state parties were run by bosses who doled out jobs and delivered votes, while the national party, functioning as a subsidiary of whoever happened to occupy the Oval Office, worried about electing presidents. For decades, the party claimed a sizable majority of the nation’s governors, senators and congressmen, and in every one of the states where it controlled those seats, there was a centralized organization — a party “infrastructure,” in the parlance of today’s activists — whose job it was to recruit candidates and make sure voters got to the polls.

All that began to change with the social movements of the 1960’s and 70’s, which redefined the Democratic Party, in the minds of many rural voters, as mostly a coalition of urban blacks and high-minded intellectuals. From the Deep South up through the populist Plains, voters began abandoning Democratic candidates at the polls, and the old state machines found themselves out of power and starved for patronage. Slowly, the parties in these states atrophied, laying off staff members and allowing their network of local volunteers to dwindle. “We were on the verge of extinction, pretty much,” Barry Rubin, the executive director of the Nebraska Democratic Party, told me recently.

When Dean took over the D.N.C. last year, he sent assessment teams, made up of veteran field organizers and former state party officials, to every state. A typical assessment report on one rural state — I was allowed to see the report only on the condition that I not name the state involved — bluntly stated that its local activists were “aging” and that its central committee was “dysfunctional.” In most states, there were hardly any county or precinct organizations to speak of. More than half the states lacked any communications staff, meaning that no one was there to counter the Republican talking points that passed from Washington to the state parties to the local media with a kind of automated precision.

For the Democrats, winning presidential elections came to mean doing so without any help from the South or West, and that, in turn, meant cobbling together a relatively small number of so-called battleground states rather than running a truly national campaign. The D.N.C. quit doing much of anything in conservative rural states, and the party’s presidential candidates didn’t bother stopping by on their way to more promising terrain. Every four years, the national party became obsessed with “targeting” — that is, focusing all its efforts on 15 or 20 winnable urban states and pounding them with expensive TV ads. The D.N.C.’s defining purpose was to raise the money for those ads. The national party became, essentially, a service organization for a few hundred wealthy donors, who treated it like their private political club.

None of this was much on Howard Dean’s mind when he set about running for president in 2003 with drab notions of health-care reform and a balanced budget; by the time he made his infamous “scream” speech in Des Moines a year later, however, Dean had become a folk hero for marginalized liberals. How this happened has been largely misunderstood. Dean has been credited with inciting an Internet-driven rebellion against his own party, but, in fact, he was more the accidental vehicle of a movement that was already emerging. The rise of Moveon.org, blogs and “meet-ups” was powered to some extent by the young, tech-savvy activists on both coasts who were so closely associated in the public mind with Dean’s campaign. But the fast-growing Internet community was also a phenomenon of liberal enclaves in more conservative states, where disenchanted Democrats, mostly baby boomers, had long felt outnumbered and abandoned. Meet-ups for Dean drew overflow crowds in Austin, Tex., and Birmingham, Ala.; what the Web did was to connect disparate groups of Democratic voters who didn’t live in targeted states and who had watched helplessly as Republicans overran their communities. These Democrats opposed the war in Iraq, but they were also against a party that seemed to care more about big donors and swing states than it did about them. Attracted to Dean’s fiery defiance of the Washington establishment, these voters adopted him as their cause before he had ever heard of a blog.

“What our campaign was about, not that I set out to make it this way, was empowering people,” Dean told me recently. “The ‘you have the power’ stuff — that just arose spontaneously when I realized what incredible potential there was for people to get active who had given up on the political process because they didn’t think either party was helping them.”

Over the course of the campaign, Dean turned into an apostle, in politics, of the economic concept of “disintermediation” — the idea that, in the Internet age, voters could connect with candidates, and with one another, without the party acting as the conduit. In a sense, this is what his candidacy was all about. He still believed, though, that only a strong national party could mobilize voters on Election Day. At the Democratic convention in Boston, six months after he dropped out of the presidential race, he met with frustrated delegations from 18 “untargeted” states, meaning that the national party and its candidate, John Kerry, had completely ignored them. Dean was appalled. “The best window we have to talk to Democrats, the time when they pay the most attention, is in the presidential campaign,” Dean told me, “and we were just saying to the people of those 18 states, ‘We’re not interested in you.’ You cannot be a national party if you say that to anybody. Anybody.”

It didn’t take long, after the election, for a new band of Democratic outsiders — some inspired by Dean’s campaign, others not — to begin asserting themselves on the local level. In Maryland, Terry Lierman, a venture capitalist who had been one of Dean’s campaign-finance chairmen, ran for state party chairman, despite having had no previous involvement in local Democratic politics, and won. In North Carolina, Jerry Meek, a 35-year-old lawyer, took over the state party on a promise of re-energizing county organizations, even though both the governor and the state’s leading national figure, John Edwards, strongly backed an inside candidate. Colorado and Arkansas, too, rejected incumbent chairmen in favor of obscure newcomers. In Texas, Fred Baron, a trial lawyer and Democratic contributor, established a privately financed effort to rebuild the Texas state party from the ground up — without the party’s consent.

Meanwhile, the bloggers who supported Dean were taking up the same cause, inciting sporadic local rebellions. Chris Bowers, an influential blogger on the leftist site MyDD.com, demanded that the national party focus less on targeting races and more on recruiting candidates to run in every Congressional district in America. Bowers's call for individual activists to overwhelm and rebuild their local parties became a rallying point for the emerging Netroots party-reform movement. Setting his own example, Bowers got himself elected the captain of his local precinct in Philadelphia's 27th Ward and then won a seat on the party's state committee.

The question for Dean was how to harness and aggregate this state-by-state uprising that he had, by example, helped to create. Immediately after dropping out of the presidential race, he formed a political action committee called Democracy for America, whose mission was to raise money for “progressive” candidates seeking local offices, from mayoral and Congressional seats down to the local water board. This was revolt on a small scale, however, and Dean continued to ponder some grander strategy. He admits now that at the time he considered forming a third party, deciding, ultimately, that such ventures never went far in American politics.

Like Ronald Reagan, whose activist insurgency during the 1976 primaries failed to topple the Republican president, Gerald Ford, Dean might have begun work instantly on the next presidential race, building on his support among the Democratic base. But unlike Reagan, Dean had always exhibited more passion for campaigning among the grass roots than he did for the prospect of actually being the nation’s president; he seemed less focused on changing the country than he did on changing the party. And the best way to do that, he concluded, was to run for chairman.

Dean, the celebrity candidate in a crowded if rather underwhelming field, campaigned on what seemed like a brazenly political promise to lavish spoils on the forgotten state parties, whose local activists held most of the votes for the chairmanship. The outcome was never much in doubt, although some skeptical Democrats refused to support him. Metcalfe, the Alaska chairman, told me that he supported Simon Rosenberg, a party strategist. “Simon was saying, ‘I don’t know if I can fund all the states,’ and I thought that was honest,” Metcalfe said. “Dean said he would give money to all the states, and I thought, That’s not going to happen — not out here. I thought I was being realistic. He proved me wrong.”

There were awkward moments during Dean’s first months in Washington, early in 2005, when he found himself working among the party leaders he had repeatedly maligned. In his first official visit to the newly renovated D.N.C. building, Dean was greeted in the lobby by his predecessor, Terry McAuliffe, a close friend of the Clintons and probably the most gifted fund-raiser in the party’s history, whom Dean’s supporters had long pilloried as the personification of a party run by hacks and obsessed with corporate money. McAuliffe, a man of maddeningly good cheer, pointed to the new wall-size glass building dedication in the lobby, which featured McAuliffe’s name at the very top, followed by a list of contributors. “Now, Howard,” he said, “don’t you go chiseling that down.”

Not long after, Dean sat down with the party’s Congressional leaders, Harry Reid and Nancy Pelosi, who had tried, ineptly and with almost comical desperation, to find a candidate who could stop him from becoming chairman. Reid and Pelosi promised to work with Dean, but they asked him to resist speaking out on key policy positions and acting as if he were the party’s public face. In other words, Dean would be doing everyone in Washington a favor if he would just stay out of sight and raise money.

The latter goal proved challenging. The truth was that neither Dean nor the aides he brought with him from the presidential campaign knew much about the inner workings of the national party, and some of what they assumed they understood, based on contempt for anything they perceived to be the status quo, turned out to be more complicated than it first appeared. Determined to break the grip of millionaires on the party apparatus, Dean’s team came into the D.N.C. with a plan to raise huge sums of money online, as Dean had done during the presidential campaign. Dean didn’t bother reaching out to many of the party’s top contributors, who were as suspicious of him as he was of them. But getting small-dollar donors excited about an established party proved a far more arduous task than getting them excited about an insurgent campaign. The situation grew perilous until, several months into his term, Dean relented and brought in one of McAuliffe’s old acolytes, Jody Trapasso, to get the fund-raising operation in order. Trapasso introduced Dean to the big spenders, pushing him to devote a few hours of every day to making calls until the checks started rolling in.

Dean was discovering that he needed to find some Washington insiders to trust, after all — and he found them in what seemed an unlikely quarter. In his primary campaign in 2003, Dean struck up a friendship with Tina Flournoy, a well-respected operative who worked with Al Gore and Joe Lieberman during the 2000 presidential race and who now held a senior position at the American Federation of Teachers, one of the party’s most influential unions. Flournoy was also a charter member of an informal dinner clique whose members referred to themselves, good-naturedly, as the Colored Girls. The core group included several African-American women who had reached the highest echelons of Democratic politics. Donna Brazile, the veteran organizer who managed Gore’s presidential campaign, was a regular; so were Minyon Moore, a consultant who worked in the Clinton White House; Yolanda Caraway, a public-relations specialist; and Leah Daughtry, who was McAuliffe’s chief of staff (and who was retained in that job by Dean). Guest speakers at their dinners frequently included probable presidential candidates and top members of Congress. During the race for chairman, Flournoy brought Dean in as well, and he quickly clicked with the group.

Dean tapped Flournoy to run his transition team, and although she later returned to her job at the teachers’ union, it is now common knowledge among Democrats in Washington that few big decisions are made at the D.N.C. without Flournoy’s approval. The Colored Girls, as a whole, are unusually influential with Dean. It’s an odd pairing, given that Dean governed one of the whitest states in the country, but what Dean and these women share is resentment, sometimes subtle and sometimes not, of the elite Washington Democrats who have always run the national party. Activists like Flournoy and Brazile have attained star status in the party, but they have never thought of themselves as insiders. This is partly because they are black women in a party dominated by white men — men who often seem to prize them more as symbols of diversity than for their expertise. But it is also because the women came up in Democratic politics as local field operatives — that is, as young organizers who knocked on doors, principally for Jesse Jackson — in an era when all of the power in the party was concentrated in the hands of the Washington consultants who made TV ads and polled the electorate. Dean came to Washington vowing to take power from the insiders and give it, instead, to ground-level activists. “That’s our loyalty to Dean,” Brazile says. “He gets it.”

With help from Flournoy and the others, Dean cultivated an outsiders’ culture inside the D.N.C. building. (It is more than symbolic that Dean himself never moved to Washington; he stays at the Capitol Hill Suites a few days a week before heading back to Burlington, Vt.) Dean’s political staff hails largely from the state organizations, rather than from Washington; his political director, Pam Womack, formerly ran the Virginia party and the National Governors Association. Top Washington reporters and senior aides on Capitol Hill frequently complain that they now have trouble getting their calls to the D.N.C. returned, while state activists rave about the new responsiveness at headquarters.

Flournoy also introduced Dean to the pollster Cornell Belcher, who became a constant fixture inside Dean’s D.N.C. Belcher, a deep thinker and jazz aficionado who wears suit coats with unlaced Converse sneakers, had been an outsider, too, in the sense that he didn’t fit into the capital’s pinstriped culture and wasn’t well known before Dean started taking him to meetings on the Hill. In public appearances, Dean almost always refers proudly to the fact that he has retained a “37-year-old African-American pollster” to shake up the staid Washington crowd. In fact, the main theme of Belcher’s work concerns the white middle-class men and women who have deserted the Democrats in recent years. These voters care more about their faith and the character of their communities than they do about individual issues, Belcher says, and Democrats do better with rural and small-town voters when they frame their positions as values rather than as policy prescriptions. This is not an entirely new insight, but to Dean it is critically important. In his mind, it means that any voter in any state can be a Democrat, if only you bother to talk to him, and if only you make the right kind of argument.

The ultimate manifestation of this philosophy, of course, is the 50-state strategy, under which, for the first time, the national party has begun directly financing the staff at all but a few state headquarters. It’s probably fair to say that if there hadn’t been a quagmire in Iraq or a Hurricane Katrina — if the White House’s political fortunes hadn’t imploded over the last year — the 50-state strategy would not have aroused much opposition among Washington Democrats. It was only when they realized that they actually had a chance to take back the House, and maybe the Senate too, that Democratic leaders began to ask, with increasing urgency, what it was that Dean was doing with all the party’s money.

This fall, the question of who will control Congress is likely to come down to about 40 Congressional districts and some dozen states with close Senate races, including such perennial battlegrounds as Pennsylvania, Ohio and Missouri. Candidates raise most of their campaign funds themselves, but they rely on additional money from Washington to pay for voter-turnout programs and last-minute TV ads. Each party has three separate entities to raise and disburse those dollars: a committee for Senate races, a second committee for House campaigns and the national party headquarters.

For Democrats, the fund-raising environment has improved over the last two years, as Bush has blundered from one legislative or foreign-policy disaster to another and as Democratic donors have seen the prospect of controlling at least one house of Congress — a notion that seemed unthinkable in 2004 — become a possibility. The Democrats who lead their party’s Senate and House campaign committees, Senator Chuck Schumer of New York and Representative Rahm Emanuel of Illinois, respectively, have done their parts to make the party competitive. The Democratic Senate committee, which narrowly outperformed its Republican counterpart in 2004, has opened up an even wider margin in this election cycle. The Democratic House committee, which raised only half as much as the G.O.P.’s committee did two years ago, has closed that gap somewhat and, at last count, had virtually the same amount in the bank as its rival. Over at the D.N.C., however, it’s a very different story. In 2004, the D.N.C., under McAuliffe, actually raised slightly more money than the Republican National Committee. Since Dean has taken over, however, the R.N.C. has taken an almost 2-to-1 lead in fund-raising, and going into the fall campaign it had more than $39 million stashed away, compared with just over $11 million for the Democrats. For Schumer and Emanuel, this discrepancy between the two parties is like a train coming down the track, and they’re the ones sitting in its path. The R.N.C. will dump tens of millions of dollars into individual House and Senate races in the closing weeks, through TV ads and get-out-the-vote operations, and Democrats won’t be able to counter it.

In a city rife with unchecked egos, few politicians exhibit the kind of unbridled self-assuredness for which both Emanuel and Schumer are known; to call the two of them pushy would be like calling Tom Cruise excitable. Emanuel, a triathlete who was Bill Clinton’s deputy chief of staff and enforcer, speaks in violent bursts of shrapnel, profanities flying in all directions. Schumer, like his native Brooklyn, can be, by turns, charming or downright dangerous, depending on which route delivers him faster to his destination.

Before this midterm election year began, but not long after Dean became party head, Emanuel and Schumer decided that if Dean wasn’t going to raise anywhere near as much money as his rivals at Republican headquarters, then he ought to at least give them whatever resources he could muster. They went to work on Dean, pleading with him to transfer as much as $10 million to the two committees to help them respond to the Republican TV barrage. Emanuel told anyone who would listen that back in 1994, when Republicans sensed a similarly historic mood swing in the electorate, the R.N.C. kicked in something like $20 million in cash to its Congressional committees. (This argument was impressive, but not exactly true; the R.N.C. spent roughly that much on federal and local races combined in 1994, and little, if any, of that money went directly to the committees themselves.) Dean categorically refused to ante up. Having opposed the very idea of targeting a small number of states and races, he wasn’t about to divert money from his long-term strategy — what he calls the “unsexy” work of rebuilding the party’s infrastructure — to pay for a bunch of TV ads in Ohio. He wanted to win the 2006 elections as much as anyone, Dean told them, and he intended to help where he could. But Democratic candidates and their campaign committees were doing just fine on fund-raising, and the party couldn’t continue giving in to the temptation to spend everything it had on every election cycle — no matter how big a checkbook the Republicans were waving around.

For Schumer, Emanuel and their allies, this rejection was irritating enough. When they heard the stories of how Dean was actually spending the party’s cash, however, it was almost more than they could take. Dean was paying for four organizers in Mississippi, where there wasn’t a single close House race, but he had sent only three new hires to Pennsylvania, which had a governor’s race, a Senate campaign and four competitive House races. Emanuel said he was all for expanding the party’s reach into rural states — roughly half the House seats he was targeting were in states like Texas, Indiana and Kentucky, after all — but he wanted the D.N.C. to focus on individual districts that Democrats could actually win, as opposed to just spreading money around aimlessly. The D.N.C. was spending its money not only in Alaska and Hawaii, but in the U.S. Virgin Islands as well. Democratic insiders began to rail against this wacky and expensive 50-state plan. “He says it’s a long-term strategy,” Paul Begala, the Democratic strategist, said during an appearance on CNN in May. “What he has spent it on, apparently, is just hiring a bunch of staff people to wander around Utah and Mississippi and pick their nose.”

The disagreement with Emanuel and Schumer frayed Dean’s already fragile détente with Washington’s Democratic elite. Since coming to Washington, Dean had worked hard to forge a level of trust with Congressional leaders, subjugating some of his more combative impulses. In particular, he had formed what he thought of as a genuine friendship with Harry Reid. Nonetheless, the party’s elected leaders and their legions of consultants remained uneasy about Dean. They suspected, correctly, that he strongly sympathized with outside forces — militant bloggers, disillusioned donors, Moveon.org — that were fomenting rebellion at the grass roots. It didn’t help that Dean’s younger brother, Jim, a onetime salesman who had taken over the PAC Dean started, Democracy for America, was out there proselytizing for insurgent candidates like Paul Hackett, whom Schumer eventually muscled out of a Senate primary in Ohio, and Ned Lamont, who upended Joe Lieberman in Connecticut. While campaign laws prohibited the Dean brothers from coordinating their activities, Washington Democrats assumed that Jim Dean’s job was to carry out the chairman’s subversive wishes.

In separate conversations, Reid and Pelosi each asked Dean — Reid in his quiet way, Pelosi more stridently — to send some money to the two campaign committees. Dean rebuffed them too. But he did promise that the D.N.C. would help with get-out-the-vote campaigns. Emanuel and Schumer then began pressing Dean for a specific field plan — that is, a blueprint for how the D.N.C. would spend money on mobilizing voters, and where. The argument finally exploded during a meeting in May among Dean, Emanuel and Schumer in Dean’s third-floor office at the D.N.C. Emanuel told Dean that the 50-state strategy was a waste of money; Dean shot back that winning elections wasn’t only about TV ads. Emanuel wanted to know what Dean was doing to help in California’s 50th district, where voters were about to hold a special election. When Dean said he had organizers on the ground, Emanuel erupted. “Who?” he demanded. “Tell me their names!” Emanuel, who had a vote at the Capitol, stormed out of the meeting, cursing as he walked down the hall.

By now, the situation had as much to do with clashing egos as it did with the elections. “The issue here is not our field plan,” Dean told me. “The issue is an issue of control. I’m the new guy on the block, and they thought they were going to get me writing the check.” For his part, Emanuel, who had been a pivotal adviser in several national elections (he was the model for the character Josh Lyman in “The West Wing”), seemed annoyed that Dean wouldn’t defer to Democrats with more experience. That Dean raised money by talking about the closeness of the 2006 elections — and then spent much of that money in states that had nothing to do with the midterms — made Emanuel, whose office sits a floor below Dean’s in the D.N.C. building, want to reach through the tile ceiling and throttle him. “I’m for a long-term strategy,” Emanuel told me, “but I don’t see how you have a long-term strategy if you take a historic election and walk away from it.”

What was remarkable about this fight, as it dragged on throughout the summer, was just how public it became, and the extent to which it seemed to be pulling influential Democrats into its vortex. Bren Simon, a wealthy Democratic patron from Indiana who has entertained virtually every leading Democrat at her second home in Washington, told me that she warned Emanuel and Schumer that she wouldn’t write them any more checks if they didn’t stop fighting Dean over his 50-state strategy. Then there was the morning early in the summer when Brazile ran into Emanuel on the steps of the D.N.C. building and started loudly lecturing him about his attacks on the chairman, in full view of party employees. Emanuel protested that he just wanted to win back the House. Two of the Democratic Party’s leading strategists — one who had helped run the White House, the other who had managed a presidential campaign — stood there barking at each other on the street.

Underneath this clash of field plans and alpha personalities lay a deeper philosophical divide over how you go about rebuilding a party — which was really a dispute about cause and effect. Did you expand the party by winning elections, or did you win elections by expanding the party? Most party insiders had long put their faith in elections first, arguing that the best way to broaden the base of the party was to win more races. Schumer said as much in a written statement that his spokesman forwarded to me in response to my questions about his differences with Dean. “Our long-term goal is the same — a strong Democratic Party,” Schumer stated. “But we” — meaning he and Emanuel — “believe that nothing does more to further that goal in 2006, 2008 and beyond than taking back the House and Senate so that we can implement a Democratic platform.”

Recent history, though, would seem to undercut this theory. In the 1990’s, the Democrats won two presidential elections behind a popular leader, and yet the party didn’t grow. In fact, Democrats lost ground at every level of government except the White House and cemented their position as the party of coastal states. Steadily investing in political activity on the local level, as Republicans have done for years, seems to Dean and his allies a more realistic way for Democrats to expand the electoral map than simply trying, every four years, to piece together the same elusive majorities. Of course, every Democrat in Washington says he’s for expanding the party’s efforts beyond the familiar 18 or 20 battleground states, but only Dean, among his party’s leaders, has been willing to argue that there is a choice involved, that you cannot actually invest for the long term unless you’re willing to forgo some short-term priorities.

It takes courage, Dean told me, to try something new in the face of failure, which is why Washington Democrats were resisting his plan. “I think politicians are incredibly risk-averse, especially legislating politicians,” he said. “This is like deciding to go to a psychiatrist — the risk of staying the same has to be greater than the risk of changing. And right now, in the history of the party, that’s exactly where we are. The risk of doing nothing, the same old thing, is enormous. The risk of trying something new is much smaller. The risk of the 50-state strategy is much smaller than if we continue to do what we’ve been doing.”

But you can accept Dean’s premise and still wonder whether his 50-state strategy is really the best way to go about building the party. Even some Democrats who support Dean’s larger vision have doubts about whether he has built enough accountability into his model for financing state parties. Republicans, as I saw firsthand in Ohio during the 2004 campaign, demand certain metrics of their local organizers. Field workers are expected to sign up so many new voters, or knock on so many doors, by a given date, and people who don’t meet their quotas and deadlines can find themselves replaced — even if they’re volunteers. Republican staffs in the states are required to take part in an unrelenting succession of conference calls with Washington.

By contrast, Jonathan Teeters, the 25-year-old activist I met in Anchorage, told me that he wished he spoke more often with his superiors at the D.N.C. “It’s kind of an as-needed thing,” he said. “As far as I can tell, they trust me to get it done. As long as I’m staying in contact, and as long as we’re having success, that’s how they know we’re getting it done.” When I asked Teeters how he knew if he was having success, he mentioned having attracted several hundred people to “Democratic reunion” barbecues across the state. “The first thing we have to do is create this energy, so people know we’re here and we’re active,” he said.

Dean has no illusions that the 50-state strategy will succeed in every state. “They’re going to make terrible mistakes — I know that,” Dean said. “You never make changes without people making mistakes.” He said he had visited 46 states as chairman, and each time he goes into a state, he gets some sense of the progress on the ground.

Unlike past chairmen, who mostly traveled to see donors and do some interviews, Dean spends a fair amount of time visiting party offices and mingling with grass-roots activists. His trips to more rural, conservative states, however, the kind of places where a sizable segment of voters go to church and follow Nascar, also raise some complicated issues for his fellow Democrats. Dean is treated like a Beatle by rank-and-file activists who have rarely seen a party leader in their midst, but for the rest of the country, Dean is that lefty who howled on national TV. Some Democratic governors and candidates have avoided Dean when he has been in town, for fear that their opponents would portray them as extremists. Which underscores the peculiar situation of Dean and his 50-state plan: he is the one guy in Washington determined to deliver the Democratic message to every part of the country, but as it turns out, he is also a guy from whom much of America doesn’t want to hear it.

It’s not that Dean doesn’t try his damnedest to make himself palatable to culturally conservative voters. Acting on the advice of Cornell Belcher, his young pollster, he has taken to framing his positions in terms of faith and values, sometimes so transparently that it can make you wince. In Las Vegas, I heard Dean, who is not known to be a religious man, say to a Latino audience, “I don’t expect the church to come out for gay marriage, but I do expect that we could say on an issue like this, ‘What would Jesus do?’ Equal rights under the law is not something that can be abridged by the Democratic Party, because it’s really the law under Jesus Christ.” The audience stared at him a little blankly, as you might stare at your mechanic if he rolled out from underneath your car and suddenly started speaking Latin.

Fairly or not, Dean has come to embody a species of Democrat that a lot of Americans of both parties find off-putting: the 60’s antiwar liberal, reborn with a laptop and a Prius. On the day we landed in Anchorage, Tony Knowles, the former Democratic governor of the state, had just announced that he would run to reclaim the post. This was exciting news for Dean, since he and Knowles had served as governors together, and the two men would be attending the fund-raiser that night. But while we were at the party headquarters, the state party’s executive director cautioned Dean, gingerly, that he should probably avoid getting too close to Knowles in public or saying nice things about him from the lectern. “I think he’d prefer to distance himself from the national party as much as he can,” the executive director, Mike Coumbe, said.

Later that night, at the fund-raiser, I approached Knowles and asked him if it was true that he felt he needed to put some space between himself and his old friend. “I think they’re about to introduce me,” Knowles said, glancing helplessly toward the front of the room. “I do want to answer your question.” But he was already backing away.

When Dean and I last spoke, in August, I wondered aloud if the entire 50-state program wasn’t, in a very basic way, inconsistent with the larger philosophy that guided his 2004 campaign. If Dean believed in disintermediation, then why was he spending so much money to strengthen the intermediaries? Weren’t the state parties essentially just middlemen between the voters and the Democratic National Committee? What Dean seemed to be creating was a multilevel field organization modeled after the political machines of the 20th century rather than a new party that fostered direct communication between local activists and their leaders in Washington.

“That would be true if we thought we had to be centralized,” Dean replied, raising an index finger. In fact, he went on, the Democratic Party needed to be decentralized, so that grass-roots Democrats built relationships with their state parties but had little to do with Washington at all. “State parties are not the intermediaries,” he said. “If I get them trained right, they’re the principals.”

In other words, I suggested, he was talking about “devolving” the national Democratic Party, in the same way that Reagan and other conservative ideologues had always talked about devolving the federal government and returning power to the states. “That’s what I want to do,” Dean said firmly.

This struck me as a radical idea, and one that went to the heart of what Howard Dean is really thinking. Now that Dean has wrested control of the national party, his real agenda, it seems, is to radically reduce its relevance, in the same way that Grover Norquist and his crowd of conservative activists talk about “starving the beast” of the federal government they now control. Once you understand that, it’s easy to understand why Dean isn’t troubled by having less cash in the bank than people think he should, and why he isn’t concerned about quantifying the success of the state parties he’s financing. In Dean’s mind, every dollar that goes to Alaska or Mississippi, or even to the Virgin Islands, even if it isn’t perfectly utilized, is a dollar that isn’t going into the pockets of the Washington syndicate of admen and pollsters who seem to profit more from each election cycle. And that is an end in itself. By shipping the party’s money out of Washington as fast as he can collect it, Dean is trying to finish what he started three years ago — namely, the slow dismantling of the Democratic establishment.

This philosophical shift is bound to have consequences for the party’s next presidential nominee. Dean argues that the 50-state strategy is actually going to broaden the playing field in 2008. By the time the next nominee is crowned, he says, a field network will already be in place, covering most of the counties and precincts in the United States; flip a switch, and the whole grid will light up with activity, from Baton Rouge to Boise. More than that, rebuilding Democratic ground operations in more states will force Republicans and their nominee to spend millions more dollars in states that the G.O.P. usually takes for granted, dollars that would otherwise be spent in the Midwest or in the pivotal Sun Belt.

This makes some sense, but it’s also true that presidential candidates have long relied on the D.N.C. to do two simple things: underwrite TV ads and coordinate extensive field operations in a handful of perennially contested states — so much so, in fact, that every four years the nominee essentially takes over the D.N.C., installing his own strategists and fund-raisers to provide his campaign with air and ground support. If Dean isn’t going to relinquish control or shift his resources into battleground states, the next nominee could find himself (or herself) outspent by the R.N.C. — and as piqued at Dean as Emanuel and Schumer are now.

Some more conspiracy-minded Democrats discern in Dean’s decentralizing strategy the careful machinations of a shrewd and ambitious politician. After all, if the party’s nominee loses in 2008, doesn’t that set up Dean, potentially, as the grass-roots choice for 2012? Might this be his real plan — to strengthen the local activists who form his natural base of support while at the same time weakening the Washington apparatus that once tried to derail him?

It’s an elegant theory, but it doesn’t take into account Dean’s basic ambivalence about electoral politics. In Vermont, Dean, a physician and part-time lieutenant governor, turned down the chance to run for governor, taking office only when his predecessor died. He may have run for president, as a former confidant of his once explained it to me, largely because he was looking for something to do. Dean has never evidenced much in the way of Machiavellian ambition. It’s easier to imagine him, several years from now, serving as a cabinet secretary, or maybe running a university, than it is to see him rising up to lead a Reagan-like revolt at a raucous nominating convention in Las Vegas or Phoenix.

The more immediate question is whether Dean can make it through this year’s elections without becoming a permanent pariah among his party’s elected leadership. In September, Dean finally reached a compromise of sorts with Rahm Emanuel. In the end, Dean didn’t pull any money from his 50-state strategy, nor did Emanuel successfully pressure him into giving cash directly to the campaign committees. Dean agreed, in principle, to pour about $2.6 million, on top of what he was spending on the 50-state strategy, into field operations in 40 targeted districts. “We have a basic understanding,” Emanuel told me, sounding more peeved than conciliatory. “It’s not everything I want, but I don’t have any time to waste anymore, and I’m not waiting for Godot. I’ve got to get going.” The deal seemed to ensure, at least, that Dean and the House campaign committee would be able to work in harmony through the elections. (Peace talks with Schumer were still going on.) It did not, however, do anything to resolve the underlying, more intractable disagreement: the importance of winning elections versus a longer-term investment in state parties. That argument will continue well after November’s referendum on Republican rule.

Nor, clearly, did the compromise do much to endear Dean to his critics in Washington. “I’m not going to be on his holiday mailing list, and he’s not going to be on my holiday mailing list,” Emanuel snapped at one point during our conversation about Dean. “But this isn’t about him or me.” If Democrats fall short of retaking the House of Representatives in November, the party’s elected leaders will almost certainly blame Dean for the near miss. They will say that he squandered their best chance in more than a decade to control the country. They will say it proves that Dean’s risky strategy has badly hurt the party.

And yet, you could make a compelling argument that anything short of total victory in November would prove precisely the opposite. With polls consistently showing voters to be deeply nervous about a protracted war, high gas prices and stunted wages, this is that rare election that should turn less on tactics than on fundamental choices about the direction of the country; in other words, this election season is about the fear and fury of the electorate, not the addition of a few more door-knockers in New Haven or some negative 30-second spot broadcast in Columbus. As the Democratic strategist James Carville told Al Hunt, the Bloomberg News columnist, in August, “If we can’t win in this environment, we have to question the whole premise of the party.”

Most analysts in both parties now believe that Democrats have better-than-even odds of winning at least the House. But if they don’t, rather than dissect the mechanical failures that cost them a few thousand votes here or there, Democrats might be forced to admit, at long last, that there is a structural flaw in their theory of party-building. Even a near miss, at a time of such overwhelming opportunity, would suggest that a national party may not, in fact, be able to win over the long term by fixating on a select group of industrial states while condemning entire regions of the country to what amounts to one-party rule. Which would mean that Howard Dean is right to replant his party’s flag in the towns and counties along America’s less-traveled highways, even if his plan isn’t perfect, and even if he isn’t the best messenger to carry it out. As another flawed visionary, the filmmaker Woody Allen, once put it, 80 percent of success is just showing up.

Matt Bai, a contributing writer for the magazine, is at work on a book about the future of the Democratic Party.

The Myth of Prodigy and Why It Matters

By Eric Wargo, Observer Staff Writer

Judging from his boyish appearance and his voracious curiosity, it’s easy to imagine Malcolm Gladwell as some sort of child prodigy. And he was. But not the way you imagined.

As a teenager growing up in rural Ontario, the bestselling author of Blink and The Tipping Point was a champion runner, the number-one Canadian runner of his age. He was encouraged to dream of Olympic gold, and indeed was flown to special training camps with the other elite runners of his generation — on the assumption that creating future world-class athletes meant recognizing and nurturing youthful talent.

Precocity was the subject of Gladwell’s “Bring the Family Address” at this year’s APS Convention, and the account of his own early athletic success served as a springboard. “I was a running prodigy,” he said bluntly. But — and this “but” sounded the theme of his talk to the rapt audience filling the Marquis Marriott’s Broadway Ballroom — being a prodigy didn’t forecast future success in running. After losing a major race at age 15, then enduring other setbacks and loss of interest, Gladwell said, he gave up running for a few years. Taking it up again in college — with the same dedication as before — he faced a disappointing truth: “I realized I wasn’t one of the best in the country … I was simply okay.”

The fall from childhood greatness to a middling state of “simply okay” is, Gladwell suggested, a recurring theme when the cherished notion of precocity is subjected to real scrutiny.

“I think we take it as an article of faith in our society that great ability in any given field is invariably manifested early on, that to be precocious at something is important because it’s a predictor of future success,” Gladwell said. “But is that really true? And what is the evidence for it? And what exactly is the meaning and value of mastering a particular skill very early on in your life?”

There are two ways of answering these questions. One is simply to track the achievements of precocious kids. Gladwell cited a mid-1980s study (Genius Revisited) of adults who had attended New York City’s prestigious Hunter College Elementary School, which admits only children with an IQ of 155 or above. The school was founded in the 1920s to be a training ground for the country’s future intellectual elite. Yet the fate of its child-geniuses was, well, “simply okay.” Thirty years down the road, the Hunter alums in the study were doing pretty well: they were reasonably well adjusted and happy, most had good jobs, and many had graduate degrees. But Gladwell was struck by what he called the “disappointed tone of the book”: none of the Hunter alums were superstars or Nobel or Pulitzer Prize winners; none were nationally known in their fields. “These were genius kids but they were not genius adults.”

A similar pattern emerged when Gladwell examined his own cohort of elite teen runners in Ontario. Of the 15 nationally ranked runners in his age class at age 13 or 14, only one was still a top runner in his running prime, at age 24. Indeed, the number-one miler at age 24 was someone Gladwell had known as one of the poorer runners when they were young — Doug Consiglio, a “gawky kid” of whom all the other kids asked, “Why does he even bother?”

Precociousness is a slipperier subject than we ordinarily think, Gladwell said. And the benefits of earlier mastery are overstated. “There are surprising numbers of people who either start good and go bad or start bad and end up good.”

Gifted Learning vs. Gifted Doing

The other way to look at precocity is of course to work backward — to look at adult geniuses and see what they were like as kids. A number of studies have taken this approach, Gladwell said, and they find a similar pattern. A study of 200 highly accomplished adults found that just 34 percent had been considered in any way precocious as children. He also read a long list of historical geniuses who had been notably undistinguished as children — a list including Copernicus, Rembrandt, Bach, Newton, Beethoven, Kant, and Leonardo Da Vinci (“that famous code-maker”). “None of [them] would have made it into Hunter College,” Gladwell observed.

We think of precociousness as an early form of adult achievement, and, according to Gladwell, that concept is much of the problem. “What a gifted child is, in many ways, is a gifted learner. And what a gifted adult is, is a gifted doer. And those are quite separate domains of achievement.”

To be a prodigy in music, for example, is to be a mimic, to reproduce what you hear from grown-up musicians. Yet only rarely, according to Gladwell, do child musical prodigies manage to make the necessary transition from mimicry to creating a style of their own. The “prodigy midlife crisis,” as it has been called, proves fatal to all but a handful of would-be Mozarts. “Precociousness, in other words, is not necessarily or always a prelude to adult achievement. Sometimes it’s just its own little discrete state.”

Early acquisition of skills — which is often what we mean by precocity — may thus be a misleading indicator of later success, said Gladwell. “Sometimes we call a child precocious because they acquire a certain skill quickly, but that skill turns out to be something where speed of acquisition is not at all important. … We don’t say that someone who learned to walk at four months is a better walker than the rest of us. It’s not really a meaningful category.”

Reading may be like walking in this respect. Gladwell cited one study comparing French-speaking Swiss children, who are taught to read early, with German-speaking Swiss children, who are taught to read later but show far fewer learning problems than their French-speaking counterparts; he also mentioned other research finding little if any correlation between early reading and ease or love of reading at later ages.

When we call a child “precocious,” Gladwell said, “we have a very sloppy definition of what we mean. Generally what we mean is that a person has an unusual level of intellectual ability for their age.” But adult success has to do with a lot more than that. “In our obsession with precociousness we are overstating the importance of being smart.” In this regard, Gladwell noted research by Carol Dweck and Martin Seligman indicating that different dimensions such as explanatory styles and attitudes and approaches to learning may have as much to do with learning ability as does innate intelligence. And when it comes to musicians, the strongest predictor of ability is the same mundane thing that gets you to Carnegie Hall: “Really what we mean … when we say that someone is ‘naturally gifted’ is that they practice a lot, that they want to practice a lot, that they like to practice a lot.”

So what about the ur-child-prodigy, Mozart? Famously, Mozart started to compose music at age four; by six, he is supposed to have traveled around Europe giving special performances with his father, Leopold. “He is of course the great poster child for precociousness,” Gladwell said. “More Upper West Side adults have pointed to Mozart, I’m quite sure, as a justification for sending their kids to excruciating early music programs, than almost any other historical figure.”

Yet Gladwell deftly debunked the Mozart myth. “First of all, the music he composes at four isn’t any good,” he stated bluntly. “They’re basically arrangements of works by other composers. And also, rather suspiciously, they’re written down by his father. … And Leopold, it must be clear, is the 18th-century equivalent of a little league father.” Indeed Wolfgang’s storied performing precocity was exaggerated somewhat by his father’s probable lying about his age. (“Mozart was the Danny Almonte of his time,” Gladwell quipped, referring to the Bronx little league pitcher whose perfect game in 2001 was thrown out of the record books when it was revealed that he was 14, not 12, and thus too old for little league.)

But most important, the young Mozart’s prowess can be chalked up to practice, practice, practice. Compelled to practice three hours a day from age three on, the young Wolfgang had logged an astonishing 3,500 hours by age six — “three times more than anybody else in his peer group. No wonder they thought he was a genius.” So Mozart’s famous precociousness as a musician reflected not innate musical ability but rather a capacity for hard work, and circumstances (i.e., his father) that pushed him to work.
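The arithmetic behind that figure is easy to check. The back-of-envelope sketch below is mine, not Gladwell's; it simply assumes roughly three hours of daily practice over a little more than three years:

```python
# Back-of-envelope check of the practice-hours claim (illustrative only).
# Assumes about three hours of practice every day from age three to a bit
# past age six; the exact span is my assumption, not a figure from the talk.
hours_per_day = 3
years = 3.2  # roughly age three to a little past six

total_hours = hours_per_day * 365 * years
print(f"Approximate practice hours by age six: {total_hours:,.0f}")
# Comes to about 3,500 hours, consistent with the figure quoted in the talk.
```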

“That is a very different definition of precociousness than I think the one that we generally deal with.”

A better poster child for what precociousness really entails, Gladwell hinted, may thus be the famous intellectual late-bloomer, Einstein. Gladwell cited a biographer’s description of the future physicist, who displayed no remarkable native intelligence as a child but whose success seems to have derived from certain habits and personality traits — curiosity, doggedness, determination — that are the less glamorous but perhaps more essential components of genius.

Precocious is Pernicious

Our romanticized view of precociousness matters. When certain kids are singled out as gifted or talented, Gladwell suggested, it creates an environment that may be subtly discouraging to those who are just average. “In singling out people like me at age 13 for special treatment, we discouraged other kids from ever taking up running at all. And we will never know how many kids who might have been great milers had they been encouraged and not discouraged from joining running, might have ended up as being very successful 10 years down the road.”

Although Gladwell acknowledged the wisdom of wanting to provide learning environments suited to different paces of achievement, he suggested that “that very worthy goal is overwhelmed by … our irresistible desire to look at precociousness as a prediction.”

“We thought that Doug Consiglio was a runner without talent,” he said, returning to his earlier example. “But what if he just didn’t take running seriously until he was 16 or 17? What if he suddenly found a coach who inspired him?” Predictions from childhood about adult performance can only be made based on relatively fixed traits, he said. “Unfortunately … many of the things that really matter in predicting adult success are not fixed at all. And once you begin to concede the importance of these kinds of non-intellectual, highly variable traits, you have to give up your love of precociousness.”

Gladwell concluded his talk with a story he said his brother, an elementary school principal, likes to tell — “the story of two buildings. One is built ahead of schedule, and one is being built in New York City and comes in two years late and several million dollars over budget. Does anyone really care, 10 years down the road, which building was built early and which building was built late? … But somehow I think when it comes to children we feel the other way, that we get obsessed with schedules, and not with buildings. I think that’s a shame. … If you want to know whether a 13-year-old runner will be a good runner when they’re 23, you should wait until they’re 23.”

Should We Worry about Overpopulation?--Posner

 

This posting is stimulated by comments made about my passing reference to overpopulation in Subsaharan Africa in the recent blog on DDT and by an article in the Wall Street Journal last week called "The Coming Crunch" that notes with concern a prediction that the population of the United States will reach 400 million in 35 years.

 

Concerns about overpopulation are ridiculed by conservatives because of the mistaken predictions made by Paul Ehrlich in his book The Population Bomb (not to mention by Thomas Malthus!) and by other anticapitalists since the first Earth Day (1970), and they are now out of favor with liberals as well, because the only way to slow or stop the growth of the U.S. population is by curtailing immigration (e.g., the "fence"). Although I have been strongly critical of the shoddy arguments of Ehrlich and other doomsters (in my book Public Intellectuals), I believe that overpopulation is a serious issue and deserves dispassionate analysis. Just because the problem of overpopulation has been exaggerated in the past doesn’t mean it is not a problem today. The future may not resemble the past. The belief that the mistakes of Malthus, Ehrlich, and other past prophets of doom show that current concerns with overpopulation are unfounded is on a par with the belief that we shouldn't worry about terrorism because many fewer Americans have been killed by terrorists than in automobile accidents. Such arguments confuse frequencies (the past) with probabilities (the future).

 

Economists stress the "demographic transition," that is, the tendency of the birth rate to decline steeply as a nation becomes wealthier. But not all nations experience significant economic growth, and even where growth occurs it tends, outside Europe and Japan, not to drive the rate of population growth to zero or below. Most demographers forecast that world population, currently somewhat more than 6 billion, will rise to between 9 and 14 billion by mid-century.

 

I shall address six questions: (1) what are the costs of population increase to the country in which the increase occurs; (2) what are those costs if that country is the United States; (3) what are the costs to other countries; (4) what are the benefits to the country in which the increase occurs; (5) what are the benefits to other countries; and (6) when the costs exceed the benefits, what, if anything, should be done to slow or arrest population growth?

 

 

1. If the arable and otherwise inhabitable parts of a poor country are densely populated, increased population will result in significantly higher costs of food and other agricultural products by requiring more intensive cultivation, or cultivation of poor soil. It will also increase the cost of water, and time spent in commuting and other transportation. This seems to be the situation in India and much of Africa. And notice that China, though it is en route to becoming a wealthy country, has not abandoned its "one child" policy. That policy is an inefficient method of limiting population growth, but is evidence that China does have a problem of overpopulation. Surely India does as well, though like China its economic output is growing rapidly.

 

2. The United States is not densely populated, but that is only when density is computed on a nationwide basis, i.e., if the total population is divided by the total area of the country. Particular areas, mainly coastal (including the Great Lakes coasts), are densely populated, and further population increases in those areas would increase commuting times, which have lengthened in recent years, and in some of these areas (such as California and Arizona) would place strains on the water supply. In principle, however, these problems can be solved by pricing, including greater use of toll roads. Increased commutes impose environmental costs, but tolls could be based on those costs.

 

3. The greatest costs of further population increases are likely to be costs external to individual countries and therefore extremely difficult to control by taxation or other methods of pricing "bads," because most of the benefits of these measures would be reaped by other countries. These are environmental costs, mainly global warming and loss of biodiversity, about which I have written at length in my book Catastrophe: Risk and Response (Oxford University Press, 2004). Of course, population growth per se does not increase global warming, but the burning of forests and, most important, of fossil fuels does, and these activities are positively correlated with population. Not only is it now the scientific consensus that global warming is a serious problem, but its adverse effects are appearing sooner than expected; it is by no means certain that a technological fix will be devised and implemented before the effects of global warming become catastrophic.

 

4. Population growth in productive societies increases the society's total output and hence its geopolitical power. It also has a positive effect on innovation by increasing the size of markets. Innovation involves a high ratio of fixed to variable costs (it costs hundreds of millions of dollars to develop a new drug, yet once it is developed, the drug may be very cheap to produce), so the larger the market for the innovative product or process the likelier are the fixed costs of invention to be recouped in sales revenues.
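To make the market-size point concrete, here is a minimal sketch of the fixed-cost arithmetic; the numbers are invented for illustration and are not drawn from the post:

```python
# Illustrative only: all figures are invented, not taken from the post.
# A product with a large fixed development cost and a small marginal cost
# needs a minimum number of buyers before the fixed cost is recouped.
fixed_cost = 500_000_000      # hypothetical development cost, dollars
price = 12.0                  # hypothetical price per unit
marginal_cost = 2.0           # hypothetical cost of producing one more unit

break_even_units = fixed_cost / (price - marginal_cost)
print(f"Units needed to recoup the fixed cost: {break_even_units:,.0f}")

# A larger population means more potential buyers at the same price,
# so the break-even threshold is easier to clear and the incentive to
# undertake the fixed investment is stronger.
for buyers in (20_000_000, 60_000_000):
    profit = buyers * (price - marginal_cost) - fixed_cost
    print(f"{buyers:,} buyers -> net of fixed cost: ${profit:,.0f}")
```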

 

Some people also believe that the larger the population, the more innovators there will be, assuming that a fixed percentage of the population consists of innovators, whatever the size of the population. This is a questionable argument for population growth, as it ignores the fact that a fixed percentage of the population presumably also consists of potential Hitlers and Stalins and Pol Pots, and thus the absolute number of these monsters grows with population growth. Moreover, a population increase that is due to a higher birth rate (as distinct from immigration) increases the number of young people in a society, who are impressionable and therefore more likely than older people to be drawn to extremist politics, including terrorism. In addition, greater competition among innovators may reduce the potential returns to each innovator by increasing the number of simultaneous innovations, and may thus reduce the incentives to innovate.

 

The relationship between aggregate population and creativity seems in any event very loose. The citizen population of Athens in the fifth and fourth centuries B.C. was roughly 25,000, yet it produced intellectual and artistic works that dwarf those of entire continents. Furthermore, technological growth currently favors destructive over beneficial technologies. The increasing lethality and availability of weapons of mass destruction--the proliferation problem--have a greater short-term downside than benign inventions have an upside, especially since much innovative activity is focused on increasing longevity, and thus population. Policies that accelerate the rate of technological advance are dangerous unless the advance can somehow be channeled into productive forms. It cannot be.

 

A dubious benefit of population growth is that it lowers the average age of the population and therefore the burden of the elderly. That is a Ponzi scheme rationale for encouraging growth of population, since as soon as the growth ceases, the average age will shoot up--especially if it is correct that population growth increases the rate of medical innovation and thus the life span!

 

5. An increase in one nation's power reduces the power of other nations; so there is again a negative externality. The increase in the world's Muslim population is a negative externality for non-Muslim nations, especially the European nations, with their shrinking or about-to-start-shrinking populations. But by the same token an increase in the non-Muslim population of Europe would probably be a boon for the European nations. And an increase in the rate of innovation in one nation will benefit other nations unless intellectual-property laws are extremely strict (which would have its own negative economic effects).

 

6. If, apart from poor countries, the major costs of population growth are external to the particular nations in which population is growing, there is very little that can be done, given the weakness of international institutions, which is due in turn to the number and diversity of nations that have to be coordinated for effective action against global problems. Moreover, limits on immigration do not reduce global population growth and thus do not respond to the global-warming problem. Rich countries, however, can aid poor countries to reduce their rate of population increase by encouraging family planning and, in particular, female education, since educated women have higher opportunity costs of fertility, and hence fewer children, than uneducated ones. Where, as in the United States, the costs of population increase are concentrated in particular places (whether geographical areas or highway corridors), those costs can be neutralized by raising prices in proportion to density, through taxation or other methods of pricing negative externalities.

 

Posted by Richard Posner at 08:02 PM

Comment on Overpopulation--BECKER

 

Posner makes as good a case as can be made for worrying a lot about overpopulation, but I do not believe the case is good enough. I will argue that at this time, in the United States and most other parts of the world, greater population has greater benefits than costs. I will to some extent be reiterating arguments I made in my blog posting on October 3, 2005.

 

In considering the effects of greater population it is important to distinguish clearly between more rapid population growth and larger population levels. I start with an evaluation of population growth rates. Under the present system of financing Social Security and medical care for the elderly, faster population growth helps, since it increases the number of working individuals relative to the number of retired persons, and taxes on workers provide the revenue that finances the spending and care of retirees. So with greater numbers of younger persons relative to older persons, tax revenues would rise relative to payouts to the elderly. To be sure, I have argued in previous blog postings for a different system of financing income and health care for the elderly, but until we get those reforms, additional younger persons help reduce the burden of the elderly. Although the present system has clear flaws, it is not a Ponzi scheme in the sense that it could continue for many, many generations if there are enough younger persons with incomes to be taxed.
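A minimal sketch of that pay-as-you-go arithmetic, using invented numbers rather than anything in the post, shows how the required payroll-tax rate falls as the worker-to-retiree ratio rises:

```python
# Illustrative pay-as-you-go arithmetic; all numbers are invented.
# In a pay-as-you-go system, payroll taxes on current workers must cover
# current benefits: tax_rate * average_wage * workers = benefit * retirees.
def required_tax_rate(workers: float, retirees: float,
                      average_wage: float = 50_000.0,
                      benefit: float = 20_000.0) -> float:
    """Payroll-tax rate needed to balance benefits with contributions."""
    return (benefit * retirees) / (average_wage * workers)

# Faster population growth means more workers per retiree,
# which lowers the tax rate needed to support the same benefit.
for workers_per_retiree in (2.5, 3.5):
    rate = required_tax_rate(workers=workers_per_retiree, retirees=1.0)
    print(f"{workers_per_retiree} workers per retiree -> "
          f"required tax rate ~ {rate:.1%}")
```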

 

Younger persons also produce a disproportionate share of new ideas and products, whether in science, business, or the arts. Declines in their numbers, absolute and even relative, lead to more stagnant societies. These innovations have been good for economies and culture, unless one believes that the typical person in the world was better off 250 years ago.

 

Population grows faster in a country mainly if either fertility is higher or more people immigrate into the country. Both contribute to an increase in the number of younger persons, although the effects of fertility on the number of working individuals are delayed. Immigration has an immediate effect, since most immigrants are young and of working age, but in most countries there is opposition to large numbers of immigrants. Higher fertility will tend to reduce how much parents and societies invest in each younger person, because the total cost of these investments becomes greater when there are more children to invest in. This is a serious consideration for many African countries, or for Asian countries like Bangladesh, with very high birth rates, but it is much less important in Europe, Japan, or China, where birth rates are low. Even in the United States the typical family has a little less than two children, so the trade-off with investment per child is not a big factor here either.


Although, of course, faster population growth will lead to larger populations, the effects of population levels differ from the effects of population growth. I believe there are two fundamental positive aspects of larger populations. The greater the population, the larger the market for new products, such as medical drugs, iPods, and other high-tech innovations, and for still other new products that depend on larger markets. This has been convincingly demonstrated in studies of pharmaceutical innovation: for example, the larger the number of elderly persons, the more new drugs are developed to treat diseases of the elderly (see, e.g., Acemoglu and Linn, “Market Size in Innovation: Theory and Evidence from the Pharmaceutical Industry,” Quarterly Journal of Economics, August 2004).


In addition, the larger the population, the greater the scope for the division of labor, either within a country or worldwide when considering world population levels. It might seem that with 6 billion persons on the earth, there is more than enough population for the finest degree of specialization and division of labor. However, the growth of global trade has made the gains from increasing degrees of specialization and trade much greater than in the past. Outsourcing and the rapid growth of China and India are just examples of this development.


The advantages of greater population are more questionable for poor, densely populated countries with high birth rates. Bangladesh, Pakistan, and some African nations fit this description. Yet I would not overemphasize this point, since India, a rather densely populated country with only limited high-quality land and other natural resources, showed that it could grow rapidly once it reformed its economic policies. So I am doubtful that India's large and rapidly growing population has, in the past, hindered its growth in per capita incomes or improvements in the health of the average Indian family.


To be sure, the main focus nowadays of the opponents of greater population is its effect on the environment, both within nations and globally through greenhouse warming and other forms of global pollution. It is interesting how the arguments of Malthusians and neo-Malthusians have shifted over time as each of their predictions bit the dust. Yet while these falsified predictions make one alert to the dubious assumptions behind many Malthusian-like arguments, they do not mean there is no reason to be concerned about harmful environmental effects.


Clearly, with per capita income, technologies, and pricing held fixed, greater population would lead to increased congestion and the emission of more harmful pollutants. But there is no reason to believe that these variables will be held fixed. Per capita income will be growing, and given my arguments above, perhaps even faster with larger populations. Then the so-called environmental Kuznets curve will kick in. This curve summarizes a well-documented empirical relation: as a country's income begins to grow, its environment at first gets worse; then the environment improves as the country spends more on reducing pollutants and develops better technologies for doing so.


My argument above also suggests that technologies to control pollution are likely to improve as population grows, whether in a single country or worldwide, because the market for these technologies, from both the private sector and governments, would expand. The error made in many of the scariest environmental scenarios is the implicit assumption that technologies are held fixed as population and other variables of environmental concern increase. In fact, technologies progress rapidly in the modern world, and more rapidly the larger the population and the higher per capita incomes are. So while I am not claiming to have disposed of the many legitimate environmental concerns about greater population, I do believe that they are considerably exaggerated by neglecting the Kuznets curve and the effects of exogenous and induced technological advances.


THE FORMULA

by MALCOLM GLADWELL

What if you built a machine to predict hit movies?

Issue of 2006-10-16
Posted 2006-10-09

One sunny afternoon not long ago, Dick Copaken sat in a booth at Daniel, one of those hushed, exclusive restaurants on Manhattan’s Upper East Side where the waiters glide spectrally from table to table. He was wearing a starched button-down shirt and a blue blazer. Every strand of his thinning hair was in place, and he spoke calmly and slowly, his large pink Charlie Brown head bobbing along evenly as he did. Copaken spent many years as a partner at the white-shoe Washington, D.C., firm Covington & Burling, and he has a lawyer’s gravitas. One of his best friends calls him, admiringly, “relentless.” He likes to tell stories. Yet he is not, strictly, a storyteller, because storytellers are people who know when to leave things out, and Copaken never leaves anything out: each detail is adduced, considered, and laid on the table—and then adjusted and readjusted so that the corners of the new fact are flush with the corners of the fact that preceded it. This is especially true when Copaken is talking about things that he really cares about, such as questions of international law or his grandchildren or, most of all, the movies.

Dick Copaken loves the movies. His friend Richard Light, a statistician at Harvard, remembers summer vacations on Cape Cod with the Copakens, when Copaken would take his children and the Light children to the movies every day. “Fourteen nights out of fourteen,” Light said. “Dick would say at seven o’clock, ‘Hey, who’s up for the movies?’ And, all by himself, he would take the six kids to the movies. The kids had the time of their lives. And Dick would come back and give, with a completely straight face, a rigorous analysis of how each movie was put together, and the direction and the special effects and the animation.” This is a man who has seen two or three movies a week for the past fifty years, who has filed hundreds of plots and characters and scenes away in his mind, and at Daniel he was talking about a movie that touched him as much as any he’d ever seen.

“Nobody’s heard of it,” he said, and he clearly regarded this fact as a minor tragedy. “It’s called ‘Dear Frankie.’ I watched it on a Virgin Atlantic flight because it was the only movie they had that I hadn’t already seen. I had very low expectations. But I was blown away.” He began, in his lawyer-like manner, to lay out the plot. It takes place in Scotland. A woman has fled an abusive relationship with her infant son and is living in a port town. The boy, now nine, is deaf, and misses the father he has never known. His mother has told him that his father is a sailor on a ship that rarely comes to shore, and has suggested that he write his father letters. These she intercepts, and replies to, writing as if she were the father. One day, the boy finds out that what he thinks is his father’s ship is coming to shore. The mother has to find a man to stand in for the father. She does. The two fall in love. Unexpectedly, the real father reëmerges. He’s dying, and demands to see his son. The mother panics. Then the little boy reveals his secret: he knew about his mother’s ruse all along.

“I was in tears over this movie,” Copaken said. “You know, sometimes when you see a movie in the air you’re in such an out-of-body mood that things get exaggerated. So when I got home I sat down and saw it another time. I was bawling again, even though I knew what was coming.” Copaken shook his head, and then looked away. His cheeks were flushed. His voice was suddenly thick. There he was, a buttoned-down corporate lawyer, in a hushed restaurant where there is practically a sign on the wall forbidding displays of human emotion—and he was crying, a third time. “That absolutely hits me,” he said, his face still turned away. “He knew all along what the mother was doing.” He stopped to collect himself. “I can’t even retell the damn story without getting emotional.”

He tried to explain why he was crying. There was the little boy, first of all. He was just about the same age as Copaken’s grandson Jacob. So maybe that was part of it. Perhaps, as well, he was reacting to the idea of an absent parent. His own parents, Albert and Silvia, ran a modest community-law practice in Kansas City, and would shut down their office whenever Copaken or his brother had any kind of school activity or performance. In the Copaken world, it was an iron law that parents had to be present. He told a story about representing the Marshall Islands in negotiations with the U.S. government during the Cold War. A missile-testing range on the island was considered to be strategically critical. The case was enormously complex—involving something like fifty federal agencies and five countries—and, just as the negotiations were scheduled to begin, Copaken learned of a conflict: his eldest daughter was performing the lead role in a sixth-grade production of “The Wiz.” “I made an instant decision,” Copaken said. He told the President of the Marshall Islands that his daughter had to come first. Half an hour passed. “I get a frantic call from the State Department, very high levels: ‘Dick, I got a call from the President of the Marshall Islands. What’s going on?’ I told him. He said, ‘Dick, are you putting in jeopardy the national security of the United States for a sixth-grade production?’ ” In the end, the negotiations were suspended while Copaken flew home from Hawaii. “The point is,” Copaken said, “that absence at crucial moments has been a worry to me, and maybe this movie just grabbed at that issue.”

He stopped, seemingly dissatisfied. Was that really why he’d cried? Hollywood is awash in stories of bad fathers and abandoned children, and Copaken doesn’t cry in fancy restaurants every time he thinks of one of them. When he tried to remember the last time he cried at the movies, he was stumped. So he must have been responding to something else, too—some detail, some unconscious emotional trigger in the combination of the mother and the boy and the Scottish seaside town and the ship and the hired surrogate and the dying father. To say that he cried at “Dear Frankie” because of that lonely fatherless boy was as inadequate as saying that people cried at the death of Princess Diana because she was a beautiful princess. Surely it mattered as well that she was killed in the company of her lover, a man distrusted by the Royal Family. Wasn’t this “Romeo and Juliet”? And surely it mattered that she died in a tunnel, and that the tunnel was in Paris, and that she was chased by motorbikes, and that she was blond and her lover was dark—because each one of those additional narrative details has complicated emotional associations, and it is the subtle combination of all these associations that makes us laugh or choke up when we remember a certain movie, every single time, even when we’re sitting in a fancy restaurant.

Of course, the optimal combination of all those elements is a mystery. That’s why it’s so hard to make a really memorable movie, and why we reward so richly the few people who can. But suppose you really, really loved the movies, and suppose you were a relentless type, and suppose you used all of the skills you’d learned during the course of your career at the highest rungs of the law to put together an international team of story experts. Do you think you could figure it out?

The most famous dictum about Hollywood belongs to the screenwriter William Goldman. “Nobody knows anything,” Goldman wrote in “Adventures in the Screen Trade” a couple of decades ago. “Not one person in the entire motion picture field knows for a certainty what’s going to work. Every time out it’s a guess.” One of the highest-grossing movies in history, “Raiders of the Lost Ark,” was offered to every studio in Hollywood, Goldman writes, and every one of them turned it down except Paramount: “Why did Paramount say yes? Because nobody knows anything. And why did all the other studios say no? Because nobody knows anything. And why did Universal, the mightiest studio of all, pass on Star Wars? . . . Because nobody, nobody—not now, not ever—knows the least goddamn thing about what is or isn’t going to work at the box office.”

What Goldman was saying was a version of something that has long been argued about art: that there is no way of getting beyond one’s own impressions to arrive at some larger, objective truth. There are no rules to art, only the infinite variety of subjective experience. “Beauty is no quality in things themselves,” the eighteenth-century Scottish philosopher David Hume wrote. “It exists merely in the mind which contemplates them; and each mind perceives a different beauty.” Hume might as well have said that nobody knows anything.

But Hume had a Scottish counterpart, Lord Kames, and Lord Kames was equally convinced that traits like beauty, sublimity, and grandeur were indeed reducible to a rational system of rules and precepts. He devised principles of congruity, propriety, and perspicuity: an elevated subject, for instance, must be expressed in elevated language; sound and signification should be in concordance; a woman was most attractive when in distress; depicted misfortunes must never occur by chance. He genuinely thought that the superiority of Virgil’s hexameters to Horace’s could be demonstrated with Euclidean precision, and for every Hume, it seems, there has always been a Kames—someone arguing that if nobody knows anything it is only because nobody’s looking hard enough.

In a small New York loft, just below Union Square, for example, there is a tech startup called Platinum Blue that consults for companies in the music business. Record executives have tended to be Humean: though they can tell you how they feel when they listen to a song, they don’t believe anyone can know with confidence whether a song is going to be a hit, and, historically, fewer than twenty per cent of the songs picked as hits by music executives have fulfilled those expectations. Platinum Blue thinks it can do better. It has a proprietary computer program that uses “spectral deconvolution software” to measure the mathematical relationships among all of a song’s structural components: melody, harmony, beat, tempo, rhythm, octave, pitch, chord progression, cadence, sonic brilliance, frequency, and so on. On the basis of that analysis, the firm believes it can predict whether a song is likely to become a hit with eighty-per-cent accuracy. Platinum Blue is staunchly Kamesian, and, if you have a field dominated by those who say there are no rules, it is almost inevitable that someone will come along and say that there are. The head of Platinum Blue is a man named Mike McCready, and the service he is providing for the music business is an exact model of what Dick Copaken would like to do for the movie business.

McCready is in his thirties, baldish and laconic, with rectangular hipster glasses. His offices are in a large, open room, with a row of windows looking east, across the rooftops of downtown Manhattan. In the middle of the room is a conference table, and one morning recently McCready sat down and opened his laptop to demonstrate the Platinum Blue technology. On his screen was a cluster of thousands of white dots, resembling a cloud. This was a “map” of the songs his group had run through its software: each dot represented a single song, and each song was positioned in the cloud according to its particular mathematical signature. “You could have one piano sonata by Beethoven at this end and another one here,” McCready said, pointing at the opposite end, “as long as they have completely different chord progressions and completely different melodic structures.”

McCready then hit a button on his computer, which had the effect of eliminating all the songs that had not made the Billboard Top 30 in the past five years. The screen went from an undifferentiated cloud to sixty discrete clusters. This is what the universe of hit songs from the past five years looks like structurally; hits come out of a small, predictable, and highly conserved set of mathematical patterns. “We take a new CD far in advance of its release date,” McCready said. “We analyze all twelve tracks. Then we overlay them on top of the already existing hit clusters, and what we can tell a record company is which of those songs conform to the mathematical pattern of past hits. Now, that doesn’t mean that they will be hits. But what we are saying is that, almost certainly, songs that fall outside these clusters will not be hits—regardless of how much they sound and feel like hit songs, and regardless of how positive your call-out research or focus-group research is.” Four years ago, when McCready was working with a similar version of the program at a firm in Barcelona, he ran thirty just-released albums, chosen at random, through his system. One stood out. The computer said that nine of the fourteen songs on the album had clear hit potential—which was unheard of. Nobody in his group knew much about the artist or had even listened to the record before, but the numbers said the album was going to be big, and McCready and his crew were of the belief that numbers do not lie. “Right around that time, a local newspaper came by and asked us what we were doing,” McCready said. “We explained the hit-prediction thing, and that we were really turned on to a record by this artist called Norah Jones.” The record was “Come Away with Me.” It went on to sell twenty million copies and win eight Grammy awards.

The strength of McCready’s analysis is its precision. This past spring, for instance, he analyzed “Crazy,” by Gnarls Barkley. The computer calculated, first of all, the song’s Hit Grade—that is, how close it was to the center of any of those sixty hit clusters. Its Hit Grade was 755, on a scale where anything above 700 is exceptional. The computer also found that “Crazy” belonged to the same hit cluster as Dido’s “Thank You,” James Blunt’s “You’re Beautiful,” and Ashanti’s “Baby,” as well as older hits like “Let Me Be There,” by Olivia Newton-John, and “One Sweet Day,” by Mariah Carey, so that listeners who liked any of those songs would probably like “Crazy,” too. Finally, the computer gave “Crazy” a Periodicity Grade—which refers to the fact that, at any given time, only twelve to fifteen hit clusters are “active,” because from month to month the particular mathematical patterns that excite music listeners will shift around. “Crazy” ’s periodicity score was 658—which suggested a very good fit with current tastes. The data said, in other words, that “Crazy” was almost certainly going to be huge—and, sure enough, it was.
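
The mechanics of a score like that are easier to picture with a toy example. What follows is a minimal sketch, not Platinum Blue's actual software: it assumes each song has already been reduced to a numeric vector of structural measurements, groups past hits into clusters with ordinary k-means, and grades a new song by its distance to the nearest cluster center. The number of features, the number of clusters, and the 0-to-1,000 scale are all invented for the illustration.

```python
# Illustrative sketch only; not Platinum Blue's software. Assumes each song has
# already been reduced to a numeric vector of structural measurements (melody,
# harmony, tempo, and so on), clusters past hits with k-means, and grades a new
# song by how close it sits to the nearest hit-cluster center.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in feature vectors for songs that made the Top 30 (rows = songs).
past_hits = rng.normal(size=(600, 12))

# The article describes roughly sixty hit clusters; six is enough for toy data.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(past_hits)

def hit_grade(song: np.ndarray) -> float:
    """Score a song on an arbitrary 0-to-1,000 scale: 1,000 means it sits at the
    dead center of some hit cluster; low scores mean it is far from all of them."""
    distance_to_nearest = np.linalg.norm(kmeans.cluster_centers_ - song, axis=1).min()
    return 1000.0 / (1.0 + distance_to_nearest)

new_song = rng.normal(size=12)  # a track from a not-yet-released album
print(f"Hit Grade: {hit_grade(new_song):.0f}")
```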

If “Crazy” hadn’t scored so high, though, the Platinum Blue people would have given the song’s producers broad suggestions for fixing it. McCready said, “We can tell a producer, ‘These are the elements that seem to be pushing your song into the hit cluster. These are the variables that are pulling your song away from the hit cluster. The problem seems to be in your bass line.’ And the producer will make a bunch of mixes, where they do something different with the bass lines—increase the decibel level, or muddy it up. Then they come back to us. And we say, ‘Whatever you were doing with mix No. 3, do a little bit more of that and you’ll be back inside the hit cluster.’ ”

McCready stressed that his system didn’t take the art out of hit-making. Someone still had to figure out what to do with mix No. 3, and it was entirely possible that whatever needed to be done to put the song in the hit cluster wouldn’t work, because it would make the song sound wrong—and in order to be a hit a song had to sound right. Still, for the first time you wouldn’t be guessing about what needed to be done. You would know. And what you needed to know in order to fix the song was much simpler than anyone would have thought. McCready didn’t care about who the artist was, or the cleverness of the lyrics. He didn’t even have a way of feeding lyrics into his computer. He cared only about a song’s underlying mathematical structure. “If you go back to the popular melodies written by Beethoven and Mozart three hundred years ago,” he went on, “they conform to the same mathematical patterns that we are looking at today. What sounded like a beautiful melody to them sounds like a beautiful melody to us. What has changed is simply that we have come up with new styles and new instruments. Our brains are wired in a way—we assume—that keeps us coming back, again and again, to the same answers, the same pleasure centers.” He had sales data and Top 30 lists and deconvolution software, and it seemed to him that if you put them together you had an objective way of measuring something like beauty. “We think we’ve figured out how the brain works regarding musical taste,” McCready said.

It requires a very particular kind of person, of course, to see the world as a code waiting to be broken. Hume once called Kames “the most arrogant man in the world,” and to take this side of the argument you have to be. Kames was also a brilliant lawyer, and no doubt that matters as well, because to be a good lawyer is to be invested with a reverence for rules. (Hume defied his family’s efforts to make him a lawyer.) And to think like Kames you probably have to be an outsider. Kames was born Henry Home, to a farming family, and grew up in the sparsely populated cropping-and-fishing county of Berwickshire; he became Lord Kames late in life, after he was elevated to the bench. (Hume was born and reared in Edinburgh.) His early published work was about law and its history, but he soon wandered into morality, religion, anthropology, soil chemistry, plant nutrition, and the physical sciences, and once asked his friend Benjamin Franklin to explain the movement of smoke in chimneys. Those who believe in the power of broad patterns and rules, rather than the authority of individuals or institutions, are not intimidated by the boundaries and hierarchies of knowledge. They don’t defer to the superior expertise of insiders; they set up shop in a small loft somewhere downtown and take on the whole music industry at once. The difference between Hume and Kames is, finally, a difference in kind, not degree. You’re either a Kamesian or you’re not. And if you were to create an archetypal Kamesian—to combine lawyerliness, outsiderness, and supreme self-confidence in one dapper, Charlie Brown-headed combination? You’d end up with Dick Copaken.

“I remember when I was a sophomore in high school and I went into the bathroom once to wash my hands,” Copaken said. “I noticed the bubbles on the sink, and it fascinated me the way these bubbles would form and move around and float and reform, and I sat there totally transfixed. My father called me, and I didn’t hear him. Finally, he comes in. ‘Son. What the . . . are you all right?’ I said, ‘Bubbles, Dad, look what they do.’ He said, ‘Son, if you’re going to waste your time, waste it on something that may have some future consequence.’ Well, I kind of rose to the challenge. That summer, I bicycled a couple of miles to a library in Kansas City and I spent every day reading every book and article I could find on bubbles.”

Bubbles looked completely random, but young Copaken wasn’t convinced. He built a bubble-making device involving an aerator from a fish tank, and at school he pleaded with the math department to teach him the quadratic equations he needed to show why the bubbles formed the way they did. Then he devised an experiment, and ended up with a bronze medal at the International Science Fair. His interest in bubbles was genuine, but the truth is that almost anything could have caught Copaken’s eye: pop songs, movies, the movement of chimney smoke. What drew him was not so much solving this particular problem as the general principle that problems were solvable—that he, little Dick Copaken from Kansas City, could climb on his bicycle and ride to the library and figure out something that his father thought wasn’t worth figuring out.

Copaken has written a memoir of his experience defending the tiny Puerto Rican islands of Culebra and Vieques against the U.S. Navy, which had been using their beaches for target practice. It is a riveting story. Copaken takes on the vast Navy bureaucracy, armed only with arcane provisions of environmental law. He investigates the nesting grounds of the endangered hawksbill turtle, and the mating habits of a tiny yet extremely loud tree frog known as the coqui, and at one point he transports four frozen whale heads from the Bahamas to Harvard Medical School. Copaken wins. The Navy loses.

The memoir reads like a David-and-Goliath story. It isn’t. David changed the rules on Goliath. He brought a slingshot to a sword fight. People like Copaken, though, don’t change the rules; they believe in rules. Copaken would have agreed to sword-on-sword combat. But then he would have asked the referee for a stay, deposed Goliath and his team at great length, and papered him with brief after brief until he conceded that his weapon did not qualify as a sword under §48(B)(6)(e) of the Samaria Convention of 321 B.C. (The Philistines would have settled.) And whereas David knew that he couldn’t win a conventional fight with Goliath, the conviction that sustained Copaken’s long battle with the Navy was, to the contrary, that so long as the battle remained conventional—so long as it followed the familiar pathways of the law and of due process—he really could win. Dick Copaken didn’t think he was an underdog at all. If you believe in rules, Goliath is just another Philistine, and the Navy is just another plaintiff. As for the ineffable mystery of the Hollywood blockbuster? Well, Mr. Goldman, you may not know anything. But I do.

Dick Copaken has a friend named Nick Meaney. They met on a case years ago. Meaney has thick dark hair. He is younger and much taller than Copaken, and seems to regard his friend with affectionate amusement. Meaney’s background is in risk management, and for years he’d been wanting to bring the principles of that world to the movie business. In 2003, Meaney and Copaken were driving through the English countryside to Durham when Meaney told Copaken about a friend of his from college. The friend and his business partner were students of popular narrative: the sort who write essays for obscure journals serving the small band of people who think deeply about, say, the evolution of the pilot episode in transnational TV crime dramas. And, for some time, they had been developing a system for evaluating the commercial potential of stories. The two men, Meaney told Copaken, had broken down the elements of screenplay narrative into multiple categories, and then drawn on their encyclopedic knowledge of television and film to assign scripts a score in each of those categories—creating a giant screenplay report card. The system was extraordinarily elaborate. It was under constant refinement. It was also top secret. Henceforth, Copaken and Meaney would refer to the two men publicly only as “Mr. Pink” and “Mr. Brown,” an homage to “Reservoir Dogs.”

“The guy had a big wall, and he started putting up little Post-its covering everything you can think of,” Copaken said. It was unclear whether he was talking about Mr. Pink or Mr. Brown or possibly some Obi-Wan Kenobi figure from whom Mr. Pink and Mr. Brown first learned their trade. “You know, the star wears a blue shirt. The star doesn’t zip up his pants. Whatever. So he put all these factors up and began moving them around as the scripts were either successful or unsuccessful, and he began grouping them and eventually this evolved to a kind of ad-hoc analytical system. He had no theory as to what would work, he just wanted to know what did work.”

Copaken and Meaney also shared a fascination with a powerful kind of computerized learning system called an artificial neural network. Neural networks are used for data mining—to look for patterns in very large amounts of data. In recent years, they have become a critical tool in many industries, and what Copaken and Meaney realized, when they thought about Mr. Pink and Mr. Brown, was that it might now be possible to bring neural networks to Hollywood. They could treat screenplays as mathematical propositions, using Mr. Pink and Mr. Brown’s categories and scores as the motion-picture equivalents of melody, harmony, beat, tempo, rhythm, octave, pitch, chord progression, cadence, sonic brilliance, and frequency.

Copaken and Meaney brought in a former colleague of Meaney’s named Sean Verity, and the three of them signed up Mr. Pink and Mr. Brown. They called their company Epagogix—a reference to Aristotle’s discussion of epagogic, or inductive, learning—and they started with a “training set” of screenplays that Mr. Pink and Mr. Brown had graded. Copaken and Meaney won’t disclose how many scripts were in the training set. But let’s say it was two hundred. Those scores—along with the U.S. box-office receipts for each of the films made from those screenplays—were fed into a neural network built by a computer scientist of Meaney’s acquaintance. “I can’t tell you his name,” Meaney said, “but he’s English to his bootstraps.” Mr. Bootstraps then went to work, trying to use Mr. Pink and Mr. Brown’s scoring data to predict the box-office receipts of every movie in the training set. He started with the first film and had the neural network make a guess: maybe it said that the hero’s moral crisis in act one, which rated a 7 on the 10-point moral-crisis scale, was worth $7 million, and having a gorgeous red-headed eighteen-year-old female lead whose characterization came in at 6.5 was worth $3 million and a 9-point bonding moment between the male lead and a four-year-old boy in act three was worth $2 million, and so on, putting a dollar figure on every grade on Mr. Pink and Mr. Brown’s report card until the system came up with a prediction. Then it compared its guess with how that movie actually did. Was it close? Of course not. The neural network then went back and tried again. If it had guessed $20 million and the movie actually made $110 million, it would reweight the movie’s Pink/Brown scores and run the numbers a second time. And then it would take the formula that worked best on Movie One and apply it to Movie Two, and tweak that until it had a formula that worked on Movies One and Two, and take that formula to Movie Three, and then to four and five, and on through all two hundred movies, whereupon it would go back through all the movies again, through hundreds of thousands of iterations, until it had worked out a formula that did the best possible job of predicting the financial success of every one of the movies in its database.
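
For readers who want to see what that iterative reweighting looks like in code, here is a minimal sketch under invented assumptions; it is not Epagogix's system. Made-up "report card" scores for a training set of screenplays are fitted to made-up box-office figures by a small one-hidden-layer neural network, whose weights are nudged over thousands of passes until its predictions line up with the grosses. Every feature, weight, and dollar figure is hypothetical.

```python
# A toy version of the training loop described above; not Epagogix's system.
import numpy as np

rng = np.random.default_rng(42)

n_scripts, n_features = 200, 8  # 200 graded screenplays, 8 scored narrative elements
scores = rng.uniform(0, 10, size=(n_scripts, n_features))
# Fabricated "actual" box office (in millions of dollars), just to have a target.
box_office = scores @ rng.uniform(2, 12, size=n_features) + rng.normal(0, 10, n_scripts)

# Standardize inputs and targets so the little network trains stably.
x = (scores - scores.mean(0)) / scores.std(0)
y_mean, y_std = box_office.mean(), box_office.std()
y = (box_office - y_mean) / y_std

W1 = rng.normal(0, 0.5, (n_features, 16)); b1 = np.zeros(16)  # hidden layer
W2 = rng.normal(0, 0.5, 16); b2 = 0.0                         # output layer
lr = 0.05

for _ in range(5000):              # many passes over the training set
    h = np.tanh(x @ W1 + b1)       # hidden activations
    pred = h @ W2 + b2             # predicted (standardized) box office
    err = pred - y                 # how far off each guess was
    # Backpropagate the error and nudge every weight a little.
    gW2 = h.T @ err / n_scripts
    gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = x.T @ gh / n_scripts
    gb1 = gh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

predicted = pred * y_std + y_mean  # back to millions of dollars
print("mean absolute error ($ millions):", round(float(np.abs(predicted - box_office).mean()), 1))
```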

That formula, the theory goes, can then be applied to new scripts. If you were developing a $75-million buddy picture for Bruce Willis and Colin Farrell, Epagogix says, it can tell you, based on past experience, what that script’s particular combination of narrative elements can be expected to make at the box office. If the formula says it’s a $50-million script, you pull the plug. “We shoot turkeys,” Meaney said. He had seen Mr. Bootstraps and the neural network in action: “It can sometimes go on for hours. If you look at the computer, you see lots of flashing numbers in a gigantic grid. It’s like ‘The Matrix.’ There are a lot of computations. The guy is there, the whole time, looking at it. It eventually stops flashing, and it tells us what it thinks the American box-office will be. A number comes out.”

The way the neural network thinks is not that different from the way a Hollywood executive thinks: if you pitch a movie to a studio, the executive uses an ad-hoc algorithm—perfected through years of trial and error—to put a value on all the components in the story. Neural networks, though, can handle problems that have a great many variables, and they never play favorites—which means (at least in theory) that as long as you can give the neural network the same range of information that a human decision-maker has, it ought to come out ahead. That’s what the University of Arizona computer scientist Hsinchun Chen demonstrated ten years ago, when he built a neural network to predict winners at the dog track. Chen used the ten variables that greyhound experts told him they used in making their bets—like fastest time and winning percentage and results for the past seven races—and trained his system with the results of two hundred races. Then he went to the greyhound track in Tucson and challenged three dog-racing handicappers to a contest. Everyone picked winners in a hundred races, at a modest two dollars a bet. The experts lost $71.40, $61.20, and $70.20, respectively. Chen won $124.80. It wasn’t close, and one of the main reasons was the special interest the neural network showed in something called “race grade”: greyhounds are moved up and down through a number of divisions, according to their ability, and dogs have a big edge when they’ve just been bumped down a level and a big handicap when they’ve just been bumped up. “The experts know race grade exists, but they don’t weight it sufficiently,” Chen said. “They are all looking at win percentage, place percentage, or thinking about the dogs’ times.”

Copaken and Meaney figured that Hollywood’s experts also had biases and skipped over things that really mattered. If a neural network won at the track, why not Hollywood? “One of the most powerful aspects of what we do is the ruthless objectivity of our system,” Copaken said. “It doesn’t care about maintaining relationships with stars or agents or getting invited to someone’s party. It doesn’t care about climbing the corporate ladder. It has one master and one master only: how do you get to bigger box-office? Nobody else in Hollywood is like that.”

In the summer of 2003, Copaken approached Josh Berger, a senior executive at Warner Bros. in Europe. Meaney was opposed to the idea: in his mind, it was too early. “I just screamed at Dick,” he said. But Copaken was adamant. He had Mr. Bootstraps, Mr. Pink, and Mr. Brown run sixteen television pilots through the neural network, and try to predict the size of each show’s eventual audience. “I told Josh, ‘Stick this in a drawer, and I’ll come back at the end of the season and we can check to see how we did,’ ” Copaken said. In January of 2004, Copaken tabulated the results. In six cases, Epagogix guessed the number of American homes that would tune in to a show to within .06 per cent. In thirteen of the sixteen cases, its predictions were within two per cent. Berger was floored. “It was incredible,” he recalls. “It was like someone saying to you, ‘We’re going to show you how to count cards in Vegas.’ It had that sort of quality.”

Copaken then approached another Hollywood studio. He was given nine unreleased movies to analyze. Mr. Pink, Mr. Brown, and Mr. Bootstraps worked only from the script—without reference to the stars or the director or the marketing budget or the producer. On three of the films—two of which were low-budget—the Epagogix estimates were way off. On the remaining six—including two of the studio’s biggest-budget productions—they correctly identified whether the film would make or lose money. On one film, the studio thought it had a picture that would make a good deal more than $100 million. Epagogix said $49 million. The movie made less than $40 million. On another, a big-budget picture, the team’s estimate came within $1.2 million of the final gross. On a number of films, they were surprisingly close. “They were basically within a few million,” a senior executive at the studio said. “It was shocking. It was kind of weird.” Had the studio used Epagogix on those nine scripts before filming started, it could have saved tens of millions of dollars. “I was impressed by a couple of things,” another executive at the same studio said. “I was impressed by the things they thought mattered to a movie. They weren’t the things that we typically give credit to. They cared about the venue, and whether it was a love story, and very specific things about the plot that they were convinced determined the outcome more than anything else. It felt very objective. And they could care less about whether the lead was Tom Cruise or Tom Jones.”

The Epagogix team knocked on other doors that weren’t quite so welcoming. This was the problem with being a Kamesian. Your belief in a rule-bound universe was what gave you, an outsider, a claim to real expertise. But you were still an outsider. You were still Dick Copaken, the blue-blazered corporate lawyer who majored in bubbles as a little boy in Kansas City, and a couple of guys from the risk-management business, and three men called Pink, Brown, and Bootstraps—and none of you had ever made a movie in your life. And what were you saying? That stars didn’t matter, that the director didn’t matter, and that all that mattered was story—and, by the way, that you understood story the way the people on the inside, people who had spent a lifetime in the motion-picture business, didn’t. “They called, and they said they had a way of predicting box-office success or failure, which is everyone’s fantasy,” one former studio chief recalled. “I said to them, ‘I hope you’re right.’ ” The executive seemed to think of the Epagogix team as a small band of Martians who had somehow slipped their U.F.O. past security. “In reality, there are so many circumstances that can affect a movie’s success,” the executive went on. “Maybe the actor or actress has an external problem. Or this great actor, for whatever reason, just fails. You have to fire a director. Or September 11th or some other thing happens. There are many people who have come forward saying they have a way of predicting box-office success, but so far nobody has been able to do it. I think we know something. We just don’t know enough. I still believe in something called that magical thing—talent, the unexpected. The movie god has to shine on you.” You were either a Kamesian or you weren’t, and this person wasn’t: “My first reaction to those guys? Bullshit.”

A few months ago, Dick Copaken agreed to lift the cloud of unknowing surrounding Epagogix, at least in part. He laid down three conditions: the meeting was to be in London, Mr. Pink and Mr. Brown would continue to be known only as Mr. Pink and Mr. Brown, and no mention was to be made of the team’s current projects. After much discussion, an agreement was reached. Epagogix would analyze the 2005 movie “The Interpreter,” which was directed by Sydney Pollack and starred Sean Penn and Nicole Kidman. “The Interpreter” had a complicated history, having gone through countless revisions, and there was a feeling that it could have done much better at the box office. If ever there was an ideal case study for the alleged wizardry of Epagogix, this was it.

The first draft of the movie was written by Charles Randolph, a philosophy professor turned screenwriter. It opened in the fictional African country of Matobo. Two men in a Land Rover pull up to a soccer stadium. A group of children lead them to a room inside the building. On the ground is a row of corpses.

Cut to the United Nations, where we meet Silvia Broome, a young woman who works as an interpreter. She goes to the U.N. Security Service and relates a terrifying story. The previous night, while working late in the interpreter’s booth, she overheard two people plotting the assassination of Matobo’s murderous dictator, Edmund Zuwanie, who is coming to New York to address the General Assembly. She says that the plotters saw her, and that her life may be in danger. The officer assigned to her case, Tobin Keller, is skeptical, particularly when he learns that she, too, is from Matobo, and that her parents were killed in the country’s civil war. But after Broome suffers a series of threatening incidents Keller starts to believe her. His job is to protect Zuwanie, but he now feels moved to act as Broome’s bodyguard as well. A quiet, slightly ambiguous romantic attraction begins to develop between them. Zuwanie’s visit draws closer. Broome’s job is to be his interpreter. On the day of the speech, Broome ends up in the greenroom with Zuwanie. Keller suddenly realizes the truth: that she has made up the whole story as a way of bringing Zuwanie to justice. He rushes to the greenroom. Broome, it seems, has poisoned Zuwanie and is withholding the antidote unless he goes onstage and confesses to the murder of his countrymen. He does. Broome escapes. A doctor takes a look at the poison. It’s harmless. The doctor turns to the dictator, who has just been tricked into writing his own prison sentence: “You were never in danger, Mr. Zuwanie.”

Randolph says that the film he was thinking of while he was writing “The Interpreter” was Francis Ford Coppola’s classic “The Conversation.” He wanted to make a spare, stark movie about an isolated figure. “She’s a terrorist,” Randolph said of Silvia Broome. “She comes to this country to do a very specific task, and when that task is done she’s gone again. I wanted to write about this idea of a noble terrorist, who tried to achieve her ends with a character assassination, not a real assassination.” Randolph realized that most moviegoers—and most Hollywood executives—prefer characters who have psychological motivations. But he wasn’t trying to make “Die Hard.” “Look, I’m the son of a preacher,” he said. “I believe that ideology motivates people.”

In 2004, Sydney Pollack signed on to direct the project. He loved the idea of an interpreter at the United Nations and the conceit of an overheard conversation. But he wanted to make a commercial movie, and parts of the script didn’t feel right to him. He didn’t like the twist at the end, for instance. “I felt like I had been tricked, because in fact there was no threat,” Pollack said. “As much as I liked the original script, I felt like an audience would somehow, at the end, feel cheated.” Pollack also felt that audiences would want much more from Silvia Broome’s relationship with Tobin Keller. “I’ve never been able to do a movie without a love story in it,” he said. “For me, the heart of it is always the man and the woman and who they are and what they are going through.” Pollack brought Randolph back for rewrites. He then hired Scott Frank and Steven Zaillian, two of the most highly sought-after screenwriters in Hollywood—and after several months the story was turned inside out. Now Broome didn’t tell the story of overhearing that conversation. It actually happened. She wasn’t a terrorist anymore. She was a victim. She wasn’t an isolated figure. She was given a social life. She wasn’t manipulating Keller. Their relationship was more prominent. A series of new characters—political allies and opponents of Zuwanie’s—were added, as was a scene in Brooklyn where a bus explodes, almost killing Broome. “I remember when I came on ‘Minority Report,’ and started over,” said Frank, who wrote many of the new scenes for “The Interpreter.” “There weren’t many characters. When I finished, there were two mysteries and a hundred characters. I have diarrhea of the plot. This movie cried out for that. There are never enough suspects and red herrings.”

The lingering problem, though, was the ending. If Broome wasn’t after Zuwanie, who was? “We struggled,” Pollack said. “It was a long process, to the point where we almost gave up.” In the end, Zuwanie was made the engineer of the plot: he fakes the attempt on his life in order to justify his attacks on his enemies back home. Zuwanie hires a man to shoot him, and then another of Zuwanie’s men shoots the assassin before he can do the job—and in the chaos Broome ends up with a gun in her hand, training it on Zuwanie. “The end was the hardest part,” Frank said. “All these balls were in the air. But I couldn’t find a satisfying way to resolve it. We had to put a gun in the hand of a pacifist. I couldn’t quite sew it up in the right way. Sydney kept saying, ‘You’re so close.’ But I kept saying, ‘Yeah, but I don’t believe what I’m writing.’ I wonder if I did a disservice to ‘The Interpreter.’ I don’t know that I made it better. I may have just made it different.”

This, then, was the question for Epagogix: If Pollack’s goal was to make “The Interpreter” a more commercial movie, how well did he succeed? And could he have done better?

The debriefing took place in central London, behind the glass walls of the private dining room of a Mayfair restaurant. The waiters came in waves, murmuring their announcements of the latest arrival from the kitchen. The table was round. Copaken, dapper as always in his navy blazer, sat next to Sean Verity, followed by Meaney, Mr. Brown, and Mr. Pink. Mr. Brown was very tall, and seemed to have a northern English accent. Mr. Pink was slender and graying, and had an air of authority about him. His academic training was in biochemistry. He said he thought that, in the highly emotional business of Hollywood, having a scientific background was quite useful. There was no sign of Mr. Bootstraps.

Mr. Pink began by explaining the origins of their system. “There were certain historical events that allowed us to go back and test how appealing one film was against another,” he said. “The very simple one is that in the English market, in the sixties on Sunday night, religious programming aired on the major networks. Nobody watched it. And, as soon as that finished, movies came on. There were no lead-ins, and only two competing channels. Plus, across the country you had a situation where the commercial sector was playing a whole variety of movies against the standard, the BBC. It might be a John Wayne movie in Yorkshire, and a musical in Somerset, and the BBC would be the same movie everywhere. So you had a control. It was very pure and very simple. That was a unique opportunity to try and make some guesstimates as to why movies were doing what they were doing.”

Brown nodded. “We built a body of evidence until we had something systematic,” he said.

Pink estimated that they had analyzed thousands of movies. “The thing is that not everything comes to you as a script. For a long period, we worked for a broadcaster who used to send us a couple of paragraphs. We made our predictions based on that much. Having the script is actually too much information sometimes. You’re trying to replicate what the audience is doing. They’re trying to make a choice between three movies, and all they have at that point is whatever they’ve seen in TV Guide or on any trailer they’ve seen. We have to take a piece here and a piece here. Take a couple of reference points. When I look at a story, there are certain things I’m looking for—certain themes, and characters you immediately focus on.” He thought for a moment. “That’s not to deny that it matters whether the lead character wears a hat,” he added, in a way that suggested he and Mr. Brown had actually thought long and hard about leads and hats.

“There’s always a pattern,” he went on. “There are certain stories that come back, time and time again, and that always work. You know, whenever we go into a market—and we work in fifty markets—the initial thing people say is ‘What do you know about our market?’ The assumption is that, say, Japan is different from us—that there has to be something else going on there. But, basically, they’re just like us. It’s the consistency of these reappearing things that I find amazing.”

“Biblical stories are a classic case,” Mr. Brown put in. “There is something about what they’re telling and the message that’s coming out that seems to be so universal. With Mel Gibson’s ‘The Passion,’ people always say, ‘Who could have predicted that?’ And the answer is, we could have.”

They had looked at “The Interpreter” scripts a few weeks earlier. The process typically takes them a day. They read, they graded, and then they compared notes, because Mr. Pink was the sort who went for “Yojimbo” and Mr. Brown’s favorite movie was “Alien” (the first one), so they didn’t always agree. Mr. Brown couldn’t remember a single script he’d read where he thought there wasn’t room for improvement, and Mr. Pink, when asked the same question, could come up with just one: “Lethal Weapon.” “A friend of mine gave me the shooting script before it came out, and I remember reading it and thinking, It’s all there. It was all on the page.” Once Mr. Pink and Mr. Brown had scored “The Interpreter,” they gave their analyses to Mr. Bootstraps, who did fifteen runs through the neural network: the original Randolph script, the shooting script, and certain variants of the plot that Epagogix devised. Mr. Bootstraps then passed his results to Copaken, who wrote them up. The Epagogix reports are always written by Copaken, and they are models of lawyerly thoroughness. This one ran to thirty-eight pages. He had finished the final draft the night before, very late. He looked fresh as a daisy.

Mr. Pink started with the original script. “My pure reaction? I found it very difficult to read. I got confused. I had to reread bits. We do this a lot. If a project takes more than an hour to read, then there’s something going on that I’m not terribly keen on.”

“It didn’t feel to me like a mass-appeal movie,” Mr. Brown added. “It seemed more niche.”

When Mr. Bootstraps ran Randolph’s original draft through the neural network, the computer called it a $33-million movie—an “intelligent” thriller, in the same commercial range as “The Constant Gardener” or “Out of Sight.” According to the formula, the final shooting script was a $69-million picture (an estimate that came within $4 million of the actual box-office). Mr. Brown wasn’t surprised. The shooting script, he said, “felt more like an American movie, where the first one seemed European in style.”

Everyone agreed, though, that Pollack could have done much better. There was, first of all, the matter of the United Nations. “They had a unique opportunity to get inside the building,” Mr. Pink said. “But I came away thinking that it could have been set in any boxy office tower in Manhattan. An opportunity was missed. That’s when we get irritated—when there are opportunities that could very easily be turned into something that would actually have had an impact.”

“Locale is an extra character,” Mr. Brown said. “But in this case it’s a very bland character that didn’t really help.”

In the Epagogix secret formula, it seemed, locale matters a great deal. “You know, there’s a big difference between city and countryside,” Mr. Pink said. “It can have a huge effect on a movie’s ability to draw in viewers. And writers just do not take advantage of it. We have a certain set of values that we attach to certain places.”

Mr. Pink and Mr. Brown ticked off the movies and television shows that they thought understood the importance of locale: “Crimson Tide,” “Lawrence of Arabia,” “Lost,” “Survivor,” “Castaway,” “Deliverance.” Mr. Pink said, “The desert island is something that we have always recognized as a pungent backdrop, but it’s not used that often. In the same way, prisons can be a powerful environment, because they are so well defined.” The U.N. could have been like that, but it wasn’t. Then there was the problem of starting, as both scripts did, in Africa—and not just Africa but a fictional country in Africa. The whole team found that crazy. “Audiences are pretty parochial, by and large,” Mr. Pink said. “If you start off by telling them, ‘We’re going to begin this movie in Africa,’ you’re going to lose them. They’ve bought their tickets. But when they come out they’re going to say, ‘It was all right. But it was Africa.’ ” The whole thing seemed to leave Mr. Pink quite distressed. He looked at Mr. Brown beseechingly.

Mr. Brown changed the subject. “It’s amazing how often quite little things, quite small aspects, can spoil everything,” he said. “I remember seeing the trailer for ‘V for Vendetta’ and deciding against it right there, for one very simple reason: there was a ridiculous mask on the main character. If you can’t see the face of the character, you can’t tell what that person is thinking. You can’t tell who they are. With ‘Spider-Man’ and ‘Superman,’ though, you do see the face, so you respond to them.”

The team once gave a studio a script analysis in which almost everything they suggested was, in Hollywood terms, small. They wanted the lead to jump off the page a little more. They wanted the lead to have a young sidekick—a relatively minor character—to connect with a younger demographic, and they wanted the city where the film was set to be much more of a presence. The neural network put the potential value of better characterization at an extra $2.46 million in U.S. box-office revenue; the value of locale adjustment at $4.92 million; the value of a sidekick at $12.3 million—and the value of all three together (given the resulting synergies) at $24.6 million. That’s another $25 million for a few weeks of rewrites and maybe a day or two of extra filming. Mr. Bootstraps, incidentally, ran the numbers and concluded that the script would make $47 million if the suggested changes were not made. The changes were not made. The movie made $50 million.

Mr. Pink and Mr. Brown went on to discuss the second “Interpreter” screenplay, the shooting script. They thought the ending was implausible. Charles Randolph had originally suggested that the Tobin Keller character be black, not white, in order to create the frisson of bringing together a white African and a black American. Mr. Pink and Mr. Brown independently came to the same conclusion. Apparently, the neural network ran the numbers on movies that paired black and white leads—“Lethal Weapon,” “The Crying Game,” “Independence Day,” “Men in Black,” “Die Another Day,” “The Pelican Brief”—and found that the black-white combination could increase box-office revenue. The computer did the same kind of analysis on Scott Frank’s “diarrhea of the plot,” and found that there were too many villains. And if Silvia Broome was going to be in danger, Mr. Bootstraps made clear, she really had to be in danger.

“Our feeling—and Dick, you may have to jump in here—is that the notion of a woman in peril is a very powerful narrative element,” Mr. Pink said. He glanced apprehensively at Copaken, evidently concerned that what he was about to say might fall in the sensitive category of the proprietary. “How powerful?” He chose his words carefully. “Well above average. And the problem is that we lack a sense of how much danger she is in, so an opportunity is missed. There were times when you were thinking, Is this something she has created herself? Is someone actually after her? You are confused. There is an element of doubt, and that ambiguity makes it possible to doubt the danger of the situation.” Of course, all that ambiguity was there because in the Randolph script she was making it all up, and we were supposed to doubt the danger of the situation. But Mr. Pink and Mr. Brown believed that, once you decided you weren’t going to make a European-style niche movie, you had to abandon ambiguity altogether.

“You’ve got to make the peril real,” Mr. Pink said.

The Epagogix revise of “The Interpreter” starts with an upbeat Silvia Broome walking into the United Nations, flirting with the security guard. The two men plotting the assassination later see her and chase her through the labyrinthine corridors of what could only be the U.N. building. The ambiguous threats to Broome’s life are now explicit. At one point in the Epagogix version, a villain pushes Broome’s Vespa off one of Manhattan’s iconic East River bridges. She hangs on to her motorbike for dear life, as it swings precariously over the edge of the parapet. Tobin Keller, in a police helicopter, swoops into view: “As she clings to Tobin’s muscular body while the two of them are hoisted up into the hovering helicopter, we sense that she is feeling more than relief.” In the Epagogix ending, Broome stabs one of Zuwanie’s security men with a knife. Zuwanie storms off the stage, holds a press conference, and is shot dead by a friend of Broome’s brother. Broome cradles the dying man in her arms. He “dies peacefully,” with “a smile on his blood-spattered face.” Then she gets appointed Matobo’s U.N. ambassador. She turns to Keller. “‘This time,’ she notes with a wry smile . . . ‘you will have to protect me.’ ” Bootstraps’s verdict was that this version would result in a U.S. box-office of $111 million.

“It’s funny,” Mr. Pink said. “This past weekend, ‘The Bodyguard’ was on TV. Remember that piece of”—he winced—“entertainment? Which is about a bodyguard and a woman. The final scene is that they are right back together. It is very clearly and deliberately sown. That is the commercial way, if you want more bodies in the seats.”

“You have to either consummate it or allow for the possibility of that,” Copaken agreed.

They were thinking now of what would happen if they abandoned all fealty to the original, and simply pushed the movie’s premise as far as they could possibly go.

Mr. Pink went on, “If Dick had said, ‘You can take this project wherever you want,’ we probably would have ended up with something a lot closer to ‘The Bodyguard’—where you have a much more romantic film, a much more powerful focus to the two characters—without all the political stuff going on in the background. You go for the emotions on a very basic level. What would be the upper limit on that? You know, the upper limit of anything these days is probably still ‘Titanic.’ I’m not saying we could do six hundred million dollars. But it could be two hundred million.”

It was clear that the whole conversation was beginning to make Mr. Pink uncomfortable. He didn’t like “The Bodyguard.” Even the title made him wince. He was the sort who liked “Yojimbo,” after all. The question went around the room: What would you do with “The Interpreter”? Sean Verity wanted to juice up the action-adventure elements and push it to the $150- to $160-million range. Meaney wanted to do without expensive stars: he didn’t think they were worth the money. Copaken wanted more violence, and he also favored making Keller black. But he didn’t want to go all the way to “The Bodyguard,” either. This was a man who loved “Dear Frankie” as much as any film he’d seen in recent memory, and “Dear Frankie” had a domestic box-office gross of $1.3 million. If you followed the rules of Epagogix, there wouldn’t be any movies like “Dear Frankie.” The neural network had one master, the market, and answered one question: how do you get to bigger box-office? But once a movie had made you vulnerable—once you couldn’t even retell the damn story without getting emotional—you couldn’t be content with just one master anymore.

That was the thing about the formula: it didn’t make the task of filmmaking easier. It made it harder. So long as nobody knows anything, you’ve got license to do whatever you want. You can start a movie in Africa. You can have male and female leads not go off together—all in the name of making something new. Once you came to think that you knew something, though, you had to decide just how much money you were willing to risk for your vision. Did the Epagogix team know what the answer to that question was? Of course not. That question required imagination, and they weren’t in the imagination business. They were technicians with tools: computer programs and analytical systems and proprietary software that calculated mathematical relationships among a laundry list of structural variables. At Platinum Blue, Mike McCready could tell you that the bass line was pushing your song out of the center of hit cluster 31. But he couldn’t tell you exactly how to fix the bass line, and he couldn’t guarantee that the redone version would still sound like a hit, and you didn’t see him releasing his own album of computer-validated pop music. A Kamesian had only to read Lord Kames to appreciate the distinction. The most arrogant man in the world was a terrible writer: clunky, dense, prolix. He knew the rules of art. But that didn’t make him an artist.

Mr. Brown spoke last. “I don’t think it needs to be a big-budget picture,” he said. “I think we do what we can with the original script to make it a strong story, with an ending that is memorable, and then do a slow release. A low-budget picture. One that builds through word of mouth—something like that.” He was confident that he had the means to turn a $69-million script into a $111-million movie, and then again into a $150- to $200-million blockbuster. But it had been a long afternoon, and part of him had a stubborn attachment to “The Interpreter” in something like its original form. Mr. Bootstraps might have disagreed. But Mr. Bootstraps was nowhere to be seen.

Posted by Gary Becker at 07:21 PM

October 08, 2006

Taxing Fat-BECKER

 

There is growing concern in rich countries, especially in the United States, about the increase in consumption of fats and sugar, and the related increase in obesity. These trends are particularly noticeable among teenagers and even younger children, who consume large quantities of fast foods and soft drinks. Some localities, like New York City, and countries like Denmark, have proposed to either phase out or restrict sharply the use of trans fats in french fries, margarine, and other foods. The concern goes far beyond trans fats, however, and includes proposals to restrict the sale of foods high in saturated fats, such as Big Macs.

 

One proposal receiving some attention is to impose a tax on foods that contain high quantities of saturated fat in the hope of cutting down consumption of these foods. The basic law of demand states that a tax on saturated fat would raise the price of fatty foods and thereby reduce their consumption. A good analogy is with other "sin" taxes, such as the very heavy tax in most countries on cigarettes, or the large tax in many countries on alcoholic beverages. These taxes have greatly raised the price of these goods and reduced their consumption. For example, it is estimated that every 10% increase in the retail price of cigarettes due to higher taxes cuts smoking by about 4% after the first year, and by a considerable 7% after a few years. Responses are greater in the longer run because more people decide over time not to start smoking (or drinking), and many of those who were smoking (or drinking) eventually manage to quit or cut down the amounts used.
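
To see what those response figures imply for a hypothetical tax, here is a minimal Python sketch that simply scales the cited numbers (a 10% price rise cutting smoking about 4% in the first year and about 7% after a few years) proportionally; the 25% tax-induced price increase is my assumption, not a figure from the post.

def consumption_change(price_increase_pct, response_per_10pct):
    # Scale the cited response proportionally, e.g. -4% per 10% price rise.
    return (price_increase_pct / 10.0) * response_per_10pct

short_run, long_run = -4.0, -7.0   # percent change in smoking per 10% price increase, as cited
price_rise = 25.0                  # hypothetical tax-induced increase in the retail price, in percent

print(consumption_change(price_rise, short_run))   # -10.0: roughly a 10% drop in the first year
print(consumption_change(price_rise, long_run))    # -17.5: roughly a 17.5% drop after a few years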

 

I do not know of any estimates of the responsiveness of the consumption of bad fats to higher fat prices, but I am confident it would be reasonably large, particularly for teenagers and lower income families who have the highest rates of obesity, and are more sensitive to these prices. I also believe it would be possible to define a fat tax that would effectively target foods that are high in saturated fat content. Yet I would like to express some doubts about whether that would be good public policy.

 

First of all, public policy should not ignore the pleasure consumers get from cheeseburgers, french fries, and other high fat foods, or for that matter from soft drinks, smoking, alcoholic drinks, and other such "sins". Good policies require that these pleasures are more than offset by strong negative public consequences.

 

Although the growing obesity of teenagers and of adults too during the past 25 years may be partly related to the greater consumption of fats, a stronger factor seems to be the increased time spent at sedentary activities, and a corresponding reduced time spent exercising and at other active calorie-burning activities. These sedentary activities include watching television, surfing the Internet, playing computer games, communicating in chat rooms and through instant messaging, listening to music on iPods, and other devices. For a careful analysis of the growth in weight of teenagers that concludes that increased sedentary activity is the main culprit, see the 2006 PhD thesis by Fernando Wilson in the Economics Department of the University of Chicago.

 

The reduced exercise rate of teenagers is not mainly because they are too fat to have the energy to be active, but rather due to technological developments, such as the internet, computer games, iPods, television, and the like. Put differently, lack of exercise has caused obesity (to a large extent) rather than that obesity has caused reduced exercise. I doubt if there would be much of a call for taxes on computer games, or iPods, or use of the Internet in order to reduce obesity. Dr. Michael Roizen has pointed out, however, that certain types of computer games do require manual dexterity and other exercise.

 

Suppose, however, that increased fat consumption is the major cause of the gain in weight. Is this enough reason to justify active public interventions? I raise this question not only because of the pleasure received from eating foods with saturated fats, but also because doubts have been raised about the connection between excess weight and medical problems like cardiovascular diseases, diabetes, cancers, and other serious diseases. Of course, no one denies that extreme overweight is dangerous to health, such as a body-mass index (BMI) of over 45. This would mean that a male of average height weighs over 300 pounds, and less than one percent of the American male population is that heavy relative to their height. And often an important distinction is drawn between overall weight and how much is concentrated in the belly, the latter being much more hazardous to health.
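
That arithmetic is easy to verify. The short Python sketch below assumes an average adult male height of about 5 feet 9 inches (1.75 meters), which is my assumption rather than Becker's:

def weight_for_bmi(bmi, height_m):
    # BMI is weight in kilograms divided by height in meters squared,
    # so the implied weight is bmi * height^2, converted here to pounds.
    return bmi * height_m ** 2 * 2.20462

print(round(weight_for_bmi(45, 1.75)))   # about 304 pounds, consistent with "over 300"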

 

A possibly more important consideration than the connection between fat consumption and weight may be that the consumption of fats crowds out diets richer in fruits and vegetables. Diets heavy on fruits and vegetables appear to reduce the incidence of various serious diseases, such as colon cancer and heart attacks. If such diets were to be encouraged, a more direct and powerful approach than taxing fat consumption would be to subsidize fruits and vegetables. Yet teenagers, the group that elicits greatest concern, are likely to have weak responses to lower prices of fruits, and of vegetables like broccoli.

 

Even if excess weight and bad diets are very unhealthy with present medical knowledge, is it irrational for teenagers and other young persons to ignore the recommendations of nutritionists and medical associations, and to consume diets heavy in fats and gain weight? Not necessarily, if they recognize the trade-off between present pleasures and future harms, though they may not recognize it. An additional and highly important consideration that is almost never mentioned is that the next 20-30 years will probably bring at least as much improvement in medical knowledge and new drugs as the past several decades did. We now have drugs that greatly reduce the potential health hazards of high (bad) cholesterol, drugs to lower blood pressure greatly, drugs to reduce the consequences of mental depression, and many other important drugs that were unavailable a few decades ago.

 

The not so distant future will very likely see big advances in fighting various cancers, colon and lung cancer included, in preventing or better controlling adverse effects of diabetes, in preventing or slowing Alzheimer's disease, and in reducing still further the risks of strokes and heart attacks. The many teenagers who are unaware of these medical trends, and are inactive, gain weight, eat few veggies, and consume much fat will still benefit from these medical advances during the next several decades.

 

Yet suppose medical progress slowed down, and that heavy saturated fat consumption would significantly raise the probability of contracting a major disease in the future. Are public policy interventions then justified? A common affirmative answer relies on the fact that overweight people who get serious diseases use health resources that are partly financed by taxpayers. This argument has some merit because of heavy taxpayer involvement in health spending.

 

But the major flaw is in the health payment system, and that flaw would be largely corrected by providing stronger incentives to economize on health spending through encouraging health savings accounts, and requiring compulsory private catastrophic health insurance. These important changes in the health delivery system would give individuals much greater incentive than they have at present, partly due to greater insurance company pressure, to reduce their health spending by getting into better shape, eating better diets, and in other ways. To be sure, if the health delivery system were not greatly improved, the health spending "externality" from consuming fat would become more relevant.

 

I believe that aside from this externality argument about the use of taxpayers' monies, there is little reason for governments to intervene in eating decisions, with some important exceptions. The main ones might include policies to give greater publicity to the health advantages of better diets, and policies that keep unhealthy foods and possibly soft drinks out of school cafeterias and school dispensing machines. Perhaps a "say no" campaign against saturated fats would work, but I am dubious about its effectiveness.

 

Sometimes I wonder whether much of the public outcry over the gain in weight of teenagers and adults stems mainly from the revulsion that many educated people experience when seeing very fat people. Surely, though, this should hardly be the ground for interventionist policies!

 

Posted by Gary Becker at 09:59 PM

The Fat Tax--Posner's Comment

 

I share much of Becker's skepticism about a "fat tax" (see my article with Tomas J. Philipson, "The Long-Run Growth in Obesity as a Function of Technological Change," Perspectives in Biology and Medicine, Summer 2003 Supplement, p. S87), though I would look favorably on a tax on soft drinks; I would even consider a ban on the sale of soft drinks to children, as I explain later.

 

The case for a fat tax, as an economist would be inclined to view it, is that a high-calorie diet contributes to obesity, which contributes to bad health, which imposes costs that are borne in part by thin people (thin taxpayers, in particular). I do think, despite skepticism in some circles, that obesity, even mild obesity, has negative health consequences, including diabetes, high blood pressure, joint problems, and certain cancers, and that much of the cost of medical treatment is externalized. But as Philipson and I emphasized in our article and Becker emphasizes too, lack of exercise is also an important factor in obesity. Moreover, the significance of an externality lies in its effect on behavior, and I am dubious that people would consume fewer calories if they had to pay all their own medical costs rather than being able to unload many of those costs on Medicaid, Medicare, or the healthy members of private insurance pools.

 

Indeed, if, as I believe, obesity is positively correlated with poverty, reducing transfer payments to people of limited income might result in more obesity. Indeed, high-caloric "junk food" might conceivably though improbably turn out to be the first real-world example of a "Giffen good," a good the demand for which rises when the price rises because the income effect dominates the substitution effect. A heavy tax on high-caloric food might so reduce the disposable income of the poor that they substituted such food for healthful food, since fatty foods tend to be very cheap and satisfying, and often nutritious as well. However, this is unlikely because food constitutes only a small percentage (no more than 20 percent) of even a poor family's budget.
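
To make the Giffen mechanism concrete, here is a toy sketch with made-up numbers: a household that must meet a daily calorie requirement on a tight food budget, choosing between a cheap staple and a pricier healthful food it prefers. None of the figures come from the post; they only illustrate how, in principle, a tax that raises the staple's price can raise staple consumption.

def staple_calories(staple_price, healthy_price=5.0, budget=5.0, kcal_needed=2.0):
    # Buy as much of the preferred healthful food as the budget allows while
    # still meeting the calorie requirement; fill the rest with the staple.
    # Calories are in thousands of kcal per day, prices in dollars per thousand kcal.
    healthy = (budget - staple_price * kcal_needed) / (healthy_price - staple_price)
    healthy = max(0.0, min(healthy, kcal_needed))
    return kcal_needed - healthy

print(staple_calories(1.0))   # ~1.25: thousands of staple kcal at the pre-tax price
print(staple_calories(1.5))   # ~1.43: the price rose, yet staple consumption rose too

The subsistence constraint is what lets the income effect dominate here; as Posner notes, with food a small share of the budget, that constraint rarely binds and the usual downward-sloping demand returns.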

 

A fat tax would not only be regressive; to the extent it induced the substitution of more healthful foods (as opposed to the Giffen effect), it would as Becker notes reduce the utility (pleasure) of the people who love junk food. This assumes that the junk-food lovers are rational and reasonably well informed, so that they trade off the pleasure gains of eating such food against the health costs. Here I begin to have doubts. I don't think the fact that obesity is correlated with poverty is due entirely to the fact that fatty foods tend to be cheap as well as tasty and satisfying. I suspect that many of the people who become obese as a result of what they eat do not understand how, for example, something as innocuous as a soft drink can produce obesity. I also suspect that producers of soft drinks and other fatty foods are ingenious in setting biological traps--designing foods that trigger intense pleasure reactions caused by brain structures formed in our ancestral environment (the prehistoric environment in which human beings attained approximately their current biological structure), when a taste for fatty foods had significant survival value. (The producers of soft drinks and other junk food also place vending machines in schools, when permitted.) I am doubtful, however, that much can be done about this problem. I do not think, for example, that a campaign of public education would be effective, because it could be neutralized by industry advertising (which, however, would have the indirect effect of a tax--it would increase the food producers' marginal costs) and because the people who most need the education are probably the least able to absorb it.

 

However, the consumption by children of soft drinks that contain sugar presents a distinct and perhaps soluble social problem. Soft drinks have virtually no nutritional content (unlike foods rich in cream or butter), and recent studies indicate that they are a significant factor in obesity, as well as a source of caffeine dependence and dental problems. They also have good substitutes in the form of drinks sweetened artificially rather than by sugar. And while generally parents know better than government what is good for their children, many parents who permit their children to drink soft drinks do not. Banning the sale of soft drinks to children could not have a Giffen effect and would not be much more costly to enforce than the ban on the sale of cigarettes to children, and might well be a justifiable policy measure.

 

Now any measure for improving public health has the following limitation: if people are healthier and live longer, this does not necessarily reduce their lifetime expenditures on health care. Most of those expenditures are incurred in the last six months of life, and no matter how long people live, they will eventually enter that terminal phase. However, the longer their healthier lives, the lower their average lifetime health-care expenditures and the greater their productivity, as well as the greater their utility since poor health reduces utility. (Besides its health effects, obesity reduces physical comfort and attractiveness.) I would therefore expect a ban on sale of soft drinks to children to yield a modest net increase in social welfare.

 

Posted by Richard Posner at 09:40 PM

Women in Science; DDT and Overpopulation--Posner's Response to Comments

 

I want to reply to some of the comments on both my last posting, which was on the NAS report on women in science, and also the previous one, on DDT.

 

Women in Science. I notice that the comments in defense of the NAS report tend to be--defensive; and also emotional. One comment suggests that if a committee that is 17/18 female is likely to be biased, any male who comments on the report is likely to be biased too. But I did not suggest that the committee should have been composed primarily of men, only that it should have been more balanced, and that the fact that the only man on the committee could not, because of his position, dissent from the report, made his inclusion, as the lone man on the committee, entirely unprofessional. Another commenter vigorously denies that there is any difference between men and women, then states that he prefers female doctors because they are more caring!

 

A number of comments point to the range of differences between men and women, encompassing behaviors (crime, sports), preferences, test results, psychology, and much else besides, including the tendency of women in science to prefer the less mathematical fields (I gave the example of primatology). These differences could I suppose all be the product of discrimination, but that seems highly unlikely.

 

One comment states that the underrepresentation of women in science may be a result of path dependency (where you start may determine where you end up), given the fewness of women in science in past times. This is not persuasive, because there were virtually no women in academic law when I was a law student in the 1950s, but now about half of all law professors are women.

 

One last point: a good test for whether there is discrimination against or in favor of a group is its average performance in the profession alleged to be a site of discrimination relative to that of the majority. If women were discriminated against in science, one would expect the average woman in science to outperform the average man in publications, awards, etc., simply because only women who were better than men could overleap the discrimination hurdle. But if there is discrimination in favor of women in science, then the average man should outperform the average woman, because then it is the men who have to overcome the discrimination barrier. (If there is no difference in average performance of men and women in a given field, the inference is that there is no sex discrimination in that field--employers and other performance evaluators regard sex as irrelevant.) Since men outperform women in science rather than vice versa, the inference is that there is discrimination in favor of women.
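
A small simulation makes the selection logic concrete. The numbers are entirely hypothetical: both groups draw talent from the same distribution, but one group must clear a higher entry hurdle, and those who clear the higher hurdle end up with the higher average.

import random

random.seed(0)
N = 100_000
talent_a = [random.gauss(0, 1) for _ in range(N)]   # group facing the lower hurdle
talent_b = [random.gauss(0, 1) for _ in range(N)]   # group facing the higher hurdle

entrants_a = [t for t in talent_a if t > 1.0]   # hypothetical entry hurdle of 1.0
entrants_b = [t for t in talent_b if t > 1.5]   # hypothetical, stiffer hurdle of 1.5

print(sum(entrants_a) / len(entrants_a))   # roughly 1.5
print(sum(entrants_b) / len(entrants_b))   # roughly 1.9: the group facing the hurdle outperforms on average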

 

DDT and Overpopulation. I repeat my abject apology for calling DDT a herbicide rather than a pesticide. Some comments suggest that the mistake reveals my complete incompetence to discuss environmental issues. That seems a bit harsh. The reason for the mistake was simply that herbicides play a particularly important role in diminution of genetic diversity--thanks in part to the ban on DDT--so I was thinking about herbicides when I was considering the effects of DDT.

 

Some comments point out correctly that interior spraying won't eliminate mosquitoes and therefore malaria; and that is true. But complete eradication may not be cost justified. Costs and benefits must be compared at the margin. If 99 percent of deaths from malaria can be eliminated by interior spraying, it may not be worthwhile to spend billions of dollars developing and producing a vaccine. That is why I find the Gates Foundation's campaign to eradicate malaria puzzling. (Actually, I don't think it's very puzzling. There is often a strong political and public-relations dimension to foundation giving, even foundation giving for activities thought nonpolitical, such as saving lives. Somehow giving money to spray the interior of houses with DDT lacks pizzazz and could even be thought politically incorrect.)

 

Most of the comments fasten on the following paragraph in my posting: "Not that eliminating childhood deaths from malaria (I have seen an estimate that 80 percent of malaria deaths are of children) would be a completely unalloyed boon for Africa, which suffers from overpopulation. But on balance the case for eradicating malaria in Africa, as for eradicating AIDS (an even bigger killer) in Africa, is compelling. Malaria is a chronic, debilitating disease afflicting many more people than die of it, and the consequence is a significant reduction in economic productivity." Many commenters regard "unalloyed boon" as a particularly callous characterization. I think some of the commenters don't understand the meaning of the word "unalloyed." I did not say it was a good thing that children die of malaria; I just said that eliminating those deaths would not be purely a good thing, if the deaths reduce population. Now, they may not, as one comment explains, because a family that loses a child to malaria may decide to have another child in its place, and indeed if the family is risk averse it may end up having more children because of the high risk of losing one or more of them to malaria than if there were no such risk. That is an interesting empirical question. I suspect that on balance there will be fewer children surviving to adulthood, simply because of the cost of additional children.

 

I continue to insist that overpopulation, including in sub-Saharan Africa, is a real problem. It is true but absurdly irrelevant that New York City has a greater population density than Africa. Overpopulation is not a simple matter of dividing people by square miles. In an agricultural society, population density tends to be negatively correlated with wealth, simply because the land must be worked harder to obtain food. Good land is not the only resource that is in limited supply--so are fresh water, forest products, game, and mineral resources. Scarcities in these resources can be overcome, but only at a cost. It is true as several comments point out that as a society grows wealthier, the birthrate tends to drop (the "demographic transition"), but Africa seems to be trapped by extreme poverty exacerbated by overpopulation.

 

Is it foolish for China to try to limit its population? If not, the case for limiting the African population is much stronger, because Africa has a far less productive population.

 

And so far I have been speaking only of the effects of population on the populous country. There are external effects as well. The effects of population on the destruction of forests and on the demand for electricity and cars are major contributors to global warming.

 

Posted by Richard Posner at 09:02 AM

October 02, 2006

The Shalala Report on Women in Science and Engineering--Posner

 

Beyond Bias and Barriers: Fulfilling the Potential of Women in Academic Science and Engineering is a book-length study published last month by the National Academy of Sciences. The study was conducted by a committee appointed by the NAS (along with the National Academy of Engineering), and it concludes that women's underperformance in academic science and engineering relative to men is caused not by any innate differences between men and women but by subtle biases, and by barriers in the form of refusing to make science jobs more "woman-friendly." The study is available online at http://darwin.nap.edu/books/0309100429/html/R7.html.

 

The study will, one hopes, be carefully dissected by experts, but I will be surprised if it stands up to expert scrutiny. Of the 18 members of the authorial committee, only one was a man, only five were members of the National Academy of Sciences, and only one was a member of the National Academy of Engineering. The one man, Robert J. Birgeneau, although a distinguished physicist, happens to be the Chancellor of the University of California; for him to have dissented from the report would have condemned him to the same fate as Lawrence Summers, and swiftly too. The composition of the committee shows remarkable insensitivity. The theme of the report is the importance of unconscious bias with respect to issues of gender; did it not occur to the members and to the NAS and NAE that women might have unconscious biases regarding the reasons for the underperformance of women in science and engineering relative to men?

 

Economists, foremost among them Gary Becker, have done a great deal of work on issues of sex discrimination and women's career choices. The only economist on the committee, however, was Alice Rivlin, a specialist in the federal budget. Her Brookings website lists works such as "Restoring Fiscal Sanity," but lists no book or paper relating to gender issues.

 

The problem of the committee's biased makeup would be less serious if the report itself were transparent, but it is not. Although it cites a great many academic studies, it does not give the reader enough information about them (the methods used, the robustness of the findings, the quality of the journal in which the study was published, the professional standing of the authors, the reception of the study in the relevant professional community, etc.) to enable an evaluation. Some of the observations in the report suggest a distinct lack of academic rigor, as when it reports that Japanese schoolgirls do better on math tests than American schoolboys. Since there is much more job discrimination against women in Japan than in the United States (see, e.g., http://www.pbs.org/nbr/site/research/educators/060106_04c/), one would expect Beyond Bias and Barriers to predict that Japanese girls would do very poorly on math exams.

 

The report expresses particular concern with underperformance of black women in science and engineering, who underperform not only white men and women but also black men, even though black women generally outperform black men in educational attainment. This suggests that maleness rather than race explains differential performance in science. Other obvious objections to findings favored by this biased report are ignored. For example, there is a large difference in the average research output of male and female scientists. However, that difference is greatly diminished when the comparison is between male and female scientists in leading research universities; the obvious but unmentioned reason is that these universities are not discriminating in favor of women but merely applying the same high standards to both sexes. No one thinks that no female scientists are comparable to excellent male scientists; the issue is why there are so few female scientists in those top-tier universities. Another example: from the fact that the gender gap in science has diminished in recent decades one cannot reason, as the report does, that there are no genetic or otherwise innate differences in preferences or aptitudes for a scientific career. If a gender or racial gap is due partly to discrimination and partly to innate factors, then eliminating discrimination will narrow the gap, but will not eliminate it.

 

The study is notably deficient in comparisons between women in science and in other demanding occupations. Women do better, relative to men, in academic law than they do in academic science, mathematics, and engineering yet law is a highly demanding field. And how to explain their domination of primatology, a scientific field? The problems that women in science face, particularly in highly mathematized fields such as physics, in combining family and career seem no different from the problems they face in other fields inside and outside of science. If the report's ambitious program of making science woman-friendly, for example by more financial aid, day care, and the stretching out of degree programs, were extended--and why shouldn’t it be?--to other demanding fields, there would be no basis that I can find in the study for predicting that more women would enter science rather than the fields that they appear to prefer.

 

Posted by Richard Posner at 09:46 PM

Comment on the NAS Report on Women in Science and Engineering-BECKER

 

Posner makes excellent points, so I will fill in at a few places. First, it is common for National Academy of Sciences-sponsored Reports on economic and social issues, such as this one, to have few members of the NAS on the report committees. Unfortunately, it is also common for Reports on these issues to be poorly executed, and to be driven more by wishful thinking than by scientific findings. The low quality of many NAS Reports on economic and social issues goes back at least to the first such report I evaluated in the 1970s at the request of a Vice-President of the Academy, just after I was elected to the Academy. The Report was on the future of energy resources, contained virtually no economics, and was filled with common prejudices about how fast the world was running out of oil and other energy sources. I have felt since then that the NAS should not lend its prestigious name to reports on such issues.

 

Unfortunately, this NAS Report on women in science is no exception to the tendency of its Reports to be heavy on beliefs and weak on carefully documented analysis. To be sure, everyone who has seriously studied this question agrees that women in the past suffered greatly from discrimination in gaining entry into many professions, including but not limited to the sciences. In addition, however, there is also agreement that discrimination declined greatly over time, which is partly reflected in the data presented in this Report on the now substantial enrollments of women at technical schools such as MIT.

 

How much discrimination remains? The evidence is still unclear, so considerable disagreement remains over the respective roles of discrimination in access to education and jobs, women's responsibilities for childcare and other household activities, social conditioning, genetic differences, and possibly other factors. The report does not advance our ability to discriminate among these explanations. An Appendix to the Report discusses in an uninspired way various theories of discrimination, including mine, but no clear-cut conclusion is reached about which theory, if any, is highly applicable to the realities of women's position in science.

 

The summary of the Report says that it cannot be that women in academia, and sciences in particular, are now recipients of favoritism, because affirmative action that selects candidates on the basis of race or sex is illegal. Well, legal or not, anyone who has sat in on academic departmental or divisional meetings (my wife is also a professor) knows how often preference is given to candidates because they are women, even when male candidates have better records. Of course, not every professor or every department acts this way, but a vocal and aggressive number of professors do, and deans and other university administrators frequently back their position.

 

The Report dismisses the importance of women’s interest in child rearing and other family activities in limiting their scientific accomplishments by stating that "many women scientists and engineers persist in their pursuit of academic careers despite conflicts between their roles as parents and as scientists and engineers". No one would deny that statement, but the relevant question is whether the considerable time spent by most women in child rearing is an important factor in their generally less outstanding achievements as scientists and engineers. Common sense and many studies suggest that the many hours spent on child rearing at least make it much harder for women to produce distinguished research.

 

The Report recognizes that women take much more time off than men not only to take care of children after they are born, but also when children are sick, when a parent is needed to visit their children's school, and in other situations. The Report counters that over a lifetime men make up for this by taking more time off as sick leave. True, but women reduce the working time they could be spending on research at younger ages, when scientific productivity peaks, while men generally become sick at older ages, after productivity is already on the decline.

 

Larry Summers was forced to resign as President of Harvard in major part because of his well publicized comments on why relatively few women are in scientific positions at the best universities. He attributed this partly to discrimination and the difficulty of combining family responsibilities with research. No trouble there. He got into trouble with many women's groups when he raised the issue of whether women on average had as much capacity as men to make outstanding scientific contributions. The Report denies that there are "any significant biological differences between men and women in performing science and mathematics that can account for the lower representation of women in academic faculty and scientific leadership positions in these fields". Account fully or only partially?

 

I am no expert on this evidence, and I do try in my own research to see how far I can go in understanding the different achievements of working men and women without assuming innate gender differences in capacities. Still, that is very different from claiming the evidence is fully persuasive on this point, or, in more technical language, from claiming that the variability in women's capacities is not less than the variability in men's, regardless of how their mean capacities compare.

 

Sweden probably has the strongest commitment to gender equality of any country. It implements this commitment with a liberal system of childcare allowances and facilities, a generous system of government-paid leaves open to both sexes (with men required to take some of the leave), and a strong stance against gender discrimination. I cannot speak with authority about Swedish scientists, but I can say with confidence that while there are excellent female Swedish economists, at younger, middle, and older ages the best Swedish economists are very predominantly men, perhaps even more so than in the United States.

 

To conclude, I have very strongly opposed discrimination against women in general and for academic positions in particular. While I am not sympathetic to strong government involvement in paid leaves for childbearing or for childcare facilities, I can see a possible case for some government actions along these lines. However, such attitudes on these issues do not justify a Report on women in science that does not really meet the fundamental criteria for a scientific Report. Instead, it provides further evidence on why the NAS should not be sponsoring Reports on economic and social issues.

 

Posted by Gary Becker at 08:26 PM

September 30, 2006

Response on Malaria and DDT-BECKER

 

Thanks for some informative comments. Clearly, I should have said the WHO rather than the WTO. I apologize for this carelessness that is especially disturbing to me since I often write about the WTO.

 

I also regret that I probably exaggerated how many lives could have been saved over the years by extensive use of DDT spraying in houses. However, I am not guilty of saying that DDT spraying alone would do the job, for I did say that mosquito nets and drugs are also useful. A combination is the best approach, but these other methods are just not a good enough substitute for DDT spraying. So I do stand behind a claim that opposition to DDT spraying by many organizations caused a very large number of needless deaths from malaria.

 

Do the recent WHO statements supporting the use of DDT in homes reflect a change in attitudes toward DDT home use by this organization? One strong critic of my discussion points out several errors in what I said, and I am indebted to him for these corrections. However, he is inconsistent on this issue of whether the WHO has "changed" its position. On the one hand, he says that "The WHO…has always supported its use" (that is, DDT spraying), but then quotes with approval a statement by another critic of DDT spraying that refers to "The World Health Organization's new (!) stance on DDT" (the exclamation point is mine). "New" or not new, that is the question. I was wrong to say that the WHO had banned the use of DDT in homes until recently. However, it is accurate, I believe, to say that the WHO had not strongly endorsed its use until a few weeks ago, and that many donor agencies were for this reason reluctant to finance purchases of DDT for household spraying.

 

One commenter challenged me (and his challenge was very well answered by another commenter) as to whether DDT house spraying does pass a relevant benefit-cost criterion. Accepting his assumptions, DDT spraying would cost $12 per year per person. That amount seems to be a highly worthwhile expenditure if we relate it to estimates of the value of saving the lives of young persons even in very poor countries. Of course, a full analysis would require knowing the money value placed on their utility by people in poor countries (my paper with Rodrigo Soares and Tomas Philipson in the March 2005 issue of the American Economic Review on declines in mortality in poor countries tries to measure the utility value of improved life expectancy, not improvements in GDP alone), the probabilities that such spraying would save lives or significantly improve the quality of lives, the productivity of alternative uses of these funds, such as to find an effective vaccine, and so forth. I have not, nor to my knowledge has anyone else, made these calculations, but if spraying only costs $12 per year, and it is effective in significantly cutting deaths from malaria (some commenters dispute that), to me that seems like a great use of private or public funds.
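
For illustration only, here is the sort of back-of-the-envelope comparison Becker gestures at, using the commenter's $12-per-person-per-year spraying cost; the coverage, baseline mortality, and efficacy figures are my assumptions, not estimates from the post or from the literature.

cost_per_person_year = 12.0      # from the commenter's assumptions
people_covered = 1_000_000       # hypothetical program size
baseline_deaths = 500            # hypothetical annual malaria deaths per million people without spraying
share_averted = 0.5              # hypothetical: spraying prevents half of those deaths

total_cost = cost_per_person_year * people_covered
deaths_averted = baseline_deaths * share_averted
print(total_cost / deaths_averted)   # $48,000 per death averted under these assumptions

Whether that figure passes a benefit-cost test then depends on the value placed on a saved life in a poor country, which is exactly the comparison Becker describes.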

 

Posted by Gary Becker

September 25, 2006

Correction--Posner

 

Unforgivably, I referred to DDT as a "herbicide." It is, of course, a pesticide. A herbicide is used to destroy weeds and other plants.

 

Posted by Richard Posner at 10:00 PM

September 24, 2006

DDT and Deaths From Malaria –BECKER

 

The world health community justifiably pays enormous attention to the number of deaths from AIDS, which amounts to about 3 million persons a year worldwide. Malaria receives far less attention, even though it too is very deadly, causing about 1.5 million deaths per year. The World Trade Organization (WTO) declared in 1998 a "war on malaria" that aimed to cut malaria deaths in half by 2010. Instead, deaths from malaria have been increasing, not falling. The reason for the failure of this malaria war is mainly that in the name of environmentalism, the WTO and other international organizations rejected the use of an effective technique, namely spraying DDT on the walls of homes in malaria-infected areas.

 

What is especially disheartening about the huge number of deaths from malaria, and a fact that sharply distinguishes malaria from AIDS, is that malaria deaths could be greatly reduced in a cheap way without requiring any fundamental changes in behavior. A small amount of DDT sprayed on the walls of homes in vulnerable malaria regions is highly effective in deterring malaria-bearing mosquitoes from entering these homes. Finally recognizing this, a couple of weeks ago the WTO relaxed its support of the ban on DDT, and instead supported spraying of DDT on house walls in malaria-ridden areas. This decision is likely to influence the position on DDT spraying of the World Bank, USAID, and other relevant organizations. Some African countries, like Zambia and South Africa, which are not dependent on international support for their efforts at fighting disease, had already started to use DDT as a fundamental malaria-fighting weapon prior to the new WTO guidelines. South Africa decided to use DDT in the face of EU opposition after suffering a deadly malaria outbreak. DDT apparently helped that country greatly reduce its incidence of malaria.

 

DDT was developed as the first modern insecticide during World War II, and was remarkably successful in reducing deaths from malaria, typhus, and other insect-borne human diseases. DDT was extensively used worldwide in the subsequent two decades with continued success as protection against these diseases, and was employed even more extensively to rid cotton and other crops of destructive insects. In 1959, the United States alone used 80 million pounds of DDT, with the overwhelming share being devoted to spraying crops. This widespread spraying of crops with DDT generated strong opposition to its use because of evidence that DDT was destroying some wildlife.

 

This opposition was sparked by Rachel Carson’s 1962 best-selling book Silent Spring, which alleged that DDT caused cancer and harmed bird reproduction. Harm to birds and other species is pretty well documented, but after over 50 years of trying, no real evidence has been found linking DDT to cancer or other serious human diseases. In any case, by the end of 1972, DDT's use in the United States was effectively banned. That ban soon became common in all rich countries, and in most poor countries too, as they responded to pressure from international organizations and Western governments.

 

One unintended consequence of the DDT ban was a devastating comeback by malaria and some other diseases after they had been in retreat. Other pesticides that replaced DDT have been much less effective at reducing malaria and other diseases transmitted by insects. The USAID has been a strong advocate of mosquito bed nets as an alternative to DDT. Mosquitoes operate mainly from dusk until dawn, so netting over beds can be effective if used persistently and correctly. Unfortunately, in many African countries bed nets are not readily available, and they are often not used to protect children since poor families may only have one or two nets. Moreover, families frequently do not bother to use these nets during some of the hours when mosquitoes are still active. So while bed nets could be a useful part of an overall strategy against malaria, they are not a good substitute for DDT.

 

Drugs that had been effective for a while in curing malaria or preventing its occurrence have become obsolete over time as the pathogens they target mutate into resistant strains. This means that drugs used to fight malaria need to be continually updated, but unfortunately international organizations are notoriously slow at responding with newer more effective drugs.

 

I am an "environmentalist", but I do not believe that all reasonable cost-benefit analysis should be suspended when discussing environmental issues. The ban on using DDT in houses to fight malaria is an example of environmentalism that lost all sense of proportion. As has happened with nuclear power and in other environmental situations, exaggerated claims about negative environmental effects of DDT on humans were publicized, and these claims were further exaggerated after being picked up by the media and politicians. As a result of the hysteria against the use of DDT for any purpose, millions of lives were lost unnecessarily during the past several decades to malaria and some other insect-borne diseases. These deaths occurred only, I repeat only, because of international pressure on African and other poor countries not to use DDT and certain other pesticides in fighting malaria and other diseases caused by insect bites. The fact is that the quantities of DDT needed to be quite effective against malaria in tropical and other countries, where it is often at epidemic levels, are a tiny fraction of the amounts that had been used to rid crops of destructive insects.

 

Opponents of DDT use in disease control should wake up and realize that there has been a health "crisis" for decades, a crisis that could have been controlled if more common sense had guided international policy. The WTO's reversal of its position to allow small amounts of DDT to be used on the walls of houses to prevent mosquitoes from entering them is a belated but welcome recognition of this continuing health crisis.

 

The Hard Facts of Black America

A journalist decried as a turncoat by community leaders defends his views on African Americans helping themselves.

By Juan Williams

October 12, 2006

WHY NOT just go ahead and call me an Uncle Tom and a sellout? Why bother with trying to put a new coat of paint on the same old personal attacks by saying that I am "demeaning black people," that I'm the "black Ann Coulter" and a turncoat against the cause of racial progress for black people in the United States?

That's a sampling of the nastiness flying at me since I wrote a book that holds today's civil rights leaders accountable for serious problems inside black America. I've suggested that many poor people are capable of helping themselves by graduating high school, keeping a job and having children when they're married and ready to be parents.

It is easier to attack me than to deal with some hard facts. Here I go again, but let's look at the facts.

One hard, unforgiving fact is that 70% of black children are born today to single mothers. This is at the heart of the breakdown of the black family, the cornerstone of black life for generations. Some of these children without two parents may turn out just fine, but most add stress to the lives of their grandparents, neighbors, police and teachers who have to take up the slack for absent or bad parents.

It is easier to attack me than to deal with the hard fact of a dropout rate now at about 50% nationwide for black and Latino students. The average black student who gets a high school diploma today is reading and doing math at an eighth-grade level. Even with a diploma, that young person is ill-prepared to compete for entry-level jobs or for a college degree.

In an era of global economic competition — when it is harder to find a job, pay the rent and afford health insurance — there is little room to argue with the fact that it is a national crisis to find so many children of any race failing in school. But it is especially disturbing that so many of those children are black and Latino; they have the added burden of being people of color in a society in which race remains a real factor.

And what about the tragic fact of a 25% poverty rate among black Americans? That's more than twice the 12% national poverty rate and more than triple the poverty rate among whites.

My critics are busy blaming racism for all this poverty. But that tactic is losing its punch because so many people of color, including black people from Africa and the Caribbean, arrive in this country and outperform native-born black people in educational achievement and income. And it is hard to make the old "racism is the whole problem" argument when the other 75% of black America is taking advantage of 50 years of new opportunities — since Brown vs. Board of Education and the Civil Rights Act — to create the largest black middle class in history, with unprecedented wealth and political power.

The core group of black people trapped in poverty today is not defined by lack of opportunity as much as by bad choices. Black youth culture is boiling over with nihilism. It embraces failure and frustration, including random crime and jail time, as the authentic expression of black life. "Keeping it real" and "street cred" in that destructive world require gunshot victims, the "N-word" and treating women as "bitches" and "hos." There is no arguing that this is a sick mind-set.

Here are some more facts: 44% of the nation's prison population is made up of black people, and blacks account for 37% of violent crimes, although black Americans are only 13% of the population. Who can make the case that this is anything but a social disaster?

Yet I'm condemned for asking why today's prominent civil rights leaders, such as Jesse Jackson, Al Sharpton and Maxine Waters, are not dealing with these problems. They prefer to call for more government programs and more white guilt.

And yet a poll done by the Pew Research Center a week after Hurricane Katrina found that two-thirds of black Americans agree with 75% of white Americans who say that too many poor people are overly dependent on government programs. In other words, a clear majority of the nation, including most black people, are saying that the poor need to look in the mirror and halt self-defeating behavior.

Most of all, black people are saying that the poor are not victims but people who are capable of helping themselves.

These are the facts, whether or not you call me a Tom — and whether or not I write them.



JUAN WILLIAMS is a senior correspondent for National Public Radio, a Fox News analyst and author of "Enough: The Phony Leaders, Dead-End Movements, and Culture of Failure That Are Undermining Black America -- and What We Can Do About It."

 

How Sleep Works

by Marshall Brain

 

Why Sleep?
No one really knows why we sleep. But there are all kinds of theories, including these:

Sleep gives the body a chance to repair muscles and other tissues, replace aging or dead cells, etc.

Sleep gives the brain a chance to organize and archive memories. Dreams are thought by some to be part of this process.

Sleep lowers our energy consumption, so we need three meals a day rather than four or five. Since we can't do anything in the dark anyway, we might as well "turn off" and save the energy.

According to the Science News Online article "Napless cats awaken interest in adenosine," sleep may be a way of recharging the brain, using adenosine as a signal that the brain needs to rest: "Since adenosine secretion reflects brain cell activity, rising concentrations of this chemical may be how the organ gauges that it has been burning up its energy reserves and needs to shut down for a while." Adenosine levels in the brain rise during wakefulness and decline during sleep.

What we all know is that, with a good night's sleep, everything looks and feels better in the morning. Both the brain and the body are refreshed and ready for a new day.

Dreams
Why do we have such crazy, kooky dreams? Why do we dream at all for that matter? According to Joel Achenbach in his book Why Things Are:

The brain creates dreams through random electrical activity. Random is the key word here. About every 90 minutes the brain stem sends electrical impulses throughout the brain, in no particular order or fashion. The analytic portion of the brain -- the forebrain -- then desperately tries to make sense of these signals. It is like looking at a Rorschach test, a random splash of ink on paper. The only way of comprehending it is by viewing the dream (or the inkblot) metaphorically, symbolically, since there's no literal message.

This doesn't mean that dreams are meaningless or should be ignored. How our forebrains choose to "analyze" the random and discontinuous images may tell us something about ourselves, just as what we see in an inkblot can be revelatory. And perhaps there is a purpose to the craziness: Our minds may be working on deep-seated problems through these circuitous and less threatening metaphorical dreams.

Here are some other things you may have noticed about your dreams:

Dreams tell a story. They are like a TV show, with scenes, characters and props.

Dreams are egocentric. They almost always involve you.

Dreams incorporate things that have happened to you recently. They can also incorporate deep wishes and fears.

A noise in the environment is often worked into a dream in some way, giving some credibility to the idea that dreams are simply the brain's response to random impulses.

You usually cannot control a dream -- in fact, many dreams emphasize your lack of control by making it impossible to run or yell. (However, proponents of lucid dreaming try to help you gain control.)

Dreaming is important. In sleep experiments where a person is woken up every time he/she enters REM sleep, the person becomes increasingly impatient and uncomfortable over time.

To learn more, check out How Dreams Work.

How Much Sleep Do I Need?
Most adults seem to need seven to nine hours of sleep a night. This is an average, and it is also subjective. You, for example, probably know how much sleep you need in an average night to feel your best.

The amount of sleep you need decreases with age. A newborn baby might sleep 20 hours a day. By age four, the average is 12 hours a day. By age 10, the average falls to 10 hours a day. Senior citizens can often get by with six or seven hours a day.

Tips to Improve Your Sleep

Exercise regularly. Exercise helps tire and relax your body.

Don't consume caffeine after 4:00 p.m. or so. Avoid other stimulants like cigarettes as well.

Avoid alcohol before bedtime. Alcohol disrupts the brain's normal patterns during sleep.

Try to stay in a pattern with a regular bedtime and wakeup time, even on weekends.

 

supreme court dispatches
Button It
The Supreme Court learns to stay out of this messy business of deciding cases.
By Dahlia Lithwick
Posted Wednesday, Oct. 11, 2006, at 6:16 PM ET

Metro fare from Farragut North to Union Station: $1.35
World's smallest bag of Cheez-Its from Supreme Court cafeteria: $1.65
Caribou coffee spilled all over pants: $1.85
Replacement pants to wear to oral argument: $29.99
Getting to watch David Souter wigging out in true New England fashion: priceless.

Nobody is wearing buttons on their lapels at this morning's oral argument in Carey v. Musladin. The case probes whether jurors were improperly influenced by buttons worn by the family of the victim at the criminal trial of Matthew Musladin. It's probably a good thing that nobody is wearing buttons, because just about every judge who has reviewed the case and the majority of the justices who speak today agree that buttons prejudice jurors. As Justice Stephen Breyer puts it this morning: "Every judge in this case says wearing buttons is a bad idea. For obvious reasons … . And at some point … does it not become pretty clear that it's pretty unfair and unconstitutional?"

But the question for the court isn't whether it's a bad idea to allow families to wear buttons with the pictures of victims. The question is whether judges even get to say it's a bad idea—and whether a judge's failure to put a halt to the practice violates an established constitutional rule. It seems today that not even the most liberal justices, except Souter, think there's a role for judges to play here. And that seems to make Souter even more squirrelly. Indeed, he seems to have fallen victim to the notion that if you just keep browbeating appellate counsel to concede that the answer is "obvious," you might actually make it so.

The story begins in San Jose, Calif., in May 1994, when Musladin arrived at the home of his estranged wife Pamela and her new fiance, Tom Studer, to pick up his 3-year-old son, Garrick, for a weekend visit. The couple had been through an ugly custody battle, and as the child was handed over to his father, Musladin knocked Pamela to the ground. Studer was shot in the altercation. Musladin claims self-defense, while the state argues that he shot to kill. Experts on both sides agree that Studer died from a ricochet shot.

According to the 9th Circuit opinion that the justices have to work with today, Studer's family wore buttons (2 to 4 inches in diameter) on each of the 14 days of the trial, and the judge refused to stop them, despite the objection of Musladin's lawyers. Musladin was convicted of first-degree murder and sentenced to 32 years to life in prison. He appealed, first to the state court of appeals. It determined that, "While we consider the wearing of photographs of victims in a courtroom to be an 'impermissible factor coming into play,' the practice of which should be discouraged, we do not believe the buttons in this case branded defendant 'with an unmistakable mark of guilt' in the eyes of the jurors." In short, said the reviewing court, this was a mistake, but not bad enough to reverse the conviction.

After exhausting his state-court appeals, Musladin turned to the federal courts. The lower court denied him. He hit three cherries when he got to the 9th Circuit Court of Appeals.

Under the 1996 federal statute known as the Antiterrorism and Effective Death Penalty Act, federal appeals courts can't second-guess state-court decisions unless they are "contrary to, or involved an unreasonable application of, clearly established federal law, as determined by the Supreme Court of the United States." There is no Supreme Court case law on the books pertaining to influencing jurors with buttons, only some general precedent prohibiting the state from dressing defendants up in prison garb and shackles. That makes proving that a ban on inflammatory buttons is somehow "clearly established federal law" somewhat problematic.

So, one problem facing Musladin is that the 9th Circuit seems to have used 9th Circuit case law to alter the Supreme Court's AEDPA test. Justice Anthony Kennedy seems unbothered by this, suggesting to Gregory Ott, the deputy district attorney from California, that it would hardly make sense for the California courts to ignore their own precedent. Justice Ruth Bader Ginsburg asks whether reviewing courts are just meant to "exclude entirely … any federal court of appeals decisions."

"Yes," replies Ott.

"So, the only thing that is proper to look at," says Ginsburg, "are the decisions of this court, and if you don't have a case on all fours, as we have no buttons case, then that's the end of it?" Yup.

Kennedy asks Ott what he thinks about banners.

"I haven't seen a case involving banners," replies Ott.

"And I think I know why," retorts Kennedy. "Because it affects the atmospherics of a trial."

Justice Antonin Scalia notes that "tank shirts and beanie hats" are also not allowed at trial.

Kennedy wonders whether anything the court says at this point could turn the justices' general sense that it's wrong to allow buttons at trials into a "clearly established law."

"Supposing," he says, "that we all thought that this practice in this particular case deprived the defendant of a fair trial, but we also agreed with you that AEDPA prevents us from announcing such a judgment. What if we wrote an opinion saying it is perfectly clear there was a constitutional violation here, but Congress has taken away our power to reverse it. Then a year from now, the same case arises … could the district court follow our dicta?"

No, says Ott.

Makes you wonder why we have judicial review in the first place, huh?

Here we pause for another little chapter in the Seduction of Anthony Kennedy. As Kennedy speaks, Breyer nods so vigorously, I want to call in a chiropractor. Everybody seems so desperate to get Kennedy on their team these days, you half expect Clarence Thomas to climb up into his lap and start stroking his hair.

Around now is when Souter comes up with his Hypothetical That Will Not Die: What if, instead of wearing a button with a picture of the victim, the family were all wearing buttons that read "Hang Musladin." Should the defendant get a new trial? Chief Justice John Roberts and Scalia helpfully tell Ott that the answer to that question should be "no." So he says, not necessarily.

Souter won't let go. He asks several times why Ott won't concede that buttons are improper, then adds, "Is there any question in your mind that allowing the family members to display this message to a jury" raises an impossible risk of bias? Admit it. Say it. Concede it. Ott can barely speak. When he does it's to say that the hypo is not this case.

David Fermino, representing Musladin, quickly gets pinned by Roberts, who seems to be of the impression that there is no difference between a victim's family that wears buttons and one that just sits there. "A typical jury is going to understand that the victim has a family and that they're going to be sorry that he's dead," the chief justice says. Later, he asks the same question about a family that wears black, and Kennedy wants to know about a family that weeps openly.

Souter answers for Fermino: Crying and wearing black are what victims' families naturally do. Going out of one's way to wear buttons is not. "I view the wearing of buttons as abnormal … and intended to get the jury's attention." Then he offers a long soliloquy mourning that no other court has agreed with his conviction that buttons are bad. "I'm raising a question about my own judgment in relation to the fact that no other court seems to have come to that conclusion," he says.

It's like Hamlet on the ramparts. You half expect Fermino to come back with: "Was there a question in there?"

Considering that much of the court considers the buttons to be unseemly at best and prejudicial at worst, this might be a close case. Except that what they think about the buttons doesn't matter anymore. Congress has told the courts to butt out, and this court is learning to do just that. No wonder Souter is having some sort of existential/constitutional crisis. Who wants to schlep all the way down from New Hampshire to hear a case you're not even allowed to decide?

Dahlia Lithwick is a Slate senior editor.

Article URL: http://www.slate.com/id/2151352/

 

 Privacy under attack, but does anybody care?

It's vanishing, but there's no consensus on what it is or what should be done

By Bob Sullivan

Technology correspondent

MSNBC

 

Updated: 10:47 a.m. ET Oct 16, 2006

Someday a stranger will read your e-mail, rummage through your instant messages without your permission or scan the Web sites you’ve visited — maybe even find out that you read this story.

You might be spied on in a lingerie store by a secret camera or traced using a computer chip in your car, your clothes or your skin.

Perhaps someone will casually glance through your credit card purchases or cell phone bills, or a political consultant might select you for special attention based on personal data purchased from a vendor.

In fact, it’s likely some of these things have already happened to you.

Who would watch you without your permission?  It might be a spouse, a girlfriend, a marketing company, a boss, a cop or a criminal. Whoever it is, they will see you in a way you never intended to be seen — the 21st century equivalent of being caught naked. 

Psychologists tell us boundaries are healthy, that it’s important to reveal yourself to friends, family and lovers in stages, at appropriate times. But few boundaries remain. The digital bread crumbs you leave everywhere make it easy for strangers to reconstruct who you are, where you are and what you like. In some cases, a simple Google search can reveal what you think. Like it or not, increasingly we live in a world where you simply cannot keep a secret.

The key question is: Does that matter?

For many Americans, the answer apparently is “no.” 

When pollsters ask Americans about privacy, most say they are concerned about losing it. An MSNBC.com survey, which will be covered in detail on Tuesday, found an overwhelming pessimism about privacy, with 60 percent of respondents saying they feel their privacy is “slipping away, and that bothers me.”

People do and don't care
But people say one thing and do another.

Only a tiny fraction of Americans – 7 percent, according to a recent survey by The Ponemon Institute – change any behaviors in an effort to preserve their privacy. Few people turn down a discount at toll booths to avoid using the EZ-Pass system that can track automobile movements.

And few turn down supermarket loyalty cards. Carnegie Mellon privacy economist Alessandro Acquisti has run a series of tests that reveal people will surrender personal information like Social Security numbers just to get their hands on a measly 50-cents-off coupon.

But woe to the organization that loses a laptop computer containing personal information.

When the Veterans Administration lost a laptop with 26.5 million Social Security numbers on it, the agency felt the lash of righteous indignation from the public and lawmakers alike. So, too, did ChoicePoint, LexisNexis, Bank of America, and other firms that reported in the preceding months that millions of identities had been placed at risk by the loss or theft of personal data.

So privacy does matter – at least sometimes. But it’s like health: When you have it, you don’t notice it. Only when it’s gone do you wish you’d done more to protect it.

But protect what?  Privacy is an elusive concept. One person’s privacy is another person’s suppression of free speech and another person’s attack on free enterprise and marketing – distinctions we will explore in detail on Wednesday, when comparing privacy in Europe and the United States.

Still, privacy is much more than an academic free speech debate. The word does not appear in the U.S. Constitution, yet the topic spawns endless constitutional arguments. And it is a wide-ranging subject, as much about terrorism as it is about junk mail. Consider the recent headlines that have dealt with just a few of its many aspects:

  Hewlett Packard executives hiring private investigators to spy on employees and journalists.

  Rep. Mark Foley sending innuendo-laden instant messages – a reminder that digital communication lasts forever and that anonymous sources can be unmasked by clever bloggers from just a few electronic clues.

  The federal government allegedly compiling a database of telephone numbers dialed by Americans, and eavesdropping on U.S. callers dialing international calls without obtaining court orders.

Privacy will remain in the headlines in the months to come, as states implement the federal government’s Real ID Act, which will effectively create a national identification program by requiring new high-tech standards for driver’s licenses and ID cards. We'll examine the implications of this new technological  pressure point on privacy on Thursday.

What is privacy?
Most Americans struggle when asked to define privacy. More than 6,500 MSNBC readers tried to do it in our survey. The nearest thing to consensus was this sentiment, appropriately offered by an anonymous reader: “Privacy is to be left alone.”

The phrase echoes a famous line penned in 1890 by future Supreme Court Justice Louis Brandeis, the father of the American privacy movement and co-author of “The Right to Privacy.” At the time, however, Brandeis’ concern was tabloid journalism rather than Internet cookies, surveillance cameras, no-fly lists and Amazon book suggestions.

As privacy threats multiply, defending this right to be left alone becomes more challenging. How do you know when you are left alone enough? How do you say when it’s been taken?  How do you measure what’s lost? What is the real cost to a person whose Social Security number is in a data-storage device left in the back seat of a taxi?

Perhaps a more important question, Acquisti says, is how do consumers measure the consequences of their privacy choices? 

In a standard business transaction, consumers trade money for goods or services. The costs and the benefits are clear. But add privacy to the transaction, and there is really no way to perform a cost-benefit analysis.

If a company offers $1 off a gallon of milk in exchange for a name, address, and phone number, how is the privacy equation calculated? The benefit of surrendering the data is clear, but what is the cost?  It might be nothing. It might be an increase in junk mail. It might be identity theft if a hacker steals the data. Or it might end up being the turning point in a divorce case. Did you buy milk for your lactose-intolerant child? Perhaps you’re an unfit mother or father.

Unassessable costs
“People can't make intelligent (privacy) choices,” Acquisti said. “People realize there could be future costs, but they decide not to focus on those costs.”

The simple act of  surrendering a telephone number to a store clerk may seem innocuous — so much so that many consumers do it with no questions asked. Yet that one action can set in motion a cascade of silent events, as that data point is acquired, analyzed, categorized, stored and sold over and over again. Future attacks on your privacy may come from anywhere, from anyone with money to purchase that phone number you surrendered.

If you doubt the multiplier effect, consider your e-mail inbox. If it's loaded with spam, it's undoubtedly because at some point in time you unknowingly surrendered your e-mail to the wrong Web site.

Do you think your telephone number or address is handled differently? A cottage industry of small companies with names you've probably never heard of — like Acxiom or Merlin — buys and sells your personal information the way other commodities like corn or cattle futures are bartered.

You may think your cell phone is unlisted, but if you've ever ordered a pizza, it might not be. Merlin is one of many commercial data brokers that advertise the sale of unlisted phone numbers compiled from various sources -- including pizza delivery companies.

These unintended, unpredictable consequences that flow from simple actions make privacy issues difficult to grasp, and grapple with.

Privacy’s nebulous nature is never more evident than when Congress attempts to legislate solutions to various perceived problems.

Marc Rotenberg, who runs the Electronic Privacy Information Center and is called to testify whenever the House or Senate debates privacy legislation, is often cast as a liberal attacking free markets and free marketing and standing opposite data collection capitalists like ChoicePoint or the security experts at the Department of Homeland Security. He once whimsically referred to privacy advocates like himself as “data huggers.”

Yet the “right to be left alone” is a decidedly conservative -- even Libertarian -- principle.  Many Americans would argue their right to be left alone while holding a gun on their doorstep. 

In a larger sense, privacy also is often cast as a tale of “Big Brother” -- the government is watching you or a big corporation is watching you. But privacy issues don’t necessarily involve large faceless institutions: A spouse takes a casual glance at her husband’s Blackberry, a co-worker looks at e-mail over your shoulder or a friend glances at a cell phone text message from the next seat on the bus.

‘Nothing to hide’
While very little of this is news to anyone – people are now well aware  there are video cameras and Internet cookies everywhere – there is abundant evidence that people live their lives ignorant of the monitoring, assuming a mythical level of privacy.  People write e-mails and type instant messages they never expect anyone to see.  Just ask Mark Foley or even Bill Gates, whose e-mails were a cornerstone of the Justice Department’s antitrust case against Microsoft. 

It took barely a day for a blogger to track down the identity of the congressional page at the center of the Foley controversy. The blogger didn’t just find the page’s name and e-mail address; he found a series of photographs of the page that had been left online. 

Nor do college students heed warnings that their MySpace pages laden with fraternity party photos might one day cost them a job. The roster of people who can’t be Googled shrinks every day.

And polls and studies have repeatedly shown that Americans are indifferent to privacy concerns.

The general defense for such indifference is summed up in a single phrase: “I have nothing to hide.” If you have nothing to hide, why shouldn’t the government be able to peek at your phone records, your wife see your e-mail or a company send you junk mail? It’s a powerful argument, one that privacy advocates spend considerable time discussing and strategizing over.

It is hard to deny, however, that people behave differently when they’re being watched. And it is also impossible to deny that Americans are now being watched more than at any time in history.

That’s not necessarily a bad thing. Without an instant message evidence trail, would anyone believe a congressional page accusing Rep. Foley of making online advances? And perhaps cameras really do cut down on crime.

No place to hide
But cameras accidentally catch innocents, too. Virginia Shelton, 46; her daughter, Shirley, 16; and a friend, Jennifer Starkey, 17, were all arrested and charged with murder in 2003 because of an out-of-synch ATM camera. Their pictures were flashed in front of a national audience, and they spent three weeks in a Maryland jail before it was discovered that the camera was set to the wrong time.

“Better 10 guilty persons escape than one innocent person suffer” is a phrase made famous by British jurist William Blackstone, whose work is often cited as the basis of U.S. common law, and is invoked by the U.S. Supreme Court when it wants to discuss a legal point that predates the Constitution.

It is not clear how the world of high-tech surveillance squares with Blackstone’s ratio.   What would he say about a government that mines databases of telephone calls for evidence that someone might be about to commit a crime? What would an acceptable error rate be?

Rather than having “nothing to hide,” author Robert O’Harrow declared two years ago that Americans have “No Place to Hide” in his book of the same name. 

“More than ever before, the details about our lives are no longer our own,” O’Harrow wrote. “They belong to the companies that collect them, and the government agencies that buy or demand them in the name of keeping us safe.”

That may be a trade-off we are willing, even wise, to make. It would be, O’Harrow said, “crazy not to use tech to keep us safer.” The terrorists who flew planes into the World Trade Center were on government watch lists, and their attack was successful only because technology wasn’t used efficiently.

Time to talk about it
But there is another point in the discussion about which there is little disagreement: The debate over how much privacy we are willing to give up never occurred. When did consumers consent to give their entire bill-paying histories to credit bureaus, their address histories to a company like ChoicePoint, or their face, flying habits and telephone records to the federal government? It seems our privacy has been slipping away -- 1s and 0s at a time -- while we were busy doing other things.

Our intent in this week-long series is to invite readers into such a debate.

Some might consider the invitation posthumous, delivered only after our privacy has died. Sun’s founder and CEO Scott McNealy famously said in 1999 that people “have no privacy – get over it.”  But privacy is not a currency. It is much more like health or dignity or well-being; a source of anxiety when weak and a source of quiet satisfaction when strong. 

Perhaps it’s naïve in these dangerous times to believe you can keep secrets anymore – your travels, your e-mail, your purchasing history are readily available to law enforcement officials and others. But everyone has secrets they don’t want everyone else to know, and it’s never too late to begin a discussion about how Americans’ right to privacy can be protected.

© 2006 MSNBC Interactive

URL: http://www.msnbc.msn.com/id/15221095/

 

School of Shock
Inside a school where mentally disturbed students are jolted into good behavior

by Jarrett Murphy
October 10th, 2006 12:00 PM

 

For their last field day of summer, the students of the Judge Rotenberg Center, a private boarding school for special-education students in Canton, Massachusetts, have gotten lucky; it is an exquisite afternoon. As cars whiz by the two-building complex, the late-September sun gleams off the basketball backboard and young bodies jostle for position on the asphalt court below. The playground in the middle of the parking lot is empty, but won't be for long: Students who earned their way out of the classroom for good behavior or class performance will get, as reward, a smooth ride on the school's newly assembled swing set.

The only thing that sets these students apart from kids at any other school in America—aside from their special-ed designation—is the electric wires running from their backpacks to their wrists. Each wire connects to a silver-dollar-sized metal disk strapped with a cloth band to the student's wrist, forearm, abdomen, thigh, or foot. Inside each student's backpack is a battery and a generator, both about the size of a VHS cassette. Each generator is uniquely coded to a single keychain transmitter kept in a clear plastic box labeled with the student's name. Staff members dressed neatly in ties and green aprons keep the boxes hooked to their belts, and their eyes trained on the students' behavior. They stand ready, if they witness a behavior they've been told to target, to flip open the box, press the button, and deliver a painful two-second electrical shock into the student at the end of the wire.

Surveying the seemingly cheery outdoor scene is the school's founder and executive director, Matthew Israel. A trim 73-year-old with a head of curling white hair, Israel wears a gray sports coat over a black shirt and black-and-white-striped tie. The Harvard-educated psychologist speaks in soft tones, but he offers a full-throated defense of the skin shock treatments provided by his school, which Israel says derive from the teachings of his mentor, the famous and controversial behavioral psychologist B.F. Skinner.

Israel has about 230 "clients"—full-time students at the Rotenberg Center—who are mentally retarded, developmentally disabled with conditions like autism, or have been diagnosed with ailments such as depression, schizophrenia, or conduct disorder. Most come to this complex south of Boston from New York, but some travel from as far away as California. Many of them come not in spite of the skin shocks, but because of them. The Judge Rotenberg Center, or JRC, is the only school in the country that uses this type of behavioral therapy, and it has come under fire from those who find its techniques cruel and unusual.

"They don't really understand," Israel says of critics who oppose his use of painful physical punishments—called "aversive stimuli"—to control behavior. "The students with whom we use the skin shock are students who can't be served anywhere else."

Over the past 35 years, Israel has repelled several attempts by regulators and legislators to shut his school down, and has grown to become not only a practitioner of aversive methods but also their champion. Now, yet again, he has a fight on his hands—this time with New York state government. New rules that the New York State Board of Regents adopted this summer on an emergency basis (and could make permanent later this month) ban the use of aversive stimuli—a range of tactics that includes not just skin shocks, but also slapping, ice applications, pinching, strangling, noxious smells and tastes, withholding food, and sleep deprivation—on New York students, even those who travel to Massachusetts to attend the Rotenberg Center.

But the Regents rules won't put Israel out of business, because the regulations allow exceptions for kids who pose a real danger to themselves or others, and for whom all other therapies fail. Opponents of aversive stimuli continue to fight for a total ban. "We are talking about the torture of school children," wrote State Senator Richard Gottfried in a letter to the Regents in August. "If we discovered that these regulations were in place at Guantanamo or Abu Ghraib, no one would have to demand Donald Rumsfeld's resignation."

Meanwhile, Israel wants the loophole opened even wider, to give the Rotenberg Center the freedom to impose its methods on children it deems in need. He recalls some of the children he has treated at JRC—kids who slapped themselves into blindness, or were so violent that a scrum of staff members struggled to hold them down. He remembers patients who rammed their heads onto tables or reached into their rectums to make themselves bleed. Israel claims that the sting of skin shocks made those kids better. In fact, he contends, the pain saved their lives.


Every inch of the Rotenberg Center's two buildings, the play area outside, and the student residences scattered around the area is monitored at all times by surveillance cameras. A team of employees watches the broadcasts from these cameras, and the people watching the cameras are observed by other cameras. The monitors look out for staff abuse and evaluate employees following every shift, to make sure students' treatment plans are followed. Some use the cameras to follow specific students who've been deemed particularly dangerous. Signs of the skin shock treatment are everywhere; the students have their backpacks near at all times, and a staff member might have as many as seven triggers hanging from his or her belt at any given moment.

Despite these hints of danger, it's hard for an outsider to detect the risks. Hillary, with long blond hair framing a soft face behind thick glasses, looks like a typical shy teenage girl—clad in baggy clothes, shrugging when introduced to a stranger, concentrating on a game of computer solitaire. It's only when she's out of earshot that you learn what happened at Hillary's last school, in Florida, where she hid in a bush, then tried to slice a staff member's neck with the jagged edge of a broken CD. When she arrived at the school, Hillary stabbed a staffer in the gut with a pencil. "She's very dangerous," says Sue Parker, the school's head of programming. "She could kill someone."

Parker has been at the Rotenberg Center for two decades, and bears scars from students who scratched her; she has had her ribs cracked three times. "We witness the tremendous progress that they've made," she says, explaining her longevity. "And I really think it's the GED," referring to the Graduated Electronic Decelerator, the shocking device's technical name.

Some of the scariest students never need the shocks; according to staff members, the mere threat of an electric jolt snaps them into shape. Other students actually ask to be wired up, say staff members, because they witness the improvement their peers make and the privileges they earn. But other kids don't have to ask. As Israel and Parker lead their tour of the facility, a staff member walks to the bathroom leading a kid wearing protective mitts. Every few steps the kid stops, shouts something inarticulate, then moves on. Finally, he makes it to the toilet.

"Hmmm," Parker frowns.

"Yes," Israel says, "it might be time for the GED."

One thing you won't see at the center is traditional psychological counseling. While students do meet with clinicians, there are no regular appointments or group therapy. School literature states that counseling is done "as needed," but not when it could be seen as a reward for bad behavior, and adds: "The purpose of the counseling is to enhance the student's cooperation with, and progress within the program." You also won't see most students on psychiatric drugs, even though many arrive at the school having tried several of them (one patient had been on 29 different meds) and suffering from side effects like tremors. Israel sees those meds as tools for warehousing students, not treating them.

What you do see here is a lot of color—an avalanche of it. The reception area is full of oversize lime-green chairs, and the walls are hung with bold glass renderings of blooming flowers. All over the school are couches and chairs in pink and yellow, overstuffed and inviting. The walls in the main building are covered top to bottom with bright prints of flowers, while in the newly refinished classroom building the hallways are painted a pleasing dark green. The splashes of color give JRC a lively feel. But there are no traditional classrooms. Each student works on an individualized program that is computer-based; there are no teachers writing math on blackboards or lecturing on American history.

Each classroom, however, is slightly different because JRC students exhibit a range of abilities and behaviors. In a classroom of lower-functioning students, one of the girls can't stop bouncing up and down, and her peers wear mitts to prevent scratching or grunt instead of talk. But down the hall, a higher-functioning class has kids studying chemistry and a girl named Fatima who's starting a job at Bertucci's that afternoon. Other rooms are "alternative learning centers," where extra staff is on hand to monitor kids who are too unruly for regular classes; there are mats on the floor and restraints at the ready because the students are so often wrestled down or bound to a chair.

But in every class the logic of the Skinner Box comes into play. There are rewards for acting the right way. Kids wear cards on their belts, where they collect tokens for good behavior, hard work, or adhering to a "contract" to sit still for a few minutes or get through the morning without acting out. Most classrooms have a "reward box" full of goodies like puzzles and games that the kids can take home, and a "reward corner" where deserving students can watch cartoons for a few minutes at a time. There's also a dazzling "reward room," equipped with a pool table and arcade games, to which the well behaved earn entrance, as well as a "contract store" where students can buy DVDs or handbags with points they've earned for staying on track. Pizza parties, weekly field days, and less restrictive housing placements are also part of its positive programming. There's even a "whimsy room," a magical-looking chamber with color-crowded walls, a cartoonishly enormous chandelier out of a Dr. Seuss book, and a grand table with high-backed chairs made of clear plastic laced with color. The room, which exists for parties, looks like a designer's attempt to paint a picture of fun.


In the early days of his work with aversive stimuli, Israel and his staff used spanking, pinches, muscle squeezes, water sprays, aromatic ammonia, and unpleasant tastes to punish problematic behavior. They still withhold food from some students as an aversive, but shocks are their main treatment. The school began using electric shock in 1989, but the device they first used, called SIBIS, was so weak that many students grew accustomed to it, eroding its effectiveness. So Israel developed the GED, which he registered with the Food and Drug Administration in 1995. (The GED was classified in such a way that it only required FDA registration, not approval.) When students grew inured to that, Israel brought forth the GED-4, three times as powerful as the original GED. That version is not registered with the FDA, which now says the Rotenberg Center is exempt because it's only using the machines in-house. The skin shocks at Rotenberg aren't a form of "electroshock therapy," which involves far more powerful shocks traveling through the brain. The GED-4 sends 45 milliamperes into the surface of the skin, the kind of current that a fairly weak recharger can send to your laptop battery. It's enough to hurt, delivering a rapid, vibrating pain. Some compare the sensation to a strong pinch, a bee sting, or a tattoo needle's bite. "Painful shock, muscular control is lost" is one federal-government shorthand for the experience.

Aside from a momentary tingling, the faint whiff of singed hair, and a couple small pinpoint marks on the skin, a single shock administered to a visitor at Rotenberg didn't produce any lasting physical effects. Five of the kids under Israel's care have died in the 35 years he's run the school, but none of those deaths were linked to aversive therapy. Israel insists the GED is better than the alternatives for his students—debilitating drugs or physical restraints.

There are around 150 New Yorkers at the Center; 100 or so are from New York City. About half the students at JRC, and half the New Yorkers as well, get skin shocks. The JRC obtains local court approval and an independent psychologist's review before it can physically punish a student. And, Israel says, he always obtains a parent's permission. (Parents can even log on to a special website to see how often their kid gets shocked.)

Students usually start by wearing three GED devices so they won't know where the next shock will hit, and won't be able to pull off all the devices at once. A person might wear up to five, but only one operates at a time. Every hour in each classroom, a computerized voice tells the teachers to rotate the GEDs so students don't get zapped repeatedly in the same area. Most students wear GEDs in which the electrodes are right next to each other. But some wear a different version that arrays the electrodes several inches apart, so that the current runs from the palm to the tip of a finger or from the ankle to the ball of the foot, and hurts more—or as the staff puts it, is "more aversive." Students wear the GEDs 24 hours a day. If a student's behavior improves, the GEDs are removed one at a time. Then the student goes GED-free for an hour, then two, and so on, until he or she is completely off the machine. They can always be hooked up again, however, if they lapse.

The goal of the GED, explains Israel, is to deliver punishment immediately so that even a student with a low IQ or a severe psychiatric disorder might be made to understand that whatever he just did was unacceptable. Even kids who hurt themselves, he says, react differently to pain outside their control. Each student has a sheet listing the types of behaviors that prompt a staff member to administer a shock. When one of the target behaviors occurs, the staffer is supposed to confirm with a colleague that a shock is warranted.

While psychologists write the aversive treatment plans for JRC students, it's the school's "mental-health aides"—required only to have a high school diploma, complete a two-week course, and attend regular in-service training—who monitor the classes and do the shocking. With confirmation in hand, the staff member zaps the student and then explains to him why he's being punished.

Sometimes the explanation to the student—and to outside observers—is simple and obvious: no tearing out your hair, no hitting yourself, stop scratching. But sometimes, the reasons are more obscure. Don't raise your hands, no swearing, stay in your seat. From the school's point of view, dangerous behaviors are sometimes preceded by seemingly benign ones. When the school detects a pattern, it might punish the prelude in order to prevent the harmful act. If a student typically slaps the arms of his chair, swears, and stands up before he attacks a teacher, a staffer might shock him when he stands up, when he swears, or perhaps when he slaps the arms of his chair. This approach is valid, say psychologists who defend Israel's approach—as long as whoever is administering the shock is sure that the minor behavior he's punishing is actually a predictor of something serious.

That caution also applies to the automatic shocking devices that the facility sometimes uses. A child who tears his hair out might be told never to put his hands to his head. He might be instructed not to even raise his hands from his sides. To enforce this rule, the center in some cases will rig plastic holsters to the student's hips. He has to keep his hands in the holsters. If he lifts his hands out of them, a device automatically shocks him, and keeps shocking him at one-second intervals until he puts his hands back. The rationale behind the device is that punishment must be immediate to be effective.

But after some serious incidents the student is not punished right away. For example, when a student attacks a staff member in a life-threatening manner, "we don't go to the cops," says Israel. "We don't do that." Instead, Rotenberg Center officials keep both crime and punishment in-house: The student has his hands and feet restrained and is then shocked five times, at random intervals, over a period that can last up to 30 minutes.

Sometimes, the student gets shocked for doing precisely what he's told. In a few cases where a student is suspected of being capable of an extremely dangerous but infrequent behavior, the staff at Rotenberg won't wait for him to try it. They will exhort him to do it, and then punish him. In these behavior rehearsal lessons, staff members will force a student to start a dangerous activity—for a person who likes to cut himself, they might get him to pick up a plastic knife on the table—and then shock him when he does.

Automatic devices, lengthy shocking sessions, and behavior rehearsal lessons are not what typical students receive. Israel says that among the students who get skin shocks, the average is one zap a week. Rarely does someone get shocked as often as 15 times a day, but Israel wouldn't be embarrassed if they did. He's sure it works, recalling one of his toughest cases—a kid who made himself vomit constantly and was at risk of starving to death. "I mean, his life was saved," Israel says. "If we hadn't had the GED, I don't know how we would have kept him alive."

But the GED isn't only used when a life is at stake, or when a student hurts himself or another, but also for "noncompliance" or "simple refusal." "We don't allow individuals just to stay in bed all day," says Dr. Robert von Heyn, a Rotenberg clinician, in a video for parents. "We want to teach people. So we may use the GED to treat noncompliance." Other behavior that doesn't appear dangerous also could earn a zap. While it might seem excessive to shock a student for nagging his teacher, Israel asks, what if the kid nags all the time, every minute, every day? The nagging interferes with his learning, so he can't learn self-control and develop normally. JRC's choice is to shock him, stop the nagging, and let him learn.


Amid the black leather couches and abundant glass sculptures in Israel's office, a curious collection of boutique kaleidoscopes is displayed on the coffee table. Peering into each tube, watching the crystals shift together and apart, you see the picture constantly changing. Whether it looks like chaos or beauty depends on the beholder. The decor is fitting: Israel knows that outsiders and laypeople get upset when they see kids getting shocked at JRC, but he says that's because they don't understand the true impact of what they witness. A half-century ago Israel was the one laboring to clear the picture, a college student shifting the shards of 1950s Cold War ideological struggle for an explanation of human behavior, with its stark choice between Communist materialism and democratic capitalism. He discovered another option. "Skinner said a man isn't good or evil," Israel recalled of the philosophy that inspired him. "He's what he's made by his environment and his genetics. . . . Human behavior is lawful."

Before B.F. Skinner, a lot of psychology concerned itself with understanding how the inner workings of the mind affect the way people act. Skinner thought this approach was nonsense; he believed that it was neither possible nor necessary to know what was going on in someone's head; all that mattered was behavior. He wasn't the first psychologist to adopt a behavioral approach, but he took it further than his predecessors. He argued that people's behaviors were purely the product of their environment, specifically of a process called "operant conditioning," in which the consequences of our action determine whether we repeat it: If it's rewarded, we do it again; if not, we stop.

The experiment that most clearly illustrated this was the so-called Skinner Box, a cage in which a rat had a bar to press. If Skinner awarded a food pellet when the rat pushed the bar, the rat would push it again. As Skinner changed the pattern of awards, the rat's behavior changed. Skinner extrapolated the logic of the Skinner Box to society as a whole, believing that all human suffering could be eased through the application of proper conditioning, and even penned a utopian novel in 1948, Walden II, that depicted such a world.

The seeming elegance of Skinner's approach moved Israel to dedicate his life to applying it. After leaving Harvard with a Ph.D. in 1960, Israel started a company to manufacture so-called "teaching machines," one of the technologies Skinner advocated to properly condition young learners. By the late '60s, Israel had started two communes that applied behavioral techniques. But the teaching-machine business was never successful enough to support the communes. So Israel instead launched a school that applied "Skinnerian" techniques to students with severe behavioral problems. The Behavioral Research Institute began in Providence, Rhode Island, in 1971. In the mid 1970s it opened branches in Massachusetts. Israel later changed the school's name to honor a Massachusetts judge, Ernest Rotenberg, who had sided with Israel in a battle against Bay State regulators in the mid 1980s over his use of painful aversive stimuli.

Aversive therapy first emerged in experiments with animals. Then in the 1960s, around the time Skinner's behavioral analysis was dominating psychology, some scientists used aversion to try to "cure" homosexuals. But Skinner was never a major advocate for aversive stimuli. His work concentrated mainly on the use of rewards to encourage good behavior, not punishments to discourage bad conduct. In the world he envisioned in Walden II, Skinner foresaw little punishment. But Israel says Skinner acknowledged that places like JRC were not utopias. JRC does employ a comprehensive program of positive reinforcement, consisting of those prizes and privileges that students can earn for the simplest tasks. But for Israel, punishments are just the flip side of rewards.

That view is not universal. The American Association for Mental Retardation calls aversive therapies "inhumane" and wants them eliminated. The New York Civil Liberties Union seeks a total ban in New York, dubbing aversive therapies "outmoded and ineffective." But while there's not an abundance of research on the effectiveness of skin shocks because of the ethical issues involved with shocking human subjects, many psychologists believe that in a very few, very serious instances of dangerous behavioral problems, skin shocks might be a legitimate therapy option. "Only in your most extreme cases where there's a threat of harm would you use it," says Kathryn Potoczak, a professed Skinnerian psychologist at Shippensburg University, a public college in Pennsylvania. She, like many psychologists, believes the choice in those cases is between shocking patients and allowing them to hurt themselves so severely they might die.


Students who end up at the Rotenberg Center usually begin their educations in a local school district's special-education programs. When regular schools cannot handle a child, local officials and parents look for private school options, including those out of state. No matter where the child goes, the state assumes the cost, under its obligation to provide a sound education for everyone until the age of 21. (Most students return to New York once they reach 21, but there are 24 New York adults who've remained at Rotenberg.)

The Rotenberg Center—with an annual tuition of $214,000—has been positioned as the program of last resort: It doesn't automatically reject anyone except for sex offenders and those with very serious medical conditions. Many of its students were thrown out or refused by other schools.

That's what happened to Samantha, a 13-year-old with autism from Roslyn Heights who has been at the school since March 2005. "We had her in four different schools and they tried all kinds of therapy, all kinds of positive behavioral therapy, and we had various therapists coming all over the house and it basically didn't work," says her father, Dr. Mitchell Shear, an internist who practices in the Bronx. "She became more aggressive. She would bite and scratch people. She was basically constantly crying." She also smacked herself in the head so hard she detached both retinas. The Anderson School in Purchase, where she'd been staying, said they couldn't handle Samantha anymore. A person at Anderson recommended Rotenberg to the Shear family.

The Shears' desperation resembles that of Bronx resident Lorraine Slaff 18 years ago. Slaff's autistic son Matthew had troubles early; she recalls having to pad his crib because he kept ramming his head into the sides. As he grew up, he began banging his head on sharp points like the corner of a table, bashing deep holes into his scalp. When he was home, Slaff didn't sleep for fear that she'd miss the sound of her son trying to do himself harm. When other facilities told Slaff that they couldn't handle her then 17-year-old boy, Rotenberg offered itself as a willing alternative. The catch: Slaff would have to consent to her child being subjected to physical pain. Shear faced the same choice. Neither parent blinked. "It didn't bother me because I thought he was going to die," recalls Slaff. "There was nothing else." Matthew's twin, Stewart, is also autistic, but exhibited symptoms later than his brother, and now Slaff believes Stewart would benefit from aversive therapy. But she cannot obtain that treatment for him in New York—because adult facilities here don't use aversives—or get him into the Rotenberg Center. While some children remain at the center after they reach adulthood, the state does not place adults there.

While many psychologists agree with Israel that aversive therapy can work as a last resort in a very few cases to control dangerous behaviors—the school contends that the skin shocks are almost 100 percent effective in reducing those—there's less consensus on whether a method like skin shocks can really cure someone.

Israel's theory is that by shocking to discourage dangerous behavior, the therapist buys time to use positive approaches that teach patients how to control themselves. But evaluating whether the school has succeeded with students is difficult because they arrive with such different talents and troubles. Higher-functioning students—those with normal IQs but severe emotional problems, who constitute about half the school—can have normal lives: The center's website features testimonials from kids who have joined the Marines, or have been the first in their family to complete high school, or have even gone on to college.

Other students are severely mentally retarded or developmentally disabled, and have no such prospects. "They're never going to be normal, fruitful taxpayers, but they can have some dignity and happiness," says Israel. A student named Caroline, who is in her thirties and has lived at the facility for more than 20 years, still wears a protective helmet and requires one-on-one staff monitoring, but JRC staff consider the fact that she's still alive a measure of success. Slaff's son Matthew also remains at JRC. He has stopped banging his head and can take vacations with his mom, but he still hurts himself sometimes, and still wears the GED.

Shear says he and his wife only visit their daughter once every six weeks or so; he doesn't know how long Samantha will be there. He does know the limits of optimism. "She'll never be cured of what she has. Her mental capacity will never approach that of a normal person," he says. "I believe that the GED will eventually come off her and she'll be able to maintain control of her behavior and be happy because she's not hurting herself or crying all the time."

Shear believes Samantha has already come a long way in her time at the Rotenberg Center. "I mean, when we went up last time," he says, "she was actually happy."


But after visiting the Rotenberg Center this spring, New York state inspectors concluded that "the background and preparation of staff is not sufficient," that JRC shocks students "without a clear history of self-injurious behavior," and that it uses the GED "for behaviors that are not aggressive, health dangerous, or destructive, such as nagging, swearing, and failing to keep a neat appearance." What's more, the inspectors said, the program for withholding food raised health concerns, and the classroom instruction was substandard.

Israel says the inspection was conducted by psychologists biased against his methods. But the New York report is just the start of JRC's current troubles. The Massachusetts agency (all JRC's operations have been located in the Bay State since 1996) that licenses JRC will inspect the school in coming months to see if requirements it imposed after a 2003 visit have been met. A separate Massachusetts agency has referred an allegation of abuse at JRC to local police; the claim is that the GED burned a student. Meanwhile, a Long Island mother whose son Antwone was treated at the JRC has sued her local school board and the center for using aversive therapy that allegedly caused the boy "serious physical injuries and mental anguish." At the same time, the New York legislature is considering a new bill that would ban skin shock outright on New York students.

Then there are the Regents regulations, which were prompted by the spring inspection. They prohibit all aversive stimuli but permit certain limited exceptions. Israel says the New York rules would tie his hands by restricting the skin shocks to kids who are endangering their lives or others—preventing shocks in cases of "health dangerous" behavior. The rules also bar automatic shocks. A group of JRC parents who agree with Israel went to federal court this summer to stop the imposition of the new rules on their children. The case is still pending, but the judge did block some of the rules temporarily for the students whose parents sued. Israel says that other New York kids who are no longer getting skin shocks are regressing.

But if that's true, it only fuels Israel's critics who say that all he's doing is hurting kids, not curing them. "This isn't a bell ringing. This is somebody getting an electric shock. It hurts them, so they stop," says Beth Haroules, a staff attorney at the NYCLU. "But if you take away the pain device, they haven't learned to stop what they're doing."

Even the center's aggressive methods—like automatic shocks and behavior rehearsal lessons—have some scientific support. But the endorsements are cautious, and limited only to cases where painful techniques are the only hope—and where they work. Experts note that there is a "slippery slope" risk with aversives: If they work for a very serious behavior, why not use them for a slightly less serious one? And then there's the question posed by partial success: If skin shocks reduce a behavior but don't eliminate it, do you keep shocking for months, years, or even decades? The scientific process of peer review could address some of these questions. But many practitioners admit that when it comes to aversives, pure science isn't the only issue. The ethical limits on how to use science are also in play.


Albany's recent regulatory attention to his practices puzzles Israel. "It isn't as if we just started to do something unusual," Israel says. "We've been doing the service since the 1970s for New York." So why is the state only acting now?

People on all sides of the debate over aversives ask the same question. New York showed some concerns about the school's approach in the '70s and '80s; the state balked at paying for the school until parents sued. But it wasn't until this summer—with a lawsuit in the mix—that the New York State Education Department moved to regulate the use of aversive techniques on its students. (While the Rotenberg Center is the only place where New York students get skin shocks, two private preschools that New Yorkers attend—one near Albany and the other in Maine—use noxious tastes like lemon juice to punish kids.) The New York State Office of Mental Health bars any aversive techniques. Eleven other states already ban or restrict aversive therapies. And while psychologists largely support the validity of aversive methods, practitioners generally believe that such techniques must be used sparingly and very carefully. But only now is New York attempting to control their use.

Rebecca Cort, who oversees special-education placements for the state education department, says the need for rules only became apparent in 2005 when New York did a routine inspection of the institution. "A much higher number and percentage of students who were coming from New York State were being placed on aversive intervention," she says. That's partly because in the past couple years, the number of New Yorkers going to the school has swelled—but not necessarily because their behaviors led other schools to pass on them. "It was that the in-state beds were full," Cort says. "They were getting a larger number of students because of a lack of capacity in New York State."

Cort says the state is trying to build beds here, prodded by the legislature to do so. The alleged abuse of a New York man named Vito "Billy" Albanese, who'd suffered a traumatic brain injury, in a New Jersey facility a few years ago prompted state lawmakers in 2005 to pass the so-called "Billy's Law," which tries to tilt special-education placements toward in-state facilities. That's how the new regulations have to be seen—not just regulating Rotenberg, but erecting a framework for someday treating some of the worst behavioral disorders within New York's borders.

Given that context, some say the Regents have built a flawed framework. The New York State Psychological Association says the rules "effectually legalize corporal punishment." More than one New York school district is being sued for the use of "time-out rooms," but the new rules permit them. And there's not much confidence that the state education department—which only last year was found to have put residents at its School for the Blind in "immediate jeopardy to individual's health or safety"—is up to the task of handling people who, had they gone to the Rotenberg Center, would have received the treatment of last resort.

Schools using skin shock could open here. Or the Rotenberg Center could move to New York State, an option Israel says he has considered. But even though Cort says there's no move to take the Rotenberg Center off the approved list of out-of-state facilities, Israel claims the state's education department now discourages parents from placing their children with him. Even if he had a branch of JRC inside New York, Israel acknowledged by e-mail, the hostility toward the Rotenberg Center would not change. And so, unless lawmakers or regulators stop his practices, Israel and his school will remain where they are, and the shocks will continue.

 Win-Win-Win-Win Situation

By Michael Kinsley
Friday, September 22, 2006; A17

Harold Pinter wrote a play a while back called "Betrayal." (Rent the movie: It's terrific.) The plot was a fairly mundane story about an adulterous affair among affluent London literati. What gives the tale its haunting magic is that Pinter tells it in reverse: starting with the couple breaking up and ending with that first ambiguous flirtation.

Others have tried this device. Martin Amis used it in a novel called "Time's Arrow" to make some point or other about the Holocaust. There is a Stephen Sondheim musical called "Merrily We Roll Along," which starts with the hero as an unattractive middle-aged Hollywood power player and ends with him as an idealistic youth gazing toward "the hills of tomorrow." A clever movie several years ago called "Memento" used the time-backward trick as a way to imitate for the audience the effect of amnesia.

So it's been used by some of the masters. And it's a good trick: disorienting, as modern art is supposed to be, and with built-in poignancy. But that doesn't mean that anyone can pull it off. Frankly, I would have pegged George W. Bush -- whose awareness of his own weaknesses is one of his more attractive traits -- as just about the last person in the world who would try this literary jujitsu. But in his own narrative of his own war (the one in Iraq) he has done it. If you trace the concept of "victory" in his remarks on Iraq, and those of subordinates, you discover a war that was won 3 1/2 years ago and today has barely started.

Return with me, if you will, to May 1, 2003. That was the day Bush landed on the USS Abraham Lincoln and -- under a banner declaring "Mission Accomplished" -- declared that "major combat operations in Iraq have ended" and "the United States and our allies have prevailed. (Applause.)" (This is from the official White House transcript.) The White House later claimed that the banner was somebody else's idea and that Bush didn't declare victory in so many words. But Bush did use the word "victory," saying that Iraq was "one victory in a war on terror." As I recall, the occasion was pretty triumphal. Perhaps you remember differently. And in his radio address two days later, Bush used the term "victory" unabashedly.

Soon, however, the concept of "victory" became more fluid. There is not just one victory but many. Or, as White House press secretary Scott McClellan put it in August 2004, "Every progress made in Iraq since the collapse of Saddam's regime is a victory against the terrorists and enemies of Iraq." And there was a subtle shift from declaring how wonderful victory was to emphasizing how wonderful it will be. "The rise of democracy in Iraq will be an essential victory in the war on terror," Vice President Cheney said in April 2004.

In the 2004 campaign, Bush said repeatedly that one reason to vote for him over Sen. John Kerry was that he, Bush, had "a strategy that will lead to victory. And that strategy has four commitments." By October 2005 these four "commitments" had been honed down to three "prongs." Then they metastasized into four "categories for victory. And they're clear, and our command structure and our diplomats in Iraq understand the definition of victory." It's nice that someone does.

It was during the 2004 campaign that Bush offered his most imaginative explanation for why victory in Iraq looked so much like failure. "Because we achieved such a rapid victory" -- note that it is once more, briefly, a victory -- "more of the Saddam loyalists were [still] around."

On May 1, 2006, the third anniversary of "mission accomplished," McClellan was asked whether "victory" had been achieved in Iraq. He said, "We're making real progress on our plan for victory. . . . We are on the path to victory. We are winning in Iraq. But there is more work to do." Democrats should shut up because their criticism of the president "does nothing to help advance our goal of achieving victory in Iraq." (Once victory is achieved, presumably, it will be okay for Democrats to criticize.) And make no mistake, Bush said July 4: "When the job in Iraq is done, it will be a major victory."

On Aug. 28, criticizing "self-defeating pessimism," Cheney said there are "only two options in Iraq -- victory or defeat." On Aug. 31 Bush said that "victory in Iraq will be difficult, and it will require more sacrifice." He predicted that "victory in Iraq will be a crushing defeat for our enemies" -- which, as a tautology, is a safe bet.

Which brings us to last week, and Bush's television speech on the fifth anniversary of Sept. 11, 2001. "Bush Says Iraq Victory Is Vital" was The Post's accurate headline. And Bush was eloquent. "Once more into the breach, dear friends, once more. . . ." Well maybe not that eloquent. But his point was the same as Henry V's: Don't give up now! "Mistakes have been made in Iraq," he conceded. He even conceded that "Saddam Hussein was not responsible for the 9/11 attacks." But let us not, for mercy's sake, learn anything from five years of experience. Instead, let's just pretend it all never happened. After all, we won this war back in 2003.

kinsleym@washpost.com

 November 14, 2006

As Math Scores Lag, a New Push for the Basics

By TAMAR LEWIN

SEATTLE — For the second time in a generation, education officials are rethinking the teaching of math in American schools.

The changes are being driven by students’ lagging performance on international tests and mathematicians’ warnings that more than a decade of so-called reform math — critics call it fuzzy math — has crippled students with its de-emphasizing of basic drills and memorization in favor of allowing children to find their own ways to solve problems.

At the same time, parental unease has prompted ever more families to pay for tutoring, even for young children. Shalimar Backman, who put pressure on officials here by starting a parents group called Where’s the Math?, remembers the moment she became concerned.

“When my oldest child, an A-plus stellar student, was in sixth grade, I realized he had no idea, no idea at all, how to do long division,” Ms. Backman said, “so I went to school and talked to the teacher, who said, ‘We don’t teach long division; it stifles their creativity.’ ”

Across the nation, the reconsideration of what should be taught and how has been accelerated by a report in September by the National Council of Teachers of Mathematics, the nation’s leading group of math teachers.

It was a report from this same group in 1989 that influenced a generation of teachers to let children explore their own solutions to problems, write and draw pictures about math, and use tools like the calculator at the same time they learn algorithms.

But this fall, the group changed course, recommending a tighter focus on basic math skills and an end to “mile wide, inch deep” state standards that force schools to teach dozens of math topics in each grade. In fourth grade, for example, the report recommends that the curriculum should center on the “quick recall” of multiplication and division, the area of two-dimensional shapes and an understanding of decimals.

The Bush administration, too, has created a panel to study research on teaching math. It is expected to issue recommendations early next year.

Here in Washington, Gov. Chris Gregoire has asked the State Board of Education to develop new math standards by the end of next year to bring teaching in line with international competition, and a year later to choose no more than three curriculums to replace the dozens of teaching methods now in use. Ms. Gregoire, a Democrat, also wants new math requirements for high school graduation.

In Utah and Florida, too, state education officials are re-examining their math standards and curriculum.

Grass-roots groups in many cities are agitating for a return to basics. Many point to California’s standards as a good model: the state adopted reform math in the early 1990s but largely rejected it near the end of the decade, a turnaround that led to rising math achievement.

“The Seattle level of concern about math may be unusual, but there’s now an enormous amount of discomfort about fuzzy math on the East Coast, in Maine, Massachusetts and Pennsylvania, and now New Jersey is starting to make noise,” said R. James Milgram, a math professor at Stanford University. “There’s increasing understanding that the math situation in the United States is a complete disaster.”

Schools in New York City use a reform math curriculum, Everyday Mathematics, but some parents there, too, would like to see that changed, a step they are advocating through NYC HOLD, a group of parents and teachers that has a Web site with links to information on math battles nationwide.

A spokesman for the New York City Department of Education said that Everyday Mathematics covered both reform and traditional approaches, emphasizing knowledge of basic algorithms along with conceptual understanding. He added that research gathered recently by the federal Department of Education had found the program to be one of the few in the country for which there was evidence of positive effects on student math achievement.

The frenzy has been prompted in part by the growing awareness that, at a time of increasing globalization, the math skills of children in the United States simply do not measure up: American eighth-graders lag far behind those from Singapore, South Korea, Hong Kong, Taiwan, Japan and elsewhere on the Trends in International Mathematics and Science Study, an international test.

Parental discontent here in Washington State intensified after the announcement in September that only 51 percent of 10th graders passed the math part of state assessment tests, far fewer than showed proficiency in reading or writing.

“Math is on absolutely everybody’s radar in the state right now,” said Ms. Backman, whose Where’s the Math? group drew hundreds of parents and math teachers last month to a forum on K-12 math.

Many parents and teachers remain committed to the goals of reform math: having children understand what they are doing rather than simply memorizing and parroting answers. Traditional math instruction did not work for most students, say reform math proponents like Virginia Warfield, a professor at the University of Washington.

“It produces people who hate math, who can’t connect the math they are doing with anything in their lives,” Dr. Warfield said. “That’s why we have so many parents who see their children having trouble with math and say ‘Honey, don’t worry. I never could do math either.’ ”

“In Asian cultures,” she added, “the assumption is that everyone learns mathematics, and of course, parents will help with mathematics.”

But even many of those who admire the goals of reform math want their children to have more drills.

“My mother is a high school math tutor, and her joke is that this math is what’s kept her in business,” said Marcy Berejka, who each week brings Ben, 8, and Dana, 6, to Kumon, a tutoring center based in Japan that has more than a dozen franchises in the Seattle area. “There’s a lot that’s good in the new curriculum, but if you don’t memorize the basic math facts, it gets harder as math gets more complicated.”

The state’s superintendent of public instruction, Terry Bergeson, a supporter of reform math, said in an interview: “I came through the reading wars years ago, and now we’re right in the middle of that with mathematics. It comes back to balance. Of course you need to know your math facts, but you also have to understand what you’re doing. The whole country has been in denial about mathematics, and now we’re sort of at a second Sputnik moment.”

In part, the math wars have grown out of a struggle between professional mathematicians, who say too many American students never master basic math skills, and math educators, who say children who construct their own problem-solving strategies retain their math skills better than those who just memorize the algorithm that produces the correct answer.

After Dr. Milgram of Stanford appeared at a Where’s the Math? meeting, Dr. Warfield, an expert on teaching math educators, wrote in a newsletter that when Dr. Milgram told parents to fight for change, it was “implicit in the instructions that mathematicians who do not agree are classified as mathematics educators (a rung or two below the night custodian).”

The battle here has left many parents frustrated, confused and not sure if they should trust their children’s schools to give them the skills they need. Many have already voted with their feet, enrolling their children in math tutoring.

State Representative Glenn Anderson, a Republican member of the House education committee who has fought for a more rigorous curriculum, said state data showed that Washington residents spent $149 million on tutoring and other education support services in 2004, more than three times the $44 million they spent 10 years earlier.

Kumon, which has a global clientele of more than four million children in 43 countries, focuses on drilling children on basics. Students work their way through hundreds of assignments that move in incremental steps from tracing numerals all the way through differential calculus.

Every week for five years, Tove Burrows has brought her son, Petter, 13, to the Kumon Center in Mercer Island to turn in the worksheets he has done at home, sit down to new drills and pick up a set of assignments for the week ahead.

“If the math curriculum in the schools were different, I would not be doing Kumon,” said Ms. Burrows, whose son is an A student at Islander Middle School. “But I want to make sure he’s mastered the basics, and in school they don’t spend enough time on basics to get that mastery.”

On Mercer Island, an affluent suburb of Seattle that had the state’s best scores on the 10th-grade test, the pendulum has begun to swing toward emphasizing computational skills, especially in high school.

“We’re looking at texts that have more numbers and less language,” said Lisa Eggers, president of the Mercer Island School Board, who at one point sent two of her three children to Kumon. “And we’re one of the few districts where the math scores are going up.”

Even so, seeking outside math help is common in the district, with almost 100 students leaving the high school for math and going instead to nearby private academies for one-on-one tutoring, for which the school will give them credit.

John Harrison, principal of Mercer Island High School, estimates that as many as 10 percent of his school’s 1,400 students are getting outside math help. “It’s not surprising that math is so important in Seattle, with so many people earning their living at Microsoft or Boeing,” Mr. Harrison said. “Our kids do very well on the state tests, compared to the state averages, but even here, math proficiency is less than reading and writing.”

What the Election Proves? Not Much--BECKER

I do not believe the election proves much other than that corruption scandals and the Iraq war hurt Republicans. Posner gives a very good discussion of some of the criticisms made about the American system. I will comment on a couple of the issues.

I have seen no convincing evidence that past limits on campaign contributions have improved the political process, or weakened the significance of interest groups. A major problem is that campaign limits have been difficult to enforce. Also, as Posner indicates, the Internet has opened up the possibilities of raising large sums in small individual contributions. I also doubt whether it is better or fairer when some groups contribute large amounts of the time of young people and other volunteers to campaigning rather than money.

In addition, some of the most powerful interest groups, such as farmers, the teachers union, craft unions (at the local level), and groups in favor of sharp restrictions on immigration, have not mainly had rich members. Very rich Americans and large American corporations have political power to be sure. Yet the tax on corporate profits in the United States is much higher than in most Western European nations, and the American tax on inheritances is far from the lowest among rich nations.

The United States does have greater earnings inequality and proportionately more rich businessmen than European countries or Japan. This is partly because the United States has higher before-tax returns to education and other skills than these other economies, even though a larger fraction of Americans get a college education than in most European nations. The main explanation for the difference is that the United States has a much more flexible economy than most other nations. In a knowledge economy, this produces bigger benefits to greater education and other skills. It is also easier to become an entrepreneur in this country than in most other countries.

Political scientists have long wondered why anyone votes in a democracy, since any individual's vote is very likely to have a minuscule effect on the outcome of a political race. The same logic implies that voters have little incentive to be informed about the issues, even aside from the fact mentioned by Posner that many of these issues are highly complex and difficult to understand--such as the effects of a federal budget deficit on the economy. This means that many voters, particularly those least committed to voting, are likely to be swayed by political advertising and emotional appeals. Under these circumstances, it is not obviously advantageous to have large turnout rates.
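
Becker's premise about the minuscule weight of a single vote is usually formalized in the political-economy literature as a simple expected-utility calculation; the sketch below uses the standard "calculus of voting" notation, which is not Becker's own, and the symbols p, B, and C are introduced purely for illustration. The expected net benefit of casting a vote is

\[
  p\,B \;-\; C ,
\]

where p is the probability that one vote changes the outcome, B is the value to the voter of having the preferred candidate win, and C is the cost of voting and of becoming informed. With an electorate of millions, p is effectively zero, so the expression is negative for any plausible B and C; the same arithmetic removes most of the instrumental incentive to study complex issues such as the budget deficit.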

International comparisons of political outcomes in Europe vs. America suggest that Europeans put more emphasis on equality than Americans do, and less emphasis on efficiency. These and other differences might be attributed, although causation as usual is tricky, to the fact that European nations have much sharper restrictions on campaign contributions, less opportunity for gerrymandering, generally greater voting participation rates, and apparently better "informed" voters than the United States.

Although the Europeans typically have less inequality, they tolerate much higher rates of long-term unemployment than America does. All studies show that long-term unemployment is the most destructive of self-confidence and of measured "happiness." European countries protect agriculture against imports from poor nations more strongly than America does, offer a poorer environment for individuals with limited resources to start businesses, and generally have policies that are less tolerant of immigrants from the third world. The European social security system that provides retirement income and unemployment benefits is much more generous than the American one. However, Europe spends a lot less on health, including the health of the elderly and the poor, than America does.

Which system is better: the American political system, or the European model with lower campaign contributions, fewer opportunities for gerrymandering, larger voter turnouts, and apparently a politically better informed population? One can differ on the answer, but it is far from obvious to me that the European approach works out better in practice.

Posted by Richard Posner at 07:18 PM

November 12, 2006

What the Election Proves--Posner

Critics of the American electoral process have long complained that the process was poisoned as the combined result of (1) gerrymandering, (2) inadequate limitations on donations to political campaigns, (3) barriers to third parties, (4) barriers to voting (such as registration requirements and conducting elections on workdays rather than weekends or holidays), (5) public ignorance of policy issues and the consequent ability of political advisers, consultants, media specialists, pollsters, etc., to manipulate the public’s voting behavior, and (6) mistake-prone voting equipment, such as the notorious punchcards that cast a shadow over the 2000 presidential election in Florida. But the outcome of Tuesday's midterm election suggests that these problems are less serious than the critics believe. A Newsweek poll taken days after the election reported that 51 percent of those polled thought the Democrats' election victory a good thing; the election gave the Democrats approximately 51 percent of the seats in both the House of Representatives and the Senate (counting the two independent Senators as Democrats, though technically they are independents).

Gerrymandering poses the issue of the democratic legitimacy of our electoral system in its starkest form, as the avowed purpose is to reduce the number of legislators elected by the party not in control of the gerrymandering process. But this, it turns out, is easier said than done. For one thing, there is an inherent tension between incumbents and challengers in the same party running in different districts. An incumbent wants his district so configured that it will be dominated by members of his own party. But challengers belonging to the same party do not want the incumbents' districts to be packed with members of the party, because that reduces the number of party members in the challengers' districts. So if incumbents prevail in districting, and in the next election the electorate proves to be hostile to incumbents, the gerrymander may boomerang, because the party's challengers to incumbents of the other party will have fewer supporters in their districts.
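
Posner's boomerang mechanism can be illustrated with a toy calculation. The sketch below is hypothetical: the five districts, the 55 percent overall share, and the 70 percent "packing" level are arbitrary numbers chosen for arithmetic convenience, not figures from the post.

# Toy illustration of the gerrymandering "boomerang" described above: a fixed
# pool of party-A voters spread across five equal-sized districts.

def district_shares(total_share, packed_share, n_districts, n_packed):
    """Party A's vote share in packed incumbent districts vs. the rest.

    total_share  -- party A's share of the whole electorate (e.g. 0.55)
    packed_share -- the share engineered in each incumbent-held district
    """
    remaining = (total_share * n_districts - packed_share * n_packed) / (n_districts - n_packed)
    return packed_share, remaining

# Party A has 55% of all voters; two incumbents pack their districts to 70%.
packed, rest = district_shares(0.55, 0.70, n_districts=5, n_packed=2)
print(packed, round(rest, 3))   # 0.7 and 0.45

# The two incumbents are safe in a normal year, but party A's challengers in
# the other three districts now start from only 45 percent of the voters, so
# an anti-incumbent wave finds the rest of the party's slate short of support.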

The idea that without strict limitations on campaign finance the wealthy will dominate campaigns and tug the nation rightward also turns out to be questionable. There are many wealthy liberals, and the Internet has made it much easier than it used to be to obtain modest campaign donations from the nonwealthy. Moreover, the effect of political advertising (which is where most campaign donations go) is diluted by the fact that voters are exposed to vast amounts of information and opinion that derive from sources other than advertising--not only the mainstream media but, increasingly, blogs and other informal media. As usual, Republicans outspent Democrats in this election, but were badly beaten anyway.

The states are permitted by the courts to establish barriers to third parties, mainly by requiring that a party have a large number of signatures from registered voters in order to get a place on the ballot. Yet despite this requirement, third parties are on the ballot in many states, and the fact that they usually obtain only a handful of votes (though sometimes they play a spoiler role, as the Reform Party did in both the 1992 and 2000 elections--and in last week's election Joseph Lieberman was reelected to the Senate, running as an independent) is due more to the inherent difficulty third parties face in a presidential (as distinct from parliamentary) political system (for example, because a third party is so unlikely to produce a president, it has difficulty attracting ambitious candidates) than to the barriers to entry that ballot-access rules create.

Although voter turnout is lower in the United States than in other countries, the consequences again are slight because those persons who are eligible to vote but don't bother to do so tend to have the same political opinions as those who do vote. Nor is it obviously wrong as a matter of democratic theory to discourage from voting those people whose interest in the political process is so attenuated that they are unwilling to incur the modest inconvenience that the American system imposes on would-be voters.

The poor voting equipment in many precincts throughout the country undoubtedly disfranchises a number of voters; but as with other barriers to voting this one affects outcomes only if the people disfranchised have systematically different political preferences from others. This is rare although it may have happened in Florida in the 2000 election. It wasn't a factor in the recent election.

Finally, although surveys reveal that most Americans are indeed political ignoramuses, even the significance of this fact for the healthy functioning of the democratic process can be doubted. Issues of public policy, especially at the federal level, and issues of the competence and leadership qualities of officials at that level, are so difficult for outsiders to government to assess that it is unrealistic to think that the electorate could become well informed--unless the American population reallocated a substantial amount of its time from work, family, and cultural and other leisure activities to the study of politics and policy. That might not be an efficient reallocation of time, especially if its principal product was confusion.

If the electorate can be expected to focus only on highly salient issues of policy and leadership, it may not need to be well informed. Maybe all it needs to know is that things are going badly or well and that the party in power bears some responsibility for the situation. Expert commentators on the recent election results, regardless of their politics, are virtually unanimous in the view that the Republicans deserved the severe rebuke that they received from the electorate. These experts may be right or wrong--a question on which, being a judge, I would be uncomfortable offering an opinion. What is pertinent to the present discussion is only that if the franchise were confined to experts, the results of the election would have been the same or very similar. If the electorate comes to the same conclusion as the experts, the implication is that democracy can work quite well even when the electorate lacks expertise.

Posted by Richard Posner at 07:21 PM

 

 Reply on Polygamy--BECKER

I am sorry for this late reply to the many interesting posts on polygamy. I will not be able to consider all the issues raised, but I discuss a few. I come back to the comments on crime next week.

Someone asked how polygamy would affect the incentives of men to invest in more skills, etc., in order to be more competitive in marriage markets. We do know from historical data that men tend to marry later in polygynous societies in order to have enough income and wealth to be sufficiently attractive as mates. However, if the sharing of resources in a marriage is determined by supply and demand considerations, one can show that investments in skills and other ways to be more attractive are efficient (see the discussion in the chapter on marriage in the book Social Economics, by Kevin M. Murphy and me). However, if sharing is based on rigid rules, such as a 50/50 split, investments in skills may be either excessive or insufficient from an efficiency perspective.

Clearly, polygyny has sometimes been encouraged when there is a shortage of men, as when many men have been killed in war. As William Julius Wilson and others have emphasized, there is now a significant shortage of eligible men in black communities. Perhaps that would lead to a little polygyny if it were allowed, but as I said in my post, polygyny is quite rare in modern societies even when permitted. I explained this by a substitution of quality of children for quantity. I agree with one comment that it also involves a substitution of one higher educated wife for two or more less educated ones.

I do not believe there is much of a biological argument against polygyny, for most non-human species are polygynous, not monogamous. Polygyny has probably also been more common among mammals than monogamy, and a majority of humans in the more distant past probably lived in polygamous, not monogamous, societies. So if anything, monogamy has evolved due to culture and against biology.

Do girls need protection against polygamy? Not if it were openly allowed rather than an illegal activity in some remote rural area of Utah. I repeat what I said in my post: I have considerably more confidence than some of the posters that young women can make marriage decisions that are at least as considered as those of young men. If, however, there is a belief that young girls would be taken advantage of, one could have a law that raises the minimum age at which girls can enter into polygynous marriages.

Some completely erroneous statements were made in one of the comments about the relative commitment of men and women to marriage. These were well answered by other posters, so I will only add that this view presumably also implies that women are less interested than men in having custody of children.

My comments on gays having children were made only to point out how many marital practices that were forbidden in the past are now allowed. Perhaps I am wrong in my view that there tend to be negative effects on children raised by gays. I do not strongly hold this view, and I look forward to the time when we have more convincing evidence, one way or the other.

books
How Do You Measure People Skills?
The elusive landscape of social intelligence.
By Paul Harris
Posted Monday, Nov. 13, 2006, at 7:27 AM ET

Going to school in England during the 1950s, I came to accept the idea that intellectual ability is pretty much all of one piece—you have a lot or you have a little. In fact, the entire United Kingdom school system reinforced that way of thinking. At the age of 11, all children in state primary schools sat the so-called 11+ exam, and we were allocated to different secondary schools depending on whether we "passed" or "failed." Three decades later, Howard Gardner presented a persuasive case for intellectual multiplicity in his book Frames of Mind. He argued against the central idea that we English schoolchildren had been measured by—the concept of g, a general factor pervading all aspects of intelligence. Instead, Gardner proposed a set of seven relatively distinct intelligences. Individuals, he claimed, show peaks and valleys across those seven domains. Mathematical prodigies might display low interpersonal intelligence; the mathematically challenged might have high verbal ability; and students with modest mathematical and verbal abilities might nonetheless show extraordinary musical talent. Many educators were convinced by Gardner's neuropsychological evidence showing that parts of the brain specialize in different domains—spatial, verbal, musical, and so on—and also by the compelling case studies of individuals with obvious gifts in one domain and poor functioning in others. Yet, despite its success and influence in the field of education, Gardner's message did not achieve much traction in the wider world.

In 1995, Daniel Goleman, a Harvard-educated psychologist and a science writer for the New York Times, took up Gardner's emphasis on mental specialization. Unlike Gardner, however, he insisted that traditional measures of intelligence, whether conceived as a unified trait or divided by seven, do not pay enough attention to the critical role of emotion. Because of the focus in education from kindergarten through graduate school on purely cognitive intelligence—whether verbal, mathematical, spatial, or logical—the study and nurturance of emotional intelligence, he argued, has been neglected. Goleman's focus on emotion, and its pivotal role in success outside the classroom, had an enormous appeal—and well beyond educational circles. Emotional Intelligence was on the best-seller list for a year and a half, sold more than 5 million copies, and introduced the term EQ into common parlance.

Goleman's new book, Social Intelligence, has two themes. First, he situates emotional intelligence much more explicitly in the context of interpersonal relations. If the hallmark of the emotionally intelligent is awareness and regulation of the self, the hallmark of the socially intelligent is awareness of, and sensitivity toward, other people. Second, he ties his proposals concerning social intelligence to the burgeoning field of social neuroscience. Gardner drew primarily on findings from brain damage—noting, for example, what happens to a person with damage to the left, as compared to the right, hemisphere. Three decades later, Goleman is able to draw on the recent explosion of "imaging" studies in the field of social neuroscience. The various parts of an intact brain are monitored while their owner, temporarily entombed in a scanner, is presented with emotionally charged inputs. Goleman is keen to draw implications from data that are, for the time being, suggestive rather than definite. Still, in his efforts to bring order to a dizzying array of evidence, he deserves credit for being comprehensive and well-informed about emerging trends.

The emotional lives of human beings are a complicated mixture of rapidly elicited, semiconscious reactions to interpersonal signals and a slower, more articulate reflection on what we feel, how we felt earlier, and the appropriateness of those feelings. Goleman proposes two relatively distinct brain pathways to explain this mix: a "low road" for the rapid processing of interpersonal signals, be they cries of distress, flirtatious smiles, or the clasp of a comforting hand; and a "high road" that permits a more reflective awareness, communication, and regulation of our emotional experience.

Goleman argues that low-road emotional signals, when transmitted repeatedly between two people, effectively set up a "brain-to-brain" link that acts as a double-edged sword. Emotionally positive signals between two people have beneficial effects on their respective health and welfare, whereas sustained negative signals have a toxic effect. These positive and negative effects are also transmitted across generations. When antagonistic parents express contempt for one another, they are likely to have children who find it difficult to negotiate peer relationships. When couples display more warmth and empathy during disagreements, they are likely to have children with better social skills. Goleman emphasizes that the low-road system typically operates on nonverbal emotional signals and is relatively automatic, fast acting, and largely unconscious. It is a universal mode of communication that emerges early in life, as babies burble in response to smiles and fret when confronted by an angry face.

The high road, as Goleman construes it, involves a distinct set of neural processes that permit the reworking of emotion on another plane. In the course of development, we start to be aware of our feelings; we acquire the ability to put those conscious feelings into words; and we are increasingly able to exercise some control over the expression, duration, and intensity of our emotions. We also end up with a working theory of the psychology of emotion that goes beneath and beyond the decoding and transmission of nonverbal signals. We realize, for example, that the emotion that someone displays on his face may not correspond to how he really feels. We also realize that he may feel several conflicting emotions at the same time, and we reckon with the fact that whatever emotion he may be expressing—or masking—right now, it will almost certainly dissipate as time passes. It seems unlikely that any other species has this capacity for psychological insight and reflection. Certainly, no other species voices an apology or says, "I love you."

Goleman's distinction between the low road and the high road provides a useful metaphor for synthesizing and organizing a large body of research. Still, the alleged existence of those two neural pathways creates a major problem for his central theme: the nature and functioning of social intelligence. The difficulty is in deciding how exactly to measure and combine low- and high-road skills. One solution might be to say that social intelligence is just general intelligence—as measured by the traditional IQ test—plus the type of low-road skills that Goleman emphasizes with respect to the rapid, intuitive processing of nonverbal interpersonal signals. However, that solution will not work. In his book Descartes' Error, neuropsychologist Antonio Damasio describes the plight of patients with damage to the prefrontal cortex—a brain area that is critical to the reflective, high-road skills. Such patients are notorious for their difficulties in interpersonal relationships—they are likely to be insensitive in social situations and emotionally volatile. In many ways, they lack social intelligence. At the same time, these patients can perform well on standard measures of IQ.

A different tack—one that corresponds to Goleman's own position—is to think of social intelligence as a combination of low-road and high-road skills, with each set being distinct from IQ as ordinarily measured. Yet, it looks like wishful thinking on Goleman's part to group these two sets of skills together. He provides no evidence that they correlate. Intuitively, there seem to be obvious, everyday examples of their separation. The friend who is poor at managing his or her long-term social relationships may still be someone who is remarkably attuned to our current feelings. Babies may be exquisitely sensitive to the shifting moods of their caregivers from the earliest months, but that sensitivity is no guarantee of mature emotional and interpersonal functioning in later life. So, in the end, Goleman's book falls victim to its own dissecting logic. Once we start multiplying intelligence, how do we know when to stop, and how do we put things back together?

Paul Harris teaches developmental psychology at the Graduate School of Education, Harvard University. He is currently writing a book about the psychology of trust, thanks to a Guggenheim Fellowship.

Article URL: http://www.slate.com/id/2153385/

 November 5, 2006

Freakonomics

The Price of Climate Change

By STEPHEN J. DUBNER and STEVEN D. LEVITT

The famous old quip about the weather — everyone talks about it but nobody does anything about it — is not as true as it once was. Alarmed by the threat of global warming, lots of people are actively trying to change human behaviors in order to change the weather.

Even economists are getting into the weather business. Olivier Deschênes of the University of California at Santa Barbara and Michael Greenstone of the Massachusetts Institute of Technology have written a pair of papers that assess some effects of climate change. In the first, they use long-run climatological models — year-by-year temperature and precipitation predictions from 2070 to 2099 — to examine the future of agriculture in the United States. Their findings? The expected rises in temperature and precipitation would actually increase annual agricultural production, and therefore agricultural profits, by about 4 percent, or $1.3 billion. This hardly fulfills the doomsday fears conjured by most conversations about global warming.

For other economists, meanwhile, the weather itself has proved useful in measuring wholly unrelated human behaviors. From an economist’s perspective, the great thing about the weather is that there is nothing humans can do to affect it (at least until recently).

Contrast this with social changes that people enact: a new set of laws, for instance. Very often, new laws come about when there is a perception that a big social problem — think violent crime or corporate fraud — is growing worse. After a while, and after the laws have been enacted, the problem diminishes. So did the new laws fix the problem, or would it have improved on its own? Politicians will surely claim that it was their laws that fixed the problem, but it’s hard to know for sure.

The weather, however, is different; the beauty of weather is that it does its own thing, and whether the weather is good or bad, you can be pretty sure that it didn’t come about in response to some human desire to fix a problem. Weather is a pure shock to the system, which means that it is a valuable tool to help economists make sense of the world.

Consider 19th-century Bavaria. The problem there was rain — too much of it. As Halvor Mehlum, Edward Miguel and Ragnar Torvik explained in a recent paper, excessive rain damaged the rye crop by interfering with the planting and the harvest. Using a historical rainfall database from the United Nations, they found that the price of rye was significantly higher in rainy years, and since rye was a major staple of the Bavarian diet, food prices across the board were considerably higher in those years, too. This was a big problem, since a poor family at the time would have been likely to spend as much as 80 percent of its money on food. The economists went looking for other effects of this weather shock. It turns out that Bavaria kept remarkably comprehensive crime statistics — the most meticulous in all of Germany — and when laid out one atop the other, there was a startlingly robust correlation between the amount of rain, the price of rye and the rate of property crime: they rose and fell together in lockstep. Rain raised food prices, and those prices, in turn, led hungry families to steal in order to feed themselves.

But violent crime fell during the rainy years, at the same time property crimes were on the rise. Why should that be? Because, the economists contend, rye was also used to make beer. “Ten percent of Bavarian household income went to beer purchases alone,” they write. So as a price spike in rye led to a price spike in beer, there was less beer consumed — which in turn led to fewer assaults and murders.
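
The larger point here is methodological: because rainfall is outside anyone's control, it can serve as a pure shock when looking for patterns in other series. The snippet below is a minimal sketch of that correlation exercise with entirely made-up numbers (not the Bavarian data used by Mehlum, Miguel and Torvik); it simply reproduces the sign pattern described above, with property crime moving with rainfall and violent crime moving against it.

# Hypothetical sketch of the rain/rye/crime correlations; all numbers invented.
import statistics

rainfall     = [3.1, 4.0, 2.8, 5.2, 3.5, 4.8, 2.9, 5.5, 3.3, 4.4]  # arbitrary units
rye_price    = [10 + 2.0 * r for r in rainfall]                     # price rises with rain
property_crm = [50 + 5.0 * p for p in rye_price]                    # theft rises with prices
violent_crm  = [80 - 3.0 * p for p in rye_price]                    # assaults fall (less beer)

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(corr(rainfall, property_crm))   # close to +1 in this toy series
print(corr(rainfall, violent_crm))    # close to -1 in this toy series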

It turns out that rainfall often has a surprisingly strong effect on violence. In a paper on the economic aftermath of the hundreds of riots in American cities during the 1960’s, William Collins and Robert Margo used rainfall as a variable to compare the cities where riots took place with cities where riots probably would have taken place had it not rained. Few things can dampen a rioter’s spirit more than a soaking rain, they learned. After two days of rioting in Miami in the summer of 1968 were finally quelled by rain, they write, the Dade County sheriff joked to The New York Times that he had ordered his off-duty officers to pray for more rain.

The economists Edward Miguel, Shanker Satyanath and Ernest Sergenti have written a paper that uses rainfall to explore the issue of civil war in Africa. Twenty-nine of 43 countries in sub-Saharan Africa, they note, experienced some kind of civil war during the 1980’s or 1990’s. The causes of any war are of course incredibly complex — or are they? The economists discovered that one of the most reliable predictors of civil war is lack of rain. Using monthly rainfall data from many different African countries (most of which, significantly, are largely agricultural), they found that a shortage of rain in a given growing season led inevitably to a short-term economic decline and that short-term economic declines led all too easily to civil war. The causal effect of a drought, they argue, was frighteningly strong: “a 5-percentage-point negative growth shock” — a drop in the economy, that is — “increases the likelihood of civil war the following year by nearly one-half.”

Since the weather yields such interesting findings about the past, it makes sense that economists are also tempted to use it to anticipate the future. In their second paper on the potential effects of global warming, Deschênes and Greenstone try to predict mortality rates in the U.S. in the last quarter of the current century.

Unlike in their paper on agriculture, the news in this one isn’t good. They estimate, using one of the latest (and most dire) climatological models, that the predicted rise in temperature will increase the death rate for American men by 1.7 percent (about 21,000 extra fatalities per year) and for American women by 0.4 percent (about 8,000 deaths a year). Most of these excess deaths, they write, will be caused by hot weather that worsens cardiovascular and respiratory conditions. These deaths will translate into an economic loss of roughly $31 billion per year. Deschênes and Greenstone caution that their paper is in a preliminary stage and hasn’t yet been peer-reviewed and that the increased mortality rate may well be offset by such simple (if costly) measures as migration to the Northern states — a repopulation that, even a decade ago, might have seemed unimaginable.

Their paper on agriculture also has some wrinkles. While arguing that global warming would produce a net agricultural gain in the United States, they specify which states would be the big winners and which ones would be the big losers. What’s most intriguing is that the winners’ and losers’ lists are a true blend of red states and blue states: New York, along with Georgia and South Dakota, is among the winners; Nebraska and North Carolina would lose out, but the biggest loser of all would be California. Which suggests that in this most toxic of election seasons, when there seems not a single issue that can unite blue and red staters (or at least the politicians thereof), global warming could turn out to be just the thing to bring us all together.

Stephen J. Dubner and Steven D. Levitt are the authors of "Freakonomics." More information on the research behind this column is at www.freakonomics.com.

Milton Friedman--Posner's Comment

I knew Milton Friedman, but not well; and I am not competent to express an informed opinion on his major academic work, which was in macroeconomics. The economists of his generation with whom I principally associated were George Stigler, Ronald Coase, and Aaron Director (Friedman's brother-in-law)--microeconomists who had a major impact on the law and economics movement.

I did, however, read a few of Friedman's essays. Two in particular struck me around the time I came to Chicago. One was his essay on the methodology of positive economics, in which he argued that the way to test a theory was not by assessing the realism of its assumptions, but by assessing the accuracy of its predictions. Economics makes heavy use of unrealistic assumptions, primarily concerning rationality, and yet the predictions generated by models based on those assumptions are often accurate. Where they are inaccurate, this is a spur to reexamining the assumptions and perhaps modifying them, as is occurring in such fields as finance, where assuming a more complex human psychology than finance theorists traditionally assumed has helped to explain anomalies (from a rational-choice perspective) in the behavior of financial markets.

The emphasis on predictions connects Friedman's essay to Karl Popper's philosophy of science, in which the scientific method is viewed as a matter of making bold hypotheses, confronting them with data, and ascribing tentative (always tentative) validity to the hypotheses that survive the confrontation. Popper's methodology of fallibilism has strong affinities with Friedman's methodology. Both are strongly empiricist. Stigler in conversation merged these two closely related approaches, and I was very struck by the melded approach.

The other essay of Friedman's that struck me was an essay on taxation in which he argued, contrary to the conventional view at the time (though I gather the argument was not original with him), that there was no theoretical reason for supposing income taxes superior in point of efficient resource allocation to excise taxes. An excise tax--say, a 10 percent tax on yachts--drives a wedge between cost and price and so deflects buyers to substitutes that may cost more to produce but look cheaper because they are not taxed at so high a rate. (The effect is the same as monopoly pricing.) But Friedman argued that income taxes have the same effect, by driving a wedge between the cost of work and the wage (price) received by the worker, thus deflecting him to untaxed substitutes, such as leisure, or to jobs that generate untaxed benefits, including leisure in the case of teaching (for example), but also prestige, amenities, tax-favored fringe benefits, and job security. This idea of the parity of excise and income taxes has wide-ranging implications for public policy, since the tendency (still) is to neglect the misallocative effects of income taxation--a neglect of which I think even Friedman was sometimes guilty, as I am about to argue.
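
Friedman's parity point can be written down in a line. The notation below is a conventional sketch rather than anything taken from his essay: p is the producer's cost, t the excise-tax rate, w the gross wage, and τ the income-tax rate.

\[
  p_{\mathrm{buyer}} \;=\; p\,(1+t) \;>\; p,
  \qquad
  w_{\mathrm{net}} \;=\; w\,(1-\tau) \;<\; w .
\]

In both cases a proportional wedge separates what one side pays from what the other receives: the buyer is pushed toward untaxed goods that may cost more to produce but look cheaper, and the worker toward untaxed substitutes such as leisure, amenities, and fringe benefits. That symmetry is the sense in which the two taxes are on a par in their misallocative effects.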

Perhaps his most important general contribution to economic policy was the simple, but when he first propounded it largely ignored or rejected, point that people have a better sense of their interests than third parties, including government officials, do. Friedman argued this point with reference to a host of issues, including the choice between a volunteer and a conscript army. With conscription, government officials determine the most productive use of an individual: should he be a soldier, or a worker in an essential industry, or a student, and if a soldier should he be an infantryman, a medic, etc.? In a volunteer army, in contrast, the determination is made by the individual--he chooses whether to be a soldier or not, and (within limits) if he decides to be a soldier what branch, specialty, etc., to work in. A volunteer army should provide a better matching of person to job than conscription, and in addition should create a more efficient balance between labor and capital inputs into military activity by pricing labor at its civilian opportunity costs.

But this is in general rather than in every case. The smaller the armed forces and the less risk of death or serious injury in military service, the more efficient a volunteer army is relative to a conscript one. These conditions are not satisfied in a general war in which a significant fraction of the young adult population is needed for the proper conduct of the war and the risk of death or serious injury is substantial--the situation in World War II. For then the government's heavy demand for military labor, coupled with the high cost of military service to soldiers at significant risk, would drive the market wage rate for such service through the roof. Very heavy taxes would be required to defray the expense of a volunteer army in these circumstances and those taxes would have misallocative effects that might well exceed the misallocative effects of conscription.

I mention this example because I find slightly off-putting what I sensed to be a dogmatic streak in Milton Friedman. I think his belief in the superior efficiency of free markets to government as a means of resource allocation, though fruitful and largely correct, was embraced by him as an article of faith and not merely as a hypothesis. I think he considered it almost a personal affront that the Scandinavian nations, particularly Sweden, could achieve and maintain very high levels of economic output despite very high rates of taxation, an enormous public sector, and extensive wealth redistribution resulting in much greater economic equality than in the United States. I don't think his analytic apparatus could explain such an anomaly.

I also think that Friedman, again more as a matter of faith than of science, exaggerated the correlation between economic and political freedom. A country can be highly productive though it has an authoritarian political system, as in China, or democratic and impoverished, as was true for the first half century or so of India's democracy and remains true to a considerable extent, since India remains extremely poor though it has a large and thriving middle class--an expanding island in the sea of misery. What is true is that commercial values are in tension with aristocratic and militaristic values that support authoritarian government, and also that as people become economically independent they are less subservient, and so less willing to submit to control by politicians; and also that they become more concerned with the protection of property rights, which authoritarian government threatens. But Friedman seemed to share Friedrich Hayek's extreme and inaccurate view that socialism of the sort that Britain embraced under the old Labour Party was incompatible with democracy, and I don't think that there is a good theoretical or empirical basis for that view. The Road to Serfdom flunks the test of accuracy of prediction!

I imagine that without the element of faith that I have been stressing, Friedman might have lacked the moral courage to propound his libertarian views in the chilly intellectual and political climate in which he first advanced them. So it should probably be reckoned on balance a good thing, though not to my personal taste. His advocacy of school vouchers, the volunteer army (in the era in which he advocated it--which we are still in), and the negative income tax demonstrates the fruitfulness of his master microeconomic insight that, in general, people know better than government how to manage their lives. But perhaps not always.

On Milton Friedman's Ideas--BECKER

Milton Friedman died this past week. He was the most influential economist of the 20th century when one combines his contributions to both economic science and public policy. I knew him for many decades, starting first when I was a graduate student at Chicago, and then as a colleague, mentor, and very close friend.

I will not dwell here on what a remarkable colleague he was. However, I do want to describe my first exposure to him as a teacher since he enormously changed my approach to economics, and to life itself. After my first class with him a half-century ago, I recognized that I was fortunate to have an extraordinary economist as a teacher. During that class he asked a question, and I shot up my hand and was called on to provide an answer. I still remember what he said, "That is no answer, for you are only restating the question in other words." I sat down humiliated, but I knew he was right. I decided on my way home after a very stimulating class that despite all the economics I had studied at Princeton, and the two economics articles I was in the process of publishing, I had to relearn economics from the ground up. I sat at Friedman's feet for the next six years-- three as an Assistant Professor at Chicago-- learning economics from a fresh perspective. It was the most exciting intellectual period of my life. Further reflections on Friedman as a teacher can be found in my essay on him in the collection edited by Edward Shils, Remembering the University of Chicago: Teachers, Scientists, and Scholars, 1991, University of Chicago Press.

In considering his many contributions to economics I will pass over his major innovations in scientific economics. These include his emphasis on permanent income in explaining aggregate consumption and savings, his study of the monetary history of the United States, his explanation of the stagflation of the 1970's, his analysis of the value of a stable and predictable monetary framework to help stabilize the economy, his early contributions to the theory and measurement of human capital, his discussion of choice under uncertainty, and his famous essay on methodology in economics.

I will discuss instead several ideas in his remarkable book, Capitalism and Freedom, published in 1962, that contains almost all his well-known proposals on how to improve public policy in different fields. These proposals are based on just two fundamental principles. The first is that in the vast majority of situations, individuals know their own interests and what is good for them much better than government officials and intellectuals do. The second is that competition among providers of goods and services, including among producers of ideas and seekers of political office, is the most effective way to serve the interests of individuals and families, especially of the poorer members of society.

The famous education voucher system found in this book, and based on an article published in the 1950's, embodies both principles: that parents generally know the interests of their children better than teachers unions and school boards do, and that competition among schools is the best way to serve the educational interests of children. He added the further insight that one can and should separate government financing of education from government running of schools. The voucher system retains government financing, but forces public schools to compete for funds against private for-profit and non-profit schools. The voucher proposal has, I believe, won the intellectual battle over the value of competition among schools at the K-12 level as well as at the college level, but so far vouchers have won only limited political victories in terms of actual implementation. This is mainly due to the dedicated opposition of public school teachers unions, who fear competition from private schools.

Both individual choice and competition are the foundation of Friedman's 1962 radical proposal to privatize the social security system. He argued, correctly in my judgment, that the vast majority of families could be trusted to provide for their retirement if given appropriate incentives, and that they should be allowed to invest in retirement funds provided by competitive investment companies. The government-run social security systems then in effect in the United States and all other countries with retirement systems taxed earnings in ways that discouraged effort and encouraged underground activities. These tax receipts were then paid out to retirees according to politically determined criteria. Chile started the first private system of personal accounts modeled along the lines laid out in Capitalism and Freedom, and Chile has since been followed to some degree by many other countries, such as Mexico, Singapore, and Great Britain. The United States has its tax-free IRAs and SEP savings accounts, but this country has not yet privatized its basic social security system, even though an enormous financial deficit in this system will occur in about 15 years unless the system is significantly reformed.

Friedman also proposed a flat income tax rate in Capitalism and Freedom, and showed that a rate of about 20% in the United States could raise the same revenue in a much simpler and far less costly way than the quite progressive income tax system in effect in the early 1960's. Further theoretical analysis of what is called optimal taxation has generally concluded that a rather flat tax would be best at combining efficiency with redistribution of income to poorer families. The appeal to Friedman of the flat tax was based again on his confidence that individuals react to incentives and take steps to further their interests. In this case, he argued that highly progressive taxes induce taxpayers to find and exploit tax loopholes, so that legally, and at times illegally, taxpayers cut their tax payments by hiding income or converting income into other forms. A flat income tax was introduced early on by Hong Kong, and in recent years many countries have followed, including Russia and eight other Eastern European countries. The United States has significantly flattened its income tax structure since Friedman wrote this book, especially as a result of the tax reform act of 1986. Unfortunately, a more progressive structure has crept back since that reform.
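
A minimal sketch of the revenue-neutrality arithmetic behind a roughly 20% flat rate may help here; the numbers are purely illustrative, not Friedman's:

$$
t_{\text{flat}} \;=\; \frac{R}{B}, \qquad \text{e.g.}\quad R = 0.25\,B_{e},\;\; B = 1.25\,B_{e} \;\Longrightarrow\; t_{\text{flat}} = \frac{0.25\,B_{e}}{1.25\,B_{e}} = 0.20,
$$

where R is the revenue the progressive code actually collects, B_e is the loophole-shrunken base it actually reaches, and B is the broader base a simple flat tax would tax. The point of the exercise is only that closing loopholes widens the base, which is why a single low rate can match the revenue of much higher statutory rates.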

The voluntary army was not discussed in Capitalism and Freedom, but Friedman did propose to replace the military draft in several articles published about the same time as the book was published. He argued that a voluntary army would attract at reasonable cost a dedicated military force of men and women who volunteered due to a combination of patriotism and economic opportunities. A voluntary system is especially effective in situations where full-scale mobilization of available manpower is not required. His advocacy of the voluntary army induced President Nixon to put Friedman on a committee to consider whether the United States should replace its military draft by a fully voluntary armed force. Many persons on the committee initially opposed this idea, especially General William Westmoreland, head of military operations in Vietnam. Friedman's persuasiveness eventually won over the vast majority of the members to this position, and in 1973 the United States changed to a voluntary armed force. Seeing how well this system has operated, very few military leaders now want to return to a draft.

Friedman proposed in Capitalism and Freedom, and earlier in an article in the 1950's, to abolish the Bretton Woods System of fixed exchange rates and move to fully flexible exchange rates. Under a flexible system, rates are determined by the competitive supply of and demand for different currencies by individuals and businesses. The prevailing view had been that flexible exchange rates would be unstable, so he argued at length that they would be not constant but stable; instability, he argued, would imply that speculators on average lose money, which he did not believe was likely. This view of the behavior of speculators was challenged, but I believe Friedman was basically right. In any case, the issue was decisively settled after Nixon took the United States off the gold standard in 1971, and the fixed-rate system was replaced by a system of flexible rates in 1973. The Chicago Mercantile Exchange, led by Leo Melamed, then saw the opportunity to set up futures markets in currencies, which it did with Friedman's help. These markets were enormously successful, and they put to rest forever the belief that one could not have an effective system of flexible exchange rates. They give businesses an opportunity to hedge their currency risks by trading currency futures.
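
To make the hedging mechanism concrete, here is a minimal sketch in Python; the importer, the euro amounts, and the exchange rates are all hypothetical, invented only to illustrate how a currency-futures position locks in a dollar cost:

```python
# Hypothetical example: a U.S. importer must pay EUR 1,000,000 in three months.
# It hedges by going long EUR futures at today's futures price of 1.25 USD/EUR.
# At settlement the futures price is assumed to converge to the spot rate.

def hedged_dollar_cost(payable_eur, futures_price, spot_at_settlement):
    """Dollar cost of the payable when hedged with a long EUR futures position."""
    cost_at_spot = payable_eur * spot_at_settlement                     # buy euros at the future spot rate
    futures_gain = payable_eur * (spot_at_settlement - futures_price)   # gain (or loss) on the futures
    return cost_at_spot - futures_gain                                  # equals payable_eur * futures_price

for spot in (1.10, 1.25, 1.40):  # three possible USD/EUR spot rates at settlement
    cost = hedged_dollar_cost(1_000_000, futures_price=1.25, spot_at_settlement=spot)
    print(f"spot {spot:.2f}: hedged cost = ${cost:,.0f}")

# Every scenario prints $1,250,000: whatever the euro does, the hedge fixes the
# importer's dollar cost at the futures price, which is the practical opportunity
# the new currency-futures markets created.
```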

The first chapter of Capitalism and Freedom considers the link between economic and political freedom. He argues there that economic freedom promotes political freedom, and that political freedom is not likely to persist without economic freedom. "The kind of economic organization that provides economic freedom directly, namely, competitive capitalism, also promotes political freedom because it separates economic power from political power and in this way enables the one to offset the other." Findings since then suggest that while economic freedom can begin under totalitarian regimes, such as those of General Pinochet in Chile and General Chiang Kai-shek in Taiwan, it produces economic growth and other changes that usually lead, eventually, to much greater democracy, as in Taiwan, South Korea, and Chile. The important implication is that China would become more democratic if it continues on its path of greater economic freedom and greater growth.

On whether one can have democracy without economic freedom, Friedman said, "I know of no example in time or place of a society that has been marked by a large measure of political freedom, and that has not also used something comparable to a free market to organize the bulk of economic activity." Sweden and the other Scandinavian countries have been vibrant democracies, and yet their governments tax away more than half of income. However, the majority of these taxes are transferred back to individuals in the form of retirement incomes, medical care, and other benefits. These countries rely mainly on private enterprise, not government enterprises, to organize their economies; but is that "enough" freedom to qualify as economically free? That depends on the definition of economic freedom, yet I believe Friedman is right that thoroughgoing restrictions on economic freedom would turn out to be inconsistent with democracy.

To conclude on a more personal level, I was most impressed by Milton Friedman's sterling character--he would never soften his views to curry favor--his perennial optimism, his loyalty to those he liked, his love of a good argument without any personal attacks on his opponents, and his courage in the face of prolonged and virulent attacks on him by others. I cannot count the number of times I participated with him in seminars, nor how many visits my wife and I shared with Milton and Rose, his wife of almost 70 years. Rose, a fine economist, would not hesitate to differ with her husband when she believed his arguments were wrong or too loose.

When I spoke on the phone with him last Monday, he sounded strong and a bit optimistic about his health, even though he had just returned from a one-week hospital stay with a severe illness, an illness that a few days later took his life. Although his ideas live on stronger than ever, it is hard to believe that he is not here. I can no longer seek his opinions on my papers, but I will continue to ask myself about any ideas I have: would my teacher and dear friend Milton Friedman believe they are any good?

 Niall Ferguson: The death of monetarism

Economist Milton Friedman has passed away, and so has his idea that central bankers should target the money supply to control inflation.

Niall Ferguson

November 20, 2006

"INFLATION IS always and everywhere a monetary phenomenon." I can think of few sentences in economics that have engraved themselves more deeply in my memory than Milton Friedman's famous line in his Encyclopedia Britannica entry for "Money."

Even before I went to university, I had become fascinated by the problem of inflation. No wonder: In 1975, when I was 11, the annual rate hit 27% in Britain. At Oxford, however, I was prescribed John Maynard Keynes and John Kenneth Galbraith. I discovered Friedman only when I began work on my doctoral dissertation on the German hyperinflation of 1923. Suddenly all became clear. I just needed to figure out why the Weimar Republic printed such an insane quantity of banknotes. And, sure enough, it turned out that socialist politicians had been trying, among other things, to spend their way to full employment.

In 1920s Germany, however, just like in 1970s Britain, the notion of a trade-off between inflation and unemployment proved to be illusory — precisely as Friedman argued in his celebrated 1967 address to the American Economic Assn. Gradually, people got wise to what was happening, prices soared sky high and the economy collapsed.

It wasn't just that Friedman rehabilitated the quantity theory of money. It was his emphasis on people's expectations that was the key, because that was what translated monetary expansion into higher prices. In this, as in all his work, Friedman combined skepticism toward government with faith in individual rationality and therefore freedom. He was a libertarian across the policy board.
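
For readers who want the mechanism spelled out, the quantity theory that Friedman rehabilitated rests on the equation of exchange; this is the standard textbook notation, not a quotation from Friedman:

$$
M V = P Y \qquad\Longrightarrow\qquad \pi \;\approx\; \hat{m} + \hat{v} - \hat{y},
$$

where M is the money stock, V its velocity of circulation, P the price level, Y real output, hats denote growth rates and \pi is inflation. If velocity is reasonably stable and real output grows slowly, sustained monetary expansion has to show up as inflation sooner or later; expectations, Friedman's addition, determine how quickly "later" arrives.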

Nevertheless, the question is: Do people still believe in monetarism, Friedman's most important theory, which argues that inflation can be defeated only by targeting the growth of the money supply and thereby changing expectations? Not too many do. From the host of tributes from politicians, central bankers and economists on both sides of the Atlantic last week, of which the most vivid ("an intellectual freedom fighter") came from Lady Thatcher, you'd never know that Friedman's monetarism seemingly predeceased him by a decade or more.

The death of monetarism is usually explained as follows: In the course of the 1980s, pragmatic politicians and clever central bankers came to realize that it was difficult to target the growth of the money supply. Margaret Thatcher's ministers preferred to raise interest rates or to target the exchange rate. At the Federal Reserve too, Friedman's rules — once zealously applied by Paul Volcker — gradually gave way to Alan Greenspan's discretion. And, for all the praise he heaped on Friedman last week, Greenspan's successor, Ben Bernanke, is dismissive of monetarism. Earlier this year, the Fed ceased to track and publish the M3 money supply number (the broadest monetary aggregate). It is the inflation rate that today's central bankers want to target, not the supply of money.

Anti-monetarists point out that the relationship between monetary growth and inflation has broken down. Inflation is low nearly everywhere. The latest figure for the annual growth in core consumer prices is just 2.3% in the United States, down from 3.8% in May. But the annual growth rate of M3 — which diehard monetarists have continued to track unofficially — is just under 10%.

Yet simply because consumer price inflation has remained low does not mean that money is irrelevant. On the contrary, it is the key to understanding the world economy today. For there is nothing in Friedman's work that states that monetary expansion is always and everywhere a consumer price phenomenon.

In our time, unlike in the 1970s, oil-price pressures have been countered by the entry of low-cost Asian labor into the global workforce. Not only are the things Asians make cheap and getting cheaper; competition from Asia also means that Western labor has lost the bargaining power it had 30 years ago. Stuff is cheap. Wages are pretty flat.

As a result, monetary expansion in our time does not translate into significantly higher prices in shopping malls. We don't expect it to. Rather, it translates into significantly higher prices for capital assets, particularly real estate and equities.

No one can say for sure what the consequences will be of this new variety of inflation. For the winners, one asset bubble leads merrily to another; the key is to know when to switch from real estate to paintings by Gustav Klimt. For the losers, there is the compensation of cheap electronics. Why worry when China is willing to buy any amount of U.S. dollars the Fed cares to print in order to keep its currency from appreciating and its exports cheap?

Last week, the People's Bank of China announced that its international reserves had reached the dizzying figure of $1 trillion, 70% of which is held in dollar notes and bonds. If you were wondering where all the money went, that's part of the answer. Unnerving, isn't it?

No, the theorist may be dead, but long live the theory.

"Inflation is always and everywhere a monetary phenomenon." It's true, no matter what is inflating.

 

Allowing everything but the veil

The Netherlands, famously tolerant of prostitution and drug use, wants to outlaw face veils in public places.


November 18, 2006

SAME-SEX MARRIAGE, euthanasia, drug use, prostitution — in the Netherlands those are perfectly fine. But the one thing the Dutch apparently will not tolerate is what they perceive to be intolerance. In defense of their cherished tradition of gedogen — which loosely translates as "to live and let live" — the Dutch are ready to force the assimilation of conservative Muslim immigrants, who are deemed intolerant of fabled Dutch tolerance and must therefore no longer be tolerated. Got that?

Five days before a national election, the Netherlands' center-right government announced that it would introduce legislation to ban the wearing of burkas, veils and similar garments in public places. Should it pass, the most famously accepting country in Europe would have the most restrictive anti-Muslim laws on the Continent.

This is a spectacularly bad case of overreaching, even if you believe — as we do — that it's unfortunate that some women are forced by their culture to cloak themselves in anonymity before going out in public. Also, there are some Muslim women who feel exposed without covering up.

If anything, the proposed law, which is being justified on security grounds, could backfire by encouraging more immigrants to reach for their veils. And it risks further victimizing women by forcing them to stay indoors. A tiny minority of the roughly 1 million Muslims in the Netherlands are conservative enough to be affected by the proposed ban, but the message to all of them is loud and clear. And menacing.

Dutch anxiety about immigrants' rejection of Western culture is understandable. The nation is still traumatized by the 2004 killing of filmmaker Theo van Gogh, who made a film critical of Islam. But to force assimilation is to fight intolerance with intolerance. The ban would undermine the very culture — Dutch culture — it seeks to protect.

In the past, the Dutch may have been too indulgent of immigrant communities' desires to remain culturally separate. Because of its permissiveness, Holland has allowed a community that promulgates extremist strains of Islam to flourish. Hundreds of schools, partly funded by Saudi Arabia and subsidized by the Dutch government, helped keep the two cultures apart. A better way to root out intolerance might be to start with these schools, where anti-Western ideology is spread. A ban limited to head scarves and veils in schools, similar to the one in France, would be more defensible than an outright ban on everyone.

Part of Dutch identity, since the Netherlands welcomed Jews fleeing the Spanish Inquisition, not to mention our own Pilgrim ancestors fleeing England, has been tolerance. By outlawing a religious and cultural practice they fear, the Dutch would be sacrificing some of their own identity.

 

A man who hated government

Conservative economic guru and liberal nemesis Milton Friedman disliked intervention of any sort, whether in the market or in recreational drug use.

By Brad DeLong

Nov. 17, 2006 | "Lord, enlighten thou our enemies," prayed 19th century British economist and moral philosopher John Stuart Mill in his "Essay on Coleridge." "Sharpen their wits, give acuteness to their perceptions, and consecutiveness and clearness to their reasoning powers. We are in danger from their folly, not from their wisdom: their weakness is what fills us with apprehension, not their strength."

For every left-of-center American economist in the second half of the 20th century, Milton Friedman (1912-2006), Nobel Prize winner, founder of the conservative "Chicago School" of economics and advisor to Republicans from Goldwater to Reagan, was the incarnate answer to John Stuart Mill's prayer. His wits were sharp, his perceptions acute, his arguments strong, his reasoning powers clear, coherent and terrifyingly quick. You tangled with him at your peril. And you left not necessarily convinced, but well aware of the weak points in your own argument.

Gen. William Westmoreland, testifying before President Nixon's Commission on an All-Volunteer [Military] Force, denounced the idea of phasing out the draft and putting only volunteers in uniform, saying that he did not want to command "an army of mercenaries." Friedman, a member of the 15-person commission, interrupted him. "General," Friedman asked, "would you rather command an army of slaves?" Westmoreland got angry: "I don't like to hear our patriotic draftees referred to as slaves." And Friedman got rolling: "I don't like to hear our patriotic volunteers referred to as mercenaries." And he did not stop: "If they are mercenaries, then I, sir, am a mercenary professor, and you, sir, are a mercenary general. We are served by mercenary physicians, we use a mercenary lawyer, and we get our meat from a mercenary butcher." As George Shultz liked to say: "Everybody loves to argue with Milton, particularly when he isn't there."

Thinking as hard as he could until he got to the root of the issues was his most powerful skill. "Even at 94," wrote "Freakonomics" author Steven Levitt, currently a professor in the same University of Chicago economics department where Friedman taught from 1946 to 1976, "he would teach me something about economics whenever we talked." In Friday's New York Times, Chicago economist Austan Goolsbee quotes from Milton Friedman's Nobel autobiography:

Friedman said that when he arrived [at the University of Chicago] in the 1930s, he encountered a "vibrant intellectual atmosphere of a kind that I had never dreamed existed."

"I have never recovered."

His worldview began with a bedrock belief in people and their ability to make judgments for themselves, and thus an imperative to maximize individual freedom. On top of that was layered a trust in free markets as almost always the best and most magical way of coordinating every conceivable task. On top of that was layered a powerful conviction that a look at the empirical facts -- a comparison, or a "marking to market," of one's beliefs with reality -- would generate the right conclusions. And crowning that was a fear and suspicion of government as an easily captured tool for the enrichment of cynical and selfish interests. Suffusing all was a faith in the power of argument and the primacy of reason. Friedman was an optimist. He was convinced people could be taught the truths of economics, and if people were properly taught, then institutions could be built to protect society as a whole against the corruption and overreach of the government.

And he did fear the government. He was a conservative of the old, libertarian school, from the days before the scolds had captured the levers of power in the conservative movement. He hated any government intrusion into people's private business. And he interpreted "people's private business" extremely widely. He detested the war on drugs, which he saw as a cruel and destructive breeder of crime and violence. He scorned government licensing of professionals -- especially doctors, who heard from him over and over again how restrictions on the number of doctors boosted their incomes and made Americans sicker. He abhorred deficit spending -- again, he was a conservative from another era. He feared that cynical politicians could pretend that the costs of government were less than they really were by pushing the taxes needed to pay for spending off into the future. He sought to inoculate citizens against such political games of three-card monte. "Remember," he would say, "to spend is to tax."

This did not mean that government had no role to play. He endorsed the enforcement of property rights, adjudication of contract disputes -- the standard and powerful rule-of-law underpinnings of the market -- plus a host of other government interventions when empirical circumstances made them appropriate. Sometimes empirical circumstances could win Friedman unexpected allies. Left-wing Mayor Ken Livingstone's congestion tax on cars in central London is an idea straight out of Milton Friedman. Friedman's negative income tax is one of the parents of what is now America's largest anti-poverty program: the earned-income tax credit, which was greatly expanded by Bill Clinton. And, most important, government had a very powerful and necessary role to play in keeping the monetary system working smoothly through proper control of the money stock. If there was always sufficient liquidity in the economy -- enough but not too much -- then you could trust the market system to do its job. If not, you got the Great Depression, or hyperinflation.
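
The negative income tax mentioned above can be written in one line; the notation and the numbers are a standard textbook rendering, not Friedman's own figures:

$$
T(Y) \;=\; t\,(Y - E),
$$

where Y is family income, E an exemption level and t the single tax rate. When Y falls below E, the "tax" is negative, that is, a cash payment of t(E - Y): with, say, t = 0.5 and an exemption of $10,000, a family earning $4,000 would receive 0.5 x $6,000 = $3,000, and the subsidy shrinks gradually as earnings rise instead of vanishing all at once, the work-incentive feature the earned-income tax credit inherited.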

Prior to Friedman, the economic giant of the previous generation, John Maynard Keynes, was an equally ferocious debater. The Great Depression had convinced Keynes that central bankers alone could not rescue and stabilize the market economy. In Keynes' view, stronger and more drastic strategic interventions were needed to boost or curb demand directly. Keynes was perhaps the prime influence on U.S. liberals and U.S. economic policy up through the Reagan era; Friedman worked tirelessly to supplant and minimize his influence.

In their "Monetary History of the United States," Friedman and coauthor Anna J. Schwartz argued that the Keynesian reliance on intervention was a misreading of the lessons of the Depression. Friedman did think that government was required to undertake relatively narrow but crucial strategic interventions to stabilize the macroeconomy -- keep production, employment and prices on an even keel. But he believed the Depression might have been rapidly alleviated by skillful monetary management alone. Over the course of 40 years, Friedman's position carried the day, in a few developing economies like Chile that have applied Chicago School theories, and at home. Current Federal Reserve chairman Ben Bernanke now holds Friedman's view, not Keynes', of what kind of strategic interventions in the economy are necessary to provide for maximum production, employment and purchasing power, and stable prices.

Friedman's thought is, I believe, best seen as the fusion of two strong and very American currents: libertarianism and pragmatism. Friedman was a pragmatic libertarian. He believed that -- as an empirical matter -- giving individuals freedom and letting them coordinate their actions by buying and selling on markets would produce the best results. It was not that he thought this was a natural law. He didn't believe that markets always worked best. It was, rather, that he believed that places where markets failed were atypical; that where markets failed there were almost always enormous profit opportunities from entrepreneurial redesign of institutions; and that the market system would create new opportunities for trade that would route around market failures. Most important, his distrust of government told him that government failure was pervasive, and that any expansion of government beyond the classical liberal state would be highly likely to cause more trouble than it could solve.

For right-of-center American libertarians, Milton Friedman was a powerful leader. For left-of-center American liberals, Milton Friedman was an enlightened adversary, and one whose view is now ascendant. We are all the stronger for his work. We will miss him.

November 17, 2006

Economic Scene

A Charismatic Economist Who Loved to Argue

By AUSTAN GOOLSBEE

Someone walked into our lunchroom yesterday at the University of Chicago and announced that Milton Friedman had died. Mr. Friedman spent his intellectual life here, so I started asking people about him and what they remembered. It became clear that despite retiring almost 30 years ago (and despite being only 5-foot-3), he still casts a long shadow.

To much of the world, he is known for his free market, antigovernment message and his influence on conservative leaders. But in an interview, Mr. Friedman once said that while his efforts to influence public policy had received more public attention, they had been more of an avocation. “My real vocation,” he said, “has been scientific economics.”

What struck me as I talked with my colleagues yesterday was how Mr. Friedman’s legacy among economists is in some ways similar but in some ways quite different from the public view. His manner of research, his personality, even the topics he studied spawned a great deal of the economics we know today — even among economists whose politics differ greatly from his. A striking number of topics he worked on, for example, ultimately developed into other people’s Nobel awards.

One of Mr. Friedman’s major impacts on economics was in establishing a basic worldview. Economics is not a game or an academic exercise, in that view. Economics is a powerful tool to understand how the world works. He used straightforward theory. He gathered data from anywhere he could get it. He wanted to see how well economics fitted the world. That view now holds sway throughout much of the profession.

Mr. Friedman loved to argue. They say he was the greatest debater in all of economics. As improbable as it sounds, given Mr. Friedman’s small frame and thick glasses, few who saw him would deny that he had an astounding amount of charisma. It probably explains why he was so successful on television. While being an academic powerhouse, he really could explain things clearly.

Mr. Friedman brought his brashness and his love of debate to the University of Chicago and commenced the golden age for the most heralded center of economics. In his autobiographical statement for the Nobel in economic science, which he received in 1976, Mr. Friedman said that when he arrived in the 1930s, he encountered a “vibrant intellectual atmosphere of a kind that I had never dreamed existed.”

“I have never recovered.”

And we never recovered, either. Chicago remains a place with an intensity without precedent in the world of economics, where we seem to eat, drink and breathe economics, and Mr. Friedman’s personality has much to do with that. He always wanted to engage in a debate on something (or, according to his detractors, to make a pronouncement about something). Nowadays, much of the political edge to the research is gone — there are Democrats and Republicans on the faculty — but the intensity remains.

The funny thing about Mr. Friedman’s transition to iconic status is that it happened without his ever losing his bluntness. He wasn’t, necessarily, polite. Even at 93, he was out declaring that fixed exchange rates are price controls and so the euro is doomed. He really didn’t care if you liked what he said. That was true within economics just as much as it was in the policy arena.

Mr. Friedman was proof that a great economist could become famous for just talking about economics. But he wasn’t afraid to poke his nose in places where people said economists had no business being. He passed that attitude on to students like Gary S. Becker, who would win the Nobel in 1992, and in the wider profession, especially among a younger set of economists like Steven D. Levitt of “Freakonomics” fame.

Mr. Friedman’s legacy might mean laissez-faire politics to the outside world, but to economists — and especially Chicago economists — it is more about trying to understand how the world works and engaging in a debate about it.

When we heard the news at the University of Chicago that he had died, we actually stopped arguing and were quiet for a moment. It was a most extraordinary event for Chicago economists. Each of us seemed to contemplate Mr. Friedman’s legacy for ourselves. After that bit of calm, the argument resumed. It was, perhaps, just what the old man would have wanted.

Austan Goolsbee is a professor of economics at the University of Chicago Graduate School of Business. E-mail: goolsbee@gsb.uchicago.edu.

Could smog protect against global warming?

By Charles J. Hanley
The Associated Press

NAIROBI, Kenya — If the sun warms the Earth too dangerously, the time may come to draw the shade.

The "shade" would be a layer of pollution deliberately spewed into the atmosphere to help cool the planet. This over-the-top idea comes from prominent scientists, among them a Nobel laureate. The reaction here at the U.N. conference on climate change is a mix of caution, curiosity and some resignation to such "massive and drastic" operations, as the chief U.N. climatologist describes them.

The Nobel Prize-winning scientist who first made the proposal is himself "not enthusiastic about it."

"It was meant to startle the policy makers," said Paul J. Crutzen, of Germany's Max Planck Institute for Chemistry. "If they don't take action much more strongly than they have in the past, then in the end we have to do experiments like this."

Serious people are taking Crutzen's idea seriously. This weekend, NASA's Ames Research Center in Moffett Field, Calif., hosts a closed-door, high-level workshop on the global haze proposal and other "geoengineering" ideas for fending off climate change.

In Nairobi, meanwhile, hundreds of delegates were wrapping up a two-week conference expected to only slowly advance efforts to rein in greenhouse gases blamed for much of the 1-degree rise in global temperatures in the past century.

The 1997 Kyoto Protocol requires modest emission cutbacks by industrial countries — but not the United States, the biggest emitter of carbon dioxide and other heat-trapping gases, because it rejected the deal. Talks on what to do after Kyoto expires in 2012 are all but bogged down.

When he published his proposal in the journal Climatic Change in August, Crutzen cited a "grossly disappointing international political response" to warming.

The Dutch climatologist, awarded a 1995 Nobel in chemistry for his work uncovering the threat to Earth's atmospheric ozone layer, suggested that balloons bearing heavy guns be used to carry sulfates high aloft and fire them into the stratosphere.

While carbon dioxide keeps heat from escaping Earth, substances such as sulfur dioxide, a common air pollutant, reflect solar radiation, helping cool the planet.

Tom Wigley, a senior U.S. government climatologist, followed Crutzen's article with a paper of his own on Oct. 20 in the leading U.S. journal Science. Like Crutzen, Wigley cited the precedent of the huge volcanic eruption of Mount Pinatubo in the Philippines in 1991.

Pinatubo shot so much sulfurous debris into the stratosphere that it is believed to have cooled the Earth by 0.9 degrees for about a year.

Wigley ran scenarios of stratospheric sulfate injection — on the scale of Pinatubo's estimated 10 million tons of sulfur — through supercomputer models of the climate, and reported that Crutzen's idea would, indeed, seem to work. Even half that amount per year would help, he wrote.

A massive dissemination of pollutants would be needed every year or two, as the sulfates precipitate from the atmosphere in acid rain.

Wigley said a temporary shield would give political leaders more time to reduce human dependence on fossil fuels — the main source of greenhouse gases. He said experts must more closely study the feasibility of the idea and its possible effects on stratospheric chemistry.

Nairobi conference participants agreed.

"Yes, by all means, do all the research," Indian climatologist Rajendra K. Pachauri, chairman of the 2,000-scientist U.N. network on climate change, told The Associated Press.

But "if human beings take it upon themselves to carry out something as massive and drastic as this, we need to be absolutely sure there are no side effects," Pachauri said.

Philip Clapp, a veteran campaigner for emissions controls to curb warming, also sounded a nervous note, saying, "We are already engaged in an uncontrolled experiment by injecting greenhouse gases into the atmosphere."

But Clapp, president of the U.S. group National Environmental Trust, said, "I certainly don't disagree with the urgency."

In past years scientists have scoffed at the idea of air pollution as a solution for global warming, saying that the kind of sulfate haze that would be needed is deadly to people. Last month, the World Health Organization said air pollution kills about 2 million people worldwide each year and that reducing large soot-like particles from sulfates in cities could save 300,000 lives annually.

American geophysicist Jonathan Pershing, of Washington's World Resources Institute, is among those wary of unforeseen consequences, but said the idea might be worth considering ''if down the road 25 years, it becomes more and more severe because we didn't deal with the problem.''

By telephone from Germany, Crutzen said that's what he envisioned: global haze as a component for long-range planning. ''The reception on the whole is more positive than I thought,'' he said.

Pershing added, however, that reaction may hinge on who pushes the idea. ''If it's the U.S., it might be perceived as an effort to avoid the problem,'' he said.

NASA said this weekend's conference will examine "methods to ameliorate the likelihood of progressively rising temperatures over the next decades." Other such U.S. government-sponsored events are scheduled to follow.

November 17, 2006

Milton Friedman, 94, Free-Market Theorist, Dies

By HOLCOMB B. NOBLE

Correction Appended

Milton Friedman, the grandmaster of free-market economic theory in the postwar era and a prime force in the movement of nations toward less government and greater reliance on individual responsibility, died yesterday. He was 94 and lived in San Francisco.

His death was confirmed by Robert Fanger, a spokesman for the Milton and Rose D. Friedman Foundation in Indianapolis.

Conservative and liberal colleagues alike viewed Mr. Friedman, a Nobel laureate, as one of the 20th century’s leading economic scholars, on a par with giants like John Maynard Keynes and Paul Samuelson.

Flying the flag of economic conservatism, Mr. Friedman led the postwar challenge to the hallowed theories of Lord Keynes, the British economist who maintained that governments had a duty to help capitalistic economies through periods of recession and to prevent boom times from exploding into high inflation.

In Mr. Friedman’s view, government had the opposite obligation: to keep its hands off the economy, to let the free market do its work. He was a spiritual heir to Adam Smith, the 18th-century founder of the science of economics and proponent of laissez-faire: that government governs best which governs least.

The only economic lever that Mr. Friedman would allow government to use was the one that controlled the supply of money — a monetarist view that had gone out of favor when he embraced it in the 1950s. He went on to record a signal achievement, predicting the unprecedented combination of rising unemployment and rising inflation that came to be called stagflation. His work earned him the Nobel Memorial Prize in Economic Science in 1976.

Rarely, colleagues said, did anyone have such impact on his own profession and on government. Though he never served officially in the halls of power, he was around them, as an adviser and theorist.

“Among economic scholars, Milton Friedman had no peer,” Ben S. Bernanke, the Federal Reserve chairman, said yesterday. “The direct and indirect influences of his thinking on contemporary monetary economics would be difficult to overstate.”

Professor Friedman also fueled the rise of the Chicago School of economics, a conservative group within the department of economics at the University of Chicago. He and his colleagues became a counterforce to their liberal peers at the Massachusetts Institute of Technology and Harvard, influencing close to a dozen American winners of the Nobel in economics.

It was not only Mr. Friedman’s antistatist and free-market views that held sway over his colleagues. There was also his willingness to create a place where independent thinkers could be encouraged to take unconventional stands as long as they were prepared to do battle to support them.

“Most economics departments are like country clubs,” said James J. Heckman, a Chicago faculty member and Nobel laureate. “But at Chicago you are only as good as your last paper.”

Alan Greenspan, the former Federal Reserve chairman, said of Mr. Friedman in an interview Tuesday: “From a longer-term point of view, it’s his academic achievements which will have lasting import. But I would not dismiss the profound impact he has already had on the American public’s view.”

To Mr. Greenspan, Mr. Friedman came along at an opportune time. The Keynesian consensus among economists, he said — one that had worked well from the 1930s — could not explain the stagflation of the 1970s.

But he also said that Mr. Friedman had made a broader political argument: that you have to have economic freedom to have political freedom.

Mr. Friedman had a gift for communicating complicated ideas in simple and lucid ways, and it served him well as the author or co-author of more than a dozen books, as a columnist for Newsweek from 1966 to 1983 and even as the star of a public television series. He was a bridge between the academic and popular worlds, and his broader impact stemmed in large part from the fact that he was preaching a gospel of capitalism that fit neatly into American self-perceptions. He was pushing on an open door.

A Staunch Libertarian

As a libertarian, Mr. Friedman advocated legalizing drugs and generally opposed public education and the state’s power to license doctors, car drivers and others. He was criticized for those views, but he stood by them, arguing that prohibiting, regulating or licensing human behavior either does not work or creates inefficient bureaucracies.

Mr. Friedman insisted that unimpeded private competition produced better results than government systems. “Try talking French with someone who studied it in public school,” he argued, “then with a Berlitz graduate.”

Once, when accused of going overboard in his antistatism, he said, “In every generation, there’s got to be somebody who goes the whole way, and that’s why I believe as I do.”

In the long period of prosperity after World War II, when Keynesian economics was riding high in the West, Mr. Friedman alone warned of trouble ahead, asserting that policies based on Keynesian theory were part of the problem.

Even as he was being dismissed as an economic “flat-earther,” he predicted in the 1960s that the end of the boom was at hand. Expect unemployment to grow, he said, and inflation to rise, at the same time. The prediction was borne out in the 1970s. It was Paul Samuelson who labeled the phenomenon stagflation.

Mr. Friedman’s analysis and prediction were regarded as a stunning intellectual accomplishment and contributed to his earning the Nobel for his monetary theories. He was also cited for his analyses of consumer savings and of the causes of the Great Depression: he blamed the Federal Reserve, accusing it of bad monetary policy and saying it had bungled early chances for recovery. His prestige and that of the Chicago school soared, and his analysis of the Depression changed the way that the Fed thought about monetary policy.

Government leaders like President Ronald Reagan and Prime Minister Margaret Thatcher of Britain were heavily influenced by his views. So was the quietly building opposition to communism within the East bloc.

As the end of the century approached, Professor Friedman said events had made his views seem only more valid than when he had first formed them. One event was the fall of communism. In an introduction to the 50th-anniversary edition of Friedrich A. Hayek’s book predicting totalitarian consequences from collectivist planning, “The Road to Serfdom,” Mr. Friedman wrote it was clear that “progress could be achieved only in an order in which government activity is limited primarily to establishing the framework with which individuals are free to pursue their own objectives.”

“The free market is the only mechanism that has ever been discovered for achieving participatory democracy,” he said.

Professor Friedman was acknowledged to be a brilliant statistician and logician. To his critics, however, he sometimes pushed his data too far. To them, the debate over the advantages or disadvantages of an unregulated free market was far from over.

Milton Friedman was born in Brooklyn on July 31, 1912, the last of four children and only son of Jeno S. Friedman and Sarah Landau Friedman. His parents worked briefly in New York sweatshops, then moved their family to Rahway, N.J., where they opened a clothing store.

Mr. Friedman’s father died in his son’s senior year at Rahway High School. Young Milton later waited on tables and clerked in stores to supplement a scholarship he had earned at Rutgers University. He entered Rutgers in 1929, the year the stock market crashed and the Depression began.

Mr. Friedman attributed his success to “accidents”: the immigration of his teen-age parents from Carpatho-Ruthenia, at the time a province of Austria-Hungary and now part of Ukraine, enabling him to be an American and not the citizen of a Soviet-bloc state; the skill of a high-school geometry teacher who showed him a connection between Keats’s “Ode on a Grecian Urn” and the Pythagorean theorem, allowing him to see mathematical beauty; the receipt of a scholarship that enabled him to attend Rutgers and there have Arthur F. Burns and Homer Jones as teachers.

He said Mr. Burns, who later became chairman of the Federal Reserve, instilled in him a passion for scientific integrity and accuracy in economics; Mr. Jones interested him in monetary policy and a graduate school career at Chicago.

In his first economic-theory class at Chicago, he was the beneficiary of another accident — the fact that his last name began with an “F.” The class was seated alphabetically, and he was placed next to Rose Director, a master’s-degree candidate from Portland, Ore. That seating arrangement shaped his whole life, he said. He married Ms. Director six years later. And she, after becoming an important economist in her own right, helped Mr. Friedman form his ideas and maintain his intellectual rigor.

After he became something of a celebrity, Mr. Friedman said, many people became reluctant to challenge him directly. “They can’t come right out and say something stinks,” he said. “Rose can.”

In 1998, he and his wife published a memoir, “Two Lucky People” (University of Chicago Press), in which they reveled in “having intellectual children throughout the world.”

His wife is among his survivors. They also include a son, David, and a daughter, Janet Martel, four grandchildren and three great-grandchildren.

A Fateful Class

That fateful class at the University of Chicago also introduced him to Jacob Viner, regarded as a great theorist and historian of economic thought. Professor Viner convinced Mr. Friedman that economic theory need not be a mere set of disjointed propositions but rather could be developed into a logical and coherent prescription for action.

Mr. Friedman won a fellowship to do his doctoral work at Columbia, where the emphasis was on statistics and empirical evidence. He studied there with Simon Kuznets, another American Nobel laureate. The two turned Mr. Friedman’s thesis into a book, “Income From Independent Professional Practice.” It was the first of more than a dozen books that Mr. Friedman wrote alone or with others.

It was also the first of many “Friedman controversies.” One finding of the book was that the American Medical Association exerted monopolistic pressure on the incomes of doctors; as a result, the authors said, patients were unable to reap the benefits of lower fees from any real price competition among doctors. The A.M.A., after obtaining a galley copy of the book, challenged that conclusion and forced the publisher to delay publication. But the authors did not budge. The book was eventually published, unchanged.

During the first two years of World War II, Mr. Friedman was an economist in the Treasury Department’s division of taxation. “Rose has never forgiven me for the part I played in devising and developing withholding for the income tax,” he said. “There is no doubt that it would not have been possible to collect the amount of taxes imposed during World War II without withholding taxes at the source.

“But it is also true,” he went on, “that the existence of withholding has made it possible for taxes to be higher after the war than they otherwise could have been. So I have a good deal of sympathy for the view that, however necessary withholding may have been for wartime purposes, its existence has had some negative effects in the postwar period.”

After the war, he returned to the University of Chicago, becoming a full professor in 1948 and commencing his campaign against Keynesian economics. Robert M. Solow of M.I.T., a Nobel laureate who often disagreed with Mr. Friedman, called him one of “the greatest debaters of all time.” But his wisecracking style could infuriate opponents.

Mr. Samuelson, also of M.I.T., who was not above wisecracking himself, had a standard line in his economics classes that always brought down the house: “Just because Milton Friedman says it doesn’t mean that it’s necessarily untrue.”

But Professor Samuelson said he never joked in class unless he was serious — that his friend and opponent was, in fact, often right when at first he sounded wrong.

Mr. Friedman’s opposition to rent control after World War II, for example, incurred the wrath of many colleagues. They took it as an unpatriotic criticism of economic policies that had been successful in helping the nation mobilize for war. Later, Mr. Samuelson said, “probably 98 percent of them would agree that he was right.”

In the early 1950s, Mr. Friedman started flogging a “decomposing horse,” as Mrs. Thatcher’s chief economic adviser, Alan Walters, later put it. The horse that most economists thought long dead was the monetarist theory that the supply of money in circulation and readily accessible in banks was the dominant force — or in Mr. Friedman’s view, the only force — that should be used in shaping the economy.

In the 1963 book “A Monetary History of the United States, 1867-1960,” which he wrote with Anna Jacobson Schwartz, Mr. Friedman compiled statistics to buttress his theory that recessions, as well as the Great Depression, had been preceded by declines in the money supply. And it was an oversupply, he argued, that caused inflation.

In the late 1960s, Mr. Friedman used his knowledge of empirical evidence and statistics to calculate that Keynesian government programs had the effect of constantly increasing the money supply, a practice that over time was seriously inflationary.

Paul Krugman, a Princeton University economist and Op-Ed columnist for The New York Times, said Mr. Friedman then managed “one of the decisive intellectual achievements of postwar economics,” predicting the combination of rising unemployment and rising inflation that came to be called stagflation.

In this regard, his Nobel award cited his contribution to the now famous concept “the natural rate of unemployment.” Under this thesis, the unemployment rate cannot be driven below a certain level without provoking an acceleration in the inflation rate. Price inflation was linked to wage inflation, and wage inflation depended on the inflationary expectations of employers and workers in their bargaining.

A spiral developed. Wages and prices rose until expectations came into line with reality, usually at the natural rate of unemployment. Once that rate is achieved, any attempt to drive down unemployment through expansionary government policies is inflationary, according to Mr. Friedman’s thesis, which he unveiled in 1968.
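
The thesis just described is usually written today as the expectations-augmented Phillips curve; the notation below is the standard textbook form, not language from the obituary or from Friedman's 1968 address:

$$
\pi_t \;=\; \pi_t^{e} \;-\; \alpha\,(u_t - u^{*}), \qquad \alpha > 0,
$$

where \pi_t is inflation, \pi_t^{e} the inflation that workers and employers expect, u_t unemployment and u^{*} the natural rate. Holding unemployment below u^{*} requires actual inflation to keep outrunning expected inflation; as expectations catch up, \pi_t^{e} ratchets upward and inflation accelerates, which is exactly the spiral described above.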

For years economists have tried to pinpoint the elusive natural rate, without much success, particularly in recent years.

Mr. Friedman was right on the big economic issue of that time — inflation. And his prescription — to have the governors of the Federal Reserve System keep the money supply growing steadily without big fluctuations — figured in the thinking of policy makers around the world in the 1980s.

A Retort to Kennedy

Mr. Friedman also pursued his attack on Keynesianism in a more general way. He warned that a government allowed to regulate the economy could not be trusted to keep its hands off individual liberties.

He had first been exposed to this line of attack through his association with Mr. Hayek, who was predicting in the early 1940s that communism would lead inevitably to totalitarianism and the crushing of individual rights. In an introduction to a 1971 German edition, Professor Friedman called Mr. Hayek’s book “a revelation particularly to the young men and women who had been in the armed forces during the war.”

“Their recent experience had enhanced their appreciation of the value and meaning of individual freedom,” he wrote.

In 1962, Mr. Friedman took on President John F. Kennedy’s popular inaugural exhortation: “Ask not what your country can do for you. Ask what you can do for your country.” In an introduction to his classic book “Capitalism and Freedom,” a collection of his writings and lectures, he said President Kennedy had got it wrong: You should ask neither.

“What your country can do for you,” Mr. Friedman said, implies that the government is the patron, the citizen the ward; and “what you can do for your country” assumes that the government is the master, the citizen the servant. Rather, he said, you should ask, “What I and my compatriots can do through government to help discharge our individual responsibilities, to achieve our several goals and purposes, and above all protect our freedom.”

It was not that Mr. Friedman believed in no government. He is credited with devising the negative income tax, which in a modern variant — the earned-income tax credit — increases the incomes of the working poor. He also argued that government should give the poor vouchers to attend the private schools he thought superior to public ones.

In forums he would spar over the role of government with his more liberal adversaries, including John Kenneth Galbraith, who was also a longtime friend (and who died in April 2006). The two would often share a stage, presenting a study in contrasts as much visual as intellectual: Mr. Friedman stood 5 feet 3; Mr. Galbraith, 6 feet 8.

Though he had helped ignite the conservative rebellion after World War II, together with intellectuals like Russell Kirk, William F. Buckley Jr. and Ayn Rand, Mr. Friedman had little or no influence on the administrations of Presidents Dwight D. Eisenhower, Kennedy, Lyndon B. Johnson and Richard M. Nixon. President Nixon, in fact, once described himself as a Keynesian.

It was a frustrating period for Mr. Friedman. He said that during the Nixon years the talk was still of urban crises solvable only by government programs that he was convinced would make things worse, or of environmental problems produced by “rapacious businessmen who were expected to discharge their social responsibility instead of simply operating their enterprises to make the most profit.”

Rising With Reagan

But then, after the 1970s stagflation, with Keynesian tools seemingly broken or outmoded, and with Ronald Reagan headed for the White House, Mr. Friedman’s hour arrived. His power and influence were acknowledged and celebrated in Washington.

With his wife, in 1978 he brought out a best-selling general-interest book, “Free to Choose,” and went on an 18-month tour, from Hong Kong to Ottumwa, Iowa, preaching that government regulation and interference in the free market was the stifling bane of modern society. The tour became the subject and Mr. Friedman the star of a 10-part PBS series, “Free to Choose,” in 1980.

In 1977, having retired from teaching, he became a senior research fellow at the Hoover Institution at Stanford University. In 1988 he was awarded the Presidential Medal of Freedom and the National Medal of Science.

The economic expansion in the 1980s resulted from the Reagan administration’s lowered tax rates and deregulation, Professor Friedman said. But then the tide turned again. The expansion, he argued, was halted when President George H. W. Bush imposed a “reverse-Reaganomics” tax increase.

What was worse, by the mid-1980s, as the finance and banking industries began undergoing upheavals and money began shifting unpredictably, Mr. Friedman’s own monetarist predictions — of what would happen to the economy and inflation as a result of specific increases in the money supply — failed to hold up. Confidence in his monetarism theory waned.

Prof. Robert Solow of M.I.T., a Nobel laureate himself, and other liberal economists continued to raise questions about Mr. Friedman’s theories: Did not President Reagan, and by extension Professor Friedman, they asked, revert to Keynesianism once in power?

“The boom that lasted from 1982 to 1990 was engineered by the Reagan administration in a straightforward Keynesian way, by raising spending and lowering taxes, a classic case of an expansionary budget deficit,” Mr. Solow said. “In fairness to Milton, however, it should be said that one of the reasons for his wanting a tax reduction was to force the spending cuts that he presumed would follow.” Professor Samuelson said that “Milton Friedman thought of himself as a man of science but was in fact more full of passion than he knew.”

Mr. Friedman remained the guiding light to American conservatives. It was he, for example, who provided the economic theory behind “prescriptions for action,” as his onetime professor, Jacob Viner, put it, like the landslide Republican victory in the off-year Congressional elections of 1994.

By then Professor Friedman had grown into a giant of economics abroad as well. He was sharply criticized for his role in providing intellectual guidance on economic matters to the military regime in Chile that engineered a coup in the early 1970s against the democratically elected president, Salvador Allende. But for Mr. Friedman that was just a bump in the road.

In Vietnam, where the Constitution was amended in 1986 to guarantee the rights of private property, the writings of Mr. Friedman were circulated at the highest levels of government. “Privatize,” he told Chinese scholars at a meeting at Fudan University in Shanghai; and he told those in Moscow and elsewhere in Eastern Europe: “Speed the conversion of state-run enterprises to private ownership.” They did.

Mr. Friedman had long since ceased to be called a flat-earther by anyone. “What was really so important about him,” said W. Allen Wallis, a former classmate and later faculty colleague at the University of Chicago, “was his tremendous basic intelligence, his ingenuity, perseverance — his way of getting to the bottom of things, of looking at them in a new way.”

Louis Uchitelle and Edmund L. Andrews contributed reporting.

Correction: Nov. 18, 2006

An obituary yesterday about the economist Milton Friedman referred incorrectly to the land from which his parents emigrated as teenagers. It was Carpatho-Ruthenia, at the time a province of Austria-Hungary and now part of Ukraine; it was not Czechoslovakia. (Carpatho-Ruthenia was part of Czechoslovakia during the two decades before World War II.)

The obituary also rendered incorrectly the title of a poem by John Keats that a teacher of Mr. Friedman’s had drawn on in demonstrating the beauty of the Pythagorean theorem. It is “Ode on a Grecian Urn” — not “Ode to a Grecian Urn.”

MILTON FRIEDMAN: 1912-2006

Economist changed the world

By Jonathan Peterson
Times Staff Writer

November 17, 2006

MILTON FRIEDMAN, a brilliant champion of free-market economics and individual freedom who almost single-handedly altered the boundaries of public debate on an array of national issues, died Thursday in San Francisco. He was 94.

The cause was heart failure, said Robert Fanger, a spokesman for the Milton and Rose D. Friedman Foundation in Indianapolis.

Friedman was considered a leading economic thinker of the 20th century. His many prescriptions for policy, notably on managing the nation's money supply and curbing the welfare state, influenced presidents and presidential candidates dating to the 1960s. President Reagan and Margaret Thatcher, the former British prime minister, were among his fans. Friedman's sweeping, pro-capitalist ideas earned him legions of followers both domestically and overseas, while also sparking dissent and controversy.

He was awarded the Nobel Prize in economics in 1976 for a body of "original and weighty work," including his money supply research, which jurors said had influenced fellow scholars as well as the U.S. Federal Reserve and the central banks of other nations.

"He was a great man," said Allan H. Meltzer, an economics scholar at Carnegie Mellon University and the American Enterprise Institute. "It's hard to think of anybody who never held a government position of any importance who influenced our country — and the whole world — as much as he did."

The longtime San Francisco resident had been a senior research fellow at Stanford University's Hoover Institution since 1977.

Yet Friedman's influence extended far beyond the ivory tower. He became an economist-celebrity, promoting his passionate beliefs in books, magazines and television appearances. With confidence and a professor's logic, he sought to demolish the conventional belief after World War II that government should play a sweeping role in people's lives.

Taxes should be cut and simplified, he said, and society would benefit when personal choice reigned supreme.

Though some of his teachings about money lost influence over time, he ultimately attained the status of a capitalist icon through the purity and force of his broader world view.

"He had been a fixture in my life both professionally and personally for half a century," former Federal Reserve Chairman Alan Greenspan said in a statement Thursday. "My world will not be the same."

The current Fed chairman, Ben S. Bernanke, said in a statement that "Friedman had no peer" among economic scholars. "Just as important, in his humane and engaging way, Milton conveyed to millions an understanding of the economic benefits of free, competitive markets, as well as the close connection that economic freedoms bear to other types of liberty. He will be sorely missed."



Freedom to choose

In the 1960s, Friedman argued that personal retirement accounts made more sense than a mandatory system of Social Security, helping set the stage for the recent national debate about the issue. Similarly, he contended that parents should be allowed to choose which schools their children attend, laying the foundation for ongoing arguments about school choice.

This latter belief animated Friedman's final years, and promoting choice in schools became the mission of his foundation. "Why do America's universities have a greater reputation around the world than its public schools?" he once asked. "You have choice. That makes all the difference in the world."

His views did not break down into a right-left framework. He emerged as a leading voice in the Vietnam-era movement to end the draft, a position ultimately endorsed by President Nixon.

Friedman offered blunt advice on subjects as personal as laws against prostitution — he saw them as incursions into individual choice — and as sweeping as the international system of relatively fixed exchange rates, which he sought to overturn and which indeed collapsed in the early 1970s.

He became the human face of the influential "Chicago school" of economics, emphasizing the role of monetary policy, which affects interest rates, and the benefits of laissez-faire or free-market approaches to the economy.

Political leaders listened, granting almost unparalleled influence to a scholar whose free-market religion once seemed out of step with the times.

At one point, Stanford University tried to lure the influential professor from the University of Chicago to a "free enterprise" chair. But Friedman did not wish to be pigeon-holed, and he turned down the offer. "He felt it would have restricted him or branded him, and he didn't want to be branded," Meltzer said Thursday.

Not all of his ideas found lasting acceptance. The U.S. Federal Reserve, the Bank of England and other central banks eventually abandoned much of his monetary prescription.

After a series of corporate and financial scandals, the Friedman-style mantra for deregulation lost its allure for much of the public. Some of Friedman's social priorities, notably the legalization of drugs, never caught on.

Friedman himself acknowledged that in hindsight he might not have pushed the technical aspects of his money supply theories so aggressively. Yet he never wavered from the essence of his world view.

"You form a philosophy at a certain stage, and for the rest of your life it dominates," he told the Financial Times in June 2003. "On the big issues of policy, I don't think there is anything I've changed my mind about."



Growing up Friedman

Friedman was born in Brooklyn on July 31, 1912, the fourth child in a family of struggling Jewish immigrants from a region that is now part of Ukraine. When he was 13 months old, the family moved to the New Jersey town of Rahway.

The household was warm and supportive, and his childhood was generally happy, Friedman would later recall. But money was tight. Friedman's parents engaged in various enterprises with limited success, including a small store and an ill-fated ice cream parlor.

"Among my most vivid memories are heated discussions between my parents at night about where the money was to come from to pay incoming bills," Friedman recalled in his 1998 memoir, "Two Lucky People," written with wife, Rose.

Friedman made heavy use of Rahway's small library — "almost exhausting the contents" — and was an enthusiastic Boy Scout. Slight of stature — Friedman was about 5 foot 2 as an adult — he played on the school's chess team.

After graduating, he entered Rutgers University on a scholarship. Always enterprising, he supplemented the aid by waiting on tables, hustling fireworks, tutoring high-school students and clerking in a retail store. Friedman also began fruitful intellectual relationships with two professors: Arthur F. Burns, later to become chairman of the Federal Reserve, and Homer Jones, who lectured students on the importance of individual freedom.

He planned to become an actuary, but explained in an October 2000 PBS interview why he changed his mind during the Great Depression: "If you're a 19-year-old college senior, which is going to be more important to you: figuring out what the right prices ought to be for life insurance, or trying to understand how the world got into that kind of a mess?"

Friedman chose to pursue the latter question as a University of Chicago graduate student in 1932.

He went on to study economics at Columbia University, did a stint on a New Deal economics project in Washington, the National Resources Committee, and worked for the nonprofit National Bureau of Economic Research in New York.

He discovered firsthand that scholarly conclusions can lead to controversy. Friedman wrote in his doctoral thesis that physicians gained extra income through the powers of the American Medical Assn. to restrict entry into the profession. Those same powers, Friedman wrote, reduced the availability of medical care for the public.

Columbia University held up accepting Friedman's thesis for five years. "I was young and innocent at the time, and did not realize that a storm of protest would develop," Friedman said in his memoir. "I soon learned better."



Shaping monetary views

Friedman had not yet arrived at a vehement rejection of tax-and-spending policies as a way to steer the national economy. During World War II, he even worked for the federal government, advising the Treasury Department on wartime tax policy and other matters.

There followed a brief stint at the University of Minnesota. Then in 1946 Friedman joined the faculty of the University of Chicago, which would serve as his home base for the next 30 years.

"Being a student of Milton's was magic indeed," Gary S. Becker, a 1951 graduate student of Friedman who would win his own Nobel Prize, recalled years later. "People would always ask me, 'Why are you so excited? Are you going out on a date with a beautiful woman?' I said, 'No, I'm going to a class in economics.' "

Early on, Friedman arrived at one of his famous findings: The Federal Reserve had blundered terribly in the late 1920s and early 1930s, triggering the 1929 stock market crash and deepening the Great Depression through excessive tightness in the money supply. That knowledge, he believed, should protect America from a recurrence.

Friedman's lifelong views on freedom and personal choice also emerged in the 1940s and 1950s. In particular, he was moved by an economics professor named Friedrich Hayek, whose intellectual attacks on socialism galvanized a small group of true-believers.

At the time, mainstream Democrats and Republicans assumed that the federal government would retain a huge, ambitious role throughout society, serving as an equalizer for the disadvantaged. John Maynard Keynes, an eminent British economist, was the intellectual leader of the prevailing view.

Friedman set out to provide an alternative. In 1962, he published "Capitalism and Freedom," a widely read primer on his anti-government convictions, containing thoughts on monetary policy, public education, welfare and poverty. He blasted Social Security for restricting people's options in saving for retirement, pushed for a single-rate "flat" tax to replace the more complex system of multiple brackets, and called for an end to licensing boards.

The following year he published what many consider his masterpiece, an 800-page "Monetary History of the United States," written with economist Anna Jacobson Schwartz. It provided detailed evidence for his monetarist view that careful growth of the money supply was the paramount factor in sustaining economic growth.

His monetary views also enabled Friedman to predict the inflation of the 1960s and 1970s, relying on an analysis that was outside the mainstream of his profession.

In the 1964 presidential campaign, Friedman stepped directly onto the political battlefield.

Lyndon Johnson, the Democratic incumbent, advocated a Keynes-style Great Society of ambitious federal programs. Sen. Barry Goldwater, the conservative Republican nominee, was drawn to Friedman's policies and picked the economist as an advisor.

Johnson won in a landslide. But Friedman stayed at the forefront of the policy struggle, soon beginning a Newsweek column that provided him a forum for his ideas on taxes, red tape, import restrictions, the Federal Reserve and matters unrelated to economic policy.



Opposition to the draft

When Nixon appointed Friedman to a panel examining whether to abolish the draft, Friedman found that his anti-draft views put him at odds with Gen. William Westmoreland, the Army chief of staff and former Vietnam War commander.

At one point, Westmoreland said he did not want to command an army of "mercenaries."

"I stopped him and said, 'General, would you rather command an army of slaves?' " Friedman later recalled. "He drew himself up and said, 'I don't like to hear our patriotic draftees referred to as slaves.' I replied, 'I don't like to hear our patriotic volunteers referred to as mercenaries.' "

U.S. officials ended the draft in 1973. Friedman also offered a range of economic advice to the Nixon White House, including a plan for a negative income tax providing new aid to the poor.

But the economist was horrified when Nixon imposed wage and price controls, and Friedman blasted the plan as "pure window dressing which will do harm rather than good." He also was disappointed at Nixon's willingness to accept a batch of new regulatory agencies, including the Environmental Protection Agency, the Occupational Safety and Health Administration, and the Consumer Product Safety Commission.



An economics celebrity

Despite frustrations, his star was rising. On Oct. 14, 1976, reporters swarmed Friedman as he entered a Detroit news conference about a proposed state spending limit. To Friedman's surprise, they informed him that he had won the Nobel Prize in economics.

The Swedish academy credited Friedman with insights into managing the nation's money supply — it should grow gradually and steadily, he believed — and how households decide what they can afford to spend, based on expected lifetime earnings. They also cited Friedman for exploding the old fallacy that inflation could reduce unemployment.

"It is very rare for an economist to wield such influence, directly and indirectly, not only on the direction of scientific research but also on actual policies," the judges said.

But the prize also brought controversy.

Friedman had enraged human rights activists the previous year by giving a series of lectures in Chile, which was controlled by the military dictatorship of Augusto Pinochet. Critics accused the economist of bolstering Pinochet, whose government included Friedman disciples.

When the Friedmans traveled to Stockholm for the Nobel award, protesters were waiting. The furor annoyed the economist, who believed that economic freedom was a force for political freedom.



Friedman's involvement in policy deepened. In 1980 he served as an advisor to the presidential campaign of Ronald Reagan, and continued to offer some advice after Reagan was in the Oval Office.

Reagan soon supported a politically costly effort by the Federal Reserve to clamp down on the money supply and fight inflation, adhering to Friedman's principles. Inflation was conquered, but at the cost of a steep recession.

Friedman's celebrity skyrocketed in 1980 with the 10-part PBS series "Free to Choose," a compilation of Milton and Rose Friedman's shared philosophy about personal, political and economic freedom. A book based on the project became the year's top nonfiction seller.

Friedman continued his research at Stanford's Hoover Institution until the end of his life. He had a history of heart problems, including bypasses, but remained intellectually engaged, reading from a computer screen with enlarged type. One friend said Friedman had fallen in the shower shortly before his hospitalization in San Francisco.

In addition to his wife, he is survived by daughter Janet, an attorney, and son David, a professor of law and economics at Santa Clara University. Family members Thursday requested that any donations be made to the Milton and Rose D. Friedman Foundation. Friedman will be cremated and his ashes spread across the San Francisco Bay, his family said.

 

 November 12, 2006

How to Be Funny

Compiled by JOHN HODGMAN

How to Direct a Comedy Legend
By Paul Feig, director of the upcoming “Unaccompanied Minors”

When approaching the task of directing a comedy legend, the utmost care and skill must be applied. The reasons for this are threefold:

1. You do not want to make the legend come off badly onscreen by giving him or her poor direction.

2. You do not want to be so specific in your direction that you restrict the legend’s comedic and improvisational skills and, most important. . . .

3. You do not want to look like a talentless idiot in front of the legend.

Especially when that legend is Teri Garr.

I have directed a few legends in my career. I am proud of the fact that I have directed both the Oscar nominee Joan Plowright and the ultraconservative rocker Ted Nugent in acting roles. (Sadly, they did not perform together. Perhaps a future production of “Love Letters” could prove the proper forum to unite their talents.) But the thought of directing Teri Garr put me in a panic.

For you see, I spent most of my teenage and young-adult years in love with her. Ever since I saw her in “Young Frankenstein,” she has been my dream girl. She was funny, she was pretty, she was quirky. I thought she was the perfect woman. In fact, every woman I ever dated, including my wife, bears a striking resemblance to Ms. Garr.

So how could I now stand on a movie set and tell her what to do? I mean, I had seen the woman naked in “One From the Heart,” for crying out loud.

The minute I got to the set and met her, I realized that my angst had been misplaced. She was a wonderfully warm person, just the way I had always hoped and imagined she would be. As soon as the cameras started rolling, Teri Garr was funny. She ad-libbed, taking jokes I had written and making them funnier. In one scene, in which Teri was supposed to wake up out of a peppermint-schnapps-fueled haze, she sat up and revealed that she had a candy cane stuck in her bangs, which almost made me ruin the take by bursting into laughter. And I was suddenly a teenager sitting in a Michigan multiplex, in love with her all over again.

And that’s why directors work with legends — because they do not make us look like talentless idiots. They actually make us look good.

How to Be Directed by a Comedy Nonlegend
By Teri Garr, actress, “Unaccompanied Minors”

I’m going to try to be as gentle about this as possible, but for me it was not that easy working with a nonlegend. At this point in my life I try to surround myself with as many legends as possible. For example, the guy who reads my gas meter — a legend. My housekeeper — a legend. My dog — a legend (though only in our neighborhood). So to put myself in harm’s way like this was risky.

I had done this before, though. I risked doing “Young Frankenstein” because after seeing “The Producers,” I believed that Mel Brooks could be funny if only someone would let him cut loose. I agreed to do “Tootsie” with Dustin Hoffman even though he had done little else besides “The Graduate,” “Midnight Cowboy” and “Kramer vs. Kramer,” because I saw something there. But lately I seem to be running into more and more young people who in their own way are good, even great, but who are certainly not legends.

Whenever I’m asked to participate in a project, I ask myself three questions: 1) Does the character speak to me? 2) Does the film present a message the world needs to hear? 3) Is the check going to clear?

Often, to save time, I go right to the third question, and if the answer is yes, I’m in.

Working with Paul Feig, though it had its pitfalls, was a delight. He seemed to have a full knowledge of my oeuvre. This put me at ease and made me forget for a moment that he is not a legend. He directs actors on the “give them enough rope and they’ll hang themselves” theory. In other words, at the end of a take he doesn’t say, “Cut”; he just stares at you and hopes you’ll come up with something usable. Very clever, really. After a while I got used to this and prepared myself by thinking up dialogue in advance or just saying things backward.

In the end, I was glad I took a chance with this whippersnapper. They say, “Be nice to the people you meet as you’re climbing up the ladder of success because you’ll meet the same people coming down that ladder.” Not true, really.

I find that as I gently descend the ladder of fame (the same one I viciously clawed my way up), I’m meeting an entirely different set of people.

How to Write Your First Hollywood Comedy
By Garrison Keillor, star and screenwriter, “Prairie Home Companion”

1. Don’t start writing yet. (Very important.) Postpone writing. Too many writers make the mistake of plunging right in — Scene 1. Ext: the home of the zany Holmberg clan. The camera pans slowly across toward the driveway, where the young couple are necking in the back seat of the white Buick, and we see the three figures approaching with the water hose. Don’t do this. Writing the screenplay will only tangle you up in a lot of minutiae and inevitably lead to discouragement. Get the money first, then write.

2. Find a director. A famous one who is older than you and who is famous for improvised dialogue. This takes so much pressure off the screenwriter. Let’s say you choose Robert Altman. Call up your friend who knows a guy who went to college with a guy who is now Robert Altman’s attorney and wangle a dinner date with Mr. Altman. A three-course meal in a place with ficus plants and white tablecloths. Mr. Altman has just finished shooting a new picture and he is in a grand mood. He regales you with stories about his famous movies, and then, polite man that he is (he is from the Midwest), he asks if there was something you wished to talk about. “Yes, sir,” you say, “there is.”

3. Do not lead with your best idea. Your first idea is going to get shot down. Do not lead the ace. Lead the two of clubs.

You say: “Mr. Altman, I want to make a movie about a family named Boblett whose grandpa dies, and they have to bring his ashes to South Dakota and scatter them at Mount Rushmore — Gramps was a crusty old Republican and wanted his remains to be put up Jefferson’s left nostril. Anyway, it’s all about this family — one is into heavy metal and one is obsessive-compulsive about nasal cleanliness and one is a Wiccan covered with tattoos — and they have various misadventures and car breakdowns and then must try to climb up to the nostril. And there’s a lady park ranger named Chloe who accidentally takes a love potion.”

Mr. Altman looks off into the distance, pauses a decent interval and says: “It’s not for me. But keep in touch. Maybe we could come up with something else.”

4. Start writing Something Else. You set Mr. Altman up with the “Looking for Jefferson” idea, a weak one, and now he will read your new screenplay and say, “I can’t believe this came from the same bozo who tried to sell me the nostril picture.”

5. And here’s how you write the thing. You rewrite it, that’s how you write it. You rewrite the rewrite, then prune that and add other stuff. Your wife reads it and does not laugh at any of the hilarious parts, so you replace them with funny stuff. You turn the script over to Mr. Altman, and as he reads it, you reach over his shoulder and cross out lines.

Then Mr. Altman directs in his own inimitable style, encouraging improvisation, so in the end nobody quite understands it, and critics hail it as “one of his better pictures, if not among the very best,” which is not bad for you, and they offer you a nice deal to write your second picture. But that’s another problem. I can’t help you there.

How to Be Funny When You Are Incredibly Good-Looking
By Paul Rudd, actor, “The 40-Year-Old Virgin”

Comedy has always imperiled the attractive. Don’t think I don’t know it. Yet what rarefied air! To go where eagles soar. The greats: Grant. Beatty. Redford. The master classes: “Bringing Up Baby.” “Shampoo.” “Barefoot in the Park.” Still, lest you fly too close to the sun, mind the wing-melting failures: “Operation Petticoat.” “Ishtar.” “Legal Eagles.”

Alphas, I give you reason to rejoice! After years of study, I have come up with a near-foolproof guide for those, like me, who bear the unwished-for burden of physical near-perfection.

1. Do a silly dance every once in a while so people think you don’t take yourself too seriously. (Once, in an audition, I threw caution to the wind and danced an impromptu “Macarena.” Yes, I lost the role of Oskar Schindler, but I gained the respect of an industry.)

2. One thing you can control is how to wear your hair. A funny haircut can make a gorgeous person look almost average. Example: George Clooney on “Roseanne.”

3. Fight the urge to dress in tight clothing. We know we look good, but remember: sleeveless T-shirts = not funny. M.C. Hammer pants = funny.

4. Here’s one for the boys: Let yourself get kicked in the groin. If Zeppo Marx had taken one to the groin just once, it would have been a completely different story, believe me.

5. Don’t be afraid to manufacture a flaw. Hugh Grant’s affected stammer, for example. Or the famed pratfalls of Chevy Chase, which led a nation to wonder, Yes, he is a hottie — but does he have some horrible inner-ear problem?

6. In the same vein, spit takes and flatulence are always funny, regardless of how chiseled your chin or glutes.

7. Try alcohol to break down those inhibitions and see where that takes you. Who’s better looking, Jerry Lewis or Dean Martin? Got it? O.K., now who was more drunk? Exactly.

8. Finally, if all else fails, just be ugly inside. You’ll be surprised at the results!

To be extremely good-looking and funny may be hard, but it can be done. Look at me. In some circles I’m referred to as the “seventh Friend,” and I’m way better-looking than anyone on that show. If you’re ugly, pay no heed to these chestnuts and relish your unfair natural advantage. But to all you foxes out there, study closely, and who knows? With a little luck you just might be the next Alan Thicke.

How to Draw Funny Pictures
By Brad Bird, creator of “The Incredibles”

Because animation is a relatively complicated process, and because it is not spontaneous, it is often mischaracterized as purely mechanical. In reality, and at its best, the art of character animation exists somewhere between silent comedy and dance. Its success depends on finding a physical expression that is recognizable yet beyond what occurs in real life.

Fred Astaire had unusually large hands and learned how to use them in a way that made his dance more dynamic; he’d fold his hands for most of a routine, then flash them out for accents at key points. Their sudden increase in size made those moves pop in a way that other dancers couldn’t match. Animators use tricks like this all the time in ways that the audience never sees but always feels. Bugs Bunny, imitating the conductor Leopold Stokowski in concert, will violently raise his arms in one-twelfth of a second (two frames of film). Every part of his body will be rock-still — save for Bugs’s quivering hand.

It is impossible for a living being to do this, but not for Bugs. He is truly Stokowski, more Stokowski than Stokowski was himself, because Bugs is the impression of Stokowski: his power, his arrogance, his supreme control over his musicians, perfectly boiled down to its essence. We laugh because it is completely unreal and utterly truthful in the same moment.

How to Punch It Up
By Patton Oswalt, actor and screenwriter

I do a lot of punch-up in Hollywood. Punch-up is where they get a bunch of screenwriters and comedians together to sit around a table and add jokes to a yet-to-be-filmed script. It’s fun. They usually have it at a nice hotel, and there’s coffee and bagels, and later they bring in lunch. Then snacks.

The only people who get asked to do punch-up are people who have already written some very decent original scripts of their own. The kind of scripts where you racked your brain coming up with an original concept, ground your teeth making sure the characters and their dialogue were alive and funny and, finally, drank a lot of Red Bull to finish the thing on the last night of the eight-week period you had to write it. These scripts then make the rounds of the studios, where studio people read them, roll them into a tube, put the tube in a rocket and then shoot it into the ocean.

But the studio people remember your script. And they remember your name when they give some other writer a tugboat full of heroin and diamonds for his first draft. That’s when they realize they need you and all your friends (whose names are on the same list as yours because of their scripts that have been shot into the ocean) to punch it up. So what’s really going on is this: A mediocre writer is being punished with a huge paycheck and a produced movie while a bunch of funny, talented writers are being rewarded by getting to punch up his horrible script.

Lately I’ve been doing punch-up on computer-animated films, but the trick with doing punch-up on these movies is that unlike the live-action script, which hasn’t been filmed yet, the computer-animated film is usually 80 percent complete by the time we see it. And when I say 80 percent complete, I mean, “We’ve spent $120 million on this, so we really can’t change anything.”

“Uh, well then,” you’ll ask, through a mouthful of takeout Chinese, “what exactly do you want us to do?”

“What we need is for you guys to come up with funny off-screen voices yelling funny things over the unfunny action.”

I didn’t know you could make comedies that way! This is comforting news. Can I take old super-8 footage of a kid’s birthday party, where none of the other kids showed up? And he’s sitting at the kitchen table, and he’s got his little birthday hat on, and a lonely little cake, and he’s crying, and just when you’re about to kill yourself from the pathos, someone offscreen yells:

“I just fell on my fanny in some butterscotch!”

Wow, you’ll think, suddenly cheerful. Someone I can’t see, or will ever see, just fell into some butterscotch and is now talking about it out loud the way no one does or has, ever!

Did I mention there’s lunch?

How to Do a Deadpan
By Bob Balaban, actor

Deadpan: a vaudeville term coined in the 1920s to describe a comic with an expressionless face, pan being slang for face, and dead being dead. Think Jack Benny. Buster Keaton. Christopher Guest. Deadpan is the double take without the take, the mysterious, hysterically funny nothing.

In “Steamboat Bill Jr.,” Buster Keaton walks into a neighborhood that has been devastated by a cyclone. He stops in front of a house. The house begins to fall. Keaton, of course, is unaware of it. The shot is wide enough for you to know that Keaton is really there and that the house is really falling. Audiences reportedly shouted: “Look out! Look out!” at the movie screen during this sequence. The house falls, the audience gasps, the dust rises, and when it clears, there is Keaton, expressionless, standing in the safety of an open attic window that has fallen around him. He walks away as if nothing has happened to him. That’s major deadpan.

Here are some rules for deadpan:

1. This thing works better the less you do. You could actually be dead and get pretty good results. Lowercase yourself — clear your mind, silence your inner voices, disappear, be nothing. Don’t forget, nothing can be really something. An accomplished deadpan can create a force field akin to a black hole.

2. Don’t act. Deadpan is, by definition, the antithesis of acting. Deadpan allows the audience to imagine your reaction. You are the ultimate Rorschach test. You are Peter Sellers in “Being There.”

3. Like the proverbial dark gray suit, deadpan is appropriate for almost every occasion. It’s the way to go whether you are on the receiving end of spoonfuls of baby food thrown by a peckish infant, or in a speedboat and the female bass player to whom you have just proposed takes off her wig and tells you she’s a man, or if you are about to have a house fall on you.

4. And this is an absolute absolute — do not comment on the deadpan. The audience must never know that you know a house has fallen on you. You are not in on the joke, and the audience will love you for it. They will feel superior. Let them.

Doing nothing is not for everyone. A great deadpan is a rara avis. But who knows? In the brave new world of Botox and Restylane, today’s Jim Carrey may become tomorrow’s master of the expressionless expression.

How to Dress Funny
By Jerusha Hess, co-writer and costume designer, “Napoleon Dynamite”

I had a whopping $1,500 to spend on the entire wardrobe for “Napoleon Dynamite.” Everything was bought secondhand, borrowed or handed down. I made Napoleon’s “Vote for Pedro” shirt in 15 minutes from a handful of iron-on letters and an old ringer T. The iconoclastic moon boots were donated by my uncle Wally, already stinking. Napoleon’s Hammer pants were a direct fashion theft from my beefcake brother circa 1992. When I dressed Jon Heder for the first day of shooting, his hair still reeking from the home perm my cousin and I gave him the night before, I put him in an “Endurance” T-shirt tucked into a pair of gray acid-washed jeans, which in turn were tucked into the boots. I thought, Now this looks good. Though his wardrobe was basically a collection of superworn T-shirts with various Dungeons and Dragons paraphernalia screen-printed on them, he worked them.

There are many other movies whose costumes make me laugh: the hockey players’ stormtrooper-like outfits in “Strange Brew,” any clothes worn in “Logan’s Run,” the Reynolds twins in the dance scene from the BMX movie “Rad” and the Kryptonian bad guys in “Superman II.” I guess I just really like poly-blend spacesuits.

Want to dress funny yourself? For warm weather, begin with a pair of pastel culottes and then don an oversize polo of coordinating color. Layer two pairs of different colored socks so they match your shorts and shirt. And finally throw on your favorite pair of Tevas. If it’s cold out, grab your old Girbaud jeans, the ones with all of those comfort pleats in front. Perhaps moon boots have come and gone (for the second time), but here are a few things I think will never go out of style: flesh-colored eye-patches, large Mormon families at Disneyland in matching neon green T-shirts, capes, Ace bandages, families with matching perms, orthodonture on adults and anything you wore eight years ago. Now put those culottes back on, you look great.

How to Play the Straight Man
By Luke Wilson, actor

Three days ago I watched a documentary about Tom Dowd, the longtime Atlantic Records producer and engineer, who worked with musicians ranging from Otis Redding to Booker T. and the MGs to Eric Clapton, turning the dials, encouraging and coaxing great musicians during legendary recordings. I think Tom Dowd’s role was similar to the straight man’s: you are there while someone else shines. And if you play your part well, the other person shines even brighter.

I think I’ve been playing the straight man ever since I first realized I was in over my head academically. Math in particular. And science, come to think of it. Not to overlook foreign languages. Not really knowing what was going on in class — and not really caring to understand or actually taking the time to study — I put a great deal of effort into my expression. Earnest yet vacant. Yearning yet lost. I had one simple goal for the teachers. I wanted them to think: This Wilson kid might not be that bright, but damn it, he’s trying. The poor bastard.

How to Be Funny While Going Very Fast
By Adam McKay, director, “Anchorman” and “Talladega Nights”

From what I’ve seen, there are basically three jokes you can make in vehicles that are going superfast. There’s the one when the usually well-spoken and wise-cracking main character is reduced to saying simply “Holy [expletive]!” as the alien spacecraft he has hijacked rockets out of the mother ship or his Humvee falls from a skyscraper (before he realizes that it has a parachute). This can also be just “[Explet-i-i-i-i-i-i-i-i-i-ve]!” without the “Holy.” That’s a creative call. But this line or joke always works. Always.

The second joke is the one when the incredibly tricked-out vehicle goes on some amazing chase before crashing or stopping in some end-over-end way, and the main character suddenly remembers what country he’s in and becomes a proper consumer and says, “I gotta get me one of these.”

The third joke for vehicles going superfast is the “You thought I knew what I was doing, but I don’t” joke. This involves the confident main character taking the wheel of the imposing vehicle (starship, tank, ghost ship, submarine, W.W.I. dirigible with side-mounted Gatling guns) and starting it up.

The sidekick then says, “You can drive this, can’t you?”

The confident main character then says:

“No. Can’t you?” or “I think so,” or “I saw it in a movie once,” or “Define ‘drive.’ ”

And then we’re off. Halfway through the chase, you can also have the sidekick say, “So were you ever going to tell me you didn’t know how to fly one of these?” And the main character can say, simply, “Nope,” and then crash into something. Once again, this never fails.

John Hodgman is a contributing writer for the magazine and the author of “The Areas of My Expertise.”

Economics Of Abundance Getting Some Well Deserved Attention

from the about-time dept

It's great to see Chris Anderson getting lots of attention for his recent talk on "the economics of abundance," because it's exactly the type of thing that a lot of people have been discussing for a while -- but which still hasn't permeated the mainstream. In fact, it's quite similar to the talk I gave at the Cato Institute back in April, discussing why certain people seemed to have so much trouble grasping the economics of the digital age. Basically, it's the problem that occurs when people focus too hard on the idea that economics is the study of resource allocation in the presence of scarcity. That only makes sense when there's scarcity -- and in digital goods, scarcity doesn't exist.

Dave Hornik has a wonderful post about Anderson's talk, while Ross Mayfield is also discussing how he's come to realize that the economics of scarcity doesn't apply digitally. Now, if we stuck with the focus on "scarcity," then I should be upset that these two are basically repeating the "idea" I discussed back in April. Those who keep harping on the importance of "property" and love to say that you can "steal" content might even say that this idea was "stolen." That, obviously, is ridiculous. These are basic ideas that we have all come to realize are fundamental truths of economics. And, it's hardly a new idea (which is why my one quibble with Anderson's own post is his decision to call the idea of the economics of abundance a "radical attack"). Mayfield talks about those who helped him realize it, from Jerry Michalski to Howard Rheingold. In the comments to that post, Julian Bond brings up the ideas of Buckminster Fuller and Alan Cooper. In my case, the inspiration came from many different people, including the teachings of Alan McAdams (my old mentor and professor) and the writings of Brian Arthur (who focused on "increasing marginal returns" rather than diminishing ones) and even back to Thomas Jefferson, who famously said:

If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.

That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation. Inventions then cannot, in nature, be a subject of property.


So, no, it's not a new idea at all -- and yet, many people still don't seem to want to understand it. They don't believe that the free market can function with a lack of scarcity. It's understandable why that could make some uncomfortable -- but it's a fundamental misunderstanding based on this desire to force scarcity where there is none, just so economics can continue to be the study of scarcity. It's this inability to get rid of that scarcity thinking that's holding back a number of developments these days, and the more people who realize this and the more people talking about this, the better. And that fits with the theory of abundance itself: the more abundant this discussion is, the more likely people are to grasp it. And, it's especially exciting that someone like Chris Anderson is pitching it, because of his ability to take complex economic ideas and make them easy to understand, while getting people to listen. Hopefully, this is just the beginning of a widespread discussion about this topic.

 

 The Importance Of Zero In Destroying The Scarcity Myth Of Economics

from the let's-go-back-to-basic-math dept

A couple of weeks ago, I wrote about all the well-deserved attention being placed on the idea of the economics of abundance. The discussion around that topic, both in the comments and in a series of emails, has been quite interesting. Not everyone agrees with the concept, and that presents something of a challenge. It always interests me to figure out what the points of disagreement are in various intellectual debates. Larry Lessig recently quoted Al Gore in talking about the importance of "removing blocks." Basically, the idea is that, when people disagree with you, look closely at the points on which they disagree, and try to figure out why that disagreement is occurring. You can often learn some very interesting things, while challenging your own opinions. I am planning a series of posts on this topic, to see if we might be able to clear out the blocks concerning the economics of digital content. To start it off, I'm going to recap the talk (and the reasoning behind it) I gave back in April.

When Jim Harper of the Cato Institute kindly invited me to be on a panel discussion about copyrights at Cato in their Washington DC office, I had a lot of trouble figuring out what I was going to talk about. I had been spending a lot of time trying to understand why there was such a split among folks who prided themselves on having a "free market" or libertarian view of the world -- but who seemed to completely disagree on the economics of content. It bothered me that people who started with the same fundamental toolbox ("the free market is good") would end up at such widely divergent views. On the one side were folks like the Progress & Freedom Foundation, who felt that strong intellectual property laws (including things like stronger protections of DRM) were necessary to build an economy around content. On the other were folks like myself, Tim Lee and David Levine, who saw that these intellectual property laws were basically government-granted monopolies that could hold back economic progress.

As I mentioned in my recap to the panel discussion, I had my "Eureka!" moment on the airplane to DC. While I'd been reading through a bunch of text books on the economics of intellectual property, the history of intellectual property and the history of economics -- none seemed to answer the question of where the breakdown occurred. So I gave it all up, and decided to reread a book I'd picked up at a used book sale a few years back, called "Zero: The Biography of a Dangerous Idea" which is a fascinating history of the number zero -- and the fact that not only did it take societies ages to even recognize the number zero, it was considered heretical in some areas for a while. Zero caused all sorts of problems in that it didn't work like other numbers. It isn't a number. It's the absence of a number, and that screws up a lot of things. For thousands of years, it held back progress. You can't have advanced math or physics without an understanding of zero -- and the difficulty in accepting it was a real problem.

Of course, for all of us who learned about zero in elementary school, this seems laughable. How could zero be such a difficult concept to understand? Except, as I read the book, it occurred to me that it's the exact same problem that was causing this breakdown in the discussion. It's incredibly easy to misunderstand zero in economics. That's because economics, we're often taught, is the "science of scarcity" or understanding resource allocation in the presence of scarcity. All too often, economics itself is defined by scarcity. The "zero" changes all of that. Plugging a zero into an equation that expects a non-zero sends it haywire (think of what happens when you divide by zero) -- and that leads people to think that the equation must be broken. So, for example, basic economics tells you that a free market will push prices towards their marginal costs. If their marginal costs are zero (as is the case with digital goods and intellectual property), then it says that price will get pushed towards zero. However, this makes people upset, and makes them suggest the model is broken when a zero is applied. They see a result where there is no scarcity, and it doesn't make sense to them since they've always understood economics in the context of scarcity.

However, the point is that if you understand the zero, there's nothing to worry about and the model works perfectly. It just requires a recognition that the scarcity doesn't exist. Instead, you have abundance. You can have as much content as you need -- and in that world, it makes perfect sense that there's no cost, because without scarcity there need not be a cost. Supply is infinite, and price is zero. That does not mean, however, that there's no business. Instead, it just means you need to flip the equation and use the zero to your advantage. Instead of thinking of it as forcing a "price" of zero, you think of it as being a "cost" of zero. Suddenly, you've lowered the cost of making something to nothing -- and you should then try to use as much of it as you can. One simple example of this is to use that item that "costs" zero as a promotional good for something that does not have a zero marginal cost. When you realize how zero factors in, you realize that there's nothing new or radical here at all. It's just coming to terms with the idea that free market economics still works in the face of zero (in fact, it thrives) and there's no reason to put in place government-sanctioned barriers to shape the market.
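
To put rough numbers on the flip described above, here is a minimal sketch in Python. The download count, conversion rate and ticket figures are purely hypothetical (an editor's illustration, not anything from the talk or from a real business); the point is only that a good with zero marginal cost becomes a free promotional input for a scarce, paid complement.

# A minimal sketch, with made-up numbers, of treating a zero-marginal-cost
# good as a promotional input for a scarce, paid complement.

def profit(free_copies, conversion_rate, ticket_price, ticket_cost):
    # Each digital copy costs nothing extra to give away (marginal cost = 0),
    # but some fraction of recipients buys the scarce good (say, a live show).
    digital_cost = 0 * free_copies
    tickets_sold = free_copies * conversion_rate
    return tickets_sold * (ticket_price - ticket_cost) - digital_cost

# 100,000 free downloads, a 1% conversion rate, and a $30 ticket that costs
# $10 to provide:
print(profit(100_000, 0.01, 30, 10))  # -> 20000.0

Vary the numbers however you like; the structure is what matters. The "free" good costs nothing to replicate, so every copy given away is pure promotion for the thing that is still scarce.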

music box
Borat Owes Me 97 Dollars
How Sacha Baron Cohen is Jewish vaudeville.
By Jody Rosen
Posted Friday, Nov. 3, 2006, at 1:18 PM ET

Sacha Baron Cohen has pulled off some coups in his brief career as a comic trickster, but nothing—not Ali G's inveigling of Pat Buchanan into a discussion of the Iraq War and Saddam's possession of "BLTs," not his bum-rushing of the Alabama cheerleading squad in the guise of Bruno, the gay fashion reporter from Austria—has come close to topping Borat's performance of "In My Country There Is Problem (Throw the Jew Down the Well)" on an episode of Da Ali G Show. It's a densely packed piece of sociopolitical parody: an incitement to pogrom ("Throw the Jew down the well/ So my country can be free/ You must grab him by his horns/ Then we have a big party") sung by a British Jew disguised as a Central Asian bumpkin before a whooping, Bud-swilling audience at a Tucson, Ariz., honky-tonk. It's hilarious. It's catchy. And it's a perfect distillation of Borat's satirical attack, designed to offend and indict just about everyone: Old Europe and Middle America, fulminating right-wingers and piously PC liberals, Kazakhstan's President Nursultan Nazarbayev and Anti-Defamation League director Abraham Foxman.

Pundits have lumped Baron Cohen in with Ricky Gervais, Larry David, and other practitioners of the new "cringe comedy." There's plenty to cringe at in Borat—see, for example, the episode in which he regales his elderly lunch companions with tales of his dalliance with a Gambian prostitute. But Baron Cohen is drawing on a much older tradition—he's really a vaudevillian. Borat revives the dialect comedy that thrived during the first decades of the last century, when American popular culture functioned as a kind of psychic clearinghouse for anxieties about the millions of new European immigrants and black Southern migrants flooding into the nation's big cities. In those years, the vaudeville stage was overrun with singing and joke-telling "impersonators" of ethnics. Sheet-music covers from the period—the songs churned out by Tin Pan Alley for vaudeville routines—offer a panorama of the types: the loutish, drunken Irishman; the mustachioed, stiletto-wielding Italian; the inscrutable, opium-smoking Chinese; the grasping, hook-nosed Jew; and, of course, the ubiquitous "coon" depicted by a million shucking-and-jiving wearers of burnt cork.

Dialect comedy is still a staple of popular entertainment—minstrelsy hasn't gone away so much as taken on slightly subtler forms. The shock of Baron Cohen's shtick, though, is its coarseness; from his gross sexual proclivities (peccadilloes include prostitutes, incest, and—jackpot!—sex with his prostitute sister), to his malapropisms, to his tacky fake mustache that looks like it was lifted straight out of the prop box of old-time vaudeville stars like the famous "Dutch" (that is, German) impersonators Joe Weber and Lew Fields, Borat is a throwback to the crudest kind of vaudevillian ethnic burlesque, the stuff that we thought was smoothed out of pop culture long ago. The essence of Borat's act is the same as the dialect comics of 1910: the slapstick story of a greenhorn immigrant, bumbling his way across America, mangling the English language, misapprehending the native customs, and looking ridiculous in a big cowboy hat.

I've spent a dozen or so years digging in libraries and trawling flea markets while researching the Jewish variant of this vaudeville type, the comic "Hebrew." (The result of these efforts—besides a massive eBay debt and a collection of sheet music upsetting to the mishpochah—is a new compilation CD of vaudeville-era Jewish ditties, Jewface.) Hebrew comedy offers some interesting parallels with Borat's "Throw the Jew Down the Well." Songs such as "Cohen Owes Me 97 Dollars," "Get a Girl With Lots of Money Abie," and "I Want to Be An Oy, Oy, Oyviator" traded in some of the oldest and most grotesque Jewish stereotypes, depicting Jews as schlemiels, cowards, money-grubbers, and buffoons.

Yet while Jews of a certain station—the Central Conference of American Rabbis and, yes, the Anti-Defamation League—organized campaigns to wipe out Hebrew comedy, they were met with furious resistance in, of all places, the Jewish press. Hebrew comedy, it turns out, was a Jewish enterprise: The songs were largely composed by Jewish songwriters, published by Jewish-owned music firms, performed by Jewish vaudevillians in Jewish-run theatrical circuits before cheering Jewish audiences. There are precedents, in other words, for "Throw the Jew Down the Well," an anti-Semitic song bellowed heartily by a Jew.

The cultural dynamics of Sacha Baron Cohen's song and Irving Berlin's "Cohen Owes Me 97 Dollars" (1915) are vastly different—the difference is 90 years of Jewish history. "Cohen" and its ilk were assimilationist anthems: The Jews who embraced caricatures of blundering greenhorns were asserting their sophistication, laughing at the comic hebe to prove that they had passed out of their own awkward greenhorn phase. The songs were love letters to the New World, designed to cleanse all who got the joke of the Old World taint. A decade after "Cohen Owes Me 97 Dollars," Berlin and a new generation of Jewish tunesmiths had moved on to crafting elegant love songs for all-American crooners. (Berlin, of course, would become the specialist in post-ethnic musical Americana: "God Bless America," "Easter Parade," "White Christmas.") The next time Jewish dialect music surfaced in the pop mainstream was on Broadway, when a new generation began to sentimentalize, of all things, the deprivations and piety of the Pale of Settlement. What was the Fiddler on the Roof score if not a collection of "Hebrew" dialect tunes?

Viewed against this backdrop, "Throw the Jew Down the Well" looks like nothing less than the angriest and most extraordinary piece of Jewish-themed music that has ever bubbled to the surface of American popular culture. It's a dialect song sung not in the voice of the greenhorn, or the assimilated Jewish-American smoothie, or the saintly shtetl-dweller, but by the Old World tormentor. And, Borat's performance of the song insists, in the face of nearly a century of Jewish pop-cultural passing and ventriloquism, that the Jews never did assimilate after all, that the lynch mob is waiting just over the hill—or downing brews beneath Stetsons at the local watering hole—waiting to "grab him by his horns" and hurl him down. It's classic Jewish paranoia, of the kind voiced darkly in the privacy of Jewish homes, and in the lyrics of another famous novelty song, Tom Lehrer's "National Brotherhood Week" (1965): "Oh, the Protestants hate the Catholics/ And the Catholics hate the Protestants/ And the Hindus hate the Moslems/ And everybody hates the Jews." You want to dismiss it out of hand, but Borat's song isn't just a comedy number—it's an exposé. Watch those bar patrons singing along and you can't help but wonder: In my country is there problem?

Jody Rosen is Slate's music critic. He lives in New York City. He can be reached at slatemusic@gmail.com.

Get ready for 24-hour living

18 February 2006

NewScientist.com news service

Graham Lawton

SO MUCH to do, so little time. Between a hectic work schedule and a thriving social life, Yves (not his real name), a 31- year-old software developer from Seattle, often doesn't have time for a full night's sleep. So he swallows something to make sure he doesn't need one. "If I take a dose just before I go to bed, I can wake up after 4 or 5 hours and feel refreshed," he says. "The alarm goes off and I'm like, let's go!"

Yves is talking about modafinil, a stimulant that since its launch seven years ago has acquired a near-mythical reputation for wiring you awake without the jitters, euphoria and eventual crash that come after caffeine or amphetamines. Yves has been popping modafinil on and off for the past three years and says it is "tremendously useful". "I find I can be very productive at work," he says. "I'm more organised and more motivated. And it means I can go out partying on a Friday night and still go skiing early on Saturday morning."

Modafinil is just the first of a wave of new lifestyle drugs that promise to do for sleep what the contraceptive pill did for sex - unshackle it from nature. Since time immemorial, humans have structured their lives around sleep. In the near future, we will, for the first time, be able to significantly structure the way we sleep to suit our lifestyles.

"The more we understand about the body's 24-hour clock the more we will be able to override it," says Russell Foster, a circadian biologist at Imperial College London. "In 10 to 20 years we'll be able to pharmacologically turn sleep off. Mimicking sleep will take longer, but I can see it happening." Foster envisages a world where it's possible, or even routine, for people to be active for 22 hours a day and sleep for two. It is not a world that everyone likes the sound of. "I think that would be the most hideous thing to happen to society," says Neil Stanley, head of sleep research at the Human Psychopharmacology Research Unit in the University of Surrey, UK. But most sleep researchers agree that it is inevitable.

If that sounds unlikely, think about what is already here. Modafinil has made it possible to have 48 hours of continuous wakefulness with few, if any, ill effects. New classes of sleeping pills are on the horizon that promise to deliver sleep that is deeper and more refreshing than the real thing. Further down the line are even more radical interventions - wakefulness promoters that can safely abolish sleep for several days at a stretch, and sleeping pills that deliver what feels like 8 hours of sleep in half the time. Nor is it all about drugs: one research team even talks about developing a wearable electrical device that can wake your brain up at the flick of a switch.

To some degree, we are already adept at controlling sleep. Most people in full-time work deprive themselves of sleep during the week, deliberately or otherwise, and catch up at the weekend. We often augment our sleep-suppressing powers with caffeine, nicotine or illegal stimulants such as cocaine and amphetamines. We are also highly dependent on substances that help us sleep. According to some estimates, 75 per cent of adults suffer at least one symptom of a sleep problem a few nights a week or more. In 1998, a team from the Henry Ford Health Sciences Research Institute in Detroit, Michigan, published a study revealing that 13 per cent of adult Americans had used alcohol to help them get to sleep in the previous year, and 18 per cent had used sleeping pills (Sleep, vol 21, p 178).

Despite the enormous resources that we pour into getting good sleep and wakefulness when we want them, most of the drugs at our disposal are crude instruments at best. The vast majority of sleeping pills - known in the business as hypnotics - are simply "knockout drops" that put you in a state almost like sleep but without its full restorative properties. "Hypnotic-induced sleep is better than no sleep, but it isn't natural sleep," says Stanley. With their addictive nature, the drugs we use to keep us awake, such as coffee and amphetamines, are even worse. In combination with our clock-watching lifestyles, these sleep and wake aids are driving ever more people into what Foster calls the "stimulant-sedative loop" where they need nightly help getting to sleep and daily help staying awake.

Modafinil has changed the rules of the game. The drug is what's known as a eugeroic, meaning "good arousal" in Greek. It delivers natural-feeling alertness and wakefulness without the powerful physical and mental jolt that earlier stimulants delivered. "There are no amphetamine-like feelings," says Yves. And as Yves' way of taking it shows, being on modafinil doesn't stop you from falling asleep if you want to.

In fact, its effects are so subtle that many users say they don't notice anything at all - until they need to. "I wouldn't say it makes me feel more alert or less sleepy. It's just that thoughts of tiredness don't occur to me," says Yves. "If there's a job at hand that I should be doing, I'm focused, but if I'm watching a movie or something, there is no effect."

People who take modafinil for medical reasons usually take just enough of the drug in the morning to see them through the day, but it also seems to be able to deliver sustained wakefulness - for a couple of days at least. "The military has tested sequential dosing," says Jeffrey Vaught, president of R&D at Cephalon, modafinil's Pennsylvania-based manufacturer. "It works for 48 hours or so, but eventually you need to sleep."

Perhaps the most remarkable thing about modafinil is that users don't seem to have to pay back any "sleep debt". Normally, if you stayed awake for 48 hours straight you would have to sleep for about 16 hours to catch up. Modafinil somehow allows you to catch up with only 8 hours or so. Well before Cephalon took an interest in the drug, French researchers discovered this effect in cats back in the early 1990s (Brain Research, vol 591, p 319), and it has since been found to apply to humans too.

So how does modafinil work? "No one really knows," admits Vaught. He says that Cephalon thinks it understands the drug, but is keeping the details under wraps. What is clear is that, like other stimulant drugs, modafinil prevents nerve cells from reabsorbing the excitatory neurotransmitter dopamine once they release it into the brain. The difference is that it somehow does so without producing the addictive highs and painful crashes associated with most stimulants. A number of independent studies suggest that this might be because it also interferes with the reuptake of another neurotransmitter, noradrenalin.

However it works, modafinil is proving hugely successful. Since it hit the market in 1998, sales have been climbing steadily - from $25 million in 1999 to around $575 million in 2005. Cephalon insists that the drug is for treating "medical" sleepiness caused by diseases such as narcolepsy and sleep apnoea.

Even so, it's clear that modafinil is becoming a lifestyle drug for people like Yves who want off-the-peg wakefulness. "At first I got it from a friend, and then I got diagnosed as a narcoleptic online," says Yves.

All the indications are that modafinil is extremely safe. The drug can have side effects, most commonly headaches, but up to now there have been no severe reactions, says Vaught. In fact, it is hard to find anyone with a bad word to say about modafinil, except that there may be unseen problems down the line as the drug becomes more widely used. "I think it's unlikely that there can be an arousal drug with no consequences," says Foster. In the long run, it is possible that casual users might have to keep upping their dose to get the same effect. Stanley has similar worries. "Is it a potential drug of abuse?" he asks. "Will it get street value? We'll see."

Cephalon does not seem to be worried. Modafinil's success has spurred it to develop a successor, armodafinil. The company is also developing other eugeroics - one experimental drug called CEP-16795 switches off the H3 histamine receptor, which appears to be one of the molecular switches that controls the sleep-wake cycle. However, Vaught claims that the original will be a tough act to follow. "Modafinil is very effective and very safe," he says. "How do you beat it?"

There are ideas as to how. Last year, Sam Deadwyler of Wake Forest University in Winston-Salem, North Carolina, reported the results of an experiment with a drug called CX717. The findings suggest that modafinil won't have the field to itself forever.

Deadwyler kept 11 rhesus monkeys awake for 36 hours, throughout which they performed short-term memory and general alertness tests (Public Library of Science Biology, vol 3, p 299). At that level of sleep deprivation, a monkey's performance would normally drop to the point where it could barely function at all, but Deadwyler found that CX717 had remarkable restorative powers. Monkeys on the drug were doing better after 36 hours of continual wakefulness than undrugged monkeys after normal sleep. When Deadwyler imaged their brains with functional magnetic resonance imaging (fMRI), he found that the drug maintained normal activity even in severely sleep-deprived individuals. The results build on those of an earlier, small-scale trial on 16 men that found CX717 could largely reverse the cognitive decline that comes with 24 hours of sleep deprivation (New Scientist, 14 May 2005, p 6).

Soldiers get high

CX717 belongs to a class of drugs called ampakines, which subtly ramp up brain activity by enhancing the action of its main excitatory neurotransmitter, glutamate. Cortex Pharmaceuticals of Irvine, California, which developed CX717, originally saw the drug as a cognitive booster for people with Alzheimer's, but it is its potential to counter the effects of sleep deprivation that is attracting the most attention.

Later this year, the Defense Advanced Research Projects Agency (DARPA), based in Arlington, Virginia, will put CX717 through its paces as a wakefulness promoter for combat. In an experiment designed to mimic the harsh demands of special ops, investigators will push 48 volunteers to the limit - four consecutive nights of hard work with only 4 hours of recovery sleep in between. "They'll go from being tired to exhausted to crashing," says Roger Stoll, Cortex's chief executive. For some of them, however, the ordeal will be softened by regular doses of CX717. DARPA hopes the drug will counteract the sleep deprivation.

The trial should help answer some outstanding questions about CX717's potential. "We don't know yet if it eliminates feelings of sleepiness," says Stoll. "The early signs are that people function better, their brain is a little more hyped. But we haven't tested sleepiness directly." As with modafinil, the evidence suggests that people struggle to tell whether they're on the drug or not - not that this has turned out to be much of a problem for modafinil.

Whatever the outcome of the DARPA trial, CX717 won't be the last word on eugeroics. Stoll says Cortex has similar but more powerful molecules up its sleeve. Though they are being developed mainly as memory enhancers, some may turn out to be powerful wakefulness promoters too. Industry giants GlaxoSmithKline and Eli Lilly have ampakine programmes of their own, and at least one other company, Arena Pharmaceuticals of San Diego, California, has declared an interest in wakefulness promoters, though it hasn't released any details of its research.

When and if those drugs come through, the US military is sure to be interested. DARPA is one of the most active players in the drive to conquer sleep, setting up and funding much of the basic research on wakefulness. The army and air force have research programmes too.

It's easy to see why DARPA is interested. "We make the assumption that soldiers are going to be sleep-deprived," says DARPA neuroscientist Amy Kruse, who runs the agency's sleep-deprivation research programme. "We want to know what we can do to bring them back up to the level they would be at if they had a good night's sleep."

When DARPA talks about sleep deprivation, it really means it. Soldiers on special ops sometimes have to be awake, alert and active for 72 hours at a stretch with only minimal rest. That's like starting work on Monday morning and not stopping until Thursday. "Three days, that's when they really start hurting," says Kruse.

The military has a long history of using caffeine and amphetamines to get its people through. It has now added modafinil to the list, and is clearly interested in CX717. And Kruse says she is confident that there is lots of room for further improvement.

Last year, a DARPA-funded team led by Giulio Tononi at the University of Wisconsin Madison discovered a strain of fruit flies that gets by on just a third the normal amount of sleep. The "minisleep" mutant carries a change to a single gene, encoding a protein involved in potassium transport across cell membranes. Intriguingly, defects in potassium channels are associated with reduced sleep in humans, particularly in the autoimmune disease Morvan's syndrome, one symptom of which is chronic sleeplessness. What that suggests, says Kruse, is that new drugs designed to latch onto potassium channels in the brain could radically alter the need for sleep. There are also likely to be other molecular targets in the brain just waiting to be exploited, she says.

DARPA is meanwhile pursuing other strategies to conquer sleep deprivation. At Yaakov Stern's lab at Columbia University in New York, DARPA-funded neuroscientists have used fMRI to image the brains of sleep-deprived people, to find out which regions are affected when you are very tired. Then they used a transcranial magnetic stimulation (TMS) machine - routinely used to switch localised brain regions on and off - to switch off those areas and see if that reversed the effects.

"This is all proof of concept," says Stern. "It's hard to imagine a sleep deprived pilot using TMS," not least because the machines are too bulky to fit in a cockpit. "The next step is to apply TMS before or during sleep deprivation to see if it blunts the effect. That has more of a shot at a lasting effect." Stern says his team is also looking into a new technique called DC brain polarisation, which has similar brain-boosting effects to TMS but uses DC current instead of magnetism. The beauty of this "poor man's TMS" is that the equipment is significantly smaller and cheaper - it could even be incorporated into headgear that gives you a jolt of wakefulness at the flick of a switch. And then there's always neurofeedback - training people to activate the brain regions that get hit by sleep deprivation, effectively willing themselves awake.

The military isn't just interested in wakefulness. It also has a keen interest in the other side of the coin. John Caldwell works at the US Air Force Research Laboratory in San Antonio, Texas. He has spent most of his career testing the effects of stimulants, including modafinil, on pilots. "I'm the guy who puts sleep-deprived pilots in a plane, gives them drugs and says, did it work?" he says. He has also done a handful of studies on sleep aids - testing the best way to help night pilots sleep well during the day, for example. In recent months Caldwell has become aware that there is a quiet revolution going on in sleep medicine. "There's a new idea out there," he says. "Drugs that change sleep architecture."

Sleep researchers have known for over 50 years that sleep isn't merely a lengthy period of unconsciousness, but consists of several different brain states (see Diagram). How those states are put together to build a full night's sleep is called sleep architecture.

Catching the slow waves

In the past, says Caldwell, sleeping pills were designed not to mess with sleep architecture, although they generally do, suppressing the deepest and most restorative "slow-wave" sleep in favour of shallower stage 2 sleep. Now, though, modifying sleep architecture is seen as the way forward. There are two new drugs in the offing that significantly increase the amount of slow-wave sleep. One of them, gaboxadol, made by Merck, is in phase III clinical trials and could be on the market next year. To Caldwell these drugs hold out the promise of a power nap par excellence. "Maybe you can make a short period of sleep more restorative by filling it up with slow-wave sleep," he says.

Much like modafinil, gaboxadol and the other slow-wave sleep promoter - Arena Pharmaceuticals' APD125, currently in phase II - are the start of something bigger. For more than 35 years, sleeping pills have been a one-trick pony. If you wanted to send someone to the land of nod, there was only one way of doing so - targeting the neurotransmitter GABA, which is the brain's all-purpose dimmer switch. Old-fashioned hypnotics such as barbiturates and benzodiazepines work by making neurons more sensitive to the soporific effects of GABA. It's also why alcohol makes you sleepy. Even the newer, cleaner sleeping pills, such as the market leader Ambien, work through the GABA system.

Manipulating the GABA system is a sure-fire way of putting people to sleep, but it has its problems. One is that the brain adapts to the drugs, which means that most cannot be taken for more than a few days without losing their potency. The effects often linger well into the morning, making people feel groggy and hung over. Many are also addictive.

What's more, sleep quality has rarely been considered. "In the past we would take a hypnotic and say, does it put you to sleep?" says Stanley. "That's a pretty inexact way of dealing with it. In that respect, alcohol is a good hypnotic." Now, however, there is a recognition that there is much more to sleep than the GABA system. Last year the first non-GABA sleeping pill came onto the market - the first new class of hypnotic for 35 years. Rozerem, made by Japanese firm Takeda, mimics the effects of the sleep-promoting hormone melatonin. Nor is it the only one. There are at least three other new classes of hypnotic that don't go anywhere near the GABA system. And though gaboxadol works through GABA, it hits a type of receptor that has never been targeted by drugs before.

According to Stanley, there is even more scope for improvement. "It is possible that pharmaceuticals will allow you a condensed dose of sleep," he says, "and we are not that far away from having drugs that put you to sleep for a certain length of time." He predicts you could soon have a tablet combining a hypnotic with an antidote or wakefulness promoter designed to give you a precise number of hours' sleep. "A 4, 5 or 6-hour pill."

We seem to be moving inescapably towards a society where sleep and wakefulness are available if not on demand then at least on request. It's not surprising, then, that many sleep researchers have nagging worries about the long-term impact of millions of us using drugs to override the natural sleep-wake cycle.

Stanley believes that drugs like modafinil and CX717 will tempt people to overdose on wakefulness at the expense of sleep. "Being awake is seen to be attractive," he says. "It's not cool to be asleep." Foster has similar worries. "It seems like that technology will help us cope with 24/7, but is coping really living?" he asks. Others point out that there are likely to be hidden health costs to overriding our natural sleep-wake cycles. "Pharmaceuticals cannot substitute for normal sleep," says Vaught.

Still, even the doubters admit that to all intents and purposes we are already too far down the road of the 24-hour society to turn back. For millions of people, good sleep and productive wakefulness are already elusive, night work or nightlife a reality, and the "stimulant-sedative" loop all too familiar. As Vaught puts it, "We're already there." So why not make it as clean and safe as possible?

November 12, 2006

Op-Ed Columnist

2006: The Year of the ‘Macaca’

By FRANK RICH

OF course, the “thumpin’ ” was all about Iraq. But let us not forget Katrina. It was the collision of the twin White House calamities in August 2005 that foretold the collapse of the presidency of George W. Bush.

Back then, the full measure of the man finally snapped into focus for most Americans, sending his poll numbers into the 30s for the first time. The country saw that the president who had spurned a grieving wartime mother camping out in the sweltering heat of Crawford was the same guy who had been unable to recognize the depth of the suffering in New Orleans’s fetid Superdome. This brand of leadership was not the “compassionate conservatism” that had been sold in all those photo ops with African-American schoolchildren. This was callous conservatism, if not just plain mean.

It’s the kind of conservatism that remains silent when Rush Limbaugh does a mocking impersonation of Michael J. Fox’s Parkinson’s symptoms to score partisan points. It’s the kind of conservatism that talks of humane immigration reform but looks the other way when candidates demonize foreigners as predatory animals. It’s the kind of conservatism that pays lip service to “tolerance” but stalls for days before taking down a campaign ad caricaturing an African-American candidate as a sexual magnet for white women.

This kind of politics is now officially out of fashion. Harold Ford did lose his race in Tennessee, but by less than three points in a region that has not sent a black man to the Senate since Reconstruction. Only 36 years old and hugely talented, he will rise again even as the last vestiges of Jim Crow tactics continue to fade and Willie Horton ads countenanced by a national political party join the Bush dynasty in history’s dustbin.

Elsewhere, the 2006 returns more often than not confirmed that Americans, Republicans and Democrats alike, are far better people than this cynical White House takes them for. This election was not a rebuke merely of the reckless fiasco in Iraq but also of the divisive ideology that had come to define the Bush-Rove-DeLay era. This was the year that Americans said a decisive no to the politics of “macaca” just as firmly as they did to pre-emptive war and Congressional corruption.

For all of Mr. Limbaugh’s supposed clout, his nasty efforts did not defeat the ballot measure supporting stem-cell research in his native state, Missouri. The measure squeaked through, helping the Democratic senatorial candidate knock out the Republican incumbent. (The other stem-cell advocates endorsed by Mr. Fox in campaign ads, in Maryland and Wisconsin, also won.) Arizona voters, despite their proximity to the Mexican border, defeated two of the crudest immigrant-bashing demagogues running for Congress, including one who ran an ad depicting immigrants menacing a JonBenet Ramsey look-alike. (Reasserting its Goldwater conservative roots, Arizona also appears to be the first state to reject an amendment banning same-sex marriage.) Nationwide, the Republican share of the Hispanic vote fell from 44 percent in 2004 to 29 percent this year. Hispanics aren’t buying Mr. Bush’s broken-Spanish shtick anymore; they saw that the president, despite his nuanced take on immigration, never stood up forcefully to the nativists in his own camp when it counted most, in an election year.

But for those who’ve been sickened by the Bush-Rove brand of politics, surely the happiest result of 2006 was saved for last: Jim Webb’s ousting of Senator George Allen in Virginia. It is all too fitting that this race would be the one that put the Democrats over the top in the Senate. Mr. Allen was the slickest form of Bush-Rove conservative, complete with a strategist who’d helped orchestrate the Swift Boating of John Kerry. Mr. Allen was on a fast track to carry that banner into the White House once Mr. Bush was gone. His demise was so sudden and so unlikely that it seems like a fairy tale come true.

As recently as April 2005, hard as it is to believe now, Mr. Allen was chosen in a National Journal survey of Beltway insiders as the most likely Republican presidential nominee in 2008. Political pros saw him as a cross between Ronald Reagan and George W. Bush whose “affable” conservatism and (contrived) good-old-boy persona were catnip to voters. His Senate campaign this year was a mere formality; he began with a double-digit lead.

That all ended famously on Aug. 11, when Mr. Allen, appearing before a crowd of white supporters in rural Virginia, insulted a 20-year-old Webb campaign worker of Indian descent who was tracking him with a video camera. After belittling the dark-skinned man as “macaca, or whatever his name is,” Mr. Allen added, “Welcome to America and the real world of Virginia.”

The moment became a signature cultural event of the political year because the Webb campaign posted the video clip on YouTube.com, the wildly popular site that most politicians, to their peril, had not yet heard about from their children. Unlike unedited bloggorhea, which can take longer to slog through than Old Media print, YouTube is all video snippets all the time; the one-minute macaca clip spread through the national body politic like a rabid virus. Nonetheless it took more than a week for Mr. Allen to recognize the magnitude of the problem and apologize to the object of his ridicule. Then he compounded the damage by making a fool of himself on camera once more, this time angrily denying what proved to be accurate speculation that his mother was a closeted Jew. It was a Mel Gibson meltdown that couldn’t be blamed on the bottle.

Mr. Allen has a history of racial insensitivity. He used to display a Confederate flag in his living room and, bizarrely enough, a noose in his office for sentimental reasons that he could never satisfactorily explain. His defense in the macaca incident was that he had no idea that the word, the term for a genus of monkey, had any racial connotation. But even if he were telling the truth — even if Mr. Allen were not a racist — his non-macaca words were just as damning. “Welcome to America and the real world of Virginia” was unmistakably meant to demean the young man as an unwashed immigrant, whatever his race. It was a typical example of the us-versus-them stridency that has defined the truculent Bush-Rove fearmongering: you’re either with us or you’re a traitor, possibly with the terrorists.

As it happened, the “macaca” who provoked the senator’s self-destruction, S. R. Sidarth, was not an immigrant but the son of immigrants. He was born in Washington’s Virginia suburbs to well-off parents (his father is a mortgage broker) and is the high-achieving graduate of a magnet high school, a tournament chess player, a former intern for Joe Lieberman, a devoted member of his faith (Hindu) and, currently, a senior at the University of Virginia. He is even a football jock like Mr. Allen. In other words, he is an exemplary young American who didn’t need to be “welcomed” to his native country by anyone. The Sidarths are typical of the families who have abetted the rapid growth of northern Virginia in recent years, much as immigrants have always built and renewed our nation. They, not Mr. Allen with his nostalgia for the Confederate “heritage,” are America’s future. It is indeed just such northern Virginians who have been tinting the once reliably red commonwealth purple.

Though the senator’s behavior was toxic, the Bush-Rove establishment rewarded it. Its auxiliaries from talk radio, the blogosphere and the Wall Street Journal opinion page echoed the Allen campaign’s complaint that the incident was inflated by the news media, especially The Washington Post. Once it became clear that Mr. Allen was in serious trouble, conservative pundits mainly faulted him for running an “awful campaign,” not for being an awful person.

The macaca incident had resonance beyond Virginia not just because it was a hit on YouTube. It came to stand for 2006 as a whole because it was synergistic with a national Republican campaign that made a fetish of warning that a Congress run by Democrats would have committee chairmen who are black (Charles Rangel) or gay (Barney Frank), and a middle-aged woman not in the Stepford mold of Laura Bush as speaker. In this context, Mr. Allen’s defeat was poetic justice: the perfect epitaph for an era in which Mr. Rove systematically exploited the narrowest prejudices of the Republican base, pitting Americans of differing identities in cockfights for power and profit, all in the name of “faith.”

Perhaps the most interesting finding in the exit polls Tuesday was that the base did turn out for Mr. Rove: white evangelicals voted in roughly the same numbers as in 2004, and 71 percent of them voted Republican, hardly a mass desertion from the 78 percent of last time. But his party was routed anyway. It was the end of the road for the boy genius and his can’t-miss strategy that Washington sycophants predicted could lead to a permanent Republican majority.

What a week this was! Here’s to the voters of both parties who drove a stake into the heart of our political darkness. If you’ll forgive me for paraphrasing George Allen: Welcome back, everyone, to the world of real America.

history lesson
The Myth of the Six-Year Itch
The laws of history didn't doom the Republicans.
By David Greenberg
Posted Wednesday, Nov. 8, 2006, at 5:33 PM ET

Whenever the party in power takes a hit in the midterms, it takes refuge in the past. Since the governing party almost always loses seats in off-year races, the president is said merely to have fallen prey to the ineluctable tides of history—much in the way that presidents who face economic depressions disown any blame by fingering the all-powerful "business cycle."

In particular, parties that incur setbacks in their presidents' second terms like to hide behind the "six-year itch," to use an ungainly term favored by political scientists. Typically a president's second-term off-year losses outstrip his first-term losses, and it's tempting to imagine some iron law of history at work—a structural force that has afflicted even popular presidents, such as Dwight Eisenhower in 1958 and Ronald Reagan in 1986. George Will, for one, made this argument on ABC last night.

But plans to invoke the six-year itch ought to be scratched. Politics has no iron laws: As circumstances change, so does political behavior. (Significantly, Bush in 2002 and Bill Clinton in 1998 both defied the trends, suggesting that gerrymandering, microtargeting, polarization, or other factors have scrambled historical patterns.) What's more, there have been too few sixth-year elections to be statistically meaningful. Most important, the variables in any given election—wars, recessions, scandals, social crises—matter more than tendencies built in to the system. On inspection, the six-year itch resembles less a chronic disease than a phantom illness on the order of chronic fatigue syndrome.

To be sure, certain structural forces do favor the nonpresidential party in the midterms. According to a theory of "surge and decline," presidents have coattails when they're elected, carrying into office their party-mates. But (to shift metaphors) when those legislators have to run a play without the president as their offensive line, many get thrown for a loss. Ronald Reagan's sixth-year setbacks fit this pattern: The Republicans gained control of the Senate in Reagan's 1980 landslide but lost it—despite the absence of scandal, recession, or other disaster—in 1986.

More provocatively, legal scholar Akhil Amar has suggested in America's Constitution: A Biography that the 22nd Amendment limiting presidents to two terms, ratified in 1951, has weakened second-term presidents. It's well-known that second-term presidents become embroiled in scandal: Nixon with Watergate, Reagan with Iran-Contra, Clinton with Lewinsky, Bush with faulty prewar intelligence (among other issues). One explanation is that re-elected presidents grow arrogant and reckless, as Nixon certainly did. But Amar suggests another reason: The 22nd Amendment effectively makes second-termers four-year lame ducks, unable to exact retribution upon antagonists on the Hill—thereby encouraging congressional investigations and commanding less party loyalty. By the same logic, the president is arguably now less able to pass legislation and otherwise work his will in his sixth year, thus deepening his midterm losses.

Nonetheless, in almost every case of a six-year itch, Occam's Razor suggests more direct reasons for a president's party's losses. In 1874, for example, the Democrats made big gains in Republican President Ulysses Grant's second term. Yet they plainly benefited from the financial panic of 1873, as well as from the Credit Mobilier scandal—considered the third-worst presidential scandal after Watergate and Teapot Dome.

In 1938, the six-year itch did seem to be at work in the defeats that the Democrats endured after scoring routs not only in 1932 and 1936 but also—bucking historical trends—in the off-year contests of 1934. Yet again, more proximate causes abound. Franklin Roosevelt blundered in 1937 when he proposed a massive overhaul of the Supreme Court to get it to uphold his New Deal legislation. Moreover, after much progress in reducing joblessness and reviving public confidence, he cut government spending to bring the budget into balance, thereby kicking the economy back into recession. By 1938, the unemployed had swelled from 5 million to 12 million, crippling the Democrats in November.

Even in 1958, when the widely beloved Eisenhower lost 48 House and 13 Senate seats, contemporary events, more than structural dynamics, were at fault. Most significantly, the severe economic downturn that year hit especially hard in the Midwest, depressing turnout among Eisenhower's most natural constituents. Besides, the Soviet Union had just launched Sputnik, spurring a panic about the state of American defense, education, and—most important—nerve and morale. This disillusionment with Ike's governance was an early expression of the now-canonical critique of his presidency—what the journalist William Shannon termed "The Great Postponement."

A similar disillusionment may have been in effect this year, as historian Niall Ferguson argued before the election. There's also a whiff of 1966 in the air, as I've noted elsewhere—1966 was another year in which voters rebelled (in a second-year, not a sixth-year, election) against one-party rule. This year, the discontent that exit polls found with what they labeled "corruption" should be more properly understood as a rebuke to the general arrogance of the unchecked Republican Party. Think of corruption in Lord Acton's sense.

In retrospect, though, the Republicans' losses seem most similar to those in three other midterm races: 1918, 1950 (not properly a sixth-year midterm, since Harry Truman entered the presidency in 1945 through accession, not election), and 1942 (technically a "10th-year" itch). All of those setbacks came amid wars. In 1950, Republicans painted the controversial adventure in Korea as a result of Truman's weakness. In 1942, Democrats suffered when World War II was going poorly in both the Pacific and European theaters. In 1918, the Allies were nearing victory in World War I, but there was much resentment in the land, especially after Woodrow Wilson, having promised to keep the United States out of war, now explicitly called on Americans to vote Democratic as a show of support for his policies.

Wars help presidents so long as the rally-round-the-flag effect holds up. The Iraq war did so for Bush in 2002 and even 2004 (though by then it was becoming uncertain whether the Iraq war was helping or hurting Bush). On the other hand, a conflict that has no clear end in sight vexes Americans of all political stripes, summoning up deep strains of both conservative isolationism and liberal anti-imperialism. As my Rutgers colleague Ross K. Baker, a congressional expert, wrote last spring, "Combat fatigue is not a condition found only on the battlefield; it is also an affliction that has often been diagnosed in the voting booth." If there's a history lesson to be drawn from this year's election results, that one would be closest to the mark.

David Greenberg writes Slate's "History Lesson" column and teaches at Rutgers University. He is the author of Nixon's Shadow: The History of an Image and Calvin Coolidge (forthcoming).

 

Hi, how are you? You don't know me, but I could be the guy that's fucking your wife. Notice I said "could be", because I think you should know up front that so far the only fucking that has gone on between us has mostly consisted of me, alone and naked, on a toilet, with a picture of your wife from the local newspaper, when she won the "best lawn in the county" award. Did I mention how hot she is in that photo?

You seem to be disturbed, and you have every right to be. After all, how many times during the day does one man come up to another and make such a bold declaration of masturbatorial intent with another man's wife? You were right to punch me just now, but let me explain.

It all began three weeks ago, when I was in the supermarket trying to get the telephone number of the underage cashiers who work there. I was only having success with the USCAN register as far as that goes, and your wife saw me unsuccessfully attempt to convince the robotic female voice that I was of sound moral character when she noted that I had purchased condoms and beer and an issue of "Ladies Home Journal" with Eva Longoria on the cover. She smiled, shifted her buggy to the side, and told me that my dollar bill was "all crinkled, you need to smooth it out". I don't have to tell you, but that's a sure sign of romantic interest if I ever heard one.

So I made it my life's mission to fuck your wife...not right there in the grocery store, though that is No. 5 on my list of all-time sexual fantasies (up there with being spanked by someone dressed like Ruth Bader Ginsburg, but just below putting my entire fist into the vagina of a nice Jewish girl from uptown who "normally doesn't do those kind of things"). If you noticed any strange noises, odd lights, grunting sounds, etc., in the vicinity of your bathroom window, don't be alarmed. I was just trying to mentally signal you to leave and let your wife continue showering in peace, you can finish your Sports Illustrated later.

Oh, now *I've* got problems? Let me tell you buddy, the shoe fits on your foot as well...yeah, you neglect her. I know, you see, I'm always around when you leave for work, and I keep up observation during the day to make sure she's okay. I care about...whatever the fuck her name is. Oh, like you know it either!

But never fear, I will be the gentleman, and let you crazy kids reconcile your obviously loveless marriage. It's the least I can do. I'll always have the hours of surveillance footage with her in various stages of undress and whatnot to amuse me, keep me company, and so forth. I'm a lover, not a fighter, so please stop pummeling me, sir.

You've proven your point, you are my physical equal. If not my superior, for I admire your stamina. But I must warn you; you can get rid of me easily enough, but another will come along to fantasize about your wife's perfect bosom late into the lonely night. And he might even have muscles. Keep that in mind, you jealous freak.

Now if you'll excuse me, there's a very attractive USCAN machine that hasn't had a real man in years, if you catch my drift...don't be a hater.

So McToken chokes out McDreamy over calling poor Georgie-porgy a McFaggot even after he just came out? Seriously? Seriously? It sounds like the whole cast is a bunch of whining vajayjays to me. Honestly, I wouldn’t be surprised to see Dick Grayson’s codpiece swell at the thought of plowing McSteamy’s chiseled McMuffins. If you have no idea what I’m talking about, count yourself lucky that you have managed to avoid all talk of the male cast of “Grey’s Anatomy” acting like the “Desperate Housewives” at a swimsuit photo shoot. You are officially allowed to resume the search for Wesley Snipes on behalf of the attorney general’s office. I recommend looking for him during the day though, because unless you got mad jokes to get him smiling, looking for Blade at night is pointless. If I had played a drag queen named Noxeema in To Wong Foo I’d be hiding the fuck out too. I mean, I wouldn’t want to get called a faggot and choked out by Isaiah Washington. To top it off, after all the Grey’s hype and coverage they couldn’t even muster a new episode this week. Even ABC is smart enough not to try to counter program for women during the World Series. All a guy asks for is one week a year where his lady can’t complain about America’s pastime dominating the idiot box.

But in spite of everything wrong with TV this week; like another dismal edition of Jack is a doctor, Sawyer is a conman and Kate is a pretty tomboy on "Lost"; hot chicks complaining about how ugly their short hair or extra two ounces of weight makes them on "Top Model"; or another run at the top for a bulimic Sting (I mean the wrestler with the NWA title not the singer with a lute cameo on "Studio 60"); one thing is still bugging me above and beyond all others. How the fuck do you fall off the top of a log and somehow get stuck to the bottom of it? I don’t care how pretty all the ladies out there think Wentworth Miller is, that doesn’t do me a damn bit of good. I mean, sure the De La Hoya factor of watching just to see if he’ll get his handsome profile mushed in like a baby’s head under a semi tire is valid, but if Fox expects both male and female demos to keep watching “Prison Break” then they are going to have to come up with something a little more compelling than trying to outwit the vicious mighty oak for 60 minutes. If I didn’t give a shit when Saruman took on a whole army of trees then you know I certainly don’t give a fuck about a tree trying to drown Sucre either. “Prison Break” better figure out how to Jack Bauer themselves through the rest of this storyline really soon or I’m gonna have to pull out my DVDs of Oz for a dose of real prison drama like that "Law and Order" guy taking it up the poop chute or Brodie playing prison corner boy to Mr. Ecko.

November 6, 2006

Op-Ed Contributor

The Deciding Vote

By DALTON CONLEY

THE Democrats may or may not capture the House or Senate tomorrow. But one thing appears certain: There will be a lot of close races where the results are uncertain late into the night (and perhaps even the next morning) and where the outcome may hinge on legal rulings about which ballots count and which don’t.

After all, in the last few years, several statistical dead-heat elections have ended up in court. The mayoralty of San Diego and the governorship of Washington are just two of the more high-profile examples since Bush v. Gore in 2000 in which elections were decided by a few votes and controversy followed the winner into office.

The rub in these cases is that we could count and recount, we could examine every ballot four times over and we’d get — you guessed it — four different results. That’s the nature of large numbers — there is inherent measurement error. We’d like to think that there is a “true” answer out there, even if that answer is decided by a single vote. We so desire the certainty of thinking that there is an objective truth in elections and that a fair process will reveal it.

But even in an absolutely clean recount, there is not always a sure answer. Ever count out a large jar of pennies? And then do it again? And then have a friend do it? Do you always converge on a single number? Or do you usually just average the various results you come to? If you are like me, you probably settle on an average. The underlying notion is that each election, like those recounts of the penny jar, is more like a poll of some underlying voting population.

In an era of small town halls and direct democracy it might have made sense to rely on a literalist interpretation of “majority rule.” After all, every vote could really be accounted for. But in situations where millions of votes are cast, and especially where some may be suspect, what we need is a more robust sense of winning. So from the world of statistics, I am here to offer one: To win, candidates must exceed their rivals with more than 99 percent statistical certainty — a typical standard in scientific research. What does this mean in actuality? In a two-candidate race in which each has attained around 50 percent of the vote, the margin of error at the 99 percent confidence level works out to 1.29 divided by the square root of the number of votes cast, expressed as a fraction of the total vote.

Let’s take the Washington gubernatorial race in 2004 as an example. After a manual recount, Christine Gregoire was said to have 1,373,361 votes, 48.8730 percent, while her Republican rival, Dino Rossi, garnered 1,373,232, or 48.8685 percent (a third-party candidate got 63,465 votes). That’s a difference of only 129 votes, or .0045 percent. The margin of error at the 99 percent certainty level was 0.078 percentage points. Since Ms. Gregoire’s margin of victory didn’t exceed this figure, under this system she wouldn’t be certified as the victor.

If we apply the same methodology to Bush v. Gore in 2000, the results are equally ambiguous. The final (if still controversial) vote difference for Florida was 537 (or .009 percent). Given Florida’s vote count of 5,825,043, (excluding third party votes) this margin fails to exceed the 99 percent confidence threshold. New Mexico, which Al Gore won by 366 votes out of a much smaller total, is also up for grabs in this situation.
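To make the arithmetic concrete, here is a minimal sketch of the calculation (mine, not the author's; written in Python, with an illustrative function name) applying the rule above, that a winning margin must exceed 1.29 divided by the square root of the total votes cast, to the Washington and Florida totals quoted in the column:

    import math

    def clears_99_percent_threshold(vote_margin, total_votes):
        # Compare the winning margin with the 99 percent confidence threshold
        # of 1.29 / sqrt(total votes); both are expressed as fractions of the vote.
        margin = vote_margin / total_votes
        threshold = 1.29 / math.sqrt(total_votes)
        return margin, threshold, margin > threshold

    # Washington governor, 2004: a 129-vote lead out of 2,810,058 ballots
    # (1,373,361 + 1,373,232 + 63,465)
    print(clears_99_percent_threshold(129, 2_810_058))
    # margin is about 0.0046 percent; threshold is about 0.077 percent -> not decisive

    # Florida, 2000: a 537-vote lead out of 5,825,043 two-party ballots
    print(clears_99_percent_threshold(537, 5_825_043))
    # margin is about 0.009 percent; threshold is about 0.053 percent -> not decisive

The computed Washington threshold, roughly 0.077 percentage points, matches the 0.078 points quoted above to within rounding, and in both races the margin falls well short of it.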

So what should we do in such cases, where no winner can be declared with more than 99 percent statistical certainty? Do the whole shebang all over again. This has the advantage of testing voters’ commitment to candidates. Maybe you didn’t think the election was going to be as close as it was, so you didn’t vote. Well, now you get a second chance.

And if there were hanging chads (as in Florida in 2000) or unshaded bubbles (as in the 2004 San Diego mayoral race) or dubiously included or excluded ballots, voters could make extra sure to do it right the second time round.

Yes, it costs more to run an election twice, but keep in mind that many places already use runoffs when the leading candidate fails to cross a particular threshold. If we are willing to go through all that trouble, why not do the same for certainty in an election that teeters on a razor’s edge? One counter-argument is that such a plan merely shifts the realm of debate and uncertainty to a new threshold — the 99 percent threshold. However, candidates who lose by the margin of error have a lot less rhetorical power to argue for redress than those for whom an actual majority is only a few votes away.

It may make us existentially uncomfortable to admit that random chance and sampling error play a role in our governance decisions. But in reality, by requiring a margin of victory greater than one, seemingly arbitrary vote, we would build in a buffer to democracy, one that offers us a more bedrock sense of security that the “winner” really did win.

Dalton Conley, the chairman of New York University’s sociology department, is the author of “The Pecking Order: Which Siblings Succeed and Why.”

 

politics
The Doctor Is In
Howard Dean isn't getting tossed from the DNC.
By John Dickerson
Posted Wednesday, Nov. 22, 2006, at 6:14 PM ET

If you can handle only one internal Democratic Party squabble at a time, you might have missed the dustup between Howard Dean and James Carville. It happened while the larger battle for House majority leader was taking place between Steny Hoyer and John Murtha, Speaker Nancy Pelosi's failed pick. What you missed was Carville, a former Clinton adviser, charging Dean with "leadership that was Rumsfeldian in its incompetence" and arguing that if it hadn't been for Dean, the party would have gained even more seats in the midterms. At a gathering of the Association of State Democratic Chairs in Wyoming, Dean responded: "This is the new Democratic Party. The old Democratic Party is back there in Washington; sometimes they still complain a little bit."

This is the place in the narrative where the writer usually starts with the jokes about the Democratic Party's historic infighting. My gosh, the Democrats won, and they're already fighting. But this argument isn't about the party's direction, ideology, or platform. It's a practical argument about whether it can get more bang for its buck. The big complaint about Dean is that his strategy to build up the party in all 50 states wastes resources on noncompetitive parts of the country. This conflict erupted in August at a meeting on Capitol Hill, when House Democratic Campaign Chairman Rahm Emanuel, and his Senate counterpart, Chuck Schumer, asked Dean to match the Republican National Committee's expected outlay in the fall campaign. Dean refused to budge from his long-term strategy. This led Emanuel, who can exfoliate with expletives, to spit out a few before storming from the room.

So, now that we've had an election, who was right? When I dialed around looking for strategists to take sides, I got plenty of bile about Carville—"hasn't run a real race in a while"—and about Dean—"he's naive and Napoleonic"—but people went floppy when I tried to get them to choose sides. They said things you'd expect to hear in therapy sessions: "Where you stand depends on where you sit."

There are at least two debates taking place. The first is about resources devoted to specific races in 2006. Could more money from the DNC have tipped those 14 or so House races where Democrats lost by a razor-thin margin? It's impossible to say, since the correlation between dollars spent and success is murky. Josh Kraushaar at Hotline makes a good case that only four of those races could have been saved by a late dose of cash. To knock off Dean, Carville would have to make the case that he bungled egregiously, and that's an impossible case to make. Unlike with Rumsfeld, the result on the battlefield looks good. Carville's case mirrors the argument made by Republicans who say Karl Rove shouldn't be blamed for the GOP's poor showing because the party could have lost more seats without him.

The larger philosophical battle is over whether Dean's plan for a 50-state campaign makes sense as he's designed it. The lines of attack aren't much clearer there, either. Even Dean's detractors say they want to compete everywhere in America. They say the debate is about how you go about doing that. Sure, spend money in Ohio, but don't waste it in Nebraska. Dean was right in arguing that even if you have a good candidate in an area that's not traditionally a Democratic stronghold, you need resources on the ground. If there had been a better Democratic Party machine in Tennessee, Harold Ford might have won. Campaign chairmen like Emanuel and Schumer always want more money for television ads and get-out-the-vote efforts, and it's Dean's job to know when to save some for the larger goal of building infrastructure for the presidential year and the future. He, like all party chairmen, has to know when spending more money is wasting it.

But those who attack Dean also use Harold Ford's senatorial race in Tennessee to make their case. Spend time and money finding ideologically diverse candidates like Harold Ford, and the ground organization will form around them. You build a party by winning races, not by hiring party staffers and opening offices.

While this argument continues, Dean isn't following Rumsfeld out of town. He would have to be voted out by the members of the National Committee, and they all like him. The bureaucratic shrewdness of his national strategy is that it lavishes cash and attention on the state party officials who elect the chairman of the DNC. Dean is also beloved among the party's bloggers and liberal activists. Carville's attack only strengthened Dean in their eyes as a man of the grassroots fighting against Washington insiders. Dean could be ousted by the 2008 Democratic nominee, but doing so would alienate a core constituency of the party that the nominee will need. This is why rumors that Carville was secretly working on Hillary Clinton's behalf are strained—Hillary is trying to build support among liberal activists, which means hugging, not attacking, Dean.

The better case against Dean is that he's a blowhard who says the wrong thing repeatedly in front of cameras. We know that's true, but Dean has gotten better at behaving himself—his mild response to Carville being a case in point. Even his detractors point out that while Republicans tried hard to make Dean an issue, they couldn't. This time, it was John Kerry who provided the late-in-the-election radioactive gaffe. Dean has been so disciplined that most people don't even know he's there. In a recent Pew poll that asked people to name the leader of the Democratic Party, only 3 percent named Dean. More people named Hillary Clinton, Bill Clinton, and Nancy Pelosi as Democratic leader. Sixty percent of respondents don't think the party has one.

John Dickerson is Slate's chief political correspondent and author of On Her Trail. He can be reached at slatepolitics@gmail.com .

 

The Phony World of the Minimum Wage

By William F. Buckley Jr.

Nancy Pelosi, the new speaker of the House, has told us that she will call up, as perhaps the very first order of business, an increase in the minimum wage. Here are the relevant facts:

The federal minimum wage, enacted in 1938, was last raised in 1997. From that point on, with certain exceptions, you could not lawfully hire someone to work without paying him or her at least $5.15 per hour. Paying that much would yield $206 per week, or $10,712 per year. A different federal agency defines poverty as annual earnings of $9,827 or less for a single person. The mathematics of the above informs us that the existing federal minimum wage barely keeps a single worker out of poverty.
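As a quick check of that arithmetic, here is a minimal sketch (mine, not the column's) that assumes what the figures above imply, a 40-hour week and 52 paid weeks a year, and works in cents to keep the sums exact:

    # All amounts are in cents so the arithmetic stays exact.
    hourly_minimum = 515              # the $5.15 federal minimum wage
    hours_per_week = 40               # assumed full-time week
    weeks_per_year = 52               # assumed paid weeks in a year

    weekly = hourly_minimum * hours_per_week    # 20,600 cents, i.e. $206 a week
    annual = weekly * weeks_per_year            # 1,071,200 cents, i.e. $10,712 a year
    poverty_line = 982_700                      # the $9,827 single-person threshold cited above

    print(annual - poverty_line)                # 88,500 cents: about $885 above the line

On those assumptions, a full-time minimum-wage worker clears the cited poverty threshold by roughly $885 a year, which is the sense in which the wage barely keeps a single worker out of poverty.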

Of course, many states and localities have enacted higher minimum wages than the federal one. In San Francisco, you need to pay a worker $8.50 an hour; in New York State, $6.75; in Wisconsin, $5.70.

We learn that 60 percent of minimum-wage earners — two-thirds of them women — are working in restaurants and bars; 73 percent, by the way, are white, and 70 percent have high-school diplomas. Nearly 60 percent work part time.

Now we can leech from these figures several observations:

(1) It can be very difficult to tell what a minimum wage worker is actually making. Many of those who work in restaurants and bars receive tips; then again, the minimum wage is substantially lower for people in that situation.

(2) A high-school diploma will not in and of itself give the worker merchandisable skills o'erleaping the minimum wage.

(3) Since there are part-time workers who receive only the minimum wage, a moment's reflection makes it obvious that they receive, by whatever means, income that makes life possible.

Now on the matter of what to do about it, we should begin by acknowledging that any argument for circumventing the market wage is sophistry. The market will tell you, even in San Francisco, what you need to pay in order to hire an hour's labor. But sophistry is sometimes in order. We do not allow child labor — except in certain circumstances: Peter Pan, at the neighborhood theater, is allowed to work even if he is only 12 years old.

Monopolies are not permitted to set prices. The idea is that in a free society, you must not tolerate any constriction in production. But again, sophistry is permitted, because labor unions, in many fields of endeavor, practice exactly that — a monopoly on the price of labor. What do we do about that? Exactly what we do about waiters who don't list their tips: We ignore it.

We learn that one individual American last year received compensation of $1.5 billion. This leads us indignantly to our blackboard, where we learn that the average chief executive officer earns 1,100 times what a minimum-wage worker earns. What some Americans are being paid every year is describable only as: disgusting. But that disgust is irrelevant in informing us what the minimum wage ought to be. The one has no bearing on the other.

We are bent on violating free-market allocations. Doing this is not theologically sinful, but it is wise to know what it is that we are doing, and to know that the consequence of taking such liberties is to undermine the price mechanism by which free societies prosper.

Milton Friedman taught that "the substitution of contract arrangements for status arrangements was the first step toward the freeing of the serfs in the Middle Ages." He cautioned against set prices. "The high rate of unemployment among teenagers, and especially black teenagers, is both a scandal and a serious source of social unrest. Yet it is largely a result of minimum-wage laws." Those laws are "one of the most, if not the most, anti-black laws on the statute books."

Professor Friedman is no longer here to testify, but his work is available — even in San Francisco.

November 27, 2006

Editorial

When Don’t Smoke Means Do

Philip Morris has adopted the role of good citizen these days. Its Web site brims with information on the dangers of smoking, and it has mounted a campaign of television spots that urge parents, oh so earnestly, to warn their children against smoking. That follows an earlier $100 million campaign warning young people to “Think. Don’t Smoke,” analogous to the “just say no” admonitions against drugs.

All this seems to fly against the economic interests of the company, which presumably depends on a continuing crop of new smokers to replace those who drop out or die from their habit. But in practice, it turns out, these industry-run campaigns are notably ineffective and possibly even a sham. New research shows that the ads aimed at youths had no discernible effect in discouraging smoking and that the ads currently aimed at parents may be counterproductive.

That disturbing insight comes from a study just published in The American Journal of Public Health by respected academic researchers who were supported by the National Cancer Institute, the National Institute on Drug Abuse and the Robert Wood Johnson Foundation. Using sophisticated analytical techniques, the researchers concluded that the ads aimed directly at young people had no beneficial effect, while those aimed at parents were actually harmful to young people apt to see them, especially older teenagers. The greater the teenagers’ potential exposure to the ads, the stronger their intention to smoke and the greater their likelihood of having smoked in the past 30 days.

Just why the costly advertising campaigns produce no health benefits is a rich subject for exploration. The ads are fuzzy-warm, which could actually generate favorable feelings for the tobacco industry and, by extension, its products. And their theme — that adults should tell young people not to smoke mostly because they are young people — is exactly the sort of message that would make many teenagers feel like lighting up. (Trial testimony has made it clear that the goal of Philip Morris’s youth smoking prevention programs is to delay smoking until adulthood, not to discourage it for a lifetime.)

The most exhaustive judicial analysis of the industry’s tactics, by Judge Gladys Kessler of the Federal District Court for the District of Columbia, concluded that the youth smoking prevention programs were not really designed to effectively prevent youth smoking but rather to head off a government crackdown. They are minimally financed compared with the vast sums spent on cigarette marketing and promotion; they are understaffed and run by people with no expertise; and they ignore the strategies that have proved effective in preventing adolescent smoking. The television ads, for example, do not stress the deadly and addictive impacts of smoking, an emphasis that has been shown to work in other antitobacco campaigns.

Philip Morris says it has spent more than $1 billion on its youth smoking prevention programs since 1998 and that it devised its current advertising campaign on the advice of experts who deem parental influence extremely important. But the company has done only the skimpiest research on how the campaign is working. It cites June 2006 data indicating that 37 percent of parents with children age 10 to 17 were both aware of its ads and spoke to their children about not smoking. How the children reacted has not been explored. And somehow the company forgot to tell the parents, as role models, to stop smoking themselves.

Philip Morris, the industry’s biggest and most influential company, is renowned for its marketing savvy. If it really wanted to prevent youth smoking — and cut off new recruits to its death-dealing products — it could surely mount a more effective campaign to do so.

 If Barack Obama is as good a politician as he is a writer, he will soon be President

This is not a political blog. I have no interest in politics. But I have been reading a great book that happens to be written by a politician.

The first time I heard of Barack Obama was when I saw his name springing up on those political signs people put in their front yards in election years. I knew nothing about him except that he was affiliated with the University of Chicago law school and he was running some hopeless campaign for the U.S. Senate. I figured the support he was getting in my home town at the time was probably the only support he would get in the whole state. The city I lived in, Oak Park, is left wing to the point of comedy at times. For instance, as you cross into the city, a sign informs you that you are entering a nuclear-free zone. I thought it would take little more than him having a name like “Barack Obama” to win over the folks in Oak Park.

I was not paying any attention to the Senate race when I happened to get called at random for a poll being conducted by the Chicago Tribune. They asked me who I was going to vote for in the upcoming Senate election. Just out of sympathy and loyalty to the University of Chicago, I said I would vote for Obama. That way, when the results of the poll came out, he would have a few percent of the electorate behind him and he wouldn’t feel so bad. I was flabbergasted when I saw the results of the poll on the front page of the newspaper a few days later: Obama was in the lead for the Democratic primary! (This, of course, was well before he got tapped to give the keynote address at the Democratic convention.)

I am not very interested in politics, so I didn’t pay much attention to the Senate race (which eventually was a landslide with Obama crushing—of all people—Alan Keyes). I saw him give two speeches: the Democratic convention one and his acceptance speech the night he won. Both times, I felt like he cast some sort of spell over me. When he spoke, I wanted to believe him. I can’t remember another politician ever having that effect on me. One friend, who knows Barack and who also knew Bobby Kennedy, said he had not seen anyone like Kennedy until he met Barack.

Anyway, all of this is just a long prelude to the fact that I picked up his book The Audacity of Hope and was blown away at how well written it is. His stories sometimes make me laugh out loud and at other times well up with tears. I find myself underlining the book repeatedly so I can find the best parts quickly again in the future. I am also almost certain he wrote the whole thing himself, based on people I know who know him. I have no interest in politics, yet I am devouring this book. If you aren’t giving Freakonomics as a Christmas gift this year—probably you gave it to everyone on your list last Christmas—this would make a great gift.

I suppose I shouldn’t be that surprised at what a good writer he is, because I read his first book, Dreams from My Father, two years ago and loved that one as well. But unlike that first book, written long before he had political ambitions, I expected this new one to be garbage. Rarely does a book so exceed my expectations. Also, I should stress that I don’t agree with all of his political views, but that in no way detracts from the enjoyment of reading the book.

If he has the same effect on others as he does on me, you are looking at a future president.

Posted by Steven D. Levitt @ 11:54 pm on Saturday, November 25, 2006 in General

The Perfect and the Good

I wrote a piece for The New Yorker a few weeks ago about a group of people who have created a neural network that predicts (or tries to predict) the box office of movies from their scripts. (It's not up on my site yet, but it will be soon.)

The piece drew all kinds of interesting responses, a handful of which pointed out obvious imperfections in the system. Those criticisms were entirely accurate. But they were also, I think, in some way beside the point, because no decision rule or algorithm or prediction system is ever perfect. The test of these kinds of decision aids is simply whether--in most cases for most people--they improve the quality of decision-making. They can't be perfect. But they can be good.

In "Blink," for instance, I wrote about the use of a decision tree at Cook County Hospital in Chicago to help diagnose chest pain. Lee Goldman, the physican who devised the chest pain decision rule, says very clearly that he thinks that there are individual doctors here and there who can make better decisions without it. But nonetheless Goldman's work has saved lots and lot of lives and millions and miillions of dollars because it improves the quality of the average decision.

Is the average movie executive better off with a neural network for analyzing scripts than without it? My guess is yes. That's why I wrote the piece. I think that one of the most important changes we're going to see in lots of professions over the next few years is the emergence of tools that close the gap between the middle and the top--tools that allow the decision-maker who is merely competent to avoid his errors and reach the level of good.

I think the same perspective should be applied to the basketball algorithms I've been writing about. It is easy to point out the ways in which either Hollinger's system or Berri's system fails to completely reflect the reality of what happens on the basketball court. But of course they are imperfect: neither Berri nor Hollinger would ever claim that they are not. The issue is--are we better off using them to assist decision-making than we are making judgments about basketball players entirely with conventional metrics? Here I think the answer is a resounding yes. (Keep in mind that I live in New York City and have had to watch Mr. Thomas bungle his way toward disaster. I would think that.)

And the reason that lots of smart people, like Berri and Hollinger and others, spend so much time arguing back and forth about different variations on these algorithms, is that every little tweak raises the quality of decision-making in the middle part of the curve just a little bit higher. That's a pretty noble goal.

That said, here are the latest updates on the Hollinger-Berri back and forth. And remember. I don't think this is a question of one of them being wrong and the other right. They are both right. It's just that one of them may be a little more right than the other.

Here we go. First, Hollinger's response, courtesy of truehoop.com (an excellent site, by the way).

And then, Berri's response.

On Raising the Federal Minimum Wage--BECKER

An increase in the minimum wage has several distinct negative effects on the economy. While the wages of some low skilled workers would improve, it would reduce employment opportunities for teenagers and other lower skilled workers. They are pushed either into unemployment or the underground economy. A bigger minimum also raises prices of fast foods and other goods produced with large inputs of unskilled labor. Workers who receive on-the-job training in effect pay for it by accepting lower wages. A higher floor on wages prevents the wages of lower skilled workers from being reduced much, and hence discourages firms from providing much training to these employees.

A rise in the minimum wage increases the demand for workers with greater skills because it reduces competition from low-skilled workers. This is an important reason why unions have always been strong supporters of high minimum wages: they reduce the competition that union members face from the largely non-union workers who earn low wages.

Most knowledgeable supporters of a higher minimum wage do not believe it is an effective way to reduce the poverty rate. Poorer workers who are lucky enough to retain their jobs at a higher wage obviously do better, but the poorer workers who are priced out of the above ground economy are made worse off. Moreover, many of those who receive higher wages are not poor, but are teenagers and other secondary workers in middle class and rather rich families. Poor families are also disproportionately hurt by the rise in the cost of fast foods and other goods produced with the higher priced low-skilled labor since these families spend a relatively large fraction of their incomes on such goods.

A recent petition by over 600 economists, including 5 Nobel Laureates in Economics, advocated a phased-in rise in the federal minimum wage to a much higher $7.25 per hour from the present $5.15 per hour. This petition received much attention, and the number of economists signing is impressive (and depressing). Still, the American Economic Association has over 20,000 members, and I suspect that a clear majority of these members would have refused to sign that petition if they had been asked. They believe, as I do, that the negative effects of a higher minimum wage would outweigh any positive effects. That, I would surmise, is one reason why only a fraction of the 35 living economists who have received the Nobel Prize signed on to the petition--I believe all were asked to sign.

Controversy remains in the United States (and elsewhere) over the effects of the minimum wage mainly because past changes in the U.S. minimum wage have usually been too small to have large and easily detectable general effects on employment and unemployment. The effects of an increase to $7.25 per hour in the federal minimum wage that many Democrats in Congress are proposing would be large enough to be easily seen in the data. It would be a nice experiment from a strictly scientific point of view, for it would help resolve the controversy over whether the effects of large increases in the minimum wage would be clearly visible in data on employment, training, and some prices. Presumably, even the economists and others who are proposing this much higher minimum must believe that at some point a still higher minimum would cause too much harm. Otherwise, why not propose $10 or $15 per hour, or an even higher figure? I am confident that for this and other reasons, the actual immediate increase in the federal minimum wage is likely to be significantly lower than $2.10 per hour.

A number of countries, including France, have already initiated this experiment, for the ratio of the minimum wage to the average wage in these countries is much higher than in the United States. The effects of the French minimum have been carefully studied by two excellent economists, Guy Laroque and Bernard Salanie, in a series of articles, such as "Labor Market Institutions and Employment in France," Journal of Applied Econometrics, 2002. They find that the relatively high minimum in France explains a significant part of the low employment rate of married women in France. Salanie has also argued that the high French minimum wage is important in explaining the dismal employment prospects of young persons in France, and the huge unemployment rate of Muslim youths there, estimated to be about 40 per cent.


interrogation
Behind The Wire
David Simon on where the show goes next.
By Meghan O'Rourke
Posted Friday, Dec. 1, 2006, at 2:27 PM ET

The fourth season of HBO's The Wire comes to an end next Sunday. A show of remarkable complexity, co-written by two former Baltimore Sun reporters, David Simon and Ed Burns, it is perhaps the most critically acclaimed TV program of the season. What critics and fans alike have noted is The Wire's remarkable narrative compression; as in the best novels, there is a sense that every detail has a purpose. Early on, The Wire may have impressed viewers with its cop-show chops—the first season focused on the Barksdale drug crew and the investigative police force trying to bring them down—but the show was always about something bigger—namely, the life of the city itself. In the fourth season, which concludes on Dec. 10, the show has expanded its focus from local politics and the drug trade to the public school system; with only one remaining season scheduled, we pressed David Simon on what The Wire adds up to, how the writers' room operates, and what might be in store in Season 5. Simon spoke with me by phone from his office in Baltimore.

Slate: What did you think made The Wire different from The Corner, the HBO miniseries that preceded it?

Simon: The Wire concerned those parts of the book [Simon's original nonfiction account] about why the drug war doesn't work. But we realized that explaining why the drug war doesn't work would get us only through the first season. So, we started looking at the rest of what was going on in the city of Baltimore. Ed [Burns] and I knew we wanted to touch on education. I had grown up as a reporter at the Baltimore Sun, and I had seen many aspects of local and city administration. Once we began to come up with these different ways of addressing the city as a whole, we had a blueprint for the show.

Slate: If you had to sum up what The Wire is about, what would it be?

Simon: Thematically, it's about the very simple idea that, in this Postmodern world of ours, human beings—all of us—are worth less. We're worth less every day, despite the fact that some of us are achieving more and more. It's the triumph of capitalism.

Slate: How so?

Simon: Whether you're a corner boy in West Baltimore, or a cop who knows his beat, or an Eastern European brought here for sex, your life is worth less. It's the triumph of capitalism over human value. This country has embraced the idea that this is a viable domestic policy. It is. It's viable for the few. But I don't live in Westwood, L.A., or on the Upper West Side of New York. I live in Baltimore.

Slate: What are your models?

Simon: There were no models for us in TV. I admire the storytelling of The Sopranos, though I don't watch it consistently. And Deadwood; I don't watch it, but I admire their storytelling. We certainly weren't paying attention to network TV.

Instead, the impulse on my part is rooted in what I was supposed to be in life, which was a journalist. I'm not interested in conducting morality plays using TV drama—in stories of good versus evil. I'm not interested in exalting character as a means of maintaining TV franchise. Most of TV works this way: You try to get something up and running, and once you do, you just try to keep it going, because there's a lot of money involved. That's not in my head. What's in my head is what I covered, what I saw as true or fraudulent, what made me smile, as a reporter. I've been mining that ever since. To be honest, at the end of The Wire, I'll have said all I have to say about Baltimore. I don't have another cop show in me. I don't have another season about Baltimore. What I'm saying is that I have to go back to the well.

Slate: Do you feel the well is starting to go dry?

Simon: We're catching up. We started with a case Ed did in the late '80s, then a case in the '90s. And all along we've been pulling things that are going on in Baltimore contemporaneously. We still now consult active detectives, journalists. The processes we're describing are not timeless, but they are time-tested. In Season 2, we said if someone didn't fix the grain pier [a shipping facility on the Baltimore harbor], someone would come along and turn it into condos. At the time it was sitting idle. By the time we were working on Season 3, they had sold it, and now there are condos over there. The bar where we had the stevedores hang out is being remodeled for a yuppie fern joint. We discussed how police officers can juke stats to make it look like crime disappears, and that was a huge issue in the recent election. The same games are always being played.

Slate: The show is a bleak yet accurate portrait of social realities in Baltimore's inner city, and you have said in interviews that the show is designed to be "a political provocation." Would you consider yourself a social crusader? What, if any, changes would you like to see the show catalyze?

Simon: I don't consider myself to be a crusader of any sort. I was a bystander to a certain number of newspaper crusades. They end badly, either by being fraudulent or by inspiring legislation that makes things worse. So, I regard myself as someone coming to the campfire with the truest possible narrative he can acquire. That's it. What people do with that narrative afterward is up to them. I am someone who's very angry with the political structure. The show is set in a 21st-century city-state that is incredibly bureaucratic, and in which the legal pursuit of an unenforceable prohibition has created great absurdity.

Slate: You have been pessimistic in public comments you've made about the possibility of political and social change. Do you think change is possible?

Simon: No, I don't. Not within the current political structure. I haven't met any politicians with that kind of courage. I wasn't fond of his performance as mayor, but Kurt Schmoke's merest suggestion that we discuss drug decriminalization was very brave. The idea that we would address this issue as a matter of effective social policy! He was pilloried. It destroyed what remained of his political career. He was a prophet without honor in his own city. People, especially people from outside the city, want to say that Schmoke was soft on drugs. But the police department had locked up more people than any previous administration. To no avail! He had the temerity to say so, and look where he is now. He is dean of Howard Law School. Martin O'Malley has arrested so many Baltimoreans that the ACLU and other civil rights leaders have rightly, to my mind, questioned the constitutionality of the city police department's arrest policy. When we finish filming at 1 in the morning, it's even odds that one of the African-American members of the cast and crew will be detained. My first assistant director was arrested, dumped unceremoniously at central booking, and ultimately released after seeing a court commissioner. The charge against him was never brought into court. This is common in Baltimore under the current administration. Other members of my crew have suffered similar indignities. And it hasn't reduced crime significantly. That's not how you reduce crime.

Slate: Let's talk a little about process. In contrast to other shows on TV, The Wire seems to me to have a remarkable degree of narrative compression; there's a sense that every detail is planned and relates to another detail. I don't normally feel this on TV, where there's a sense that shows exist to fill up the hour.

Simon: I keep using this metaphor whenever I'm on set and we have problems with the actors—and we don't have many problems—losing sight of the whole. I say, "We're building a house here. Every single one of us, all the writers, all the actors, all the crew, all the directors; everything in our bag of tricks, it's all tools in the toolbox. It's not about how often the hammer comes out; it's about the house we're building. So, all the details are essential. The only thing I care about in the end is the house." In the writers' room at least, that's a given.

The big thematic heavy lifting was done in Seasons 1 and 2, when Ed and I were figuring out what we wanted to do: how many seasons, etc. We came up with five. We talked about many things; nothing seems substantial enough for a Season 6. When other writers came onto the show, George Pelecanos, Richard Price, we would throw it at them: This is what we came up with, five things. If there's anything else you have, any ideas for extending the series, say so. There was no general agreement on anything but the five. When I've done my begging with HBO—and begging it is—it has been on behalf of those five seasons. To be honest, one writer came up with another idea, and a really good one, but we realized that it would require so much research on our part that we couldn't do the work quickly enough to keep it in this dramatic world.

Slate: It wasn't this idea of examining the influx of Hispanics in Baltimore, was it?

Simon: Yes! It was.

Slate: David Mills mentioned it in the Slate "TV Club" on The Wire. I thought it was a fabulous idea.

Simon: Until now, Baltimore had no Hispanic population. And all of a sudden now we do—a large Central American population. Here's this remarkable new trend and it's also relevant to the life of the city. Two things keep me from jumping up and down with HBO: One, I just did everything I could for Season 5; two, none of us is fluent in Spanish; none of us is intimately connected to the lives of Hispanics in Baltimore. None of us could do it with the degree of verisimilitude we demand of ourselves. We don't have that world in our pocket. By the time we did the research, The Wire would have been off for two years. It's one thing when we take six months off to learn how the port works; we're still in the world we know. But I did no decent journalism about East Baltimore, where most Central Americans are living. It would be great if we could. When I saw the idea in print, I think I reacted as you did: Oh shit! Someone came up with Season 6! For all I know, David Mills mentioned it to me a few years ago, but it didn't have the import then that it does today. Someone should get to that story. It's very typical of Baltimore in that we would be late on that. Until now, Baltimore had never had this kind of population—it was only 2, 3 percent Hispanic.

Slate: How far in advance have you scripted out a given season? In mapping it out, do you know what the end will be? Or do you go from the beginning forward?

Simon: There are discussions during Season 3 that happen on the fly, in which you need to remember which characters need to be where for Season 4. For example, Prez. We knew we needed to have Prez in the schools at the beginning of Season 4. So, you know that Prez will be a teacher and that you'll be hiring kids. But what themes? What do you want to say about the education system? The first step is sitting down and figuring that out. We kicked that around a lot in the writers' room. This year Ed was predominant in the writers' room, because he had actual teaching experience. David Mills came and helped. He did this because George Pelecanos couldn't be available; he was working on The Night Gardener. (Though he did do one episode.) And myself and Bill Zorzi and Ed and Chris Collins and Eric Overmyer, who was on this year as a producer.

And then at some point, when we feel we've got the themes ready, we start to look at the characters and where we need to send them by the end of the run. We know what we want to say. We know what we think is fair and just to say about economic opportunities for these kids. But we don't know which kid is going to say what about those opportunities; that's all argued out, and this season we went through various scenarios. There was a lot of debate about what should happen to Dukie, for example.

There's always someone in the room saying, "I've seen that before." (George's favorite line.) Or another line is, "But what are we saying?" You get to the end, and someone comes up with a great ending, but you ask, "But what does it mean? What are we saying?" Which is not to say that you want the characters to be devices for your didacticism. But you want to be true about what you say about equality of opportunity.

Slate: The show brings in a lot of different high-profile writers and directors, including Richard Price, for example. How does that work?

Simon: There is a lot of arguing. There's a lot of ego in the room. There are a lot of authors in the room with a lot of success in different media. George Pelecanos knows how to write a book with a beginning, middle, and an end. So does Richard Price. Ed Burns wrote a nonfiction book. Not to mention that I can be a pretty big shithead myself. All of us together, it can be miserable for a while. But what attracts everyone to it, even though they've got their own gigs, is a fealty to the entire story, to the whole. You don't have people being protective of the single episode or idea; you have people being protective of the whole story.

We let all the writers know what's happening, the larger arc. Not the actors, of course—then they'd telescope. We want their characters to be living in the present tense.

Slate: What role, if any, do actors themselves play in the dialogue they speak? Is a finished episode relatively faithful to the original script?

Simon: Pretty faithful. Ninety-five percent or so. One of the writers is always on set. If someone comes up with an ad-lib or a different intonation and it doesn't work, or it's not our intention, we bring them back to the book. If someone comes up with something, and it's good, it serves the story, or it's just generally funny, we let it ride. But because the story is so ornate and because we're looking at this thing as a 66-hour movie, when we're done, it's the writers, the people with the constant awareness of the story as a whole, who need to make decisions as to whether or not an ad-lib would work.

Slate: I'm interested that you said you see this as a 66-hour movie. One thing that has attracted me, like many viewers of the show, is that its sheer length allows you to show in detail many things you just never see in cop films.

Simon: On The Wire, we were trying to explore this stuff you don't see—the dope on the table, all that has been done to death. Sometimes the real poetry of police work is a couple of detectives with their feet on a desk in the backroom looking at ballistics. And that sounds like anti-drama. But that's the trick to making good drama; the drama has to be earned. There have to be moments of anti-drama. You can't make a good show based on pure verisimilitude, pure anti-drama. But you have to acknowledge a lot of ordinary life. Most TV doesn't do that.

If I had to write a police procedural right now, I'd put a gun to my head. And I really have to say this, even Homicide [on which Simon was a producer and writer] was a prisoner of the form. On shows where the arrest matters, where it's about good and evil, punishing crime, the poor and the rich, the suspect exists to exalt the good guys, to make the Sipowiczs and the Pembletons and the Joe Fridays that much more moral, that much more righteous, that much more intellectualized. It's to validate their point of view and the point of view of society. So, you end up with the same stilted picture of the underclass. Either they're the salt of the earth looking for a break, and not at all responsible, or they're venal and evil and need to be punished. That's a good precedent for creating an alienated America.

Slate: One thing that struck me about the show, from the get-go—and this may sound like base flattery: It reminded me of Shakespearean drama for the way that even the villains are humanized. No one is just a bad guy. Even Avon, whom I loathed at the opening of Season 1, I came to like.

Simon: It's funny you should say that, because the portrayals in Deadwood are in the Shakespearean model. On The Sopranos, there's an awful lot of Hamlet and Macbeth in Tony. But the guys we were stealing from in The Wire are the Greeks. In our heads we're writing a Greek tragedy, but instead of the gods being petulant and jealous Olympians hurling lightning bolts down at our protagonists, it's the Postmodern institutions that are the gods. And they are gods. And no one is bigger.

By the way: If at any point any character on the show ever talks as I'm talking right now, it would suck. It's crucial that the characters can't lecture us.

Slate: The second season is focused largely on white dock workers in Baltimore, and less on the inner-city ghetto. What was behind that decision?

Simon: If we hadn't gone somewhere else in Baltimore, we couldn't have said to anyone we were trying to write about the city. Ed and myself and Robert Colesbury—who inspired the visual look of the show, and who sadly passed away—the three of us said, we want to build a city. If we get on a run, we want people to say, "That is an American city, those are its problems, and that's why they can't solve its problems." If we had just gone back to the ghetto and continued to plumb the Barksdale story, it would have been a much smaller show, and it would have claimed a much smaller canvas.

Originally, the show created a new target each season. By the time we ended Season 1, we realized we could extend the Barksdale story over Season 3, to Hamsterdam, and that we could extend that target over the City Hall story. One of our five themes was the death of work and the death of the union-era middle class. So, we thought, do we go to the port? Do we go to GM? Do we go to Beth[lehem] Steel? They probably weren't going to let us film at GM, and Beth Steel was bankrupt at that point. We put out a few feelers and GM wasn't really open. But the Port Authority was open to talking to us. So, that's where we were going and everything developed from there.

You know, sometimes people in West Baltimore say to me, about Season 2, "We know you tried to take our show white, but it didn't work—then you came back to us." And I have to say, "Dawg, no. The second season was the most watched season." A lack of audience is not why we left it behind.

Slate: Do you think it was the most watched season because more of the characters were white?

Simon: It certainly helped. There are limits to empathy in this country. By the way, viewership for The Wire is now up—it's up 15 percent on HBO on Demand, and on second airings.

Slate: You've killed more characters than any show I can think of. Who was the hardest character to kill off?

Simon: I miss all of them—I miss Wallace, I really miss D'Angelo. I miss Idris [the actor who played Stringer Bell]. I saw him at the HBO premiere after we killed him off. I was just beaming. All these theories that we kill off guys because they get contracts elsewhere, it's not true. The fact is, if you're not willing to kill your babies—isn't that a Faulkner line?—well, that's no good. You have to kill your babies if the story demands it. Stringer tried to reform the drug trade; it doesn't bear reform. Colvin tried to reform the drug war; it doesn't bear reform. But for me, the most painful death was Wallace. By the way, our own crew was really upset. Even they're not used to this kind of show. It came as a surprise to them. When the dailies came in, we were like, jeez, that's horrible. It was quiet when we saw this scene.

Slate: Marlo is the only character on the show thus far who seems to be out-and-out bad—almost a sociopath. Avon was cold-blooded, but his friendship with Stringer humanized him. Is this intentional on your part? Or do I just dislike Marlo (even though the actor is brilliant)?

Simon: Yeah, we have made him sociopathic. No, you know what—sociopathic to a lot of people really means something beyond Marlo. In our mind, Marlo is the logical extension of every single lesson that the drug war holds true. There is a lot of sociopathic impulse that is excused and justified by that. To say that he is sociopathic, no; he has real allegiance to a few others. There are a few select people, subordinates, to whom he has allegiance. Let me ask you this: Did you have any allegiance to the Greek in Season 2?

Slate: The Greek? No, I don't think I did.

Simon: That's because he represented capitalism in its purest form. There are certain people who represent the boundary to the form. At another moment, perhaps next season, the point of view might shift and the window into that character might shift and our allegiances with it, because we are only experiencing a character from a certain point of view. If we were to have followed the Greek too far, we would have wandered far afield from the main story, the stevedores.

You're right to feel that Marlo is enigmatic and distant now. And you're also right to feel he's doing an awful lot of bad stuff. But he's not any less complex than the other characters. He's just not showing other sides of himself. In other words, if anyone is feeling empathetic for him right now, it's not because of what the writers did.

Slate: Some of our readers have been offering up what amounts to a racialist critique of white, middle-class writers presuming to tell black ghetto stories. And in Slate's "TV Club" on The Wire, Steve James and Alex Kotlowitz touch on a question that they have been asked (and asked themselves) over the years: Can a white person honestly and accurately capture black culture?

Simon: Well, I have a couple answers to that. On one level, I'm becoming impatient, because I feel the work has answered the question. But let me answer. The people in that room on The Wire miss certain things because we're white. I'm sure we do. We miss certain things about black life—or not entirely; we miss the subtlety that a black writer of commensurate skill could achieve. But it is possible that there are things we catch because we are who we are—we are not necessarily of the place, and this may allow for whatever distance is necessary to see some things.

The other thing is that I didn't ask for this gig. I got hired out of the University of Maryland by the Baltimore Sun to be a political reporter in a city that was 65 percent African-American. If I didn't do my best to listen to those voices, to acquire some of those voices for my storytelling, I wouldn't have been doing my job. If I'd been a higher-education reporter, maybe I wouldn't have written The Wire. But I didn't ask for the job. They gave me that beat. I wasn't after these stories. (Likewise, Ed grew up in Baltimore and, after he came back from Vietnam, he became a police officer, and they put him in the Western.) If we tried to tell these stories, and they were not credible, and if the voices weren't sufficiently authentic, we'd have our heads handed to us—not only by social critics and literati, but by viewers, by regular folk.

I don't know how popular The Wire is on the Upper West Side of New York or Westwood or Des Moines. But I know that in West Baltimore, Omar can't get to the set, because we have people going nuts. Or Stringer Bell or Prop Joe. The show has an allegiance in that community. That's its own answer—not that it's popular, but that it's credible. I was just on 92Q, the hip-hop station. A call came in from someone who asked, why did you kill Stringer Bell when the real Stringer Bell is still alive? And I said, oh, you mean Mr. Reed? I explained that Reed was not the real Stringer, but that we mix and match stories. But there we were, talking intimately about the history of the West Baltimore drug trade as if we were talking about baseball. If it was as lamely white and unnuanced as some people claim, we'd have been found out a long time ago.

Having said all that, the show is very conscious of trying to bring in African-American writers. I tell agents in Hollywood, don't send me scripts unless they're by African-American writers. From the moment the show was conceived, I asked David B. to produce it with me. I would have loved to have his voice in the show—not just because he's African-American but because he can write the hell out of it. A young writer named Joy Lusco did a few episodes. We've been trying to leaven the writers' room in that way. But it's a very hard show to write, as you can imagine. It's not as if all these scripts came in from agents, and we read them and think, "Based on this spec script from NYPD Blue, I'm confident I'll get what we need." You're looking for people who've worked on this level before, and when you find them, you beg them to help out.

We have done better in having an African-American hand in some of our crew departments and in directing. Nobody has directed more episodes than Ernest Dickerson—he's Spike Lee's former cinematographer. We've also broken someone in: Anthony Hemingway, an AD, directed our first episode last year. And now we may not be able to get him back, he's got so much work.

It's our hope—this is a little premature—to get Spike Lee for the first episode next year. He said he was interested last year, but we had some miscommunication. His agent said he wasn't available. We are very conscious of the race disparity. We look around the room and see, oh shit, we're a bunch of white guys! But you look at what Price and Pelecanos and McCain and Burns have done. … We're not trying to exclude in any sense, and it's not a good-old-boy network, because some of these people never met before this show.

Slate: Can you tell us a little about Season 5?

Simon: Yes, the last season. The last theme is basically asking the question, why aren't we paying attention? We got everything right in the last four seasons in depicting this city-state. How is it that these problems—which have been attendant problems regardless of who is in power—how is it that they endure? That brings into mind one last institution, which is the media. What are we paying attention to? What are we telling ourselves about ourselves? A lot of people think that we're going to impale journalists. No. It's not quite that. What stories do we want to hear? How closely do they relate to truth; how distant are they from the truth? We have a story idea about media and consumers of media. What stories get told and what don't and why it is that things stay the same.

What's happened to the Baltimore Sun locally is what has happened to that whole second tier of journalism—below the New York Times and the Washington Post: They're being eviscerated by price per share. There used to be 500 reporters; now there are 300. They keep telling us they can do the same job, they just need to be more effective. Bullshit. Five hundred reporters is 500; 300 is 300; you can't cover the city the same way with fewer people.

I don't want it to become onanistic. Obviously, I have a lot of memories of the Baltimore Sun. One thing I've always hated about TV portrayal of media is that it's always unfeeling assholes throwing microphones in the face of someone as he comes down City Hall steps.

I'll tell you a story. We had a press conference the first season. We staged it as a press conference really would be: a small room, some empty chairs. TV reporters are looking at print reporters to see what they ask; there is a pile of dope on the table; there is no sense of urgency. That is the way it always was. This was one of the only [production] notes we got [from HBO] the first season: What's up with that press conference? It looked so fake. At the time, I didn't have enough credibility with HBO to argue with the note, but I said Carolyn [Strauss, president of entertainment at HBO], you're raised on too much TV press.

The low end of journalism is not what concerns me. It's not that sensational stuff I'm worried about. It's that there may be no high end anymore, that the kind of thing journalists once aspired to, especially in the Washington Post-Watergate era, may no longer exist.

Meghan O'Rourke is Slate's culture editor.

Is Student Debt Too High?--BECKER

Some members of the new Congress are claiming that the debt of students to finance their college education is too high, and that more generous federally funded student grants should be available. Reforms of the college loan program are desirable, but when placed in a proper perspective, college students generally receive an excellent deal on their student loans.

Over 60 per cent of students who finished college or graduate studies in 2003-04 with a Certificate or a Degree had taken out a loan. The percentage was highest, at 70 to 80 per cent, for students who received a professional degree; it was also high for students who attended for-profit colleges; and it was lowest for students who graduated from two-year public institutions. This difference by type of college is partly explained by the fact that the fraction taking loans is much larger for students from families with low incomes, since poorer students are more likely to go to for-profit colleges.

Even after adjusting for inflation, the average student loan increased by about 50% in the decade prior to 2004. The average size of the loan for those with loans was about $15,000 for graduates in 2003-04 with Bachelor's Degrees; it was naturally much lower for those who received certificates or degrees after two years of college, and substantially higher for those with Masters and other post-graduate degrees. Perhaps surprisingly, the average loan did not vary much between for-profit, other private, and public colleges.

Although the debt of graduating students is not a minor burden, it is not usually a major one either, if the size of loans is related to benefits from college as well as to financial and other costs. Costs measured by tuition have increased at a rapid rate since 1980. According to calculations by Pablo Pena at the University of Chicago, tuition at private non-profit four-year colleges rose 140 per cent in real terms from 1980 to 2005, which means an annual rate of increase of over 3.5 per cent. Public schools charge a lot less, but they too had rather rapid increases in tuition. Students who are from poor families have the most trouble paying for college, and obviously that burden gets heavier when tuition is higher. This helps explain the increase over time in both the fraction of students who take out loans, and the size of the typical loan.
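That annual figure follows from simple compounding; the brief Python sketch below uses only the 140 per cent increase and the 25-year span cited above, and everything else is illustration.

    # Back-of-the-envelope check: a 140% real increase in tuition from 1980 to 2005
    # (a factor of 2.4 over 25 years) implies roughly 3.5-3.6 per cent annual growth.
    growth_factor = 2.40   # tuition rose to 2.4 times its 1980 level
    years = 25             # 1980 to 2005

    annual_rate = growth_factor ** (1 / years) - 1
    print(f"Implied annual real tuition growth: {annual_rate:.2%}")  # about 3.56%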

On the other side of the ledger, higher tuition over time was related to sharply higher financial benefits from a college education. The typical college graduate earned per hour about 50 per cent more than the typical high school graduate in 1980, and the gap is now about 95 per cent. Earnings of graduates with a professional degree or other post-graduate education grew even faster over time than did earnings of college graduates.

The net benefits from graduating from college are determined by the higher earnings college graduates would receive over their lifetime compared to what they would receive if they started working after high school, minus tuition and any other costs of a college education. The data I have just given show that the increase in the earning advantage from a college education during the past couple of decades was far greater than the increase in tuition, so that average rates of return on a college education--a measure of the net benefit-- increased greatly. In addition, various non-monetary benefits of a college education also grew over time. Probably the two most important of these benefits are that higher education increases health through the improvements it induces in lifestyles and medical care, and higher education also improves investments in the learning and behavior of one's children.

How big a burden is the average loan for college graduates who take loans, which is about $15,000 to $20,000? The net present value of the earnings of typical graduates of four-year colleges over their lifetimes after discounting future earnings and subtracting out tuition and other costs has been shown to be over $300,000 more than what high school graduates earn. Even a $20,000 student loan debt is small relative to such a large benefit. Put differently, if the only way to go to college would be to borrow $20,000 under a student loan program at the prevailing 7 per cent interest rate on these loans, the returns from college to a typical graduate would be big enough to allow the borrower to pay off the loan and have a lot left over.
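To see roughly how small the loan is relative to the benefit, the sketch below amortizes the loan described above; the $20,000 principal, the 7 per cent rate, and the $300,000 discounted lifetime premium come from this paragraph, while the 10-year repayment term is an assumption added purely for illustration.

    # Hypothetical illustration: amortize a $20,000 student loan at 7% over an
    # assumed 10-year term and compare total repayments with the discounted
    # $300,000 lifetime earnings advantage cited above.
    principal = 20_000
    annual_rate = 0.07
    years = 10                   # assumed repayment term (not from the text)
    lifetime_premium = 300_000   # discounted earnings advantage of a four-year degree

    r = annual_rate / 12         # monthly interest rate
    n = years * 12               # number of monthly payments
    payment = principal * r / (1 - (1 + r) ** -n)
    total_repaid = payment * n

    print(f"Monthly payment:  ${payment:,.0f}")        # roughly $232
    print(f"Total repaid:     ${total_repaid:,.0f}")   # roughly $28,000
    print(f"Lifetime premium: ${lifetime_premium:,}")  # an order of magnitude larger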

Of course, such loans would be a much greater burden for students who only received two years of college, perhaps because they dropped out of a four-year program, or because they received an Associate Degree. Such students do not earn nearly as much as graduates of four-year colleges. However, the burden of a fixed amount borrowed is not the right comparison since as I indicated, graduates of two-year programs borrow much less than do graduates of four-year programs. In reality, the burden of what graduates of two year programs typically borrow is not much greater compared to their discounted earnings than the rather minor burden of the actual loans taken by graduates of four-year colleges.

Within any category of graduates, earnings vary considerably by type of job-- teachers and clergymen earn a lot less than investment bankers--and by degree of success within jobs. Fixed interest loans are not the best way to borrow when loans are used for risky activities. Returns on higher education are rather risky, even after adjusting for how they co-vary with returns on assets. Businesses often borrow with the equivalent of equity to finance start-ups and other risky activities, where the equity pays off well if the venture is successful, and pays little if the venture fails.

This suggests that student loans should not have fixed interest rates that require a fixed amount to be repaid per $1,000 borrowed, but rather should have the equivalent of an "equity" repayment system. That would mean that persons who earn very little repay little, while those who earn a lot repay a lot (per $1,000 borrowed). Requiring individuals who are repaying student loans to submit their income tax statements each year, so that lenders could document what the borrowers earned, could enforce such an income-contingent repayment system.
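As a minimal sketch of the difference between the two repayment schemes, the illustration below compares a fixed annual payment with an income-contingent one; the 4 per cent income share, the fixed payment, and the sample incomes are invented for illustration and are not part of the proposal described above.

    # Hypothetical comparison of fixed versus income-contingent repayment on the
    # same $20,000 balance. The 4% income share, the fixed payment, and the sample
    # incomes are made up purely to illustrate the "equity-like" idea described above.
    balance = 20_000
    fixed_annual_payment = 2_800   # e.g., what a 10-year fixed-rate schedule might require
    income_share = 0.04            # repay 4% of annual income until the balance is cleared

    for income in (18_000, 45_000, 120_000):
        contingent = income_share * income
        print(f"income ${income:>7,}: fixed ${fixed_annual_payment:,}/yr "
              f"vs. income-contingent ${contingent:,.0f}/yr")
    # Low earners repay less per year (and take longer); high earners repay more and
    # finish sooner, which is the sense in which repayment resembles equity.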

The United States already has a small student loan program that allows repayments to be conditional on the incomes of borrowers. But a system with both fixed interest loans and income-contingent loans has a "moral hazard" problem. Students who expect to go into well-paying jobs would tend to borrow at fixed interest rates since that would be cheaper to them than repayments that rise with higher earnings. A possible reform of the federal program that would reduce this moral hazard would be to shift entirely to an income-contingent system, where persons with student loans who earn little would repay relatively little, and those who earn a lot would repay much more.

A full income-contingent loan program would not be without its own problems, since it would attract students who expect to go into low-paying occupations and repel students who expect higher earnings. In addition, such a program would "tax" high earnings, which would further discourage effort by high earners and further encourage them to try to hide income. But it might work better than either the present largely fixed-interest system, or a dual system that allowed both fixed-interest loans and income-contingent loans, with the choice between these systems determined by students.

Student Loans--Posner's Comment

Generally, it is more efficient both socially and privately for the consumer, in this case the student, to pay the full cost of the goods and services that he buys than for the government to pick up any part of the tab. A student admitted to an elite college like Harvard or Yale has high expected lifetime earnings, and it seems absurd that the federal taxpayer should be required to defray a part of the cost of his education. This is not a point about distributive justice, for nowadays most federal income tax is paid by high-income individuals. It is a point about the inefficiency of using the federal tax and spending power to subsidize purchases by affluent (as measured by expected earnings over the recipient's lifetime) consumers. The inefficiency lies not only in the transaction costs associated with the subsidy (including lobbying expenses), but also and more importantly in distorting the allocation of resources. Suppose that for some marginal student the expected return from a college education, net of tuition and opportunity cost (the forgone income from working if the student attends college instead), is negative, but turns positive if his tuition expense is subsidized; then the subsidy is inducing a waste of resources.

This would be obvious if the subsidy were for a course in automobile repair, but maybe there is something special about college or university education that distinguishes it from other services, including other educational services. There are two arguments. One is that higher education (lower also, presumably) confers social benefits (that is, benefits not captured by the student), whether by making people more informed voters, or by making them more productive workers (assuming they cannot capture their entire contribution to social output in their wage), or by reducing subsidized health costs by increasing health (Becker notes that educated people are healthier than uneducated people).

This is not a good argument for a subsidy because it does not appear that many persons who would benefit from a college education fail to obtain one. As Becker points out, the private returns (higher earnings) from a college degree are very great and a student can borrow to finance the tuition and other costs of the degree. It seems unlikely, though it is not impossible, that kids who would not personally benefit from college nevertheless would, if paid to go to college, confer the social benefits of a college education that the students who do benefit personally might be thought to confer.

But the points I have made so far really argue just against increasing the existing subsidies for college education, rather than against any subsidies. College education is already heavily subsidized, notably in the case of state and city colleges, where the taxpayer picks up a big share of the cost; but private colleges receive various tax breaks, so they are subsidized too. (I would not call alumni donations "subsidies," however, since they are voluntary and give value to the donor.) Since a worker usually cannot recapture in his earnings the full effect of his labor on output (because he produces some consumer as well as producer surplus), and college increases the productivity of those students who are intelligent enough to benefit from a college education, there is an argument for making college affordable by any qualified applicant. However, it is unclear to me whether this requires any subsidies; all that is required is that the boost in expected earnings from attending college exceed the cost of the loans, or other costs, that the student must incur for college to be a remunerative choice. For then the student will be motivated to attend college even though his doing so will produce social as well as private benefits. All that is important from the motivational standpoint is that the private benefits exceed the private costs.

The second argument for subsidizing higher education is that its high cost nowadays, which for students who must borrow to pay tuition and living expenses forces them to go into debt, deflects students from nonremunerative jobs, such as (in the case of debt-ridden law students) public interest legal practice, or public school teaching, that (especially teaching) may confer substantial social benefits. (I doubt that public interest law practice does.) The students have too much debt to be able to pay it off without taking a high-paying job. However, a student loan subsidy is a clumsy device for channeling students into employment that is underpaid from a social standpoint, since every student gets the subsidy but only a handful are induced by it to enter the desired channel. It would be more efficient to raise the pay for the jobs that are thought to confer social benefits. A loan-forgiveness program, where forgiveness is conditioned on taking one of the favored jobs, is better tailored to the end of encouraging students to take such jobs than a loan subsidy; it operates to raise the full income of the job.

To repeat an earlier point, which tends to be neglected in discussions of the student-loan issue, if I am right that very few persons who could benefit from a college education are deterred by its cost, the main effect of increasing the subsidy will be to attract applicants who would not benefit if they weren't being "paid" to attend college. That would be a misallocation of resources.

I conclude that the case--for which I gather there will be support in the new Congress--for increasing the student-loan subsidy by having the federal government subsidize a larger part of the interest on the loan is a weak one.

 What's good for pharma is good for America

Critics of drug companies vastly overstate the industry's financial well-being -- and overlook its indispensable contributions to the future of public health

By Richard A. Epstein  |  December 3, 2006

THE WINDS OF POLITICAL FORTUNE have brought the Democrats into power in both houses of Congress, and high on their 2007 agenda is tightening the regulatory screws on the pharmaceutical industry. It seems highly likely that the new Congress will seek to intervene on such hot-button issues as FDA oversight of drug safety, patent protection, and drug pricing.

The implicit premise behind this looming regulatory offensive is that Big Pharma (an epithet) is a 900-pound gorilla in need of domestication. In recent years, notable authors such as Arnold Relman, Marcia Angell, and Jerome Kassirer -- all former editors in chief of the New England Journal of Medicine -- have penned searing indictments of the industry.

These and other critics treat the industry's multibillion-dollar profits as a sure sign of its permanent robust economic status. But those numbers conceal deep vulnerabilities. It is no accident that the shares of major pharmaceutical houses have been hammered over the past three or four years, even as profits appear to be at record highs. Wall Street values companies not only on current earnings, but also on long-term prospects, which are cloudy at best for research pharmaceutical firms. Just this past week, for example, Pfizer announced plans to cut one-fifth of its United States sales force, with a promise of further restructuring in January.

We shouldn't be surprised. The huge profits of major drug firms are often tied to one or two drugs, such as Pfizer's Lipitor or Viagra -- profits that evaporate when their patents expire and generics enter the marketplace. The Standard & Poor's review of pharmaceuticals thus starts somberly, noting that products with $21 billion in US drug sales are going off patent in 2006, with another $24 billion to follow over the next three years -- a sharp dent for an industry that today generates about $250 billion in revenue. All the while, the pharmaceutical houses also must absorb the legal and business risks needed to identify, patent, test, license, and market any new drug.

These trends should worry us all. Pharmaceuticals are not tobacco. There is no reason to rejoice in putting pharma on the ropes if its business reversals hurt the very consumers it is trying to serve. The medical advances of the past 30 years are not just a matter of dumb luck. They are very heavily dependent on patent law, pricing freedom, and marketing strategies that have allowed these firms to bring a wide variety of vital products to market.

The champions of further regulation argue that their efforts won't limit innovations or curtail the widespread use of new drugs. But there are no free fixes. Too often ill-designed regulation gives us the worst of both worlds -- slower innovation and more limited drug use.

...

We have much to fear in any new round of regulation. Bringing a new drug to market is already an arduous task. The FDA has consistently upped the number and type of clinical trials for companies seeking approval of new drugs, so that today as many as 60 separate trials are often required. Fewer drugs make it through these hurdles, and those that survive the ordeal cost ever more to bring to market.

Firms are thus caught in a two-way vise. They have to spend more to reach the market, yet once there they have a shorter period of patent exclusivity in which to recover their extensive front-end costs. (One consequence is that it has become ever harder to persuade companies to invest in drugs that attack diseases or conditions that afflict small populations -- thus exposing companies to the charge that they heartlessly put profits before patient health.)

The risks of marketing a new drug have been further compounded as the FDA has become more willing to remove drugs from markets at the first sign of any real or imagined dangerous side effects. But while such FDA actions often lead to accusations that drug companies have not come clean about a product's risks, it is usually the FDA that makes the incorrect risk calculation.

Last year, for example, early clinical trials showed great promise for a cancer drug called Iressa, which was used with success by many patients. After the early successes were not replicated in further clinical studies, the FDA adopted a Solomon-like solution: It allowed current users to continue receiving the drug, but otherwise took it off the market. The FDA's rationale was that a new drug, Tarceva, worked better. Yet it could never explain why patients for whom all other therapies had failed should prefer one last-ditch option to two. What is needed is good information about Iressa's successes and failures. If that is supplied, surely oncologists can do a better job calculating the odds than the FDA, which has to deal with averages, not individual cases.

With other established drugs, like the antidepressants Zoloft and Prozac, the FDA leaves them on the market, but requires they be sold with severe "black-box" warnings that overstate the risks (in the case of Zoloft and Prozac, of suicide). Fearful physicians thus shy away from prescribing such drugs -- not because of the dangers the drugs pose, but because they fear the warnings expose them to greater risk of medical malpractice suits.

Pharmaceutical companies meanwhile have their own lawsuits to worry about. The liability risks of mass-marketed drugs have increased significantly in recent years. Consumer fraud class actions, now common, arise after drugs have been withdrawn for some adverse side effect. Nonetheless, litigants are often allowed to sue for refunds not only for unused drugs, but also for the drugs that were successfully used, on the grounds that if the truth about the side effects had not been concealed (itself a debatable proposition), the pills would never have been purchased in the first place. The resulting loss in revenue leaves drug companies with even fewer resources to cover the thousands of suits for compensatory and punitive damages for drug-related injuries, like the multiple suits brought against Merck for its drug Vioxx.

Personal injury claims are immensely expensive to defend individually and their outcomes are fraught with error. Often they are propelled by inflammatory trial techniques that obscure the scientific evidence, which lay juries find hard to assess in the first place. It is stark evidence of how dire the situation is for pharmaceutical companies that the FDA, typically no friend of the drug companies on safety issues, has now actively intervened on their side in personal injury suits that attack the adequacy of FDA approved warnings.

The common judicial refrain in tort litigation has long been that FDA oversight, no matter how comprehensive, supplies only a "minimum" set of warnings. In reality, however, excessive warning is the greater peril. The FDA faces fierce criticism from Congress, the medical profession, and the popular press whenever any approved drug exhibits adverse side effects. Yet these watchdogs offer little or no outcry when the FDA keeps a new drug off the market. Visible injuries are easier to track than lost opportunities for cure.

. . .

Perhaps the biggest threat on the horizon for the drug industry is mounting pressure to submit to price controls. One possibility is that the government will set uniform prices for all drugs. Another is that it would require a company to sell to all customers at the lowest price charged to any customer within the past year. But no matter how such controls are calculated, they could devastate the business. What's more, they're just not necessary.

Traditionally, patent holders could decide how much to charge for their wares. Public protection against excessive profits for drug companies came from three sources. First, the patent period is limited to 20 years, with about half that time used to shepherd a new drug through the FDA approval process. Once the patent expires, the entry of low-cost generics sharply reduces the cost of proven drugs. Second, the rapid pace of invention means that consumers frequently can choose between two or more patented drugs in the same class (Lipitor, Crestor, and Zocor, for example, are three statins used to lower cholesterol), effectively blunting the monopoly power of all patent holders. Third, antitrust laws make it illegal for any makers of the same or similar drugs to conspire to raise prices or reduce output.

Within these constraints, of course, the research pharmaceutical firms still must recover their huge front-end costs, which can run over $1 billion for a new drug, over an ever shorter useful patent life. In addition, their successful drugs must generate additional revenues to cover the predictable flops. Yet companies need to charge someone for the initial costs of production, not just for the small cost of producing additional pills.

One common argument for price controls is that drug companies should spend money on research, not on lavish marketing. Yet that short-sighted argument assumes that pharmaceutical companies could sell the same quantities of drugs without advertising them. Of course, the cost of marketing raises the total cost of production, but by expanding the consumer base, it lowers the average cost consumers pay per unit. Any system of direct price controls would thus play havoc with both research and marketing, drying up the capital needed for innovation.

The overall picture today shows a research drug industry under constant pressure from all sides. Industry critics greatly fear letting bad drugs on the market, while simultaneously underestimating the real costs (in the form of forgone health benefits) of keeping good drugs off the market. In reality any sound risk assessment, whether by regulation or litigation, should take into account both kinds of error.

Critics also naively assume that investors and firms will continue to make huge investments in new products without any assurance of recouping their costs in the marketplace. But the drug business is too vast and complex to depend on individual altruism or government bureaucrats to fuel medical advances. As Adam Smith recognized long ago, the profit motive is the only constant and reliable spur to making the major investments on which the prosperity (and health) of any nation depends. Today's pharmaceutical industry is not exempt from that enduring insight.

Richard A. Epstein is a law professor at the University of Chicago and a senior fellow at the Hoover Institution. His new book is "Overdose: How Excessive Government Regulation Stifles Pharmaceutical Innovation" (Yale). He has from time to time consulted for Pfizer and PhRMA, an industry trade group. 

 

On drug prices, are Democrats in a fix?

By Ricardo Alonso-Zaldivar
Times Staff Writer

November 26, 2006

WASHINGTON — With millions of seniors facing premium hikes for their Medicare prescription plans, Democrats say they have a solution: Use the government's massive buying power to bargain for rock-bottom drug prices. The Department of Veterans Affairs does it for 5 million patients, they point out, so why not Medicare with its 43 million?

Medicare already sets rates for hospitals, doctors and medical equipment such as power wheelchairs — as well as drugs administered in doctors' offices. It was only the Republicans' ideological commitment to the private sector that led them to bar the government from negotiating discounts with drug companies, Democrats contend.

But the VA model may not be readily adaptable to Medicare, some independent experts say. And policy differences among Democrats, along with the Bush administration's opposition to government price-setting, may further complicate the task of reaching a goal that Democrats have set for themselves when they take over Congress in January.

In addition, newly announced discounts by drug companies could have an impact on the Democrats' effort before it gets started. At least one major manufacturer is offering help to seniors who have trouble paying for their drugs.

"From a rhetorical perspective, Democrats may feel like they gain a lot with this issue, but there are many substantive hurdles that the government faces in trying to negotiate prices," said Dan Mendelson, president of Avalere Health, a consulting firm that tracks the Medicare prescription program.

"If you look historically at the government's experience in trying to regulate prices, it's poor."



VA's lower prices

Although costs for the Medicare drug program are lower than the government originally projected, there is evidence that prices could be lower still. A recent Consumers Union study of the prices charged in South Florida for six widely used drugs found that the Veterans Administration's average prices were 54% lower than Medicare's.

"Medicare is overwhelmingly the largest purchaser, and it's ridiculous for Medicare not to get the best deal of all institutional purchasers," said Ron Pollack, executive director of the advocacy group Families USA. The VA's experience shows what the potential could be, he added.

Yet applying the VA approach to Medicare may prove difficult. For one thing, Medicare is much larger and more diverse.

VA officials can negotiate major price discounts because they restrict the number of drugs on their coverage list. Instead of seven or eight drugs for a given medical problem, the VA list may contain three or four. If a drug company fails to offer a hefty discount, its product may not make the cut.

For example, VA beneficiaries can get Zocor for high cholesterol, but not Lipitor. In all, the VA covers about 1,300 medications. By comparison, the most popular Medicare plan — AARP MedicareRx — covers about 4,300.

But VA patients who want drugs that are not on the department's list must go outside the system.

In other words, the VA offers lower drug prices, but fewer choices.

American consumers have repeatedly resisted efforts to save money on medical care by restricting choice. Health maintenance organizations, for example, were once seen as the answer to rising healthcare costs, but millions of people rejected the approach, saying they wanted the freedom to choose their doctors.

One prominent advocate of government-negotiated prices has had a change of heart. Tommy G. Thompson, President Bush's first Health and Human Services secretary, once expressed regret that he hadn't been given the power to bargain.

But in a recent interview he said: "This plan is working much better than ever anticipated. When you've got a law that is working well in the federal government, why change it?"

If Medicare had the legal authority to negotiate prices, some experts predict it would do well in categories of drugs where there are lots of choices, such as blood pressure pills. But for new or cutting-edge drugs, a manufacturer could have the upper hand. A company could launch a television ad blitz to pressure Congress into raising Medicare prices.

"For categories of drugs that are not competitive, my guess is the manufacturer would not cave in on prices," said economist John E. Calfee of the business-oriented American Enterprise Institute. "Those are the drugs that have defied price controls."

Other experts say it wouldn't be that easy to push Medicare around.

"The VA had to resort to a preferred drug list, but I don't think that Medicare would have to restrict its list very much because it is such a huge purchaser," said Dr. David Blumenthal, a professor of medicine and public health at Harvard University. "Medicare's power in the marketplace is such that I think every manufacturer would have to take its prices."



Two possible paths

House Democrats have promised action in the first 100 hours of the new Congress to give Medicare bargaining power. One poll showed that 77% of Americans supported the idea.

When the GOP-controlled Congress created the Medicare prescription plan in 2003, it expressly barred the government from negotiating prices on the theory that private insurance plans — which were assigned the job of delivering the benefit — could do a better job of getting discounts.

Critics called it a giveaway to the pharmaceutical industry.

"In the current program, there are no protections for beneficiaries that aren't exceeded ten times over by benefits to the pharmaceutical industry," said Rep. Pete Stark (D-Fremont), who is expected to lead the House subcommittee that oversees Medicare.

Democrats haven't spelled out a specific proposal, and Stark said there were at least two possibilities.

One would repeal the prohibition on negotiating and order Medicare to set a top price for each drug available under the program. Private insurers would still be free to bargain for even lower prices.

Such an approach would be similar to how Canada and European countries control prescription drug prices. Canadian prices are 30% to 60% lower than in the U.S. Some critics say Canadian price controls mean that patients there are not always able to get the latest medications.

The other possibility is to set up a Medicare-run plan to compete against the private plans. That is, Medicare would set up a plan using drug prices it negotiated with manufacturers. The plan, available nationwide, would compete with the privately operated plans — encouraging private plans to bargain harder for lower prices.

The two approaches aren't mutually exclusive, Stark said.

The proposal to let the government bargain with manufacturers could face additional hurdles in the Senate, where a prominent Democrat is among the leading skeptics.

As chairman of the finance committee, Sen. Max Baucus of Montana will have jurisdiction over Medicare. This year he was one of only two Democrats to vote against allowing Medicare to negotiate drug prices.

A spokeswoman said Baucus planned to hold hearings next year but had not made up his mind on the issue.

The drug industry hopes that even if it loses the first high-profile battle in the House, it can still win the legislative war.

"We fully expect that Speaker-elect Nancy Pelosi will deliver on her political promise, but after that we are equally confident that we will have an opportunity to educate both the members of Congress and beneficiaries across America about the benefits of the current program," said Ken Johnson, senior vice president of communications at the Pharmaceutical Research and Manufacturers of America.



Corporate outreach

Reaching out to seniors in the Medicare prescription plan, one major drug company has launched a discount program that softens one of the most worrisome gaps in the current benefit, known as the doughnut hole. Under that provision — created to save money for the government — seniors must pay the full cost after their annual drug spending exceeds $2,250; coverage kicks in again once the total exceeds $5,100.
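
For readers unfamiliar with the shape of the benefit, here is a rough sketch of the out-of-pocket arithmetic implied by the two thresholds the article cites ($2,250 and $5,100 of total annual drug spending). The cost-sharing rates outside the gap are placeholder assumptions for illustration, not the actual 2006 Medicare Part D schedule.

# Sketch of the "doughnut hole" structure described above, using only the two
# thresholds given in the article. Patient cost shares outside the gap are
# assumed placeholders.

GAP_START = 2_250        # total drug spending above which the senior pays full cost
GAP_END = 5_100          # total drug spending above which coverage resumes
SHARE_BELOW_GAP = 0.25   # assumed patient share below the gap
SHARE_ABOVE_GAP = 0.05   # assumed patient share after coverage resumes

def out_of_pocket(total_drug_spending):
    """Approximate patient cost for a given level of total annual drug spending."""
    below = min(total_drug_spending, GAP_START) * SHARE_BELOW_GAP
    in_gap = max(0, min(total_drug_spending, GAP_END) - GAP_START)  # paid in full
    above = max(0, total_drug_spending - GAP_END) * SHARE_ABOVE_GAP
    return below + in_gap + above

for spend in (2_000, 3_500, 6_000):
    print(f"total spend ${spend:,} -> patient pays about ${out_of_pocket(spend):,.0f}")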

To help seniors caught in the doughnut hole, AstraZeneca has created a program called AZ Medicine & Me.

It makes brand-name medications such as the widely used breast cancer drug Arimidex available to qualifying seniors at $25 or less for a 30-day supply. Arimidex could cost ten times that at regular prices.

The plan was announced two days after the election.

The timing was purely coincidental, said AstraZeneca spokeswoman Abigail Baron. The company's plan had been in the works for months and was announced as soon as it received government approval, she explained.

Said Johnson: "I think you are going to see more and more of our companies trying to assist seniors who hit the doughnut hole."

 

By Lindsay Beck Mon Nov 27, 8:17 AM ET

BEIJING (Reuters) - Hai dreams of learning Arabic and hopes one day to study in the Middle East.

For now, the 25-year-old is stuck in Beijing.

It's thousands of miles from Mecca, but more and more Chinese Muslims are fulfilling their dreams of learning about their faith as the government relaxes controls over Islam to win hearts in the Middle East, where it seeks to strengthen trade and oil ties.

Hai goes to the mosque every day to pray as he did growing up in the northwestern Chinese region of Ningxia, home to a majority of the country's estimated 20 million Muslims -- as many as live in Syria or Yemen.

"Not everyone was like that but my family was, and now more and more people are. Our religion is developing very quickly," said Hai, who declined to give his full name.

Pottering around his "Muslim products" shop, which sells everything from Islamic skullcaps and headscarves to dried figs and beaded handbags, Hai says his customers include a growing number of visitors from the Middle East.

The increase in Muslim visitors to China -- tourists, businessmen and expatriates -- is causing a rise in religious observance among China's Hui, a Muslim group that traces its heritage to the Middle East and Central Asia.

"There is a strong influence of radical theology imported from the Middle East," said Nicholas Bequelin, a researcher for Human Rights Watch and a specialist on China's Muslims.

"It's been very noticeable speaking to people in mosques across China. Whereas before they were completely cut off from the mainstream Muslim community, they're not anymore," he said.

CHARM OFFENSIVE

That could pose a challenge for China's Communist -- and officially atheist -- rulers, who seek to control organized religion to prevent potential challenges to their rule.

China cut ties with the Vatican in 1951, leaving its Catholic community split between an underground church loyal to the Holy See and the official, state-backed church.

The government has also been cracking down on Christian "house churches," congregations of people who worship in private homes, away from the glare of officialdom.

But Christianity does not come with an economic or energy element, key for China's rapidly expanding economy.

"The relationship with the Muslim Hui has always been a stake of international diplomacy, part of a charm offensive by China," said Bequelin. "This to a certain extent explains why the authorities have been more lenient."

The leniency extends only as far as the Hui.

China's other Muslim group, the Uighurs, live mainly in the northwestern region of Xinjiang and have close linguistic and cultural links with Central Asia.

With aspirations for greater autonomy, Uighurs are seen as an ethnic problem and subject to much tighter controls.

In dusty Tongxin, a Hui Muslim-majority county in Ningxia, the area's mosques, devastated in the frenzy of the Cultural Revolution, have been rebuilt with surprising splendor for one of the country's poorest regions.

One religious leader in the town of 300,000, who runs an Islamic girls' school, was vague about funding for the rebuilding.

"We rely on introductions from friends coming here and giving a bit of money or help," said the woman, who asked not to be named.

MINARETS AND DOMES

The minarets and domed roofs of Tongxin's new mosques could be mistaken for any in the Middle East and are a stark contrast to the main mosque in Beijing, whose courtyard architecture and low, sloping roofs reflect a more traditional Chinese style.

The religious leader says she has 68 students in her school. The youngest is 15, despite an official ban on religious education for anyone under 18.

"When I graduated from high school, in 1986, the situation was very difficult," said the woman. "Now the religious policies are more relaxed. We can go ahead without fear."

Most of her students wear headscarves, although it is rare to see women wearing the Islamic headdress in the area.

A record 9,600 Chinese Muslims are expected to leave for a pilgrimage to Mecca this year, escorted by China's Patriotic Islamic Association. Many more will likely go independently, through a third country.

China's Religious Affairs Bureau did not respond to faxed questions on numbers of Chinese making the pilgrimage, funding links and student exchanges.

The government is betting on its unspoken compromise with China's Hui that the community will steer clear of political engagement in exchange for greater religious freedom.

"They're banking on the fact that China's Muslims are aware of the limits and the rules and they know how to play the game," said Dru Gladney, an expert at Pomona College in California.

For now, it's a compromise that seems to be working.

The religious leader in Ningxia says she's happy to be able to worship in peace and teach her community freely, after enduring a lifetime of much stricter controls.

"The national policies are opening up and as long as you don't go against the country's religious policies and regulations, you can freely progress," she said.

(Additional reporting by Emma Graham-Harrison in Ningxia)

 December 3, 2006

Op-Ed Columnist

Has He Started Talking to the Walls?

By FRANK RICH

IT turns out we’ve been reading the wrong Bob Woodward book to understand what’s going on with President Bush. The text we should be consulting instead is “The Final Days,” the Woodward-Bernstein account of Richard Nixon talking to the portraits on the White House walls while Watergate demolished his presidency. As Mr. Bush has ricocheted from Vietnam to Latvia to Jordan in recent weeks, we’ve witnessed the troubling behavior of a president who isn’t merely in a state of denial but is completely untethered from reality. It’s not that he can’t handle the truth about Iraq. He doesn’t know what the truth is.

The most startling example was his insistence that Al Qaeda is primarily responsible for the country’s spiraling violence. Only a week before Mr. Bush said this, the American military spokesman on the scene, Maj. Gen. William Caldwell, called Al Qaeda “extremely disorganized” in Iraq, adding that “I would question at this point how effective they are at all at the state level.” Military intelligence estimates that Al Qaeda makes up only 2 percent to 3 percent of the enemy forces in Iraq, according to Jim Miklaszewski of NBC News. The bottom line: America has a commander in chief who can’t even identify some 97 percent to 98 percent of the combatants in a war that has gone on longer than our involvement in World War II.

But that’s not the half of it. Mr. Bush relentlessly refers to Iraq’s “unity government” though it is not unified and can only nominally govern. (In Henry Kissinger’s accurate recent formulation, Iraq is not even a nation “in the historic sense.”) After that pseudo-government’s prime minister, Nuri al-Maliki, brushed him off in Amman, the president nonetheless declared him “the right guy for Iraq” the morning after. This came only a day after The Times’s revelation of a secret memo by Mr. Bush’s national security adviser, Stephen Hadley, judging Mr. Maliki either “ignorant of what is going on” in his own country or disingenuous or insufficiently capable of running a government. Not that it matters what Mr. Hadley writes when his boss is impervious to facts.

In truth the president is so out of it he wasn’t even meeting with the right guy. No one doubts that the most powerful political leader in Iraq is the anti-American, pro-Hezbollah cleric Moktada al-Sadr, without whom Mr. Maliki would be on the scrap heap next to his short-lived predecessors, Ayad Allawi and Ibrahim al-Jaafari. Mr. Sadr’s militia is far more powerful than the official Iraqi army that we’ve been helping to “stand up” at hideous cost all these years. If we’re not going to take him out, as John McCain proposed this month, we might as well deal with him directly rather than with Mr. Maliki, his puppet. But our president shows few signs of recognizing Mr. Sadr’s existence.

In his classic study, “The Great War and Modern Memory,” Paul Fussell wrote of how World War I shattered and remade literature, for only a new language of irony could convey the trauma and waste. Under the auspices of Mr. Bush, the Iraq war is having a comparable, if different, linguistic impact: the more he loses his hold on reality, the more language is severed from its meaning altogether.

When the president persists in talking about staying until “the mission is complete” even though there is no definable military mission, let alone one that can be completed, he is indulging in pure absurdity. The same goes for his talk of “victory,” another concept robbed of any definition when the prime minister we are trying to prop up is allied with Mr. Sadr, a man who wants Americans dead and has many scalps to prove it. The newest hollowed-out Bush word to mask the endgame in Iraq is “phase,” as if the increasing violence were as transitional as the growing pains of a surly teenager. “Phase” is meant to drown out all the unsettling debate about two words the president doesn’t want to hear, “civil war.”

When news organizations, politicians and bloggers had their own civil war about the proper usage of that designation last week, it was highly instructive — but about America, not Iraq. The intensity of the squabble showed the corrosive effect the president’s subversion of language has had on our larger culture. Iraq arguably passed beyond civil war months ago into what might more accurately be termed ethnic cleansing or chaos. That we were fighting over “civil war” at this late date was a reminder that wittingly or not, we have all taken to following Mr. Bush’s lead in retreating from English as we once knew it.

It’s been a familiar pattern for the news media, politicians and the public alike in the Bush era. It took us far too long to acknowledge that the “abuses” at Abu Ghraib and elsewhere might be more accurately called torture. And that the “manipulation” of prewar intelligence might be more accurately called lying. Next up is “pullback,” the Iraq Study Group’s reported euphemism to stave off the word “retreat” (if not retreat itself).

In the case of “civil war,” it fell to a morning television anchor, Matt Lauer, to officially bless the term before the “Today” show moved on to such regular fare as an update on the Olsen twins. That juxtaposition of Iraq and post-pubescent eroticism was only too accurate a gauge of how much the word “war” itself has been drained of its meaning in America after years of waging a war that required no shared sacrifice. Whatever you want to label what’s happening in Iraq, it has never impeded our freedom to dote on the Olsen twins.

I have not been one to buy into the arguments that Mr. Bush is stupid or is the sum of his “Bushisms” or is, as feverish Internet speculation periodically has it, secretly drinking again. I still don’t. But I have believed he is a cynic — that he could always distinguish between truth and fiction even as he and Karl Rove sold us their fictions. That’s why, when the president said that “absolutely, we’re winning” in Iraq before the midterms, I just figured it was more of the same: another expedient lie to further his partisan political ends.

But that election has come and gone, and Mr. Bush is more isolated from the real world than ever. That’s scary. Neither he nor his party has anything to gain politically by pretending that Iraq is not in crisis. Yet Mr. Bush clings to his delusions with a near-rage — watch him seethe in his press conference with Mr. Maliki — that can’t be explained away by sheer stubbornness or misguided principles or a pat psychological theory. Whatever the reason, he is slipping into the same zone as Woodrow Wilson did when refusing to face the rejection of the League of Nations, as a sleepless L.B.J. did when micromanaging bombing missions in Vietnam, as Ronald Reagan did when checking out during Iran-Contra. You can understand why Jim Webb, the Virginia senator-elect with a son in Iraq, was tempted to slug the president at a White House reception for newly elected members of Congress. Mr. Bush asked “How’s your boy?” But when Mr. Webb replied, “I’d like to get them out of Iraq,” the president refused to so much as acknowledge the subject. Maybe a timely slug would have woken him up.

Or at least sounded an alarm. Some two years ago, I wrote that Iraq was Vietnam on speed, a quagmire for the MTV generation. Those jump cuts are accelerating now. The illusion that America can control events on the ground is just that: an illusion. As the list of theoretical silver bullets for Iraq grows longer (and more theoretical) by the day — special envoy, embedded military advisers, partition, outreach to Iran and Syria, Holbrooke, international conference, NATO — urgent decisions have to be made by a chief executive who is in touch with reality (or such is the minimal job description). Otherwise the events in Iraq will make the Decider’s decisions for him, as indeed they are doing already.

The joke, history may note, is that even as Mr. Bush deludes himself that he is bringing “democracy” to Iraq, he is flouting democracy at home. American voters could not have delivered a clearer mandate on the war than they did on Nov. 7, but apparently elections don’t register at the White House unless the voters dip their fingers in purple ink. Mr. Bush seems to think that the only decision he had to make was replacing Donald Rumsfeld and the mission of changing course would be accomplished.

Tell that to the Americans in Anbar Province. Back in August the chief of intelligence for the Marines filed a secret report — uncovered by Thomas Ricks of The Washington Post — concluding that American troops “are no longer capable of militarily defeating the insurgency in al-Anbar.” That finding was confirmed in an intelligence update last month. Yet American troops are still being tossed into that maw, and at least 90 have been killed there since Labor Day, including five marines, ages 19 to 24, around Thanksgiving.

Civil war? Sectarian violence? A phase? This much is certain: The dead in Iraq don’t give a damn what we call it.

December 3, 2006

The Way We Live Now

The New, Soft Paternalism

By JIM HOLT

When the government tells you that you can’t smoke marijuana or that you must wear a helmet when you ride your motorcycle even if you happen to like the feeling of the wind in your hair, it is being paternalistic. It is largely treating you the way a parent treats a child, restricting your liberty for what it deems to be your own good. Paternalistic laws aren’t very popular in this country. We hew to the principle that, children and the mentally ill apart, an individual is a better judge of what’s good for him than the state is and that people should be free to do what they wish as long as their actions don’t harm others. Contrary to what many people believe, you can even commit suicide legally (although if you don’t live in Oregon, you should think twice about seeking assistance).

But what if it could be shown that even highly competent, well-informed people fail to make choices in their best interest? And what if the government could somehow step in and nudge them in the right direction without interfering with their liberty, or at least not very much? Welcome to the new world of “soft paternalism.” The old “hard” paternalism says, We know what’s best for you, and we’ll force you to do it. By contrast, soft paternalism says, You know what’s best for you, and we’ll help you to do it.

Here’s an example. In some states with casino gambling, like Missouri and Michigan, compulsive gamblers have the option of putting their names on a blacklist, or “self-exclusion” list, that bars them from casinos. Once on the list, they are banned for life. If they violate the ban, they risk being arrested and having their winnings confiscated. In Missouri, more than 10,000 people have availed themselves of this program. In Michigan, the first person to sign up for it was, as it happens, also the first to be arrested for violating its terms when he couldn’t resist sneaking back to the blackjack tables; he was sentenced to a year’s probation, and the state kept his winnings of $1,223.

The voluntary gambling blacklist is an example of what’s called a self-binding scheme. It is a way of restructuring the external world so that when future temptations arise, you will have no choice but to do what you’ve judged to be best for you. The classic case is that of Ulysses, who ordered his men to tie him to the mast of his ship so that he could hear the song of the Sirens without being lured to his destruction. As a freely chosen hedge against weakness of the will, self-binding would seem to enlarge individual liberty, not reduce it. So what is there to object to in a program like Missouri’s or Michigan’s?

Plenty, say libertarian critics. To begin with, they don’t like soft paternalism when it involves the state’s coercive power; they are much happier with private self-binding schemes, like alcoholism clinics, Christmas savings clubs and Weight Watchers. They also worry that soft paternalism can be a slippery slope to the harder variety, as when campaigns to discourage smoking give way to “sin taxes” and outright bans. But some libertarians have deeper misgivings. What bothers them is the way soft paternalism relies for its justification on the notion that each of us contains multiple selves — and that one of those selves is worth more than the others.

You might naïvely imagine that you are one person, the same entity from day to day. To the 18th-century philosopher David Hume, however, the idea of a permanent “I” was a fiction. Our mind, Hume wrote, “is nothing but a bundle or collection of different perceptions, which succeed each other with an inconceivable rapidity, and are in a perpetual flux and movement.” According to this way of thinking, the self that inhabits your body today is only similar to, not identical with, the self that is going to inhabit your body tomorrow. And the self that will inhabit your body decades hence? A virtual stranger.

The idea of multiple selves may seem like a stoner’s fantasy, but economists who study human decision-making have found it surprisingly useful. Consider: Most people, if given a choice today between doing seven hours of irksome work on May 1, 2007, versus eight hours on May 15, 2007, opt for the former. When May 1 arrives, however, they will find that their preference has flipped: they now wish to put off the work for a couple of weeks, even at the cost of having to do the extra hour’s worth. Why this inconsistency, if the self calling the shots is one and the same?
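
One common way economists formalize this flip, which the column does not spell out, is quasi-hyperbolic ("beta-delta") discounting: immediate costs are felt in full, while anything delayed is shrunk by an extra one-time factor. The short sketch below uses illustrative parameters only, chosen simply to reproduce the reversal described above.

# Minimal illustration of the preference flip, using an assumed beta-delta
# discounting model with illustrative parameters.

BETA, DELTA = 0.7, 0.999   # present bias and per-day discount factor (assumed)

def discounted_cost(hours, delay_days):
    """Perceived disutility of `hours` of irksome work done `delay_days` from now."""
    if delay_days == 0:
        return hours
    return BETA * (DELTA ** delay_days) * hours

# Viewed months in advance, both dates are far off: 7 hours on May 1 looks
# like the smaller burden than 8 hours on May 15.
early, late = discounted_cost(7, 150), discounted_cost(8, 164)
print("From a distance, prefer May 1:", early < late)   # True

# Viewed on May 1 itself, the 7 hours are immediate and loom larger than the
# discounted 8 hours two weeks away, so the preference flips.
early, late = discounted_cost(7, 0), discounted_cost(8, 14)
print("On May 1, still prefer May 1:", early < late)    # False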

Further evidence for the fragmented self comes from neuroscience. Brain scans show that the emotional part of the brain, the limbic system, is especially active when the prospect of immediate gratification presents itself. But choice among longer-term options triggers more activity in the “reasoning” part of the brain, located (suitably enough) higher up in the cortex. Now suppose you’re tempted by a diet-violating Twinkie. Which part of your brain — the shortsighted emotional part or the farsighted reasoning part — gets to be the decider? There may be no built-in hierarchy here, just two autonomous brain modules in competition. That is why you might find yourself eating the Twinkie even while knowing it’s bad for you. (A similar disconnect between two parts of your brain occurs when a visual illusion doesn’t go away even after you learn it’s an illusion.)

The short-run self cares only about the present. It is perfectly happy to indulge today and offload the costs onto future selves. For example, recent studies show that teenage smokers do not underestimate the risk of getting lung cancer as an adult (if anything, they tend to overestimate it); they simply don’t mind making the future self suffer for the pleasure of the moment. The long-run self may deplore this ruinous behavior, but its prudent resolutions are continually ignored. Yet it can enforce its will indirectly by shaping the environment to constrain some short-run selves from exploiting others — by, say, putting a time lock on the refrigerator.

But why, some skeptics ask, should the government side with your prudent long-run self against your hedonistic short-run selves? What's so great about the long-run self, anyway? As the economist Glen Whitman has observed in a shrewd critique of soft paternalism, the harms that selves impose on one another are reciprocal: "The long-run self can harm the short-run self by adopting self-control devices — such as flushing cigarettes down the toilet, refusing to allow ice cream in the house, checking into a clinic and so on." It is not good to be profligate, lazy and obese, but neither is it good to be a miser, a workaholic or an anorexic.

If the goal is to promote freedom, though, there is an interesting argument favoring the long-run self. A distinctive quality of humans, as the third earl of Shaftesbury observed three centuries ago, is that we do not simply have desires; we also have feelings about our desires. Take the unhappy heroin addict: he gives himself an injection because he desires the drug, but he also has a desire to be rid of this desire. The philosopher Harry Frankfurt has given such “second order” desires a central role in his analysis of free will: we act freely, he submits, when we act on a desire that we actually desire to have, one that we endorse as our own. Beings that do not reflect on the desirability of their desires — like animals and infants and, perhaps, our short-run selves — are what Frankfurt calls “wantons.”

People have fashioned a wide range of techniques for keeping their inner wantons under control — like buying a pint of ice cream instead of the more economical quart because they know they would end up consuming the latter in one sitting. So why can’t soft paternalism be left to the private sector, as some libertarians prefer? The problem is that private self-binding schemes are easily subverted when someone can make a buck off your weakness of will. One Michigan man who signed up for a casino’s private self-blacklisting program found the owners all too accommodating when he had a change of heart. “Within a half an hour, I was back in,” he said.

Editorializing against soft paternalism earlier this year, The Economist warned that “life would be duller if every reckless spirit could outsource self-discipline to the state.” There are certainly more exalted ways to achieve mastery over unwelcome impulses. Thinkers of an existentialist kidney, like Jean-Paul Sartre, used to insist that each of us is free to redefine his character through an act of radical choice. For the religiously inclined, an access of divine grace might be what is needed to stiffen the will.

But what if you are one of those people who rely on more mundane stratagems, like self-binding? The general problem you face (as put by the political theorist Jon Elster) is this: For a given uphill goal and a given strength of will, does there exist a path, however circuitous, that will get you to the top of the hill? By adding a new path here and there, state soft paternalism makes it more likely that the answer will be yes.

Jim Holt, a regular contributor to the magazine, is working on a book about the puzzle of existence.

December 3, 2006

Idea Lab

Do Immigrants Make Us Safer?

By EYAL PRESS

Although the midterm election failed to render a clear verdict on illegal immigration, the new Democratic Congress may enact sweeping legislation tightening border controls and allowing more guest workers next year. If that happens, the rancorous debate about how undocumented workers affect jobs and wages in the United States will be rejoined. So, too, will an equally rancorous, if less prominent, debate: Do immigrants make the U.S. more crime-ridden and dangerous?

In an age of Latino gangs and Chinese criminal networks, the notion that communities with growing immigrant populations tend to be unsafe is fairly well established, at least in the popular imagination. In a national survey conducted in 2000, 73 percent of Americans said they believe that immigrants are either “somewhat” or “very” likely to increase crime, higher than the 60 percent who fear they are “likely to cause Americans to lose jobs.” Cities like Avon Park, Fla., have considered ordinances recently to dissuade businesses from hiring illegal immigrants, whose presence “destroys our neighborhoods.” Even President Bush, whose perceived generosity to undocumented workers has earned him vilification on the right, commented in a speech this May that illegal immigration “strains state and local budgets and brings crime to our communities.”

So goes the conventional wisdom. But is it true? In fact, according to evidence cropping up in various places, the opposite may be the case. Ramiro Martinez Jr., a professor of criminal justice at Florida International University, has sifted through homicide records in border cities like San Diego and El Paso, both heavily populated by Mexican immigrants, both places where violent crime has fallen significantly in recent years. “Almost without exception,” he told me, “I’ve discovered that the homicide rate for Hispanics was lower than for other groups, even though their poverty rate was very high, if not the highest, in these metropolitan areas.” He found the same thing in the Haitian neighborhoods of Miami. In his book “New York Murder Mystery,” the criminologist Andrew Karmen examined the trend in New York City and likewise found that the “disproportionately youthful, male and poor immigrants” who arrived during the 1980s and 1990s “were surprisingly law-abiding” and that their settlement into once-decaying neighborhoods helped “put a brake on spiraling crime rates.”

The most prominent advocate of the “more immigrants, less crime” theory is Robert J. Sampson, chairman of the sociology department at Harvard. A year ago, Sampson was an author of an article in The American Journal of Public Health that reported the findings of a detailed study of crime in Chicago. Based on information gathered on the perpetrators of more than 3,000 violent acts committed between 1995 and 2002, supplemented by police records and community surveys, it found that the rate of violence among Mexican-Americans was significantly lower than among both non-Hispanic whites and blacks.

In June, Sampson and I drove out to a neighborhood in Little Village, Chicago’s largest Hispanic community. The area we visited is decidedly poor: in terms of per capita income, 84 percent of Chicago neighborhoods are better off and 99 percent have a greater proportion of residents with a high-school education. As we made our way down a side street, Sampson noted that many of the residents make their living as domestic workers and in other low-wage occupations, often paid off the books because they are undocumented. In places of such concentrated disadvantage, a certain level of violence and social disorder is assumed to be inevitable. As we strolled around, Sampson paused on occasion to make a mental note of potential trouble signs: an alley strewn with garbage nobody had bothered to pick up; a sign in Spanish in several windows, complaining about the lack of a park in the vicinity where children can play. Yet for all of this, the neighborhood was strikingly quiet. And, according to the data Sampson has collected, it is surprisingly safe. The burglary rate in the neighborhood is in the bottom fifth of the city. The overall crime rate is nearly in the bottom third.

The safety of neighborhoods like these has received little attention in the debate about immigration — or, for that matter, the debate about crime. Ever since cities like New York began cracking down on panhandling and loitering in the mid-1990s, a move that coincided with a precipitous drop in violence, policy makers have embraced the so-called broken-windows theory, which emphasizes the deterrent effects of punishing such minor offenses. Lately, though, scholars have begun to question whether “broken windows” deserves all the credit for diminishing crime after all. Some researchers have linked progress to the cessation of the crack epidemic. Others point to an improved economy, community-policing initiatives or even the legalization of abortion, which reduced the number of poor, unwanted children growing up in high-risk neighborhoods.

Sampson’s theory may be the most provocative yet. Could America’s cities be safer today not because fewer unwanted children live in them but because a lot more immigrants do? Could illegal immigration be making the nation a more law-abiding place?

There are, to be sure, scholars who take issue with this rosy picture. Wesley Skogan, a political scientist at Northwestern University, has spent the past 13 years tracking violence and social disorder in the white, black and Latino communities in Chicago. In a new book, “Police and Community in Chicago: A Tale of Three Cities,” just out from Oxford University Press, Skogan concludes that the big success story took place not in immigrant areas but in African-American ones, where participation in community-policing programs was highest and violence fell the most. “About two-thirds of the crime decline in Chicago since 1991 took place in black neighborhoods,” Skogan says. In Hispanic communities, by contrast, Skogan found that the fear of crime, as measured in surveys of residents, and real social disorder — gang activity, loitering — actually became worse as the foreign-born population increased. Skogan acknowledges that Hispanic immigrants don’t show up much in arrest records, but he says he believes part of the explanation for this rests in the fact that those who are undocumented go to enormous lengths to “stay off the radar.” Many also come from a country, Mexico, where distrust of law enforcement is endemic, which is why he suspects they underreport crime and participate less in community-policing programs, as his study found.

Sampson doesn’t deny that crime may be underreported in immigrant neighborhoods. Nonetheless, he is quick to note that as the ranks of foreigners in the United States boomed during the 1990s — increasing by more than 50 percent to 31 million — America’s cities became markedly less dangerous. That these two trends might be related has been overlooked, he says, in part because immigrants, like African-Americans, often trigger negative associations regardless of how they actually behave. Not long ago, Sampson and Stephen W. Raudenbush, a sociologist who teaches at the University of Chicago, conducted an experiment to test this idea. The experiment drew on interviews with more than 3,500 Chicago residents, each of whom was asked how serious problems like loitering and public drinking were where they lived. The responses were compared with the actual level of chaos in the neighborhood, culled from police data and by having researchers drive along hundreds of blocks to document every sign of decay and disorder they could spot.

The social and ethnic composition of a neighborhood turned out to have a profound bearing on how residents of Chicago perceived it, irrespective of the actual conditions on the streets. “In particular,” Sampson and Raudenbush found, “the proportion of blacks and the proportion of Latinos in a neighborhood were related positively and significantly to perceived disorder.” Once you adjusted for the ethnic, racial and class composition of a community, “much of the variation in levels of disorder that appeared to be explained by what residents saw was spurious.”

In other words, the fact that people think neighborhoods with large concentrations of brown-skinned immigrants are unsafe makes sense in light of popular stereotypes and subliminal associations. But that doesn’t mean there is any rational basis for their fears. Such a message hasn’t sat well with everyone. As the debate about immigration has grown more heated and polarized, Sampson has found himself barraged with hate mail. “Vicious stuff,” he told me, “you know, thinly veiled threats, people saying, ‘You should just come and look at the Mexican gangs here.’ ” But Sampson has also won some far-flung admirers. In Mexico, one of the nation’s leading dailies, La Reforma, published a story hailing his findings, under the triumphal heading, “Son barrios de paisanos menos violentos que los blancos ” (“Neighborhoods of our countrymen are less violent than white ones”).

If immigrants really are making America safer, why is this so? “That,” Sampson says, “is the $64,000 question.” In discussing the persistence of poverty and the causes of crime, sociologists on the left often emphasize the importance of “structural” factors like unemployment and racism, while scholars on the right tend to focus on individual behavior like having an illegitimate child and using drugs. Sampson prefers to focus on the nature of the social interactions taking place in particular neighborhoods. At one point in Little Village, we strolled past a house where a couple of young girls were playing outside. It didn’t seem that anybody was supervising them. Next door, however, an elderly woman was standing just inside the window. The window was open, and as Sampson and I passed by, her eyes did not leave us. “Did you notice that?” asked Sampson as we proceeded down the block. She was making sure the two strangers who had appeared weren’t dangerous. It was an example of the kind of informal social control that Sampson says can prevent even the poorest neighborhoods from spiraling into chaos and that he suspects may distinguish many tightknit immigrant communities.

But Sampson also notes the importance of another factor, one often stressed by conservatives: Mexicans in Chicago, his study found, are more likely to be married than either blacks or whites. “The family dynamic is very noticeable here,” Sampson remarked as we passed a girl with long braided hair clutching her mother’s hand. Her father followed a few steps behind. Sampson does not believe family structure explains everything: the data showed that in immigrant neighborhoods, even individuals who are not in married households are 15 percent less likely to engage in crime. Yet neither did he discount its significance.

To the extent a strong family structure does play a role, it has left Sampson understandably mystified why the most strident opponents of immigration so often come from the right. Shouldn’t conservatives concerned about the breakdown of traditional values be celebrating these family-oriented newcomers? This is indeed what David Brooks argued not long ago in a column in The New York Times, gently chiding his fellow conservatives for reflexively assuming foreigners have had a corrosive impact on the nation’s moral fiber. “As immigration has surged, violent crime has fallen 57 percent,” Brooks noted in the column, which was titled “Immigrants to Be Proud Of.”

Sampson wrote Brooks a note complimenting him on the piece. But he is under no illusions that his views on crime and immigration will endear him to Republicans clamoring for America’s borders to be sealed. On the other hand, it might not make his colleagues on the left any happier. The flip side of the impulse to demonize immigrants is, after all, the tendency to romanticize them as hard-working Horatio Alger types who valiantly lift themselves out of poverty — with the implication that if they can avoid falling victim to drugs, gangs and other inner-city scourges, those who succumb to these forces have only themselves to blame. In calling attention to the virtues of immigrant communities, there is a risk that Sampson’s work will be taken by some as a commentary on the high crime rate in some poor African-American communities.

Of course, comparing the experiences of Mexican immigrants and African-Americans may seem grossly unfair, not least because studies have shown that many employers are willing to hire foreigners (on the assumption they work hard) but not blacks (on the assumption they don’t). Yet the fact that it is unfair hardly means such comparisons won’t be made — even though immigrants commit less crime not only than African-Americans in inner-city neighborhoods but less than American-born white people as well.

Before anyone rushes to conclude that crime would vanish from America’s cities if only more foreigners moved here, it is worth considering something else Sampson’s study uncovered. It is a finding as troubling as his basic thesis about immigrants is hopeful. Second-generation immigrants in Chicago were significantly more likely to commit crimes than their parents, it turns out, and those of the third generation more likely still.

Opponents of immigration frequently charge that Mexican immigrants threaten America’s national identity because of their failure to assimilate. A more reasonable concern might be the opposite of this: not that foreigners in low-income neighborhoods refuse to adopt the norms of the native culture but that their children and grandchildren do.

The sociologists Alejandro Portes and Rubén G. Rumbaut conducted a multiyear longitudinal study of immigrant children in Miami and San Diego. The offspring of foreigners who grow up in impoverished ghettos, they have argued, particularly Mexican-Americans exposed to racial as well as economic discrimination, often lose the drive and optimism their parents had and come to share the widespread attitude among their inner-city peers that survival depends on brandishing an oppositional stance toward school authorities and, more broadly, a culture that looks down on them. “The learning of new cultural patterns and entry into American social circles does not lead in these cases to upward mobility but to exactly the opposite,” Portes and Rumbaut contend, a process of “downward assimilation” that has created a new “rainbow underclass.” Astoundingly, in a recent paper, Rumbaut and several doctoral students found that the incarceration rate among second-generation Mexicans was eight times higher than for the first generation; among Vietnamese, it was more than 10 times higher. Where the first-generation immigrants in their data were less likely to wind up in prison than native-born whites, the second (with the exception of Filipinos and Chinese) were more likely.

Such findings suggest the class and race divisions that cleave America’s social landscape may prove decisive after all. In Sweden, a country with markedly less inequality and more generous social welfare policies — and far less violent crime — studies have shown the rate of offending tends to be lower for the second generation of immigrants than for the first. Of course, America has historically done an admirable job of assimilating newcomers, and the theory of “downward assimilation” has not gone unchallenged. Recently, a team of researchers completed a study in New York of more than 2,200 second-generation immigrants and 1,200 native-born Americans that allowed them to compare the rate of offending among various groups, West Indians versus African-Americans, for instance, or Russians versus American-born whites. According to John Mollenkopf, a political scientist at the CUNY Graduate Center, the arrest rates among the children of immigrants were the same or lower in every case. “The second-generation immigrants are doing better, on the whole, than the native-born,” he said.

Clearly, the debate over assimilation will continue, as Sampson acknowledges. When I asked him why he thought the positive trends he and his colleagues had discovered in Chicago seemed to become diluted by the second and third generations, he paused.

“That’s another $64,000 question,” he said, chuckling softly. Part of the explanation, he went on to speculate, may rest in the exposure subsequent generations have to the things that often lure young people in America’s cities to engage in illicit activities: drugs, cash, cars, contraband. Part of it, as well, might be the adoption of streetwise attitudes that lead people to react quickly to insults in the United States. One thing it is difficult for Americans to realize, he said, is how unusually violent their country is, particularly in light of its inordinate wealth. Recently, scholars have become increasingly interested in the historical origins of American violence. Richard Nisbett of the University of Michigan and others have traced our “culture of violence” back to the valorization of retribution and dueling among Scotch-Irish immigrants in the American South, suggesting that antique folkways have become encoded into the nation’s DNA.

It is a dark view, perhaps, but Sampson is hopeful that the good news about crime in recent years can continue, albeit under certain conditions, among them less alarmism about the supposedly dangerous foreigners in our midst. Sampson shook his head when describing some of the correspondence he has received from people absolutely certain that immigrants are sowing mayhem in our streets. In the last few years, he noted, such people have had somewhat less cause for worry, since the numbers show the flow of newcomers has subsided a bit. Meanwhile, the crime rate in some cities has begun to creep back up. Sampson, for one, does not think this is a mere coincidence. Those clamoring for America to close its borders in order to prevent violence-prone strangers from flooding our shores may well get their way, he acknowledged, but they ought to be careful what they wish for.

Eyal Press, a contributing writer for The Nation, is the author of “Absolute Convictions: My Father, a City and the Conflict That Divided America.”

December 3, 2006

Prisoners of Sex

By NEGAR AZIMI

Mostafa Bakry has a knack for reinventing himself. He is an old-school Arab nationalist, newspaper editor and parliamentarian, and has managed to keep himself in the middle of the Egyptian political scene for almost two decades. He rails against decadence, against corruption — anything that can get the otherwise sleepy Egyptian public excited. This past July, he took on the issue of homosexuality, introducing a motion in Parliament calling for censorship of several scenes in a popular new film, “The Yacoubian Building,” and denouncing the racier parts of the movie as “spreading obscenity and debauchery.” One of the central characters in the story — a mosaic of downtown Cairo life complete with political intrigue, love triangles, the specter of extremism and more — is an affluent, dashing, Francophone newspaper editor who happens to be gay. He has an affair with a simple soldier from the countryside, and thus begins a tale of lust that ends in murder.

“It is a travesty,” Bakry told me not long ago when we met in the downtown Cairo office of his newspaper, Al Osboa (“The Week”). Shelves around his desk were stuffed with plaques, honorary degrees and dozens of gilt replicas of Jerusalem’s Dome of the Rock. He fingered fancy prayer beads as he expounded in the way one would to an adoring crowd. “The American agenda is promoting the rights of homosexuals,” he said in Arabic. “I am not against freedom of expression, but this abnormal phenomenon should not be presented as natural. Even if it has roots here, it is rejected by society. And by Islam.”

In the end, 112 parliamentarians from across the political spectrum signed onto Bakry’s motion. The gesture, however, had little effect. By the beginning of September, the film was still doing well at the box office, and no censorship was in sight. But it didn’t matter. The parliamentarian had made his point; he had raised the flag of morality, religion and public virtue.

The politics of homosexuality is changing fast in the Arab world. For many years, corners of the region have been known for their rich gay subcultures — even serving as secure havens for Westerners who faced prejudice in their own countries. In some visions, this was a part of the world in which men could act out their homosexual fantasies. These countries hardly had gay-liberation moments, much less movements. Rather, homosexuality tended to be an unremarkable aspect of daily life, articulated in different ways in each country, city and village in the region.

But sexuality in general and homosexuality in particular are increasingly becoming concerns of the modern Arab state. Politicians, the police, government officials and much of the press are making homosexuality an “issue”: a way to display nationalist bona fides in the face of an encroaching Western sensibility; to reject a creeping globalization that brings with it what is perceived as the worst of the international market culture; to flash religious credentials and placate growing Islamist power. In recent years, there have been arrests, crackdowns and episodes of torture. In Egypt, the most populous country in the Arab world, as in Morocco, Saudi Arabia, the United Arab Emirates — even in famously open and cosmopolitan Lebanon — the policing of homosexuality has become part of what sometimes seems like a general moral panic.

Egypt’s most famous crackdown got under way at a neon floating disco, the Queen Boat, docked on the wealthy Nile-side island of Zamalek, just steps from the famously gay-friendly Marriott Hotel. In the early-morning hours of May 11, 2001, baton-wielding police officers descended upon the boat, where men were dancing and drinking. Security officials rounded up more than 50 of them — doctors, teachers, mechanics. Those who were kept in custody became known among Egyptians as the Queen Boat 52. The detained men were beaten, bound, tortured; some were even subjected to exams to determine whether they had engaged in anal sex. In the weeks that followed, official, opposition and independent newspapers printed the names, addresses and places of work of the detained. Front pages carried the men’s photographs, not always with black bars across their eyes. The press accused the men of sexual excesses, dressing as women, devil worship, even dubious links to Israel. Bakry’s newspaper, Al Osboa, helped lead the charge.

The Queen Boat was just the beginning. Agents of the Department for Protection of Morality, a sort of vice squad within the Ministry of Interior’s national police force, began monitoring suspected gay gathering spots, recruiting informants, luring people into arrest via chat sites on the Internet, tapping phones, raiding homes. Today, arrests and roundups occur throughout the country, from the Nile Delta towns of Damanhour and Tanta to Port Said along the Suez Canal and into Cairo.

The city’s central Tahrir Square is a vast plaza with awkward pedestrian islands separated by traffic, lined with a Kentucky Fried Chicken, the Arab League headquarters and the Egyptian government’s hulking bureaucratic headquarters, the Mugamma. On summer evenings, it is full of people. Men whistle at passing women, couples linger, tourists are accosted by the oddly seductive call of “You look like an Egyptian” and hawkers promote their wares — not the least of which is sex. In early July of this year, 11 men, said to be conspicuously homosexual, were picked up.

Many of the police reports on arrests of homosexuals have cited “the protection of the society’s values” as a motivating factor, adding that the arrested threatened to harm “the country’s reputation on the international level.” The country’s image is of the utmost importance for the officials responsible for these campaigns. Still, homosexual acts are not against the law in Egypt; most men caught in these roundups are charged with fujur, or the “habitual practice of debauchery.” Some countries in the region, like Saudi Arabia or the United Arab Emirates, expressly criminalize homosexual acts. But in Egypt, the charges have increasingly involved a creative interpretation of a law introduced in 1951 to combat prostitution — drafted as a response to what was viewed as a remnant of Egypt’s colonial past. (The British introduced the licensing of brothels.)

The Queen Boat affair roughly coincided with a number of circuslike controversies in Cairo surrounding public morality: the outrage following the publication of the Syrian author Haider Haider’s novel “Banquet for Seaweed” (which incited riots at Al Azhar University in Cairo, as the book, about two Iraqi exiles in the 1970s, was interpreted as offensive to Islam); the trial of Saad Eddin Ibrahim, an Egyptian-American university professor and human rights activist accused of embezzlement, illegally accepting foreign funds and sullying Egypt’s image abroad; and the trial in 2002 of a prominent businessman who had taken 19 wives. Meanwhile the Muslim Brotherhood, which often positions itself in opposition to what it describes as a decadent, secular regime, won 17 seats in Parliament in 2000.

Public regulation of morality is an area in which the secular regime — often through its mouthpiece religious institution, Al Azhar — is in harmony with the Islamists. Al Azhar, Sunni Islam’s highest authority, was brought under direct state control by President Gamal Abdel Nasser in 1961. Through Al Azhar, the secular regime throws the occasional bone to the religious opposition — most often on issues of women and the family. Sometimes, avowedly secular officials and politicians even try to outdo the Islamists in this tug of war over who can win the public’s favor as the guardian of morality.

Tanta is a drab industrial town on the Nile, halfway between Cairo and the Mediterranean city of Alexandria. With a population of about 350,000, Tanta has a university and a plethora of cotton-gin and oil factories. It is probably best known for its moulid, a gathering celebrating Al-Sayyed Ahmed Al-Badawi, a 13th-century holy man of Moroccan origin credited with being the founder of the Badawiyyah Sufi order. Al Badawi died in Tanta in 1276, and each year in October, just at the end of the cotton harvest, some two million Egyptians descend upon Tanta and Al Badawi’s shrine for a week of recitations, performances, dancing and devotion.

The rest of the year Tanta is remarkably quiet. One afternoon in August, I met a young man named Hassan at a baroque, upscale hotel steps away from the shrine. Though it is difficult to speak of a gay community in Tanta (not all men who sleep with men in Egypt use the term “gay,” much less identify themselves as such), Hassan is a ringleader of sorts, a thread between generations. A youthful 37, he comes from a working-class family — his father runs an auto-parts shop — and he told me, mischievously, that he got out of military service because he is the only son among girls. For Hassan and many gay men in Tanta, the last few years have been especially hard. “First, there was Shibl’s death, then the affair of Ahmed, then Adel’s death and the arrests,” he explained.

Shibl was a friend of Hassan’s, caught with another man in the baths of the shrine — a gathering ground for many gay men at the time. In 2002 he was beaten so badly in detention that he died of cardiac arrest. Ahmed, another friend, was arrested from his home later that year, accused of having sex with two other men in his flat and “forming a group of Satan worshipers.” In prison, he was forced to strip down to his underwear, then was humiliated and beaten to the point of hemorrhaging. After his release, he lost his job as a schoolteacher. One local paper wrote, “A male teacher puts aside all principles and follows his perverted instincts, putting on women’s clothes and makeup on his face to seduce men who seek forbidden pleasures.”

Adel, a third friend of Hassan’s, was killed by an occasional lover. The ensuing investigation, not far removed from a witch hunt, resulted in many suspected homosexuals in Tanta being arrested, including Hassan. He and others arrested told me that they were held in a police interrogation room called “the refrigerator,” marked by a carpet brought in by the police that was caked in Adel’s blood. Detainees were tortured nightly for more than two weeks, from 2 a.m. to 6 a.m., according to the same sources. Hassan estimates that at least 100 men were detained and tortured. Some men were forced to stand on their tiptoes for those hours; others got electric shocks to the penis and tongue; still others were beaten on the soles of their feet with a rod called a felaqa, to the point of losing consciousness.

Most men were held until they broke, agreeing to work as informants, walking the street to pick up other homosexuals and reporting in each night. “They told us Adel deserved to die,” Hassan told me. “They said they wished all gays would die.” This went on for at least a month, Hassan and others say, in a pattern of detention, torture, informing, more torture.

On my second visit to Tanta, in August, I sat down for a lunch of kapsa, a sweet Saudi rice specialty, with Hassan and Mo, a slight student of English literature at Tanta University. The discussion turned to Islam and homosexuality. Both of them considered themselves practicing Muslims. Mo has combed the Internet for signs as to whether homosexuality is at odds with Islam. He said he had browsed the popular Egyptian lay preacher Amr Khaled’s Web site and found nothing. But he did see that Sheik Yussuf Al-Qaradawi had called homosexuals “perverts.” Al-Qaradawi, an Egyptian cleric generally considered a liberal, is best known for his television program “Shariah and Life” on the satellite channel Al Jazeera, and for his Web site, Islamonline.

“There is nothing clear about homosexuality in the Koran,” Hassan said. “It reads that the man who does it should be hurt. What does it mean ‘to be hurt’? In the Arabian peninsula they used a stick the size of this pencil (he raises my pencil) to punish men. It’s not like thievery or adultery. And anyway the Prophet was promised boys in heaven. Not girls.”

“I read that one should have their head cut off or be thrown from a mountain,” Mo continued.

Hassan disagreed: “There is no explicit punishment for gays in the Koran.”

Mo countered, “The problem is not the punishment, it is the scandal.”

Hassan, looking triumphant, told us that Pope Shenouda III, the head of Egypt’s Coptic Orthodox Church, had also spoken out against homosexuality. (Most famously, in 1990, he asked, “What rights are there for homosexuals?”) “It’s more complicated than you think,” Hassan said to Mo.

Countless interpretations of the story of the prophet Lot — the source of much of the commentary on homosexuality in Islam, as well as in Judaism and Christianity — have been offered. Ambiguities abound, and while there is no consensus on where Islam stands, popular and legalistic reinterpretations take liberties in selecting the bits that suit particular worldviews — whether they are liberal or intolerant. In October of last year, the Iraqi Shiite cleric Grand Ayatollah Ali al-Sistani issued a fatwa against homosexuals on the Arabic-language version of his Web site. It was inexplicably removed last May (some say international outrage swayed the image-conscious cleric). And while Al-Qaradawi did call homosexuals sexual perverts, he also noted “there is disagreement” over punishment.

Perched on a hill at the end of a windy road in Helwan, an industrial town south of Cairo and once the summer romping ground for the city’s well-to-do, is the Behman Hospital. With its pruned bushes and tennis courts, Behman looks more like a country club than a psychiatric institute. Dr. Nasser Loza is the medical director there; he is also an adviser to the Ministry of Health and runs a clinic in the upscale neighborhood of Mohandiseen. I had heard through friends that Loza counsels homosexual couples, so I went looking for him.

“They come in with quite banal relationship problems,” Loza told me when we met one afternoon at the hospital. “They manage to have very normal, quiet lives despite society’s negative views about being gay.” He added that on average he sees about one new couple every two or three months. “I suppose most are high-level professionals, some are of mixed cultural backgrounds.” Loza’s patients are the people you hear less of in the din of discussion surrounding homosexuality in this part of the world. Take M., for example, a successful businessman who was among the 52 arrested on the Queen Boat. He has since moved to the States, and recently wrote me in an e-mail message: “Money gave me security. I met my partner at a dinner party. I could travel. And I didn’t have my family on my back because I had moved out. I had a normal life until this happened.”

Most often, Loza sees families. “Typically, a family comes in with their son or daughter who has just announced that they are homosexual,” Loza explained. “They want me to help. The first reaction on the part of the family is denial, and then incredible blame.” In 1990, the World Health Organization removed homosexuality from its list of mental disorders, but Loza told me that “whether it is treated as a disease or not really depends on the doctor.” While a combination of counseling and antidepressants seems the norm, you still sometimes hear of the application of electroshock therapy.

L., a lesbian originally from Alexandria, is seeing a Cairo psychiatrist. Women in Egypt have not been subject to the same kind of attacks as men, perhaps because of their relative invisibility — an invisibility that can itself be oppressive. It can be virtually impossible to meet other gay women. For L., the brunt of the problem is her family. “I’ve been to three psychiatrists, each time taken in by my parents,” she told me. “The first two prescribed antidepressants, they told me it was a phase, that I should ‘cheer up.’ The third prescribed electroshock therapy. I never went back.”

In Cairo, L. is studying communications. She has nothing to do with her family and, through the Internet, has found a supportive partner. The weight of the stigma remains. “When a Muslim dies, there is a required 30 minutes of prayer,” she wrote to me in a recent e-mail message. “When a gay person dies, they bury him and flee.”

There is a searing scene in the Moroccan writer Mohamed Choukri’s 1973 novel “For Bread Alone” in which a desperate young man, having recently moved from the country to the city in colonial Morocco, sells himself to an elderly Spaniard. The scene is explicit (they have oral sex in a car), and the novel, which has been banned or caused controversy in many Arab countries, serves as a stunning condemnation of the power disparities engendered by colonialism. Symbolism like Choukri’s is common in Arabic literature and cinema, providing for what the British writer Brian Whitaker has referred to as a “reverse Orientalism,” in which sex, and specifically homosexual sex, is presented as a foreign incursion, a tool of colonial domination.

Sometimes a stigma hangs over efforts to protect homosexuals from repression or attack. Negad Al Boraei, an Egyptian attorney and human rights activist, has irritated many in the local human rights community by a number of his stances, including his willingness to accept American financing for his work. (He readily dismisses his critics as “communists” and “revolutionaries.” He was one of the first recipients in Egypt of financing from the State Department’s Middle East Partnership Initiative.) I went to Al Boraei to talk about how sexual rights fit into the broader human rights agenda.

“I was telling a friend of mine who works for Amnesty International, we have a lot of problems here — torture, violations against street children, we are full of problems,” he told me. As he spoke he gesticulated wildly with his ring-covered hands. “To come in and talk about gays and lesbians, it is nice, but it’s not the major issue. It’s like I am starving and you ask me what kind of cola I want. Well, I want to eat first. Then we can talk about cola! It’s a luxury to talk about gay rights in Egypt.”

When the raid on the Queen Boat occurred, much of the human rights community declined to take the case on, Al Boraei included. (Some activists even attacked those who met with the defendants.) Hossam Bahgat, a young Alexandrian working at the Egyptian Organization for Human Rights, told me he was quietly dismissed after he wrote an article calling upon the human rights community to overcome its fears about working on the case. In the West, however, the Queen Boat became something of a cause célèbre. Amnesty International supported protests in front of the Egyptian Embassy in London. A Web site called GayEgypt.com called on Egypt’s homosexuals to wear red on the two-year anniversary of the Queen Boat raid (an invitation to be arrested, it seems), while 35 members of the U.S. Congress wrote to Egypt’s president, Hosni Mubarak, asking for a stop to the anti-homosexual crusade. It was no wonder that amid this, the Egyptian newspaper Al-Ahram al-Arabi proclaimed, “Be a pervert and Uncle Sam will approve.”

“This was framed locally as an attack from the West,” says Bahgat, who eventually collaborated with Human Rights Watch on the case and later opened his own organization, the Egyptian Initiative for Personal Rights. “It was important to show that working for the rights of the detained was not a gay agenda, or a Western agenda, that this was linked to Egypt’s overall human rights record. Raising the gay banner when most sexual and other human rights are systematically violated every day is never going to get you far in this country.”

In the end, Human Rights Watch avoided laying itself open to easy attack as the bearer of an outsider’s agenda, packaging Queen Boat advocacy in the larger context of torture. Many of the arrested men were tortured, and torture is something that, at least in theory, most people agree is a bad thing. In Human Rights Watch’s 150-page report on the crackdown, references to religion, homosexual rights or anything else that could be seen or used as code for licentiousness were played down. Torture was played up, and it may very well be the first and last human rights report to cite Michel Foucault’s “History of Sexuality.” Upon release of the report in March 2004, Kenneth Roth, Human Rights Watch’s executive director, and Scott Long, director of the organization’s Lesbian, Gay, Bisexual and Transgender Rights Project, met with Egypt’s public prosecutor, the assistant to the interior minister and members of the Foreign Ministry. Their effort seemed to have had some effect; although occasional arrests continue, the all-out campaign of arrest and entrapment of men that began with the Queen Boat incident came to an end. One well-connected lawyer noted that a high-ranking Ministry of Interior source told him, “It is the end of the gay cases in Egypt, because of the activities of some human rights organizations.”

When I spoke to Long about his work on the Queen Boat case and its aftermath, he reflected on his advocacy methods in a context in which human rights, and especially gay rights, are increasingly associated with Western empire-building. “Perhaps we had less publicity for the report in the United States because we avoided fetishizing beautiful brown men in Egypt being denied the right to love,” he said. “We wrote for an Egyptian audience and tried to make this intelligible in terms of the human rights issues that have been central in Egyptian campaigns. It may not have made headlines, but it seemed to make history.” Whether the effort made history or simply interrupted it remains to be seen. Long himself noted, “The fact that the crackdown came apparently out of nowhere is a reminder that the repression could revive anytime.”

The possibilities for official repression exist across the Arab world. Early one morning this past August in Saudi Arabia, the police raided a wedding party in the town of Jizan, arresting 20 men “impersonating women,” according to the newspaper Al Watan. Similarly, late last year, 26 men were arrested when a party in Ghantout, a desert region on the Dubai-Abu Dhabi highway in the United Arab Emirates, was raided. The press went into typical scandal mode, and images of some of the men in women’s clothing circulated on cellphones. A government spokesman was quoted in The Khaleej Times as saying, “Because they’ve put society at risk they will be given the necessary treatment, from male hormone injections to psychological therapies.” Arrests have also taken place in Lebanon — despite its being perceived as having more liberal social mores — as well as in Morocco.

In Egypt, religiosity — along with an associated emphasis on public involvement in the private sphere — continues to rise. For the 2005 campaign the Muslim Brotherhood listed beauty pageants, music videos and sexy photographs as issues needing public debate; banning female presenters (even in veils) from state-run television and expanding religious education in public schools were also on the agenda. The brotherhood won 88 seats. And in most cases, there has been complete impunity for perpetrators of attacks on gay men; individual officers responsible for attacks have been promoted or shuffled around. As recently as September, at least one entrapment case occurred in Cairo; a young man was lured via a chat site and tortured — badly beaten and subject to electroshock on his genitals — by the same office of the public morality squad that had conducted Internet-based entrapments.

In the meantime, routine scapegoating of the West, and of its real and perceived agendas in the region, seems to be reaching new highs. The Egyptian government, despite its intimate strategic relationship with the U.S., has been increasing its rhetorical assaults on what is blithely reduced to an imperial, meddling West — ostensibly to parade its nationalist credentials in the face of America’s disastrous exploits in the Middle East. (In September, at a policy conference of the ruling National Democratic Party, Gamal Mubarak, the president’s smooth-talking, Western-educated son and heir apparent, went so far as to dismiss Western initiatives designed to foster democratization in the region.) Blanket attacks on what is vaguely referred to as “human rights” continue; in late August, Mostafa Bakry’s newspaper, Al Osboa, assailed Hossam Bahgat’s organization, along with an NGO that works on AIDS, for defending “perverts.” The ingredients for another crackdown exist in abundance in Egypt and the region at large.

Today the Queen Boat continues to sit docked on the Nile, its name clumsily respelled “Queen Boot” in garish green neon. It is hardly the gay hangout it once was, instead catering to the very occasional budget tourist. Many dragged away by the police that evening five years ago have since left the country, and others keep a low profile, although there are signs that young people have begun cruising the Nile banks again and meeting on the Internet.

As I prepared to leave Cairo at the beginning of the fall, I received an e-mail message from M., the businessman from the Queen Boat, since relocated to the States. “I sit here, and the Americans talk about something called Islamic fascism, the Arabs go on about their values,” he wrote. “All of us, and I don’t mean gay men, I mean all of us who don’t fit the norm — democracy activists, queens, anything — it’s us who get branded as Western, fifth columnists. We pay the price.”

Negar Azimi is senior editor of Bidoun, an arts-and-culture magazine based in New York.

December 3, 2006

Health Hazard: Computers Spilling Your History

By MILT FREUDENHEIM and ROBERT PEAR

BILL CLINTON’S identity was hidden behind a false name when he went to NewYork-Presbyterian Hospital two years ago for heart surgery, but that didn’t stop computer hackers, including people working at the hospital, from trying to get a peek at his electronic medical records.

The same hospital thwarted 1,500 unauthorized attempts by its own employees to look at the patient records of a famous local athlete, said J. David Liss, a vice president at NewYork-Presbyterian.

And just last September, the New York City public hospital system said that dozens of workers at one of its Brooklyn medical centers, including doctors and nurses, technicians and clerks, had improperly looked at the computerized medical records of Nixzmary Brown, a 7-year-old who prosecutors say was beaten to death by her stepfather last winter.

Powerful forces are lobbying hard for government and private programs that could push the nation’s costly and inefficient health care system into the computer age. President Bush strongly favors more use of health information technology. Health insurance and medical device companies are eager supporters, not to mention technology companies like I.B.M. and Google. Furthermore, Intel and Wal-Mart Stores have both said they intend to announce plans this week to embrace electronic health records for their employees.

Others may soon follow. Bills to speed the adoption of information technology by hospitals and doctors have passed both chambers of Congress.

But the legislation has bogged down, largely because of differences over how to balance the health care industry’s interest in efficiently collecting, studying and using data with privacy concerns for tens of millions of ordinary Americans — not just celebrities and victims of crime.

Advocates of such legislation, including Representative Joe L. Barton, the Texas Republican who is the chairman of the House Energy and Commerce Committee, said that concern about snooping should not freeze progress on adopting technology that could save money and improve care.

“Privacy is an important issue,” said Mr. Barton, who will lose the chairman’s post when the Democrats take over next year, “but more important is that we get a health information system in place.” Congress can address privacy later “if we need to,” he said.

Democrats, however, have made it clear that they are determined to address the issue of medical-records privacy once they take command of both houses of Congress next month. “There is going to be much more emphasis placed upon privacy protections in the next two years than we have seen in the last 12 years,” said Representative Edward J. Markey, Democrat of Massachusetts and a longtime privacy advocate.

Mr. Markey, a member of the Energy and Commerce committee, said he supported legislation that would allow individuals to keep their medical records out of electronic databases, and require health care providers to notify patients when health information is “lost, stolen or used for an unauthorized purpose.”

Representative John D. Dingell of Michigan, the ranking Democrat who is expected to become chairman of the Energy and Commerce committee next month, said that expanding electronic health care systems “clearly has great potential benefit.” But he added that “it also poses serious threats to patients’ privacy by creating greater amounts of personal information susceptible to thieves, rascals, rogues and unauthorized users.” Members of his committee, as well as the House Ways and Means Committee, have been struggling with such issues.

Academic medical centers like NewYork-Presbyterian have considerable experience with electronic records. But many other hospitals have been slow to jump on board, as have doctors and patients. Only one in four doctors used electronic health records in 2005, according to a recent study by researchers at Massachusetts General Hospital and George Washington University, and fewer than 1 in 10 doctors used the technology for important tasks like prescribing drugs, ordering tests and making treatment decisions.

Cathy Schoen, a senior researcher at the Commonwealth Fund, a nonprofit foundation, said primary-care doctors in the United States were far less likely than doctors in other industrialized countries to use electronic records. In Britain, 89 percent of doctors use them, according to a recent report in the online edition of the journal Health Affairs; in the Netherlands, 98 percent do.

Technology experts have many explanations for the slow adoption of the technology in the United States, including the high initial cost of the equipment, difficulties in communicating among competing systems and fear of lawsuits against hospitals and doctors that share data.

But the toughest challenge may be a human one: acute public concern about security breaches and identity theft. Even when employers pay workers to set up computerized personal health records, many bridle, fearing private information will fall into the wrong hands and be used against them.

“When I talk to employees, the top concern they have is: ‘What happens to my information? What about the Social Security numbers on my employee insurance, as well as the identity threat now appearing in health care?’ ” Harriett P. Pearson, I.B.M.’s chief privacy officer, said in a recent interview. “We have to be proactive about addressing privacy issues.”

Dr. J. Brent Pawlecki, associate medical director at Pitney Bowes, the business services company, said that people in the United States are most concerned that they could lose their health insurance, based on something in their health records. Pitney Bowes is weighing the pros and cons of electronic personal health records for its employees.

The worries are widely held. Most Americans say they are concerned that an employer might use their health insurance records to limit job opportunities, according to several surveys, including a recent one by the nonprofit Markle Foundation.

Some patients are so fearful that they make risky decisions about their health. One in eight respondents in a survey last fall by the California HealthCare Foundation said they had tried to hide a medical problem by using tactics like skipping a prescribed test or asking the doctor to “fudge a diagnosis.”

Dr. Stephen J. Walsh, a psychiatrist and former president of the San Francisco Medical Society, said, “I see many patients who don’t want any information about their seeing a psychiatrist on a record anywhere.”

CONGRESS addressed some of these concerns in 1996, when it passed the Health Insurance Portability and Accountability Act. That made it a federal crime, albeit rarely punished, to disclose private medical information improperly.

But critics say that the law has some worrisome loopholes, that infractions are rarely prosecuted, and that violators have almost never been punished. The law, for example, lets company representatives review employees’ medical records in order to process health insurance claims.

Critics say that it would not be unusual in some companies for the same supervisor to be in charge both of insurance claims and of hiring and firing decisions; this could allow companies to comb their ranks for people with expensive illnesses and find some reason to fire them as a way to keep health costs under control. Easily accessible computerized files would make the job that much easier, the critics say.

Joy L. Pritts, a health policy analyst at Georgetown University, said that in developing and promoting health information technology, the government seemed to assume that it could “tack on privacy protections later.” But she warned: “That attitude can really backfire. If you don’t have the trust of patients, they will withhold information and won’t take advantage of the new system.”

Executives can hire private tutors who specialize in teaching how to stay on the right side of the rules. But based on the experience so far, there is little chance that executives will be punished if they break them.

The Office for Civil Rights in the Department of Health and Human Services has received more than 22,000 complaints under the portability law since the federal privacy standards took effect in 2003; allegations of “impermissible disclosure” have been among the most common complaints. But the civil rights office has filed only three criminal cases and imposed no civil fines. Instead, it said, it has focused on educating violators about the law and encouraging them to obey it in the future.

With federal enforcement so weak, privacy advocates say they are also concerned about recent efforts in Congress to pre-empt state consumer protection laws. They often provide stronger privacy rights and remedies, particularly for information on H.I.V. infection, mental illness and other specific conditions.

State laws, unlike the federal law, have resulted in some stiff penalties. Last April, a California state appeals court approved a malpractice award of $291,000 to Nicholas Francies, a San Francisco restaurant manager, who lost his job after his doctor disclosed his H.I.V.-positive status in a worker’s compensation notice to Mr. Francies’s employer. He also got $160,000 from his employer in a settlement.

Dr. Deborah C. Peel, a psychiatrist and privacy advocate in Austin, Tex., has assembled a broad group called the Patient Privacy Rights Foundation, to lobby in Washington. Members span the political spectrum, from the American Civil Liberties Union and the U.S. Public Interest Research Group to the American Conservative Union and the Family Research Council.

Newt Gingrich, the Republican former House speaker, has called for “a 21st-century intelligent health system” based on electronic records. He also says individuals “must have the ability to control who can access their personal health information.”

“People do have a legitimate right to control their records,” said Mr. Gingrich, who has worked closely with Senator Hillary Rodham Clinton, Democrat of New York, on the issue of computerized records. On their own, they have also advocated strict rules to protect privacy.

Mr. Gingrich noted that the Senate had twice passed bills to prohibit discrimination based on personal genetic information; the House did not vote on them. Democrats say the outlook for such legislation will improve when they take control of Congress.

EVEN without new federal laws to guide them, some companies have begun to encourage their employees to embrace electronic medical records. At Pitney Bowes, employees are paid a bonus if they store a copy of their personal health records on WebMD.com, the medical Web site.

“We haven’t pushed that, except to make an offering,” Dr. Pawlecki said. But for those without electronic records, he added, “any time you go to a different system or a different doctor, the chances are that your records will not be able to follow you.” As a result, there is a risk of “harmful care,” like drug interactions or side effects, he said, as well as risks of omitting needed care and conducting duplicate tests.

Pitney Bowes and WebMD Health are among a group of 25 companies meeting with Ms. Pearson of I.B.M. to develop a set of principles and best practices that she said would help persuade people that their employers really did not look at private information stored online.

Ms. Pearson’s group is working with Janlori Goldman, director of the Health Privacy Project in Washington. Employers need to adopt standards for personal health records that address their workers’ privacy, confidentiality and security concerns, Ms. Goldman said.

WebMD, which manages employees’ health records for dozens of companies, had discussions earlier this year with Google, which is developing a Web site called Google Health, according to people familiar with the project. Google has not commented on its plans. But commenting generally on the issues, Adam Bosworth, the vice president for engineering at Google, said that privacy is a hurdle for technology companies addressing health care problems.

“There is a huge potential for technology to improve health care and reduce its cost,” Mr. Bosworth said in a statement. “But companies that offer products and services must vigorously protect the privacy of users, or adoption of very useful new products and services will fail.”

Even before the theft this year of a Veterans Affairs official’s laptop that contained private medical records of 28 million people, a consumer survey found that repeated security breaches were raising concerns about the safety of personal health records.

About one in four people were aware of those earlier breaches, according to a national telephone survey of 1,000 adults last year for the California HealthCare Foundation. The margin of error was plus or minus 3 percentage points.

The survey, conducted by Forrester Research, also found that 52 percent were “very concerned” or “somewhat concerned” that insurance claims information might be used by an employer to limit their job opportunities.

The Markle survey, to be published this week, will report even greater worry — 56 percent were very concerned, 18 percent somewhat concerned — about abuse by employers. But despite their worries, the Markle respondents were eager to reap the benefits of Internet technology — for example, having easy access to their own health records.

Companies that have tried to use computers to increase the efficiency of medical care say their success has hinged on security. “The privacy piece was critical,” said Al Rapp, corporate health care manager at United Parcel Service, which recently introduced a health care program built on computerizing the records of 80,000 nonunion employees.

U.P.S. offers to add $50 each to workers’ flexible spending accounts if they agree to supply information for a personal “health risk appraisal.” They can receive another $50 if spouses also participate. More than half accepted, Mr. Rapp said, with the understanding that the information would go to data archives at UnitedHealth Group and Aetna. “We are not involved in any way,” he said, referring to U.P.S.’s managers.

Aetna and UnitedHealth combine these appraisals with each person’s history of medical claims and prescription drug purchases. When the software signals a personal potential for costly conditions like diabetes, heart problems and asthma, an insurance company nurse, or health coach, telephones the employee with suggestions for preventive care and reminders for checkups, taking medications and the like.

“The employee can tell the nurse who calls that they don’t want to participate,” Mr. Rapp said. “Thus far, it has been very well accepted.”

Last week, he said, the health coach reached out to the spouse of an employee after noting that her condition and weight suggested a potential risk for a heart attack.

“She asked this person, ‘Are you taking your cholesterol medication, Lipitor?’ She said, ‘I won’t take Lipitor,’ ” and went on to mention the side effects she had read about on the Internet, Mr. Rapp said.

The nurse informed the woman’s doctor, who changed her prescription to a similar drug, Mr. Rapp said. He added that he was one of “a very few select people in the human resources department” who are permitted to see personal health records, under the federal privacy rules.

“I can see the names, to see the issues,” Mr. Rapp said. “I manage the program. I have responsibility for the success of the program.” But he added that he was prohibited under the law from sharing the employee’s data with other U.P.S. managers. “Generally speaking, U.P.S. would have no knowledge of it,” Mr. Rapp said.

Still, worries linger across the health care system. Hospital executives say that private investigators have often tried to bribe hospital employees to obtain medical records that might be useful in court cases, including battles over child custody, divorce, property ownership and inheritance.

But computer technology — the same systems that disseminate data at the click of a mouse — can also enhance security.

Mr. Liss, of NewYork-Presbyterian, said that when unauthorized people tried to gain access to electronic medical records, hospital computers were programmed to ask them to explain why they were seeking the information.

Moreover, Mr. Liss said, the computer warns electronic intruders: “Be aware that your user ID and password have been captured.”

December 3, 2006

Blowing the Whistle on Big Oil

By EDMUND L. ANDREWS

Honolulu

DURING a 22-year career, Bobby L. Maxwell routinely won accolades and awards as one of the Interior Department’s best auditors in the nation’s oil patch, snaring promotions that eventually had him supervising a staff of 120 people.

He and his team scrutinized the books of major oil producers that collectively pumped billions of dollars worth of oil and gas every year from land and coastal waters owned by the public. Along the way, the auditors recovered hundreds of millions of dollars from companies that shortchanged the government on royalties.

“Mr. Maxwell’s career has been characterized by exceptional performance and significant contributions,” wrote Gale A. Norton, then the secretary of the interior, in a 2003 citation. Ms. Norton praised Mr. Maxwell’s “perseverance and leadership” while cataloguing his “many outstanding achievements.”

Less than two years later, the Interior Department eliminated his job in what it called a “reorganization.” That came exactly one week after a federal judge in Denver unsealed a lawsuit in which Mr. Maxwell contended that a major oil company had spent years cheating on royalty payments.

“When I got this citation, they told me this would be very good for my career,” said Mr. Maxwell, smiling during an interview here. “Next thing I knew, they fired me.” Today, at 53, Mr. Maxwell lives on a $44,000 annual pension in a two-bedroom bungalow in the hills outside the Hawaiian capital.

But Mr. Maxwell has hardly disappeared. Instead, he is at the center of an escalating battle with both the oil industry and the Bush administration over how the federal government oversees about $60 billion worth of oil and gas produced every year on federal property. In the process, he has become one of the most nettlesome whistle-blowers Big Oil has ever encountered, a face-off that offers an inside look at how the industry and the government do business together.

Invoking a law that rewards private citizens who expose fraud against the government, Mr. Maxwell has filed a suit in federal court in Denver against the Kerr-McGee Corporation. The suit accuses the company, which was recently acquired by Anadarko Petroleum, of bilking the government out of royalty payments. It also contends that the Interior Department ignored audits indicating that Kerr-McGee was cheating. Three other federal auditors, who once worked for Mr. Maxwell and still work at the Interior Department, have since filed similar suits of their own against other energy companies.

Several of the nation’s biggest oil producers, including Exxon Mobil, Chevron, Shell and ConocoPhillips, failed in an effort to block Mr. Maxwell’s suit, arguing before an appellate judge that his case would “open the floodgates” to suits by other federal auditors. But the court rejected their pleas, and a trial is set to start on Jan. 16.

Mr. Maxwell’s self-interest is as much in play in the suit as is the public interest. If he wins, Kerr-McGee could be forced to pay more than $50 million in unpaid royalties and penalties, Mr. Maxwell said. Mr. Maxwell and his lawyers could be entitled to keep as much as 30 percent of any funds the government recovers — enough to make him a wealthy man.

Anadarko says that the government’s rules were followed and that it owes no money because the Interior Department never asked it to pay more. But it is now trying to negotiate a settlement before the trial begins.

“We believe the case is without merit,” said John Christiansen, an Anadarko spokesman. “However, as is a fairly common practice, both sides have agreed to meet with a mediator prior to trial.”

THE actions of Mr. Maxwell and the other auditors have coincided with broader investigations by Congress and the Interior Department’s own inspector general into whether the agency properly collects the money for oil and gas pumped from public land. Investigators say they have found evidence of myriad problems at the department: cronyism and cover-ups of management blunders; capitulation to oil companies in disputes about payments; plunging morale among auditors; and unreliable data-gathering that often makes it impossible to determine how much money companies actually owe.

In February, the Interior Department admitted that energy companies might escape more than $7 billion in royalty payments over the next five years because of errors in leases signed in the 1990s that officials are now scrambling to renegotiate. The errors were discovered in 2000, but were ignored for the next six years and have yet to be fixed.

Congressional investigators are worried about other problems, as well. The Interior Department’s inspector general told a House subcommittee in September that senior officials at the agency had repeatedly glossed over ethical lapses and bungling. “Short of crime, anything goes at the highest levels of the Department of the Interior,” declared Earl E. Devaney, the inspector general.

The Interior Department, which has described Mr. Maxwell as a renegade former employee motivated by the riches his lawsuit might bring, said its auditing efforts received a clean bill of health from an outside accounting firm in 2005. “The results are clear and irrefutable,” the agency said in a statement, adding that it was “accomplishing its job on behalf of the American public.”

But Republican leaders on the House Government Reform Committee, which has been reviewing the flawed leases, recently accused Interior officials of perpetuating a “culture of irresponsibility and lack of accountability” at the agency.

The committee ordered the Government Accountability Office, the investigative arm of Congress, to begin a broad examination of issues ranging from the agency’s rules and enforcement practices to the accuracy of its most rudimentary data.

The Interior Department “clearly doesn’t view their responsibility as maximizing revenue to the American people for resources that belong to the American people,” said Representative Darrell E. Issa, a California Republican who oversaw hearings on the flawed leases. “We don’t have a system that accurately tells us how much oil is being taken out of the ground.”

No one, says Mr. Maxwell, knows that better than he does.

BORN “Bobby,” not “Robert,” Mr. Maxwell is bald, burly and speaks with a back-country drawl that is occasionally punctuated by a cackling laugh. He said he thrived in the world of wells, pipelines and offshore rigs and meshed comfortably with the oil cultures of Texas and Oklahoma. Genial and unflappable, he describes his politics as “very conservative” and cringes at being labeled a rebel.

“I like the oil and gas industry,” he said, as he reminisced about his years as a federal auditor. “We are neither for nor against the profits they make. Our job is to make sure the American public gets a fair return on its assets.”

Mr. Maxwell grew up in a poor family in rural Tennessee. After serving in the Army in Hawaii, he earned a bachelor’s degree in business at Chaminade University here. He later became a certified public accountant and earned a master’s degree in business from Texas A&M. In 1983, after stints as an auditor with the Air Force and the Department of Energy, he joined the Minerals Management Service of the Interior Department.

The M.M.S. is responsible for collecting and overseeing the royalties for all the oil, gas, timber and coal that is produced on federal property. Last year, companies paid nearly $10 billion in royalties on oil and gas alone.

“I loved going to the oil companies,” Mr. Maxwell recalled, saying he spent about half of his time on the road — some of it at offshore drilling rigs. “Sometimes, there would be nothing more than a room and a helicopter pad to land on. But it was an education, and it was highly practical.”

Despite Mr. Maxwell’s placid demeanor, friends and former colleagues say he was a dogged auditor who sometimes rankled his own bosses. In 1988, he visited a coal mine on federal land in Montana even though, he says, his supervisors scoffed at the trip because they thought that the mine was too small to scrutinize. When Mr. Maxwell arrived, he found that the mine only looked small because it had been underpaying royalties for years. Within months, the company agreed to pay $43 million in back royalties, and it eventually paid more than $100 million.

In 1993, during the Clinton administration, officials in Washington told him to drop a complex dispute with ARCO because they thought his reasoning was dubious. But just as he was about to back down, ARCO executives volunteered to pay $20 million.

The Interior Department gave Mr. Maxwell an award that year, noting that he had “vehemently pressed his position” even though “support seemed lacking.” It attributed his success to both his “measured combatant style” and his personal rapport with ARCO executives.

“If he thought there was a case, he had a reason,” said Barbara Rothway, a retired Interior Department auditor. But, she added, “I could see how he might sometimes bother the people he worked for.”

Patient and stubborn, Mr. Maxwell spent years poring over the accounts of the Jicarilla Apache Tribe in New Mexico, which had long complained that companies were underpaying them for natural gas extracted on their tribal lands. Tribal officials said Interior Department officials had paid little heed until Mr. Maxwell came along. He collected and organized data going back 30 years and found pervasive errors by both oil companies and the government. The tribe eventually recovered more than $20 million in back royalties.

“Anyone knowledgeable about the way Interior processes payments knows there are all sorts of ways for companies to underpay,” said Alan R. Taradash, a lawyer in Albuquerque who has represented the tribe for years. “Bobby was one of the few who stood up with us and demanded that the auditing be done properly.”

MR. MAXWELL says his first serious doubts about the Interior Department originated in 1998, when the agency reluctantly began to investigate accusations of systematic cheating on royalties for oil.

Several of the nation’s biggest oil companies eventually settled that investigation by paying nearly $440 million. The investigation occurred only after outside whistle-blowers argued for years that the government was losing billions.

The cases had been sparked in part by a former oil trader at ARCO named J. Benjamin Johnson Jr., known as Benji, who contended that oil companies had used elaborate swapping schemes to cheat on royalties owed to private and state landowners, as well as the federal government.

Mr. Johnson and another former ARCO trader quit their jobs and became expert witnesses for landowners and state governments who wanted to sue oil companies for back payments. The two traders soon realized that the biggest case by far belonged to the federal government. But no federal officials were interested.

“It was unbelievably difficult,” Mr. Johnson recalled in a recent interview. “They brought me to Denver, to the M.M.S. office, in a room with about 30 auditors and managers from the solicitor’s office. Their reaction was that I was crazy, that it was impossible, that there was no way they could have actually missed this.”

But Mr. Johnson found a way to recover the federal royalties on his own. In 1995, he filed suit under the False Claims Act, a longstanding law intended to encourage whistle-blowers. Under the act, best known for its use against overbilling by military contractors, a private citizen can sue a company, contending that it defrauded the federal government. Companies found guilty have to pay as much as three times the amount of their fraudulent gains, and any person who files a suit is entitled to keep up to 30 percent of the money recovered.
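The arithmetic behind those incentives can be sketched in a few lines of code. The underpayment figure below is hypothetical, and the 30 percent share is the statutory maximum described above, not the award in any particular case:

# Hypothetical illustration of the False Claims Act arithmetic described above.
# The underpayment amount is invented for the example; the statute allows up to
# treble damages, and the person who files suit may keep up to 30 percent.

def false_claims_recovery(underpayment, multiplier=3.0, relator_share=0.30):
    """Return the total recovery, the whistle-blower's maximum award,
    and what remains for the government."""
    recovery = underpayment * multiplier
    relator_award = recovery * relator_share
    return recovery, relator_award, recovery - relator_award

total, relator, government = false_claims_recovery(10_000_000)
print(f"Total recovery:                 ${total:>12,.0f}")      # $30,000,000
print(f"Whistle-blower's maximum share: ${relator:>12,.0f}")    # $9,000,000
print(f"Government's minimum share:     ${government:>12,.0f}") # $21,000,000

Even when a court awards a smaller share, the math shows why the act gives private citizens a powerful reason to pursue fraud the government overlooks.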

The Justice Department initially refused to join Mr. Johnson’s suit, but it eventually did so in 1998, after concluding that he and other whistle-blowers — including a nonprofit group called the Project on Government Oversight — were on to something. That was when Mr. Maxwell became involved. Working for the Interior Department, he collected data on a number of oil companies, pooling his material with that of Mr. Johnson.

“Bobby was with us,” Mr. Johnson recalled. “He was a straight-shooter, and he saw it.” Mobil was the first to settle and paid more than $40 million in 1998. Chevron paid $95 million. Shell paid $110 million. By 2002, 15 oil companies had paid a total of almost $440 million.

The Johnson lawsuit taught Mr. Maxwell three lessons. The first was that whistle-blowing could pay handsomely: Mr. Johnson became a wealthy man. (The oil trader and his lawyers received more than $30 million, while two other groups of whistle-blowers received more than $40 million.) The second was that top Interior officials had been obstinately blind to the oil industry’s practices. The third was the power of the False Claims Act itself: until Mr. Johnson came along, no one had used it to go after royalty underpayments.

Created in 1849 to manage the nation’s publicly owned natural resources, from national forests to parks and waterways, the Interior Department oversees timber, grazing, mining and oil drilling operations over many tens of millions of acres. And while the agency has often tried to increase drilling and mining under Democrats and Republicans alike, current and former officials say the Bush administration elevated that goal above almost all of the department’s other mandates.

Mr. Maxwell says his frustrations with the Interior Department escalated after the Bush administration took office in 2001. The Interior Department’s top priorities became increasing domestic oil and gas production, offering more incentives to drillers in the Gulf of Mexico and pushing to open the Arctic National Wildlife Refuge and other wilderness areas to drilling. The department trimmed spending on enforcement, cut back on auditors and sped up approvals for drilling applications.

The agency’s senior ranks also became more heavily populated with officials friendly to the energy industry. For example, its new deputy secretary, J. Steven Griles, worked as an oil industry lobbyist before joining the department, and Chevron and Shell had paid him as an expert witness on their behalf in the Benji Johnson case. Mr. Griles declined to comment.

Auditors, according to Mr. Maxwell and many others, were told to devote less energy to time-consuming audits and rely more on a computerized monitoring system called “compliance review.” Auditors complained that the new system was superficial and riddled with technical problems. Even when the new system flagged potential underpayments, Mr. Maxwell said, it often failed to supply conclusive information.

“We were getting shut down on all kinds of cases,” he said. “We started to wish that someone would come along and file a False Claims suit, so we could jump onboard.”

Auditing and compliance review had generated an average of about $176 million annually in the 1990s, with an extraordinary peak of $331 million in 2000, according to data from the Congressional Budget Office and the Interior Department. But from 2001 through 2005, a period when energy prices soared to new highs, enforcement revenue averaged about $46 million a year.

In 2004, the Interior Department’s inspector general issued a blistering report about the auditing system, saying that many auditors were unqualified, that essential documents were being lost and that the internal review process was “ineffective.”

BY 2002, Mr. Maxwell was fed up. He and a team of auditors had worked for months to dissect a complicated marketing deal that they believed Kerr-McGee was using to shortchange the government. He concluded that Kerr-McGee was selling all its oil at below-market prices to another company that compensated Kerr-McGee by assuming many of its marketing and administrative costs.

Simply put, Kerr-McGee seemed to be getting paid in both cash and services, but it was calculating its royalties only on the cash it received. Mr. Maxwell calculated that the company had underpaid the government by as much as $12 million between 1997 and 2002. State auditors in Louisiana had been investigating exactly the same issue in 2001, and ordered Kerr-McGee to pay an extra $2 million in royalties on oil from state lands.
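To see why the payment structure matters, consider a simplified version of the royalty arithmetic. The 12.5 percent rate and per-barrel figures below are invented for illustration; the article does not disclose Kerr-McGee’s actual lease terms:

# Simplified royalty arithmetic for a sale paid partly in cash and partly in
# services. The 12.5 percent rate and per-barrel figures are hypothetical.

ROYALTY_RATE = 0.125     # assumed lease rate, for illustration only

cash_price = 50.00       # dollars per barrel actually paid in cash
services_value = 4.00    # assumed per-barrel value of the marketing and
                         # administrative costs the buyer absorbs

royalty_on_cash_only = ROYALTY_RATE * cash_price
royalty_on_full_value = ROYALTY_RATE * (cash_price + services_value)

print(f"Royalty if only cash counts:          ${royalty_on_cash_only:.3f}/barrel")
print(f"Royalty if services count as payment: ${royalty_on_full_value:.3f}/barrel")
print(f"Shortfall per barrel:                 ${royalty_on_full_value - royalty_on_cash_only:.3f}")

Under the auditors’ reading, every barrel sold through such an arrangement understates the royalty base by the value of the services the buyer absorbs.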

According to Mr. Maxwell and Interior Department officials, when Mr. Maxwell presented his findings to his superiors, agency lawyers told him to drop the case. When he continued to argue, he recounts in his lawsuit, the agency’s chief of enforcement warned him that the director of the M.M.S., Johnnie M. Burton, would be “very upset” if he persisted.

“The word came down from the top, not to issue this order,” Mr. Maxwell said in an interview, speaking about the Interior Department. “There have always been people who don’t want to pursue things. But now it’s grown into a major illness. It’s dysfunctional.”

In a written response to questions from The New York Times, the Minerals Management Service said it was “rare” to overrule an auditor but that Mr. Maxwell’s contentions involved a “questionable application of M.M.S. regulations.”

The matter might have ended there, except that Mr. Maxwell decided to retire in June 2003. As soon as he left the agency, he began researching the possibility of emulating Benji Johnson and filing his own suit under the False Claims Act. As it happened, Mr. Maxwell’s “retirement” was brief. In October, the Interior Department rehired him to fill a management gap in its Denver office, and it put him in charge of a 120-person auditing team monitoring the Gulf of Mexico.

Despite rejoining the government, Mr. Maxwell filed his suit against Kerr-McGee in June 2004. The case was unsealed on Jan. 20, 2005; a week later, Mr. Maxwell lost his job.

Arriving at his office shortly before 8 a.m. on Jan. 27, Mr. Maxwell said he was summoned to a meeting with a senior M.M.S. official, who had flown in from Washington. The official handed him a memo, explaining that his job responsibilities were being moved to Houston and that his position would be eliminated.

To this day, Interior officials say they never fired Mr. Maxwell. They say, according to court papers, that he was merely a “re-employed annuitant” who was no longer needed. “The position disappeared,” said Ms. Burton, the M.M.S. director, in a meeting with energy reporters in September.

Mr. Maxwell protested. He said he would have relocated or taken a different job. He added that he was a certified public accountant and had a master’s degree in business administration; his successor lacked those credentials. But the fight did not last long. When Interior officials offered Mr. Maxwell a confidential financial settlement to leave without a fight, he took the deal and moved to Hawaii.

And he continued to press his case against Kerr-McGee. The company tried and failed to have the suit dismissed, arguing that a federal auditor should not be allowed to take information from work and file a lawsuit under the False Claims Act. In October, eight major oil companies including Chevron and Exxon Mobil weighed in on Kerr-McGee’s side.

“Government employees who uncover suspected fraud in the course of carrying out their jobs will receive a tacit invitation” to sue as private citizens, the companies argued in their brief. After the appellate court rejected the industry’s plea, Mr. Maxwell’s trial was rescheduled for January. Settlement talks are scheduled to begin in Denver this week.

Interior officials contend that Mr. Maxwell and other auditors have a conflict of interest when they stand to gain personally from their enforcement work. “These people could be entitled to collect 30 percent of the money that is owed to the government, for work they were already being paid to do,” Ms. Burton told reporters in September.

If Mr. Maxwell had not acted, of course, the government would have had no chance of recovering any money at all. James W. Moorman, president of Taxpayers Against Fraud, a nonprofit organization that specializes in the False Claims Act, acknowledged that it was less than ideal for federal investigators to have a financial stake in rebelling against their bosses. But he said the alternatives might be worse.

“You’re talking about a situation where there seems to be a complete breakdown in the system,” Mr. Moorman said. “If they’ve got evidence of fraud, why shouldn’t a court hear it?”

STANDING in his garage recently, Mr. Maxwell pointed to six big cartons stacked along the wall. They contained 60,000 pages of documents that Kerr-McGee and the Interior Department had provided as part of the discovery process in his lawsuit. Mr. Maxwell, true to form, has gone through every page, and distilled his case into seven modestly sized binders he keeps in his living room. He says he is ready for trial, and even for the possibility of losing.

If he does lose, his lawyers will not charge for their work, but he will have to pay about $125,000 to expert witnesses he has hired. “I can manage it,” he said. He has saved money all his life, he said, and can live on his savings and his pension.

“He’s thought about all the options, and none of them seem life-threatening to him,” said his daughter, Angela Maxwell Horn. “What can they do to him? They’ve already fired him.”

December 2, 2006

Justices to Decide if Citizens May Challenge White House’s Religion-Based Initiative

By LINDA GREENHOUSE

WASHINGTON, Dec. 1 — The Supreme Court agreed Friday to decide whether private citizens are entitled to go to court to challenge activities of the White House office in charge of the Bush administration’s religion-based initiative.

A lower court had blocked a lawsuit challenging conferences the White House office holds for the purpose of teaching religious organizations how to apply and compete for federal grants. That constitutional challenge, by a group advocating the strict separation of church and state, was reinstated by an appeals court; the administration in turn appealed to the Supreme Court.

The case is one of three appeals the justices added to their calendar for argument in February. A question in one of the other cases is whether a public school principal in Juneau, Alaska, violated a student’s free-speech rights by suspending him from school for displaying, at a public off-campus event, a banner promoting drug use.

Together with a third new case, on whether federal land-management officials can be sued under the racketeering statute for actions they take against private landowners, the additions to the court’s docket raised the metabolism of what had begun to look like an unusually quiet term. It had been just short of a month since the justices accepted any new cases.

As in the case the justices heard on Wednesday on the administration’s refusal to regulate automobile emissions that contribute to climate change, the question in the White House case is the technical one of “standing to sue.” And as the argument on Wednesday demonstrated, standing is a crucially important aspect of litigation against the government.

In its lawsuit challenging the White House conferences, filed in Federal District Court in Madison, Wis., in 2004, an organization called the Freedom From Religion Foundation named as defendants more than a dozen administration officials who oversaw or participated in the conferences.

The lawsuit alleged that the officials were using tax dollars in ways that violated the separation of church and state required by the Establishment Clause of the First Amendment. For example, the complaint quoted Rod Paige, then the secretary of education, as telling the audience at a 2002 White House conference that “we are here because we have a president, who is true, is a true man of God” and who wanted to enable “good people” to “act on their spiritual imperative” by running social service programs with federal financial support.

Judge John C. Shabaz of Federal District Court dismissed the lawsuit for lack of standing, finding that the officials’ activities were not sufficiently tied to specific Congressional appropriations. Taxpayers’ objections to the use of general appropriations could not be a basis for standing, he said. The president’s Faith-Based and Community Initiative was created through a series of executive orders and not by Congress, he noted.

The decision was overturned, and the lawsuit reinstated, in a 2-to-1 ruling by the United States Court of Appeals for the Seventh Circuit, in Chicago. Writing for the majority, Judge Richard A. Posner said the distinction cited by Judge Shabaz made no difference. Judge Posner said the plaintiffs were entitled to challenge the conferences “as propaganda vehicles for religion,” even if they were neither financed through a specific Congressional appropriation nor made grants directly to religious groups.

As a general matter, people do not have standing, based solely on their status as taxpayers, to challenge the expenditure of federal money. The Supreme Court’s precedents have carved out religion cases as an exception to this general rule.

In its appeal, Hein v. Freedom From Religion Foundation, No. 06-157, the administration is arguing the exception is a narrow one, “designed to prevent the specific historic evil of direct legislative subsidization of religious entities,” a definition that the administration says does not apply to the conferences. For the federal courts to permit such a lawsuit, its brief asserts, would upset “the delicate balance of power between the judicial and executive branches” and open the courthouse door to anyone with a “generalized grievance.”

The student free-speech case the justices accepted, Morse v. Frederick, No. 06-278, is an appeal by a high school principal, Deborah Morse, who suspended a student, Joseph Frederick, after an incident during the Olympic Torch Relay that came through Juneau in 2002. Students were allowed to leave class to watch the parade. Mr. Frederick and some friends unfurled a 20-foot-long banner proclaiming “Bong hits 4 Jesus,” a reference to smoking marijuana.

When the student refused to take down the banner, claiming a First Amendment right to display it off school property, the principal confiscated it and eventually suspended him for 10 days. Mr. Frederick filed a lawsuit, which the Federal District Court in Juneau dismissed.

But the United States Court of Appeals for the Ninth Circuit held that the punishment violated the student’s First Amendment rights and, further, that the principal was liable for damages, in an amount to be determined by the district court. Ms. Morse’s Supreme Court appeal challenges both the appeals court’s interpretation of the First Amendment and its refusal to shield her from financial liability through a doctrine known as qualified immunity.

The third new case, Wilkie v. Robbins, No. 06-219, is a government appeal on behalf of employees of the Bureau of Land Management in a dispute with a Wyoming landowner who charged them with using tactics amounting to extortion to get him to grant public access to his property. The federal appeals court in Denver held that a racketeering suit based on the extortion charge could proceed.